Duke Dissertations
Enabling Technologies for High-Rate, Free-Space Quantum Communication
Cahall, Clinton T.
Kim, Jungsang
Quantum communication protocols, such as quantum key distribution (QKD), are practically important at the dawn of a new quantum information age in which quantum computers can perform efficient prime factorization and render public-key cryptosystems obsolete. QKD is a communication scheme that uses the quantum state of a single photon to transmit information, such as a cryptographic key, in a way that is robust against adversaries, including those with a quantum computer. In this thesis I describe the contributions that I have made to the development of high-rate, free-space quantum communication systems. My effort is focused on building a robust quantum receiver for a high-dimensional time-phase QKD protocol in which the data is encoded and secured using a single photon's timing and phase degrees of freedom. This type of communication protocol can encode information in a high-dimensional state, allowing the transmission of $>1$ bit per photon. To realize a successful implementation of the protocol, a high-performance single-photon detection system must be constructed. My contribution to the field begins with the development of low-noise, low-power cryogenic amplifiers for a detection system using superconducting nanowire single-photon detectors (SNSPDs). Detector characteristics such as maximum count rate and timing resolution are heavily influenced by the design of the read-out circuits that sense and amplify the detection signal. I demonstrate a read-out system with a maximum count rate $>20$\,million counts per second and timing resolution as fine as $35$\,ps. These results are achieved while maintaining a low power dissipation of $<3$\,mW at 4\,K operation, enabling a scalable read-out circuit strategy. A second contribution I make to the development of detection systems utilizing SNSPDs is extending the superb performance of these detectors to include photon-number-resolving capabilities. I demonstrate that SNSPDs exhibit multi-photon detection up to four photons, where the absorbed photon number is encoded in the rise time of the electrical waveform generated by the detector. Additionally, our experiment agrees well with the predictions of a universal model for the turn-on dynamics of SNSPDs. Our multi-photon detection system demonstrates high resolution between $n=1$ and $n>1$ photons with a bit-error rate (BER) of $4.2\times10^{-4}$. Finally, I extend the utility of the time-phase QKD protocol to free-space applications. Atmospheric turbulence causes spatial-mode scrambling of the optical beam during transmission. Therefore, the quantum receiver, and most importantly the time-delay interferometer needed for the measurement of the phase encoding of a single photon, must support many spatial modes. I construct and characterize an interferometer with a 5\,GHz free spectral range that has a wide field of view and is passively athermal. The interferometer characterization is highlighted by a $>99\,\%$ single-mode and a $>98\,\%$ multi-mode interference visibility, with negligible dependence on the spatial mode structure of the input beam and on modest temperature fluctuations. Additionally, the interferometer displays a small path-length shift of 130\,nm/$^{\circ}$C, allowing excellent thermal stability with modest temperature control.
Keywords: cryogenic electronics, multi-photon detection, quantum communication, quantum key distribution, single-photon detector
Cahall, Clinton T.
(2019). Enabling Technologies for High-Rate, Free-Space Quantum Communication. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/18833. Rights for Collection: Duke Dissertations
Communications on Pure & Applied Analysis, March 2011, 10(2): 719-730. doi: 10.3934/cpaa.2011.10.719
On existence and nonexistence of the positive solutions of non-newtonian filtration equation
Emil Novruzov, Department of Mathematics, Hacettepe University, 06800 Beytepe - Ankara, Turkey
Received June 2010; Revised September 2010; Published December 2010
The subject of this investigation is the existence and nonexistence of positive solutions of the following nonhomogeneous equation
$$\rho (|x|) \frac{\partial u}{\partial t}- \sum_{i=1}^N D_i(u^{m-1}|D_i u|^{\lambda -1}D_i u)+g(u)+lu=f(x) \qquad (1)$$
or, after the change $v=u^{\sigma}$, $\sigma =\frac{m+\lambda -1}{\lambda}$, of the equation
$$\rho (|x|) \frac{\partial v^{\frac{1}{\sigma}}}{\partial t}-\sigma ^{-\lambda }\sum_{i=1}^N D_i(|D_i v|^{\lambda -1}D_i v)+g(v^{\frac{1}{\sigma}}) +lv^{\frac{1}{\sigma}}=f(x), \qquad (1')$$
in the unbounded domain $R_+\times R^N$, where the term $g(s)$ is only assumed to satisfy a lower polynomial growth condition and $g'(s) > -l_1$. The existence of the solution in $L^{1+1/\sigma}(0, T; L^{1+1/\sigma}(R^N))\cap L^{\lambda +1}(0, T; W^{1,\lambda +1}(R^N))$ is proved. Also, under some conditions on $g(s)$ and $u_0$, nonexistence of the solution is shown.
Keywords: nonlinear degenerate equation, existence, nonexistence, FSP (finite speed of propagation of perturbations).
Mathematics Subject Classification: Primary: 35K15, 35K65; Secondary: 35B3.
Citation: Emil Novruzov. On existence and nonexistence of the positive solutions of non-newtonian filtration equation. Communications on Pure & Applied Analysis, 2011, 10 (2): 719-730. doi: 10.3934/cpaa.2011.10.719
Transport equipment network analysis: the value-added contribution
Luis Gerardo Hernández García (ORCID: orcid.org/0000-0003-3985-2123)
Emerging in the twenty-first century, network science provides practical measures for interpreting a system's components and the interactions between them. The literature has focused on countries' interconnections in final goods, but the application of the network perspective to value-added trade is still limited. This paper applies network science measures and a multi-regional input–output analysis, using the UNCTAD-Eora Global Value Chain Database for transport equipment value added in 2017, to unwrap the specific structural characteristics of the industry. Results show that the industry is highly centralized. The center of the network is dominated by developed countries, mainly from Europe, together with the United States and Japan. Emerging countries such as China, Mexico, Thailand, and Poland also hold important positions. In addition, the structure reveals two sub-hubs located in Eastern Europe and North America. Extending the analysis to community detection, the network consists of three different communities led by Germany, the United States, and the United Kingdom, the countries associated with the largest value-added flows. The study concludes that flows are not always consistent with an economy's geographical location, as final-goods analysis usually suggests, and highlights the need to continue using complex networks to reveal the structure of world trade.
The increase in technological processes and the reduction in production and communication costs have transformed the international trade structure into a more dynamic and fragmented system. This has created opportunities for diverse economies to become immersed in Global Value Chains (GVCs). One of these opportunities lies in the manufacturing sector, specifically in the transport equipment industry, owing to the rapid acceleration of the automobile sector, its capacity for re-allocation, and Foreign Direct Investment (FDI) flows. Because of this transformation, a simple analysis of final imports and exports is no longer sufficient to evaluate an economy's strengths and weaknesses. Such an analysis cannot detect how many economies were involved in transforming the final product, and, because of the challenges of collecting value-added data for different products, very few organizations and studies have addressed this phenomenon. Fortunately, network analysis is an efficient tool for representing and examining different systems and their interdependencies. The approach has been used widely in various fields, including neuroscience (Strogatz 2001), biology (Buchanan et al. 2010; Leyeghifard et al. 2017), the environment (Kagawa et al. 2015; Chen et al. 2018; Chai et al. 2011), and, to a lesser extent, economics (Brailly 2016; Kitsak et al. 2010). Barabási (2016) defines a network as a "catalog of a system's components, often called nodes or vertices, and the direct interactions between them, called links or edges." As Amador and Cabral (2016) suggest, networks presuppose the interdependence of observable (and non-observable) units and inspect the influence of connections rather than treating agents as isolated. In addition, the importance of network analysis lies in the study of relations in structural form; that is, it considers the effect of a third party on the relationship between i and j, and how this can influence others.
In the context of trade, Snyder and Kick (1979) provide one of the earliest studies using network analysis. They examine world-system theory along the lines of center and periphery during the 1980s; however, their framework cannot explain the rise of globalization. In a more recent study, Kali et al. (2013) considered countries' specialization patterns to unwrap the trade network system, finding that density and proximity are relevant variables for a country to move toward higher-income products and, therefore, higher growth rates. Noguera et al. (2016) applied traditional input–output (IO) and network analysis, concluding that many economic sectors are related because they present similar characteristics in all countries, and that a higher development level implies an increasing concentration of economic activity in more and better-connected sectors. Gala et al. (2018) pointed out that countries at the core tend to specialize in producing and exporting goods with high value added (complex goods), these countries being mostly high-income economies, while the periphery consists of low-income countries with low-technology, low-complexity products. More recently, Gould et al. (2018) found, from a multidimensional-connectivity perspective, that the main channels determining growth in trade networks are FDI flows, migration, and the internet. Zhou (2020), analyzing Regional Trade Agreement (RTA) effects, suggested that two trading partners located in the center of the network and linked by an RTA trade more than those in the periphery. Vidya and Prabheesh (2020), in the context of the COVID-19 pandemic, concluded that emerging Asian economies, such as China and India, have taken leading roles in world trade networks. Therefore, network analysis can be applied in the GVC context to examine the structure, connectivity and dependence, the countries participating, and their dominance, a different perspective from world-system theory, which focuses only on identifying the core and periphery without analyzing the interdependency among members. In that sense, Amador and Cabral (2016), using the World Input–Output Database, analyzed the evolution of degree centrality in a weighted directed network. They concluded that more countries are joining GVCs, as the density of the different networks has increased over the years. Moreover, Cerina et al. (2015) explore the GVC interconnectivity of industries and their flows at the global, regional, and local levels. They find that industries are asymmetrically connected at the global level, which causes shocks to lead to fluctuations in the whole network, and they conclude that the inter-relationships of industries identified at the cross-sectional country level are still regional in nature. With a database from BACI-CEPII (Base pour l'Analyse du Commerce International-Centre d'Etudes Prospectives d'Informations Internationales), De Benedictis et al. (2013) use centrality measures in a local and global sense to differentiate countries' positions in the general trade network as well as in commodities such as bananas, cement, movies, oil, footwear and engines, concluding that the networks for the selected products are characterized by an oligopolistic structure.
More recently, Cingolani et al. (2017) examined the electronics, motor vehicles, textiles, and apparel sectors between 2007 and 2014, studying the effect of countries' neighborhoods and of third countries, which allowed them to detect clusters and hubs in the networks. However, although the network concept has long existed in the trade literature, its use in the GVC field has been limited. This delay is mainly because the GVC literature focuses on how value chains are formed, for example the snake and spider shapes (World Bank, Global Value Chain Development Report 2017), and not on the structure itself, its connectivity, and its dependency relationships. It is therefore necessary to distinguish the shape from the structure, and this is where the compatibility between value-added flows in GVCs and network science lies. Building on Cingolani et al. (2017), and taking the studies of De Benedictis et al. (2013) and Cerina et al. (2015) as primary references, I aim to take the network analysis further. It is essential to highlight that these studies provided only the topological approach (structure) and measurement approaches such as degree, density, and centralities. Because of this limitation, I use a community detection algorithm, a frequently used tool in network science, to provide an extended way of interpreting and understanding the network structure. The reason for using this tool is that, contrary to the shape of the GVC, which classifies a country according to its production stage, network communities use link density and centralities to classify countries. The concept goes beyond grouping countries by their location: it groups them by their interconnectivity and by the effect of third parties on the relationship between countries i and j. This matters because it classifies countries according to their structural position in the network, revealing the hierarchical organization inside each community and the closest interdependencies among its members. Community detection has been applied primarily in the computational sciences, for example Lu et al. (2018), Yang and Le (2021), and Yang et al. (2013); in biology, Vandeputte et al. (2017); and in social science, Šubelj et al. (2016). This study singles out transport equipment because, for this industry, core regions have been identified through final-demand markets but not through the inter-relations between economies in terms of value-added contribution, with the exception of Pavlínek (2021), who conducted an analysis of the automotive industry in the Eurozone. In addition, the industry is one of the manufacturing industries with the strongest presence of Research and Development (R&D), management, and complex labor-market activities, all of which are essential characteristics in GVC studies. The transport equipment industry has peculiar characteristics that make it a highly complex value chain. It is a mature industry that offers high-end products, and its value chain presents different degrees of fragmentation and technological capacity. That is, multinational companies (MNCs) seek locations with lower costs for the different stages of production, while in the main branch they focus on developing the new technology to be added to the product. In other words, the industry adds technical innovations to existing products rather than developing new ones.
Schwabe (2020) provides a study focusing on German suppliers' risks and strategies in the transition from combustion engines to electric engines in the transport equipment industry. He notes that lead firms tend to reallocate their production processes close to the assembly lines because of increased costs. At a more disaggregated level, Fana and Villani (2022) decompose the automotive supply chain by analyzing employment, value added, and the occupational structure for countries such as Germany, France, the United Kingdom, and Italy. They found that after the global financial crisis the supply chains were reorganized, with the Eastern European countries benefitting the most owing to Germany's strong position in the industry. Moreover, they emphasize that German car makers offshored the production of intermediate components, kept final assembly domestic, and dominated activities such as R&D. Grodzicki and Skrzypek (2020) found similar results using a panel-data ARDL model, concluding that Germany's strong position in the GVCs exists because the country can maintain high value-added inputs in its final goods, whereas Spain relies more on foreign suppliers. Dussel Peters (2022) studied the relationship between the United States, China, and Latin America in the global auto-parts chain. He highlights that China is gradually taking Canada's place in intra-NAFTA relationships as a result of its local-production policy and export orientation after joining the World Trade Organization. Moreover, he states that Mexico has benefited from the preferential tariffs originating from the agreement and from the local content recently required in the automobile industry by the USMCA agreement. Nevertheless, the main limitation of that study is that, even though it tries to explain the triangular relationship, it does so from the perspective of gross trade values and not of value-added flows. From a geographical and knowledge-creation perspective, Rodríguez-De la Fuente and Lampón (2020) studied the cases of the Mexican and Spanish transport industries, mainly the automobile sector, which is the most representative of the industry. They concluded that Mexico's status in the GVCs is characterized by low value added and low knowledge content in its production activities, that it cannot generate technology, and that its value added is mainly due to the North American automobile production system. Spain's position, on the other hand, is characterized by adding value and knowledge to production activity; however, it remains in an intermediate position owing to its dependency on the European production system. Crossa and Ebner (2020) obtained similar results for the Mexican case, as did Sancak (2021) for Mexican and Turkish suppliers. The study by Lee et al. (2021) focuses on the automobile sectors of China, Thailand, and Malaysia and compares them with the success of South Korea. They concluded that China's successful upgrading is due to the increasing share of domestic value added in its exports, its labor productivity, and substantial investment in R&D. Thailand, on the other hand, has focused on increasing its exports; however, the value-added content comes from MNCs, mainly from Japan, so Thailand plays merely a connector role in the industry. Finally, Malaysia still needs to strengthen its role in the GVCs owing to a lack of competitiveness in local markets; thus, the few existing local firms neither add value to their products nor focus on exports.
Nevertheless, none of the previous studies has analyzed the global performance of the transport equipment industry, nor have they used the network approach to explain this phenomenon. To fill this gap in the GVC literature, this research aims to unravel the particular characteristics of the value-added contribution in the transport equipment industry network by addressing the following questions: Is the transport equipment industry highly centralized in terms of value-added flows? Which countries are part of the center, and which are still far from joining the industry? How can we measure the relevance of each country in the network? Do communities exist in the industry, and what are their specific characteristics? Are their members part of the same territory? To answer these questions, I use the UNCTAD-Eora Global Value Chain database, which contains information on the in- and out-flows of value added in the industry, and this information is computed as a complex network system in which countries are represented as nodes and value-added flows as edges. One of the main contributions of this paper is that it analyzes the industry as a whole structure and provides information on the integration process in this network not only by region but also by identifying the leading economies that influence the movement of flows through their interactions. Moreover, by applying the various measures and answering the previous questions, the study clarifies the mixed results of the policies adopted by each country and region and how dependent on, or independent of, the production process they are. These measures confirm the existence of different governance structures in the industry across countries and regions that go beyond a purely geographical factor. Finally, the paper contributes by expanding the use of network measures in the economic field related to GVC studies. The structure of this paper is as follows. Section 2 provides information about the data and the methodology. Section 3 addresses the main questions of this study and presents the network analysis results, divided into four different approaches: the first consists of visual tools; the second applies measures such as degree, clustering, and density, which show that countries with high out-degree dominate the supply of value added in the industry, while high in-degree countries are the users; the third uses the centrality concept, split into two major components, betweenness and eigenvector centrality, which provide valuable information about each country's position and role in the industry; and the fourth and last approach, community detection, provides information on highly related countries by grouping them according to the density of in- and out-flows. Section 4 concludes.
Data sources and methodology
The empirical research uses data from the UNCTAD-Eora Global Value Chain Database (hereafter, the Eora database). The data cover time series for 189 countries and regions on key GVC indicators such as foreign value added (FVA), domestic value added (DVA), and indirect value added (DVX), which are generated from the Eora Multi-Region Input–Output tables (MRIOs) for the transport equipment industry. One of the advantages of this database is its time coverage, which includes the most recent information, for 2017. In addition, it provides a broader panorama by including more countries and regions than the OECD TiVA data set.
It is essential to highlight that, according to the methodology proposed by Casella et al. (2019) to calculate the flows in the Eora database, the results for this dataset and for the OECD data are, overall, aligned and consistent. Even though BACI-CEPII attempts to cover the largest number of countries and time series and follows the reconciliation methodology proposed by Gaulier and Zignago (2010) to reduce the number of missing values, this research aims to contribute to the results obtained by De Benedictis et al. (2013) by exploring the industry with the Eora database and providing an extended panorama of network science studies related to trade. To my knowledge, the studies above have neither addressed the transport equipment industry from the perspective of value-added contribution nor detected communities according to the density of links between members. In addition, none of the previous studies has used the Eora database, which gives this paper the novelty of offering a different perspective and a future reference for comparative analysis.
Setting the network structure
In the Eora database, only 129 countries provide information about the transport equipment industry, creating 16,461 edges. However, to better understand the network characteristics, I applied a cut set to provide more specific results for the industry. This cut set uses the "total network average flow" as a benchmark to select only the top flows in the industry. As a result, the final selection covers 62 countries (nodes) and 689 edges, accounting for 97% of the total value-added flows in the transport equipment industry. Following the Eora methodology, the derivation of value-added trade from the MRIO tables is established using standard IO analysis, and this model can be expressed as:
$$X = AX + Y, \qquad (1)$$
where X is the vector of total outputs by country, A is the matrix of intermediate uses (the inter-industrial matrix between all economies, measured per unit of output), and Y is the final demand. The relationships between countries in (1) can be expressed in terms of the MRIO framework as:
$$X = \left( {I - A} \right)^{ - 1} Y = LY, \qquad (2)$$
where \(L = \left( {I - A} \right)^{ - 1}\) is the Leontief inverse matrix, which provides the direct and indirect outputs needed to satisfy one unit of final demand. Therefore, based on the methodology of Casella et al. (2019), this can also be transferred to the value-added trade framework between countries and expressed as:
$$F = \left( {\begin{array}{*{20}c} {F^{11} } & \ldots & {F^{1N} } \\ \vdots & \ddots & \vdots \\ {F^{N1} } & \ldots & {F^{NN} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {V^{1} } & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & {V^{N} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {L^{11} } & \ldots & {L^{1N} } \\ \vdots & \ddots & \vdots \\ {L^{N1} } & \ldots & {L^{NN} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {E^{1} } & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & {E^{N} } \\ \end{array} } \right), \qquad (3)$$
where F contains the transport equipment flows between countries r (r = 1, …, N) and countries s (s = 1, …, N), V represents the value-added share, and E represents exports. Hence, this matrix describes the value added contained in the transport equipment industry.
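To make Eqs. (1)–(3) concrete, the following minimal sketch computes a bilateral value-added flow matrix F from toy inputs with NumPy. The arrays A, V, and E and their values are hypothetical placeholders rather than data from the Eora release, and the Python/NumPy setup illustrates the algebra; it is not the paper's actual computation pipeline.

```python
import numpy as np

# Toy setting: N economies, a single (transport equipment) sector.
N = 4
rng = np.random.default_rng(42)

A = rng.uniform(0.0, 0.2, size=(N, N))  # intermediate-use coefficients per unit of output
V = rng.uniform(0.2, 0.6, size=N)       # value-added share of each economy
E = rng.uniform(1.0, 5.0, size=N)       # exports of each economy

# Eq. (2): Leontief inverse L = (I - A)^(-1).
L = np.linalg.inv(np.eye(N) - A)

# Eq. (3): F = diag(V) * L * diag(E); entry F[r, s] is the value-added flow
# from economy r embodied in the exports of economy s.
F = np.diag(V) @ L @ np.diag(E)
print(np.round(F, 3))
```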
The value-added contribution as a supplier in the transport equipment industry is defined as the total sum of exports of country r that meet the final demand of all other regions, and can be described as:
$$F_{r}^{\exp } = \sum \limits_{s \ne r}^{N} F_{rs}. \qquad (4)$$
The value-added contribution as a user in the transport equipment industry is defined as the total sum of imports of country r from all other regions, driven by the final demand of country r, and is indicated by:
$$F_{r}^{{{\text{imp}}}} = \sum \limits_{s \ne r}^{N} F_{sr}. \qquad (5)$$
Therefore, the total balance of the value-added contribution is obtained as follows:
$$F_{r}^{{{\text{net}}}} = F_{r}^{\exp } - F_{r}^{{{\text{imp}}}}. \qquad (6)$$
Hence, country r becomes a net supplier when the balance is positive; otherwise, it is a user.
Network indicators
The value-added contribution flows in the transport equipment industry can be represented as a complex network structure: the countries that participate on either the supply or the user side are the nodes, while the flows among them are the edges. The network can be characterized in binary (unweighted) or weighted form. The difference is that the binary form only reflects the existence of a link between two nodes, while the weighted directed form reflects the weight of the link, that is, the total flows on each link and whether the link is incoming or outgoing. This research follows the network calculations proposed by Barabási (2016) and uses fundamental properties of the nodes involved in the network, such as degree, clustering, centrality, and communities.
Degree and degree distribution
In directed networks like the present one, the distinction between incoming and outgoing flows is necessary. Therefore, in-degree and out-degree measures are used to calculate the total degree of a node. This representation is as follows:
$$k_{i}^{{{\text{in}}}} = \sum \nolimits_{j = 1}^{N} g_{ij}, \qquad (7)$$
$$k_{i}^{{{\text{out}}}} = \sum \nolimits_{j = 1}^{N} g_{ji}, \qquad (8)$$
$$k_{i} = k_{i}^{{{\text{in}}}} + k_{i}^{{{\text{out}}}}, \qquad (9)$$
where \(g_{ij}\) is a dummy variable that denotes whether there is a contribution of value-added flows in the transport industry from economy i to economy j, N represents the total number of economies in the network, \(k_{i}^{{{\text{in}}}}\) and \(k_{i}^{{{\text{out}}}}\) indicate the in-degree and out-degree, respectively, and \(k_{i}\) is the total degree of the node. In this study a weighted directed network is used, which means the edges that connect two economies are weighted in proportion to the flows between them. The degree distribution provides the probability that a randomly selected node has degree k. Since we want to know these probabilities and analyze the scale-free structure of the network, the normalized probability distribution satisfies:
$$\sum \limits_{k = 1}^{\infty } p_{k} = 1, \qquad (10)$$
where \(p_{k} = n_{k} /n\), with \(n_{k}\) the number of economies with the same degree k and n the total number of economies; that is, \(p_{k}\) indicates the probability that a given node in the network has degree k. Equation (10) states that, for the normalized distribution, the probabilities over all degrees sum to one. In addition, the network is scale-free if its degree distribution is well fitted by a power law, that is, \(p\left( k \right) \propto k^{ - \gamma }\), meaning that a few countries with many links and high centralities coexist with many others that have few links and low centralities. It is also important to note that networks fall into classes, such as random networks and scale-free networks; for more details, consult Barabási (2016).
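A brief sketch of how the filtered, weighted directed network and the measures of Eqs. (4)–(9) could be assembled is given below, using NetworkX on a toy flow matrix. The country codes, the random flows, and the use of NetworkX are illustrative assumptions; only the "total network average flow" cut mirrors the benchmark described above.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
countries = ["DEU", "USA", "JPN", "MEX", "THA"]  # hypothetical sample of economies
n = len(countries)

# Toy bilateral value-added flow matrix: rows = origin r, columns = destination s.
F = rng.uniform(0.0, 10.0, size=(n, n))
np.fill_diagonal(F, 0.0)

# Keep only flows above the total network average, echoing the paper's cut set.
threshold = F[F > 0].mean()

G = nx.DiGraph()
G.add_nodes_from(countries)
for r in range(n):
    for s in range(n):
        if r != s and F[r, s] >= threshold:
            G.add_edge(countries[r], countries[s], weight=F[r, s])

for c in countries:
    k_total = G.in_degree(c) + G.out_degree(c)      # Eqs. (7)-(9): unweighted total degree
    f_exp = G.out_degree(c, weight="weight")        # Eq. (4): value added supplied
    f_imp = G.in_degree(c, weight="weight")         # Eq. (5): value added used
    print(c, k_total, round(f_exp - f_imp, 2))      # Eq. (6): net supplier (+) or user (-)
```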
Paths and distance
Gravity models suggest that physical distance is a significant determinant of trade between economies: the closer two economies are, the more they trade. In a complex network, distance is a more challenging concept. Since the network framework lacks a notion of physical distance, it is replaced by path length. A path is a route that runs along the links of the network and, in this study, measures the number of economies that must be passed through between two economies for a value-added contribution trade relationship in the transport equipment industry. Several properties relate to paths and distance. The shortest path between nodes i and j is the one with the fewest edges; in much of the literature it is called the distance between nodes i and j, denoted by \(d_{ij}\). In undirected networks, where the direction of the flows does not matter, the distance between i and j is symmetric; this does not hold for directed networks, where the existence of a path from node i to node j does not guarantee the existence of a path from j to i. What matters most for this study is, on average, how many economies a randomly selected country needs to pass through. The average path length is given by:
$$\left\langle d \right\rangle = \frac{1}{{N\left( {N - 1} \right)}} \sum \limits_{i,j = 1,N;i \ne j} d_{ij}. \qquad (11)$$
Clustering coefficient and density
The clustering coefficient expresses the degree to which the neighbors of a given node are related to each other. For our network, it measures whether trade relations through value-added contribution exist among the trading partners in the whole network, and it is calculated as the average over all nodes:
$$\left\langle C \right\rangle = \frac{1}{N} \sum \limits_{i = 1}^{N} c_{i}, \qquad (12)$$
where N is the number of nodes (economies) in the network and \(c_{i} = \frac{{e_{i} }}{{k_{i} \left( {k_{i} - 1} \right)}}\) is the clustering coefficient of a specific node, obtained from the number of links \(e_{i}\) between the \(k_{i}\) neighbors of node i. Therefore, the more densely interconnected the network, the higher the clustering coefficient. A complete network is one in which all nodes are connected, reflecting full reciprocity among the nodes, in this case the economies. If links connect all pairs of nodes, the network is said to be totally dense; the fewer the links, the less dense the network. For a directed network, density can be described as:
$$D = \frac{e}{{N\left( {N - 1} \right)}}, \qquad (13)$$
where \(e\) is the number of existing links in the network; D ranges between 0 and 1, and real networks are expected to have a density far below 1.
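The global statistics just defined, average path length, clustering, and density (Eqs. 11–13), are one-liners once the directed network is in place. The sketch below uses NetworkX built-ins on an invented toy graph; note that nx.density already applies the directed formula e/(N(N−1)) used in Eq. (13), and that the directed average path length is defined only for a strongly connected graph.

```python
import networkx as nx

# Toy directed network standing in for the filtered value-added flow network.
G = nx.DiGraph([("DEU", "USA"), ("USA", "DEU"), ("DEU", "MEX"), ("USA", "MEX"),
                ("MEX", "USA"), ("DEU", "JPN"), ("JPN", "USA")])

# Eq. (13): density e / (N(N-1)) for a directed graph.
print("density:", round(nx.density(G), 3))

# Eq. (12): average clustering coefficient over all nodes.
print("clustering:", round(nx.average_clustering(G), 3))

# Eq. (11): average shortest-path length; computed here only because the toy
# graph is strongly connected (every economy can reach every other economy).
if nx.is_strongly_connected(G):
    print("avg path length:", round(nx.average_shortest_path_length(G), 3))
```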
Centrality
An economy can play an important role as a user and supplier of value-added flows in the network, since it acts as a catalyst for transferring this value-added contribution. Such importance for connectivity can be measured by betweenness centrality and eigenvector centrality. Betweenness centrality indicates how important a node is in terms of connecting other nodes (De Benedictis et al. 2013) and is obtained by:
$$b_{k} = \sum \limits_{i = 1}^{n} \sum \limits_{j = 1}^{n} \sigma_{ij} \left( k \right)/\sigma_{ij}, \qquad (14)$$
where \(\sigma_{ij}\) is the number of shortest paths between economy i and economy j, and \(\sigma_{ij} \left( k \right)\) is the number of shortest paths between economies i and j that pass through economy k (Liu et al. 2022). Eigenvector centrality measures a node's influence by evaluating the importance of its neighbors; as De Benedictis et al. (2013) pointed out, what matters is the centrality of the countries linked to a specific node rather than the node's own centrality. Hence, a country is influential in the network if it is associated with countries that carry large value-added contribution flows. The measure is defined as:
$$v_{i} = \lambda^{ - 1} \sum \limits_{j = 1}^{{n_{i} }} g_{ij} v_{j}, \qquad (15)$$
where \(\lambda\) and \(v_{j}\) correspond to the largest eigenvalue and the associated eigenvector. The advantage of this measure is that it captures the importance of the country itself, the importance of its neighbors, and the effects of third countries on the selected country.
A group of nodes with a higher probability of connecting to each other than to the rest of the network is called a community in network science. One of the most widely used algorithms for detecting communities was proposed by Girvan and Newman (2002); it uses edge betweenness as a centrality measure to identify the shortest paths between nodes and removes, one by one, the edges with the highest betweenness, so that the network gradually splits into communities. It also uses the modularity function to select the optimal cut for dividing the network into groups. The modularity function is as follows:
$$Q = \frac{1}{2m} \sum \limits_{i = 1}^{N} \sum \limits_{j = 1}^{N} \left[ {w_{ij} - \frac{{z_{i} z_{j} }}{2m}} \right]\delta \left( {c_{i} ,c_{j} } \right), \qquad (16)$$
where \(w_{ij} = F_{ij} + F_{ji}\) is the total amount of value-added flows between country i and country j; \(z_{i} = \sum \limits_{j = 1}^{n} w_{ij}\) is the sum of value-added flows attached to economy i; the indicator \(c_{i}\) represents the community to which economy i is assigned; \(\delta \left( {c_{i} ,c_{j} } \right)\) is a function that takes the value of 1 if \(c_{i} = c_{j}\) and 0 otherwise; and \(m = \sum \limits_{i = 1}^{n} \sum \limits_{j = 1}^{n} w_{ij} /2\). Besides community detection, the analysis identifies the cliques inside the whole network. Luce and Perry (1949) defined a clique as a group of individuals whose members all know each other; accordingly, a clique is a complete subgraph with maximal link density. Detecting cliques reveals the hierarchical clustering in a graph, suggesting which nodes are most likely to join together and lead the industry.
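The centrality and community measures of Eqs. (14)–(16), together with the clique search, can be sketched as follows with NetworkX. The toy flows are invented; betweenness is computed on the directed graph, while eigenvector centrality and the Girvan–Newman split are run on an undirected, weight-aggregated projection for simplicity, which is a modeling shortcut rather than the paper's exact procedure.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman, modularity

# Toy weighted directed network standing in for the value-added flow network.
G = nx.DiGraph()
G.add_weighted_edges_from([("DEU", "FRA", 8.0), ("FRA", "DEU", 6.0), ("FRA", "USA", 2.0),
                           ("DEU", "POL", 4.0), ("DEU", "USA", 3.0), ("USA", "MEX", 7.0),
                           ("MEX", "USA", 9.0), ("USA", "CAN", 5.0), ("JPN", "USA", 2.0),
                           ("JPN", "THA", 4.0)])

# Eq. (14): betweenness centrality on the directed graph (unweighted shortest paths).
btw = nx.betweenness_centrality(G)

# Undirected projection with w_ij = F_ij + F_ji, as in Eq. (16).
U = nx.Graph()
for u, v, d in G.edges(data=True):
    w = d["weight"] + (G[v][u]["weight"] if G.has_edge(v, u) else 0.0)
    U.add_edge(u, v, weight=w)

# Eq. (15): eigenvector centrality, here on the weighted undirected projection.
eig = nx.eigenvector_centrality(U, weight="weight", max_iter=1000)

# Girvan-Newman: repeatedly remove the highest-betweenness edges and keep the
# split that maximizes modularity Q (Eq. 16).
best, best_q = None, -1.0
for partition in girvan_newman(U):
    q = modularity(U, partition, weight="weight")
    if q > best_q:
        best, best_q = partition, q
print("communities:", [sorted(c) for c in best], "Q =", round(best_q, 3))

# Largest maximal clique (complete subgraph) of the undirected projection.
print("largest clique:", max(nx.find_cliques(U), key=len))
```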
Complementary measures
Some additional measures provide information on how the network is structured. These measures relate more to the connectivity of the network and complement the measures above. According to Newman (2003), assortativity refers to the increase or reduction in the probability that two nodes connect, based on their degree correlation. With this measure, it is possible to check whether countries tend to trade with others of similar degree (a positive coefficient) or with others of dissimilar degree (a negative coefficient). Assortativity is calculated as follows:
$$\Omega = \frac{{ \sum \nolimits_{jk} jk\left( {e_{jk} - q_{j} q_{k} } \right)}}{{\sigma_{q}^{2} }}, \qquad (17)$$
where \(q_{k}\) is the distribution of the remaining degree, \(e_{jk}\) represents the joint probability distribution of the remaining degrees of the two nodes, \(e_{i,j}\) is the fraction of edges connecting nodes of type i and j, and \(\sigma_{q}^{2}\) is the variance of q, where q is the sum of all \(e_{i,j}\) over both in- and out-flows in a directed network.
In addition, reciprocity is most commonly defined as the probability of mutual connections between directed links, that is, the probability that the counterpart of a node also maintains a link toward the selected node. This measure is calculated as follows:
$$R = \frac{{ \sum \nolimits_{ij} \left( {A \cdot A^{\prime}} \right)_{ij} }}{{ \sum \nolimits_{ij} A_{ij} }}, \qquad (18)$$
where \(A \cdot A^{\prime}\) is the element-wise product of a matrix \(A\) and its transpose, which in this case reflects the contribution of value added.
Following Freeman (1979), centralization is a general method for calculating a graph-level centrality score based on a node-level centrality measure, complementing the simple degree measure. This score reflects the degree to which links are spread throughout the network, providing information on the possible existence of a cluster when the score is high. Centralization is obtained as follows:
$${\mathbb{C}} = \frac{{\sum\nolimits_{n} {\left( {\mathop {\max }\limits_{w} {\mathbb{c}}_{w} - {\mathbb{c}}_{n} } \right)} }}{{\left[ {\left( {N - 1} \right)\left( {N - 2} \right)} \right]}}, \qquad (19)$$
where \({\mathbb{c}}_{n}\) is the centrality of node n and \(\mathop {\max }\limits_{w} {\mathbb{c}}_{w}\) represents the maximum value in the network. The graph centralization score can be normalized by dividing by the maximum theoretical score for a graph of the same size as the graph under study, in this case the transport equipment industry network.
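The complementary measures can likewise be obtained in a few lines. In the sketch below, assortativity (Eq. 17) and reciprocity (Eq. 18) come from NetworkX built-ins, while degree centralization (Eq. 19) is assembled by hand from normalized degree centralities, since NetworkX has no ready-made centralization function; the toy graph is again an invented illustration.

```python
import networkx as nx

# Toy directed network standing in for the filtered value-added flow network.
G = nx.DiGraph([("DEU", "FRA"), ("FRA", "DEU"), ("DEU", "USA"), ("USA", "DEU"),
                ("DEU", "POL"), ("USA", "MEX"), ("MEX", "USA"), ("JPN", "USA")])

# Eq. (17): degree assortativity; a negative value means high-degree hubs attach
# mostly to low-degree countries (a disassortative structure).
print("assortativity:", round(nx.degree_assortativity_coefficient(G), 3))

# Eq. (18): reciprocity, the share of directed links that are mutual.
print("reciprocity:", round(nx.reciprocity(G), 3))

# Eq. (19): Freeman-style degree centralization from normalized degree centralities.
cent = nx.degree_centrality(G)
n = G.number_of_nodes()
c_max = max(cent.values())
centralization = sum(c_max - c for c in cent.values()) / (n - 2)
print("centralization:", round(centralization, 3))
```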
This section provides general details on the value-added network in the transport equipment industry and analyzes its structure on both sides, users and suppliers, for 2017.
First approach: visualization
Figure 1 provides the network structure. Each country is represented by a node whose area reflects its total in- and out-flow of value added, with the relative proportions of in-flows and out-flows indicated in white and blue, respectively; the width of the links reflects the volume of the flows between origin and destination (source: own elaboration with Eora data; the graph was produced with the Large Graph layout). As Amador and Cabral (2016) pointed out, larger economies tend to appear bigger and to be located in the center of the network, mainly because they contribute more value added, while smaller countries tend to be placed outside the center. The value added is concentrated primarily in developed countries such as the United States, Germany, France, Italy, and Japan, together with a few developing countries such as China, Thailand, Mexico, Poland, and Brazil. The position of these countries is as expected, mainly because of the value added contributed by automotive manufacturers to the transport equipment industry: firms such as Daimler (Germany), General Motors (United States), Renault (France), Fiat (Italy), Nissan (Japan), and Toyota (Japan) have the highest market participation worldwide. In addition, these firms have subsidiaries in Poland, Mexico, Brazil, and Thailand. In the case of China, strong local demand has helped the country create value through its national firms, such as SAIC Motors, Dongfeng, and Changan Automobile, among others. A comparison of the performance of different firms was conducted by Ferreira et al. (2021), highlighting the greater dominance of Japanese firms in the industry through alliances such as Nissan–Renault (Japan–France) and Ford–Mazda (United States–Japan). On the aircraft side, American companies dominate the industry, with relevant competition from Russia, France, the Netherlands, Italy, and Japan.
European countries in the center have a larger proportion of out-flows than in-flows, meaning that the value added they supply exceeds the value added they receive. As Pavlínek (2021) demonstrates for the European automotive industry, this result is expected given the degree of integration these countries have already achieved. Moreover, according to the European business facts and figures from Eurostat (2018), the manufacture of transport equipment within the European Union concentrates more than 70% of the industry's value added in the motor vehicles sector, followed by aircraft and spacecraft with no more than 15%, ships and boats with 6%, and the rest split between railway equipment and miscellaneous transport equipment (European Business: Facts and figures—2009 edition). It is therefore no surprise to find at the center of the network countries such as Germany, Italy, France, Spain and, to a lesser extent, the United Kingdom and Sweden, which are the region's leading producers in the automobile industry. In contrast, the United States' larger proportion of in-flows is explained by the large out-flows of Canada and Mexico under NAFTA, visible in the width of the links between them, which reflects the importance of the industry for this region. The same applies to Japan, which has been investing in Mexico in the automotive industry. For countries such as Thailand, South Korea, Mexico, Brazil, Poland, and the Czech Republic, for which the transport equipment industry is important, their position in the network reflects their degree of integration in the value chains as users and suppliers of value added, especially since their trade relations are mainly with countries at the center. However, as the centrality measures will show later (Sect. 3.3), the position of these countries, except for South Korea, derives from their relatively lower costs, their geographical proximity to the centers, optimized transportation options, and a sufficient labor force. Some caution is necessary when reading the value-added contribution of countries further away from the center. As Fig. 1 shows, these countries' flows are mostly out-flows, except for Ukraine and Romania, but they are relatively small compared with the flows of countries in or close to the center, suggesting that these countries still lag behind in joining the GVC of the industry.
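A figure in the spirit of Fig. 1 can be reproduced with NetworkX and Matplotlib: node size is scaled by each country's total (in plus out) value-added flow and edge width by the flow volume. The data below are invented, and a generic force-directed layout stands in for the Large Graph layout mentioned above.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Toy weighted directed network.
G = nx.DiGraph()
G.add_weighted_edges_from([("DEU", "USA", 8.0), ("USA", "DEU", 5.0), ("DEU", "POL", 3.0),
                           ("USA", "MEX", 6.0), ("MEX", "USA", 7.0), ("JPN", "USA", 4.0)])

strength = dict(G.degree(weight="weight"))              # total in + out flow per country
sizes = [120 * strength[n] for n in G.nodes()]          # node area ~ total flow
widths = [0.6 * d["weight"] for _, _, d in G.edges(data=True)]  # edge width ~ flow volume

pos = nx.spring_layout(G, seed=7)                       # force-directed layout
nx.draw_networkx(G, pos, node_size=sizes, width=widths, arrows=True)
plt.axis("off")
plt.show()
```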
Second approach: measures
Density describes the proportion of possible connections in a network that are actual connections. Table 1 reports the network statistics. For the value-added contribution in transport equipment, applying Eq. (13) gives a density of 0.1821, meaning that the network contains only 18.21% of the total possible edges, even though these edges carry 97% of the total flows. In other words, the existing nodes and their flows account for almost the totality of the industry, which leads us to infer a high degree of concentration and, therefore, a possible cluster. Reciprocity expresses the level of two-way ties, in which two economies contribute value added to, and use value added from, one another. Using Eq. (18), the network has a reciprocity coefficient of 0.5341, meaning that more than half of the edges are reciprocated. Given that the coefficient is above 0.5, we infer a high degree of integration of the transport equipment industry, mainly due to the regionalization and fragmentation of the production process, which draws more and more countries and regions into the value-creation chains.
As Fig. 2a shows, most countries have a low degree. Panel (b) suggests that the network follows a power-law distribution: a few economies can be called "hubs" of the network, concentrating the highest values because of their large number of connections, as approximately 20% of the total links belong to countries with a degree equal to or higher than 40.
Fig. 2 Degree distribution of the transport equipment industry: panel (a) shows the histogram of the degree distribution, where the horizontal axis measures the number of links each country has and the vertical axis the number of countries with that degree; panel (b) shows the complementary cumulative distribution on a log–log scale.
Table 2 provides the total degree by country and its ranking, obtained by applying Eq. (9); refer to Table 3 in the Appendix for the country list. Only 13 countries have a degree higher than 40, reinforcing the previous statement. Germany has the most connections, followed by France, the United States, Italy, and Spain. Of particular interest is that the top countries all belong to Europe, except for the United States. These results suggest that the value-added contribution in the industry is dominated by advanced economies, as Fig. 1 shows. However, each country's degree only reflects the number of links it has, not its influence over the entire network; therefore, a more advanced analysis is necessary.
Centralization complements the simple degree measure by describing the distribution of the degree centrality scores. Borgatti et al. (2018) note that low centralization scores suggest that trade ties are spread uniformly throughout the network; in other words, a high score means the flows are concentrated in a small set of countries, while a low score suggests that no cluster exists. The centralization score, calculated using Eq. (19), is 0.5896: the value-added flows tend to be concentrated in a small set of countries, pointing towards a hierarchical network structure.
Assortativity refers to the increase or reduction in the probability that two nodes connect, based on their degree correlation. A positive coefficient indicates that countries tend to connect with others of similar degree, whereas a negative coefficient indicates that high-degree countries connect predominantly with low-degree ones.
The coefficient of −0.5336, obtained from Eq. (17), shows that the network is disassortative: high-degree hubs tend to link with low-degree countries rather than with one another. Applying Eq. (12), the clustering coefficient is 0.76, indicating that three-quarters of the neighbors of a selected node may become neighbors of other nodes, or that a random node is likely to be connected to the hubs. A supporting measure is the average path length, with a value of 1.8: according to Eq. (11), on average an economy is indirectly associated with any other node through its neighbors in only 1.8 steps.
Third approach: centralities
Betweenness centrality and eigenvector centrality are the most valuable tools for analyzing the importance and influence of specific nodes. The former indicates how well situated a node is in terms of the paths it lies on, while the latter provides information about the centralities of the countries it is surrounded by and linked to; in other words, it reflects the importance of the countries to which the node is connected and the influence of third countries. In terms of interconnectivity, measured by betweenness centrality in Eq. (14) and shown in Fig. 3, Germany, the United States, Japan, China, and France are the countries with the best results. On the other hand, the countries with the highest eigenvector centrality, obtained from Eq. (15), are Germany, France, Italy, Spain, and the United Kingdom. Both results place Germany as the most crucial country in the network. In addition, France's position in the network is essential, as it ranks among the top five economies. Eigenvector centrality suggests that, from the point of view of the neighbors, Europe tends to be the center of the industry, indicating that the value-added contribution is still regional.
Fig. 3 Centrality measures: the graph relates betweenness centrality and eigenvector centrality by country. Panel (a) shows low betweenness centrality and high eigenvector centrality; panel (b) both high betweenness and high eigenvector centrality; panel (c) both low betweenness and low eigenvector centrality; and panel (d) high betweenness but low eigenvector centrality. The division for betweenness centrality relies on the mean across all countries, while eigenvector centrality takes 0.5 as the division point.
Within the European Union, Germany concentrated 30.4% of the value added in the motor trade in 2018, followed by France, Italy, Spain, and the Netherlands with 15.8%, 10.1%, 8.0%, and 5.5%, respectively (European Commission, Eurostat 2021). Germany also leads employment in the industry with 25.3%, alongside France (12.0%), Italy (11.0%), Spain (8.5%), and Poland (8.5%). Hence, Germany's position as the creator of value added in the motor trade is indisputable. Figure 3 divides the countries into four different sections. Panel (a) includes countries with low betweenness centrality but high eigenvector centrality; that is, the importance of these countries derives more from the effect of third countries and their neighbors than from their own location in the network. This panel includes Belgium, Canada, the Netherlands, Thailand, the Czech Republic, Slovakia, Hungary, Poland, and Brazil.
In the European Union, Belgium and Slovakia figure among the top five countries with the largest share of the distribution of value-added in the motor trade, with 14.6% and 14.2%, whereas Hungary and Poland have the lowest average personnel costs in the motor trade in the same region (European Commission, Eurostat 2021). Therefore, their position in Fig. 3a reflects policies that concentrate value-added from third countries instead of creating it themselves. Thailand abandoned its local-content policy in the transport industry after China joined the WTO in order to remain competitive in the region. As a result of this policy, lower tariffs attracted diverse MNCs, and since then Thailand has oriented its trade policy towards exports instead of continuing to create value for its local firms (Lee et al. 2021). Hence, Thailand's dependency on third parties in the network is evident. On the other hand, panel (d) contains countries with high betweenness centrality but low eigenvector centrality. Only China is inside this panel, which suggests that China's importance stems from its position in the network rather than from its neighbors. Lee et al. (2021) already pointed out that the Chinese environment in the transport equipment industry favors adding value locally and is self-sufficient in supplying its market. Due to the country's large population, obtaining a labor force with sufficient skills has not been a problem. In recent years, China has upgraded its value chain from being a simple assembler to creating its own products and to investing in and developing new technology through R&D. Even though China has managed to place its products with medium–high value content (auto parts and components) abroad, mainly in the United States, this has not led to total dependence on the global market, which is why China's industry is highly resilient. Thus, China is placed in panel (d), where its position in the structure does not derive from the influence of other countries' relations. Panel (c) reflects countries with low coefficients in both measures; they have no advantageous position in the network, nor do they benefit from the effect of their surrounding countries. Surprisingly, South Korea is in this panel, even though its position is close to joining China in panel (d). The same situation is seen with Portugal, which is close to panel (a). Turkey, Argentina, Finland, and Slovenia are other countries inside this panel. A possible explanation for South Korea's position is that, even though transport equipment is representative of the Korean manufacturing sector (with companies such as Hyundai and Kia Motors), the increasing share of export-oriented service activities has changed the composition of the country's value-added and GDP. In addition, in 2017, South Korea experienced a decline in motor vehicle part production due to weak global demand and increased production costs. Furthermore, the unfavorable environment in the country led General Motors to consider closing one of its facilities, which it eventually did a year later. Finally, panel (b) contains countries well positioned in both measures because of their location in the network and the importance of their neighbors. The panel reveals the leading position of Germany in the industry, followed by advanced countries such as Japan, the United States, France, Italy, Spain, the United Kingdom, Sweden, and Mexico, the only developing country.
In Mexico's manufacturing industry, 50% of the total comes from the automobile industry, which represents approximately 18% of the country's GDP, according to the National Institute of Statistics and Geography of Mexico. This high participation is the result of Mexico's foreign trade policy, which has successfully attracted FDI from Japan, Germany, and South Korea, as well as of NAFTA integration. Mexico has created a cluster in the "Bajío" zone, with preferential tariffs, lower costs, and an available labor force. However, Mexico has failed to absorb the technology and transfer it to local producers, so the value added originating in Mexico has not increased. Therefore, even though Mexico is in panel (b), its importance in the industry stems from the fact that the country offers a suitable environment for MNCs to relocate their production, allowing it to satisfy international demand. The results show that advanced European countries lead the industry in value-added flows and that, as a result of United States and Japanese investment in the Mexican industry, the latter country has gained importance. Nevertheless, as De Benedictis et al. (2013) mention, centrality measures must be read carefully. They may reflect either the significance of a country in a central role or, on the other hand, a severe dependence on major economies, as in the case of Mexico being part of panel (b) in Fig. 3. One way to confirm these results is by finding the largest cliques in the network, which helps detect the countries that form a complete subgraph; that is, all of the countries in it are connected to one another. Figure 4 shows the two largest cliques, each consisting of 18 countries. Both cliques share 14 members; the remaining four differ. The countries inside both subgraphs are Germany, France, Spain, Italy, the United States, the United Kingdom, Belgium, Japan, China, Austria, Sweden, the Netherlands, India, and South Korea, which form the central cluster; the two cliques then follow two different patterns, adding (a) Slovakia, the Czech Republic, Hungary, and Poland (Eastern Europe), or (b) Canada, Mexico, Brazil (the Americas), and Thailand. Largest clique. Color represents the countries that are inside the largest clique. The Fruchterman–Reingold layout was used to produce this graph However, it is still not clear which countries relate to which others, which is the main scope of the following subsection. Fourth approach: community detection Community detection helps identify groups of countries that are densely interconnected among themselves but sparsely connected with others. The algorithm used in this research assigns each node to only one community because it relies on the hypothesis that the partition offers the best community structure. As a result, communities are unique. Contrary to geographical classifications such as core and periphery, community detection reveals mutual preferences, on the basis that one economy can use and aggregate another country's value-added more easily with members of its own community than with countries that belong to other communities and are also further from it. Figure 5 reveals that the network consists of three different communities based on Eq. (16). Community 1, the least dense, mostly includes Eastern European countries such as Poland and the Czech Republic, together with Belgium and some predominant nodes such as the United Kingdom, Brazil, and Korea, for a total of 18 members.
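As a rough illustration of the clique search behind Fig. 4 (not the paper's code), maximal cliques can be enumerated on the undirected projection of the network and the largest ones intersected to recover the shared core of members:

import networkx as nx

def largest_cliques(G) -> list:
    # Enumerate maximal cliques on the undirected projection and keep the largest ones
    U = G.to_undirected()
    cliques = list(nx.find_cliques(U))
    size = max(len(c) for c in cliques)
    return [sorted(c) for c in cliques if len(c) == size]

def shared_members(cliques: list) -> set:
    # The overlap of the largest cliques (the 14 shared countries reported above)
    return set.intersection(*map(set, cliques))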
Community 2, the largest one, is led by Spain, France, Germany, Italy, and, surprisingly, China, and includes 27 members. Community 3 includes 17 economies, is denser than community 1, and is led by the United States, Japan, Canada, and Mexico. Due to the degree of integration of North America and its relationships with Japan, the latter country acts as a bridge, connecting with countries such as Qatar and from there to East Asian countries such as Thailand and Singapore. Community detection. The left axis of the dendrogram shows the maximum-density links between countries after pairing As a consequence of community detection, three different aspects arise. First, the link weights within each community are expected to be correlated with betweenness centrality; therefore, as reflected in Fig. 3, Germany, which holds the top position in that measure, leads community 2 and is related to France, Italy, and Spain, with the rest of the members connected to these countries. As a consequence, community 2 operates mainly at a regional level, dominated by advanced European countries, but it is not limited geographically to that territory. In addition, as Azmeh et al. (2022) pointed out, North African countries such as Algeria and Egypt have become important locations for German companies relocating parts of the production process as a strategy to seek new low-cost spaces. Therefore, these countries belong to community 2, which Germany dominates. Second, countries with few links inside their community are more likely to leave it than those with multiple links. For example, the connection between Mexico and Colombia can be affected if, say, Colombia adds a link with Brazil, which belongs to the first community. This evolution happens when a node gains a new, more strongly weighted link with a country outside its community. Therefore, the denser the community, the higher the probability that its members stay together and form a cluster; otherwise, the community composition would change. Third, the use of community detection gives a broader view of how the value-added contribution structures the industry through its natural production cycle rather than through final goods (final consumption, which occurs at once), and of how even countries that do not belong to the same territory can be in the same community, such as Korea and Brazil. While GVCs might offer opportunities to upgrade the production process and possibilities for new learning, emerging economies and less developed countries remain on the periphery of the network, with different types of dependency on regional hubs. Nevertheless, community membership is not limited to a geographical area; community 3, for example, contains countries from the Americas, East Asia, and Oceania. Beyond these results, community detection can help policymakers design specific policies to improve their country's position in the industry according to members inside and outside the community. Using complex network analysis, this paper applies techniques to visualize and interpret the transport equipment industry's value-added contribution using the Eora database for 2017. Quantifying the basic statistical properties of the network shows that the degree distribution follows a power law and that the network is disassortative, which means that a few large countries dominate the industry while the remaining countries connect mainly to these hubs.
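For reference, a minimal community-detection sketch in the same spirit is given below. It is illustrative only: the partition of Eq. (16) in the paper may rely on a different algorithm; here greedy modularity maximization from networkx stands in, applied to the undirected projection of a directed graph G whose edges are assumed to carry a "weight" attribute.

import networkx as nx
from networkx.algorithms import community

def detect_communities(G) -> list:
    # Modularity-maximizing partition on the undirected, weighted projection;
    # each node ends up in exactly one community
    U = G.to_undirected()
    parts = community.greedy_modularity_communities(U, weight="weight")
    return [sorted(p) for p in parts]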
Therefore, the first conclusion of the research is that the industry is highly centralized and tends to form clusters, which answers the first question addressed in this research. Subsequently, focusing on visual analysis and applying degree, clustering, and density measures, it was found that advanced European countries lead the industry, alongside countries such as the United States, Japan, and China. This is the result of the Euro Zone's integration in the industry, NAFTA effects, and the continually expanding role of China in the global production system. Other countries with high relevance are Mexico and Brazil in the Americas, Poland and the Czech Republic in Eastern Europe, and Thailand and South Korea in Asia. Therefore, regarding the second question of the present study, advanced economies are for the most part the ones in the center, developing countries in which transport equipment is one of the essential industries are located close to the center, while the rest of the countries are far from joining the industry. The network consists of one big cluster of 14 countries and two bridges, one located in Eastern Europe and one in the Americas, suggesting that the governance of global production in transport equipment is still regional even as more countries become part of the production chain. One important aspect of the transport equipment industry is the rapid acceleration of automation. The impact of higher automation would therefore create a new network configuration. That is, countries that possess the technology to incorporate it into production, mostly the United States, China, and Germany, would continue creating higher value-added in the industry; in contrast, developing countries will suffer on the employment side as the automation process focuses on robot adoption instead of low-educated machine operators. Such is the case of Mexico, where, according to the study conducted by Artuc et al. (2019), the increasing exposure to the continuous use of robots by United States industries is affecting Mexico's export growth to the United States, its principal market. Centrality measures were used to assess each country's effect on others and the influence of the surrounding countries. In particular, European countries emerged as the most influential, not only because of their position in the network structure but also because of the influence they receive from their neighbors. Nevertheless, the increasing participation of China and the recent tension between the United States and China could reconfigure the production chains. In addition, as Gereffi et al. (2021) pointed out, the industry now tends to be more concentrated in regional production hubs. This brings a more significant challenge for countries such as Mexico, Poland, the Czech Republic, and Thailand, which have increased their participation in the transport equipment industry through FDI inflows, but with a lower impact on the technology side for local producers. In the case of Thailand, even though the country has turned in recent years to a more FDI-led policy in the transport equipment industry, its model should include measures to absorb foreign knowledge and production skills and to start developing its own market instead of remaining oriented towards re-exports, if the country wants to be among the influential, strongly connected countries.
At first glance, Mexico appears to be one of the "important" countries; however, in reality, the country is part of the bridge between the center and the surrounding areas because of its highly dense value-added flows with countries such as the United States, Canada, Japan, and Germany. Therefore, Mexico should implement policies that help local manufacturers absorb technology from the MNCs. It is expected that in the coming years the country will upgrade its value chains due to the USMCA local-content rule for the automobile industry in North America, which would help the country remain competitive against China in the American market. Hence, betweenness centrality and eigenvector centrality contain relevant information, but their meaning depends on how the results are interpreted. For advanced economies, a dominant position in the network can be related to access to innovation, R&D, market access, and strong institutional resources, whereas for emerging countries it translates into lower-cost competitiveness, fewer tariffs, and a sufficient labor force. Finally, community detection provided more than the traditional image of grouping countries by region. It captures the different aspects of the links between countries, revealing that countries' value-added contribution flows are not always consistent with their geographical location. Three different communities were detected, led by the United Kingdom, Germany, and the United States. Nevertheless, even though some countries attract global firms by offering low-cost labor in order to be integrated into the GVCs, they do not add value to the production process. If they do not support their local firms with policies such as domestic-content requirements or industrial policies that allow local firms to absorb the learning process, as in the successful case of China in different industries, they will remain dependent on the other actors and continue to be confined to the assembly stage of production. In the context of GVCs, the decision of dominant firms to relocate their production process has a high impact on the value-added content of other countries, whether on the re-export side or the import side. A clear example is Germany, which in recent years has adopted a more "local" production process to keep value-added within the country or in nearby areas such as Poland and the Czech Republic due to their lower costs; the United States, Japan, and China, in turn, have been focusing on technology with higher investment in R&D. As discussed throughout the study, GVC analysis based on network methods gives countries a much more detailed understanding of their position in the industry, as it reveals the structural form that emerges from other countries' interactions. Therefore, from this perspective, network science is a valuable tool for adopting proper policies in the trade agenda. Future research can include the "cascade falls" phenomenon in the context of the US–China trade war and the COVID-19 pandemic to understand how resilient countries are according to their position in the network and how much their influence affects others under such circumstances. The data and the RStudio files that support the findings of this study are available from the corresponding author upon reasonable request.
BACI-CEPII: Base pour l'Analyse du Commerce International-Centre d'Etudes Prospectives d'Informations Internationales DVA: Domestic value-added DVX: Indirect value-added FDI: FVA: Foreign value-added GVC: Global Value Chains MNCs: MRIOs: Multi-Region Input–Output NAFTA: North America Free Trade Agreement RTA: Regional Trade Agreements R&D: Amador J, Cabral S (2016) Networks of value-added trade. World Econ 40(7):1291–1313. https://doi.org/10.1111/twec.12469 Artuc E, Christiaensen L, Winkler HJ (2019) Does automation in rich countries hurt developing ones. Evidence from the US and Mexico. The World Bank. http://hdl.handle.net/10986/31279 Azmeh S, Nguyen H, Kuhn M (2022) Automation and industrialization through global value chains: North Africa in the German automotive wiring harness industry. Struct Change Econ Dyn 63:125–138. https://doi.org/10.1016/j.strueco.2022.09.006 Barabási LA (2016) Network science. Cambridge University Press, Cambridge Borgatti SP, Everett MG, Johnson JC (2018) Analyzing social networks, 2nd edn. SAGE Publications Ltd, Los Angeles Brailly J (2016) Dynamics of networks in trade fairs—a multilevel relational approach to the cooperation among competitors. J Econ Geogr 16(6):1279–1301. https://doi.org/10.1093/jeg/lbw034 Buchanan M, Caldarelli G, De Los Ríos P, Michele V (2010) Networks in cell biology. Cambridge University Press, Cambridge Casella B, Bolwijn R, Moran D, Kanemoto K (2019) Improving the analysis of global value chains: the UNCTAD-Eora database. Transnatl Corp 26(3):115–142 Cerina F, Zhu Z, Chessa A, Riccaboni M (2015) World input–output network. PLoS ONE 10(7):e0134025. https://doi.org/10.1371/journal.pone.0134025 Chai C-L, Liu X, Zhang WJ, Baber Z (2011) Application of social network theory to prioritizing Oil & Gas industries protection in a networked critical infrastructure system. J Loss Prev Process Ind 24(5):688–694. https://doi.org/10.1016/j.jlp.2011.05.011 Chen B, Li JS, Wu XF, Han MY, Zeng L, Li Z, Chen GQ (2018) Global energy flows embodied in international trade: a combination of environmentally extended input-output analysis and complex network analysis. Appl Energy 210:98–107. https://doi.org/10.1016/j.apenergy.2017.10.113 Cingolani I, Panzarasa P, Tajoli L (2017) Countries' positions in the international global value networks: centrality and economic performance. Appl Netw Sci 2(21):1–20. https://doi.org/10.1007/s41109-017-0041-4 Crossa M, Ebner N (2020) Automotive global value chains in Mexico: a mirage of development? Third World Q 41(7):1218–1239. https://doi.org/10.1080/01436597.2020.1761252 De Benedictis L, Nenci S, Santoni G, Tajoli L, Vicarelli C (2013) Network analysis of world trade using the BACI-CEPII dataset. CEPII Working paper 2013–24. http://www.cepii.fr/pdf_pub/wp/2013/wp2013-24.pdf. Accessed 22 Nov 2021 Dussel Peters E (2022) The new triangular relationship between the US, China, and Latin America: the case of trade in the autoparts-automobile global value chain (2000–2019). J Curr Chin Aff 51(1):60–82. https://doi.org/10.1177/18681026211024667 European Commission, Eurostat (2018) European business: facts and figures, 2009 edition. Publications Office. https://doi.org/10.2785/23246 European Commission, Eurostat (2021) Key figures on European business: statistics illustrated, 2020 edition (Corselli-Nordblad L, Strandell H, eds). Publications Office of the European Union. https://doi.org/10.2785/82544 Fana M, Villani D (2022) Decomposing the automotive supply chain: employment, value added and occupational structure. 
Struct Chang Econ Dyn 62:407–419. https://doi.org/10.1016/j.strueco.2022.04.004 Ferreira AS, Sacomano Neto M, Candido SEA, Ferrati GM (2021) Network centrality and performance: effects in the automotive industry. Revista Brasileira De Gestão De Negócios 23(4):677–695. https://doi.org/10.7819/rbgn.v23i4.4132 Freeman LC (1979) Centrality in social networks conceptual clarification. Soc Netw 1(3):215–239. https://doi.org/10.1016/0378-8733(78)90021-7 Gala P, Camargo J, Freitas E (2018) The Economic Commission for Latin America and the Caribbean (ECLAC) was right: scale-free complex networks and core-periphery patterns in world trade. Camb J Econ 42:633–651. https://doi.org/10.1093/cje/bex057 Gereffi G, Lim HC, Lee J (2021) Trade policies, firm strategies, and adaptive reconfigurations of global value chains. J Int Bus Policy 4:506–522. https://doi.org/10.1057/s42214-021-00102-z Girvan M, Newman ME (2002) Community structure in social and biological networks. Proc Natl Acad Sci USA 99(12):7821–7826. https://doi.org/10.1073/pnas.122653799 Gould D, Kenett DY, Panterov G (2018) Multidimensional connectivity: benefits, risks, and policy implications for Europe and Central Asia. The World Bank, Washington, D.C. https://doi.org/10.1596/1813-9450-8438 Grodzicki MJ, Skrzypek J (2020) Cost-competitiveness and structural change in value chains—vertically-integrated analysis of the European automotive sector. Struct Change Econ Dyn 55:276–287. https://doi.org/10.1016/j.strueco.2020.08.009 Gualier G, Zignago S (2010) BACI: international trade database at the product-level. The 1994–2007 version, CEPII Working Paper, 23, pp 1–28. http://www.cepii.fr/CEPII/en/publications/wp/abstract.asp?NoDoc=2726. Accessed 7 Dec 2021 Kagawa S, Suh S, Hubacek K, Wiedmann T, Nansai K, Minx J (2015) CO2 emissions clusters within global supply chain networks: implications for climate change mitigation. Glob Environ Change 35:486–496. https://doi.org/10.1016/j.gloenvcha.2015.04.003 Kali R, Reyes J, McGee J, Shirell S (2013) Growth networks. J Dev Econ 101:216–227. https://doi.org/10.1016/j.jdeveco.2012.11.004 Kitsak M, Riccaboni M, Havlin S, Pammolli F, Stanley HE (2010) Scale-free models for the structure of business firm networks. Phys Rev E 81(3):036117. https://doi.org/10.1103/PhysRevE.81.036117 Layeghifard M, Hwang DM, Guttman DS (2017) Disentangling interactions in the microbiome: a network perspective. Trends Microbiol 25(3):217–228. https://doi.org/10.1016/j.tim.2016.11.008 Lee K, Qu D, Mao Z (2021) Global value chains, industrial policy, and industrial upgrading: automotive sectors in Malaysia, Thailand, and China in comparison with Korea. Eur J Dev Res 33:275–303. https://doi.org/10.1057/s41287-020-00354-0 Liu Y, Ma R, Guang C, Chen B, Zhang B (2022) Global trade network and CH4 emission outsourcing. Sci Total Environ 83:150008. https://doi.org/10.1016/j.scitotenv.2021.150008 Lu Z, Wahlström J, Nehorai A (2018) Community detection in complex networks via clique conductance. Sci Rep 8:5982. https://doi.org/10.1038/s41598-018-23932-z Luce RD, Perry AD (1949) A method of matrix analysis of group structure. Psychometrika 14:95–116. https://doi.org/10.1007/BF02289146 Noguera MP, Semitiel GM, López MM (2016) Interindustrial structure and economic development. An analysis from network and input–output perspective. El Trimestre Económico 83(331):581–609. https://doi.org/10.20430/ete.v83i331.212 Pavlínek P (2021) Relative positions of countries in the core-periphery structure of the European automotive industry. 
Eur Urban Reg Stud 29(1):59–84. https://doi.org/10.1177/09697764211021882 Rodríguez de la Fuente M, Lampón JF (2020) Regional upgrading within the automobile industry global value chain: the role of the domestic firms and institutions. Int J Automot Technol Manag 20(3):319–340. https://doi.org/10.1504/IJATM.2020.110409 Sancak M (2021) The varying use of online supplier portals in auto parts-automotive value chains and its implications for learning and upgrading: the case for the Mexican and Turkish suppliers. Glob Netw 22(4):701–715. https://doi.org/10.1111/glob.12348 Schwabe J (2020) Risk and counter-strategies: the impact of electric mobility on German automotive suppliers. Geoforum 110:157–167. https://doi.org/10.1016/j.geoforum.2020.02.011 Snyder D, Kick EL (1979) Structural position in the world system and economic growth, 1955–1970: a multiple-network analysis of transnational interactions. Am J Sociol 84:1096–1126. https://doi.org/10.1086/226902 Strogatz S (2001) Exploring complex networks. Nature 410(6825):268–276. https://doi.org/10.1038/35065725 Šubelj L, Van Eck NJ, Waltman L (2016) Clustering scientific publications based on citation relations: a systematic comparison of different methods. PLoS ONE 11(4):e0154404. https://doi.org/10.1371/journal.pone.0154404 Vandeputte D et al (2017) Quantitative microbiome profiling links gut community variation to microbial load. Nature 551(7681):507–511. https://doi.org/10.1038/nature24460 Vidya CT, Prabheesh KP (2020) Implications of COVID-19 pandemic on the global trade networks. Emerg Mark Financ Trade 56:2408–2421. https://doi.org/10.1080/1540496X.2020.1785426 World Bank Group, IDE-JETRO, OECD, UIBE, World Trade Organization (2017) Global value chain development report 2017: measuring and analyzing the impact of GVCs on economic development. World Bank, Washington, DC. https://openknowledge.worldbank.org/handle/10986/29593. License: CC BY 3.0 IGO. Accessed 10 Nov 2021 Yang H, Le M (2021) High-order community detection in the air transport industry: a comparative analysis among 10 major international airlines. Appl Sci 11(20):9378. https://doi.org/10.3390/app11209378 Yang B, Jin D, Liu J, Liu D (2013) Hierarchical community detection with applications to real-world network analysis. Data Knowl Eng 83:20–38. https://doi.org/10.1016/j.datak.2012.09.002 Zhou M (2020) Differential effectiveness of regional trade agreements, 1958–2012: the conditioning effects from homophily and world-system status. Sociol Q. https://doi.org/10.1080/00380253.2020.1834463 Newman, ME (2003). Mixing patterns in networks. Phys. Rev. E. 67: 026126. https://doi.org/10.1103/PhysRevE.67.026126 I gratefully acknowledge Dr. Ishida Osamu for his valuable comments and suggestions on an earlier version of this study. Many thanks also to the helpful comments I received in the 39th EBES Conference hosted by the Faculty of Economics, The Sapienza University of Rome, Rome, Italy, on April 6th–8th. I would like to thank the reviewers for their valuable comments that helped to improve the quality of this paper. The author has not received financial support from any organization. Graduate School of Economics, Kyushu University, Fukuoka, Japan Luis Gerardo Hernández García Correspondence to Luis Gerardo Hernández García. The corresponding author states that there is no conflict of interest. See Table 3. Hernández García, L.G. Transport equipment network analysis: the value-added contribution. Economic Structures 11, 28 (2022). 
https://doi.org/10.1186/s40008-022-00289-1 Community detection Centrality measures
CommonCrawl
Correction to: A CCR5+ memory subset within HIV-1-infected primary resting CD4+ T cells is permissive for replication-competent, latently infected viruses in vitro Kazutaka Terahara1, Ryutaro Iwabuchi1,2,3, Masahito Hosokawa4,5, Yohei Nishikawa2,3, Haruko Takeyama2,3,4,5, Yoshimasa Takahashi1 & Yasuko Tsunetsugu-Yokota1,6 BMC Research Notes volume 12, Article number: 322 (2019) Cite this article The Original Article was published on 29 April 2019 Correction to: BMC Res Notes (2019) 12:242 https://doi.org/10.1186/s13104-019-4281-5 After publication of the original article [1], the authors became aware of a miscalculation in the original Fig. 2d. $$ \frac{{\% \,{\text{HIV-}}1^{ + \,} {\text{activated}}\,{\text{cells}}\,{\text{at}}\,{\text{Day}}\,5 - \% \,{\text{HIV-}}1^{ + \,} \,{\text{resting}}\,{\text{cells}}\,{\text{at}}\,{\text{Day}}\,5}}{{\% \,{\text{HIV-}}1^{ + \,} {\text{activated}}\,{\text{cells}}\,{\text{at}}\,{\text{Day}}\,5}} \times 100 $$ should be calculated as: $$ \frac{{\% \,{\text{HIV-}}1^{ + \,} {\text{activated}}\,{\text{cells}}\,{\text{at}}\,{\text{Day}}\,5 - \% \,{\text{HIV-}}1^{ + \,} \,{\text{resting}}\,{\text{cells}}\,{\text{at}}\,{\text{Day}}\,5}}{{\% \,{\text{HIV-}}1^{ + \,} \,{\text{resting}}\,{\text{cells}}\,{\text{at}}\,{\text{Day}}\,5}} \times 100 $$ The corrected Fig. 2d is shown in this erratum. HIV-1 infection and culture of resting CD4+ T-cell subsets isolated by cell sorting. Subsets of naïve T cells (TN), or CCR5+ or CCR5− memory T cells (TM), were separately infected and cultured. a Schematic of the protocol of HIV-1 infection and culture. b Representative flow-cytometry profiles of cells from Donor #1 at day 3 and day 5 post-infection (resting or activated), separated according to reporter expression indicating the presence of X4 or R5 HIV-1, with the percentage of each subset indicated (left panels). The intensity of fluorescence for each viral reporter in each cell subset [except for the very low percentage of DsRed+ cells (R5+) in TN cells] is shown in the right-hand panels. c Percentages of HIV-1+ cells in each CD4+ T-cell subset in three donors. d Percentage increases in frequencies of HIV-1+ cells following activation were estimated by comparing percentages of HIV-1+ cells in the activation condition with those in the resting condition at day 5 post-infection. Significant differences (*P < 0.05, **P < 0.01) were determined by repeated-measures one-way ANOVA followed by Tukey's multiple comparison test. In c and d, HIV-1+ cells include the corresponding reporter (either EGFP or DsRed) single-positive cells and double-positive cells Although the statistical significances have been altered, the hierarchical mode between cell-subset groups remains the same. It is still shown that numbers of X4 HIV-1+ cells increased consistently in the CCR5+ TM subset of all three donors tested. Therefore, the correction does not change the scientific conclusion. Terahara K, Iwabuchi R, Hosokawa M, Nishikawa Y, Takeyama H, Takahashi Y, Tsunetsugu-Yokota Y. A CCR5+ memory subset within HIV-1-infected primary resting CD4+ T cells is permissive for replication-competent, latently infected viruses in vitro. BMC Res Notes. 2019;12:242. https://doi.org/10.1186/s13104-019-4281-5. 
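For readers comparing the two expressions, the toy calculation below (illustrative numbers only, not data from the study) shows how moving the denominator from the activated to the resting condition changes the reported percentage increase:

activated = 10.0   # hypothetical % HIV-1+ activated cells at Day 5
resting = 2.0      # hypothetical % HIV-1+ resting cells at Day 5

original_version = (activated - resting) / activated * 100   # as originally published
corrected_version = (activated - resting) / resting * 100    # as corrected here

print(original_version)    # 80.0
print(corrected_version)   # 400.0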
Department of Immunology, National Institute of Infectious Diseases, 1-23-1 Toyama, Shinjuku-ku, Tokyo, 162-8640, Japan Kazutaka Terahara, Ryutaro Iwabuchi, Yoshimasa Takahashi & Yasuko Tsunetsugu-Yokota Department of Life Science and Medical Bioscience, Waseda University, 2-2 Wakamatsu-cho, Shinjuku-ku, Tokyo, 162-8480, Japan Ryutaro Iwabuchi, Yohei Nishikawa & Haruko Takeyama Computational Bio Big-Data Open Innovation Laboratory, National Institute of Advanced Industrial Science and Technology, 3-4-1 Okubo, Shinjuku-ku, Tokyo, 169-8555, Japan Research Organization for Nano & Life Innovation, Waseda University, 513 Wasedatsurumaki-cho, Shinjuku-ku, Tokyo, 162-0041, Japan Masahito Hosokawa & Haruko Takeyama Institute for Advanced Research of Biosystem Dynamics, Waseda Research Institute for Science and Engineering, Waseda University, 2-2 Wakamatsu-cho, Shinjuku-ku, Tokyo, 162-8480, Japan Department of Medical Technology, School of Human Sciences, Tokyo University of Technology, 5-23-22 Nishikamata, Ota-ku, Tokyo, 144-8535, Japan Yasuko Tsunetsugu-Yokota Kazutaka Terahara Ryutaro Iwabuchi Masahito Hosokawa Yohei Nishikawa Haruko Takeyama Yoshimasa Takahashi Correspondence to Kazutaka Terahara. Terahara, K., Iwabuchi, R., Hosokawa, M. et al. Correction to: A CCR5+ memory subset within HIV-1-infected primary resting CD4+ T cells is permissive for replication-competent, latently infected viruses in vitro. BMC Res Notes 12, 322 (2019). https://doi.org/10.1186/s13104-019-4357-2 Received: 04 June 2019
CommonCrawl
Shouldn't the escape velocity of earth (with respect to earth) be less than $\sqrt{\frac{2GM}{R}}=11.2\,\mathrm{km/s}$? We know that the escape velocity of earth is $$\sqrt{\frac{2GM}{R}}=11.2\,\mathrm{km/s}$$ where $G=6.67\times10^{-11}$, $M=\text{mass of earth}$, $R=\text{radius of earth}$. So if I throw an object with velocity $11.2\,\mathrm{km/s}$, it should never come back to earth. But earth itself is rotating, hence the object will also have this velocity, so its velocity is greater than $11.2\,\mathrm{km/s}$ with respect to space. So if I throw an object with velocity less than $11.2\,\mathrm{km/s}$, it should still not come back to earth, as earth's rotational velocity will add to it. Therefore, isn't the escape velocity of earth with respect to earth less than $11.2\,\mathrm{km/s}$? newtonian-gravity relative-motion galilean-relativity escape-velocity – asked by ATHARVA Comment: Yes in a sense, and that is why rockets are generally launched to the east. But to be picky, when one speaks of escape velocity, one takes the rotation of the planet to be zero. Otherwise, it would be different at different latitudes, and inclinations of the launch with respect to the equator. – garyp Answer: Escape velocity is defined in an inertial frame. It's the velocity required to escape a gravitational source centered at a given point. For Earth, that velocity is 11.2 km/s. However, when you talk about throwing an object, you are not typically talking about velocities in an inertial frame. You are typically referring to velocities with respect to the surface of the Earth (that you're standing on). The Earth is rotating. If you convert these velocities into an inertial frame centered on Earth (known as ECI, by the way), you'll see that it takes a weaker toss to the east to reach the 11.2 km/s in ECI, and a stronger toss to the west to reach the same 11.2 km/s in ECI. The escape velocity is the same; the only difference is whether the rotation of the earth helped or hurt you. This is why rockets are generally launched to the east. By launching them in this direction, they benefit from the rotation of the Earth and require less fuel to reach orbit (or escape velocity). – Cort Ammon Comment: So a person standing on the earth would see that a rocket launched to the east escapes earth's atmosphere at a speed less than 11.2, but with respect to space (inertial frame) it will be 11.2. – ATHARVA Comment: @ATHARVA That is correct. It's one of the many strangenesses that come from rotating frames! – Cort Ammon
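For a quick numerical check of the thread's point (not part of the original page; standard textbook constants, a sidereal day, and a due-east or due-west launch from the equator are assumed):

import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24          # mass of Earth, kg
R = 6.371e6           # mean radius of Earth, m
T_sidereal = 86164.1  # Earth's rotation period, s

v_esc = math.sqrt(2 * G * M / R)      # ~11.19 km/s, defined in the inertial (ECI) frame
v_rot = 2 * math.pi * R / T_sidereal  # ~0.465 km/s eastward at the equator

print(f"escape speed in the inertial frame: {v_esc/1e3:.2f} km/s")
print(f"equatorial rotation speed:          {v_rot/1e3:.3f} km/s")
print(f"ground-relative speed, due east:    {(v_esc - v_rot)/1e3:.2f} km/s")
print(f"ground-relative speed, due west:    {(v_esc + v_rot)/1e3:.2f} km/s")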
CommonCrawl
$\mathrm{SU(3)}$ decomposition of $\mathbf{3} \otimes \mathbf{\bar{3}} = \mathbf{8} \oplus \mathbf{1}$? I have a question about the tensor decomposition of $\mathrm{SU(3)}$. According to Georgi (page 142 and 143), a tensor $T^i{}_j$ decomposes as: \begin{equation} \mathbf{3} \otimes \mathbf{\bar{3}} = \mathbf{8} \oplus \mathbf{1} \end{equation} where the $\mathbf{1}$ represents the trace. However, I do not understand why we cannot further decompose the traceless part into a symmetric and an antisymmetric part. In order to understand my logic: A general tensor $\varphi^i$ transforms as: \begin{equation} \varphi^i \rightarrow U^i{}_j \varphi^j \end{equation} whereas $\varphi_i$ transforms as: \begin{equation} \varphi_i \rightarrow (U^*)_i{}^j \varphi_j \end{equation} where $U \in \mathrm{SU(3)}$ is a $3 \times 3$ matrix. Now, I will let $S^i{}_j$ denote the traceless part of $T^i{}_j$ (i.e. $S^i{}_j$ has dimensions $\mathbf{8}$) and we can decompose this in the "symmetric" and "antisymmetric" part as usual: \begin{equation} S^i{}_j = \frac{1}{2}(S^i{}_j + S_j{}^i) + \frac{1}{2}(S^i{}_j - S_j{}^i) \end{equation} Then under an $\mathrm{SU(3)}$ transformation: \begin{equation} S^i{}_j + S_j{}^i \rightarrow U^i{}_k (U^*)_j{}^l S^k{}_l + U^i{}_k (U^*)_j{}^l S^k{}_l = U^i{}_k (U^*)_j{}^l (S^i{}_j + S_j{}^i) \end{equation} and: \begin{equation} S^i{}_j - S_j{}^i \rightarrow U^i{}_k (U^*)_j{}^l S^k{}_l - U^i{}_k (U^*)_j{}^l S^k{}_l = U^i{}_k (U^*)_j{}^l (S^i{}_j - S_j{}^i) \end{equation} Therefore, the symmetric part keeps its symmetry and the antisymmetric part keeps its antisymmetry. Thus two invariant subspaces are created and the representation is reducible? To sum up, I would think we decompose $T^i{}_j$ as: \begin{equation} \mathbf{3} \otimes \mathbf{\bar{3}} = \mathbf{3} \oplus \mathbf{5} \oplus \mathbf{1} \end{equation} where $\mathbf{3}$ denotes the dimensions of the antisymmetric part and $\mathbf{5}$ denotes the dimensions of the symmetric part. Where am I going wrong? Edit: I got my convention from "Invariances in Physics and Group Theory" by Jean-Bernard Zuber: mathematical-physics group-theory group-representations lie-algebra representation-theory $\begingroup$ How would the tensor $S_j{}^i$ be related to $S^i{}_j$? $\endgroup$ – Olof Dec 6 '13 at 8:56 $\begingroup$ I think there is a problem in the transformations of the symmetric and antisymmetric parts... I don't agree with these equations, let me check it! $\endgroup$ – AstoundingJB Dec 6 '13 at 9:28 $\begingroup$ @AstoundingJB: That's my point. It looks like Hunter tries to raise and lower the indices in order to construct the "symmetric" and "anti-symmetric" tensors, but there is no way to do this in $SU(3)$. So the only sensible interpretation is $S_j{}^i - S^i{}_j = 0$, which of course gives an irreducible representation, but not a very interesting one :) $\endgroup$ – Olof Dec 6 '13 at 10:57 $\begingroup$ Good one @Olof ! +1 I didn't get your suggestion at first! :) Probably, this is the key point for one can't decompose further $\boldsymbol{8}$ into... something! $\endgroup$ – AstoundingJB Dec 6 '13 at 11:08 $\begingroup$ Uhm.. wait! I'm wandering if you're confusing the meaning of a transformation matrix, say those you named $U_i^{\phantom{i}j}$, and a base state for this representation, say $S^i_j$ or $T^i_j$... They are very different objects, indeed Georgi write these bases like $\big| ^i_j\big\rangle$, while the $U$s are matrices... 
The point is that the transpose of a base tensor like $S^i_j=\big| ^i_j\big\rangle$ doesn't actually have much sense... It has no sense mixing the two indices, one fro $\boldsymbol{3}$ and one from $\bar{\boldsymbol{3}}$... $\endgroup$ – AstoundingJB Dec 6 '13 at 13:49 Ok, I think there is a mistake here: A general tensor $\varphi^i$ transforms as: $$\varphi^i\rightarrow U^i_{\phantom{1}j}\varphi^j$$ whereas $\varphi_i$ transforms as: $$\varphi_i\rightarrow (U^\boldsymbol{\ast})_i^{\phantom{1}j}\varphi_j$$ Where did you find these equations? The unitary matrix element in the second line should not be a complex conjugate. I don't remember Giorgi's conventions but the customary notation I'm used to is this one: $$U_i^{\phantom{i}j}=U_{ij},\quad \varphi_i\rightarrow U_i^{\phantom{1}j}\varphi_j\\ U^i_{\phantom{i}j}=U^\ast_{ij},\quad \varphi^\ast_i\equiv \varphi^i\rightarrow U^i_{\phantom{1}j}\varphi^j\equiv(U_i^{\phantom{i}j}\varphi_j)^\ast.$$ Hence, in your equations I'd understand: $$(U^\ast)_i^{\phantom{i}j}\equiv U^\ast_{ij}=U^i_{\phantom{i}j}$$ and it doesn't provide the right transformation law for $\varphi_i$. EDIT: well, provided the previous comments, let me clarify some issues with the notation, that may led to confuse the meaning of these transformation laws. Let us choose the convention to denote $SU(N)$ transformations, that is $N\times N$ unitary matrices with unit determinant, with uppercase letters, like $U$, and base states (scalars, vectors and tensors) with lowercase Greek letters, $\psi\in \mathbb{C}^N$. For example vector states transform as: $$\psi\to U\psi,\quad \psi_i\to U_{ij}\psi_j\equiv U_i^{\ j}\psi_j$$ Note that here I followed the convention of writing base states of the fundamental or vector representation with lower indices, as Georgi does and as you can find here. This is the convention I'm used to, but nothing stops you to do the contrary, choosing upper indices! Note also that $U\psi$ represents the ordinary product of an $N\times N$ matrix by a vector $\psi=(\psi_1,\ldots,\psi_N)^T$, and produce a vector of the same type. In the notation $U_{ij}$ the index $i$ represents the rows whereas the second index $j$ represents the columns. It's customary to write it like $U_i^{\ j}$ to distinguish rows and columns. $\psi_i$ is a column vector and $i$ counts its rows. You can define the conjugate representation by means of the conjugate vectors $\psi_i^\ast$, whose transformation law is $$\psi^\ast\to (U\psi)^\ast=\psi^\ast U^\ast,\quad \psi_i^\ast\to (U^\ast)_{ij}\psi_j^\ast=\psi_j^\ast(U^\dagger)_{ji}$$ Since these conjugate vectors transform in a different way with respect to $\psi_i$, it's useful to introduce upper indices to distinguish them: $$\psi^i\equiv \psi_i^\ast \to U^\ast_{ij}\psi_j^\ast\equiv U^i_{\ \ j} \psi^j.$$ As you can see, now indices are "summed on the bottom-right". The extension to any arbitrary $(p,q)$-tensor is trivial, their transformation law are those of the direct (diagonal) product of $p$ type $\psi^i$ vectors and $q$ type $\psi_i$ vectors: $$\psi^{i_1\ldots i_p}_{j_1\ldots j_q}\to \big(U_{j_1}^{\ \ j'_1}\cdot\ldots\cdot U_{j_q}^{\ \ j'_q}\big)\big(U^{i_1}_{\ \ i'_1}\cdot\ldots\cdot U^{i_p}_{\ \ i'_p}\big)\psi^{i'_1\ldots i'_p}_{j'_1\ldots j'_q}.$$ Since upper and lower indices represents different objects it has no sense mixing them. AstoundingJBAstoundingJB $\begingroup$ Ok, I just took a look at Giorgi's book. The conventions I listed are consistent with Giorgi's eq. (10.6-8)! 
Also, take a look at this tutorial, you could find it very usefull: phys.nthu.edu.tw/~class/group_theory2012fall/doc/tensor.pdf $\endgroup$ – AstoundingJB Dec 6 '13 at 10:27 $\begingroup$ Thanks for you reply. I have made an edit so you can see where I got my conventions from. Maybe using all these different conventions are confusing me, and I need to stick to the one you are suggesting. $\endgroup$ – Hunter Dec 6 '13 at 13:31 $\begingroup$ Here it is the edit! I had to wait for the coffee break! ;) $\endgroup$ – AstoundingJB Dec 6 '13 at 16:28 $\begingroup$ Great answer! Thank you; really clarifies a lot of my confusion. $\endgroup$ – Hunter Dec 6 '13 at 16:42 First, if you take the fundamental representation (representation $N$) of $SU(N)$ made of $N$ objects $\varphi^i$, the transformation law is : $\varphi^i \to U^i{}_j \varphi^j$. By taking the complex conjugate, you get : $\varphi^{*i} \to (U^*)^i{}_j \varphi^{*j}= (U^\dagger)^j{}_i \varphi^{*j}$. Now, looking at the last expression with $U^\dagger$, one sees that it is more practical to define objects $\varphi_i$, wich transform like $\varphi^{*i}$ : $\varphi_{i} \to (U^\dagger)^j{}_i \varphi_{j}$, This is the representation $\bar N$ Now clearly, when you make the product of the two representations $N$ and $\bar N$, you have a representation $T^i_j$ which transforms as $\varphi^i\varphi_j$ : $T^i_j \to (U)^i_k (U^\dagger)^l_j T^k_l$ Secondly, you cannot symmetrize or anti-symmetrize the representation $N \otimes \bar N$, that is $T^i_j$, because the indices $i$ and $j$ have a different nature, and correspond to different representations. Now, if you consider the representation $N \otimes N$, that is some representation $S^{ij}$, then here you may separe in a symmetric and anti-symmetric part, for instance, you have : $3 \otimes 3 = 6 \oplus \bar 3 $ The $6$ is the symmetric part, while the $\bar 3$ is dual (equivalent) to the anti-symmetric part, thanks to the Levi-Civita tensor : $\varphi_i = \epsilon_{ijk} \varphi^{jk}$ Due to OP comments, some precisions : You have $U^\dagger = (U^*)^T$, where $T$ means transposed operation. Transposition means exchange of the row and columns of the matrix, that is exchange of the $i$ and $j$ indices. If you put the row indice as an upper indice and the column indice as a lower indice, then the exchange necessarily will put the row indice as a lower indice, and the column indice as a upper indice. Your notation $(U^*)^i{}_j = (U^\dagger)_j{}^i$ is a not-too-good equivalent notation, I say not-too-good, because you loose the orginal meaning that I describe above .About the representations, this is a different thing (these are not the same $i$ and $j$...), the upper indice transforms as a $N$ representation, and the lower indice transforms as a $\bar N$ representation, so it is like apples and bananas, you can only symmetrize or anti-symmetrize equivalent quantities which transform in the same manner ($2$ apples or $2$ bananas), but not $1$ banana + $1$ apple TrimokTrimok 14k1616 silver badges4040 bronze badges $\begingroup$ Thank you for your reply. I have two questions, I was hoping you could elaborate on? i) The second equation you write in your message implies $(U^*)^i{}_j = (U^\dagger)^j{}_i$ but I don't understand this as I have learned to write this as: $(U^*)^i{}_j = (U^\dagger)_j{}^i$. Could you explain me your reasoning? $\endgroup$ – Hunter Dec 6 '13 at 13:38 $\begingroup$ ii) What do you mean with: "the indices $i$ and $j$ have a different nature" (and thus cannot be symetrized)? 
$\endgroup$ – Hunter Dec 6 '13 at 13:40 $\begingroup$ Any help is much appreciated! $\endgroup$ – Hunter Dec 6 '13 at 13:40 $\begingroup$ @Hunter: I updated the answer $\endgroup$ – Trimok Dec 6 '13 at 20:25 $\begingroup$ Good answer! Should've gotten more points. Although one should perhaps choose a different label in e.g. $\varphi^i\varphi_j$ i.e. something like $\varphi^i\eta_j$? $\endgroup$ – Your Majesty Sep 28 '14 at 23:08 SECTION A : What remains invariant for a complex $\:3\times 3\:$ tensor depends upon its transformation law under $\:U \in SU(3)\:$ CASE 1 : $\:\boldsymbol{3}\boldsymbol{\otimes}\boldsymbol{3}=\boldsymbol{6}\boldsymbol{\oplus}\overline{\boldsymbol{3}}\:$ The transformation law for the complex $\:3\times 3\:$ tensor $\:\mathrm{X}\:$ in this case is \begin{equation} \mathrm{X }^{\prime}=U\mathrm{X}U^{\mathsf{T}}\quad \tag{A-01} \end{equation} Here the symmetry (+) or antisymmetry (-) is invariant since \begin{equation} \mathrm{X}^{\mathsf{T}}=\pm\:\mathrm{X} \Longrightarrow {(\mathrm{X }^{\prime})}^{\mathsf{T}}=(U\mathrm{X}U^{\mathsf{T}})^{\mathsf{T}}= {(U^{\mathsf{T}})}^{\mathsf{T}}\mathrm{X}^{\mathsf{T}}U^{\mathsf{T}}=U(\pm\:\mathrm{X})U^{\mathsf{T}}=\pm\:\mathrm{X }^{\prime} \tag{A-02} \end{equation} In this case it makes sense to split the tensor in its symmetric and anti-symmetric parts \begin{equation} \mathrm{\Psi}=\dfrac{1}{2} \left(\mathrm{X}+\mathrm{X}^{\mathsf{T}}\right)\:, \quad\mathrm{\Omega}=\dfrac{1}{2} \left(\mathrm{X}-\mathrm{X}^{\mathsf{T}}\right) \tag{A-03} \end{equation} The symmetric part $\:\mathrm{\Psi}\:$ depends on 6 parameters, so is identical to a complex $\:6$-vector $\:\boldsymbol{\psi}\:$ which belongs to a complex 6-dimensional invariant subspace and is transformed under a special unitary transformation $\:W \in SU(6)\:$ \begin{equation} \boldsymbol{\psi}^{\prime}=W\boldsymbol{\psi}\:, \quad W \in SU(6) \tag{A-04} \end{equation} while, on the other hand, the anti-symmetric part $\:\mathrm{\Omega}\:$ depends on 3 parameters, so is identical to a complex $\:3$-vector $\:\boldsymbol{\omega}\:$ which belongs to a complex 3-dimensional invariant subspace and is transformed under the special unitary transformation $\:\overline{U} \in SU(3)\:$ \begin{equation} \boldsymbol{\omega}^{\prime}=\overline{U}\boldsymbol{\omega}\:, \quad \overline{U} \in SU(3) \tag{A-05} \end{equation} That's why the symmetric and anti-symmetric parts give rise to the terms $\:\boldsymbol{6}\:$ and $\:\overline{\boldsymbol{3}}\:$ in the right hand of equation $\:\boldsymbol{3}\boldsymbol{\otimes}\boldsymbol{3}=\boldsymbol{6}\boldsymbol{\oplus}\overline{\boldsymbol{3}}\:$ respectively. CASE 2 : $\:\boldsymbol{3}\boldsymbol{\otimes}\overline{\boldsymbol{3}}=\boldsymbol{8}\boldsymbol{\oplus}\boldsymbol{1}\:$ The transformation law for the complex $\:3\times 3\:$ tensor $\:\mathrm{X}\:$ in this case is \begin{equation} \mathrm{X }^{\prime}=U\mathrm{X}U^{\boldsymbol{*}}=U\mathrm{X}U^{-1} \tag{A-06} \end{equation} For those interested, this is proved in SECTION B, motivated by the adventure of explaining structure of mesons under quark theory. 
Here the symmetry (+) or antisymmetry (-) is NOT invariant \begin{equation} \mathrm{X}^{\mathsf{T}}=\pm\:\mathrm{X} \Longrightarrow {(\mathrm{X }^{\prime})}^{\mathsf{T}}=(U\mathrm{X}U^{\boldsymbol{*}})^{\mathsf{T}}= {(U^{\boldsymbol{*}})}^{\mathsf{T}}\mathrm{X}^{\mathsf{T}}U^{\mathsf{T}}=\overline{U}(\pm\:\mathrm{X})U^{\mathsf{T}} \ne\pm\:\mathrm{X }^{\prime} \tag{A-07} \end{equation} So it makes NO SENSE to split the tensor in its symmetric and anti-symmetric parts. On the contrary : (1) if $\:\mathrm{X}\:$ is a constant tensor, that is a scalar multiple of the identity, $\:\mathrm{X}=z\mathrm{I}\:$ ($\:z \in \mathbb{C}\:$) , then is invariant $\:\mathrm{X }^{\prime}= U\mathrm{X}U^{-1}=U\left(z\mathrm{I}\right)U^{-1}=z\mathrm{I}=\mathrm{X}\:$ (2) since the transformation (A-06) is a similarity transformation, it preserves the Trace (=sum of the elements on the main diagonal) of $\:\mathrm{X}\:$, that is $\:Tr \left(\mathrm{X}^{\prime}\right)=Tr\left(\mathrm{X}\right)\:$. So a traceless tensor remains traceless. It would sound not very well, but in this case the invariants are the "tracelessness" and the "scalarness". In this case it makes sense to split the tensor in a traceless and in a scalar part : \begin{equation} \mathrm{\Phi}=\mathrm{X}-\left[\dfrac{1}{3}Tr\left(\mathrm{X}\right)\right]\cdot\mathrm{I}\:, \quad \mathrm{\Upsilon}=\left[\dfrac{1}{3}Tr\left(\mathrm{X}\right)\right]\cdot\mathrm{I} \tag{A-08} \end{equation} The traceless part $\:\mathrm{\Phi}\:$ depends on 8 (=3x3-1) parameters, so is identical to a complex $\:8$-vector $\:\boldsymbol{\phi}\:$ which belongs to a complex 8-dimensional invariant subspace NOT FURTHER REDUCED TO INVARIANTS SUBSPACES and is transformed under a special unitary transformation $\:V \in SU(8)\:$ \begin{equation} \boldsymbol{\phi}^{\prime}=V\boldsymbol{\phi}\:, \quad V \in SU(8) \tag{A-09} \end{equation} while, on the other hand, the scalar part $\:\mathrm{\Upsilon}\:$ depends on 1 parameter, so is identical to a complex $\:1$-vector $\:\boldsymbol{\upsilon}\:$ which belongs to a complex 1-dimensional invariant subspace (identical to the set of complex numbers $\:\mathbb{C}\:$) and is transformed under the special unitary transformation $\:\mathrm{I} \in SU(1)\:$ (identical to the identity) \begin{equation} \boldsymbol{\upsilon}^{\prime}=\mathrm{I}\boldsymbol{\upsilon}=\boldsymbol{\upsilon} \tag{A-10} \end{equation} Note that $\:SU(1)\equiv \{\:\mathrm{I}\:\}\:$, that is the group $\:SU(1)\:$ has only one element, the identity $\:\mathrm{I}\:$, while $\:U(1)\equiv\{\:U\::\:U=e^{i\theta}\mathrm{I}\:, \quad \theta \in \mathbb{R} \}\:$, that is mathematically identical to the unit circle in $\:\mathbb{C}\:$. ================================================================================ SECTION B : Mesons from three quarks Suppose we know the existence of three quarks only : $\boldsymbol{u}$, $\boldsymbol{d}$ and $\boldsymbol{s}$. Under full symmetry (the same mass) these are the basic states, let \begin{equation} \boldsymbol{u}= \begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix} \qquad \boldsymbol{d}= \begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix} \qquad \boldsymbol{s}= \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} \tag{B-01} \end{equation} of a 3-dimensional complex Hilbert space of quarks, say $\mathbf{Q}\equiv \mathbb{C}^{\boldsymbol{3}}$. 
A quark $\boldsymbol{\xi} \in \mathbf{Q}$ is expressed in terms of these basic states as \begin{equation} \boldsymbol{\xi}=\xi_1\boldsymbol{u}+\xi_2\boldsymbol{d}+\xi_3\boldsymbol{s}= \begin{bmatrix} \xi_1\\ \xi_2\\ \xi_3 \end{bmatrix} \qquad \xi_1,\xi_2,\xi_3 \in \mathbb{C} \tag{B-02} \end{equation} For a quark $\boldsymbol{\eta} \in \mathbf{Q}$ \begin{equation} \boldsymbol{\eta}=\eta_1\boldsymbol{u}+\eta_2\boldsymbol{d}+\eta_3\boldsymbol{s}= \begin{bmatrix} \eta_1\\ \eta_2\\ \eta_3 \end{bmatrix} \tag{B-03} \end{equation} the respective antiquark $\overline{\boldsymbol{\eta}}$ is expressed by the complex conjugates of the coordinates \begin{equation} \overline{\boldsymbol{\eta}}=\overline{\eta}_1 \overline{\boldsymbol{u}}+\overline{\eta}_2\overline{\boldsymbol{d}}+\overline{\eta}_3\overline{\boldsymbol{s}}= \begin{bmatrix} \overline{\eta}_1\\ \overline{\eta}_2\\ \overline{\eta}_3 \end{bmatrix} \tag{B-04} \end{equation} with respect to the basic states \begin{equation} \overline{\boldsymbol{u}}= \begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix} \qquad \overline{\boldsymbol{d}}= \begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix} \qquad \overline{\boldsymbol{s}}= \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} \tag{B-05} \end{equation} the antiquarks of $\boldsymbol{u},\boldsymbol{d}$ and $\boldsymbol{s}$ respectively. The antiquarks belong to a different space, the space of antiquarks $\overline{\mathbf{Q}}\equiv \mathbb{C}^{\boldsymbol{3}}$. Since a meson is a quark-antiquark pair, we'll try to find the product space \begin{equation} \mathbf{M}=\mathbf{Q}\boldsymbol{\otimes}\overline{\mathbf{Q}}\: \left(\equiv \mathbb{C}^{\boldsymbol{9}}\right) \tag{B-06} \end{equation} Using the expressions (B-02) and (B-04) of the quark $\boldsymbol{\xi} \in \mathbf{Q}$ and the antiquark $\overline{\boldsymbol{\eta}} \in \overline{\mathbf{Q}}$ respectively, we have for the product meson state $ \mathrm{X} \in \mathbf{M}$ \begin{equation} \begin{split} \mathrm{X}=\boldsymbol{\xi}\boldsymbol{\otimes}\overline{\boldsymbol{\eta}}=&\xi_1\overline{\eta}_1 \left(\boldsymbol{u}\boldsymbol{\otimes}\overline{\boldsymbol{u}}\right)+\xi_1\overline{\eta}_2 \left(\boldsymbol{u}\boldsymbol{\otimes}\overline{\boldsymbol{d}}\right)+\xi_1\overline{\eta}_3 \left(\boldsymbol{u}\boldsymbol{\otimes}\overline{\boldsymbol{s}}\right)+ \\ &\xi_2\overline{\eta}_1 \left(\boldsymbol{d}\boldsymbol{\otimes}\overline{\boldsymbol{u}}\right)+\xi_2\overline{\eta}_2 \left( \boldsymbol{d}\boldsymbol{\otimes}\overline{\boldsymbol{d}}\right)+\xi_2\overline{\eta}_3 \left(\boldsymbol{d}\boldsymbol{\otimes}\overline{\boldsymbol{s}}\right)+\\ &\xi_3\overline{\eta}_1 \left(\boldsymbol{s}\boldsymbol{\otimes}\overline{\boldsymbol{u}}\right)+\xi_3\overline{\eta}_2 \left(\boldsymbol{s}\boldsymbol{\otimes}\overline{\boldsymbol{d}}\right)+\xi_3\overline{\eta}_3 \left(\boldsymbol{s}\boldsymbol{\otimes}\overline{\boldsymbol{s}}\right) \end{split} \tag{B-07} \end{equation} In order to simplify the expressions, the product symbol $"\boldsymbol{\otimes}"$ is omitted and so \begin{equation} \begin{split} \mathrm{X}=\boldsymbol{\xi}\overline{\boldsymbol{\eta}}=&\xi_1\overline{\eta}_1 \left(\boldsymbol{u}\overline{\boldsymbol{u}}\right)+\xi_1\overline{\eta}_2 \left(\boldsymbol{u}\overline{\boldsymbol{d}}\right)+\xi_1\overline{\eta}_3 \left(\boldsymbol{u}\overline{\boldsymbol{s}}\right)+ \\ &\xi_2\overline{\eta}_1 \left(\boldsymbol{d}\overline{\boldsymbol{u}}\right)+\xi_2\overline{\eta}_2 \left( \boldsymbol{d}\overline{\boldsymbol{d}}\right)+\xi_2\overline{\eta}_3 
\left(\boldsymbol{d}\overline{\boldsymbol{s}}\right)+\\ &\xi_3\overline{\eta}_1 \left(\boldsymbol{s}\overline{\boldsymbol{u}}\right)+\xi_3\overline{\eta}_2 \left(\boldsymbol{s}\overline{\boldsymbol{d}}\right)+\xi_3\overline{\eta}_3 \left(\boldsymbol{s}\overline{\boldsymbol{s}}\right) \end{split} \tag{B-08} \end{equation} Due to the fact that $\mathbf{Q}$ and $\overline{\mathbf{Q}}$ are of the same dimension, it's convenient to represent the meson states in the product 9-dimensional complex space $\:\mathbf{M}=\mathbf{Q}\boldsymbol{\otimes}\overline{\mathbf{Q}}\:$ by square $3 \times 3$ matrices instead of row or column vectors \begin{equation} \mathrm{X}=\boldsymbol{\xi}\overline{\boldsymbol{\eta}}= \begin{bmatrix} \xi_1\overline{\eta}_1 & \xi_1\overline{\eta}_2 & \xi_1\overline{\eta}_3\\ \xi_2\overline{\eta}_1 & \xi_2\overline{\eta}_2 & \xi_2\overline{\eta}_3\\ \xi_3\overline{\eta}_1 & \xi_3\overline{\eta}_2 & \xi_s\overline{\eta}_3 \end{bmatrix}= \begin{bmatrix} \xi_1\\ \xi_2\\ \xi_3 \end{bmatrix} \begin{bmatrix} \overline{\eta}_1 \\ \overline{\eta}_2 \\ \overline{\eta}_3 \end{bmatrix}^{\mathsf{T}} = \begin{bmatrix} \xi_1\\ \xi_2\\ \xi_3 \end{bmatrix} \begin{bmatrix} \overline{\eta}_1 & \overline{\eta}_2 & \overline{\eta}_3 \end{bmatrix} \tag{B-09} \end{equation} Now, under a unitary transformation $\;U \in SU(3)\;$ in the 3-dimensional space of quarks $\;\mathbf{Q}\;$, we have \begin{equation} \boldsymbol{\xi}^{\prime}= U\boldsymbol{\xi} \tag{B-10} \end{equation} so in the space of antiquarks $\overline{\mathbf{Q}}\;$, since $\;\boldsymbol{\eta}^{\prime}=U\boldsymbol{\eta}\;$ \begin{equation} \overline{\boldsymbol{\eta}^{\prime}}= \overline{U}\;\overline{\boldsymbol{\eta}} \tag{B-11} \end{equation} and for the meson state \begin{equation} \mathrm{X}^{\prime}=\boldsymbol{\xi}^{\prime}\boldsymbol{\otimes}\overline{\boldsymbol{\eta}^{\prime}}=\left(U\boldsymbol{\xi}\right)\left(\overline{U}\overline{\boldsymbol{\eta}}\right) = \Biggl(U\begin{bmatrix} \xi_1\\ \xi_2\\ \xi_3 \end{bmatrix}\Biggr) \Biggl(\overline{U}\begin{bmatrix} \overline{\eta}_1\\ \overline{\eta}_2\\ \overline{\eta}_3 \end{bmatrix}\Biggr)^{\mathsf{T}} \\= U\Biggl(\begin{bmatrix} \xi_1\\ \xi_2\\ \xi_3 \end{bmatrix} \begin{bmatrix} \overline{\eta}_1 & \overline{\eta}_2 & \overline{\eta}_3 \end{bmatrix}\Biggr)\overline{U}^{\mathsf{T}} = U\left(\boldsymbol{\xi}\boldsymbol{\otimes}\overline{\boldsymbol{\eta}}\right)U^{*}=U\;\mathrm{X}\;U^{*} \tag{B-12} \end{equation} so proving the transformation law (A-06). 
FIGURE: The quark structure of the $\boldsymbol{\eta}^{\prime}$, $\boldsymbol{\eta}$ and $\boldsymbol{\pi}^{0}$ mesons. (Note: the meson symbols $\boldsymbol{\eta}^{\prime}$ and $\boldsymbol{\eta}$ must not be confused with the complex 3-vectors in the text.) (1) The meson $\boldsymbol{\eta}^{\prime}$ is a singlet, representative of $\boldsymbol{1}$ in $\boldsymbol{3}\boldsymbol{\otimes}\overline{\boldsymbol{3}}=\boldsymbol{8}\boldsymbol{\oplus}\boldsymbol{1}$. (2) The mesons $\boldsymbol{\eta}$ and $\boldsymbol{\pi}^{0}$ are members of the octet $\lbrace\boldsymbol{\pi}^{+},\boldsymbol{\pi}^{-},\boldsymbol{\pi}^{0},\mathbf{K}^{+},\mathbf{K}^{-},\mathbf{K}^{0},\overline{\mathbf{K}}^{0},\boldsymbol{\eta}\rbrace$, the basic meson states of $\boldsymbol{8}$ in $\boldsymbol{3}\boldsymbol{\otimes}\overline{\boldsymbol{3}}=\boldsymbol{8}\boldsymbol{\oplus}\boldsymbol{1}$, where $\boldsymbol{\pi}^{+}\equiv\boldsymbol{u}\overline{\boldsymbol{d}}$, $\boldsymbol{\pi}^{-}\equiv\boldsymbol{d}\overline{\boldsymbol{u}}$, $\mathbf{K}^{+}\equiv\boldsymbol{u}\overline{\boldsymbol{s}}$, $\mathbf{K}^{-}\equiv\boldsymbol{s}\overline{\boldsymbol{u}}$, $\mathbf{K}^{0}\equiv\boldsymbol{d}\overline{\boldsymbol{s}}$, $\overline{\mathbf{K}}^{0}\equiv\boldsymbol{s}\overline{\boldsymbol{d}}$. Thus $\mathbf{Q}\boldsymbol{\otimes}\overline{\mathbf{Q}}=\lbrace\boldsymbol{\pi}^{+},\boldsymbol{\pi}^{-},\boldsymbol{\pi}^{0},\mathbf{K}^{+},\mathbf{K}^{-},\mathbf{K}^{0},\overline{\mathbf{K}}^{0},\boldsymbol{\eta}\rbrace\boldsymbol{\oplus}\lbrace\boldsymbol{\eta}^{\prime}\rbrace$.
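As a hedged follow-up to the NumPy sketch above (reusing its $U$ and $\mathrm{X}$), the $\boldsymbol{8}\boldsymbol{\oplus}\boldsymbol{1}$ split in the figure note can be seen at the matrix level: the trace of $\mathrm{X}$ carries the singlet and the traceless part carries the octet, and both pieces are preserved by $\mathrm{X}\mapsto U\,\mathrm{X}\,U^{*}$.

```python
# Continuation of the previous sketch: reuses U and X defined there.
singlet = (np.trace(X) / 3) * np.eye(3)    # trace part, proportional to the identity -> the 1
octet = X - singlet                        # traceless 3x3 part                       -> the 8

X_prime = U @ X @ U.conj().T                              # transformation law (B-12)
print(np.isclose(np.trace(X_prime), np.trace(X)))         # True: singlet piece invariant
print(np.allclose(U @ singlet @ U.conj().T, singlet))     # True: identity piece unchanged
print(np.isclose(np.trace(U @ octet @ U.conj().T), 0))    # True: octet piece stays traceless
```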
CommonCrawl
Marvin Turner was discussing summer employment with Tina Song, president of Motown Construction Service:

Tina: I'm glad that you're thinking about joining us for the summer. We could certainly use the help.
Marvin: Sounds good. I enjoy outdoor work, and I could use the money to help with next year's school expenses.
Tina: I've got a plan that can help you out on that. As you know, I'll pay you $14 per hour, but in addition, I'd like to pay you with cash. Since you're only working for the summer, it really doesn't make sense for me to go to the trouble of formally putting you on our payroll system. In fact, I do some jobs for my clients on a strictly cash basis, so it would be easy to just pay you that way.
Marvin: Well, that's a bit unusual, but I guess money is money.
Tina: Yeah, not only that, it's tax-free!
Marvin: What do you mean?
Tina: Didn't you know? Any money that you receive in cash is not reported to the IRS on a W-2 form; therefore, the IRS doesn't know about the income—hence, it's the same as tax-free earnings.

a. Why does Tina Song want to conduct business transactions using cash (not check or credit card)?

Bella Chen's weekly gross earnings for the week ended October 20 were $2,600, and her federal income tax withholding was $554.76. Assuming the social security rate is 6% and Medicare is 1.5% of all earnings, what is Chen's net pay?

A borrower has two alternatives for a loan: (1) issue a $240,000, 60-day, 8% note, or (2) issue a $240,000, 60-day note that the creditor discounts at 8%. a. Calculate the amount of the interest expense for each option.

Ehrlich Co. began business on January 2, 2013. Salaries were paid to employees on the last day of each month, and social security tax, Medicare tax, and federal income tax were withheld in the required amounts. An employee who is hired in the middle of the month receives half the monthly salary for that month. All required payroll tax reports were filed, and the correct amount of payroll taxes was remitted by the company for the calendar year. Early in 2014, before the Wage and Tax Statements (Form W-2) could be prepared for distribution to employees and for filing with the Social Security Administration, the employees' earnings records were inadvertently destroyed. None of the employees resigned or were discharged during the year, and there were no changes in salary rates. The social security tax was withheld at the rate of 6.0% and Medicare tax at the rate of 1.5%. Data on dates of employment, salary rates, and employees' income taxes withheld, which are summarized as follows, were obtained from personnel records and payroll records:

Employee | Date First Employed | Monthly Salary | Monthly Income Tax Withheld
Arnett | Nov. 16 | $5,500 | $1,008
Cruz | Jan. 2 | $4,800 | $833
Edwards | Oct. 1 | $8,000 | $1,659
Harvin | Dec. 1 | $6,000 | $1,133
Nicks | Feb. 1 | $10,000 | $2,219
Shiancoe | Mar. 1 | $11,600 | $2,667
Ward | Nov. 16 | $5,220 | $938

Calculate the amounts to be reported on each employee's Wage and Tax Statement (Form W-2) for 2013, arranging the data in the following form: Employee | Gross Earnings | Federal Income Tax Withheld | Social Security Tax Withheld | Medicare Tax Withheld. Calculate the following employer payroll taxes for the year: (a) social security; (b) Medicare; (c) state unemployment compensation at 5.4% on the first $10,000 of each employee's earnings; (d) federal unemployment compensation at 0.8% on the first $10,000 of each employee's earnings; (e) total.
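A hedged sketch of the arithmetic for the net-pay and note-interest items above; Python is assumed, as are the usual textbook conventions of a 360-day year for note interest and rounding to cents.

```python
# Net pay: gross earnings minus federal withholding, social security, and Medicare.
gross = 2600.00
federal_withholding = 554.76
social_security = round(gross * 0.06, 2)   # 6% of all earnings -> 156.00
medicare = round(gross * 0.015, 2)         # 1.5% of all earnings -> 39.00
net_pay = gross - federal_withholding - social_security - medicare
print(net_pay)                             # 1850.24

# Interest on the two $240,000, 60-day, 8% alternatives (assuming a 360-day year).
face = 240_000
interest_option_1 = face * 0.08 * 60 / 360   # 3200.0 interest expense on the note
discount_option_2 = face * 0.08 * 60 / 360   # 3200.0 discount; proceeds are 236,800
print(interest_option_1, face - discount_option_2)
```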
CommonCrawl
Justin Holmgren Time- and Space-Efficient Arguments from Groups of Unknown Order 📺 Abstract Alexander R. Block Justin Holmgren Alon Rosen Ron D. Rothblum Pratik Soni We construct public-coin time- and space-efficient zero-knowledge arguments for NP. For every time T and space S non-deterministic RAM computation, the prover runs in time T * polylog(T) and space S * polylog(T), and the verifier runs in time n * polylog(T), where n is the input length. Our protocol relies on hidden order groups, which can be instantiated with a trusted setup from the hardness of factoring (products of safe primes), or without a trusted setup using class groups. The argument-system can heuristically be made non-interactive using the Fiat-Shamir transform. Our proof builds on DARK (Bunz et al., Eurocrypt 2020), a recent succinct and efficiently verifiable polynomial commitment scheme. We show how to implement a variant of DARK in a time- and space-efficient way. Along the way we: 1. Identify a significant gap in the proof of security of Dark. 2. Give a non-trivial modification of the DARK scheme that overcomes the aforementioned gap. The modified version also relies on significantly weaker cryptographic assumptions than those in the original DARK scheme. Our proof utilizes ideas from the theory of integer lattices in a novel way. 3. Generalize Pietrzak's (ITCS 2019) proof of exponentiation (PoE) protocol to work with general groups of unknown order (without relying on any cryptographic assumption). In proving these results, we develop general-purpose techniques for working with (hidden order) groups, which may be of independent interest. Transparent Error Correcting in a Computationally Bounded World 📺 Abstract Ofer Grossman Justin Holmgren Eylon Yogev We construct uniquely decodable codes against channels which are computationally bounded. Our construction requires only a public-coin (transparent) setup. All prior work for such channels either required a setup with secret keys and states, could not achieve unique decoding, or got worse rates (for a given bound on codeword corruptions). On the other hand, our construction relies on a strong cryptographic hash function with security properties that we only instantiate in the random oracle model. Public-Coin Zero-Knowledge Arguments with (almost) Minimal Time and Space Overheads 📺 Abstract Zero-knowledge protocols enable the truth of a mathematical statement to be certified by a verifier without revealing any other information. Such protocols are a cornerstone of modern cryptography and recently are becoming more and more practical. However, a major bottleneck in deployment is the efficiency of the prover and, in particular, the space-efficiency of the protocol. For every $\mathsf{NP}$ relation that can be verified in time $T$ and space $S$, we construct a public-coin zero-knowledge argument in which the prover runs in time $T \cdot \mathrm{polylog}(T)$ and space $S \cdot \mathrm{polylog}(T)$. Our proofs have length $\mathrm{polylog}(T)$ and the verifier runs in time $T \cdot \mathrm{polylog}(T)$ (and space $\mathrm{polylog}(T)$). Our scheme is in the random oracle model and relies on the hardness of discrete log in prime-order groups. Our main technical contribution is a new space efficient \emph{polynomial commitment scheme} for multi-linear polynomials. Recall that in such a scheme, a sender commits to a given multi-linear polynomial $P:\mathbb{F}^n \to \mathbb{F}$ so that later on it can prove to a receiver statements of the form ``$P(x)=y$''. 
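Before the description of the scheme continues below, here is a hedged illustration (Python assumed, and emphatically not the paper's protocol) of the object being committed to: a multilinear polynomial $P:\mathbb{F}^n \to \mathbb{F}$ evaluated at a point $x$ directly from a stream of its values on the Boolean hypercube, in one pass with $O(n)$ words of memory. The prime modulus and the lexicographic ordering of the stream are assumptions of the sketch.

```python
# Minimal sketch: streaming evaluation of a multilinear polynomial from its
# values on {0,1}^n, using P(x) = sum over b of P(b) * prod_i eq(b_i, x_i).
MODULUS = 2**61 - 1  # assumed prime modulus for this sketch

def eval_multilinear(hypercube_values, x, p=MODULUS):
    """hypercube_values: iterable of P(b) for b in {0,1}^n in lexicographic order."""
    n = len(x)
    acc = 0
    for idx, val in enumerate(hypercube_values):
        eq = 1
        for i in range(n):
            b_i = (idx >> (n - 1 - i)) & 1            # i-th bit of the point b
            eq = eq * (x[i] if b_i else (1 - x[i]) % p) % p
        acc = (acc + val * eq) % p
    return acc

# Toy usage with n = 2: values at (0,0), (0,1), (1,0), (1,1).
print(eval_multilinear([3, 1, 4, 1], x=[0, 1]))   # 1, matches the value at (0,1)
print(eval_multilinear([3, 1, 4, 1], x=[5, 7]))   # evaluation at a non-Boolean point
```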
In our scheme, which builds on commitments schemes of Bootle et al. (Eurocrypt 2016) and B{\"u}nz et al. (S\&P 2018), we assume that the sender is given multi-pass streaming access to the evaluations of $P$ on the Boolean hypercube and we show how to implement both the sender and receiver in roughly time $2^n$ and space $n$ and with communication complexity roughly $n$. On the Plausibility of Fully Homomorphic Encryption for RAMs 📺 Abstract Ariel Hamlin Justin Holmgren Mor Weiss Daniel Wichs We initiate the study of fully homomorphic encryption for RAMs (RAM-FHE). This is a public-key encryption scheme where, given an encryption of a large database D, anybody can efficiently compute an encryption of P(D) for an arbitrary RAM program P. The running time over the encrypted data should be as close as possible to the worst case running time of P, which may be sub-linear in the data size.A central difficulty in constructing a RAM-FHE scheme is hiding the sequence of memory addresses accessed by P. This is particularly problematic because an adversary may homomorphically evaluate many programs over the same ciphertext, therefore effectively "rewinding" any mechanism for making memory accesses oblivious.We identify a necessary prerequisite towards constructing RAM-FHE that we call rewindable oblivious RAM (rewindable ORAM), which provides security even in this strong adversarial setting. We show how to construct rewindable ORAM using symmetric-key doubly efficient PIR (SK-DEPIR) (Canetti-Holmgren-Richelson, Boyle-Ishai-Pass-Wootters: TCC '17). We then show how to use rewindable ORAM, along with virtual black-box (VBB) obfuscation for specific circuits, to construct RAM-FHE. The latter primitive can be heuristically instantiated using existing indistinguishability obfuscation candidates. Overall, we obtain a RAM-FHE scheme where the multiplicative overhead in running time is polylogarithmic in the database size N. Our basic scheme is single-hop, but we also extend it to obtain multi-hop RAM-FHE with overhead $$N^\epsilon $$ for arbitrarily small $$\epsilon >0$$ .We view our work as the first evidence that RAM-FHE is likely to exist. Permuted Puzzles and Cryptographic Hardness Abstract Elette Boyle Justin Holmgren Mor Weiss A permuted puzzle problem is defined by a pair of distributions $$\mathcal{D}_0,\mathcal{D}_1$$ over $$\varSigma ^n$$ . The problem is to distinguish samples from $$\mathcal{D}_0,\mathcal{D}_1$$ , where the symbols of each sample are permuted by a single secret permutation $$\pi $$ of [n].The conjectured hardness of specific instances of permuted puzzle problems was recently used to obtain the first candidate constructions of Doubly Efficient Private Information Retrieval (DE-PIR) (Boyle et al. & Canetti et al., TCC'17). Roughly, in these works the distributions $$\mathcal{D}_0,\mathcal{D}_1$$ over $${\mathbb F}^n$$ are evaluations of either a moderately low-degree polynomial or a random function. This new conjecture seems to be quite powerful, and is the foundation for the first DE-PIR candidates, almost two decades after the question was first posed by Beimel et al. (CRYPTO'00). However, while permuted puzzles are a natural and general class of problems, their hardness is still poorly understood.We initiate a formal investigation of the cryptographic hardness of permuted puzzle problems. Our contributions lie in three main directions: Rigorous formalization. 
We formalize a notion of permuted puzzle distinguishing problems, extending and generalizing the proposed permuted puzzle framework of Boyle et al. (TCC'17).Identifying hard permuted puzzles. We identify natural examples in which a one-time permutation provably creates cryptographic hardness, based on "standard" assumptions. In these examples, the original distributions $$\mathcal{D}_0,\mathcal{D}_1$$ are easily distinguishable, but the permuted puzzle distinguishing problem is computationally hard. We provide such constructions in the random oracle model, and in the plain model under the Decisional Diffie-Hellman (DDH) assumption. We additionally observe that the Learning Parity with Noise (LPN) assumption itself can be cast as a permuted puzzle.Partial lower bound for the DE-PIR problem. We make progress towards better understanding the permuted puzzles underlying the DE-PIR constructions, by showing that a toy version of the problem, introduced by Boyle et al. (TCC'17), withstands a rich class of attacks, namely those that distinguish solely via statistical queries. On the (In)security of Kilian-Based SNARGs Abstract James Bartusek Liron Bronfman Justin Holmgren Fermi Ma Ron D. Rothblum The Fiat-Shamir transform is an incredibly powerful technique that uses a suitable hash function to reduce the interaction of general public-coin protocols. Unfortunately, there are known counterexamples showing that this methodology may not be sound (no matter what concrete hash function is used). Still, these counterexamples are somewhat unsatisfying, as the underlying protocols were specifically tailored to make Fiat-Shamir fail. This raises the question of whether this transform is sound when applied to natural protocols.One of the most important protocols for which we would like to reduce interaction is Kilian's four-message argument system for all of $$\mathsf {NP}$$ , based on collision resistant hash functions ( $$\mathsf {CRHF}$$ ) and probabilistically checkable proofs ( $$\mathsf {PCP}$$ s). Indeed, an application of the Fiat-Shamir transform to Kilian's protocol is at the heart of both theoretical results (e.g., Micali's CS proofs) as well as leading practical approaches of highly efficient non-interactive proof-systems (e.g., $$\mathsf {SNARK}$$ s and $$\mathsf {STARK}$$ s).In this work, we show significant obstacles to establishing soundness of (what we refer to as) the "Fiat-Shamir-Kilian-Micali" ( $$\mathsf {FSKM}$$ ) protocol. More specifically:We construct a (contrived) $$\mathsf {CRHF}$$ for which $$\mathsf {FSKM}$$ is unsound for a very large class of $$\mathsf {PCP}$$ s and for any Fiat-Shamir hash function. The collision-resistance of our $$\mathsf {CRHF}$$ relies on very strong but plausible cryptographic assumptions. The statement is "tight" in the following sense: any $$\mathsf {PCP}$$ outside the scope of our result trivially implies a $$\mathsf {SNARK}$$ , eliminating the need for $$\mathsf {FSKM}$$ in the first place.Second, we consider a known extension of Kilian's protocol to an interactive variant of $$\mathsf {PCP}$$ s called probabilistically checkable interactive proofs ( $$\mathsf {PCIP})$$ (also known as interactive oracle proofs or $$\mathsf {IOP}$$ s). We construct a particular (contrived) $$\mathsf {PCIP}$$ for $$\mathsf {NP}$$ for which the $$\mathsf {FSKM}$$ protocol is unsound no matter what $$\mathsf {CRHF}$$ and Fiat-Shamir hash function is used. This result is unconditional (i.e., does not rely on any cryptographic assumptions). 
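Since these abstracts revolve around the Fiat-Shamir transform, a minimal toy sketch of the basic move may help readers unfamiliar with it: the verifier's public random challenge is replaced by a hash of the transcript so far. Python and SHA-256 are my assumptions here, and the sketch says nothing about soundness, which is exactly what the results above call into question.

```python
import hashlib

def fiat_shamir_challenge(transcript: bytes, modulus: int) -> int:
    """Derive the 'verifier challenge' non-interactively by hashing the transcript."""
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, "big") % modulus

# Toy usage: the prover hashes its own first message instead of waiting for a coin.
first_message = b"statement || commitment"
challenge = fiat_shamir_challenge(first_message, modulus=2**31 - 1)
print(challenge)
```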
Put together, our results show that the soundness of $$\mathsf {FSKM}$$ must rely on some special structure of both the $$\mathsf {CRHF}$$ and $$\mathsf {PCP}$$ that underlie Kilian's protocol. We believe these negative results may cast light on how to securely instantiate the $$\mathsf {FSKM}$$ protocol by a synergistic choice of the $$\mathsf {PCP}$$ , $$\mathsf {CRHF}$$ , and Fiat-Shamir hash function. Towards Doubly Efficient Private Information Retrieval Ran Canetti Justin Holmgren Silas Richelson Adaptive Succinct Garbled RAM or: How to Delegate Your Database Ran Canetti Yilei Chen Justin Holmgren Mariana Raykova
CommonCrawl
The things in proofs are weird: a thought on student difficulties Posted on May 20, 2020 by Ben Blum-Smith, Contributing Editor "The difficulty… is to manage to think in a completely astonished and disconcerted way about things you thought you had always understood." ― Pierre Bourdieu, Language and Symbolic Power, p. 207 Proof is the central epistemological method of pure mathematics, and the practice most unique to it among the disciplines. Reading and writing proofs are essential skills (the essential skills?) for many working mathematicians. That said, students learning these skills, especially for the first time, find them extremely hard.[1] Why? What's in the way? And what are the processes by which students effectively gain these skills? These questions have been discussed extensively by researchers and teachers alike,[2] and they have personally fascinated me for most of my twenty years in mathematics education. In this blog post I'd like to examine one little corner of this jigsaw puzzle. Imported vs. enculturated To frame the inquiry, I posit that there are imported and enculturated capacities involved in reading and writing proofs. Teachers face corresponding challenges when teaching students about proof. Capacities that are imported into the domain of proof-writing are those that students can access independently of whether they have any mathematics training in school or contact with the mathematical community, let alone specific attention to proof.[3] Capacities that are enculturated are those that students do not typically develop without some encounter with the mathematics community, whether through reading, schooling, math circles, or otherwise. Examples of imported capacities are the student's capacity to reason, and fluency in the language of instruction. Enculturated capacities include, for example, knowledge of specific patterns of reasoning common to mathematics writing but rare outside it, such as the elegant complex of ideas behind the phrase, "without loss of generality, we can suppose…." For imported capacities involved in proof, the teaching challenge is to create conditions that cause students to actually access those capacities while reading and writing proofs. For enculturated capacities, the prima facie teaching challenge is to inculcate them, i.e., to cause the capacities to be developed in the first place. But there is also a prior, less obvious challenge: we have to know they're there. Since many instructors are already very well-enculturated, our culture is not always fully visible to us. If we can't see what we're doing, it's harder to show students how to do it. (This challenge has the same character as that mentioned by Pierre Bourdieu in the epigraph, although he was writing about sociology.) When my personal obsession with student difficulty with proofs first developed, I focused on imported capacities. I had many experiences in which students whom I knew to be capable of very cogent reasoning produced illogical work on proof assignments. It seemed to me that the instructional context had somehow severed the connection between the students' reasoning capacities and what I was asking them to do. I became very curious about why this was happening, i.e., what types of instructional design choices led to this severing, and even more curious about what types of choices could reverse it.
My main conclusion, based primarily on experience in my own and others' classrooms, and substantially catalyzed by reading Paul Lockhart's celebrated Lament and Patricio Herbst's thought-provoking article on the contradictory demands of proof teaching, was this: It benefits students, when first learning proof, to have some legitimate uncertainty and suspense regarding what to believe, and to keep the processes of reading and writing proofs as closely tied as possible to the process of deciding what to believe.[4] I stand by this conclusion, and more broadly, by the view that the core of teaching proof is about empowering students to harness their imported capacities (in the above sense) to the task, rather than learning something wholly new. That said, in the last few years I've become equally fascinated by the challenges of enculturation that are part of teaching proof reading and writing. If I'm honest, my zealotry regarding the importance of imported capacities blinded me to the importance of the enculturated ones. What I want to do in the remainder of this blog post is to propose that a particular feature of proof writing is an enculturated capacity. It's a feature I didn't even fully notice until fairly recently, because it's such a second-nature part of mathematical communication. I offer this proposal in the spirit of the quote by sociologist/anthropologist Pierre Bourdieu in the epigraph: to think in a completely astonished and disconcerted way about something we thought we already understood. Naming it as enculturated has the ultimate goal of supporting an inquiry into how students can be explicitly taught how to do it, though this goes beyond my present scope.

The things in proofs are weird

I recently encountered an article by Kristen Lew and Juan Pablo Mejía-Ramos, in which they compared undergraduate students' and mathematicians' judgements regarding unconventional language used by students in written proofs.[5] One of their findings was that, in their words, "… students did not fully understand the nuances involved in how mathematicians introduce objects in proofs." (2019, p. 121) The hypothesis I would like to advance in this post is offered as an explanation for this finding, as well as for a host of student difficulties I've witnessed over the years: The way we conceptualize the objects in proofs is an enculturated capacity. These objects are weird. In particular, the sense in which they exist, what they are, is weird. They have a different ontology than other kinds of objects, even the objects in other kinds of mathematical work. An aspect of learning how to read and write proofs is getting accustomed to working with objects possessing this alternative ontology.[6] If this is true, then it makes sense that undergraduates don't quite have their heads wrapped around the way that mathematicians summon these things into being. The place where this is easiest to see is in proofs by contradiction. When you read a proof by contradiction, you are spending time with objects that you expect will eventually be revealed never to have existed, and you expect this revelation to furthermore tell you that it was impossible that they had ever existed. That's bizarro science fiction on its face. But it's also true, more subtly perhaps, of objects appearing in pretty much any other type of proof. To illustrate: suppose a proof begins, "Let $\Lambda$ be a lattice in the real vector space $\mathbb{R}^n$, and let $v$ be a nonzero vector of minimal (Euclidean) length in $\Lambda$."

Question.
What kind of a thing is $v$?

[The camera pans back to reveal this question has been asked by a short babyfaced man wearing a baseball cap, by the name of Lou Costello. His interlocutor is a taller, debonair fellow with a blazer and pocket square, answering to Bud Abbott.]

Abbott: It's a vector in $\Lambda$.
Costello: Which vector?
Abbott: Well, it's not any particular vector. It depends on $\Lambda$.
Costello: You just said it was a particular vector and now it's not a particular vector?
Abbott: No, well, yes, it's some vector, so in that sense it's a particular vector, but I can't tell you which one, so in that sense it's no particular vector.
Costello: You can't tell me which one?
Abbott: No.
Costello: Why not?
Abbott: Because it depends on $\Lambda$. It's one of the vectors that's minimal in length among nonzero vectors in $\Lambda$. No particular vector.
Costello: But is it some vector?
Abbott: Naturally!
Costello: You said it depends on $\Lambda$. What's $\Lambda$?
Abbott: A lattice in $\mathbb{R}^n$.
Costello: Which lattice?
Abbott: Any lattice.
Costello: Why won't you say which lattice?
Abbott: Because I'm trying to prove something about all lattices.
Costello: You mean to say $\Lambda$ is every lattice???
Abbott: No, it's just one lattice.
Costello: Which one?!

For any readers unfamiliar with the allusion here, it is to "Who's on First?", legendary comedy duo Abbott & Costello's signature routine.[7] What's relevant to the present discussion is that the skit is based on Costello asking Abbott a sequence of questions about a situation to which he is an outsider and Abbott is an insider. Costello becomes increasingly frustrated by Abbott's answers, which make perfect sense from inside the situation, but seem singularly unhelpful from the outside. Abbott for his part maintains patience but is so internal to his situation—enculturated, as it were—that he doesn't address, or even seem to perceive, the ways he could be misunderstood by an outsider.[8]

My goal with this little literary exercise has been to dramatize the strangeness of the "arbitrary, but fixed" nature of the objects in proofs. Most things we name, outside of proof-writing, don't have this character. Either they're singular or plural; one or many; specific or general; not both. Every so often, we speak of a singular that represents a collective ("the average household", "a typical spring day"), or that is constituted from a collective ("the nation"), but these are still ultimately singular. They are not under the same burden as mathematical proof objects, to be able to stand in for any member of a class. Proof objects aren't representative members of classes but universal members. This makes them fundamentally unspecified, even while we imagine and write about them as concrete things.

There's an additional strangeness: proof objects, and the classes of which they are the universal members, are themselves often constituted in relation to other proof objects. We get chains, often very long, where each link adds a new layer of remove from true specificity, but we still treat each link in the chain, including the final one, as something concrete. I was trying to hint at this by posing the question "what is it?" about $v$, rather than $\Lambda$, in the example above. As consternated as Costello is by $\Lambda$, $v$ is doubtless more confounding. I think there are at least two distinct aspects of this that students new to proof do not usually do on their own without some kind of enculturation process.
In the first place, the initial move of dealing with everything in a class of objects simultaneously by postulating a "single universal representative" of that class just isn't automatic. This is a tool mathematical culture has developed. Students need to be trained, or to otherwise catch on, that a good approach to proving a statement of the form "For all real numbers…" might begin, "Let $x$ be a real number."[9]

But secondly, when we work with these objects, their "arbitrary, but fixed" character forces us to hold them in a different way, mentally, than we hold the objects of our daily lives, or even the mathematical objects of concrete calculations. When you read, "Let $f$ be a smooth function $\mathbb{R}\to\mathbb{R}$," what do you imagine? A graph? Some symbols? How does your mental apparatus store and track the critical piece of information that $f$ can be any smooth function on $\mathbb{R}$? Reflecting on my own process, I think what I do in this case is to imagine a vague visual image of a smooth graph, but it is "decorated"—in a semantic, not a visual, way—by information about which features are constitutive and which could easily have been different. The local maxima and minima I happen to be imagining are stored as unimportant features while the smoothness is essential. Likewise, when I wrote, "Let $\Lambda$ be a lattice in $\mathbb{R}^n$," what did you imagine? Was there a visual? If so, what did you see? I imagined a triclinic lattice in 3-space. But again, it was somehow semantically "decorated" by information about which features were constitutive vs. contingent. That I was in 3 dimensions was contingent, but the periodicity of the pattern of points I imagined was constitutive. I'm positing that students new to proof do not usually already know how to mentally "decorate" objects in this way.[10]

Here are some specific instances of student struggle that seem to me to be illuminated by the ideas above.

In the paper of Lew and Mejía-Ramos mentioned above, eight mathematicians and fifteen undergraduates (all having taken at least one proof-oriented mathematics course) were asked to assess student-produced proofs for unconventional linguistic usages. The sample proofs were taken from student work on exams in an introduction to proof class. One of these sample proofs began, "Let ." Seven of the eight mathematicians identified the "Let …" as unconventional without prompting, and the eighth did as well when asked about it. Of the fifteen undergraduate students, on the other hand, only four identified this sentence as unconventional without prompting, while even after being asked directly about it, six of the students maintained that it was not unconventional. I would like to understand better what these six students thought that the sentence "Let " meant.

Previously on this blog, I described the struggle of a student to wrap her head around the idea, in the context of proofs, that you are supposed to write about $\varepsilon$ as though it's a particular number, when she knew full well that she was trying to prove something for all $\varepsilon$ at once.

A year and a half ago, I was working with students in a game theory course. They were developing a proof that a Nash equilibrium in a two-player zero-sum game involves maximin moves for both players. It was agreed that the proof would begin by postulating a Nash equilibrium in which some player, say $A$, was playing a move $m$ that was not a maximin move. By the definition of a maximin move, this implies that $A$ has some other move $m'$ such that the minimum possible payout for $A$ if she plays move $m'$ is higher than the minimum possible payout if she plays $m$.
The students recognized the need to work with this "other move" but had trouble carrying this out. In particular, it was hard for them to keep track of its constitutive attribute, i.e., that its minimum possible payout for $A$ is higher than $m$'s. They were as drawn to chains of reasoning that circled back to this property of $m'$ as a conclusion, as they were to chains of reasoning that proceeded forward from it.

In the same setting as the previous example, there was a student who, in order to get her mind around what was going on, very sensibly constructed some simple two-player games to look at. I don't remember the examples, but I remember this: I kept expecting that when she looked at the fully specified games, "what $m'$ was" would click for her, but it didn't. Instead, I found myself struggling to be articulate in calling her attention to $m'$, precisely because its constitutive attribute was now only one of the many things going on in front of us; nothing was "singling it out." I found myself working to draw her attention away from the details of the examples she'd just constructed in order to focus on the constitutive attribute of $m'$. My reflection on this student's experience was what first pointed me toward the ideas in this blog post: I mean really, what is $m'$, anyway, that recedes from view exactly when the situation it's part of becomes visible in detail?

This semester I taught a course on symmetry for non-math majors. It involved some elementary group theory. An important exercise was to prove that in a group, if $a \ast x = a \ast y$ then $x = y$. One student produced an argument that was essentially completely general, but carried out the logic in a specific group, with a specific choice of $a$, and presented it as an example. Here is a direct quote, edited lightly for grammar and typesetting. "For example [take $r_{1/4} \ast x = r_{1/4} \ast y$]; if we will operate on both sides the inverse of [$r_{1/4}$]. As we have proven that [$r_{3/4} \ast r_{1/4} = e$], we can change the structure of the equation to [$x = y$], [which] shows that x has to be equal to y." The symbols $r_{1/4}$ and $r_{3/4}$ refer to one-quarter and three-quarters rotations in the dihedral group $D_4$. From my point of view as instructor, the student could have transformed this from an illustrative example to an actual proof just by replacing $r_{1/4}$ and $r_{3/4}$ with $a$ and $a^{-1}$, respectively, throughout. What was the obstruction to the student doing this?

My claim is that the mathematician's skill of mentally capturing classes of things by positing "arbitrary, but fixed" universal members of those classes, and then proceeding to work with these universal members as though they are actual objects that exist, is an enculturated capacity.[11] I think it's a little bit invisible to us—at least, it was so to me, for a long time. My purpose in advancing this claim is that making the skill visible invites an inquiry into how we can explicitly lead students to acquire it. I hope those of you who have given attention to how to train students in this particular aspect of proof (reading and) writing will offer some thoughts in the comments!

I would like to thank Mark Saul and especially Yvonne Lai for extremely helpful editorial feedback.

[1] I trust that any reader of this blog who has ever taught a course, at any level, that serves as its students' introduction to proof, has some sense of what I am referring to. Additionally, the research literature is dizzyingly vast and there is no hope to do it any justice in this blog post, let alone this footnote. But here are some places for an interested reader to start: S. Senk, How well do students write geometry proofs?, The Mathematics Teacher Vol. 78, No. 6 (1985), pp.
448–456 (link); R. C. Moore, Making the transition to formal proof, Educational Studies in Mathematics, Vol. 27 (1994), pp. 249–266 (link); W. G. Martin & G. Harel, Proof frames of preservice elementary teachers, JRME Vol. 20, No. 1 (1989), pp. 41–51 (link); K. Weber, Student difficulty in constructing proofs: the need for strategic knowledge, Educational Studies in Mathematics, Vol. 48 (2001), pp. 101–119 (link); and K. Weber, Students' difficulties with proof, MAA Research Sampler #8 (link). [2] Again, I cannot hope even to graze the surface of this conversation in a footnote. The previous note gives the reader some places to start on the scholarly conversation. A less formal conversation takes place across blogs and twitter. Here is a recent relevant blog post by a teacher, and here are some recent relevant threads on Twitter. [3] This and the following sentence should be treated as definitions. I am indulging the mathematician's prerogative to define terms and expect that the audience will interpret them according to those definitions throughout the work. In particular, while I hope I've chosen terms whose connotations align with the definitions given, I'm relying on the reader to go with the definitions rather than the connotations in case they diverge. I invite commentary on these word choices. [4] This is an argument I have made at length in the past on my personal teaching blog (see here, here, here, here, here), and occasionally in a very long comment on someone else's blog (here). Related arguments are developed in G. Harel, Three principles of learning and teaching mathematics, in J.-L. Dorier (ed.), On the teaching of linear algebra, Dordrecth: Kluwer Academic Publishers, 2000, pp. 177–189 (link; see in particular the "principle of necessity"); and in D. L. Ball and H. Bass, Making believe: The collective construction of public mathematical knowledge in the elementary classroom, in D. Phillips (ed.), Yearbook of the National Society for the Study of Education, Constructivism in Education, Chicago: Univ. of Chicago Press, 2000, pp. 193–224. [5] K. Lew & J. P. Mejía-Ramos, Linguistic conventions of mathematical proof writing at the undergraduate level: mathematicians' and students' perspectives, JRME Vol. 50, No. 2 (2019), pp. 121–155 (link). [6] Disclaimer: although I am using the word "ontology" here, I am not trying to do metaphysics. The motivation for this line of inquiry is entirely pedagogical: what are the processes involved in students gaining proof skills? [7] Here's a video—it's a classic. [8] One of the keys to the humor is that the audience is able to see the big picture all at once: the understandable frustration of Costello, the uninitated one, apparently unable to get a straight answer; the endearing patience of Abbott, the insider, trying so valiantly and steadfastly to make himself understood; and, the key idea that Costello is missing and that Abbott can't seem to see that Costello is missing. I'm hoping to channel that sense of stereovision into the present context, to encourage us to see the objects in a proof simultaneously with insider and outsider eyes. [9] Annie Selden and John Selden write about the behavioral knowledge involved in proof-writing, and use this move as an illustrative example. A. Selden and J. Selden, Teaching proving by coordinating aspects of proofs with students' abilities, in Teaching and Learning Proof Across the Grades: A K-16 Perspective, New York: Routledge, 2009, p. 343. 
[10] The ideas in this paragraph are related to Efraim Fischbein's notion of "figural concepts"—see E. Fischbein, The theory of figural concepts, Educational Studies in Mathematics Vol. 24 (1993), pp. 139–162 (link). Fischbein argues that the mental entities studied in geometry "possess simultaneously conceptual and figural characters" (1993, p. 139). Fischbein's work in turn draws on J. R. Anderson, Arguments concerning representations for mental imagery, Psychological Review, Vol. 85 No. 4 (1978), pp. 249–277 (link), which, in a broader (not specifically mathematical) context, discusses "propositional" vs. "pictorial" qualities of mental images. The resonance with the dichotomy I've flagged as "semantic" vs. "visual" is clear. I'm suggesting that the particular interplay between these poles that is involved in conceptualizing proof objects is a mental dance that is new to students who are new to proof. (Actually, it is not entirely clear to me that the dichotomy I want to highlight is "semantic" vs. "visual" as much as "general" vs. "specific"; perhaps it's just that visuals tend to be specific. However, time does not permit to develop this inquiry further here.) [11] Because this circle of skills involve taking something strange and abstract and turning it into something the mind can deal with as a concrete and specific object, they strike me as related to some notions well-studied in the education research literature: Anna Sfard's reification and Ed Dubinsky's APOS theory—both ways of describing the interplay between process and object in mathematics learning—and the more general concept of compression (see, e.g., D. Tall, How Humans Learn to Think Mathematically, New York: Cambridge Univ. Press, 2013, chapter 3). This entry was posted in Mathematics Education Research, Student Experiences. Bookmark the permalink. 8 Responses to The things in proofs are weird: a thought on student difficulties Sylvia Wenmackers says: Dear Ben, This is very interesting! It reminds me of some work in philosophy of mathematics on "arbitrary objects". Kit Fine worked on this in the 1980s and Leon Horsten published a book about it in 2019, called "The Metaphysics and Mathematics of Arbitrary Objects". For a very brief intro to Fine's work, see Horsten's summary with some criticism (2019, p. 131): https://books.google.be/books?id=hmSbDwAAQBAJ&pg=PA131. Best wishes, Sylvia Ben Blum-Smith says: Thank you Sylvia! I'm delighted, if unsurprised, to learn of philosophical work on these types of objects. I'm excited for the opportunity to have consideration of these teaching questions be informed a metaphysics lens in addition to a cognitive psychology lens as in the references discussed in note [10]. Paul Pearson says: Hi Ben, I love the Abbott and Costello routine! Our language of formal proof uses loaded phrases (WLOG, let, etc.) that students initially don't know how to understand because these phrases don't explain *why* particular choices are being made or what some of the alternative choices are, which means they're not grounded in something that makes sense to the student. Students need to unpack the meaning of phases such as "Let v in Lambda…" and explain their context and purpose to themselves and the reader. For example, if a student wrote, "There are potentially many nonzero vectors in the lattice of minimal length. We know such a vector exists because the lattice has at least one nonzero vector. 
Further, there could be more than one vector in the lattice of minimal length because such vectors of minimal length could all lie on the same n-sphere. For the sake of having a general argument that does not rely on any particular properties of one such minimal vector, but only relies upon the fact that the vector is of minimal length among all the vectors in the lattice, we will choose one minimal length vector v in the lattice Lambda and only use that it is minimal moving forward." Then, it would be clear that the student fully understood the context and purpose (the what and the why) of the situation. The fact that we write "Let v in Lambda…" and expect our students to know what that means and to be able to unpack it for themselves is kind of absurd. We should be developing fluency with formal mathematics skills in our students. We should have students explain things in depth and in detail in ways that make sense to themselves and others, rather than having them produce concise math without meaning and purpose. Should professors and students be this verbose all of the time? Probably not. But, we should at least initially (and perhaps periodically) require very thorough explanations. An over reliance on symbols (upside down A, backwards E, etc.) and abbreviations (WLOG) can be an indicator that someone is faking understanding, which is why students learning to write proofs should be encouraged to explain things rather than rely on compact notation that doesn't carry a lot of meaning for them yet. Thanks for this Paul! Your unpacking of "Let v in Lambda…" is a beautiful illustration of the large amount of implicit cultural knowledge we have condensed into the linguistic conventions used in proof writing. I'd love to hear a little more about concretely what it looks like in your (or others' – chime in!) teaching, as far as the pedagogical process to support students in writing down all of that beautiful exposition. Japheth Wood says: Thanks for educating me about "imported vs. enculturated" – I hadn't quite thought of it that way, but found it useful. The naming of some of the specific weird objects that we handle in proof goes to the heart of the matter in addressing students' difficulties in learning proof, as well as professors' difficulties in teaching it. I already commented on your epsilon-delta post that I'm a fan of Professor Susanna Epp's writing on teaching and learning proof. She has a nice article titled "The Language of Quantification in Mathematics Instruction" on her webpage that offers similar ideas. Thanks for pointing out this reference! For interested readers, here's the url for convenience: https://condor.depaul.edu/~sepp/NCTM99Yrbk.pdf Looks like it has a nice discussion of differences between the linguistic conventions of mathematical writing and everyday language. Michael Bächtold says: This "encultured" mathematical reasoning isn't fundamentally different from reasoning in everyday life. First of all, saying "let v be an arbitrary but fixed vector…" is redundant, in the sense that nothing would change in the remainder of the proof, if we simply said "let v a vector…". Second, we encounter such hypothetical objects in everyday reasoning. For instance you might say to a child, "suppose I throw a stone into a window…". The child might then ask "which stone" and "which window", but I think that rarely happens, since we learn hypothetical reasoning with "unspecified" objects early on. 
The main difference between mathematical and everyday reasoning seems to be the habit of naming/labeling these objects with made up names (like single letters). I guess that has to do with the fact that we commonly deal with several objects of the same type in mathematical arguments (like numbers x,y,z) and naming them makes further reasoning easier. In everyday language we might use other lables like "suppose I throw two stones into a window, the first and the second…" and then use the lables "first" and "second" in further arguments. Thanks for this comment; it advances my thinking. The example of "suppose I throw a stone into a window…" is illuminating. You'll be unsurprised that I disagree (rather strongly) that this "isn't fundamentally different" from proof objects; I think the difference is fundamental enough to write a blog post about. But perhaps this type of hypothetical language is the (what I would call) imported capacity from which the enculturated capacity of thinking in proof objects is built! I look forward to thinking more about this. Of course adding the words "arbitrary, but fixed" before an object in a proof changes nothing mathematically. Indeed, I don't think I have ever seen this phrase in a proof. The phrase "arbitrary, but fixed" is used in the blog post (and in the article by Selden & Selden mentioned in note [9], and undoubtedly many other places) not as something you'd actually write in a proof but as a descriptor of the (ontological? psychological?) character of proof objects. Elsewhere, Susanna Epp uses the descriptor "generic particular" for the same phenomenon (I thank Japheth Wood for directing me to Dr. Epp's writings and this phrase in particular). The question is whether or not this "arbitrary but fixed"/"generic particular" character of proof objects is distinct from the character of the mental objects of every day life. The argument of the blog post is that it is very distinct, with the implication that it's useful for educators to focus our attention on how students come to master the technique of working mentally with objects possessing this character. You disagree that this involves learning anything new; if you're right, then there's nothing to see here. Your example does not convince me of this (more below). But the continuity you see, between proof objects and the objects of everyday speculative hypotheticals, strikes me as useful to the project I'm proposing, in at least two ways: (1) this continuity directs us toward a *place to look* for an understanding of the path students take toward competent handling of proof objects; and (2) it also gives a clue for how one might build on students' (what I would call) imported capacities in taking them along this path. Now for the substance of my disagreement. I think there is a great psychological distance between "suppose I throw a stone into a window" and "let v be a vector of minimal length in a lattice…," and it goes far beyond the habit of naming. To me the key point is this: the stone and the window are not under a burden to be universal within the classes of stones and windows. Say the grownup says, "Don't throw a stone into a window because it'll break," and the child replies, "What if it's is made of bulletproof glass?" or better yet, "What if it's the window of a giant's house and it's the size of Pluto?" In all likelihood, all involved will understand that the child is being cheeky. 
In the rare case they are being earnest, the grownup will at least understand they are having a different conversation than the intended one. In contrast, when the analogous thing happens in a proof context, then the audience is making a good point! And perhaps the hypotheses of the theorem, or the proof strategy, need to be adjusted in response. This is happening because the "stone" and the "window" of the everyday hypothetical are not precisely defined categories. It's okay to have the context cue us about what kind of stone, what kind of window. We don't have to handle our conversation in a way that encompasses everything that could rightly be called a stone and everything that could rightly be called a window. Context plays an important role in mathematical communication too; still, we make a great effort to minimize that role, with our lovingly crafted definitions. Because of this, mental images of proof objects have a much brighter line delineating their constitutive attributes from their contingent ones. When I imagine a window, I wouldn't even know where to begin in sorting out the part of my mental image that "makes it a window" from everything else in my mind's eye. (Well, it's a hole in a wall of a house. Well, but so is a door. Ok, well, its purpose is to look out of. Ok well but is it still a window if it's too high for my head to reach? In any case, why a house? What about a train window? Ok fine, it's a hole in the wall of a dwelling or vehicle that's designed to let light in. Well, but is it still a window if it's on a subway train that never gets ambient light? Etc.) When I imagine a vector of minimal length in a lattice, on the other hand, it's very easy for me to identify what makes it a vector of minimal length. It's the bright line separating the constitutive from the contingent that makes the proof object different.
CommonCrawl
how to write cubic meter in email

A cubic meter (SI symbol m3, written m³ or with the single character ㎥) is a metric unit of volume, commonly used, for example, in expressing concentrations of a chemical in a volume of air. The cubic metre (in Commonwealth English and international spelling as used by the International Bureau of Weights and Measures) or cubic meter (in American English) is the SI derived unit of volume: the volume made by a cube having one meter per side. It is equal to 1000 (one thousand) liters, and one milliliter is equal to one cubic centimeter, or one cc, for short. A related unit, the cubic metre per second (m³/s, or cumec), is a derived SI unit of volumetric flow rate equal to that of a cube with sides of one metre (about 39.37 in) in length exchanged or moving each second. Look up cubic metre in Wiktionary, the free dictionary.

How do you type the symbol for cubic meter on the keyboard? (A typical forum question: "Please help me how to write the m3 expression as we usually write it, with a small 3.") Option 1: write m² and m³ with the special character keys, but only on keyboards with a numeric keypad on the right: type square meter m² with Alt + 0178 and cubic meter m³ with Alt + 0179. A 14-inch laptop usually does not have its own numeric keypad, so this method may not be available. Option 2: copy m² or m³ from an existing document and paste it into Word, your email, or wherever you need it. In Microsoft Excel, type the letters m3 in the cell, then highlight the number 3, click the small icon located at the bottom of the Font Settings buttons (or use the keyboard shortcut Ctrl + Shift + +), and tick Superscript in the small Format Cells window that pops up.
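If the symbol has to be produced from code rather than from the keyboard, for instance when generating an email or HTML body, here is a small hedged sketch (Python assumed; the Unicode code point U+00B3 and the HTML entity are standard):

```python
# "m³" is just "m" followed by U+00B3 (SUPERSCRIPT THREE).
cubic_meter = "m" + "\u00b3"
print(cubic_meter)            # m³

# In HTML email bodies the entity form also works.
html_snippet = "Volume: 48 m&sup3; (or m<sup>3</sup> with an explicit superscript tag)"
print(html_snippet)
```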
Volume, area and height. Cubic metres measure volume and square metres measure area, so there is no direct conversion between the two; the same volume in cubic metres can have a varied height, depending on its length and width. For a container 5 metres wide and 2 metres long, the floor area is width times length: 5 × 2 = 10 square metres. To fit a specific volume into that footprint, divide the volume by the area: storing 30 cubic metres over 10 square metres needs 30 ÷ 10 = 3 metres of height, while filling the same 10 square metres to a height of 5 metres gives 5 × 10 = 50 cubic metres. As a rule of thumb for removals, a box 50 centimetres wide, 50 centimetres deep and 50 centimetres tall holds 0.125 m³, so about 40 such boxes make up 5 cubic metres and 10 cubic metres is made of about 80 large removal boxes.

Flow and everyday usage. A cubic metre per second (m³/s, also called a cumec) is the derived SI unit of volumetric flow rate, equal to a cube with sides of one metre exchanged or moving each second; it is popularly used for water flow, especially in rivers and streams. The cubic metre also turns up constantly in everyday reporting: the volume of one container is 2 cubic metres while another has a capacity of over 30 cubic metres; an upper limit of 4.6 million cubic metres was put on annual timber extraction; Egypt consumes about 9.1 billion cubic metres of drinking water annually, with an estimated daily consumption of 25 million cubic metres; and by 2015, Fujairah's oil storage capacity is expected to rise to almost nine million cubic metres, according to the port authorities.

Volume, weight and concrete. If you recall that 1 litre of water has a mass of 1 kilogram, then a cubic metre, which is 10 times larger on each side than a litre, holds 1,000 kg of water; what is remarkable is the sheer mass of the water inside. The mass of one cubic centimetre of water at 3.98 °C (the temperature at which it attains its maximum density) is closely equal to one gram. For other materials the answer depends on density: how much one cubic metre of glass weighs, for instance, will depend on the density of the glass, and online converters that answer "how many kilograms in 1 cubic metre?" with a figure such as 852.11336848478 are assuming a particular substance (in that case kilograms of sugar, where 1 kilogram occupies about 0.001173552765377 cubic metres). In concrete, water has a direct impact on strength and workability: too much, and you'll have a high-workability mix that sacrifices strength; too little, and the mix will be very strong and durable, but difficult to work with while pouring. Water also drives the hydration process, where chemicals in the cement bond with the water and the mix hardens.

Gas volumes. A metric gas meter records its readings in cubic metres. Older digital imperial meters look very similar, but their units are cubic feet, and they still have 4 digits to read off in most cases. The standard cubic metre is an international standard: it is defined at 15 °C (288.15 K) and 101 325 Pa (1 standard atmosphere, 1 atm), the standard reference conditions for natural gas (ISO 13443:1996) and for measuring petroleum gases and liquids (ISO 5024:1976); a normal cubic metre is usually referenced to 0 °C at the same pressure. At pipeline scale, 1 million cubic feet per day is about 28 316.85 cubic metres per day, so 2 million cubic feet per day is 56 633.69 cubic metres per day, 3 million is 84 950.54, 80 million is 2 265 347.73 and 90 million is 2 548 516.19.
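The per-day gas figures above all follow from a single factor (one cubic foot is about 0.0283168 cubic metres), which is easy to check with a few lines of Python; the constant and function names here are my own, not from any standard or library.

    M3_PER_CUBIC_FOOT = 0.0283168466  # approximate volume of one cubic foot in cubic metres

    def mmcfd_to_m3_per_day(mmcfd):
        """Convert million cubic feet per day to cubic metres per day."""
        return mmcfd * 1_000_000 * M3_PER_CUBIC_FOOT

    for mmcfd in (2, 3, 80, 90):
        print(mmcfd, "->", round(mmcfd_to_m3_per_day(mmcfd), 2), "m3/day")
    # 2 -> 56633.69, 3 -> 84950.54, 80 -> 2265347.73, 90 -> 2548516.19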
A worked example. To calculate a volume in cubic metres, start by measuring your space: when measuring, find the length, width and height of the space in metres, then multiply the three measurements together. For a space 20 metres long, 20 metres wide and 10 metres high, the calculation is straightforward: 20 × 20 × 10 = 4,000 m³. Going the other way, suppose a load is 48 m³ and you want it in imperial units: 48 m³ = 35.3 × 48 = 1,694.4 cubic feet, and 48 m³ = 1.30795 × 48 ≈ 62.8 cubic yards.

Pricing by volume. If a company charges a rate of 395.00 per cubic metre, the answer is simply 395.00 times however many cubic metres you ship. To price a single item that occupies less than a cubic metre, such as a chair, work out that item's own CBM with Length × Width × Height and multiply it by the rate.
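Putting the pieces together, the sketch below estimates the volume of a single item, the number of 50 cm removal boxes (0.125 m³ each) it roughly corresponds to, and the charge at the 395.00-per-cubic-metre rate used in the question above, then formats the result with a real m³ symbol (Unicode U+00B3, the same character produced by Alt + 0179) so it can be pasted straight into an email. The function name, the box assumption and the chair dimensions are all illustrative.

    import math

    RATE_PER_M3 = 395.00      # example rate from the question above
    BOX_M3 = 0.5 * 0.5 * 0.5  # a 50 cm x 50 cm x 50 cm removal box = 0.125 m3

    def quote(length_m, width_m, height_m):
        volume = length_m * width_m * height_m  # CBM = Length x Width x Height
        boxes = math.ceil(volume / BOX_M3)      # whole boxes needed for this volume
        cost = volume * RATE_PER_M3
        return f"Volume: {volume:.3f} m\u00b3 (~{boxes} boxes), cost: {cost:.2f}"

    # A chair measuring roughly 1 m x 0.6 m x 0.9 m.
    print(quote(1.0, 0.6, 0.9))  # -> Volume: 0.540 m³ (~5 boxes), cost: 213.30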
CommonCrawl
The ASKAP Variables and Slow Transients (VAST) Pilot Survey Tara Murphy, David L. Kaplan, Adam J. Stewart, Andrew O'Brien, Emil Lenc, Sergio Pintaldi, Joshua Pritchard, Dougal Dobie, Archibald Fox, James K. Leung, Tao An, Martin E. Bell, Jess W. Broderick, Shami Chatterjee, Shi Dai, Daniele d'Antonio, Gerry Doyle, B. M. Gaensler, George Heald, Assaf Horesh, Megan L. Jones, David McConnell, Vanessa A. Moss, Wasim Raja, Gavin Ramsay, Stuart Ryder, Elaine M. Sadler, Gregory R. Sivakoff, Yuanming Wang, Ziteng Wang, Michael S. Wheatland, Matthew Whiting, James R. Allison, C. S. Anderson, Lewis Ball, K. Bannister, D. C.-J. Bock, R. Bolton, J. D. Bunton, R. Chekkala, A. P Chippendale, F. R. Cooray, N. Gupta, D. B. Hayman, K. Jeganathan, B. Koribalski, K. Lee-Waddell, Elizabeth K. Mahony, J. Marvil, N. M. McClure-Griffiths, P. Mirtschin, A. Ng, S. Pearce, C. Phillips, M. A. Voronkov Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021 Published online by Cambridge University Press: 12 October 2021, e054 The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey.
This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each of which was observed for 12 min integration time, with between 5 and 13 repeats, with cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified. Visualising the proximal urethra by MRI voiding scan: results of a prospective clinical trial evaluating a novel approach to radiotherapy simulation for prostate cancer Grace C. Blitzer, Poonam Yadav, Huaising C. Ko, Aleksandra Kuczmarska-Haas, Adam M. Burr, Michael F. Bassetti, Daniel J. Steinhoff, Kailee N. Borchert, Jason J. Meudt, Dustin J. Hebel, Stephanie K. Bailey, Zachary S. Morris Journal: Journal of Radiotherapy in Practice , First View Published online by Cambridge University Press: 05 April 2021, pp. 1-4 Delineating the proximal urethra can be critical for radiotherapy planning but is challenging on computerised tomography (CT) imaging. We trialed a novel non-invasive technique to allow visualisation of the proximal urethra using a rapid sequence magnetic resonance imaging (MRI) protocol to visualise the urinary flow in patients voiding during the simulation scan. Of the seven patients enrolled, four were able to void during the MRI scan. For these four patients, direct visualisation of urinary flow through the proximal urethra was achieved. The average volume of the proximal urethra contoured on voiding MRI was significantly higher than the proximal urethra contoured on CT, 4·07 and 1·60 cc, respectively (p = 0·02). The proximal urethra location also differed; the Dice coefficient average was 0·28 (range 0–0·62). In this small, proof-of-concept prospective clinical trial, the volume and location of the proximal urethra differed significantly when contoured on a voiding MRI scan compared to that determined by a conventional CT simulation. The shape of the proximal urethra on voiding MRI may be more anatomically correct compared to the proximal urethra shape determined with a semi-rigid catheter in place. The Challenge of Managing Patients Suffering from TBI: The Utility of Multiparametric MRI John L. Sherman, Laurence J. Adams, Christen F. Kutz, Deborah York, Mitchell S. Szymczak Journal: CNS Spectrums / Volume 26 / Issue 2 / April 2021 Published online by Cambridge University Press: 10 May 2021, pp. 178-179 Print publication: April 2021 Traumatic brain injury (TBI) is a complex phenomenon affecting multiple areas of the brain in multiple ways. Both right and left hemispheres are affected as well as supratentorial and infratentorial compartments. 
These multifocal injuries are caused by many factors including acute mechanical injury, focal intracranial hemorrhage, blunt and rotational forces, epidural and subdural hematoma, hypoxemia, hypotension, edema, axonal damage, neuronal death, gliosis and blood brain barrier disruption. Clinicians and patients benefit by precise information about the neuroanatomical areas that are affected macroscopically, microscopically and biochemically in an individual patient. Standard imaging studies are frequently negative or grossly underestimate the severity of TBI and may exacerbate and prolong patient suffering with an imaging result of "no significant abnormality". Specifically, sophisticated imaging tools have been developed which reveal significant damage to the brain structure including atrophy, MRI spectroscopy showing variations in neuronal metabolite N-acetyl-aspartate, elevations of membrane related Choline, and the glial metabolite myo-inositol is often observed to be increased post injury. In addition, susceptibility weighted imaging (SWI) has been shown to be more reliable for detecting microbleeds versus calcifications. We have selected two TBI patients with diffuse traumatic brain injury. The first patient is a 43-year-old male who suffered severe traumatic brain injury from a motorcycle accident in 2016. Following the accident, the patient was diagnosed with seizures, major depression, and intermittent explosive disorder. He has attempted suicide and has neurobehavioral disinhibition including severe anger, agitation and irritability. He denies psychiatric history prior to TBI and has negative family history. Following the TBI, he became physically aggressive and assaultive in public with minimal provocation. He denies symptoms of thought disorder and mania. He is negative for symptoms of cognitive decline or encephalopathy. The second patient is a 49-year-old male who suffered at least 3 concussive blasts in the Army and a parachute injury. Following the last accident, the patient was diagnosed with major depressive disorder, panic disorder, PTSD and generalized anxiety disorder. He denies any psychiatric history prior to TBI including negative family history of psychiatric illness. In addition, he now suffers from nervousness, irritability, anger, emotional lability and concurrent concentration issues, problems completing tasks and alterations in memory. Both patients underwent 1.5T multiparametric MRI using standard T2, FLAIR, DWI and T1 sequences, and specialized sequences including susceptibility weighted (SWAN/SWI), 3D FLAIR, single voxel MRI spectroscopy (MRS), diffusion tensor imaging (DTI), arterial spin labeling perfusion (ASL) and volumetric MRI (NeuroQuant). Importantly, this exam can be performed in 30–45 minutes and requires no injections other than gadolinium in some patients. We will discuss the insights derived from the MRI which detail the injured areas, validate the severity of the brain damage, and provide insight into the psychological, motivational and physical disabilities that afflict these patients. It is our expectation that this kind of imaging study will grow in value as we link specific patterns of injury to specific symptoms and syndromes resulting in more targeted therapies in the future. A Multiparametric MRI Protocol for Evaluation of Cognitive Insufficiency, Dementia and Traumatic Brain Injury (TBI): A Case Series John L. Sherman, Christen F. Kutz, Mitchell S. Szymczak, Deborah York, Laurence J. 
Adams Published online by Cambridge University Press: 10 May 2021, p. 179 The purpose of this work was to determine the extent to which a multiparametric magnetic resonance imaging (MRI) approach to patients with dementia and/or traumatic brain injury (TBI) can help to determine the most likely diagnosis and the prognosis of these patients. Volumetric brain MRI alone is recognized as a useful imaging tool to differentiate behavioral variant frontotemporal dementia (bvFTD) from the more common Alzheimer's disease (AD). Our objective is to create a protocol that will provide additional non-standard, objective imaging data that can be utilized clinically to distinguish common and uncommon forms of dementia and TBI. As patients with these diseases are increasingly presenting to clinical practice, our ability to combine multiple parameters within the standard 30-minute or 45-minute (pre- and post-contrast) MRI exams has high potential to affect current and future clinical practice. All MRI studies were performed on 1.5 T MRI GE 450w or GE HDx imagers. All patients were seen clinically in outpatient practices. All techniques are FDA approved. The 30 minute protocol utilized T2w FSE 3 mm, 2.5 mm SWAN, 3D T1 sagittal 1.2 mm, DWI 5 mm, 3D FLAIR 1.2 mm, 2.5 mm SWAN (susceptibility sensitive), 3D T1 sagittal 1.2 mm, arterial spin labeling perfusion, posterior cingulate single voxel PRESS MR spectroscopy and NeuroQuant automated volumetric analysis and LesionQuant automated lesion detection and measurement. The 45-minute TBI protocol added diffusion tensor imaging, MR spectroscopy (MRS) of normal appearing frontal white matter and 3D gadolinium enhanced technique. The combination of multiparametric data together with standard imaging and clinical information allowed radiologic interpretation that was able to focus on 1–2 specific diagnoses and to indicate those patients in which a combination of pathologies was most likely. Neurologists, gerontologists, neuropsychologists and psychiatric specialists used these data and our summary conclusions to develop more specific diagnoses, treatments and prognoses. Readily available MRI techniques can be added to standard imaging to markedly improve the usefulness of the radiologic opinion in cases of subjective cognitive insufficiency, clinical mild cognitive insufficiency, behavioral pathologies, dementia and post-traumatic brain syndromes. College student sleep quality and mental and physical health are associated with food insecurity in a multi-campus study Rebecca L Hagedorn, Melissa D Olfert, Lillian MacNell, Bailey Houghtaling, Lanae B Hood, Mateja R Savoie Roskos, Jeannine R Goetz, Valerie Kern-Lyons, Linda L Knol, Georgianna R Mann, Monica K Esquivel, Adam Hege, Jennifer Walsh, Keith Pearson, Maureen Berner, Jessica Soldavini, Elizabeth T Anderson-Steeves, Marsha Spence, Christopher Paul, Julia F Waity, Elizabeth D Wall-Bassett, Melanie D Hingle, E Brooke Kelly, J Porter Lillis, Patty Coleman, Mary Catherine Fontenot Journal: Public Health Nutrition / Volume 24 / Issue 13 / September 2021 Published online by Cambridge University Press: 22 March 2021, pp. 4305-4312 Print publication: September 2021 To assess the relationship between food insecurity, sleep quality, and days with mental and physical health issues among college students. An online survey was administered. Food insecurity was assessed using the ten-item Adult Food Security Survey Module. Sleep was measured using the nineteen-item Pittsburgh Sleep Quality Index (PSQI). 
Mental health and physical health were measured using three items from the Healthy Days Core Module. Multivariate logistic regression was conducted to assess the relationship between food insecurity, sleep quality, and days with poor mental and physical health. Twenty-two higher education institutions. College students (n 17 686) enrolled at one of twenty-two participating universities. Compared with food-secure students, those classified as food insecure (43·4 %) had higher PSQI scores indicating poorer sleep quality (P < 0·0001) and reported more days with poor mental (P < 0·0001) and physical (P < 0·0001) health as well as days when mental and physical health prevented them from completing daily activities (P < 0·0001). Food-insecure students had higher adjusted odds of having poor sleep quality (adjusted OR (AOR): 1·13; 95 % CI 1·12, 1·14), days with poor physical health (AOR: 1·01; 95 % CI 1·01, 1·02), days with poor mental health (AOR: 1·03; 95 % CI 1·02, 1·03) and days when poor mental or physical health prevented them from completing daily activities (AOR: 1·03; 95 % CI 1·02, 1·04). College students report high food insecurity which is associated with poor mental and physical health, and sleep quality. Multi-level policy changes and campus wellness programmes are needed to prevent food insecurity and improve student health-related outcomes. Research informatics and the COVID-19 pandemic: Challenges, innovations, lessons learned, and recommendations Richard J. Bookman, James J. Cimino, Christopher A. Harle, Rhonda G. Kost, Sean Mooney, Emily Pfaff, Svetlana Rojevsky, Jonathan N. Tobin, Adam Wilcox, Nick F. Tsinoremas Journal: Journal of Clinical and Translational Science / Volume 5 / Issue 1 / 2021 Published online by Cambridge University Press: 16 March 2021, e110 Print publication: 2021 The recipients of NIH's Clinical and Translational Science Awards (CTSA) have worked for over a decade to build informatics infrastructure in support of clinical and translational research. This infrastructure has proved invaluable for supporting responses to the current COVID-19 pandemic through direct patient care, clinical decision support, training researchers and practitioners, as well as public health surveillance and clinical research to levels that could not have been accomplished without the years of ground-laying work by the CTSAs. In this paper, we provide a perspective on our COVID-19 work and present relevant results of a survey of CTSA sites to broaden our understanding of the key features of their informatics programs, the informatics-related challenges they have experienced under COVID-19, and some of the innovations and solutions they developed in response to the pandemic. Responses demonstrated increased reliance by healthcare providers and researchers on access to electronic health record (EHR) data, both for local needs and for sharing with other institutions and national consortia. The initial work of the CTSAs on data capture, standards, interchange, and sharing policies all contributed to solutions, best illustrated by the creation, in record time, of a national clinical data repository in the National COVID-19 Cohort Collaborative (N3C). The survey data support seven recommendations for areas of informatics and public health investment and further study to support clinical and translational research in the post-COVID-19 era. Ten new insights in climate science 2020 – a horizon scan Erik Pihl, Eva Alfredsson, Magnus Bengtsson, Kathryn J. 
Bowen, Vanesa Cástan Broto, Kuei Tien Chou, Helen Cleugh, Kristie Ebi, Clea M. Edwards, Eleanor Fisher, Pierre Friedlingstein, Alex Godoy-Faúndez, Mukesh Gupta, Alexandra R. Harrington, Katie Hayes, Bronwyn M. Hayward, Sophie R. Hebden, Thomas Hickmann, Gustaf Hugelius, Tatiana Ilyina, Robert B. Jackson, Trevor F. Keenan, Ria A. Lambino, Sebastian Leuzinger, Mikael Malmaeus, Robert I. McDonald, Celia McMichael, Clark A. Miller, Matteo Muratori, Nidhi Nagabhatla, Harini Nagendra, Cristian Passarello, Josep Penuelas, Julia Pongratz, Johan Rockström, Patricia Romero-Lankao, Joyashree Roy, Adam A. Scaife, Peter Schlosser, Edward Schuur, Michelle Scobie, Steven C. Sherwood, Giles B. Sioen, Jakob Skovgaard, Edgardo A. Sobenes Obregon, Sebastian Sonntag, Joachim H. Spangenberg, Otto Spijkers, Leena Srivastava, Detlef B. Stammer, Pedro H. C. Torres, Merritt R. Turetsky, Anna M. Ukkola, Detlef P. van Vuuren, Christina Voigt, Chadia Wannous, Mark D. Zelinka Journal: Global Sustainability / Volume 4 / 2021 Published online by Cambridge University Press: 27 January 2021, e5 Non-technical summary We summarize some of the past year's most important findings within climate change-related research. New research has improved our understanding of Earth's sensitivity to carbon dioxide, finds that permafrost thaw could release more carbon emissions than expected and that the uptake of carbon in tropical ecosystems is weakening. Adverse impacts on human society include increasing water shortages and impacts on mental health. Options for solutions emerge from rethinking economic models, rights-based litigation, strengthened governance systems and a new social contract. The disruption caused by COVID-19 could be seized as an opportunity for positive change, directing economic stimulus towards sustainable investments. A synthesis is made of ten fields within climate science where there have been significant advances since mid-2019, through an expert elicitation process with broad disciplinary scope. Findings include: (1) a better understanding of equilibrium climate sensitivity; (2) abrupt thaw as an accelerator of carbon release from permafrost; (3) changes to global and regional land carbon sinks; (4) impacts of climate change on water crises, including equity perspectives; (5) adverse effects on mental health from climate change; (6) immediate effects on climate of the COVID-19 pandemic and requirements for recovery packages to deliver on the Paris Agreement; (7) suggested long-term changes to governance and a social contract to address climate change, learning from the current pandemic, (8) updated positive cost–benefit ratio and new perspectives on the potential for green growth in the short- and long-term perspective; (9) urban electrification as a strategy to move towards low-carbon energy systems and (10) rights-based litigation as an increasingly important method to address climate change, with recent clarifications on the legal standing and representation of future generations. Social media summary Stronger permafrost thaw, COVID-19 effects and growing mental health impacts among highlights of latest climate science. The Rapid ASKAP Continuum Survey I: Design and first results Australian SKA Pathfinder D. McConnell, C. L. Hale, E. Lenc, J. K. Banfield, George Heald, A. W. Hotan, James K. Leung, Vanessa A. Moss, Tara Murphy, Andrew O'Brien, Joshua Pritchard, Wasim Raja, Elaine M. Sadler, Adam Stewart, Alec J. M. Thomson, M. Whiting, James R. Allison, S. W. Amy, C. Anderson, Lewis Ball, Keith W. 
Bannister, Martin Bell, Douglas C.-J. Bock, Russ Bolton, J. D. Bunton, A. P. Chippendale, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, N. Gupta, Douglas B. Hayman, Ian Heywood, C. A. Jackson, Bärbel S. Koribalski, Karen Lee-Waddell, N. M. McClure-Griffiths, Alan Ng, Ray P. Norris, Chris Phillips, John E. Reynolds, Daniel N. Roxby, Antony E. T. Schinckel, Matt Shields, Chenoa Tremblay, A. Tzioumis, M. A. Voronkov, Tobias Westmeier Published online by Cambridge University Press: 30 November 2020, e048 The Rapid ASKAP Continuum Survey (RACS) is the first large-area survey to be conducted with the full 36-antenna Australian Square Kilometre Array Pathfinder (ASKAP) telescope. RACS will provide a shallow model of the ASKAP sky that will aid the calibration of future deep ASKAP surveys. RACS will cover the whole sky visible from the ASKAP site in Western Australia and will cover the full ASKAP band of 700–1800 MHz. The RACS images are generally deeper than the existing NRAO VLA Sky Survey and Sydney University Molonglo Sky Survey radio surveys and have better spatial resolution. All RACS survey products will be public, including radio images (with $\sim$ 15 arcsec resolution) and catalogues of about three million source components with spectral index and polarisation information. In this paper, we present a description of the RACS survey and the first data release of 903 images covering the sky south of declination $+41^\circ$ made over a 288-MHz band centred at 887.5 MHz. Mental health before and during the COVID-19 pandemic in two longitudinal UK population cohorts Alex S. F. Kwong, Rebecca M. Pearson, Mark J. Adams, Kate Northstone, Kate Tilling, Daniel Smith, Chloe Fawns-Ritchie, Helen Bould, Naomi Warne, Stanley Zammit, David J. Gunnell, Paul A. Moran, Nadia Micali, Abraham Reichenberg, Matthew Hickman, Dheeraj Rai, Simon Haworth, Archie Campbell, Drew Altschul, Robin Flaig, Andrew M. McIntosh, Deborah A. Lawlor, David Porteous, Nicholas J. Timpson Journal: The British Journal of Psychiatry / Volume 218 / Issue 6 / June 2021 Published online by Cambridge University Press: 24 November 2020, pp. 334-343 Print publication: June 2021 The COVID-19 pandemic and mitigation measures are likely to have a marked effect on mental health. It is important to use longitudinal data to improve inferences. To quantify the prevalence of depression, anxiety and mental well-being before and during the COVID-19 pandemic. Also, to identify groups at risk of depression and/or anxiety during the pandemic. Data were from the Avon Longitudinal Study of Parents and Children (ALSPAC) index generation (n = 2850, mean age 28 years) and parent generation (n = 3720, mean age 59 years), and Generation Scotland (n = 4233, mean age 59 years). Depression was measured with the Short Mood and Feelings Questionnaire in ALSPAC and the Patient Health Questionnaire-9 in Generation Scotland. Anxiety and mental well-being were measured with the Generalised Anxiety Disorder Assessment-7 and the Short Warwick Edinburgh Mental Wellbeing Scale. Depression during the pandemic was similar to pre-pandemic levels in the ALSPAC index generation, but those experiencing anxiety had almost doubled, at 24% (95% CI 23–26%) compared with a pre-pandemic level of 13% (95% CI 12–14%). 
In both studies, anxiety and depression during the pandemic was greater in younger members, women, those with pre-existing mental/physical health conditions and individuals in socioeconomic adversity, even when controlling for pre-pandemic anxiety and depression. These results provide evidence for increased anxiety in young people that is coincident with the pandemic. Specific groups are at elevated risk of depression and anxiety during the COVID-19 pandemic. This is important for planning current mental health provisions and for long-term impact beyond this pandemic. The impact of tandem redundant/sky-based calibration in MWA Phase II data analysis Zheng Zhang, Jonathan C. Pober, Wenyang Li, Bryna J. Hazelton, Miguel F. Morales, Cathryn M. Trott, Christopher H. Jordan, Ronniy C. Joseph, Adam Beardsley, Nichole Barry, Ruby Byrne, Steven J. Tingay, Aman Chokshi, Kenji Hasegawa, Daniel C. Jacobs, Adam Lanman, Jack L. B. Line, Christene Lynch, Benjamin McKinley, Daniel A. Mitchell, Steven Murray, Bart Pindor, Mahsa Rahimi, Keitaro Takahashi, Randall B. Wayth, Rachel L. Webster, Michael Wilensky, Shintaro Yoshiura, Qian Zheng Precise instrumental calibration is of crucial importance to 21-cm cosmology experiments. The Murchison Widefield Array's (MWA) Phase II compact configuration offers us opportunities for both redundant calibration and sky-based calibration algorithms; using the two in tandem is a potential approach to mitigate calibration errors caused by inaccurate sky models. The MWA Epoch of Reionization (EoR) experiment targets three patches of the sky (dubbed EoR0, EoR1, and EoR2) with deep observations. Previous work in Li et al. (2018) and (2019) studied the effect of tandem calibration on the EoR0 field and found that it yielded no significant improvement in the power spectrum (PS) over sky-based calibration alone. In this work, we apply similar techniques to the EoR1 field and find a distinct result: the improvements in the PS from tandem calibration are significant. To understand this result, we analyse both the calibration solutions themselves and the effects on the PS over three nights of EoR1 observations. We conclude that the presence of the bright radio galaxy Fornax A in EoR1 degrades the performance of sky-based calibration, which in turn enables redundant calibration to have a larger impact. These results suggest that redundant calibration can indeed mitigate some level of model incompleteness error. Predicting the emergence of full-threshold bipolar I, bipolar II and psychotic disorders in young people presenting to early intervention mental health services Joanne S. Carpenter, Jan Scott, Frank Iorfino, Jacob J. Crouse, Nicholas Ho, Daniel F. Hermens, Shane P. M. Cross, Sharon L. Naismith, Adam J. Guastella, Elizabeth M. Scott, Ian B. Hickie Journal: Psychological Medicine , First View Published online by Cambridge University Press: 30 October 2020, pp. 1-11 Predictors of new-onset bipolar disorder (BD) or psychotic disorder (PD) have been proposed on the basis of retrospective or prospective studies of 'at-risk' cohorts. Few studies have compared concurrently or longitudinally factors associated with the onset of BD or PDs in youth presenting to early intervention services. We aimed to identify clinical predictors of the onset of full-threshold (FT) BD or PD in this population. 
Multi-state Markov modelling was used to assess the relationships between baseline characteristics and the likelihood of the onset of FT BD or PD in youth (aged 12–30) presenting to mental health services. Of 2330 individuals assessed longitudinally, 4.3% (n = 100) met criteria for new-onset FT BD and 2.2% (n = 51) met criteria for a new-onset FT PD. The emergence of FT BD was associated with older age, lower social and occupational functioning, mania-like experiences (MLE), suicide attempts, reduced incidence of physical illness, childhood-onset depression, and childhood-onset anxiety. The emergence of a PD was associated with older age, male sex, psychosis-like experiences (PLE), suicide attempts, stimulant use, and childhood-onset depression. Identifying risk factors for the onset of either BD or PDs in young people presenting to early intervention services is assisted not only by the increased focus on MLE and PLE, but also by recognising the predictive significance of poorer social function, childhood-onset anxiety and mood disorders, and suicide attempts prior to the time of entry to services. Secondary prevention may be enhanced by greater attention to those risk factors that are modifiable or shared by both illness trajectories. The MeerKAT telescope as a pulsar facility: System verification and early science results from MeerTime M. Bailes, A. Jameson, F. Abbate, E. D. Barr, N. D. R. Bhat, L. Bondonneau, M. Burgay, S. J. Buchner, F. Camilo, D. J. Champion, I. Cognard, P. B. Demorest, P. C. C. Freire, T. Gautam, M. Geyer, J.-M. Griessmeier, L. Guillemot, H. Hu, F. Jankowski, S. Johnston, A. Karastergiou, R. Karuppusamy, D. Kaur, M. J. Keith, M. Kramer, J. van Leeuwen, M. E. Lower, Y. Maan, M. A. McLaughlin, B. W. Meyers, S. Osłowski, L. S. Oswald, A. Parthasarathy, T. Pennucci, B. Posselt, A. Possenti, S. M. Ransom, D. J. Reardon, A. Ridolfi, C. T. G. Schollar, M. Serylak, G. Shaifullah, M. Shamohammadi, R. M. Shannon, C. Sobey, X. Song, R. Spiewak, I. H. Stairs, B. W. Stappers, W. van Straten, A. Szary, G. Theureau, V. Venkatraman Krishnan, P. Weltevrede, N. Wex, T. D. Abbott, G. B. Adams, J. P. Burger, R. R. G. Gamatham, M. Gouws, D. M. Horn, B. Hugo, A. F. Joubert, J. R. Manley, K. McAlpine, S. S. Passmoor, A. Peens-Hough, Z. R Ramudzuli, A. Rust, S. Salie, L. C. Schwardt, R. Siebrits, G. Van Tonder, V. Van Tonder, M. G. Welz Published online by Cambridge University Press: 15 July 2020, e028 We describe system verification tests and early science results from the pulsar processor (PTUSE) developed for the newly commissioned 64-dish SARAO MeerKAT radio telescope in South Africa. MeerKAT is a high-gain ( ${\sim}2.8\,\mbox{K Jy}^{-1}$ ) low-system temperature ( ${\sim}18\,\mbox{K at }20\,\mbox{cm}$ ) radio array that currently operates at 580–1 670 MHz and can produce tied-array beams suitable for pulsar observations. This paper presents results from the MeerTime Large Survey Project and commissioning tests with PTUSE. Highlights include observations of the double pulsar $\mbox{J}0737{-}3039\mbox{A}$ , pulse profiles from 34 millisecond pulsars (MSPs) from a single 2.5-h observation of the Globular cluster Terzan 5, the rotation measure of Ter5O, a 420-sigma giant pulse from the Large Magellanic Cloud pulsar PSR $\mbox{J}0540{-}6919$ , and nulling identified in the slow pulsar PSR J0633–2015. One of the key design specifications for MeerKAT was absolute timing errors of less than 5 ns using their novel precise time system. 
Our timing of two bright MSPs confirm that MeerKAT delivers exceptional timing. PSR $\mbox{J}2241{-}5236$ exhibits a jitter limit of $<4\,\mbox{ns h}^{-1}$ whilst timing of PSR $\mbox{J}1909{-}3744$ over almost 11 months yields an rms residual of 66 ns with only 4 min integrations. Our results confirm that the MeerKAT is an exceptional pulsar telescope. The array can be split into four separate sub-arrays to time over 1 000 pulsars per day and the future deployment of S-band (1 750–3 500 MHz) receivers will further enhance its capabilities. Using screeners to measure respondent attention on self-administered surveys: Which items and how many? Adam J. Berinsky, Michele F. Margolis, Michael W. Sances, Christopher Warshaw Journal: Political Science Research and Methods / Volume 9 / Issue 2 / April 2021 Inattentive respondents introduce noise into data sets, weakening correlations between items and increasing the likelihood of null findings. "Screeners" have been proposed as a way to identify inattentive respondents, but questions remain regarding their implementation. First, what is the optimal number of Screeners for identifying inattentive respondents? Second, what types of Screener questions best capture inattention? In this paper, we address both of these questions. Using item-response theory to aggregate individual Screeners we find that four Screeners are sufficient to identify inattentive respondents. Moreover, two grid and two multiple choice questions work well. Our findings have relevance for applied survey research in political science and other disciplines. Most importantly, our recommendations enable the standardization of Screeners on future surveys. Chapter 2 - The Intertidal Zone of the North-East Atlantic Region By Stephen J. Hawkins, Kathryn E. Pack, Louise B. Firth, Nova Mieszkowska, Ally J. Evans, Gustavo M. Martins, Per Åberg, Leoni C. Adams, Francisco Arenas, Diana M. Boaventura, Katrin Bohn, C. Debora G. Borges, João J. Castro, Ross A. Coleman, Tasman P. Crowe, Teresa Cruz, Mark S. Davies, Graham Epstein, João Faria, João G. Ferreira, Natalie J. Frost, John N. Griffin, ME Hanley, Roger J. H. Herbert, Kieran Hyder, Mark P. Johnson, Fernando P. Lima, Patricia Masterson-Algar, Pippa J. Moore, Paula S. Moschella, Gillian M. Notman, Federica G. Pannacciulli, Pedro A. Ribeiro, Antonio M. Santos, Ana C. F. Silva, Martin W. Skov, Heather Sugden, Maria Vale, Kringpaka Wangkulangkul, Edward J. G. Wort, Richard C. Thompson, Richard G. Hartnoll, Michael T. Burrows, Stuart R. Jenkins Edited by Stephen J. Hawkins, Marine Biological Association of the United Kingdom, Plymouth, Katrin Bohn, Louise B. Firth, University of Plymouth, Gray A. Williams, The University of Hong Kong Book: Interactions in the Marine Benthos Print publication: 29 August 2019, pp 7-46 The rocky shores of the north-east Atlantic have been long studied. Our focus is from Gibraltar to Norway plus the Azores and Iceland. Phylogeographic processes shape biogeographic patterns of biodiversity. Long-term and broadscale studies have shown the responses of biota to past climate fluctuations and more recent anthropogenic climate change. Inter- and intra-specific species interactions along sharp local environmental gradients shape distributions and community structure and hence ecosystem functioning. Shifts in domination by fucoids in shelter to barnacles/mussels in exposure are mediated by grazing by patellid limpets. 
Further south fucoids become increasingly rare, with species disappearing or restricted to estuarine refuges, caused by greater desiccation and grazing pressure. Mesoscale processes influence bottom-up nutrient forcing and larval supply, hence affecting species abundance and distribution, and can be proximate factors setting range edges (e.g., the English Channel, the Iberian Peninsula). Impacts of invasive non-native species are reviewed. Knowledge gaps such as the work on rockpools and host–parasite dynamics are also outlined. Genetic Variation in the Ontario Neurodegenerative Disease Research Initiative CJNS Editor's Choice Articles Allison A. Dilliott, Emily C. Evans, Sali M.K. Farhan, Mahdi Ghani, Christine Sato, Ming Zhang, Adam D. McIntyre, Henian Cao, Lemuel Racacho, John F. Robinson, Michael J. Strong, Mario Masellis, Dennis E. Bulman, Ekaterina Rogaeva, Sandra E. Black, Elizabeth Finger, Andrew Frank, Morris Freedman, Ayman Hassan, Anthony Lang, Christen L. Shoesmith, Richard H. Swartz, David Tang-Wai, Maria Carmela Tartaglia, John Turnbull, Lorne Zinman, the ONDRI Investigators, Robert A. Hegele Journal: Canadian Journal of Neurological Sciences / Volume 46 / Issue 5 / September 2019 Published online by Cambridge University Press: 15 August 2019, pp. 491-498 Background/Objective: Apolipoprotein E (APOE) E4 is the main genetic risk factor for Alzheimer's disease (AD). Due to the consistent association, there is interest as to whether E4 influences the risk of other neurodegenerative diseases. Further, there is a constant search for other genetic biomarkers contributing to these phenotypes, such as microtubule-associated protein tau (MAPT) haplotypes. Here, participants from the Ontario Neurodegenerative Disease Research Initiative were genotyped to investigate whether the APOE E4 allele or MAPT H1 haplotype are associated with five neurodegenerative diseases: (1) AD and mild cognitive impairment (MCI), (2) amyotrophic lateral sclerosis, (3) frontotemporal dementia (FTD), (4) Parkinson's disease, and (5) vascular cognitive impairment. Genotypes were defined for their respective APOE allele and MAPT haplotype calls for each participant, and logistic regression analyses were performed to identify the associations with the presentations of neurodegenerative diseases. Our work confirmed the association of the E4 allele with a dose-dependent increased presentation of AD, and an association between the E4 allele alone and MCI; however, the other four diseases were not associated with E4. Further, the APOE E2 allele was associated with decreased presentation of both AD and MCI. No associations were identified between MAPT haplotype and the neurodegenerative disease cohorts; but following subtyping of the FTD cohort, the H1 haplotype was significantly associated with progressive supranuclear palsy. This is the first study to concurrently analyze the association of APOE isoforms and MAPT haplotypes with five neurodegenerative diseases using consistent enrollment criteria and broad phenotypic analysis. Subglacial sediment distribution from constrained seismic inversion, using MuLTI software: examples from Midtdalsbreen, Norway Siobhan F. Killingbeck, Adam D. Booth, Philip W. Livermore, Landis J. West, Benedict T. I. Reinardy, Atle Nesje Journal: Annals of Glaciology / Volume 60 / Issue 79 / September 2019 Fast ice flow is associated with the deformation of subglacial sediment. 
Seismic shear velocities, Vs, increase with the rigidity of material and hence can be used to distinguish soft sediment from hard bedrock substrates. Depth profiles of Vs can be obtained from inversions of Rayleigh wave dispersion curves, from passive or active-sources, but these can be highly ambiguous and lack depth sensitivity. Our novel Bayesian transdimensional algorithm, MuLTI, circumvents these issues by adding independent depth constraints to the inversion, also allowing comprehensive uncertainty analysis. We apply MuLTI to the inversion of a Rayleigh wave dataset, acquired using active-source (Multichannel Analysis of Surface Waves) techniques, to characterise sediment distribution beneath the frontal margin of Midtdalsbreen, an outlet of Norway's Hardangerjøkulen ice cap. Ice thickness (0–20 m) is constrained using co-located GPR data. Outputs from MuLTI suggest that partly-frozen sediment (Vs 500–1000 m s−1), overlying bedrock (Vs 2000–2500 m s−1), is present in patches with a thickness of ~4 m, although this approaches the resolvable limit of our Rayleigh wave frequencies (14–100 Hz). Uncertainties immediately beneath the glacier bed are <280 m s−1, implying that MuLTI cannot only distinguish bedrock and sediment substrates but does so with an accuracy sufficient for resolving variations in sediment properties. Plant and animal responses of elephant grass pasture-based systems mixed with pinto peanut A. C. Vieira, C. J. Olivo, C. B. Adams, J. C. Sauthier, L. R. Proença, M. D. F. A. de Oliveira, P. B. dos Santos, H. P. Schiafino, T. J. Tonin, G. L. de Godoy, M. Arrial, L. G. Casagrande Journal: The Journal of Agricultural Science / Volume 157 / Issue 1 / January 2019 Published online by Cambridge University Press: 12 April 2019, pp. 63-71 Print publication: January 2019 The effects of growing pinto peanut mixed with elephant grass-based pastures are still little known. The aim of the current research was to evaluate the performance of herbage yield, nutritive value of forage and animal responses to levels of pinto peanut forage mass mixed with elephant grass in low-input systems. Three grazing systems were evaluated: (i) elephant grass-based (control); (ii) pinto peanut, low-density forage yield (63 g/kg of dry matter – DM) + elephant grass; and (iii) pinto peanut, high-density dry matter forage yield (206 g/kg DM) + elephant grass. The experimental design was completely randomized with the three treatments (grazing systems) and three replicates (paddocks) in split-plot grazing cycles. Forage samples were collected to evaluate the pasture and animal responses. Leaf blades of elephant grass and the other companion grasses of pinto peanut were collected to analyse the crude protein, in vitro digestible organic matter and total digestible nutrients. The pinto peanut, high-density dry matter forage yield + elephant grass treatment was found to give the best results in terms of herbage yield, forage intake and stocking rate, as well as having higher crude protein contents for both elephant grass and the other grasses, followed by pinto peanut with low-density forage yield + elephant grass and finally elephant grass alone. Better results were found with the grass–legume system for pasture and animal responses. Updated European Consensus Statement on diagnosis and treatment of adult ADHD J.J.S. Kooij, D. Bijlenga, L. Salerno, R. Jaeschke, I. Bitter, J. Balázs, J. Thome, G. Dom, S. Kasper, C. Nunes Filipe, S. Stes, P. Mohr, S. Leppämäki, M. Casas, J. Bobes, J.M. Mccarthy, V. 
Richarte, A. Kjems Philipsen, A. Pehlivanidis, A. Niemela, B. Styr, B. Semerci, B. Bolea-Alamanac, D. Edvinsson, D. Baeyens, D. Wynchank, E. Sobanski, A. Philipsen, F. McNicholas, H. Caci, I. Mihailescu, I. Manor, I. Dobrescu, T. Saito, J. Krause, J. Fayyad, J.A. Ramos-Quiroga, K. Foeken, F. Rad, M. Adamou, M. Ohlmeier, M. Fitzgerald, M. Gill, M. Lensing, N. Motavalli Mukaddes, P. Brudkiewicz, P. Gustafsson, P. Tani, P. Oswald, P.J. Carpentier, P. De Rossi, R. Delorme, S. Markovska Simoska, S. Pallanti, S. Young, S. Bejerot, T. Lehtonen, J. Kustow, U. Müller-Sedgwick, T. Hirvikoski, V. Pironti, Y. Ginsberg, Z. Félegyházy, M.P. Garcia-Portilla, P. Asherson Journal: European Psychiatry / Volume 56 / Issue 1 / February 2019 Published online by Cambridge University Press: 16 November 2018, pp. 14-34 Background Attention-deficit/hyperactivity disorder (ADHD) is among the most common psychiatric disorders of childhood that often persists into adulthood and old age. Yet ADHD is currently underdiagnosed and undertreated in many European countries, leading to chronicity of symptoms and impairment, due to lack of, or ineffective treatment, and higher costs of illness. Methods The European Network Adult ADHD and the Section for Neurodevelopmental Disorders Across the Lifespan (NDAL) of the European Psychiatric Association (EPA), aim to increase awareness and knowledge of adult ADHD in and outside Europe. This Updated European Consensus Statement aims to support clinicians with research evidence and clinical experience from 63 experts of European and other countries in which ADHD in adults is recognized and treated. Results Besides reviewing the latest research on prevalence, persistence, genetics and neurobiology of ADHD, three major questions are addressed: (1) What is the clinical picture of ADHD in adults? (2) How should ADHD be properly diagnosed in adults? (3) How should adult ADHDbe effectively treated? Conclusions ADHD often presents as a lifelong impairing condition. The stigma surrounding ADHD, mainly due to lack of knowledge, increases the suffering of patients. Education on the lifespan perspective, diagnostic assessment, and treatment of ADHD must increase for students of general and mental health, and for psychiatry professionals. Instruments for screening and diagnosis of ADHD in adults are available, as are effective evidence-based treatments for ADHD and its negative outcomes. More research is needed on gender differences, and in older adults with ADHD. Association between multimorbidity and undiagnosed obstructive sleep apnea severity and their impact on quality of life in men over 40 years old G. Ruel, S. A. Martin, J.-F. Lévesque, G. A. Wittert, R. J. Adams, S. L. Appleton, Z. Shi, A. W. Taylor Journal: Global Health, Epidemiology and Genomics / Volume 3 / 2018 Published online by Cambridge University Press: 04 June 2018, e10 Background. Multimorbidity is common but little is known about its relationship with obstructive sleep apnea (OSA). Men Androgen Inflammation Lifestyle Environment and Stress Study participants underwent polysomnography. Chronic diseases (CDs) were determined by biomedical measurement (diabetes, dyslipidaemia, hypertension, obesity), or self-report (depression, asthma, cardiovascular disease, arthritis). Associations between CD count, multimorbidity, apnea-hyponea index (AHI) and OSA severity and quality-of-life (QoL; mental & physical component scores), were determined using multinomial regression analyses, after adjustment for age. 
Of the 743 men participating in the study, overall 58% had multimorbidity (2+ CDs), and 52% had OSA (11% severe). About 70% of those with multimorbidity had undiagnosed OSA. Multimorbidity was associated with AHI and undiagnosed OSA. Elevated CD count was associated with higher AHI value and increased OSA severity. We demonstrate an independent association between the presence of OSA and multimorbidity in this representative sample of community-based men. This effect was strongest in men with moderate to severe OSA and three or more CDs, and appeared to produce a greater reduction in QoL when both conditions were present together. Education in Twins and Their Parents Across Birth Cohorts Over 100 years: An Individual-Level Pooled Analysis of 42-Twin Cohorts Karri Silventoinen, Aline Jelenkovic, Antti Latvala, Reijo Sund, Yoshie Yokoyama, Vilhelmina Ullemar, Catarina Almqvist, Catherine A. Derom, Robert F. Vlietinck, Ruth J. F. Loos, Christian Kandler, Chika Honda, Fujio Inui, Yoshinori Iwatani, Mikio Watanabe, Esther Rebato, Maria A. Stazi, Corrado Fagnani, Sonia Brescianini, Yoon-Mi Hur, Hoe-Uk Jeong, Tessa L. Cutler, John L. Hopper, Andreas Busjahn, Kimberly J. Saudino, Fuling Ji, Feng Ning, Zengchang Pang, Richard J. Rose, Markku Koskenvuo, Kauko Heikkilä, Wendy Cozen, Amie E. Hwang, Thomas M. Mack, Sisira H. Siribaddana, Matthew Hotopf, Athula Sumathipala, Fruhling Rijsdijk, Joohon Sung, Jina Kim, Jooyeon Lee, Sooji Lee, Tracy L. Nelson, Keith E. Whitfield, Qihua Tan, Dongfeng Zhang, Clare H. Llewellyn, Abigail Fisher, S. Alexandra Burt, Kelly L. Klump, Ariel Knafo-Noam, David Mankuta, Lior Abramson, Sarah E. Medland, Nicholas G. Martin, Grant W. Montgomery, Patrik K. E. Magnusson, Nancy L. Pedersen, Anna K. Dahl Aslan, Robin P. Corley, Brooke M. Huibregtse, Sevgi Y. Öncel, Fazil Aliev, Robert F. Krueger, Matt McGue, Shandell Pahlen, Gonneke Willemsen, Meike Bartels, Catharina E. M. van Beijsterveldt, Judy L. Silberg, Lindon J. Eaves, Hermine H. Maes, Jennifer R. Harris, Ingunn Brandt, Thomas S. Nilsen, Finn Rasmussen, Per Tynelius, Laura A. Baker, Catherine Tuvblad, Juan R. Ordoñana, Juan F. Sánchez-Romera, Lucia Colodro-Conde, Margaret Gatz, David A. Butler, Paul Lichtenstein, Jack H. Goldberg, K. Paige Harden, Elliot M. Tucker-Drob, Glen E. Duncan, Dedra Buchwald, Adam D. Tarnoki, David L. Tarnoki, Carol E. Franz, William S. Kremen, Michael J. Lyons, José A. Maia, Duarte L. Freitas, Eric Turkheimer, Thorkild I. A. Sørensen, Dorret I. Boomsma, Jaakko Kaprio Journal: Twin Research and Human Genetics / Volume 20 / Issue 5 / October 2017 Whether monozygotic (MZ) and dizygotic (DZ) twins differ from each other in a variety of phenotypes is important for genetic twin modeling and for inferences made from twin studies in general. We analyzed whether there were differences in individual, maternal and paternal education between MZ and DZ twins in a large pooled dataset. Information was gathered on individual education for 218,362 adult twins from 27 twin cohorts (53% females; 39% MZ twins), and on maternal and paternal education for 147,315 and 143,056 twins respectively, from 28 twin cohorts (52% females; 38% MZ twins). Together, we had information on individual or parental education from 42 twin cohorts representing 19 countries. The original education classifications were transformed to education years and analyzed using linear regression models. Overall, MZ males had 0.26 (95% CI [0.21, 0.31]) years and MZ females 0.17 (95% CI [0.12, 0.21]) years longer education than DZ twins. 
The zygosity difference became smaller in more recent birth cohorts for both males and females. Parental education was somewhat longer for fathers of DZ twins in cohorts born in 1990–1999 (0.16 years, 95% CI [0.08, 0.25]) and 2000 or later (0.11 years, 95% CI [0.00, 0.22]), compared with fathers of MZ twins. The results show that the years of both individual and parental education are largely similar in MZ and DZ twins. We suggest that the socio-economic differences between MZ and DZ twins are so small that inferences based upon genetic modeling of twin data are not affected.
CommonCrawl
Discriminant of Restricted Quintic
John Baez / 1 June, 2016
Zero Set of Discriminant of Restricted Quintic – Greg Egan
This image by Greg Egan shows the set of points \((a,b,c)\) for which the quintic \(x^5 + ax^4 + bx^2 + c\) has repeated roots. The plane \(c = 0\) has been removed. The fascinating thing about this surface is that it appears to be diffeomorphic to two other surfaces, defined in completely different ways, which we discussed here:
• Involutes of a cubical parabola.
• Discriminant of the icosahedral group.
The icosahedral group is also called \(\mathrm{H}_3\). In his book The Theory of Singularities and its Applications, V. I. Arnol'd writes:
The discriminant of the group \(\mathrm{H}_3\) is shown in Fig. 18. Its singularities were studied by O. V. Lyashko (1982) with the help of a computer. This surface has two smooth cusped edges, one of order 3/2 and the other of order 5/2. Both are cubically tangent at the origin. Lyashko has also proved that this surface is diffeomorphic to the set of polynomials \(x^5 + ax^4 + bx^2 + c\) having a multiple root.
Figure 18 of Arnol'd's book is a hand-drawn version of the surface below:
\(\mathrm{H}_3\) Discriminant – Greg Egan
Arnol'd's claim that the discriminant of \(\mathrm{H}_3\) is diffeomorphic to the set of polynomials \(x^5 + ax^4 + bx^2 + c\) having a repeated root is not literally true, since all such polynomials with \(c = 0\) have a repeated root, and we need to remove this plane to obtain a surface that looks like the discriminant of \(\mathrm{H}_3\). After this correction his claim seems right, but it still deserves proof.
Puzzle. Can you prove the corrected version of Arnol'd's claim?
Arnol'd's claim appears on page 29 here:
• Vladimir I. Arnol'd, The Theory of Singularities and its Applications, Cambridge U. Press, Cambridge, 1991.
The following papers are also relevant:
• Vladimir I. Arnol'd, Singularities of systems of rays, Uspekhi Mat. Nauk 38:2 (1983), 77–147. English translation in Russian Math. Surveys 38:2 (1983), 77–176.
• O. Y. Lyashko, Classification of critical points of functions on a manifold with singular boundary, Funktsional. Anal. i Prilozhen. 17:3 (1983), 28–36. English translation in Functional Analysis and its Applications 17:3 (1983), 187–193.
• O. P. Shcherbak, Singularities of a family of evolvents in the neighbourhood of a point of inflection of a curve, and the group \(\mathrm{H}_3\) generated by reflections, Funktsional. Anal. i Prilozhen. 17:4 (1983), 70–72. English translation in Functional Analysis and its Applications 17:4 (1983), 301–303.
• O. P. Shcherbak, Wavefronts and reflection groups, Uspekhi Mat. Nauk 43:3 (1988), 125–160. English translation in Russian Mathematical Surveys 43:3 (1988), 149–194.
All these sources discuss the discoveries of Arnol'd and his colleagues relating singularities and Coxeter–Dynkin diagrams, starting with the more familiar \(\mathrm{ADE}\) cases, then moving on to the non-simply-laced cases, and finally the non-crystallographic cases related to \(\mathrm{H}_2\) (the symmetry group of the pentagon), \(\mathrm{H}_3\) (the symmetry group of the icosahedron) and \(\mathrm{H}_4\) (the symmetry group of the 600-cell).
Visual Insight is a place to share striking images that help explain advanced topics in mathematics.
I'm always looking for truly beautiful images, so if you know about one, please drop a comment here and let me know!
5 thoughts on "Discriminant of Restricted Quintic"
James Prichard says:
Can Klein's method for solving the quintic be adapted to eliminate the \(x^3\) & \(x\) terms instead of the \(x^4\) & \(x^3\) terms?
John Baez says:
I don't know. I have thought about how the special class of quintics discussed here ($x^5+ax^4+bx^2+c = 0$) is related to the special classes that show up when people try to solve the quintic, but I haven't any good ideas yet! Klein's ideas are especially tempting, because they involve the icosahedron. But still, I don't see what's going on.
Greg Egan says:
2 September, 2016 at 06:36
An explicit diffeomorphism between the three-dimensional space of invariants of the icosahedral symmetry group $(P,Q,R)$ and the coefficients $(a,b,c)$ of the restricted quintics that maps the discriminant into the discriminant is given by:
$(a,b,c) = (P, \alpha Q + \beta P^3, \gamma R + \delta Q P^2 + \epsilon P^5)$
where the constants $\alpha, \beta, \gamma, \delta, \epsilon$ depend on the precise choice of normalisation for the invariants $P, Q, R$. If we recall that those invariants (defined here) are homogeneous polynomials of degree 2, 6 and 10 in the coordinates $x, y, z$, this diffeomorphism is the simplest kind of map that is consistent with the fact that $a, b, c$ are homogeneous polynomials of degree 1, 3 and 5 in the roots of the quintic. It's not hard to see that this map has a constant, non-zero Jacobian determinant, which proves that it is a diffeomorphism, and in fact it has a polynomial inverse of the same general form. The five constants $\alpha, \beta, \gamma, \delta, \epsilon$ can be determined by identifying three curves that appear in both discriminant surfaces, the lines of cusps of type 5/2 and 3/2 and the line of double points, which all take the general parametric form $(a, A_i a^3, B_i a^5)$ or $(P, C_i P^3, D_i P^5)$, with four of the constants here equal to zero, in those cases where the curve in question is a straight line along a coordinate axis.
Excellent! So the Puzzle is solved!
Although the particular diffeomorphism depends on the choice of normalisation for the invariants $P,Q,R$ of the icosahedral symmetry group, it's possible to make the following identification that remains true regardless of that choice. If $e_1$ and $e_2$ are two orthogonal unit vectors that pass through edge centres of the icosahedron, then the point on the mirror plane:
$m = \sqrt{\frac{1}{2}\left(\left(1+\frac{1}{\sqrt{5}}\right) a + \sqrt{5} \alpha \right)} e_1 + \sqrt{\frac{1}{2}\left(\left(1-\frac{1}{\sqrt{5}}\right) a - \sqrt{5} \alpha \right)} e_2$
when mapped to the invariant space $(P,Q,R)$ and then to the space of quintics $(a,b,c)$ corresponds to the restricted quintic with a coefficient of $a$ for the degree-4 term and a repeated root of $\alpha$.
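The point Baez makes about the \(c = 0\) plane is easy to check symbolically: setting \(c = 0\) turns the quintic into \(x^2(x^3 + ax^2 + b)\), so \(x = 0\) is always a repeated root and the whole plane lies inside the zero set of the discriminant. Here is a minimal sympy sketch of that check (my own illustration, not code from the post):

```python
from sympy import symbols, discriminant, factor

x, a, b, c = symbols('x a b c')
f = x**5 + a*x**4 + b*x**2 + c

print(factor(f.subs(c, 0)))    # x**2*(x**3 + a*x**2 + b): x = 0 is a repeated root
D = discriminant(f, x)         # polynomial in a, b, c whose zero set is the surface
print(D.subs(c, 0).expand())   # 0, so the plane c = 0 sits entirely inside the zero set
                               # and must be removed to get the surface in the image
```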
CommonCrawl
How Can Infinitely Many Primes Be Infinitely Far Apart? Quantized Academy By Patrick Honner Mathematicians have been studying the distribution of prime numbers for thousands of years. Recent results about a curious kind of prime offer a new take on how spread out they can be. If you've been following the math news this month, you know that the 35-year-old number theorist James Maynard won a Fields Medal — the highest honor for a mathematician. Maynard likes math questions that "are simple enough to explain to a high school student but hard enough to stump mathematicians for centuries," Quanta reported, and one of those simple questions is this: As you move out along the number line, must there always be prime numbers that are close together? You may have noticed that mathematicians are obsessed with prime numbers. What draws them in? Maybe it's the fact that prime numbers embody some of math's most fundamental structures and mysteries. The primes map out the universe of multiplication by allowing us to classify and categorize every number with a unique factorization. But even though humans have been playing with primes since the dawn of multiplication, we still aren't exactly sure where primes will pop up, how spread out they are, or how close they must be. As far as we know, prime numbers follow no simple pattern. Our fascination with these fundamental objects has led to the invention, or discovery, of hundreds of different types of primes: Mersenne primes (primes of the form \(2^n - 1\)), balanced primes (primes that are the average of two neighboring primes), and Sophie Germain primes (a prime \(p\) such that \(2p + 1\) is also prime), to name a few. Interest in these special primes grew out of playing around with numbers and discovering something new. That's also true of "digitally delicate primes," a recent addition to the list that has led to some surprising results about the most basic of questions: Just how rare or common can certain kinds of primes be? To appreciate this question, let's start with one of the first intriguing facts an aspiring number enthusiast learns: There are infinitely many prime numbers. Euclid proved this 2,000 years ago using one of the most famous proofs by contradiction in all of math history. He started by assuming that there are only finitely many primes and imagined all n of them in a list: \(p_1, p_2, p_3, \ldots, p_n\). Then he did something clever: He thought about the number \(q = p_1 \times p_2 \times p_3 \times \cdots \times p_n + 1\). Notice that q can't be on the list of primes, because it's bigger than everything on the list. So if a finite list of primes exists, this number q can't be prime. But if q is not a prime, it must be divisible by something other than itself and 1. This, in turn, means that q must be divisible by some prime on the list, but because of the way q is constructed, dividing q by anything on the list leaves a remainder of 1. So apparently q is neither prime nor divisible by any prime, which is a contradiction that results from assuming that there are only finitely many primes. Therefore, to avoid this contradiction, there must in fact be infinitely many primes. Given that there are infinitely many of them, you might think that primes of all kinds are easy to find, but one of the next things a prime number detective learns is how spread out the primes can be.
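Euclid's construction is easy to experiment with numerically. The short sketch below (my own illustration, not part of the column) builds \(q\) from a hypothetical finite list of primes and confirms that dividing \(q\) by any prime on the list leaves a remainder of 1:

```python
from math import prod
from sympy import isprime

primes = [2, 3, 5, 7]              # suppose, for contradiction, this list were complete
q = prod(primes) + 1               # q = 211
print([q % p for p in primes])     # [1, 1, 1, 1]: no prime on the list divides q
print(isprime(q))                  # True here, but in general q only needs a prime
                                   # factor missing from the list; it need not be prime
```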
A simple result about the spaces between consecutive prime numbers, called prime gaps, says something quite surprising. Among the first 10 prime numbers — 2, 3, 5, 7, 11, 13, 17, 19, 23 and 29 — you can see gaps that consist of one or more composite numbers (numbers that are not prime, like 4, 12 or 27). You can measure these gaps by counting the composite numbers in between: For example, there is a gap of size 0 between 2 and 3, a gap of size 1 between both 3 and 5 and 5 and 7, a gap of size 3 between 7 and 11, and so on. The largest prime gap on this list consists of the five composite numbers — 24, 25, 26, 27 and 28 — between 23 and 29. Now for the incredible result: Prime gaps can be arbitrarily long. This means that there exist consecutive prime numbers as far apart as you can imagine. Perhaps just as incredible is how easy this fact is to prove. We already have a prime gap of length 5 above. Could there be one of length 6? Instead of searching lists of primes in hopes of finding one, we'll just build it ourselves. To do so we'll use the factorial function used in basic counting formulas: By definition, \(n! = n \times (n-1) \times (n-2) \times \cdots \times 3 \times 2 \times 1\), so for example \(3! = 3 \times 2 \times 1 = 6\) and \(5! = 5 \times 4 \times 3 \times 2 \times 1 = 120\). Now let's build our prime gap. Consider the following sequence of consecutive numbers: \(7!+2\), \(7!+3\), \(7!+4\), \(7!+5\), \(7!+6\), \(7!+7\). Since \(7! = 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1\), the first number in our sequence, \(7!+2\), is divisible by 2, which you can see after a little bit of factoring: \(7!+2 = 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1 + 2 = 2(7 \times 6 \times 5 \times 4 \times 3 \times 1 + 1)\). Likewise, the second number, \(7!+3\), is divisible by 3, since \(7!+3 = 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1 + 3 = 3(7 \times 6 \times 5 \times 4 \times 2 \times 1 + 1)\). Similarly, 7! + 4 is divisible by 4, 7! + 5 by 5, 7! + 6 by 6, and 7! + 7 by 7, which makes 7! + 2, 7! + 3, 7! + 4, 7! + 5, 7! + 6, 7! + 7 a sequence of six consecutive composite numbers. We have a prime gap of at least 6. This strategy is easy to generalize. The sequence \(n!+2\), \(n!+3\), \(n!+4\), \(\ldots\), \(n!+n\) is a sequence of \(n-1\) consecutive composite numbers, which means that, for any \(n\), there is a prime gap with a length of at least \(n-1\). This shows that there are arbitrarily long prime gaps, and so out along the list of natural numbers there are places where the closest primes are 100, or 1,000, or even 1,000,000,000 numbers apart. A classic tension can be seen in these results. There are infinitely many prime numbers, yet consecutive primes can also be infinitely far apart. What's more, there are infinitely many consecutive primes that are close together. About 10 years ago the groundbreaking work of Yitang Zhang set off a race to close the gap and prove the twin primes conjecture, which asserts that there are infinitely many pairs of primes that differ by just 2. The twin primes conjecture is one of the most famous open questions in mathematics, and James Maynard has made his own significant contributions toward proving this elusive result. This tension is also present in recent results about so-called digitally delicate primes. To get a sense of what these numbers are and where they may or may not be, take a moment to ponder the following strange question: Is there a two-digit prime number that always becomes composite with any change to its ones digit?
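Before turning to that question, the factorial construction above is easy to verify numerically. The sketch below (mine, not the column's) checks that \(7!+2\) through \(7!+7\) are all composite and that the same recipe yields a run of at least \(n-1\) composites for any \(n\):

```python
from math import factorial
from sympy import isprime

def gap_run(n):
    """The n - 1 consecutive numbers n!+2, n!+3, ..., n!+n, all guaranteed composite."""
    return [factorial(n) + k for k in range(2, n + 1)]

print(all(not isprime(m) for m in gap_run(7)))                      # True: six consecutive composites
print(len(gap_run(12)), all(not isprime(m) for m in gap_run(12)))   # 11 True
```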
To get a feel for digital delicacy, let's play around with the number 23. We know it's a prime, but what happens if you change its ones digit? Well, 20, 22, 24, 26 and 28 are all even, and thus composite; 21 is divisible by 3, 25 is divisible by 5, and 27 is divisible by 9. So far, so good. But if you change the ones digit to a 9, you get 29, which is still a prime. So 23 is not the kind of prime we're looking for. What about 37? As we saw above, we don't need to bother checking even numbers or numbers that end in 5, so we'll just check 31, 33 and 39. Since 31 is also prime, 37 doesn't work either. Does such a number even exist? The answer is yes, but we have to go all the way up to 97 to find it: 97 is a prime, but 91 (divisible by 7), 93 (divisible by 3) and 99 (also divisible by 3) are all composite, along with the even numbers and 95. A prime number is "delicate" if, when you change any one of its digits to anything else, it loses its "primeness" (or primality, to use the technical term). So far we see that 97 is delicate in the ones digit — since changing that digit always produces a composite number — but does 97 satisfy the full criteria of being digitally delicate? The answer is no, because if you change the tens digit to 1 you get 17, a prime. (Notice that 37, 47 and 67 are all primes as well.) In fact, there is no two-digit digitally delicate prime. The following table of all the two-digit numbers, with the two-digit primes shaded in, shows why. All the numbers in any given row have the same tens digit, and all the numbers in any given column have the same ones digit. The fact that 97 is the only shaded number in its row reflects the fact that it is delicate in the ones digit, but it's not the only prime in its column, which means it is not delicate in the tens digit. A digitally delicate two-digit prime would have to be the only prime in its row and column. As the table shows, no such two-digit prime exists. What about a digitally delicate three-digit prime? Here's a similar table showing the layout of the three-digit primes between 100 and 199, with composite numbers omitted. Here we see that 113 is in its own row, which means it's delicate in the ones digit. But 113 isn't in its own column, so some changes to the tens digit (like to 0 for 103 or to 6 for 163) produce primes. Since no number appears in both its own row and its own column, we quickly see there is no three-digit number that is guaranteed to be composite if you change its ones digit or its tens digit. This means there can be no three-digit digitally delicate prime. Notice that we didn't even check the hundreds digit. To be truly digitally delicate, a three-digit number would have to avoid primes in three directions in a three-dimensional table. Do digitally delicate primes even exist? As you go further out on the number line the primes tend to get sparser, which makes them less likely to cross paths in the rows and columns of these high-dimensional tables. But larger numbers have more digits, and each additional digit decreases the likelihood of a prime being digitally delicate. If you keep going, you'll discover that digitally delicate primes do exist. The smallest is 294,001. When you change one of its digits, the number you get — 794,001, say, or 284,001 — will be composite. And there are more: The next few are 505,447; 584,141; 604,171; 971,767; and 1,062,599. In fact, they don't stop. The famous mathematician Paul Erdős proved that there are infinitely many digitally delicate primes. 
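A brute-force test of digital delicacy is easy to write, and it confirms both 97's near miss and 294,001; with base=2 it also answers the binary challenge at the end of the column. This is my own sketch, not code from the column:

```python
from sympy import isprime

def is_digitally_delicate(p, base=10):
    """True if p is prime and changing any one digit of p (in this base) gives a composite."""
    if not isprime(p):
        return False
    digits, q = [], p
    while q:
        digits.append(q % base)
        q //= base
    for pos, d in enumerate(digits):              # try every other value in every position
        for new in range(base):
            if new != d and isprime(p + (new - d) * base**pos):
                return False
    return True

print(is_digitally_delicate(97))       # False: changing the tens digit gives the prime 17
print(is_digitally_delicate(294001))   # True: the smallest digitally delicate prime
```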
And that was just the first of many surprising results about these curious numbers. For example, Erdős didn't just prove that there are infinitely many digitally delicate primes: He proved that there are infinitely many digitally delicate primes in any base. So if you choose to represent your numbers in binary, ternary or hexadecimal, you're still guaranteed to find infinitely many digitally delicate primes. And digitally delicate primes aren't just infinite: They comprise a nonzero percentage of all prime numbers. This means that if you look at the ratio of the number of digitally delicate primes to the number of primes overall, this fraction is some number greater than zero. In technical terms, a "positive proportion" of all primes are digitally delicate, as the Fields medalist Terence Tao proved in 2010. The primes themselves don't make up a positive proportion of all numbers, since you'll find fewer and fewer primes the farther out you go along the number line. Yet among those primes, you'll continue to find digitally delicate primes often enough to keep the ratio of delicate primes to total primes above zero. Maybe the most shocking discovery was a result from 2020 about a new variation of these strange numbers. By relaxing the concept of what a digit is, mathematicians reimagined the representation of a number: Instead of thinking about 97 by itself, they instead thought of it as having leading zeros: Each leading zero can be thought of as a digit, and the question of digital delicacy can be extended to these new representations. Could there exist "widely digitally delicate primes" — prime numbers that always become composite if you change any of the digits, including any of those leading zeros? Thanks to the work of the mathematicians Michael Filaseta and Jeremiah Southwick, we know that the answer, surprisingly, is yes. Not only do widely digitally delicate primes exist, but there are infinitely many of them. Prime numbers form an infinite string of mathematical puzzles for professionals and enthusiasts to play with. We may never unravel all their mysteries, but you can count on mathematicians to continually discover, and invent, new kinds of primes to explore.
1. What's the biggest prime gap among the primes from 2 to 101?
2. To prove that there are infinitely many primes, Euclid assumes there are finitely many primes \(p_1, p_2, p_3, \ldots, p_n\), and then shows that \(q = p_1 \times p_2 \times p_3 \times \cdots \times p_n + 1\) isn't divisible by any prime on the list. Doesn't this mean that q has to be prime?
3. A famous result in number theory is that there is always a prime between k and 2k (inclusive). This is hard to prove, but it's easy to prove that there's always a prime between k and \(q = p_1 \times p_2 \times p_3 \times \cdots \times p_n + 1\) (inclusive), where \(p_1, p_2, p_3, \ldots, p_n\) are all the primes less than or equal to k. Prove it.
4. Can you find the smallest prime number that is digitally delicate in the ones and tens digits? This means that changing the ones or tens digit will always produce a composite number. (You might want to write a computer program to do this!)
Challenge Problem: Can you find the smallest prime number that is digitally delicate when represented in binary? Recall that in binary, or base 2, the only digits are 0 and 1, and each place value represents a power of 2.
For example, 8 is represented as \(1000_2\), since \(8 = 1 \times 2^3 + 0 \times 2^2 + 0 \times 2^1 + 0 \times 2^0\), and 7 in base 2 is \(111_2\), since \(7 = 1 \times 2^2 + 1 \times 2^1 + 1 \times 2^0\).
Answer 1: The largest gap is between the primes 89 and 97. Generally speaking, the gaps get larger as you go further out along the number line, but of course the twin primes conjecture claims that there will always be primes very close together no matter how far out you go. Notice also how inefficient the method for constructing prime gaps used in this column is: To construct a prime gap of this size, you would start with the number \(8! + 2 = 40,322\).
Answer 2: No. Consider the first six primes: 2, 3, 5, 7, 11 and 13. In this case the number q would be \(2 \times 3 \times 5 \times 7 \times 11 \times 13 + 1 = 30,031\). This is not divisible by 2, 3, 5, 7, 11 or 13, but it's not a prime: it factors as \(30,031 = 59 \times 509\). Notice it has prime factors, but they are all larger than the first six primes.
Answer 3: If either k or q is prime we're done. If q isn't prime it's composite, which means it's divisible by some prime number, but we already know that it's not divisible by any of the first n primes. Thus it has to be divisible by a prime larger than the first n primes, and since these are all the primes less than k, this prime must be bigger than k. But this prime divides q, so it must be less than q, so there must be a prime between k and q.
Answer 4: The first prime that satisfies this property is 2,459, since 2,451, 2,453 and 2,457 are all composite (satisfying the delicate ones digit criterion) and 2,409, 2,419, 2,429, 2,439, 2,449, 2,469, 2,479, 2,489 and 2,499 are all composite (satisfying the delicate tens digit criterion). Yet 2,459 isn't digitally delicate, because 2,659 is prime, so it fails once you start considering the hundreds digit. (Thanks to the mathematician John D. Cook for publishing his digitally delicate prime-finding Python code.)
Answer to Challenge Problem: \(127 = 1111111_2\) is digitally delicate, since \(126 = 1111110_2\), \(125 = 1111101_2\), \(123 = 1111011_2\), \(119 = 1110111_2\), \(111 = 1101111_2\), \(95 = 1011111_2\), and \(63 = 0111111_2\) are all composite.
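The last two answers above can be double-checked in a few lines (again my own sketch, not the column's code):

```python
from sympy import isprime

# Exercise 4: 2,459 is prime, and every change of its ones or tens digit is composite.
ones_variants = [2450 + d for d in range(10) if d != 9]
tens_variants = [2409 + 10 * d for d in range(10) if d != 5]
print(isprime(2459), all(not isprime(n) for n in ones_variants + tens_variants))   # True True

# Challenge: 127 = 1111111 in binary; flipping any one of its seven bits gives a composite.
flips = [127 ^ (1 << k) for k in range(7)]
print(isprime(127), all(not isprime(n) for n in flips))                            # True True
```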
CommonCrawl
Space Science Reviews February 2019 , 215:12 | Cite as SEIS: Insight's Seismic Experiment for Internal Structure of Mars P. Lognonné W. B. Banerdt D. Giardini W. T. Pike U. Christensen P. Laudet S. de Raucourt P. Zweifel S. Calcutt M. Bierwirth K. J. Hurst F. Ijpelaan J. W. Umland R. Llorca-Cejudo S. A. Larson R. F. Garcia S. Kedar B. Knapmeyer-Endrun D. Mimoun A. Mocquet M. P. Panning R. C. Weber A. Sylvestre-Baron G. Pont N. Verdier L. Kerjean L. J. Facto V. Gharakanian J. E. Feldman T. L. Hoffman D. B. Klein K. Klein N. P. Onufer J. Paredes-Garcia M. P. Petkov J. R. Willis S. E. Smrekar M. Drilleau T. Gabsi T. Nebut O. Robert S. Tillier C. Moreau M. Parise G. Aveni S. Ben Charef Y. Bennour T. Camus P. A. Dandonneau C. Desfoux O. Pot P. Revuz D. Mance J. tenPierick N. E. Bowles C. Charalambous A. K. Delahunty J. Hurley R. Irshad Huafeng Liu A. G. Mukherjee I. M. Standley A. E. Stott J. Temple T. Warren M. Eberhardt A. Kramer W. Kühne E.-P. Miettinen M. Monecke C. Aicardi M. André J. Baroukh A. Borrien A. Bouisset P. Boutte K. Brethomé C. Brysbaert T. Carlier M. Deleuze J. M. Desmarres D. Dilhan C. Doucet D. Faye N. Faye-Refalo R. Gonzalez C. Imbert C. Larigauderie E. Locatelli L. Luno J.-R. Meyer F. Mialhe J. M. Mouret M. Nonon Y. Pahn A. Paillet P. Pasquier G. Perez R. Perez L. Perrin B. Pouilloux A. Rosak I. Savin de Larclause J. Sicre M. Sodki N. Toulemont B. Vella C. Yana F. Alibay O. M. Avalos M. A. Balzer P. Bhandari E. Blanco B. D. Bone J. C. Bousman P. Bruneau F. J. Calef R. J. Calvet S. A. D'Agostino G. de los Santos R. G. Deen R. W. Denise J. Ervin N. W. Ferraro H. E. Gengl F. Grinblat D. Hernandez M. Hetzel M. E. Johnson L. Khachikyan J. Y. Lin S. M. Madzunkov S. L. Marshall I. G. Mikellides E. A. Miller W. Raff J. E. Singer C. M. Sunday J. F. Villalvazo M. C. Wallace D. Banfield J. A. Rodriguez-Manfredi C. T. Russell A. Trebi-Ollennu J. N. Maki E. Beucler M. Böse C. Bonjour J. L. Berenguer S. Ceylan J. Clinton V. Conejero I. Daubar V. Dehant P. Delage F. Euchner I. Estève L. Fayon L. Ferraioli C. L. Johnson J. Gagnepain-Beyneix M. Golombek A. Khan T. Kawamura B. Kenda P. Labrot N. Murdoch C. Pardo C. Perrin L. Pou A. Sauron D. Savoie S. Stähler E. Stutzmann N. A. Teanby J. Tromp M. van Driel M. Wieczorek R. Widmer-Schnidrig J. Wookey First Online: 28 January 2019 The InSight Mission to Mars II By the end of 2018, 42 years after the landing of the two Viking seismometers on Mars, InSight will deploy onto Mars' surface the SEIS (Seismic Experiment for Internal Structure) instrument; a six-axes seismometer equipped with both a long-period three-axes Very Broad Band (VBB) instrument and a three-axes short-period (SP) instrument. These six sensors will cover a broad range of the seismic bandwidth, from 0.01 Hz to 50 Hz, with possible extension to longer periods. Data will be transmitted in the form of three continuous VBB components at 2 sample per second (sps), an estimation of the short period energy content from the SP at 1 sps and a continuous compound VBB/SP vertical axis at 10 sps. The continuous streams will be augmented by requested event data with sample rates from 20 to 100 sps. SEIS will improve upon the existing resolution of Viking's Mars seismic monitoring by a factor of \(\sim 2500\) at 1 Hz and \(\sim 200\,000\) at 0.1 Hz. An additional major improvement is that, contrary to Viking, the seismometers will be deployed via a robotic arm directly onto Mars' surface and will be protected against temperature and wind by highly efficient thermal and wind shielding. 
Based on existing knowledge of Mars, it is reasonable to infer a moment magnitude detection threshold of \(M_{{w}} \sim 3\) at \(40^{\circ}\) epicentral distance and a potential to detect several tens of quakes and about five impacts per year. In this paper, we first describe the science goals of the experiment and the rationale used to define its requirements. We then provide a detailed description of the hardware, from the sensors to the deployment system and associated performance, including transfer functions of the seismic sensors and temperature sensors. We conclude by describing the experiment ground segment, including data processing services, outreach and education networks and provide a description of the format to be used for future data distribution. Mars seismology InSight Edited by William B. Banerdt and Christopher T. Russell The online version of this article ( https://doi.org/10.1007/s11214-018-0574-6) contains supplementary material, which is available to authorized users. 1 InSight's SEIS: Introduction and High Level Science Objectives The InSight mission will deploy the first complete geophysical observatory on Mars following in the footsteps of the Apollo Lunar Surface Experiments Package (ALSEP) deployed on the Moon during the Apollo program (e.g. Latham et al. 1969, 1970; Bates et al. 1979) It will thus provide the first ground truth constraints on interior structure of the planet. The InSight spacecraft was launched on May 5, 2018 and landed on Mars on November 26, 2018 in Elysium Planitia (Golombek et al. 2017). The three primary scientific investigations are the Seismic Experiment for Interior Structure (SEIS), the Heat Flow and Physical Properties Package (HP3, Spohn et al. 2018), a self-hammering mole that deploys a tether with temperature sensors to a depth of 3–5 m and the Rotation and Interior Structure Experiment (RISE; Folkner et al. 2018, an X-band precision tracking experiment which will follow the motion of the lander over a Martian year to determine the precession and nutation of Mars). In addition, there is a set of environmental sensors grouped as the Auxiliary Payload Sensor Suite (APSS; Banfield et al. 2018). This set of instruments includes a pressure sensor, wind sensors and a magnetometer. It was primarily included to decorrelate seismic events from atmospheric effects or lander and planetary magnetic field variations and ensure that putative seismic signals are not mistaken for wind activity. It is notable that the magnetometer might potentially be used to perform crustal and lithosphere magnetic sounding and the pressure sensor has a sensitivity compatible with infrasound detection. Finally, the lander also has an Instrument Deployment System (IDS; Trebi-Ollennu et al. 2018) with a robotic arm (Instrument Deployment Arm, or IDA) and set of cameras (Maki et al. 2018) which will deploy SEIS and HP3 to the surface of Mars. The camera will also be used to determine the azimuth of SEIS with respect to Geographic North Pole (Savoie et al. 2018) and to better understand the geology and physical properties of the local surface and shallow subsurface (Golombek et al. 2018). The InSight mission goal is to understand the formation and evolution of terrestrial planets through investigation of the interior structure and processes of Mars and secondarily to determine the present level of tectonic activity and impact flux on Mars. 
More specifically, the payload is targeted to determine through geophysical measurements the fundamental planetary parameters that can substantially contribute to these goals. Thus, in order to address these goals, InSight has the following science objectives:
Determine the size, composition and physical state of the core.
Determine the thickness and structure of the crust.
Determine the composition and structure of the mantle.
Determine the thermal state of the interior.
Measure the rate and distribution of internal seismic activity.
Measure the rate of impacts on the surface.
These goals have all been quantified as listed in Table 1 and have defined the InSight mission requirements (Level 1 or L1). Their rationale in terms of knowledge of Mars' interior structure and evolution is described in detail in Smrekar et al. (2019, this issue). All of these goals were defined before InSight was selected in 2012. In the ensuing six years, some of these have benefited from advances in knowledge from ongoing orbiter and lander measurements, but most are even more worthy of pursuit in view of recent findings. We illustrate this point for two examples, the core size and the crustal thickness.
Table 1 L1 Mission Requirements of InSight along with their associated science objectives. When the instrument name in the last column is in bold, the requirement is considered as threshold, while it is baseline otherwise. All threshold goals (dark grey) are associated with internal structure, while the baseline goals (light grey) are associated with heat flow, seismic activity and impact rate. Only the mission requirements related to SEIS are numbered.
Science objectives | Mission requirements | 2012 knowledge
Determine the thickness and structure of the crust | L1-1 Determine the crustal thickness to \(\pm 10~\mbox{km}\) | \(\pm 35~\mbox{km}\)
| L1-2 Detect any regional-scale crustal layering with velocity contrast \(\geq 0.5~\mbox{km}/\mbox{s}\) over a depth interval \(\geq 5~\mbox{km}\) |
Determine the composition and structure of the mantle | L1-3 Determine the seismic velocities in the upper 600 km of mantle to within \(\pm 0.25~\mbox{km}/\mbox{s}\) | \(\pm 1~\mbox{km}/\mbox{s}\) (inferred)
Determine the size, composition and physical state of the core | L1-4 Positively distinguish between a liquid and solid outer core | None (likely liquid) | RISE+SEIS
| L1-5 Determine the core radius to within \(\pm 200~\mbox{km}\) | \(\pm 450~\mbox{km}\)
| Determine the core density to within \(\pm 450~\mbox{kg}/\mbox{m}^{3}\) | \(\pm 1000~\mbox{kg}/\mbox{m}^{3}\)
Determine the thermal state of the interior | Determine the heat flux at the landing site to within \(\pm 5~\mbox{mW}/\mbox{m}^{2}\) | \(\pm 25~\mbox{mW}/\mbox{m}^{2}\)
Measure the rate and geographic distribution of seismic activity | L1-6 Determine the rate of seismic activity to within a factor of 2 | Factor of 10 (inferred)
| L1-7 Determine epicenter distance to ±25% and azimuth to ±20° |
Measure the rate of meteorite impacts on the surface | L1-8 Determine the rate of meteorite impacts to within a factor of 2 | Factor of 6
For the core, in the same way that Jeffreys (1926) demonstrated the liquid state of the Earth's outer core using tidal measurements, the range of \(k_{2}\) values observed for Mars at the solar tidal periods may only be explained by a core in a primarily, if not entirely, liquid state (Yoder et al. 2003). The most recent determinations of the tidal Love number from orbiters have furthermore narrowed our estimation of the Mars core.
The last proposed value for \(k_{2}\) (\(0.163 \pm 0.008\), based on the estimates of Konopliv et al. 2016 and Genova et al. 2016) ruled out earlier results in the range of \(k_{2}=0.12\mbox{--}0.13\) by Marty et al. (2009). It implies a core radius in the range of 1710–1860 km for the SEIS reference models (Smrekar et al. 2019, this issue) and in an even smaller range as proposed by Khan et al. (2018) (1720–1810 km). These two ranges are smaller than the \(\pm 200~\mbox{km}\) expected originally through either SEIS tidal or RISE geodetic measurements. But as shown by Panning et al. (2017) and Smrekar et al. (2019, this issue) more than 150 seconds of difference is predicted between the SEIS reference models for the arrival at the InSight station of shear waves generated by quakes and reflected by the core. InSight should thus be able to use core reflected waves to determine the core radius with much better resolution, perhaps a few tens of km. This is important because core size controls the maximum mantle pressure, which can have a significant influence on mineralogy and potential mantle convection regimes. Our second example is the crust. It appears to be very far from the homogeneous crust assumed in most geophysical models and the Martian lithosphere might also be far from thermodynamic and mechanical equilibrium. Goossens et al. (2017) suggested for example a very low average bulk density of \(2582\pm 209~\mbox{kg}/\mbox{m}^{3}\) which is significantly less than the \(2660\mbox{--}2760~\mbox{kg}/\mbox{m}^{3}\) range assumed by Khan et al. (2018). This suggests a mean crustal thickness of about 42 km, very well outside the 55–80 km range of Khan et al. (2018). The mean crustal thickness proposed by Goossens et al. (2017) is moreover based on the assumption than some crust remains even beneath the largest impacts, which remains to be proven. In addition, higher densities for the volcanoes (e.g., Belleguic et al. 2005), the discovery of feldspar-rich magmatic rocks analogous to the earliest continental crust on Earth (Sautter et al. 2015) and possible large temperature variations in the lithosphere (Plesa et al. 2016) indicate the possibility of significant lateral density variations which make gravity constraints weaker. Knowledge of the crustal thickness has therefore arguably not improved when one takes into account these unknowns, largely as a consequence of the non-uniqueness of any gravity interpretation and the lack of penetration for other geophysical observations (e.g., ground penetrating radars). Seismic measurements are mandatory for any significant new step in our knowledge of the Martian mean crustal thickness. The SEIS goals can also be considered in another context and compared to historical achievements in terrestrial and lunar seismology (see, e.g., Ben-Menahem 1995; Agnew 2002; Dziewonski and Romanowicz 2015; Schmitt 2015; Lognonné and Johnson 2015). After having located seismic activity on the Earth from human reports (Mallet 1853), instrumental seismology grew rapidly following the first remote observation of a quake from Japan in Potsdam (von Rebeur-Paschwitz 1889) and the first observation of the solid Earth tide with a gravimeter (Schweydar 1914). Subsequently seismology on the Earth was able to rapidly decipher the interior details of our planet. Table 2 provides a comparison between SEIS goals for Mars and some key discoveries made on the Earth in the period from 1850 to 1926 and on the Moon following the Apollo seismometer deployments in the early 1970s. 
Of course, such early observations always triggered alternative interpretations and multiple controversies before reaching consensus. Comparison of the mission science objectives with achievements made in terrestrial and lunar seismology. The suggested references are those corresponding to the first observation reported either in historical reviews or in the literature for the Moon. Many more studies were of course done after Science objective Earth analogue Lunar analogue Determine the thickness and structure of the crust Mohorovičić (1910, 1992) Toksöz et al. (1972) Jeffreys and Bullen (1940) Nakamura et al. (1973) Oldham (1906) (body waves) Jeffreys (1926) (solid tide) Williams et al. (2001), from LLR Nakamura et al. (1974), from far impact Weber et al. (2011), from ScS Garcia et al. (2011), from ScS e.g. Mallet (1853) Latham et al. (1971), DMQ Nakamura et al. (1974), HFT N/A (due to atmosphere shielding) Duennebier and Sutton (1974) Dainty et al. (1975) The major challenge of InSight SEIS, with its first non-ambiguous detection of marsquakes and solid tides, will be to implement a third planetary seismological success story. The single-station character of the mission will limit its scope compared to the 4-station Lunar passive seismology network (plus a partial fifth station consisting of the Apollo 17 gravimeter, Kawamura et al. 2015) and the current very dense network on Earth. This is among the reasons why InSight has chosen not to target the interpretation of any seismic observations deeper than the core-mantle boundary, likely leaving observation of any possible inner core phases, as made on Earth by Lehmann (1936) and proposed by Weber et al. (2011) for the Moon, a possible goal for future Mars geophysical networks. 2 Mars Seismology Background 2.1 Summary of Past Missions and InSight Pre-selection Efforts Seismology on Mars started with the seismometers on the Viking landers. This first attempted seismic exploration of Mars (Anderson et al. 1977a, 1977b) was unfortunately much less successful than the seismic exploration of the Moon. On Mars only the Viking 2 seismometer was operational, as the seismometer on the Viking 1 lander failed to unlock. The sensitivity of the Viking 2 seismometer was an order of magnitude less than the sensitivity of the Lunar Short Period (SP) seismometer for periods shorter than 1 s and five orders of magnitude less than the Lunar Long Period (LP) seismometer for periods longer than 10 s. No events were convincingly detected during the seismometer's 19 months of nearly continuous operation, with the possible exception of one event on sol 80 (Anderson et al. 1977a). The event occurred when no wind data were recorded but recent analyses (Lorenz et al. 2016) have shown that the local time excludes, with a better than 95% probability, wind-induced lander noise with such a high amplitude level. Nevertheless, the absence of other recorded events, as shown by Goins and Lazarewicz (1979), was probably related to the inadequate sensitivity of the seismometer in the frequency bandwidth of teleseismic body waves, as well as the device's high sensitivity to wind noise (Nakamura and Anderson 1979). On the other hand, this sensitivity provided a means to monitor the wind and atmospheric activity in a quite different way than with classical weather sensors (Lorenz and Nakamura 2014). The next mission for Mars seismology was the ambitious Russian 96 mission with a very large orbiter, two small autonomous stations (Linkin et al. 
1998) equipped with the short-period OPTIMISM (from the French "Observatoire PlanéTologIque MagnétIsme et Sismique sur Mars") seismometers (Lognonné et al. 1998) and 2 penetrators (Surkov and Kremnev 1996) with the Kamerton short-period (SP) seismometers (Khavroshkin and Tsyplakov 1996). After a successful initial launch, it failed to insert into a trans-Mars trajectory and fell into the Pacific. More than 2 decades of efforts will have therefore been spent in proposal formulations and instrument development between the collapse of Mars 96 and the selection of InSight. See the summary provided by Lognonné (2005), including efforts related the NetLander project, a network of 4 landers cancelled at the end of its phase B (Harri et al. 1999; Sotin et al. 2000; Lognonné et al. 2000; Banerdt and Pike 2001; Marsal et al. 2002; Dehant et al. 2004). We will describe only those directly related to the conception of InSight and SEIS. The collapse of NetLander marked indeed the end of near terms perspectives for a Mars Seismic Network mission and implied focus on a single-station seismic pathfinder mission (Banerdt and Lognonné 2003; Lognonné and Banerdt 2003). The first attempts were made jointly in the USA and in Europe, respectively with the GEMS (Geophysical and Environment Monitoring Station) NASA Mars Scout Program proposal (based on a Mars Pathfinder-like lander; Banerdt and Smrekar 2007) and with the Humboldt package (Lognonné et al. 2006; Biele et al. 2007, Mimoun et al. 2008) onboard the ExoMars lander and later, the MarsTwin proposal to ESA (Dehant et al. 2010). Although none of these projects were selected, they all paved the way for the InSight/SEIS design (Lognonné et al. 2011), which was proposed in 2010 to the NASA Discovery Program of the GEMS proposal (Banerdt et al. 2011). The latter was finally selected and renamed InSight in 2012. Thus some 40 years after the Viking landings, InSight will explore a virgin planet for seismology, armed only with a sparse set of a-priori constraints on the internal structure derived from other types of orbital or rover investigations and measurements on Martian meteorites. 2.2 Mars Interior Structure Before InSight While the expected data returned from SEIS will allow for a much-improved knowledge of the interior structure of Mars, it is possible to use existing geophysical observations, combined with geochemical analysis and mineral physics experiments and modeling, to estimate possible domains of internal structure. We review briefly in this section our knowledge before Insight's seismic data and refer to Smrekar et al. (2019, this issue) for a more in-depth discussion. Even without seismic observational data, many estimations of the internal elastic and compositional structure of Mars have been made in the last 40 years. The first were based on the knowledge on the seismic structure of the deep Earth, transposed to the pressure conditions inside Mars (Anderson et al. 1977a; Okal and Anderson 1978; Lognonné and Mosser 1993; Mocquet and Menvielle 2000). Fundamental constraints from planetary mass, moment of inertia and gravimetric tidal Love number \(k_{2}\) (for the latest assessment of \(k_{2}\), see Genova et al. 2016 and Konopliv et al. 2016) combined with some assumptions on bulk chemistry based on constraints primarily from Martian meteorites have then be used. See reviews in, e.g., McSween (1994) and Taylor (2013). A chronological evolution of the proposed models can be appreciated through recently published reviews in Mocquet et al. 
(2011), Dehant et al. (2012), Lognonné and Johnson (2015), Panning et al. (2017) and Smrekar et al. (2019, this issue). The current restricted set of geophysical and geochemical constraints allow for a wide variety of theoretical models. As part of the preparation for InSight, we have defined a sample range of candidate models (Fig. 1) similar to the collection created for a community blind test of Marsquake location approaches (Clinton et al. 2017). Eight of the models in the set (those with model names beginning with DWT or EH45T) are based on Rivoldini et al. (2011) and described in Panning et al. (2017). The model labeled ZG_DW is model M14_3 of Zharkov et al. (2009), based on the Dreibus and Wänke (1985) chemical model. The ZG_DW model has been corrected for the larger \(k _{2}\) value in Zharkov et al. (2017). The family of "AK" models (Khan et al. 2018) are constructed assuming 4 different bulk mantle compositions (the preface to "AK" with DW, LF, SAN and TAY referring to Dreibus and Wänke (1985), Lodders and Fegley (1997), Sanloup et al. (1999) and Taylor (2013), respectively) and therefore mineralogy. Models vary due to different assumed bulk silicate and core compositions, crustal models, thermal profiles and depths of first order interfaces (i.e. crust-mantle and core mantle boundaries). The largest differences between the models are primarily due to the trade-off between the core radius, the spherically averaged thickness of the crust and of their corresponding densities. Seismological constraints on the depth of the Moho and hopefully on the core radius, will be extremely useful to resolve these ambiguities. Waiting for these data, the mean density and moment of inertia already provide rather precise values for the gradients of pressure and adiabatic increase of temperature inside the mantle (\(0.12~\mbox{K}/\mbox{km}\)). Sample suite of 13 models (color-coded as in legend in lower right). (A) \(V_{P}\) (solid lines, in km/s), \(V_{S}\) (dashed, in km/s) and density (dotted, in \(\mbox{g}/\mbox{cm}^{3}\)) as function of depth (km). (B) Shear quality factor (\(Q\)) as a function of depth. Models DWThot through EH45ThotCrust2b are from Rivoldini et al. (2011), ZG_DW is from Zharkov et al. (2009) and models DWAK through TAYAK are from Khan et al. (2018). Figure updated from Panning et al. (2017) with models available at: https://doi.org/10.5281/zenodo.1478804 In the mantle, relatively shallow variation arises primarily due to a wide range of possible thermal profiles (Plesa et al. 2016). In the bulk of the mantle, however, velocity and density variations between possible models are smaller. For example, when a suite of models was calculated in the study of Panning et al. (2017) with varying published Martian mantle compositions, either enriched in olivine or pyroxene and temperature profiles (Plesa et al. 2016) using a consistent equation of state approach based on the code PerpleX (Connolly 2005) with thermodynamic data from Stixrude and Lithgow-Bertelloni (2011) and Rivoldini et al. (2011), shear velocity varied only within a band of \(\pm 0.15~\mbox{km}/\mbox{s}\). Some mid-mantle variation between models can be seen, however, near \(1100 \pm 200~\mbox{km}\) depth, where phase transitions between olivine, wadsleyite and ringwoodite are expected. 
The depth and sharpness of this transition, which is critical for determining whether seismic energy reflecting from such a transition can be observed, are primarily governed by the iron content and the temperature of the mantle (Mocquet et al. 1996). Estimates vary depending on the composition and temperature distribution used in the models (e.g. Sohl and Spohn 1997; Gudkova and Zharkov 2004; Verhoeven et al. 2005; Zharkov and Gudkova 2005; Khan and Connolly 2008; Zharkov et al. 2009; Rivoldini et al. 2011; Khan et al. 2018).
2.3 Expected Seismic Activity on Mars from Quakes and Impacts
We refer the reader to Clinton et al. (2018) for a more detailed discussion of internal seismic activity and to Daubar et al. (2018) for impacts, and summarize below the key points in terms of targeted quakes and impacts. Mars is expected to be seismically more active than the Moon, but less active than the Earth, based on the relative geologic histories of the terrestrial planets (Solomon et al. 1991; Oberst 1987; Goins et al. 1981). The total seismic moment release per year is \(\sim 10^{21}\mbox{--}10^{23}~\mbox{N}\,\mbox{m}/\mbox{yr}\) on the Earth (Pacheco and Sykes 1992) and \(\sim 10^{15}~\mbox{N}\,\mbox{m}/\mbox{yr}\) on the Moon (Goins et al. 1981). This would suggest a total moment release on Mars midway between the Earth and Moon, or somewhere between \(10^{17}~\mbox{N}\,\mbox{m}/\mbox{yr}\) and \(10^{19}~\mbox{N}\,\mbox{m}/\mbox{yr}\) (Phillips 1991; Golombek et al. 1992; Golombek 1994, 2002; Knapmeyer et al. 2006; Plesa et al. 2018). An average seismicity could therefore generate per year 2 quakes of moment larger than \(10^{17}~\mbox{N}\,\mbox{m}\), 10 quakes with moment larger than \(10^{16}~\mbox{N}\,\mbox{m}\) and 50 quakes with moment larger than \(10^{15}~\mbox{N}\,\mbox{m}\). This leads us to design SEIS with a performance compatible with the surface-wave detection of a quake with moment larger than \(10^{16}~\mbox{N}\,\mbox{m}\) anywhere on the planet, and with the detection of high signal-to-noise body waves from such a quake if it occurs outside the core shadow zone. Although the landing site was chosen mostly for landing safety and long-term operations considerations, Cerberus Fossae is only \(\sim 1500~\mbox{km}\) to the east-northeast of the InSight landing site and is one of the youngest tectonic features on Mars. It has been interpreted as a long graben system with cumulative offsets of 500 m or more (Vetterlein and Roberts 2010) and it contains boulder trails young enough to be preserved in eolian sediments (Roberts et al. 2012), indicative of large and perhaps very recent marsquakes that, if occurring again, would be large enough to be recorded by the InSight instruments (Taylor et al. 2013). Meteorite impacts provide another potential source of present-day seismic activity and are discussed in detail in Daubar et al. (2018), with a prediction of about 5 detectable events per year. The cratering rate on Mars can be estimated by extrapolating lunar isochrones to Mars (Hartmann 2005) or more directly from new impact craters detected using before and after orbital imagery (Malin et al. 2006; Daubar et al. 2013, 2016; Hartmann and Daubar 2017). Despite incomplete orbital coverage, the agreement between estimates of the crater production function from these studies is typically within a factor of 2 or 3, with the Mars-observational studies suggesting fewer impacts. Larger uncertainties lie in the estimation of the seismic signal amplitude generated by an impact.
The conversion of the impactor momentum or energy is subject to several hypotheses. It is discussed in detail by Daubar et al. (2018), and significant differences exist between approaches based on seismic efficiency (e.g. Teanby and Wookey 2011; Teanby 2015) and those that compute the equivalent seismic source directly (e.g. Lognonné et al. 2009; Gudkova et al. 2011; Lognonné and Johnson 2015; Karakostas et al. 2018).

3 SEIS Requirements

SEIS requirements (e.g. the Instrument Level 2 requirements, provided in Table 3) have been designed to meet the InSight mission goal and L1 requirements, as listed in Table 1. Traditional seismic analysis is based largely on the arrival times of body waves and direct surface waves acquired by a broadly distributed network of stations. In contrast, SEIS had to explicitly integrate the constraints of several single-station analysis techniques developed for extracting information on the Earth's interior and on seismic sources. This, in addition to the expected seismicity and seismic noise on Mars, was integrated in the experiment requirements, especially in the targeted sensitivity.

Table 3 SEIS instrument requirement (L2) flow and links to the InSight mission requirements (L1). The SEIS requirements have led to the VBB performance requirements indicated in Table 4
L1-1: Determine the depth of the crust-mantle boundary to within \(\pm 10~\mbox{km}\) | L2-1: Measure Rayleigh wave group velocity dispersion to ±5% for at least 2 quakes with SNR ≥ 3 on R3 wavetrains
L1-2: Detect velocity contrast \(\geq 0.5~\mbox{km}/\mbox{s}\) over a depth interval \(\geq 5~\mbox{km}\) within the crust, if it exists | L2-2: Measure group velocity dispersion to ±4% for at least 3 quakes with SNR ≥ 3 on R3 wavetrains
L1-3: Determine seismic velocities in the upper 600 km of the mantle to within \(\pm 0.25~\mbox{km}/\mbox{s}\) | L2-3: Measure P and S arrival times to \(\pm 2~\mbox{s}\) and R1 and R2 arrival times to \(\pm 15~\mbox{s}\) for at least 13 quakes
L1-4: Positively distinguish between liquid and solid outer core∗ | L2-4: Measure the Phobos tide amplitude to \(\pm 2.5\times 10^{-11}~\mbox{m}/\mbox{s}^{2}\)
L1-5: Determine the radius of the core to within \(\pm 200~\mbox{km}\)
L1-6: Determine the rate of seismic activity to within a factor of 2 | L2-6: Measure marsquake signals of P-wave amplitude \(\geq 6\times 10^{-9}~\mbox{m}/\mbox{s}^{2}\) with SNR ≥ 3
L1-7: Determine epicenter distance to ±25% and azimuth to \(\pm 20~\mbox{degrees}\) | L2-7: Measure the horizontal components of P-wave signals from \(10^{16}~\mbox{N}\,\mbox{m}\) quakes with an SNR ≥ 20; L2-7b: Detect P and S-wave signals from \(10^{16}~\mbox{N}\,\mbox{m}\) quakes at distances up to the shadow zone with SNR ≥ 3
L1-8: Determine the rate of meteorite impacts to within a factor of 2 | L2-8: Measure the seismic signals from meteorite impacts of P-wave amplitude \(\geq 3\times 10^{-9}~\mbox{m}/\mbox{s}^{2}\) with SNR ≥ 3

In this section, we first provide in Sect. 3.2 a general overview and review of the estimated amplitudes of seismic waves on Mars as a function of epicentral distance and seismic moment. In Sect. 3.3, we discuss the consequences of the single-station approach for the SEIS performance. We then present the instrument noise requirements and the expected environmental noise (Sect. 3.4). Section 3.5 then provides an estimate of the expected number of quake detections and Sect.
3.6 provides an update and a short critical review of new or challenging science goals prior to surface seismic operation. This identifies new goals of the experiment which, in many cases, were considered at risk and were not listed in the unpublished NASA 2012 concept study report.

3.2 Overview of Seismic Propagation on Mars

As compared with Earth, we expect to observe seismic events with lower magnitudes on Mars. We thus expect the data with the best signal-to-noise ratio on Mars to be found in the bandwidth of body waves and regional surface waves (Lognonné and Johnson 2007, 2015). From the seismograms, the most reliable seismological secondary data that could be extracted should be: travel times of body waves (in the short-period range, 0.1–5 Hz), group and phase velocities of Rayleigh surface waves (in the long-period range, 0.01–0.1 Hz) and eigenfrequencies of spheroidal fundamental normal modes in the frequency range of 0.01–0.02 Hz. Waveform polarization and estimates of azimuth should also be recoverable. Other data such as receiver functions, spheroidal normal modes below 0.01 Hz and overtones above 0.01 Hz, Love surface-wave group and phase velocities or toroidal-mode eigenfrequencies might also be extracted but will be more sensitive to thermal or horizontal noise. The analysis of the short-period part of the seismic spectrum will be mainly devoted to obtaining information from the P and S waves that pass through the planet. The P-wave arrival time is the most robust measurement on a seismogram but, inevitably, the waveforms to be recorded will look quite different from those on Earth. Except for quakes located close to the station, the seismic signal will be strongly reduced by scattering in the crust, due to its impact history, and by the attenuation of the planet (Lognonné and Johnson 2007, 2015). The importance of attenuation on Mars was originally pointed out by Goins and Lazarewicz (1979), who showed that the Viking seismometer, with a 4 Hz central frequency, was unable to detect remote events due to attenuation. While surface waves and quakes at small epicentral distances are expected to propagate mostly in the lithosphere, where the shear quality factor \(Q _{\mu }\) is expected to be large, the deep Martian mantle will likely have a relatively low \(Q_{\mu }\), comparable to or slightly larger than that of Earth's upper mantle (\(Q_{\mu } =140\) in the transition zone following Dziewonski and Anderson 1981) and much less than the \(Q_{\mu }\) observed in the deep lunar interior (\(Q_{\mu }=300\mbox{--}500\) following Toksöz et al. 1974). Proposed values range from \(Q_{\mu } = 140\) (e.g. Khan et al. 2016) to about \(Q_{\mu } =250\) from the use of the Anderson and Given (1982) model extended from Phobos' tidal period to the seismic band (Lognonné and Mosser 1993; Zharkov and Gudkova 1997). We refer the reader to Smrekar et al. (2019, this issue) for more discussion of the a priori Mars intrinsic attenuation and its relation to the Martian mantle, and discuss in this section only the implications in terms of seismic signal amplitudes. This \(Q_{\mu }\) is expected to be one of the major parameters influencing the detectability of remote activity. The amplitude ratio of the waves between two models depends on \(e^{- \pi fT \Delta ( \frac{1}{Q})}\), where \(f\) is the frequency, \(T\) the propagation time and \(\Delta ( \frac{1}{Q} )\) the difference of the inverse of \(Q\) between the two models.
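As a back-of-the-envelope check of this expression (not part of the SEIS processing chain), the short sketch below evaluates the exponential factor for the 3 s period and roughly 800 s propagation time used in the example that follows; the function and variable names are ours.

```python
import math

def attenuation_ratio(f_hz, t_s, q_a, q_b):
    """Amplitude ratio A(Q_a)/A(Q_b) = exp(-pi * f * T * (1/Q_a - 1/Q_b))."""
    return math.exp(-math.pi * f_hz * t_s * (1.0 / q_a - 1.0 / q_b))

# A 3 s period body wave with ~800 s of propagation (roughly 90 deg distance).
f, T = 1.0 / 3.0, 800.0
print(attenuation_ratio(f, T, 140.0, 175.0))  # ~0.30, i.e. a decrease by ~3.3
print(attenuation_ratio(f, T, 250.0, 175.0))  # ~4.2, i.e. an increase by ~4.2
```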
For a 3 s period body wave at 90° epicentral distance, for which the propagation time is roughly 800 s, this leads to an amplitude decrease by a factor of 3.3 for \(Q_{\mu } = 140\) relative to \(Q_{\mu } = 175\), and to an amplitude increase by a factor of 4.2 for \(Q_{\mu } = 250\), again relative to \(Q_{\mu } = 175\). This very high sensitivity of body-wave amplitudes to attenuation at large epicentral distance is a key difference from the Earth, for which the low attenuation of the lower mantle reduces attenuation losses at large epicentral distances. High sensitivity at long periods, which are more robust to attenuation, was therefore considered a critical requirement from the beginning for the OPTIMISM broad-band seismometer (Lognonné and Mosser 1993; Lognonné et al. 1998) and the very-broad-band one (Lognonné et al. 1996, 2000). Amplitudes of body waves were first estimated by Mocquet (1999), for an isotropic quake located at the surface with a seismic moment of \(10^{15}~\mbox{N}\,\mbox{m}\), and by Lognonné and Johnson (2015) for 1D models with a method enabling better amplitude modelling. Figure 2 shows the expected body-wave spectra for direct P, S and core-reflected ScS waves, for model A of Sohl and Spohn (1997) and for two attenuation models (\(Q_{\mu } = 250\) and \(Q_{\mu } = 175\) respectively, with corresponding \(Q_{p} = 625\) and \(Q_{p} = 440\)). In the 0.5–2.5 Hz frequency band, the amplitudes of the P body waves decrease rapidly with epicentral distance and are smaller than the S-wave amplitudes at the longest periods, whereas in the 0.1–1 Hz frequency band, the amplitude is relatively independent of epicentral distance only for P waves. On Earth, scattering is very strong in volcanic regions, which suggests that significant scattering may occur in volcanic areas on Mars, particularly in the Tharsis region. Scattering mainly affects P waves and decreases the peak-to-peak amplitudes of body waves by producing conversions (P to SV, P to SH) and by spreading this energy in time. For shallow quakes this effect will reduce the amplitude of the P waves near the source and the receiver and can decrease the P-wave energy by a factor of 10 (Lognonné and Johnson 2007). Figure 3 summarizes the amplitude over the full bandwidth of the expected signals as a function of epicentral distance for a magnitude 4 quake (moment \(10^{15}~\mbox{N}\,\mbox{m}\)), for two plausible Mars models. It can be seen that the SANAK model creates a much more extended surface-wave train than the cold crust of the EH45T model. In both models a broad S-wave shadow zone exists, but S energy will arrive strongly focused at distances between 60 and 90 degrees. A strong attenuation related to a low \(Q_{\mu }\) is assumed for these two models (\(Q_{\mu }\) of about 140, following the models of Khan et al. 2016, 2018).

Fig. 2 Body-wave amplitude spectra, for a 15-second window, compared with the Earth low-noise model (Peterson 1993), for quakes of moment \(10^{15}~\mbox{N}\,\mbox{m}\) at 45° (left) and 90° (right) epicentral distance, computed with a Gaussian beam method. The two dashed curves are for a shear \(Q_{\mu}\) of 250 (upper curve) and 175 (lower curve) respectively, in blue for P waves and red for S waves. On Earth, these body-wave signals would be hidden by the micro-seismic peak.
Note nevertheless the strong cutoff of amplitude at a few Hz, which shows that, for distant events, most of the amplitude will be recorded below 2 Hz for P body waves and below 1 Hz for S body waves.

Fig. 3 Global stack of synthetic seismogram envelopes for a magnitude 4 (moment \(10^{15}~\mbox{N}\,\mbox{m}\)) quake for two plausible Mars models, calculated using AxiSEM (Nissen-Meyer et al. 2014; van Driel et al. 2015). The seismograms were filtered with a noise-adapted filter suppressing all phases whose spectral power is below the noise level at all periods. In the plot, this corresponds to an amplitude of 0 dB. Note, however, that phases with an amplitude of 0 dB can still be detectable, based on their polarization. The depth of the event is 10 km.

Because they sample the crust, lithosphere and upper mantle, surface waves are the most important source of information for investigating the interior structure of Mars; they will propagate mostly in the relatively cold lithosphere, in which attenuation might be much lower than in the mantle (Lognonné et al. 1996). For instance, surface-wave group velocities are very sensitive to the crustal thickness, with typical variations of 10% for crustal thickness variations of 20 km (Lognonné and Johnson 2007, 2015). Though no surface waves were recorded on the Moon by the Apollo seismometers (e.g. Gagnepain-Beyneix et al. 2006), SEIS's improved performance at long periods and the expected larger magnitudes of quakes suggest the possibility of such detections on Mars. In the framework of the InSight mission, Panning et al. (2015, 2017) proposed a single-station technique based on measurements of globe-circling Rayleigh surface waves. It requires quakes with moments larger than \(10^{16}~\mbox{N}\,\mbox{m}\) and enables the location of quakes as well as inversion for crustal and upper-mantle structure. This technique is the key constraint on the SEIS performance, as it implies the detection of R3. The consequences are provided in Sect. 3.3. Mars is expected to be less dispersive than the Earth and, due to the smaller size of the planet, surface waves should have larger and more impulsive waveforms than on Earth. This amplitude ratio with respect to Earth increases with angular epicentral distance and can reach a factor of 15 at 90° (Okal 1991). All the techniques used for these amplitude estimates have of course been benchmarked against different 1D methods, such as normal-mode summation (Lognonné and Mosser 1993; Lognonné et al. 1996), AxiSEM (Ceylan et al. 2017) and SPECFEM (Larmat et al. 2008; Bozdag et al. 2017). Even if less severe than on the Moon, diffraction of surface waves may nevertheless be significant at periods shorter than 10 s, due to the fracturing of the crust by meteoritic impacts, and might affect these analyses.

3.3 Consequence of the Single Station Approach on the SEIS Performance

As noted above, one of the main drivers of the SEIS instrument requirements is associated with the global detection of R3 for \(10^{16}~\mbox{N}\,\mbox{m}\) quakes, which are expected to occur at a rate of a few per year to a few tens per year. The consequences of these requirements are illustrated by Fig. 4, which provides an estimate of the amplitude of long-period surface waves between periods of 25 s and 50 s, for a 1D model of Sohl and Spohn (1997). They are also listed in Table 4. Practically, the requirement of detecting the R3 surface waves, 10 times smaller than R1, is a major requirement for a single-station mission.
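To make the multi-orbit timing logic concrete, the idealized sketch below assumes a single, dispersionless Rayleigh group velocity U; it is an illustration of the principle only, not the Panning et al. (2015) algorithm, which works on dispersed, frequency-dependent wave trains. All names and the synthetic numbers are ours.

```python
# Idealized single-station location from multi-orbit Rayleigh waves (R1, R2, R3).
MARS_CIRCUMFERENCE_KM = 21297.0  # ~2 * pi * 3389.5 km

def locate_from_rayleigh(t_r1, t_r2, t_r3, circumference_km=MARS_CIRCUMFERENCE_KM):
    """Return (group velocity in km/s, epicentral distance in km, origin time in s)."""
    u = circumference_km / (t_r3 - t_r1)   # R3 travels one extra orbit relative to R1
    delta = 0.5 * u * (t_r3 - t_r2)        # t_R3 - t_R2 corresponds to 2 * Delta / U
    t0 = t_r1 - delta / u
    return u, delta, t0

# Synthetic check: U = 3.8 km/s, Delta = 5500 km, origin time t0 = 0 s.
u_true, delta_true = 3.8, 5500.0
L = MARS_CIRCUMFERENCE_KM
t_r1 = delta_true / u_true
t_r2 = (L - delta_true) / u_true
t_r3 = (L + delta_true) / u_true
print(locate_from_rayleigh(t_r1, t_r2, t_r3))  # recovers (3.8, 5500.0, 0.0)
```

The timing alone constrains distance and origin time; the back azimuth is recovered separately from the Rayleigh-wave polarization, as noted in Sect. 3.2.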
On the other hand, seismic network missions could instead focus on the direct waves, provided a sufficient number of stations are deployed. This was the case for the MESUR (Mars Environmental SURvey, Solomon et al. 1991) and especially for the Impact (Banerdt et al. 1998) concepts, where the performance requirements were related to the joint detection of the direct waves at more than 3 stations.

Fig. 4 Normal-mode summation synthetic seismograms for Mars show large signals for multiple surface-wave arrivals from a \(10^{16}~\mbox{N}\,\mbox{m}\) quake at a distance of \(90^{\circ}\) (5500 km). Filtering to isolate the Rayleigh waves suppresses the P and S arrivals around 10 minutes, which are actually quite strong (\(\mbox{SNR} >70\) in a 0.1–1 Hz band). Black, green, purple and cyan traces are for source depths of 10, 20, 50 and 100 km, respectively. Red lines denote the RMS noise level for \(10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) in amplitude spectral density in a bandwidth of 0.02–0.04 Hz, about \(1.5\times 10^{- 10}~\mbox{m}/\mbox{s}\). The dashed blue line shows the amplitude model used in the requirement flow.

Table 4 The SEIS VBB performance requirements
Vertical (VBB): \(10^{-9}~\mbox{m}/\mbox{s}^{2}/\sqrt{\mbox{Hz}}\) in the [0.01–1] Hz band
Horizontal (VBB): \(2.5\times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\sqrt{\mbox{Hz}}\) in the [0.1–1] Hz band
Vertical and horizontal (SP): \((f/15)^{2}\times 10^{-8}~\mbox{m}/\mbox{s}^{2}/\sqrt{\mbox{Hz}}\) in the [15–50] Hz band

This, together with early estimates of the efficiency of simple surface thermal protection of broad-band seismometers (e.g. Lognonné et al. 1996), led to the \(10^{-9}~\mbox{m}/\mbox{s}^{2}/\sqrt{\mbox{Hz}}\) requirement in the 0.01–1 Hz bandwidth on the vertical axis. While this obviously requires an electronic sensor self-noise below this level, the requirement also demands the mitigation of all other sources of noise, and it justified: (i) the low temperature sensitivity of the seismic sensors and the significant thermal protection of the housing sphere (2-hour time-constant requirement), (ii) the additional thermal protection and wind shield (5.5-hour requirement), (iii) the surface deployment of the SEIS sensor assembly with its wind and thermal shield (WTS), (iv) the minimization of all other sources of noise, including those from the tether, leveling system and packaging of the instruments, and (v) the inclusion in the payload of environmental sensors aiming to reduce the long-period noise when the latter is associated with either ground deformation due to pressure fluctuations or magnetic field effects on the sensor (see more in Sect. 3.4). For a temperature noise spectrum of \(30~\mbox{K}/\sqrt{\mbox{Hz}}\) at 100 s (Mimoun et al. 2017), based on Viking measurements, the two-stage thermal protection attenuates the temperature by a factor of more than \(5.6\times 10^{5}\) at 100 s but nevertheless requires a temperature sensitivity of about \(2\times 10^{-5}~\mbox{m}/\mbox{s}^{2}/{}^{\circ}\mbox{C}\) at the instrument level to reach the requirement.

3.4 Instrument Noise

As concluded by Anderson et al. (1977a) after the Viking seismometer data analysis: "One firm conclusion is that the natural background noise on Mars is low and that the wind is the prime noise source. It will be possible to reduce this noise by a factor of \(10^{3}\) on future missions by removing the seismometer from the lander, operation of an extremely sensitive seismometer thus being possible on the surface". As shown in Fig.
5, an improvement of about 2500 at 1 Hz and 200 000 at 0.1 Hz is expected in terms of resolution, which will however likely be limited by the environmental noise associated with the interaction of the Martian atmosphere and temperature variations with the SEIS assembly.

Fig. 5 Root-mean-square self-noise of the three main outputs of the SEIS instrument (VBB VEL, VBB POS and SP VEL), in acceleration for a \(1/6\)-of-a-decade bandwidth, as a function of the central frequency of the bandwidth. This is compared to the Apollo and Viking resolution or LSB, as neither of these instruments was able to record its self-noise due to limitations in the acquisition system (9 bits plus sign for Apollo, 7 bits plus sign for Viking). SEIS uses acquisition at 23 bits plus sign.

A complete noise model of the SEIS instrument has therefore been developed, in which all sources of noise associated with the interaction of the sensor with the Martian environment are added to the self-noise of the SEIS instrument. This noise model is extensively described by Mimoun et al. (2017) and is only summarized in this paper. Specific noise contributions are also described by Murdoch et al. (2017a) and Murdoch et al. (2017b) for the wind-induced noise associated with the lander and with ground deformation, respectively. Together with the estimates of seismic amplitudes described in the previous section, this noise model allowed an estimation of the signal-to-noise ratio of seismic events as a function of epicentral distance and magnitude (among other parameters), and was therefore vital to assess the success criteria of the experiment with respect to the achievement of its science goals. We have investigated the seismometer performance in three signal bandwidths: very low frequencies (typically \(10^{-5}~\mbox{Hz}\), to detect tides), the [0.01–1 Hz] bandwidth to detect teleseismic signals, and high-frequency signals (e.g. asteroid impacts, local events) that will be observable in the [1–20 Hz] bandwidth. In this section, we therefore only present the general approach developed by Mimoun et al. (2017) for the SEIS noise model, discuss the major environmental assumptions and analyze the major and minor contributors to the model. In the literature, noise analyses for seismometers often focus on the seismometer self-noise. This is due to the fact that most very-broad-band seismometers are operated inside seismic vaults with a very careful installation process (see e.g. Holcomb and Hutt 1993; McMillan 2002; Wielandt 2002; Trnkoczy et al. 2002), with very stable temperature conditions and with magnetic shielding (Forbriger et al. 2010). Despite all these efforts, the detection threshold of body waves on Earth is in addition limited by the minimum ambient Earth seismic noise, known as the low-noise model (e.g. Peterson 1993) and illustrated in Fig. 2. The situation for SEIS is different: SEIS will be deployed on the surface of Mars, where daily temperature variations can be larger than 80 K, and the instrument had to integrate this major design constraint from the very beginning. In addition, the instrument will be installed on very low-rigidity material and must be protected against all forces, whether related to its tether link or to wind stresses, which would induce instrument displacements on the ground.
The objective of the SEIS noise model is therefore twofold: first, to provide an estimate of the instrument noise for the various bandwidths of interest and, second, to help refine, where necessary, the requirements of the SEIS subsystems and of the various interfaces with the lander and HP3, including during the deployment. In some cases, the noise model led us to consider including additional sensors on the InSight lander to help us decorrelate the seismometer output from the environmental contributions, as already illustrated on Earth with a magnetometer (Forbriger et al. 2010) and a micro-barometer (Zürn and Widmer 1995; Beauduin et al. 1996; Zürn et al. 2007). See Murdoch et al. (2017b) for the implementation on InSight. The first step was to build a seismic noise model identifying and evaluating all possible contributors, including the instrument self-noise and the instrument sensitivity to the external environment. This is described in detail in Mimoun et al. (2017) and is only briefly summarized here. This ensures that a complete estimate of the noise of the instrument in the Martian environment can be made. We then followed the performance maturation loop during mission design and development. As is standard in any design process, the performance of all parts of the system evolved from estimated values to measured and validated values. The noise model allows the consequences of the evolution of these performances to be tracked throughout the mission design and development process. The noise requirements (see Table 4) were defined based on: (i) early Earth tests made at Pinion Flat Observatory with installation conditions comparable to those expected for InSight, including a tripod and windshield (Lognonné et al. 1996), and (ii) seismic amplitude estimates which indicated that these noise requirements are sufficient for SEIS to detect a sufficient number of quakes during the operational life of the lander (1 Mars year \(\sim 1.88\) Earth years). The requirements are specified at both instrument and system levels and on both the vertical and horizontal axes. Note that the horizontal requirements extend down to 0.1 Hz while the vertical-axis requirements extend to 0.01 Hz. The tilt sensitivity of the horizontal axes was indeed considered too large to include science associated with Love surface waves in the threshold and baseline science goals, and therefore in the mission requirement flow, without major risks. It was important, during the process of evaluating the various possible noise sources, to be very thorough in order to avoid omitting an important noise contribution, and we separated the sources of instrument noise into two categories: (i) instrument noise (self-noise), which includes contributions from the sensor head, electronics and tether and depends only weakly on temperature, although the decrease of the Brownian and Johnson noise and, for the VBBs, the increase of the mechanical gain at cold temperature might slightly reduce the self-noise during winter; and (ii) environmental effects, including noise derived from the instrument sensitivity to external perturbation sources (temperature variations, including thermoelastic effects on the ground and sensor mounting, magnetic field, electric field) and also the environmental effects generating ground acceleration or ground tilt (pressure signal, wind impact, etc.). This led to a noise map detailed in Mimoun et al. (2017) in the seismic bandwidth of the VBB (0.01–10 Hz) and in Pou et al. (2018) at very long periods.
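Once the individual contributions are expressed as acceleration amplitude spectral densities at a given frequency, a total noise estimate of this kind is typically obtained by summing independent contributions in quadrature. The sketch below is a generic illustration of that bookkeeping only; the contribution names and values are placeholders and not the actual SEIS budget of Mimoun et al. (2017).

```python
import math

def total_asd(contributions_asd):
    """Root-sum-square of independent noise contributions.

    Each value is an acceleration amplitude spectral density, in
    m/s^2/sqrt(Hz), at the same frequency; independent noises add in power.
    """
    return math.sqrt(sum(asd ** 2 for asd in contributions_asd))

# Placeholder values at one frequency, for illustration only.
budget = {
    "sensor self-noise": 0.4e-9,
    "thermal leakage": 0.5e-9,
    "pressure-induced ground tilt": 0.6e-9,
    "magnetic sensitivity": 0.2e-9,
}
print(total_asd(budget.values()))  # ~0.9e-9 m/s^2/sqrt(Hz), below a 1e-9 target
```

The actual budget tracks such contributions as functions of frequency and of environmental conditions (day/night, mean to worst case), as summarized in Fig. 6.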
The quantification of these sources of noise has also been used to define the suite and the performance requirements of the APSS sensors, whenever a source of environmental noise was found to be larger than the requirement but could be mitigated by environmental decorrelation. A first example is the magnetic field sensitivity of the VBBs, associated with both the micro-motors and the magnetic properties of the spring. Its mitigation was possible either with a mu-metal shield, too heavy with respect to the mass constraints on the deployed sensor assembly, or through magnetometer decorrelation, which was finally chosen for the implementation. A second example is the pressure decorrelation. In both cases, the admittance between the apparent ground acceleration and the perturbing signals (e.g. in \(\mbox{m}\,\mbox{s}^{-2}/\mbox{nT}\) or \(\mbox{m}\,\mbox{s}^{-2}/\mbox{Pa}\)) was estimated and then used to define the performances of these sensors. The instrument noise summary is depicted in detail in Fig. 6 for both the vertical and horizontal VBBs and SPs, which summarizes the results of Mimoun et al. (2017) for the VBBs and extends them to the SPs. During the night, we expect the noise to be below \(2.5\times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) in the body-wave bandwidth and to be close to the Earth low-noise model down to 0.02 Hz, i.e. 50 s. At longer periods, and despite the strong thermal protection, thermal noise is expected to grow rapidly, in a way very similar to that observed at Pinion Flat Observatory (PFO) during tests made on the Earth's surface in desert areas by Lognonné et al. (1996). The environmental noise will exceed the self-noise of the SP only at the 3-sigma level and, most of the time, we expect the SP to be limited by its self-noise.

Fig. 6 Instrument noise. Vertical (top two panels) and horizontal (bottom two panels) noise for the day (left) and night (right) environmental conditions. Horizontal black and red lines represent the instrument performance requirements for the VBB self-noise and the SEIS full noise (with environmental contributions). Performances are presented for mean (50%), nominal \(1\sigma \) (70%) and worst-case \(3\sigma \) (95%) conditions, respectively in dashed, dot-dashed and solid lines for the VBBs. The dashed black curve represents the SP sensor requirement, while the green continuous line is the expected SP noise with environment for mean conditions. Curves are provided in the VBB and SP bandwidths, respectively [0.01–10 Hz] and [0.1–50 Hz].

3.5 Seismic Event Signal-to-Noise and Frequency

Detailed analyses have shown that all instrument requirements listed in Table 3 and related to quakes can be fulfilled with the Mars activity described in Sect. 2.3, the noise level predicted by the instrument noise model of Sect. 3.4 and seismic wave propagation models for the expected structure described in Sects. 2.2 and 3.2. We provide here only two examples and leave the others to Fig. 3, which uses synthetics to capture the signal-to-noise estimation of the different seismic phases of an \(M=10^{15}~\mbox{N}\,\mbox{m}\) marsquake seismogram. The frequency of seismic events has been determined for different assumed maximum marsquake sizes, from estimates of the total seismic moment release per year and of the slope of the negative power law that defines the number of marsquakes of any size, as sketched below.
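As an illustration of this bookkeeping, the sketch below calibrates a cumulative power-law size distribution against an annual moment budget and a maximum event size. The slope of 0.7, the \(10^{18}\) N m/yr budget and the \(10^{18}\) N m maximum event are illustrative choices only, picked because they roughly reproduce the average-seismicity counts quoted in Sect. 2.3 (about 50, 10 and 2 events per year above \(10^{15}\), \(10^{16}\) and \(10^{17}\) N m).

```python
# Toy marsquake frequency-size model: cumulative count N(>M) = a * M**(-beta).
# beta, the annual moment budget and the maximum event are illustrative only.
beta = 0.7                 # assumed cumulative slope
moment_budget = 1e18       # assumed total moment release, N m per year
m_max = 1e18               # assumed largest marsquake, N m

# Integrating M dN from 0 to m_max gives budget = a*beta/(1-beta) * m_max**(1-beta)
a = moment_budget * (1.0 - beta) / (beta * m_max ** (1.0 - beta))

for m in (1e15, 1e16, 1e17):
    print(f"N(>{m:.0e} N m) ~ {a * m ** (-beta):.0f} per year")
# ~54, ~11 and ~2 events per year, close to the counts quoted in Sect. 2.3
```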
Intermediate estimates suggest hundreds of marsquakes per year with seismic moments above \(10^{13}~\mbox{N}\,\mbox{m}\), which is approximately the minimum size for detection of P waves with a sufficient signal-to-noise ratio at epicentral distances up to \(60^{\circ}\) (Mocquet 1999; Teanby and Wookey 2011; Böse et al. 2017; Clinton et al. 2017, 2018). In addition, there should be 4–40 teleseismic events (i.e., globally detectable) per year, which are estimated to have a seismic moment of \(\sim 10^{15}~\mbox{N}\,\mbox{m}\), and 1–10 events per year large enough to produce detectable surface waves propagating completely around the planet, which are suitable for additional source-location techniques (Panning et al. 2015) and fulfill requirement L2-1 within one Mars year of operation. As shown in Fig. 6, the noise in an octave bandwidth around 1 Hz is expected to be in the range of \(2\mbox{--}3 \times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\), one order of magnitude smaller than the P-wave amplitude at 90° epicentral distance for a moment of \(10^{15}~\mbox{N}\,\mbox{m}\) (Fig. 2). For S waves, the amplitude peak will occur at longer periods and, for one octave around 0.1 Hz, with a ratio of about 5. This illustrates the fulfillment of requirement L2-7b of Table 3. More representative tests were done in the framework of the blind test proposed by Clinton et al. (2017). Analysis of one (Earth) year of synthetic quake data with the current best estimate of the noise model was performed with the tools of the Marsquake Service (MQS) as well as those of the other test participants. Figure 7 shows the results from MQS, indicating that 7 quakes were detected and located using R1/R2/R3, with an additional 27 quakes detected with R1 only. In practice, the detection of R1/R2 only is most of the time rare, as R3 can then be detected even in low signal-to-noise conditions. Over one Martian year, this is comparable to L2-3 and much more than L2-1. In addition, this test was made without pressure decorrelation, which could significantly improve the number of detected Rayleigh waves at long periods.

Fig. 7 Summary of Marsquake Service performance in the blind test. All events included in the one year of data are shown. MQS detected the events shown in red and green; those in green meet the L1 requirements. Squares indicate the events located using R1/R2/R3, triangles were located with R1, and circles are for events with only P and S waves. The grey curve indicates the detection threshold and the black curve the location threshold, as a function of distance. See details in Clinton et al. (2018).

Last but not least, and as noted in Sect. A.5 of Appendix A, an event large enough to create an observable excitation of the planet's free oscillations (seismic moment of \(\sim 10^{18}~\mbox{N}\,\mbox{m}\)) may be expected to occur during the nominal mission if the seismic activity level is near the upper bound of the range of reasonable estimates.

3.6 Challenges and New Science Goals

3.6.1 Pressure and Environmental Decorrelation

As indicated in Sect. 3.4 and described in detail in Mimoun et al. (2017), the pressure-induced noise associated with the deformation of the low-rigidity surface will likely be the limiting factor at long periods (\(T>15\mbox{--}20~\mbox{s}\)) and possibly at short periods during the day. The pressure decorrelation proposed by Murdoch et al. (2017b) will therefore likely improve significantly the detection and analysis of quakes; a schematic illustration of the idea is sketched below, and its efficiency on the blind-test data is discussed next.
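The sketch below is a deliberately generic least-squares decorrelation, removing from a seismic channel the component that is linearly predictable from a co-recorded pressure channel. It is an illustration of the principle only: the admittance value and the synthetic signals are placeholders, and the actual InSight processing follows the compliance-based approach of Murdoch et al. (2017b).

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 2.0, 7200                      # 2 sps, one hour of synthetic data
t = np.arange(n) / fs

pressure = rng.normal(size=n).cumsum()  # reddish synthetic pressure record (Pa)
admittance = 2e-9                       # placeholder ground response, (m/s^2)/Pa
quake = 1e-8 * np.sin(2 * np.pi * 0.03 * t) * np.exp(-((t - 1800) / 300) ** 2)
seis = admittance * pressure + quake + 1e-10 * rng.normal(size=n)

# Least-squares estimate of the admittance, then subtraction of the
# pressure-predicted part of the seismic signal.
a_hat = np.dot(pressure, seis) / np.dot(pressure, pressure)
cleaned = seis - a_hat * pressure

print(f"estimated admittance: {a_hat:.2e} (m/s^2)/Pa")
print(f"rms before: {seis.std():.2e}, after: {cleaned.std():.2e} m/s^2")
```

In practice the decorrelation is performed per frequency band and uses the pressure and wind records together with a ground compliance model, but the principle of removing the pressure-coherent part of the signal is the same.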
The efficiency of this decorrelation is illustrated by synthetic waveforms generated for the Marsquake Service blind test (Clinton et al. 2017), which include realistic pressure noise from large-eddy simulations. Figure 8 shows the benefit for the largest quake in the blind-test catalog: an \(M_{w}=5\) quake at \(35^{\circ}\) epicentral distance. Since the quake happens during the nighttime, no decorrelation is applied for the first three hours after the origin time. After that, however, an improvement in the SNR of the multi-orbit Rayleigh waves is achieved by decorrelation in the 20–30 s band. A second example, for an \(M_{w}=3.7\) quake at \(66^{\circ}\) epicentral distance, is also shown. In this case, the quake occurs during the daytime and pressure decorrelation helps in identifying the surface-wave trains. Indeed, while the body-wave arrivals are clearly visible in the high-pass filtered data, the Rayleigh waves are partially hidden by the background noise. These perspectives explain not only the integration of the APSS suite in the InSight payload (Banfield et al. 2018) but also why the APSS data will be distributed by the SEIS ground system to seismological users in the same SEED format as the seismic data (see Sect. 9.2).

Fig. 8 Example of pressure decorrelation efficiency from synthetic tests following the techniques of Murdoch et al. (2017b). The event on the left is the one of the blind-test data on 22/09/2019; pressure decorrelation will enable much better long-period observations and therefore normal-mode analyses. The right example shows the R1 of a smaller quake on 27/03/2019. The bottom trace, in black, shows a clear detection of R1.

3.6.2 Natural Impacts (L1-8)

Meteorite impacts provide an additional source of seismic events for analysis. On the Moon, impacts constitute \(\sim 20\%\) of all observed events and a similarly large number are expected for Mars: for a comparable mass, the frequency of impacts is \(2\mbox{--}4{\times}\) larger for Mars but the velocity is \(\sim 2{\times}\) smaller because of deceleration in the atmosphere (even less for the smallest events). The Apollo 14 seismometer detected about 100 events per year generating ground velocities larger than \(10^{-9}~\mbox{m}/\mbox{s}\) and 10 per year with ground velocities larger than \(10^{-8}~\mbox{m}/\mbox{s}\) (Lognonné et al. 2009). Large uncertainties remain, prior to landing, on the amplitude of impact signals and on the atmospheric noise in the relatively high-frequency bandwidth where the body waves of impacts might peak (0.5–3.5 Hz), but where the self-noise of the pressure sensor might also prevent effective pressure decorrelation. Most of these uncertainties are related to the impact equivalent sources and add to those in the impact rate, for which an uncertainty of a factor of 2–3 remains. Nevertheless, using estimates of impactor flux, seismic efficiency and crater scaling laws, Teanby and Wookey (2011) predict that globally detectable impacts are rare, with an estimate of \(\sim 1\) large event per year. Regional decameter-scale impacts, within \(\sim 2000\mbox{--}3000~\mbox{km}\) of the lander, are more frequent, with \(\sim 10\) detectable events predicted per year (Teanby 2015). These estimates are consistent with those of Lognonné and Johnson (2015) using an independent approach based on the seismic impulse in the long-period limit.
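To give a feel for the seismic-efficiency approach (the impulse-based approach instead works directly with the impactor momentum), the toy calculation below converts an assumed impactor mass and velocity into radiated seismic energy for a range of efficiencies. All numbers here are illustrative placeholders, not the calibrated values used by Teanby and Wookey (2011).

```python
# Toy seismic-efficiency estimate for a small impactor.
# Mass, velocity and the efficiency range are illustrative assumptions only.
mass_kg = 10.0          # assumed impactor mass
velocity_ms = 5000.0    # assumed impact velocity after atmospheric deceleration

kinetic_energy = 0.5 * mass_kg * velocity_ms ** 2   # J
for efficiency in (1e-5, 1e-4, 1e-3):               # assumed seismic efficiencies
    seismic_energy = efficiency * kinetic_energy
    print(f"k = {efficiency:.0e}: E_seismic ~ {seismic_energy:.1e} J")
# The wide spread in the assumed efficiency dominates the uncertainty,
# which is one reason equivalent-source approaches are developed in parallel.
```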
Nevertheless, even if only a few impacts can be detected seismically and located from orbital images, their potential for constraining the crustal structure will be much greater than that of a single marsquake, because the source location will be known, removing a major unknown. The search in orbital images for fresh impacts after the detection of a seismic signal associated with a shallow source is thus an important part of the InSight SEIS investigation (Daubar et al. 2018). See also more on impact estimates in Gudkova et al. (2015), Yasui et al. (2015), Lognonné and Kawamura (2015), Güldemeister and Wünnemann (2017), and on airburst estimates in Stevanović et al. (2017), Garcia et al. (2017), Karakostas et al. (2018).

3.6.3 Phobos Tide (L1-5)

The Phobos tides, which are \(\sim 0.5\times 10^{-8}\mbox{ m}/\mbox{s}^{2}\), are subdiurnal, with periods of about 5.5 hr (Van Hoolst et al. 2003). They are thus below the primary seismic frequency range and provide a unique link between high-frequency (seismic) and ultra-low-frequency (geodetic) observations of Mars' interior; through their gravimetric factor, they provide constraints on the core (Lognonné and Mosser 1993; Van Hoolst et al. 2003; Panning et al. 2017). The measurement is essentially limited by the temperature noise (\(\sim 0.5~\mbox{K}\,\mbox{rms}\) in a bandwidth of 1 mHz around the Phobos orbital frequency) and by the calibration precision of the gravimetric output of the seismometers. While the first source of noise can be mitigated because the period of the Phobos tide is not synchronized with the sol harmonics, the second is a systematic source of error and will be the limiting factor. Figure 9 illustrates the challenge by showing the differences in the gravimetric factor associated with the \(l=2\), \(m=2\) Phobos tide for the different models listed by Smrekar et al. (2019, this issue). Section 7.3.2 and Pou et al. (2018) report on the prospects for absolute calibration of the VBB or SP outputs, suggesting a conservative target of 0.5% calibration error. This only allows the extreme ends of the model range to be distinguished. But as noted by Lognonné et al. (1996) and Van Hoolst et al. (2003), Phobos is close enough to Mars to generate a relatively large \(l=4\), \(m=4\) harmonic, with an amplitude 5.5 times smaller at a frequency about 2 times higher. This provides an alternative for the characterization of the core, by using as a proxy the ratio between the \(l=2\), \(m=2\) and the \(l=4\), \(m=4\) factors, which will not be affected by the absolute calibration error, as the latter cancels in such a ratio. Although smaller by a factor of two (see Fig. 9), this will provide additional constraints on the interior, independent of the planet's seismicity. More physically, this will balance the tidal response of the upper mantle (\(l=4\), \(m=4\)) against the whole-planet tide (\(l=2\), \(m=2\)). Such an analysis will be performed in the framework of the Mars Structure Service activities (Panning et al. 2017 and Sect. 8.3).

Fig. 9 Deviation of the Phobos gravimetric factor for \(l=2\), \(m=2\) (in red) and of the ratio between the \(l=2\), \(m=2\) and \(l=4\), \(m=4\) factors (in blue). The latter varies by about \(\pm 0.4\%\) over the range of a priori models but does not depend on an absolute calibration of SEIS.

3.6.4 New Science Goals

Last but not least, new science investigations of the Martian subsurface, which were not initially integrated in the CSR, will be conducted.
The first is associated with the monitoring of the seismic signal generated by HP3 as it penetrates the ground. This will allow both the measurement of body waves (Kedar et al. 2017) and possibly also a first attempt at 6-axis seismology using the 6 SEIS sensors (Fayon et al. 2018), even if this will require careful processing of the SEIS data during the passive and active cross-calibration phases (see Sect. 7.3). Other investigations include joint analysis of the SEIS and APSS data, using SEIS to investigate signals associated with lander wind-generated noise (Murdoch et al. 2017a), ground deformation generated by dust devils and atmospheric boundary-layer activity (Lorenz et al. 2015; Murdoch et al. 2017b; Kenda et al. 2017), dust devil detection (Lorenz et al. 2015; Kenda et al. 2017) or short-period Rayleigh waves (Kenda et al. 2017; Knapmeyer-Endrun et al. 2017). These signals will not only help us to determine the subsurface structure, but might also provide a new tool for monitoring the atmosphere (Spiga et al. 2018).

4 SEIS Description

4.1 SEIS Overall Description

Both the VBB and the SP are feedback seismometers based on capacitive transducers and inherit from the development of very-broad-band seismometers on Earth since the early 1980s (e.g. Wielandt and Streckeisen 1982). For more information on broad-band seismometers, see e.g. Wielandt (2002) and Ackerley (2014). For a review of past planetary seismometers, see Lognonné (2005), Lognonné and Pike (2015). Due to the constraints of mass, launch and space environment, the limitations of space-qualified technology and the very large temperature variations on Mars, the SEIS experiment has however been entirely designed for the purpose of planetary seismometry. We provide in this section a first overview, followed by a more detailed description of the different SEIS subsystems in Sect. 5. Section 6 describes the performance, instrument noise and transfer functions, and Sect. 7 their operation. The SEIS instrument has 4 main components (Fig. 10):
(1) The Sensor Assembly (SA) (Fig. 11). It accommodates two independent, 3-axis seismometers: a Very Broad Band (VBB) oblique seismometer and a miniature Short Period (SP) seismometer. Both seismometers and their respective signal preamplifier stages are mounted on a common structure which can be precisely levelled thanks to 3 legs of tunable length. They are protected against thermal noise by a thermal blanket (RWEB, the Remote Warm Enclosure Box). The Sensor Assembly is stored on the lander's deck for launch, cruise to Mars, EDL (Entry, Descent and Landing) and the first days of operation on Mars. It is then deployed on the ground of the planet with a motorized arm (Fig. 12). A drawing of the SA is provided in supplementary material 1.
(2) The EBox (Electronic Box), a set of electronic cards located inside the lander's thermal enclosure.
(3) The tether, which makes the electrical link between the SA and the EBox.
(4) The WTS (Wind and Thermal Shield), which is deployed after the SA and over it. It gives extra protection against winds and temperature variations.

Fig. 10 SEIS experiment subsystems, together with the institutions leading the subsystems.
Fig. 11 SEIS Sensor Assembly (SA) without the RWEB.

Funding of all subsystems was provided without fund transfer and was therefore supported by the French (CNES), US (NASA, within the JPL Discovery contract), German (DLR), Swiss (SSO) and UK (UKSA) space agencies. Additional human-resources support was provided by the national academic and research organizations.
See details in the Acknowledgements section. CNES has in addition provided the overall project management, and ESA has managed the Swiss ETHZ contribution through PRODEX.

4.1.2 SEIS Seismic Sensors

SEIS has 2 sets of three-axis seismometers, the VBB sensors and the SP sensors, described in more detail in Sects. 5.1 and 5.2 respectively. In this section, we focus on comparing the two sets of sensors. The VBBs have been developed by IPGP since the end of the 1990s, following CNES R&D. For the pendulum, the early prototypes and the InSight qualification, engineering and flight units were built by SODERN. EREMS was in charge of the InSight VBB feedback cards. The SP sensors were designed by Imperial College and their electronics were developed by Oxford University and Kinemetrics. Both the VBBs and the SPs are inherited from developments initiated in the mid-1990s by the InterMarsnet and Marsnet ESA-NASA projects. The joint VBB and SP configuration was also the baseline for the NetLander project (Lognonné et al. 2003), at that time with 2 VBBs and 2 SPs (Lognonné et al. 2000). The VBBs are oblique sensors, recording the U, V, W ground velocities in a non-Galperin configuration with a tilt of about 30° with respect to the ground horizontal, while the SPs are vertical (SP1) and horizontal (SP2 and SP3). Both the VBBs and the SPs are feedback sensors, with their feedback cards inside the lander thermal enclosure, proximity electronics on the sensor assembly and analogue feedback signals transmitted through the tether. The VBBs however have an increased built-in robustness, as each VBB axis is completely autonomous, including the quartz oscillator driving the displacement sensor. The 3 SP axes, on the other hand, all share the same oscillator and their 3 feedback circuits are integrated on a single electronic board. In comparison, a failure of any VBB axis prevents the synthesis of a vertical output from the VBBs, while SP1 can provide it irrespective of a failure of SP2 or SP3. Taken together, in their common bandwidth the VBBs and SPs will provide fully redundant 3-axis seismic measurements, and any failing VBB (or SP) axis can be replaced by any SP (or VBB) axis, as no VBB axis is parallel to any SP axis. This configuration in addition offers the possibility, when all 6 sensor axes operate nominally, of performing 6-axis high-frequency seismological measurements, as developed by Fayon et al. (2018). The VBB sensors target the monitoring of the 0.01–5 Hz bandwidth, while the SPs target the 0.1–50 Hz bandwidth. Because of their different natural frequencies (0.5 Hz for the VBBs and 6 Hz for the SPs), the VBBs have a larger mechanical gain (\(>0.11~\mbox{s}^{2}\)) than the SPs (\(7\times 10^{-4}~\mbox{s}^{2}\)), but lower high-frequency cut-off frequencies. The VBBs therefore demonstrate better performance at long periods than the SPs, while the latter are best at short periods. The VBBs were therefore required to meet a self-noise better than \(10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) between 0.01 Hz and 1 Hz, while the SPs were required to meet a self-noise better than \(10^{-8}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) between 0.1 Hz and 1 Hz (see Table 4). Their performances are comparable between 3 and 5 Hz (depending on test results) and their transfer functions are compared, for the velocity outputs, in Fig. 13.
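The quoted mechanical gains are close to the classical low-frequency displacement response of a simple pendulum to acceleration, \(1/\omega_{0}^{2}\) with \(\omega_{0}=2\pi f_{0}\). The sketch below is only a rough plausibility check of that relation for the two natural frequencies quoted above; the VBB value quoted in the text is slightly higher than this idealized estimate, which is unsurprising given its specific pendulum geometry.

```python
import math

def pendulum_gain_s2(f0_hz):
    """Low-frequency mass displacement per unit ground acceleration, 1/omega0^2."""
    return 1.0 / (2.0 * math.pi * f0_hz) ** 2

print(f"VBB (f0 = 0.5 Hz): {pendulum_gain_s2(0.5):.3f} s^2")   # ~0.10 s^2
print(f"SP  (f0 = 6 Hz):   {pendulum_gain_s2(6.0):.1e} s^2")   # ~7.0e-04 s^2
```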
On the other hand, the factor of about 100 larger mechanical gain makes the VBBs much more sensitive to installation tilt: the VBBs must therefore be levelled and can operate in nominal configuration, without saturation, up to about 0.25° and 0.02° of tilt in their lowest and highest sensitivity modes respectively, while the SPs can operate under up to 15° of tilt. Each VBB can however still operate in non-nominal conditions for tilts of its sensitivity axis from \(-2.8^{\circ}\) to \(3.5^{\circ}\), but with a free frequency varying from unstable (about \(i\times 0.2~\mbox{Hz}\), where \(i\) is the imaginary unit, such that \(i^{2}=-1\)) to 0.70 Hz. The feedback is nevertheless strong enough to accommodate unstable free frequencies. The three VBBs can therefore operate in a non-nominal mode within \(\pm 2.5^{\circ}\) of tilt of the levelling system (LVL). The baseline acquisition rates of the VBBs and SPs are 20 and 100 samples per second (sps) respectively, both acquired with a 24-bit acquisition system. In the common seismic bandwidth (0.1–5 Hz), the outputs of both the VBBs and the SPs are flat in velocity in their nominal mode, and the low gain of the VBBs is about 55% larger than the high gain of the SPs (\(2.8\times 10^{10}~\mbox{DU}\,\mbox{s}/\mbox{m}\) versus \(1.8\times 10^{10}~\mbox{DU}\,\mbox{s}/\mbox{m}\)). The high gain of the VBB is about 5 times larger than the high gain of the SP. As both sensors are feedback sensors, their long-period noise in the flat-velocity mode is related to both the electronic feedback self-noise and the displacement transducer noise. With a requirement 10 times less stringent than that of the VBBs at 0.1 Hz but comparable space-qualified technologies for the feedback amplifiers, the SP velocity self-noise at 0.1 Hz is mostly related to the displacement transducer noise, while the VBB displacement and feedback noises are comparable at 0.01 Hz, even with the larger electronic gain. The VBBs therefore have, in addition to their velocity output, a DC-coupled, flat-in-acceleration, very-long-period output (POS), which is much less sensitive to the integrator feedback noise. The SPs also have this output, but the SP POS output is acquired with only a 12-bit A/D converter, while the VBB one is acquired with a 24-bit A/D converter. The mission schedule was compatible with only a few days of passive seismic monitoring at the different stages of VBB integration, both before and after their integration in the sphere. Nevertheless, several earthquakes were observed by the Flight sensors during testing activities, including cold tests. Figure 14 shows two such earthquakes, of \(M_{w} = 7.8\) (a) and \(M_{w} = 3.9\) (b), recorded by two VBBs located in the IPGP 'Observatoire de Saint Maur' facility. After filtering of the \(M_{w} = 7.8\) earthquake, we clearly observe the surface-wave packet with the highest amplitude at 20h00, as well as the PP and SS phases. A short-period filter (0.3–2 Hz) reveals the \(M_{w} = 3.9\) earthquake, which was hidden between two signals from suburban commuter trains. A non-flight (but similar) model vertical-axis SP was field-tested at ambient temperature, inclined to match Mars gravity, over six days in the Kinemetrics test vault in Acton, Southern California. Twelve events from \(M_{w}\) 1.4 to 6.3 were recorded. Figure 15 shows a low-magnitude local event expressed as a spectrogram and as three time series, at the full 80 Hz bandwidth from 200 sps (labeled SP), downsampled to the continuous stream of 2 sps (cont.
SP) and using the energy in a 4 to 16 Hz filter downsampled to 2 sps (ESTA). While there is no recognizable event in the continuous data, the ESTA time series (labelled ESP in Fig. 15) correctly identifies the event, validating the approach adopted for InSight's data downlink, at least for local terrestrial events. A larger, \(M_{w} = 7.7\), teleseismic event is shown in Fig. 16. This was also detected during SP testing in Oxford, again at ambient temperature. The source was 29 km SW of Agriha, Northern Mariana Islands, at 2016-07-29 21:18:24 UTC. The P and S waves are seen in both the reference and SP time series, with the R1 Rayleigh waves seen most clearly in the spectrogram, climbing in frequency to 0.06 Hz. The derived SP sensor noise, which is the incoherent difference with the reference sensor, is stable over time with no glitches, and there is a very good match to the reference in the time domain.

4.1.3 LVL and Tiltmeters

The SEIS Leveling System (LVL) has been developed by the Max Planck Institute for Solar System Research (MPS) and is detailed in Sect. 5.3. It has several purposes: providing the main structure of the SEIS sensor assembly (SA) and a "rigid" link to the ground, allowing the precise leveling of the SA on slopes of up to 15° or on rocky ground, and measuring the tilt angle precisely, plus other functions such as supporting the science temperature sensors, heaters and sensor thermal protection, or performing the active tilt calibration of the 6 axes on Mars. The LVL consists of two main parts: a mechanical part, the leveling structure, which is the central part of the Sensor Assembly, and an electrical part, the Motor Driver Electronics (MDE), integrated in the EBox. The main part of the leveling structure is a structural ring, on which the following components are mounted:
The three expandable legs: driven by stepper motors, these legs are able to compensate tilts of the SA of up to 15° (Fig. 17) and have a displacement resolution of roughly \(0.6~\upmu\mbox{m}\). Their geometry has been optimized in order to maximize the stiffness and to minimize any backlash. At the bottom of the legs are cone-shaped feet with a geometry optimized to provide a good interface with the Martian soil and to anchor SEIS against the horizontal sliding generated by the tether's thermoelastic deformations. See Fayon et al. (2018) for more details on the feet.
The tiltmeters: two types of tiltmeters are integrated on the LVL ring, a two-axis MEMS sensor for coarse leveling (resolution better than 0.1°) and two single-axis High Precision (HP) tiltmeters for fine leveling (resolution better than 1 arcsec).
The heaters: three heaters are mounted in series inside the ring in order to face the cold temperatures during wintertime. They provide a heating power of 1.5 W.
The Science Temperature sensors (SCIT): two sensors are mounted on the ring.
The spider structure: the mechanical link between the LVL ring and the grapple hook, which is the interface that will be grabbed by the deployment arm of the lander.
The SPs (see Sect. 5.2).
The VBB proximity electronics (see Sect. 5.1.4).
The interface with the cradle (see Sect. 5.7) and the VBB sphere.
Figure 18 shows the placement of all these subsystems in the sensor assembly. The MDE card controls the LVL from the EBox, where it is integrated. It activates the stepper motors of the legs as well as the heaters, and acquires the signals from the tiltmeters. It also provides diagnostics and protection against motor overheating.
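To relate these tilt resolutions to the acceleration quantities used elsewhere in this paper, recall that a horizontal seismometer senses a static tilt θ as an apparent acceleration of roughly g·sin θ. The sketch below evaluates this for Mars surface gravity (about 3.71 m/s², a standard value), purely as an order-of-magnitude illustration rather than a statement of the LVL error budget.

```python
import math

G_MARS = 3.71  # m/s^2, approximate Mars surface gravity

def tilt_to_acceleration(tilt_deg):
    """Apparent horizontal acceleration g*sin(theta) produced by a static tilt."""
    return G_MARS * math.sin(math.radians(tilt_deg))

print(f"coarse MEMS step, 0.1 deg : {tilt_to_acceleration(0.1):.1e} m/s^2")
print(f"HP tiltmeter, 1 arcsec    : {tilt_to_acceleration(1.0 / 3600.0):.1e} m/s^2")
```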
4.1.4 EBox

The Electronic Box (EBox, Fig. 19) of SEIS has been developed by ETH Zurich (ETHZ), with the exception of the VBB and SP sensor feedback cards and the LVL-MDE control card, which are integrated inside the EBox. See Sect. 5.4 for more details on the power and acquisition parts of the EBox, and Sects. 5.1, 5.2 and 5.3 for those related to the VBB, SP and LVL-MDE cards. The EBox contains the main part of the SEIS electronics and is located inside the lander's Warm Electronic Box. Thus, it is not subjected to the same environmental constraints as the SA: its temperature will remain within the MIL temperature range but is nevertheless not stable, with significant changes occurring when the lander operates. Of course, the EBox stays in the same location while the SA is deployed on the ground. Figure 20 shows the electronic boards integrated in the EBox:
3 VBB-FB (feedback) boards, delivered by IPGP for the VBBs,
1 SP-FB board, delivered by Oxford University for the SPs,
1 LVL-MDE board, delivered by MPS for the LVL,
1 SEIS-AC (including 1 ACQuisition and 2 ConTroL boards for redundancy) from ETHZ,
2 SEIS-DC modules from ETHZ, which receive the 28 V primary power line and provide all secondary voltage lines to the other subsystems.
As the electrical interface of SEIS with the lander, the EBox is controlled by the lander's Command and Data Handling (C&DH) and powered by the Power Distribution and Drive Unit (PDDU). When the lander is in sleep mode, the EBox provides operating power to the sensor units with a stabilized voltage. In addition, the SEIS-AC main controller board is in charge of the acquisition of the scientific signals. This board digitizes the analog signals and can store up to 65 hours of data. Data are transferred to the lander during its wake-up periods and then processed and transmitted to Earth.

4.1.5 Tether and LSA

Once on the ground, the SEIS seismometer will remain connected to the InSight lander through a sophisticated umbilical tether in the form of a semi-rigid flat cable, called the Tether. See Sect. 5.5.2 for a more detailed description and Fig. 21 for a general overview. The tether has the following main functions:
Provide an electrical link between the EBox and the SA. This is performed thanks to 4 sub-parts (TSA-1 to 4) connected together.
Allow for the deployment of the SA on the ground. This is performed in particular thanks to the TSB (Tether Storage Box), which contains the TSA-2 part and releases it just before deployment.
Decouple the SA from the mechanical noise that could come from the lander. This is performed thanks to the LSA (Load Shunt Assembly), which is an extra loop of the TSA-1 that is released by a frangibolt (a Shape Memory Alloy launch lock device).

4.1.6 RWEB and WTS

The RWEB and WTS are the "portable" seismic vault of the instrument and are intended to provide strong thermal, wind and sun protection to the instrument, in addition to maintaining the ground near SEIS in permanent shadow and reducing the tilt effects associated with ground temperature changes. See Sect. 5.6 for a more detailed description. Figure 22 gives an idea of the several barriers between the seismic sensors and the Martian environment. The first barrier (RWEB, or Remote Warm Enclosure Box) is made of titanium and mylar and uses the Martian atmosphere as an insulator. Thanks to its reduced gaps, it prevents convection from developing. It is part of the Sensor Assembly. The second barrier (WTS, or Wind and Thermal Shield) provides extra protection against the winds and the thermal variations.
Three legs support a dome from which a skirt hangs. The skirt is able to adapt to the terrain in order to provide maximum protection.

4.1.7 Cradle

The Cradle subsystem (see Sect. 5.7 for a more detailed description) is made of three nearly identical turrets at 120° around the SEIS Sensor Assembly and provides two main functions:
It connects the SA to the spacecraft until it is deployed on Mars. In particular, each turret is fitted with an elastomer damper that limits the mechanical loads seen by the SA.
It allows the separation of the SA from the spacecraft in order to start the deployment. To do so, each turret is fitted with an off-the-shelf frangibolt (a shape memory alloy launch lock device) that breaks a titanium fastener when heated. Each frangibolt has nominal and redundant heater circuits that are connected to the unregulated load switches of the spacecraft. See Fig. 23 for their location.

4.1.8 Instrument Architecture and Integration Process

The storyboard shown in Fig. 24 gives a better idea of the sensor assembly organization, even if it is not fully representative of the order in which the components are integrated. The VBB sensor heads are integrated first, one after the other, in the sphere crown. Each VBB was therefore compatible with either the Flight model or the Spare model, enabling the selection of the three best VBBs for the Flight model. See the movie in supplementary material 2 for the mounting of the last VBB on the crown. The shells are then welded onto the crown. The sphere is evacuated and outgassed during a 2-week bakeout process and the exhaust tube (queusot) is pinched off. After the thermal and functional tests of the sphere, performed at IPGP for all VBBs with their proximity electronics and generic feedback cards, the sphere was delivered to CNES Toulouse for further integration. The sphere and the VBB proximity electronics are then integrated on the MPS-delivered Leveling System (LVL), as are the SPs delivered by Imperial College. After connection of the VBB, SP and LVL tethers, the RWEB is finally placed to close the sensor assembly.

4.1.9 Instrument Budgets

Power Budget The SEIS instrument is powered by the non-regulated primary 28 V bus of the InSight spacecraft, directly connected to the lander batteries, which are recharged by the solar panel generator during daytime. The power of the SEIS Flight Model instrument was measured during Assembly, Integration and Test (AIT) in a standalone test where the SEIS instrument was powered by a commercial power supply. Additional tests and measurements were made with the SEIS instrument connected to the Flight Model lander and powered by the lander power supply subsystem. The EBox has a power supply subsystem, taking power from the primary lines of the lander and powering the EBox internal cards: the 3 VBB feedback boards (VBB-FB), the SP feedback board (SP-FB), the LVL Motor Driver Electronics card (LVL-MDE) and its internal data acquisition and processing board (SEIS-AC). In normal operating mode, the power of the overall experiment is about 5.9 W. All these powers are measured at the non-regulated primary voltage level of the experiment and include the losses in the DC/DC converter. They are provided in Table 5 for the different modes of the experiment.
Power of the different modes of SEIS, at the primary 28 V level (Table 5; the modes include re-centering and active cross calibration). Mass Budget SEIS is carrying to Mars not only the sensor heads and their electronics, but also all the installation hardware and multilayer environmental protections necessary to fulfill the mission goals in terms of performance. The full mass of the instrument is therefore large, about 28.8 kg, with about 17.1 kg associated either with the instrument mounting on the lander (1.67 kg for the cradle), the windshield (almost 7.3 kg plus 2.2 kg of launch lock assembly) or the tether and tether box associated with the remote installation (5.9 kg). The remaining 11.7 kg are associated with the sensor assembly (6.5 kg) and the Ebox (5.2 kg). All mass breakdown details are listed in Table 6 and correspond to the weighed mass of the Flight units. Mass breakdown of the SEIS experiment. The detail of the mass is provided for the Sensor Assembly, Electronic Box, tether system and Wind Shield. In addition, the mass either lifted by the robotic arm or carried by the cradles is provided. With the exclusion of the tethers, the 3 SP sensors, encapsulated in their boxes and with their feedback card, have a mass of 614 g, while the 3 VBB sensors, encapsulated in the sphere and with their 3 feedback cards, have a mass of 3697 g, about 6 times larger. This is also the ratio between the measured performances of the SPs and VBBs in low-noise seismic vault conditions at 2.5 s (\(\sim 3\times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) for the SPs and \(5\times 10^{-10}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) for the VBBs) and illustrates that both sensors fit well along the optimum slope of -1 between performance and mass, as defined by Pike et al. (2016). The mass of the VBB sphere, LVL and VBB feedback cards is about 5.7 kg and can be compared to Earth instruments with built-in feedback but manual leveling, like the Trillium Compact 120 seismometer (1.2 kg and \(-174~\mbox{dB}\) with respect to \(1~\mbox{m}/\mbox{s}^{2}\), about \(2\times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) at 2.5 s, Nanometrics 2018) or the Streckeisen STS-2.5 (12 kg and \(-194~\mbox{dB}\) with respect to \(1~\mbox{m}/\mbox{s}^{2}\), about \(2\times 10^{-10}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) at 2.5 s, Kinemetrics 2017). The mass times noise-performance product of both the TC120 and the STS-2.5 is about 0.85 of that of the VBB/LVL/FB assembly, and therefore close to 1, despite the VBB capability to perform motorized leveling and its space-qualified status, including the very efficient evacuated sphere with its high thermal protection efficiency. Even though both the SP and VBB sensors are close to the optimum mass, future optimization, if needed for future missions, can nevertheless be made, especially with respect to the tether, which is a very complex piece of hardware. SEIS was indeed limited by the capability of the deployment arm, which constrained the maximum lifted mass of the Sensor Assembly, and by the very cold temperatures encountered by the Sensor Assembly when powered off (\(-115^{\circ}\mbox{C}\)). This prevented the integration of all feedback cards in the sensor assembly, including the quartz oscillator required for the displacement transducers. If future opportunities allow a larger lifted mass, significant optimization will be possible.
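As a rough cross-check of the optimum-slope argument above, the following sketch (Python) multiplies each sensor's mass by its quoted 2.5 s noise floor; a roughly constant product corresponds to a slope of -1 in a log-log mass versus performance plot. The masses and noise levels are the values quoted in the text, but the product itself is only an illustrative figure of merit, not necessarily the exact metric used by Pike et al. (2016).

# Mass x noise product for the sensors quoted above (all values from the text).
sensors = {
    # name: (mass [kg], self-noise at 2.5 s [m/s^2/Hz^0.5])
    "3 SPs + feedback card":        (0.614, 3e-9),
    "3 VBBs (sphere + FB cards)":   (3.697, 5e-10),
    "VBB sphere + LVL + FB cards":  (5.7,   5e-10),
    "Trillium Compact 120":         (1.2,   2e-9),
    "Streckeisen STS-2.5":          (12.0,  2e-10),
}
ref_mass, ref_noise = sensors["VBB sphere + LVL + FB cards"]
for name, (mass, noise) in sensors.items():
    product = mass * noise                       # constant product <=> slope of -1
    ratio = product / (ref_mass * ref_noise)     # ~0.85 quoted for the two Earth instruments
    print(f"{name:30s} mass*noise = {product:.2e}   ratio to VBB/LVL/FB = {ratio:.2f}")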
For Mars missions, other optimizations can be made by improving the aerodynamic shape of the WTS (Nishikawa et al. 2014) and/or by reducing the maximum wind speed under which the WTS is required not to move. This was not done for InSight and, for a static friction coefficient larger than 0.2, the mass of the WTS is compatible with no displacement of the WTS for winds up to \(80~\mbox{m}/\mbox{s}\), therefore providing a high safety margin, even if a large dust devil passes over the WTS. Data Budget The data handling and transmission strategy of the experiment has been designed to ensure that the seismic data and the APSS data (pressure, wind and magnetometer) are recorded continuously during the monitoring mode at the highest data rate and stored in the lander mass memory. This approach is dictated by the need for the full waveform of the seismic signals and for a full environmental monitoring, either for decorrelation or simply to confirm that a seismic signal is not related to weather activity. With the sampling rates listed in Table 7, this leads to about \(1.03~\mbox{Gbits}/\mbox{sol}\), excluding overheads and secondary data from the SEIS flight software. This is too much for a full transmission. Raw data production of SEIS during one sol, before compression. A minimum of 50% compression is expected, and low-noise conditions are expected to reach a compression ratio of 40%. The data transmission strategy has therefore been based on (i) a first lossless compression, (ii) the dump of all short period data in a large mass memory, (iii) the transmission of all low-pass filtered and decimated signals needed to fulfill the science goals below 1 Hz and (iv) a post-selection process, in which events of interest in the high frequency bandwidth (\(> 1~\mbox{Hz}\) for SEIS) will be transmitted as events, with a sampling rate larger than that of the continuous data but which can nevertheless be lower than the acquisition rate (100 sps for SPs, 20 sps for VBBs, Pressure and IFGs and 1 sps for wind), through a tunable decimation by FIRs in the Flight Software. See Sect. 7.2 for more details on the Flight Software. All data are compressed with a lossless STEIM compression (Steim 1994), in which the delta value between two consecutive samples is compressed. This value can also be expressed as $$ d(n) - d ( n -1 ) = \Delta t \frac{d ( n ) - d ( n -1 )}{\Delta t} \approx \frac{ \Delta t}{\mbox{LSB}} \gamma , $$ where \(d(n)\) is the velocity flat output signal in counts, \(\gamma \) is the ground acceleration, \(\Delta t\) the sampling interval in seconds and LSB the velocity flat output LSB in \(\mbox{m}/\mbox{s}\). At their primary sampling rate and in high gain mode, both VBBs and SPs are therefore expected to generate about 9–10 bits per sample, due to their self-noise, after Steim compression and with overhead. The compression ratio of the raw data has therefore been assumed to be 50%, with 2 bits of margin. It is likely, however, especially for the continuous data at 2 sps during the night, that better compression will be achieved. All SEIS data then generate less than \(400~\mbox{Mbits}/\mbox{sol}\) and less than \(550~\mbox{Mbits}/\mbox{sol}\) when the APSS data at high rate (Pressure and MAG at 20 sps and TWINS at 1 sps) are also included. Further data volume reduction was therefore mandatory in order to fit the data into the SEIS allocation of \(38~\mbox{Mbits}/\mbox{sol}\), which includes all APSS data requested for seismic analysis and the flight software data. The chosen strategy, illustrated in Fig. 25, is based on the transmission of both low frequency continuous channels and of selected-event high frequency channels.
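As a rough cross-check of the raw volume quoted above, the following sketch (Python) sums the continuous high-rate channels. The channel mix, the 24-bit sample width and the sol duration are assumptions reconstructed from the rates mentioned in the text rather than a reproduction of Table 7, so the result is only expected to land near the quoted \(1.03~\mbox{Gbits}/\mbox{sol}\).

# Continuous high-rate channels (assumed mix), 24-bit samples, one sol ~ 88775 s.
SOL_SECONDS = 88775
BITS_PER_SAMPLE = 24
channels = {            # name: (number of channels, samples per second)
    "SP velocity":      (3, 100),
    "VBB VEL":          (3, 20),
    "VBB POS":          (3, 20),
    "Pressure":         (1, 20),
    "IFG magnetometer": (3, 20),
    "TWINS wind":       (2, 1),
}
total_sps = sum(n * rate for n, rate in channels.values())
gbits_per_sol = total_sps * BITS_PER_SAMPLE * SOL_SECONDS / 1e9
print(f"~{total_sps} samples/s -> ~{gbits_per_sol:.2f} Gbits/sol before compression")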
All data are therefore first acquired at high frequency by the SEIS acquisition system, with a baseline of 20 sps and 100 sps for the 3 VBBs and 3 SPs respectively, and stored in the flash memory inside the Ebox. Continuous Data About every 10 hours, the SEIS FSW retrieves these data. As the data volume provided by the SEIS instrument is much larger than the InSight system can accommodate, several pieces of software have been implemented to filter and decimate the input raw data onboard (see Sect. 7.2 on the SEIS FSW). These data, at full resolution and frequency, are first stored in a Full Rate Buffer inside the lander flash memory. They are then filtered and decimated to produce a continuous data flow, which is sent through telemetry to orbiters around Mars and then through the JPL Deep Space Network to Earth. This sampling allows full coverage of the DC-1 Hz bandwidth for the VBBs and pressure, with VEL and pressure data sampled at 2 sps plus magnetic data sampled at 0.2 sps, as magnetic noise is expected to be possibly significant above 0.1 Hz. In addition, POS is decimated by two, with an additional numerical gain of 4, to generate a 0.5 sps time series, and additional wind data at 0.1 sps are transferred in order to discriminate aeolian signals from seismic events. Some high-frequency data are in addition partially transmitted continuously, with a VELZ output at 10 sps and an ESTA output at 1 sps. VELZ will be a composite output of the 6 channels, defined as $$\begin{aligned} \mbox{VELZ} =& \mbox{FIRVBB}*(\alpha_{\mathrm{vbb}}\mbox{VBB1} + \beta_{\mathrm{vbb}}\mbox{VBB2} + \gamma_{\mathrm{vbb}}\mbox{VBB3}) \\ &{}+ \mbox{FIRSP}*(\alpha_{\mathrm{sp}}\mbox{SP1} + \beta_{\mathrm{sp}} \mbox{SP2} + \gamma_{\mathrm{sp}}\mbox{SP3}), \end{aligned}$$ where FIRVBB and FIRSP are decimating FIRs, performing an equalization of the outputs with respect to their noise and gain, an anti-alias low-pass filtering and a final decimation from their raw sampling rate to 10 sps. Such filters can for example be used to generate a hybrid output, in a way similar to the one used by Kawamura et al. (2017) for Apollo LP and SP data. The ESTA will be the rms of band-pass filtered data, as defined by $$\begin{aligned} \mbox{ESTASP} = \mbox{rms} \bigl(\mbox{FIRSP}_{\mathrm{esta}}*( \alpha_{\mathrm{sp}}\mbox{SP1} + \beta_{\mathrm{sp}}\mbox{SP2}+ \gamma_{\mathrm{sp}}\mbox{SP3}) \bigr), \end{aligned}$$ for the example of the SP channels. It will capture, typically every second, the high frequency energy in the bandwidth defined by FIRSPesta. Similar ESTA processing is planned for the IFG magnetic data and for the pressure data, in the latter case with the possibility to implement two types of ESTA channels, one for weather events (e.g. dust devils) and the other for infrasound events. Together with housekeeping monitoring (every 100 seconds, except at a higher rate during wake-up) and with clock synchronization data for the relative drift between APSS and SEIS, a total of about 30 Mbits per sol, corresponding to the SEIS allocation for continuous data, are then transmitted. From this allocation, 78% corresponds to continuous SEIS data and 22% to continuous APSS data, including 2% of data headers. See the continuous data budget detail for all channels in Table 8.
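The VELZ combination defined above can be illustrated with the following sketch (Python with NumPy/SciPy). The FIR length, cutoff and the equalization weights \(\alpha\), \(\beta\), \(\gamma\) are placeholders for illustration; the flight coefficients are set in the SEIS flight software and are not reproduced here.

import numpy as np
from scipy.signal import firwin, lfilter

def branch(channels, weights, fs_in, fs_out=10.0, ntaps=63):
    """Weighted sum of 3 channels, anti-alias low-passed and decimated to fs_out."""
    mix = sum(w * c for w, c in zip(weights, channels))
    fir = firwin(ntaps, cutoff=0.4 * fs_out, fs=fs_in)   # placeholder anti-alias FIR
    return lfilter(fir, 1.0, mix)[:: int(fs_in // fs_out)]

rng = np.random.default_rng(1)
vbb = [rng.normal(size=20 * 60) for _ in range(3)]    # 1 minute of 20 sps VBB VEL data
sp = [rng.normal(size=100 * 60) for _ in range(3)]    # 1 minute of 100 sps SP data
w = (1 / 3, 1 / 3, 1 / 3)                             # placeholder equalization weights
velz = branch(vbb, w, fs_in=20.0) + branch(sp, w, fs_in=100.0)   # 10 sps hybrid output
print(velz.shape)                                     # -> (600,)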
Continuous data budget breakdown. About 30 Mbits of continuous data will be transmitted every sol and will provide all seismic information for a full monitoring of the DC-1 Hz bandwidth. Events Data Continuous data will be distributed regularly to both the Science Team and the Mars Quake Service (see Sect. 8.2) with a latency of less than 2 hours after being received on Earth. If a seismic event is suspected, the ground segment can send a request to the lander to retrieve buffer data at the full sample rate. Those event requests can be for data filtered and decimated from the full rate; however, the full rate data can also be downloaded. An additional 8 Mbits have been baselined for the transmission of these events during the nominal monitoring phase. It is planned that transmitted seismic events will be systematically supplemented by high frequency wind and pressure data, which will also be transmitted as events for the same period of time. Data Transmission Update The previous sections describe only the data transmission for the continuous monitoring phase, which is summarized in Fig. 25. The compression ratio may well be better under low noise conditions, enabling a possible increase of the output sampling rates of the continuous channels or the transmission of new channels, including continuous SP channels. During the early phase of the mission, including commissioning, different transmission scenarios have been defined, either with higher-rate continuous data or with event requests associated with calibrations, motor activations, etc. For all the operational scenarios several FSW configuration files have been defined, for example for the cruise phase, for the period when the SEIS instrument is still on the deck of the lander after landing, for the commissioning phase and of course for the routine phase, called the Surface Monitoring phase and described above in detail. 4.2 SEIS Deployment The deployment of SEIS, illustrated in Fig. 26, is completed once the seismometer is placed on the surface of Mars, is leveled and centered, the service loop is released and the seismometer is covered by the free-standing Wind and Thermal Shield (WTS). The following is a detailed description of the carefully orchestrated deployment and verification steps that take place from the point where a surface deployment site is selected by the InSight Site Selection Working Group (ISSWG) to a fully deployed SEIS system as defined above. Each deployment step is verified on Earth by a set of specific measurements, images and other data that help determine that its requirements are met and that it is safe to proceed to the next deployment step. Several deployment steps are known as "committal events", i.e. events that are not reversible. We elaborate on those as we follow the step-by-step SEIS deployment procedure below. 4.2.1 Site Selection For SEIS to operate properly on the surface, a number of deployment requirements must be met. The SEIS requirements as well as some desired characteristics are summarized in Table 9. SEIS has a leveling system that can accommodate up to 15° of tilt, so both SEIS and the WTS must be deployed on surfaces with slopes of \(<15^{\circ}\). This is nevertheless reduced to \(-13^{\circ}\) for tilts lowering the height of the LSA, as the latter cannot be deployed successfully for tilts ranging from \(-15^{\circ}\) to \(-13^{\circ}\). Both the SEIS leveling system and HP3 have clearances of \(\sim 3~\mbox{cm}\) and so must be placed on surfaces with no rocks or protrusions higher than 3 cm.
In addition, the SEIS leveling system can accommodate rocks or protrusions \(<2~\mbox{cm}\) and \(<1~\mbox{cm}\) high for instrument tilts of \(11\mbox{--}13^{\circ}\) and \(13\mbox{--}15^{\circ}\), respectively. For stability, the foot patch roughness or relief must be less than 1.5 cm for both instruments and less than 3 cm for the WTS. The soil beneath both instruments and the WTS must be load bearing, as unequal sinkage could lead to additional tilt. This requirement will be integrated in the final site selection of SEIS, after assessment of the soil geomorphology and properties from picture analysis. After deployment, SEIS and the WTS must not touch (for noise reasons), so the SEIS foot plane (the plane formed by the SEIS feet) must be less than 1.5 cm higher than the WTS foot plane and the relative tilt between the two must be less than 5°. There are also constraints on the location of the SEIS tether pinning mass (to mechanically isolate SEIS from the tether) and the tether field joint (the connection between the two SEIS tether sections, one from the lander and one from the instrument). The pinning mass and field joint must be free of rocks or other obstructions and on a gentle slope so that, if the pinning mass needs to be moved, there will not be obstacles or a tilt hindering the movement.
SEIS and WTS deployment site requirements (Table 9).
Requirements:
SEIS: \(<15^{\circ}\) tilt (must also be \(<12^{\circ}\) of negative pitch); SEIS footplane \(<12^{\circ}\) for negative pitch slopes
WTS: \(<15^{\circ}\) tilt
No rocks under SEIS \(>3~\mbox{cm}\) high for tilts \(\leq 11^{\circ}\)
No rocks under SEIS \(>2~\mbox{cm}\) high for \(11^{\circ} <\) tilt \(\leq 13^{\circ}\)
No rocks under SEIS \(\geq 1~\mbox{cm}\) high for \(13^{\circ} <\) tilt \(\leq 15^{\circ}\)
No rocks under the WTS \(>6~\mbox{cm}\) high, or \(>3~\mbox{cm}\) low under the skirt
No rocks under the Load Shunt Assembly
SEIS foot patch roughness \(<1.5~\mbox{cm}\)
WTS foot patch roughness \(<3~\mbox{cm}\)
Load bearing soil
SEIS/WTS relative placement: SEIS footplane \(<1.5~\mbox{cm}\) higher than the center of the WTS footplane; less than 5° relative tilt between SEIS and WTS; SEIS not to exceed the WTS DNE envelope
Desired characteristics:
SEIS footplane tilt \(<11^{\circ}\)
All three SEIS feet on the same material
SEIS on terrain with positive pitch (uphill from the lander along the tether)
SEIS placed on the right side of the workspace to avoid tether crossing
No rocks under the pinning mass or field joint
Field joint not in a hole, in front of a hole, or in front of a rock
Pinning mass orientation suitable for adjustment with the scoop
No obstacles around the pinning mass
Plane of the tether lower than the plane of the SEIS sensor assembly
Noise (wind and other noise sources): SEIS as far as possible from the lander; SEIS \(\geq 1~\mbox{m}\) (as far as possible) from HP3
In addition to these requirements, one of the key contributors to the SEIS noise is expected to be the mechanical noise of the lander transmitted through the ground to the seismometer (lander wind noise). The noise model described in Sect. 3.4 takes into account a typical deployment location within the zone that the Instrument Deployment Arm (IDA) can reach (shown in Fig. 27). Of course, depending on the actual conditions of the Mars landing site, the mission may not be able to deploy the seismometer at its nominal location. Once landed, a whole week is assigned to the selection of the best site for instrument deployment. To select the location where the instrument is deployed, two families of parameters have to be evaluated.
The first family of parameters is linked to the engineering capabilities of the deployment system: in order to deploy SEIS correctly on its three feet, the underlying terrain needs to have a tilt below 15°, underlying rocks shall not be bigger than 3 cm and the terrain slope shall be compatible with the tether and tether loop deployment. JPL has developed a set of tools that evaluates these site geometric properties for all locations within the deployment zone (Abarca et al. 2018). These tools use images, mosaics and digital elevation models generated from the stereo images taken after landing by the camera on the arm. The tools are designed to quickly determine where the seismometer can be deployed while meeting the requirements. The second family of parameters is related to the lander vibrations induced by the wind. In order to assess the performance impact of the lander motion, the noise model assumes that the ground behaves as an elastic medium. The major parameters influencing this noise contribution are the distance to the lander feet, the mean slope on which the lander is located, and the mean wind speed and direction. We have developed a tool, described in Murdoch et al. (2017a), that estimates the noise map at the site depending on the actual landing conditions. The tool takes into account the contribution from HP3 wind noise. 4.2.2 Sensor Assembly Deployment Deck to Ground After release of the SEIS SA from the deck by the activation of the frangibolts ((1) in Fig. 28), the Instrument Deployment Arm (IDA) picks up SEIS from the spacecraft deck and places it on the Martian surface (Fig. 12 and Fig. 29) at a pre-selected location determined by the ISSWG team based on analysis of the detailed Digital Elevation Model (DEM) derived from the Instrument Deployment Camera (IDC) mounted on the IDA. At this point it is determined that SEIS is on the ground and within recapture constraints, using information from the IDA, the IDC and the lander-mounted Instrument Context Camera (ICC). At the same time the coarse tiltmeter on SEIS is used to verify that SEIS is within the constraints at the selected site, by verifying that the instrument tilt meets the deployment requirements and is in good agreement with the tilt pre-calculated from the DEM. When all of the above requirements are met, a decision to release the IDA grapple that holds the seismometer is made. The Sensor Assembly being deployed on the ground during DST#3 (Deployment System Test). Two segments of the tether (TSA-1, part of TSA-), the Tether Storage Box, the Field Joint and part of the Load Shunt Assembly are also visible. Comparison of the VBB and SP transfer functions. Very long period gains are similar, while the VBB gain is a factor of 4–5 larger between 20 s and 10 Hz. All gains are in Digital Units (DU) per ground velocity (\(\mbox{m}/\mbox{s}\)). Grapple Release The next step is the verification of the grapple release (Fig. 29). This is done using information from the IDC. SEIS Placement Imaging Once the grapple is released, an imaging campaign begins to evaluate the state of SEIS on the ground and establish that its location meets the deployment requirements. IDC stereo imaging of SEIS is acquired to localize it, in order to evaluate the position and orientation of SEIS in the workspace and confirm that the placement constraints were met. Specifically, the imaging focuses on the configuration of the tether that links SEIS to the lander and on the location of the feet, to determine that it is safe to proceed with leveling.
This step includes an analysis to ascertain that the WTS can still be deployed without touching the seismometer. SEIS leveling is a two-step process, including an 'Initial Leveling' followed by a 'Leveling Low' step in which the leveled seismometer is lowered so that its center of mass is as close to the ground as possible without touching the surface. The latter step requires some elaboration as it differs from most terrestrial installations. The SEIS LVL system is capable of leveling the seismometer on a surface tilt of up to 15°. This is a factor of \(\sim 3\) more than most terrestrial seismometer systems. Therefore, unlike most terrestrial installations, there is a possibility that the seismometer is leveled while standing at a significant height above the ground, which can be trimmed down by evenly lowering the seismometer. This is desirable both for shifting vibrational modes of the LVL system to higher frequencies and for allowing more room between the seismometer and the WTS. Initial Leveling The SEIS leveling system (LVL) is activated and SEIS is leveled to within its 0.1° requirement. The tilt is verified using both its coarse and precise tiltmeters. Further imaging of SEIS is used to establish that no significant change to the SEIS system has occurred and to determine the lowering distance for the next step. Leveling Low At this point the SEIS system is evenly lowered to its pre-determined "Low" position and a final tilt of \(<0.1^{\circ}\) is established. In order to ensure no contact with the ground, a Digital Elevation Model (DEM) integrating the current location of the Sensor Assembly will be used. The latter is assessed from images taken by the Instrument Deployment Camera. Using the pebble sizes underneath the Sensor Assembly reported in this DEM, and with a margin of 0.5 cm, the maximum stroke of each leveling leg that lowers the Sensor Assembly as much as possible will be computed. VBB Operations This deployment step is key to determining that it is safe to proceed to the first committal event of releasing the tether from its box on the lander. Until this point, only the Short Period (SP) seismometers were turned on, providing ancillary, non-decisional data. Now that the seismometer is leveled, the Very Broad Band (VBB) sensors are turned on and centered. We proceed with a 12-hour period of daytime monitoring of the VBB in Engineering, low-gain mode (since the seismometer cannot be operated at night without the protection of the WTS) to make sure the sensors do not saturate, as would be the case if the tilt changed by more than 0.25°. Simultaneously, the tilt is monitored for drift by the precise tiltmeter. At this point we are ready to release the Tether Box, the first "committal event". Tether Box Release At the end of the VBB operations step we are ready to commit to the SEIS location, since after release of the tether from its box below the lander deck the ability to change the location of the seismometer would be minimal (a few centimeters). The SEIS tether is released by opening the tether box door (Frangibolt (2) activation, Fig. 28 and Fig. 30) and the tether drops to the Martian surface. Although the team has studied a number of tether configurations based on a range of landing site terrains, slopes and obstacles, it is difficult to predict the precise configuration of the tether once it is released.
While minor adjustment of the tether layout near the interface with the Service Loop is possible by shifting the position of a pinning mass (discussed below) with the IDA, once the tether is deployed it is much more challenging, and therefore unlikely, that the seismometer itself will be moved. Once it is confirmed by the Instrument Context Camera (ICC) that the tether box door is fully open and the tether is completely released, SEIS monitoring operations are continued. Polarization Assessment The next committal event is the opening of the service loop by releasing the tether shunt. The decision to open the service loop is based upon continuing to meet placement, functional and derived performance requirements. This is determined based on the tether configuration, the tilt and tilt stability, and the VBB ability to re-center. This decision is based on imagery, VBB and tilt data. Tether Shunt Release The service loop is opened by activating the Frangibolt that keeps it in the closed position (Frangibolt (3) activation, Fig. 28 and Fig. 31). Once this is done, it is necessary to confirm that the service loop is completely open and that there is no contact between the two parts of the Load Shunt Assembly (LSA) previously held together by the Frangibolt. This is confirmed by an Instrument Deployment Camera (IDC) image. Service Loop Assessment At this stage, the seismic monitoring is continued with particular focus on the analysis of signal polarization that might indicate that the LSA is shorted. If there is contact between the two LSA plates, the free plate can be shifted by using the IDA scoop to pull the pinning mass away from the seismometer. Another key factor is the ability to deploy the WTS over the seismometer without touching any part of the seismometer, including the LSA. Therefore, the final configuration of the seismometer in its "leveled low" position with an open LSA cannot exceed a pre-determined volume which will be encapsulated by the WTS. At this stage, the analysis of the terrain and of the final configuration of the seismometer is carried out to ensure that the seismometer is well within its "Do Not Exceed" volume for WTS placement. The WTS placement is selected accordingly. All the while, VBB monitoring is continued to confirm that the tilt drift remains within the VBB re-centering capability. WTS Deployment The final committal event is the deployment of the Wind and Thermal Shield (Fig. 32), which is picked up from its stowed position on the lander deck (during low wind conditions) after the last Frangibolt activation (Frangibolt (4) activation, Fig. 28) and placed over the seismometer, then confirming that it is on the ground in its desired position with IDA and IDC data. While grappled, the WTS position is approximated using the ICC image and a single IDC image of the WTS. The final position is determined from IDC stereo imaging. Only then is it determined whether or not it is safe to release the grapple from the WTS. Although the IDA is able to re-grapple the WTS, once the WTS is released it is extremely unlikely to be moved again. WTS Grapple Release As before, the verification of the grapple release is done using information from the IDC. WTS Imaging The imaging of the WTS chronicles its final position, which may be used in future data analysis. The VBB can now be turned on continuously and a reduction of the background noise level should be noticeable. With this step completed, SEIS is fully deployed and the mission can proceed with the deployment of HP3.
5 SEIS Sub-system Descriptions 5.1 Very Broad Band Sensors The VBB is an ultra-sensitive, very broad band (VBB), 3-axis oblique seismometer illustrated in Fig. 33. Its function is to transform the ground motion into analog electrical signals recorded and digitized by SEIS-AC. The VBB has two feedback modes. The first is the engineering (ENG) mode, in which the sensor operates as an accelerometer with two outputs flat in acceleration but provided with different gains. The second is the scientific (SCI) mode, in which the feedback provides two outputs: a ground acceleration output proportional to the position of the proof mass, therefore named POS, and a velocity output named VEL. The POS output is flat in acceleration from DC to a few tenths of Hz, while the VEL output is relatively flat in ground velocity from \(1/15~\mbox{Hz}\) to 20 Hz. Note that in ENG mode, the two outputs are still named POS and VEL, even if they are both proportional to ground acceleration. The VBB sensors are a trio of inverted pendulums stabilized with a leaf spring and tuned for Mars gravity (see Sect. 5.1.2 for details on the pendulum, including the spring, and Sect. 6.2.1 for details on their operation in Earth gravity). Each has a dedicated and tunable internal temperature compensation system, activated by micro-motors, as well as a re-centering system also based on micro-motors. They are packaged in an evacuated sphere (EC) with internal passive vacuum pumps (getters) and operate in a high vacuum environment. The getters are described in detail by Petkov et al. (2018). A differential capacitive displacement transducer detects movement of the housing relative to the pendulum and generates an analog output signal amplified by a Proximity Electronics (PE) unit mounted on the LVL ring. This signal is then sent to the Feedback Boards (FB), located in the SEIS electronics in the lander warm box, and feedback signals are returned to the sensor through the tether: a first magnetic-coil actuator performs continuous re-centering in both the ENG and SCI modes, while a second one performs response shaping in the SCI mode. In the SCI mode, the velocity output is derived from the differential component of the analog feedback signal prior to amplification by the output gain amplifiers. The moving parts of the VBB do not need to be locked for launch or EDL, but the VBB must be leveled to within \(\pm 0.3^{\circ}\) of the gravity vector to operate nominally. The 3 sensors enclosed in the spherical evacuated container are identical. Each sensor measures motion along one axis. They are oriented in the sphere so as to record three acceleration directions (U, V, W) forming a tetrahedron and therefore to measure seismic motion in 3 dimensions (Fig. 34). Vertical and horizontal outputs are produced by combining the three outputs after transfer function correction, as sketched below. 5.1.2 Mechanical Pendulum The mobile part of the pendulum is suspended from a fixed frame through a flexure pivot and a leaf spring (Fig. 35). The flexure pivot provides the rotation axis of the pendulum with a very low stiffness (around \(0.003~\mbox{N}\,\mbox{m}/\mbox{rad}\)) and a very high stiffness in the five other degrees of freedom (above \(900~\mbox{N}\,\mbox{m}/\mbox{rad}\)). The flexure hinge allows very low motion damping in the Evacuated Container as there is no sliding, rolling or friction between parts. Electrical signals between the fixed frame and the mobile pendulum are transmitted through the pivot's blades (Fig. 36).
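The (U, V, W) recombination mentioned above can be illustrated with a short sketch (Python). The geometry is an assumption for illustration only: three axes inclined 30° above the horizontal and separated by 120° in azimuth; the flight azimuths and the transfer-function correction are not included.

import numpy as np

theta = np.radians(30.0)                     # inclination above horizontal (assumed)
azimuths = np.radians([0.0, 120.0, 240.0])   # assumed azimuths of the U, V, W axes
# Row i projects a ground acceleration (N, E, Z) onto the i-th oblique axis.
A = np.array([[np.cos(theta) * np.cos(az),
               np.cos(theta) * np.sin(az),
               np.sin(theta)] for az in azimuths])
A_inv = np.linalg.inv(A)                     # recombination matrix: (U, V, W) -> (N, E, Z)

uvw = A @ np.array([0.0, 0.0, 1.0])          # response of the 3 axes to a unit vertical input
print(A_inv @ uvw)                           # recovers ~[0, 0, 1]
# Uncorrelated axis self-noise is amplified on the recomposed vertical by the norm
# of the Z row of A_inv, 2/sqrt(3) ~ 1.15 (the factor quoted with Table 10 below).
print(np.linalg.norm(A_inv[2]))              # ~1.155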
Figure 37 provides a complete view of the pendulum, together with its mechanisms described later in detail, while Fig. 38 shows all the units manufactured for the project. The configuration of the mechanical pendulum is such that the center of mass of the mobile part is above its axis of rotation. This configuration generates an instability, or negative stiffness, which reduces the overall stiffness provided by the pendulum's balancing spring and pivot. The natural frequency of the pendulum can then be expressed as: $$ f_{0} = \frac{1}{2 \pi } \sqrt{ \frac{c - mg D_{g} \cos \alpha + p}{J}} = \frac{1}{2 \pi } \sqrt{ \frac{K}{J}}, $$ where \(c\) is the leaf spring angular stiffness, \(p\) the pivot angular stiffness, \(J\) the moment of inertia of the VBB with respect to the pivot rotation axis, \(m\) the mass of the mobile part of the pendulum, \(g\) the local gravity vector norm, \(D_{g}\) the distance of the center of mass of the pendulum from the pivot axis and \(K\) the overall angular stiffness of the pendulum. \(\alpha \) is the angle between the plane defined by the pivot's rotation axis and the local gravity vector and the vector between the pivot center and the center of mass of the mobile pendulum (see Fig. 35). The vector perpendicular to this plane defines the sensitivity direction of the VBB pendulum (\(\alpha \) is also the angle between that axis and the horizontal plane). The equilibrium of the pendulum is achieved when the leaf spring restoring torque in the zero mechanical position (\(M_{0}\)) equals the gravity moment: $$ M_{0} = m gD_{g} \sin \alpha . $$ As the leaf spring is balancing the pendulum weight, it can be sized in order to obtain the desired pendulum natural frequency. A stiff spring will increase the frequency, while a soft one can lead to an unstable pendulum, as soon as the gravity torque is larger than (\(c+p\)) in Eq. (4). The springs were therefore selected individually for each VBB unit from a family of different springs, in order to compensate for the dispersion of the actual pivot stiffness, weight moment and geometry of each unit. Leaf springs are cut by electrical discharge machining from a 0.12 mm thick Thermelast sheet. They have a trapezoidal shape. Different families with various lengths and widths have been produced to guarantee a good dispersion of properties. After thermal treatment (30 min at \(750^{\circ}\mbox{C}\) to minimize the temperature dependency of their Young's modulus), each spring is characterized by its moment and stiffness. Springs are demagnetized before final mounting to minimize the VBB magnetic sensitivity. The mechanical gain defines the capability of the pendulum to measure low frequency accelerations. It is the ratio of the displacement generated at the level of the displacement transducer to the acceleration applied along the sensitivity axis at very low frequencies. It is given by the following formula: $$ G = \frac{m D_{g} D_{c}}{K}, $$ where \(D_{c}\) is the distance between the pivot and the center of the capacitive plates and the other terms have been defined in Eq. (4). In this inverted pendulum configuration, the gravity term in (4) reduces the overall pendulum stiffness and allows a low natural frequency, in the range of 0.3–0.5 Hz, to be reached. The sensor thus has a high capability to measure low frequency accelerations, a high mechanical gain and a low Brownian noise, while keeping the mobile mass small enough (190 g per axis). Equations (4) and (5) are also useful to understand the properties of the VBBs during Earth tests, for which moment equilibrium must be met (see Sect. 6.2.2).
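A minimal numerical sketch (Python) of Eqs. (4) and (5) is given below. The mobile mass of 190 g is quoted in the text, but the stiffnesses, distances and moment of inertia are illustrative assumptions of the right order of magnitude, not the flight values of Table 10.

import numpy as np

def natural_frequency(c, p, m, g, Dg, alpha, J):
    """Eq. (4): f0 = (1/2pi)*sqrt(K/J), with K = c - m*g*Dg*cos(alpha) + p."""
    K = c - m * g * Dg * np.cos(alpha) + p
    if K <= 0:
        raise ValueError("K <= 0: unstable pendulum (gravity torque exceeds c + p)")
    return np.sqrt(K / J) / (2 * np.pi), K

def mechanical_gain(m, Dg, Dc, K):
    """Eq. (5): displacement at the transducer per unit acceleration, G = m*Dg*Dc/K."""
    return m * Dg * Dc / K

m, Dg, Dc = 0.190, 0.025, 0.05       # kg, m, m (mass from the text, distances assumed)
J = 2.7e-4                           # kg m^2 (assumed)
c, p = 0.014, 0.003                  # N m/rad (assumed spring and pivot stiffnesses)
g_mars, alpha = 3.71, np.radians(30.0)
f0, K = natural_frequency(c, p, m, g_mars, Dg, alpha, J)
G = mechanical_gain(m, Dg, Dc, K)
print(f"f0 ~ {f0:.2f} Hz, K ~ {K:.2e} N m/rad, G ~ {G * 1e3:.0f} mm per (m/s^2)")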
Table 10 summarizes the properties of the pendulums. It should be noted that the directions of sensitivity are close to 30° and not at the 35.26° of a Galperin configuration. This orientation optimizes the mechanical gain at the cost of an increase of the self-noise of the recomposed vertical axis. The self-noise of the vertical axis will nevertheless only be \(2/\sqrt{3} \approx 1.15\) times larger than on the oblique axes. Mechanical properties of the Flight VBB pendulums and of the VBB EM in Earth configuration (Table 10): \(J\) is the moment of inertia (kg m2), \(mD_{g}\) the product of the mass and the distance of the center of gravity to the pivot (kg m), \(\alpha \) the sensitivity direction (°), \(D_{c}\) the distance to the capacitive transducer, and \(f_{0}\) the natural frequency (Hz), given at 20 °C and −10 °C. \(Q\) values are measured in high vacuum below \(10^{-5}~\mbox{mbar}\). \(f_{0}\) variations with temperature are due to the pivot stiffness variation with temperature. Note that, for VBB1 as an example, the mechanical gain of the VBBE is reduced by 2.74, close to the gravity ratio between Earth and Mars, and that the gain on Mars will increase by another factor of 1.75. 5.1.3 Pendulum Brownian Noise The VBBs have been designed to have a very low Brownian noise of their moving part. Despite their moderate proof mass, this is achieved thanks to their low natural frequency and high \(Q\). For a pendulum, the Brownian noise generates an angular noise which translates into an acceleration noise along the VBB axis as $$ a_{\mathrm{brownian}} = \sqrt{8 \pi k_{B} T D_{c}^{2} \frac{f_{0}}{JQ}}, $$ where \(k_{B}\) is the Boltzmann constant. The \(Q\) as a function of pressure has been measured at ambient temperature for a few VBBs and is shown in Fig. 39 for VBB13 (a unit used as spare), while the individual \(Q\) of the Flight VBBs are provided in Table 10. Below 0.1 mbar, \(Q\) larger than 10 was found, while \(Q\) drops to 5 at 0.3 mbar and \(Q \sim 2\) at 0.5 mbar (Fig. 39). This pressure dependency can be well understood with the Free Molecular model (Christian 1966), which predicts the following \(Q\) proportionality: $$ Q = \frac{k}{P} \sqrt{\frac{RT}{M}}, $$ where \(k\) is the proportionality factor, \(P\) the pressure, \(R\) the gas constant, \(T\) the thermodynamic temperature and \(M\) the molecular mass of the residual gas. At \(-25^{\circ}\mbox{C}\) and for an internal pressure of 0.035 mbar, the Brownian noise of the VBB is smaller than \(10^{-10}~\mbox{m}\,\mbox{s}^{-2}/\mbox{Hz}^{1/2}\) and \(Q\) is larger than 100. Such a low pressure was one of the motivations for the EC, in addition to its very high thermal insulation, which, for Mars conditions, remains the most important motivation. Pressure measurements were made during the ATLO phase, including a last measurement two months prior to the May 5th launch. They have shown that the pressure at launch will be smaller than 0.005 mbar, which will lead to \(Q\) larger than 200 and therefore negligible Brownian noise.
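A minimal sketch (Python) of Eq. (6) is given below; it also notes the free-molecular \(Q \propto 1/P\) scaling of Eq. (7). \(D_{c}\), \(J\), \(f_{0}\) and the \(Q\) values are assumptions of the right order of magnitude, not the Table 10 flight values, so the printed numbers are only indicative.

import numpy as np

kB = 1.380649e-23    # Boltzmann constant, J/K

def brownian_noise(T, Dc, f0, J, Q):
    """Eq. (6): acceleration noise density along the VBB axis, in m/s^2/sqrt(Hz)."""
    return np.sqrt(8 * np.pi * kB * T * Dc**2 * f0 / (J * Q))

T = 248.0                          # ~ -25 degC
Dc, f0, J = 0.05, 0.4, 2.7e-4      # m, Hz, kg m^2 (assumed)
for Q in (10, 100, 200):           # Q scales roughly as 1/P in the free-molecular regime
    print(f"Q = {Q:3d}: a_brownian ~ {brownian_noise(T, Dc, f0, J, Q):.1e} m/s^2/sqrt(Hz)")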
5.1.4 Displacement Capacitive Sensors and Proximity Electronics The pendulums are equipped with a capacitive displacement sensor (DCS). It is composed of electrodes made of ceramic plates with a gold deposit, mounted at the extremity of the pendulum (see Fig. 37e). A very small and extremely low-dissipation front-end electronics is integrated in the electrodes. The proximity electronics, located close to but outside the sphere, generates the excitation signal from a reference voltage and a clock integrated in the feedback board, and transforms the charge from the electrodes back into a voltage proportional to the electrode displacement. Each axis has its own clock and the three clocks have been designed to prevent cross-talk. The proximity electronics also conditions the DCS measurement signal sent to the feedback board. The measurement is fully differential. The nominal gain of the DCS is \(2.6~\mbox{V}/\upmu\mbox{m}\). The DCS self-noise is shown in Fig. 40. It can be approximated as: $$\begin{aligned} \textstyle\begin{array}{l@{\quad}l} 5.4 \times (f/f_{0})^{-0.54}~\upmu\mbox{V}\,\mbox{Hz}^{-1/2} & \mbox{below 1 Hz, where}~f_{0}=1~\mbox{Hz}, \\ 5.4~\upmu\mbox{V}\,\mbox{Hz}^{-1/2} & \mbox{above 1 Hz}. \end{array}\displaystyle \end{aligned}$$ This translates into a proof mass displacement resolution of \(2.1~\mbox{pm}\,\mbox{Hz}^{-1/2}\) at 1 Hz, which is in practice the VBB ground displacement sensitivity at high frequency along the measurement direction. At low frequency, the DCS noise increases roughly as \(1/\sqrt{f}\) and the resolution therefore degrades to about 0.2 Angstrom at 100 s period. 5.1.5 Feedback and VBB Transfer Functions A force feedback allows: erasing the natural frequency peak in the transfer function (and partially the thermal variations of the mechanical gain); locking the proof mass mechanical zero near the electrical zero of the displacement transducer, and thus a linear response; increasing the bandwidth; and tuning the sensor gains to the desired dynamic range. See Wielandt (2012) for a review of feedback sensors. The VBB pendulums are equipped with 3 concentric voice coils: one is dedicated to calibration and interfaces with the SEIS-AC DAC output. The two others are connected to the analog feedback circuit to close the control loop. The analog feedback input is the DCS signal. The feedback can work in 3 different modes: Scientific (SCI, Fig. 41 bottom), for optimal performance and nominal operations. The feedback loop gain is very high at very long periods (larger than 900) but its bandwidth is limited in order to retain an amplification of the natural mechanical gain, with a minimum loop gain of about 100 at 10 s. Two different loops control the mechanical pendulum: the first one (Integrator + Coil A) is active from DC to 0.05 Hz and the second one (Derivator + Coil B) from 0.05 Hz to 10 Hz. The transfer function is relatively flat in ground velocity for frequencies between 0.067 Hz and 5 Hz (Fig. 42), but the small bumps on the two sides of the pass band have different amplitudes on the 3 units, due to the dispersion of the actuators' efficiency and mounting. Engineering (ENG, Fig. 41 top) is more robust (flat feedback loop strength larger than 700 over the whole bandwidth) and has a higher clipping level. This mode is intended for starting the VBB, for recentering and as a fallback in case of anomaly or degradation of the SCI mode. As it has less amplification than the scientific mode, its robustness is greater: it can withstand a daily temperature variation greater than \(\pm 50~\mbox{K}\) (VEL Low gain) and \(\pm 150~\mbox{K}\) (POS Low Gain). It has been used for extensive testing and can operate on Mars in case of a WTS failure or any other major failure leading to very large temperature variations. It will also be used during commissioning to perform the optimal positioning of the Thermal Compensator, and is used during recentering.
The analog output is flat in acceleration over most of the bandwidth (Fig. 42). Open loop (OL), to investigate the health of the mechanical pendulum in case of abnormal behavior. For all modes, two outputs are provided to SEIS-AC: the VEL and the POS outputs. Each output is equipped with a selectable gain (low or high) to allow dynamic range adaptation. In ENG and Open loop modes, the only difference between the two outputs is the gain, while in SCI mode, the VEL output is a ground velocity output with a 2nd order high-pass filter with a 6 dB corner frequency at about 0.0625 Hz (16 s period), and the POS output is a flat ground acceleration output with a 2nd order low-pass filter at the same corner frequency. A lower cut-off frequency could not be implemented because of the limited size of the low-temperature-sensitivity space-qualified capacitors used in the VBB feedback and because of the noise required at 100 s period, which prevented the use of larger resistors in the implementation of the integrator cutoff frequency. Note that the SP was able to accommodate larger automotive capacitors after a dedicated qualification process, but the latter have a temperature sensitivity about 5 times larger and were finally not selected for implementation on the VBBs. The feedback board also provides a logic interface with SEIS-AC to allow mode and gain changes and mechanism activations. In addition, several housekeeping signals are transmitted to SEIS-AC for acquisition, in particular the temperatures of the VBB sensor head, of the Proximity Electronics and of the Feedback card. The gains are shown in Fig. 42. SCI LG VEL has, at 0.02 Hz, a gain of \(2.8\times 10^{9}~\mbox{DU}/(\mbox{m}/\mbox{s})\), comparable to, or up to twice smaller than, most of the IRIS global stations equipped with an STS-1 Streckeisen seismometer and a Quanterra digitizer (ranging between \(3\times 10^{9}\) and \(6\times 10^{9}~\mbox{DU}/(\mbox{m}/\mbox{s})\)), while the SCI HG gain, larger by a factor of 3.2, will be slightly above this range. The gain is therefore much larger in the 0.05–10 Hz frequency band, mostly as a consequence of design choices related to the larger self-noise of space-qualified amplifiers as compared to those used for Earth instrumentation. For periods larger than 250 s, the highest gain and lowest noise will be found for the SCI POS HG output, which has a gain 3–10 times smaller than the VHZ channels of IRIS global stations, a limitation mostly related to the much larger temperature variations expected on Mars as compared to an STS-1 in a seismic vault on Earth. This POS HG channel has been designed to record the tides and all long period signals with the best performance. Saturation levels are shown in Fig. 43. At long periods, the SCI and ENG outputs in LG saturate at \(0.002~\mbox{m}/\mbox{s}^{2}\) and \(0.016~\mbox{m}/\mbox{s}^{2}\) respectively, which correspond on Mars to tilts of 0.03° and 0.25°. The first value is smaller than the requirement of the LVL system (0.1°) and the corresponding centering will be achieved by the re-centering motors of the VBBs (see the next subsection). In ENG LG mode, however, the saturation level is larger than the LVL requirement and provides a backup in case of re-centering motor failure. Even in the VEL SCI high gain mode, saturation levels between 1 Hz and 3 Hz are 10 times larger than those of Viking in its most sensitive, high data rate mode (Anderson et al. 1977a, 1977b). They correspond to a ground velocity saturation level of \(0.3~\mbox{mm}/\mbox{s}\) in the 0.05–10 Hz bandwidth in SCI LG, comparable to the SP HG saturation level, and to about \(0.1~\mbox{mm}/\mbox{s}\) in SCI HG.
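The tilt equivalents quoted above follow from the projection of Mars gravity onto a tilted horizontal axis, as in the following minimal sketch (Python).

import numpy as np

G_MARS = 3.71   # m/s^2

def tilt_for_saturation(a_sat):
    """Tilt (degrees) whose gravity projection g*sin(tilt) equals the saturation a_sat."""
    return np.degrees(np.arcsin(a_sat / G_MARS))

for mode, a_sat in (("SCI LG", 0.002), ("ENG LG", 0.016)):
    print(f"{mode}: {a_sat} m/s^2 -> tilt ~ {tilt_for_saturation(a_sat):.2f} deg")
# prints ~0.03 deg and ~0.25 deg, the values quoted above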
More precise gain values, as a function of temperature, are given in Table 11 and will of course be updated in the SEED metadata. Gains at various temperatures (Celsius) of the Flight Units and of the Earth Engineering unit (Table 11). Gains of the SCI POS low gain output are given at \(5\times 10^{-5}~\mbox{Hz}\) in \(10^{9}~\mbox{DU}/(\mbox{m}/\mbox{s}^{2})\) (or \(\mbox{DU}/(\mbox{nm}/\mbox{s}^{2})\)), while those of the SCI VEL low gain output are given at 100 s in \(10^{9}~\mbox{DU}/(\mbox{m}/\mbox{s})\) (or \(\mbox{DU}/(\mbox{nm}/\mbox{s})\)). LSBs are the inverse of these values. The POS high gain is 4.565 times larger than the low gain; the VEL high gain is 3.2211 times larger than the low gain. 5.1.6 Re-centering Due to the large gains, the VBB pendulum includes a balancing mechanism (Fig. 44), which will be used for precise re-centering of the VBB sensors after the leveling performed by the LVL. This mechanism has two main functions: first, it is used to precisely adjust the balance of the mechanical pendulum on Mars with respect to the local gravity, the leveling system inaccuracy and residual manufacturing offsets; second, it serves to compensate for long term drifts that would otherwise drive the instrument into saturation. This mechanism is located on the mobile part of the pendulum. Its principle is to move a 60 g mass along a 17 mm travel until fine balancing is achieved. Compared to Earth instruments, the re-centering mechanisms have been oversized and have the capability to re-center a VBB for tilts from about \(-2.8^{\circ}\) to \(3.5^{\circ}\) with respect to the leveled conditions on Mars, in order to accommodate the local gravity variation and possible manufacturing offsets or aging of the pendulum. Because of the inverted pendulum design, however, the natural frequency of each VBB will vary significantly within this tilt range, from an unstable configuration (in open loop) up to 0.7 Hz, and only a leveled platform will allow all three VBBs to achieve their nominal frequency, and thus performance, in Mars conditions. To achieve a re-centering within 1 V in SCI POS high gain, a fine positioning accuracy is required. Design tradeoffs led to a stepper motor (\(20~\mbox{steps}/\mbox{turn}\), 10 mm diameter) and a 1:256 four-stage planetary gear box with low backlash, both from Faulhaber. A counter nut preloaded with a spring avoids any backlash on the lead screw. The overall absolute positioning error is dominated by a harmonic of the worm gear rotation, with an amplitude of the order of \(20~\upmu\mbox{m}\). It is driven by the combination of the screw/nut geometry and the parallel guide play, and results from a simple design optimized for reliability rather than absolute positioning. The step-by-step algorithm chosen to drive the mechanism relies only on a relative positioning accuracy, which is about a few \(\upmu\mbox{m}\) over 12 steps and meets the requirement. The gear box, lead screw and parallel guide are lubricated with Braycote grease to ensure a high reliability over the whole lifetime. The drawback is an operational constraint: the re-centering mechanism can be powered only above \(-50^{\circ}\mbox{C}\), but was nevertheless qualified at \(-65^{\circ}\mbox{C}\). 5.1.7 Magnetic Sensitivity Most of the mobile part has been designed with non-magnetic materials, with the exception of the motors, the invar column of the thermal compensation system and the Thermelast spring. The magnetic moment of the mobile part is dominated by far by the balancing motor.
Based on component tests, the residual magnetic moment has been bounded at \(10^{-2}~\mbox{A}\,\mbox{m}^{2}\), which would lead to \(2\times 10^{-9}~\mbox{m}\,\mbox{s}^{-2}/\mbox{nT}\). The requirement has been set to \(0.5\times 10^{-9}~\mbox{m}\,\mbox{s}^{-2}/\mbox{nT}\). The magnetic sensitivity of all VBBs has been measured in their final flight configuration. The measurements are spread between \(0.1\) and \(0.5\times 10^{-9}~\mbox{m}\,\mbox{s}^{-2}/\mbox{nT}\).
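The order of magnitude of this bound can be recovered with a one-line estimate, sketched below (Python): a residual moment \(\mu\) in a field \(B\) produces a torque of at most \(\mu B\), equivalent to an input acceleration of roughly \(\mu B/(m D_{g})\). The \(m D_{g}\) value used is an assumption (a ~190 g mobile mass with a few-centimeter lever arm), not the flight value of Table 10.

mu = 1e-2       # A m^2, bound on the residual magnetic moment (quoted above)
B = 1e-9        # T, i.e. 1 nT
m_Dg = 5e-3     # kg m, assumed mass x lever-arm product of the mobile part
print(f"~{mu * B / m_Dg:.0e} m/s^2 per nT")    # ~2e-9, consistent with the bound quoted above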
5.1.8 Thermal Compensator and Thermal Sensitivity Thermal variations are expected to be the source of the largest non-seismic excursions of the VBB outputs. As an example, Streckeisen STS-2 seismometers have an operating range without recentering of \(\pm 25^{\circ}\mbox{C}\) in temperature and \(\pm 0.03^{\circ}\) in tilt on Earth (Kinemetrics 2017), corresponding to sensitivities of about \(2\times 10^{-5}~\mbox{m}/\mbox{s}^{2}/{}^{\circ}\mbox{C}\). Comparable or better thermal sensitivities were required for the VBB, not only to allow continuous daily recording without recentering but also to meet the thermal noise requirements at 100 s. Due to the lack of testing capabilities in Earth conditions and to the possibility of aging, an active thermal compensator device has been integrated in the VBB design. The function of this second mechanism included on the VBB pendulum is to minimize the dependence of the sensor output signal on temperature variations. This reduces the part of the noise due to temperature in the VBB recordings and in turn allows the gain of the sensor to be maximized. The principle of this mechanism, shown in Fig. 45, is to passively translate a small mass on the mobile part of the pendulum along an axis, proportionally to the temperature variation, in order to adjust the balance of the mobile pendulum so that it stays centered while the temperature changes. The compensation can be tuned in amplitude and sign by rotating the translation axis. When the axis is vertical, there is no change of the balancing moment as the mass moves. When it is horizontal, the compensation is at its maximum capability. The passive compensation device is made of a CuBe2 cage and an invar column. The geometry has been optimized to maximize the displacement of the center of gravity with temperature. The latter is associated with the differences in length variation due to the different thermal expansion coefficients of the two metals. A structure around the cage acts as a stop-end to protect the mechanism under random vibrations. The orientation mechanism has an absolute rotation accuracy of 1°. Figure 46 shows an example of measurements made during passive heating of the VBB14 unit, which is the VBB3 flight unit. Passive heating was performed in a thermal chamber, which was first cooled down to \(-70^{\circ}\mbox{C}\) and then returned passively to ambient temperature, in order to minimize the thermal noise from either the chamber or the cooling/heating systems. Nevertheless, the VBB sensitivity in such a test is only an apparent one, as the test system is also likely injecting tilt (note that \(10^{-5}~\mbox{m}/\mbox{s}^{2}/\mbox{K}\) is about \(10^{-6}~\mbox{radian}/\mbox{K}\) of tilt in Earth tests). The three blue lines are the VBB output variation, from \(-70^{\circ}\) to \(-10^{\circ}\), in the two extreme positions of the TCDM (i.e. where the TCDM either adds or subtracts its maximum strength in terms of temperature sensitivity) and in the neutral position (i.e. where it minimizes its strength), while the thin black lines are the theoretical VBB variations for a given TCDM position. This illustrates the capability of the TCDM to change the sign of the sensor's thermal sensitivity, as the slope of the output can be tuned to either grow or decrease with temperature. The TCDM will have to be tuned regularly, e.g. every few months, in order to accommodate the seasonal changes on Mars, as the sensor's thermal sensitivity is expected to vary with temperature. This is illustrated by the neutral line in Fig. 46, where the apparent temperature sensitivity of the VBB varies from \(-2\times 10^{-5}~\mbox{m}/\mbox{s}^{2}/\mbox{K}\) at \(-50^{\circ}\mbox{C}\) to \(1.5\times 10^{-5}~\mbox{m}/\mbox{s}^{2}/\mbox{K}\) at \(-25^{\circ}\mbox{C}\). The sensitivities of the other VBBs are given in Fig. 47. 5.2 Short Period Sensors 5.2.1 SP Introduction InSight's SP seismometer consists of a set of three sensors in enclosures, deployed with the rest of SEIS on the surface, and of feedback (FB) electronics integrated into the Ebox on the lander. The SP sensors, with their front-end electronics, are connected to their lander electronics via the tether between SEIS and the lander. The SP sensors are labeled SP1 for the vertical, and SP2 and SP3 for the two horizontals, separated by 60° in azimuth (a 90° separation was not possible on the tripod structure due to volume limitations on the LVL ring). The three sensors are attached around the outer ring of the LVL, on which the VBB sphere is also directly mounted. In contrast to the VBBs, the sensors have been designed to operate at up to a 15° tilt from the vertical, the leveling range of the LVL. They will therefore be able to operate prior to the leveling of SEIS, including on the lander deck before the SEIS deployment. In this configuration, they will be in contact with the lander through the cradle. After deployment, they will, like the VBBs, be in contact with the ground through the 3 LVL feet mounted on the outer ring of the LVL. 5.2.2 SP Sensors Description The SP sensors are micromachined from single-crystal silicon by through-wafer deep reactive-ion etching to produce a suspension and proof mass (PM) die with a fundamental vibrational mode at 6 Hz (Fig. 48). The sensors are of a novel design (Pike et al. 2018) that gives a much lower noise floor than has been previously (e.g. Bernstein et al. 1999) or subsequently (Middlemiss et al. 2016) achieved by through-wafer etching of silicon, while being sufficiently robust to survive launch and landing and capable of autonomous leveling and operation on the surface of Mars. The suspensions of the horizontal sensor dies are symmetric, while for the vertical sensor the suspension is machined in an offset geometry so that under Mars gravity it takes up a symmetric configuration. Bumpers formed by the reflow of solder balls in cavities formed during the through-wafer etching protect the low-frequency suspension from damage (Delahunty and Pike 2014). Additional strengthening is provided by co-fabricated buttress structures that are bonded to the frame, with micromachined backstops inserted into the frame to provide protection against vibrations and shocks in the out-of-plane direction of the dies. The displacement of the proof mass is sensed with a capacitive displacement transducer (DT): two interposed arrays of electrodes on the PM are differentially driven and face sets of fixed electrodes plated on a fixed glass strip above the PM.
The capacitance varies with the areal overlap of the driven and pickup electrodes, providing a displacement signal with the \(96\mbox{-}\upmu\mbox{m}\) periodicity of the array. The DT strip is connected mechanically and electrically to the PM frame using solder-ball bonding, with pads at one end of the strip for electrical connection to the proximity electronics. Feedback is closed at the nearest null point of the periodic output of the DT. This allows operation over a large tilt range while keeping the actuation force low. The electrical connections to the coils and DT drives on the PM are routed along the suspension flexures using plated and sputtered gold traces. The SP sensors are designed for low-noise operation at ambient pressure. The thermal Brownian noise is therefore minimized by the geometry of the DT, which operates with Couette flow in the smallest gap between the PM and the DT strip. Viscous flow in this gap, of around \(12~\upmu\mbox{m}\), offers far less resistance with Couette flow than the alternative squeeze-film damping of a gap-based capacitance sensor. An additional reduction in the thermal Brownian noise is provided by the attachment of a gold bar to the backside of the proof mass (Fig. 48b), with this mass trimming also used to set the fundamental resonance of the suspension. The SP sensors operate in feedback mode with electromagnetic actuation from coils plated onto the proof mass. An approximately \(1~\mbox{k}\Omega\) resistor is sputtered onto the PM frame to allow direct monitoring of the sensor temperature. Thermal compensation is incorporated into the base of the SP1 (vertical) suspension to attenuate the effect of temperature changes on the SP output (Liu and Pike 2015). The sensors are mounted on a Kovar frame which is inserted into a magnetic assembly to provide the actuation (Fig. 49). The sensors, magnets and front-end electronics are mounted onto the base of their enclosure via standoffs to provide thermal insulation between the sensors and the enclosure (Fig. 50). A second temperature sensor, a standard platinum resistance thermometer, is mounted on the enclosure base. The enclosure lid is hermetically sealed to the base (Fig. 51) and the enclosure is evacuated and then backfilled with nitrogen to 10 mbar to provide a stable environment. Electrical feedthroughs to a flexprint external to the base of the enclosure provide routing to the connector to the tether. The SP sensors are an innovative design which evolved significantly during the InSight development, so each was treated as a "protoflight" unit and subjected to qualification levels of vibration and thermal cycling for limited periods. Separate qualification units were also subjected to long term thermal cycling to simulate the mission on Mars, and survival of the proof mass suspension was demonstrated for in-plane vibration levels up to 32 g rms. 5.2.3 SP Electronics Description A schematic of the SP electronics is shown in Fig. 52. The front-end electronics include the DT preamplifiers and the routing for the coil drives and temperature resistors on the sensor. The feedback (FB) electronics within the lander's SEIS EBox contain three sets of feedback electronics for the SP sensors and a DT drive conditioning circuit. The feedback provides an analogue velocity output (SP1, 2 and 3) at two selectable gains (see below for details) and an acceleration mass-position output (MPOS1, 2 and 3).
The SP output signals are digitized by the SEIS-AC, the separate acquisition electronics, at 24 bit at either 100 sps or 20 sps, while the MPOS outputs are digitized at 12 bit as housekeeping signals, together with the temperature resistor signals interleaved on an analogue multiplexer at either 1 sps or \(1/100~\mbox{sps}\). SP commanding consists of a power-on followed by an enable of the SP sensors required for the observation. The SP also has a calibration capability, which is enabled during operation by sending a calibration signal generated by SEIS-AC to the selected sensor. SP's standard calibration signal consists of a shaped swept-sine signal to validate the transfer function of the selected SP output. Power for SP is provided via SEIS-AC. 5.2.4 SP Transfer Function The velocity output of SP is flat below 2 kHz with two gain settings, a high gain of \(27{,}000~\mbox{V}/(\mbox{m}/\mbox{s})\) and a low gain of \(9000~\mbox{V}/(\mbox{m}/\mbox{s})\), with a 2-pole roll-off at a corner frequency of 0.0286 Hz (35 s) with close to critical damping (Fig. 53). The high gain has been selected to ensure that the SEIS-AC digitizer noise is below the 10 Hz SP requirement of \(10^{-8}~\mbox{m}/\mbox{s}^{2}/\sqrt{\mbox{Hz}}\). For the \(\pm 12~\mbox{V}\) input range of the SEIS-AC ADC, the two gain settings correspond to clip levels of \(0.9~\mbox{mm}/\mbox{s}\) and \(0.3~\mbox{mm}/\mbox{s}\). A second output, mass position or MPOS, is the acceleration signal required to keep the feedback closed below the corner frequency of the velocity output. This output is flat in acceleration, with a gain of \(44~\mbox{V}/(\mbox{m}/\mbox{s}^{2})\) and a low-pass roll-off at a corner frequency of 0.6 Hz. These transfer functions are illustrated in Fig. 45. 5.2.5 SP Thermal Response From previous temperature measurements on the surface of Mars, thermal effects are expected to be the major noise injection directly into the SP (Mimoun et al. 2017). For the vertical unit, SP1, the transduction is through the thermal coefficient of Young's modulus of the silicon suspension (\(57~\mbox{ppm}/\mbox{K}\), Liu and Pike 2015), which will cause movement of the proof mass under gravity due to spring softening. On Mars this would give an uncompensated thermal coefficient of \(2.2\times 10^{-4}~\mbox{m}/\mbox{s}^{2}/\mbox{K}\), considerably above the requirement of \(5\times 10^{-5}\mbox{ m}/\mbox{s}^{2}/\mbox{K}\). Therefore, the suspension of SP1 is passively thermally compensated with solder reflowed into cavities at the base of the suspension (Liu and Pike 2015). The resulting mismatch in the thermal coefficient of expansion (TCE) gives a thermoelastic tilt that can compensate for the suspension softening. The design target of the solder compensator was therefore an attenuation by a factor of ten in the thermal response. In addition, for all the SPs there is a thermoelastic response due to the TCE mismatch of the sensor materials. This mismatch will cause tilts which will inject a component of gravity into the SPs' outputs. Any external thermoelastic stress is minimized by the compliant mounting points of the PM die. Therefore, the dominant TCE contribution is between the silicon and the borosilicate glass of the DT strip, which are matched to within \(5\times 10^{-7}/\mbox{K}\). The overall thermoelastic constant of the sensor is however difficult to predict, as it depends on integration asymmetries during assembly. Outside of the sensor there will be a thermoelastic response due to temperature gradients within the SP enclosures.
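The magnitude of the spring-softening effect quoted above for SP1 can be reproduced with a one-line calculation. The sketch below is only an order-of-magnitude consistency check; the Martian surface gravity value used here is an assumption and is not taken from the text.

```python
# Consistency check of the SP1 spring-softening coefficient quoted in Sect. 5.2.5.
# g_mars is an assumed value; the text does not state it.
dE_over_E = 57e-6      # 1/K, temperature coefficient of Young's modulus (Liu and Pike 2015)
g_mars = 3.71          # m/s^2, approximate Martian surface gravity (assumption)

alpha_uncomp = dE_over_E * g_mars
print(f"uncompensated thermal coefficient: {alpha_uncomp:.1e} m/s^2/K")  # ~2.1e-4, cf. the 2.2e-4 quoted above

# With the factor-of-ten attenuation targeted for the solder compensator:
print(f"compensated: {alpha_uncomp / 10:.1e} m/s^2/K (requirement 5e-5 m/s^2/K)")
```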
The largest temperature gradients are across the low-conductivity thermal pathways used to attenuate the temperature variation at the sensor die, with a targeted thermal time constant of 200 s. Again, the resulting thermoelastic response is difficult to predict as it depends on non-nominal asymmetries in the thermal pathways, though it is expected to be proportional to the difference between the sensor and enclosure temperatures. A simple lumped-element thermal model of the SPs can be constructed to quantify the thermal response (Stott et al. 2018). One node is the sensor, with a temperature \(T_{\mathrm{sensor}}\) measured from the resistance of a gold element on the frame of the proof-mass die. This node has a heat capacitance \(C_{\mathrm{sensor}}\). The second node is the SP enclosure, which is mechanically and thermally connected to the LVL. This node's temperature, \(T_{\mathrm{enc}}\), is determined from a calibrated platinum resistance thermometer attached to the inside of the enclosure. Between the two nodes we model the thermal isolation pathways as a thermal resistance \(R_{\mathrm{sensor}}\), giving a thermal time constant for conduction to the die of \(\tau_{\mathrm{sensor}} = R_{\mathrm{sensor}}C_{\mathrm{sensor}}\). The thermal acceleration signal for each SP can then be calculated and removed from the data as $$ \alpha_{\mathrm{thermal}} = \alpha_{\mathrm{sensor}} T_{\mathrm{sensor}} + \alpha_{\mathrm{enclosure}} ( T_{\mathrm{sensor}} - T_{\mathrm{enc}}), $$ where \(\alpha_{\mathrm{sensor}}\) and \(\alpha_{\mathrm{enclosure}}\) are the thermal responses of the sensor and enclosure, respectively. To determine the model parameters and calibrate the die temperature outputs, the flight SP units were logged over a controlled thermal cycle and a multiple regression was performed on the results. In addition, this test allowed a calibration of the mass-position signal. The results are shown in Table 12 together with the time constant of the enclosures. The correlation coefficient to the model was very high for SP1 and SP2, but the results for SP3 were poor due to a subsequently identified failure in the tether between the sensors and electronics used in this test. The completeness of the thermal model was assessed by repeating the multiple regression with a further node at an external temperature reference. Inclusion of this node did not increase the correlation significantly. Table 12 Temperature sensitivity parameters of the SPs (those of SP3 are to be determined, TBD): \(\alpha_{\mathrm{sensor}} = (-0.9 \pm 2.8)\times 10^{-6}~\mbox{m}/\mbox{s}^{2}/\mbox{K}\), \(\alpha_{\mathrm{enclosure}} = (1.2 \pm 0.1)\times 10^{-4}\), \(\tau_{\mathrm{enclosure}}\), and MPOS gain in \(\mbox{V}/(\mbox{m}/\mbox{s}^{2})\). Table 12 shows that the sensor thermal sensitivity requirement is met, both for the uncorrected SP output and after regression of the SP output against temperature, where the residual is set by the uncertainty in the sensitivity. The thermoelastic sensitivity is not an SP requirement, but the effect at 0.1 Hz can be estimated using the SEIS model value of the external temperature noise under the WTS of \(3\times 10^{-6}~\mbox{K}/\sqrt{\mbox{Hz}}\). This gives a thermoelastic noise injection of \(3\times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\sqrt{\mbox{Hz}}\) for SP1 and \(5\times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\sqrt{\mbox{Hz}}\) for SP2, both a factor of 2 or more below the SP noise requirement at this frequency. This thermal analysis, performed at CNES, can be repeated during the calibration phase of SEIS on Mars to determine revised, or in the case of SP3, new thermal sensitivities for each unit.
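A minimal numerical sketch of the two-node correction and of the regression used to estimate its coefficients is given below. It assumes that time series of the SP output and of the two temperatures are available as NumPy arrays; the function and variable names are illustrative, not those of the SEIS ground software.

```python
import numpy as np

def thermal_acceleration(t_sensor, t_enc, alpha_sensor, alpha_enclosure):
    """Thermal acceleration predicted by the two-node model above:
    alpha_thermal = alpha_sensor*T_sensor + alpha_enclosure*(T_sensor - T_enc)."""
    return alpha_sensor * t_sensor + alpha_enclosure * (t_sensor - t_enc)

def fit_thermal_coefficients(sp_output, t_sensor, t_enc):
    """Least-squares estimate of (alpha_sensor, alpha_enclosure, offset) from a
    controlled thermal cycle, in the spirit of the multiple regression described above."""
    design = np.column_stack([t_sensor, t_sensor - t_enc, np.ones_like(t_sensor)])
    coeffs, *_ = np.linalg.lstsq(design, sp_output, rcond=None)
    return coeffs  # [alpha_sensor, alpha_enclosure, offset]

# The fitted coefficients can then be used to subtract the thermal signal:
# corrected = sp_output - thermal_acceleration(t_sensor, t_enc, a_s, a_e) - offset
```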
These new values will incorporate any additional injection from the thermoelastic response of the LVL. 5.2.6 SP Magnetic Response The SP magnetic response should be very low. Silicon is a diamagnetic material, so the suspension should show no effect from any changing magnetic field. Although SP does use an electromagnetic actuator, the geometry of the magnetic circuit ensures that the coils are not sensitive to any change in an external field: the forces on the two sides of the coil will be common-mode rejected if there is no gradient in the field along the sensitive direction of the SP unit. To confirm this, an SP sensor unit was placed into a magnetic test coil and the response to a 1.5 mT change in field was recorded. The resulting highest sensitivity in any orientation was determined as \(0.15~\mbox{m}/\mbox{s}^{2}/\mbox{T}\), compared to a requirement of \(1~\mbox{m}/\mbox{s}^{2}/\mbox{T}\). This sensitivity is likely to be an overestimate, as the magnetic-field inhomogeneity rather than its absolute value is likely to have produced the response of the SP. 5.3 LVL and Tiltmeters 5.3.1 Leveling System Overview The LVL (Fig. 18) has a dual purpose: it will ensure level placement of the SEIS sensors on the Martian ground under as yet unknown local conditions of ground slopes up to 15° in any direction, a requirement that needs to be fulfilled for proper operation of the highly sensitive VBB seismometer, and it will provide the mechanical coupling of the SEIS sensor assembly to the Martian ground. The LVL subsystem consists of a mechanical part, the leveling structure, and an electrical part, the Motor Driver Electronics (MDE) board. The structural ring of the LVL subsystem is the central interface to the VBB and SP seismic sensors and their proximity electronics, to the dampers with their interface to the lander deck during cruise, and to the RWEB thermal enclosure. With the three extendable legs, namely the Linear Actuators, the LVL structure also provides the signal path from the Martian ground to the seismic sensors. 5.3.2 Linear Actuator Legs and Feet The linear actuator is designed and developed as a separate unit that is assembled and tested alone and later integrated onto the LVL structure. The linear actuator housing and the foot are made of titanium grade 5 (Ti6Al4V); the housing is gold plated to decrease the thermal emissivity. The telescopic leg is made of Invar with a TiN coating. To protect the mechanism against dust and to maintain the thermal environment, the telescopic leg is covered with bellows underneath the SEIS sensor assembly. The mass of one linear actuator is \(\sim 350~\mbox{g}\) without bellows and foot. The two main purposes of the three identical linear actuators in the LVL subsystem are to level the SEIS sensor assembly at inclinations of up to 15°, by independently extending or retracting their telescopic legs, and to transmit the seismic motion from the ground to the seismic sensors. An unbiased transmission of seismic motion is only possible if the first eigenfrequency of the sensor assembly is much higher than the measurement bandwidth. This leads to a stiffness requirement for the extendable legs of the linear actuator, which defines the geometrical shape and the guidance of the movable leg in the housing (Fig. 54). By design, the stiffness has a maximum value when the telescopic leg is mostly retracted and decreases as the leg is extended.
The diameter of the telescopic leg was set to 25 mm; the round cross-section provides, by its geometry, the most efficient flexural stiffness for the part itself. An off-the-shelf linear guidance could not be used for various reasons, such as mass, material and CTE mismatch, and surface quality. The solution is a preloaded guidance based on two systems of three ball bearings positioned at 120° angles around the telescopic leg. Two ball bearings are at fixed positions in the stiffened housing; the third ball bearing presses the telescopic leg with an adjustable preload against the two other bearings (Fig. 55). One system is located close to the lower end of the linear actuator housing; the other system is at a distance of 45 mm, close to the upper rim of the LVL structural ring. The linear actuator is mounted at the bottom and the top of the LVL ring; both fixations are close to the two guidance systems to maintain the stiffness path towards the structural ring and the seismic sensors. The design of the mechanism for the linear movement is driven by geometrical requirements. With the effective diameter of the LVL structural ring of 250 mm, the required travel for a compensation of 15° inclination is 59 mm for each linear actuator. Due to the volume envelope of the sensor assembly, the gearmotor has to be mounted beside the housing with a spur gearhead on top. Mechanical end stops on the telescopic leg keep the moving part in place. The motor is a Phytron two-phase stepper motor with a 48:1 planetary gearhead. The spur gearhead has a ratio of \(58/38\); the spindle has a pitch of 0.7 mm. With the 200 steps per motor turn, this results in a theoretical linear resolution of \(\sim 50~\mbox{nm}\) displacement per motor step. The spindle gear and the spindle are made in one piece. Two angular ball bearings, preloaded against the top of the housing, fix the spindle in the \(z\)-direction while maintaining the stiffness path when the SEIS sensor assembly is standing on the ground. The material of the spindle is also titanium grade 5 to match the CTE. The leadnut is made of gear bronze. The system is dry lubricated with a MoS2 coating on the spindle. The LVL feet need to provide a stable contact and good coupling between the SEIS sensor assembly and the Martian surface at the landing site, where a regolith cover composed of fine basaltic sand with low rock abundance is expected (Golombek et al. 2017). As cone-shaped feet, which are commonly used for terrestrial seismometers, can result in uncontrolled sinking if deployed on a non-rigid surface, a round metal disk of 60 mm diameter was added to the upper end of each foot. The optimum dimensions of the foot cone were determined by dedicated measurements at the Ecole des Ponts ParisTech, using a specially developed penetration device. Measurements were performed on Mojave simulant provided by JPL and chosen as a Mars analogue of the surface materials; see Delage et al. (2017) and Fayon et al. (2018) for more details. This is a mix of MMS simulant, containing alluvial sedimentary and igneous grains from the Mojave Desert, with basaltic pumice, sieved at 2 mm. A series of preliminary tests with foot cones of 20 mm length and maximum diameters of 20 mm and 30 mm showed that full penetration could not be achieved under the maximum force of 10 N. Therefore, two alternative cones with a smaller maximum diameter of 10 mm and lengths of 10 mm and 20 mm were designed, and these achieved full penetration.
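The quoted resolution of roughly 50 nm per motor step follows directly from the gear train described above; the short check below reproduces it. This is illustrative arithmetic only, not a flight budget.

```python
# Linear-actuator resolution from the gear train described above (values from the text).
steps_per_rev   = 200        # stepper motor steps per revolution
planetary_ratio = 48         # 48:1 planetary gearhead
spur_ratio      = 58 / 38    # spur gearhead ratio
pitch_m         = 0.7e-3     # spindle pitch (0.7 mm)

step_m = pitch_m / (steps_per_rev * planetary_ratio * spur_ratio)
print(f"displacement per motor step: {step_m * 1e9:.0f} nm")       # ~48 nm, i.e. the ~50 nm quoted
print(f"12-step minimum command:     {12 * step_m * 1e6:.2f} um")  # ~0.6 um, cf. the MDE description in Sect. 5.3.4
```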
In addition, some plate-loading tests, using just the disk without any cone attached, were conducted. After complete penetration, repeated elastic loading cycles between 10 N and 8 N were performed for samples of the Mojave simulant with densities of \(1640~\mbox{kg}/\mbox{m}^{3}\) and \(1670~\mbox{kg}/\mbox{m}^{3}\). For the lower density, the plate-loading test and the test with the 20 mm cone result in similar values of stiffness. In the denser sand, the response with the cone is softer than that of the plate alone, as stiffness increases markedly with higher sand density for plate loading but does not show a similar effect for the cone test. The stiffness values for the 20 mm cone are about 50% larger than for the 10 mm cone, and the stiffness values obtained with both cones increase progressively during consecutive load cycles (Fayon et al. 2018). Consequently, a foot cone of 20 mm length and 10 mm maximum diameter was selected, together with a 60 mm diameter disk which will ensure ground contact also in tilted configurations (Fayon et al. 2018). 5.3.3 Structural Ring and Sensors The LVL structural ring is a complex interface part made in one piece of titanium grade 5 (Ti6Al4V). It is gold plated to decrease the thermal emissivity. On the inner side of the ring, three foil heaters are mounted to maintain a sufficiently warm environment for the seismic sensors during winter times on Mars. The two SCIT (SCIentific Temperature) sensors are also mounted on the ring. Two types of tiltmeter are installed: a two-axis MEMS sensor for coarse leveling and two single-axis high-precision electrolytic tiltmeters (HP tiltmeters) for fine leveling. The MEMS sensor is aligned with the reference coordinate system of the SEIS sensor assembly. Inside the MEMS, a small proof mass is suspended on springs; the position of the proof mass is a measure of tilt. The output signal depends on gravity; together with the MDE, an inclination of ∼±25° can be measured on Earth and ±90° on Mars with a resolution better than 0.1°. The two electrolytic tiltmeters are oriented in the movement directions of two Linear Actuators. They have a measurement range of \(\pm 720~\mbox{arcsec}\), dependent on the sensor temperature. Inside the electrolytic tiltmeter, a small amount of conductive liquid is contained in a hermetically sealed geometry, like a spirit level. Three inserted electrodes form a voltage divider with the liquid acting as inclination-dependent resistances. The output amplitude is a measure of the tilt. To avoid electrolysis and corrosion, the tiltmeter is powered with an AC voltage. Two dedicated sensor front-end electronics boards pre-process the sensor signals. These electronics PCBs are mounted on the LVL structural ring close to the HP tiltmeters, underneath two SP sensors. The theoretical resolution is better than 1 arcsec. 5.3.4 MDE Board The MDE controls the leveling system and its block diagram is shown in Fig. 56. It operates the motors of the linear actuators, acquires the signals from the MEMS and high-precision tiltmeters, and switches the winter-mode heater on the LVL ring. The PCB is mounted in the SEIS E-Box. The connection between the MDE and the LVL is realized with a dedicated LVL tether. In deviation from the general E-Box architecture, the MDE is a single system without redundancy. Nevertheless, it receives power and data from two sides. The power lines are cross-strapped in hardware; the data interfaces are merged in the FPGA. The MDE acts as a slave, i.e.
it only communicates and operates the LVL when commanded to do so. All functions can be commanded via a serial three-byte protocol; the MDE answers each command with an acknowledge pattern and the requested data. There is no autonomous functionality implemented except hardware safety features such as short-circuit and over-temperature protection of the motor lines. The motor controller is a current-controlled bipolar stepper-motor driver with freely programmable start speed, ramping and total number of steps for the three motors. A minimum of 12 motor steps can be commanded, resulting in a minimum displacement of \(\sim 0.6~\upmu\mbox{m}\) of the Linear Actuator. A maximum displacement of \(\sim 12.6~\mbox{mm}\) can be commanded in one motor run. A common 2-phase current controller is switched to one of three full-bridge motor drivers. Consequently, only one motor can be operated at a time. The current controller has four fixed current steps: 100 mA, 125 mA, 150 mA (nominal motor current) and 180 mA (boost motor current). It is a free-running switching controller that uses the motor inductance in a buck-converter architecture. A supervisor circuit processes the on/off signals of the current control and determines nominal and fault conditions. In the case of a short circuit, i.e. if the current rises too fast, the motor control stops immediately and an error flag is raised. From the duty cycle of the current controller at constant current, the motor temperature can be derived via the resistance of the motor coil. This information is used in the MDE to protect the motor against over-temperature. Each motor can be operated in half-step, full-step or pre-heating mode. The pre-heating of the motor is realized with a full-step mode without phase shift between the two motor currents. Heat is generated in the motor coil, but the motor does not move. On the LVL ring, the MEMS and high-precision tiltmeters provide analogue signals of the tilt in the X- and Y-directions. The data are sampled and digitized using a 12-bit AD converter. The sampling rate is limited by the communication interface, where only one channel can be transmitted within one command. As the high-precision tiltmeter requires an AC excitation, a 2 Vpp, 500 Hz square-wave signal is generated on the MDE and provided to the sensors. A synchronous rectifier is located on the sensor front-end electronics mounted on the LVL ring. The high-precision tilt information is transmitted back via the LVL tether to the MDE as a DC value. Heaters with a total power of \(\sim 1.5~\mbox{W}\) are installed on the LVL structure to keep the seismometer warm during winter. The heaters are powered from the MDE board from a 6 V supply. They are not actively controlled, only commanded on and off via the MDE FPGA. To save power during winter, when the other leveling functions are not required, the MDE can be partly switched off. The heater control remains active in the winter power mode and keeps the heater on. 5.4 Ebox The E-Box (see Fig. 19 and Fig. 20) contains the main part of the electronics for SEIS and resides in the lander warm electronics box (WEB). Figure 57 shows the place of the E-Box in the system. The E-Box is controlled by the Command and Data Handling (C&DH) and supplied by the Power Distribution and Drive Unit (PDDU), which are both part of the lander. The design and production of the electronics in the red boxed part and the top-level integration of SEIS-EBX have been conducted under the auspices of ETH Zurich.
The blocks VBB-FB[123] are part of the VBB sensors, the block SP-FB is part of the SP sensors and the block LVL-MDE is part of the leveling system. The description in this section mainly focuses on the functions included in the SEIS-AC and the SEIS-DC, i.e. the red boxed part. These electronics must withstand the harsh environment during cruise to, and operations on, Mars. To overcome adverse effects due to radiation, vacuum and temperature variations, only space-qualified components can be used and dedicated design techniques are needed. These techniques include latch-up protection for analog circuits and an FPGA design with implementation of Triple Modular Redundancy (TMR) for flip-flops, safe state machines, and Error Detection and Correction (EDAC) for memories. On top of that, the electronics are made fully cold-spare redundant. The only exception is the signal conditioning and analog-to-digital conversion of the seismic signals from the VBB and SP sensors, neither of which is redundant. 5.4.2 Functionality The E-Box acquires data by digitizing analog signals, which are stored in the on-board non-volatile memory. This function is performed by the SEIS-AC card (Fig. 58). Up to 65 hours of data can be stored. The lander computer is able to gather these data when it is active. There are 9 seismic channels, from the VBB and SP sensors, and the scientific temperature (SCIT), each acquired as a continuous signal by a dedicated sigma-delta ADC (AD7712). These channels are digitized at a rate of 32 kHz, using the ADC filters to output 24-bit samples at 500 Hz. Further digital filtering is used on SEIS-AC to reduce the sample rate to the chosen output sampling rate of these channels. Unlike the seismic channels, the remaining channels are acquired as samples by two ADCs that are shared between multiple channels and have a multiplexer in front of them. These remaining channels comprise the 3 VBB temperatures and 48 housekeeping (HK) channels. The 3 VBB temperatures are acquired using one sigma-delta ADC (AD7712) that delivers 16-bit samples using the filters inside the ADC itself. A sample is composed of the average of 16 consecutive samples taken at 100 Hz by this ADC, which is stored as a 16-bit value. The SEIS electronics measure the resistance of the temperature sensors with a full-scale range of 896.25 \(\Omega\) (231.11 \(\Omega\) to 1127.36 \(\Omega\)). The transfer function of the sensors in °C/DU is provided in Sect. 6.4.1. The 48 housekeeping channels are acquired using one rather fast successive-approximation ADC (ADC128S102QML) that delivers 12-bit samples. For each of the 48 channels a sample is composed of the sum of 16 samples taken at \(194.3~\upmu\mbox{s}\) intervals by the ADC, which is stored as a 16-bit value. Between the channels there is a delay of 1.166 ms to allow the input multiplexer to switch and the input of the ADC to settle. These housekeeping channels include 1 dummy channel and measured voltages, currents and temperatures of the instrument. Digital Filtering The 9 seismic channels and the scientific temperature are acquired as continuous signals and therefore digital filters are used to reduce the sample rate. The sample rate is reduced in order to lower the data volume to be stored in the non-volatile memory without losing information in the bandwidth of interest. The filters remove high-frequency components of the signal that would alias into the pass band if decimation alone were applied.
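As a sanity check of the housekeeping acquisition scheme just described, the short sketch below reproduces the per-scan timing and shows why summing 16 samples of the 12-bit ADC always fits in the 16-bit value that is stored. This is illustrative arithmetic only, not the flight FPGA logic.

```python
# Housekeeping acquisition timing/packing check (values taken from the text above).
n_channels      = 48
samples_per_ch  = 16
sample_interval = 194.3e-6   # s between the 16 samples of one channel
mux_delay       = 1.166e-3   # s between channels (multiplexer switch and ADC settling)

scan_time = n_channels * (samples_per_ch * sample_interval + mux_delay)
print(f"full HK scan: {scan_time * 1e3:.0f} ms")   # ~205 ms, comfortably less than 1 s

# The sum of 16 samples of a 12-bit ADC never overflows the stored 16-bit value:
assert samples_per_ch * (2**12 - 1) < 2**16
```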
The filters following the ADC have a suppression of at least 120 dB for frequencies outside the pass band, before decimation takes place. Figure 59 shows the part of the acquisition chain incorporating digital filtering for the velocity channels. Different filters are implemented for the position channels and the scientific temperature, as they have a lower frequency of interest. Figure 60 shows the digital filtering for the low-frequency channels. The filter in the ADC is a 3rd-order sinc filter. The remaining stages are FIR filters in the FPGA that is part of the SEIS-AC. This FPGA can be configured to store the data at two sample rates. For velocity channels the data are stored at a sample rate of either 100 Hz, out of stage 1, or 20 Hz, out of stage 2. For position and SCIT channels the data are stored at a sample rate of either 1 Hz, out of stage 2, or 0.1 Hz, out of stage 4. In cases where both sample rates are needed, the E-Box is configured to store the higher sample rate and the lower sample rate is reproduced in the SEIS-FSW by an exact copy of the filter present in the E-Box. The coefficients for the FIR filters in the FPGA are stored in the non-volatile memory of the E-Box. Commands are available to upload 8 different sets of coefficients, each of which can hold up to 256 coefficients. The sets of coefficients identified as VEL_A and VEL_B are used in stages 1 and 2, respectively, for the VBB velocity channels, and the sets SP_A and SP_B are used in stages 1 and 2, respectively, for the SP velocity channels. For the position and SCIT channels, two stages are used to decimate by a factor of 10, because two stages use fewer resources than a single stage would. This also takes two sets of coefficients, which are POS_A and POS_B for the VBB position channels and SCIT_A and SCIT_B for the SCIT channel. Thus, stages 1 and 3 of the position and SCIT channels share the same set of coefficients, and so do stages 2 and 4 of these channels. The FIR filters are loaded with symmetric coefficient sequences to make them linear-phase filters, i.e. there is a constant delay for all frequencies in the pass band. The amount of this delay depends on the coefficients that are uploaded, but naturally the delay of the position and SCIT channels is much larger than that of the velocity channels. Temperature Offset Compensation For all temperatures except SCIT, an offset compensation is implemented in the FPGA in order to reduce the measurement error related to the electronics. For the SCIT a precise current source is implemented, together with a 4-wire measurement; hence, for the SCIT this compensation is not needed. For the VBB temperatures and the housekeeping temperature channels a precise \(1~\mbox{k}\Omega\) resistor is measured in addition to the temperature sensors, using the same acquisition electronics. There are separate acquisition electronics for the VBB temperatures and the housekeeping channels, and thus separate precise \(1~\mbox{k}\Omega\) resistors for both. The value measured on the precise \(1~\mbox{k}\Omega\) resistor is then used to perform the compensation via the following formula: $$ R_{\mathrm{T\_COMP}} = R_{\mathrm{T\_MEAS}} + (R_{\mathrm{1k\_REF}} - R_{\mathrm{1k\_MEAS}}). $$ This compensation is done on the digital values in the FPGA, i.e. before they are converted to physical values.
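The decimation chain for the velocity channels described above can be sketched as follows. The coefficients below are generic linear-phase FIR designs chosen only to illustrate the structure (500 Hz in, stage 1 to 100 Hz, stage 2 to 20 Hz); they are not the flight VEL_A/VEL_B or SP_A/SP_B sets, which are uploaded to the E-Box non-volatile memory.

```python
import numpy as np
from scipy import signal

fs_adc = 500.0                                    # Hz, seismic-channel rate out of the ADC
stage1 = signal.firwin(255, 40.0, fs=fs_adc)      # symmetric (linear-phase) FIR, pass band to ~40 Hz
stage2 = signal.firwin(255, 8.0, fs=fs_adc / 5)   # second stage, pass band to ~8 Hz

x = np.random.randn(10 * int(fs_adc))             # stand-in for 10 s of a 500 Hz velocity stream
vel_100 = signal.upfirdn(stage1, x, down=5)       # stage 1 output at 100 Hz (storable rate)
vel_20  = signal.upfirdn(stage2, vel_100, down=5) # stage 2 output at 20 Hz (storable rate)

# Symmetric coefficients give a constant group delay of (N - 1)/2 input samples per stage.
stage1_delay_s = (len(stage1) - 1) / 2 / fs_adc
```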
The value \(R_{\mathrm{T\_MEAS}}\) is what is acquired on the temperature sensor, the value \(R_{\mathrm{1k\_MEAS}}\) is measured on the precise \(1~\mbox{k}\Omega\) resistor, and \(R_{\mathrm{1k\_REF}}\) is a constant holding the digital value that corresponds to \(1~\mbox{k}\Omega\). The calculated \(R_{\mathrm{T\_COMP}}\) is the offset-compensated value that is normally stored in the data packets. If the value measured on the precise \(1~\mbox{k}\Omega\) resistor is not within \(\pm 6.25\%\) of \(R_{\mathrm{1k\_REF}}\), the compensation is not applied, \(R_{\mathrm{T\_MEAS}}\) is stored in the data packets and the corresponding data-invalid flag in the science packet is raised. It is implemented this way so that a faulty acquisition of the precise \(1~\mbox{k}\Omega\) resistor does not lead to corrupted data on the corresponding temperature channels. This offset compensation cancels the measurement error due to offsets in the electronics and resistances in the multiplexers and leads that are common between the sensor and the precise \(1~\mbox{k}\Omega\) resistor. Variations in the sense current, which constitute a gain error, are not fully canceled by the offset compensation, but they are still reduced. Figure 61 shows the reduction of the gain error by the offset compensation for the VBB temperatures. The gain error is exaggerated (5%) in this figure for illustrative reasons. The data of the seismic channels and the VBB temperature channels are packed in chunks that contain the samples acquired during 1 s. These data, together with a time stamp and a header that also includes the SEIS status flags, always fit in one page of the non-volatile memory, regardless of the selected sample rates. If no channels are acquired, only the status flags are stored. Thus, there is always one page of the non-volatile memory written each second that SEIS is active. A different area of the non-volatile memory is used for storing the housekeeping (HK) data. In contrast to Earth seismometers, for which the only acquired channels are the velocity and mass-position outputs, the health of SEIS will be monitored through the recording of the different voltage supplies, not only those delivered by the lander but also those provided by the SEIS DC/DC (see Sect. 4.1.4) to the SEIS sub-systems, as well as through the recording of these sub-system temperatures. A sample for each of these channels is acquired by performing a scan over all the channels. A scan over all the channels takes less than 1 s to finish, regardless of the selected data rate. The start of this scan is triggered on an internal 1 Hz signal, and it can be configured to store the result of the scan every 1 s, every 100 s or not at all. Other rates of housekeeping data may be achieved by decimating the data from the E-Box, e.g. by the SEIS-FSW, which is a process happening outside the E-Box. The way the housekeeping data are stored depends on the rate at which the data are to be stored. If there are data every 100 s, the data of a single scan are stored together with a time stamp and a header in one page of the non-volatile memory. If there are data every 1 s, the data of 10 scans are collected and stored, with a time stamp per scan and a header, in one page of the non-volatile memory. The reason for storing 10 scans together in one page is that the memory capacity reserved for housekeeping data would not be sufficient if one page were used every second.
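A compact sketch of this compensation and its validity check is given below; it mirrors the behaviour described above (working on raw digital values) but is not the actual FPGA implementation, and the names are illustrative.

```python
def compensate_temperature(r_t_meas, r_1k_meas, r_1k_ref):
    """Offset compensation on raw digital values, as described above.
    Returns (value_to_store, data_valid)."""
    if abs(r_1k_meas - r_1k_ref) > 0.0625 * r_1k_ref:
        # Reference-resistor reading outside the +/-6.25 % window: store the raw
        # value and raise the data-invalid flag for this channel.
        return r_t_meas, False
    return r_t_meas + (r_1k_ref - r_1k_meas), True
```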
On the other hand, if 10 scans taken 100 s apart had to be collected before a page is written, housekeeping data covering up to 900 s would not yet be stored in the non-volatile memory at any given time. If the instrument were shut down by the fault protection, up to 900 s of housekeeping data could then be lost, which could hamper the investigation of what has happened. Hence, different storage strategies are used for the different data rates. The data stored in the pages of the non-volatile memory are transferred to the lander computer when a request is received to do so. If the lander computer requests packets with seismic or housekeeping data, the E-Box creates exactly one packet from each memory page. Figure 62 shows the structure of these packets. The SP header provides a "marker" between the instrument data records and defines what type of packet is transferred; the memory page contents form a self-contained body of the packet; and at the end there are the EDAC statistics and a checksum. The EDAC statistics comprise a count of single errors, which are corrected, and double errors, which cannot be corrected. The checksum at the end is used on the lander computer to detect transmission errors. The memory areas for the seismic and housekeeping data are managed as circular buffers. Packets are transferred to the lander computer in the same order as they are stored in the E-Box. Figure 63 shows such a buffer and the pointers used to manage the data. The area colored red is the part of the memory that is not in use and may contain old data. The areas colored yellow and green contain data stored in the E-Box that can be retrieved by the lander computer. The data start pointer is set to the page that contains the oldest data and the data write pointer is set to the first page where new data can be written. When a new packet is stored, it is written at the page the data write pointer points to, and then this pointer is advanced. Data are deleted by advancing the data start pointer. If any of the pointers reaches the end of the area, the pointer is moved to the start of the area when advanced. There is also a read pointer that is set to the first page that has not yet been read. At the start of a packet transfer, the data read pointer is made equal to the data start pointer, and the read pointer is advanced each time a page is read. The read finishes when all requested packets are transferred or when the data read pointer has become equal to the data write pointer. The data read pointer is also used to delete data, as this is safer. When a number of packets is requested to be read, fewer packets may be available. If the same number were afterwards requested to be deleted, a packet that had been stored in the meantime would be deleted without ever having been read. Therefore, there is a command to delete the packets that have been read, which causes the data start pointer to be set equal to the data read pointer. Thus, with this command the yellow colored area would be deleted, i.e. become memory that is not in use. Time Stamps and Synchronization The SEIS instrument keeps its own time, which is independent of the lander computer. The SEIS time is called LOBT (Local On-Board Time). The time is kept by a 40-bit counter that counts \(1/1024~\mbox{s}\) (\(2^{-10}~\mbox{s}\)) ticks and can count for more than 34 years. This LOBT is used to provide the time stamps for the data that are stored in the E-Box. The data get their time stamp at the moment they are stored. Only one time stamp is provided for the data of all channels of a 1 s time frame.
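A toy model of the pointer management described above is sketched below. It is not flight software (overwrite protection and page formatting are omitted); it only illustrates the roles of the start, write and read pointers and of the delete-after-read command.

```python
class CircularPageBuffer:
    """Toy model of the circular page buffer described above."""
    def __init__(self, n_pages):
        self.pages = [None] * n_pages
        self.start = 0   # oldest stored page (start of the yellow/green area)
        self.write = 0   # next page to be written
        self.read  = 0   # next page to be transferred

    def _next(self, ptr):
        return (ptr + 1) % len(self.pages)   # wrap around at the end of the area

    def store(self, page):
        self.pages[self.write] = page
        self.write = self._next(self.write)

    def begin_transfer(self):
        self.read = self.start               # start reading from the oldest data

    def next_packet(self):
        if self.read == self.write:          # nothing left to transfer
            return None
        page, self.read = self.pages[self.read], self._next(self.read)
        return page

    def delete_read_packets(self):
        self.start = self.read               # delete exactly what has been transferred
```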
The data that become available within a second get a time stamp that is set to the start of that second. So, for the VBB temperatures and the housekeeping channels, the data are acquired at a time later than the time stamp, i.e. within the 1 s period that starts at the time given in the time stamp. For the seismic channels and the scientific temperature channel, the output data of the digital filters are stored. The group delays of the filters are not compensated, i.e. the signal is first delayed and then a time stamp is applied. That means that the actual signal is acquired before the time that is put in the time stamp. For velocity channels there are multiple samples in a 1 s time frame. The time stamp applies to the first sample and each subsequent sample is taken one sample period later. Only one time stamp is supplied in order to reduce the amount of data to be transferred. The difference between the time the signals are actually acquired and the time stamp applied is constant and known; thus a single time stamp supplies all the information needed. Rearranging the data such that a packet contains only data actually acquired in the same 1 s frame would require temporary storage of the signals with little or no delay. The group delay of the position signals at 0.1 Hz is several minutes, during which a lot of velocity data are acquired. The memory capacity of the FPGA used is not sufficient for this, and external memory would increase power consumption and volume. Hence, this is resolved in the SEIS-FSW, in which the data are rearranged and further processed anyway. The SEIS LOBT may drift a little with respect to the time kept in the lander, which is called SCT (SpaceCraft Time). These times are not aligned, as it was chosen to have continuous data out of the E-Box without jumps in time. Instead, time pairs of SCT and LOBT are generated, which contain both values measured at the same instant of time. In order to correlate data of SEIS with data from APSS, a 1PPS signal is supplied by the SEIS E-Box to the PAE (Payload Auxiliary Electronics of APSS). Figure 57 shows this connection. A rising edge of the 1PPS signal supplied to the PAE coincides with the instant at which a time stamp is generated on SEIS. The APSS supplies data that allow determining when this rising edge occurred, measured in APSS LOBT. Several Control Functions The E-Box SEIS-AC switches the power of the subsystems and provides several, mostly custom, interfaces with the other boards in the E-Box. This includes configuring the feedback electronics for the VBB sensors, on which different feedback modes and gains can be set. The status of these electronics can be read, which includes the electrical end-stop detection of the mass re-centering mechanism. For the SP sensors the feedback electronics can be set to different gains, re-centering can be started, and control signals are provided to switch the power of the 3 SP sensors separately. All these functions are made available through SEIS commands from the lander computer. An asynchronous serial interface is provided to the leveling motor driver electronics, allowing the leveling motors, the heater and the tilt sensors to be operated by writing and reading registers in these electronics. The stepper motors in the VBB sensors for the thermal compensator and mass re-centering are controlled by SEIS-AC, which includes keeping a record of their positions. Power for these motors is supplied by SEIS-AC and can be used for one motor at a time.
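The time-stamping rules above translate into a simple reconstruction on the ground side. The sketch below assumes that the constant, known group delay of the relevant filter is available; it is an illustration, not part of the SEIS-FSW.

```python
def sample_times(frame_lobt_ticks, n_samples, sample_rate_hz, group_delay_s):
    """Acquisition times for one 1 s data frame, following the rules described above:
    the LOBT stamp (in 1/1024 s ticks) applies to the first stored sample, later
    samples follow at the sample period, and the constant filter group delay is
    subtracted to recover when the signal was actually acquired."""
    t_first = frame_lobt_ticks / 1024.0 - group_delay_s
    return [t_first + k / sample_rate_hz for k in range(n_samples)]

# Example: a 100 Hz velocity frame stamped at LOBT tick 1_000_000 with a 0.5 s filter delay.
times = sample_times(1_000_000, 100, 100.0, 0.5)
```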
The operation of the leveling motor driver electronics and the VBB stepper motors is also done by commands from the lander computer to SEIS. A calibration waveform can be sent to the VBB and/or SP sensors. Calibration waveforms, one for the VBB sensors and one for the SP sensors, are stored in the non-volatile memory. A sensor calibration can be started, during which the VBB mode and acquisition configuration are changed. After the waveform has been output to the sensor for the requested number of repetitions, normal operation mode is resumed automatically. 5.4.3 Performance Signal Acquisition For the acquisition of the science channels, a 24-bit sigma-delta ADC (AD7712) has been chosen primarily for its excellent low-frequency noise performance, its low power consumption and its radiation robustness. Sigma-delta ADCs, like other integrating ADCs, do not contain any source of non-monotonicity and inherently offer no-missing-codes performance. The AD7712 achieves excellent linearity by the use of high-quality, on-chip silicon dioxide capacitors. The device also achieves low input drift through the use of chopper-stabilized techniques in its input stage, which greatly reduces the \(1/f\) low-frequency noise. A space-qualified external voltage reference (RH1021-2.5), with a temperature stability of \(< 5~\mbox{ppm}/\mbox{K}\), is used to achieve good stability in the harsh temperature environment on Mars and has a low-noise performance matching that of the ADC. The acquisition noise level at low frequency (\(<100~\mbox{mHz}\)) depends on the input signal amplitude, since the voltage reference noise scales with the acquired signal. For signals less than about 25% of the full-scale range (FSR), the voltage reference noise is attenuated below the intrinsic noise of the ADC (Fig. 64). The full-scale range is \(\pm 25~\mbox{V}\) for the VBBs (for both the VEL and POS outputs). It is \(\pm 12.5~\mbox{V}\) for the velocity output of the SP and \(\pm 5~\mbox{V}\) for the SP POS output. The full-scale range of the TSCI is \(1432.66~\Omega\) (\(0~\Omega\) to \(1432.66~\Omega\)), with the transfer function of the TSCI given in Sect. 6.4.1. The ADC intrinsic noise of \(3.8~\upmu\mbox{V}/\sqrt{\mbox{Hz}}\) is flat (white) down to 10 mHz, below which the ADC \(1/f\) noise becomes visible. Beyond about 8 Hz the ADC quantization noise starts to dominate over the ADC white noise. The FPGA FIR filter sharply attenuates this noise beyond 40 Hz (80% of the Nyquist frequency) for the high 100 Hz output data rate selection. For the low 20 Hz data rate, the filter completely attenuates the quantization noise, since the corner frequency is 8 Hz in this case. 5.5 Tether, Tether Storage Box and Load Shunt Assembly The Tether System has the task of bringing power and excitation waveforms from the Ebox in the Thermal Enclosure on the Lander to the Sensor Assembly deployed on the Martian ground, and taking output voltages from the Sensor Assembly back to the Ebox for digitization and storage. It must provide this connectivity while also permitting the deployment of the Sensor Assembly, surviving the forces involved in deployment of the Sensor Assembly, surviving the Martian environment for at least 1 Mars year and, after deployment, not exerting forces on the Sensor Assembly that would contaminate the seismic data.
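Tying the ADC noise figure quoted in Sect. 5.4.3 back to the SP gain choice mentioned in Sect. 5.2.4 gives a quick consistency check. The conversion below (dividing by the SP high gain and multiplying by \(2\pi f\) to go from velocity to acceleration noise) is an assumed back-of-the-envelope step, not a calculation taken from the text.

```python
import math

adc_white_noise = 3.8e-6    # V/sqrt(Hz), ADC intrinsic noise quoted above
sp_high_gain    = 27_000.0  # V/(m/s), SP high-gain setting (Sect. 5.2.4)
f               = 10.0      # Hz, frequency of the SP noise requirement

vel_noise   = adc_white_noise / sp_high_gain   # (m/s)/sqrt(Hz)
accel_noise = vel_noise * 2 * math.pi * f      # m/s^2/sqrt(Hz) at 10 Hz
print(f"digitizer acceleration noise at 10 Hz: {accel_noise:.1e} m/s^2/rtHz")
# ~8.8e-9, just below the 1e-8 m/s^2/sqrt(Hz) requirement, consistent with the gain choice.
```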
It is worth noting for comparison that a standard terrestrial seismometer (STS-2), which has all of its analog feedback in the sensor assembly, has 18 conductors going from the sensor assembly to the data logger, while SEIS requires over 200 conductors between the sensor assembly and the Ebox in the Thermal Enclosure in the Lander. All VBB and SP feedback cards are indeed located in the spacecraft warm box, in addition to all the oscillators used by their displacement transducers. The solution consists of the Tether itself, a Tether Storage Box (TSB) that holds the excess Tether up until deployment and allows the Tether to pay out during deployment, and a Load Shunt Assembly (LSA) that is strong during deployment and provides significant mechanical decoupling of the Tether from the Sensor Assembly after deployment. The Tether System is shown in its stowed (Fig. 21) and deployed (Fig. 65) configurations (see Fig. 12 for a picture taken during deployment tests). 5.5.2 Tether The tether comes in 4 segments: 3 of them consist of flat copper and Kapton belts (TSA-1, 2 and 3), and the remaining segment (TSA-4) is constructed of a normal wiring bundle. Each belt in the tether is made of 5 layers of Kapton interleaved with 4 layers of copper bonded together with acrylic adhesive, as shown in Fig. 66. This construction was chosen to minimize mass and because the belts are flexible in the out-of-plane direction, permitting deployment of the sensor assembly to the Martian ground. There is a pinning mass attached to the tether just outside the point where the WTS wall crosses the tether (Fig. 12). The pinning mass is intended to anchor the tether, greatly attenuating thermoelastic and mechanical noise from the lander side before it reaches the sensor assembly, and also provides a hook by which it is possible to adjust the geometry of the Load Shunt Assembly (LSA, described below) after the Frangibolt has opened the LSA. The camera on the arm will image the LSA after it has opened to check the geometry. If the geometry is not as desired, the arm and scoop will be used to move the pinning mass by up to a few centimeters. The field joint (see Fig. 14) permits removal and re-integration or connection of the Sensor Assembly during testing with specific Earth ground systems with minimal impact to the rest of the spacecraft. [Figure captions from the original layout appear here: records of a \(M_{w} = 7.8\) Solomon Islands teleseism and a local \(M_{w} = 3.9\) Brest (France) earthquake detected by VBB flight units in low-gain engineering mode at IPGP; records of a local \(M_{w} = 1.4\) Colton (California) event and a \(M_{w} = 7.7\) Northern Mariana Islands teleseism detected by the QM SP1; and overview figures of the 15°-tilted deployment configuration, the LVL design, the SEIS E-Box and its boards, the tether system in its stowed configuration, the RWEB/WTS and cradle, the Sensor Assembly integration, the transmission strategy, the deployment process (including the JPL/Caltech deployment-site evaluation tool, the four Frangibolt firings, grapple release, tether box and LSA opening, and WTS placement), and the VBB subsystem (evacuated container, inverted-pendulum principle, pivot, pendulum CAD views, flight and spare units, \(Q\) of VBB 13 versus pressure, and DCS noise).] 5.5.3 Tether Storage Box The Tether Storage Box resides under the deck of the Lander. It holds the excess tether during launch and cruise, allows some of the tether to be released during the Sensor Assembly deployment to the surface of Mars, and then its bottom opens to deposit the remaining tether on the ground. During deployment, the tether is held above the ground to avoid snag hazards. 5.5.4 Load Shunt Assembly The Load Shunt Assembly (LSA) was invented to isolate the sensor assembly from thermoelastic expansion and contraction in the tether. Standard terrestrial broadband seismometer installations often arrange the cable to the sensor head to loop around the sensor head or otherwise take a serpentine path leading into it. This minimizes any forces that the cable might exert on the sensor head as a result of thermoelastic expansion and contraction in the cable. Although such configurations were considered initially, they were discarded due to a combination of mass and complications associated with deployment. In their place, we have the LSA. Prior to deployment on Mars the LSA is held closed with a Frangibolt, a bolt that will be broken to release the LSA after the seismometer has been deployed to the surface of Mars. This permits the LSA to be strong during deployment, yet weak after it has been opened.
There are two primary performance requirements that the presence of the LSA affects: the overall SEIS thermoelastic tilt shall not exceed \(2\times 10^{-5}~\mbox{deg}/\mbox{K}\) for a variation of temperature occurring under the TBK/RWEB, and SEIS shall be able to transmit frequencies from DC up to 55 Hz to the VBB sensor sphere and from DC to 55 Hz to the SP sensor box without any significant amplification (\(Q<25\)). The thermoelastic tilt requirement is the reason for the existence of the LSA. Analysis showed that if the tether came straight across the ground and into the Sensor Assembly, this requirement would be violated by orders of magnitude. The fact that gravity on Mars is about \(3/8\) that of Earth implies that it is impossible to replicate on Earth the combination of the interaction of the Sensor Assembly feet with the Martian regolith, the normal forces on the tether and Load Shunt Assembly, and the linear and rotational inertia of the Sensor Assembly. Therefore, it is impossible to verify compliance of the system with the subsystem requirements solely via tests on Earth. In the face of this situation, we have relied on unit-level tests, Finite Element modeling with two independent models, and formal verification methods (Uncertainty Quantification and Phenomena Identification and Ranking Table) to verify compliance with the thermoelastic requirement. Figure 67 provides some information on the primary thermal and elastic Finite Element Models with their meshing configuration. The most constraining situation for the thermoelastic case occurs at 0.1 Hz, so inertial effects are relatively small compared to the tilt signal. Below 10 Hz, the open LSA has two natural frequencies of about 5 Hz and 8 Hz, with low \(Q\) values of about 2.5 and 4 respectively, under Earth's gravity and zero-slope conditions. The exact frequencies are influenced by details of the geometry after deployment, and at these frequencies inertial effects are not negligible. The presence of these resonances is important for the second requirement above, to transmit motions from 0 to 55 Hz. Initial experiments have shown that the acceleration amplitude at 3 Hz between the outer portion of the LSA and seismometers on the sensor assembly is reduced by a factor of about \(10^{4}\) when the sensor assembly is sitting on coarse sand under Earth's gravity. More extensive characterization of the transfer function of the LVL and open LSA sitting on Martian regolith simulant in Earth gravity is being carried out and will be reported in a later publication. 5.6 Evacuated Container (EC), RWEB and WTS The SEIS instrument assembly includes a series of structures designed to mechanically couple the seismometer sensors to the Martian regolith while hermetically, thermally and mechanically isolating the core VBB and SP pendulums from the surrounding Mars atmospheric and thermal environment. The Evacuated Container (EC) contains the VBB sensors, the Remote Warm Enclosure Box (RWEB) contains the EC and the rest of the sensor assembly, and the Wind and Thermal Shield (WTS) is placed over the whole sensor assembly as the final layer of protection. The EC (Fig. 68) is a 20 cm diameter oblate spheroidal welded structure, which accommodates the vacuum environment required to minimize aerodynamic damping of the VBB oscillations (see Sect. 5.1.3) and to provide the necessary thermal protection for the VBB sensors. The vacuum environment is part of the first layer of thermal insulation from the outside environment.
In addition to being leak-tight to UHV (Ultra High Vacuum), the EC includes six passive gas-absorption canisters to absorb any H2O, CO or CO2 which might outgas or leak into the EC over its life, and two SAES-coated titanium plates ("getters") to capture H2. Most of the internal EC structures are gold-coated titanium to achieve the desired optical and conductive properties necessary to thermally isolate the VBBs. Finally, the EC includes six hermetic electrical feedthroughs and a \(1/2''\) copper exhaust tube (queusot), which is cold-welded shut at the end of assembly processing to achieve a hermetic seal. The second layer of thermal insulation around the core EC structure is the \(34~\mbox{cm wide} \times 21~\mbox{cm tall}\) RWEB (Fig. 69). Constructed from titanium and mylar, the RWEB forms a 1–2 cm CO2-filled insulating gap around the instrument. The RWEB provides a stagnant layer of trapped CO2 gas and is designed to prevent natural convection cells from developing. All of the internal layers are coated with low-IR-emissivity materials. The external surface is bead-blasted Kapton to achieve an absorptivity-to-emissivity ratio which avoids excessive heating/cooling of the instrument while it is directly exposed to the Mars ambient environment after landing. The final layer of thermal/mechanical insulation comes from the \(72~\mbox{cm diameter} \times 35~\mbox{cm tall}\) WTS (Fig. 70). The dome, legs and skirt have all been designed to protect the SEIS instrument from mechanical vibrations and tilts induced by Mars winds in excess of \(75~\mbox{m}/\mbox{s}\). The reinforced Kapton and CRES/Aluminum chainmail skirt has also been designed to conform and seal around various terrain obstacles. As with the RWEB, the dome internal surfaces are coated with low-IR-emissivity aluminum and the external surface with SiO to avoid excessive heating/cooling. Together, the WTS, the RWEB and the vacuum inside the Sphere provide a very significant decoupling of the instrument from the Martian environment. Specifically, a time constant of at least 11 hours is required between the VBBs and the Martian atmosphere in terms of 2nd-order attenuation. This is met by multiplying a 2-hour time constant between the VBBs and the outside of the Sphere with the 5.5-hour time constant between the outside of the Sphere (through the RWEB and WTS) and the Martian environment. This requirement led to specific thermal design specifications of at least \(4.0~\mbox{K}/\mbox{W}\) thermal resistance through the RWEB at cold temperatures (173 K) and convection heat-transfer coefficients through the WTS skirt of no more than \(0.28~\mbox{W}/\mbox{m}^{2}\,\mbox{K}\), as well as the need to minimize any pressure build-up within the Sphere. 5.6.2 Getters Description Zeolite-loaded aerogel (ZLA) getters were devised, developed and produced to maintain a vacuum of \(<0.01~\mbox{mbar}\) in the SEIS instrument, which facilitates nominal functioning of the VBBs. The total outgassing in the fully populated EC was estimated at roughly \(10^{-7}~\mbox{mbar}\,\mbox{L}/\mbox{s}\); hence the ZLA composition was designed to cope with that gas load. The ZLA getters are light compound materials (\(\sim 0.2~\mbox{g}/\mbox{cm}^{3}\)) with very high surface area (\(>500~\mbox{m}^{2}/\mbox{g}\)), loaded with \(2\mbox{--}5~\upmu\mbox{m}\) diameter zeolite particles. They were prepared using a modified two-step silica aerogel process and are loaded with fumed silica and zeolite particles in the liquid aerogel precursor.
The precursor composition was chosen such that the zeolite particles stay homogeneously dispersed during the gelation process. The aerogel precursor with the dispersed zeolite particles forms a wet gel, locking the zeolite particles into the silica network formed. The material is then dried super-critically to produce a rigid silica network and to minimize shrinkage. The zeolites (13X faujasite) were ion-exchanged (Na+, Ca+, Mg+) to enhance the adsorption characteristics for water, CO2 and volatile organics, which were detected in the SEIS outgassing spectra. The liquid precursor was cast in six Ti-6Al-4V cylinders, designed to utilize the available space in the instrument without interference. A total of \(33~\mbox{cm}^{3}\) ZLA was super-critically dried in the cylinders, then outgassed and sealed with a lid. To avert the risk of particle transport, the 1 cm diameter opening on the lid provided molecular access to the getters through \(1~\upmu\mbox{m}\) filters without noticeably affecting the adsorption rate. The silica aerogel provides a mesoporous network, in which the zeolite particles were dispersed, providing excellent molecular conductance to the zeolite particles. This dramatically increases the effectiveness of the zeolite adsorption in comparison with their standard pellet form applications. Experimental verification of the ZLA performance was done by using water vapor as a proxy for the instrument outgassing. It was demonstrated that these materials are capable of maintaining \(\sim 10^{-3}~\mbox{mbar}\) vacuum over extended periods of time (months to years at room temperature), meeting the engineering requirement with a large margin. The ZLA adsorbance will increase dramatically at Mars temperatures, facilitating a pressure level below \(10^{-5}~\mbox{mbar}\) throughout the duration of the mission. The residual outgassing risk not addressed by the ZLA comes from hydrogen outgassing from steel and Ti alloys. All EC Ti-6Al-4V components were outgassed in vacuum at \(320^{\circ}\mbox{C}\) to reduce the outgassing by orders of magnitude. In addition, H2 getters were implemented by applying the standard SAES Rel-Hy deposition process on both sides of two Ti disks with large surface area (minimum \(80~\mbox{cm}^{2}\) total). The disks were then welded to the inside of each shell. The manufacturer specification of \(3~\mbox{cm}^{3}\,\mbox{Torr}/\mbox{cm}^{2}\) provides orders of magnitude larger absorption than the required capacity. Due to the low emissivity of the Rel-Hy film, 0.05–0.06, the getters also served as a redundant thermal shield. 5.6.3 Feedthroughs and Queusot Overview The EC was designed with 6 high vacuum electrical feedthroughs, three 2-pin feedthroughs and three 37-pin feedthroughs to provide power and signal paths to the VBBs (Fig. 71). The strict leak rate requirement (\(10^{-10}~\mbox{mbar}\,\mbox{L}/\mbox{s}\) helium standard leak rate) for a wide range of temperatures (\(-120^{\circ}\mbox{C} \mbox{ to } {+}120^{\circ}\mbox{C}\)) led to the development and qualification of these unique feedthroughs custom built for InSight by Solid Sealing Technology Inc. Both types of feedthroughs share the same key technology: each individual pin is electrically insulated from the others using BPS glass (borophosphosilicate glass) and held within a stainless-steel body (304L). The pins and the feedthrough body are both made of stainless steel 304L which generates a diffusion bond with the glass when chromium from the stainless diffuses into the BPS glass. 
During the manufacturing process, in the vacuum furnace the glass becomes solid at around \(500^{\circ}\mbox{C}\) and remains in compression below this temperature, increasing robustness and decreasing the chances of crack propagation compared to standard alumina feedthroughs. The considerable benefit of this technology compared to traditional alumina insulator feedthroughs is that the insulator is continuously maintained under compression over the entire temperature range described above, and that the stresses and deformations due to differences in the thermal expansion coefficients are limited to a very small radius (that of the individual insulators). This compares to a single alumina design with 37 holes, which would have an outer braze with a considerably larger diameter, leading to more stress due to thermal dilatation coefficient mismatches. The stainless transition ring is brazed to a Ti-6Al-4V flange to allow the feedthroughs to be Electron Beam Welded (EBW) to the titanium EC. This transition ring is vacuum brazed using CuSil in a separate process. Later the transition ring is EBW to the stainless-steel body which contains the pins. The EC is sealed using a \(1/2''\) custom pinch-off tube ("queusot"). The pinch-off tube assembly was designed with a standard CF16 stainless-steel ConFlat flange, which later connected to the vacuum ground support equipment, a \(1/2''\) oxygen-free high-conductivity copper tube and a Ti-6Al-4V adaptor where the assembly is EBW to the EC. The assembly process included two vacuum brazes in order to achieve a leak tightness of \(10^{-10}~\mbox{mbar}\,\mbox{L}/\mbox{s}\) of He Std. The welds of the pinch-off tube to the EC, as well as those of the feedthroughs, were performed from the inside of the EC to minimize possible trapped volumes. The sealing mechanism for the pinch-off tube was a customized hydraulic pinch-off tool provided by Custom Products & Services Inc., Model HY-500, set at 5000 psi. Once the pinch-off tool closed its jaws on the copper, the tube plastically deformed until the cold weld took place (Fig. 72), sealing the EC. The feedthroughs and pinch-off tube were qualified at the part level, successfully undergoing multiple thermal cycles, vibration, shock and Packaging Qualification and Verification (PQV). Extensive helium leak tests were performed during the process. All qualification programs were completed. 5.6.4 EC Thermal Design Thermal isolation of the SEIS instrument from the Martian environment is enhanced by the vacuum inside the Evacuated Container (EC). Initial studies of the heat transfer paths within the EC showed that even conduction through low-pressure gas within the EC could be large compared to the thermal conduction through the titanium structure. The dominant conduction path is through the coax and flexible ribbon cables that lead from the VBB to the connectors in the Sphere wall; conduction through the Inner Plate and up through the flexures to the shell was the secondary path. Figure 73 shows the sphere heat flow diagram under beginning-of-life pressure conditions. Without gas conduction present, radiation accounts for 46% of the heat lost from a VBB and conduction through solid structure accounts for 54%. If gas pressure builds up inside the EC to the end-of-life allowable value of \(10^{-1}~\mbox{mbar}\), gas conduction will have passed through the free molecular regime and will be in the transition regime between free molecular and full continuum flow.
Under these conditions, gas conduction is the dominant path for heat loss from the VBB and may account for 57% of the total, reducing the proportion of conduction through solid structure to 35%. Radiation then accounts for only 7.6%. When this gas is present, 36% of the heat lost from a VBB is lost directly from the many surfaces of the VBB assembly straight to the EC walls. Table 13 below highlights these parameters, while the diagram of the EC shows the typical radiation and conduction paths in vacuum, assuming a 5.5 mW dissipation of the VBB. These studies highlighted the critical need to keep the EC leak tight and to minimize the buildup of outgassing products. Table 13 Sphere heat flow summary with beginning-of-life pressure inside the sphere (conduction and radiation heat flows from the VBBs, getter cables, getters, thermal shield and inner plate, expressed as percentages of the total flow to the Sphere; the numerical values are not reproduced here). 5.7 Cradle The Cradle subsystem connects the spacecraft at its base with the Sphere-VBB and LVL subsystems at its top (Fig. 74). It has two functions: to reduce the vibration levels and to fix the Sensor Assembly on the lander deck during launch, cruise and landing, and to unlock it during deployment prior to the robotic arm deployment. It consists of 3 nearly identical turrets at 120° around the SEIS Sensor Assembly. A set of 3 dampers is used to decrease the mechanical loads seen by the VBB and SP sensors (random vibrations during launch and shocks during release mechanism activation (Fig. 75)). Its active part is made of a silicone-based elastomer, the geometry and material of which are tuned to provide the required characteristics. As the center of gravity does not lie precisely at the geometrical center of the 3 dampers (mainly due to the LSA—Load Shunt Assembly), the damper material closest to the LSA has been tuned to be slightly stiffer in order to achieve optimal performance. A grounding strap is also integrated between the inner and outer damper in order to have electrical continuity in the deployed configuration. The Cradle also releases the deployed part of the Sensor Assembly once on Mars. The separation plane is at the level of the lower thermal blanket in order to avoid catch hazards during deployment. A cup-cone feature ensures that the Sensor Assembly remains in place after release (the deck inclination can reach up to 15°). The Launch Lock assembly is built around an off-the-shelf FC4 Frangibolt. The Frangibolt is an SMA (Shape Memory Alloy) device in the shape of a tube which extends in length when heated over its transition temperature (around 90°C). It is mounted around the Ti-6Al-4V fastener which fixes the deployed part of the Cradle to the non-deployed part. Upon actuation of the Frangibolt, the fastener breaks at a notched portion which is set at the separation interface. Washers are mounted on both sides of the Frangibolt to spread the loads. An additional vented washer is mounted at the separation interface to act as a thermal barrier and keep the heating energy in the SMA material. An enclosure acts as a bolt catcher and as a radiative screen for the Frangibolt. A honeycomb crusher is integrated on the bottom of the enclosure to absorb the kinetic energy of the broken fastener. The Frangibolt has nominal and redundant heater circuits that are connected to the unregulated load switches of the spacecraft. Depending on the spacecraft bus voltage, between 60 W and 110 W of heating power is applied during actuation.
A Pt1000 RTD integrated in the Frangibolt provides temperature feedback. Actuation times are between 15 s and 110 s, depending on the firing circuit, starting temperature and bus voltage. 6 Noise and Transfer Functions Measurement Strategy 6.1 Measurement Strategy, Setup and Testing Sites 6.1.1 Measurement Strategy The self-noise of the SEIS seismometers (see Table 4 and Sect. 3.4) is a key parameter because it defines the smallest signal detectable on Mars. It is very likely that during the daytime this noise floor will be much lower than the recorded signal, which will contain contributions from the environment noise and the station noise, the sum of the two being described in the SEIS noise model (Mimoun et al. 2017) and in Sect. 3.4. Possibly, an additional noise might also be recorded, associated with Rayleigh waves trapped in the low-velocity zone of the subsurface, similar to Earth observations (e.g. Withers et al. 1996). It is however likely, as indicated in Fig. 6, that during the quietest periods the recorded noise might be closer to the self-noise limit, especially in the frequency band 0.02–5 Hz, which remains far from the temperature noise that is likely the most limiting noise source at long periods for installations without surface temperature control. The SEIS noise floors are very low and their measurement on Earth is very challenging due to the natural and anthropogenic background noise, the limited amount of time for tests made available by the development schedule, the limited number of models available and, for the VBBs and the vertical SPs, the fact that the sensors cannot be operated under Earth gravity in their nominal configuration. For comparison, the incoherent noise between two Streckeisen STS-2 seismometers in a vault installation is close to \(2\times 10^{-10}~\mbox{m}\,\mbox{s}^{-2}/\sqrt{\mbox{Hz}}\) between 0.05 Hz and 0.5 Hz on the vertical axis while almost reaching \(10^{-9}~\mbox{m}/\mbox{s}^{2}/\sqrt{\mbox{Hz}}\) at 100 s for the horizontal components (Kinemetrics 2017). Since the seismic noise on Earth is present everywhere and remains largely above the VBB requirement between 0.05 Hz and 1 Hz, it is not possible to directly assess compliance with this requirement. At longer periods, differences in installation can also generate noise levels larger than the VBB requirements on horizontal components, which are rarely much below \(10^{-8}~\mbox{m}\,\mbox{s}^{-2}/\sqrt{\mbox{Hz}}\) at 100 s (e.g. Beauduin et al. 1996). Another example is shown in Fig. 91, from the tests performed at BFO with 3 STS-2s on the same seismic pillar. As a result, we tested the seismometers with the two-instrument coincidence testing technique (e.g. Holcomb 1989; Ringler et al. 2011), while three-channel correlations will be considered for further analysis (e.g. Sleeman et al. 2006). To obtain good results with this method, three constraints were respected: (i) the self-noise of the reference seismometer used to measure the ambient noise had to be lower than that of the sensors to be characterized, and we used STS-2s; we additionally used Trillium compacts to better map the noise at different locations with respect to the SEIS instrument; (ii) we performed the tests in low-noise environments by removing as much as possible all the potential sources of noise; for flight hardware, this was only possible in selected clean rooms at the SEIS level and in urban seismic vaults at the sensor level, while Engineering Model tests were made in low-noise seismic vaults;
(iii) the coherence between the reference seismometer and the seismometers under test was optimized and the quietest periods were used for an efficient incoherent-noise estimation. The processing of the test data for estimating sensor noise and transfer function has been performed independently by the Imperial College (ICL), ISAE, IPGP and CNES teams, the CNES algorithm being used only for double checking during the last noise tests. The teams exchanged their codes and performed cross-validation of their software. However, some differences in the processing remain between these approaches: the alignment of the test sensor axis with the reference sensor axis is done in the time domain by ICL and in the frequency domain by the others, and the estimates of the noise level are sometimes different. On this last point, the ICL results appear to be able to remove the micro-seismic noise peaks remaining in the ISAE estimates, probably because of a different way of handling the cross-axis sensitivity of the sensors. The noise between the POS and VEL outputs of the VBBs was also measured and compared to the noise model. The transfer functions were estimated either relative to reference sensors or by coil calibration. They are converted to absolute transfer functions by using the calibration of the reference sensors. 6.1.2 Experimental Setup The typical setup for noise measurements consists of recording seismic signals with several reference seismometers as close as possible to the instrument. SEIS was put on a goniometer to simulate the Mars gravity (Fig. 76 for SEIS without TBK and Fig. 81b for SEIS with RWEB) and allow the VBB and vertical SP to be balanced on Earth. This goniometer was likely the main source of non-coherent noise, which degraded the noise estimates. A large, heavy aluminum plate placed on the cleanroom floor was used to maximize the coherency between all these sensors, and a large thermal shield covered the whole setup to minimize effects at long periods (Fig. 77). The thermal shield was decoupled from the plate to avoid additional noise due to the drum effect, and each reference seismometer under this dome received its own thin thermal protection to prevent convection. The reference seismometers used for the noise tests were 2 (or 3) STS-2s from Streckeisen and 2 Trillium compacts from Nanometrics (Fig. 78). Even though the Trillium compact self-noise is too high for the VBB assessment, they are still useful for the SP self-noise assessment. The STS-2s were connected to a 6-channel/26-bit Q330HR data acquisition unit, while a 6-channel/24-bit Centaur acquisition unit recorded the Trilliums. In addition, environment parameters such as pressure, magnetic field and temperature were sampled and recorded at 100 Hz, the same sampling rate as the velocity output data of both SEIS and the reference seismometers. Note that the pressure sensor was an MB2005 microbarometer (Larsonnier and Miller 2011) able to measure small pressure fluctuations around the ambient pressure in a 20 Hz bandwidth. The synchronization of all the data acquisition units was possible using a GPS repeater in the cleanroom. An external active antenna receives the GPS signal, amplifies it and re-radiates it in the cleanroom through a passive antenna. Nevertheless, even though all the reference acquisition units were synchronized by GPS, this was not the case for the EBOX, which time-tags the VBB and SP acquisitions.
Several different methods were tested to achieve the synchronization required for post-processing: (i) the EBOX clock was updated from a GPS-synchronized PC at the beginning of each test; nevertheless, this update was asynchronous and did not ensure an accuracy better than 0.5 s; (ii) calibrated shocks were applied to the plate at the beginning and at the end of each acquisition period; these shocks were seen by both the reference seismometers and the SEIS sensors and facilitated the alignment of the time series; (iii) an additional box (BOB PAE Synchro—see Fig. 79) was connected to the EBOX to safely record the 1PPS signal provided by the EBOX clock to the external APSS sensors. All these methods contributed to improving the synchronization of the records but, in the end, post-processing based on the coherency method provided the final correction before the noise estimation process. Finally, all data acquisition units were connected to a local network for remote control and data collection. 6.1.3 Test Sites Most of the tests with the full SEIS instrument were performed with the flight model in the cleanroom at CNES. Nevertheless, despite all the efforts to remove anthropogenic noise sources (air conditioning, fans, lights…) and to prevent excess noise (remote control, tests performed during nights and weekends…), a large portion of background noise remained. This situation prevented us from verifying the self-noise requirement outside the 1–10 s band for the VBBs. Nevertheless, self-noise compliance was essentially fully demonstrated for the SPs over the full band, because their requirement level is a factor of 10 above that of the VBBs. The last tests, performed in ATLO in the Lockheed Martin facilities, gave the worst results because major sources of noise, such as the air conditioning, had not been switched off in order to prevent risks to the InSight lander flight model. In order to demonstrate that the SEIS seismometers meet the requirement over the full frequency band by design, a dedicated test campaign was carried out at the Black Forest Observatory (BFO), Germany. This is the quietest facility in Europe, and likely in the world, dedicated to long-period seismic measurements. However, the seismic vault is at the bottom of an old silver mine, with a humid and dusty environment not compatible with space instruments. For that reason, the tests were done with the qualification model, with great care. A first test was performed in March 2017 to complete the SP noise assessment and proved that these sensors meet the requirement in the full band with margin. Because no VBB in a sphere was available anymore for this test, we had to develop a dedicated small vacuum chamber able to receive the VBB#11, in Earth configuration, on the SEIS leveling system (cuvinette). In addition, a specific tent to control humidity in the same range as the cleanroom (\(55 \pm 10\%\)) had been built, with passive humidity control based on desiccant (to avoid noise induced by a standard dehumidifier, see Fig. 80). This new test campaign at BFO occurred in March 2018.
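As an illustration of the two-instrument coincidence estimate used throughout Sect. 6.1 (Holcomb 1989), the following minimal sketch computes an upper bound on the test-sensor self-noise from its loss of coherence with a quieter reference. It assumes synchronized, axis-aligned records in the same physical units; the function name and the synthetic data are illustrative only and are not part of the actual processing codes.

```python
import numpy as np
from scipy.signal import welch, csd

def coincidence_self_noise(test, ref, fs, nperseg=2**14):
    """Upper bound on the test-sensor self-noise from the loss of
    coherence with a quieter reference sensor (two-instrument method).
    Assumes both records are synchronized, aligned along the same axis
    and expressed in the same physical units (m/s^2)."""
    f, p_tt = welch(test, fs=fs, nperseg=nperseg)      # PSD of sensor under test
    _, p_rr = welch(ref, fs=fs, nperseg=nperseg)       # PSD of reference sensor
    _, p_tr = csd(test, ref, fs=fs, nperseg=nperseg)   # cross-PSD
    coh2 = np.abs(p_tr) ** 2 / (p_tt * p_rr)           # magnitude-squared coherence
    self_noise_psd = p_tt * (1.0 - coh2)               # incoherent part of the test PSD
    return f, np.sqrt(self_noise_psd)                  # amplitude spectral density

# Illustrative use with synthetic data: a common ground signal plus
# independent instrument noise on each channel (all values arbitrary).
fs = 100.0
rng = np.random.default_rng(0)
ground = rng.standard_normal(600_000) * 1e-8
test = ground + rng.standard_normal(600_000) * 1e-9
ref = ground + rng.standard_normal(600_000) * 1e-10
f, asd = coincidence_self_noise(test, ref, fs)
```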
6.2 VBBs Results 6.2.1 Earth Operation of Flight Model VBBs To be operational, the torque exerted by gravity on the pendulum must be equal and opposite to the torque exerted by the spring at equilibrium, i.e. $$ M_{0} = m_{\mathrm{Earth}} [ \vec{D}_{g, \mathrm{Earth}} \times \vec{g}_{\mathrm{Earth}} ]\cdot \vec{n}_{\mathrm{VBB}} = m gD _{g} \sin ( \alpha ), $$ where \(m_{\mathrm{Earth}}\) and \(\vec{D}_{g, \mathrm{Earth}}\) are the pendulum mass and center-of-gravity position in the Earth configuration, \(\vec{g}_{\mathrm{Earth}} \) the Earth gravity vector, \(M_{0}\) the moment of the spring at equilibrium and \(\vec{n}_{\mathrm{VBB}}\) the VBB sensing direction. This therefore allows two testing strategies. The first one acts on the product \(g_{\mathrm{Earth}} m_{\mathrm{Earth}} \vec{D}_{g, \mathrm{Earth}}\) and consists in reducing its norm to the Mars value by adding a mass on the opposite side of the pivot, in order to compensate for the larger gravity in such a way that $$ g_{\mathrm{Earth}} m_{\mathrm{Earth}} D_{g, \mathrm{Earth}} = g_{\mathrm{Mars}} m_{\mathrm{Mars}} D_{g, \mathrm{Mars}}. $$ The drawback of this additional mass is that the mechanical gain is reduced by the ratio of Earth to Mars gravity, and therefore by a factor of about 2.65. Tests in this configuration have been made either with the early prototype or, with the EM model, in the Black Forest Observatory. The second strategy is to tilt the sensor, with the aid of a goniometer (Fig. 81), in order to get an Earth gravity projection equal to the Mars one. This can be achieved either by a 19° tilt of the plane defined by the pivot direction and \(\vec{D}_{g, \mathrm{Earth}}\) or by tilting the VBB on its side. Two tilt directions were used: a \(68\mbox{--}70^{\circ}\) tilt, where the tilt direction is in the plane defined by the pivot and gravity, and a \(32\mbox{--}34^{\circ}\) tilt, where the tilt direction is in a plane at a 60° angle to the pivot, which enables two VBBs to be tested on Earth simultaneously. In all these tilted configurations, the precise value of the tilt depends on the recentering mass, which explains the angular range. In all these configurations, care must however be taken regarding the restoring moment of the VBBs because of the inverted pendulum design. This can be expressed as $$\begin{aligned} M =& - \biggl[ c - \frac{\partial [ m_{\mathrm{Earth}} [ \vec{D}_{g, \mathrm{Earth}} \times \vec{g}_{\mathrm{Earth}} ]\cdot \vec{n}_{\mathrm{VBB}} ]}{\partial \alpha _{\mathrm{vbb}}} + p \biggr] \delta \alpha _{\mathrm{vbb}} \\ &{}+ \frac{\partial [ m_{\mathrm{Earth}} [ \vec{D}_{g, \mathrm{Earth}} \times \vec{g}_{\mathrm{Earth}} ]\cdot \vec{n}_{\mathrm{VBB}} ]}{\partial \beta _{\mathrm{gonio}}} \delta \beta _{\mathrm{gonio}} \end{aligned}$$ where \(\delta \alpha _{\mathrm{vbb}}\) is the angular deviation of the pendulum about the pivot axis with respect to its equilibrium position, and \(\delta \beta _{\mathrm{gonio}} \) is the rotation of the goniometer, corresponding to a rotation in the direction of the center-of-mass position \(\vec{D}_{g, \mathrm{Earth}}\). The first part of Eq.
(12) shows that the natural frequency of the VBB depends on the tilt and can even become imaginary, when $$ c + p < \frac{\partial [ m_{\mathrm{Earth}} [ \vec{D} _{g, \mathrm{Earth}} \times \vec{g}_{\mathrm{Earth}} ]\cdot \vec{n}_{\mathrm{VBB}} ]}{\partial \alpha _{\mathrm{vbb}}}, $$ and the second part shows that small rotations of the goniometer transverse to the VBB sensing axis, either due to creep in the goniometer or to ground micro-seismic noise, generate a moment change and therefore a decrease of the signal-to-noise ratio. Table 14 summarizes the different configurations. Note that for the 19° and 32° configurations the self-noise increases, due to a larger frequency modulus and a smaller mechanical gain, while in the 68° configuration an additional noise source appears due to the transverse sensitivity. All these consequences of testing under Earth gravity made the performance tests somewhat challenging, especially with respect to the sensitivity at long periods. Table 14 Frequencies and sensitivity directions of the VBB, including the transverse mode in the tilted configurations. Only the \(0^{\circ}\) (on Mars) and \(68^{\circ}\) (on Earth) configurations have the same frequency and are stable. Both the \(19^{\circ}\) (Earth) and \(32^{\circ}\) (Earth) configurations are unstable, with large imaginary frequencies. Only the \(19^{\circ}\) (Earth) and Mars nominal configurations have zero transverse sensitivity, while the \(68^{\circ}\) and \(32^{\circ}\) configurations have a growing transverse sensitivity. The TT angle provides the azimuth of the sensitivity of the VBB. In all cases, the feedback recovers the instability. (The table lists, for the \(0^{\circ}\) (Mars), \(19^{\circ}\), \(32^{\circ}\) and \(68^{\circ}\) (Earth) configurations, the longitudinal stiffness in N m/radian, the transverse stiffness, the transverse frequency, the TT cosine and the noise at 100 s in nm/s/s/\(\sqrt{\mbox{Hz}}\); the numerical values are not reproduced here.) 6.2.2 VBB Transfer Function Calibrations on Earth The transfer functions of the VBB were therefore different for each test, as the feedback is not strong enough to fully erase the large variation of the instrument frequency between these configurations (Fig. 82). All calibrations have therefore been made with respect to the Instrument Model, which is able to correct for these operational configuration differences. The full Flight unit, with flight tether and flight Ebox, has been calibrated and tested over 4 weeks during the project development. Two of these weeks took place prior to the delivery of the instrument, in one of the CNES cleanrooms in Toulouse, France, while the other two weeks of testing took place in Denver, in the LMA facility. In both cases, tests were made before and after environmental tests. To compensate for the increased gravity on Earth relative to Mars, the entire SEIS instrument package was tilted such that every night at least one VBB (for the 68° tilt) or two (for the 32° tilt) sensor(s) were operational. At the end of each night a frequency calibration of the operational VBB sensor was performed. Two types of calibrations could thus be compared: the built-in one and the one made by comparing the VBB signals to the reference instrument through the two-instrument coincidence technique. The dispersion of the VBB gain measurements during coil calibration ranged from 4–6% for the VEL gain and 0.7–0.9% for the POS gain. Full calibration information is given in the SEED dataless volume associated with the VBB sensors; only part of the calibration information is provided in this section.
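As a quick numerical check of the two Earth-testing strategies described in Sect. 6.2.1, the short script below reproduces the gain-reduction factor and the tilt angle from the nominal surface gravities. The values \(g_{\mathrm{Earth}} \approx 9.81~\mbox{m}/\mbox{s}^{2}\) and \(g_{\mathrm{Mars}} \approx 3.71~\mbox{m}/\mbox{s}^{2}\) are assumptions of this sketch, which is purely illustrative.

```python
import numpy as np

g_earth, g_mars = 9.81, 3.71   # m/s^2, assumed nominal surface values

# Mass-compensation strategy: the mechanical gain is reduced by g_Earth/g_Mars.
gain_reduction = g_earth / g_mars
print(f"gain reduction factor: {gain_reduction:.2f}")   # ~2.64, cf. the 2.65 quoted above

# Tilt strategy: tilt for which the Earth-gravity projection along the
# sensitivity direction equals Mars gravity.
tilt = np.degrees(np.arccos(g_mars / g_earth))
print(f"required tilt: {tilt:.1f} deg")                 # ~67.8 deg, within the 68-70 deg range
```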
VBB Coil Calibration The VBB seismometers and the EBOX were built to provide the possibility of performing relative calibrations once deployed on the surface of Mars. To that end, the EBOX can generate a well-defined calibration current which can be fed into the calibration coils integrated in each of the VBB sensors. By doing so, a force proportional to the calibration current and the coil efficiency is exerted on the proof mass of the seismometer, simulating a ground acceleration. From the known electric current and the measured response of the seismometer it is then possible to estimate the frequency-dependent response of the seismometer to accelerations of the ground. What this experiment does not provide is the absolute gain of the seismometer, as only the gain relative to the injected force is determined. Calibration is made with a 1000 s long sweep (top panel of Fig. 83) applied to the calibration coil of the VBB; the sweep is defined as the digital input to the Digital-to-Analog converter, sampled at 20 sps (samples per second). This waveform is stored in the EPROM of the EBOX and can be modified by upload and command. Both the POS and VEL outputs of the VBB were digitized and recorded in the Ebox: the POS signal at 1 sps and the VEL signal at 100 sps. The FORTRAN program CALEX (Wielandt and Forbriger 2016), which uses impulse-invariant recursive filters (Schuessler 1981) to model the digital output of a system with a rational transfer function, was used on the processed data, decimated to the same acquisition rate as the calibration waveform (20 sps) and detrended for the POS output when necessary. CALEX uses a conjugate gradient method to find a best-fitting model. As such it depends on a starting model, and we used the nominal VBB transfer function for that purpose. In the frequency band covered by the sweep, the transfer function of the VBB to ground acceleration can be modeled by a second-order system: a second-order band-pass for the VEL channel and a second-order low-pass for the POS channel. There are thus only 4 unknowns in this model: a time shift \(\delta \), a gain factor \(A\) and one complex conjugate pair of poles in the \(s = i\omega \) plane. The same model can also be represented by the physically more intuitive parameters \(\delta \), \(A\), \(T_{0} \) and \(h\), where \(T_{0}\) is the corner period and \(h\) the fraction of critical damping of a second-order system. The expressions by which \(T_{0} \) and \(h\) are related to the complex conjugate pair of poles \(p\), \(q\) are: $$ T_{0} = \frac{2 \pi }{\sqrt{pq}}\quad \mbox{and}\quad h= \frac{p + q}{2\sqrt{pq}}. $$ The pole-zero transfer function model for the POS channel to ground accelerations is then $$ H_{\mathrm{POS}} ( s )= A_{p} \frac{1}{( s - p )( s - q )}, $$ where \(s\) is the complex frequency of the Laplace transformation. This gives for the VEL channel: $$ H_{\mathrm{VEL}} ( s )= A_{v} \frac{s}{( s - p )( s - q )}. $$ Table 15 and Fig. 83 summarize the results of the CALEX runs. The normalized rms residue (= rms of the residue divided by rms of the signal), given in ppm units, was typically a factor of 5–10 larger for the 20 sps VEL data than for the VEL or POS data sampled at 1 sps. This was due only to the large high-frequency ambient seismic noise present at CNES. Much of this noise is above 1 Hz and hence outside the band covered by the sweep. The VEL data was therefore also low-passed and decimated to 1 sps. The modeling of the VEL sweeps worked almost perfectly: no sign of the sweep is left in the residue.
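To make the four-parameter CALEX model concrete, the sketch below converts a complex-conjugate pole pair into the corner period \(T_{0}\) and damping \(h\), and evaluates the POS and VEL responses. The pole value is illustrative only (not a fitted flight value), and the damping is written with the standard sign convention for stable poles, which may differ from the expression quoted above.

```python
import numpy as np

def t0_h_from_pole(p):
    """Corner period T0 and damping h from one complex pole p (its
    conjugate q = conj(p) is implied).  Standard second-order relations;
    the sign convention may differ from the one used in the text."""
    q = np.conj(p)
    w0 = np.sqrt((p * q).real)           # natural angular frequency
    h = -(p + q).real / (2.0 * w0)       # fraction of critical damping
    return 2.0 * np.pi / w0, h

def h_pos(s, a_p, p):
    """POS channel: second-order low-pass A_p / ((s - p)(s - q))."""
    return a_p / ((s - p) * (s - np.conj(p)))

def h_vel(s, a_v, p):
    """VEL channel: second-order band-pass A_v * s / ((s - p)(s - q))."""
    return a_v * s / ((s - p) * (s - np.conj(p)))

# Illustrative pole only: corresponds to T0 ~ 12 s and h ~ 0.71.
p = -0.37 + 0.37j
T0, h = t0_h_from_pole(p)

f = np.logspace(-2, 0, 200)              # 0.01-1 Hz, the band covered by the sweep
s = 2j * np.pi * f
vel_gain = np.abs(h_vel(s, 1.0, p))      # band-pass shape of the VEL output
pos_gain = np.abs(h_pos(s, 1.0, p))      # low-pass shape of the POS output
```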
The simple model with only 4 parameters can completely explain the seismometer output. Table 15 Summary of the system parameters obtained from the frequency calibrations at CNES. During the first three nights the tilting of the SEIS instrument package was such that only one sensor could be calibrated. For the last night the orientation of SEIS was such that both VBB1 and VBB3 were balanced and could be calibrated. (The table columns are the start time of the sweep (UT), \(T_{0}\) (s), the rms residue (ppm) and the sampling frequency, \(20~\mbox{Hz}\) or \(1~\mbox{Hz}\); the numerical values are not reproduced here.) To summarize the sweep experiment, we can say that in the frequency band covered by the sweep (0.01–1 Hz) a very simple analytical model of the transfer function is sufficient to describe the response of the VBBs to ground acceleration: a second-order band-pass with only three free real parameters: the generator constant (gain factor), the corner period \(T_{0}\) and the damping \(h\). The differences in \(T_{0}\) and \(h\) between the POS and VEL channels can be taken as an indication of the error in these parameters. Relative errors for \(T_{0}\) and \(h\) range between 0.1–0.5%. This is mostly due to the elevated ambient seismic noise in the clean room at CNES/Toulouse where the Flight models were tested. There is more dispersion in the output gain measurements, assuming the generator constant of the VBB is known. When taking into account the expected variation of the gain as a function of the tilt configuration, the dispersion ranged between 4–6% for the VEL gain and between 0.7–0.9% for the POS gain. Results are shown in Fig. 84 and Table 15. In similar experiments conducted in 2012 at BFO with commercial broad-band seismometers we found residues that correlated with the input sweep. For some of the sensors a residue at twice the instantaneous frequency of the sweep was prominent. Such frequency doubling is a clear sign of a quadratic or cubic non-linearity. No indication of such a non-linear response was found in the experiments analyzed here on the VBBs. However, we note that if the frequency calibrations had been conducted at a seismically quieter site, a small non-linear behavior might have appeared that remains hidden in the elevated seismic noise present on the CNES campus at Toulouse. Temperature Sensitivity of the VBB Transfer Functions We expect significant climatic and daily temperature changes on Mars and, due to the temperature variations of the feedback actuators and of the natural frequency of the VBBs, the transfer function will vary with temperature. During VBB integration and the protoflight test program, special care has been taken to obtain the pendulum thermal sensitivities over the complete temperature range. The pendulum thermal sensitivities were characterized before and after environmental tests (\(\mbox{vibrations} + \mbox{thermal cycles}\)) and also after integration in the sphere. The main parameters screened during the test program were: Pendulum eigenfrequencies: decreasing eigenfrequencies will increase the mechanical gain and the signal-to-noise ratio with a \(1/f _{0} ^{2}\) law. Expected variations are \(\pm 10\%\) (\(0.5\%/{}^{\circ}\mbox{C}\)) over the Martian climatic range, leading to a mechanical gain almost 50% larger in winter than in summer (Fig. 85). Although minimized by the feedback, these eigenfrequency variations will generate variations of the transfer function, which are however reduced, as the variation is divided to first order by the feedback strength (about 80 at 0.07 Hz and 730 at very long periods).
Magnetic actuator parameters (force coefficient \(K\) (N/A) and internal resistance \(R\) (\(\Omega\))): the feedback outputs drive these magnetic actuators. As the coil is made of copper, its resistance is very sensitive to temperature changes. The geometry of these actuators also changes with thermal dilatation, which may result in force coefficient changes. Modeling these effects is difficult since they are strongly related to the final mounting of the coil on the hardware, so these efficiencies were measured (Fig. 86). When injected into the instrument models, these measurements enable prediction of the expected sensitivity of the transfer function for both the ENG and SCI modes, described in Fig. 87 and Fig. 88. These sensitivities, based on all Earth tests, will of course be updated on Mars. Precise analyses based on long time series, such as normal-mode spectra (expected from 6–12 hour time series) and tidal analysis, will need to incorporate these effects for precise determination of amplitudes or frequencies. A simplified description of the temperature model will be documented in a comment blockette of SEED. See 0 for more details. The calibration coil actuator will be more critical as it will affect the in-situ calibration. We show in Fig. 86, for the 3 coils, the temperature variation of \(K/R\), where \(K\) is the strength of the coil (in \(\mbox{N}/\mbox{Amp}\)) and \(R\) the coil resistance (in ohm). For the calibration coil C, this is the parameter driving the coil calibration described in the previous section. For a given mode and to first order, the output of a calibration at a temperature \(T\) will be the output of a calibration at temperature \(T_{0}\) multiplied by the ratio of \(K/R\) at the two temperatures. This will enable interpretation of the transfer function found through calibration during the climatic variation, as well as comparison of these calibrations on Mars with those made at ambient temperature on Earth. 6.2.3 VBB Noise As indicated in Sect. 6.2.1, the operation of the Flight VBBs on Earth is challenging because of the difference in gravity; given the very low expected self-noise, the characterization of the noise was therefore also extremely challenging. Special care has therefore been taken to elaborate the self-noise model of the instrument, integrating all known sources of sensor noise and modeling the noise in a given test configuration (Fig. 89). This noise model has first been calibrated by noise measurements of all stages of the feedback in open loop (e.g. integrator, derivator, output gains, internal gains, DCS noise, acquisition system, etc.), and by the expected temperature sensitivities, both in terms of sphere and proximity-electronics temperature variations and of lander temperature noise for the FB cards. Note that the first two are related to thermal noise shielded by the WTS/TBK and Sphere and by the WTS/TBK and PE box respectively, while the last one is shielded only by the SEIS electronics box and depends on the thermal enclosure temperature noise. This noise model excludes the pressure and magnetic noise, which are in fact signals detected by the seismometer and potentially monitored by the APSS pressure sensor and IFG fluxgate magnetometer.
The noise model suggests that in Mars conditions the long-period noise on the VEL output will be driven by the temperature during the day, while there is the possibility that during the night the VEL output noise will be driven by the feedback integrator noise, the POS HG output then offering better data at long periods. The POS LG noise will be sensitive to the acquisition noise, which is expected to be above the transducer noise, in contrast to the POS HG noise. VEL HG and VEL LG are however expected to have a very similar self-noise, unless very low noise levels at 1 Hz are found. Tests made on the VBB Flight units in the CNES clean rooms are shown in Fig. 90. The testing conditions were impacted by significant environmental noise, which was characterized by cross-correlation between two STS-2s and is shown as the solid yellow line along the VBB measurement direction. In practice, this limited the measurable self-noise of the VBBs to this environmental noise level. Nevertheless, noise levels ranging from \(5\times 10^{-10}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) to \(10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) can be detected between 0.05 Hz and 1 Hz. At high frequencies, the noise levels are larger than those recorded by the STS-2s. Such larger noise levels are also recorded on the SPs, as seen in Fig. 95. Results of tests performed at the Black Forest Observatory (BFO) with the complete EM system and with the VBB in Earth configuration are shown in Fig. 91. In this configuration, the Earth VBB was operating in a small vacuum chamber. At high frequencies, above 0.5 Hz, the measured noise matches the Earth VBB noise level fairly well and remains below the BFO noise recorded by a reference STS-2 almost up to 10 Hz. It reaches a noise level of \(2\times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) at 2 Hz and \(5\times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) at 3–4 Hz. These noise levels are directly related to the Displacement Transducer noise and, as noted in Sect. 6.2.1, will be reduced by a factor of 2.65 on Mars, which suggests that the VBB has a noise comparable to that of the SP at 3–4 Hz. At long periods, significant variability of the noise, as measured by the STS-2s, was found in the tilted direction along the Earth VBB axis (Fig. 92). The only differences between these three STS-2s, which recorded noise levels at 100 seconds ranging from \(2\times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) to \(10^{-8}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\), were their thermal protections and their locations on the seismic pillar on which SEIS was located. This illustrates the challenge of such performance tests, especially in a constrained schedule context, but might also not be so surprising, as a \(10^{-8}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) noise level is equivalent to an instrument tilt of about \(1.2~\mbox{nano-radian}/\mbox{Hz}^{1/2}\), which might easily be induced by convection forcing on the instrument. Further analysis identified that the recorded VBB noise was likely related to the temperature noise induced by transient pressure variations in the small vacuum chamber (Fig. 93). In summary, the analysis of all tests demonstrated performances below \(10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) between 0.04 Hz and 1 Hz, with a noise floor smaller than \(5\times 10^{-10}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) between 0.1 Hz and 1 Hz. Earth tests at long periods were only able to reach a noise of \(10^{-8}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) at 100 s.
At 0.01 Hz, all open-loop measurements of the VBB feedback and transducers were within the requirements, with a modeled noise of \(2\times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) at 100 s for the Earth configuration. If this were a non-modelled electronic noise, it would correspond on Mars to a noise of about \(4\times 10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\), but it is most likely noise related to the environment and the Earth testing conditions. 6.3 SPs Results 6.3.1 SP Transfer Functions Calibration The TFs of the SP flight units have been quantified using coherence testing against a known reference seismometer. This has confirmed the expected form (see Fig. 53), with the gain, poles and zeroes adjusted to give a corrected flat response in the SP velocity output (Fig. 94). The fitted TF parameters are shown in Table 16. The TF determination is valid when the coherence is high, allowing extension of the 5 dB requirement beyond the 0.1 Hz requirement down to 0.002 Hz. Table 16 Transfer function parameters of the SP outputs (parameters in V/(m/s) or dimensionless; the numerical values are not reproduced here). The SP mass position (MPOS) output is in turn validated against the SP velocity output, differentiated to acceleration. No correction has been applied to the TF determined in Fig. 94, apart from an MPOS gain, in \(\mbox{V}/(\mbox{m}/\mbox{s}^{2})\), selected to give a 0 dB output. The uncorrected high-frequency roll-off in MPOS is evident, with a corner frequency of 0.4 Hz. 6.3.2 SP Self Noise The performance floor of the SP is set by the internal self-noise of its sensors and by the aseismic noise from the environment transduced into the SP's output. The self-noise was determined by coincidence testing (Holcomb 1989) against a conventional broad-band seismometer with a noise floor at least an order of magnitude lower. This allows the self-noise of the sensor to be attributed to any loss of coherency, subject to minimization of any common environmentally induced terms in the signal. Figure 95 shows the self-noise of the flight-model (FM) SPs determined by coincidence testing at two sites, one with low ambient noise above 1 Hz and the second with better coupling to the reference seismometer at lower frequencies. The sensor is very close to its fundamental thermodynamic limit, which for this sensor is \(0.2~\mbox{ng}/\sqrt{\mbox{Hz}}\). The expected higher sensitivity at resonance, down to \(0.1~\mbox{ng}/\sqrt{\mbox{Hz}}\), is challenging to validate, but the shaping of the noise through the transfer function of the suspension is clearly seen. The SP performance requirement is marked, as well as the noise models for the FM and QM units. All SPs are at least a factor of 2 better than their requirements at 0.1 Hz, in either amplitude or frequency. 6.4 Temperature and Tiltmeters 6.4.1 Temperature Sensors The SEIS instrument includes more than twenty temperature sensors distributed over each subsystem, mainly for housekeeping purposes. Most of them are based on standard Class B PT1000 probes calibrated by the manufacturer, for which a generic calibration can be used. Several were however individually calibrated, and the calibration method and results are provided in this section. The number of sensors allows mapping the temperature of the instrument, which is a key point for seismometer performance. The temperature sensors used to monitor the health of SEIS are digitized with a resolution of 12 bits, while the sensors dedicated to science on the VBBs and the levelling system feed 16-bit and 24-bit ADCs respectively, for better resolution. The SCIT A&B (Scientific temperature) sensors are placed on the LVL system as shown in Fig.
96. To meet the resolution and accuracy requirements, the sensors were selected from 16 thermistors based on their linearity characteristics. This selection was done by performing a comparison test between a reference thermometer, a HART SCIENTIFIC 159 (calibrated at the French National Calibration Laboratory), and each PT1000 thermistor. The polynomial coefficients describing the behavior of the thermistors were taken into account by the HART SCIENTIFIC 2590 acquisition system used to interface the thermistors. We then measured the difference between the temperature reference and each PT1000 in order to make the best choice. The CNES calibration lab test benches used to cover the temperature range were a SANYO MDF-1156 climatic chamber for temperatures from 0°C down to \(-120^{\circ}\mbox{C}\) and an ISOTECH Europa-6 Plus dry heat chamber for temperatures from 0°C to 50°C. Since the flight model sensors could not be calibrated in the standard calibration baths, the calibration was performed in dry air with a copper cylinder of very high time constant (see Fig. 97 and Fig. 98). The science thermal sensor measurement chain includes not only the sensor itself but also a large portion of the tether, which can add a parasitic resistance despite the 4-wire method used. In addition, the EBOX contains electronic devices which can affect the accuracy of the measurement. In consequence, an end-to-end calibration was performed using very high stability resistors to simulate the PT1000 behaviour (a full-chain calibration including both sensor and measurement chain was not possible due to cleanliness constraints). The calibrated sensors have a response which can be fitted with a 3rd-order polynomial: $$ T=a x^{3} +b x^{2} +cx+d, $$ where \(T\) is the temperature in degrees Celsius and \(x\) the raw data in bits, with the coefficient values given in Table 17. Table 17 Transfer function polynomial coefficients of the VBB temperature sensors (SCITA, SCITB and VBB1_TEMP). For SCITA, \(a = -6.91742 \times 10^{-22}\), \(b = 1.16867 \times 10^{-13}\), \(c = 1.98544 \times 10^{-5}\) and \(d = -247.538\); the SCITB and VBB1_TEMP coefficients are not reproduced here. 6.4.2 Tiltmeters The levelling system carries all the SEIS sensors and maintains the instrument horizontal on the Mars surface. For that purpose, the LVL includes coarse (MEMS) and precise (HP) tiltmeters to measure the tilt of the instrument on the regolith (relative to the local gravity direction). Their locations are indicated in Fig. 99. MEMS Sensors The MEMS tilt sensor is based on an ADXL203 built by Analog Devices. This is a high-precision, low-power, complete dual-axis accelerometer with signal-conditioned voltage outputs, all on a single, monolithic integrated circuit. The ADXL203 measures acceleration with a full-scale range of \(\pm 1.7~(\mbox{Earth})~\mbox{g}\). The typical noise floor is \(110~\upmu\mbox{g}/\sqrt{\mbox{Hz}}\), giving an rms noise of 0.44 mg over a 16 Hz bandwidth. For tilt sensing applications, this corresponds to 0.025° of inclination on Earth and 0.067° on Mars. This sensor is connected through the LVL tether to the Motor Driver Electronics card (MDE), which amplifies the 2 analogue signals provided by the two-axis sensor. These two signals are digitized on the MDE board after low-pass anti-aliasing filtering to reduce noise before amplification. The only internal FPGA data processing consists of averaging, after which a measurement over a [\(\pm 15^{\circ}\)] range is ready to be read out by the E-Box. Note that an absolute accuracy of \(0.1^{\circ}\) around zero tilt is required for this measurement in order to be able to balance the VBBs properly.
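The tilt resolution quoted for the ADXL203 follows from the projection of gravity onto the sensing axis for a small tilt; the short check below reproduces the 0.44 mg, 0.025° (Earth) and 0.067° (Mars) figures. The gravity values \(g_{\mathrm{Earth}} \approx 9.81~\mbox{m}/\mbox{s}^{2}\) and \(g_{\mathrm{Mars}} \approx 3.71~\mbox{m}/\mbox{s}^{2}\) are assumptions of this purely illustrative sketch.

```python
import numpy as np

g_earth, g_mars = 9.81, 3.71                # m/s^2, assumed nominal surface values

noise_floor = 110e-6                        # Earth g / sqrt(Hz), ADXL203 typical noise floor
bandwidth = 16.0                            # Hz
rms_g = noise_floor * np.sqrt(bandwidth)    # ~0.44 mg (Earth g) rms over 16 Hz
rms_acc = rms_g * g_earth                   # same noise expressed in m/s^2

# A small tilt theta projects g*sin(theta) onto the sensing axis, so the
# equivalent tilt resolution is arcsin(rms_acc / g) on each planet.
tilt_earth = np.degrees(np.arcsin(rms_acc / g_earth))   # ~0.025 deg
tilt_mars = np.degrees(np.arcsin(rms_acc / g_mars))     # ~0.067 deg
print(rms_g * 1e3, tilt_earth, tilt_mars)
```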
HP Tiltmeter Sensors The component used to perform an accurate measurement of tilt is an SH 50055-A-031 Electrolytic Tilt Sensor from Spectron (Fig. 100). This is a modified version of the commercial SH 50055-A-009 component, with changes to the electrolytic fluid and wire. This extends the temperature range to comply with the requirements (\(-65^{\circ}\mbox{C}\) to \(+125^{\circ}\mbox{C}\) operating, \(-120^{\circ}\mbox{C}\) to \(+150^{\circ}\mbox{C}\) storage). After temperature compensation, the accuracy of the sensors is better than 3% over the \(\pm 0.2^{\circ}\) range. Figure 101 shows the principle of the electrolytic tilt sensor. As the sensor tilts, the surface of the fluid remains level due to gravity. The fluid is electrically conductive and the conductivity between two electrodes is proportional to the length of electrode immersed in the fluid. At the angle shown, for example, the conductivity between pins a and b would be greater than that between b and c. Electrically, the sensor is similar to a passive potentiometer, with resistance changing in proportion to tilt angle. Calibration Process Calibration on Earth The calibration functions of both sets of tiltmeters on the LVL depend on the sensor temperature. This dependence takes different shapes for the MEMS and the HP tiltmeters due to their different sensing techniques. For the MEMS sensors, the gain of the transfer function is independent of temperature, while the measured value at zero inclination decreases or increases with varying temperature. For the HP tiltmeters, in contrast, the measured value at zero inclination is independent of temperature, whereas the gain changes. This means that the measurement range and the smallest measurable change in tilt corresponding to one unit of HP tiltmeter output also depend on temperature, resulting in a higher sensitivity at lower temperatures. The calibration of the tiltmeters was performed with the complete LVL flight model in a thermal oven. The basic layout of the test is to acquire the transfer function of a high-precision tiltmeter by moving the LVL step-wise with the related linear actuator and measuring and recording the output of all four tiltmeters and the inclination of the LVL with a reference inclinometer. The test runs were repeated at various temperatures and the data processed against temperature. Although the MEMS tiltmeter only receives a small tilt excitation, the offset shift with temperature is clearly seen. The difficulty with the test setup is that the reference inclinometer has to be kept at room temperature while the LVL temperature must be varied. This problem was solved using a metal profile beam passing through a feedthrough of the thermal oven. This bar was fixed to the LVL and transmitted the inclination to the outside of the chamber, where the reference inclinometer was placed. As reference, a WYLER BlueLEVEL inclinometer with \(1~\upmu\mbox{m}/\mbox{m}\) (0.2 arcsec) resolution was used. The results of the high-precision tiltmeter show the absolute zero point for this axis (Fig. 102). At this point, the output signal is independent of temperature; the liquid is always in balance. Due to the coefficient of thermal expansion of the housing material and the manufacturing precision, the measurement range varies with temperature. The MEMS output shows the variation of the offset with temperature, while the slope of the transfer function curves remains constant (Fig.
103). Calibration During Cruise Assuming that zero gravity is equivalent to 0° of tilt of the MEMS proof mass, the transfer between Earth and Mars was an opportunity to assess the potential drift of the sensor. As a result, we plan to measure the offset for a given temperature twice during the cruise wake-ups. If stable results are observed, we will be able to better extrapolate the value of the MEMS sensor offset on Mars. Unfortunately, experiments under microgravity aboard the CNES zero-G Airbus demonstrated that this calibration will not be possible for the HP tiltmeters, because their measurement method is based on a fluid displacement. 6.5 LVL As all ground motion is transferred to the SEIS sensors via the LVL, it is important to understand and characterize its possible influence on the recorded waveforms. The LVL transfer function was determined during different stages of integration, for increasingly flight-like configurations, and a simplified analytical model of the LVL was developed. A more detailed description of the results and methodology can be found in Fayon et al. (2018). The actual LVL transfer function on Mars can only be determined once SEIS is deployed, though, as it depends on the deployment configuration (leg extraction) of the LVL as well as on the yet unknown local soil properties at the deployment site. The major way in which the LVL affects recorded signals is through horizontal resonances of the system due to the details of the leg structure. These resonances were first observed during forced excitation in a test of the LVL structure on a shaker with an input acceleration of 0.1 g, using a sweep signal between 5 and 200 Hz with a sweep rate of two octaves per minute. The resulting acceleration at various points of the LVL was recorded with miniature accelerometers glued to the LVL structure. The tips of the LVL feet were likewise glued to the shaker's table, unlike in the SEIS deployment configuration, and the LVL legs were extended to an intermediate length. Due to the missing SA, the weight of the structure, at 5300 g, was significantly less than the flight weight. Measurements were conducted for acceleration in both the X- and Y-directions. During acceleration in each of these directions, only accelerometers pointing in the same direction recorded any significant amplification within the whole frequency band covered. The resonance frequencies observed for sensors at different locations on the LVL are identical in each of the two configurations, with varying peak amplitudes on the order of 5 and comparatively broad peaks, with a plateau covering about 10 Hz. Peak frequencies are slightly shifted between the X- and Y-directions and centered at 50 and 48 Hz, respectively. A more thorough investigation of the seismic transfer functions was done in the MPS cleanroom, using a configuration typical of seismometer calibration (Holcomb 1989; Pavlis and Vernon 1994): we recorded ambient vibrations with a broad-band "test" sensor placed on the LVL and compared the data to those recorded by a "reference" sensor located on the ground, close enough to assume that both sensors record the same ground motion. The sensors used were Trillium compact seismometers, connected to a six-channel 24-bit Centaur data logger. The final mass of the setup, including the seismometer and additional dummy masses, was 9082 g. The transfer function was determined under a variety of surface inclinations in both the X- and Y-directions, using a magmatic rock with a slope of 15° over a square area of \(30 \times 30~\mbox{cm}\).
In total, we performed measurements in 21 different configurations. As the LVL design is symmetrical with respect to tilts in the ±Y-direction, only a limited number of measurements at the same angles in both the +Y- and −Y-directions was conducted to confirm the symmetry. For each measurement, we calculated the power spectral densities for the three components of the reference as well as the test sensor. The orientation between the two sensors was adjusted by minimizing the incoherent noise in the frequency domain, and the relative transfer functions were calculated by division of the power spectral densities in the aligned system. In all cases where the three legs are not of equal length, two different resonance frequencies occur, which, depending on the configuration, either do or do not align with the X- and Y-axes of the system. Resonance frequencies lie between 34.7 and 46.4 Hz, depending on the configuration. Lower resonance frequencies than observed during these measurements are possible if the LVL mass is higher than used here, if a high slope in both X and Y that we could not reach with our test equipment needs to be accommodated, or if all legs are extended equally to a large extent. The latter case is not foreseen for SEIS deployment. Additionally, LVL resonances were determined for a more complete SA, including the LSA/tether, during performance testing at CNES Toulouse. The measurement principle was the same as above and the results are broadly consistent with those previously obtained, both for measurements on a solid surface and for measurements in three different tilted configurations on sand, using the LVL QM. A further measurement was conducted using actual horizontal SP sensors, at a comparable mass and with partly extracted legs, which showed resonances around 40 Hz. No measurement showed any clear LVL influence on the phase of the transfer function. Observed amplifications at the resonance peaks range from 10 to more than 100, but the determined values depend more strongly on the coherence between the channels of the reference and test sensors than the measured resonance frequencies do, and are thus less certain. The LVL also has an impact on high-frequency measurements, e.g. the HP3 hammering, as it averages the ground acceleration sensed across the three feet and in this way acts as a low-pass filter (Kedar et al. 2017). Modeling of the LVL is based on a method to detect and compensate for inconsistent coupling conditions during seismic acquisition (Bagaini and Barajas-Olalde 2007). Four main elements characterize the LVL model: one platform and three legs. Each 3D platform-leg coupling is modelled by one vertical spring with a rigidity constant \(k ^{p} _{v}\) and two horizontal ones with a representative constant \(k ^{p} _{h}\). Likewise, each 3D foot-ground coupling is described by constants \(k ^{g} _{v}\) and \(k ^{g} _{h}\). Equivalent masses for the platform subsystem \(M _{p}\) and for the three legs are used to complete the system. This configuration permits six degrees of freedom for each subsystem. However, as the complete instrument configuration does not allow for a vertical rotation of the legs, the final system has in total 12 degrees of freedom in translation and 9 in rotation. Newton's second law is applied for each part of the global structure in both translation and rotation.
For example, for the LVL platform
$$ M_{p} \frac{d^{2}}{dt^{2}} \overrightarrow{\Delta G_{p}} = \sum_{i=1}^{3} \overrightarrow{\Delta F_{i}^{+}}, \tag{18} $$
$$ J_{p} \frac{d^{2}}{dt^{2}} \overrightarrow{\Omega_{p}} = \sum_{i=1}^{3} \overrightarrow{G_{p}P_{i}^{+}} \times \overrightarrow{\Delta F_{i}^{+}}. \tag{19} $$
The second derivative terms represent the platform's center-of-mass acceleration in translation, in (18), and in rotation, in (19), and \(J _{p}\) is the platform's moment of inertia. \(\overrightarrow{\Delta F_{i}^{+}}\) is the spring force resulting from the relative movement between the two ends of the spring on top of leg \(i\), and \(\overrightarrow{G_{p} P_{i}^{+}}\) corresponds to the vector between the platform's center of mass and the top of the considered spring. These equations are also written for each leg of the LVL structure. Combining all equations, the [\(M\)] and [\(K\)] matrices (of size \(21\times 21\)) are defined and implemented numerically. This allows the eigenmodes of the global structure to be found. The adjustable parameters in the model are the various masses, the length of each leg, the stiffnesses of the springs, the torque induced by the ground on the legs \(C ^{g} _{h}\), and the attenuation coefficient \(Q\) of the ground. Once the extracted lengths of the LVL legs are known, this also sets their masses and the horizontal stiffnesses \(k ^{p} _{h}\) between them and the platform. Values for \(k ^{p} _{v}\) and \(k ^{g} _{v}\) can be selected arbitrarily, as tests show that they do not significantly influence the results. The main parameters to adjust, because of their considerable influence on the calculated resonances, are \(k^{\mathrm{g}} _{{h}}\) and \(C^{\mathrm{g}} _{{h}}\). When calculating all of the LVL's 21 vibration modes (resonances and structure movements) with the analytical model, only two of the obtained frequencies are within the range covered by the measurements. They correspond to horizontal translations of the platform in the X- and Y-direction, respectively, in good agreement with the laboratory results. A further validation of the model was done by changing either the mass of the platform or the leg lengths (same length for all three legs). When one of these parameters increases, the horizontal resonance frequencies decrease. The same effect is observed in the measured data, and the model covers the same range of frequency values. The model can also describe the complete LVL transfer functions as determined during test measurements in the laboratory. Figure 104 shows an example for the baseline configuration (level low, with all legs at the same length). Our modeling indicates that the horizontal resonances of the LVL are highly dependent on ground properties. The model presented here could thus not only be used to predict at which frequencies SEIS measurements might be affected by LVL resonances, but also to invert for ground properties at the InSight deployment site once SEIS data from Mars are available.

6.6 Thermal Protections

6.6.1 Thermal Objectives
The SEIS thermal protections aim to maintain all the elements of the instrument within the Allowable Flight Temperature range throughout the mission (Table 18), in operating, non-operating and start-up conditions, in deployed or stowed configurations.
Table 18 Allowable Flight Temperature range (non-operating), \(T_{\mathrm{min}}\) / \(T_{\mathrm{max}}\): Sphere VBB: −65 °C / +30 °C; Linear actuator: −105 °C / –; Proximity electronics: – / –; SP sensors: – / –.

Moreover, in order to guarantee the performance of the instrument, environment temperature variations shall be filtered for the VBB and SP measurements. This filtering is achieved thanks to the high time constants of the system, obtained through efficient thermal isolation. The meaning of the time constant is explained below. Consider a system without internal heat dissipation, at temperature equilibrium with its direct surrounding environment. If this system is exposed to a sudden step in the external environment temperature, the thermal time constant \(\tau\) of the system is the characteristic time defined by relation (17):
$$ T_{\mathrm{final}} - T_{B} ( t ) = ( T_{\mathrm{final}} - T_{\mathrm{initial}} ) e ^{{-t}/{\tau }}, \tag{17} $$
where \(T_{B}(t)\) is the system temperature at instant \(t\) (°C), \(T_{\mathrm{initial}}\) is the initial temperature of the system, before the temperature step (°C), and \(T_{\mathrm{final}}\) is the final system temperature, equal to the environment temperature after the step (°C). Two thermal time constants are specified for SEIS: the time constant between the VBB and the sphere crown shall be higher than 2 hours, and the time constant between the sphere crown and the WTS shall be higher than 5.5 hours. Other objectives of the thermal control are to guarantee the daily temperature stability (\(<35^{\circ}\mbox{C}\) peak to peak for the sphere and \(<45^{\circ}\mbox{C}\) for the PE) and the internal gradients (\(<60^{\circ}\mbox{C}\) between VBB and crown).

6.6.2 Thermal Constraints
The thermal control needs to deal with the Martian environment: air and ground temperature variations and external heat fluxes (Sun, albedo and IR flux, convection). Wind and dust are key parameters for the heat flux exchanges. A model of the ground has been developed to account for the surface thermal equilibrium under the WTS dome (Fig. 105). Indeed, the ground surface temperature is determined by the thermal equilibrium under the WTS, since the Martian regolith is a low-conductivity material.

6.6.3 Thermal Design
The instrument Thermal Control System (TCS) described hereafter has to ensure the thermal control of the following elements: the sphere and the 3 VBBs; the proximity electronics boxes; the SP sensors; and the LVL (structural ring) and its associated linear actuators. The electronics box (inside the lander) and the external part of the tether have dedicated thermal control. The Sensor Assembly thermal control is ensured by several levels of protection (WTS, RWEB, Evacuated Container), as described in Sect. 5.6.

6.6.4 Validation (Model, Analysis and Tests)
SEIS Thermal Model
A detailed model of the sensor assembly (stowed and deployed configurations) has been built to perform thermal analyses in flight conditions and demonstrate the performance of the design. The SEIS sensor assembly model is composed of 5457 thermal nodes (5115 for the sphere). The model is built in Systema/Thermica format. Figure 106 summarizes the coatings used in the SEIS instrument model. Particular attention has been paid to the convective couplings. For external convection, computational fluid dynamics studies have been conducted to estimate the equivalent convective coefficients by area (Fig. 107). Internal convective couplings are managed by linear conductors that have been updated following thermal balance tests.
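Relation (17) is also how the time constants are estimated from test data in the validation campaign described below: a first-order step response is fitted to the measured temperature history. The following is a minimal sketch of such a fit, assuming hypothetical housekeeping arrays in hours and °C; it is not the actual correlation procedure used for SEIS.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, tau, t_initial, t_final):
    """First-order response of relation (17): T_B(t) after a step to T_final."""
    return t_final - (t_final - t_initial) * np.exp(-t / tau)

# Hypothetical temperature record (hours, °C); in practice these would come
# from TVAC housekeeping telemetry.
t_hours = np.linspace(0.0, 24.0, 200)
temp_measured = step_response(t_hours, 4.2, -45.0, -35.0) \
    + 0.05 * np.random.default_rng(1).standard_normal(t_hours.size)

popt, _ = curve_fit(step_response, t_hours, temp_measured,
                    p0=(3.0, temp_measured[0], temp_measured[-1]))
tau_est = popt[0]   # estimated thermal time constant, in hours
```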
Sensitivity studies on key parameters (including the convective coefficients) allowed the model uncertainties to be defined. Note that the CFD computation remains a source of uncertainty, because the exchange coefficients have been estimated using a constant wind speed from one direction. However, the variation of the WTS temperature shall have a minor impact on the instrument units thanks to the multiple thermal isolation levels. The thermal analyses confirm compliance with all requirements and identified the critical cases: the cold non-operating cases on the ground and on the lander. An interface thermal model has also been delivered to Lockheed Martin to complete the thermal analysis at lander level.

SEIS Thermal Tests
Several of the thermal tests completed on the SEIS instrument were used for thermal validation: the Thermal Balance Test on the Structural and Thermal Model, to correlate the thermal model, in 2013; the Thermal Vacuum Test TVAC#4, to achieve the instrument qualification in thermal environment and verify the sphere thermal time constant, in 2017; and the Thermal Balance/Thermal Vacuum Test on the lander (Landed TVAC), used by the SEIS team to correlate the thermal model and validate the thermal time constants (such as the sphere and WTS time constants), in 2017.

TVAC#4
Figure 108 shows the TVAC#4 configuration: the SA was mounted within a thermally controlled cover installed on a TGSE (Thermal Ground System Equipment). This TGSE allowed tilting any of the VBBs up to 68° for functional tests. Time constants were verified during the test using a first-order filter model (see Fig. 109). The VBB-sphere time constant was estimated at 4 hr. The Flight Model was successfully qualified within the qualification temperature range. The benefit of the sphere time constant was used to reach the qualification temperature on the sphere queusot for the first time after the sphere pinch-off. This was achieved thanks to touch-and-dwells during which the queusot temperature reached qualification levels while the VBBs inside the sphere remained within a reduced temperature range. The test had to be driven carefully so as not to exceed a temperature difference of 40°C between crown and VBB during transients.

Landed TVAC
The Landed TVAC (thermal balance test for SEIS) was used to correlate the thermal model and to measure the time constants of SEIS. The first-order filter estimates between sphere and VBB and between WTS and VBB were made using the test data, with the formalism described in Sect. 6.6.1. We also used the detailed thermal model correlated after the Landed TVAC test, for which the time constant is the time needed for the sphere temperature to reach 63% of an external step, taken as \(+10^{\circ}\mbox{C}\) during these tests. Results are shown in Table 19.

Table 19 Measurements of the time constants during the Landed TVAC tests (results for the test, 1st-order filter and detailed thermal model): Sphere-VBB: 4 to 4.5 hr; WTS-sphere: 5.5 hr; 5 to 6 hr; 2.5 hr (in N2 atm.); 4.6 hr (in CO2 atm.).

Figure 110 shows the good correlation between the temperatures calculated with first-order filters and the measurements, where: VBB calc is computed using a first-order filter between VBB and crown temperature; VBB calc GLOBAL is computed using a first-order filter between VBB and WTS temperature; and T crown calc is computed using a first-order filter between the crown and a weighted temperature of the RWEB and the SEIS plate (70% SEIS plate, 30% RWEB). However, the WTS-sphere time constant is unexpectedly lower when computed with the detailed correlated model on a 10°C step: 2.5 hours instead of 5 hours.
This is because the first-order filter is not representative of the heat exchanges between the WTS and the sphere. In this method, the sphere temperature is a function of only one temperature (taken as the mean of the WTS and plate temperatures) and of \(\tau\). In reality, the sphere temperature is driven by exchanges with the plate, the RWEB and the harness, and the RWEB itself depends on air exchanges. The radiative exchanges, which are not linear, are significant but are not accounted for in the first-order filter, whereas the detailed thermal model is representative of these heat exchanges. Moreover, a 10°C step is not a realistic case (it is conservative), but it is the case defined to verify compliance with the specified time constant. The detailed thermal model correlation appeared to be slightly optimistic on the sphere-VBB time constant and pessimistic on the WTS-sphere time constant. Finally, the WTS-sphere time constant obtained with the detailed thermal model is more representative and is the one to consider for comparison with the requirement. The values above are obtained in test conditions: with an N2 atmosphere instead of CO2, and in Earth gravity. Thermal exchanges are lower in Mars conditions, and the time constants on Mars are expected to be higher. This is consistent with the thermal flight prediction at EOL: 4.6 hr between WTS and sphere. The thermal model was successfully correlated within \(\pm 5^{\circ}\mbox{C}\) based on the Landed TVAC data, even in transient phases. Figure 111 illustrates the good correlation between the model (dashed line) and the measurement (solid line) on the sphere and VBB during the warm-up phase of the test. This model, completed with the correlated Flight sphere model, provides trustworthy flight predictions to confirm compliance with the thermal requirements. It will be used in operations to define the time of day at which to switch on the instrument before WTS deployment.

6.6.5 SEIS Flight Thermal Prediction
Flight predictions have been computed for 18 thermal cases that cover the whole mission on Mars: worst hot and cold conditions were analyzed in operating and non-operating modes; dedicated cases were analyzed for operation at deployment; additional cases were studied to refine the VBB coil temperature profile, which helped to perform thermal tests in a more representative temperature range; and sensitivity cases were run to understand the sensitivity of SEIS to some parameters (wind speed, heaters, deck temperature). Figures 112 and 113 show the temperature profiles of the main components for the cold and hot operating cases, respectively. 1.7 W of heating power is used in the cold case. The good efficiency of the thermal protections is clearly visible: the WTS temperature evolves from \(-80^{\circ}\mbox{C}\) to \(+70^{\circ}\mbox{C}\) while the VBB remains between \(-40^{\circ}\mbox{C}\) and \(-20^{\circ}\mbox{C}\) in the hot case. Note that the uncertainties on dust conditions required studying a worst case leading to higher solar absorptivity on the WTS and, as a consequence, to "high" temperatures on the WTS silicon oxide in the hot case (Table 20).
Table 20 Impact of dust on solar absorptivity. Columns: material/coating name, infrared emissivity \(\varepsilon_{\mathrm{IR}}\), and solar absorptivity \(\alpha_{\mathrm{S}}\) for dusty horizontal and dusty vertical surfaces. Materials/coatings listed: Titanium TA6V; Titanium UT40; Aluminium AU4G1; RWEB Kapton; Gold (sphere); Vapour Deposited Aluminium (VDA); Vapour Deposited Aluminium (VDA, inside skirt).

The main sources of daily variation of the WTS temperature are the evolution of the solar flux and the air and ground temperature variations during the day. The thermal design implies very low leaks at each stage; the main leaks are through exchanges with air. The delay between the sphere and VBB minimum temperatures is clearly visible and is due to the sphere time constant. This delay impacts the operations in some particular cases, especially during deployment, because the PE and the VBB must remain above their minimum allowable flight temperature (AFT) to operate, and this does not happen in the same part of the day. The thermal time constants in the worst hot case at end of life are 3.2 hours between sphere and VBB and 4.6 hours between WTS and sphere. The WTS-sphere time constant is lower than required, but this is compensated by the larger sphere-VBB time constant. The unexpectedly lower efficiency of the WTS is essentially due to a phenomenon of air circulation increasing the heat exchanges between the WTS and the RWEB when the ground temperature is higher than the WTS temperature; this demonstrated that, even for a small atmospheric thickness, convection on Mars might appear (Fig. 114).

7 Instrument Operations and Lander Onboard Management

7.1 General Description of Operations
The SEIS operations will be performed by the SEIS/APSS Instrument Operations Team (IOT) at CNES, in Toulouse, France, with support from both the SEIS and APSS institutions. We therefore describe here not only the operations of SEIS, but also those related to APSS, as the latter is expected to provide critical data for assessing the impact of the Martian environment on the SEIS noise. SEIS operations are based on a weekly uplink cycle (Fig. 115) and two downlink opportunities per day. During the Science Monitoring phase, instrument operations teams will operate their instruments from their home institutions and SEIS operations will be based on a regular weekly cycle with 4 working days, Monday to Thursday. The lander communicates with the Lockheed Martin/JPL Ground Segment via UHF transmission to Mars orbiters and then Earth transmission through NASA's Deep Space Network (DSN). During the Science Monitoring phase, there is insufficient energy to keep the lander powered continuously, whereas both SEIS and APSS operate continuously and acquire high-frequency data. The lander wakes up every 3 h for about thirty to sixty minutes on average, to monitor and respond to faults, collect raw data from SEIS and APSS, store them in the lander mass memory, generate telemetry for the orbiters (NASA MRO and Odyssey), and receive command uplinks (via Direct-from-Earth or relay). The continuous (i.e. low-frequency) data will be downloaded entirely, as described in Sect. 4.1.9, paragraph "Continuous Data", while the high-rate data will be selected and downloaded as event data. The key activity will be to manage the onboard data buffer for seismic events and ensure no data are erased onboard by newer data, since a cyclic buffer is used to store raw data. The event buffer, also called the raw data store, can store about 5 weeks of losslessly compressed SEIS data.
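The consequence of the cyclic buffer is that any event window must be requested and downlinked before the corresponding sols are overwritten. The toy bookkeeping below illustrates this constraint; the 35-sol capacity is taken from the "about 5 weeks" figure in the text, and everything else (names, one-entry-per-sol granularity) is an illustrative assumption rather than the actual onboard implementation.

```python
from collections import deque

BUFFER_SOLS = 35                       # ~5 weeks of raw data, expressed in sols
buffer = deque(maxlen=BUFFER_SOLS)     # each entry stands for one sol of raw data

def new_sol(sol_number):
    """Append one sol of raw data; the oldest sol silently drops out when full."""
    dropped = buffer[0] if len(buffer) == buffer.maxlen else None
    buffer.append(sol_number)
    return dropped                     # sols still requestable are those left in `buffer`

for sol in range(1, 61):
    overwritten = new_sol(sol)
    if overwritten is not None:
        # Any event window inside this sol must already have been requested
        # and downlinked; otherwise the high-rate data are lost.
        pass
```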
On average the mission can downlink \(30~\mbox{Mbits}/\mbox{sol}\) of continuous data and \(8~\mbox{Mbits}/\mbox{sol}\) of event data, which is significantly less than the amount of data the instruments produce (about 650 Mbits per sol). At the beginning of the planning process, the SEIS team receives bandwidth and power allocations from the Mission Planning team at JPL. Within this allocation, the SEIS team determines the activities that can be performed with SEIS throughout the week. The SEIS science team analyzes the low-resolution continuous data flow to detect seismic events, and then prepares a prioritized list of seismic events to be requested from the ground during the next uplink opportunity. Once the weekly activity plan has been defined, sequences of instrument commands, configuration files and possibly calibration waveforms are edited, validated and transmitted to JPL for bundling and radiation to the spacecraft.

7.1.2 Operational Roles
SEIS engineers are in charge of both the analysis of received data and the preparation of uplink products. They operate from the SEIS Operation Center in Toulouse, France, called SISMOC (SeIS on Mars Operations Center). Their role consists of analyzing the received telemetry to assess SEIS health and safety and of preparing the sequences of commands to be uplinked to SEIS.

7.1.3 Downlink Process
JPL is the only entity having direct interfaces to the spacecraft, via Lockheed Martin. SISMOC receives raw data from JPL. Engineers track the downlinked data and check that the received products correspond to the expected time spans. They report any possible missing products. They perform the SEIS health and safety assessment by monitoring key housekeeping parameters and metadata (activity durations, warning messages from the spacecraft, etc.) and check that no alarm has been triggered in the monitoring tools. This process is fully part of the weekly activity plan preparation and requires coordination among the SEIS operations team partners (CNES, IPGP, Imperial College, Oxford, ETHZ, and SEIS scientists at their home institutions).

7.1.4 Uplink Process
The main purpose of the uplink process is to update the onboard data acquisition parameters (input acquisition frequency, gain changes, decimation filters and output acquisition frequencies) through configuration files. This process also transmits the commands for the processing and downlink of the seismic events stored on board, based on the requests made by the science team (also called ERPs: Event Request Proposals). This is summarized in Fig. 116. The amount of event data transmitted back to Earth depends mostly on the bandwidth, but also on the time allowed for data processing on the lander during the wake-up. The SEIS operations team gathers all the sequences of commands and delivers them to JPL for bundling.

7.2 SEIS Flight Software

7.2.1 Flight Software Functions and Design
The SEIS on-board Flight SoftWare (FSW) uses as inputs the SEIS channels provided by the SEIS-EBOX and the APSS channels provided by the APSS-PAE. A total of 137 raw data channels are produced by the SEIS and APSS sensors. The outputs of the SEIS-FSW are TeleMetry (TM) packets for the transmission to Earth of the SEIS and APSS data. The FSW runs on the lander flight computer during lander wake-up periods. The FSW has the following functions: producing housekeeping TM packets, producing scientific data TM packets for the continuous and event data flows, and providing the commanding and calibration functions of the SEIS instrument.
Only the scientific data processing will be described in this section.

7.2.2 Scientific Data Processing
The scientific channels produced by the SEIS and APSS sensors are recovered by the SEIS-FSW during lander wake-up periods and processed in order to produce TM packets. The FSW offers great flexibility because it allows the definition of processed channels that are the result of processing chains in which each stage is one of the algorithms described in Table 21. These algorithms are chained by taking the output channel of one stage as the input of the next stage of the processing chain.

Table 21 FSW algorithms available at each data processing stage, described by their inputs and processing parameters. For all processing stages, the output is a single channel:
- FIR filtering — input: any one SEIS or APSS channel — parameters: FIR coefficients, downsampling rate
- Linear combination — input: up to three SEIS or APSS channels — parameters: input channel numbers, linear combination coefficients
- Average over a time window — parameter: window length
- Standard deviation over a time window — parameter: window length
- Root mean square over a time window — parameter: window length
- Maximum over a time window — parameter: window length
- Time delay — parameter: time delay to apply
- Vector norm — input: three SEIS or APSS channels — parameters: input channel numbers, coefficients of the vector components
- No processing (but can downsample a channel) — input: any one SEIS or APSS channel — parameter: downsampling rate

The processed channels are defined in a configuration file describing the processing chains, starting with raw channels. All possible channels of a given sensor should be considered in order for the processing chain to be active when this channel is present. For example, all the processing chains of the VBB1-VEL sensor should be defined for all the possible combinations of modes, gain and sampling rate of this sensor. For each processing stage, these configuration files include the numbers of the input channel(s) and output channels, the values of the processing stage parameters and, as needed, references to FIR filter coefficient files. Two types of configuration files are defined, for the science continuous and event data flows respectively.

7.2.3 Continuous Data Flow
The continuous data flow is defined through the data budget document and consists mainly of two types of channels: downsampled data, corresponding to a single raw input channel downsampled to a lower rate by application of low-pass anti-aliasing filters before the downsampling operation; and processed data, corresponding to output channels involving more complex operations or several input channels. The downsampled data are produced by applying the "FIR filtering" operation, which uses low-pass anti-aliasing FIR filters defined for each downsampling ratio (2, 4 or 5) and decimates the channel by the corresponding ratio. These anti-aliasing FIR filters have been chosen to be identical to those of the terrestrial broad-band seismological station CI.PFO. The same filters are also used inside the SEIS-EBOX to perform the same operations. The highest sampling rate of the continuous downsampled channels is 2 samples per second (sps) during nominal operations. The processed data can be split into three main types of output channels: Energy Short Term Average (ESTA) channels, corresponding to estimates of the signal energy of various sensors at frequencies above the Nyquist frequencies of the continuous data; the SEISVELZ channel, corresponding to a hybrid VBB/SP vertical velocity channel sampled at 10 sps (Fig. 117); and standard deviation and averaging operations applied to the TWINS channels for wind retrieval.
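The "FIR filtering" stage that produces the downsampled continuous channels can be illustrated with a minimal sketch. The coefficients below are generic SciPy-designed low-pass filters, not the CI.PFO flight coefficients, and the sampling rates in the usage example are illustrative assumptions.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def fir_decimate(x, ratio, numtaps=64):
    """Generic stand-in for the FSW 'FIR filtering' stage: low-pass below the
    new Nyquist frequency, then keep every `ratio`-th sample (ratio = 2, 4 or 5)."""
    taps = firwin(numtaps, cutoff=0.8 / ratio)   # cutoff normalized to the input Nyquist
    filtered = lfilter(taps, 1.0, x)
    return filtered[::ratio]

# Example chain: 20 sps -> 10 sps -> 2 sps, mimicking a continuous-flow channel.
fs_in = 20.0
x = np.random.default_rng(2).standard_normal(int(3600 * fs_in))
x10 = fir_decimate(x, 2)    # 20 sps -> 10 sps
x2 = fir_decimate(x10, 5)   # 10 sps -> 2 sps
```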
ESTA channels, which allow us to infer the signal content at high frequencies, are computed for the VBB-VEL, SP, pressure and magnetometer science channels. These energy estimates are designed to allow the detection of events involving only signals at high frequencies (\(f>1~\mbox{Hz}\)).

7.2.4 Event Data Flow
The InSight mission stores all acquired data on board at full temporal resolution for approximately 30 sols. This allows for the recovery of raw or high-rate data stored in the lander's mass memory through an event request. Such a request will select raw channels, or channels defined in an event configuration file, for specific time windows which are defined on Earth from the continuous data flow information. The FSW event configuration file allows any of the operations in Table 21, except vector norm and linear combination, to be applied before sending these data to Earth. This capability allows us to tune the event data sampling rate to the available bandwidth. All the possible sampling rates for all the possible raw data channels should be defined in the event configuration file.

7.3 Calibration on Mars

7.3.1 Coil Calibrations and VBB TCDM Tuning
The first SEIS calibration on Mars will be made during the commissioning phase, which will start on sol 35 and last 60 sols. The calibration signals have all been designed to generate a signal in the feedback loop reaching about 50% of the saturation limit. The calibration results will be used for the determination of the on-Mars transfer functions of the instrument, which will be distributed in the SEED dataless. The timeline is as follows: the first 8 sols will be devoted to the TCDM optimization of the VBBs; the next 10 sols will be devoted to the calibration of the VBBs and SPs in their different modes and gains; and the following 28 sols will be a passive cross-calibration, during which both VBBs and SPs will operate continuously and transmit back to Earth 2 sps continuous data, with selected time windows transmitted at 20 sps and 100 sps, with durations depending on the data budget. VBB characterization and VBB+SP TF (transfer function) determination are the first two calibration phases and may be merged in order to perform the VBB calibrations in every mode at approximately the same time (i.e. at the same temperature). The whole set of VBB characterization and transfer function calibrations using SEIS-AC includes: Open-Loop (LG/LG), ENG mode (LG/LG), ENG mode (HG/HG) and SCI mode (HG/HG). Each calibration lasts 17 minutes and recentering will occur in between. During these 10 sols, this full set shall be performed three (to five) times, when the VBB temperatures reach their daily maximum, minimum and average. After commissioning, the calibration activities will be run every week for a short calibration sequence, lasting a few seconds and designed to check the high-frequency gain, and every month for the "full" calibration sequence described above. This set of characterizations and calibrations shall also be performed seasonally in order to obtain calibration data for the VBBs at 20°C intervals.

7.3.2 Active Cross Calibration
In order to better constrain the internal structure of Mars, an active cross-calibration between the VBBs and SPs is designed to retrieve the gain of the VBBs relative to the SPs. This is detailed in Pou et al. (2018). As the frequency response of a VBB in POS SCI (position scientific) mode is flat at low frequency (Fig.
41), this cross-calibration is designed at low frequency and aims at determining the relative gain of all VBBs with respect to the SPs, in order to determine the vertical gain of the VBB instrument with an accuracy of 0.5%. For operational purposes, the cross-calibration procedure can be done in one hour, by calibrating 2 VBBs in 30 min and then the 3rd one in another 30 min. Such a procedure shall ideally be done at two or three different temperatures (maximum, minimum and mean temperature) in order to have a model of the VBB transfer functions as close as possible to reality.

[Figure captions for the VBB feedback electronics and transfer functions, saturation levels, re-centering and thermal-compensator mechanisms, VBB thermal-sensitivity tests, and SP sensor hardware and electronics appear here in the source.]

The principle of the cross-calibration procedure is to move the legs with the linear actuators in order to actively create a tilt and thus a signal seen by all VBBs and SPs. The most efficient signal was chosen to be a periodic signal with a period of 112 s, low enough to be close to the flat part of the VBB transfer function but high enough to be seen by the SPs (Fig. 53), with an amplitude of 0.002° in order to avoid saturation of any sensor.

[Figure captions for the SP transfer functions, LVL linear actuator and leg guidance, MDE and E-Box block diagrams, acquisition electronics and filtering, tether system, EC/RWEB/WTS hardware, cradle dampers, Earth test setups and VBB/SP performance results, temperature and tilt sensors, LVL model fit, thermal model and TVAC results, Mars temperature predictions, and operations diagrams appear here in the source.]

The designed procedure is to cross-calibrate the SPs with VBB 1 and VBB 2 at the same time by moving first the levelling actuator (LA) 1, then LA 2 and LA 3, in a periodic way in order to create a split profile. Just after, the SPs and the remaining VBB 3 shall be cross-calibrated with a similar procedure, but based on the movements of LA 1 and LA 3 first, then LA 1 and LA 2. The calibration is made by recording the outputs of all VBBs and all SPs for the whole duration of each procedure, together with the SEIS temperature. Using a periodic signal also allows us to perform a frequency analysis to determine the gain of the VBBs while being more robust to noise sources and uncertainties such as centering errors and ground dissipation of the leg movements. More details on the design of the generated tilt profile and its constraints can be found in Pou et al. (2018); only the performances are summarized below. Using these split profiles for close to 30 min each time, the performance of the determination of the gain of the VBBs relative to the SPs is summarized in Table 22. The results are given on average and at the 90th percentile over 1000 simulations, meaning, for example, that in 90% of our simulations the knowledge of the VBB 1 gain was better than 0.22%. The results are worse for VBB 3 for geometric reasons, since this VBB is the farthest from the SP configuration.

Table 22 Accuracy of the active cross-calibration of the VBBs using the SPs (relative error over 1000 simulations), given for each calibrated VBB as the accuracy on the gain, on average and at the 90th percentile.

After determining the gain of each VBB separately, the vertical gain (Z-axis) of the VBB instrument can then be cross-calibrated with the vertical SP1.
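The frequency analysis mentioned above amounts to comparing the spectral amplitudes of the VBB and SP outputs at the known 112 s calibration line. The following is a minimal sketch of such an estimate; the function, the synthetic records and the sampling rate are illustrative assumptions, not the Pou et al. (2018) implementation.

```python
import numpy as np

def relative_gain(vbb, sp, fs, period=112.0):
    """Ratio of the VBB and SP spectral amplitudes at the calibration frequency.
    Using a single known frequency keeps the estimate robust to broadband noise."""
    n = len(vbb)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - 1.0 / period))   # bin closest to the 112 s line
    return np.abs(np.fft.rfft(vbb)[k]) / np.abs(np.fft.rfft(sp)[k])

# Illustrative check with synthetic 30 min records sampled at 20 sps.
fs, minutes = 20.0, 30
t = np.arange(int(minutes * 60 * fs)) / fs
tilt = np.sin(2 * np.pi * t / 112.0)
rng = np.random.default_rng(3)
vbb = 2.0 * tilt + 0.05 * rng.standard_normal(t.size)   # "true" relative gain of 2
sp = 1.0 * tilt + 0.05 * rng.standard_normal(t.size)
print(relative_gain(vbb, sp, fs))   # close to 2.0
```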
With one hour of cross-calibration, it is possible to determine the relative difference between the vertical VBB output and SP1 with an accuracy better than 0.40% in 90% of the cases. This accuracy is better than the mean of the errors in Table 22 because positive and negative errors partially cancel each other.

7.4 SISMOC and Events Management
The SISMOC (SeIS on Mars Operations Center), needed to support the SEIS operations, is specified, developed and operated by CNES.

7.4.1 SEIS Ground Segment Responsibilities
The functional capabilities allocated to the SEIS ground segment are:
- SEIS and APSS health and safety assessment,
- programming of SEIS/APSS (including management of the downlink bandwidth via configuration of the continuous processing),
- various on-board time correlations,
- FSW configuration management,
- on-board seismic event buffer management,
- detection/characterization of seismic events,
- production, distribution and archiving of data and products,
- detection of meteorite impacts (in collaboration with the Science Team of the Impact Working Group, see Daubar et al. 2018),
- support of the instrument deployment and commissioning phase.
In order to achieve these tasks, the SEIS ground segment is organized around two major components: the SISMOC, installed at CNES-CST, which mainly deals with the engineering operations and the science tactical processing; and the Mars SEIS Data Service, which is in charge of producing high-level scientific end products, archiving them and distributing them to the scientific community through the SEIS data portal. See Sects. 8.1 and 9.1.

7.4.2 SISMOC Main Functions
On the one hand, the SISMOC offers a set of basic services such as data management, task scheduling and system supervision, constituting the core system of the operations center; on the other hand, the SISMOC includes a set of mission-specific services such as management of the event buffers or correlation of the various clocks. Figure 118 provides a schematic view of the SISMOC functions, with the roles of JPL (in red), CNES (in blue) and the science services and team (in yellow). Some of these functionalities are partially or fully met either by tools that will be delivered by JPL or the science team, or by CNES multi-mission tools and CNES facilities.

7.4.3 SISMOC External Interfaces
The SISMOC interfaces with the InSight Ground Data Segment at JPL. SISMOC will receive from JPL:
- the SEIS FSW TM packets containing the raw CCSDS packets of SEIS and APSS,
- the Warning and Info ScienceEVR files containing the event records related to SEIS and APSS logged by the InSight lander,
- the ancillary data files containing the lander engineering data that are useful to assess SEIS and APSS health and safety,
- the command history files containing a log of the commands executed by the SEIS flight software,
- the science activity planning files containing the predicted passes of the satellite communication relays and the associated estimated downlink volume for a certain period of time, as well as the foreseen lander and flight software wake-ups,
- the SCET files containing spacecraft event times that provide time correlation information between the spacecraft clock and UTC time,
- the TLM dictionary containing the SEIS and APSS telemetry definition,
- the deployment data, corresponding to images, digital terrain models, etc., allowing the SEIS IOT team to support SEIS deployment on the Martian surface,
- the ATLO data, a placeholder to handle ATLO-specific data if any.
On the other hand, SISMOC will deliver to JPL:
- the sequence files containing the SEIS and APSS sequences that constitute the weekly programming of the SEIS and APSS instruments,
- the VML block files containing the definition of command blocks that will be stored on board in the InSight lander flight software and can be called or spawned by a sequence,
- the FSW configuration files, binary files that can be uploaded to the lander in order to configure the SEIS flight software processing of continuous data,
- the report files corresponding to the various reports that will document SEIS operations activities.
For data distribution purposes, SISMOC interfaces with the Mars SEIS Data Service and with CAB. It gets from MSDS the part of the SEED dataless which does not depend on the flight software configuration. It also gets from CAB the TWINS/APSS processed data product, providing wind and pressure in physical units. It then delivers to MSDS the SEIS and APSS level 0 data in miniSEED format and the SEIS and APSS level 1 data in SEED format. These data will also be transferred to the MQS for the purpose of quake detection. For instrument monitoring purposes, SISMOC interfaces with the SEIS and APSS teams taking part in the tactical operations, including health monitoring, calibration, etc., of the instruments. In addition to getting the SEIS and APSS data in miniSEED and SEED, monitoring data will be made available through a second, CNES-hosted system (IMIS). Finally, SISMOC will be responsible for preparing the event requests of both the science and instrument communities, once the latter have been endorsed by instrument or science operations. SISMOC will therefore deliver to the SEIS data portal the event buffer content information (through the event buffer management tool) and will receive from both the science and instrument communities the SEIS and APSS event request proposals (through the event buffer management tool).

7.4.4 Focus on a SISMOC Function: Event Buffer Management
As described in Sect. 4.1.9, paragraph "Data budget", SEIS data will be transmitted either in the form of a continuous data flow or an event data flow. The latter consists of selected high-rate seismic data produced only for specific windows of time (the "seismic events"), and the operational implementation of the event transmission required the development of the tools described below. Within SISMOC, the Event Buffer Management (EBM) tool shall manage the SEIS/APSS buffers on board the lander, i.e. keep track of the content of each buffer at any time and support the development of new requests of full-rate data. The EBM tool is accessible from anywhere for authorized users and offers the following functions: visualizing the event buffer contents, developing a programming of event windows, generating event sequences, modeling and adjusting the buffer content, and selecting the Event Request Proposals (ERPs) from the science teams. After analyzing the continuous data flow, the science teams will send their ERPs to SISMOC. Information about ERPs is available online for authorized users. A pre-ranking of ERPs by each science group should be possible before the weekly event selection meeting. According to the available on-board volume and CPU, ERPs are selected weekly: the corresponding data are programmed to be stored into the on-board event buffers and then to be downloaded. All information about event request status, buffer status and on-board event plans is available online through the ERP/EBM tools.
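A toy illustration of the weekly ERP selection is sketched below: proposals are ranked and accepted greedily under the event-data volume allocation. The identifiers, volumes and the weekly budget (derived from the ~8 Mbits/sol event allocation quoted earlier) are illustrative assumptions; the real selection also weighs on-board CPU time and the current buffer content.

```python
# Toy weekly selection of Event Request Proposals (ERPs) under a volume budget.
erps = [
    {"id": "ERP-031", "priority": 1, "volume_mbits": 12.0},
    {"id": "ERP-032", "priority": 2, "volume_mbits": 20.0},
    {"id": "ERP-033", "priority": 3, "volume_mbits": 35.0},
]
weekly_event_budget = 7 * 8.0   # ~8 Mbits/sol of event data over one week

selected, used = [], 0.0
for erp in sorted(erps, key=lambda e: e["priority"]):
    if used + erp["volume_mbits"] <= weekly_event_budget:
        selected.append(erp["id"])
        used += erp["volume_mbits"]

print(selected, used)   # proposals retained this week and the volume consumed
```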
8 SEIS Services

8.1 Mars SEIS Data Service
The Mars SEIS Data Service (MSDS) is led by IPGP. It is the operational service responsible for collecting the data of the SEIS experiment from SISMOC, archiving them and distributing them to the scientific community. After project completion, the IPGP Data Center will also maintain an archive for long-term preservation. The MSDS is also responsible for synchronizing the SEIS data to the IRIS Data Management Center (IRIS-DMC) and to a US Co-I responsible for archiving the data in NASA's Planetary Data System (PDS). The data will thus be delivered and freely available through the IPGP Data Center, but also through IRIS and the PDS. The data formats on both MSDS and IRIS will be those compatible with the FDSN (International Federation of Digital Seismograph Networks), while the format on the PDS will be PDS4 or PDS-compatible. The MSDS is responsible for the management of the raw and calibrated data and the reduced products generated by the SEIS instrument (VBB and SP), the SEIS Flight Software and the APSS instrument (magnetometer, pressure, wind and temperature), in the same format as SEIS. In addition, the MSDS collects and archives housekeeping data. The data flow, from SISMOC to the scientific community, is described in Fig. 119. Data will be collected, archived and distributed in the standard exchange format defined by the International Federation of Digital Seismograph Networks (FDSN, http://fdsn.org): dataless SEED or StationXML for the metadata and miniSEED for the waveforms. Data will be automatically collected by the MSDS from SISMOC, after being converted to miniSEED format, following the notification of new or updated available data. The MSDS then checks the integrity and format of the data and ingests new or updated data, making them automatically available. Users will access all available data provided by the MSDS through the standard FDSN web services, station in StationXML format and dataselect in miniSEED format (https://www.fdsn.org/webservices/), as well as the dataless SEED through the http protocol. The SEIS Science Team will be able to access all the data through the authenticated web services during the proprietary periods, while these data will be opened afterwards. The web services available in MSDS will be used by the SEIS Data Portal to provide interactive and guided access to the user, both through the public and the restricted-access sections of the SEIS Portal. Users will be notified of new or updated data by an RSS notification available through the SEIS Data Portal when registered to the RSS feed.

8.2 Marsquake Service
The MarsQuake Service (MQS) is the operational service for the SEIS instrument responsible for delivering catalogues of seismicity, one of the primary science products of the InSight mission. In this role, throughout the course of the mission, the MQS is responsible for the prompt and routine detection and characterization of seismic events according to the currently preferred sets of Martian interior models; for assembling events into catalogues; for disseminating event and catalogue information to the Mars Structure Service (MSS), scientists and the public via the InSight portal; and for reviewing catalogues following model updates. At the end of the mission, the MQS will deliver a final catalogue version to the Planetary Data System (PDS). A detailed discussion of the methods used to detect, locate and characterize seismic events is given in Khan et al. (2016) and Böse et al.
(2017), following, for large quakes, the multiple Rayleigh wave detection method described by Panning et al. (2015). A probabilistic approach will combine independent estimates of distance, origin time and back-azimuth. These key parameters will be determined using any and all of the surface and body waves that can be identified. 3D crustal structure will be accounted for. Magnitudes will be determined following formulations typically used on Earth, such as the Richter (local) magnitude, body-wave magnitude and surface-wave magnitude, with Mars- and InSight-specific modifications, using amplitudes of various phases measured in specific frequency bands (Böse et al. 2018). Efforts will be made to use depth phases and matching synthetics in order to infer depth. Discrimination between tectonic and impact events will be made where possible. The methods and software infrastructure have been exercised with success using Martian synthetics during the MQS Blind Test (Clinton et al. 2017). Absolute locations, especially those of events with low signal-to-noise ratio, will be refined within the context of an overall seismicity catalogue: once a significant number of events have been identified with good-quality locations, the distance of weaker events, in the absence of clear arrivals, can be inferred by matching signals from well-located events. Cross-correlation tools or Hidden Markov Model methods will be used to further augment the catalogue with otherwise unlocatable or even undetected events. For impacts, the preliminary locations of the MQS will be updated with those provided by the Impact WG, JPL and CNES teams using Martian satellites, when new impact craters are located in remote sensing data. These ground-truth locations for impact events will provide strong constraints on the interior models. In these cases, and when seismic events suspected to have an impact origin have a location known without large uncertainty, the procurement of local high-resolution satellite images will be prioritized. Conversely, if impacts are identified by routine satellite observation, the seismicity catalogue can be reviewed to try to identify a corresponding seismic event. See Daubar et al. (2018) for details on impact location and science. A final key role of the MQS is to prepare ERPs in order to collect more complete high-frequency seismograms for observed events. The MQS will also refine locations based on the higher-sample-rate data. The MQS is described in detail in Clinton et al. (2018).

8.3 Mars Structure Service
The Mars Structure Service (MSS) is the operational service for the SEIS instrument responsible for delivering interior seismic structure models. This is one of the primary science products of the InSight mission, and the MSS is responsible for producing and updating such models throughout the course of the mission and for delivering a final version to the Planetary Data System (PDS) at the end of the mission. A detailed discussion of the range of modeling products planned to be produced by the MSS is given by Panning et al. (2017). These are anticipated to be models on different scales, ranging from the shallow subsurface to global-scale models, using many different seismic observations (Fig. 120). The general approach to most of the planned modeling relies on Bayesian methods. Such approaches are increasingly common in geophysical applications and rely on the creation of large numbers of models with a distribution proportional to the probability density function of the models given the constraining data.
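The following is a minimal, purely illustrative Metropolis sketch of this idea for a toy one-parameter "model" (a single mantle velocity) constrained by noisy synthetic travel times; it is not the MSS implementation, and all quantities are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)
distances = np.array([1000.0, 2000.0, 3500.0])        # km (synthetic)
true_v = 8.0                                          # km/s
t_obs = distances / true_v + rng.normal(0, 2.0, 3)    # s, with 2 s noise

def log_posterior(v):
    if not 4.0 < v < 12.0:                            # uniform prior bounds
        return -np.inf
    residuals = t_obs - distances / v
    return -0.5 * np.sum((residuals / 2.0) ** 2)      # Gaussian likelihood

samples, v = [], 7.0
lp = log_posterior(v)
for _ in range(20000):
    v_prop = v + rng.normal(0, 0.1)                   # random-walk proposal
    lp_prop = log_posterior(v_prop)
    if np.log(rng.random()) < lp_prop - lp:           # Metropolis acceptance
        v, lp = v_prop, lp_prop
    samples.append(v)

# The histogram of `samples` approximates the posterior density of the velocity,
# i.e. the ensemble of models distributed in proportion to their probability.
```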
Using such an approach allows for determination of the most likely range of models that fit the observed data, without the assumptions often required to explicitly determine the sensitivity of the data misfit to model perturbations. This creates a range of models that can satisfy the data within uncertainty, allowing for a clear understanding of model uncertainty (Fig. 120). Figure 120 shows examples of preliminary demonstrations of the anticipated products of the MSS from Panning et al. (2017): (A) the probability density function output of a Bayesian inversion of a small number of P, S and Rayleigh wave group arrival times for the resolution of Earth mantle velocity structure; (B) a range of models of shallow crustal structure, colored by data misfit, inverted to match synthetic Mars observations of the frequency-dependent ratio of vertical-component to horizontal-component amplitude; and (C) a Bayesian inversion of mantle structure from noisy synthetic long-period normal mode spectra, in which green colors represent higher-probability models and blue colors lower probability. Operationally, the MSS is responsible for delivering a range of a priori models of structure (e.g. Sect. 2.2), which are used broadly by the science team for the mission, as well as allowing for probabilistic location of the initially recorded events by the MQS by taking into account differences in body wave and surface wave arrival time predictions for a reasonable range of models (e.g. Böse et al. 2017). As data become available, this set of models will be refined by Bayesian inversion of the recovered data, and the revised models will be distributed to the community and used by the MQS to reduce location uncertainties.

9 Data Distribution and Archiving

9.1 SEIS Portal

The SEIS Portal (http://www.seis-insight.eu) is a hub leading to three distinct websites, each tailored toward a specific population of users/visitors: the general public, scientists, and students and teachers. The public website offers a complete, understandable overview of the SEIS experiment and of the InSight mission, through four sections and 200 web pages. The first section explains some basic principles of seismology and presents previous planetary seismology experiments. The second section is dedicated to SEIS and presents not only the instrument itself but also the legacy of the development behind it, as well as a lot of information related to tests. The third section focuses on the InSight mission itself, including the lander. Finally, the last section deals with Martian science: after a short presentation of the internal structure of rocky planets, several articles introduce the reader to the many scientific goals of the InSight mission. New content will be regularly added throughout the mission. The primary target audience of the general public website is teenagers and adults. Suitable content for younger children will be available at the ETHZ website (http://marsatschool.ethz.ch/en/index.html), while content targeted at teenagers and students will be located at the GeoAzur website (https://insight.oca.eu/). In order to provide the necessary graphic support for explaining the instrument characteristics and science rationale, a set of didactic, colorful, original illustrations and animations has been created in the science section of the public website, including artist's concepts.
Several more sophisticated graphical products have also been developed, such as a fully textured cutaway of the SEIS instrument at the surface of Mars, a real-time interactive 3-D model of the VBB pendulum, 360° cylindrical views of hardware and 3-D models of meteorites, etc. The scientific website of the SEIS Portal is dedicated to professional seismologists and will provide access to different sets of data and specific documentation. Data distribution is described in Sect. 9.2. The third website encompassed in the SEIS Portal is focused on education and outreach, described in Sect. 9.4. It will give access to diverse education and public outreach initiatives and to multiple sets of educational resources. Two additional areas of the SEIS Portal are dedicated to news and media. With a few exceptions, content is presented in both English and French.

9.2 SEIS Data Distribution

The SEIS data flow, with key contact persons, is described in Fig. 121. Raw spacecraft data are downlinked through the Deep Space Network to the Multi-mission Instrument Processing Laboratory (MIPL) at JPL. These are transferred to SISMOC via the File Exchange Tool (FEI). Data are then archived in SEED format (Standard for the Exchange of Earthquake Data). Readers not familiar with SEED can find additional information on this data format in the SEED manual (FDSN 2012) and in Appendix B. IPGP will provide SISMOC with the static dataless SEED associated with the instrument transfer function and other non-flight-software (F/SW) tunable parameters. SISMOC will then:
- complete the SEED dataless with dynamic information from the F/SW tunable parameters;
- generate raw and calibrated data from the raw spacecraft data and the static SEED dataless;
- generate processed data from the "Black Box";
- distribute all of the above data in miniSEED/dataless SEED to the MSDS (Mars SEIS Data Service) and the MQS (Marsquake Service).
Relevant APSS data (such as magnetic field, temperature, pressure, wind direction and speed) will be packaged with the SEIS data. The IPGP SEIS Team (with support from the SP and APSS teams) will certify the integrity of the SEIS/APSS SEED data to the MSDS. The MSDS will then archive and make the data available through FDSN Web Services (fdsnws-station, fdsnws-dataselect) in FDSN StationXML and miniSEED formats. Metadata in dataless SEED format will also be available. The MSDS will archive the data as a mirror node of IRIS. The SEIS, InSight science, outreach and education teams will be able to access all scientific SEIS and APSS data both through authenticated FDSN Web Services and through the SEIS Portal (SEIS Team members' intranet). The MSDS is responsible for the final delivery of SEIS data in SEED (dataless, miniSEED) format to IRIS and to the PDS archive generation team. The data formatted according to the SEED format will be distributed under the reserved FDSN temporary network code XB (period 2016–2022). Two station codes are planned: ELYSE for the scientific data and ELYHK for the housekeeping data. In addition, a Digital Object Identifier (DOI) is planned for these data. IPGP will also deliver the Mars reference internal structure catalogues through the MSS (Panning et al. 2017 and Sect. 8.3), while ETHZ will be in charge of delivering the marsquake catalogue through the MQS (Clinton et al. 2018, this issue, and Sect. 8.2). See Sects. 8.2 and 8.3 for more details. The PDS will distribute and maintain all InSight archives for the NASA planetary science community and the general public.
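A minimal sketch (not mission software) of how a user could retrieve ELYSE data through the FDSN web services described above, using ObsPy; the client key, date and channel code are illustrative assumptions, and the actual MSDS or IRIS endpoints and open channels apply.

```python
# Sketch: fetching InSight SEIS metadata (StationXML) and waveforms (miniSEED)
# through FDSN web services with ObsPy.  Endpoint, date and channel are
# placeholders for this example.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IPGP")                     # or Client("IRIS") for the mirror
t0 = UTCDateTime("2019-06-01T00:00:00")     # arbitrary example date

inventory = client.get_stations(network="XB", station="ELYSE",
                                level="response", starttime=t0)

stream = client.get_waveforms(network="XB", station="ELYSE", location="*",
                              channel="BHW", starttime=t0, endtime=t0 + 3600)
print(inventory)
print(stream)
```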
SEIS data will be archived with the Geosciences Node. The open-access release of SEIS data in SEED format at both IPGP and the IRIS DMC will be synchronized with the release of these data at the PDS. SISMOC will also make other relevant data available, in particular:
- the time correlation coefficients;
- the FSW configuration files;
- the EBOX/PAE configuration at a given date;
- the ERP approved/rejected list containing the results of the weekly coordination meeting on event selection;
- notifications of new file presence (RSS);
- coherence files produced by the coherency calculation science processing (black box); and
- deglitch log files produced by the deglitch science processing.

9.3 SEIS Services Higher Level Products

The key higher-level products that will be delivered from the SEIS experiment will be seismicity catalogues and models of the Martian structure. Seismicity catalogues will be created and curated by the Marsquake Service team. The methods used to detect and characterize seismicity are described in detail in Clinton et al. (2018) and summarized in Sect. 8.2. The catalogue is expected to include both tectonic and impact events, but will not include lander- or weather-related activity. The format will be QuakeML 1.2 (https://quake.ethz.ch/quakeml/), with InSight-specific extensions to reflect the single-station methods, the probabilistic location formulation and the Mars-specific event types (marsquakes, meteor impacts). Typically, events will be identified within hours of the data being received on Earth. Event information will be distributed immediately to the science team via the InSight portal, using the standard fdsnws-event web service (https://service.iris.edu/fdsnws/event/1/) with extended Mars-specific options. Updated marsquake catalogues will be made available once structure models are updated. The primary product from the Mars Structure Service will be models of the Martian interior. As described in Panning et al. (2017) and Sect. 8.3, models will be developed in a Bayesian probabilistic fashion, and so the final product will be a suite of models of interior structure. These models will include seismic velocities and the depths of major structural transitions, such as the core-mantle boundary and the mean crustal thickness. The models will be delivered in a flexible "deck" format based on that originally utilized by the widely used seismic free-oscillation code MINOS (e.g. Woodhouse 1988), but modified to be more flexible and to include more possible structural parameters, so as to be compatible with the model format used by the AxiSEM numerical wave propagation code (Nissen-Meyer et al. 2014). 3D crustal models based on gravity and topography (e.g. Neumann et al. 2004) constrain only the crustal thickness variation, not the mean crustal thickness. As the seismic data constrain the crustal thickness at the landing site, updated 3D crustal models will also be produced, released both as latitude and longitude sampled files and as spherical harmonic expansions useful for gravity comparisons. These models will be delivered and updated regularly during the course of the mission for use by the MQS probabilistic location algorithm and periodically made available via the InSight data portal. Martian interior models and seismicity catalogues will be periodically distributed to the public through the InSight portal using the same services described above, following the ending of the embargo periods.
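As an illustration of how such a catalogue could be consumed (a sketch only; the file name is a placeholder, and the Mars-specific extensions are assumed to leave the core QuakeML attributes standard):

```python
# Sketch: reading a QuakeML 1.2 marsquake catalogue with ObsPy and printing
# basic event attributes.  The file name is a placeholder.
from obspy import read_events

catalogue = read_events("marsquake_catalogue.xml")
for event in catalogue:
    origin = event.preferred_origin() or event.origins[0]
    magnitude = event.preferred_magnitude() or (
        event.magnitudes[0] if event.magnitudes else None)
    print(origin.time,
          magnitude.mag if magnitude else "no magnitude",
          event.event_type)
```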
At the conclusion of the experiment, the full range of models and seismicity catalogues will be provided in the final mission Planetary Data System (PDS) product.

9.4 SEIS Education Plan

The SEIS InSight education plan has been designed to develop a specific scientific programme for secondary schools, high schools and the general public, allowing a generation of school kids, teens and students to follow the mission at the same time as the InSight project scientists. The main objectives are:
- to provide the school network with the seismic activity of another terrestrial planet;
- to initiate comparative planetology activities in schools based on space mission data;
- to test, through fun hands-on experiments, the key processing methodologies used by InSight;
- to organize workshops for teachers and to explore innovative activities in geophysics teaching.

9.4.2 Resource Distribution

The resources will be organized by topic to facilitate the teachers' school activities. Topic 'DATA' will be one of the most highlighted topics. Hundreds of secondary schools and high schools, mostly in the US, UK, France and Switzerland, will receive SEIS (and weather) data from Mars daily, following the public data release. This wide distribution of data is made possible by the worldwide partnership already existing between seismological educational networks (OCA-France, IRIS-US, ETHZ-Switzerland, GS-England); see Courboulex et al. (2012) and, for activities in France, Berenguer and Virieux (2008). Students will have access to data selected for educational use, and teachers will then be able to propose case studies to investigate the planet Mars. Topic 'TELLURIC' covers many hands-on activities about planetology, seismology experiments, meteorite impacts and the physical states of water, pressure, temperature, etc. With the help of data from the HiRISE camera, students will also be able to print Digital Terrain Model files with 3D printers and learn more about Mars topography. Topic 'JOURNEY' will provide the resources to explain the launch, the Earth-Mars transfer and the landing. This will be done with hands-on tools and dedicated software providing Mars' orbit characteristics, distances and planet positions, and allowing the selection of the launch window or landing site. Topic 'SENSOR' aims at a better understanding of the data recorded on Mars. Many experiments using simple or more sophisticated sensors have been developed for the classroom. Technical high school students can easily build seismic, pressure, temperature and wind sensors by themselves and have the opportunity to draw and build replicas of the lander and sensors. Elysium, a replica made by students in Toulouse and presented at the 'Salon du Bourget' in 2015, is one example. Robotics, electronics and computer courses will take advantage of this topic. Topic 'SIGNAL' is necessary to understand the communication techniques between Earth and Mars during the mission. It describes experiments for students on the differences between analog and digital signals, on the sampling properties of signals and on the solar insolation on Mars as compared to Earth.

9.4.3 Share with the Educational Community

All of these resources and data must be provided and shared with the educational community. Data and activities, organized by topic and by school age level, will be displayed on the Education website. Teachers and the general public will be able to access these web pages through the SEIS portal.
The teachers will find data (selected data and case studies), hands-on activities (described step by step), sensor and replica plans (helpful for building models) and dedicated software (to read data and to simulate phenomena), and it will be easy to download all these educational packages. The success of this specific program for schools depends on good teacher training. Tutorials and documentation must not be forgotten, so that the teachers can access the resources easily and quickly. Workshops with teachers and scientists involved in the mission will help to provide training for this specific program. With this education program, we will be able to bring the InSight mission and the SEIS experiment into the classroom and give pupils and their teachers the opportunity to do science with a multidisciplinary approach, in connection with the scientists.

10 Conclusion

Forty-two years after the landing of the two Viking seismometers on Mars and 41 years after the end of the Lunar ALSEP seismic network on the Moon, the SEIS instrument on InSight will start a new chapter in terrestrial planetary seismology and will search for the first definitive quake detected on a terrestrial planet other than Earth. More than 60% of the SEIS mass has been allocated to its surface wind and thermal protection, and we can therefore expect a surface installation that is optimal with respect to the constraints of a Mars robotic mission. The deployment will however be made on a subsurface of low-rigidity material (Morgan et al. 2018), due to landing safety contingencies and the need for such a subsurface for the successful deployment of HP3, another InSight payload instrument aiming to measure the heat flow of Mars. Pressure data will therefore be recorded continuously in order to minimize the pressure-related ground deformation noise. Thanks to InSight's robotic arm, SEIS will benefit from possibly the best installation scenario achievable by a static lander and will be able to detect quakes three orders of magnitude smaller than Viking. This results in a predicted detection threshold of moment magnitude \(M_{w} \sim 4\) for global detection and \(M_{w} \sim 3\) for epicentral distances up to 40°. In addition, SEIS measurements will be continuously supported by the APSS suite of pressure, wind and magnetic sensors, which will not only allow systematic noise decorrelation and event validation but will also make joint event monitoring possible, from regional impacts with joint seismic and infrasound signals to local dust devils with seismic, pressure and magnetic signals. SEIS is in essence a true discovery experiment on a Discovery mission, in the sense that it will possibly perform a sequence of discoveries comparable to those made on Earth 100–120 years ago: the first marsquake detection, the first solid tide observation at Mars' surface, the first seismic impact detection and the first constraints on the crust, upper mantle and core size. In addition, SEIS will explore a planet where micro-seismic noise is likely generated only by the atmosphere, in contrast to the Earth, where micro-seismic noise is dominated by oceanic waves and anthropogenic activity. SEIS data will be distributed in SEED format, following the schedule of NASA's Planetary Data System. In addition, these data will be made available at the IRIS DMC and at the IPGP Data Center. Data will also be distributed to a wide international network of schools and colleges, through the educational programs in the USA and in several European countries.
We can therefore hope that the installation of the possibly long-duration InSight geophysical station with SEIS will not only provide key constraints on the interior of Mars and on our understanding of Mars' evolution since its early Noachian-Phyllosian era, but will also renew the systematic seismic exploration of the terrestrial planets, Earth's Moon and the icy moons of the giant planets by future planetary science missions.

Acknowledgements The authors of this paper acknowledge the scientific discussions and inputs from all SEIS and InSight team members who have focused their activity on the scientific preparation of the SEIS data analysis phase and on the preparation of interdisciplinary investigations. The authors acknowledge the three anonymous reviewers, whose reviews greatly improved the manuscript. The French team acknowledges the French Space Agency CNES, which has supported and funded all SEIS-related contracts and CNES employees. Very large human resource support has also been provided by the Centre National de la Recherche Scientifique (CNRS), by the Institut National des Sciences de l'Univers (INSU) for contract administrative management, and by several French universities and engineering schools, including Université Paris Diderot, Institut de Physique du Globe de Paris and Institut National de l'Aéronautique et de l'Espace/Supaero. The cleaning of all VBBs has been performed by IPGP, with the support of IMPMC, in the clean rooms of the Paris Diderot Space Campus funded by the Ile de France region under the SESAME project "Pole Terre-Planètes", while the cold tests have been performed in facilities funded by CNES and Ile de France under the SESAME project "Centre de Simulation Martien". Extra scientific support for the French SEIS team has been provided by ANR under the ANR SEISMARS project, by the UnivEarthS Labex program (ANR-10-LABX-0023), by IDEX Sorbonne Paris Cité (ANR-11-IDEX-0005-0) and by GENCI (A0030407341) for supercomputing resources. P. Lognonné acknowledges the long support of CNES and of R. Bonneville, F. Rocard and especially F. Casolli, as well as CNRS and the Institut Universitaire de France for extra support enabling full dedication to the SEIS project during the critical implementation time. The French team acknowledges and thanks all contractors and industrial partners who have contributed to the EC/VBB/VBB Electronics/cradle subsystems and associated tests through industrial contracts (SODERN, EREMS, DELTAPRESI, MICROCERTEC, AIRBUS DS, SMAC, MECANO ID). IMAGO and VR2PLANET also contributed to the SEIS portal and outreach VR tools. The CNES Toulouse team acknowledges and thanks the contractors who supported the integration and environment tests (ALTEN SUD OUEST, LOGIQUAL ET ALTRAN, XLM, EPSILON, THALES SERVICE, LANAGRAM), the performance tests (ATELIER IMAGES, R-TECH, ASSYSTEM, AKKA, VERITAS) and SISMOC (CS, CAPGEMINI). The IPGP technical team acknowledges and thanks the collaborators who provided support at IPGP's Observatoire de Saint Maur (D. Baillivet, C. Choque Cortez, F. Rolfo, A. Simonin from ALTRAN, R. Crane from ASSYSTEM and M.A. Desnos, O. Mbeumou from NEXEYA). Additional support to IPGP was provided by IRAP for the VBB proximity electronics packaging and by ENPC/Navier for the SEIS feet design. The IPGP VBB team also acknowledges those who contributed to the early design of the VBBs prior to InSight, especially J.F. Karczewski, S. Cacho, C. Cavoit and P. Schibler, and E. Wielandt for fruitful advice.
The Swiss contribution to the implementation of the SEIS electronics was made possible through funding from the federal Swiss Space Office (SSO), the contractual and technical support of the ESA-PRODEX office and the industrial contractor SYDERAL SA. We thank in particular A. Werthmüller (SSO), C. Bramanti (ESA) and C. Barraud (SYDERAL) for their strong contribution to the successful completion of the SEIS EBox. The Marsquake Service was partly supported by funding from (1) the Swiss National Science Foundation and the French Agence Nationale de la Recherche (SNF-ANR project 157133 "Seismology on Mars") and (2) the Swiss State Secretariat for Education, Research and Innovation (SEFRI project "MarsQuake Service - Preparatory Phase"). Additional support came from the Swiss National Supercomputing Centre (CSCS) under project ID s682. The MPS SEIS team acknowledges the funding of the SEIS leveling system development by the DLR German Space Agency. SEIS-SP development and delivery were funded by the UK Space Agency. Research described in this paper was partially carried out by the InSight Project, Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. We also thank A.-C. Plesa, M. Knapmeyer and V. Tsai for discussions on Mars seismicity and comments on an earlier draft. This paper is IPGP contribution 4007 and InSight Contribution Number 42.

Supplementary material: 11214_2018_574_MOESM1_ESM.pdf (2.4 MB); 11214_2018_574_MOESM2_ESM.mp4 (294.4 MB).

Appendix A: Single Station Analysis Strategies

The Concept Study Report listed and identified several single-station processing strategies with respect to the mission goals, which we recall in this Appendix. Normal modes, described in Sect. A.5, were nevertheless not integrated in the requirement flow. Based on the a priori activity of Mars, only one quake per year might excite the fundamental spheroidal normal modes in the 0.01–0.02 Hz bandwidth with an amplitude large enough compared to the expected instrument noise. An extended mission over several Mars years will therefore greatly improve the possibility of such a detection, in addition to the possible long stack of SEIS signals for the hum search.

A.1 Event Location by P–S and Back-Azimuth (L1-6, L1-7)

The distribution of seismic activity is determined by monitoring the teleseismic body wave frequency band (\(\sim 0.1\mbox{--}2.5~\mbox{Hz}\)) for seismic events. The approximate epicentral distances are derived from the differential P–S arrival time on the vertical record. Initially, the distance error will be \(\sim 10\%\) (reflecting the range of a priori estimates of Mars velocity models). The refinement of heat flow and crustal thickness (from HP3 and SEIS, respectively) will produce better constraints on upper mantle temperature. Together with SNC constraints on upper mantle mineralogy and detailed mineralogical modelling, this will result in further improvements to event locations and upper mantle seismic velocities. The back-azimuth of the initial P arrival gives the direction to the source of the P wave. This is measured using the horizontal components, yielding an error of \(\pm 10^{\circ}\) in azimuth for conservative projected levels of horizontal-component noise, resulting in a roughly \(\pm 15\%\) uncertainty in location. Events can thus be roughly localized within a \(\sim 150~\mbox{km}\) radius at distances of 1000 km.
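The following sketch illustrates the two measurements just described with invented numbers and a homogeneous-velocity approximation; the operational MQS procedure relies instead on travel-time predictions for the full range of a priori interior models.

```python
# Schematic single-station estimates from the P-S differential time and the
# P-wave back-azimuth described above.  Velocities and amplitudes are invented.
import numpy as np

# Epicentral distance from the S-P time (homogeneous-velocity approximation):
# distance = (tS - tP) * Vp * Vs / (Vp - Vs)
vp, vs = 7.5, 4.2                    # km/s, illustrative values
ts_minus_tp = 120.0                  # s, measured differential time
distance_km = ts_minus_tp * vp * vs / (vp - vs)

# Back-azimuth from the horizontal polarization of the first P arrival; the
# 180 degree ambiguity is resolved with the vertical-component polarity (not shown).
p_north, p_east = -0.8e-9, 1.4e-9    # m/s, illustrative first-motion amplitudes
backazimuth_deg = np.degrees(np.arctan2(p_east, p_north)) % 360.0

print(f"distance ~ {distance_km:.0f} km, back-azimuth ~ {backazimuth_deg:.0f} deg")
```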
With an approximate location (augmented by guidance from observed geology), the spatial distribution and magnitude (from the seismic amplitude and the approximate distance) of seismicity can be determined. This is a fundamental parameter of the seismic environment of a planet. See more in Böse et al. (2017) and Clinton et al. (2018, this issue).

A.2 Seismic Phase Analysis (L1-3, L1-5)

As noted above, the joint determination of P and S arrival times provides an estimate of epicentral distance to \(\sim 15\%\), and RISE reduces the uncertainty in core radius to 200 km. With these constraints on the ray paths, we can use the refined interior structure models together with synthetic seismogram analysis to identify later-arriving phases. The additional differential measurements of arrival times, such as PcP–P, PcS–S and ScS–S, as well as the comparison of their relative amplitudes to P, provide additional constraints on the seismic velocities and attenuation in the deep mantle of Mars. These constraints help refine the core size estimate and place bounds on lower mantle discontinuities.

A.3 Receiver Function Method (L1-2)

When a P or S wave strikes a discontinuity, it generates reflected and transmitted waves of both P and S type, so waves from distant quakes passing through a layered medium such as the crust and upper mantle can generate complicated seismograms containing many echoes. Such seismograms can be processed to generate simplified artificial waveforms called receiver functions (Phinney 1964; Langston 1979). These can be inverted to yield the variation of shear velocity with depth and are particularly sensitive to strong velocity discontinuities. Receiver functions are a powerful tool for studying the depth to the crust-mantle boundary or to other layers within the crust, and are computed from single seismograms without using the source location or time. This method has been widely used on the Earth and was successfully applied to the Moon (Vinnick et al. 2001). See more in Panning et al. (2017) and, for derivative techniques for constraining the crust, Knapmeyer-Endrun et al. (2018).

A.4 Surface Wave Dispersion (L1-1, L1-3)

Surface waves are low-frequency seismic waves that propagate in the crust and upper mantle due to the presence of the free surface. By sampling the crust, lithosphere and upper mantle, surface waves are an important source of information. The depths to which surface waves are sensitive depend on frequency, with low-frequency waves "feeling" greater depths and thus propagating at higher speeds. This results in dispersion, with low-frequency waves arriving earlier than higher frequencies. The details of the relation between frequency and group velocity are directly related to subsurface structure. Group velocities are extremely sensitive to the crustal thickness: variations of \(\geq 10\%\) are typical for crustal thickness variations of 20 km. The sensitivity to the upper mantle is also important, as the group velocity of surface waves (or the differential group velocity between the fundamental and the overtones) varies by 5–10% for different models, as listed in Panning et al. (2017) and Smrekar et al. (2019, this issue). In order to obtain velocity from arrival time, an estimate of the distance to the source, or of the propagation distance between two arrivals, must be used. This can be obtained from the multiple arrivals of Rayleigh waves (R1–R2–R3), as proposed by Panning et al. (2015), or from the P–S and R1–R2 differential times. See more details in Zheng et al. (2015), Khan et al. (2016) and Panning et al. (2017).
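To make the multiple-orbit relation concrete: with planetary circumference \(C\) and Rayleigh wave group velocity \(U\), R1 arrives at \(t_{0}+\Delta/U\), R2 at \(t_{0}+(C-\Delta)/U\) and R3 at \(t_{0}+(C+\Delta)/U\), so that \(U=C/(t_{R3}-t_{R1})\) and \(\Delta=U(t_{R3}-t_{R2})/2\). A short numerical sketch, with arrival times invented for illustration:

```python
# Sketch of the R1/R2/R3 multiple-orbit distance estimate (Panning et al. 2015).
# The three arrival times below are invented for illustration.
import math

MARS_RADIUS_KM = 3389.5
circumference = 2.0 * math.pi * MARS_RADIUS_KM        # planetary circumference C

t_r1, t_r2, t_r3 = 900.0, 4800.0, 6900.0              # s, illustrative picks

group_velocity = circumference / (t_r3 - t_r1)        # U = C / (tR3 - tR1)
distance_km = 0.5 * group_velocity * (t_r3 - t_r2)    # Delta = U (tR3 - tR2) / 2
origin_time = t_r1 - distance_km / group_velocity     # t0 = tR1 - Delta / U

print(f"U ~ {group_velocity:.2f} km/s, distance ~ {distance_km:.0f} km, "
      f"origin time t0 ~ {origin_time:.0f} s relative to the record start")
```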
A.5 Normal Modes and Hum Analysis (L1-1, L1-3)

For a single seismometer, the most effective techniques for studying deep structure use normal mode frequencies, which do not require knowledge of the source location. Normal mode spectral peaks from 5–20 mHz (sensitive to mantle structure) can be identified for a detection noise level of \(10^{-9}~\mbox{m}/\mbox{s}^{2}/\mbox{Hz}^{1/2}\) (Lognonné et al. 1996, 2016; Gudkova and Zharkov 2004). This can be accomplished by single-seismogram analysis of a large quake of moment \(\geq 2 \times 10^{17}~\mbox{N}\,\mbox{m}\) (equivalent magnitude \(\sim 5.5\)). About three such quakes are expected during one Mars year. The SNR of the mode peaks can be further improved by stacking multiple quakes with lower magnitudes. Due to the size of the planet, however, the frequencies of the modes, for a given angular order, are typically twice those on Earth. Similar techniques can be applied to the background noise generated globally by atmospheric dynamics (Kobayashi and Nishida 1998), the so-called seismic "hum." Calculations based on excitation by turbulence in the boundary layer, which do not take into account resonance effects, non-turbulent wind and pressure variations associated with atmospheric circulation (Tanimoto 1999, 2001), yield amplitudes for Mars of \(\sim 0.1~\mbox{nanogal}\), a factor of 2–3 smaller than on Earth. See more in Lognonné and Johnson (2015), Lognonné et al. (2016), Panning et al. (2017), Schimmel et al. (2018), Nishikawa et al. (2019, this issue) and Bissig et al. (2018).

Appendix B: SEIS-AC Gains

This appendix provides the gains of the SEIS-AC for the different channels.

B.1 Science and Engineering Conversion

Table 23 shows the gains and offsets required to convert the data into physical values, i.e. voltage or resistance. The general conversion is, for example, VBB vel \(\mbox{Voltage}=(\mbox{ADC\_Data}-\mbox{offset})/\mbox{Gain}\). The Least Significant Bit (LSB), defined as \(1/\mbox{Gain}\), is also given, as well as the Full Scale Range. Table 23 (Science and Engineering Data conversion factors) lists these quantities for the VBB VEL & POS, SP VEL, VBB Eng Temp (QM and FM/FS) and SCIT (QM and FM/FS) channels; for example, the LSB is \(2.98~\upmu\mbox{V}\) for VBB VEL & POS, \(13.68~\mbox{m}\Omega\) for VBB Eng Temp and \(85.39~\upmu\Omega\) for SCIT.

B.2 House Keeping Data

Table 24 shows the gains and offsets required to convert the data into physical values, i.e. voltage, current and resistance. For example, Ch2 \(\mbox{Voltage}=(\mbox{ADC\_Data}-\mbox{offset})/\mbox{Gain}\). Table 24 (FM HK Data conversion factors) lists the gain, offset and expected value in nominal mode for each housekeeping channel, including the SP housekeeping channels (SP-HK1-MPOS1, SP-HK1-TEMP-FB, SP-HK2-TEMP-FBE, SP-HK1-SP1-TEMP, SP-HK2-SP1-TEMPE, SP-HK1+VREF, SP-HK2-VREF), the SEIS-DC and SEIS-AC voltage and current monitors (SEIS-DC±13VV, SEIS-DC±15VA, SEIS-DC+7VAV, SEIS-DC+7VAA, SEIS-DC+1V2V, SEIS-DC+1V2VA, SEIS-DC±5VV, SEIS-DC±5VA, SEIS-AC±6VSV, SEIS-AC+6VSA, SEIS-AC+5VREF) and the housekeeping temperature channels (CAL1-HKT, VBB2-PXT, VBB3-HKT, ACQ-HKT, DC-HKT, CTL-HKT). There is no expected value for channels 41, 43 (SEIS-DC \(\pm 5~\mbox{VV}\)) and 45, 47 (SEIS-DC ±5 VA) in nominal mode because no motor operation is performed in this mode. The SP POS output is recorded as an HK with an FSR of 10 V (\(\pm 5~\mbox{V}\)). The LSB after averaging is \(152.59~\upmu\mbox{V}\), while the LSB of the 12-bit ADC is 2.44 mV.
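A minimal sketch of this conversion; the gain below is a placeholder chosen to be consistent with the \(2.98~\upmu\mbox{V}\) LSB quoted for the VBB VEL & POS channels, and the actual values are those of Tables 23 and 24.

```python
# Sketch of the SEIS-AC count-to-physical conversion described in this appendix:
# physical = (ADC_data - offset) / gain, with LSB = 1 / gain.
# The gain is a placeholder consistent with a 2.98 uV LSB; real values are in
# Tables 23 and 24.

def counts_to_physical(adc_counts, gain, offset=0.0):
    """Convert raw ADC counts to a physical value (V, mA or Ohm)."""
    return (adc_counts - offset) / gain

gain_counts_per_volt = 1.0 / 2.98e-6       # ~335570 counts per volt
lsb_volts = 1.0 / gain_counts_per_volt     # least significant bit, in volts

print(f"{counts_to_physical(123456, gain_counts_per_volt):.4f} V")
print(f"LSB = {lsb_volts * 1e6:.2f} uV")
```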
All HK TEMP channels have an LSB after averaging of \(12.805~\mbox{m}\Omega\), while the 12-bit LSB is \(204.88~\mbox{m}\Omega\). They are recorded with an FSR of \(839.18~\Omega\) (\(470.97~\Omega\) to \(1310.15~\Omega\)). The VBB PXT channels are VBB temperature sensors located in the VBB-PE boxes. HK voltages and currents have different LSBs that can be calculated in a similar way. The FSR can be calculated using the formula below, in which a minimum code of zero and a maximum code of \(2^{16}-1 = 65535\) are inserted: \(\mbox{physical value}~[\mbox{V or mA}] = (\mbox{code}-\mbox{offset})/\mbox{Gain}\).

Appendix C: MiniSEED Format Description

C.1 SEED Data Description

C.1.1 Overview

InSight SEIS data, as well as APSS data (for seismic use), will be released in SEED format (FDSN 2012). We briefly recall here this format, as well as the general guidelines chosen by the project for the SEIS data description. The Standard for the Exchange of Earthquake Data (SEED) format includes a data volume containing the waveforms and a header volume, called the dataless volume. SEED was designed for use by the earthquake research community, primarily for the exchange of unprocessed Earth motion data between institutions. A dataless SEED volume contains the normal Volume, Abbreviation and Station Control Headers. The purpose of these volumes is to provide an alternative method for making sure that the various data centers have current and correct information for seismic stations. It contains the metadata, including instrument responses, instrument coordinates, compression type, etc. A dataless volume, by definition, contains no "data", in the sense that no waveform data are included, only headers. It can of course be used in combination with a miniSEED volume, which is a data-only volume. It shall be noted that researchers interested mostly in APSS for atmospheric research shall use the APSS data from the PDS, as SEED is not able to fully represent the complex temperature dependency of the APSS data. APSS data in SEED shall therefore be used mostly for decorrelation and diagnostic purposes on SEIS.

C.1.2 SEED Volume Description

SEED physical volumes may contain one or more logical volumes. SEED logical volumes may be dataless volumes or may contain waveforms; their organization is described in Fig. 122.

C.1.3 MiniSEED Information

All data information will be encoded either in the miniSEED (mSEED) header or in the SEED dataless, both of which will be provided with the released data. Details are given below for the mSEED header and for the SEED dataless. According to the SEED Reference Manual, the mSEED data packet is composed of the following main fields:
- a fixed header of 48 bytes;
- one or two blockettes: always the blockette coded as number 1000 and, optionally, the blockette coded as number 1001;
- the data field.
The mSEED header contains information about the waveform it contains, such as the location identifier, channel identifier, network code and record start time. Details of the mSEED format can be found in Sect. C.2.

C.1.4 InSight Dataless Description

The InSight team chose the SEED format to distribute the data collected during the mission, to ensure that data handling will be as clear and easy as possible. Indeed, the SEED format contains a maximum number of parameters and processing descriptions on the way the data have been produced, including all the instrument information necessary to get a complete description of the way the onboard flight software processed the data.
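As a sketch of how the dataless and miniSEED volumes are meant to be used together (the file names are placeholders, and ObsPy is only one possible tool):

```python
# Sketch: combining a data-only miniSEED volume with its dataless SEED metadata
# in ObsPy.  File names are placeholders; the dataless carries the instrument
# response needed to convert counts into ground motion.
from obspy import read, read_inventory

stream = read("XB.ELYSE.waveforms.mseed")              # miniSEED (data only)
inventory = read_inventory("dataless.XB.ELYSE.seed")   # dataless SEED (metadata)

stream.attach_response(inventory)
stream.remove_response(output="VEL")    # counts -> ground velocity (m/s)
print(stream)
```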
C.1.5 SEED Network and Station

As the landing site is close to Elysium Mons, the InSight station name for the final configuration is Elysium Planitia and its acronym is ELYSE, while data produced during cruise, post-landing and deployment will be identified as CRUI(1/2) and ELYS(0/1). The station name for synthetics is SYNT1 (e.g. Ceylan et al. 2017; Clinton et al. 2018). The position of the station is given in Mars geographic coordinates for ELYSE, while for CRUI the coordinates chosen are those of the spacecraft with respect to Earth at the time of the first data of the cruise check. The mission network label is XB and is used for data produced during the test, cruise, commissioning and normal science operation mission phases. Data may nevertheless have different network codes, such as 7I for the test data produced during the pre- and post-flight phases and 7J for the synthetic data.

C.1.6 Channel Naming Convention

The channel naming convention is as close as possible to the SEED channel naming rules and is given in Table 25. The channel name components are the frequency (band) ID, the instrument code and the orientation code. Full channel names can be found in the annex of this document (Sect. C.3). Table 25 (synthesized channel list) gives the channel naming and instrument codes used for the high-gain seismometer, low-gain seismometer, mass position, synthesized beam data and non-specific instrument channels.

C.1.7 SEED Information

The SEED dataless volume contains the full description of the data in the mSEED volume and may contain the blockettes describing the volume (Table 26), the abbreviations and configurations (Table 27) and the station information (Table 28). Included in the station section, the channel identifier blockette is repeated for each channel instance (different starting/ending times) and is followed by the suitable set of description blockettes. The succession of processes that leads to the production of the data is described in blocks called stages.
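For instance, once the dataless has been read, the per-stage description carried by these blockettes (poles and zeros, FIR coefficients, decimation, gain; see Table 28 below) can be listed; a sketch with placeholder file, channel and time values:

```python
# Sketch: listing the response stages stored in the dataless blockettes
# (poles and zeros, FIR filters, decimation, gain).  The file name, channel
# code and date are placeholders.
from obspy import UTCDateTime, read_inventory

inventory = read_inventory("dataless.XB.ELYSE.seed")
response = inventory.get_response("XB.ELYSE.00.BHW", UTCDateTime("2019-06-01"))

for stage in response.response_stages:
    print(stage.stage_sequence_number, type(stage).__name__,
          stage.input_units, "->", stage.output_units)
```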
Table 26 (volume section blockettes) contains:
- Volume Identifier: record length, beginning and ending time;
- Volume Station Header Index: list of the station names that the volume may contain.
Table 27 (abbreviation and configuration section blockettes) contains:
- Data Format Dictionary: dictionary referenced in the channel description field;
- Comment Dictionary: dictionary of the comments used in blockette 59;
- Generic Abbreviation: list of the abbreviations used, such as the instrument type;
- Units Abbreviation: list of the measurement unit abbreviations used, such as M/S or V;
- Beam Configuration: list of the channels used for the onboard generation of the VBB/SP hybrid output.
Table 28 (information encoded in the SEED dataless for station information) contains:
- Station Identifier: station short name, full name, localization, starting and ending time, network name;
- Channel Identifier: location code, instrument identifier, unit of signal, sample rate, starting and ending time;
- Poles and Zeros: real and imaginary poles and zeros, and their errors, of the analog transfer function of the sensors or of the LVL;
- Response (filter): FIR filter coefficients of either the SEIS-AC or the SEIS F/SW;
- Decimation: input sampling rate, decimation ratio and filter delays of either the SEIS-AC or the SEIS F/SW;
- Sensitivity/Gain: sensitivity gain of the sensors and of the SEIS-AC;
- Comment: index to the comment dictionary (for the instrument temperature sensitivity);
- Response Polynomial: polynomial components used as \(V=P_{0}+P_{1}S+P_{2}S^{2}+P_{3}S^{3}+\ldots\) (for the temperature sensors).

C.2 MiniSEED Fixed Data Header Fields

Data will be stored in miniSEED (mSEED) files (see Fig. 123 for an example), each one with a header described in Table 29 and two blockettes of additional information, described in Sects. C.2.2 and C.2.3. The size of each packet is 512 bytes and it contains the starting time of the data in UTC. As the number of samples is known, the time difference between two mSEED packets provides information on the drift of the SEIS-AC clock. Fig. 123 shows a hex dump of a record from the station cola.iu.liss.org, with the most important fields identified and linked to their description. Table 29 (information encoded in the miniSEED header) gives the byte position and length (bytes) of the fixed-header fields: sequence number, data header/quality indicator, reserved byte, record start time (BTIME), number of samples (UWORD), sample rate factor, sample rate multiplier, activity flags (UBYTE), I/O flags, data quality flags, number of blockettes that follow, offset to the beginning of data and offset to the beginning of the first blockette.

C.2.2 The Blockette Type 1000 Specifications

This is a data-only SEED blockette. It is 8 bytes long (Table 30) and has the following structure:
- the blockette code, which always contains the number 1000;
- the offset to the beginning of the next blockette;
- the encoding format, according to a basic table of codes including ASCII text (byte order as specified in field 4), 16-bit integers, IEEE floating point, IEEE double precision floating point and STEIM (1) compression;
- the word order: according to the SEED reference manual, a zero (0) stands for little-endian and a one (1) for big-endian;
- the exponent of a base of 2 specifying the record length; in LISS miniSEED it is 9, which means \(2^{9}=512\).
Table 30 summarizes the information encoded in the 1000 blockette: blockette code, offset to the beginning of the next blockette, encoding format and data record length.

C.2.3 The Blockette Type 1001 Specifications

This blockette is 8 bytes long (Table 31) and has the following structure: the blockette code, which always contains the number 1001; and the timing quality.
Can be used by the digitizer manufacturer to estimate the time quality of this data packet form (0 to 100% of accuracy). Precision of the start time down to microseconds. This filed is present to improve the accuracy of the time stamping given by the fixed header time structure. Reserved byte. Frame count. Is the number of 64 byte compressed data frames in the 4k record. (maximum of 63). Timing quality Microseconds Frame count C.3 SEIS-INSIGHT Channel List The naming of all SEIS channels is give in Table 32 for the VBB Velocity Channels, Table 33 for the VBB POS channels, Table 34 for the Temperature channels, Table 35 for the APSS channels, Table 36 for the SP channels and Table 37 for the onboard computed channels. Channel naming of the VBB velocity channels Loc ID Chan flag VBB 1 Velocity High Gain Science mode HHU MHU VBB 1 Velocity Low Gain Science mode HLU VBB 1 Velocity High Gain Engin. Mode VBB 1 Velocity Low Gain Engin. Mode MHV LLV BHW MLW LLW Channel naming of the VBB POS channels 0.1–0.5 Hz VBB 1 Position High Gain Science mode VMU VBB 1 Position Low Gain Science mode VBB 1 Position High Gain Engin. mode VBB 1 Position Low Gain Engin. mode LMW VMW Channel naming of the SEIS and VBB T channels Temperature channels <0.01 VBB 1 Temperature LKU VKU UKU LKV VKV UKV RKV VKW Scientific Temperature A LKI VKI Scientific Temperature B Channel naming of the APSS channels APSS channels Wind horizontal speed LWS VWS Wind vertical speed Atmosphere temperature LKO VKO UKO Pressure (outside) Pressure sensor temperature (inside) MKI Magnetomer1 Magnetometer temperature BKM VKM Channel naming of the SP channels SP channels SP1 (High Gain) EHW SHW SP1 (Low Gain) Synthesized channels SEISVELZ HZC BZC MZC LZC EZC VBBR ESTAVBB LHZ MAXVBB ESTASP LLZ MAXSP MAGZ ESTAP1 MAXP1 ESTAM H. Abarca, R. Deen et al., Image and data processing for InSight lander operations and science. Space Sci. Rev. (2018 this issue) Google Scholar N. Ackerley, Principles of broadband seismometry, in Encyclopedia of Earthquake Engineering, ed. by M. Beer, I.A. Kougioumtzoglou, E. Patelli, I. Siu-Kui Au (Springer, Berlin, 2014). https://doi.org/10.1007/978-3-642-36197-5_172-1 CrossRefGoogle Scholar D.C. Agnew, History of seismology. Int. Geophys. Ser. 81, 3–11 (2002). https://doi.org/10.1016/S0074-6142(02)80203-0 CrossRefGoogle Scholar D.L. Anderson, J. Given, Absorption band Q model for the Earth. J. Geophys. Res. 87, 3893–3904 (1982) CrossRefADSGoogle Scholar D.L. Anderson, W.F. Miller, G.V. Latham, Y. Nakamura, M.N. Toksoz, A.M. Dainty, F.K. Duennebier, A.R. Lazarewicz, R.L. Kowach, T.C. Knight, Seismology on Mars. J. Geophys. Res. 82, 4524–4546 (1977a). https://doi.org/10.1029/JS082i028p04524 CrossRefADSGoogle Scholar D.L. Anderson, F.K. Duennebier, G.V. Latham, M.N. Toksöz, R.L. Kovach, T.C.D. Knight, A.R. Lazarewicz, W.F. Miller, Y. Nakamura, G.H. Sutton, Viking seismology experiment. Bull. Am. Astron. Soc. 9, 447 (1977b) ADSGoogle Scholar C. Bagaini, C. Barajas-Olalde, Assessment and compensation of inconsistent coupling conditions in point-receiver land seismic data. Geophys. Prospect. 55, 39–48 (2007). https://doi.org/10.1111/j.1365-2478.2006.00606.x CrossRefADSGoogle Scholar W.B. Banerdt, P. Lognonné, An autonomous instrument package for providing "pathfinder" network measurements on the surface of Mars, in Sixth International Conference on Mars, Pasadena, California, July 20–25, 2003. Abstract no. 3221 Google Scholar W.B. Banerdt, W.T. 
Pike, A miniaturized seismometer for surface measurements in the outer solar system, in Forum on Innovative Approaches to Outer Planetary Exploration 2001–2020 (2001) Google Scholar W.B. Banerdt, S. Smrekar, Geophysics and meteorology from a single station on Mars, in 38th Lunar and Planetary Science Conference (Lunar and Planetary Science XXXVIII), League City, Texas, March 12–16, 2007. LPI Contribution No. 1338, id. 1524 Google Scholar W.B. Banerdt, S. Smrekar, J. Ayon, W.T. Pike, G. Sprague, A low-cost geophysical network mission for Mars, in 29th Annual Lunar and Planetary Science Conference, Houston, TX, March 16–20, 1998, abstract no. 1562 Google Scholar W.B. Banerdt, S. Smrekar, V. Dehant, P. Lognonné, T. Spohn, M. Grott, in GEMS (GEophysical Monitoring Station), EPSC-DPS Joint Meeting 2011, 2–7 October 2011, Nantes, France (2011), p. 331. http://meetings.copernicus.org/epsc-dps2011 Google Scholar D. Banfield, J.A. Rodriguez-Manfredi, C.T. Russell, K.M. Rowe, D. Leneman, H.R. Lai, P.R. Cruce, J.D. Means, C.L. Johnson, S.P. Joy, P.J. Chi, I.G. Mikellides, S. Carpenter, S. Navarro, E. Sebastian, J. Gomez-Elvira, J. Torres, L. Mora, V. Peinado, A. Lepinette, K. Hurst, P. Lognonné, S.E. Smrekar, W.B. Banerdt, InSight Auxiliary Payload Sensor Suite (APSS). Space Sci. Rev. (2018). https://doi.org/10.1007/s11214-018-0570-x CrossRefGoogle Scholar J.R. Bates, W.W. Lauderdale, H. Kernaghan, ALSEP Termination Report, NASA Reference Publication Series, NASA-RP-1036, S-480, 914-40-73-01-72, p. 162 (1979) Google Scholar R. Beauduin, P. Lognonné, J.P. Montagner, S. Cacho, J.F. Karczewski, M. Morand, The effect of the atmospheric pressure changes on seismic signals or how to improve the quality of a station. Bull. Seismol. Soc. Am. 86, 1760–1769 (1996) Google Scholar V. Belleguic, P. Lognonné, M. Wieczorek, Constraints on the Martian lithosphere from gravity and topography data. J. Geophys. Res. 110, E11005 (2005). https://doi.org/10.1029/2005JE002437 CrossRefADSGoogle Scholar A. Ben-Menahem, A concise history of mainstream seismology: origins, legacy and perspectives. Bull. Seismol. Soc. Am. 85(4), 1202–1225 (1995) Google Scholar J.-L. Berenguer, J. Virieux, Raising Awareness on Earthquake Hazard (European Commission, Brussels, 2008). ISBN 978-92-79-07083-9 Google Scholar J. Bernstein, R. Miller, W. Kelley, P. Ward, Low-noise MEMS vibration sensor for geophysical applications. J. Microelectromech. Syst. 8, 433–438 (1999). https://doi.org/10.1109/84.809058 CrossRefGoogle Scholar J. Biele, S. Ulamec, J. Block, D. Mimoun, P. Lognonné, T. Spohn, The Geophysics and Environmental Package (GEP) of the ExoMars Mission, in European Planetary Science Congress 2007, Proceedings of a Conference, Potsdam, Germany, 20–24 August, 2007, p. 244. Online at: http://meetings.copernicus.org/epsc2007 Google Scholar F. Bissig, A. Khan, M. van Driel, S. Stahler, D. Giardini, M. Panning, M. Drilleau, P. Lognonné, T.V. Gudkova, V.N. Zharkov, W.B. Banerdt, On the detectability and use of normal modes for determining interior structure of Mars. Space Sci. Rev. 214, 114 (2018). https://doi.org/10.1007/s11214-018-0547-9 CrossRefADSGoogle Scholar M. Böse, J.F. Clinton, S. Ceylan, F. Euchner, M. van Driel, A. Khan, D. Giardini, P. Lognonné, W.B. Banerdt, A probabilistic framework for single-station location of seismicity on Earth and Mars. Phys. Earth Planet. Inter. 262, 48–65 (2017). https://doi.org/10.1016/j.pepi/2016.11.003 CrossRefADSGoogle Scholar M. Böse, D. Giardini, S. Stähler, S. Ceylan, J. Clinton, M. van Driel, A. 
Khan, F. Euchner, P. Lognonné, W.B. Banerdt, Magnitude scales for marsquakes. Bull. Seismol. Soc. Am. 108, 2764–2777 (2018). https://doi.org/10.1785/0120180037 CrossRefGoogle Scholar E. Bozdag, Y. Ruan, N. Metthez, A. Khan, K. Leng, M. van Driel, M. Wieczorek, A. Rivoldini, C.S. Larmat, D. Giardini, J. Tromp, P. Lognonné, B.W. Banerdt, Simulations of seismic wave propagation on Mars. Space Sci. Rev. 211(1–4), 571–594 (2017). https://doi.org/10.1007/s11214-017-0350-z CrossRefADSGoogle Scholar S. Ceylan, M. van Driel, F. Euchner, A. Khan, J. Clinton, L. Krischer, M. Böse, D. Giardini, From initial models of seismicity, structure and noise to synthetic seismograms for Mars. Space Sci. Rev. 211(1–4), 595–610 (2017). https://doi.org/10.1007/s11214-017-0380-6 CrossRefADSGoogle Scholar R.G. Christian, The theory of oscillating-vane vacuum gauges. Vacuum 16, 175–178 (1966) CrossRefADSGoogle Scholar J. Clinton, D. Giardini, P. Lognonné, W.B. Banerdt, M. van Driel, M. Drilleau, N. Murdoch, M.P. Panning, R. Garcia, D. Mimoun, M. Golombek, J. Tromp, R. Weber, M. Böse, S. Ceylan, I. Daubar, B. Kenda, A. Khan, L. Perrin, A. Spiga, Preparing for InSight: an invitation to participate in a blind test for Martian seismicity. Seismol. Res. Lett. 88(5), 1290–1302 (2017). https://doi.org/10.1785/0220170094 CrossRefGoogle Scholar J.F. Clinton, D. Giardini, M. Böse, S. Ceylan, M. van Driel, F. Euchner, R.F. Garcia, S. Kedar, A. Khan, S.C. Stähler, B. Banerdt, P. Lognonné, E. Beucler, I. Daubar, M. Drilleau, M. Golombek, T. Kawamura, M. Knapmeyer, B. Knapmeyer-Endrun, D. Mimoun, A. Mocquet, M. Panning, C. Perrin, N.A. Teanby, The Marsquake Service—building a Martian seismicity catalogue for InSight. Space Sci. Rev. 214, 133 (2018 this issue). https://doi.org/10.1007/s11214-018-0567-5 CrossRefADSGoogle Scholar J.A.D. Connolly, Computation of phase equilibria by linear programming: a tool for geodynamic modeling and its application to subduction zone decarbonation. Earth Planet. Sci. Lett. 236, 524–541 (2005). https://doi.org/10.1016/j.epsl.2005.04.033 CrossRefADSGoogle Scholar F. Courboulex, J.L. Berenguer, A. Tocheport, M.P. Bouin, E. Calais, Y. Esnault, C. Larroque, G. Nolet, J. Virieux, Sismos à l'Ecole: a worldwide network of real-time seismometers in schools. Seismol. Res. Lett. 83, 870–873 (2012). https://doi.org/10.1785/0220110139 CrossRefGoogle Scholar A.M. Dainty, S. Stein, M.N. Toksöz, Variation in the number of meteoroid impacts on the Moon with lunar phase. Geophys. Res. Lett. 2, 273–276 (1975). https://doi.org/10.1029/GL002i007p00273 CrossRefADSGoogle Scholar I.J. Daubar, A.S. McEwen, S. Byrne, M.R. Kennedy, B. Ivanov, The current martian cratering rate. Icarus 225, 506–516 (2013). https://doi.org/10.1016/j.icarus.2013.04.009 CrossRefADSGoogle Scholar I.J. Daubar, C.M. Dundas, S. Byrne, P. Geissler, G.D. Bart, A.S. McEwen, P.S. Russell, M. Chojnacki, M.P. Golombek, Changes in blast zone albedo patterns around new martian impact craters. Icarus 267, 86–105 (2016). https://doi.org/10.1016/j.icarus.2015.11.032 CrossRefADSGoogle Scholar I.J. Daubar, P. Lognonné, N.A. Teanby, K. Miljkovic, J. Stevanović, J. Vaubaillon, B. Kenda, T. Kawamura, J. Clinton, A. Lucas, M. Drilleau, C. Yana, G.S. Collins, D. Banfield, M. Golombek, S. Kedar, N. Schmerr, R. Garcia, S. Rodriguez, T. Gudkova, S. May, M. Banks, J. Maki, E. Sansom, F. Karakostas, M. Panning, N. Fuji, J. Wookey, M. van Driel, M. Lemmon, V. Ansan, M. Böse, S. Stähler, H. Kanamori, J. Richardson, S. Smrekar, W.B. 
Given a group $ G $, how many topological/Lie group structures does $ G $ have?

Given any abstract group $ G $, how much is known about which types of topological/Lie group structures it might have? Any abstract group $ G $ will have the structure of a discrete topological group (since, generally, any set can be given the discrete topology), but there are groups that have no smooth structure. An example of this from Wikipedia is the group $ \mathbb{Q} $ with the subspace topology inherited from $ \mathbb{R} $. Which groups can occur as Lie groups? Are there specific families of groups that are known to have no smooth structure? Similarly, how much can we know about the possible topologies on an abstract group $ G $? For example, which types of abstract groups admit a nontrivial (i.e., not the usual compact case) group structure? In particular, I am curious about the extent to which properties of the abstract group determine properties of any associated topology/smooth structure. Does anyone have any good references or a succinct answer for this question?

Tags: general-topology, functional-analysis, soft-question, lie-groups, topological-groups. Asked by rondo9.

Comments:
I don't know of a good reference, but one example to keep in mind is that $\mathbb{R}^n$ and $\mathbb{R}$ are isomorphic as groups (at least, if you believe in the axiom of choice). This gives countably many topologies/Lie group structures for $\mathbb{R}$. – Jason DeVito Dec 4 '12 at 1:52
Maybe what I should have asked is what can we say about the possible topological/Lie group structures? In the example, $\mathbb{R}$ has lots of structures, but in some sense they are all pretty tame; $\mathbb{R}^n$ is a fairly boring Lie group. Must every Lie group structure on $\mathbb{R}$ be like this? – rondo9 Dec 4 '12 at 22:19
mathoverflow.net/questions/63636/what-groups-are-lie-groups and mathoverflow.net/questions/62385/… are related – Mariano Suárez-Álvarez Jan 5 '13 at 3:09
@rondo9, abelian Lie groups are direct products of $\mathbb R^n$ and tori, and tori have torsion, so every Lie group structure on $\mathbb R$ must be one of the $\mathbb R^n$. – Mariano Suárez-Álvarez Jan 5 '13 at 3:10
@Mariano: I believe that this result applies only to connected abelian Lie groups. For example, $ \mathbb{R}^{\times} $ is a $ 1 $-dimensional abelian Lie group that is disconnected, so it cannot be a direct product of $ \mathbb{R}^{n} $'s and tori. :) – Haskell Curry Jan 8 '13 at 6:39

Answer:

Let us begin with the following theorem.

Theorem 1 Let $ G $ be a topological group. If $ G $ admits a Lie group structure, then this structure is unique up to diffeomorphism.

Proof: Suppose that $ \mathcal{A}_{1} $ and $ \mathcal{A}_{2} $ are smooth structures (maximal smooth atlases) on $ G $ that make it a Lie group. Observe that the identity map $ \text{id}_{G}: G \to G $ defines a continuous homomorphism of Lie groups from $ (G,\mathcal{A}_{1}) $ to $ (G,\mathcal{A}_{2}) $. A basic fact in the theory of Lie groups is that a continuous homomorphism between Lie groups is actually smooth. Hence, $ \text{id}_{G}: (G,\mathcal{A}_{1}) \to (G,\mathcal{A}_{2}) $ is a smooth mapping between smooth manifolds. As $ \text{id}_{G} $ is its own inverse, $ \text{id}_{G}: (G,\mathcal{A}_{2}) \to (G,\mathcal{A}_{1}) $ is also a smooth mapping between smooth manifolds. Therefore, $ (G,\mathcal{A}_{1}) $ is diffeomorphic to $ (G,\mathcal{A}_{2}) $.
$ \spadesuit $ Theorem 1 says that if we fix a topology on a group $ G $, then there is at most one Lie group structure on $ G $. What happens when we vary the topology on $ G $ will be investigated in Section 3. In this section, we address the issue of which topological groups admit or do not admit Lie group structures. This issue is very much related to Hilbert's Fifth Problem, which, in its original formulation, asks for the minimal hypotheses that one needs to put on a topological group so that it admits a Lie group structure. Once again, if a Lie group structure exists, then its uniqueness is guaranteed according to Section 1. In their attempt to resolve Hilbert's Fifth Problem, Andrew Gleason, Deane Montgomery and Leo Zippin proved the following deep theorem in the 1950's. Theorem 2 (G-M-Z) Let $ G $ be a topological group. If $ G $ is locally Euclidean, then $ G $ admits a Lie group structure. To appreciate the power of this theorem, realize that it says, "A topological group that is merely a topological manifold is, in fact, a Lie group!" A detailed and insightful proof may be found in this set of notes posted by Professor Terence Tao on his research blog. Upon reading his notes, one will notice that a key concept used in the proof is that of a group having no small subgroups. Definition A topological group $ G $ is said to have the No-Small-Subgroup (NSS) Property iff there exists a neighborhood $ U $ of the identity element that does not contain any non-trivial subgroup of $ G $. One can consult this other set of notes by Professor Tao in order to understand the importance of the NSS Property. The Japanese mathematicians Morikuni Gotô and Hidehiko Yamabe were the first ones to formulate this property (in a 1951 paper), which Yamabe then used to recast the G-M-Z solution of Hilbert's Fifth Problem in a manner that reflects the algebro-topological structure of Lie groups more directly. Theorem 3 (Yamabe) Any connected and locally compact topological group $ G $ is the projective limit of a sequence of Lie groups. If $ G $ is locally compact and has the NSS Property, then $ G $ admits a Lie group structure. Theorem 3 implies Theorem 2 because a locally Euclidean group is locally compact (an obvious assertion) and has the NSS Property (a non-trivial assertion). It is far from obvious why both local compactness and the NSS Property should imply locally Euclidean back; indeed, this is the main content of Yamabe's deep theorem. Now, a well-known example of a topological group that does not admit a Lie group structure is the group $ \mathbb{Z}_{p} $ of the $ p $-adic integers with the $ p $-adic topology. It is a completely metrizable and compact topological group, but as it is a totally disconnected space, it cannot admit a Lie group structure for obvious topological reasons. In general, profinite groups (these are topological groups that are compact, Hausdorff and totally disconnected) do not admit Lie group structures. Examples of profinite groups are the discrete finite groups (these are $ 0 $-dimensional manifolds, but we can ignore them as topologically uninteresting), étale fundamental groups of connected affine schemes and Galois groups equipped with the Krull topology (this means that I am referring also to groups that correspond to infinite Galois extensions, not just the finite ones). 
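To make the failure of the NSS Property concrete for $ \mathbb{Z}_{p} $ (a quick supplementary check, using only the standard fact that the subgroups $ p^{n} \mathbb{Z}_{p} $ form a neighborhood basis of $ 0 $ in the $ p $-adic topology):
$$
\mathbb{Z}_{p} \supset p \mathbb{Z}_{p} \supset p^{2} \mathbb{Z}_{p} \supset \cdots, \qquad \bigcap_{n \geq 0} p^{n} \mathbb{Z}_{p} = \{ 0 \},
$$
so every neighborhood $ U $ of the identity contains some non-trivial subgroup $ p^{n} \mathbb{Z}_{p} $. Thus $ \mathbb{Z}_{p} $ fails the NSS Property, and since a locally Euclidean group must have the NSS Property (as noted above), this gives another reason, besides total disconnectedness, why $ \mathbb{Z}_{p} $ cannot admit a Lie group structure.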
In fact, every profinite group is an étale fundamental group in disguise, in the sense that every profinite group is topologically isomorphic to the étale fundamental group of some connected affine scheme. Although the group $ \mathbb{Q}_{p} $ of $ p $-adic numbers is not profinite (it is locally compact, not compact), it is also a totally disconnected space, so it does not admit a Lie group structure. Until now, we have stayed within the realm of locally compact topological groups. The OP has asked about the non-locally compact case, so here is an attempt at a response. The Swedish functional-analyst Per Enflo did his Ph.D thesis on Hilbert's Fifth Problem by investigating to what extent the results of Montgomery and Zippin, formulated only in the finite-dimensional setting, could be carried over to the infinite-dimensional setting. He performed his investigation on topological groups that are modeled on (locally homeomorphic to) infinite-dimensional Banach spaces. The main reason for using infinite-dimensional Banach spaces is due to the following basic theorem from functional analysis. Theorem 4 A Banach space is locally compact iff it is finite-dimensional. Citing unfamiliarity with Enflo's work, we kindly request the reader to consult the references that are provided below. Note: The OP did say that giving references only was okay! :) In this final section, we shall see how to put different topological structures on an abstract group. Toward this end, let us state the following theorem. Theorem 5 Let $ G $ be an abstract group and $ H $ a topological group. For any group homomorphism $ \phi: G \to H $, the pre-image topology on $ G $ induced by $ \phi $ makes $ G $ a topological group. If $ \phi $ is further an isomorphism, then $ G $ with the pre-image topology is topologically isomorphic to $ H $. Proof: The pre-image topology on $ G $ induced by $ \phi $ is defined as the following collection of subsets of $ G $: $$ \{ {\phi^{\leftarrow}}[U] \in \mathcal{P}(G) ~|~ \text{$ U $ is an open subset of $ H $} \}. $$ Pick an open subset $ U $ of $ H $. Then \begin{align} \{ (g_{1},g_{2}) \in G \times G ~|~ g_{1} g_{2} \in {\phi^{\leftarrow}}[U] \} &= \{ (g_{1},g_{2}) \in G \times G ~|~ \phi(g_{1} g_{2}) \in U \} \\ &= \{ (g_{1},g_{2}) \in G \times G ~|~ \phi(g_{1}) \phi(g_{2}) \in U \} \\ &= {(\phi \times \phi)^{\leftarrow}}[\{ (h_{1},h_{2}) \in H \times H ~|~ h_{1} h_{2} \in U \}] \\ &=: {(\phi \times \phi)^{\leftarrow}}[V]. \end{align} Multiplication is continuous in $ H $, so $ V $ is an open subset of $ H \times H $. Therefore, $ {(\phi \times \phi)^{\leftarrow}}[V] $ is an open subset of $ G \times G $ w.r.t. the product pre-image topology. As $ U $ is arbitrary, this implies that group multiplication in $ G $ is indeed continuous w.r.t. the pre-image topology. Next, we have \begin{align} \{ g \in G ~|~ g^{-1} \in {\phi^{\leftarrow}}[U] \} &= \{ g \in G ~|~ \phi(g^{-1}) \in U \} \\ &= \{ g \in G ~|~ [\phi(g)]^{-1} \in U \} \\ &= {\phi^{\leftarrow}}[\{ h \in H ~|~ h^{-1} \in U \}] \\ &=: {\phi^{\leftarrow}}[W]. \end{align} Inversion is continuous in $ H $, so $ W $ is an open subset of $ H $. Therefore, $ {\phi^{\leftarrow}}[W] $ is an open subset of $ G $ w.r.t. the pre-image topology. As $ U $ is arbitrary, this implies that inversion in $ G $ is indeed continuous w.r.t. the pre-image topology. The proof of the final statement is easy enough to be left to the reader. 
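For completeness, one way to fill in the part left to the reader (a brief sketch): the open sets of the pre-image topology are exactly the sets $ {\phi^{\leftarrow}}[U] $ with $ U $ open in $ H $, so $ \phi $ is continuous by construction. If $ \phi $ is moreover a bijection, then $ \phi[{\phi^{\leftarrow}}[U]] = U $ for every open subset $ U $ of $ H $, so $ \phi $ is also an open map. A continuous, open, bijective homomorphism is a topological isomorphism, and hence $ G $ with the pre-image topology is topologically isomorphic to $ H $.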
$ \quad \spadesuit $

For distinct positive integers $ m $ and $ n $, the groups $ \mathbb{R}^{m} $ and $ \mathbb{R}^{n} $ are isomorphic because they are isomorphic as $ \mathbb{Q} $-vector spaces. To prove the second assertion, first use the Axiom of Choice to deduce the existence of Hamel $ \mathbb{Q} $-bases for $ \mathbb{R}^{m} $ and $ \mathbb{R}^{n} $. Then show that a Hamel $ \mathbb{Q} $-basis $ \beta_{m} $ for $ \mathbb{R}^{m} $ and a Hamel $ \mathbb{Q} $-basis $ \beta_{n} $ for $ \mathbb{R}^{n} $ have the same cardinality, namely $ 2^{\aleph_{0}} $. Any bijection (there are uncountably many) from $ \beta_{m} $ to $ \beta_{n} $ now defines a unique vector-space isomorphism from $ \mathbb{R}^{m} $ to $ \mathbb{R}^{n} $. Given this vector-space isomorphism, transfer the standard topology on $ \mathbb{R}^{n} $ to $ \mathbb{R}^{m} $. With the pre-image topology, $ \mathbb{R}^{m} $ is a topological group that is topologically isomorphic to $ \mathbb{R}^{n} $ with the standard topology. It follows from Invariance of Domain (a result in algebraic topology) that the new $ \mathbb{R}^{m} $ is not topologically isomorphic to $ \mathbb{R}^{m} $ with the standard topology.

The OP has asked if there is a non-trivial example involving non-Euclidean spaces. Off-hand, I do not have one in mind, but one can carry out the following procedure, which is in the same spirit as the previous example.
(1) Take two known topological groups, $ G $ and $ H $, with different topological properties.
(2) If one can find a discontinuous group isomorphism $ \phi: G \to H $, use $ \phi $ to transfer the topology on $ H $ to $ G $.
(3) Then $ G $ with the pre-image topology is not topologically isomorphic to $ G $ with the original topology.
(4) If $ H $ further admits a Lie group structure, then this Lie group structure can be transferred to $ G $, where there might have been none before.

References
Montgomery, D.; Zippin, L. Topological Transformation Groups, New York, Interscience Publishers, Inc. (1955).
Gotô, M. Hidehiko Yamabe (1923–1960), Osaka Math. J., Vol. 13, 1 (1961), i–ii.
Yamabe, H. On the Conjecture of Iwasawa and Gleason, Annals of Math., 58 (1953), pp. 48–54.
Yamabe, H. A Generalization of a Theorem of Gleason, Annals of Math., 58 (1953), pp. 351–365.
Enflo, P. Topological Groups in which Multiplication on One Side Is Differentiable or Linear, Math. Scand., 24 (1969), pp. 195–197.
Enflo, P. On the Nonexistence of Uniform Homeomorphisms Between $ L^{p} $ Spaces, Ark. Math., 8 (1969), pp. 103–105.
Enflo, P. On a Problem of Smirnov, Ark. Math., 8 (1969), pp. 107–109.
Enflo, P. Uniform Structures and Square Roots in Topological Groups, I, Israel J. Math., 8 (1970), pp. 230–252.
Enflo, P. Uniform Structures and Square Roots in Topological Groups, II, Israel J. Math., 8 (1970), pp. 253–272.
Magyar, Z. Continuous Linear Representations, Elsevier, 168 (1992), pp. 273–274.
Benyamini, Y.; Lindenstrauss, J. Geometric Nonlinear Functional Analysis, Volume 1, AMS Publ. (1999).

– Haskell Curry

Comments:
Pretty impressive :-) – Mariano Suárez-Álvarez Jan 5 '13 at 3:06
Maybe this is cheating a bit: All infinite-dimensional separable Banach spaces are isomorphic (as groups) to the vector space of dimension continuum. They all are homeomorphic to $\mathbb{R}^\mathbb{N}$ by a theorem of Anderson. They are isomorphic as topological groups if and only if they are isomorphic Banach spaces. Considering the spaces $\ell^p$ with $1 \leq p \lt \infty$ (for which it is easy to show they are homeomorphic), you obtain a continuum of non-isomorphic topological group structures on the same abstract group. – Martin Jan 5 '13 at 5:16
@Martin: That is very true. I also know of a theorem by David Henderson that classifies separable metric manifolds that are modeled on a separable infinite-dimensional Hilbert space, i.e., $ {\ell^{2}}(\mathbb{N}) $. It states that every such manifold must be homeomorphic to an open subset of $ {\ell^{2}}(\mathbb{N}) $. Banach spaces are much wilder, so one is definitely hard-pressed to find results this nice pertaining to Banach manifolds. I am not sure if Enflo ever dealt with manifolds that are modeled on non-separable Banach spaces. – Haskell Curry Jan 5 '13 at 5:45
I think van der Waerden showed that, for compact (semisimple?) Lie groups, any homomorphism between them is automatically continuous (hence smooth), so, if I'm remembering correctly, your step (2) in the last blue box won't work in the case of compact Lie groups. I'll see if I can track down a reference. – Jason DeVito Jan 7 '13 at 16:12
I found a reference: math.uni-muenster.de/u/linus.kramer/publ/34.pdf first paragraph contains what I was thinking of and attributes it to Cartan and van der Waerden. I don't read German, so I didn't pursue the van der Waerden result further, but Cartan's can be found here: gdz.sub.uni-goettingen.de/dms/load/img/… . My French is barely adequate, so it will take me a while to find exactly what I'm looking for in there. – Jason DeVito Jan 7 '13 at 16:18
A methodology framework for bipartite network modeling
Chin Ying Liew, Jane Labadin, Woon Chee Kok & Monday Okpoto Eze
Applied Network Science, volume 8, Article number: 6 (2023)

Graph-theoretic studies employing the bipartite network approach mostly focus on surveying the statistical properties of the structure and behavior of network systems under the domain of complex network analysis. They aim to provide big-picture insights into a networked system by looking into the dynamic interactions and relationships among the vertices. Nonetheless, incorporating the features of individual vertices and capturing the dynamic interaction of the heterogeneous local rules governing each of them is lacking in these studies, and a methodology for achieving this could hardly be found. Consequently, this study proposes a methodology framework that considers the influence of the heterogeneous features of each node on the overall network behavior when modeling a real-world bipartite network system. The proposed framework consists of three main stages, with principal processes detailed in each stage, and three libraries of techniques to guide the modeling activities. It is iterative and process-oriented in nature and allows future network expansion. Two case studies employing this framework, from the domains of communicable disease in epidemiology and habitat suitability in ecology, are also presented. The results obtained suggest that the methodology could serve as a generic framework for advancing the current state of the art of the bipartite network approach.

The bipartite network approach applies network theory, which has its basis in graph theory (Harary 1969). This graph-theoretic network approach commonly focuses on the properties, the structural dynamics, and the relationship between the structure and function of real-world networks such as social networks, transportation systems, collaboration networks, epidemiology, and the Web and Internet structures, which are regarded as emergent fields of network science by Barabási (2013). A bipartite network consists of nodes of two different natures, with links joining only unlike nodes. It is also referred to as an affiliation or two-mode network (Kevork and Kauermann 2022). The heterogeneous nature of the bipartite network makes it a realistic model of real-world systems and applicable across a wide range of research fields, particularly in studies related to science and technology (Valejo et al. 2021). It has been described as capable of providing insightful representations of systems ranging from mutualistic networks in ecology to trade networks in the economy (Saracco et al. 2015). In the well-cited review paper by Newman (2003) on the structure and function of complex networks, a bipartite network is regarded both as a preference network under the category of information or knowledge networks and as a type of network under the social network category, among the four network categories given. Most studies that apply the bipartite graph or bipartite network approach focus on the statistical properties of the structure and behavior of these networked systems under the domain of complex network analysis.
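Before turning to those statistical properties, the two-mode structure just described can be made concrete with a small sketch. The snippet below uses the Python networkx library; the node labels (locations and species) are invented purely for illustration and are not drawn from the case studies.

```python
import networkx as nx
from networkx.algorithms import bipartite

# A small two-mode (bipartite) network: edges join only unlike node types.
B = nx.Graph()
B.add_nodes_from(["loc_A", "loc_B", "loc_C"], bipartite=0)  # first node set, e.g. locations
B.add_nodes_from(["sp_1", "sp_2"], bipartite=1)             # second node set, e.g. species
B.add_edges_from([("loc_A", "sp_1"), ("loc_B", "sp_1"),
                  ("loc_B", "sp_2"), ("loc_C", "sp_2")])

print(bipartite.is_bipartite(B))              # True: no edge joins two like nodes
top = {n for n, d in B.nodes(data=True) if d["bipartite"] == 0}
print(bipartite.density(B, top))              # density of the two-mode network
```

Marking each node with a "bipartite" attribute, as above, is one common convention for keeping track of the two node sets within a single graph object.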
The emphasis is to delve into the properties of networks that discusses features like, but not limited to, the small-world effect, transitivity or clustering, degree distribution of the vertices in the network, characteristics of community within a network, resilience of a network, assortativity of the connection between vertices, network clustering that considers the density of edges among vertices and groups with different clustering structure, and navigation within a network (Baumgartner 2020; Derudder 2021; Ducruet and Beauguitte 2014; Kevork and Kauermann 2022). Complex network analysis has been employed in surveying the relationship between the two types of nodes like different aspects of epidemiological modeling on complex networks (Jin et al. 2016; Zhao et al. 2021), microbes-compound metabolic network (Zhang and Deng 2021), user-object bipartite network in abstracting the selection pattern of web objects (Chandra et al. 2017), the relationship between cyberspace and physical space regarding a grid cyber-physical systems (Huang et al. 2018), hash-tags and users in studying the complex interactions between the semantic content of a debate and the users' interest in the Twitter activity (Gargiulo et al. 2015), and ecological bipartite networks of biotic interaction types within ecological communities (Kaszewska-Gilas et al. 2021; Poisot et al. 2015). Apart from that, the bipartite network approach has been widely applied in the studies of social sciences or social networks. This includes the studies of disease transmission networks (Büttner and Krieter 2020; Hernándex and Risau-Gusman 2013; Rafo et al. 2021), biological system networks (Baumgartner 2020), food-web networks (Michalko et al. 2021), ecological network (Elliott et al. 2021), cognitive network (Vitevitch et al. 2021), and governance-leadership relationship in a development policy network (Rudnick et al. 2019). These studies show that the focus in typical complex network research is on the big-picture-view of the networked system and observation of the interaction or relationship between the vertices of the network. Incorporating the features of individual nodes (which are the local rules governing the individual vertices) and capturing the dynamic interaction of the heterogeneous characteristics of the individual node in a network is scarce. Studies on how these could be done are lacking in the above complex network research. As Newman (2003) has pointed out, predicting the system behavior based on the measured structural properties and local rules governing individual vertices is still in its infancy. As a result, the methodology in bipartite network modeling that incorporates unique features of every individual node in a network is also lacking. Typical modeling processes include understanding or formalizing the research problem to confirm the feasibility of employing an approach in modeling the studied problem before determining the potential variables and assumptions. This is followed by formulation of the intended model incorporating the variables, utilizing an approach that usually comprises iterative processes. Lastly is the process of evaluating the model formulated to validate its usefulness in achieving the intention it is developed. In studies utilizing a novel modeling approach, the last stage usually requires verifying the processes implemented in formulating the model comply with the standard practices of a specific research community. 
Studies employing the graph-theoretic network approach predominantly focus on the network data analyses, which are statistically based. The methodology generally includes formalizing the research problem, setting up the bipartite graph by defining the bipartite nodes and the existence of edges between the unlike node types, abstracting the real-world system and formulating it into a bipartite network model, performing analyses onto the network which focus on network structure, validating the bipartite network model and concluding the real-world system based on the findings obtained from the network analyses (Derudder 2021; Kaszewska-Gilas et al. 2021; Kevork and Kauermann 2022). In brief, the primary steps in typical network approach studies are network abstraction and network analysis (Derudder 2021). The heterogeneity in these studies mainly refers to the heterogeneous nodes (two types of nodes) or the statistical characteristics portrayed by the nodes in the network that are related to the network structure analyses like the node degrees, network connectedness and centralities. When abstracting a real-world system into a bipartite network, the natural features of the nodes should be taken into consideration. These features could be the environmental variables, species specific variables, geographical variables of locations, epidemiological variables, biological variables, depending on the domain and objectives of the studies. These features of real-world phenomena contribute to the way interactions happen between the unlike nodes in a bipartite network, which eventually impact the network structure and its subsequent network statistics. Nevertheless, studies employing the bipartite network approach that incorporate the features of individual node are scarce. O'Sullivan and Manson (2015) stipulate the studies surveying urban systems using network approach are ontologically and epistemologically unique from network studies conducted by physicists, and thus warrant a distinct methodology. Likewise, although the bipartite network approach studies between researchers that do not incorporate the natural features and those that do are methodologically identical to a certain degree, there are discernible ontological and epistemological differences between them. This has resulted in difficulties in employing the methodologies used for typical bipartite network studies. As a result, this study proposes a methodology framework for studies that intend to incorporate features of individual nodes to capture and model the interactions between the unlike nodes, employing a bipartite network approach. It is termed the bipartite network modeling (BNM) framework. Two case studies that fall under the disease transmission networks—mosquito-borne disease hotspot—and ecological networks—habitat suitability of a marine mammal—are presented to show the applicability of the framework. The proposed framework could serve as an alternative to the typical system development life cycle (SDLC) for bipartite network modeling study that intends to incorporate the unique features of the distinct bipartite nodes. 
The contributions of this methodology framework include: a) specifying the need to check for the feasibility of employing a bipartite network approach, to determine the functional definitions of the nodes, links and the overall graph, to resolve the parameterization and the assumptions of the features of the bipartite nodes in the first stage; b) detailing the need to quantify the heterogeneous properties of the bipartite nodes; c) accentuating the necessity to scientifically evaluate many available quantification methods represented as the library of quantification techniques where computational intelligence approaches should be considered because the real data are always dynamic and complex; and d) specifying the network evaluation technique library to show that the verified and validated bipartite network model formulated can then be evaluated using both the typical and also novel network analysis where appropriate. Therefore, the main contribution of this paper is the methodology framework serving as a generic methodology for researchers who intend to employ the bipartite network modeling approach across research areas and domains. It is for studies that aim to capture the heterogeneous nature of features in individual vertices that are contributing to the behavior of the network formed. The rest of this paper is organized as follows: "Bipartite Network Modeling (BNM) Framework" section introduces our proposed BNM methodology framework. "Dengue Hotspot Identification" and "Preferred Habitat of Irrawaddy dolphin (Orcaella brevirostris) at Kuching Bay" sections present the two case studies. In "Discussion" section, discussions are presented for the proposed BNM methodology framework with respect to the case studies presented with elaborations on the possible future works. Lastly, our conclusions are presented in the last section. Bipartite network modeling (BNM) framework The BNM framework depicted in Fig. 1 captures the complete process of modeling. It has three distinct stages. The methodology is iterative and process-oriented in nature. The purpose is to formulate a validated bipartite network model that is able to rank either one or both the nodes, which is the hotspot (entity of interest according to the problem domain). Principal processes are detailed in every stage to guide the modeling activities. These processes are numbered in sequential order based on the stage they belong to. For example, process 2.4 refers to the fourth process of the second stage, the Link Weight Quantification. Every process produces one composite output. The output from one process serves as the input to the following process. The description of every stage and its corresponding processes are presented in the following subsections. Problem characterization In this first stage are three main processes. They are current research scenario understanding, graph structure representation, and bipartite graph formulation. The focus of this stage is to formulate the graph structure for the network model of a research. Current research scenario understanding Denoted as process 1.1 in Fig. 1, the purpose of this process is to gauge the understanding of the current state of the research scenario. It is achieved through a triangulation procedure. They include discussions with the experts or stakeholders of the field, consolidating results from the review of the past literature, and studying research data. 
At the same time, the features or characteristics of the bipartite nodes that are significant in contributing to solving the research problem are identified. These features are the potential variables of the bipartite nodes of a study. The assumptions to be adopted in the study are also identified. The finalized variables and assumptions will be decided at the second stage.

Graph structure representation

The aim of the second process, process 1.2 in Fig. 1, is to identify the basis of the graph structure representation for the network system being studied. It is achieved by setting up the basic building block (Fig. 2), which is the simplest form of a bipartite graph: it consists of two nodes, one from each bipartite node type (node-type-1, U, and node-type-2, V, as seen in Fig. 2), and an edge that joins them. Figure 2 shows that there are n and k features captured as variables for node-type-1 and node-type-2, respectively.

Fig. 2: Representation of the basic building block

How the edge of a bipartite graph is defined depends on the research domain and the problem it is solving. For example, the edge could occur when there exists a virus-vector-host, supplier-manufacturer, manufacturer-contractor, cyberspace domain-host, Twitter user-hashtag, non-volatile-and-volatile wine compound, location-pollutant, plantation-location, or species-habitat relationship. Next, the third process is important in formalizing and defining the bipartite nodes and the link through the information collected from the first process.

Bipartite graph formulation

For the third process, denoted as process 1.3 in Fig. 1, the research data obtained is used to form the bipartite graph using the basic building block formalized in the previous process. The complete bipartite graph representing the research scenario produced at this point is the output of the first stage. In addition, the potential variables of the study identified and the assumptions to be adopted are also pertinent outputs from the first stage. Mathematical and graphical representations of the research problem are thus formulated. The general mathematical expression of a formulated bipartite graph, G, having i number of node-type-1 denoted as U and j number of node-type-2 denoted as V, with k number of edges denoted as E connecting only nodes of U and V, is given in Eq. 1.

Model construction

The second stage comprises five processes: data pre-processing, node-type-1 parameters quantification, node-type-2 parameters quantification, link weight quantification, and search algorithm implementation.

Data Pre-processing

The aim of the first process, denoted as process 2.1 in Fig. 1, is to ensure that the data is complete and balanced, especially when real data is used. The output of this process is data that is ready to be used by the next processes.

Node-Type-1 parameters quantification and Node-type-2 parameters quantification

Both the second and third processes, denoted as processes 2.2 and 2.3 in Fig. 1, involve the parameter quantification of the nodes. Node-type-1 and node-type-2 refer to the respective bipartite nodes of the bipartite graph formulated in the first stage. The potential variables purportedly governing the behavior of a node within the network system have already been identified in the first stage, for instance, the n variables of node-type-1 and the k variables of node-type-2 in Fig. 2. Hence, the focus of these two processes is to identify the techniques to quantify the parameters.
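For reference, the general expression that Eq. 1 stands for in process 1.3 can be sketched in standard set notation as follows (a sketch consistent with the verbal description above; the exact notation of the original equation may differ):

$$G = \left( U, V, E \right), \qquad U = \{ u_{1}, u_{2}, \ldots, u_{i} \}, \qquad V = \{ v_{1}, v_{2}, \ldots, v_{j} \}, \qquad E = \{ e_{1}, e_{2}, \ldots, e_{k} \} \subseteq U \times V$$

Each edge in E joins exactly one node from U to one node from V, so nodes of the same type are never directly connected; the parameters quantified in processes 2.2 and 2.3 attach to the elements of U and V, and the weights computed in process 2.4 attach to the elements of E.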
The means to quantify a parameter depends on, amongst others, the research objectives, the research domain or field, past surveys, the preference of the researcher, and the availability of traditional, novel, or emergent quantification techniques. One notable potential quantification method we wish to highlight here is computational intelligence, which is powerful and promising in tackling complex real-world problems. As there are a huge number of techniques to choose from, our methodology represents this collection of choices through a library. It is termed the quantification techniques library, denoted as a green-color shape in Fig. 1. From this library, the researcher shall consider quantification techniques deemed appropriate through scientifically sound procedures or analyses. Values for the parameters of the bipartite nodes are then generated and computed using these techniques or taken from the research data.

Link weight quantification

The fourth process, denoted as process 2.4 in Fig. 1, intends to identify a quantification technique for determining the link weight. The link that connects the bipartite nodes has already been formalized and defined in stage one. As with their counterparts in the second and third processes of the second stage, the researcher identifies these potential quantification techniques from the review of related work in their respective research domain. These potential techniques are also collectively represented as the quantification techniques library as shown in Fig. 1. As discussed in section two above, there is a lack of studies that compute the edge or link weight by incorporating the distinct parameters of each feature or variable characterizing both node types (captured by their respective variables). Consequently, the quantification of the link weight ought to consider this alongside capturing the complex interactions among nodes of both types, which are given by the edge set (E) in Eq. 1. The finalized quantification technique used in the study requires repeated validation, and verification where needed. Computation of the weight for each link is then executed. The BNM methodology framework presented in Fig. 1 shows that processes 2.2, 2.3 and 2.4 are grouped together, signifying that together they are responsible for producing the complete weighted bipartite network of a study.

Search algorithm implementation

The last process of the second stage uses a search algorithm to determine the ranking of one or both node types. The nodes that are ranked at the top are the hotspots for the study. As revealed in Fig. 1, a green-colored shape named the search algorithm technique library is connected to this process. This library symbolizes that there are many different search algorithms available in the research community. Among them are two well-established and widely used web-based search algorithms: the graph-theoretic based PageRank (Borgatti et al. 2002) and Hypertext Induced Topic Selection (HITS) (Kleinberg 1999), together with their variations, which are extensively applied in influence calculation and network-related research for node ranking purposes (Liao et al. 2020; London and Csendes 2013). On the other hand, a study identifying malaria transmission hotspots reported that web-based HITS search algorithms (Eze et al. 2014) are useful and could be applied in other domains employing a bipartite network modeling approach.
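For readers unfamiliar with HITS, its hub and authority scores are mutually reinforcing and can be summarized by the standard update equations of Kleinberg (1999), written here for a weighted link matrix W whose rows correspond to the node type being ranked (the hubs) and whose columns correspond to the other node type (the authorities); this is a generic sketch rather than the notation of any specific study:

$$a^{(t+1)} = \frac{W^{\top} h^{(t)}}{\lVert W^{\top} h^{(t)} \rVert}, \qquad h^{(t+1)} = \frac{W\, a^{(t+1)}}{\lVert W\, a^{(t+1)} \rVert}$$

Iterating these updates to convergence yields the principal eigenvectors of $W W^{\top}$ (hub scores) and $W^{\top} W$ (authority scores), which is the eigenvector computation referred to later in the case studies. In code, the corresponding power iteration over a weighted bipartite link matrix might look like the following minimal sketch (illustrative only; it is not the original Algorithm 1, and all names are made up):

```python
import numpy as np

def rank_hubs(weight_matrix, tol=1e-10, max_iter=1000):
    """HITS-style power iteration on a weighted bipartite link matrix.

    weight_matrix: rows are the nodes to be ranked (hubs), columns are the
    other node type (authorities); entries are link weights, 0 means no link.
    Returns the converged hub scores and the hub indices ranked best-first.
    """
    W = np.asarray(weight_matrix, dtype=float)
    hubs = np.ones(W.shape[0])
    for _ in range(max_iter):
        auth = W.T @ hubs                 # authority scores from hub scores
        auth /= np.linalg.norm(auth)
        new_hubs = W @ auth               # hub scores from authority scores
        new_hubs /= np.linalg.norm(new_hubs)
        if np.linalg.norm(new_hubs - hubs) < tol:
            hubs = new_hubs
            break
        hubs = new_hubs
    return hubs, np.argsort(-hubs)

# Illustrative run on a hypothetical 3 x 2 link-weight matrix.
scores, ranking = rank_hubs([[0.9, 0.0],
                             [0.4, 0.7],
                             [0.0, 0.2]])
```

Sorting the converged hub scores in descending order gives the ranking index that is used to prioritize the hotspot nodes.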
The researchers stipulated that a web-based HITS search algorithm exhibits the existence of structural similarity among the social network, web graph and the malaria network. Web-based HITS algorithm particularly stands out because the search of HITS involves both authority and hub nodes (Liao et al. 2020) which is equivalent to the bipartite nodes in the bipartite network. Adapting the HITS algorithm with the web graphs and the preference network resulted in a hybrid search engine represented in Algorithm 1 (Eze et al. 2014). Hub refers to either one of the bipartite nodes (Node-Type-1 or Node-Type-2) that is intended to determine its ranking. The search engine model employed is made up of four main sections—the Input, Transformation, Search and Indexing, and the Output—as shown in Fig. 3. The hub and authority matrices refer to the bipartite nodes (Node-Type-1 and Node-Type-2) of the bipartite network system. The Input Section accepts the formulated bipartite network, in the form of two matrices—link matrix (LinkMat) and link weight matrix (ContStrMat)—and the number of nodes for each bipartite node in the Malaria Contact Network (Eze 2013)—public place node (NPub) and human being node (NHum) for the case of malaria network. The Transformation Section houses two generators—the Authority (Auth.) Matrix Generator and the Hub Matrix Generator. Both were used to generate the hub and the authority matrices respectively. The Search and Indexing Section is made up of the Dominance Vector Generator and the Indexer. The result of the operations in this section is the ranking of hubs which are the public places in terms of the malaria vector densities. The Output Section generates the result of the search engine operations, which are the hotspots of malaria transmission. (Source: Eze 2013) The search engine workflow Consequently, the input for this process 2.5 is the complete bipartite network produced from the previous process 2.4. Implementing the selected search algorithm produces ranking indices where one or both bipartite nodes are ranked. The final result of stage two is the bipartite network model with the ranking of the nodes in the surveyed network system. Subsequently, the last stage, stage three, is elaborated next. Model analysis and evaluation This last stage consists of three processes: model verification, model validation and network evaluation analysis. The main goal is to ascertain that the model formulated from the previous stage is verified, validated and evaluated. Model verification and model validation This process is denoted as process 3.1 in Fig. 1. The objective of model verification is to ensure that the research processes of modeling the network system in a study comply with the standard regulation, requirement or specification of a research community (IEEE 2011). To achieve this, the researcher generally uses other analysis systems and analytical methods as a benchmark to verify the implementation processes performed in the study. Conversely, model validation makes sure that the network system modeled meets the objective of the study and fulfills the needs of its stakeholders (IEEE 2011). Typical validation practices in modeling use real data or past survey results, or both to validate the result obtained in stage two. Appropriate error analysis and comparative analysis are to be performed in these two processes to compare the actual model results or performance and the verification or validation results. 
Should the model fail to pass the verification or validation, or both processes, the model needs to be further refined by returning to the earlier stage(s) of the methodology. It marked the iterative nature of the BNM methodology. Network evaluation analysis Upon verifying and validating the bipartite network system modeled, it is passed to the third process of stage three, denoted as process 3.2 in Fig. 1. Network evaluation analysis aims to perform extended evaluations and analyses towards the model formulated. It further gauges the behavior, properties, structure, and function of the abstracted network system being studied. Typical complex network analysis methods, Petri nets methods for directed bipartite graph or network, existing and emergent analytical techniques, visualization tools that are scientifically sound are some examples for these purposes. In view of the numerous ways to analyze a network, they are collectively represented as a network evaluation techniques library in BNM methodology as shown in Fig. 1. The evaluation results obtained further validate and strengthen the research findings and provide auxiliary illustrations and insights for the research findings. The output of this stage, which is the final output of a study, is a verified, validated and evaluated bipartite network model. The BNM methodology framework allows future expansion or extension of a formulated bipartite network model. This enables the researchers to extend their existing model when more data are acquired, implying that more nodes and edges are added to the network model. The framework also allows model expansion whereby additional variable(s) are required to be included. Likewise, the existing model can be modified when researchers intend to achieve another objective using the current model that they have. The researchers could refer to the BNM framework to identify the process (es) that need to be carried out when they want to perform any one of the above expansions. The BNM framework could act as a checklist as well so that proper modeling processes are carried out. In the following sections, two studies employing the bipartite network approach following the BNM framework will be presented. The studies are in the fields of epidemiology and habitat suitability in ecology. The former study investigates the hotspot identification of vector-borne diseases whereas the latter detects the preferred habitat of a marine mammal species. Dengue hotspot identification Hotspot detection of vector-borne diseases such as dengue is pivotal in ensuring the eradication (Aziz et al. 2014) of the disease concerned. Disease hotspots are geospatial areas with a high prevalence or efficient transmission of disease (Lessler et. al. 2017). Public health authority targets the hotspots to eliminate the vector effectively (Nagao et al. 2003; Ritchie and Johnson 2017). Dengue disease, like malaria, is one of the mosquito-borne diseases. The bipartite network approach is used to identify the dengue hotspots (Kok et al. 2018) where hotspots are defined as the public places of mosquito breeding sites. The BNM framework is adopted as the methodology in this study. Problem characterization of dengue hotspot identification This section discusses the process in Stage 1 of the BNM framework (as seen in Fig. 1) which focuses on the formulation of the basic building block and then a graph structure representation of the intended bipartite dengue contact network. 
As demonstrated by Eze (2013), epidemiological studies that relate the interaction among environmental properties, public places and hosts can be visually represented as a graph consisting of three vertices and referred to as the epidemiological triangle (ET). Similarly, as depicted in Fig. 4, Kok et al. (2018 p. 3) use it as the basis for formulating the study on dengue transmission. The three epidemiologic factors in the epidemiological triangle are interdependent and used to identify the basic building block. Epidemiological triangle (ET) As a mosquito-borne disease, the main hosts in this study are mosquitoes and humans. However, the mosquitoes are not characterized as a node in the network model of this study as the mosquitoes are reported to be unable to fly further than 400 m from a particular public place (Eze et al. 2011). Therefore, it is assumed that the public place node houses the mosquito nodes resulting in the consideration of simply the public place. Besides that, public place and environmental properties are two risk factors that are strongly related where the public place component specifies the spatial features of specific environmental properties (Liew 2016). At the same time, environmental properties characterize a specific public place and differentiate a public place from another. Therefore, the public place component can be viewed as a component housing the environmental properties. Hence the ET is further modified, as shown in Fig. 5. Public place is denoted as P, a component of environmental properties denoted as N and the component of host denoted as H. The previous three vertex graph structure (Fig. 4) of ET is modified to a two-vertex graph structure as depicted in Fig. 5 (Kok et al. 2018 p. 3). Basic building block of the bipartite dengue contact network Based on the assumption that the public place component is strongly related to the environmental properties, forming the basic building block of the bipartite network model, as depicted in Fig. 5. The two different nodes consist of human and public place nodes. The component of the host is replaced with the human and denoted as H. There are two vertices, P and H, that imply the attributes for vertex public place (P) and vertex human (H) are different, signifying a bipartite network which is a heterogeneous network of two node types. Consequently, Fig. 5 shows a network structure that consists of the sets of public place nodes (P), human nodes (H) and edges (E). The set of edges is the link between the public place and human nodes. The graph structure is termed Bipartite Dengue Contact (BDC) graph, while the network model is coined as the Bipartite Dengue Contact (BDC) network. Process 1.2 of the BNM framework ensures that a graph is different from a network where a BDC graph is an unweighted bipartite graph for a visual representation of the BDC network. BDC network is a weighted bipartite graph that gives the topological and functional relationship of the bipartite nodes and their respective links (Rayfield et al. 2011). The link weight is a measure of affinity between nodes in the BDC network. To quantify the contact strength values, potential parameters for public places and human nodes are identified as both the nodes are associated with the increased probability of dengue spread. Two parameters are potential to be considered for the human nodes, namely the frequency of a patient visiting a public place (Fh), and the total duration of stay in a public place (Du in second, s). 
The parameters that are attached to public place nodes involve the mosquito characteristic, for instance, life cycle index (Lc), survival rate (S), and biting rate (B); environmental parameters, which includes, total precipitation amount (Pre in meter, m), humidity (K in percentage, %); geographical parameters, for instance, altitude (Al in meter, m); and frequency of a public place visited by humans (Fl). Based on the research data, the link between the nodes formed when humans visited public places. For process 1.3 on Bipartite Graph Formulation in the BNM framework (Fig. 1), the complete BDC graph is formed in this process using the basic building block shown in Fig. 5. From process 1.1, a total of twelve Epidemiological Week (Epi Week 28 to 39) of dengue patients' mobility data are collected. However, only data collected from the first two weeks of Epi Week (Epi Week 28 and 29) is chosen for this paper to demonstrate the formulation process of the BDC network model. There are eight unique individual dengue patients and each of them is given a unique code. These eight unique coded individual dengue patients are consequently identified as the eight human nodes of the BDC network. They are labeled as H1, H2, H3, H4, H5, H6, H7, H8 where H is the symbol used for human nodes and the numerical number 1 and 8 is used to differentiate one human from another. The number of human nodes includes dengue positive and possible positive patients. Based on process 1.1, individuals will be registered when the patient visits the hospital or clinic due to fever, and Immunoglobulin M (IgM) dengue serology test will be conducted to diagnose dengue fever. The possible positive patient in this study represents the patient who has negative IgM dengue serology test results (obtained from the investigation form). It is revealed from the report by the Centers for Disease Control and Prevention (CDC) (2014) that the primary infection shows a slow and low titer antibody response compared to the secondary infection. Dengue IgM serology has low sensitivity during the early phase of dengue fever as the virus and IgM antibodies may be at undetectable levels for those who submit a day five acute specimen (CDC 2014). Therefore, the human mobility of the patient who has a negative IgM serology needs to be considered in this model. As for the public place nodes, they were visited by both human nodes with dengue positive and possibly positive capability to provide a possible new risk public place to the model. Based on the eight human nodes which have been identified, there are a total of 19 public places visited by the eight human hosts. Thus, these public place nodes are labeled as P1, P2, P3, P4, P5, P6, P7, P8, P9, P10, P11, P12, P13, P14, P15, P16, P17, P18 and P19, where P is the symbol used for the public place node and the numerical number 1 until 19 is used to differentiate a public place from another. Each of the public place nodes is labeled with its corresponding latitude and longitude values. Next is the identification of the link which joins the human and public place nodes. The link is formed when H visits P. To trace all links, each unique human host's movement visiting each public place is extracted from the investigation form. The information of links formed concerning each public place and each human can be identified and is revealed in the complete BDC graph for the BDC network presented in Fig. 6. 
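To illustrate how such a contact graph can be assembled and sanity-checked from the extracted visit records, a minimal NetworkX sketch is shown below; the visit list here is hypothetical and much smaller than the actual 8-by-19, 20-edge BDC graph built from the investigation forms:

```python
import networkx as nx

# Hypothetical visit records (human, public place); the real BDC graph
# has 8 human nodes, 19 public place nodes and 20 edges from the forms.
visits = [("H1", "P1"), ("H1", "P2"), ("H2", "P3"), ("H2", "P4"), ("H2", "P7")]

G = nx.Graph()
G.add_nodes_from({h for h, _ in visits}, bipartite="human")
G.add_nodes_from({p for _, p in visits}, bipartite="public_place")
G.add_edges_from(visits)

assert nx.is_bipartite(G)
humans = [n for n, d in G.nodes(data=True) if d["bipartite"] == "human"]
places = [n for n, d in G.nodes(data=True) if d["bipartite"] == "public_place"]

# The sum of degrees on either side equals the number of edges,
# mirroring the degree bookkeeping reported for the full BDC graph.
assert sum(dict(G.degree(humans)).values()) == G.number_of_edges()
assert sum(dict(G.degree(places)).values()) == G.number_of_edges()
```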
BDC graph is the bipartite graph, denoted as BDCDEN_KCH where set H, the human nodes, consists of eight elements, and set P, the set of public place nodes, consists of nineteen elements. Set E, the set of links that join elements of H and P, has 20 elements. This is given in Eq. 2 (Kok et al. 2018 p. 5). Subsequently, the BDC graph in Fig. 6 is a graph of 8H by 19P with 20 edges. The degree of each public place node is {1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}. On the other hand, the degree of each human node is {2, 3, 3, 4, 1, 3, 2, 2}. The sum of the degrees of the public place or human nodes is 20, which is also the total number of edges in the BDC graph. Complete bipartite dengue contact graph for dengue patients Bipartite dengue contact network model construction This section discusses the processes in Stage 2 of the BNM framework (as seen in Fig. 1) which focuses on the formulation of the BDC network model. It explains the quantification of the parameters for the public place and human nodes, and the quantification of links connecting these two types of nodes. The first process (denoted as process 2.1 in the BNM framework) at this stage is to process the data collected from the previous stage. The data obtained is raw and untidy, which requires data pre-processing for the public place and human nodes. This is an essential process to determine the parameters' values attached to or defined for the public place and human nodes in the following processes. Public place node The first step is to change the public place name into Global Positioning System (GPS) coordinates using Google Maps. An algorithm to call a user-defined function is used to calculate the distance of the new incoming public place and the existing public places from the data frame. A new public place node is declared in the database only if the distance between the new public place node and the current public place (in the database) is greater than 400 m because the maximum flight range of the Aedes mosquito is 400 m. Next, all the public places' GPS coordinates of the data frame are passed to a user-defined distance matrix generator where a matrix of geographical distances between public places is generated. Human node The human node refers to the patient's identity and this can be obtained from the investigation form. To protect the patient's confidentiality, patient identity is replaced by an algorithm-generated ID. The BDC network model consists of eight human nodes. Thus, the human nodes are identified as H1, H2, H3, H4, H5, H6, H7, and H8. Parameter values such as temperature, humidity, precipitation, and altitude are collected for node quantification. Pre-processing these parameter values is necessary to prepare the parameter in developing a robust model to quantify the nodes in the following processes. The human mobility data capture the patients' movement two weeks before the onset date. In order to observe the effect of the parameters on the environment, the average temperature two weeks ago at that particular public place needs to be calculated. For instance, the onset date for the first patient (denoted as H1) was 2015–07-09. The human mobility data captured the movement two weeks ago, which is 2015–06-25 until 2015–07-08. The H1 visited P1 on 2015–06-25. Thus, the average temperature 14 days ago, between 2015–06-11and 2015–06-24, is calculated with the Eq. 3. In Eq. 3, the i represents the day before the human mobility date starts and i is a positive integer. 
Thus, i = 14 represents 14 days calculated from the human mobility start date and gradually decreases to i = 1, which is one day before the human mobility date. The variable k in Eq. 3 represents temperature, humidity, precipitation, or altitude. $$Average_{k} = Average\left( {\mathop \sum \limits_{i = 14}^{1} k_{i} } \right)$$ With the pre-processed data, process 2.2 can be activated with the quantification of the parameter values of node-type-1, the public place nodes, specifically on the parameters namely the life cycle parameters, survival rate, biting rate and the frequency of humans visiting a public place. It is established that the mosquito vectors hardly move far away from their breeding sites. Thus, these vector activities taking place in that particular locality will affect the dengue transmission. The activities below have been considered in this study to model dengue transmission. Number of days to complete a mosquito vector life cycle that will affect the dengue transmission rate. Thus, the vector life cycle duration model needs to be constructed. Mosquito survival rate is also affecting dengue transmission. As the higher the survival rate resulted in higher mosquito population, and hence the higher transmission rate. Thus, the vector survival rate model is incorporated into the contact network. Mosquito biting rate affects dengue transmission, as the more frequent the mosquito bites the human host, the higher the probability the dengue spreads. Thus, the vector biting rate model is constructed. Quantification for duration of vector life cycle The duration of the vector life cycle measures the life cycle duration of mosquito from an egg to an adult at every locality in a BDC network. Due to its known dependence on temperature, the life cycle of the Aedes mosquito plays an essential role in understanding the effects of environmental property on dengue transmission (Carrington et al. 2013). The life cycle duration is negatively associated with the temperature. Vector life cycle duration parameter is thus termed the vector life cycle index with symbol Lc. This vector life cycle index, valued between 0 and 1, is defined as the measurement of the life cycle duration of the Aedes mosquito. Development of Lc is presented in Kok et al. (2018). It is a temperature-dependent model formulated as a function of t where t represents temperature, with a degree of 6 as in Eq. 4 (Kok et al. 2018, p. 5). $$Lc = - 0.633t^{6} - 0.786t^{5} + 1.488t^{4} + 1.153t^{3} - 0.408t^{2} - 0.758t - 0.504$$ After the values of Lc for each public place node are computed, the inverse of it, 1/Lc is to be used in the link weight quantification later. This is because the life cycle index is inversely proportional to the dengue transmission rate, where the shorter the time taken for a complete life cycle leads to an increase in the dengue infection rate. Vector survival The survival parameter measures the survival probability at a locality to indicate the vector survival rate at one locality. This parameter is included to account for the importance of dengue transmission as one of the significant contributors to the vector hotspot (Lambrechts et al. 2011). The mosquito survival rate is positively associated with temperature (Rueda et al. 1990; Tsai et al. 2017; Lee and Farlow 2019). Nevertheless, there is no documented source for vector survival data. Quantification of the vector survival parameter is given in Kok et al. (2018). Vector survival parameter is termed the vector survival index with symbol S. 
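Before moving on to the survival index, note that the averaging in Eq. 3 and the life-cycle polynomial in Eq. 4 are straightforward to evaluate; the sketch below uses hypothetical daily readings, and it assumes that the temperature passed to the life-cycle polynomial has been scaled or normalized in the same way as during the curve fitting in Kok et al. (2018), since the fitted coefficients are not intended to be applied to raw degrees Celsius:

```python
import numpy as np

def fourteen_day_average(daily_values):
    """Eq. 3: average of the 14 daily readings preceding the mobility date."""
    values = np.asarray(daily_values, dtype=float)
    assert values.size == 14
    return values.mean()

def life_cycle_index(t_scaled):
    """Eq. 4: degree-6 polynomial for the vector life cycle index Lc.

    t_scaled is assumed to be the temperature after the same normalisation
    used when the polynomial was fitted (an assumption; raw degrees Celsius
    would push the high-order terms far outside the [0, 1] index range).
    """
    coeffs = [-0.633, -0.786, 1.488, 1.153, -0.408, -0.758, -0.504]  # t^6 ... t^0
    return np.polyval(coeffs, t_scaled)

# Hypothetical example: 14 daily temperatures, then the index at a scaled value.
avg_T = fourteen_day_average([26.5, 27.1, 27.8, 26.9, 27.3, 28.0, 27.5,
                              26.8, 27.0, 27.6, 27.9, 27.2, 26.7, 27.4])
lc = life_cycle_index(0.12)   # 0.12 is an illustrative scaled temperature
```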
In this study, generating a vector survival index is the same as the process to generate the life cycle index. The resulting model is given in Eq. (5) (Kok et al. 2018, p.6). It is then used to compute the value of S for each public place node. $$S\left( t \right) = 1.3908t^{6}- 0.2951t^{5}- 3.8642t^{4} + 1.3217t^{3} + 1.2971t^{2}- 0.1412t + 0.591$$ Vector biting One of the crucial activities, like biting, contributes to dengue transmission (Phaijoo and Gurung 2015; Wesolowski et al. 2015). Scott et al. (2000) associated the temperature and blood-feeding frequency of female Ae. Aegypti. This blood-feeding frequency indicates the number of blood meals the mosquito takes, referring to the number of mosquito bites. A linear regression model of the blood-feeding is derived and given in Eq. 6 that represents the total mosquito biting rate per week where T represents the average weekly temperature range from 21℃ to 32℃ in the study area, Thailand. $$B(T ) = 0.03T + 0.66$$ As the unit of biting rate in this study is the daily mosquito biting rate, Eq. 6 is divided by 7 to transform it into a daily biting rate. The modified biting parameter model is given in Eq. 7. Scott et al. (2000) applied the model when the temperature ranged from 21 to 34. If the temperature is out of this range, the biting parameter is a baseline value, 0.8 (Scott et al. 2000). Subsequently, the vector biting index, B of this study is computed using the average temperature of the particular date and public place. $$B\left( T \right) = \left\{ {\begin{array}{*{20}l} {0.004286T + 0.09429, } \hfill & {21^{ \circ } {\text{C}} \le T \le 32^{ \circ } {\text{C}}} \hfill \\ {0.8,} \hfill & {{\text{otherwise}}} \hfill \\ \end{array} } \right.$$ Frequency of public place visited This study includes the number of times the dengue patients visit a public place as one of the parameters for public place nodes in the BDC network to capture the effect of visiting the dengue patients in a public place. This parameter is termed the frequency or number of times one human visits a public place and is denoted as Fl. A link matrix, Link_MattrixBDC Network is created to record the Fl and is defined in Eq. 8. It is used to generate the link matrix for this study. Another parameter, Fh measures the number of times that a human visited a public place and is discussed in the next section. Therefore, Fl is defined as in Eq. 9. $$\begin{gathered} {\text{Fl}}_{i} = \mathop \sum \limits_{j = 1}^{8} \left[ {{\text{Link\_Matrix}}_{{{\text{BDC}}\;{\text{network}}}} \left( {{\text{P}}_{i} {\text{H}}_{j} } \right) \times {\text{Fh}}_{ji} } \right] \hfill \\ {\text{where}}\;i \in \left\{ {1, 2, \ldots , 19} \right\}j \in \left\{ {1, 2, \ldots , 8} \right\} \hfill \\ \end{gathered}$$ Four significant public place parameters namely Lc, S, B and Fli are explained. Lc and S are quantified through polynomial curve fitting with three attributes: latitude (x), longitude (y) and temperature (T). B is quantified using a linear step function with respect to temperature (T) and Fli is quantified through the Eq. 9 defined earlier. Parameters initially decided for the public place node of the BDC network are then further refined. It is crucial to keep the model simple (Barnes and Fulford 2014 p. 3). Thus, latitude, longitude and temperature are excluded from the public place parameter as their effects have been accounted for in the life cycle model, survival model and biting rate. The output of process 2.2 (Fig. 
1) is the seven parameters finalized as the BDC network public place node parameters. They are life cycle index, survival index, biting index, humidity, precipitation, altitude, and number of times a public place is visited by humans. The values of three parameters—Al, Pre and K—are directly obtained from the research data. Table 1 presents the values of all seven parameters for each of the 19 public place nodes in the BDC network dataset. Table 1 Values of the seven parameters of BDC Network Next, process 2.3 begins where two parameters are identified for the human node of a BDC network, namely, time duration of human stay at a public place, Du, and the frequency of a human visiting a public place, Fh. Time duration of stay of human at a public place The total duration of a human stay at a public place across 14 days is recorded in the investigation form. These 14 days are the periods of dengue patients before the first symptoms and the dengue patients' movement within these 14 days is essential in dengue transmission (World Health Organization (WHO) 2012). The duration is recorded in either day, hours or even minutes. The time taken of a human stay at a public place across 14 days is calculated in seconds. Denoted as Duij, the duration for human j visited a public place i across 14 days is calculated using Eq. 10 and the values are given in Table 2. $${\text{Du}}_{{{\text{ij}}}} = {\text{total}}\;{\text{duration}}\;{\text{of}}\;{\text{human}}\;j\;{\text{visited}}\;{\text{public}}\;{\text{place}}\;i\;{\text{in}}\;{\text{seconds}}$$ Table 2 Values of parameters for human nodes for the BDC Network Frequency of human visiting a public place The parameter of the number of times a human visited a public place is represented by the symbol Fh. Human and public places here refer to each of the respective public places and human nodes. This parameter is denoted as Fhij and is defined in Eq. 11 to record how many times has human node j visited public place node i. $$Fh_{ij} = \left\{ {\begin{array}{*{20}l} n \hfill & {{\text{if}}\;{\text{H}}_{j} \;{\text{visited}}\;P_{i} \;n\;{\text{times}}\;{\text{where}}\;n \in Z^{ + } } \hfill \\ 0 \hfill & { {\text{if}}\;{\text{H}}_{j} \;{\text{did}}\;{\text{not}}\;{\text{visit}}\;P_{i} } \hfill \\ \end{array} } \right.$$ The value of n and thus Fhij for every human node to each of the 19 public place nodes is determined by tracking the movement of each unique individual human. The values of Fhij agree with the number of link(s) formed between human node j and public place node i in the BDC graph shown in Fig. 6. The values of Fhij are provided in Table 3. Table 3 Values of Fhij for human nodes for BDC Network Seven parameters are identified and quantified for the public place node and two parameters are determined for the human node. Processes 2.2 and 2.3 in Stage 2 of the BNM framework are completed. Once both distinct node types have been quantified, the link between them can now be quantified and is depicted as process 2.4 in the BNM framework. Twenty link edges are identified from the BDC graph established earlier and the weights of these edges need to be computed. This link weight is termed dengue contact strength (DCS), representing the link affinity between the human and public place nodes. The stronger the strength indicates the greater degree of attachment between the human and the specific public place, which contributes to a higher degree of human contact with the specific public place. 
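Before the contact strengths are quantified, the visit-derived parameters can be tabulated directly from the mobility records. The following minimal sketch (with hypothetical records) covers Fh (Eq. 11), Du (Eq. 10) and the place-side frequency Fl (Eq. 9); summing Fh over all humans visiting a place is equivalent to the link-matrix formulation of Eq. 9 because Fh is non-zero only where a link exists:

```python
from collections import defaultdict

# Hypothetical mobility records: (human node, public place node, seconds of stay).
records = [("H1", "P1", 3600), ("H1", "P2", 7200),
           ("H2", "P1", 1800), ("H2", "P1", 5400), ("H2", "P3", 600)]

Fh = defaultdict(int)   # Eq. 11: times human j visited public place i
Du = defaultdict(int)   # Eq. 10: total duration (s) human j stayed at place i
for human, place, seconds in records:
    Fh[(place, human)] += 1
    Du[(place, human)] += seconds

# Eq. 9: Fl_i, the number of visits a public place received from all humans.
Fl = defaultdict(int)
for (place, human), n_visits in Fh.items():
    Fl[place] += n_visits

# e.g. Fh[('P1', 'H2')] == 2, Du[('P1', 'H2')] == 7200, Fl['P1'] == 3
```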
Eze (2013) introduced a summation rule to compute the contact strength for the edge formed between the bipartite nodes of the malaria transmission network. The quantification technique is explained in Eq. 12. Summation is used because the total of the individual parameters will contribute to a more significant value, indicating a stronger strength. Since individual parameters consist of rational numbers such as 0.7, the product of these rational numbers will contribute a smaller value which indicates a weaker strength. Thus, a summation rule is the most suitable one. DCSij refers to the dengue contact strength of the link formed between public place i and human j. $$\begin{aligned} {\text{DCS}}_{ij} & = \left( {\sum {{\text{PublicPlace}}\_{\text{Node}}\_{\text{Parameters}}_{i} } } \right) + \left( {\sum {{\text{Human}}\_{\text{Node}}\_{\text{Parameters}}_{ij} } } \right) \\ & = (Lc_{i} + S_{i} + B_{i} + Al_{i} + K_{i} + \Pr e_{i} + Fl_{i} ) + \left( {Du_{ij} + Fh_{ij} } \right) \\ \end{aligned}$$ Using the normalized parameter values of Lc, S, B, Al, K, Pre and Fl, the complete DCS is computed. The output of DCS for all links in the BDC graph (Fig. 6) eventually resulted in the BDC network as shown in Fig. 7. Bipartite Dengue Contact (BDC) Network The BDC network represented in the form of a matrix (where the row of this matrix is the location node while the column represents the human node, and the elements of the matrix are the normalized link weight) is the input of process 2.5 in the BNM framework corresponding to the implementation of the search algorithm. Similar to the previous studies that adopt the framework, this study also used the HITS search algorithm that involves the computation of principal eigenvalues and eigenvectors. The implementation of the algorithm involves fours steps namely, the generation of hub and authority matrices; the generation of the corresponding principal eigenvectors; the assignment of nodes' labels according to the principal eigenvectors and the assignment of the dengue hotspot ranking (DHR) values; and finally the generation of the output of the algorithm which is the locations prioritized according to the DHR values (Fig. 3). The higher-ranking location represents the more critical the location is in terms of dengue control intervention. The final stage of the BNM framework is the Model Analysis and Evaluation of the Bipartite Dengue Model. The model is verified via a comparison of the Root-Mean-Square Error (RMSE) made with a benchmark system, that is the UCINET 6 for Windows, a powerful network analysis software (Borgatti et al. 2002). The analytical verification is conducted by calculating the Spearman's Rank Correlation Coefficient (SRCC) between the hub matrix and the DHR values. The validation process was executed by calculating the SRCC between the targeted and validated network. Further analyses like the predictive power analysis, parameter significance analysis and data size analysis were also conducted as reported in Kok et al. (2018, 2019). Preferred Habitat of Irrawaddy dolphin (Orcaella brevirostris) at Kuching Bay Irrawaddy dolphin (ID) (Orcaella brevirostris) is listed under the category and criteria of Endangered A2cd + 3cd + 4cd (version 3.1) where it has been categorized as Vulnerable A4cd (version 3.1) since 2008 by the International Union of Conservation of Nature and Natural Resources (IUCN) Red List of threatened species (version 2017) (Minton et al. 2017). 
However, the sub-population of ID at Kuching Bay, Sarawak, Malaysia is not listed in the databases of the IUCN until the year 2017. No established and consistent scientific survey and research has been conducted on the distribution and abundance of the ID at Kuching Bay (Peter 2012) until the commencement of the Sarawak Dolphin Project (SDP) in 2008 by the Institute of Biodiversity and Environmental Conservation (IBEC) of Universiti Malaysia Sarawak (UNIMAS). The habitat suitability related studies reviewed (Clauzel et al. 2018; Heinonen 2019; Torres et al. 2017) always relate the abundances of a species with the environmental properties. The approaches employed are predominantly statistical, which demand a big data size whereas the deterministic approaches used by population dynamics studies incorporate the aspect of habitat suitability into their modeling effort (Cayuela et al. 2020; Marquez et al. 2021; Nusz et al. 2018). The deterministic approaches are based strongly on established physical or mechanistic laws and require detailed species-specific demographic values. Apart from this, generalization is mostly assumed and incorporation of features from individual habitat location or species, or both could hardly be found in these approaches. The above approaches are not suitable to be applied in this study as the data that the study had is scarce. The reason is the lacking scientific and detailed demographic information about ID at Kuching Bay and suitable physical law to be applied in modeling habitat suitability of a species at the time the study is carried out. Furthermore, this study intends to incorporate the attributes of individual habitat location or species, or both. Nevertheless, the graph-theoretic network modeling approaches is not restricted by these limitations. Problem characterization of preferred habitat In this section, discussion, and justification for the formulation of basic building block as the graph structure representation for the intended bipartite habitat network are presented. Habitat suitability studies reveal that species, location, and environmental properties are the three typical main components in its research structure. These graph-structure-like components show that the location and environmental properties components, and the species' component are of two different natures sharing different attributes. It is termed the Habitat Suitability Triangle (HST) in this study (Fig. 8a). The heterogeneous nature suggests the bipartite network approach could be applied and the BNM framework presented in Fig. 1 could be used to guide the modeling processes. The data this study has are real-world data collected by the SDP team (Peter 2012), which consist of four main sub-datasets. The data record individual ID identified by SDP; the ID re-sight's maps (Peter 2012, Fig. 4.3c and 4.3d, p. 68); the physical and water parameter readings and sighting of ID at each data collection point; and species sighting data at the location point whenever ID are sighted. To triangulate these data that are scarce, imbalanced, and without scientific information, opinions from the experts in the field of animal nutritional ecology, and the researchers of the SDP team are also collected. Three main assumptions adopted in this study include every individual ID at Kuching Bay is free to settle anywhere, and the territoriality and preemption by early settlers do influence settlement of other individual ID (Fletcher Jr. et al. 
2011), is physically fit and possess the ideal capability to assess the quality of all locations available and locate their most preferred habitat (Fletcher Jr. et al. 2011), and possess prior knowledge which optimizes the foraging behaviors of each ID with the least trade-off of meeting predator. (Source: Liew et al. 2015a, p. 268); b Basic Building Block of the Bipartite Network (Source: Liew et al. 2015a, p. 268) a Habitat Suitability Triangle (HST) Since the environmental properties (N) explain the physical characteristics of a location (L) and are thus inseparable, HST is further modified into a two-node graph as depicted in Fig. 8b. It is used to form the basic building block of the network structure of this study. The basic building block consists of two nodes: the dolphin node (D) representing the ID species under study and the location node (L) representing the location with its unique environmental properties (N) enclosed within; and a link (E) that joins the two nodes. The link is formed when a D visits an L. The link weight is termed the Habitat Suitability Strength (HSS). It represents the relationship between the location and dolphin nodes where greater link strength represents stronger affinity between the dolphin and the specific location, which implies higher suitability of this location to function as the preferred habitat. With the basic building block identified (Fig. 8b), the graph representation of the ID habitat network at Kuching Bay is formulated using the first and second sub-datasets comprising 2 km by 2 km grid cells and the unique individual ID. The former is taken as the distinct location nodes for set L whereas the latter as the distinct dolphin nodes for set D. The data of grid cells in the second sub-dataset that shows visitations by different unique individual ID enables identification of the distinct links for set E. Consequently, thirteen 2 km by 2 km grid cells are identified as the thirteen location nodes, and thirteen unique individual ID identified are taken as the thirteen dolphin nodes of the intended bipartite habitat suitability network (BiHSN). Together with the 38 unique links identified between the bipartite nodes, the complete bipartite graph constructed is defined in Eq. (13) (Liew et al. 2015a p. 269) and presented in Fig. 9. (Source: Liew 2016, p. 67) The bipartite habitat suitability graph (Source: Liew et al. 2015a, p. 272) BiHSN with parameter values for location nodes, dolphin nodes and the link weights (Source: Liew et al. 2015a p. 273) Actual Location Nodes at Kuching Bay (overlaid on modified Fig. 3.5 of Peter (2012)) Bipartite habitat suitability network model construction In this section, the execution of five important processes of Stage 2 shown in Fig. 1—data pre-processing, location node parameters quantification, dolphin node parameters quantification, link weight quantification, and search algorithm implementation—are discussed. The data of this study are pre-processed to overcome the missing and faulty values and imbalanced data in the third sub-dataset. The former is achieved through the data interpolation technique (Bavay and Egger 2014) via MATLAB tool for scattered data interpolation on known values of data points. At the same time, the latter is accomplished by applying the under-sampling technique via the systematic random sampling method for Support Vector Machines (SVM), which is the machine learning approach employed in quantifying the location node parameter (Liew et al. 2015c). 
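The two pre-processing steps just described can be sketched as follows; scipy's scattered-data interpolation and a simple systematic random under-sampler are used here purely as stand-ins for the MATLAB interpolation tool and the sampling procedure reported in the study, and all values are illustrative:

```python
import numpy as np
from scipy.interpolate import griddata

def fill_missing(points_known, values_known, points_missing):
    """Estimate missing water-parameter readings from nearby known readings
    via scattered-data interpolation (a stand-in for the MATLAB tool used)."""
    return griddata(points_known, values_known, points_missing, method="linear")

def systematic_undersample(majority_rows, n_keep, seed=0):
    """Systematic random sampling of the majority class so that the SVM
    training set becomes balanced: pick a random start, then every k-th row."""
    rng = np.random.default_rng(seed)
    k = max(1, len(majority_rows) // n_keep)
    start = rng.integers(0, k)
    idx = np.arange(start, len(majority_rows), k)[:n_keep]
    return [majority_rows[i] for i in idx]

# Hypothetical use: interpolate salinity at two unsampled points, then keep
# 2 of 6 majority-class rows to balance a training set.
known_xy = np.array([[1.55, 110.30], [1.60, 110.40], [1.58, 110.50], [1.62, 110.35]])
known_sal = np.array([28.1, 29.4, 30.0, 28.8])
missing_xy = np.array([[1.57, 110.38], [1.59, 110.45]])
estimated = fill_missing(known_xy, known_sal, missing_xy)
balanced = systematic_undersample(list(range(6)), n_keep=2)
```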
The parameters of the location node (node-type-1 in Fig. 1) are determined based on the data available to the study. Twelve parameters are included in this study. They are seawater salinity (S) in Practical Salinity Unit (PSU), acidity (pH), seawater surface temperature (T) in Celsius degree (oC), seawater depth (de) in meter (m), tide height (Ti) in meter (m), water suitability index (W), latitude (x), longitude (y), distance to the river mouth (drm) in meter (m) and land (dl) in meter (m), fisheries (food) availability index (F), and the number of times a location is visited by ID (Fl). Out of these twelve parameters, W, F and Fl need to be quantified while the rest are available in the data. The water suitability index (W) provides the suitability degree measurement of seawater at a location point of the study location. Conversely, the fisheries (food) availability index (F) indicates the availability degree of food for the ID by measuring the possibility of observing fisheries activity at a location point of the study location. This is because the availability of food is a pertinent factor for ID in choosing their preferred habitat. SVM is employed in this study to quantify both W and F through two distinct machine learning classifiers (Liew et al. 2015a). These two SVM classifiers are SVM Water Model and SVM Fisheries (food) Model formulated through LIBSVM (version 3.17) package (Chang and Lin 2011) with Gaussian radial basis function (RBF) kernel function. The probability estimation of sighting an ID or fisheries activity respectively is extracted from the models and used in this study as the indices. The formal model is defined by six attributes: latitude, longitude, depth, temperature, salinity, and sighting of ID whereas the latter by four attributes: latitude, longitude, tide height, and sighting of fisheries activity. As for Fl, it is given by Eq. 14 (Liew et al. 2015a, p. 270) with Fd (dolphin frequency) being a parameter of dolphin node (node-type-2). It intends to rationalize the interaction between the ID and the location through the visitations of ID to a location. Consequently, this study resolves to contain W, F, dl, drm, Ti, pH, and Fl as the parameters for the location node where x, y, S, T and de have already been accounted for in the quantification of W and F. The values for all the parameters of the location nodes are presented in Fig. 10. Likewise, the parameters of the dolphin node (node-type-2 in Fig. 1) are determined based on the data available to the study as well. The two parameters designated for the dolphin node are the number of times a dolphin visited a location (Fd) and the best-estimated number of individual ID in the group of ID sighted at a location (N). The former captures the number of times a dolphin node is linked to a location node, as given in Eq. 15 whereas the latter records the group size of each ID sighting as defined in Eq. 16 (Liew et al. 2015a p. 270). Group size refers to the number of individual ID approximated through standard scientific procedures when a group of ID is sighted (Peter 2012). Subsequently, these are the two parameters finalized for this study. The values for these parameters are presented in Fig. 10. For the link formed between any pair of location and dolphin nodes, its weight is referred as HSS. HSS is quantified by incorporating the parameters of both location and dolphin nodes (Liew et al. 2015a). 
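Before turning to the link weight, the SVM-based indices described above can be illustrated with a compact sketch; scikit-learn's RBF-kernel SVM with probability estimates stands in here for the LIBSVM (version 3.17) models of the study, and the training data below are synthetic and purely illustrative:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training table for the water-suitability classifier.
# Columns: latitude, longitude, depth (m), temperature (C), salinity (PSU);
# label: 1 if an Irrawaddy dolphin was sighted at that sampling point, else 0.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(1.50, 1.65, 40),      # latitude
    rng.uniform(110.3, 110.6, 40),    # longitude
    rng.uniform(1.0, 10.0, 40),       # depth
    rng.uniform(28.0, 31.0, 40),      # temperature
    rng.uniform(25.0, 32.0, 40),      # salinity
])
y = (X[:, 2] < 6.0).astype(int)       # made-up sighting rule for illustration only

# RBF-kernel SVM with probability estimates (a stand-in for the LIBSVM models).
model = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

# The water suitability index W at a new point is read off as the estimated
# probability of the "sighted" class given that point's readings.
new_point = np.array([[1.58, 110.42, 4.2, 29.6, 27.8]])
W_index = model.predict_proba(new_point)[0, list(model.classes_).index(1)]
```

The fisheries (food) availability index F follows the same pattern with its own attribute set and with fisheries-activity sightings as the label.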
An analysis is carried out on the applicability of different quantification techniques for the link weight, and this study resolved to employ the multiplication rule, as denoted by Eq. 17 (Liew et al. 2015a, p. 271). Using this quantification technique, the values of the HSS, ranging between zero and one, are computed and presented in Fig. 10, the complete BiHSN. $$\begin{aligned} {\text{HSS}}_{i:j} & = \left( {\prod {\text{Location}}\_{\text{Node}}\_{\text{Parameters}}_{i} } \right) \times \left( {\prod {\text{Dolphin}}\_{\text{Node}}\_{\text{Parameters}}_{j:i} } \right) \\ & = \left( {W_{i} \times F_{i} \times dl_{i} \times drm_{i} \times Ti_{i} \times pH_{i} \times Fl_{i} } \right) \times \left( {Fd_{j:i} \times N_{j:i} } \right) \\ & \quad {\text{where}}\;i \in \left\{ {1, 2, \ldots , 13} \right\}\;{\text{and}}\;j \in \left\{ {1, 2, \ldots , 13} \right\} \\ \end{aligned}$$ With the quantification of the parameters for the location and dolphin nodes, and of the link that joins them, the preferred habitat of ID at Kuching Bay is then determined by adapting the HITS search algorithm of Eze et al. (2014), as detailed in Algorithm 1. In this study, the power iteration method is implemented with the BiHSN as the searching space and the HSS matrix as the input. The output computed is taken as the ranking index, coined the Habitat Suitability Index (HSI) in this study. It is defined as the suitability degree measurement for the location nodes of the BiHSN, valued between zero and one, where the higher the HSI value of a location node, the more preferred the location node is to the ID. The findings show that L2 ranked top among the location nodes, implying that it is relatively the most preferred habitat of ID (or the ID hotspot) at Kuching Bay. Table 4 and Fig. 11 present the results (ranking of location nodes, HSI of each location node, and the actual locations at Kuching Bay) obtained in this study. Finally, the bipartite habitat suitability network model for ID at Kuching Bay is formulated.
Table 4 Ranking of location nodes
This last stage consists of three processes: model verification, model validation and network evaluation analysis. The purpose is to ascertain that the model formulated in the previous stage is verified, validated and evaluated. Benchmark verification using UCINET 6.0 for Windows (version 6.498) (Borgatti et al. 2002) is carried out in this study for the BiHSN model. It compares the HSI computed by the BiHSN model and by the benchmark system through the computation of the RMSE. The resulting value fulfils the RMSE threshold of no greater than 0.05 set in this study. The BiHSN is then validated by computing the SRCC between the BiHSN results and two other sets of data: a different set of real data and a past survey result; the validation process is reported in Liew and Labadin (2017). As for process 3.3, a few extended analyses are executed to further evaluate the capability of the network modeled. The analyses also assess the effect of the uncertainty faced when real data are used, such as uncertainty in the data size, in the availability of the unique individual dolphin data, and in the availability of data for certain parameters. Finally, the ability of the BiHSN model to distinguish location nodes where ID are sighted from those where they are not, and to predict the preferred habitat of ID when a new set of data is used, is examined. These analyses have reported encouraging results, supporting the relevance of the BiHSN model as an abstraction of the real-world habitat suitability system of ID at Kuching Bay.
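The adapted HITS step and the verification and validation measures described above can be illustrated with the following minimal sketch (an assumption-laden illustration, not the authors' Algorithm 1): power iteration on a locations-by-dolphins HSS matrix yields normalized location scores playing the role of the HSI, after which the RMSE and Spearman's rank correlation coefficient are computed against a benchmark. The matrix values, the benchmark values and the normalization convention are placeholders.

import numpy as np
from scipy.stats import spearmanr

def habitat_suitability_index(W, n_iter=200, tol=1e-12):
    """Power iteration on W[i, j] = HSS between location i and dolphin j."""
    loc = np.ones(W.shape[0])          # location scores -> HSI after convergence
    dol = np.ones(W.shape[1])          # dolphin scores
    for _ in range(n_iter):
        new_loc = W @ dol
        new_dol = W.T @ new_loc
        new_loc /= np.linalg.norm(new_loc)
        new_dol /= np.linalg.norm(new_dol)
        if np.abs(new_loc - loc).max() < tol:
            loc, dol = new_loc, new_dol
            break
        loc, dol = new_loc, new_dol
    # Rescale so that the scores lie between zero and one and sum to one.
    return loc / loc.sum(), dol / dol.sum()

W = np.random.rand(13, 13)             # placeholder HSS matrix (13 locations x 13 dolphins)
hsi, _ = habitat_suitability_index(W)
ranking = np.argsort(-hsi)             # index of the most preferred location first

# Verification and validation checks analogous to those described above.
hsi_benchmark = hsi + np.random.normal(0.0, 0.001, hsi.shape)   # placeholder benchmark output
rmse = np.sqrt(np.mean((hsi - hsi_benchmark) ** 2))             # compare with the 0.05 threshold
rho, p_value = spearmanr(hsi, hsi_benchmark)                    # SRCC validation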
The BNM methodology framework is developed to facilitate the modeling activities in studies that intend to use the bipartite network approach. It is a robust framework where the whole modeling effort is captured within three main stages. The specific processes required by each of the stages are explicitly detailed. As demonstrated in the case studies above, once researchers have identified that it is feasible to employ the bipartite network approach, the BNM framework guides the whole modeling process that follows. The case studies presented above show that the BNM framework is applicable both in epidemiology, in modeling a dengue hotspot, and in ecology, in modeling the preferred habitat of a marine mammal species. In the first stage, which aims to characterize the research problem of a study, the literature review, the opinions of experts in the field, and the research data are the main inputs. These three components collectively provide a comprehensive understanding of the research scenario (process 1.1) one is surveying, including the identification of the potential variables and assumptions for the study. In process 1.2, the graph structure representation of the study is identified. In the above case studies, the authors recognized the ET and the HST, which show how the relationship between the entities being surveyed and modeled (the public places and humans, and the locations and dolphins) can be represented as discrete objects of two different natures that can be joined by an edge (the dengue contact strength and the habitat suitability strength). The graph structure representation is then modified and later evolves into the basic building block for the graph representation of the studies. In process 1.3, the basic building block is used to formulate the complete bipartite graph and its definition using the research data available. The resulting bipartite graphs (Figs. 7 and 9) are the graph structure version of the bipartite networks the studies intend to model. In the second stage, which targets model construction, data pre-processing is designated as the first process. Preparing the research data is pertinent for the next four prominent processes in building the intended model. As shown in the case studies discussed above, issues such as determining public places and handling imbalanced and incomplete data are resolved accordingly. The processed data output is then ready to be used in the next three processes (processes 2.2 to 2.4). The objectives of processes 2.2, 2.3 and 2.4 are to quantify the parameters for the respective bipartite nodes and the link formed between them. The library of quantification techniques attached to these processes guides researchers in analyzing and determining scientifically the appropriate method(s) for the intended purpose. In the above case studies, we have shown how the parameters for the public place and human nodes, and for the location and dolphin nodes, in the BDC network and the BiHSN are quantified by employing the respective traditional mathematical modeling and computational intelligence techniques, as presented in "Bipartite dengue contact network model construction" and "Bipartite habitat suitability network model construction". Different methods have also been adopted to quantify the link weight—DCS and HSS—of the studies. The quantification techniques employed are decided and justified based on the results obtained from the scientific analyses or reviews performed.
Hence, the choice of quantification techniques in the library depends closely on the review of studies in the field, the techniques previously adopted in research employing the bipartite network approach across fields, and other emergent novel techniques, such as the advantages offered by computational intelligence. The output of processes 2.2 to 2.4 is the formation of a bipartite network with values for all the parameters of the bipartite nodes and the weights for all the links that are formed between them. The link weights of the network are then expressed in matrix form and input to process 2.5. A search algorithm is implemented in this process on the link weight matrix, with the bipartite network as the searching space. The library of search algorithm techniques attached to process 2.5 guides the researcher in choosing and scientifically justifying a method to be employed here. The case studies above resolved to employ the adapted HITS algorithm and to produce the ranking indices on which the ranking of the nodes was based. The ranking indices in the above case studies were termed the DHR value and the HSI, and the studies explored their use in seeking the hotspot or preferred habitat of concern. At this point in the methodology, the intended bipartite network models were constructed, termed the BDC Model and the BiHSN Model. As stipulated in the "Bipartite network modeling (BNM) framework" section for process 2.5, the ranking index for the other bipartite node (Node-Type-2) can also be generated, depending on the aim of the study. A further study has been conducted for the second case study, in which leadership in the species (Irrawaddy dolphin) is surveyed (Liew and Labadin 2018). Using the state-of-the-art bipartite-network-based approach and employing the BNM framework, promising results have been obtained and validated. The model formulated in the previous stage needs to go through model analysis and evaluation, which is the third stage of the BNM methodology framework, before it can be accepted as a model by the respective research communities. Processes 3.1 and 3.2 specify the need to verify or validate, or both, the bipartite network model. These analyses have been carried out accordingly in the case studies presented above, where the BDC Model and the BiHSN Model are verified or validated, or both. Lastly, the model is further evaluated using appropriate analysis methods (process 3.3) from the library of network evaluation techniques that is attached to it. In the second case study, seven extended evaluations and analyses have been performed: to inspect the relationship between the properties of the model and the result obtained; to evaluate the effect of uncertainty on the performance of the model; and to evaluate the potential predictive ability of the BiHSN Model. These extended network evaluation analyses, implemented on the results produced by the bipartite network model, further strengthen the justification of the use of the bipartite-network-based approach in a study, as shown above. Apart from that, process 3.3 is where robust complex network analyses can come in to further study the statistical properties of interest in the structure and behavior of the network systems formed. Subsequently, the final output of this stage, which is also the output of the study, is a verified, validated and evaluated bipartite network model. It is represented as the final result in the BNM framework.
The BNM framework has also been employed in modeling rabies (Chia et al. 2021) and COVID-19 (Hong et al. 2021) transmission, where hotspots or sources of infection for the respective diseases are identified using real-world data. On top of this, Hong et al. (2021) managed to determine the 'super spreader' (p. 132) of the disease, thus allowing COVID-19 high-risk groups of people to be identified for better infectious disease management, particularly at the beginning stage of an outbreak. The studies have played a significant role in curbing the deadly and contagious rabies and COVID-19 diseases. Alongside the applicability of the BNM framework in modeling individual-based network systems, the bipartite network approach is relevant even in solving research problems with scarce or limited data. The bipartite habitat suitability model formulated in the second case study is an example of a solved real-life network system with as few as thirteen nodes. Consequently, the same BNM framework is believed to be applicable in any study where the bipartite network approach is deemed feasible. Besides the principal processes, three libraries of techniques (one for the quantification of node parameters and link weights, one for the implementation of search algorithms, and one for the evaluation and analysis of the bipartite network model formulated) are included without specifying the actual technique that should be employed for the corresponding processes. This reflects the uniqueness of the potential techniques applicable in a particular research field or domain. As an example, future work employing BNM in epidemiological disease transmission studies may look into validating the constructed bipartite model against conventional compartmental models such as the Susceptible-Infected-Recovered (SIR) model or any corresponding emergent SIR variants. This shows the emphasis of the BNM framework on keeping researchers abreast of emergent scientific techniques, and the importance of considering both novel and traditional methods. Grey system theory (Liu et al. 2012) is an example of a newly emerged methodology. It shares many similarities with the BNM approach, as both are capable of handling problems with small samples and limited scientific knowledge of uncertain systems, typical characteristics of the natural world. The possibility of incorporating grey system theory into the bipartite network approach is an interesting future research direction worth looking into. The techniques identified should be scientifically justified within the context of the research field to which the study is confined. Besides that, the use of the BNM methodology in studies with larger data sizes and in domains other than epidemiology and ecology is greatly desired in the future. These studies may include, but are not limited to, surveying human mobility, materials such as vitreous metals, social media, and Web and Internet structure. They would strengthen the genericity and scalability of the proposed BNM methodology framework. It is thus suggested that the BNM framework could serve as a generic methodology for the bipartite network approach across research domains and disciplines. In this paper, a generic methodology termed the BNM methodology is developed and proposed for use by researchers who intend to employ the bipartite network modeling approach with the heterogeneous features of the unique individual nodes of a network system incorporated.
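For reference, the compartmental SIR model mentioned above as a possible validation baseline is the three-equation system dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I. The sketch below integrates it numerically with purely illustrative parameter values (an added example, not part of the original study).

import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    # S, I, R as fractions of the population, with S + I + R = 1.
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

beta, gamma = 0.3, 0.1                     # hypothetical transmission / recovery rates
y0 = [0.99, 0.01, 0.0]                     # initial susceptible / infected / recovered fractions
sol = solve_ivp(sir, (0.0, 160.0), y0, args=(beta, gamma),
                t_eval=np.linspace(0.0, 160.0, 161))
peak_day = sol.t[np.argmax(sol.y[1])]      # day of peak prevalence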
The usability of the methodology has been presented in the modeling of dengue hotspot identification and of the habitat suitability of a marine mammal species, both of which capture the features of the distinct individual nodes of the set of bipartite nodes. The BNM framework has the potential to add value to complex network studies, especially when the local rules governing the individual vertices are to be considered. This modeling methodology is believed to be feasible and can be readily extended to studies across research fields where the state-of-the-art bipartite network modeling approach is deemed applicable and appropriate.
The data used in the current study are available from the corresponding author on reasonable request.
Abbreviations
BNM: Bipartite Network Modeling
BDC: Bipartite Dengue Contact
BiHSN: Bipartite Habitat Suitability Network
DCS: Dengue contact strength
DHR: Dengue hotspot ranking
ET: Epidemiological triangle
HITS: Hypertext induced topic selection
HSI: Habitat suitability index
HSS: Habitat suitability strength
HST: Habitat suitability triangle
ID: Irrawaddy dolphin
RMSE: Root-Mean-Square Error
SDP: Sarawak Dolphin Project
SRCC: Spearman's Rank Correlation Coefficient
SVM: Support Vector Machine
Aziz S, Aidil RM, Nisfariza MN, Ngui R, Lim YAL, Yusoff WW, Ruslan R (2014) Spatial density of Aedes distribution in urban areas: a case study of breteau index in Kuala Lumpur, Malaysia. J Vector Borne Dis 51(2):91 Barabási AL (2013) Network science. Phil Trans R Soc A 371:20120375. https://doi.org/10.1098/rsta.2012.0375 Barnes B, Fulford GR (2014) Mathematical modelling with case studies: using maple and MATLAB, 3rd edn. CRC Press, Boca Raton Baumgartner MT (2020) Connectance and nestedness as stabilizing factors in response to pulse disturbances in adaptive antagonistic networks. J Theoret Biol 486:110073. https://doi.org/10.1016/j.jtbi.2019.110073 Bavay M, Egger T (2014) Meteoio 2.4.2: a preprocessing library for meteorological data. Geosci. Model Dev. 7(6):3135–3151 Borgatti SP, Everett MG, Freeman LC (2002) Ucinet 6 for windows: software for social network analysis. Analytic Technologies, Harvard MA. Brin S, Page L (1998) The anatomy of a large-scale hypertextual web search engine. Comput Netw ISDN Syst 30(1):107–117. https://doi.org/10.1016/S0169-7552(98)00110-X Büttner K, Krieter J (2020) Illustration of different disease transmission routes in a pig trade network by monopartite and bipartite representation. Animals 10(6):1071. https://doi.org/10.3390/ani10061071 Carrington LB, Armijos MV, Lambrechts L, Scott TW (2013) Fluctuations at a low mean temperature accelerate dengue virus transmission by Aedes aegypti. PLoS Negl Trop Dis 7(4):e2190 Cayuela H, Griffiths RA, Zakaria N, Arntzen JW, Priol P, Léna JP, Besnard A, Joly P (2020) Drivers of amphibian population dynamics and asynchrony at local and regional scales. J Anim Ecol 89(6):1350–1364. https://doi.org/10.1111/1365-2656.13208 Centers for Disease Control and Prevention (CDC) (2014) Laboratory guidance and diagnostic testing. https://www.cdc.gov/dengue/clinicallab/laboratory.html. Accessed 11 February 2018 Chandra A, Garg H, Maiti A (2017) How fair is your network to new and old objects?: a modeling of object selection in Web based user-object networks. In: Bouguettaya A, Gao Y, Klimenko A, Chen L, Zhang X, Dzerzhinskiy F, Jia W, Klimenko SV, Li Q (eds) Web information systems engineering—WISE 2017. Lecture notes in computer science, vol 10570. Springer, Cham, pp 90–97.
https://doi.org/10.1007/978-3-319-68786-5_7 Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2(3):27. https://doi.org/10.1145/1961189.1961199 Chia DJB, Kok WC, Abdul Taib NA, Hong BH, Abd Majid K, Labadin J (2021) Rabies hotspot detection using bipartite network modelling approach. Trends Undergrad Res 4(1):c52-60. https://doi.org/10.33736/tur.3012.2021 Clauzel C, Jeliazkov A, Mimet A (2018) Coupling a landscape-based approach and graph theory to maximize multispecific connectivity in bird communities. Landsc Urban Plan 179:1–16. https://doi.org/10.1016/j.landurbplan.2018.07.002 Derudder B (2021) Network analysis of 'urban system': potential, challenges, and pitfalls. Tijds Voor Econ En Soc Geog 112(4):404–420. https://doi.org/10.1111/tesg.12392 Ducruet C, Beauguitte L (2014) Spatial science and network science: review and outcomes of a complex relationship. Netw Spat Econ 14:297–316. https://doi.org/10.1007/s11067-013-9222-6 Elliott B, Wilson R, Shapcott A, Keller A, Newis R, Cannizzaro C, Burwell C, Smith T, Leonhardt SD, Kämper W, Wallace HM (2021) Pollen diets and niche overlap of honey bees and native bees in protected areas. Basic Appl Ecol 50:169–180. https://doi.org/10.1016/j.baae.2020.12.002 Eze M, Labadin J, Lim T (2014) Structural convergence of web graph, social network and malaria network: an analytical framework for emerging web-hybrid search engine. Int J Web Eng Technol 9(1):3–29. https://doi.org/10.1504/IJWET.2014.063039 Eze M, Labadin J, Lim T (2011) Contact strength generating algorithm for application in malaria transmission network. In: Proceedings of the 7th international conference on information technology in Asia (CITA 11), IEEE, pp 1–6. Eze MO (2013) Web algorithm search engine based network modeling of Malaria transmission. Ph.D. thesis, Universiti Malaysia Sarawak. Fletcher Jr. RJ, Young JS, Hutto RL, Noson A, Rota CT (2011) Insights from ecological theory on temporal dynamics and species distribution modeling. In: Drew CA, Wiersma YF, Huettmann F (eds) Predictive species and habitat modeling in landscape ecology: concepts and applications, vol XIV, Springer, New York, pp 91–107. https://doi.org/10.1007/978-1-4419-7390-0_6 Gargiulo F, Bindi J, Apolloni A (2015) The topology of a discussion: the #occupy case. PLoS ONE 10(9):0137191. https://doi.org/10.1371/journal.pone.0137191 Harary F (1969) Graph theory. Addison Wesley, MA Heinonen T (2019) Developing landscape connectivity in commercial boreal forests using minimum spanning tree and spatial optimization. Can J for Res 49(10):1198–1206. https://doi.org/10.1139/cjfr-2018-0480 Hernándex DG, Risau-Gusman S (2013) Epidemic thresholds for bipartite networks. Phys Rev E 88(5):052801. https://doi.org/10.1103/PhysRevE.88.052801 Hong BH, Labadin J, King Tiong W, Lim T, Chung MHL (2021) Modelling COVID-19 hotspot using bipartite network approach. Acta Informatica Pragensia 10(2):123–137. https://doi.org/10.18267/j.aip.151 Huang L, Liang Y, Huang F, Wang D (2018) A quantitative analysis model of grid cyber physical systems. Global Energy Interconnect 1(5):618–626. https://doi.org/10.14171/j.2096-5117.gei.2018.05.011 IEEE (2011) IEEE guide—Adoption of the project management institute (PMI®) standard, A guide to the project management body of knowledge (PMBOKR® Guide) (4th ed). Project Management Institute, PA. Jin Z, Li S, Zhang X, Zhang J, Peng XL (2016) Epidemiological modeling on complex network. 
In: Lü J, Yu X, Chen G, Yu W (eds), Complex systems and networks: understanding complex systems, Springer, Berlin, pp 51–77. https://doi.org/10.1007/978-3-662-47824-0_3 Kaszewska-Gilas K, Kosicki JZ, Hromada M, Skoracki M (2021) Global studies of the host-parasite relationships between Ectoparasitic Mites of the family Syringophilidae and birds of the Order Columbiformes. Animals 11(12):3392. https://doi.org/10.3390/ani11123392 Kevork S, Kauermann G (2022) Bipartite exponential random graph models with nodal random effects. Soc Netw 70:90–99. https://doi.org/10.1016/j.socnet.2021.11.002 Kleinberg J (1999) Authoritative sources in a hyperlinked environment. J ACM 46(5):604–632. https://doi.org/10.1145/324133.324140 Kok WC, Labadin J (2019) Validation of bipartite network model of dengue hotspot detection in Sarawak. In: Alfred R, Lim Y, Ibrahim A, Anthony P (eds) Computational science and technology. Lecture notes in electrical engineering, vol 481. Springer, Singapore, pp 335–345. doi:https://doi.org/10.1007/978-981-13-2622-6_33 Kok WC, Labadin J, Perera D (2018) Modeling dengue hotspot with bipartite network approach. In: Alfred R, Iida H, Ag. Ibrahim A, Lim Y (eds) Computational science and technology. ICCST 2017. Lecture notes in electrical engineering, vol 488. Springer, Singapore, pp 220–229. doi:https://doi.org/10.1007/978-981-10-8276-4_21 Lambrechts L, Paaijmans KP, Fansiri T, Carrington LB, Kramer LD, Thomas MB, Scott TW (2011) Impact of daily temperature fluctuations on dengue virus transmission by Aedes aegypti. In: Beaty BJ (ed), Proceedings of the national academy of sciences vol 108, no. 18, pp 7460–7465. https://doi.org/10.1073/pnas.1101377108 Lee JS, Farlow A (2019) The threat of climate change to non-dengue-endemic countries: increasing risk of dengue transmission potential using climate and non-climate datasets. BMC Public Health 19:934. https://doi.org/10.1186/s12889-019-7282-3 Lessler J, Azman AS, McKay HS, Moore SM (2017) What is a hotspot anyway? Am J Trop Med Hyg 96(6):1270–1273. https://doi.org/10.4269/ajtmh.16-0427 Liao Z, Wu Z, Li Y, Zhang Y, Fan X (2020) Core-reviewer recommendation based on Pull Request topic model and collaborator social network. Soft Comput 24:5683–5693. https://doi.org/10.1007/s00500-019-04217-7 Liew CY, Labadin J (2017) Applying bipartite network approach to scarce data: validation of the habitat suitability model of a marine mammal species. J Telecommun Electron Comput Eng 9(3–11):13–16 Liew CY, Labadin J, Wang YC, Tuen AA, Peter C (2015a) Applying bipartite network approach to scarce data: modeling habitat suitability of a marine mammal species. Proc Comput Sci 60:266–275. https://doi.org/10.1016/j.procs.2015.08.126 Liew CY, Labadin J, Wang YC, Tuen AA, Peter C (2015c) Modeling using support vector machines on imbalanced data: a case study on the prediction of the sightings of Irrawaddy dolphins. AIP Conference Proc 1660:050011. https://doi.org/10.1063/1.4915644 Liew CY, Labadin J (2018) Leadership in species: a bipartite-network-based approach. In: Conference proceedings of the international conference on computer and drone applications (IConDA), November 2017, IEEE Xplore, pp 66–70. doi:https://doi.org/10.1109/ICONDA.2017.8270401 Liew CY, Labadin J, Wang YC, Tuen AA, Peter C (2015b) Comparing classification performance of decision trees and support vector machines: a small data scenario. In: Conference proceedings of the 9th international conference on information technology in Asia. Universiti Malaysia Sarawak, Malaysia. 
ISBN: 978-1-4799-9939-2 Liew CY (2016) Bipartite-network-based modeling of habitat suitability. Ph.D thesis, Universiti Malaysia Sarawak Liu S, Forrest J, Yang Y (2012) A brief introduction to grey systems theory. In: Conference proceedings of 2011 IEEE international conference on grey systems and intelligent services, pp 1–9. https://doi.org/10.1109/GSIS.2011.6044018 London A, Csendes T (2013) HITS based network algorithm for evaluating the professional skills of wine tasters. In The 8th IEEE international symposium on applied computational intelligence and informatics, pp 197–200. https://doi.org/10.1109/SACI.2013.6608966 Luz PM, Lima-Camara TN, Bruno RV, Castro MGD, Sorgine MHF, Lourenço-de-Oliveira R, Peixoto AA (2011) Potential impact of a presumed increase in the biting activity of dengue-virus-infected Aedes aegypti (Diptera: Culicidae) females on virus transmission dynamics. Mem Inst Oswaldo Cruz 106(6):755–758. https://doi.org/10.1590/s0074-02762011000600017 Marquez JF, Sæther BE, Aanes S, Engen S, Salthaug A, Lee AM (2021) Age-dependent patterns of spatial autocorrelation in fish populations. Ecology 102(12):e03523. https://doi.org/10.1002/ecy.3523 Michalko R, Košulič O, Martinek P, Birkhofer K (2021) Disturbance by invasive pathogenic fungus alters arthropod predator–prey food-webs in ash plantations. J Anim Ecol 90(9):2213–2226. https://doi.org/10.1111/1365-2656.13537 Minton G, Smith B, Braulik G, Kreb D, Sutaria D, Reeves R. (2017) Orcaella brevirostris (errata version published in 2018). The IUCN Red List of Threatened Species 2017:e.t15419a123790805. http://www.iucnredlist.org/details/15419/0. Accessed 7 August 2018 Nagao Y, Thavara U, Chitnumsup P, Tawatsin A, Chansang C, Campbell-Lendrum D (2003) Climatic and social risk factors for Aedes infestation in rural Thailand. Tropical Med Int Health 8(7):650–659. https://doi.org/10.1046/j.1365-3156.2003.01075.x Newman MEJ (2003) The structure and function of complex networks. SIAM Rev 45(2):167–256. https://doi.org/10.1137/S003614450342480 O'Sullivan D, Manson SM (2015) Do physicists have geography envy? and What can geographers learn from it? Ann Assoc Am Geogr 105(4):704–722. https://doi.org/10.1080/00045608.2015.1039105 Peter C (2012) Distribution patterns, habitat characteristics and population estimates of Irrawaddy Dolphins (Orcaella brevirostris) in Kuching bay, Sarawak. Master's thesis, Universiti Malaysia Sarawak. Phaijoo GR, Gurung DB (2015) Mathematical study of biting rates of mosquitoes in transmission of dengue disease. J Sci Eng Technol 11:25–33 Poisot T, Kéfi S, Morand S, Stanko M, Marquet PA, Hochberg ME (2015) A continuum of specialists and generalists in empirical communities. PLoS ONE. https://doi.org/10.1371/journal.pone.0114674 Rafo MdV, Mauro JPD, Aparicio JP (2021) Disease dynamics and mean field models for clustered networks. J Theoret Biol 526:110554. https://doi.org/10.1016/j.jtbi.2020.110554 Rayfield B, Fortin MJ, Fall A (2011) Connectivity for conservation: a framework to classify network measures. Ecology 92(4):847–858. https://doi.org/10.1890/09-2190.1 Ritchie SA, Johnson BJ (2017) Advances in vector control science: rear-and-release strategies show promise… but don't forget the basics. J Infect Diseases 215:S103–S108. https://doi.org/10.1093/infdis/jiw575 Rudnick J, Niles M, Lubell M, Cramer L (2019) A comparative analysis of governance and leadership in agricultural development policy networks. World Dev 117:112–126. 
https://doi.org/10.1016/j.worlddev.2018.12.015 Rueda LM, Patel KJ, Axtell RC, Stinner RE (1990) Temperature-dependent development and survival rates of Culex quinquefasciatus and Aedes aegypti (Diptera: Culicidae). J Med Entomol 27(5):892–898. https://doi.org/10.1093/jmedent/27.5.892 Saracco F, Clemente RD, Gabrielli A, Squartini T (2015) Randomizing bipartite networks: the case of the World Trade Web. Sci Rep 5:10595. https://doi.org/10.1038/srep10595 Schober P, Boer C, Schwarte LA (2018) Correlation coefficients: appropriate use and interpretation. Anesth Analg 126(5):1763–1768. https://doi.org/10.1213/ANE.0000000000002864 Scott TW, Amerasinghe PH, Morrison AC, Lorenz LH, Clark GG, Strickman D, Kittayapong P, Edman JD (2000) Longitudinal studies of Aedes aegypti (Diptera: Culicidae) in Thailand and Puerto Rico: blood feeding frequency. J Med Entomol 37(1):89–101. https://doi.org/10.1603/0022-2585-37.1.89 Torres RT, Carvalho J, Serrano E, Helmer W, Acevedo P, Fonseca C (2017) Favourableness and connectivity of a Western Iberian landscape for the reintroduction of the iconic Iberian ibex Capra pyrenaica. Oryx 51(4):709–717. https://doi.org/10.1017/S003060531600065X Tsai CH, Chen TH, Lin C, Shu PY, Su CL, Teng HJ (2017) The impact of temperature and Wolbachia infection on vector competence of potential dengue vectors Aedes aegypti and Aedes albopictus in the transmission of dengue virus serotype 1 in southern Taiwan. Parasit Vectors 10:551. https://doi.org/10.1186/s13071-017-2493-x Valejo ADB, de Oliveira dos Santos W, Naldi MC, Zhao L (2021) A review and comparative analysis of coarsening algorithms on bipartite networks. Eur. Phys. J. Spec. Top. 230:2801–2811. https://doi.org/10.1140/epjs/s11734-021-00159-0 Vitevitch MS, Niehorster-Cook L, Niehorster-Cook S (2021) Exploring how phonotactic knowledge can be represented in cognitive networks. Big Data Cogn Comput 5(4):47. https://doi.org/10.3390/bdcc5040047 Wesolowski A, Qureshi T, Boni MF, Sundsøy PR, Johansson MA, Rashed SB, Buckee CO (2015) Impact of human mobility on the emergence of dengue epidemics in Pakistan. In: Singer BH (ed), Proceedings of the national academy of sciences vol 112, no. 38, pp 11887–11892. doi:https://doi.org/10.1073/pnas.1504964112 World Health Organization (WHO) (2012) Rapid risk assessment of acute Public Health Events. WHO, Switzerland Zhang C, Deng L (2021) Microbial community analysis based on bipartite graph clustering of metabolic network. J Phys Conf Ser 1828(1):012092. https://doi.org/10.1088/1742-6596/1828/1/012092 Zhao R, Liu Q, Zhang H (2021) Dynamical behaviors of a vector-borne diseases model with two time delays on bipartite networks. Math Biosci Eng 18(4):3073–3091. https://doi.org/10.3934/mbe.2021154 The authors would like to thank Prof. Dr. Andrew Alek Tuen, Cindy Peter and other SDP research team members for providing the data and expert opinions for the study on preferred habitat of Irrawaddy dolphin at Kuching Bay. MO Eze and WC Kok wish to acknowledge Universiti Malaysia Sarawak for the postgraduate scholarships Zamalah made available for them during their studentship at the university. CY Liew wishes to thank the Ministry of Education Malaysia and Universiti Teknologi MARA (UiTM) for providing scholarship with study leave during her doctorate term of study. Open Access funding provided by Universiti Malaysia Sarawak. 
The authors wish to acknowledge the research funding support from the Ministry of Higher Education, Malaysia, under Fundamental Research Grant Scheme (FRGS) [FRGS/2/10/SG/UNIMAS/02/04] granted to J Labadin and a publication fund granted by the Universiti Malaysia Sarawak. Mathematical Sciences Studies, College of Computing, Informatics and Media, Universiti Teknologi MARA, Sarawak Branch, 94300, Kota Samarahan, Sarawak, Malaysia Chin Ying Liew Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, 94300, Kota Samarahan, Sarawak, Malaysia Jane Labadin & Woon Chee Kok Department of Computer Science, Babcock University, Ilishan-Remo, Ogun State, Nigeria Monday Okpoto Eze Jane Labadin Woon Chee Kok All authors contributed to the conception and design of the study. Introduction was drafted by CY Liew and commented by J Labadin. Conception, design and writing of the bipartite network modeling framework were contributed by CY Liew, MO Eze and J Labadin. Conception, design, data curation and analysis, modeling, and writing of the study on dengue hotspot identification were contributed by WC Kok and J Labadin. Conception, design, data curation and analysis, modeling, and writing of the study on preferred habitat of Irrawaddy dolphin at Kuching Bay were contributed by CY Liew and J Labadin. Conclusion was drafted by CY Liew and commented by J Labadin. All authors read and approved the final manuscript. Funding acquisition was contributed by J Labadin. Correspondence to Jane Labadin. Liew, C.Y., Labadin, J., Kok, W.C. et al. A methodology framework for bipartite network modeling. Appl Netw Sci 8, 6 (2023). https://doi.org/10.1007/s41109-023-00533-y Accepted: 09 January 2023 Individual-based modeling Complex network Habitat suitability Disease modeling
CommonCrawl
Multi-directional and saturated chaotic attractors with many scrolls for fractional dynamical systems DCDS-S Home Memorized relaxation with singular and non-singular memory kernels for basic relaxation of dielectric vis-à-vis Curie-von Schweidler & Kohlrausch relaxation laws March 2020, 13(3): 609-627. doi: 10.3934/dcdss.2020033 Parabolic problem with fractional time derivative with nonlocal and nonsingular Mittag-Leffler kernel Jean Daniel Djida 1,2, , Juan J. Nieto 1, and Iván Area 3,, Departamento de Estatística, Análise Matemática e Optimización, Universidade de Santiago de Compostela, 15782 Santiago de Compostela, Spain African Institute for Mathematical Sciences (AIMS), P.O. Box 608, Limbe Crystal Gardens, South West Region, Cameroon Departamento de Matemática Aplicada Ⅱ, E.E. Aeronáutica e do Espazo, Universidade de Vigo, Campus As Lagoas s/n, 32004 Ourense, Spain * Corresponding author: Iván Area Received April 2018 Revised May 2018 Published March 2019 We prove Hölder regularity results for nonlinear parabolic problem with fractional-time derivative with nonlocal and Mittag-Leffler nonsingular kernel. Existence of weak solutions via approximating solutions is proved. Moreover, Hölder continuity of viscosity solutions is obtained. Keywords: Fractional derivative with Mittag-Leffler function, Hölder continuity, nonlocal diffusion, viscosity solution. Mathematics Subject Classification: 35K55, 26A33, 35D30. Citation: Jean Daniel Djida, Juan J. Nieto, Iván Area. Parabolic problem with fractional time derivative with nonlocal and nonsingular Mittag-Leffler kernel. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 609-627. doi: 10.3934/dcdss.2020033 M. Allen, Hölder regularity for nondivergence nonlocal parabolic equations, Calc. Var. Partial Differential Equations, 57 (2018), Art. 110, 29 pp, arXiv: 1610.10073. doi: 10.1007/s00526-018-1367-1. Google Scholar M. Allen, A nondivergence parabolic problem with a fractional time derivative, Differential Integral Equations, 31 (2018), 215-230. Google Scholar M. Allen, L. Caffarelli and A. Vasseur, A parabolic problem with a fractional time derivative, Arch. Ration. Mech. Anal., 221 (2016), 603-630. doi: 10.1007/s00205-016-0969-z. Google Scholar I. Area, J. D. Djida, J. Losada and J. J. Nieto, On fractional orthonormal polynomials of a discrete variable, Discrete Dyn. Nat. Soc., 2015 (2015), Article ID 141325, 7 pages. doi: 10.1155/2015/141325. Google Scholar A. Atangana and D. Baleanu, New fractional derivatives with non-local and non-singular kernel: theory and application to heat transfer model, Thermal Science, 20 (2016), 763-769. Google Scholar A. Bernardis, F. J. Martín-Reyes, P. R. Stinga and J. L. Torrea, Maximum principles, extension problem and inversion for nonlocal one-sided equations, J. Differential Equations, 260 (2016), 6333-6362. doi: 10.1016/j.jde.2015.12.042. Google Scholar L. Caffarelli, C. H. Chan and A. Vasseur, Regularity theory for parabolic nonlinear integral operators, J. Am. Math. Soc., 24 (2011), 849-869. doi: 10.1090/S0894-0347-2011-00698-X. Google Scholar L. Caffarelli and J. L. Vazquez, Nonlinear porous medium flow with fractional potential pressure, Arch. Rational Mech. Anal., 202 (2011), 537-565. doi: 10.1007/s00205-011-0420-4. Google Scholar F. Ferrari and I. E. Verbitsky, Radial fractional Laplace operators and hessian inequalities, J. Differential Equations, 253 (2012), 244-272. doi: 10.1016/j.jde.2012.03.024. Google Scholar R. Herrmann, Fractional Calculus, World Scientific Publishing Co. Pte. 
Ltd., Hackensack, NJ, 2nd edition, 2014. doi: 10.1142/8934. Google Scholar R. Hilfer, Threefold introduction to fractional derivatives, In R. Klages et al. (eds.), editor, Anomalous Transport, (2008), pages 17–77. Wiley-VCH Verlag GmbH & Co. KGaA, 2008. doi: 10.1002/9783527622979.ch2. Google Scholar M. Kassmann, M. Rang and R. W. Schwab, Integro-differential equations with nonlinear directional dependence, Indiana University Mathematics Journal, 63 (2014), 1467-1498. doi: 10.1512/iumj.2014.63.5394. Google Scholar H. C. Lara and G. Dávila, Regularity for solutions of non local parabolic equations, Calc. Var. Partial Differential Equations, 49 (2014), 139-172. doi: 10.1007/s00526-012-0576-2. Google Scholar [14] K. B. Oldham and J. Spanier, The Fractional Calculus, Academic Press, New York-London, 1974. Google Scholar S. Samko, A. A. Kilbas and O. Marichev, Fractional Integrals and Derivatives, Taylor & Francis, 1993. Google Scholar L. Silvestre, On the differentiability of the solution to the Hamilton-Jacobi equation with critical fractional diffusion, Adv. Math., 226 (2011), 2020-2039. doi: 10.1016/j.aim.2010.09.007. Google Scholar L. Silvestre, Hölder estimates for advection fractional-diffusion equations, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 11 (2012), 843–855, arXiv: 1009.5723. Google Scholar P. R. Stinga and J. L. Torrea, Regularity theory and extension problem for fractional nonlocal parabolic equations and the master equation, SIAM J. Math. Anal., 49 (2017), 3893–3924, arXiv: 1511.01945. doi: 10.1137/16M1104317. Google Scholar R. Zacher, Weak solutions of abstract evolutionary integro-differential equations in Hilbert spaces, Funkcial. Ekvac., 52 (2009), 1-18. doi: 10.1619/fesi.52.1. Google Scholar Mehmet Yavuz, Necati Özdemir. Comparing the new fractional derivative operators involving exponential and Mittag-Leffler kernel. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 995-1006. doi: 10.3934/dcdss.2020058 Ndolane Sene. Mittag-Leffler input stability of fractional differential equations and its applications. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 867-880. doi: 10.3934/dcdss.2020050 Ebenezer Bonyah, Samuel Kwesi Asiedu. Analysis of a Lymphatic filariasis-schistosomiasis coinfection with public health dynamics: Model obtained through Mittag-Leffler function. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 519-537. doi: 10.3934/dcdss.2020029 Raziye Mert, Thabet Abdeljawad, Allan Peterson. A Sturm-Liouville approach for continuous and discrete Mittag-Leffler kernel fractional operators. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020171 Antonio Coronel-Escamilla, José Francisco Gómez-Aguilar. A novel predictor-corrector scheme for solving variable-order fractional delay differential equations involving operators with Mittag-Leffler kernel. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 561-574. doi: 10.3934/dcdss.2020031 Francesco Mainardi. On some properties of the Mittag-Leffler function $\mathbf{E_\alpha(-t^\alpha)}$, completely monotone for $\mathbf{t> 0}$ with $\mathbf{0<\alpha<1}$. Discrete & Continuous Dynamical Systems - B, 2014, 19 (7) : 2267-2278. doi: 10.3934/dcdsb.2014.19.2267 Zaiyun Peng, Xinmin Yang, Kok Lay Teo. On the Hölder continuity of approximate solution mappings to parametric weak generalized Ky Fan Inequality. Journal of Industrial & Management Optimization, 2015, 11 (2) : 549-562. doi: 10.3934/jimo.2015.11.549 Andrey Kochergin. 
A Besicovitch cylindrical transformation with Hölder function. Electronic Research Announcements, 2015, 22: 87-91. doi: 10.3934/era.2015.22.87 Susanna Terracini, Gianmaria Verzini, Alessandro Zilio. Uniform Hölder regularity with small exponent in competition-fractional diffusion systems. Discrete & Continuous Dynamical Systems - A, 2014, 34 (6) : 2669-2691. doi: 10.3934/dcds.2014.34.2669 Atsushi Kawamoto. Hölder stability estimate in an inverse source problem for a first and half order time fractional diffusion equation. Inverse Problems & Imaging, 2018, 12 (2) : 315-330. doi: 10.3934/ipi.2018014 Samia Challal, Abdeslem Lyaghfouri. Hölder continuity of solutions to the $A$-Laplace equation involving measures. Communications on Pure & Applied Analysis, 2009, 8 (5) : 1577-1583. doi: 10.3934/cpaa.2009.8.1577 Lili Li, Chunrong Chen. Nonlinear scalarization with applications to Hölder continuity of approximate solutions. Numerical Algebra, Control & Optimization, 2014, 4 (4) : 295-307. doi: 10.3934/naco.2014.4.295 Ndolane Sene. Fractional diffusion equation described by the Atangana-Baleanu fractional derivative and its approximate solution. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020173 Łukasz Struski, Jacek Tabor. Expansivity implies existence of Hölder continuous Lyapunov function. Discrete & Continuous Dynamical Systems - B, 2017, 22 (9) : 3575-3589. doi: 10.3934/dcdsb.2017180 Lucio Boccardo, Alessio Porretta. Uniqueness for elliptic problems with Hölder--type dependence on the solution. Communications on Pure & Applied Analysis, 2013, 12 (4) : 1569-1585. doi: 10.3934/cpaa.2013.12.1569 Luis Silvestre. Hölder continuity for integro-differential parabolic equations with polynomial growth respect to the gradient. Discrete & Continuous Dynamical Systems - A, 2010, 28 (3) : 1069-1081. doi: 10.3934/dcds.2010.28.1069 Kyudong Choi. Persistence of Hölder continuity for non-local integro-differential equations. Discrete & Continuous Dynamical Systems - A, 2013, 33 (5) : 1741-1771. doi: 10.3934/dcds.2013.33.1741 Nguyen Huy Tuan, Donal O'Regan, Tran Bao Ngoc. Continuity with respect to fractional order of the time fractional diffusion-wave equation. Evolution Equations & Control Theory, 2019, 0 (0) : 0-0. doi: 10.3934/eect.2020033 Boris Muha. A note on the Trace Theorem for domains which are locally subgraph of a Hölder continuous function. Networks & Heterogeneous Media, 2014, 9 (1) : 191-196. doi: 10.3934/nhm.2014.9.191 Charles Pugh, Michael Shub, Amie Wilkinson. Hölder foliations, revisited. Journal of Modern Dynamics, 2012, 6 (1) : 79-120. doi: 10.3934/jmd.2012.6.79 Jean Daniel Djida Juan J. Nieto Iván Area
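For context on the article abstracted above, the following recalls, as a sketch, the Mittag-Leffler function and the Atangana–Baleanu fractional derivative with nonlocal, nonsingular Mittag-Leffler kernel cited in its reference list (Atangana and Baleanu 2016); the normalization factor $B(\alpha)$, with $B(0)=B(1)=1$, and the exact conventions may differ from those used in the paper:
$$E_{\alpha}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+1)},\qquad \alpha>0,$$
$$\left({}^{ABC}_{\;\;\,a}D^{\alpha}_{t}f\right)(t)=\frac{B(\alpha)}{1-\alpha}\int_{a}^{t}f'(s)\,E_{\alpha}\!\left(-\frac{\alpha}{1-\alpha}(t-s)^{\alpha}\right)ds,\qquad 0<\alpha<1.$$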
CommonCrawl
Abelian variety From Encyclopedia of Mathematics 2010 Mathematics Subject Classification: Primary: 14-XX [MSN][ZBL] An Abelian variety is an algebraic group that is a complete algebraic variety. The completeness condition implies severe restrictions on an Abelian variety. Thus, an Abelian variety can be imbedded as a closed subvariety in a projective space; each rational mapping of a non-singular variety into an Abelian variety is regular; the group law on an Abelian variety is commutative. The theory of Abelian varieties over the field of complex numbers $\C$ is, in essence, equivalent to the theory of Abelian functions founded by C.G.J. Jacobi, N.H. Abel and B. Riemann. If $\C^n$ denotes $n$-dimension vector space, $\Gamma\subset\C^n$ is a lattice (cf. Discrete subgroup) of rank $2n$, then the quotient group $X=\C^n/\Gamma$ is a complex torus. Meromorphic functions on $X$ are the same thing as meromorphic functions on $\C^n$ that are invariant with respect to the period lattice $\Gamma$. If the field $K$ of meromorphic functions on $X$ has transcendence degree $n$, then $X$ can be given the structure of an algebraic group. This structure is unique by virtue of the compactness of $X$, and it is such that the field of rational functions of this structure coincides with $K$. The algebraic groups formed in this way are Abelian varieties, and each Abelian variety over the field $\C$ arises in this way. The matrix which defines a basis of $\Gamma$ can be reduced to the form $(E|Z)$, where $E$ is the identity matrix and $Z$ is a matrix of order $n\times n$. The complex torus $X=\C^n/\Gamma$ is an Abelian variety if and only if $Z$ is symmetric and has positive-definite imaginary part. It should be pointed out that, as real Lie groups, all varieties $X$ are isomorphic, but this is not true for their analytic or algebraic structures, which vary strongly when deforming the lattice $\Gamma$. Inspection of the period matrix $Z$ shows that its variation has an analytic character, which results in the construction of the moduli variety of all Abelian varieties of given dimension $n$. The dimension of the moduli variety is $n(n+1)/2$ (cf. Moduli problem). The theory of Abelian varieties over an arbitrary field $k$ is due to A. Weil [We], [We2]. It has numerous applications both in algebraic geometry itself and in other fields of mathematics, particularly in number theory and in the theory of automorphic functions. To each complete algebraic variety, Abelian varieties (cf. Albanese variety; Picard variety; Intermediate Jacobian) can be functorially assigned. These constructions are powerful tools in studying the geometric structures of algebraic varieties. E.g., they were used to obtain one of the solutions of the Lüroth problem. Another application is the proof of the Riemann hypothesis for algebraic curves over a finite field — the problem for which the abstract theory of Abelian varieties was originally developed. It was also one of the sources of $l$-adic cohomology. The simplest example of such a cohomology is the Tate module of an Abelian variety. It is the projective limit, as $n\to\infty$, of the groups $X[l^n]$ of points of order $l^n$. The determination of the structure of such groups was one of the principal achievements of the theory of Weil. In fact, if $m$ is coprime with the characteristic $p$ of the field $k$ and if $k$ is algebraically closed, then the group $X[m]$ is isomorphic to $(\Z/mZ)^{2\dim X}$. 
If $m=p$, the situation is more complicated, which resulted in the appearance of concepts such as finite group schemes, formal groups and $p$-divisible groups (cf. Finite group scheme; Formal group; $p$-divisible group). The study of the action of endomorphisms of Abelian varieties, in particular of the Frobenius endomorphism on its Tate module, makes it possible to give a proof of the Riemann hypothesis (for algebraic curves over finite fields, cf. Riemann hypotheses) and is also the principal instrument in the theory of complex multiplication of Abelian varieties. Another circle of problems connected with the Tate module consists of a study of the action of the Galois group of the closure of the ground field on this module. There resulted the Tate conjectures and the theory of Tate–Honda, which describes Abelian varieties over finite fields in terms of the Tate module [Mu]. The study of Abelian varieties over local fields, including $p$-adic fields, is proceeding at a fast rate. An analogue of the above-mentioned representation of Abelian varieties as a quotient space $\C^n/\Gamma$, usually known as uniformization, over such fields, was constructed by D. Mumford and M. Raynaud. Unlike the complex case, not all Abelian varieties, but only those having a reduction to a multiplicative group modulo $p$, are uniformizable [Ma]. The theory of Abelian varieties over global (number and function) fields plays an important role in Diophantine geometry. Its principal result is the Mordell–Weil theorem: The group of rational points of an Abelian variety, defined over a finite extension of the field of rational numbers, is finitely generated. For recent information on the Tate conjectures see [Fa]. For the theory of Tate–Honda see also [Ta]. Mumford's theory of uniformization is developed in [Mu2], [Mu3]. [Fa] G. Faltings, "Endlichkeitssätze für abelsche Varietäten über Zahlkörpern" Invent. Math., 73 (1983) pp. 349–366 ((Errratum: Invent. Math. 75 (1984), p. 381)) MR0718935 MR0732554 Zbl 0588.14026 [La] S. Lang, "Abelian varieties", Springer (1983) MR0713430 Zbl 0516.14031 [Ma] Yu.I. Manin, "p-Adic automorphic functions" J. Soviet Math., 5 : 3 (1976) pp. 279–333 Itogi Nauk. i Tekhn. Sovrem. Problemy, 3 (1974) pp. 5–93 Zbl 0375.14007 MR0422161 [Mu] D. Mumford, "Abelian varieties", Oxford Univ. Press (1974) MR0282985 Zbl 0326.14012 [Mu2] D. Mumford, "An analytic construction of degenerating curves over complete local rings" Compos. Math., 24 (1972) pp. 129–174 MR0352105 Zbl 0243.14010 Zbl 0228.14011 [Mu3] D. Mumford, "An analytic construction of degenerating abelian varieties over complete rings" Compos. Math., 24 (1972) pp. 239–272 MR0352106 Zbl 0241.14020 [Se] J.-P. Serre, "Groupes algébrique et corps des classes", Hermann (1959) MR0103191 [Si] C.L. Siegel, "Automorphe Funktionen in mehrerer Variablen", Math. Inst. Göttingen (1955) [Ta] J.T. Tate, "Classes d'isogénie des variétés abéliennes sur un corps fini (d' après T. Honda)", Sem. Bourbaki Exp. 352, Lect. notes in math., 179, Springer (1971) [We] A. Weil, "Variétés abéliennes et courbes algébriques", Hermann (1971) MR0029522 Zbl 0208.49202 [We2] A. Weil, "Courbes algébriques et variétés abéliennes. Sur les courbes algébriques et les varietés qui s'en deduisent", Hermann (1948) MR0029522 [We3] A. Weil, "Introduction à l'étude des variétés kahlériennes", Hermann (1958) MR0111056 Zbl 0137.41103 Abelian variety. Encyclopedia of Mathematics. 
URL: http://encyclopediaofmath.org/index.php?title=Abelian_variety&oldid=21543 This article was adapted from an original article by B.B. Venkov, A.N. Parshin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
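As an added illustration of the period-matrix criterion stated earlier in this article (an example supplied here, not part of the original text): for $n=1$ the symmetry condition on $Z$ is vacuous, and the criterion reduces to the classical description of elliptic curves. Writing the lattice as $\Gamma=\Z+\tau\Z$ with $\operatorname{Im}\tau>0$, the period matrix is $(1|\tau)$ and the complex torus $$X=\C/(\Z+\tau\Z)$$ is always an Abelian variety, namely an elliptic curve. Already for $n\ge 2$ a generic lattice $\Gamma\subset\C^n$ yields a complex torus admitting no non-constant meromorphic functions, so the conditions on $Z$ are genuinely restrictive.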
CommonCrawl
Chekhlov, Andrey Rostislavovich Statistics Math-Net.Ru Total publications: 54 Scientific articles: 49 This page: 6261 Abstract pages: 15689 Full texts: 2365 References: 1489 Doctor of physico-mathematical sciences Keywords: algebraically compact groups, fully tansitive groups, pure subgroups. UDC: 512.541, 512.553, 51(091), 512.552 MSC: 20K20, 20K15, 20K30, 20K27, 16S50 Abelian groups and modules. Main publications: A. R. Chekhlov, "Kvaziservantno in'ektivnye gruppy bez krucheniya s nerazlozhimymi servantnymi podgruppami", Matem. zametki, 68:4 (2000), 587–592 A. R. Chekhlov, "Ob odnom klasse endotranzitivnykh grupp", Matem. zametki, 69:6 (2001), 944–949 A. R. Chekhlov, "O razlozhimykh vpolne tranzitivnykh gruppakh bez krucheniya", Sibirskii matem. zhurnal, 42:3 (2001), 714–719 http://www.mathnet.ru/eng/person18946 List of publications on Google Scholar http://zbmath.org/authors/?q=ai:chekhlov.andrey-r https://mathscinet.ams.org/mathscinet/MRAuthorID/226783 http://elibrary.ru/author_items.asp?spin=6802-3961 http://www.researcherid.com/rid/8772-2014 http://www.scopus.com/authid/detail.url?authorId=8933512100 Publications in Math-Net.Ru 1. A. R. Chekhlov, "On fully idempotent homomorphisms of abelian groups", Sibirsk. Mat. Zh., 60:4 (2019), 932–940 ; Siberian Math. J., 60:4 (2019), 727–733 2. A. R. Chekhlov, "Abelian groups with monomorphisms invariant with respect to epimorphisms", Izv. Vyssh. Uchebn. Zaved. Mat., 2018, 12, 86–93 3. A. R. Chekhlov, "Homomorphically Stable Abelian Groups", Mat. Zametki, 103:4 (2018), 609–616 ; Math. Notes, 103:4 (2018), 649–655 4. A. R. Chekhlov, "Abelian groups with annihilator ideals of endomorphism rings", Sibirsk. Mat. Zh., 59:2 (2018), 461–467 ; Siberian Math. J., 59:2 (2018), 363–367 5. A. R. Chekhlov, "On Strongly Invariant Subgroups of Abelian Groups", Mat. Zametki, 102:1 (2017), 125–132 ; Math. Notes, 102:1 (2017), 106–110 6. A. R. Chekhlov, "On Fully Inert Subgroups of Completely Decomposable Groups", Mat. Zametki, 101:2 (2017), 302–312 ; Math. Notes, 101:2 (2017), 365–373 7. A. R. Chekhlov, "Intermediately fully invariant subgroups of abelian groups", Sibirsk. Mat. Zh., 58:5 (2017), 1170–1180 ; Siberian Math. J., 58:5 (2017), 907–914 8. A. R. Chekhlov, "On fully quasitransitive abelian groups", Sibirsk. Mat. Zh., 57:5 (2016), 1184–1192 ; Siberian Math. J., 57:5 (2016), 929–934 9. A. R. Chekhlov, "Fully inert subgroups of completely decomposable finite rank groups and their commensurability", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2016, 3(41), 42–50 10. A. R. Chekhlov, "On Abelian groups with commutative commutators of endomorphisms", Fundam. Prikl. Mat., 20:5 (2015), 227–233 ; J. Math. Sci., 230:3 (2018), 502–506 11. A. R. Chekhlov, "On a Direct Sum of Irreducible Groups", Mat. Zametki, 97:5 (2015), 798–800 ; Math. Notes, 97:5 (2015), 815–817 12. A. R. Chekhlov, "On abelian groups with right-invariant isometries", Sibirsk. Mat. Zh., 55:3 (2014), 701–705 ; Siberian Math. J., 55:3 (2014), 574–577 13. A. R. Chekhlov, "Torsion-Free Weakly Transitive $E$-Engel Abelian Groups", Mat. Zametki, 94:4 (2013), 620–627 ; Math. Notes, 94:4 (2013), 583–589 14. A. R. Chekhlov, "On abelian groups with commuting monomorphisms", Sibirsk. Mat. Zh., 54:5 (2013), 1182–1187 ; Siberian Math. J., 54:5 (2013), 946–950 15. A. R. Chekhlov, Ml. V. Agafontseva, "On abelian groups with central squares of commutators of endomorphisms", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2013, 4(24), 54–59 16. A. R. Chekhlov, "On direct sums of cyclic groups with invariant monomorphisms", Vestn. 
Tomsk. Gos. Univ. Mat. Mekh., 2013, 3(23), 60–65 17. A. R. Chekhlov, "On Abelian groups close to $E$-solvable groups", Fundam. Prikl. Mat., 17:8 (2012), 183–219 ; J. Math. Sci., 197:5 (2014), 708–733 18. A. R. Chekhlov, "Abelian groups with nilpotent commutators of endomorphisms", Izv. Vyssh. Uchebn. Zaved. Mat., 2012, 10, 60–73 ; Russian Math. (Iz. VUZ), 56:10 (2012), 50–61 19. A. R. Chekhlov, "On Some Classes of Nilgroups", Mat. Zametki, 91:2 (2012), 297–304 ; Math. Notes, 91:2 (2012), 283–289 20. A. R. Chekhlov, "On projectively soluble abelian groups", Sibirsk. Mat. Zh., 53:5 (2012), 1157–1165 ; Siberian Math. J., 53:5 (2012), 927–933 21. A. R. Chekhlov, "On the projective commutant of abelian groups", Sibirsk. Mat. Zh., 53:2 (2012), 451–464 ; Siberian Math. J., 53:2 (2012), 361–370 22. A. R. Chekhlov, "E-engelian abelian groups of step $\le2$", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2012, 1(17), 54–60 23. A. R. Chekhlov, "On the Lie bracket of endomorphisms of Abelian groups, 2", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2011, 1(13), 55–60 24. A. R. Chekhlov, "$E$-solvable modules", Fundam. Prikl. Mat., 16:7 (2010), 221–236 ; J. Math. Sci., 183:3 (2012), 424–434 25. A. R. Chekhlov, "Commutator invariant subgroups of abelian groups", Sibirsk. Mat. Zh., 51:5 (2010), 1163–1174 ; Siberian Math. J., 51:5 (2010), 926–934 26. A. R. Chekhlov, "Some examples of E-solvable groups", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2010, 3(11), 69–76 27. A. R. Chekhlov, "E-nilpotent and E-solvable abelian groups of class 2", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2010, 1(9), 59–71 28. A. R. Chekhlov, "On $p$-rank 1 nil groups", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2010, 1(9), 53–58 29. A. R. Chekhlov, "Abelian groups with normal endomorphism rings", Algebra Logika, 48:4 (2009), 520–539 ; Algebra and Logic, 48:4 (2009), 298–308 30. A. R. Chekhlov, "Separable and vector groups whose projectively invariant subgroups are fully invariant", Sibirsk. Mat. Zh., 50:4 (2009), 942–953 ; Siberian Math. J., 50:4 (2009), 748–756 31. A. R. Chekhlov, "On abelian groups, in which all subgroups are ideals", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2009, 3(7), 64–67 32. A. R. Chekhlov, "On properties of centrally invariant and commutatorically invariant subgroups of abelian groups", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2009, 2(6), 85–99 33. A. R. Chekhlov, "On bracket Lie of endomorphisms of abelian groups", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2009, 2(6), 78–84 34. A. R. Chekhlov, "On projective invariant subgroups of abelian groups", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2009, 1(5), 31–36 35. A. R. Chekhlov, "On projective invariant subgroups of Abelian groups", Fundam. Prikl. Mat., 14:6 (2008), 211–218 ; J. Math. Sci., 164:1 (2010), 143–147 36. A. R. Chekhlov, "Properties of Projective Invariant Subgroups of Abelian Groups", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2008, 1(2), 76–82 37. A. R. Chekhlov, "On Weakly Quasipure Injective Groups", Mat. Zametki, 81:3 (2007), 434–447 ; Math. Notes, 81:3 (2007), 379–391 38. A. R. Chekhlov, "On quasi-closed mixed groups", Fundam. Prikl. Mat., 8:4 (2002), 1215–1224 39. A. R. Chekhlov, "Totally Transitive Torsion-Free Groups of Finite $p$-Rank", Algebra Logika, 40:6 (2001), 698–715 ; Algebra and Logic, 40:6 (2001), 391–400 40. A. R. Chekhlov, "On a Class of Endotransitive Groups", Mat. Zametki, 69:6 (2001), 944–949 ; Math. Notes, 69:6 (2001), 863–867 41. A. R. Chekhlov, "On decomposable fully transitive torsion-free groups", Sibirsk. Mat. Zh., 42:3 (2001), 714–719 ; Siberian Math. 
J., 42:3 (2001), 605–609 42. P. A. Krylov, A. R. Chekhlov, "Torsion-free abelian groups with a large number of endomorphisms", Trudy Inst. Mat. i Mekh. UrO RAN, 7:2 (2001), 194–207 ; Proc. Steklov Inst. Math. (Suppl.), 2001no. , suppl. 2, S156–S168 43. A. R. Chekhlov, "Quasipure injective torsion-free groups with indecomposable pure subgroups", Mat. Zametki, 68:4 (2000), 587–592 ; Math. Notes, 68:4 (2000), 502–506 44. A. R. Chekhlov, "Direct products and direct sums of torsion-free abelian $QCPI$-groups", Izv. Vyssh. Uchebn. Zaved. Mat., 1990, 4, 58–67 ; Soviet Math. (Iz. VUZ), 34:4 (1990), 69–79 45. A. R. Chekhlov, "Abelian torsion-free $CS$-groups", Izv. Vyssh. Uchebn. Zaved. Mat., 1990, 3, 84–87 ; Soviet Math. (Iz. VUZ), 34:3 (1990), 103–106 46. A. R. Chekhlov, "Cohesive quasipure injective abelian groups", Izv. Vyssh. Uchebn. Zaved. Mat., 1989, 10, 84–87 ; Soviet Math. (Iz. VUZ), 33:10 (1989), 116–120 47. A. R. Chekhlov, "Quasipure injective torsion-free Abelian groups", Mat. Zametki, 46:3 (1989), 93–99 ; Math. Notes, 46:3 (1989), 739–743 48. A. R. Chekhlov, "Quasipure injective torsion-free abelian groups", Izv. Vyssh. Uchebn. Zaved. Mat., 1988, 6, 80–83 ; Soviet Math. (Iz. VUZ), 32:6 (1988), 114–118 49. A. R. Chekhlov, "Some classes of torsion-free abelian groups that are close to quasi-pure-injective groups", Izv. Vyssh. Uchebn. Zaved. Mat., 1985, 8, 82–83 ; Soviet Math. (Iz. VUZ), 29:8 (1985), 116–118 50. P. A. Krylov, A. R. Chekhlov, "To the 110th anniversary of Sergei Antonovich Chunikhin", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2016, 1(39), 115–124 51. S. Ya. Grinshpon, A. R. Chekhlov, "P. A. Krylov. To the 65$^{\mathrm{th}}$ anniversary", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2013, 1(21), 116–122 52. P. A. Krylov, A. R. Chekhlov, "Grinshpon Samuil Yakovlevich (on the occasion of the 65th anniversary)", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2012, 4(20), 131–134 53. P. A. Krylov, A. R. Chekhlov, "The first head of the Department of Algebra in Tomsk State University", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2012, 3(19), 107–112 54. L. S. Kopaneva, A. R. Chekhlov, "On F. E. Molin's archive", Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2011, 3(15), 117–126 Tomsk State University Tomsk State University, Faculty of Mechanics and Mathematics
Random Signal Analysis : Question Paper May 2014 - Electronics & Telecomm. (Semester 5) | Mumbai University (MU)
Random Signal Analysis - May 2014
Electronics & Telecomm. (Semester 5)
TOTAL MARKS: 80
(2) Attempt any three from the remaining questions.
(3) Assume data if required.
(4) Figures to the right indicate full marks.
1 (a) Explain any two properties of the cross-correlation function. (5 marks)
1 (b) State and prove any two properties of the probability distribution function. (5 marks)
1 (c) Define strict-sense stationary and wide-sense stationary processes. (5 marks)
1 (d) State and explain joint and conditional probability of events. (5 marks)
2 (a) Box 1 contains 5 white balls and 6 black balls. Box 2 contains 6 white balls and 4 black balls. A box is selected at random and then a ball is chosen at random from the selected box. (i) What is the probability that the ball chosen will be a white ball? (ii) Given that the ball chosen is white, what is the probability that it came from Box 1? (8 marks)
2 (b) The joint probability density function of (X,Y) is given by $f_{XY}(x,y)=Ke^{-(x+y)}$, $0<x<y<\infty$. Find: K; (i) the marginal densities of X and Y; (ii) Are X and Y independent? (12 marks)
3 (a) If X and Y are two independent random variables and Z=X+Y, prove that the probability density function of Z is given by the convolution of their individual densities. (10 marks)
3 (b) Find the characteristic function of the binomial distribution and the Poisson distribution. (10 marks)
4 (a) Define the central limit theorem and give its significance. (5 marks)
4 (b) Describe a sequence of random variables. (5 marks)
4 (c) State and prove the Chapman-Kolmogorov equation. (10 marks)
5 (a) Find the autocorrelation function and power spectral density of the random process x(t)=a cos(bt+Y), where a and b are constants and Y is a random variable uniformly distributed over (−π, π). (10 marks)
5 (b) Show that the random process x(t)=A cos(w_0 t+θ), where A and w_0 are constants and θ is uniformly distributed over (0, 2π), is wide-sense stationary. (10 marks)
6 (a) Explain the power spectral density function. State its important properties and prove any one of them. (10 marks)
6 (b) Prove that if the input to an LTI system is WSS, then the output is also WSS. (10 marks)
7 (a) Prove that the Poisson process is a Markov process. (5 marks)
7 (b) The transition matrix of a Markov chain with three states 0, 1, 2 (rows and columns indexed by state) is given by $$P=\left[\begin{array}{ccc}0.75 & 0.25 & 0 \\ 0.25 & 0.5 & 0.25 \\ 0 & 0.75 & 0.25\end{array}\right]$$ and the initial state distribution is P(x_0=i)=1/3, i=0,1,2. Find: (i) P[x_2=2] (ii) P[x_3=1, x_2=2, x_1=1, x_0=2]. (10 marks)
7 (c) Define a Markov chain with an example and an application. (5 marks)
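For question 7(b), a minimal NumPy sketch of the two required computations is shown below. The transition matrix and the uniform initial distribution are taken from the question itself; the variable names and the printed approximate values are only illustrative.

```python
import numpy as np

# Transition matrix P (rows/columns indexed by states 0, 1, 2) and the
# uniform initial distribution, both taken from question 7(b).
P = np.array([[0.75, 0.25, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.75, 0.25]])
pi0 = np.array([1/3, 1/3, 1/3])

# (i) P[x2 = 2]: propagate the initial distribution two steps, read off state 2.
dist_2 = pi0 @ np.linalg.matrix_power(P, 2)
print(dist_2[2])                               # approx. 0.1667

# (ii) P[x3 = 1, x2 = 2, x1 = 1, x0 = 2]: chain rule for a Markov chain,
#      P[x0 = 2] * P(2 -> 1) * P(1 -> 2) * P(2 -> 1).
print(pi0[2] * P[2, 1] * P[1, 2] * P[2, 1])    # 1/3 * 0.75 * 0.25 * 0.75 ≈ 0.0469
```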
Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2016: genomics Optimal choice of word length when comparing two Markov sequences using a χ 2-statistic Xin Bai1, Kujin Tang2, Jie Ren2, Michael Waterman1,2 & Fengzhu Sun1,2 BMC Genomics volume 18, Article number: 732 (2017) Cite this article Alignment-free sequence comparison using counts of word patterns (grams, k-tuples) has become an active research topic due to the large amount of sequence data from the new sequencing technologies. Genome sequences are frequently modelled by Markov chains and the likelihood ratio test or the corresponding approximate χ 2-statistic has been suggested to compare two sequences. However, it is not known how to best choose the word length k in such studies. We develop an optimal strategy to choose k by maximizing the statistical power of detecting differences between two sequences. Let the orders of the Markov chains for the two sequences be r 1 and r 2, respectively. We show through both simulations and theoretical studies that the optimal k= max(r 1,r 2)+1 for both long sequences and next generation sequencing (NGS) read data. The orders of the Markov chains may be unknown and several methods have been developed to estimate the orders of Markov chains based on both long sequences and NGS reads. We study the power loss of the statistics when the estimated orders are used. It is shown that the power loss is minimal for some of the estimators of the orders of Markov chains. Our studies provide guidelines on choosing the optimal word length for the comparison of Markov sequences. The comparison of genome sequences is important for understanding their relationships. The most widely used methods are alignment based algorithms such as the Smith-Waterman algorithm [1], BLAST [2], BLAT [3], etc. In such studies, homologous genes among the genomes are identified, aligned, and then their relationships inferred using phylogenetic analysis tools to obtain gene trees. A consensus tree combining the gene trees from all the homologous genes is used to represent the relationship among the genomes. However, non-conserved regions form large fractions of most genomes and they also contain information about the relationships among the sequences. Most alignment based methods do not consider the non-conserved regions resulting in loss of information. Another drawback of the alignment based method is the extremely long time needed for the analysis, especially when the number of genome sequences is large. With the development of new sequencing technologies, a large number of genome sequences are now available and many more will be generated. To overcome the challenges facing alignment based methods for the study of genome sequence relationships, several alignment-free sequence comparison methods have been developed as reviewed in [4, 5]. Most of the methods use the counts of word patterns within the sequences [6–12]. One important problem is the determination of word length used for the comparison of sequences. Several investigators addressed this issue using simulation studies or empirical data [13–15]. Wu et al. [15] investigated the performance of Euclidian distance, standardized Euclidian distance, and symmetric Kullback–Leibler discrepancy (SK-LD) for alignment free genome comparison. For a given dissimilarity measure, Wu et al. 
[15] simulated the evolution of two sequences with different mutation rates and chose the word length that yielded the highest Spearman correlation between the dissimilarity measure and the mutation rate. They showed that SK-LD performed well and the optimal word length increases with the sequence length. Using a similar approach, Forêt et al. [14] studied the optimal word length for D 2 that measures the number of shared words between two sequences [8]. Sims et al. [13] suggested a range for the optimal word length using alignment-free genome comparison with SK-LD. Markov chains (MC) have been widely used to model molecular sequences to solve several problems including the enrichment and depletion of certain word patterns [16], prediction of occurrences of long word patterns from short patterns [17, 18], and the detecting of signals in introns [19]. Narlikar et al. [20] showed the importance of using appropriate Markov models on phylogenetic analysis, assignment of sequence fragments to different genomes in metagnomic studies, motif discovery, and functional classification of promoters. In this paper, we consider the comparison of two sequences modelled using Markov chains [11, 12] as a hypothesis testing problem. The null hypothesis is that the two sequences are generated by the same Markov chain. The alternative hypothesis is that they are generated by different Markov chains. We investigate a log-likelihood ratio statistic for testing the hypotheses and its corresponding χ 2-statistic based on the counts of word patterns in the sequences. The details of the statistics are given in "The likelihood ratio statistic and the χ 2-statistic for comparing two Markov sequences" subsection. We use statistical power of the test statistic under the alternative hypothesis to evaluate its performance. We will study the following questions. a) What is the optimal word length k yielding the highest power of the χ 2-statistic? b) How do the estimated orders of the Markov sequences, sequence length, word length, and sequencing error rate impact the power of the χ 2-statistic? c) For NGS read data, what is the distribution of the χ 2-statistic under the null hypothesis? (d) Do the conclusions from (a) and (b) still hold for NGS reads? Alignment-free comparison of two long Markov sequences We study alignment-free comparison of two long Markov sequences using counts of word patterns. We first introduce the likelihood ratio [11, 12] and corresponding χ 2-statistic. We show theoretically and by simulations that the optimal word length is k= max{r 1,r 2}+1, where r 1 and r 2 are the orders of the two Markov sequences. We then study the effects of sequence length, word length, and estimated orders of MCs on the power of the χ 2-statistic. The likelihood ratio statistic and the χ 2-statistic for comparing two Markov sequences Given two Markov sequences A 1 and A 2, we want to test if the two sequences follow the same MC, that is, if their transition probability matrices are the same. We formulate this as a hypothesis testing problem. The null hypothesis H 0 is that the two sequences are generated from the same MC. The alternative hypothesis H 1 is that the two sequences are generated from MCs with different transition probability matrices. To test the hypotheses, we use a likelihood ratio test statistic. Since we may not know the orders of MCs, we use counts of word patterns of length k (k≥1) to test if the two sequences are from the same MC of order k−1 as in [11]. 
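All of the statistics considered below are functions of the counts of word patterns (denoted N_w in what follows). As a concrete illustration, and not code from the paper, a minimal Python sketch of counting overlapping length-k words in a single sequence is given here; the function name and example sequence are illustrative.

```python
from collections import Counter

def word_counts(seq, k):
    """Count overlapping word patterns (k-tuples) of length k in a sequence.

    Returns a Counter mapping each length-k word to its number of
    occurrences; there are len(seq) - k + 1 overlapping windows.
    """
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Example: counts of 3-mers and of their length-2 prefixes in one sequence.
seq = "ACGTACGTTGCA"
counts_k = word_counts(seq, 3)    # counts of length-k words
counts_km1 = word_counts(seq, 2)  # counts of length-(k-1) words
print(counts_k["ACG"], counts_km1["AC"])
```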
The basic formulation of the problem can be described as follows. Let $$ \mathbf{A}_{s} = A_{s,1} A_{s,2} \cdots \cdots A_{s, L_{s}},\ \ s = 1, 2, $$ where L s is the length of the s-th sequence and A s,i , 1≤i≤L s is the letter of the sequence at the i-th position. To derive the likelihood ratio test, we assume that both sequences follow MCs of order k−1. The probability of the s-th sequence is $$\begin{array}{@{}rcl@{}} P(\mathbf{A}_{s}) &=& \pi^{(s)}_{A_{s,1} A_{s,2} \cdots A_{s,k-1}} \prod_{i = k}^{L_{s}} t^{(s)}\left(A_{s, i-k+1} \cdots A_{s, i-1}, A_{s, i}\right) \\ &=& \pi^{(s)}_{A_{s,1} A_{s,2} \cdots A_{s,k-1}} \prod_{\mathbf{w}} \left(t^{(s)}\left(\mathbf{w}^{-}, w_{k}\right)\right)^{N^{(s)}_{\mathbf{w}}}, \end{array} $$ where w=w 1 w 2⋯w k is any word pattern of length k, w −=w 1 w 2⋯w k−1 (the last letter is removed), \(N^{(s)}_{\mathbf {w}}\) is the number of occurrences of word w, and t (s)(w −,w k ) is the (k−1)-th order transition probability from w − to w k in the s-th sequence, and π (s) is the initial distribution. From this equation, it is easy to show that the maximum likelihood estimate of t (s)(w −,w k ) is $$ \hat{t}^{(s)}(\mathbf{w}^{-}, w_{k})= \frac{N^{(s)}_{\mathbf{w}}}{N^{(s)}_{\mathbf{w}^{-}}}. $$ Therefore, we can obtain the maximum likelihood for the s-th sequence \(\hat {P}(\mathbf {A}_{s})\) by replacing t (s)(w −,w k ) with \(\hat {t}^{(s)}(\mathbf {w}^{-}, w_{k})\) in equation (1). The likelihood of both sequences under the alternative hypothesis H 1 is $${} P_{1}\! = \prod_{s = 1}^{2}\! \hat{P}(\mathbf{A}_{s})\! = \prod_{s = 1}^{2} \pi^{(s)}_{A_{s,1} A_{s,2} \cdots A_{s,k-1}} \prod_{\mathbf{w}} \left(\hat{t}^{(s)}\left(\mathbf{w}^{-}, w_{k}\right)\right)^{N^{(s)}_{\mathbf{w}}}. $$ Under the null hypothesis H 0, the transition matrices for the two sequences are the same. Using the same argument as above, we can show that the maximum likelihood estimate of the common transition probability t(w −,w k ) is given by $$ \hat{t}(\mathbf{w}^{-}, w_{k}) = \frac{N^{(-)}_{\mathbf{w}}}{ N^{(-)}_{\mathbf{w}^{-}}}, $$ where \(N^{(-)}_{\mathbf {w}} = \sum _{s=1}^{2} N^{(s)}_{\mathbf {w}}\). Then the probability, P 0, of both sequences can be estimated similarly as in Eq. (2). The log-likelihood ratio statistic is given by (ignoring the first k−1 bases in each sequence) $$\begin{array}{@{}rcl@{}} \log(P_{1}/P_{0}) & = & \sum_{s = 1}^{2} \sum_{w_{1} w_{2} \cdots w_{k-1}} \sum_{w_{k}} N^{(s)}_{\mathbf{w}} \log \left(\frac{\hat{t}^{(s)}(\mathbf{w}^{-}, w_{k})}{\hat{t}(\mathbf{w}^{-}, w_{k})} \right) \\ & = & \sum_{s = 1}^{2} \sum_{w_{1} w_{2} \cdots w_{k-1}} \sum_{w_{k}} N^{(s)}_{\mathbf{w}} \log \left(\frac{N^{(s)}_{\mathbf{w}} \times N^{(-)}_{\mathbf{w}^{-}}}{N^{(s)}_{\mathbf{w}^{-}} \times N^{(-)}_{\mathbf{w}}} \right)\\ \end{array} $$ The above statistic has an approximate χ 2-distribution as the lengths of both sequences become large [21, 22]. It has been shown that twice the log-likelihood ratio statistic has the same approximate distribution as the following χ 2-statistic [11] defined by $$ S_{k} = \sum_{s = 1}^{2} \sum_{w_{1} w_{2} \cdots w_{k-1}} \sum_{w_{k}} \frac{\left(N^{(s)}_{\mathbf{w}} - N^{(s)}_{\mathbf{w}^{-}} N^{(-)}_{\mathbf{w}}/N^{(-)}_{\mathbf{w}^{-}} \right)^{2}}{N^{(s)}_{\mathbf{w}^{-}} N^{(-)}_{\mathbf{w}}/N^{(-)}_{\mathbf{w}^{-}}}. $$ Since 2 log(P 1/P 0) and S k are approximately equal, in our study, we use the measure S k for sequence comparison. 
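Because S_k depends only on the word counts of the two sequences, it can be computed directly from count tables. The sketch below is a straightforward implementation of Eq. (4) for k ≥ 2, ignoring edge effects from the first k−1 bases; the function names and example sequences are illustrative assumptions rather than code from the paper.

```python
from collections import Counter
from itertools import product

def word_counts(seq, k):
    # Overlapping counts of all length-k words in seq.
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def chi2_statistic(seq1, seq2, k, alphabet="ACGT"):
    """S_k of Eq. (4), computed from the k- and (k-1)-word counts of two sequences.

    Intended for k >= 2; for k = 1 the simpler form in Eq. (5) applies.
    Terms whose expected count is zero are skipped.
    """
    assert k >= 2
    counts = [word_counts(seq1, k), word_counts(seq2, k)]
    prefixes = [word_counts(seq1, k - 1), word_counts(seq2, k - 1)]
    s_k = 0.0
    for prefix in map("".join, product(alphabet, repeat=k - 1)):
        pooled_prefix = prefixes[0][prefix] + prefixes[1][prefix]   # N^(-)_{w-}
        if pooled_prefix == 0:
            continue
        for last in alphabet:
            w = prefix + last
            pooled = counts[0][w] + counts[1][w]                    # N^(-)_w
            for s in range(2):
                expected = prefixes[s][prefix] * pooled / pooled_prefix
                if expected > 0:
                    s_k += (counts[s][w] - expected) ** 2 / expected
    return s_k

# Example usage with two short hypothetical sequences and word length k = 2.
print(chi2_statistic("ACGTACGTAGGTCA" * 50, "ACGGTCGATCGTTA" * 50, k=2))
```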
To test if two independent identically distributed (i.i.d) sequences (r=0) have the same nucleotide frequencies, we set k=1, \(N^{(s)}_{\mathbf {w}^{-}} = L_{s}, ~s = 1, 2\), \(N^{(-)}_{\mathbf {w}^{-}} = L_{1} + L_{2},\) and S 1 is calculated by $$ S_{1} = \sum_{\mathbf{w}} \frac{L_{1} L_{2} \left(p_{\mathbf{w}}^{(1)} - p_{\mathbf{w}}^{(2)}\right)^{2}}{L_{1} p_{\mathbf{w}}^{(1)} + L_{2} p_{\mathbf{w}}^{(2)} }, $$ where w is a nucleotide and the summation is over all the nucleotides, \(p_{\mathbf {w}}^{(s)} = N_{\mathbf {w}}^{(s)}/L_{s}\), and L s is the length of the s-th sequence. Estimating the order of a MC sequence We usually do not know the order, r, of the MC corresponding to each sequence and it needs to be estimated from the data. Several methods have been developed to estimate the order of a MC including those based on the Akaike information criterion (AIC) [23] and Bayesian information criterion (BIC) [24]. The AIC and BIC for a Markov sequence of length L are defined by $$\begin{array}{@{}rcl@{}} \text{AIC}(k)&=&-2\sum_{\mathbf{w}\in\mathcal{A}^{k+1}}N_{\mathbf{w}}\log\frac{N_{\mathbf{w}}}{N_{\mathbf{w^{-}}}}+2(C-1)C^{k},\\ \text{BIC}(k)&=&-2\sum_{\mathbf{w}\in\mathcal{A}^{k+1}}N_{\mathbf{w}}\log\frac{N_{\mathbf{w}}}{N_{\mathbf{w^{-}}}}+(C-1)C^{k} \log (L-k+1), \end{array} $$ where C is the alphabet size. The estimators of the order of a Markov sequence based on AIC and BIC are given by $$\begin{array}{@{}rcl@{}} \hat{r}_{\text{AIC}}=\arg\min_{k} AIC (k), \end{array} $$ $$\begin{array}{@{}rcl@{}} \hat{r}_{\text{BIC}}=\arg\min_{k} BIC (k). \end{array} $$ Peres and Shields [25] proposed the following estimator for the order of a Markov chain $$ \hat{r}_{PS}=\arg\max_{k} \left \{ \frac{\Delta^{k}}{\Delta^{k+1}} \right \}-1, $$ $$\Delta^{k} = \max_{\mathbf{w}\in\mathcal{A}^{k}} |N_{\mathbf{w}}-E_{\mathbf{w}}|, $$ and \(\mathcal {A}\) is the set of all alphabet and \(E_{\mathbf {w}}=\frac { N_{\mathbf {-w}} N_{\mathbf {w-}} }{N_{\mathbf {-w-}}}\) is the expectation of word w estimated by a k−2-th order MC. Based on similar ideas as in [25], Ren et al. [26] proposed several methods to estimate the order of a MC based on $$T_{k} = \sum\limits_{\mathbf{w}\in\mathcal{A}^{k}}\frac{(N_{\mathbf{w}}-E_{\mathbf{w}})^{2}}{E_{\mathbf{w}}},\qquad \text{where}\quad E_{\mathbf{w}}=\frac{N_{\mathbf{-w}} N_{\mathbf{w-}}}{N_{\mathbf{-w-}}}. $$ The statistic T k has an approximate χ 2-distribution with df k =(C−1)2 C k−2 degrees of freedom when k≥r+2 [21, 22, 27, 28]. When k<r+2, T k will be large if the sequence is long, while T k should be moderate when k≥r+2. Based on this idea, we can estimate the order of the MC by $$ \hat{r}_{T} = \arg\min_{k} \left \{\frac{T_{k+1}}{T_{k}} \right \}-1. $$ Instead of using T k directly, we can calculate the corresponding p-value $$\begin{array}{@{}rcl@{}} p_{k}=P(T_{k} \geq t_{k})=P\left(\chi^{2}_{df_{k}} \geq t_{k}\right), \end{array} $$ where t k is the observed value of T k based on the long sequence. Since t k is generally large when k≤r+1 and thus p k should be small, while p k is moderate when k≥r+2. Based on this idea, we can estimate the order of a MC by $$ \hat{r}_{p}=\arg\min_{k} \left \{ \frac{\log(p_{k+1})}{\log(p_{k})} \right \}-1. $$ It is also possible to estimate the order of a MC based on the counts of individual word patterns. 
Let $$\begin{array}{@{}rcl@{}} Z_{\mathbf{w}}=\frac{N_{\mathbf{w}}-E_{\mathbf{w}}}{\hat{\sigma}_{\mathbf{w}}}, \end{array} $$ where \( \hat {\sigma }^{2}_{\mathbf {w}} = E_{\mathbf {w}} \left (1-\frac {N_{-\mathbf {w}}}{N_{-\mathbf {w}-}} \right) \left (1-\frac {N_{\mathbf {w}-}}{N_{-\mathbf {w}-}}\right)\) with \(E_{\mathbf {w}} = \frac { N_{\mathbf {-w}} N_{\mathbf {w-}} }{N_{\mathbf {-w-}}}.\) It has been shown that, for every word w, Z w is approximately normally distributed when k≥r+2. When the sequence is long, we expect Z max(k)= maxw,|w|=k|Z w | to be large when k≤r+1, while it is moderate when k≥r+2. Similar to the ideas given above, we can estimate the order of the MC by $$ \hat{r}_{Z} = \arg\min_{k} \left \{\frac{Z_{\max}(k+1)}{Z_{\max}(k)} \right \}-1. $$ We are interested in knowing the power loss of the χ 2-statistic when any of the estimated orders of the two sequences are used for the comparison of MC sequences. Alignment-free comparison of two Markov sequences based on NGS reads We then investigate the comparison of sequences based on NGS reads. We first extend the χ 2-statistic in Eq. (4) to be applicable to NGS reads. We then extend the methods for estimating the order of MC sequences for long sequences to be applicable to NGS reads. Finally, we study the optimal word length for genome comparison based on NGS reads and investigate the effect of sequence length, read length, distributions of reads along the genome, and sequencing errors on the power of the statistic. Alignment-free dissimilarity measures for comparing Markov sequences based on NGS reads Next generation sequencing (NGS) technologies are widely used to sequence genomes. Instead of whole genome sequences, NGS data consists of short reads with lengths ranging from 100 bps to several hundred base pairs depending on the sequencing technologies. Since the reads are randomly chosen from the genomes, some regions can be sequenced multiple times while other regions may not be sequenced. The log-likelihood ratio statistic in Eq. (3) for long sequences cannot be directly extended to NGS reads because of the dependence of the overlapping reads. On the other hand, the χ 2-statistic in Eq. (4) depends only on word counts in the two sequences, and thus can be easily extended to NGS read data. We replace N w in Eq. (4) by \(N^{R}_{\mathbf {w}}\), the number of occurrences of word pattern w among the NGS reads, to obtain a new statistic, $$\begin{array}{@{}rcl@{}} S_{k}^{R} &=& \sum_{s = 1}^{2} \sum_{w_{1} w_{2} \cdots w_{k-1}} \sum_{w_{k}} \frac{\left(N^{R(s)}_{\mathbf{w}} - N^{R(s)}_{\mathbf{w}^{-}} N^{R(-)}_{\mathbf{w}}/N^{R(-)}_{\mathbf{w}^{-}} \right)^{2}}{N^{R(s)}_{\mathbf{w}^{-}} N^{R(-)}_{\mathbf{w}}/N^{R(-)}_{\mathbf{w}^{-}}}, \\ \end{array} $$ $$\begin{array}{@{}rcl@{}} S_{1}^{R} &=& \sum_{\mathbf{w}} \frac{L_{1} L_{2} \left(p_{\mathbf{w}}^{R(1)} - p_{\mathbf{w}}^{R(2)}\right)^{2}}{L_{1} p_{\mathbf{w}}^{R(1)} + L_{2} p_{\mathbf{w}}^{R(2)} }. \end{array} $$ We will use \(S_{k}^{R}\) to measure the dissimilarity between the two sequences. Estimating the order of a Markov sequence based on NGS reads We next extend the estimators of the order of a MC in "Estimating the order of a MC sequence" subsection to NGS reads. The estimators r AIC and r BIC cannot be directly calculated because the likelihood of the reads is hard to calculate due to the potential overlaps among the reads. 
On the other hand, the other remaining estimators in "Estimating the order of a MC sequence" subsection, r PS , r S ,r p , and r Z , depend only on the word counts and we can just replace N w in these Eqs. by \(N^{R}_{\mathbf {w}}\) for the NGS data. For simplicity of notation, we will continue to use the same notation as that in "Estimating the order of a MC sequence" subsection for the corresponding estimators. Similar to the study of long sequences, we investigate the power loss of the statistic \(S_{k}^{R}\) when the estimated orders of the sequences are used to compare the power of \(S_{k}^{R}\) when the true orders of the sequences are used. Optimal word length for the comparison of Markov sequences using the χ 2-statistic The following theorem gives the optimal word length for the comparison of two sequences using the χ 2-statistics given in Eqs. 4 and (5). The theoretical proof is given in the Additional file 1. Theorem 1 Consider two Markov sequences of orders r 1 and r 2, respectively. We test the alternative hypothesis H 1: the transition matrices of the two Markov sequences are different, versus the null hypothesis H 0: the transition probability matrices are the same, using the χ 2-statistic in Eqs. (4) and (5). Then the power of the χ 2-statistic under the alternative hypothesis is maximized when the word length k= max{r 1,r 2}+1. In the following, we present simulation results to show the power of the statistic S k in Eqs. (4) and (5) for different values of sequence length and word pattern length. We simulated two Markov sequences A 1 and A 2 with different transition matrices and then calculated the distributions of the χ 2-statistic. We set the length of both sequences to be the same L: 10, 20 and 30 kbps, respectively, and started the sequences from the stationary distribution. We simulated MCs of first order and second order, respectively. Tables 1 and 2 show the transition probability matrices of (a) the first and (b) the second order transition matrices we used in the simulations. Here we present simulation results based on transition matrices from Tables 1 and 2 for simplicity. We also tried other transition matrices and the conclusions were the same. Table 1 The transition probability matrix of the first order Markov chain in our simulation studies Table 2 The transition probability matrix of the second order Markov chain The parameters α i ,β i ,γ i ,δ i , i=1,2, in Table 2 control the transition matrix of the second order MC. Note that if α i =β i =γ i =δ i , i=1,2, the MC will become a first order MC. Under the null hypothesis, sequences A 1 and A 2 follow the same Markov model. So we set the transition matrices for both A 1 and A 2 to be Table 1. Under the alternative hypothesis, the two sequences are different and we set the transition matrix of sequence A 1 to be from Table 1 and the transition matrix of sequence A 2 to be from Table 2. We set the parameters of Table 2 to be (1) α i =β i =γ i =δ i =0.05, i=1,2, and (2) α 1=α 2=0.05,β 1=β 2=−0.05,γ 1=γ 2=0.03,δ 1=δ 2=−0.03. The former scenario corresponds to the situation that sequences A 1 and A 2 have different orders and the latter scenario corresponds to the situation that they both have first order but different transition matrices. We then calculated the dissimilarity measure between sequence A 1 and A 2 using the χ 2-statistic in Eq. (4). We repeated the above procedures 2000 times to obtain an approximate distribution of S k under the null hypothesis. 
We sorted the value of S k in ascending order and took the 95% percentile as a threshold. Under the alternative hypothesis, the power is approximated by the fraction of times that S k is above the threshold. Figure 1 shows the relationship between the word size k and the power of S k for long sequences of different lengths. It can be seen from the figure that the power of S k is highest when the word length is k optimal= max{r 1,r 2}+1. When the word length is less than the optimal value, the power of S k can be significantly lower. On the other hand, when the word length is slightly higher than the optimal word length, the power of S k is still close to the optimal power. However, when the word length is too large, the power of S k can be much lower. Relationship between the word length k and the power. The transition matrix of sequence A 1 is from Table 1 and the transition matrix of sequence A 2 is from Table 2 with the parameters being (a) α i =β i =γ i =δ i =0.05, i=1,2 for the first order MC and (b) α 1=α 2=0.05, β 1=β 2=−0.05,γ 1=γ 2=0.03,δ 1=δ 2=−0.03 for the second order MC Given long sequences, the orders of the MCs are usually not known and have to be estimated from the data. We then studied how the power of S k changes when the estimated orders of the sequences are used compared to the power when the true orders of the sequences are known. Let \(\hat {r}_{1}\) and \(\hat {r}_{2}\) be the estimated orders of sequences A 1 and A 2, respectively. We compared the power of \(S_{\hat {k}}\) where \(\hat {k} = \max \left \{ \hat {r}_{1}, \hat {r}_{2} \right \} + 1\) with that of S k−optimal where k−optimal= max{r 1,r 2}+1. The power loss is defined as the difference between the power of S k−optimal and that of \(S_{\hat {k}}\). When both sequences are of first order, there was no power loss in our simulations. Figure 2 shows the power loss using different methods to estimate the orders of the sequences described in Eqs. (6) to (11) when the first sequence is of first order and the second sequence is of second order. There are significant differences among the various estimators when the sequence length is below 20 kbps. The power loss is minimal based on r AIC, r BIC, and r p for all three sequence lengths from 10 to 30 kbps, indicating their good performance in estimating the true Markov order of the sequence. When the sequence length is long, e.g 30kpbs, the power loss is minimal for all the estimators across the sequence lengths simulated. The power loss of the χ 2-statistic based on the estimated orders of the long sequences. A first order and a second order Markov long sequences are used Optimal word length for \(S_{k}^{R}\) for the comparison of two Markov sequences with NGS data The distribution of \(S_{k}^{R}\) was not known previously. In this paper, we have the following theorem whose proof is given in the Additional file 1. Consider two Markov sequences with the same length L and Markov orders of r 1 and r 2, respectively. Suppose that they are sequenced using NGS with M reads of length κ for each sequence. Let \(S_{k}^{R}\) be defined as in Eqs. (12) and (13). Suppose that each sequence can be divided into (not necessarily contiguous) regions with constant coverage r i for the i-th region, so that every base is covered exactly r i times. Let L is be the length of the i-th region in the short read data for the s-th sequence and \({\lim }_{L\rightarrow \infty } L_{is}/L=f_{i},~s=1,2\). 
Then Under the null hypothesis that the two sequences follow the same Markov chain, as sequence length L becomes large, \(S_{k}^{R}/d\) is approximately χ 2-distributed with degrees of freedom df k =(C−1)C k−1, where C is the alphabet size and $$ d=\frac{\sum_{i}r^{2}_{i}f_{i}}{\sum_{j}r_{j}f_{j}}. $$ In particular, under the Lander-Waterman model, the reads are randomly sampled from the long sequence so that the NGS reads follow a Poisson process with rate λ=M κ/L[29], for r i =i, f i =λ i exp(−λ)/i!, d=1+λ. If we use \(S_{k}^{R}\) to test whether the two sequences follow the same MC, under the alternative hypothesis, the power of \(S_{k}^{R}\) is the highest when k= max{r 1,r 2}+1. To illustrate the first part of Theorem 2, we simulated the distribution of \(S^{R}_{k}\) under the null hypothesis. We assumed that both sequences are of order 1 with the transition probability matrix from Table 1. First, we generated MCs with length of L=10 and 20 kbps, respectively. The simulations of long sequences were the same as in "Optimal word length for the comparison of Markov sequences using the χ 2-statistic" subsection. Second, we simulated NGS reads by sampling a varying number of reads from each sequence. The sampling of the reads was simulated as in [26,30]. The length of the reads was assumed to be a constant κ=200 bps and the number of reads M = 100 and 200 bps, respectively. The coverage of reads is calculated as λ=M κ/L. Two types of read distributions were simulated: (a) homogeneous sampling that the reads were sampled uniformly along the long sequence [29], and (b) heterogeneous sampling as in [31]. In heterogeneous sampling, we evenly divided the long genome sequences into 100 blocks. For each block, we sampled a random number independently from the gamma distribution Γ(1,20). The sampling probability for each position in the block is proportional to the chosen number. Sequencing errors are present in NGS data. In order to see the effect of sequencing errors on the distribution of \(S^{R}_{k}\), we simulated sequencing errors such that each base was changed to other three bases with equal probability 0.005. Once the reads are generated, we then calculated \(S^{R}_{k}\) between two NGS read data sets. In our simulation study, we fixed k=3 and the simulation process was repeated 2000 times for each combination of sequence length and number of reads (L,M) to obtain the approximate distribution of \(S^{R}_{3}/d\), where d is given in Eq. (14). Figure 3 shows the Q-Q (Quantile-Quantile) plots of the 2000 \(S^{R}_{3}/d\) scores v.s. 2000 scores sampled from a \(\chi ^{2}_{48}\) distribution, where the subscript 48 indicates the degrees of freedom of the χ 2 distribution. The constant d is 1+λ where λ denotes the coverage for homogeneous sampling; and d is calculated from Eq. (14) for heterogeneous sampling. It can be seen from the figure that the Q-Q plots center around the line y=x for both homogeneous and heterogeneous sampling without sequencing errors. These observations are consistent with part 1 of the Theorem 2. However, when sequence errors are present, the distribution of \(S^{R}_{3}/d\) deviates slightly from \(\chi ^{2}_{48}\). Q-Q plots of the 2000 \(S^{R}_{3}/d\) scores v.s. 2000 scores sampled from a \(\chi ^{2}_{48}\) distribution. The length of sequences L is 20kpbs and the number of reads M is 200. 
a homogeneous sampling without errors, b homogeneous sampling with errors, c heterogeneous sampling without errors, and d heterogeneous sampling with errors We next studied how the power of \(S_{k}^{R}\) changes with word length, sequence length, and sequencing errors. Here we show the results for the scenario that one sequence has first order and the other has second order. The results for the scenario that both sequences are of first order are given in the Additional file 1. The type I error was set at 0.05. Figure 4 shows the relationship between the word length k and the power of \(S^{R}_{k}\) using NGS short reads for different sampling of the reads and with/without sequencing errors. Several conclusions can be derived. First, the power of \(S^{R}_{k}\) is the highest when the word length k= max{r 1,r 2}+1. This is consistent with the result with long sequences. Second, sequencing errors can decrease the power of \(S^{R}_{k}\). However, with the range of sequencing error rates of current technologies, the decrease in power is minimal. Third, the power of \(S^{R}_{k}\) based on heterogeneous sampling of the reads is lower than that based on homogeneous sampling of the reads. Fourth, the power of \(S^{R}_{k}\) increases with both sequence length L and number of reads M as expected. The relationship between the word length k and the power of \(S_{k}^{R}\) based on NGS reads. The transition matrix of sequence A 1 is from Table 1 and the transition matrix of A 2 is from Table 2. The parameters of Table 2 are α 1=α 2=0.05,β 1=β 2=−0.05,γ 1=γ 2=0.03,δ 1=δ 2=−0.03. a homogeneous sampling without errors, b homogeneous sampling with errors, c heterogeneous sampling without errors, and d heterogeneous sampling with errors We then studied the effect on the power of \(S_{k}^{R}\) using the estimated orders of the Markov sequences with NGS reads. We used a similar approach as in "Optimal word length for the comparison of Markov sequences using the χ 2-statistic" subsection to study this problem except that we change long sequences to NGS reads. Figure 5 shows the results. It can be seen that the power loss is significant except when r p was used to estimate the order of the sequences. In all the simulated scenarios, the power loss is very small when r p is used to estimate the orders of Markov sequences. This result is consistent with the case of long sequences where r p also performs the best. The power loss of \(S_{k}^{R}\) based on different methods for estimating the order of Markov sequences based on NGS short reads. Panels are the same as in Fig. 4. a homogeneous sampling without errors, b homogeneous sampling with errors, c heterogeneous sampling without errors, and d heterogeneous sampling with errors Applications to real data Searching for homologs of the human protein HSLIPAS We used S k to analyze the relationship of 40 sequences chosen from mammals, invertebrates, viruses, plants, etc. as in [32,33]. We used HSLIPAS human lipoprotein lipase (LPL) of length 1612 bps as the query sequence and searched for similar sequences from a library set containing 39 sequences with length from 322 to 14,121 bps. The relationships among all the 40 sequences are well understood. Among the 39 library sequences, 20 sequences are from the primate division of Genbank, classified as being related to HSLIPAS, and 19 sequences that are from the divisions other than the primate division of Genbank, classified as being not related. Wu et al. 
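As noted above, r_p and the other count-based estimators depend only on word counts, so the same computation applies to a long sequence or to pooled counts from NGS reads. The sketch below implements T_k and the ratio-based estimators r_T (Eq. 8) and r_p (Eq. 10); the function names, the scanned range of word lengths, the treatment of edge effects, and the use of SciPy's χ² survival function are assumptions for illustration only.

```python
import numpy as np
from collections import Counter
from itertools import product
from scipy.stats import chi2

def word_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def t_statistic(seq, k, alphabet="ACGT"):
    """T_k: compare observed counts N_w with E_w = N_{-w} N_{w-} / N_{-w-}."""
    N_k = word_counts(seq, k)
    N_km1 = word_counts(seq, k - 1)   # prefixes w- and suffixes -w
    N_km2 = word_counts(seq, k - 2)   # middle words -w- (edge effects ignored)
    t = 0.0
    for w in map("".join, product(alphabet, repeat=k)):
        if N_km2[w[1:-1]] == 0:
            continue
        e_w = N_km1[w[1:]] * N_km1[w[:-1]] / N_km2[w[1:-1]]
        if e_w > 0:
            t += (N_k[w] - e_w) ** 2 / e_w
    return t

def estimate_order(seq, k_max=6, alphabet="ACGT"):
    """Estimators r_T (Eq. 8) and r_p (Eq. 10), scanning word lengths k = 2..k_max."""
    ks = list(range(2, k_max + 1))
    C = len(alphabet)
    T = np.array([t_statistic(seq, k, alphabet) for k in ks])
    df = np.array([(C - 1) ** 2 * C ** (k - 2) for k in ks])
    log_p = chi2.logsf(T, df)                        # log p-values of Eq. (9)
    r_T = ks[int(np.argmin(T[1:] / T[:-1]))] - 1     # argmin_k T_{k+1}/T_k, minus 1
    r_p = ks[int(np.argmin(log_p[1:] / log_p[:-1]))] - 1
    return r_T, r_p
```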
[32] estimated the orders of the 40 sequences using Schwarz information criterion (SIC) [34] and found that 13 of them follow independent identically distributed (i.i.d) model (order = 0) and 27 of them follow a first order MC. We also used BIC and found the same results as SIC. As in Wu et al. [32], we used selectivity and sensitivity to quantify the performance of the measure S k for different values of k. First, we calculated the dissimilarity between HSLIPAS and each of the 39 sequences using S k and then ranked the 39 sequences in ascending order according to the values of S k . The sequence closest to HSLIPAS is ranked as sequence 1, the sequence with the next shortest distance as sequence 2, etc. Sensitivity is defined as the number of HSLIPAS-related sequences found among the first 20 (1-20) library sequences. Selectivity is measured in terms of consecutive correct classifications [35], that is, starting from sequence 1, the total number of sequences are counted until the first non-HSLIPAS-related library sequence occurs. Thus, selectivity and sensitivity are scores from 0 to 20 and higher score means better performance on the real data set. Table 3 shows the sensitivity and selectivity of S k for different values of k from 1 to 6. It can be seen from Table 3 that k=2 yields the best result for both selectivity and sensitivity. Since about two thirds of the sequences have estimated order 1 and one third of the sequences have estimated order 0, the results are consistent with our conclusion. Table 3 The selectivity and sensitivity of S k for different word length k based on the comparison of HSLIPAS with 39 library sequences Comparison of CRM sequences in four mouse tissues We also used S k to analyze cis-regulatory module (CRM) sequences in four tissues from developing mouse embryo [36–38] as in Song et al. [4]. The four tissues we used are forebrain, heart, limb and midbrain, with the average sequence lengths to be 711, 688, 657, and 847 bps, respectively. For each tissue, we randomly chose 500 sequences from the CRM dataset to form the positive dataset. For each sequence in the positive dataset, we randomly selected a fragment from the mouse genome with the same length, ensuring a maximum of 30% repetitive sequences to form the negative dataset. Thus, we have a negative dataset containing another set of 500 sequences. We calculated the pairwise dissimilarity of sequences within the positive and also the negative dataset using the S k statistic with word length from 1-7. Then we merged the pairwise dissimilarity from the positive and negative datasets together. Sequences within the positive dataset should be closer than sequences within the negative dataset because the positive sequences should share some common CRMs. Therefore, we ranked the pairwise dissimilarity in ascending order and then predicted sequence pairs with distance smaller than a threshold as from the positive sequence pairs and otherwise we predicted them as coming from the negative pairs. For each threshold, we calculated the false positive rate and the true positive rate. Thus, by changing the threshold, we plotted the receiver operating characteristic (ROC) curve and calculated the area under the curve (AUC). For each tissue and each word length k, we repeated the above procedures 30 times. We used BIC to estimate the MC orders of the sequences. The estimated orders of positive sequences for all four tissues are given in the Additional file 1. 
Almost all positive sequences in the positive dataset have estimated orders of 0 or 1. The results are similar for the negative sequences (data not shown). Figure 6 shows the relationship between the word length k and the AUC values in all four tissues using boxplot for the 30 replicates. It can be seen from the figure that the AUC values using word length 1-3 are much higher than that using word length 4-7. The AUC values when k=1 are slightly higher than that when k=2 and k=3. However, the differences are relatively small. The results are consistent in all four tissues. These results show that when the word length is close to the optimal word length based on our theoretical results, the AUC is generally higher than that when the word length is far away from the optimal word length based on our theoretical results. Boxplot of the AUC values for different word lengths k. For each k and each tissue, 30 AUC values based on 30 repeated experiments are shown. The subplots show results based on different tissues: a forebrain, b heart, c limb, and d midbrain In this paper, we investigated only the χ 2-statistic for alignment-free genome comparison and the optimality criterion is to maximize the power of the χ 2-statistic under the alternative hypothesis. Many other alignment-free genome comparison statistics are available as reviewed in [4,5]. The optimal word length we derived in this study may not be applicable to other statistics. We assumed that the sequences of interest are Markov chains. Real molecular sequences do not exactly follow Markov chains and the sequences are also highly related. The relationship between the true evolution distance between the sequences and the pairwise χ 2-dissimilarity using the optimal word length needs to be further investigated. These are the topics for future studies. In this paper, we study the optimal word length when comparing two Markov sequences using word count statistics, in particular, the likelihood ratio statistic and the corresponding χ 2-statistic defined in Eq. (4). We showed theoretically and by simulations that the optimal word length is k= max{r 1,r 2}+1. When the orders of the sequences are not known and have to be estimated from the sequence data, we showed that the estimator r p defined in Eq. (10) and the estimator r AIC defined in Eq. (6) have the best performance, followed by r BIC defined in Eq. (7) based on long sequences. We then extended these studies to NGS read data and found that the conclusions about the optimal word length continue to hold. It was also shown that if we use r p defined in Eq. (10) to estimate the orders of the Markov sequences based on NGS reads \(\hat {r}_{p1}\) and \(\hat {r}_{p2}\), respectively, and then compare the sequences using \(S_{\hat {k}-\text {optimal}}\), with \(\hat {k}-\text {optimal} = \max \{ \hat {r}_{p1}, \hat {r}_{p2} \}+1 \), the power loss is minimal. These conclusions are not significantly changed by sequencing errors. Therefore, our studies provide guidelines on the optimal choice of word length for alignment-free genome comparison using the χ 2-statistic. Smith TF, Waterman MS. Identification of common molecular subsequences. J Mol Biol. 1981; 147(1):195–7. Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ, et al. Basic local alignment search tool. J Mol Biol. 1990; 215(3):403–10. Kent WJ. BLAT, the BLAST-like alignment tool. Genome Res. 2002; 12(4):656–64. Song K, Ren J, Reinert G, Deng M, Waterman MS, Sun F. 
New developments of alignment-free sequence comparison: measures, statistics and next-generation sequencing. Brief Bioinform. 2014; 15(3):343–53. Vinga S, Almeida J. Alignment-free sequence comparison–a review. Bioinformatics. 2003; 19(4):513–23. Qi J, Luo H, Hao B. CVTree: a phylogenetic tree reconstruction tool based on whole genomes. Nucleic Acids Res. 2004; 32(Web Server Issue):45. Behnam E, Waterman MS, Smith AD. A geometric interpretation for local alignment-free sequence comparison. J Comput Biol. 2013; 20(7):471–85. Torney DC, Burks C, Davison D, Sirotkin KM. Computation of d2: A measure of sequence dissimilarity. Comput DNA. 1990; 7:109–25. Reinert G, Chew D, Sun FZ, Waterman MS. Alignment-free sequence comparison (I): Statistics and power. J Comput Biol. 2009; 16(12):1615–34. Karlin S, Burge C. Dinucleotide relative abundance extremes: a genomic signature. Trends Genet. 1995; 11(7):283–90. Blaisdell BE. A measure of the similarity of sets of sequences not requiring sequence alignment. Proc Natl Acad Sci USA. 1986; 83(14):5155–9. Blaisdell BE. Markov chain analysis finds a significant influence of neighboring bases on the occurrence of a base in eucaryotic nuclear DNA sequences both protein-coding and noncoding. J Mol Evol. 1985; 21(3):278–88. Sims GE, Jun SR, Wu GA, Kim SH. Alignment-free genome comparison with feature frequency profiles (FFP) and optimal resolutions. Proc Natl Acad Sci USA. 2009; 106(8):2677–82. Forêt S, Kantorovitz MR, Burden CJ. Asymptotic behaviour and optimal word size for exact and approximate word matches between random sequences. BMC Bioinforma. 2006; 7(5):1. Wu TJ, Huang YH, Li LA. Optimal word sizes for dissimilarity measures and estimation of the degree of dissimilarity between DNA sequences. Bioinformatics. 2005; 21(22):4125–32. Pevzner PA, Borodovsky MY, Mironov AA. Linguistics of nucleotide sequences i: the significance of deviations from mean statistical characteristics and prediction of the frequencies of occurrence of words. J Biomol Struct Dyn. 1989; 6(5):1013–26. Hong J. Prediction of oligonucleotide frequencies based upon dinucleotide frequencies obtained from the nearest neighbor analysis. Nucleic Acids Res. 1990; 18(6):1625–8. Arnold J, Cuticchia AJ, Newsome DA, Jennings WW, Ivarie R. Mono-through hexanucleotide composition of the sense strand of yeast DNA: a Markov chain analysis. Nucleic Acids Res. 1988; 16(14):7145–58. Avery PJ. The analysis of intron data and their use in the detection of short signals. J Mol Evol. 1987; 26(4):335–40. Narlikar L, Mehta N, Galande S, Arjunwadkar M. One size does not fit all: On how Markov model order dictates performance of genomic sequence analyses. Nucleic Acids Res. 2013; 41(3):1416–24. Anderson TW, Goodman LA. Statistical inference about Markov chains. Ann Math Stat. 1957; 28(4):89–110. Billingsley P. Statistical methods in Markov chains. Ann Math Stat. 1961; 32(1):12–40. Tong H. Determination of the order of a Markov chain by Akaike's information criterion. J Appl Probab. 1975; 12:488–97. Katz RW. On some criteria for estimating the order of a Markov chain. Technometrics. 1981; 23(3):243–9. Peres Y, Shields P. Two new Markov order estimators. arXiv preprint math/0506080. 2005. Ren J, Song K, Deng M, Reinert G, Cannon CH, Sun F. Inference of Markovian properties of molecular sequences from NGS data and applications to comparative genomics. Bioinformatics. 2016; 32(7):993–1000. Hoel PG. A test for Markov chains. Biometrika. 1954; 41(3/4):430–3. Billingsley P. 
Statistical Inference for Markov Processes, vol 2. Chicago: University of Chicago Press; 1961. Lander ES, Waterman MS. Genomic mapping by fingerprinting random clones: a mathematical analysis. Genomics. 1988; 2(3):231–9. Song K, Ren J, Zhai Z, Liu X, Deng M, Sun F. Alignment-free sequence comparison based on next-generation sequencing reads. J Comput Biol. 2013; 20(2):64–79. Zhang ZD, Rozowsky J, Snyder M, Chang J, Gerstein M. Modeling chip sequencing in silico with applications. PLoS Comput Biol. 2008; 4(8):1000158. Wu T, Hsieh Y, Li L. Statistical measures of DNA sequence dissimilarity under Markov chain models of base composition. Biometrics. 2001; 57:441–8. Hide W, Burke J, Davison D. Biological evaluation of d 2, an algorithm for high performance sequence comparison. J Comput Biol. 1994; 1:199–215. Schwarz G. Estimating the dimension of a model. Annals Stat. 1978; 6:461–4. Wu T, Burke JP, Davison DB. A measure of dna sequence dissimilarity based on mahalanobis distance between frequencies of words. Biometrics. 1997; 53:1431–9. Göke J, Schulz MH, Lasserre J, Vingron M. Estimation of pairwise sequence similarity of mammalian enhancers with word neighbourhood counts. Bioinformatics. 2012; 28(5):656–63. Blow MJ, McCulley DJ, Li Z, Zhang T, Akiyama JA, Holt A, Plajzer-Frick I, Shoukry M, Wright C, Chen F, et al. Chip-seq identification of weakly conserved heart enhancers. Nat Genet. 2010; 42(9):806–10. Visel A, Blow M, Li Z, et al. Chip-seq accurately predicts tissue-specific activity of enhancers. Nature. 2009; 457(7231):854–8. We would like to thank Prof. Minping Qian at Peking University (PKU) for suggestions on the proof of Theorem 1, and Yang Y Lu at USC for providing the software and suggestions to improve the paper. This research is partially supported by US NSF DMS-1518001 and OCE 1136818, Simons Institute for the Theory of Computing at UC Berkeley, and Fudan University, China. The publication costs of this paper were provided by Fudan University, Shanghai, China. Data of the first real data application can be downloaded from [33]. Data of the second real data application can be downloaded from [37]. This article has been published as part of BMC Genomics Volume 18 Supplement 6, 2017: Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2016: genomics. The full contents of the supplement are available online at https://bmcgenomics.biomedcentral.com/articles/supplements/volume-18-supplement-6. Centre for Computational Systems Biology, School of Mathematical Sciences, Fudan University, Shanghai, China Xin Bai , Michael Waterman & Fengzhu Sun Molecular and Computational Biology Program, University of Southern California, Los Angeles, California, USA Kujin Tang , Jie Ren Search for Xin Bai in: Search for Kujin Tang in: Search for Jie Ren in: Search for Michael Waterman in: Search for Fengzhu Sun in: FS and MSW conceived the study, designed the framework of the paper and finalized the manuscript. XB did the simulation studies, proved the theorems, and wrote the manuscript. KT and JR participated in the real data analysis. All authors read and approved the final manuscript. Correspondence to Fengzhu Sun. Supplementary Materials. Proofs of Theorem 1 and 2, simulation results for the comparison of two first order Markov sequences based on NGS reads and estimated orders of positive sequences in four mouse tissues. 
(PDF 274 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Bai, X., Tang, K., Ren, J. et al. Optimal choice of word length when comparing two Markov sequences using a χ 2-statistic. BMC Genomics 18, 732 (2017) doi:10.1186/s12864-017-4020-z Markov chain Alignment-free genome comparison
Short-term effects of single-tree selection cutting on stand structure and tree species composition in Valdivian rainforests of Chile Florian Schnabel ORCID: orcid.org/0000-0001-8452-40011, Pablo J. Donoso2 & Carolin Winter ORCID: orcid.org/0000-0002-4238-68163 New Zealand Journal of Forestry Science volume 47, Article number: 21 (2017) Cite this article The Valdivian temperate rainforest, one of the world's 25 biodiversity hotspots, is under a continued process of degradation through mismanagement. An approach to reverse this situation might be the development of uneven-aged silviculture, combining biodiversity conservation and timber production. We examined the short-term effects of single-tree selection cutting on stand structure and tree species (richness, diversity and composition) in the Llancahue Experimental Forest in south-central Chile to quantify changes in comparison with old-growth rainforests of the evergreen forest type. We compared plots with high and low residual basal areas (60 and 40 m2 ha−1) and a control old-growth forest. Both cutting regimes achieved a balanced structure with reverse-J diameter distribution, continuous forest cover and sufficient small-sized trees. Compared to the old-growth forest, there were no significant changes in tree species richness and diversity. The only shortcomings detected were significant reductions in diameter and height complexity as assessed by the Gini coefficient, Shannon H′ and standard deviation, with a significantly lower number of large-sized trees (dbh 50 cm+, height 23 m+), especially in the low residual basal area regime. We suggest the intentional retention of a certain number of large-sized and emergent trees as strategy for biodiversity conservation. If adjusted accordingly, single-tree selection is a promising approach to retain many old-growth attributes of the Valdivian rainforest in managed stands while providing timber for landowners. The Chilean evergreen rainforest in the Valdivian Rainforest Ecoregion (35–48° S) is a unique, but endangered, ecosystem. It is one of the world's 25 biodiversity hotspots due to its abundance of vascular plant and vertebrate species and high degree of endemism, as well as a conservation priority due to it undergoing exceptional loss of habitat (Myers et al. 2000; Olson and Dinerstein 1998). This loss is caused basically due to illegal logging and inappropriately conducted legal selective cutting (cut the best and leave the worst; sensu Nyland (2002)), which destroy the multi-aged stand structure of these old-growth forests, leading to thousands of hectares of degraded forests (Moorman et al. 2013; Donoso 2013; Schütz et al. 2012; Myers et al. 2000; Olson and Dinerstein 1998). Old-growth forests of the evergreen forest type harbour the highest tree species richness in Chile and consist of a mixture of mostly shade-tolerant and moderately shade-tolerant (hereafter referred to as "mid-tolerant") broadleaved evergreen hardwood species and some conifers of the Podocarpaceae family (Donoso and Donoso 2007). The biodiversity associated with the structural and compositional attributes of these old-growth forests must not only be maintained in reserves (Moorman et al. 2013; Bauhus et al. 2009) but also in managed forests, combining the needs of the local population for forest products with biodiversity conservation (Moorman et al. 2013). 
A promising way to address this is the development of a silviculture regime that: (a) maintains forest attributes that are close to the natural state of old-growth forests; and (b) allows stakeholders to benefit from timber harvesting. In this study, we use the term "old-growthness" to refer to the degree of the retention of old-growth structural and compositional attributes in managed stands following Bauhus et al. (2009). Old-growth forests are defined here through the presence of key structural and compositional attributes including a high number of large trees, a wide range of tree sizes, complex vertical layering, the presence of late successional tree species and large amounts of standing and lying dead wood among others (Bauhus et al. 2009; Mosseler et al. 2003). The maintenance of these attributes in managed stands is essential for sustaining forest biodiversity as has been illustrated, for example, in boreal ecosystems (Bauhus et al. 2009 and citations within). The rationale is that since natural forest ecosystems and their dynamics are able to sustain the whole range of forest-dwelling species and forest functions, silviculture that mimics natural dynamics should be a good approach for sustaining forest biodiversity (Schütz et al. 2012). Currently in Chile, the application of single-tree selection cutting is believed to be the most promising and adequate approach for uneven-aged forests (Donoso 2013; Schütz et al. 2012; Donoso et al. 2009; Siebert 1998). Chilean native evergreen forests in south-central Chile are dominated by several commercially valuable hardwood shade-tolerant or mid-tolerant species, a major requisite to work with selection silviculture. Moreover, it has been shown that some of these species have much faster diameter growth rates under lower levels of basal area than those found in dense unmanaged old-growth forests (Donoso 2002; Donoso et al. 2009). Due to rare implementation, however, it remains unknown how the forest ecosystem is influenced through this type of silviculture and which would be the economic benefits in Chile, although preliminary estimates of timber revenues are positive (Nahuelhual et al. 2007). Nonetheless, there is abundant evidence for other forests that selection systems can maintain a high forest cover, complex vertical layering and balanced/regulated structures while providing income through timber sales at regular intervals on a sustainable basis (e.g. O'Hara 2014; Schütz et al. 2012; Pukkala and Gadow 2012; Gronewold et al. 2010; O'Hara et al. 2007; Keeton 2006; Bagnaresi 2002). For example, forests in the European Alps can harbour high structural and vegetation diversity even after several centuries of uneven-aged management (Bagnaresi 2002). The possible lack of old-growth attributes like large-sized trees can, however, be a concern (Bauhus et al. 2009 and citations within). Another general concern regarding selection silviculture is that through their evenly distributed small-scale disturbances, single-tree selection cutting might favour the development of shade-tolerant species at the expense of mid-tolerant ones, creating an abundant but homogenous regeneration and relatively low horizontal heterogeneity (Angers et al. 2005; Doyon et al. 2005). These considerations should be addressed before a new silvicultural scheme is applied at a large scale to avoid unwanted side effects. 
In the present work, our aim was, therefore, to evaluate the impacts of single-tree selection cutting with two different residual basal areas, upon structural and compositional attributes of old-growth temperate rainforests of the evergreen forest type. We were interested in finding management approaches that could avoid negative impacts on old-growth attributes and associated biodiversity at the stand scale. The objectives were to: (a) quantify the type and magnitude of structural and compositional changes induced through single-tree selection cutting with high residual basal areas (HRBA; 60 m2 ha−1) and low residual basal areas (LRBA; 40 m2 ha−1); and (b) identify key structural and compositional attributes of old-growthness that were affected through single-tree selection cutting with HRBA and LRBA. Unmanaged and well-conserved forests of the evergreen forest type in Chile reach 80–100 m2 ha−1 in basal area and support regeneration of almost exclusively shade-tolerant species (Donoso and Nyland 2005). The rationale for these two levels of residual basal areas was, therefore, that single-tree selection with LRBA would create relatively more light availability and was expected to favour the development of both ecologically and economically important mid-tolerant tree species (Donoso 2013). However, there might be trade-offs in terms of greater structural and compositional changes at LRBA compared with HRBA. Study area and experimental design The study was conducted in the Llancahue watershed (39° 50′ 20″ south and 73° 07′ 18″ west) in the intermediate depression of south-central Chile, a 1270-ha state-owned reserve that is administered by the University Austral de Chile (UACh) (Fig. 1). Study area showing the location of old-growth control (n = 4), high residual basal area (HRBA, n = 4) and low residual basal area (LRBA, n = 4) plots The low-elevation forest of the study area corresponds to the evergreen forest type, more specifically to the subtype dominated by shade tolerant species with few emergent Nothofagus trees, according to the official classification in Chile, and is part of the Valdivian Rainforest Ecoregion (Donoso and Donoso 2007). Llancahue lies between 50 and 410 m a.s.l., receives 2100 mm average annual rainfall and has an average annual temperature of 12.2 °C (Oyarzún et al. 2005; Fuenzalida 1971). Stands dominated by the shade-tolerant species Aextoxicon punctatum R. et Pav. and Laureliopsis philippiana (Looser) Schodde and the mid-tolerant species Eucryphia cordifolia Cav. and Drimys winteri J.R. et G. Forster were chosen. All stands had an uneven-aged structure and basal areas characteristic for this forest type. In the intermediate depression of south-central Chile, nearly all remnant old-growth forests show signs of illegal selective cuttings, especially since the twentieth century (Donoso and Lara 1995) and at low elevations. Signs include large stumps of few valuable species and increased cover of Chusquea spp., especially at low residual densities. This is also the case for stands selected in this study, which show signs of past harvests of a few large trees over the last three decades. The experimental design consisted of eight plots 2000 m2 (50 × 40 m) each, which were subjected to single-tree selection cutting in 2012 and were re-evaluated two growing seasons afterwards in 2014. Four plots were cut to achieve a residual basal area of 60 m2 ha−1 and four plots to 40 m2 ha−1, called high and low residual basal area (HRBA and LRBA), respectively (Table 1). 
The BDq method proposed by Guldin (1991) with a maximum diameter of 80 cm and a q factor (the difference between successive diameter classes) of 1.3 in average for a balanced diameter distribution was used based on recommendations in Schütz et al. (2012). Since this was the first time the stands were cut following a selection system, only half of the trees above the maximum diameter were cut to avoid a severe change and potential damage to residual trees. The main target species of selection silviculture are A. punctatum, L. philippiana, D. winteri, E. cordifolia and Podocarpaceae conifers if the expected product is timber and E. cordifolia if the objective of the harvest is firewood. For this first harvest, the rule "cut the worst, leave the best" was applied to enhance the quality and growth of the residual stock by preferentially harvesting defective and unhealthy trees. This approach contrasts with current selective cuttings that are used under the Chilean law, which do not control for residual stand structure, allow the harvest of 35% of the basal area per hectare in 5-year cutting cycles and preferably cut the most valuable trees instead of the worst (Donoso 2013; Schütz et al. 2012). Table 1 Basal area (m2 ha−1) per treatment and plot before (2012) and after the harvesting (2014) Four permanent plots 900 m2 (30 × 30 m) each that showed only minimal signs of past illegal cuttings were used as control. Although smaller than the cut plots, plot sampling sizes for temperate old-growth forests have been traditionally considered adequate with at least 500 m2 in Chile (Prodan et al. 1997) and elsewhere (Lombardi et al. 2015), so the plots used in this study provide a reliable sampling of the variables tested. Moreover, different plot sizes were addressed through choosing analysis methods that allow for unbiased testing of different sample sizes (see below). In Chile, the cutting intensity for the evergreen forest type is restricted to an average maximum of 35% of the original basal area (Donoso 2013). To achieve two levels of residual basal areas (HRBA and LRBA), while at the same time complying with the legal restrictions, we had to choose plots with the lowest initial basal areas for LRBA (average 34% of the basal area cut) and those with the largest basal areas for HRBA (average 24% of the basal area cut) (Table 1). Final average residual basal areas were 58.2 m2 ha−1 for HRBA plots and 41.2 m2 ha−1 for LRBA plots (Table 1). The plot where the least trees were cut was number S6 (10.5%), and the one with the most trees cut was S7 (41.6%). Apart from these extremes, plots had a cutting intensity that ranged between 17 and 37% of the original basal area. We acknowledge differences in the original basal areas of the three groups of plots selected for this study (old-growth, HRBA and LRBA). However, to reach the expected residual basal areas proposed by Donoso (2002) for uneven-aged silviculture in Chilean forests, within legal restrictions, we had to choose these partially cut stands that are common in the landscape. From there, rather than from pristine old-growth forests, we sought to find out how selection stands do, or do not, maintain old-growth attributes. Sampling design and data collection Three parameters for quantifying structural and compositional attributes were used in this study that have been largely and successfully applied in other ecosystems (e.g. Gadow et al. 2012; Lexerød and Eid 2006; McElhinny et al. 
2005): (a) diameter at breast height (dbh) measured at 1.3 m; (b) tree height; and (c) tree species. All trees with a dbh ≥ 5 cm were recorded by species and diameter for the eight plots before cutting (2012) and were re-evaluated two growing seasons after harvesting (2014). The four control plots were measured once in 2014. Tree height was included as an additional and more direct measurement of vertical complexity only in 2014. Tree height was measured for all trees with dbh ≥ 10 cm with a Vertex III hypsometer. To quantify tree size complexity, three diversity indices were used to analyse the diameter and height data of the trees: (a) standard deviation; (b) Gini coefficient (Lexerød and Eid 2006; Gini 1912); and (c) ln-based Shannon index (H′) (Lexerød and Eid 2006; Shannon 1948). Standard deviation has been widely used as a way to calculate diameter and height complexity and can be compared with more complex indices for stand structural comparisons (McElhinny et al. 2005 and citations within). The Gini coefficient has also been used successfully to describe structural changes. For example, Lexerød and Eid (2006) found that the Gini coefficient was superior in discriminating between stands and was considered to have a very low sensitivity to sample size in a comparison of eight diameter diversity indices. It is calculated with the following equation: $$ \mathrm{GC}=\frac{\sum_{j=1}^{n}\left(2j-n-1\right)\,\mathrm{ba}_j}{\sum_{j=1}^{n}\mathrm{ba}_j\left(n-1\right)} $$ where $\mathrm{ba}_j$ stands for the basal area of tree $j$ (m2 ha−1), trees being ranked in ascending order of basal area, and $n$ for the number of trees. Finally, the Shannon index is a widely used measure of tree size complexity for diameter distributions, which allows a direct comparison of different distributions through one single value (e.g. Lexerød and Eid 2006; McElhinny et al. 2005; Wikström and Eriksson 2000). It is calculated with the following equation: $$ H'=-\sum_{i=1}^{S}P_i \ln \left(P_i\right) $$ where $P_i$ stands for the proportion of the number of trees in size class i or per species i and S, for the number of dbh classes or species. An important quality of the Shannon index and the Gini coefficient is their independence of stand density as proven for example by Lexerød and Eid (2006). These indices were used: (a) due to their abilities documented in the literature (especially independence of sample size); and (b) to have a more robust result than using only one index. The standard deviation and the Gini coefficient were calculated from original individual tree data while the Shannon index was calculated on the basis of 5-cm diameter classes as suggested by Lexerød and Eid (2006). All three indices have been used similarly to describe diameter complexity as well as height complexity (Lexerød and Eid 2006). The values of the Gini coefficient range from (0, 1), with 1 standing for total inequality, while the Shannon index values range from (0, ln(S)) (Lexerød and Eid 2006). The standard deviation ranges from [0, ∞). For all three indices, a higher index value reflects a wider range of tree diameters and heights and consequently greater complexity (Lexerød and Eid 2006). Index values were calculated for each plot and then compared between treatments to quantify the changes in structural complexity after management. 
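For readers who want to reproduce these three structural indices, a minimal sketch is given below. It is written in Python, not in the R scripts actually used for the paper; the function names and the example dbh values are hypothetical. The Gini coefficient follows the equation above with trees ranked in ascending order of basal area, the Shannon index H′ is computed on 5-cm diameter classes as described, and the standard deviation is the usual sample value.

```python
import math

def gini_coefficient(basal_areas):
    """Gini coefficient of tree basal areas, trees ranked in ascending order."""
    ba = sorted(basal_areas)
    n = len(ba)
    num = sum((2 * j - n - 1) * b for j, b in enumerate(ba, start=1))
    den = sum(ba) * (n - 1)
    return num / den

def shannon_index(values, class_width=5.0):
    """ln-based Shannon index H' computed over 5-cm size classes."""
    counts = {}
    for v in values:
        k = int(v // class_width)            # 5-cm class of each tree
        counts[k] = counts.get(k, 0) + 1
    n = len(values)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def standard_deviation(values):
    """Sample standard deviation of tree sizes."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))

# Hypothetical dbh list (cm) for one plot; tree basal area = pi/4 * (dbh/100)^2 in m^2
dbh = [6.2, 8.5, 11.0, 14.3, 22.7, 35.1, 48.9, 63.0, 81.5]
ba = [math.pi / 4 * (d / 100) ** 2 for d in dbh]
print(gini_coefficient(ba), shannon_index(dbh), standard_deviation(dbh))
```

The same three functions can be applied unchanged to the height data, which is how the diameter and height complexity values compared later in the paper are obtained.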
To quantify a potential loss of large and/or emergent trees in more detail than with the complexity indices, trees were grouped in five diameter classes and five height strata based on diameter and height ranges known for these forests (Table 2). Table 2 Diameter (dbh) classes and height strata used to compare plots in this study Tree species richness was assessed using rarefaction, a statistical method to repeatedly re-sample richness out of a random pool of samples constructed out of the field data (e.g. different plots). This allowed an unbiased comparison of richness among different plot sizes (Kindt and Coe 2005). The rarefaction curve represents the average richness of a treatment for a given sampled area. Species diversity and evenness per treatment were calculated for each plot using the ln-based Shannon index (H′) (Eq. 2) and Pielou's evenness index (J′) as proposed by several authors (e.g. Alberdi et al. 2010; Kern et al. 2006). The value of J′ was calculated as H′/ln(S) where H′ is the Shannon diversity index and S, the species richness. Species diversity indices are dependent on sample size (Kindt and Coe 2005) so they were calculated only for the plots with selection cutting. This was done to avoid a biased comparison of species diversity due to the different sample sizes between treated and untreated plots. To assess changes in species composition in more detail, the number of trees per species was calculated for each plot and then compared between treatments. Furthermore, the importance value (IV; Eq. 3) of each species was calculated as the mean of its relative density (RD), relative dominance (Rd) and relative frequency (RF), where density (D) is the number of individuals per hectare, dominance (d) is the basal area (BA) of each species per hectare and frequency (F) is the number of plots where a species is present divided by the total number of plots (de Iongh Arbainsyah et al. 2014; Mueller-Dombois and Ellenberg 1974). $$ \mathrm{IV}\ \left(\mathrm{Importance\ Value}\right)=\left(\mathrm{RD}+\mathrm{Rd}+\mathrm{RF}\right)/3 $$ Index data as well as the number of trees in the diameter and height classes were compared between the different years of observation (pre/post harvesting) and among treatments (control/HRBA/LRBA). Index data were analysed using linear mixed models (LMM) and generalised least square models (GLS). For the number of trees, generalised linear mixed models (GLMM) and generalised linear models (GLM) with Poisson distribution were used, since the data correspond to counts of individuals. Overdispersion was tested and, if found, a quasi-GLM model was used with a variance of φ × μ, with φ as dispersion parameter and μ as mean, as suggested by Zuur et al. (2009). The effect of harvesting was determined using year and treatment as fixed effects. The difference between years was analysed through comparing the treatments with selection cutting, without incorporating the control. For this analysis, plots were incorporated as a random effect since repeated measurements were used in this study, thus using LMM or GLMM for this analysis. The differences among the three treatments were analysed before and after the harvesting by GLS and GLM using management as fixed effect. The assumptions of normality and homogeneity of variance were tested through examining the model residuals and the Shapiro-Wilk test for normality. 
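As a brief aside on the species-composition measures just defined, the sketch below shows how Shannon diversity, Pielou's evenness and the importance value of Eq. 3 can be computed from per-plot species tallies. It is written in Python rather than the R packages used in the study, the data structures and example numbers are hypothetical, and the relative values are expressed as percentages of the totals over all species, which is the usual Mueller-Dombois and Ellenberg (1974) convention assumed here.

```python
import math
from collections import defaultdict

def shannon_diversity(counts):
    """ln-based Shannon H' from a {species: number of trees} mapping (Eq. 2)."""
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values() if c > 0)

def pielou_evenness(counts):
    """Pielou's J' = H'/ln(S), with S the species richness."""
    s = sum(1 for c in counts.values() if c > 0)
    return shannon_diversity(counts) / math.log(s)

def importance_values(plots):
    """IV = (RD + Rd + RF)/3 per species (Eq. 3).

    Each plot is a dict {species: (trees per ha, basal area in m2 per ha)}."""
    density = defaultdict(float)    # D: individuals per hectare
    dominance = defaultdict(float)  # d: basal area per hectare
    frequency = defaultdict(int)    # number of plots in which the species occurs
    for plot in plots:
        for sp, (n_ha, ba_ha) in plot.items():
            density[sp] += n_ha
            dominance[sp] += ba_ha
            frequency[sp] += 1
    n_plots = len(plots)
    tot_d = sum(density.values())
    tot_ba = sum(dominance.values())
    tot_f = sum(f / n_plots for f in frequency.values())
    iv = {}
    for sp in density:
        rd = 100 * density[sp] / tot_d                 # relative density
        rdom = 100 * dominance[sp] / tot_ba            # relative dominance
        rf = 100 * (frequency[sp] / n_plots) / tot_f   # relative frequency
        iv[sp] = (rd + rdom + rf) / 3
    return iv

# Hypothetical example with species named in the paper
counts = {"A. punctatum": 120, "E. cordifolia": 25, "D. winteri": 8}
plots = [{"A. punctatum": (120, 18.0), "E. cordifolia": (25, 22.0)},
         {"A. punctatum": (95, 15.0), "L. philippiana": (60, 9.0)}]
print(shannon_diversity(counts), pielou_evenness(counts))
print(importance_values(plots))
```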
If heterogeneity of variance was found, the variance function (varIdent) was used to model heteroscedasticity to avoid transformations. Models with and without variance functions were compared through the information criteria AIC and the model presenting the lowest AIC was chosen. All statistical analysis was conducted using R 3.1 (R Core Team 2014), the R packages nlme (Pinheiro et al. 2014), vegan (Oksanen et al. 2014), BiodiversityR (Kindt and Coe 2005) and AER (Kleiber and Zeileis 2008) as well as the software InfoStat (DiRienzo et al. 2011). Diameter distribution The control plots showed a reverse-J diameter distribution with a slight trend to a rotated-sigmoid distribution, due to a relatively high number of large trees between 50 and 100 cm dbh (Fig. 2). The HRBA and LRBA plots showed a reverse-J diameter distribution before and after the application of single-tree selection cutting (Fig. 2). Observed diameter distributions of a the old-growth control, b high residual basal area (HRBA) and c low residual basal area (LRBA). The histograms represent the average number of trees per hectare per treatment (four plots each) before and after harvesting In comparison to the control plots, treated plots had around twice as many young trees between 5 and 15 cm dbh (Fig. 2) before and after harvesting. For emergent trees (100 + cm dbh), a clear trend existed, with most trees in this diameter class in the control plots (max. diameter 160 cm) and few in the LRBA plots (max. diameter 105 cm). The LRBA plots had already fewer emergent trees before harvesting (plots had been slightly subjected to "selective" cuts in the past as mentioned above) but the difference became more pronounced after single-tree selection cutting. Additionally, selection cutting reduced the number of large trees (50–100 cm dbh), especially in the LRBA plots (Fig. 2). Structural complexity indices Harvesting significantly reduced diameter complexity as assessed by the Gini coefficient (p = 0.0012), the Shannon index (p = 0.0039) and the standard deviation (p = 0.0002) (Table 3). Moreover, all three indices provided a logical and consistent ranking of diameter complexity, with the highest index values in the control, followed by the HRBA and then by the LRBA plots after harvesting (Table 3). Table 3 Mean values for the Gini coefficient, the Shannon index and the standard deviation for diameter at breast height (dbh) and height data and for the Shannon index and the evenness index for species diversity before and after harvesting. Values are expressed as mean ± 1 standard deviation. Significant treatment differences with the control are shown with asterisks after the index value, with *p = 0.05–0.01, **p = 0.01–0.001 and ***p = <0.001, respectively. Significant harvesting impacts are mentioned in the text The Gini coefficient for diameter complexity was not significantly different between control and treated plots, while the Shannon index had already significantly higher values in the control plots before harvesting (see asterisks, Table 3). The significance of this difference became more pronounced for both HRBA and LRBA plots through harvesting (Table 3). The standard deviation showed the same pattern as the Shannon index (Table 3). Before harvesting, control plots had already a significantly higher complexity than the treated plots, but these differences were only slightly significant (Table 3). After harvesting, these differences became highly significant for both HRBA and LRBA plots. 
The Gini coefficient of height complexity showed no significant difference between the untreated and treated plots (Table 3). The Shannon index showed a significantly higher height complexity after harvesting in the unmanaged plots than in the managed ones. The same was observed for the standard deviation, which was significantly higher in the control plots. No clear differences in height complexity existed between the HRBA and LRBA plots. Diameter classes and height strata Small-sized trees, small diameter classes (very small and small) and low height strata (low understorey and upper understorey) were more abundant in the treated plots than in the control plots before harvesting (Table 4). In regard to diameter classes, harvesting reduced the number of small-sized trees only marginally, leaving an abundant residual growing stock. On the contrary, intermediate- to very large-sized trees (dbh 25 cm+, height 23 m+) were strongly influenced by harvesting. The density of trees with an intermediate diameter was already significantly higher in the control compared to the LRBA plots before harvesting (see asterisks, Table 4). Harvesting significantly (p = 0.0011) reduced their number, leading to a significantly higher number of trees in the intermediate diameter class in the control compared to selection plots, with a more pronounced difference for the LRBA plots after harvesting (Table 4). Table 4 Mean number of trees per hectare per diameter class (trees dbh ≥ 5 cm) and height strata (trees dbh ≥ 10 cm) before and after harvesting. Diameter classes and height strata are explained in Table 3. Values are expressed as mean ± 1 standard deviation. Significant treatment differences with the control are shown through an asterisk after the mean number of trees, with *p = 0.05–0.01, **p = 0.01–0.001 and ***p ≤ 0.001, respectively. Significant harvesting impacts are mentioned in the text The number of large diameter trees was nearly the same for control, HRBA and LRBA plots before harvesting (Table 4). Harvesting significantly reduced their density (p < 0.0001), resulting in a significantly higher number of large diameter trees in the control compared to the LRBA but not to HRBA plots (Table 4). The number of very large diameter trees was highest in the control already before harvesting, but without significant differences due to the high variability between plots (Table 4). Harvesting significantly reduced their number (p < 0.0001), resulting in a significantly higher tree density of these trees in the control compared to LRBA plots (Table 4). No significant differences could be found between control and HRBA plots (Table 4). The HRBA plots had a higher number of intermediate to very large diameter trees than their LRBA counterparts (Table 4). In regard to height classes, no significant difference was found for the number of low canopy trees, while upper canopy trees were significantly more abundant in the control than in the treated plots (Table 4). The difference between untreated and treated plots was even more pronounced for emergent trees with significantly more emergent trees in the control (72 trees ha−1) compared to HRBA (19 trees ha−1) and LRBA (10 trees ha−1) plots after harvesting (Table 4). Tree species richness and diversity Tree species richness was not changed as a result of harvesting. Thus, only the results after harvesting are presented here. The confidence intervals of the three rarefaction curves overlap at the total sampled area of the control plots (Fig. 
3), reflecting that tree species richness was not significantly different between unmanaged and managed plots. Species rarefaction curves showing the mean tree species richness per sampled area for the control, high and low residual basal area (HRBA and LRBA) plots after harvesting. The rarefaction curves were calculated through repeatedly re-sampling richness out of a random pool of samples constructed out of the four sampled plots Tree species diversity, as evaluated through the Shannon index, did not significantly change after harvesting (comp. Table 3). Also, no significant differences in tree species diversity existed between treated and untreated plots (comp. Table 3). The same was observed for tree species evenness which was not significantly changed through harvesting and was not different between treated and untreated plots (comp. Table 3). Tree species composition In general, treated and untreated plots showed a similar species composition regarding dominant tree species, as evaluated through the average number of trees per species and the importance value (IV) of each species before harvesting (Table 5). The shade-tolerant species A. punctatum and Myrceugenia planipes (H et A.) Berg, however, had far higher IVs in the control plots compared with the treated ones (Table 5). The application of the single-tree selection cutting regime applied in this study had the strongest effect on E. cordifolia. The IV of this species decreased by 21%, and 20% of individuals were removed as a result of the HRBA treatment (Table 5). The LRBA treatment had a more severe effect with the IV decreasing by 27%, and 32% of individuals being removed (Table 5). The decrease in IV of E. cordifolia through management led to an increase of the IV of most other species (Table 5). The only other species that experienced a clear decline in the number of trees through harvesting was L. philippiana, but the strong loss of E. cordifolia still led to a rise in its IV. Except for E. cordifolia, LRBA management did not induce stronger changes in the tree species community as compared with HRBA management (Table 5). The selection cutting regime led only to marginal changes in the number of trees of all less frequent species, and no species were lost through harvesting (Table 5). Moreover, the extraction of dominant species led to a rise of IV of several less frequent species in the forest community (Table 5). Table 5 Average number of trees per hectare and importance value (IV) in % per treatment and species in 2012 and 2014 Key structural attributes and biodiversity conservation We examined changes in key structural attributes such as reverse-J diameter distributions, complex vertical layering, variability of tree sizes, presence of advance regeneration and large/emergent trees as measures of old-growthness (Bauhus et al. 2009). Both residual basal area regimes evaluated in this study on single-tree selection cutting were found to maintain a balanced uneven-aged structure, forest cover continuity and a sufficient growing stock of small-sized trees. All plots, managed and unmanaged, were characterised by reverse-J shaped diameter distributions before and after harvesting. These findings are in accordance with studies in other forest types, where selection cutting maintained these structural forest attributes over decades while providing timber yields at regular intervals (e.g. Pukkala and Gadow 2012; Gronewold et al. 2010; O'Hara et al. 2007; Keeton 2006; Bagnaresi 2002). 
The observed reverse-J shaped distributions are typical for old-growth stands in the Valdivian Costal Range and the Valdivian Andes (Donoso 2013; Donoso 2005), showing that single-tree selection cutting is able to maintain this old-growth attribute. Still, results of this study cover only the short-term impacts of selection cutting (2 years), and there may be lag effects with sensitive species. However, the growth model predictions of Rüger et al. (2007), which were parameterised for the evergreen forest type on Chiloé Island, Chile, suggest that single-tree selection cutting can maintain the above-mentioned forest attributes also on the long term. Moreover, balanced structures with similar crown covers for small-, intermediate- and large-sized trees allow more abundant regeneration and tree growth in the evergreen forest type in Chile than unbalanced ones (Schütz et al. 2012; Donoso and Nyland 2005; Donoso 2005). Further considerations should be given to the fact that plots with a high residual basal area (HRBA) tend to approximate old-growth conditions more closely through maintaining higher numbers of large-sized trees and higher diameter complexity than plots with a low residual basal area (LRBA) (Tables 3 and 4). Similarly, Gronewold et al. (2010) concluded that (after a survey of 57 years in northern hardwood stands of North America managed through single-tree selection cutting) stands with high residual basal areas better approximated the natural disturbance history and diameter distributions of unmanaged uneven-aged stands, while low residual basal areas resulted in simpler and more regulated distributions. The higher number of small trees (advanced regeneration) already present before cutting compared to the control plots in our study most likely results from higher light availability, especially in the LRBA plots caused by previous illegal cuttings (as mentioned before). The only shortcoming detected was the significant reduction and lower numbers of large-sized (dbh 50 cm+, height 23 m+) and emergent trees (height 30 m+) in the treated plots (especially in LRBA ones), compared to the control plots after harvesting. This finding was supported by the significant reduction of diameter and height complexity (shown by all three indices) and significantly higher diameter and height complexity in the untreated plots (shown by the Shannon index and standard deviation) as a result of a reduction in the range of tree diameters and heights. All three structural indices have been widely used to quantify diameter and height complexity in managed and unmanaged forest stands and to a lesser extent to evaluate the impacts of single-tree selection cutting (Torras et al. 2012; O'Hara et al. 2007; Lexerød and Eid 2006; McElhinny et al. 2005; Acker et al. 1998). Similar to results of this study, Acker et al. (1998) reported higher values of the standard deviation of tree diameter in old-growth northern hardwood stands of North America compared to managed ones, but there are also studies that report a rise of diameter and height complexity under selection silviculture over time using the same three indices (Torras et al. 2012; O'Hara et al. 2007). In particular, very large and emergent trees were far less numerous in the managed plots. Similarly, numerous other studies have found that stands managed through selection cutting have less large trees than comparable old-growth stands (e.g. Torras and Saura 2008; Rüger et al. 2007; Keeton 2006; Angers et al. 2005; Crow et al. 2002). 
Furthermore, stands with lower residual basal areas were found to present less large trees than stands with higher residual basal areas (Gronewold et al. 2010; Rüger et al. 2007). This is partly consistent with the findings of this study where only LRBA plots presented significantly lower numbers of large and very large diameter trees than the control plots. Large-sized and emergent trees are, however, an important habitat for many forest dwelling species and communities that depend on this specific structural attribute of old-growth forests (Bauhus et al. 2009), such as cavity-dependent animal species like birds and mammals as well as bryophytes, lichens, fungi, and saproxylic beetles (Paillet et al. 2010; Bauhus et al. 2009). One specific example in the Chilean evergreen rainforest is the abundant flora of endemic epiphytic plants that depend on emergent trees (Díaz et al. 2010) and their greater frequency on trees with large diameters (Muñoz et al. 2003). Furthermore, bird species diversity in the evergreen forest type can be predicted by the presence of emergent trees, and their diversity is consequently higher in old-growth than in early or mid-successional forest stands (Díaz et al. 2005). It follows that a certain number of large-sized, especially emergent trees, in managed stands is crucial for biodiversity conservation. Although we do not deal with dead wood (i.e. snags and coarse woody debris) in this paper, preliminary results suggest that plots subjected to single-tree selection cutting have similar or even higher amounts of this key structural attribute (sensu Bauhus et al. 2009) compared to unmanaged old-growth forests (Schnabel et al., unpublished). Impacts on tree species richness, diversity and composition An important attribute for old-growthness is the high number of late successional tree species (Bauhus et al. 2009), i.e. shade-tolerant and emergent mid-tolerant ones in the evergreen forest type. Tree species richness, diversity and evenness were not changed through selection cutting in the short-term, which is in accordance with findings in northern hardwood forest in North America (Angers et al. 2005; Crow et al. 2002). It is therefore reasonable to conclude that the direct harvesting effects of single-tree selection (e.g. tree felling), as conducted in the present study, are compatible with preserving tree species richness and diversity within the evergreen forest type in the short term. If management guidelines such as those applied in this study are used, the same should also apply for future applications. The numbers of less frequently occurring tree species were not reduced through selection cutting; a fact that further supports the conclusion that single-tree selection cutting is compatible with preserving tree species diversity. While most dominant tree species experienced no severe losses through selection cutting, a clear impact was noted for the mid-tolerant species E. cordifolia. It clearly declined in abundance and IV, although it remained the second species in IV. The reason for this was the preferential harvest of old/large, poor-quality E. cordifolia trees, to improve the quality of the residual stock in the first harvest. Moreover, E. cordifolia trees were mostly large individuals, since regeneration for this mid-tolerant species is generally scarce under closed forests (Escobar et al. 2006; Donoso and Nyland 2005) and was thus strongly impacted by the harvesting criteria of a maximum residual diameter of 80 cm. 
In future harvests, impacts are likely to be more equally distributed, as most of the defective E. cordifolia trees would have already been harvested after this first selection cut. Also, E. cordifolia is one of the target tree species of selection silviculture due to its high economic value and expected fast growth and abundant regeneration at low residual basal areas (especially at 40 m2 ha−1). In addition, retaining some emergent E. cordifolia trees is a conservation priority as this species harbours an exceptionally high diversity and abundance of epiphytes, acting as a key structure for biodiversity conservation and ecosystem processes like water and nutrient cycling (Díaz et al. 2010). Finally, single-tree selection might favour both the recruitment of mid-tolerant species like E. cordifolia (Torras and Saura 2008; Angers et al. 2005) and/or shade-tolerant species (Keyser and Loftis 2013; Gronewold et al. 2010; Rüger et al. 2007) depending on the size of crown opening and consequent light availability (i.e. LRBA should induce more regeneration of mid-tolerant tree species like E. cordifolia). As the effects on tree regeneration in the evergreen forest type remain unknown in the field, different harvesting intensities might currently be the best option to promote the regeneration of both mid- and shade-tolerant tree species. Implications for management Overall, our results support the claim that single-tree selection cutting is a promising silvicultural approach for the evergreen forest type. This approach is certainly more promising than the currently supported selective harvesting guidelines of the Chilean law, which do not control for a balanced residual stock and allow the harvest of 35% of the basal area in 5-year cutting cycles, which is unsustainable (Donoso 2013; Schütz et al. 2012). In contrast, the only negative effect detected for the single-tree selection cutting regime applied in the present study was the clearly lower number of large-sized and emergent trees in managed plots, a key structural attribute of old-growth forests and crucial for biodiversity conservation. The management strategy of single-tree selection cutting would need to be adjusted by forest managers who wish to preserve some emergent and large-sized trees in stands managed through selection silviculture. The use of a maximum residual diameter, such as 80 cm in this study, actually impairs the preservation of large-sized trees (Keeton 2006). One possibility in this context would be the intentional retention of a still-to-be-specified number of large (>80 cm diameter) trees, especially emergent ones (e.g. Bauhus et al. 2009; Angers et al. 2005). We did maintain some large trees in the managed stands in this study because otherwise basal area harvesting would have been too destructive, but our results suggest that leaving trees above a given maximum diameter must be an ongoing requirement. An additional possibility might be the use of diameter-guiding curves other than the reverse-J distribution curve used in the current study. 
For example, a rotated sigmoid distribution curve may satisfy ecological needs more closely through allocating more basal area and growing space to larger diameter classes (Keeton 2006). The HRBA plots tended to better approximate old-growth conditions than LRBA plots, in terms of higher numbers of large-sized trees and higher structural complexity. However, it remains untested in the field as to which residual basal area selection cutting generates sufficient light availability to allow the regeneration of both shade-tolerant and mid-tolerant species in the evergreen forest type. Thus, using a combination of the two residual basal area regimes examined here (HRBA with 60 m2 ha−1 and LRBA with 40 m2 ha−1) might be an advisable option, which would contribute to more diverse and species-rich stands and additionally to the generation of a more heterogeneous forest structure on a broader scale, e.g. Angers et al. (2005). Little is known about the required quantity and spatial distribution of retained emergent trees, which would be necessary to develop sound ecological management guidelines, like retention targets for biodiversity conservation (Bauhus et al. 2009; McElhinny et al. 2005). Due to a lack of information on this topic in Chile and in other ecosystems (e.g. Bauhus et al. 2009; McElhinny et al. 2005), this is a major challenge for uneven-aged silviculture, especially in forests of high diversity and endemism like the evergreen temperate forests of south-central Chile. A final concern in Chile (and elsewhere) is that Chusquea bamboos in the understorey may be a threat for regeneration (Donoso and Nyland 2005). These are usually light-demanding species, so the creation of canopy openings above a certain size following selection cuts, especially if using group selection, could promote Chusquea spp. regeneration. This poses an important research challenge for selection silviculture in Chilean forests, i.e. determining adequate densities (for example expressed in basal area) that would maintain low levels of Chusquea spp. cover while allowing the forest stand to sustain good growth rates. Donoso (2002) studied uneven-aged forest with basal areas from 38 to 140 m2 ha−1 in the lowlands of south-central Chile, and Chusquea spp. had levels of cover that ranged from 3 to 12%. This result suggests that managed forest stands with residual basal areas as low as 40 m2 ha−1 should not have major competition from Chusquea spp. upon tree regeneration. From a management perspective, a great advantage of selection silviculture is the production of large logs for saw timber or veneer, products of high commercial value, while in the same time, logs of smaller dimensions are harvested that can be used for firewood or charcoal production (Moorman et al. 2013; Puettmann et al. 2015). Siebert (1998), Donoso (2002) and Donoso et al. (2009) have proposed target maximum diameters of 60–90 cm (80 cm in this study), which should generate high-value products. Operationally, harvesting requires skilled workers and marked stands after determining adequate marking guides according to the BDq or a similar technique. In addition, Donoso (2002) proposed 10-year cutting cycles for evergreen forests on productive low-elevation sites. Single-tree selection would thus especially offer landholders with small properties a variety of wood products at regular intervals (Puettmann et al. 2015). 
Overall, major considerations to better conserve structural features and biodiversity of old-growth forests in managed stands, while also achieving good rates of timber productivity, could include: (a) retaining a certain number of large-sized, especially emergent trees; (b) using a diverse but relatively narrow range of residual basal areas that may support good development of relatively fast-growing and valuable mid-tolerant tree species associated to shade-tolerant ones; and (c) applying diameter distributions that allow for a greater allocation of basal area in relatively large trees. These considerations for stand variability in managed forests should be included in forest regulations, which should adapt to new knowledge generated through research. Considering that mostly, we did not cut beyond 35% harvested basal area, the maximum established in Chilean regulations, research in selection silviculture should also evaluate an ample range of harvesting intensities using a relatively ample range of initial and residual basal areas. This would allow a more robust information on thresholds to conserve in the best possible manner "old-growthness" (sensu Bauhus et al. (2009)) in managed forest ecosystems. We examined changes in forest structure and tree species composition as well as possible detrimental effects on key attributes of old-growthness in stands managed through single-tree selection cutting. Through both harvest variants, high and low residual basal areas (HRBA and LRBA), a balanced, uneven-aged structure with reverse-J diameter distribution and forest cover were maintained. Also, a sufficient growing stock of small-sized trees was kept. Moreover, neither tree species richness, diversity and evenness, nor the presence of less frequent species were negatively affected on the short term. As the effects on tree regeneration remain unknown, using a combination of HRBA and LRBA may be advisable to support good development of relatively fast-growing and valuable mid-tolerant tree species associated with shade-tolerant ones. The only negative effect detected was the clearly lower number of large-sized and emergent trees in managed plots (especially for LRBA), which are a key structural attribute of old-growth forests and crucial for biodiversity conservation. These results suggest that single-tree selection cutting, if adjusted to retain a certain number of large-sized and emergent trees, can serve as a possible means to preserve many old-growth structural and compositional attributes of the evergreen forest type in managed stands while harvesting timber for the landowners. Future experiments should test the effects of alternative selection cutting upon structural heterogeneity, diversity and productivity to balance the varied societal demands of ecosystem services expected from forest management. Dbh: Diameter at breast height GLMM: Generalised linear mixed models GLS: Generalised least square models HRBA: High residual basal area Importance value LMM: Linear mixed models LRBA: Low residual basal area Acker, S. A., Sabin, T. E., Ganio, L. M., & McKee, W. A. (1998). Development of old-growth structure and timber volume growth trends in maturing Douglas-fir stands. Forest Ecology and Management, 104, 265–280. https://doi.org/10.1016/S0378-1127(97)00249-1 . Alberdi, I., Condés, S., & Martínez-Millán, J. (2010). Review of monitoring and assessing ground vegetation biodiversity in national forest inventories. Environmental Monitoring and Assessment, 164, 649–676. 
https://doi.org/10.1007/s10661-009-0919-4 . Angers, V. A., Messier, C., Beaudet, M., & Leduc, A. (2005). Comparing composition and structure in old-growth and harvested (selection and diameter-limit cuts) northern hardwood stands in Quebec. Forest Ecology and Management, 217, 275–293. https://doi.org/10.1016/j.foreco.2005.06.008 . Bagnaresi, U. (2002). Stand structure and biodiversity in mixed, uneven-aged coniferous forests in the eastern Alps. Forestry, 75, 357–364. https://doi.org/10.1093/forestry/75.4.357 . Bauhus, J., Puettmann, K., & Messier, C. (2009). Silviculture for old-growth attributes. Forest Ecology and Management, 258, 525–537. https://doi.org/10.1016/j.foreco.2009.01.053 . Crow, T. R., Buckley, D. S., Nauertz, E. A., & Zasada, J. C. (2002). Effects of management on the composition and structure of northern hardwood forests in Upper Michigan. Forest Science, 48(1), 129–145. de Iongh Arbainsyah, H. H., Kustiawan, W., & de Snoo, G. R. (2014). Structure, composition and diversity of plant communities in FSC-certified, selectively logged forests of different ages compared to primary rain forest. Biodiversity and Conservation, 23, 2445–2472. https://doi.org/10.1007/s10531-014-0732-4 . Díaz, I. A., Armesto, J. J., Reid, S., Sieving, K., & Wilson, M. (2005). Linking forest structure and composition: avian diversity in successional forests of Chiloé Island, Chile. Biological Conservation, 123, 91–101. https://doi.org/10.1016/j.biocon.2004.10.011 . Díaz, I. A., Sieving, K. E., Peña-Foxon, M. E., Larraín, J., & Armesto, J. J. (2010). Epiphyte diversity and biomass loads of canopy emergent trees in Chilean temperate rain forests: a neglected functional component. Forest Ecology and Management, 259, 1490–1501. https://doi.org/10.1016/j.foreco.2010.01.025 . DiRienzo, J. A., Casanoves, F., Balzarini, M. G., Gonzalez, L., Tablada, M., & Robledo, C. W. (2011). InfoStat (24th ed.). Córdoba: Universidad Nacional de Córdoba. Donoso, C., & Lara, A. (1995). Utilización de los bosques nativos en Chile: pasado, presente y futuro. In J. J. Armesto, C. Villagran, & M. K. Arroyo (Eds.), Ecología de los Bosques Nativos de Chile (pp. 363–388). Santiago: Editorial Universitaria. Donoso, P. (2002). Structure and growth in coastal evergreen forests as the bases for uneven-aged structure and growth in coastal evergreen forests as the bases for uneven-aged silviculture in Chile. PhD thesis. Syracuse: SUNY-ESF. Donoso, P. (2013). Necesidades, opciones y futuro del manejo multietáneo en el centro-sur de Chile. In P. H. Donoso & Á. B. Promis (Eds.), Silvicultura en Bosques Nativos: Avances en la investigación en Chile, Argentina y Nueva Zelandia (Vol. 1, pp. 55–85, 1). Chile: Marisa Cuneo Eds. Donoso, P. J. (2005). Crown index: a canopy balance indicator to assess growth and regeneration in uneven-aged forest stands of the Coastal Range of Chile. Forestry, 78, 337–351. https://doi.org/10.1093/forestry/cpi046 . Donoso, P. J., & Donoso, C. (2007). Chile: forest species and stand types. In F. W. Cubbage (Ed.), Forests and Forestry in the Americas: An Encyclopedia. Society of American Foresters and International Society of Tropical Foresters. https://sites.google.com/site/forestryencyclopedia/Home/Chile%3A%20Forest%20Species%20and%20Stand%20Types. Accessed 2 Feb 2015. Donoso, P. J., & Nyland, R. D. (2005). Seedling density according to structure, dominance and understory cover in old-growth forest stands of the evergreen forest type in the coastal range of Chile. Revista Chilena de Historia Natural, 78, 51–63. 
Donoso, P. J., Samberg, L., Hernández, M. P., & Schlegel, B. (2009). The old-growth forests in the Valdivian Andes: composition, structure and growth. In C. Oyarzún, N. Verhoest, P. Boeckx, & R. Godoy (Eds.), Ecological advances in Chilean temperate rainforests. Ghent: Academia Press. Doyon, F., Gagnon, D., & Giroux, J.-F. (2005). Effects of strip and single-tree selection cutting on birds and their habitat in a southwestern Quebec northern hardwood forest. Forest Ecology and Management, 209, 101–116. https://doi.org/10.1016/j.foreco.2005.01.005 . Escobar, B., Donoso, C., & Zúñiga, A. (2006). Capítulo Eucryphia cordifolia. In C. Donoso et al. (Eds.), Especies arbóreas de los bosques templados de Chile y Argentina. Autoecología (pp. 246–255). Marisa Cuneo Eds: Valdivia. Fuenzalida, H. (1971). Clima: Geografía económica de Chile, texto refundido. Santiago: Corporación de Fomento de la Producción. Gadow, K. V., Zhang, C. Y., Wehenkel, C., Pommerening, A., Corral-Rivas, J., Korol, M., et al. (2012). Forest structure and diversity. In T. Pukkala & K. Gadow (Eds.), Continuous cover forestry (Vol. 23, pp. 29–83). Dordrecht: Springer Netherlands. Gini, C. (1912). Variabilita e mutabilita. Bologna, Tipogr. di P. Cuppini. Gronewold, C. A., D'Amato, A. W., & Palik, B. J. (2010). The influence of cutting cycle and stocking level on the structure and composition of managed old-growth northern hardwoods. Forest Ecology and Management, 259, 1151–1160. https://doi.org/10.1016/j.foreco.2010.01.001 . Guldin, J. M. (1991). Uneven-aged BDq regulation of Sierra Nevada mixed conifers. Western Journal of Applied Forestry, 6(2), 27–32. Keeton, W. S. (2006). Managing for late-successional/old-growth characteristics in northern hardwood-conifer forests. Forest Ecology and Management, 235, 129–142. https://doi.org/10.1016/j.foreco.2006.08.005 . Kern, C. C., Palik, B. J., & Strong, T. F. (2006). Ground-layer plant community responses to even-age and uneven-age silvicultural treatments in Wisconsin northern hardwood forests. Forest Ecology and Management, 230, 162–170. https://doi.org/10.1016/j.foreco.2006.03.034 . Keyser, T. L., & Loftis, D. L. (2013). Long-term effects of single-tree selection cutting on structure and composition in upland mixed-hardwood forests of the southern Appalachian Mountains. Forestry, 86, 255–265. https://doi.org/10.1093/forestry/cps083 . Kindt, R., & Coe, R. (2005). Tree diversity analysis: a manual and software for common statistical methods for ecological and biodiversity studies. Nairobi: World Agroforestry Centre (ICRAF). Kleiber, C., & Zeileis, A. (2008). Applied econometrics with R (Use R!). New York: Springer. Lexerød, N. L., & Eid, T. (2006). An evaluation of different diameter diversity indices based on criteria related to forest management planning. Forest Ecology and Management, 222, 17–28. https://doi.org/10.1016/j.foreco.2005.10.046 . Lombardi, F., Marchetti, M., Corona, P., Merlini, P., Chirici, G., Tognetti, R., et al. (2015). Quantifying the effect of sampling plot size on the estimation of structural indicators in old-growth forest stands. Forest Ecology and Management, 346, 89–97. https://doi.org/10.1016/j.foreco.2015.02.011 . McElhinny, C., Gibbons, P., Brack, C., & Bauhus, J. (2005). Forest and woodland stand structural complexity: its definition and measurement. Forest Ecology and Management, 218, 1–24. https://doi.org/10.1016/j.foreco.2005.08.034 . Moorman, M., Donoso, P. J., Moore, S. E., Sink, S., & Frederick, D. (2013). 
Sustainable protected area management: the case of Llancahue, a highly valued periurban forest in Chile. Journal of Sustainable Forestry, 32, 783–805. https://doi.org/10.1080/10549811.2013.803916 . Mosseler, A., Lynds, J. A., & Major, J. E. (2003). Old-growth forests of the Acadian Forest Region. Environmental Reviews, 11, S47–S77. https://doi.org/10.1139/cjfr-2012-0476 . Mueller-Dombois, D., & Ellenberg, H. (1974). Aims and methods of vegetation ecology. New York: Wiley. Muñoz, A. A., Chacón, P., Pérez, F., Barnert, E. S., & Armesto, J. J. (2003). Diversity and host tree preferences of vascular epiphytes and vines in a temperate rainforest in southern Chile. Australian Journal of Botany, 51, 381. https://doi.org/10.1071/BT02070 . Myers, N., Mittermeier, R. A., Mittermeier, C. G., da Fonseca, G. A., & Kent, J. (2000). Biodiversity hotspots for conservation priorities. Nature, 403, 853–858. https://doi.org/10.1038/35002501 . Nahuelhual, L., Donoso, P., Lara, A., Núñez, D., Oyarzún, C., & Neira, E. (2007). Valuing ecosystem services of Chilean temperate rainforests. Environment, Development and Sustainability, 9, 481–499. https://doi.org/10.1007/s10668-006-9033-8 . Nyland, R. D. (2002). Silviculture. Concepts and applications. Illinois: Wavelan Press, Inc.. O'Hara, K. (2014). Multiaged silviculture: managing for complex forest stands structures. Oxford: Oxford University Press. O'Hara, K., Hasenauer, H., & Kindermann, G. (2007). Sustainability in multi-aged stands: an analysis of long-term plenter systems. Forestry, 80, 163–181. https://doi.org/10.1093/forestry/cpl051 . Oksanen, J., Blanchet, F. G., Kindt, R., Legendre, P., Minchin, P. R., O'Hara, R. B. et al. (2014). vegan: community ecology package. Olson, D. M., & Dinerstein, E. (1998). The global 200: a representation approach to conserving the Earth's most biologically valuable ecoregions. Conservation Biology, 12, 502–515. https://doi.org/10.1046/j.1523-1739.1998.012003502.x . Oyarzún, C., Nahuelhual, L., & Núñez, D. (2005). Los servicios ecosistémicos del bosque templado lluvioso: producción de agua y su valoración económica. Revista Ambiente y Desarrollo, 20(3), 88–95. Paillet, Y., Bergès, L., Hjältén, J., Ódor, P., Avon, C., Römermann, M., et al. (2010). Biodiversity differences between managed and unmanaged forests: meta-analysis of species richness in Europe. Conservation Biology, 24, 101–112. https://doi.org/10.1111/j.1523-1739.2009.01399.x . Pinheiro, J., Bates, D., DebRoy, S., Sarkar, D., & R Core Team. (2014). nlme: linear and nonlinear mixed effects models. Prodan, M., Peters, R., Cox, F., & Real, P. (1997). Mensura Forestal. San José: Serie Investigación y Educación en Desarrollo Sostenible. Puettmann, K. J., Wilson, S. M., Baker, S. C., Donoso, P. J., Drössler, L., Amente, G., et al. (2015). Silvicultural alternatives to conventional even-aged forest management—what limits global adoption? Forest Ecosystems, 2, 611. https://doi.org/10.1186/s40663-015-0031-x . Pukkala, T., & Gadow, K. (Eds.). (2012). Continuous cover forestry (Vol. 23). Dordrecht: Springer Netherlands. R Core Team. (2015). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/ Rüger, N., Gutiérrez, Á. G., Kissling, W. D., Armesto, J. J., & Huth, A. (2007). Ecological impacts of different harvesting scenarios for temperate evergreen rain forest in southern Chile—a simulation experiment. Forest Ecology and Management, 252, 52–66. https://doi.org/10.1016/j.foreco.2007.06.020 . 
Schütz, J.-P., Pukkala, T., Donoso, P. J., & Gadow, K. V. (2012). Historical emergence and current application of CCF. In T. Pukkala & K. Gadow (Eds.), Continuous cover forestry (Vol. 23, pp. 1–28). Dordrecht: Springer Netherlands. Shannon, C. E. (1948). The mathematical theory of communication. In C. E. Shannon & W. Weaver (Eds.), The mathematical theory of communication (pp. 29–125). Urbana: University of Illinois Press. Siebert, H. (1998). La silvicultura alternativa: un concepto silvícola para el bosque nativo chileno. In C. Donoso Zegers & L. Aguilar (Eds.), Silvicultura de los bosques nativos de Chile (pp. 381–407). Santiago: Editorial Universitaria. Torras, O., Gil-Tena, A., & Saura, S. (2012). Changes in biodiversity indicators in managed and unmanaged forests in NE Spain. Journal of Forest Research, 17, 19–29. https://doi.org/10.1007/s10310-011-0269-2 . Torras, O., & Saura, S. (2008). Effects of silvicultural treatments on forest biodiversity indicators in the Mediterranean. Forest Ecology and Management, 255, 3322–3330. https://doi.org/10.1016/j.foreco.2008.02.013 . Wikström, P., & Eriksson, L. O. (2000). Solving the stand management problem under biodiversity-related considerations. Forest Ecology and Management, 126, 361–376. https://doi.org/10.1016/S0378-1127(99)00107-3 . Zuur, A. F., Ieno, E. N., Walker, N. J., Saveliev, A. A., & Smith, G. M. (2009). Mixed effects models and extensions in ecology with R. NY: Springer Science+Business Media. We sincerely thank Jürgen Huss for commenting and revising this manuscript during its elaboration and Simone Ciuti for the statistical support. Moreover, we acknowledge the dedication of our field workers Ronald Rocco, Nicole Raimilla Fonseca and Pol Bacardit from the University Austral de Chile. Finally, we greatly appreciated the accommodations provided by the Lomas de Sol community during our stay in Llancahue. PJ Donoso acknowledges the support of FIBN-CONAF project no. 034/2011 and FONDECYT project no. 1150496. P.J. Donoso thanks FONDECYT Grant No. 1150496 and Project 034/2011 of the "Fondo de Investigación en Bosque Nativo" administered by the forest service CONAF. The dataset(s) supporting the conclusions of this article are included within the article (and its Additional file 1). The use of the Llancahue database (in Additional file 1) requires the authorization of the authors. Chair of Silviculture, Faculty of Environment and Natural Resources, University of Freiburg, Tennenbacherstr. 4, 79106, Freiburg, Germany Florian Schnabel Insituto de Bosques y Sociedad, Facultad de Ciencias Forestales y Recursos Naturales, Universidad Austral de Chile, Casilla 567, Valdivia, Chile Pablo J. Donoso Faculty of Environment and Natural Resources, University of Freiburg, Tennenbacherstr. 4, 79085, Freiburg, Germany Carolin Winter Search for Florian Schnabel in: Search for Pablo J. Donoso in: Search for Carolin Winter in: FS, the lead author, conceived and designed this experiment. He performed the experiment through planning and leading the field data collection, analysed the data and wrote most parts of the paper. PD conceived and coordinated the general single-tree selection cutting experiment, designed and supervised the management-interventions and established the plots. He advised in conceiving this experiment and in the data analysis, supervised the process of writing the paper and wrote parts himself. CW helped in taking the field data and analysed minor parts. 
She contributed during the whole study through revisions and wrote parts of the paper. FS and CW created figures and tables and formatted the paper. Author contribution rephrased without making a change to the content to provide better clarity of the individual contributions. This change has been approved by CW and FS. All authors read and approved the final manuscript. Correspondence to Florian Schnabel. Data of the Llancahue Experimental Forest in south-central Chile. (XLSX 427 kb) Schnabel, F., Donoso, P.J. & Winter, C. Short-term effects of single-tree selection cutting on stand structure and tree species composition in Valdivian rainforests of Chile. N.Z. j. of For. Sci. 47, 21 (2017) doi:10.1186/s40490-017-0103-5 Uneven-aged silviculture Old-growth forest attributes Evergreen forest type Temperate rainforests
CommonCrawl
QCM: History and principles The Quartz Crystal Microbalance (QCM) is a technique based on the piezoelectric properties of quartz and allows the measurement of extremely small mass changes, down to a fraction of a monolayer. Used as an EQCM (Electrochemical QCM), it is possible to control the electrochemical potential of the studied surface and hence change and/or characterize the surface charge and/or the surface concentrations of the electroactive species. Piezoelectricity is the property of a crystal which turns a mechanical pressure into an electrical field that is proportional to the applied pressure. This was discovered by René Just Haüy (1743-1822), but studied more seriously by Pierre and Jacques Curie in 1880. The indirect piezoelectric effect, where a crystal submitted to an electrical field is mechanically deformed, was discovered a few years after that. There are many other piezoelectric materials, but quartz has several advantages: it has a low resistance to the propagation of acoustic waves, it has a high shear modulus, and it is chemically quite stable. When an AC electrical field at an appropriate frequency is applied between the two sides of a quartz plate, the quartz starts to vibrate. The vibrational mode mostly used in a QCM is the thickness-shear mode: on each side of the nodal plane represented by a dot, the crystal planes move with the same amplitude but in the opposite direction, as represented by the animated image below. Figure 1: Shear vibration of a quartz plate submitted to an electrical AC modulation between the two Au electrodes. At the resonance frequency, the amplitude of the displacement is maximal and is obtained when: $$\frac{n\lambda_0}{2}=d \tag{1}\label{eq1}$$ With $n$ the harmonic mode or overtone order, $\lambda_0$ the wavelength of the acoustic wave in $\mathrm{m}$, $d$ the thickness of the crystal plate in $\mathrm{m}$. The wavelength $\lambda_0$ is related to the resonant frequency $f_0$ by $$\lambda_0 = \frac{v_\mathrm{c}}{f_0}\tag{2}\label{eq2}$$ With $v_\mathrm{c}$ the shear or tangential velocity of the acoustic wave in the crystal in $\mathrm{m\,s^{-1}}$. Combining Eqs. $\eqref{eq1}$ and $\eqref{eq2}$ gives the following equation: $$f_0=\frac{v_\mathrm{c}}{2d} \tag{3}\label{eq3}$$ Equation $\eqref{eq3}$ is referred to as Equation 1 in application note 68 [1]. A resonator or sensor is composed of a quartz plate whose sides are coated with an electronic conductor, generally Au. Sometimes an adhesion layer between the quartz and the electrodes is added, mostly Cr or Ti. Ti is generally preferred because of its electrochemical stability and also because its contamination of the Au electrode is half that of Cr [2]. An AC voltage is applied between the two parallel electrodes and the quartz vibrates. It is assumed that the Au electrodes move at the same speed as the quartz and that the acoustic wave propagates in the metallic electrodes at the same speed as in the quartz. In 1959, Sauerbrey [3] was the first to establish a relationship between the mass change and the resonant frequency change based on Eq. $\eqref{eq3}$. 
$$\Delta f_n=-n\frac{2f_0^2}{\sqrt{\mu_\mathrm{q} \rho_\mathrm{q}}}\Delta m_\mathrm{a} \tag{4}\label{eq4}$$ With $\Delta f_n$ the change of resonant frequency at the $n^{\mathrm{th}}$ harmonic in $\mathrm{Hz}$, $n$ the harmonic order, $f_0$ the fundamental resonant frequency in $\mathrm{Hz}$, $\mu_\mathrm{q}$ the shear elastic modulus of the quartz in $\mathrm{kg\,m^{-1}\,s^{-2}}$ (i.e. $\mathrm{Pa}$), $\rho_\mathrm{q}$ the quartz density in $\mathrm{kg\,m^{-3}}$, and $\Delta m_\mathrm{a}$ the areal mass of the film in $\mathrm{kg\,m^{-2}}$. This relationship was used primarily to monitor the mass or thickness change of films deposited in vacuum. In these conditions, it is considered that the film is a rigid layer and that the wave propagates in the film at the same speed as in the quartz and electrodes (Fig. 2). According to Eq. $\eqref{eq3}$, the increase of the thickness induces a change of the resonant frequency $f_0$.

Figure 2: Acoustic standing wave across the sensor with a rigid layer.

Until the 1980s, it was believed that such a mass uptake could not be measured in solution because of the decay of the standing wave in a viscous medium. It was then found that, in solution, the addition of a rigid layer to the resonator has the same effect as in air or vacuum, and that frequency shifts could also be successfully measured and related to thickness and mass changes [4]. The first application in electrochemistry was the electrodeposition of Ag and Cu [5]. One can see that the frequency shift $\Delta f_n$ is proportional to the areal mass $\Delta m_\mathrm{a}$ of the deposited material, which means it does not depend on the surface area of the electrode. More details on how the frequency shift is measured, and on how to ensure that the film is rigid so that the Sauerbrey equation can be used, are given in the related topics [6–8]. A short numerical sketch of Eqs. (3) and (4) is given after the reference list below.

[1] BioLogic Application Note 68: "In situ electrochemical study of LiFePO4 electrodes by Quartz Crystal Microbalance"
[2] J.C. Hoogvliet, W.P. van Bennekom, Electrochim. Acta 47 (2001) 599
[3] G. Sauerbrey, Z. Phys. 155 (1959) 206
[4] T. Nomura, M. Okuhara, Anal. Chim. Acta 142 (1982) 281
[5] S. Bruckenstein, M. Shay, Electrochim. Acta 30 (1985) 1295
[6] Quartz Crystal Microbalance: Measurement principles
[7] Quartz Crystal Microbalance: When is the Sauerbrey equation valid?
[8] Quartz Crystal Microbalance: Why measure at overtones?

Quartz Crystal Microbalance quartz piezoelectric Sauerbrey equation resonant frequency gravimetry dissipation

BluQCM QSD: The BluQCM QSD is a single-channel, compact and modular instrument. Its small footprint and light weight make it particularly suitable for crowded labs. It is available as a standalone unit, with temperature control and/or flow control.
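To make the numbers concrete, the sketch below is a minimal illustration (not from the original article): it evaluates Eq. (3) for the fundamental frequency of a quartz plate and Eq. (4) for the Sauerbrey frequency shift. The material constants are typical literature values for AT-cut quartz and should be checked against the datasheet of the actual sensor.

```python
import math

# Typical AT-cut quartz constants (assumed values; verify for your crystal)
RHO_Q = 2648.0       # quartz density, kg m^-3
MU_Q = 2.947e10      # shear elastic modulus, kg m^-1 s^-2 (Pa)
V_C = math.sqrt(MU_Q / RHO_Q)   # shear wave velocity, roughly 3340 m s^-1

def fundamental_frequency(thickness_m: float) -> float:
    """Eq. (3): f0 = v_c / (2 d) for a quartz plate of thickness d."""
    return V_C / (2.0 * thickness_m)

def sauerbrey_shift(f0_hz: float, areal_mass_kg_m2: float, n: int = 1) -> float:
    """Eq. (4): frequency shift (Hz) of the n-th harmonic for a rigid film."""
    return -n * 2.0 * f0_hz**2 / math.sqrt(MU_Q * RHO_Q) * areal_mass_kg_m2

if __name__ == "__main__":
    d = 334e-6                    # ~334 um plate gives a ~5 MHz fundamental
    f0 = fundamental_frequency(d)
    # 100 ng cm^-2 = 1e-6 kg m^-2, i.e. a monolayer-scale loading
    df = sauerbrey_shift(f0, 1e-6)
    print(f"f0 = {f0/1e6:.2f} MHz, Sauerbrey shift = {df:.2f} Hz")
```

For a nominal 5 MHz crystal this reproduces the commonly quoted mass sensitivity of roughly 56.6 Hz per µg cm⁻² on the fundamental.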
CommonCrawl
We have the metric transformation between the two different coordinate systems as:

$$g_{\mu '\nu '}'=\frac{\partial x^{\mu}}{\partial y^{\mu '}}\frac{\partial x^{\nu}}{\partial y^{\nu '}}g_{\mu \nu}$$

We also know that the Christoffel symbol in terms of the metric tensor is as follows:

$$\Gamma_{\mu \nu}^{\lambda}=\frac{1}{2} g^{\lambda \rho}\left(\partial_{\mu}g_{\nu \rho} +\partial_{\nu}g_{\rho \mu}-\partial_{\rho}g_{\mu \nu}\right)$$

This then implies that the Christoffel symbol in the primed coordinate system is:

$$\Gamma_{\mu ' \nu '}^{\lambda '} =\frac{1}{2} g^{\lambda ' \rho '}\left(\partial_{\mu '}g_{\nu ' \rho '} +\partial_{\nu '}g_{\rho ' \mu '}-\partial_{\rho '}g_{\mu ' \nu '}\right)$$

Our aim here is to find the transformation relation between these Christoffel symbols, which live in different coordinate systems. We first have to find the derivative of the metric tensor in the primed coordinate system. Let us differentiate with respect to $\lambda '$:

$$\partial_{\lambda '}g_{\mu ' \nu '}'=\frac{\partial x^{\lambda }}{\partial y^{\lambda '}}\frac{\partial x^{\mu }}{\partial y^{\mu '}}\frac{\partial x^{\nu }}{\partial y^{\nu '}} \partial_\lambda g_{\mu \nu}+g_{\mu \nu}\left(\frac{\partial x^{\nu }}{\partial y^{\nu '}}\frac{\partial^{2} x^{\mu }}{\partial y^{\lambda '}\partial y^{\mu '}}+ \frac{\partial x^{\nu }}{\partial y^{\mu '}}\frac{\partial^{2} x^{\mu }}{\partial y^{\lambda '}\partial y^{\nu '}} \right)$$

We know that the Christoffel symbol in the primed coordinate system contains the derivatives of the metric over the cycle $\rho ', \mu ', \nu '$. Using this expression three times and relabeling the indices, one can write:

$$\partial_{\mu '}g_{\nu ' \rho '} +\partial_{\nu '}g_{\rho ' \mu '}-\partial_{\rho '}g_{\mu ' \nu '}=\frac{\partial x^{\lambda }}{\partial y^{\lambda '}}\frac{\partial x^{\rho }}{\partial y^{\rho '}}\frac{\partial x^{\nu }}{\partial y^{\nu '}}\left(\partial_{\lambda}g_{\rho \nu}+\partial_{\nu}g_{\rho \lambda}-\partial_{\rho}g_{\nu \lambda}\right)+$$
$$g_{\rho \nu}\left(\frac{\partial x^{\nu }}{\partial y^{\nu '}}\frac{\partial^{2} x^{\rho }}{\partial y^{\lambda '}\partial y^{\rho '}}+\frac{\partial x^{\nu }}{\partial y^{\lambda '}}\frac{\partial^{2} x^{\rho }}{\partial y^{\nu '}\partial y^{\rho '}}+ 2\frac{\partial x^{\nu }}{\partial y^{\rho '}}\frac{\partial^{2} x^{\rho }}{\partial y^{\lambda '}\partial y^{\nu '}}-\frac{\partial x^{\rho }}{\partial y^{\lambda '}}\frac{\partial^{2} x^{\nu }}{\partial y^{\rho '}\partial y^{\nu '}}-\frac{\partial x^{\rho }}{\partial y^{\nu '}}\frac{\partial^{2} x^{\nu }}{\partial y^{\rho '}\partial y^{\lambda '}}\right)$$

Because the metric is symmetric in $\rho$ and $\nu$, we are just left with:

$$\partial_{\mu '}g_{\nu ' \rho '} +\partial_{\nu '}g_{\rho ' \mu '}-\partial_{\rho '}g_{\mu ' \nu '}=\frac{\partial x^{\lambda }}{\partial y^{\lambda '}}\frac{\partial x^{\rho }}{\partial y^{\rho '}}\frac{\partial x^{\nu }}{\partial y^{\nu '}}\left(\partial_{\lambda}g_{\rho \nu}+\partial_{\nu}g_{\rho \lambda}-\partial_{\rho}g_{\nu \lambda}\right)+$$
$$2g_{\rho \nu}\frac{\partial x^{\nu }}{\partial y^{\rho '}}\frac{\partial^{2} x^{\rho }}{\partial y^{\lambda '}\partial y^{\nu '}}$$

Now, substituting the result in the primed Christoffel symbol, we have the following:

$$\Gamma_{\mu ' \nu '}^{\lambda '}=\frac{1}{2}\frac{\partial y^{\mu '}}{\partial x^{\mu}}\frac{\partial y^{\nu '}}{\partial y^{\nu}}g^{\mu \rho}\left(\frac{\partial x^{\lambda }}{\partial y^{\lambda '}}\frac{\partial x^{\rho }}{\partial y^{\rho '}}\frac{\partial x^{\nu }}{\partial y^{\nu '}}\left(\partial_{\lambda}g_{\rho \nu}+\partial_{\nu}g_{\rho \lambda}-\partial_{\rho}g_{\nu \lambda}\right)+2g_{\rho \nu}\frac{\partial x^{\nu }}{\partial y^{\rho '}}\frac{\partial^{2} x^{\rho }}{\partial y^{\lambda '}\partial y^{\nu '}}\right)$$
$$=\frac{\partial y^{\mu '}}{\partial x^{\mu}}\frac{\partial x^{\lambda}}{\partial y^{\lambda '}}\frac{\partial x^{\nu }}{\partial y^{\nu '}}\frac{1}{2}g^{\mu \rho}\left(\partial_{\lambda}g_{\rho \nu}+\partial_{\nu}g_{\rho \lambda}-\partial_{\rho}g_{\nu \lambda}\right)+\frac{\partial y^{\mu '}}{\partial x^{\mu}}\delta_{\rho}^{\nu}\delta_{\nu}^{\mu}\frac{\partial^{2} x^{\rho }}{\partial y^{\lambda '}\partial y^{\nu '}}$$

Thus, we have the transformation relation between the Christoffel symbols as follows:

$$\Gamma_{\mu ' \nu '}^{\lambda '}=\frac{\partial y^{\mu '}}{\partial x^{\mu}}\frac{\partial x^{\lambda}}{\partial y^{\lambda '}}\frac{\partial x^{\nu }}{\partial y^{\nu '}}\Gamma_{ \nu \lambda}^{\mu}+\frac{\partial y^{\mu '}}{\partial x^{\mu}}\frac{\partial^{2} x^{\mu}}{\partial y^{\lambda '}\partial y^{\nu '}}$$

Labels: Christoffel, Christoffel symbol

Comments:

While relabeling indices you give the negative term a common index with the contravariant metric, which wasn't the case originally, i.e. you change an index to another currently in use but different from the one you changed. Why can you do this?

I agree with Unknown who commented December 2018. One thing is that the indices $\mu, \nu$ in the denominator of the last term of the fourth equation are the wrong way round. Another is that the indices on the unprimed and primed Christoffel symbols in the last equation have moved around in a very odd way. It is not right! I have written out the correct derivation here: https://www.general-relativity.net/2019/03/transformation-of-christoffel-symbol.html In addition, this proof is for a torsion-free metric-compatible connection. It would be better to have a proof for any type of connection. I learnt this in Spacetime and Geometry: An Introduction to General Relativity by Sean M. Carroll. He also had an error in the transformation! The correct proof and transformation are also now on www.general-relativity.net HERE.
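Since the comments above dispute the index bookkeeping, one way to build confidence is a symbolic spot-check in the simplest non-trivial case. The short SymPy sketch below is my own illustration (not part of the original post); it assumes the standard transformation law with inhomogeneous term $\frac{\partial y^{\lambda'}}{\partial x^{\lambda}}\frac{\partial^{2} x^{\lambda}}{\partial y^{\mu'}\partial y^{\nu'}}$ and checks it for flat space in Cartesian versus polar coordinates, where the unprimed Christoffel symbols vanish and only that inhomogeneous term survives.

```python
import sympy as sp

# Unprimed coordinates: Cartesian (x, y); primed coordinates: polar (r, theta).
r, th = sp.symbols('r theta', positive=True)
y_primed = [r, th]
x_unprimed = [r * sp.cos(th), r * sp.sin(th)]   # Cartesian as functions of polar

# Flat metric is the identity in Cartesian coordinates, so all unprimed
# Christoffel symbols vanish and the transformed symbols reduce to
#   Gamma'^a_{bc} = (dy'^a/dx^l) * d^2 x^l / (dy'^b dy'^c)
J = sp.Matrix(2, 2, lambda i, j: sp.diff(x_unprimed[i], y_primed[j]))   # dx^i/dy'^j
Jinv = J.inv()                                                          # dy'^i/dx^j

Gamma_primed = [[[sp.simplify(
    sum(Jinv[a, l] * sp.diff(x_unprimed[l], y_primed[b], y_primed[c]) for l in range(2)))
    for c in range(2)] for b in range(2)] for a in range(2)]

# Expected polar-coordinate Christoffel symbols of the flat plane:
print(Gamma_primed[0][1][1])   # Gamma^r_{theta theta}  -> -r
print(Gamma_primed[1][0][1])   # Gamma^theta_{r theta}  -> 1/r
```

The printed results are the familiar polar-coordinate values $\Gamma^{r}_{\theta\theta}=-r$ and $\Gamma^{\theta}_{r\theta}=1/r$, which any correctly stated version of the transformation law must reproduce.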
CommonCrawl
Ethnicity and neighbourhood deprivation determines the response rate in sexual dysfunction surveys Lasantha S. Malavige1,2, Pabasi Wijesekara3, Dhanesha Seneviratne Epa3, Priyanga Ranasinghe4 & Jonathan C. Levy1 Self-administered questionnaires provide a better alternative for disclosing sensitive information in sexual health research. We describe the factors that determine the positive response (initial recruitment) to an initial invitation and the subsequent completion of a postal questionnaire study on sexual dysfunction. South Asians (SA) and Europids with and without diabetes (DM) were recruited from GP clinics in the UK. Men who returned the properly filled consent form ('recruited group') were sent the questionnaire, and those who returned it were considered the 'completed group'. Index of Multiple Deprivation scores (IMDs) were generated using UK postcodes. We calculated the recruitment rate and completion rate of the recruited and the study-completed groups, respectively. The total approached sample was 9100 [DM: 2914 (32 %), SA: 4563 (50.1 %)]. The recruitment rate was 8.8 % and was higher in Europids and in patients with DM. The mean IMD score for the recruited group was 20.9 ± 11.9, and it was higher among recruited SA compared to Europids (p < 0.001). The mean IMD score was lower in the recruited group compared to the non-recruited group (p < 0.01). All four recruited groups (SA/Europid and DM/non-DM) had lower IMDs compared to non-recruited. The completion rate was 71.5 % (n = 544) (SA: 62.3 %, Europids: 77.4 %; p < 0.05). Recruitment for postal sexual health surveys is positively influenced by presence of the investigated disease, older age, being from less deprived areas and Europid ethnicity. Furthermore, Europids were more likely to complete the survey than South Asians irrespective of disease status. Self-administered questionnaires provide a better alternative for disclosing sensitive information for study participants. Amongst the different self-administered survey methods, postal questionnaires have long been used successfully to evaluate sexual dysfunction and sexual behaviour. In comparison to face-to-face and telephone interviews, the higher levels of privacy and confidentiality offered by postal methods provide the participant a better opportunity to reveal truthful information [1–3]. Computer-assisted self-interviews are an emerging survey method; however, their superiority in response compared with postal surveys remains inconclusive [1, 4]. Low response rates and the accompanying non-responder bias are a common problem in postal surveys, affecting the generalisability and validity of the findings [5]. It is therefore important to identify how responders differ from non-responders. Questionnaire topic, length, sensitivity, pre-notification, incentives and intense follow-up are well-known determinants of response rate [6]. Responder-related factors such as age, intelligence, social class and level of education are also known to influence the response rate [7–9]. At present there are only limited data available on the factors affecting the response rate in postal surveys on sexual dysfunction and sexual behaviour. The GSSAB study (Global Study of Sexual Attitudes and Behaviours) demonstrated that there is a possible socio-cultural inhibition influencing response amongst sexually conservative groups [10]. However, to our knowledge, a comparison of response rates between the South Asian and Europid ethnic groups has not been described previously in sexual health epidemiology.
The Oxford Sexual Dysfunction Study (OSDS) was a multi-centred, GP practice-based study that describes sexual dysfunction in men with and without diabetes of South Asian and Europid ethnic origin living in the UK. One of the primary objectives of this study was to evaluate the feasibility of using validated postal questionnaires to assess sexual dysfunctions and their clinical, socioeconomic and lifestyle associations. The present report aims to describe the factors that determine the positive response (initial recruitment) to an initial invitation and the subsequent completion of a postal questionnaire study on sexual dysfunction.

Study population and sampling

Thirty-seven general practice (GP) clinics from eight primary care trusts (PCTs) in the UK were invited to take part in the OSDS. The OSDS is a large survey which aimed to evaluate the prevalence and associations of sexual dysfunction among males of South Asian and Europid origin, both with and without diabetes, resident in the UK [11]. Ethical approval for the study was obtained from the Oxfordshire Research Ethics Committee C. Research Governance approval for the study was granted by the regional PCTs of Ealing, Brent, Luton, Reading, Slough, Swindon and Coventry. According to recent national population data, South Asians represent 4–5 % of the UK population [12]. Hence a selective approach was used to choose GP practices in order to recruit a larger population of South Asians. Geographical areas with a considerably high South Asian population were identified. In these areas, the GP practices with higher numbers of registered South Asians were selected and invited to participate in the study. The selective approach was guided by local collaborators in the regional hospitals. The study included patients with and without diabetes of South Asian and Europid origin. Patients with diabetes were selected as follows. In the GP practices that agreed to collaborate, we selected all male patients aged 21–70 years in the practice's diabetes registry, and stratified this sample into five age categories (21–30, 31–40, 41–50, 51–60 and 61–70). Men with diabetes in each age category were sub-categorized into Europid and South Asian ethnicities. When the ethnicity and first language were not recorded in the practice database, the researcher allocated the most likely ethnicity and first language for the particular case, based on the name/surname and the recommendation of the practice doctor. The accuracy of the allocated ethnicity was verified later by comparing it with the participant-reported ethnicity in the returned questionnaire. The same method was also applied to patients without diabetes. Patients without diabetes were selected as follows. Diabetes is approximately 3–5 times more common among South Asians [13, 14]. Therefore, we hypothesised that the proportion of South Asians among patients without diabetes would be smaller than among those with diabetes. Further, we assumed that the response rate would be lower among those without diabetes compared to the patients with diabetes, based on previous similar studies in patients with respiratory diseases/symptoms [15]. For these reasons, we approached twice as many males without diabetes in the relevant age categories above as the control group (as shown below). The numbers of South Asian and Europid men in the sample of patients with diabetes were taken as X and Y, respectively.
A random sample four times larger [4(X + Y)] of men without diabetes was drawn from the GP database in each age category as the Temporary Selection Sample (to ensure adequate representation of South Asians). This temporary selection sample was divided into lists of South Asians without diabetes and Europids without diabetes using a procedure similar to that used for the patients with diabetes. From each of these lists, sorted in alphabetical order, a secondary random sample of 50 % was obtained, ensuring that the final invited control-group sample size was twice that of the disease group. These samples together (males with and without diabetes) were termed the Initial Approach Sample. The exclusion criteria were spinal cord damage, previous prostate surgery and/or pelvic irradiation, and a diagnosis of a serious psychiatric condition. We invited the selected men with and without diabetes, with endorsement letters from the GP, to participate in the study, as studies have shown that this increases participation [16]. The invitation pack contained a personalised invitation letter, information sheet and the consent form. These documents were originally developed in English and translated into five South Asian languages (Hindi, Punjabi, Sinhalese, Tamil and Urdu) using a translation–retranslation technique [17]. Those who did not respond to the invitation within a fortnight were sent a 2nd invitation. This second invitation was sent out only to a randomly selected sub-group (diabetic group: 36 %, control group: 30 %) of those who did not respond to the 1st invitation. The men who returned the properly filled consent form with the consent to participate were recruited for the study, and the others were included in the non-recruited group. The recruited group received the two-booklet questionnaire designed for the study. The questionnaire was also available in the above-mentioned five South Asian languages, upon request by the participant on the consent form [17]. The non-responding recruited subjects received a first and a second reminder, at 2-week intervals. The group who returned the completed questionnaire was considered the "completed group".

Data collection and analysis

The Index of Multiple Deprivation scores (IMD scores) of all men were generated using their postcodes, through the Economic and Social Research Council (ESRC) census programme, UK (www.census.ac.uk). The IMD brings together 37 different indicators of deprivation, including income, employment, health and disability, education, skills and training, barriers to housing and services, living environment and crime. Higher IMD scores indicate greater deprivation than lower scores. We calculated the response rates for the recruited and the study-completed groups as below. These rates were compared between South Asian and Europid men, with and without diabetes, using Chi-square analysis.

$$\text{Recruitment Rate} = \frac{\text{Total number of men who returned the completed consent form}}{\text{Total number of approached men} - \text{undelivered mail}}$$

$$\text{Completion Rate} = \frac{\text{Total number of men who returned the completed questionnaire}}{\text{Total number of men recruited for the study}}$$

Data were analysed using SPSS version 17 (Chicago, IL, USA).
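As a purely illustrative sketch (not part of the original study), the two rates defined above can be computed directly from the headline counts reported in the Results (9100 approached, 414 undelivered, 761 recruited, 544 completed):

```python
def recruitment_rate(n_consented: int, n_approached: int, n_undelivered: int) -> float:
    """Recruitment rate = consented / (approached - undelivered mail)."""
    return n_consented / (n_approached - n_undelivered)

def completion_rate(n_completed: int, n_recruited: int) -> float:
    """Completion rate = completed questionnaires / recruited men."""
    return n_completed / n_recruited

# Headline figures reported in the paper
print(f"Recruitment rate: {recruitment_rate(761, 9100, 414):.1%}")   # about 8.8%
print(f"Completion rate:  {completion_rate(544, 761):.1%}")          # about 71.5%
```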
Mean values of age and IMD scores for the four groups (men with and without diabetes; South Asian and Europid) within the approached, recruited and study-completed groups were compared using analysis of variance (ANOVA). Mean age and IMD score of the recruited group were compared with those of the non-recruited group, and of the completed group with those of the non-completed group, within each of the above four groups, also using ANOVA. A binary logistic regression analysis was performed with 'successful recruitment' as the dichotomous dependent variable (0 = not recruited; 1 = recruited) and age (continuous), IMD score (continuous), presence of diabetes (binary, 0 = no; 1 = yes) and ethnicity (binary, 0 = South Asian; 1 = Europid) as the independent variables. A similar binary logistic regression analysis with the above independent variables was also performed separately for study completion, using 'study completion' as the dichotomous dependent variable (0 = not completed; 1 = completed).

Sample characteristics

Twenty-five of the 37 GP practices invited from the 8 PCTs agreed to take part in the study. There were differences in the recruitment and completion rates between the practices (Table 1). The total approached sample was 9100, with 2914 (32 %) patients with diabetes mellitus and 6186 (68 %) males without diabetes. According to allocated ethnicity, 4563 (50.1 %) men were South Asians and 4537 (49.9 %) were Europids (Fig. 1). The allocated ethnicity was 91.5 % accurate (498/544) when verified against the participants' reported ethnicity in the questionnaires, with 34 participants reporting an ethnicity other than Europid or South Asian.

Table 1: Participant recruitment and completion rates of the questionnaire in the 25 GP practices
Fig. 1: Summarized recruitment process (A South Asians, E Europids, Af Afro-Caribbean)

Study recruited group

Seven hundred and sixty-one men consented and were recruited for the study (Fig. 1). The overall recruitment rate for the study was 8.8 % after adjusting for undelivered mail (n = 414). The recruitment rate for the first invitation was 6.9 %; it was significantly higher among Europids (9.9 %) compared to South Asians (4 %) and among patients with diabetes (9 %) compared to those without (5.9 %). The recruitment rate for the second invitation was 6.2 %, and it was 6 and 7.2 % for the South Asian and Europid ethnic groups, respectively (p = 0.67). The recruitment rate for the second invitation in males with diabetes (8.6 %) was significantly higher compared to those without diabetes (4.9 %) (p = 0.001). The recruitment rate was significantly higher among Europids in comparison to South Asians irrespective of their diabetes status (p < 0.001) (Table 2). Similarly, the rate was significantly higher among patients with diabetes compared to those without diabetes, irrespective of ethnicity (Table 2). Figures 1 and 2 summarise the process of recruitment, follow-up and completion of the study.

Table 2: Mean age and IMD scores for the approached, recruited and study-completed men
Fig. 2: Summarized study completion process (A South Asians, E Europids, Q Questionnaire)

Mean age of the recruited men was 56.8 (±9.3) years. The mean age difference between the recruited males with diabetes (58 years) and males without diabetes (55.9 years) was 2.1 years (p < 0.01) (Table 2). Mean IMD score for the recruited group was 20.9 (±11.9). Mean IMD was significantly higher among recruited South Asians compared to Europids (p < 0.001) (Table 2).
The difference in the mean IMD score between the recruited males with and without diabetes was not statistically significant (p = 0.053). Compared to the non-recruited group, the recruited group was significantly older (56.9 ± 9.3 vs 53.1 ± 11.2 years) (p < 0.001). The difference in mean IMD score between the recruited and non-recruited groups was also significant (20.9 ± 11.9 vs 22.5 ± 11.9) (p < 0.01). All four groups (South Asian/Europid and with/without diabetes) of recruited men had lower IMD scores, indicating less deprivation, compared to non-recruited men. However, this result was statistically significant only among males without diabetes of Europid ethnic origin (Table 3).

Table 3: Comparison of recruited/non-recruited and study completed/non-completed groups

Study completed group

Of the 761 recruited men who were sent the questionnaire, 263 returned the properly filled questionnaire upon receiving the two booklets and 498 did not return the questionnaire (Fig. 2). The completion rate for the study after two reminders was 71.5 %, with 544 men completing the study. Overall, 62.3 % of South Asians and 77.4 % of Europids completed the study. The completion rate was significantly higher among the Europids (p < 0.001). The proportion of males without diabetes completing the study was higher (71.8 %) than that of males with diabetes (71.1 %) (p = 0.66). Table 2 describes the demographic data for the approached, recruited and study-completed groups. Mean age of the approached sample was 53.5 ± 11.1 years, with a mean age difference between the patients with and without diabetes of 2.9 years (p < 0.001). South Asian men in the approached sample, both with and without diabetes, were younger compared to Europids (Table 2). Mean IMD score of the approached sample was 22.3 ± 11.9, while the IMD scores for those with and without diabetes were 21.9 ± 12.4 and 22.4 ± 11.7, respectively (p = 0.07). Europids had a significantly lower mean IMD score than South Asians in the approached, recruited and study-completed groups (p < 0.001), indicating less deprivation (Table 2). Mean age of the study-completed group was 57.1 ± 9.3 years. The mean age difference between the men with and without diabetes was 2.1 years (p < 0.01). We observed a trend for the study-completed South Asians to be older than the Europids within both groups of males with and without diabetes (Table 2). Mean IMD score for the study-completed group was 19.9 ± 11.9, even lower than that of the recruited group, suggesting that more affluent people are more likely to complete a postal questionnaire study (Table 2). Males with and without diabetes had mean IMD scores of 21.5 ± 12.4 and 19.8 ± 11.5, respectively (p = 0.053). The difference in this score between the two ethnic groups was statistically significant (p < 0.001), indicating that the Europid ethnic group was less deprived than the South Asian group (Table 2). The study-completed men were older (p = 0.65) and had lower mean IMD scores (p = 0.55) than the men who did not complete the study, although neither difference was statistically significant (Table 3). In the logistic regression analyses of study recruitment/study completion, the overall models were statistically significant, and the Cox & Snell R-square and Nagelkerke R-square values were 0.021/0.04 and 0.045/0.057, respectively. The results indicate that presence of the investigated disease (OR 1.74), older age (OR 1.03), being from less deprived areas and Europid ethnicity (OR 1.15) all significantly increased the likelihood of study recruitment (Table 4).
However, study completion was influenced only by Europid ethnicity (OR 1.96) (Table 4).

Table 4: Binary logistic regression analysis on study recruitment and study completion

The difference between responders and non-responders with regard to the four factors we investigated was most apparent during the recruitment phase. The recruited group was significantly older and came from less deprived areas compared to the non-recruited group. Europids and patients with diabetes were more likely to consent to a postal survey. After consenting to participate in the study, completion appeared to be independent of age, area-based deprivation and presence of diabetes. However, Europids were more likely to complete the survey than South Asians in both groups of males with and without diabetes. Ethnicity has been recognised as a determining factor in most health-related issues, including sexual health [18, 19]. Although research comparing response rates to mail surveys between the South Asian and Europid ethnic groups is limited, studies have shown that ethnic minorities are significantly less likely to respond to a mailed questionnaire than to a telephone survey [20]. Linguistic difficulties, sociocultural influences on decision making, a feeling of not belonging to British society, and social class could have been potential disincentives for South Asian participation [21]. Whilst these factors remain potential determinants of South Asian participation, not being approached by the researchers is another common reason for lack of participation, due to the increased cost and time associated with their inclusion, particularly in relation to the language barrier [22]. However, in our study there was equality in the approach by ethnicity, and the language barrier was also addressed. Nevertheless, the literacy rate among the South Asian ethnic group in the UK is lower. Thus, the lower literacy rate could also have been a detrimental factor for the response rate reported among this ethnic group, even with the language barrier excluded. In addition, answering a postal questionnaire requires a reasonable ability to read, comprehend and write. Therefore, South Asian ethnicity appears to be a negative contributory factor for the response rate of a postal survey on sexual health at both recruitment and completion stages, compared to Europids, in the UK. Associations between socioeconomic inequalities and health in the UK have been described in the past. In a closely comparable study, the response rate for a mailed questionnaire about the views of the general population on the NHS was higher among people who lived in less deprived areas, as determined by the Jarman score of area-based deprivation [23]. Studies done outside the UK also provide supportive evidence [24]. However, there is limited evidence in the literature to support any relationship between socio-economic status and the response rate to a postal questionnaire on sexual dysfunction. The disparity observed between the area-based socio-economic statuses of Europid and South Asian ethnicities in the approached sample was comparable with the individual-based socio-economic status of the general population in the UK. However, even though the South Asians reported higher socio-economic deprivation, during data analysis of the study-completed group we found the level of education to be higher among the South Asians (these data are not presented in this paper) compared to the Europids. In line with our findings, Bhopal et al.
have reported that South Asians are advantaged in university education compared to Europeans in the UK [25]. Willingness to consent to research is generally variable among the public. Among people with medical conditions, willingness to consent to research is debatable. A study conducted outside the UK reported that a sample of women seen by a healthcare practitioner for sexually transmitted diseases were significantly less likely to respond to postal data collection on sexual history and sexual behaviour than women who were seen for contraceptive advice [26]. However, these findings are not comparable with ours with regard to the disease involved. In contrast, respiratory health epidemiology suggests that people who suffered more from the disease or its symptoms were more likely to respond to a respiratory questionnaire [15]. This supports our observation of a higher response rate amongst the patients with diabetes. Furthermore, in line with our findings, responders have been older in several studies done both in the UK and elsewhere [23, 27, 28]. Lack of time among the younger population, higher rates of moving house and lower awareness of social responsibility compared with the older population may have contributed to this finding. During the participant selection stage, the name of the person was used as a guide to determine his ethnicity in the absence of ethnicity recording at the particular GP practice. Recording the self-ascribed ethnicity and the first language of all patients in the GP registry is a new introduction to the NHS system in the UK, under clinical directed enhanced services (DES). However, we experienced under-reporting of ethnicity and first language in most of the practice databases. Using South Asian names to ascertain ethnicity has long been practised and is considered a reliable alternative [29, 30]. We report 91.5 % accuracy in determining ethnicity by name in this study, although it was time-consuming. The main limitation of the study was the low recruitment rate (8.8 %) observed. Surveys on sensitive subjects such as sexual health usually report a lower response rate regardless of the mode of administration or other characteristics of the participants such as age, gender or ethnicity [6, 20, 26]. In addition to addressing a sensitive issue, 50 % of the approached population for the current study were South Asian, an ethnic group among which willingness to discuss sexual issues is considered even lower for cultural reasons. In addition, male gender on its own has been identified as a factor for non-response [27, 28]. The recruitment rate of this study may have been negatively influenced by the above factors. However, the personally addressed, hand-signed invitation letter from the GP, follow-up, provision of a second copy of the questionnaire at follow-up, university sponsorship/collaboration, personalised cover letters and assurance of confidentiality used in the present study have been recognised in previous studies as methods to increase the response rate for postal questionnaires [27, 31]. Furthermore, the GP practice characteristics may also determine the response rate. The recruitment rate across the 25 GP practices varied considerably, from 0.1 to 15.2 %; the completion rate also varied widely (0.7–12.1 %). Hence it is possible that the recruitment and completion rates were affected by the characteristics of the practice.
Our results demonstrate that recruitment for postal surveys on sexual health is positively influenced by presence of the investigated disease, older age, being from lesser deprived areas and Europid ethnicity. However after recruitment, completion of the study appeared to be independent of age, area based deprivation and presence of diabetes. However, Europids were more likely to complete the survey than South Asians in both groups of males with and without diabetes. Bowling A. Mode of questionnaire administration can have serious effects on data quality. J Public Health (Oxf). 2005;27(3):281–91. Siemiatycki J. A comparison of mail, telephone, and home interview strategies for household health surveys. Am J Public Health. 1979;69(3):238–45. Smeeth L, Fletcher AE, Stirling S, Nunes M, Breeze E, Ng E, Bulpitt CJ, Jones D. Randomised comparison of three methods of administering a screening questionnaire to elderly people: findings from the MRC trial of the assessment and management of older people in the community. BMJ. 2001;323(7326):1403–7. Johnson AM, Copas AJ, Erens B, Mandalia S, Fenton K, Korovessis C, Wellings K, Field J. Effect of computer-assisted self-interviews on reporting of sexual HIV risk behaviours in a general population sample: a methodological experiment. AIDS. 2001;15(1):111–5. Cook JV, Dickinson HO, Eccles MP. Response rates in postal surveys of healthcare professionals between 1996 and 2005: an observational study. BMC Health Serv Res. 2009;9:160. Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009;3:MR000008. Sonne-Holm S, Sorensen TI, Jensen G, Schnohr P. Influence of fatness, intelligence, education and sociodemographic factors on response rate in a health survey. J Epidemiol Commun Health. 1989;43(4):369–74. Korkeila K, Suominen S, Ahvenainen J, Ojanlatva A, Rautava P, Helenius H, Koskenvuo M. Non-response and related factors in a nation-wide health survey. Eur J Epidemiol. 2001;17(11):991–9. O'Neill TW, Marsden D, Matthis C, Raspe H, Silman AJ. Survey response rates: national and regional differences in a European multicentre study of vertebral osteoporosis. J Epidemiol Commun Health. 1995;49(1):87–93. Nicolosi A, Glasser DB, Kim SC, Marumo K, Laumann EO. Sexual behaviour and dysfunction and help-seeking patterns in adults aged 40–80 years in the urban population of Asian countries. BJU Int. 2005;95(4):609–14. Malavige LS, Wijesekara P, Seneviratne Epa D, Ranasinghe P, Levy JC. Ethnic differences in sexual dysfunction among diabetic and nondiabetic males: the Oxford Sexual Dysfunction Study. J Sex Med. 2013;10(2):500–8. Khunti K, Kumar S, Brodie J. Diabetes UK and South Asian Health Foundation recommendations on diabetes research priorities for British South Asians. London: Diabetes UK; 2009. Dreyer G, Hull S, Aitken Z, Chesser A, Yaqoob MM. The effect of ethnicity on the prevalence of diabetes and associated chronic kidney disease. QJM. 2009;102(4):261–9. Jayawardena R, Ranasinghe P, Byrne NM, Soares MJ, Katulanda P, Hills AP. Prevalence and trends of the diabetes epidemic in South Asia: a systematic review and meta-analysis. BMC Public Health. 2012;12:380. Kotaniemi JT, Hassi J, Kataja M, Jonsson E, Laitinen LA, Sovijarvi AR, Lundback B. Does non-responder bias have a significant effect on the results in a postal questionnaire study? Eur J Epidemiol. 2001;17(9):809–17. 
Hewitson P, Ward AM, Heneghan C, Halloran SP, Mant D. Primary care endorsement letter and a patient leaflet to improve participation in colorectal cancer screening: results of a factorial randomised trial. Br J Cancer. 2011;105(4):475–80. Malavige LS, Wijesekara PN, Jayaratne SD, Kathriarachchi ST, Ranasinghe P, Sivayogan S, Levy JC, Bancroft J. Linguistic validation of the Sexual Inhibition and Sexual Excitation Scales (SIS/SES) translated into five South Asian languages: Oxford Sexual Dysfunction Study (OSDS). BMC Res Notes. 2013;6:550. Chaturvedi N, Rai H, Ben-Shlomo Y. Lay diagnosis and health-care-seeking behaviour for chest pain in south Asians and Europeans. Lancet. 1997;350(9091):1578–83. Adamson J, Ben-Shlomo Y, Chaturvedi N, Donovan J. Ethnicity, socio-economic position and gender—do they affect reported health-care seeking behaviour? Soc Sci Med. 2003;57(5):895–904. Mancuso C, Glendon G, Anson-Cartwright L, Shi EJ, Andrulis I, Knight J. Ethnicity, but not cancer family history, is related to response to a population-based mailed questionnaire. Ann Epidemiol. 2004;14(1):36–43. Hussain-Gambles M, Leese B, Atkin K, Brown J, Mason S, Tovey P. Involving South Asian patients in clinical trials. Health Technol Assess. 2004;8(42):iii:1–109. Hussain-Gambles M, Atkin K, Leese B. Why ethnic minority groups are under-represented in clinical trials: a review of the literature. Health Soc Care Commun. 2004;12(5):382–8. Angus VC, Entwistle VA, Emslie MJ, Walker KA, Andrew JE. The requirement for prior consent to participate on survey response rates: a population-based survey in Grampian. BMC Health Serv Res. 2003;3(1):21. Bihan H, Laurent S, Sass C, Nguyen G, Huot C, Moulin JJ, Guegen R, Le Toumelin P, Le Clesiau H, La Rosa E, et al. Association among individual deprivation, glycemic control, and diabetes complications: the EPICES score. Diabetes Care. 2005;28(11):2680–5. Bhopal R, Hayes L, White M, Unwin N, Harland J, Ayis S, Alberti G. Ethnic and socio-economic inequalities in coronary heart disease, diabetes and risk factors in Europeans and South Asians. J Public Health Med. 2002;24(2):95–105. Rolnick SJ, Gross CR, Garrard J, Gibson RW. A comparison of response rate, data quality, and cost in the collection of data on sexual history and personal behaviors. Mail survey approaches and in-person interview. Am J Epidemiol. 1989;129(5):1052–61. Hazell ML, Morris JA, Linehan MF, Frank PI, Frank TL. Factors influencing the response to postal questionnaire surveys about respiratory symptoms. Prim Care Respir J. 2009;18(3):165–70. Ronmark EP, Ekerljung L, Lotvall J, Toren K, Ronmark E, Lundback B. Large scale questionnaire survey on respiratory health in Sweden: Effects of late- and non-response. Respir Med. 2009;103(12):1807–15. Nicoll A, Bassett K, Ulijaszek SJ. What's in a name? Accuracy of using surnames and forenames in ascribing Asian ethnic identity in English populations. J Epidemiol Commun Health. 1986;40(4):364–8. Martineau A, White M. What's not in a name. The accuracy of using names to ascribe religious and geographical origin in a British population. J Epidemiol Commun Health. 1998;52(5):336–7. Scott P, Edwards P. Personally addressed hand-signed letters increase questionnaire response: a meta-analysis of randomised controlled trials. BMC Health Serv Res. 2006;6:111. LSM, PW, DSE and JCL made substantial contribution to conception and study design and data collection. LSM, PR, PW and DSE were involved in refining the study design, statistical analysis and drafting the manuscript. 
LSM and PR critically revised the manuscript. All authors read and approved the final manuscript. Oxford Centre for Diabetes, Endocrinology and Metabolism, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, OX3 7LJ, UK Lasantha S. Malavige & Jonathan C. Levy Genetech Research Institute, Colombo, Sri Lanka Lasantha S. Malavige Ministry of Health Care and Nutrition, Colombo, Sri Lanka Pabasi Wijesekara & Dhanesha Seneviratne Epa Department of Pharmacology, Faculty of Medicine, University of Colombo, Colombo, Sri Lanka Priyanga Ranasinghe Pabasi Wijesekara Dhanesha Seneviratne Epa Jonathan C. Levy Correspondence to Lasantha S. Malavige. Malavige, L.S., Wijesekara, P., Seneviratne Epa, D. et al. Ethnicity and neighbourhood deprivation determines the response rate in sexual dysfunction surveys. BMC Res Notes 8, 410 (2015). https://doi.org/10.1186/s13104-015-1387-2 Europids
CommonCrawl
Temperature

For perception of temperature, see Thermoception.
Main article: Psychological effects of heat

Fig. 1: A picture of a gas of hard-sphere molecules. The temperature of the gas is a measure of the average energy of the molecules as they move and collide in the box. Here, the size of helium atoms relative to their spacing is shown to scale under 136 atmospheres of pressure. These room-temperature atoms have a certain average speed (slowed down here a trillionfold).

Temperature is a physical property of a system that underlies the common notions of hot and cold; something that is hotter has the greater temperature. Temperature is one of the principal parameters of thermodynamics. The temperature of a system is related to the average energy of microscopic motions in the system. For a solid, these microscopic motions are principally the vibrations of the constituent atoms about their sites in the solid. For an ideal monatomic gas, the microscopic motions are the translational motions of the constituent gas particles. Temperature is measured with thermometers that may be calibrated to a variety of temperature scales. Throughout the world (except in the U.S.), the Celsius scale is used for most temperature-measuring purposes. The entire scientific world (the U.S. included) measures temperature in Celsius, and thermodynamic temperature in kelvins. Many engineering fields in the U.S., especially high-tech ones, also use the Kelvin and Celsius scales. The bulk of the U.S., however (its lay people, industry, meteorology, and government), relies upon the Fahrenheit scale. Other engineering fields in the U.S. also rely upon the Rankine scale when working in thermodynamic-related disciplines such as combustion.

Overview

Temperature is a measure of the average energy of the particles (atoms or molecules) of a substance. This energy occurs as the translational motion of a particle or as internal energy of a particle, such as a molecular vibration or the excitation of an electron energy level. Although very specialized laboratory equipment is required to directly detect translational thermal motions, thermal collisions of atoms or molecules with small particles suspended in a fluid produce Brownian motion that can be seen with an ordinary microscope. The thermal motions of atoms are very fast, and temperatures close to absolute zero are required to observe them directly. For instance, when scientists at NIST achieved a record-setting cold temperature of 700 nK (billionths of a kelvin) in 1994, they used optical lattice laser equipment to adiabatically cool caesium atoms. They then turned off the entrapment lasers and directly measured atom velocities of 7 mm per second in order to calculate their temperature. Molecules, such as O2, have more degrees of freedom than single atoms: they can have rotational and vibrational motions as well as translational motion. An increase in temperature will cause the average translational energy to increase. It will also cause the energy associated with vibrational and rotational modes to increase. Thus a diatomic gas, with extra degrees of freedom like rotation and vibration, will require a higher energy input to change the temperature by a certain amount, i.e.
it will have a higher heat capacity than a monatomic gas. The process of cooling involves removing energy from a system. When there is no more energy able to be removed, the system is said to be at absolute zero, which is the point on the thermodynamic (absolute) temperature scale where all kinetic motion in the particles comprising matter ceases and they are at complete rest in the "classic" (non-quantum mechanical) sense. By definition, absolute zero is a temperature of precisely 0 kelvins (−273.15 °C or −459.67 °F).

Details

The formal properties of temperature are studied in thermodynamics and statistical mechanics. The temperature of a system at thermodynamic equilibrium is defined by a relation between the amount of heat $ \delta Q $ incident on the system during an infinitesimal quasistatic transformation and the resulting variation $ dS $ of its entropy during this transformation:

$ dS = \frac{\delta Q}{T} $

Contrary to entropy and heat, whose microscopic definitions are valid even far away from thermodynamic equilibrium, temperature can only be defined at thermodynamic equilibrium, or local thermodynamic equilibrium (see below). As a system receives heat, its temperature rises; similarly, a loss of heat from the system tends to decrease its temperature (with the uncommon exception of negative temperature; see below). When two systems are at the same temperature, no heat transfer occurs between them. When a temperature difference does exist, heat will tend to move from the higher-temperature system to the lower-temperature system, until they are at thermal equilibrium. This heat transfer may occur via conduction, convection or radiation (see heat for additional discussion of the various mechanisms of heat transfer). Temperature is also related to the amount of internal energy and enthalpy of a system: the higher the temperature of a system, the higher its internal energy and enthalpy are. Temperature is an intensive property of a system, meaning that it does not depend on the system size or the amount of material in the system. Other intensive properties include pressure and density. By contrast, mass, volume, and entropy are extensive properties, and depend on the amount of material in the system.

The role of temperature in nature

Temperature plays an important role in almost all fields of science, including physics, chemistry, and biology. Many physical properties of materials, including the phase (solid, liquid, gaseous or plasma), density, solubility, vapor pressure, and electrical conductivity, depend on the temperature. Temperature also plays an important role in determining the rate and extent to which chemical reactions occur. This is one reason why the human body has several elaborate mechanisms for maintaining the temperature at 37 °C, since temperatures only a few degrees higher can result in harmful reactions with serious consequences. Temperature also controls the type and quantity of thermal radiation emitted from a surface. One application of this effect is the incandescent light bulb, in which a tungsten filament is electrically heated to a temperature at which significant quantities of visible light are emitted. Temperature dependence of the speed of sound in air c, the density of air ρ and the acoustic impedance Z vs. temperature in °C:
T in °C   c in m/s   ρ in kg/m³   Z in N·s/m³
−10       325.4      1.341        436.5
−5        328.5      1.316        432.4
0         331.5      1.293        428.3
10        337.5      1.247        420.7

Temperature measurement

Main article: Temperature measurement; see also The International Temperature Scale.

Temperature measurement using modern scientific thermometers and temperature scales goes back at least as far as the early 18th century, when Gabriel Fahrenheit adapted a thermometer (switching to mercury) and a scale, both developed by Ole Christensen Rømer. Fahrenheit's scale is still in use, alongside the Celsius scale and the Kelvin scale.

Units of temperature

The basic unit of temperature (symbol: T) in the International System of Units (SI) is the kelvin (K). The Kelvin and Celsius scales are, by international agreement, defined by two points: absolute zero, and the triple point of specially prepared (VSMOW) water. Absolute zero is defined as being precisely 0 K and −273.15 °C. Absolute zero is where all kinetic motion in the particles comprising matter ceases and they are at complete rest in the "classic" (non-quantum mechanical) sense. At absolute zero, matter contains no heat energy. Also, the triple point of water is defined as being precisely 273.16 K and 0.01 °C. This definition does three things: 1) it fixes the magnitude of the kelvin unit as being precisely 1 part in 273.16 parts of the difference between absolute zero and the triple point of water; 2) it establishes that one kelvin has precisely the same magnitude as a one-degree increment on the Celsius scale; and 3) it establishes the difference between the two scales' null points as being precisely 273.15 kelvins (0 K = −273.15 °C and 273.16 K = 0.01 °C). Formulas for converting from these defining units of temperature to other scales can be found at Temperature conversion formulas. In the field of plasma physics, because of the high temperatures encountered and the electromagnetic nature of the phenomena involved, it is customary to express temperature in electronvolts (eV) or kiloelectronvolts (keV), where 1 eV = 11,605 K. In the study of QCD matter one routinely meets temperatures of the order of a few hundred MeV, equivalent to about $10^{12}$ K. For everyday applications, it is often convenient to use the Celsius scale, in which 0 °C corresponds to the temperature at which water freezes and 100 °C corresponds to the boiling point of water at sea level. In this scale a temperature difference of 1 degree is the same as a 1 K temperature difference, so the scale is essentially the same as the Kelvin scale, but offset by the temperature at which water freezes (273.15 K). Thus the following equation can be used to convert from degrees Celsius to kelvins:

$ \mathrm{K = [^\circ C] \left(\frac{1 \, K}{1\, ^\circ C}\right) + 273.15\, K} $

In the United States, the Fahrenheit scale is widely used. On this scale the freezing point of water corresponds to 32 °F and the boiling point to 212 °F. The following formula can be used to convert from Fahrenheit to Celsius:

$ \mathrm{\ \!^\circ C = \frac{5\, ^\circ C}{9\, ^\circ F}([^\circ F] - 32\, ^\circ F)} $

See temperature conversion formulas for conversions between most temperature scales.

Negative temperatures

See main article: Negative temperature. For some systems and specific definitions of temperature, it is possible to obtain a negative temperature.
A system with a negative temperature is not colder than absolute zero, but rather it is, in a sense, hotter than infinite temperature.

Articles about temperature ranges:
$10^{-12}$ K = 1 picokelvin (pK)
$10^{-9}$ K = 1 nanokelvin (nK)
$10^{-6}$ K = 1 microkelvin (µK)
$10^{-3}$ K = 1 millikelvin (mK)
$10^{0}$ K = 1 kelvin
$10^{1}$ K = 10 kelvins
$10^{2}$ K = 100 kelvins
$10^{3}$ K = 1,000 kelvins = 1 kilokelvin (kK)
$10^{4}$ K = 10,000 kelvins = 10 kK
$10^{5}$ K = 100,000 kelvins = 100 kK
$10^{6}$ K = 1 megakelvin (MK)
$10^{9}$ K = 1 gigakelvin (GK)
$10^{12}$ K = 1 terakelvin (TK)
See Orders of magnitude (temperature).

Theoretical foundation of temperature

Zeroth-law definition of temperature

While most people have a basic understanding of the concept of temperature, its formal definition is rather complicated. Before jumping to a formal definition, let us consider the concept of thermal equilibrium. If two closed systems with fixed volumes are brought together so that they are in thermal contact, changes may take place in the properties of both systems. These changes are due to the transfer of heat between the systems. When a state is reached in which no further changes occur, the systems are in thermal equilibrium. A basis for the definition of temperature can now be obtained from the so-called zeroth law of thermodynamics, which states that if two systems, A and B, are in thermal equilibrium and a third system C is in thermal equilibrium with system A, then systems B and C will also be in thermal equilibrium (being in thermal equilibrium is a transitive relation; moreover, it is an equivalence relation). This is an empirical fact, based on observation rather than theory. Since A, B, and C are all in thermal equilibrium, it is reasonable to say each of these systems shares a common value of some property. We call this property temperature. Generally, it is not convenient to place any two arbitrary systems in thermal contact to see if they are in thermal equilibrium and thus have the same temperature. Also, it would only provide an ordinal scale. Therefore, it is useful to establish a temperature scale based on the properties of some reference system. Then, a measuring device can be calibrated based on the properties of the reference system and used to measure the temperature of other systems. One such reference system is a fixed quantity of gas. The ideal gas law indicates that the product of the pressure and volume (P · V) of a gas is directly proportional to the temperature:

$ P \cdot V = n \cdot R \cdot T $ (1)

where T is temperature, n is the number of moles of gas and R is the gas constant. Thus, one can define a scale for temperature based on the corresponding pressure and volume of the gas: the temperature in kelvins is the pressure in pascals of one mole of gas in a container of one cubic metre, divided by the gas constant, 8.31 J/(mol·K). In practice, such a gas thermometer is not very convenient, but other measuring instruments can be calibrated to this scale. Equation 1 indicates that for a fixed volume of gas, the pressure increases with increasing temperature. Pressure is just a measure of the force applied by the gas on the walls of the container and is related to the energy of the system. Thus, we can see that an increase in temperature corresponds to an increase in the thermal energy of the system.
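As a small illustration (my own sketch, not part of the original article), Equation 1 can be inverted to act as the "gas thermometer" described above: measure P, V and n for a fixed quantity of gas and solve for T.

```python
R = 8.314  # gas constant, J mol^-1 K^-1

def gas_thermometer_temperature(pressure_pa: float, volume_m3: float, moles: float) -> float:
    """Invert Eq. 1 (P*V = n*R*T) to read temperature from a fixed gas sample."""
    return pressure_pa * volume_m3 / (moles * R)

# One mole confined to one cubic metre at about 2271 Pa reads roughly 273.15 K (0 degC).
print(gas_thermometer_temperature(2271.0, 1.0, 1.0))
```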
When two systems of differing temperature are placed in thermal contact, the temperature of the hotter system decreases, indicating that heat is leaving that system, while the cooler system is gaining heat and increasing in temperature. Thus heat always moves from a region of higher temperature to a region of lower temperature, and it is the temperature difference that drives the heat transfer between the two systems.

Temperature in gases

For a monatomic ideal gas the temperature is related to the translational motion or average speed of the atoms. The kinetic theory of gases uses statistical mechanics to relate this motion to the average kinetic energy of atoms and molecules in the system. This average energy is independent of particle mass, which seems counterintuitive to many people. Although the temperature is related to the average kinetic energy of the particles in a gas, each particle has its own energy which may or may not correspond to the average. However, after an examination of some basic physics equations it makes sense. The second law of thermodynamics states that any two given systems, when interacting with each other, will eventually reach the same average energy. Temperature is a measure related to the average kinetic energy of a system. The formula for the kinetic energy of an atom is:

$ E_k = \frac{1}{2} m v^2 $

(Note that a calculation of the kinetic energy of a more complicated object, such as a molecule, is slightly more involved. Additional degrees of freedom are available, so molecular rotation or vibration must be included.) Thus, particles of greater mass (say a neon atom relative to a hydrogen molecule) will move more slowly than lighter counterparts, but will have the same average energy. This average energy is independent of the mass because of the nature of a gas: all particles are in random motion, colliding with other gas molecules, with any solid objects that may be in the area and with the container itself (if there is one). A visual illustration of this from Oklahoma State University makes the point clearer. Particles with different mass have different velocity distributions, but the average kinetic energy is the same because of the ideal gas law. In a gas the distribution of energy (and thus of speeds) of the particles corresponds to the Boltzmann distribution.

Temperature of the vacuum

The temperature of an object is proportional to the average kinetic energy of the molecules in it. In a pure vacuum, there are no molecules. There is nothing to measure the kinetic energy of, and temperature is undefined. If a thermometer were placed in a vacuum, the reading would be a measurement of the internal temperature of the thermometer, not of the vacuum which surrounds it. All objects emit black-body radiation. Over time, a thermometer in a pure vacuum will radiate away thermal energy, decreasing in temperature indefinitely until it reaches the zero-point energy limit. In practice, there is no such thing as a pure vacuum, since there will always be photons associated with the black-body radiation of the walls of the vacuum. A thermometer orbiting the Earth can easily absorb energy from sunlight faster than it can radiate it away. This can lead to a dramatic temperature increase. A thermometer isolated from solar radiation (in the shade of a larger body, for example) is still exposed to the cosmic microwave background radiation. In this case, the temperature will change until the rate of energy loss and gain are in equilibrium.
At this point, the thermometer will have a temperature of 2.725 K, which is often referred to as the temperature of space.

Second-law definition of temperature
In the previous section temperature was defined in terms of the zeroth law of thermodynamics. It is also possible to define temperature in terms of the second law of thermodynamics, which deals with entropy. Entropy is a measure of the disorder in a system. The second law states that any process will result in either no change or a net increase in the entropy of the universe. This can be understood in terms of probability. Consider a series of coin tosses. A perfectly ordered system would be one in which every coin toss comes up the same way, all heads or all tails. For any number of coin tosses, only one combination of outcomes corresponds to each of these perfectly ordered situations. On the other hand, there are multiple combinations that can result in disordered or mixed systems, where some fraction are heads and the rest tails. As the number of coin tosses increases, the number of combinations corresponding to imperfectly ordered systems increases. For a very large number of coin tosses, the number of combinations corresponding to ~50% heads and ~50% tails dominates, and obtaining an outcome significantly different from 50/50 becomes extremely unlikely. Thus the system naturally progresses to a state of maximum disorder or entropy. We previously stated that temperature governs the flow of heat between two systems, and we have just argued that the universe (and, we would expect, any natural system) tends to progress so as to maximize entropy. Thus, we would expect there to be some relationship between temperature and entropy. In order to find this relationship, let us first consider the relationship between heat, work and temperature. A heat engine is a device for converting heat into mechanical work, and analysis of the Carnot heat engine provides the necessary relationships we seek. The work from a heat engine corresponds to the difference between the heat put into the system at the high temperature, qH, and the heat ejected at the low temperature, qC. The efficiency is the work divided by the heat put into the system, or: $ \textrm{efficiency} = \frac {w_{cy}}{q_H} = \frac{q_H-q_C}{q_H} = 1 - \frac{q_C}{q_H} $ (2) where wcy is the work done per cycle. We see that the efficiency depends only on qC/qH. Because qC and qH correspond to heat transfer at the temperatures TC and TH, respectively, qC/qH should be some function of these temperatures: $ \frac{q_C}{q_H} = f(T_H,T_C) $ (3) Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, a heat engine operating between T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and T2, and the second between T2 and T3. This can only be the case if: $ q_{13} = \frac{q_1}{q_3} = \frac{q_1}{q_2}\cdot \frac{q_2}{q_3} $ which implies: $ q_{13} = f(T_1,T_3) = f(T_1,T_2)f(T_2,T_3) $ Since the first function is independent of T2, this temperature must cancel on the right side, meaning f(T1,T3) is of the form g(T1)/g(T3) (i.e. f(T1,T3) = f(T1,T2)f(T2,T3) = g(T1)/g(T2)· g(T2)/g(T3) = g(T1)/g(T3)), where g is a function of a single temperature.
We can now choose a temperature scale with the property that: $ \frac{q_C}{q_H} = \frac{T_C}{T_H} $ (4) Substituting Equation 4 back into Equation 2 gives a relationship for the efficiency in terms of temperature: $ \textrm{efficiency} = 1 - \frac{q_C}{q_H} = 1 - \frac{T_C}{T_H} $ (5) Notice that for TC = 0 K the efficiency is 100% and that the efficiency becomes greater than 100% below 0 K. Since an efficiency greater than 100% violates the first law of thermodynamics, this implies that 0 K is the minimum possible temperature. In fact, the lowest temperature ever obtained in a macroscopic system was 20 nK, which was achieved in 1995 at NIST. Subtracting the right-hand side of Equation 5 from the middle portion and rearranging gives: $ \frac {q_H}{T_H} - \frac{q_C}{T_C} = 0 $ where the negative sign indicates heat ejected from the system. This relationship suggests the existence of a state function, S, defined by: $ dS = \frac {dq_\mathrm{rev}}{T} $ (6) where the subscript indicates a reversible process. The change of this state function around any cycle is zero, as is necessary for any state function. This function corresponds to the entropy of the system, which we described previously. Rearranging Equation 6 gives a new definition of temperature in terms of entropy and heat: $ T = \frac{dq_\mathrm{rev}}{dS} $ (7) For a system in which the entropy S is a function S(E) of its energy E, the temperature T is given by: $ \frac{1}{T} = \frac{dS}{dE} $ (8) The reciprocal of the temperature is the rate of increase of entropy with energy.
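To make the second-law relations concrete, the short sketch below (Python) evaluates the Carnot efficiency of Equation 5 and checks the entropy bookkeeping of Equation 4 for a reversible cycle. It assumes an ideal reversible engine and uses illustrative numbers only.

```python
# Carnot efficiency and reversible-cycle entropy bookkeeping (Eqs. 4-6).
# Illustrative only: an ideal reversible engine is assumed.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Efficiency = 1 - Tc/Th for a reversible engine (temperatures in kelvin)."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require 0 < Tc < Th on the absolute scale")
    return 1.0 - t_cold_k / t_hot_k

def entropy_balance(q_hot_j: float, t_hot_k: float, t_cold_k: float):
    """For a reversible cycle, qC/qH = TC/TH, so the net entropy change is zero."""
    q_cold_j = q_hot_j * t_cold_k / t_hot_k              # Eq. 4
    net_entropy = q_hot_j / t_hot_k - q_cold_j / t_cold_k
    return q_cold_j, net_entropy

if __name__ == "__main__":
    print(carnot_efficiency(500.0, 300.0))               # 0.4 -> 40% maximum efficiency
    print(entropy_balance(1000.0, 500.0, 300.0))         # (600.0 J rejected, 0.0 J/K net)
```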
Mudstone creep experiment and nonlinear damage model study under cyclic disturbance load Jun-guang Wang1, Qing-lin Sun1, Bing Liang1, Peng-jin Yang1 & Qing-rong Yu1 Scientific Reports volume 10, Article number: 9305 (2020) Cite this article Solid Earth sciences To study the creep characteristics of mudstone under disturbed load, creep rock triaxial compression disturbance tests under different disturbance amplitudes and frequencies are conducted using a self-made triaxial disturbed creep test bench for rock. The influence of different factors on the creep deformation law of each stage is analyzed. The results show that the disturbance effect has a significant impact on the creep properties of mudstone, and various factors have different effects on the creep stages. The instantaneous deformation variable, creep decay time, and steady creep rate change exponentially with the increase in axial pressure, and increase linearly with the increase in disturbance amplitude and disturbance frequency. The disturbance amplitude has a more significant effect on the instantaneous deformation, steady-state creep rate, and accelerated creep. According to the analysis of the test results, a nonlinear disturbance creep damage model based on Burger's model is established. The model is identified and calculated by the improved least squares method based on pattern search. The influence of different disturbance factors on the creep parameters is analyzed. The model fitting results and experimental results are compared to demonstrate that the model is used to simulate different disturbances. It was observed that rock creep under certain conditions exhibits certain adaptability. It is of great significance to carry out rock disturbance creep experiments and study the theory of disturbance creep to ensure the long-term stability of deep rock mass in complex environment. With the gradual depletion of shallow resources and development of resources, the number of coal mines with a depth of more than one kilometer is increasing in China. Owing to the particularity of the environment and the complexity of the stress field of deep rock mass, specifically when the deep rock mass is in a complicated environment, the stress state of some middle strength roadways that surround rock has approached the limit of coal and rock strength; thus, these roadways are vulnerable to instability and failure caused by tunneling or mining disturbances1. Creep refers to the phenomenon of accumulation of permanent strain with time under a constant external force. The creep characteristics of rock are one of the important properties of rock mechanics. Therefore, studying the creep mechanical properties of rock under load disturbances has important practical value for ensuring the long-term stability of rock mass. Numerous scholars have conducted extensive research in the abovementioned fields and obtained useful results. Jindřich Šancer and Bagde M.N.2,3,4 qualitatively studied and analyzed the rheological behavior of sandstone under cyclic loading with variable amplitudes and frequencies. The results showed that the creep properties of sandstone depend strongly on the stress state of rock. Gao Yangfa and Wang Bo5,6 developed an instrument based on rheological disturbance and used it to test rock rheology and its disturbance effect. The disturbance effect of rock rheology was proposed, and the factors that influenced this effect were analyzed. 
Cui Xihai7 conducted an impact load creep test on mudstone and established a disturbance creep rheological model. Song Dazhao8 studied the creep damage characteristics of a coal-bearing rock mass under disturbing actions. Pu Chengzhi and Wang Qihu9,10 considered the process of rock damage and cracking and established nonlinear creep damage models respectively. Throughout the above studies, most scholars examined the disturbance load when the disturbance creep is applied by an impact load and rarely analyzed the rock disturbance creep laws with respect to disturbance amplitude and the disturbance frequency. Therefore, in this study, based on the previous research, a mudstone disturbance creep test was conducted under different axial pressures, disturbance amplitudes, and disturbance frequencies. The influence of various factors on the creep characteristics was analyzed, and on this basis, a nonlinear perturbation creep damage model was established. The ultimate objective of this study was to provide basic data for studying the creep mechanical properties of deep rock masses in complex environments. Mudstone disturbance creep test Disturbance load and Disturbance creep Disturbance load refers to a group or groups of loads acting on rock only in an instant or within a certain time interval. Passing this instant or time interval, the group or groups of loads disappear immediately. Disturbance creep refers to the deformation of rock under a certain stable stress state during the action of a certain disturbance load. The disturbance load in this paper is a sinusoidal disturbance load with different disturbance amplitude and frequency applied to mudstone under certain axial pressure and confining pressure. The loading process is shown in Fig. 1. Schematic diagram of load application. Mudstone disturbance creep test equipment The test equipment is a rock triaxial disturbance creep test rig, which comprises a triaxial pressure chamber, an axial compression and confining pressure loading system, a disturbance loading and a data monitoring system. The test equipment is shown in Fig. 2. The axial compression loading is realized by adding weights on the weight bench, and the weights conduct the pressure to the triaxial pressure chamber through the lever structure, thus attaining the purpose of applying axle load to the test piece. The confining pressure is loaded by injecting nitrogen through an air pump, and the range of confining pressure loading is 0~10 MPa; The disturbance loading system is composed of YZU-30-6B vibration motor and SRMCO-VM05 frequency converter. The disturbance amplitude is adjusted by adjusting the angle of the rotor of the vibration motor. The disturbance frequency is adjusted by adjusting the SRMCO-VM05 frequency converter, the disturbance frequency range is 0~20 Hz; The data monitoring system consists of TOPRIE multiplex data recorder and DHDAS dynamic strain gauge. The top of the triaxial pressure chamber is equipped with GH-4 pressure sensor, and GH-4 pressure sensor is connected with TOPRIE multiplex data recorder. The real-time axial pressure data of the sample can be viewed on the TOPRIE multiplex data recorder. The accuracy of axial pressure measurement is not more than 0.05% of the measuring range and the minimum resolution is 0.6 N. The change signal of the sample strain is transmitted to DHDAS dynamic strain gauge through BX120-20AA strain gauge. The other end of DHDAS dynamic strain gauge is connected to a computer. 
The real-time strain data of rock samples can be viewed at the computer end. The strain measurement accuracy is not more than 0.5% ± μm of the measuring range and the minimum resolution is 0.1 μm. Disturbance creep test rig. (1) Air pump. (2) Six-way valve. (3) Stabilizer tank. (4) Pillar. (5) Rigid beam. (6) Vibrator. (7) weight bench. (8) Frequency modulator. (9) GH-4 pressure sensor. (10) Triaxial pressure chamber. (11) Rock specimen. (12) Base. (13) Computer. (14) Dynamic strain gauge. (15) TOPRIE multiplex data recorder. Test scheme and process The rock samples used in the test were cores of mudstone obtained from the Pingdingshan No. 12 mine, the sampling depth was 759 m, and the average density was 2.31 g/cm3. Mudstone is mainly composed of clastic minerals, clay minerals and carbonate minerals, and contains a small amount of pyrite, which is easy to expand when encountering water. The rock samples were ground to the specifications of the international standard (Φ: 50 mm × 100 mm). Discrete samples were eliminated after ultrasonic testing, and 12 samples with good consistency were selected for testing. Six specimens were selected for mudstone uniaxial and triaxial compression tests. The mudstone disturbance creep test used a grading loading method to conduct triaxial compression creep tests with different axial pressures, disturbance amplitudes, and disturbance frequencies. According to the basic mechanical properties of the mudstone, the axial pressure and confining pressure were initially set to 10 and 3 MPa, respectively. The confining pressure was maintained constant, and 4 leves of axial pressure were gradually added in loading gradients of 5 MPa. The disturbance amplitudes were 1.6, 3.2, and 4.8 MPa, and the disturbance frequencies were 1, 3, and 5 Hz. The mudstone disturbance creep test scheme is shown in Table 1. Table 1 Scheme of disturbance creep test. The specific test process is as follows: The model BX120-20AA strain gauge is selected and pasted in the middle of the test piece and the relative position of the strain gauge is "T" shape. After the test piece is sealed by thermoplastic sleeve, it is placed into the triaxial pressure chamber, and the air tightness is checked. The TOPRIE multiplex data recorder and the DHDAS dynamic strain gauge are connected. The vibration motor is connected to the inverter, and the amplitude and frequency of the disturbance power are adjusted to the specified values. Simultaneously, the axial pressure and the confining pressure are applied and maintained constant after adding the required value. The disturbance load is applied, the next axial load is applied after 5 hours. The same specimen disturbance amplitude and frequency remain unchanged, and the confining pressure is maintained constant during the test; When the strain of rock increases with time, the rock will enter the stage of accelerated creep, until the rock has a one-time fracture surface, and the rock creep experiment ends; the stress and strain data are read and processed to analyze the creep characteristics of the mudstone by different factors. This experiment is carried out at room temperature, and the rock sample is in the state of natural water content. Analysis of mudstone disturbance creep test results Analysis of creep curve characteristics The creep curves of mudstone under different disturbance conditions according to the data of the axial strain ε and creep time t of mudstone under different disturbance conditions are plotted in Fig. 3. 
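Before turning to the measured creep curves, the applied load history described above can be reconstructed schematically. The sketch below (Python) superimposes a sinusoidal disturbance of chosen amplitude and frequency on the staged axial stress; the 5 h hold per level follows the test scheme, while the sampling rate and function names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

# Illustrative reconstruction of the loading history: a step-wise axial stress
# held for 5 h per level, with a sinusoidal disturbance of amplitude d_amp_mpa
# and frequency d_freq_hz superimposed (cf. the loading scheme of Fig. 1).

def axial_load_history(levels_mpa=(10, 15, 20, 25), hold_s=5 * 3600,
                       d_amp_mpa=1.6, d_freq_hz=1.0, sample_hz=20.0):
    """Return time (s) and axial stress (MPa) arrays for the staged loading."""
    n_per_level = int(hold_s * sample_hz)
    t = np.arange(n_per_level * len(levels_mpa)) / sample_hz       # time in seconds
    static = np.repeat(levels_mpa, n_per_level)                    # step-wise axial stress
    disturbance = d_amp_mpa * np.sin(2 * np.pi * d_freq_hz * t)    # sinusoidal disturbance
    return t, static + disturbance

t, sigma1 = axial_load_history(d_amp_mpa=3.2, d_freq_hz=5.0)
print(sigma1[:5])   # stress samples near the start of the first 10 MPa stage
```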
When the axial load is applied, the samples produce an obvious transient strain and then enter the decay (attenuation) creep phase. Under lower axial stress, the creep rate of the mudstone tends to a stable value; as the axial pressure increases, the creep rate increases, and the mudstone finally enters the accelerated creep stage, producing large deformation in a short time. At a given constant stress, creep deformation is smaller without disturbance than with it. Compared with the undisturbed condition, the time and stress required for the mudstone to reach the same creep deformation are reduced under the disturbed condition. For disturbance amplitudes of 1.6, 3.2, and 4.8 MPa, the critical axial stresses at which the samples enter the accelerated creep stage are 25, 25, and 20 MPa, respectively; for disturbance frequencies of 1, 3, and 5 Hz, the samples all enter the accelerated creep stage at an axial stress of 25 MPa. Thus, as the disturbance amplitude increases, the stress threshold at which the mudstone enters accelerated creep decreases, while the disturbance frequency has no effect on this stress threshold. Creep curves under various conditions. Instantaneous deformation law Instantaneous deformation refers to the deformation of the rock at the initial stage of loading, which is mainly caused by the tight closure of the internal pores and fissures of the rock. Figure 4 indicates that, under both undisturbed and disturbed conditions, the instantaneous deformation of each mudstone specimen changes as a negative exponential function of increasing axial pressure. According to the fitting analysis, the relationship between the instantaneous deformation of the mudstone, the axial stress, and the frequency and amplitude of the stress disturbance is as follows: $$\Delta \varepsilon ={p}_{1}(\Delta \sigma ,f){\sigma }_{1}^{-{q}_{1}(\Delta \sigma ,f)}$$ $${p}_{1}(\Delta \sigma ,f)=0.0901\Delta {\sigma }^{2}-0.0811{f}^{2}-1.03\Delta \sigma +0.218f+5.93,\,{R}^{2}=0.998$$ $${q}_{1}(\Delta \sigma ,f)=-0.00401\Delta {\sigma }^{2}-0.0112{f}^{2}+0.0829\Delta \sigma -0.0271f-1.13,\,{R}^{2}=0.99$$ Instantaneous deformation variable with axial pressure. In the equations, Δε is the instantaneous strain, Δσ is the disturbance amplitude, and f is the disturbance frequency; \({p}_{1}(\Delta \sigma ,f)\) and \({q}_{1}(\Delta \sigma ,f)\) are functions of Δσ and ƒ, given in Eqs. (2) and (3). Figure 5(a,b) indicates that when the axial pressure is constant, the instantaneous deformation Δε of the mudstone increases linearly with the disturbance amplitude Δσ and the disturbance frequency. When the axial pressure is 15, 20, and 25 MPa, the slopes of Δε with respect to Δσ and ƒ are 0.0116, 0.0183, and 0.0155 and 0.00840, 0.0114, and 0.0122, respectively, which indicates that the instantaneous deformation of the mudstone is more sensitive to an increase in the disturbance amplitude. Relationship between instantaneous deformation and disturbance amplitude and frequency. (a) Relationship between instantaneous deformation variable and Δσ. (b) Relationship between instantaneous deformation variable and f. Rules of creep decay time After instantaneous compaction of the rock, the mudstone enters the stage of decay creep, in which the creep rate decreases with time.
When the creep rate decreases to a steady state, the decay creep stage ends. The time is the decay creep time. The creep decay time decreases with a negative exponential relationship as the axial stress increases (as shown in Fig. 6) and tends to be stable. The relationship between the decay time of mudstone creep and the axial stress is as follows: $${t}_{r}={p}_{2}(\Delta \sigma ,f){e}^{-{q}_{2}(\Delta \sigma ,f){\sigma }_{1}}$$ where tr is the creep decay time, Δσ is the disturbance amplitude, ƒ is the disturbance frequency. \({p}_{2}(\Delta \sigma ,f)\) and \({q}_{2}(\Delta \sigma ,f)\) are functional relationships with Δσ and ƒ respectively. \({p}_{2}(\Delta \sigma ,f)\) and \({q}_{2}(\Delta \sigma ,f)\) are shown in Eq. (5) and Eq. (6). $${p}_{2}(\Delta \sigma ,f)=363\Delta {\sigma }^{2}+11.1{f}^{2}-30.1\Delta \sigma -108f+585\,{R}^{2}=0.961$$ $${q}_{2}(\Delta \sigma ,f)=2.13\times {10}^{-4}\Delta {\sigma }^{2}+0.0018{f}^{2}-0.00286\,\Delta \sigma -0.0237f+0.129,\,{R}^{2}=0.932$$ Relationship between creep decay time and axial pressure. Figure 7(a,b) show that when the mudstones experienced the same axial stress, the creep decay time shows a positive linear relationship with an increase in the disturbance amplitude and frequency. When the axial stress values are 15 MPa, 20 MPa, and 25 MPa, the slopes of the linear relationship of creep decay time with Δσ and ƒ are 5.06, 9.30, and 9.47 and 6.19, 11.67, and 12.91, respectively. When the stress state remains unchanged, the influence of the disturbance frequency on the creep decay time is more considerable than the disturbance amplitude. Relationship between creep decay time and disturbance amplitude and frequency. (a) Relationship between creep decay time and Δσ. (b) Relationship between creep decay time and f. Laws of steady state creep rate After stage of the decay creep, the rock enters the steady creep stage, in which the rock creep rate is almost unchanged at the same stress level and the creep curve is straight. The steady creep rate rises with the increase of axial pressure. Figure 8 shows that under different disturbances, the steady state creep rate of mudstone increases exponentially with the increase in axial stress. The steady state creep rate can be obtained through the fitting analysis; further, the relationship between the axial pressure changes is as follows: $$\mathop{\varepsilon }\limits^{\cdot }={p}_{3}(\Delta \sigma ,f){e}^{-{q}_{3}(\Delta \sigma ,f)\cdot {\sigma }_{1}}$$ $${p}_{3}=-0.0144\Delta {\sigma }^{2}+0.00333{f}^{2}+0.0581\Delta \sigma -0.0324f+0.680,\,{R}^{2}=0.996$$ $${q}_{3}=0.00172\Delta {\sigma }^{2}-3.53\times {10}^{-5}{f}^{2}-0.00418\Delta \sigma +0.00194f+0.0579,\,{R}^{2}=0.994$$ Relationship between steady-state creep rate and axial pressure. \(\dot{\varepsilon }\) is the steady state creep rate, Δσ is the disturbance amplitude, f is the disturbance frequency. \({p}_{3}(\Delta \sigma ,f)\) and \({q}_{3}(\Delta \sigma ,f)\) are the functional relationships between Δσ and ƒ respectively. \({p}_{3}(\Delta \sigma ,f)\) and \({q}_{3}(\Delta \sigma ,f)\) are shown in Eqs. (8) and (9). From Fig. 9(a,b), the steady creep rate increases linearly with the increase in the disturbance amplitude and frequency under the same axial stress. The fitting relationship in the figure shows that when the axial stress values are 15 MPa, 20 MPa, and 25 MPa, the slopes of the steady-state creep rate with the increase in Δσ and ƒ are 0.0603, 0.174, and 0.368, and 0.0506, 0.115, and 0.183, respectively. 
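The empirical fits in Eqs. (1) to (9) can be evaluated directly from the quoted coefficients. The sketch below (Python) transcribes them as written, with axial stress in MPa, disturbance amplitude in MPa, and frequency in Hz; it is an illustration of the fitted surfaces, not a re-derivation of the regression.

```python
import math

# Evaluate the fitted relations for instantaneous strain (Eqs. 1-3),
# creep decay time (Eqs. 4-6) and steady-state creep rate (Eqs. 7-9).
# Coefficients are transcribed from the equations above, as printed.

def instantaneous_strain(sigma1, d_amp, f):
    p1 = 0.0901*d_amp**2 - 0.0811*f**2 - 1.03*d_amp + 0.218*f + 5.93
    q1 = -0.00401*d_amp**2 - 0.0112*f**2 + 0.0829*d_amp - 0.0271*f - 1.13
    return p1 * sigma1 ** (-q1)                    # Eq. (1)

def creep_decay_time(sigma1, d_amp, f):
    p2 = 363*d_amp**2 + 11.1*f**2 - 30.1*d_amp - 108*f + 585
    q2 = 2.13e-4*d_amp**2 + 0.0018*f**2 - 0.00286*d_amp - 0.0237*f + 0.129
    return p2 * math.exp(-q2 * sigma1)             # Eq. (4)

def steady_creep_rate(sigma1, d_amp, f):
    p3 = -0.0144*d_amp**2 + 0.00333*f**2 + 0.0581*d_amp - 0.0324*f + 0.680
    q3 = 0.00172*d_amp**2 - 3.53e-5*f**2 - 0.00418*d_amp + 0.00194*f + 0.0579
    return p3 * math.exp(-q3 * sigma1)             # Eq. (7), exponent sign as printed

# Example: the 3.2 MPa / 3 Hz disturbance case at 20 MPa axial stress.
print(instantaneous_strain(20, 3.2, 3),
      creep_decay_time(20, 3.2, 3),
      steady_creep_rate(20, 3.2, 3))
```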
Therefore, the influence of the disturbance amplitude on the steady-state creep rate is more significant, and this effect becomes increasingly evident as the axial stress increases. Relationship between steady creep rate and disturbance amplitude and frequency. (a) Relationship between steady-state creep rate and Δσ. (b) Relationship between steady-state creep rate and f. Laws of accelerated creep deformation When the creep rate of rock increases sharply with time, the rock begins to enter the accelerated creep stage. With time, strength of the rock decreases. When the strength of the mudstone is lower than the applied load, the mudstone begins to fail. Under the disturbance load, the damage caused by different degrees of disturbance to the mudstone is also different, especially in the accelerated creep stage. When the disturbance amplitude values are 1.6, 3.2, and 4.8 MPa, the time periods at which the mudstone enters the accelerated creep phase are 1338 min, 1253 min, and 878 min, respectively. When the disturbance frequency is 1, 3, and 5 Hz, the time periods at which the mudstone enters the accelerated creep phase are 1294, 1253, and 1114 min, respectively. In the accelerated creep phase, the creep holding time and amount of deformation under different disturbance conditions are also different (see Fig. 10). The figure shows that the acceleration creep variable increases with disturbance amplitude and frequency. Figure 8(a) shows that as the disturbance amplitude increases, the acceleration creep time decreases. When Δσ = 4.8 MPa, the acceleration creep phase lasts only 21 min, with the disturbance. With an increase in frequency, changes of the accelerated creep phase of the mudstone is not evident, and the acceleration creep duration fluctuates around 70 min. Accelerated creep law under different conditions. (a) Different disturbance amplitude. (b) Different disturbance frequencies. Establishment of mudstone disturbing creep damage model Proposed creep damage model According to the analysis of mudstone creep disturbance test results under different disturbance conditions, the creep process of mudstone under the disturbing action has the following characteristics: Part of the elastic strain is generated during transient loading, and elastic elements should be present in the model; After loading, the mudstone deforms instantaneously and then enters the stage of decay creep. As time goes on, the creep rate of mudstone tends to be stable and the rock enters the steady creep stage, the model should contain viscous components; With an increase in axial stress, the increase in strain with time does not tend to stabilize and the rock is finally fails. However, the rock has an irreversible accelerated creep with viscoplasticity; Under different stress and disturbance conditions, the creep characteristics of mudstone change significantly, that is, the creep parameters of mudstone change with stress, time, and disturbance; thus, the model should include the damage factors considering the deterioration of rock parameters. Burger's model can effectively reflect the decay and steady creep stages of rock creep process; however, it cannot describe the acceleration phase of creep. Therefore, the nonlinear viscoplastic creep element (NVPB body) is introduced according to the literature11, and this element is connected in series with Burger's model; thus, the whole process of creep can be described more completely. A nonlinear viscoplastic creep element (NVPB body) is shown in Fig. 11. 
Nonlinear viscoplastic creep components. The creep equation of the NVPB body under stress is as follows: $$\varepsilon (t)=\begin{cases}0 & {\sigma }_{0} < {\sigma }_{S}\\ \frac{{\sigma }_{0}-{\sigma }_{S}}{{\eta }_{{\rm{N}}}}{(t-{t}_{{\rm{N}}})}^{n} & {\sigma }_{0}\ge {\sigma }_{S}\end{cases}$$ In the equation, \({\sigma }_{0}={\sigma }_{1}-{\sigma }_{3}\), where \({\sigma }_{1}\) is the first principal stress and \({\sigma }_{3}\) is the third principal stress; \({\sigma }_{S}\) is the stress threshold; t is the creep time; \({\eta }_{{\rm{N}}}\) is the viscosity coefficient of the NVPB body; n is the creep index; and tN is the time at which accelerated creep begins. The test results show that the creep characteristics of mudstone under the disturbed state are related to loading time, axial stress, disturbance amplitude, and disturbance frequency. To establish a mudstone creep damage model that can reflect the transient, decay, steady, and accelerated creep stages, this study introduces a damage variable according to Kachanov12 creep damage theory, in which the damage factor is a function of axial stress, disturbance amplitude, disturbance frequency, and creep time, as described below: $$D=f({\sigma }_{1},\Delta \sigma ,f,t)$$ According to the results of a large number of rock creep damage tests, the creep damage variable of rock increases with time as a negative exponential function during the creep process13,14. In this study, the damage variable is simplified to Eq. (12): $$D=1-{{\sigma }_{1}}^{-\alpha (\Delta \sigma ,f)t}$$ where α is a rock damage parameter affected by the disturbance frequency and amplitude. For different values of α and σ1, the damage factor D varies differently with time t; the trend is shown in Fig. 12. D curves with time for different α and σ1. The nonlinear viscoelastic-plastic creep model is modified for the creep properties of mudstone under disturbance. The model consists of a modified Maxwell body, a modified Kelvin body, and a modified nonlinear viscoplastic body connected in series. The schematic diagram of the model is shown in Fig. 13, where 1 and 2 are the elastic modulus and viscosity coefficient of the Maxwell body, and 3 and 4 are the elastic modulus and viscosity coefficient of the Kelvin body, respectively. Improved disturbance-visco-elastoplastic damage creep model.
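A brief numerical sketch of the damage variable in Eq. (12) may help: it shows how D grows from 0 toward 1 with creep time and how a larger α or a higher axial stress accelerates the accumulation of damage. The α values used here are arbitrary illustration values, not fitted parameters, and the closed-form creep equations that apply the factor (1 - D) follow in the text below.

```python
# Damage variable of Eq. (12): D(t) = 1 - sigma1**(-alpha * t).
# alpha depends on the disturbance amplitude and frequency; the values below
# are arbitrary illustration values, not fitted parameters from the study.

def damage(t_min: float, sigma1_mpa: float, alpha: float) -> float:
    """Damage factor D in [0, 1): D = 0 at t = 0 and D -> 1 as creep time grows."""
    return 1.0 - sigma1_mpa ** (-alpha * t_min)

for alpha in (1e-4, 5e-4):
    for sigma1 in (15, 25):
        d = damage(600.0, sigma1, alpha)   # damage after 600 min of creep
        print(f"alpha={alpha:g}, sigma1={sigma1} MPa -> D={d:.3f}")

# In the creep equations given below, E_M, eta_M and eta_N are degraded by (1 - D),
# which is how the disturbance-dependent damage enters the strain response.
```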
It can be concluded that the creep equations of the mudstone creep damage model are as follows: (1) When there is no disturbance (\(\Delta \sigma =0\), \(f=0\)): $$\varepsilon (t)=\frac{{\sigma }_{1}-{\sigma }_{3}}{{E}_{{\rm{M}}}}+\frac{{\sigma }_{1}-{\sigma }_{3}}{{\eta }_{{\rm{M}}}}t+\frac{{\sigma }_{1}-{\sigma }_{3}}{{E}_{{\rm{K}}}}(1-{e}^{\frac{-{E}_{{\rm{K}}}}{{\eta }_{{\rm{K}}}}t})\,{\sigma }_{1}-{\sigma }_{3} < {\sigma }_{S}$$ $$\varepsilon (t)=\frac{{\sigma }_{1}-{\sigma }_{3}}{{E}_{{\rm{M}}}}+\frac{{\sigma }_{1}-{\sigma }_{3}}{{\eta }_{{\rm{M}}}}t+\frac{{\sigma }_{1}-{\sigma }_{3}}{{E}_{{\rm{K}}}}(1-{e}^{\frac{-{E}_{{\rm{K}}}}{{\eta }_{{\rm{K}}}}t})+\frac{{\sigma }_{1}-{\sigma }_{3}-{\sigma }_{S}}{{\eta }_{{\rm{N}}}}{(t-{t}_{{\rm{N}}})}^{n}\,{\sigma }_{1}-{\sigma }_{3}\ge {\sigma }_{S}$$ (2) When there is disturbance: $$\varepsilon (t)=\frac{{\sigma }_{1}-{\sigma }_{3}}{{E}_{{\rm{M}}}(1-D)}+\frac{{\sigma }_{1}-{\sigma }_{3}}{{\eta }_{{\rm{M}}}(1-D)}t+\frac{{\sigma }_{1}-{\sigma }_{3}}{{E}_{{\rm{K}}}}(1-{e}^{\frac{-{E}_{{\rm{K}}}}{{\eta }_{{\rm{K}}}(1-D)}t})\,{\sigma }_{1}-{\sigma }_{3} < {\sigma }_{S}$$ $$\varepsilon (t)=\frac{{\sigma }_{1}-{\sigma }_{3}}{{E}_{{\rm{M}}}(1-D)}+\frac{{\sigma }_{1}-{\sigma }_{3}}{{\eta }_{{\rm{M}}}(1-D)}t+\frac{{\sigma }_{1}-{\sigma }_{3}}{{E}_{{\rm{K}}}}(1-{e}^{\frac{-{E}_{{\rm{K}}}}{{\eta }_{{\rm{K}}}(1-D)}t})+\frac{{\sigma }_{1}-{\sigma }_{3}-{\sigma }_{S}}{{\eta }_{{\rm{N}}}(1-D)}{(t-{t}_{{\rm{N}}})}^{n}\,{\sigma }_{1}-{\sigma }_{3}\ge {\sigma }_{S}$$ Creep damage model verification To verify the applicability of the perturbation creep damage model, according to the perturbation creep test results and creep curve analysis, this study chooses the model-based search (PS) improved nonlinear least squares method for model identification and parameter calculation15. The PS-based least square method takes the objective function of the conventional least square method as the objective function. The PS optimization method is used to optimize the parameters so that the objective function achieves the required accuracy and avoids the only way of the conventional least square method—solving linear equation groups. The solution mechanism is fundamentally changed, which avoids the difficulty of initial value selection and makes it convenient to select the rheological model. And the fitting curve has very high accuracy. In this study, a disturbance of 4.8 MPa and frequency of 3 Hz are obtained as creep data for a disturbance amplitude of 3.2 MPa and disturbance frequency of 5 Hz; these values are fitted and analyzed for a confining pressure of 3 MPa. The experimental values and fitting curves are shown in Fig. 14. To further reflect the influence of different disturbance amplitudes and frequencies on creep damage, the creep data under different disturbance conditions for a confining pressure of 3 MPa and axial pressure of 20 MPa is fitted and analyzed, and specific creep fitting is carried out. The parameter values are listed in Table 2. Comparison between test date and model fitting. Table 2 Model parameters of mudstone disturbance creep damage. Rock creep undergoes transient deformation and decay, steady, and accelerated creep. Disturbance is an important factor affecting the creep properties of rocks, and different disturbance conditions have different effects at each creep stage. Under the same disturbance condition, with the increase in axial pressure, the instantaneous deformation, creep decay time, and steady creep rate of mudstone vary exponentially. 
Under the same axial pressure, as the disturbance frequency and the disturbance amplitude increases, the instantaneous deformation, creep decay time, and steady creep rate increase linearly. Further, the influence of the disturbance amplitude on instantaneous deformation and steady creep rate is more significant. The disturbance amplitude reduces the threshold stress of the rock entering the creep. The deformation of the rock accelerates creep and increases with the disturbance amplitude and frequency. The acceleration creep is not evident with the increase in the disturbance frequency. However, as the amplitude of the disturbance increases, the creep time decreases sharply. A damage variable considering the disturbance amplitude and the disturbance frequency is introduced. A nonlinear damage creep model of rock based on Burger's model is established. The required parameters are identified and calculated, and the rationality and applicability of the model are verified. The data used to support the findings of this study are available from the corresponding author upon request. Heping, X. I. E., Feng, G. A. O. & Yang, J. U. Research and Development of Rock Mechanics in Deep Ground Engineering [J]. Chinese Journal of Rock Mechanics and Engineering 34, 2162–2178, https://doi.org/10.13722/j.cnki.jrme.2015.1369 (2015). Šancer, J., Štrejbar, M. & Maleňáková, A. Effects of cyclic loading on the rheological properties of sandstones[J]. Open Geosciences, 3, https://doi.org/10.2478/s13533-011-0020-8 (2011). Bagde, M. N. & Petroš, V. Fatigue properties of intact sandstone samples subjected to dynamic uniaxial cyclical loading. Int. J. Rock Mech. Min. 42, 237–250, https://doi.org/10.1016/j.ijrmms.2004.08.008 (2005). Bagde, M. N. & Petroš, V. The effect of machine behaviour and mechanical properties of intact sandstoneunder static and dynamic uniaxial cyclic loading. Rock Mech. Rock Eng. 38, 59–67, https://doi.org/10.1007/s00603-004-0038-z (2005). Yanfa, G. A. O. et al. A RHEOLOGICAL Test of sandstone with pertubation effect and its constitutive relationship study [J]. Chinese Journal of Rock Mechanics and Engineering 27, 3180–3185 (2008). Bo, W. A. N. G. et al. Axial load test study on the perturbation properties of rock rheology[J]. Journal of experimental mechanics 42, 1443–1450, https://doi.org/10.13225/j.cnki.jccs.2017.0286 (2017). Xihai, C. U. I. et al. Experimental study in rheological regularity and constitutive relationship of rock under disturbing loads [J]. Chinese Journal of Rock Mechanics and Engineering 26, 1875–1881, https://doi.org/10.3321/j.issn:1000-6915.2007.09.020 (2007). Dazhao, S. et al. Test study of the perturbation effect of coal measures rocks damage failure[J]. Journal of China University of Mining& Technology 40, 530–535 (2011). Chengzhi, P. U. et al. Variable paramenters nonlinear creep damage model of rock with considerationg of aging, damage and deterioration[J]. Engineering mechanics 34, 17–27 (2017). DOI: CNKI:SUN:GCLX.0.2017-06-005. Qihu, W. et al. A creep constitutive model of rock considering initial damage and creep damage[J]. Rock and Soil Mechanics 36(Supp1), 57–62 (2016). YANG Shengqi. Study on rheological mechanical properties of rock and its engineering applications [D]. Nan Jing:HoHaiUnviersiyty (2006). Kachanov, M. Effective elastic properties of cracked solids: critical review of some basic concepts[J]. Applied Mechanics Review 45, 305–336, https://doi.org/10.1115/1.3119761 (1992). Weiya, X. U. et al. 
Study on creep damage constitutive relation of creenschist specimen[J]. Chinese Journal of Rock Mechanics and Engineering (S1), 3093–3097, https://doi.org/10.3321/j.issn:1000-6915.2006.z1.077 (2006). Chen Luwang, L. I. et al. Further Development and Application of a Creep Damage Model for Water-Bearing Rocks[J]. Chinese Journal of Solid Mechanics 39, 642–651, https://doi.org/10.19636/j.cnki.cjsm42-1250/o3.2018.018 (2018). Junguang, W., Bing, L. & Mi, T. Study of Creep Characteristics Produced by Nonlinear Damage of Oil Shale in Hydrous State[J]. Journal of experimental mechanics 29, 112–118, https://doi.org/10.7520/1001-4888-13-116 (2014). This study was financially supported by the National Key Research and Development Program of China(no. 2016YFC0600704), the National key research and development plan project (no. 51404130), the Liaoning Provincial Natural Science Foundation of China (no. 20180550708). School of Mechanics & Engineering, Liaoning Technical University, Fuxin, Liaoning, China, 123000 Jun-guang Wang, Qing-lin Sun, Bing Liang, Peng-jin Yang & Qing-rong Yu Jun-guang Wang Qing-lin Sun Bing Liang Peng-jin Yang Qing-rong Yu Jun-guang Wang, Qing-lin Sun, Bing Liang, Peng-jin Yang and Qing-rong Yu contributed equally to this work. Jun-guang Wang and Qing-lin Sun conceived the experiments, Qing-lin Sun, Peng-jin Yang and Qing-rong Yu conducted the experiments, Jun-guang Wang and Bing Liang analysed the results. All authors contributed writing and reviewing the manuscript. Correspondence to Jun-guang Wang. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Wang, Jg., Sun, Ql., Liang, B. et al. Mudstone creep experiment and nonlinear damage model study under cyclic disturbance load. Sci Rep 10, 9305 (2020). https://doi.org/10.1038/s41598-020-66245-w DOI: https://doi.org/10.1038/s41598-020-66245-w
Repositioning drugs by targeting network modules: a Parkinson's disease case study Volume 18 Supplement 14 Proceedings of the 14th Annual MCBIOS Conference Zongliang Yue1,2, Itika Arora2, Eric Y. Zhang2, Vincent Laufer3, S. Louis Bridges3 & Jake Y. Chen1,2,4 BMC Bioinformatics volume 18, Article number: 532 (2017) Cite this article Much effort has been devoted to the discovery of specific mechanisms between drugs and single targets to date. However, as biological systems maintain homeostasis at the level of functional networks robustly controlling the internal environment, such networks commonly contain multiple redundant mechanisms designed to counteract loss or perturbation of a single member of the network. As such, investigation of therapeutics that target dysregulated pathways or processes, rather than single targets, may identify agents that function at a level of the biological organization more relevant to the pathology of complex diseases such as Parkinson's Disease (PD). Genome-wide association studies (GWAS) in PD have identified common variants underlying disease susceptibility, while gene expression microarray data provide genome-wide transcriptional profiles. These genomic studies can illustrate upstream perturbations causing the dysfunction in signaling pathways and downstream biochemical mechanisms leading to the PD phenotype. We hypothesize that drugs acting at the level of a gene expression module specific to PD can overcome the lack of efficacy associated with targeting a single gene in polygenic diseases. Thus, this approach represents a promising new direction for module-based drug discovery in human diseases such as PD. We built a framework that integrates GWAS data with gene co-expression modules from tissues representing three brain regions—the frontal gyrus, the lateral substantia, and the medial substantia in PD patients. Using weighted gene correlation network analysis (WGCNA) software package in R, we conducted enrichment analysis of data from a GWAS of PD. This led to the identification of two over-represented PD-specific gene co-expression network modules: the Brown Module (Br) containing 449 genes and the Turquoise module (T) containing 905 genes. Further enrichment analysis identified four functional pathways within the Br module (cellular respiration, intracellular transport, energy coupled proton transport against the electrochemical gradient, and microtubule-based movement), and one functional pathway within the T module (M-phase). Next, we utilized drug-protein regulatory relationship databases (DMAP) and developed a Drug Effect Sum Score (DESS) to evaluate all candidate drugs that might restore gene expression to normal level across the Br and T modules. Among the drugs with the 12 highest DESS scores, 5 had been reported as potential treatments for PD and 6 hold potential repositioning applications. In this study, we present a systems pharmacology framework which draws on genetic data from GWAS and gene expression microarray data to reposition drugs for PD. Our innovative approach integrates gene co-expression modules with biomolecular interaction network analysis to identify network modules critical to the PD pathway and disease mechanism. We quantify the positive effects of drugs in a DESS score that is based on known drug-target activity profiles. 
Our results illustrate that this modular approach is promising for repositioning drugs for use in polygenic diseases such as PD, and is capable of addressing challenges of the hindered gene target in drug repositioning approaches to date. Parkinson's Disease (PD) is a disorder characterized by depletion of dopamine in the basal ganglia, including the substantia nigra. While the exact etiology of PD is unknown, major advances have been made in understanding underlying disease mechanisms through technologies in genetics, transcriptomics, epigenetics, proteomics and imaging [1]. These advances have increased recognition of the heterogeneity and etiological complexity of PD as a disease. Nevertheless, there is hope for broad-spectrum therapeutic intervention, as even distinct disease subtypes implicate genes intersecting in common pathways [2]. Recently described "Network Medicine" [3] approaches offer a platform to study the molecular complexity of a particular disease systematically. These approaches are well-suited to the identification of disease modules and pathways as well as the molecular relationships between apparently distinct phenotypes [4]. Despite progress towards the understanding of genetic factors that contribute to the etiology of PD, current treatments are aimed at clinically apparent PD — after patients are suffering from the onset of neurodegeneration. While, preventative drugs aim at treatment before or during the pre-clinical stage of PD are lacking, as are curative drugs aimed at the underlying molecular mechanisms have had limited success [5]. The associations discovered in GWAS of PD allow for the identification of disease-specific modules playing a role in triggering the disease. Similarly, gene expression microarray data provides a gross overview of gene expression changes that are associated with diseases like PD. However, future studies of complex diseases will need to move beyond the analysis of single genes and include analysis of interactions between genes or proteins, in order to better understand how functional pathways and networks become dysfunctional [6]. For instance, network-based approaches have already been used to examine various disease molecular mechanisms, e.g., type-2 diabetes [7], cancer [8], and neuronal degeneration specifically [9]. Bioinformatics techniques to characterize network topology and functional modules have been developed recently for functional genomics [10]. The identification of disease modules involving specific mutated genes and the molecular pathways to which they belong will provide new targets for drug development. GWAS and whole exome profiling data are combined in systems biology to illustrate upstream perturbations causing dysfunction in pathways and mechanisms leading to the disease phenotype. Therefore, we introduce the approach of discovering disease-specific modules to reveal the etiology of PD. In this study, we hypothesize that study of PD GWAS [11] and co-expression data [12] will enable identification of disease-specific modules caused by a variation in multiple components of a functional pathway or network. Thus, we propose using a network-based approach called Weighted Gene Co-expression Network Analysis (WGCNA) [13] to detect modules of co-expressed gene networks associated with PD. We then integrate these co-expression clusters with gene regulatory network information and perform enrichment analysis to find PD-specific modules. 
This method, in combination with functional enrichment and network topology measures, will be used to identify potential targets. This is done by selecting drugs that reverse the altered gene expression signatures found within the PD modules. PD modules which show significant perturbation is identified by comparing global co-expression networks in PD to regulatory networks identified using GWAS 'hits'. After selecting the PD-specific modules for further analysis, we find significantly enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways and Gene Ontology terms associated with PD modules. Afterward, we use knowledge of these functional pathways as the basis for "modular drug discovery"—the discovery of drugs that act on many nodes within the disease-specific module. This is accomplished through our innovative Drug Effect Sum Score (DESS) system and then cross-validated through rigorous analysis of published literature. An overview of the framework The pipeline is divided into two color-coded sections as shown in Fig. 1. The first section (colored red) contains steps for construction of PD modules, and the second section (colored green) contains steps to perform modular drug repositioning. The construction of PD modules was carried out in 6 steps: 1) We filtered genes with significant expression changes between the case vs. control samples (with the False Discovery Rate set to 0.05), using a Bayesian inference technique available in the limma package in R [14]. 2). We performed WGCNA, which yields clusters (modules) of highly correlated genes having significant changes across three tissues. 3) We compiled PD-specific GWAS candidate genes and performed one layer extension to generate a gene regulatory network by retrieving the gene-gene regulatory relationship from the PAGER database [15]. 4) We performed enrichment analysis by finding overlapping genes shared between co-expression clusters and GWAS candidate genes, extracting these enriched clusters as PD-specific modules. 5) We constructed PD-specific network module by retrieving the gene-gene interactions for the genes in PD-specific modules from the HAPPI-2 database [16]. 6) Finally, we annotated PD-specific modules with functional groups using ClueGO [17]. The pipeline for mining the PD-specific gene modules and for ranking candidate drugs for drug repositioning. The left frames are the source of the input data, the middle frames are the processes of data, and the left frames are the output of the process. The red frames relate to mining PD-specific modules, while those in green relate to the drug repositioning process The Drug repositioning section (green) was comprised of four steps. First, we calculated a P-score, which is an intuitive pharmacology score that combines the probability for each interaction and the weight of the drug-target interaction using data from the DMAP database (see details in Methods). Second, we calculated the RP-score, which is a measure of Relevant Protein importance in the PD modules network (see details in Methods). Third, we calculated the Drug Effect Score (DES) of each module. Finally, the DESS was calculated across all modules. Using these steps, we obtained a ranked "modular drug list" consisting of candidate treatments based on PD-specific modules. 
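As a concrete illustration of step 1 of the pipeline, the sketch below approximates the differential-expression filter in Python with an ordinary Welch t-test followed by Benjamini-Hochberg correction. The published analysis used limma's moderated eBayes statistics in R, so this is only a simplified stand-in, and the matrix dimensions and case/control labels are hypothetical.

```python
import numpy as np
from scipy import stats

# Simplified stand-in for pipeline step 1: keep genes whose expression differs
# between PD and control samples at FDR <= 0.05. The published analysis used
# limma's moderated (eBayes) t-test in R; a plain Welch t-test with
# Benjamini-Hochberg correction is used here purely for illustration.

rng = np.random.default_rng(0)
expr = rng.normal(size=(12995, 47))                 # genes x samples (synthetic values)
is_case = np.array([True] * 29 + [False] * 18)      # hypothetical case/control labels

t_stat, p_val = stats.ttest_ind(expr[:, is_case], expr[:, ~is_case],
                                axis=1, equal_var=False)

def benjamini_hochberg(p, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at the given FDR."""
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    keep = np.zeros(m, dtype=bool)
    if passed.any():
        keep[order[: passed.nonzero()[0].max() + 1]] = True
    return keep

significant = benjamini_hochberg(p_val)
print(significant.sum(), "genes pass the FDR filter in this synthetic example")
```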
Preparation of PD-specific omics, gene-gene interaction, and drug-protein regulation data Datasets from whole genome expression transcriptional profiling (on the GSE8397-GPL-96 array) were retrieved from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE8397). In the gene expression profile, 47 samples from PD patients and controls were used in three brain regions: the Frontal Gyrus (FG: 8 tissue samples), Lateral Substantia (LS: 16 tissue samples) and Medial Substantia (MS: 23 tissue samples) [12]. SNP data was obtained from a PD paper [11], in which a GWAS was carried out. We mapped probe IDs to gene symbols using the NCBI microarray toolkit and assigned gene expression scores by the averaging probe expression values after adjustment and trimming of background noises by using the standard deviation of the mean values from all samples. Since the standard deviation of the mean values were small enough (0.02 in this study), no samples had been trimmed. After performing probe transformation and synonymous gene merging on data from the Affymetrix Human Genome U133A Array [HG-U133A] and Affymetrix Human Genome U133B Array [HG-U133B], 12,995 genes were mapped by 22,283 probes in the merged matrix from the two arrays. In the prior study, 54 genes were reported as having had significant enrichment [11] in GWAS. The PAGER database [15] was used to obtain gene-gene regulatory relationships (22,127 pairs curated from 645,385 in total). The HAPPI-2 database [16] was used to obtain protein-protein interaction (PPI) data. This integrated protein interaction database comprehensively integrates weighted human protein-protein interaction data from a wide variety of protein-protein database sources. After mapping the proteins to genes using UniProt IDs, we obtained 2,658,799 gene-gene interactions. The drug-target regulatory relationships data was from the DMAP database [18], which consisted of curated 438,004 drug-protein regulatory relationships. PD-specific network module identifications Whole-genome expression data on 12,995 genes was filtered down to 2895 candidate genes, based on a multi-group empirical Bayesian (eBayes) moderated t-test with p-value ≤ 0.05. Next, we performed WGCNA to cluster these genes based on their co-expression. To do this, we first performed our pipeline steps to identify excessive missing values and outlier microarray samples. The detection of the outlier was performed by trimming the hierarchy tree of average Euclidean distance method using cutoff tree height of 100. Second, we chose an exponent for soft thresholding based on analysis of network topology, to further reduce noise and amplify stronger connections in the scale-free topological model. Third, we performed one-step network construction and module detection using hierarchy tree of unsigned TOM-based dissimilarity distance. Fourth, we visualized the genes in modules in a hierarchy tree based on average linkage clustering [13]. Fifth, we analyzed the cluster (Principal Components) and sample (expression data) correlation using Pearson correlation and asymptotic p-value. An initial regulatory relational network was seeded using the 54 candidate genes identified by Moran et al. and expanded using the gene-gene regulatory relationship data. The resulting expanded regulatory relational network consists of 288 genes and 1983 gene-gene regulatory relationships. 
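The one-layer extension of the GWAS seed genes can be expressed compactly. The sketch below (Python) takes a seed gene list and a table of directed regulatory pairs, such as those retrieved from PAGER, and returns the seeds together with their immediate regulatory neighbours; the gene symbols and the input format are assumed for illustration.

```python
# One-layer neighborhood expansion of GWAS seed genes over a regulatory network.
# `pairs` is an iterable of (regulator, target) gene-symbol tuples, e.g. parsed
# from a PAGER download; the exact file format is assumed for illustration.

def expand_one_layer(seed_genes, pairs):
    """Return (genes, edges): the seeds plus their direct regulatory neighbours,
    and the regulatory relationships that touch at least one seed gene."""
    seeds = set(seed_genes)
    genes, edges = set(seeds), []
    for regulator, target in pairs:
        if regulator in seeds or target in seeds:
            genes.update((regulator, target))
            edges.append((regulator, target))
    return genes, edges

# Hypothetical toy example (gene symbols chosen arbitrarily):
pairs = [("SNCA", "MAPT"), ("LRRK2", "SNCA"), ("GBA", "CTSD"), ("TP53", "MDM2")]
genes, edges = expand_one_layer(["SNCA", "GBA"], pairs)
print(sorted(genes))   # seeds plus their one-step regulatory neighbours
print(edges)
```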
Subsequently, we performed the enrichment testing of the genes in the expanded regulatory relational network to measure enrichment in the co-expression clusters using the hypergeometric test and assigned the f(pts) score using the formula: $$ \mathrm{f}\left(\mathrm{pts}\right)=\mathsf{\operatorname{sign}}\left(\frac{\mathrm{K}}{\mathrm{N}}-\frac{\mathrm{k}}{\mathrm{n}}\right)\times \boldsymbol{\mathsf{\log}}\left(\frac{\left(\genfrac{}{}{0pt}{}{\mathrm{K}}{\mathrm{k}}\right)\left(\genfrac{}{}{0pt}{}{\mathrm{N}-\mathrm{K}}{\mathrm{n}-\mathrm{k}}\right)}{\left(\genfrac{}{}{0pt}{}{\mathrm{N}}{\mathrm{n}}\right)}\right) $$ where N is the total number of the genes in co-expression clusters, K is the number of overlapping genes between co-expression clusters and genetic candidate genes, n is the number of the genes in the co-expression cluster selected, and k is the overlap genes between selected co-expression cluster and the genetic candidate genes. A positive value for f(pts) indicates the over-representation in the expanded regulatory relational network. PD-specific modules were defined as the over-represented co-expression clusters. We then generated the network of PD-specific modules by applying high-confidence gene-gene interactions (as indicated by 3-star or above in the HAPPI-2 database). In the final step, we performed ClueGO analysis to elucidate mechanisms involved in the PD-specific modules. We applied Bonferroni correction and selected those with post-correction p-value ≤ 0.05 and Kappa score ≥ 0.5 (moderate network strength or stronger) [17]. Modular drug repositioning DESS was calculated using the P-score from the DMAP database, the RP-score from the PD modules, and the module enrichment score f(pts) from the PD modules. We calculated a P-score, an intuitive pharmacology score in the DMAP database, via a probability-weighted summary of all the evidence mined from literature or other drug target databases to determine an overall mechanism of "edge action" for each specific chemical-protein interaction using conf(d,p): $$ \mathsf{conf}\left(d,p\right)={\sum}_{i=1}^N\left({prob}_i\left(d,p\right)\times {\operatorname{sign}}_i\right) $$ where d and p are specific drugs and proteins, respectively. N is the number of types of evidence for the interaction between d and p. prob i (d, p) is confidence in each type of evidence i with a value within the range of [0,1]. sign i has a value of 1 if the evidence i suggests activation and a value of −1 if the evidence i suggests inhibition. Afterwards, to rank each interaction, we used the algorithm in HAPPI [19] by assigning a weight(p) for the proteins interacting with each drug using the following formula adapted from [20]. $$ weight(p)=k\times \mathit{\ln}\left({\sum}_{p,q\in NET} conf\left(p,q\right)\right)-\mathit{\ln}\left({\sum}_{p,q\in NET}N\left(p,q\right)\right) $$ Here, p and q are proteins in the protein interaction network, k is an empirical constant (k = 2 in this study), conf(p,q) is the confidence score of interaction between protein p and q assigned by HAPPI-2, and N(p, q) holds the value of 1 if protein p interacts with q or the value of 0 if protein p does not interact with q. 
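A minimal sketch of Eqs. (2) and (3) is given below (Python). The evidence records and interaction confidences are illustrative placeholders; in the study they come from DMAP and HAPPI-2, and the sums in Eq. (3) are interpreted here as running over the interactions that involve the protein of interest.

```python
import math

# Sketch of Eq. (2) (drug-protein confidence) and Eq. (3) (protein weight).
# Evidence items and interaction confidences are illustrative placeholders.

def conf_drug_protein(evidence):
    """Eq. (2): sum of prob_i * sign_i over the evidence for one drug-protein pair.
    `evidence` is a list of (probability in [0,1], +1 activation / -1 inhibition)."""
    return sum(prob * sign for prob, sign in evidence)

def protein_weight(protein, interactions, k=2.0):
    """Eq. (3): k*ln(sum of interaction confidences) - ln(number of interactions),
    taken here over the interactions that involve `protein`."""
    confs = [c for (p, q, c) in interactions if protein in (p, q)]
    if not confs:
        return 0.0          # isolated protein: no network support (assumed convention)
    return k * math.log(sum(confs)) - math.log(len(confs))

evidence = [(0.9, +1), (0.6, -1)]                          # two conflicting evidence items
interactions = [("P1", "P2", 0.8), ("P1", "P3", 0.6), ("P2", "P3", 0.9)]
print(conf_drug_protein(evidence))                          # 0.3 -> weak net activation
print(protein_weight("P1", interactions))                   # network importance of P1
```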
Thus, the foregoing probabilities and weights for each interaction were combined into P-score(d,p), which includes both information on each drug's effects on interacting proteins and the importance of the protein in the protein-protein interaction network: $$ P- score\left(d,p\right)= conf\left(p,q\right)\times weight(p) $$ We applied each gene's RP-score calculation in a manner similar to formula (3) in PD-specific modules using the formula: $$ RP- score={e}^{k\times \ln \left({\sum}_{p,q\in ModuleNET} conf\left(p,q\right)\right)-\ln \left({\sum}_{p,q\in ModuleNET}N\left(p,q\right)\right)} $$ where p and q are the indexes of proteins from the selected module, k is a constant (k = 2 in this study). The term conf(p, q) is the interaction confidence score assigned by HAPPI-2, where conf(p, q) ∈ [0,1]. Further, we calculated a DES(d,m) by using the drug weight score and the module gene RP-score according to the formula: $$ DES\left(d,m\right)={\sum}_{i\in ModT\arg et}^n\left[{\operatorname{sign}}_d\times {\log}_2\left(p-\mathrm{score}\left(d,i\right)\right)\times {\log}_2\left( RP-\mathrm{score}\left(d,i\right)\right)\times {p}_i\right] $$ where m is the module, i is the index of the proteins in the PD-specific module, sign d is the direction of the effect drug d on protein expression, and P-score(d, i) is the pharmacology score of the drug d to target i. p i is the priority score which indicates the source of the candidate. We assigned a value of p i=1/20 when the candidates were from GWAS, p i=1/21 when the candidates were from regulatory one-layer extension of GWAS, and p i=1/22 when the other candidates were from the same module. The DESS(d,M) was calculated by integrating all PD-specific modules according to the expression: $$ DESS\left(d,M\right)={\sum}_{m\epsilon Module}^M DES\left(d,m\right)\times {f}_m(pts) $$ where M is the module set of module m, f m (pts) is probability mass function (pmf) transform score of the PD-specific module m. An example of how the DESS score is calculated for a drug is shown in Fig. 2. Based on the total DESS, modular drugs (drugs selected based on their predicted effect at a module level) and their targets in the modules were collected. We pulled out modular drugs or drugs selected based on their predicted effect at a module level alongside their associated targets. Finally, we applied a single regulatory layer expansion and retrieved drug-target regulatory relationships (DMAP database) and protein-protein interactions (HAPPI-2 database) to generate the "extended modular drug-target network". An example of calculating the DESS for PD-specific gene expression modules. Green indicates increased gene expression, red indicates a decrease. Note that the drug action acts to reverse the direction of gene expression found in the pathological state. Exp. stands for expression value, RP-score stands for the protein relevant score, p. stands for the priority score, P-score stands for the intuitive pharmacology score, DES stands for the Drug Effect Score, module f(ptx) stands for the enrichment score, and the DESS stands for Drug Effect Sum Score Construction of PD genetic association networks The PD genetic association network was constructed using the neighborhood extension method. Starting from the original 54 genes identified using GWAS (described in Materials and Methods, above), we obtained PD genetic association networks consisting of a total of 288 genes and 1983 regulatory relationships. 
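Returning briefly to the scoring scheme defined above, the sketch below (Python) shows how per-target quantities could be assembled into DES (Eq. 6) and DESS (Eq. 7). All inputs are illustrative placeholders: in the study the P-scores come from DMAP, the RP-scores from the module interaction networks, the priority weights from the candidate source (GWAS hit, one-layer extension, or other module member), and the f(pts) values from the module enrichment test (0.94 for the T module and 0.86 for the Br module).

```python
import math

# Assemble DES (Eq. 6) and DESS (Eq. 7) from per-target quantities.
# Each target record: (sign_d, P_score, RP_score, priority), where priority is
# 1/2**0 for GWAS candidates, 1/2**1 for one-layer extensions, 1/2**2 otherwise.
# All numbers below are illustrative placeholders.

def drug_effect_score(targets):
    """Eq. (6): sum over module targets of sign * log2(P-score) * log2(RP-score) * priority."""
    return sum(sign * math.log2(p_score) * math.log2(rp_score) * priority
               for sign, p_score, rp_score, priority in targets)

def drug_effect_sum_score(module_scores):
    """Eq. (7): sum over modules of DES(d, m) * f_m(pts)."""
    return sum(des * f_pts for des, f_pts in module_scores)

br_targets = [(+1, 4.0, 8.0, 1 / 2**0), (-1, 2.0, 16.0, 1 / 2**1)]   # hypothetical targets
t_targets = [(+1, 8.0, 4.0, 1 / 2**2)]
des_br = drug_effect_score(br_targets)
des_t = drug_effect_score(t_targets)
print(drug_effect_sum_score([(des_br, 0.86), (des_t, 0.94)]))        # f(pts) as reported
```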
The candidates of significant expression change (eBayes moderates t-test p-value ≤ 0.05) are colored in the PD genetic association networks provided in Fig. 3. The expanded regulatory relational network generated. The color of the nodes indicates the direction of change of expression; red nodes indicate the up-regulated genes, while green nodes stand for the down-regulated genes. Nodes in gray were not assayed by our whole-genome transcriptional profiling. The color scale measures the expression changes accumulated from the three brain regions PD-specific network modules identified The details of the gene co-expression network construction with WGCNA have been previously described [13]. By applying the steps described above in Materials and Methods, 5 co-expression modules were identified. We color-coded these as the Brown (Br) module, the Yellow (Y) module, the Blue (Bl) module, the Green (G) module and the Turquoise (T) module, all of which are shown in Fig. 4. The number of genes in each module is as follows: the Br module containing 544 genes, the Y module containing 199 genes, the Bl module containing 821 genes, the G module containing 116 genes, and the Turquoise containing 1190 genes. The WGCNA analysis of the five co-expression modules - Brown (Br), Yellow (Y), Blue (Bl), Green (G), and Turquoise (T). The dendrogram illustrates the degree similarity using hierarchy tree of TOM-based dissimilarity distance in each module cluster, which forms the basis for subsequent functional pathway identification Enrichment analysis results of two PD-specific network modules Based on the enrichment analysis, we identified two PD-specific modules (the T module and the Br module) shown in Table 1 and Fig. 4. The genes in these modules as displayed in the dendrogram are grouped tightly enough to be susceptible to a modular drug (a drug that acts on many members of the PD-specific module rather than on one target). 2895 genes are included in the gene co-expression modules. 50 of these genes in the co-expression modules overlapped with genes identified from our analysis of genetic data. These 50 genes are distributed among the modules as follows: 21 in the T module, 10 in the Br module, 2 in the G module, 3 in the Y module, and 13 in the Bl module. Using the hypergeometric test, we identified two PD-specific modules (modules having positive f(pts), see Methods) the T module, which had f(pts) = 0.94 and the Br module, which had f(pts) = 0.86. Figure 5 illustrates the correlation of gene expression to case-control status. Specifically, the Pearson correlation coefficient for the expression level of the genes belonging to each module was reported for each sample. Overall, cases and controls are well discriminated by the gene expression signature of the genes in the module. For instance, in the Br module, control samples have a positive correlation with modular gene expression, while disease samples are negatively correlated with gene expression of genes found in each module. The remaining relationships are illustrated in Fig. 5. There are 2 comparisons (lateral control VS lateral case in both T-module and Br module) significantly different with p-value ≤ 0.001 in the student t-test. Table 1 The 5 co-expression module enrichment based on GWAS results Phenotypes corresponding to each module. The color scale indicates the Pearson correlation between the samples and the modules. The number in the brackets indicates the asymptotic P-value for each correlation. 
In sample names, the "Ctrl" indicates control samples and "Dis" indicates the disease samples. The direction of the correlation differs for case and control samples in each brain region, demonstrating that the gene modules differentiate them well. In student t-test, T module frontal gyrus case VS control is 0.06, latera; substantia case VS control is 6.3×10−4, medial substantia case VS control is 0.03, Br module frontal gyrus case VS control is 0.04, latera; substantia case VS control is 2.2×10−3, medial substantia case VS control is 1.2×10−3 The Br network module (Fig. 6) contains 449 genes and 2373 gene-gene interactions, of which 94 genes are up-regulated and 355 genes are down-regulated. The T network module contains 905 genes and 5156 gene-gene interactions, of which 221 genes are down-regulated and 684 genes are up-regulated. Global view of the protein-protein interaction network of the 2 modules. a. The Br module consists of 449 genes and 2373 gene-gene interactions. b. The T module consists of 905 genes and 5156 gene-gene interactions. The nodes in red color are up-regulated and the nodes in green color are down-regulated ClueGO analysis of PD-specific modules The ClueGO analysis of the Br modules identified 4 GO biological processes, which are shown in Fig. 7 and Table 2. These are cellular respiration, intracellular transport, energy-coupled proton transport, and microtubule-based movement. Furthermore, we identified two KEGG [21] pathways, "synaptic vesicle cycling" and "oxidative phosphorylation". The ClueGO analysis of the T module identified one GO biological process "M phase". Gene Ontology - biological processes (GO-BP) relating to each PD-specific module as identified by ClueGO analysis. a. The GO-BP and KEGG pathways associated with the Br module. b. The GO-BP associated with the T module Table 2 Gene Ontology - biological processes (GO-BP) relating to the two PD-specific modules Identifying drugs with predicted therapeutic effects on the Br and T modules We generated a ranked list of the drugs based on their DESS scores. While there were 1246 (1201 unique drugbankID) candidate drugs for drug repositioning that targeted one or more genes in the gene co-expression module in Additional file 1: Table S1, we selected only 12 (the top 1% according to DESS) candidate drugs as potential treatments (Fig. 8). The components of DESS and number of the drug targets for each drug in the T module and Br module are shown in Fig. 9. Furthermore, the drugs are listed in Table 3 and are discussed below. The Br and T modules' network diagrams for the extended network illustrating which disordered genes are stimulated and inhibited by these 12 drugs is provided in Fig. 10, Additional file 2: Table S2 and Additional file 3: Table S3. Distribution of Drug Effect Sum Score (DESS). The top 1% of the drugs were validated using the literature. The red line indicates the cutoff value of the DESS 1% drugs Stacked bar graph of DESS for highly ranked modular drugs. a. The top 12 most highly ranked modular drugs by DESS. The height of each T module and Br module stack corresponds to the Drug Effect Score (DES) score therapeutically modulated by each agent. b. The number of genes targeted by the most highly ranked modular drugs. The height of each T module and Br module stack corresponds to the drug targets therapeutically modulated by each agent Table 3 Compound IDs for the 12 most highly ranked modular drug candidates The extended modular drug-target network for the Br module (a) and the T module (b). 
The diamond nodes are drugs; circles are genes. The color of the nodes varies from green to red and indicates down-regulation or up-regulation situation of disordered genes, respectively. The color of the edges stands for the type of action. Red edges mean stimulation, green edges mean inhibition and gray edges means Protein-Protein Interactions. The node size represents the RP-score which indicates the relevance of the gene in the module calculated in Method Conclusions and discussion In this work, we present a framework that identified candidate drugs for repositioning based on analysis of GWAS and gene expression microarray data. Starting with genes identified through a standard GWAS, we extended the analysis to one-layer extension by gene-gene regulatory relationship and built an extended regulatory network. Significant results based on an enrichment analysis were then used to generate PD modules. We improved gene co-expression module cohesion by removing isolated or weakly connected genes. PD network modules were then further informed by the integration of data from Protein-Protein interaction databases. Using this approach, we initially identified over 1201 candidates for drug repurposing. We trimmed this to 12 modular drug candidates based on their DESS. There were three important characteristics of finding within these 12 modular drugs. First, they are noteworthy in that they target PD at the level of the gene co-expression module as opposed to a specific target. Second, most of the genes on the list belong to drug families, which should be expected if data relating to drug target efficacy are accurate and internally consistent. Third, there are general 4 drug families found (steroids and steroid derivatives, lipids and lipid-like molecules, phenylpropanoids and polyketides, and other small molecules), and each family of drugs identified has been previously studied in relation to neurodegenerative disease, suggesting the external validity of our findings as well. The top candidate drug was estradiol, a steroidal estrogen critical in the regulation of the menstrual cycle. It is currently used pharmaceutically in hormone replacement therapies for menopause and hypogonadism. Several studies support a role for the use of estradiol in PD. It has been shown to protect dopaminergic neurons in an MPP+ Parkinson's disease model [22], and a study of postmenopausal women found it to be associated with a reduced risk of PD in women [23]. Further, it is well-established that estrogen deprivation leads to the death of dopaminergic neurons. Of note, many clinical reports also suggest an anti-dopaminergic effect of estrogens on symptoms of PD. It is likely that the timing and dosage of estrogen influence the results in these discrepant findings. Our ninth, tenth and eleventh-ranked drugs (dienestrol, diethylstilbestrol, and methyltestosterone respectively) are isomers relating to diethylstilbestrol (also known as follidiene). Diethylstilbestrol is a synthetic non-steroidal estrogen previously used to treat menopausal and postmenopausal disorders. However, it is now known to have teratogenic and carcinogenic properties [24]. Although these compounds may be contraindicated for use in humans, their high prioritization might prompt us to look for similar compounds without carcinogenic side effects. Methyltestosterone, which had the tenth highest DESS, is a synthetic orally active androgenic-anabolic steroid with relatively high estrogenicity. 
Methyltestosterone is currently used to treat males with androgen deficiency. Interestingly, testosterone deficiency has previously been reported in patients with PD, and PD itself is seen more commonly in men than women [25]. However, clinical trials have shown no improvement in male PD patients when given exogenous testosterone therapy [26]. Finally, our sixth most highly ranked drug was genistein, an estrogen-like isoflavone compound found exclusively in legumes. Genistein is known to act as an angiogenesis inhibitor and was previously shown to have neuroprotective effects on dopaminergic neurons in mouse models of PD [27]. Resveratrol had the second highest DESS. It is a polyphenolic anti-oxidant stilbenoid compound found in food include the skin of grapes, blueberries, raspberries and mulberries, currently under preclinical investigation as a potential pharmaceutical treatment in treating early onset PD patients. Resveratrol was previously studied in a phase-II clinical trial for individuals with mild to moderate Alzheimer's disease and was found to reduce plasma levels of some AD biomarkers [28,29,30]. The third drug alitretinoin, fourth drug tretinoin, and fifth drug isotretinoin are most highly ranked candidates also belonging to a single family of compounds, retinoids. The first is retinoic acid, a retinoid morphogen crucial to the embryonic development of the anterior-posterior axis in vertebrates, as well as differentiation and maintenance of neural cell lineage. Currently, in-vivo animal studies suggest the possibility of therapeutic applications of retinoic acid for PD through nanoparticle delivery [31]. Isotretinoin, trademarked under the name Accutane, is prescribed as a treatment for severe acne vulgaris. Although isotretinoin is a known teratogen [32], it might be well-suited to treatment of PD given its typical later age of onset. Our seventh and eighth hits, Sirolimus (Rapamune) and Sirolimus (Perceiva), are again related. Perceiva is an ocular formulation of the macrolide compound sirolimus (commonly known as rapamycin) and was developed to treat neovascular age-related macular degeneration. Sirolimus is used for the treatment of Lymphangioleiomyomatosis, as well as in prevention of organ transplant rejection. Interestingly, sirolimus has been shown to improve cognitive deficits in mouse model of Alzheimer's Diseases through inhibition of the mTOR signaling pathway, a pathway which is thought to protect against neuronal death in mouse models of PD [33]. In addition to these twelve candidates, our ClueGO analysis suggests that investigation of two additional biological processes may be profitable. Our analysis of KEGG pathways in relation to the T module implicated mitochondrial respiration as a potential disease mechanism [34]. Interestingly, it has previously been reported that defects in mitochondrial respiration are etiologically related to the pathogenesis of PD. Thus, preservation and restoration of mitochondrial function may hold promise as a potential therapeutic intervention to halt the progression of dopaminergic neurodegeneration in PD. Secondly, in PD, neuronal cells undergo mitotic catastrophe and endoreduplication prior to cell death. It has previously been shown [35] that overexpression of DNA poly β was involved in the rotenone-mediated pathology of cellular and animal models of PD. In a cell culture model, increased levels of DNA poly β promoted rotenone-mediated endoreduplication. 
Selective injury to dopaminergic neurons by rotenone resulted in the upregulation of DNA poly β as the neuronal cell cycle was reactivated. In summary, we perform drug repositioning by integrating weighted drug-protein regulations on all genes, using our novel DESS to quantitate drug effects on entire co-expression networks. As biological systems use functional pathways and networks to maintain homeostasis, by selecting drugs that act at the level of a gene module we were able to target a level of the biological organization more relevant to the disease pathologies of complex disorders such as PD. Although this approach is still in its infancy, our results suggest that it may circumvent issues associated with single-gene targeting in polygenic diseases like PD. Our analysis has identified several families of related drug candidates, all of which have previously been investigated in relation to PD and other neurodegenerative diseases. As such, we believe our framework gives internally and externally valid results and may be extended to provide complementary insights to other disease-module findings and drug-repositioning projects. The significance of our work should be considered in light of its limitations. First, several of the classes of drugs mentioned have already studied in relation to PD and related phenotypes, as described above. However, members of the families of drugs identified have not resulted in a clinically efficacious treatment for PD to date. As such, a future direction for this line of research is to include a mechanism to account for both additive and potentially non-additive interaction effects between drugs on a disease-specific module. In addition, many of the most highly ranked modular drugs we identified show much promise, but have known adverse effects. Future research will include a method of incorporation of drug side effects into the final priority score. Jankovic J, Sherer T. The future of research in Parkinson disease. JAMA Neurol. 2014;71(11):1351–2. Dawson TM, Dawson VL. Molecular pathways of neurodegeneration in Parkinson's disease. Science. 2003;302(5646):819–22. Chen JY, Piquette-Miller M, Smith BP. Network medicine: finding the links to personalized therapy. Clin Pharmacol Ther. 2013;94(6):613–6. Barabasi AL, Gulbahce N, Loscalzo J. Network medicine: a network-based approach to human disease. Nat Rev Genet. 2011;12(1):56–68. Dexter DT, Jenner P. Parkinson disease: from pathology to molecular disease mechanisms. Free Radic Biol Med. 2013;62:132–44. Fujita KA, Ostaszewski M, Matsuoka Y, Ghosh S, Glaab E, Trefois C, Crespo I, Perumal TM, Jurkowski W, Antony PM, et al. Integrating pathways of Parkinson's disease in a molecular interaction map. Mol Neurobiol. 2014;49(1):88–102. Hale PJ, Lopez-Yunez AM, Chen JY: Genome-wide meta-analysis of genetic susceptible genes for type 2 diabetes. BMC Syst Biol 2012, 6 Suppl 3:S16. Zhang F, Chen JY: Breast cancer subtyping from plasma proteins. BMC Med Genomics 2013, 6 Suppl 1:S6. Santiago JA, Potashkin JA: System-based approaches to decode the molecular links in Parkinson's disease and diabetes. Neurobiol Dis 2014, 72 Pt A:84–91. Wu X, Hasan MA, Chen JY. Pathway and network analysis in proteomics. J Theor Biol. 2014; Lin MK, Farrer MJ. Genetics and genomics of Parkinson's disease. Genome Med. 2014;6(6):48. Moran LB, Duke DC, Deprez M, Dexter DT, Pearce RK, Graeber MB. Whole genome expression profiling of the medial and lateral substantia nigra in Parkinson's disease. Neurogenetics. 2006;7(1):1–11. 
Langfelder P, Horvath S. WGCNA: an R package for weighted correlation network analysis. BMC Bioinformatics. 2008;9:559. Ritchie ME, Phipson B, Wu D, Hu Y, Law CW, Shi W, Smyth GK. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015;43(7):e47. Yue Z, Kshirsagar MM, Nguyen T, Suphavilai C, Neylon MT, Zhu L, Ratliff T, Chen JY. PAGER: constructing PAGs and new PAG-PAG relationships for network biology. Bioinformatics. 2015;31(12):i250–7. Chen JY, Pandey R, Nguyen TM. HAPPI-2: a comprehensive and high-quality map of human annotated and predicted protein interactions. BMC Genomics. 2017;18(1):182. Bindea G, Mlecnik B, Hackl H, Charoentong P, Tosolini M, Kirilovsky A, Fridman WH, Pages F, Trajanoski Z, Galon J. ClueGO: a Cytoscape plug-in to decipher functionally grouped gene ontology and pathway annotation networks. Bioinformatics. 2009;25(8):1091–3. Huang H, Nguyen T, Ibrahim S, Shantharam S, Yue Z, Chen JY: DMAP: a connectivity map database to enable identification of novel drug repositioning candidates. BMC bioinformatics 2015, 16 Suppl 13:S4. Chen JY, Mamidipalli S, Huan T: HAPPI: an online database of comprehensive human annotated and predicted protein interactions. BMC Genomics 2009, 10 Suppl 1:S16. Chen JY, Shen C, Sivachenko AY. Mining Alzheimer disease relevant proteins from integrated protein interactome data. Pac Symp Biocomput. 2006:367–78. Kanehisa M, Sato Y, Kawashima M, Furumichi M, Tanabe M. KEGG as a reference resource for gene and protein annotation. Nucleic Acids Res. 2016;44(D1):D457–62. Sawada H, Ibi M, Kihara T, Honda K, Nakamizo T, Kanki R, Nakanishi M, Sakka N, Akaike A, Shimohama S. Estradiol protects dopaminergic neurons in a MPP+Parkinson's disease model. Neuropharmacology. 2002;42(8):1056–64. Currie LJ, Harrison MB, Trugman JM, Bennett JP, Wooten GF. Postmenopausal estrogen use affects risk for Parkinson disease. Arch Neurol. 2004;61(6):886–8. O'Reilly EJ, Mirzaei F, Forman MR, Ascherio A. Diethylstilbestrol exposure in utero and depression in women. Am J Epidemiol. 2010;171(8):876–82. Harman SM, Tsitouras PD. Reproductive hormones in aging men. I. Measurement of sex steroids, basal luteinizing hormone, and Leydig cell response to human chorionic gonadotropin. J Clin Endocrinol Metab. 1980;51(1):35–40. Okun MS, Fernandez HH, Rodriguez RL, Romrell J, Suelter M, Munson S, Louis ED, Mulligan T, Foster PS, Shenal BV, et al. Testosterone therapy in men with Parkinson disease: results of the TEST-PD study. Arch Neurol. 2006;63(5):729–35. Liu LX, Chen WF, Xie JX, Wong MS. Neuroprotective effects of genistein on dopaminergic neurons in the mice model of Parkinson's disease. Neurosci Res. 2008;60(2):156–61. Turner RS, Thomas RG, Craft S, van Dyck CH, Mintzer J, Reynolds BA, Brewer JB, Rissman RA, Raman R, Aisen PS, et al. A randomized, double-blind, placebo-controlled trial of resveratrol for Alzheimer disease. Neurology. 2015;85(16):1383–91. Ferretta A, Gaballo A, Tanzarella P, Piccoli C, Capitanio N, Nico B, Annese T, Di Paola M, Dell'aquila C, De Mari M et al: Effect of resveratrol on mitochondrial function: implications in parkin-associated familiar Parkinson's disease. Biochim Biophys Acta 2014, 1842(7):902–915. Jin F, Wu Q, YF L, Gong QH, Shi JS. Neuroprotective effect of resveratrol on 6-OHDA-induced Parkinson's disease in rats. Eur J Pharmacol. 2008;600(1–3):78–82. Esteves M, Cristovao AC, Saraiva T, Rocha SM, Baltazar G, Ferreira L, Bernardino L. 
Retinoic acid-loaded polymeric nanoparticles induce neuroprotection in a mouse model for Parkinson's disease. Front Aging Neurosci. 2015;7:20. Kontaxakis VP, Skourides D, Ferentinos P, Havaki-Kontaxaki BJ, Papadimitriou GN. Isotretinoin and psychopathology: a review. Ann General Psychiatry. 2009;8:2. Malagelada C, Jin ZH, Jackson-Lewis V, Przedborski S, Greene LA. Rapamycin protects against neuron death in in vitro and in vivo models of Parkinson's disease. J Neurosci. 2010;30(3):1166–75. Perier C, Vila M. Mitochondrial biology and Parkinson's disease. Cold Spring Harbor Perspectives Med. 2012;2(2):a009332. Wang H, Chen Y, Chen J, Zhang Z, Lao W, Li X, Huang J, Wang T. Cell cycle regulation of DNA polymerase beta in rotenone-based Parkinson's disease models. PLoS One. 2014;9(10):e109697. We acknowledge the partial support of this research by the University of Alabama at Birmingham (UAB) and the UAB Informatics Institute during the implementation of the project. Our database servers and web applications are maintained by the UAB high-performance computing group. The publication cost of the paper is from Dr.Chen's lab start-up funding in the University of Alabama at Birmingham in informatics institute and UL1 TR001417 "UAB Center for Clinical and Translational Science" grant award. PAGER database is available online http://discovery.informatics.uab.edu/PAGER. DMAP database is available online http://discovery.informatics.uab.edu/cmaps. The supplemental material are available for download. This article has been published as part of BMC Bioinformatics Volume 18 Supplement 14, 2017: Proceedings of the 14th Annual MCBIOS conference. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-18-supplement-14. Center for Biomedical Big Data, Wenzhou Medical University First Affiliated Hospital, Wenzhou, Zhejiang Province, China Zongliang Yue & Jake Y. Chen Informatics Institute in School of Medicine, University of Alabama at Birmingham, Birmingham, 35233, AL, USA Zongliang Yue, Itika Arora, Eric Y. Zhang & Jake Y. Chen Division of Clinical Immunology and Rheumatology in School of Medicine, University of Alabama at Birmingham, Birmingham, 35233, AL, USA Vincent Laufer & S. Louis Bridges Wenzhou Yuekang InfoTech, Ltd., Wenzhou, Zhejiang Province, China Jake Y. Chen Zongliang Yue Itika Arora Eric Y. Zhang Vincent Laufer S. Louis Bridges ZLY performed the construction of the framework, data analysis and led the writing of the manuscript under the guidance of JYC. IA, EZ participated in drug repositioning performance evaluations. VL, SLB and JYC participated in the revision of the manuscript. JYC conceived the project, supervised the entire research team with frequent feedback in the design, implementation, and evaluation of the project. All authors contributed to the completion of the manuscripts and approved final manuscript. Correspondence to Jake Y. Chen. All authors consent for publication. 1246 (1201 unique drugbankID) candidate drugs for drug repositioning that targeted to PD modules. (XLSX 136 kb) The Protein-Protein Interactions and drug-target regulations in Br module's network. (XLSX 26 kb) The Protein-Protein Interactions and drug-target regulations in T module's network. (XLSX 98 kb) Yue, Z., Arora, I., Zhang, E.Y. et al. Repositioning drugs by targeting network modules: a Parkinson's disease case study. BMC Bioinformatics 18 (Suppl 14), 532 (2017). 
https://doi.org/10.1186/s12859-017-1889-0 Weighted Gene Correlation Network Analysis (WGCNA) Genome-wide Association Studies (GWAS) Lateral Substantia Medial Substantia
CommonCrawl
Flows of vector fields with point singularities and the vortex-wave system On the Fibonacci complex dynamical systems May 2016, 36(5): 2419-2447. doi: 10.3934/dcds.2016.36.2419 Sharp time decay rates on a hyperbolic plate model under effects of an intermediate damping with a time-dependent coefficient Marcello D'Abbicco 1, , Ruy Coimbra Charão 2, and Cleverson Roberto da Luz 3, Departamento de Computação e Matemática, Universidade de São Paulo (USP), FFCLRP, Av. dos Bandeirantes 3900, Ribeirão Preto, SP 14040-901, Brazil Department of Mathematics, Federal University of Santa Catarina, Campus Trindade, Florianópolis, SC 88040-900, Brazil Department of Mathematics, Federal University of Santa Catarina, Campus Trindade, Florianopolis, SC 88040-900, Brazil Received April 2015 Revised September 2015 Published October 2015 In this work we study decay rates for a hyperbolic plate equation under effects of an intermediate damping term represented by the action of a fractional Laplacian operator and a time-dependent coefficient. We obtain decay rates with very general conditions on the time-dependent coefficient (Theorem 2.1, Section 2), for the power fractional exponent of the Laplace operator $(-\Delta)^\theta$, in the damping term, $\theta \in [0,1]$. For the special time-dependent coefficient $b(t)=\mu (1+t)^{\alpha}$, $\alpha \in (0,1]$, we get optimal decay rates (Theorem 3.1, Section 3). Keywords: diagonalization procedure, Hyperbolic plate equation, time-dependent damping coefficient, fractional damping term, optimal time decay rates.. Mathematics Subject Classification: Primary: 35B40; Secondary: 35L30, 35B35, 74K2. Citation: Marcello D'Abbicco, Ruy Coimbra Charão, Cleverson Roberto da Luz. Sharp time decay rates on a hyperbolic plate model under effects of an intermediate damping with a time-dependent coefficient. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2419-2447. doi: 10.3934/dcds.2016.36.2419 D. Andrade, M. A. Jorge Silva and T. F. Ma, Exponential stability for a plate equation with p-Laplacian and memory terms,, Math. Methods Appl. Sci., 35 (2012), 417. doi: 10.1002/mma.1552. Google Scholar M. A. Astaburuaga, C. Fernandez and G. Perla Menzala, Energy decay rates and the dynamical von Kármán equations,, Appl. Math. Lett., 7 (1994), 7. doi: 10.1016/0893-9659(94)90021-3. Google Scholar I. Chueshov and I. Lasiecka, Long-time behavior of second order evolution equations with nonlinear damping,, Memoirs of the American Mathematical Society, 195 (2008). doi: 10.1090/memo/0912. Google Scholar P. G. Ciarlet, A justification of the von Kármán equations,, Arch. Rational Mech. Anal., 73 (1980), 349. doi: 10.1007/BF00247674. Google Scholar R. Coimbra Charão, C. R. da Luz and R. Ikehata, Sharp decay rates for wave equations with a fractional damping via new method in the Fourier space,, J. Math. Anal. Appl., 408 (2013), 247. doi: 10.1016/j.jmaa.2013.06.016. Google Scholar R. Coimbra Charão, C. R. da Luz and R. Ikehata, New decay rates for a problem of plate dynamics with fractional damping,, J. Hyperbolic Differ. Equ., 10 (2013), 563. doi: 10.1142/S0219891613500203. Google Scholar R. Coimbra Charão and R. Ikehata, Energy decay rates of elastic waves in unbounded domain with potential type of damping,, J. Math. Anal. Appl., 380 (2011), 46. doi: 10.1016/j.jmaa.2011.02.075. Google Scholar C. R. da Luz and R. Coimbra Charão, Asymptotic properties for a semilinear plate equation in unbounded domains,, J. Hyperbolic Differ. Equ., 6 (2009), 269. doi: 10.1142/S0219891609001824. 
Google Scholar C. R. da Luz, R. Ikehata and R. Coimbra Charão, Asymptotic behavior for abstract evolution differential equations of second order,, J. Differential Equations, 259 (2015), 5017. doi: 10.1016/j.jde.2015.06.012. Google Scholar M. D'Abbicco, The threshold of effective damping for semilinear wave equations,, Math. Methods Appl. Sci., 38 (2015), 1032. doi: 10.1002/mma.3126. Google Scholar M. D'Abbicco and M. R. Ebert, Diffusion phenomena for the wave equation with structural damping in the $L^p-L^q$ framework,, J. Differential Equations, 256 (2014), 2307. doi: 10.1016/j.jde.2014.01.002. Google Scholar M. D'Abbicco and M. R. Ebert, A classification of structural dissipations for evolution operators,, Math. Methods in the Appl. Sci., (2015). doi: 10.1002/mma.3713. Google Scholar M. D'Abbicco and E. Jannelli, A damping term for higher-order hyperbolic equations,, Ann. Mat. Pura ed Appl., (2015), 10231. doi: 10.1007/s10231-015-0477-z. Google Scholar M. D'Abbicco, S. Lucente and M. Reissig, Semi-linear wave equations with effective damping,, Chin. Ann. Math. Ser. B, 34 (2013), 345. doi: 10.1007/s11401-013-0773-0. Google Scholar R. Denk and R. Schnaubelt, A structurally damped plate equation with Dirichlet-Neumann boundary conditions,, J. Differential Equations, 259 (2015), 1323. doi: 10.1016/j.jde.2015.02.043. Google Scholar D. Fang, X. Lu and M. Reissig, High-order energy decay for structural damped systems in the electromagnetical field,, Chin. Ann. Math. Ser. B, 31 (2010), 237. doi: 10.1007/s11401-008-0185-8. Google Scholar P. G. Geredeli and I. Lasiecka, Asymptotic analysis and upper semicontinuity with respect to rotational inertia of attractors to von Kármán plates with geometrically localized dissipation and critical nonlinearity,, Nonlinear Analysis, 91 (2013), 72. doi: 10.1016/j.na.2013.06.008. Google Scholar R. Ikehata and Y. Inoue, Total energy decay for semilinear wave equations with a critical potential type of damping,, Nonlinear Anal., 69 (2008), 1396. doi: 10.1016/j.na.2007.06.039. Google Scholar R. Ikehata and M. Soga, Asymptotic profiles for a strongly damped plate equation with lower order perturbation,, Commun. Pure Appl. Anal., 14 (2015), 1759. doi: 10.3934/cpaa.2015.14.1759. Google Scholar R. Ikehata, G. Todorova and B. Yordanov, Optimal decay rate of the energy for wave equations with critical potential,, J. Math. Soc. Japan, 65 (2013), 183. doi: 10.2969/jmsj/06510183. Google Scholar M. Kainane and M. Reissig, Qualitative properties of solution to structurally damped $\sigma-$evolution models with time decreasing coefficient in the dissipation,, in Complex Analysis and Dynamical Systems VI, (2015). Google Scholar M. Kainane and M. Reissig, Qualitative properties of solution to structurally damped $\sigma-$evolution models with time increasing coefficient in the dissipation,, Adv. Differential Equations, 20 (2015), 433. Google Scholar G. Karch, Selfsimilar profiles in large time asymptotics of solutions to damped wave equations,, Studia Math., 143 (2000), 175. Google Scholar H. Koch and I. Lasiecka, Hadamard wellposedness of weak solutions in nonlinear dynamical elasticity - full von Kármán systems,, Progress Nonlin. Diff. Eqs. Appl., 50 (2002), 197. Google Scholar I. Lasiecka, Uniform stability of a full von Kármán system with nonlinear boundary feedbach},, SIAM J. Control Optim., 36 (1998), 1376. doi: 10.1137/S0363012996301907. Google Scholar I. Lasiecka and A. 
Benabdallah, Exponential decay rates for a full von Kármán thermoelasticity system with nonlinear thermal coupling,, ESAIM: Control, 8 (2000), 13. doi: 10.1051/proc:2000002. Google Scholar J. Lin, K. Nishihara and J. Zhai, Critical exponent for the semilinear wave equation with time-dependent damping,, Discrete Contin. Dyn. Syst., 32 (2012), 4307. doi: 10.3934/dcds.2012.32.4307. Google Scholar J. R. Luyo Sánchez, O Sistema Dinâmico de Von Kármán em Domínios Não Limitados é Globalmente Bem Posto no Sentido de Hadamard: Análise do seu Limite Singular (Portuguese),, Ph.D thesis, (2003). Google Scholar Q. Ma, Y. Yang and X. Zhang, Existence of exponential attractors for the plate equations with strong damping,, Electron. J. Differential Equations, (2013), 1. Google Scholar P. Martinez, A new method to obtain decay rate estimates for dissipative systems,, ESAIM Control Optim. Calc. Var., 4 (1999), 419. doi: 10.1051/cocv:1999116. Google Scholar S. Matthes and M. Reissig, Qualitative properties of structurally damped wave models,, Eurasian Math. J., 4 (2013), 84. Google Scholar G. P. Menzala and E. Zuazua, Timoshenko's plate equations as a singular limit of the dinamical von Kármán system,, J. Math. Pures Appl., 79 (2000), 73. doi: 10.1016/S0021-7824(00)00149-5. Google Scholar J. P. Puel and M. Tucsnak, Global existence for full von Kármán system,, Appl. Math. Optim., 34 (1996), 139. doi: 10.1007/BF01182621. Google Scholar R. Schnaubelt and M. Veraar, Structurally damped plate and wave equations with random point force in arbitrary space dimensions,, Differential Integral Equations, 23 (2010), 957. Google Scholar Y. Sugitani and S. Kawashima, Decay estimates of solutions to a semi-linear dissipative plate equation,, J. Hyperbolic Differ. Equ., 7 (2010), 471. doi: 10.1142/S0219891610002207. Google Scholar R. Temam, Navier-Stokes Equations: Theory and Numerical Analysis,, Studies in Mathematics and its Applications, (1979). Google Scholar Y. Wakasugi, On the Diffusive Structure for the Damped Wave Equation with Variable Coefficients,, Ph.D thesis, (2014). Google Scholar Y. Wakasugi, Critical exponent for the semilinear wave equation with scale invariant damping,, in Fourier Analysis, (2013), 375. doi: 10.1007/978-3-319-02550-6_19. Google Scholar Y.-Z. Wang, Asymptotic behavior of solutions to the damped nonlinear hyperbolic equation,, J. Appl. Math., (2013). Google Scholar J. Wirth, Wave equations with time-dependent dissipation II. Effective dissipation,, J. Differential Equations, 232 (2007), 74. doi: 10.1016/j.jde.2006.06.004. Google Scholar L. Xu and Q. Ma, Existence of random attractors for the floating beam equation with strong damping and white noise,, Bound. Value Probl., 2015 (2015), 13661. doi: 10.1186/s13661-015-0391-8. Google Scholar K. Yagdjian, The Cauchy Problem for Hyperbolic Operators. Multiple Characteristics. Micro-Local Approach,, Mathematical Topics, (1997). Google Scholar Moez Daoulatli. Rates of decay for the wave systems with time dependent damping. Discrete & Continuous Dynamical Systems - A, 2011, 31 (2) : 407-443. doi: 10.3934/dcds.2011.31.407 Jiayun Lin, Kenji Nishihara, Jian Zhai. Critical exponent for the semilinear wave equation with time-dependent damping. Discrete & Continuous Dynamical Systems - A, 2012, 32 (12) : 4307-4320. doi: 10.3934/dcds.2012.32.4307 Zhidong Zhang. An undetermined time-dependent coefficient in a fractional diffusion equation. Inverse Problems & Imaging, 2017, 11 (5) : 875-900. 
doi: 10.3934/ipi.2017041 Hedy Attouch, Alexandre Cabot, Zaki Chbani, Hassan Riahi. Rate of convergence of inertial gradient dynamics with time-dependent viscous damping coefficient. Evolution Equations & Control Theory, 2018, 7 (3) : 353-371. doi: 10.3934/eect.2018018 Fengjuan Meng, Meihua Yang, Chengkui Zhong. Attractors for wave equations with nonlinear damping on time-dependent space. Discrete & Continuous Dynamical Systems - B, 2016, 21 (1) : 205-225. doi: 10.3934/dcdsb.2016.21.205 Wenjun Liu, Biqing Zhu, Gang Li, Danhua Wang. General decay for a viscoelastic Kirchhoff equation with Balakrishnan-Taylor damping, dynamic boundary conditions and a time-varying delay term. Evolution Equations & Control Theory, 2017, 6 (2) : 239-260. doi: 10.3934/eect.2017013 Robert E. Miller. Homogenization of time-dependent systems with Kelvin-Voigt damping by two-scale convergence. Discrete & Continuous Dynamical Systems - A, 1995, 1 (4) : 485-502. doi: 10.3934/dcds.1995.1.485 Qing Chen, Zhong Tan. Time decay of solutions to the compressible Euler equations with damping. Kinetic & Related Models, 2014, 7 (4) : 605-619. doi: 10.3934/krm.2014.7.605 Ryo Ikehata, Shingo Kitazaki. Optimal energy decay rates for some wave equations with double damping terms. Evolution Equations & Control Theory, 2019, 8 (4) : 825-846. doi: 10.3934/eect.2019040 Moez Daoulatli. Energy decay rates for solutions of the wave equation with linear damping in exterior domain. Evolution Equations & Control Theory, 2016, 5 (1) : 37-59. doi: 10.3934/eect.2016.5.37 Mourad Choulli, Yavar Kian. Stability of the determination of a time-dependent coefficient in parabolic equations. Mathematical Control & Related Fields, 2013, 3 (2) : 143-160. doi: 10.3934/mcrf.2013.3.143 Igor Chueshov, Stanislav Kolbasin. Long-time dynamics in plate models with strong nonlinear damping. Communications on Pure & Applied Analysis, 2012, 11 (2) : 659-674. doi: 10.3934/cpaa.2012.11.659 Tingting Liu, Qiaozhen Ma. Time-dependent asymptotic behavior of the solution for plate equations with linear memory. Discrete & Continuous Dynamical Systems - B, 2018, 23 (10) : 4595-4616. doi: 10.3934/dcdsb.2018178 Giuseppe Maria Coclite, Mauro Garavello, Laura V. Spinolo. Optimal strategies for a time-dependent harvesting problem. Discrete & Continuous Dynamical Systems - S, 2018, 11 (5) : 865-900. doi: 10.3934/dcdss.2018053 Alain Haraux, Mohamed Ali Jendoubi. Asymptotics for a second order differential equation with a linear, slowly time-decaying damping term. Evolution Equations & Control Theory, 2013, 2 (3) : 461-470. doi: 10.3934/eect.2013.2.461 Francesco Di Plinio, Gregory S. Duane, Roger Temam. Time-dependent attractor for the Oscillon equation. Discrete & Continuous Dynamical Systems - A, 2011, 29 (1) : 141-167. doi: 10.3934/dcds.2011.29.141 Jin Takahashi, Eiji Yanagida. Time-dependent singularities in the heat equation. Communications on Pure & Applied Analysis, 2015, 14 (3) : 969-979. doi: 10.3934/cpaa.2015.14.969 Jean-Paul Chehab, Pierre Garnier, Youcef Mammeri. Long-time behavior of solutions of a BBM equation with generalized damping. Discrete & Continuous Dynamical Systems - B, 2015, 20 (7) : 1897-1915. doi: 10.3934/dcdsb.2015.20.1897 Paul Deuring. Pointwise spatial decay of time-dependent Oseen flows: The case of data with noncompact support. Discrete & Continuous Dynamical Systems - A, 2013, 33 (7) : 2757-2776. doi: 10.3934/dcds.2013.33.2757 Marcello D'Abbicco Ruy Coimbra Charão Cleverson Roberto da Luz
CommonCrawl
Predicting multi-level drug response with gene expression profile in multiple myeloma using hierarchical ordinal regression Xinyan Zhang1, Bingzong Li2, Huiying Han3, Sha Song3, Hongxia Xu3, Yating Hong2, Nengjun Yi4 & Wenzhuo Zhuang3 Multiple myeloma (MM), like other cancers, is caused by the accumulation of genetic abnormalities. Heterogeneity exists in the patients' response to treatments, for example, bortezomib. This urges efforts to identify biomarkers from numerous molecular features and build predictive models for identifying patients that can benefit from a certain treatment scheme. However, previous studies treated the multi-level ordinal drug response as a binary response where only responsive and non-responsive groups are considered. It is desirable to directly analyze the multi-level drug response, rather than combining the response to two groups. In this study, we present a novel method to identify significantly associated biomarkers and then develop ordinal genomic classifier using the hierarchical ordinal logistic model. The proposed hierarchical ordinal logistic model employs the heavy-tailed Cauchy prior on the coefficients and is fitted by an efficient quasi-Newton algorithm. We apply our hierarchical ordinal regression approach to analyze two publicly available datasets for MM with five-level drug response and numerous gene expression measures. Our results show that our method is able to identify genes associated with the multi-level drug response and to generate powerful predictive models for predicting the multi-level response. The proposed method allows us to jointly fit numerous correlated predictors and thus build efficient models for predicting the multi-level drug response. The predictive model for the multi-level drug response can be more informative than the previous approaches. Thus, the proposed approach provides a powerful tool for predicting multi-level drug response and has important impact on cancer studies. Multiple myeloma (MM) is a malignant plasma cell disorder characterized by the proliferation in the bone marrow of clonal plasma cells [1, 2]. Around 30,280 new multiple myeloma cases are expected to be diagnosed in 2017 [3]. Meanwhile, MM is an incurable disease using conventional treatment, which results in a median overall survival of 3 to 4 years [4, 5]. Nevertheless, treatment outcome in MM has improved significantly in the last decade, partially due to the introduction of novel agents, such as the proteasome inhibitors (e.g. bortezomib) and immunomodulatory drugs (e.g. thalidomide) [6]. In spite of that, the heterogeneity exists in the patients' response to those new treatments and molecular features responsible for the variability in response remain undefined [7,8,9]. It urges more efforts to identify biomarkers from numerous molecular features and build predictive models for identifying patients that can benefit from a certain treatment scheme [7]. MM, like other cancers, is caused by the accumulation of genetic abnormalities [2, 10]. Various molecular analyses suggest that myeloma is composed of distinct subtypes that have different molecular pathologies and prognosis [10]. For example, cytogenetic studies reveal that 60 to 80% of myeloma cases reveal chromosomal translocations involving the immunoglobulin heavy (IgH) locus [10]. The most prevalent of these translocations are t(11;14)(q13;q32) and t(4;14)(p16;q32), where the former has better survival than the latter. 
Another chromosomal translocation in t(8;14)(q24;q32), causing MYC activation, is considered as a secondary hit. Other genetic abnormalities include mutations and copy number changes. Mutational spectrum reveals a heterogeneous landscape with few recurrently affected genes. Only three genes have been reported in more than 10% patients, including KRAS, NRAS, and FAM46C [11,12,13,14]. For copy number changes, the most common ones being gains are on 1q, 3p, 6p, 9p, 11q, 19p, 19q and 21q along with deletions of 1p, 4q, 16q and 22q, among which candidate oncogenes and tumor suppressors have been identified [15,16,17,18]. Thus, it is anticipated that identifying and applying molecular biomarkers to predict clinical response to drugs will help to provide more precise prognostic and predictive classifiers for a specific therapy in MM. With the emergence of high-throughput sequencing technology, it is expected that the number of biomarkers will rise. The markers in predicting drug response will become more reproducible as well [19, 20]. However, the effect sizes of the discovered markers are usually small, which only could contribute a relatively trivial portion to the drug response since it is typically a complex trait, generally influenced by many genomic and environmental factors [19]. Thus, predictive modeling with multiple markers should be used to predict complex traits, such as drug response [19]. Distinct gene expression profiling is believed to be associated with the drug response variability of bortezomib, leading to various disease prognoses [10]. The relationships between the heterogeneity of drug response in bortezomib or its combined therapy with the genomic background of multiple myeloma patients have been investigated [2, 10]. Mulligan et al. [10] generated gene expression data from a national and international phase 2 and 3 clinical trials of bortezomib to develop a genomic classifier for prediction of drug response in relapsed MM. Terragna et al. [2] analyzed the gene expression from MM patients to explore predictors of bortezomib-thalidomide-dexamethasone (VTD) first-line therapy. According to European Group for Bone Marrow Transplantation criteria, drug responses in MM were classified as achieving complete response (CR), partial response (PR), minimal response (MR), no change (NC) and Progressive Disease (PD) [21]. However, in Mulligan et al. [10], the five-level ordinal drug response was categorized as a binary response where only responsive and non-responsive groups were considered. Terragna et al. [2] focused on CR vs non-CR groups by converting the ordinal outcome to a binary outcome. To provide more informative prediction and more efficiently identify biomarkers, it is desirable to analyze multi-level drug response, rather than combining the response to two groups. To address this shortcoming in the previous analyses, we here present a novel approach to identify significantly associated biomarkers and develop genomic classifier using hierarchical ordinal logistic regression. We apply our approach to two public available datasets [2, 10]. Our results show that our hierarchical ordinal regression approach is able to identify genes associated with the multi-level response and to generate predictive models for predicting the multi-level response. Datasets acquisition for ordinal response prediction The gene expression datasets analyzed to predict the drug response were acquired from two independent clinical trials. 
The two datasets are publically available from GEO under accession number [GEO: GSE9782] and [GEO: GSE68871]. They were originally published in Mulligan et al. [10] and Terragna et al. [2]. Mulligan et al. [10] recruited patients (n = 169) with relapsed myeloma enrolled in phase 2 and phase 3 clinical trials of bortezomib, whose pretreated tumor samples were further analyzed for genomic profiling with consent. Myeloma samples were collected prior to enrollment in clinical trials of bortezomib and samples were subject to replicate gene expression profiling using the Affymetrix 133A microarray (22,283 probes). Terragna et al. [2] focused primarily on treating the new MM patients (n = 118) with the induction therapy of VTD. The gene expression profiling (54,677 probes) was carried out in the Affymetrix Human Genome U133 Plus 2.0 Array. We standardized all the probes for statistical analysis in both datasets. Outcome definitions The original datasets contained drug response and overall survival as two outcomes. We here focused on developing genomic classifier for the drug response. In Mulligan et al. [10], patients were classified as achieving complete response (CR), partial response (PR), minimal response (MR), no change (NC) and Progressive Disease (PD) according to European Group for Bone Marrow Transplantation criteria [21]. In Terragna et al. [2], patients' drug responses were classified as five categories: complete response (CR), near complete response (nCR), very good partial response (VGPR), partial response (PR) and stable disease (SD). For both datasets, the five-level ordinal drug response was used in our analysis. In the meantime, we combined the five-level drug response to a new three-level drug response in both datasets to avoid low frequencies in certain levels. For Mulligan et al. [10], we combined PD and NC to a new level, and PR and MR as another new level. For Terragna et al. [2], we combined SD and PR as a new level, and VGPR and nCR as another new level. Both the original five-level and the new three-level outcomes were separately analyzed for these two datasets. Ordinal drug response prediction modeling Let y i be the ordinal outcome for which there exists a clear ordering of the response categories, and X ij the gene expression profile for the ith individual and jth probe in the data of sample size n with a total number J of probes. For notational convenience, we code the ordinal outcome as the integers 1, 2, ···, K, with K being the number of categories. Univariate ordinal logistic regression to rank top probes It will not be efficient to include all the genes in a predictive model, due to the large number of genes. We thus first use the univariate ordinal logistic regression to filter top q associated probes of the gene expression profile with the ordinal outcome. For the j-th gene expression X ij , the univariate ordinal logistic regression is expressed as: $$ \Pr \left({y}_i=k\right)=\left\{\begin{array}{l}1-{\mathrm{logit}}^{-1}\left({X}_{ij}\alpha -{c}_{1j}\right)\kern9em \mathrm{if}\kern0.5em k=1\\ {}{\mathrm{logit}}^{-1}\left({X}_{ij}\alpha -{c}_{\left(k-1\right)j}\right)-{\mathrm{logit}}^{-1}\left({X}_{ij}\alpha -{c}_{kj}\right)\kern1.25em \mathrm{if}\kern0.5em 1<k<K\\ {}{\mathrm{logit}}^{-1}\left({X}_{ij}\alpha -{c}_{\left(k-1\right)j}\right)\kern9.25em \mathrm{if}\kern0.5em k=K\ \end{array}\right. $$ where c kj denoted cut-points or thresholds, are constrained to increase, c1j < ⋯ < c(K − 1)j. 
We then select the top q associated probes based on the p-value for testing the hypothesis H0: α = 0. Predictive modeling for genomic classifiers We use all the q selected probes to build a multivariable ordinal model for predicting the multi-level response, i.e. $$ \Pr \left({y}_i=k\right)=\left\{\begin{array}{l}1-{\mathrm{logit}}^{-1}\left({X}_i\beta -{c}_1\right)\kern8.5em \mathrm{if}\kern0.5em k=1\\ {}{\mathrm{logit}}^{-1}\left({X}_i\beta -{c}_{k-1}\right)-{\mathrm{logit}}^{-1}\left({X}_{ij}\beta -{c}_k\right)\kern1.25em \mathrm{if}\kern0.5em 1<k<K\\ {}{\mathrm{logit}}^{-1}\left({X}_i\beta -{c}_{k-1}\right)\kern9em \mathrm{if}\kern0.5em k=K\ \end{array}\right. $$ where the vector X i includes the expression measures of the q genes, and β = (β1, · ··, β q )T is a vector of the effects. With hundreds or tens of correlated top associated probes, however, the standard ordinal logistic regression may fail, due to the non-identifiability and overfitting. To overcome the problems, we use an appropriate prior distribution to constrain the coefficients to lie in reasonable ranges and allow the model to be reliably fitted and to identify important predictors [22, 23]. We employ the commonly used Cauchy prior distribution on the coefficients in the ordered logistic regression: $$ p\left({\beta}_j\right)=\mathrm{Caucy}\left(0,s\right)=\frac{1}{\pi s}\frac{1}{1+{\beta}_j^2/{s}^2} $$ The scale parameter s controls the amount of shrinkage in the coefficient estimate; smaller s induces stronger shrinkage and forces more coefficients towards zero. For the cut-points parameters, we use a uniform prior. We have developed a quasi-Newton algorithm (BFGS) for fitting the hierarchical ordinal model by finding the posterior mode of the parameters (β, c), i.e., estimating the parameters by maximizing the posterior density. Our algorithm has been implemented in our R package BhGLM, which is freely available from the website http://www.ssg.uab.edu/bhglm/ and the public GitHub repository https://github.com/abbyyan3/BhGLM that includes R codes for examples. Assessing the performance of a fitted hierarchical ordinal logistic regression After fitting a hierarchical ordinal model, we obtain the estimate (\( \widehat{\beta},\widehat{c} \)) and can estimate the probabilities: \( {p}_{ik}=\Pr \left({y}_i=k|{X}_i\widehat{\beta},\widehat{c}\right),i=1,\cdots, n;k=1,\cdots, K \). Denote y ik = I(y i = k) as the binary indictor response for the k-th category. We can evaluate the performance using several measures: Deviance: \( d=-2{\sum}_{i=1}^n\log {p}_{ik} \). Deviance measures the overall quality of a fitted model; AUC (area under the ROC curve). We can calculate AUC for the k-th category using {y ik , p ik : i = 1, ···, n} as usual. Then the AUC for all the categories is defined as\( \frac{1}{K}{\sum}_{k=1}^K{AUC}_k \). MSE (mean squared error). MSE is defined as: \( MSE=\frac{1}{K}{\sum}_{k=1}^K\left[\frac{1}{n}{\sum}_{i=1}^n{\left({y}_{ik}-{p}_{ik}\right)}^2\right] \). Misclassification. The misclassification is defined as: \( MIS=\frac{1}{K}{\sum}_{k=1}^K\left[\frac{1}{n}{\sum}_{i=1}^nI\left(|{y}_{ik}-{p}_{ik}|>0.5\right)\right] \), where I(| y ik − p ik | >0.5) = 1 if ∣y ik − p ik ∣ > 0.5, and I(| y ik − p ik | >0.5) = 0 if ∣y ik − p ik ∣ ≤ 0.5; To evaluate the predictive performance of the model, we use the pre-validation method, a variant of cross-validation [24, 25], by randomly splitting the data to H subsets of roughly the same size, and using (H – 1) subsets to fit a model. 
We then calculate the measures described above with hth subset and cycle through all H subsets to get the averaged measurements to evaluate the predictive performance. To get stable results, we can run H-fold cross-validation multiple times and use the average of the measure over the repeats to assess the predictive performance. We also can use leave-one-out cross-validation (i.e., H = n) to obtain unique result. In this study, 10-fold cross-validation with 10 repeats and leave-one-out cross-validation were both utilized. Deviance AUC, MSE and misclassification rate were all reported. Selecting optimal scale values The performance of the hierarchical ordinal model can depend on the scale parameter in the Cauchy prior. We fit a sequence of models with different scales ranging from 0.01 to 1 by 0.01, from which we can choose an optimal one based on the criteria described above. Selecting the optimal number of q probes To select an optimal number of top q probes, we fit a sequence of models with a different number of probes with the options from 30 and 50 to 500 by 50. The number q will be determined based on the predictive performance of the hierarchical ordinal logistic regression by evaluating the deviance of the models. The chosen top q probes will be identified as associated significant biomarkers and present in heatmaps for visual examination. There were 169 samples analyzed in the dataset, with a total of 22,283 gene expression probes in Mulligan et al. [10]. In Terragna et al. [2], we analyzed 118 samples with a total of 54,677 gene expression probes. The details of both studies and the frequencies of the five-level ordinal drug response outcomes were summarized in Table 1. To avoid low frequencies in some levels for both datasets, we combined the five-level drug response to a new three-level drug response. By combining drug response in Mulligan et al. [10] as the new three-level ordinal outcome, there were 73 patients having a response as PD or NC, 55 patients having a response as MR or PR and 41 patients having a response as CR. By combining drug response in Terragna et al. [2] as the new three-level ordinal outcome, there were 49 patients having a response as SD or PR, 54 patients having a response as VGPR or nCR and 15 patients having a response as CR. Table 1 Summary of studies and frequency table for original ordinal outcome in both studies Predictive genomic classifiers analysis For the original ordinal drug response outcome in Mulligan et al. [10], we first filtered probes based on all the options from top 30 and then 50 probes to top 500 probes by 50 probes. Based on the predictive performance of all the 11 models (Table 2) evaluated with 10-fold cross-validation with 10 repeats, it showed that the best predictive model would include all 450 top probes for smallest deviance. However, for the final predictive model, we included top 50 probes as predictors with a balance between a reasonable decrease in deviance and simplicity in predictive model for clinical application. The prior scale in the final model was chosen at 0.14. Deviance of the final model was 441.919 and AUC was 0.632; while MSE was 0.136 and misclassification rate was 0.189. For the new combined ordinal drug response outcome in Mulligan et al. [10], we filtered probes based on all the options from top 30 and then 50 probes to top 500 probes by 50 probes. 
Based on the predictive performance of all the 11 models (Table S1 [See Additional file 1]) evaluated by 10-fold cross-validation with 10 repeats, it showed that the best predictive model included all 400 top probes for smallest deviance. However, for the final predictive model, we included top 50 probes as predictors with a balance between a reasonable decrease in deviance and simplicity in predictive model for clinical application. The prior scale in the final model was chosen at 0.16. For the final model, deviance was 316.118, AUC was 0.696, MSE was 0.190 and misclassification rate was 0.273. Comparing the predictive modeling performance between five-level ordinal drug response and the new combined ordinal drug response, AUC increased by 0.060 and deviance decreased by 125.801 for the combined ordinal outcome with a trade-off in MSE increasing by 0.054 and misclassification rate increasing by 0.084. Table 2 Summary of predictive performance using different number of top probes for drug response prediction (five levels) in two studies For the original ordinal drug response outcome in Terragna et al. [2], we also filtered probes based on all the options from top 30 and then 50 probes to top 500 probes by 50 probes. Based on the predictive performance of all the 11 models (Table 2), it showed that the best predictive model included all 500 top probes for smallest deviance. However, for the final predictive model, we included top 30 probes as predictors with a balance between a comparable low deviance and simplicity in predictive model for clinical application. The prior scale in the final model was chosen at 0.95. For the final model, deviance was 270.440, AUC was 0.776, MSE was 0.126 and misclassification rate was 0.188. For the new combined ordinal drug response outcome in Terragna et al. [2], we filtered probes based on all the options from top 30 and then 50 probes to top 500 probes by 50 probes. Based on the predictive performance of all the 11 models (Additional file 1: Table S1), it showed that the best predictive model included all 500 top probes for smallest deviance. However, for the final predictive model, we included top 50 probes as predictors with a balance between a comparable low deviance and simplicity in predictive model for clinical application. The prior scale in the final model was chosen at 0.26. Deviance of the final model was 167.130 and AUC was 0.800; while MSE was 0.152 and misclassification rate was 0.233. We compared the predictive performance of all models using 10-fold cross-validation with 10 repeats and leave one out cross-validation, which lead to similar results. The results were shown in Table 2 and Additional file 1: Table S1. Comparing the predictive modeling performance between five-level ordinal drug response and the new combined ordinal drug response, AUC increased by 0.024 and deviance decreased by 103.310 for the combined ordinal outcome with a trade-off in MSE increasing by 0.026 and misclassification rate increasing by 0.045. Genes identification To visualize the selected significant probes and its relationship with the clinical outcome in Mulligan et al. [10], a heatmap was presented in Fig. 1 with the top 50 significant probes which were used as predictive genomic factors for the five-level ordinal drug response. The top 50 significant probes represent genes of known function. 
Most of the probes are overexpressed in patients with PR or CR, covering various functions including ribosomal protein (RPL11, RPL15, RPS7, RPS13), mitochondrial (COX7C) and eukaryotic translation initiation factor (EIF3D, EIF3E, EIF3F, EIF3H) genes. Two of the probes are under-expressed in patients achieving PR or CR, both representing ATPase plasma membrane Ca2+ transporting 4 (ATP2B4). For the three-level ordinal drug response, a heatmap of the top 50 significant probes used as predictive genomic factors in our final predictive model is presented in Figure S1 [see Additional file 2]. The top 50 probes for the three-level drug response overlapped with most of the top 50 probes for the five-level drug response. Only a few probes represent genes with different functions, including eukaryotic translation elongation factor 2 (EEF2), chloride voltage-gated channel 3 (CLCN3) and abhydrolase domain containing 14A (ABHD14A).

Fig. 1 Heatmap of the top 50 significant probes for drug response (five levels) in Mulligan et al. [10]. A heatmap of the gene expression of the selected top 50 significant probes, which were used as predictive genomic factors for the five-level ordinal drug response from Mulligan et al. [10]. The bottom of the heatmap gives the names of the 50 probes, while the color bar on the left indicates the five-level ordinal drug response: complete response (CR), partial response (PR), minimal response (MR), no change (NC) and progressive disease (PD)

To visualize the selected significant probes and their relationship with the clinical outcome in Terragna et al. [2], a heatmap of the top 30 significant probes used as predictive genomic factors for the five-level ordinal drug response is presented in Fig. 2. The top 30 probes in this dataset were roughly equally likely to be over- or under-expressed, and also cover various functions, including BTG anti-proliferation factor 1 (BTG1), CDP-diacylglycerol synthase 1 (CDS1), RNA polymerase I subunit B (POLR1B), acylglycerol kinase (AGK), cyclin D1 (CCND1), cyclin D2 (CCND2), major histocompatibility complex, class II, DQ beta 1 (HLA-DQB1), mitogen-activated protein kinase kinase 7 (MAP2K7) and suppressor of cytokine signaling 5 (SOCS5) genes. For the three-level ordinal drug response, a heatmap of the top 50 significant probes used as predictive genomic factors in our final predictive model is presented in Figure S2 [see Additional file 3]. The top 50 probes for the three-level drug response overlapped with most of the top 30 probes for the five-level drug response. Only a few probes represent genes with different functions, including SRC proto-oncogene, non-receptor tyrosine kinase (SRC), TNF receptor superfamily member 13C (TNFRSF13C) and checkpoint kinase 1 (CHEK1).

Fig. 2 Heatmap of the top 30 significant probes for drug response (five levels) in Terragna et al. [2]. A heatmap of the gene expression of the selected top 30 significant probes, which were used as predictive genomic factors for the five-level ordinal drug response from Terragna et al. [2]. The bottom of the heatmap gives the names of the 30 probes, while the color bar on the left indicates the five-level ordinal drug response: complete response (CR), near complete response (nCR), very good partial response (VGPR), partial response (PR) and stable disease (SD)

It is highly important to identify genetic biomarkers that predict the response to drugs with a narrow therapeutic index [19, 26].
Chemotherapeutic agents are medications in this category, since the response to them is variable and the side effects are potentially lethal [19, 26]. Many studies have been conducted and a large number of biomarkers have been reported [19]. However, a complex outcome such as drug response is generally affected by many genomic and environmental factors [19]. It is therefore desirable for a predictive procedure to be able to consider the joint effects of various biomarkers on drug response [19]. Another issue is that a multi-level ordinal drug response is usually recorded in clinical practice but, for analytic simplicity, such a multi-level ordinal outcome is usually combined into just two levels, as in the original papers [2, 10] that we reanalyzed in this study. This strategy not only risks a loss of information in the data and introduces arbitrariness into the choice of recoding, but also cannot provide informative prediction [27]. We here utilized a more efficient approach, combining standard ordinal logistic regression with hierarchical modeling. Our method can jointly analyze numerous variables to detect important predictors and to predict a multi-level drug response. We applied our method to reanalyze two publicly available clinical trial datasets, which assessed response to bortezomib in relapsed MM patients [10] and to VTD in newly diagnosed MM patients [2]. The original studies both treated the five-level ordinal drug responses as binary responses. To address the potential loss of information from this recoding, we reanalyzed the datasets using the original ordinal drug responses. To avoid low frequencies in several levels of the five-level drug responses, we also redefined the five-level drug response as a three-level ordinal drug response in both datasets. The results show that the drug response to VTD in newly diagnosed MM patients can be predicted more accurately than the response of relapsed MM patients treated with bortezomib alone. Meanwhile, comparing the analysis results between the five-level ordinal drug response and the reduced three-level ordinal drug response, the AUC increased and the deviance decreased for the combined ordinal outcome, with a trade-off in the MSE and misclassification rate. Our analyses show that combining levels of the ordinal outcome can result in a higher MSE and misclassification rate, and thus a potential loss of information and misleading interpretation. Although we only compared the five-level ordinal outcome with the three-level ordinal outcome, it is anticipated that similar differences would exist in a comparison with a binary outcome. This also implies that the original approach of analyzing the ordinal outcome as a binary outcome will likely lead to information loss. Furthermore, we identified probes that represent genes of known function. In Mulligan et al. [10], for both the five-level and three-level ordinal drug responses, most of the top significant probes are overexpressed in patients with PR or CR, including ribosomal protein (RPL11, RPL15, RPS7, RPS13), mitochondrial (COX7C) and eukaryotic translation initiation factor (EIF3D, EIF3E, EIF3F, EIF3H) genes. Among them, the ribosomal proteins have been investigated in multiple studies, which have found mutations in ribosomal protein genes in endometrial cancer (RPL22), T-cell acute lymphoblastic leukemia (RPL10, RPL5 and RPL11), chronic lymphocytic leukemia (RPS15), colorectal cancer (RPS20) and glioma (RPL5) [28].
Moreover, it has also been discussed that eukaryotic initiation factors (EIFs) play an important role in translation initiation and protein synthesis, which can alter angiogenesis, tumor development and apoptosis in cancer progression [29]. Two of the probes are under-expressed in patients achieving PR or CR, both representing the gene ATP2B4. ATP2B4 plays a critical role in intracellular calcium homeostasis, encoding one of the enzymes that remove bivalent calcium ions from eukaryotic cells against very large concentration gradients [30]. In Terragna et al. [2], we carried out a functional enrichment analysis to identify the functional annotation of the top probes with KEGG [31], using the bioinformatics tool DAVID [32, 33]. The top 30 probes for the five-level ordinal drug response also cover various gene functions belonging to multiple important pathways, e.g. metabolic pathways, the p53 signaling pathway, the PI3K-Akt signaling pathway, the AMPK signaling pathway, the Wnt signaling pathway, the Jak-STAT signaling pathway, viral carcinogenesis and the MAPK signaling pathway. For the three-level ordinal drug response, the top 50 significant probes cover similar functions to the top 30 probes for the five-level ordinal drug response, with several additional annotations such as cytokine-cytokine receptor interaction, the NF-kappa B signaling pathway, the intestinal immune network for IgA production, HTLV-I infection and primary immunodeficiency. This suggests that the probes we identified are biologically correlated. Based on the functional enrichment analysis, the probes can be grouped into multiple pathways. One plausible extension is therefore to utilize a pathway-structured model that incorporates this biological information into the predictive model, allowing more probe information to be included in the prediction; this will be considered in future work. Although the predictive classifier and genetic biomarkers described here are promising, further research is necessary to assess the relevance of these genomic predictors using more data from other trials, including trials of novel or multi-agent therapies. Our analysis strategy is directly applicable to new data on bortezomib or other therapies in multiple myeloma, for patients with newly diagnosed or relapsed disease. Such analyses will help to quickly identify the patient groups that could benefit from a proposed drug therapy or that are in need of other novel therapies.

Conclusions

We propose a novel method to directly analyze multi-level drug response, rather than combining the response into two groups. Our method employs a hierarchical ordinal logistic model with a heavy-tailed Cauchy prior on the coefficients. The proposed method allows us to jointly fit numerous correlated predictors and thus build efficient models for predicting multi-level drug response. The resulting predictive model for multi-level drug response can be more informative than previous approaches. The proposed approach therefore provides a powerful tool for predicting multi-level drug response and has an important impact on cancer studies.

Abbreviations

AUC: Area under the ROC curve; BFGS: Quasi-Newton algorithm; CR: Complete response; EIFs: Eukaryotic initiation factors; IgH: Immunoglobulin heavy; MM: Multiple myeloma; MR: Minimal response; MSE: Mean squared error; nCR: Near complete response; PD: Progressive disease; PR: Partial response; SD: Stable disease; VGPR: Very good partial response; VTD: Bortezomib-thalidomide-dexamethasone

References

Kyle RA, Rajkumar SV. Multiple myeloma. Blood. 2008;111(6):2962–72.
Terragna C, Remondini D, Martello M, Zamagni E, Pantani L, Patriarca F, Pezzi A, Levi G, Offidani M, Proserpio I, et al. The genetic and genomic background of multiple myeloma patients achieving complete response after induction therapy with bortezomib, thalidomide and dexamethasone (VTD). Oncotarget. 2016;7(9):9666–79. American Cancer S. Cancer Facts & Figures 2017. Atlanta: American Cancer Society; 2017. Decaux O, Lode L, Magrangeas F, Charbonnel C, Gouraud W, Jezequel P, Attal M, Harousseau JL, Moreau P, Bataille R, et al. Prediction of survival in multiple myeloma based on gene expression profiles reveals cell cycle and chromosomal instability signatures in high-risk patients and hyperdiploid signatures in low-risk patients: a study of the Intergroupe francophone du Myelome. J Clin Oncol. 2008;26(29):4798–805. Broyl A, Hose D, Lokhorst H, de Knegt Y, Peeters J, Jauch A, Bertsch U, Buijs A, Stevens-Kroef M, Beverloo HB, et al. Gene expression profiling for molecular classification of multiple myeloma in newly diagnosed patients. Blood. 2010;116(14):2543–53. Hofman IJF, van Duin M, De Bruyne E, Fancello L, Mulligan G, Geerdens E, Garelli E, Mancini C, Lemmens H, Delforge M, et al. RPL5 on 1p22.1 is recurrently deleted in multiple myeloma and its expression is linked to bortezomib response. Leukemia. 2017;31(8):1706–14. Kumar SK, Rajkumar SV, Dispenzieri A, Lacy MQ, Hayman SR, Buadi FK, Zeldenrust SR, Dingli D, Russell SJ, Lust JA, et al. Improved survival in multiple myeloma and the impact of novel therapies. Blood. 2008;111(5):2516–20. Rajkumar SV. Treatment of multiple myeloma. Nat Rev Clin Oncol. 2011;8(8):479–91. Malek E, Abdel-Malek MA, Jagannathan S, Vad N, Karns R, Jegga AG, Broyl A, van Duin M, Sonneveld P, Cottini F, et al. Pharmacogenomics and chemical library screens reveal a novel SCFSKP2 inhibitor that overcomes Bortezomib resistance in multiple myeloma. Leukemia. 2017;31(3):645–53. Mulligan G, Mitsiades C, Bryant B, Zhan F, Chng WJ, Roels S, Koenig E, Fergus A, Huang Y, Richardson P, et al. Gene expression profiling and correlation with outcome in clinical trials of the proteasome inhibitor bortezomib. Blood. 2007;109(8):3177–88. Bolli N, Avet-Loiseau H, Wedge DC, Van Loo P, Alexandrov LB, Martincorena I, Dawson KJ, Iorio F, Nik-Zainal S, Bignell GR, et al. Heterogeneity of genomic evolution and mutational profiles in multiple myeloma. Nat Commun. 2014;5:2997. Walker BA, Boyle EM, Wardell CP, Murison A, Begum DB, Dahir NM, Proszek PZ, Johnson DC, Kaiser MF, Melchor L, et al. Mutational Spectrum, copy number changes, and outcome: results of a sequencing study of patients with newly diagnosed myeloma. J Clin Oncol. 2015;33(33):3911–20. Lohr JG, Stojanov P, Carter SL, Cruz-Gordillo P, Lawrence MS, Auclair D, Sougnez C, Knoechel B, Gould J, Saksena G, et al. Widespread genetic heterogeneity in multiple myeloma: implications for targeted therapy. Cancer Cell. 2014;25(1):91–101. Walker BA, Wardell CP, Murison A, Boyle EM, Begum DB, Dahir NM, Proszek PZ, Melchor L, Pawlyn C, Kaiser MF, et al. APOBEC family mutational signatures are associated with poor prognosis translocations in multiple myeloma. Nat Commun. 2015;6:6997. Kuehl WM, Bergsagel PL. Molecular pathogenesis of multiple myeloma and its premalignant precursor. J Clin Invest. 2012;122(10):3456–63. Morgan GJ, Walker BA, Davies FE. The genetic architecture of multiple myeloma. Nat Rev Cancer. 2012;12(5):335–48. Corre J, Munshi N, Avet-Loiseau H. Genetics of multiple myeloma: another heterogeneity level? Blood. 
2015;125(12):1870–6. Lopez-Corral L, Sarasquete ME, Bea S, Garcia-Sanz R, Mateos MV, Corchete LA, Sayagues JM, Garcia EM, Blade J, Oriol A, et al. SNP-based mapping arrays reveal high genomic complexity in monoclonal gammopathies, from MGUS to myeloma status. Leukemia. 2012;26(12):2521–9. Geeleher P, Cox NJ, Huang RS. Clinical drug response can be predicted using baseline gene expression levels and in vitro drug sensitivity in cell lines. Genome Biol. 2014;15(3):R47. Simon R, Roychowdhury S. Implementing personalized cancer genomics in clinical trials. Nat Rev Drug Discov. 2013;12(5):358–69. Blade J, Samson D, Reece D, Apperley J, Bjorkstrand B, Gahrton G, Gertz M, Giralt S, Jagannath S, Vesole D. Criteria for evaluating disease response and progression in patients with multiple myeloma treated by high-dose therapy and haemopoietic stem cell transplantation. Myeloma subcommittee of the EBMT. European Group for Blood and Marrow Transplant. Br J Haematol. 1998;102(5):1115–23. Gelman A, Carlin JB, Stern HS, Dunson DB, Vehtari A, Rubin DB. Bayesian data analysis, third edition edn. New York: Chapman & Hall/CRC Press; 2014. Gelman A, Hill J. Data analysis using regression and multilevel/hierarchical models. New York: Cambridge University Press; 2007. Tibshirani RJ, Efron B. Pre-validation and inference in microarrays. Stat Appl Genet Mol Biol. 2002;1:Article1. Hastie T, Tibshirani R, Wainwright M. Statistical learning with sparsity - the lasso and generalization. New York: CRC Press; 2015. Jiang Y, Wang M. Personalized medicine in oncology: tailoring the right drug to the right patient. Biomark Med. 2010;4(4):523–33. Warner P. Ordinal logistic regression. J Fam Plann Reprod Health Care. 2008;34(3):169–70. Goudarzi KM, Lindstrom MS. Role of ribosomal protein mutations in tumor development (review). Int J Oncol. 2016;48(4):1313–24. Sharma DK, Bressler K, Patel H, Balasingam N, Thakor N. Role of eukaryotic initiation factors during cellular stress and Cancer progression. J Nucleic Acids. 2016;2016:8235121. NCBI: ATPase plasma membrane Ca2+ transporting 4. 2017. Kanehisa M, Goto S. KEGG: Kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000;28(1):27–30. Huang da W, Sherman BT, Lempicki RA. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2009;4(1):44–57. Huang da W, Sherman BT, Lempicki RA. Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists. Nucleic Acids Res. 2009;37(1):1–13. This work was supported in part by research grants from USA National Institutes of Health (R03-DE025646), National Natural Science Foundation of China (81673448), Natural Science Foundation of Jiangsu Province China (BK 20161218), and The Applied Basic Research Programs of Suzhou City (SYS201546). The first two grants mainly support the development of statistical methods and software, and the others mainly support the design of the study and analysis, and interpretation of data and writing the manuscript.
Availability of data and materials

The datasets used and analyzed in the current study are publicly available from GEO under accession numbers [GEO: GSE9782] and [GEO: GSE68871] (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE9782 and https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE68871). The method has been incorporated into the freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/ and https://github.com/abbyyan3/BhGLM).

Author information

Department of Biostatistics, Jiann-Ping Hsu College of Public Health, Georgia Southern University, Statesboro, GA, USA: Xinyan Zhang. Department of Hematology, The Second Affiliated Hospital of Soochow University, Suzhou, China: Bingzong Li and Yating Hong. Department of Cell Biology, School of Biology & Basic Medical Sciences, Soochow University, Suzhou, China: Huiying Han, Sha Song, Hongxia Xu and Wenzhuo Zhuang. Department of Biostatistics, University of Alabama at Birmingham, Birmingham, AL, 35294, USA: Nengjun Yi.

Authors' contributions

WZ and BL proposed the idea and identified the real data sets. NY developed the statistical method and the software. XZ performed simulation studies and real data analysis. XZ, WZ, and NY drafted the manuscript. BL, HH, SS, HX, and YH organized the real data sets, revised the manuscript, and discussed the project with WZ and NY as it progressed and commented on the drafts of the manuscript. All authors read and approved the final manuscript. Correspondence to Nengjun Yi or Wenzhuo Zhuang.

Additional files

Table S1. Summary of predictive performance using different numbers of top probes for drug response prediction (three levels) in the two studies. (DOCX 21 kb)
Figure S1. Heatmap of the top 50 significant probes for drug response (three levels) in Mulligan et al. [10]. (DOCX 77 kb)
Figure S2. Heatmap of the top 50 significant probes for drug response (three levels) in Terragna et al. [2]. (DOCX 72 kb)

Zhang, X., Li, B., Han, H. et al. Predicting multi-level drug response with gene expression profile in multiple myeloma using hierarchical ordinal regression. BMC Cancer 18, 551 (2018). doi:10.1186/s12885-018-4483-6

Keywords: Hierarchical ordinal regression; Multi-level drug response; Genetics, genomics and epigenetics
Still don't know, an epistemic logic puzzle
Posted on December 20, 2017 by Joel David Hamkins

Here is an epistemic logic puzzle that I wrote for my students in the undergraduate logic course I have been teaching this semester at the College of Staten Island at CUNY. We had covered some similar puzzles in lecture, including Cheryl's Birthday and the blue-eyed islanders.

Bonus Question. Suppose that Alice and Bob are each given a different fraction, of the form $\frac{1}{n}$, where $n$ is a positive integer, and it is commonly known to them that they each know only their own number and that it is different from the other one. The following conversation ensues.

JDH: I privately gave you each a different rational number of the form $\frac{1}{n}$. Who has the larger number?
Alice: I don't know.
Bob: I don't know either.
Alice: I still don't know.
Bob: Suddenly, now I know who has the larger number.
Alice: In that case, I know both numbers.

What numbers were they given?

Give the problem a try! See the solution posted below. Meanwhile, for a transfinite epistemic logic challenge, considerably more difficult, see my puzzle Cheryl's rational gifts.

Solution. When Alice says she doesn't know, in her first remark, the meaning is exactly that she doesn't have $\frac 11$, since that is the only way she could have known who had the larger number. When Bob replies after this that he doesn't know, then it must be that he also doesn't have $\frac 11$, but also that he doesn't have $\frac 12$, since in either of these cases he would know that he had the largest number, but with any other number, he couldn't be sure. Alice replies to this that she still doesn't know, and the content of this remark is that Alice has neither $\frac 12$ nor $\frac 13$, since in either of these cases, and only in these cases, she would have known who has the larger number. Bob replies that suddenly, he now knows who has the larger number. The only way this could happen is if he had either $\frac 13$ or $\frac 14$, since in either of these cases he would have the larger number, but otherwise he wouldn't know whether Alice had $\frac 14$ or not. But we can't be sure yet whether Bob has $\frac 13$ or $\frac 14$. When Alice says that now she knows both numbers, however, then it must be because the information that she has allows her to deduce between the two possibilities for Bob. If she had $\frac 15$ or smaller, she wouldn't be able to distinguish the two possibilities for Bob. Since we already ruled out $\frac 13$ for her, she must have $\frac 14$. So Alice has $\frac 14$ and Bob has $\frac 13$.

Many of the commentators came to the same conclusion. Congratulations to all who solved the problem! See also the answers posted on my math.stackexchange question and on Twitter.
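The elimination argument can also be checked mechanically. Here is a minimal Python sketch that encodes each announcement as a filter on the set of possible pairs $(a,b)$, where Alice holds $\frac{1}{a}$ and Bob holds $\frac{1}{b}$. Since $n$ is unbounded, a player can never be certain of holding the smaller fraction, so "knowing who has the larger number" is encoded as knowing that every remaining possibility for the other player's denominator is larger; the cap N is just a finite search bound for the illustration.

```python
from itertools import product

N = 30                                            # finite search bound; the answer lies well inside it
worlds = {(a, b) for a, b in product(range(1, N + 1), repeat=2) if a != b}

def alice_knows(a, ws):
    # Alice knows the answer iff every remaining Bob-denominator exceeds hers
    return all(a < b for (x, b) in ws if x == a)

def bob_knows(b, ws):
    return all(b < a for (a, y) in ws if y == b)

worlds = {w for w in worlds if not alice_knows(w[0], worlds)}   # "I don't know."
worlds = {w for w in worlds if not bob_knows(w[1], worlds)}     # "I don't know either."
worlds = {w for w in worlds if not alice_knows(w[0], worlds)}   # "I still don't know."
worlds = {w for w in worlds if bob_knows(w[1], worlds)}         # "Suddenly, now I know."
worlds = {w for w in worlds                                     # "In that case, I know both numbers."
          if len({b for (a, b) in worlds if a == w[0]}) == 1}

print(worlds)   # {(4, 3)}: Alice was given 1/4 and Bob 1/3
```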
4 thoughts on "Still don't know, an epistemic logic puzzle"

Charlie Sitler on December 20, 2017 at 9:14 pm said:
Joel: By A's 1st comment, it is known that she doesn't have 1/1. By B's response it is known that he does not have 1/2. A's response makes it clear she also does not have 1/3. B's sudden knowledge attests to his having 1/4. A now knows this as well, and she also clearly knows what her own number is. (I don't, but it must be 1/n for n greater than or equal to 5.) Does any of that make sense?

Road White on December 21, 2017 at 3:47 am said:
> B's sudden knowledge attests to his having 1/4.
Not necessarily right, B could also be 1/3, since that would still be greater than whatever Alice could have, but only in one of those situations can Alice be sure what he has.

Jerome Tauber on December 21, 2017 at 9:37 am said:
B's sudden knowledge attests to his having a 1/3 or 1/4. A then would only know B's with certainty if she had a 1/4. Therefore A's number is 1/4 and B's 1/3. Otherwise the analysis above is correct.

Charlie Sitler on December 21, 2017 at 11:06 am said:
That sounds right to me, Jerome. Thank you very much. – Charlie Sitler
Using the Twentieth Century Reanalysis to assess climate variability for the European wind industry Philip E. Bett1, Hazel E. Thornton1 & Robin T. Clark1 Theoretical and Applied Climatology volume 127, pages 61–80 (2017)Cite this article We characterise the long-term variability of European near-surface wind speeds using 142 years of data from the Twentieth Century Reanalysis (20CR), and consider the potential of such long-baseline climate data sets for wind energy applications. The low resolution of the 20CR would severely restrict its use on its own for wind farm site-screening. We therefore perform a simple statistical calibration to link it to the higher-resolution ERA-Interim data set (ERAI), such that the adjusted 20CR data has the same wind speed distribution at each location as ERAI during their common period. Using this corrected 20CR data set, wind speeds and variability are characterised in terms of the long-term mean, standard deviation and corresponding trends. Many regions of interest show extremely weak trends on century timescales, but contain large multidecadal variability. Since reanalyses such as ERAI are often used to provide the background climatology for wind farm site assessments, but contain only a few decades of data, our results can be used as a way of incorporating decadal-scale wind climate variability into such studies, allowing investment risks for wind farms to be reduced. Wind is a highly variable phenomenon over all time scales, from gusts lasting seconds, to long-period variations spanning decades (e.g. Watson, 2014). Harnessing the wind resource for electricity production is a rapidly-developing field, with many challenges for engineering, energy systems design, national-scale energy policy and meteorological forecast systems (e.g. Wiser et al, 2011) Short-term wind variability is critically important to the day-to-day management of a wind farm, and efficient running depends on having high quality wind speed forecasts (e.g. Foley et al. 2012; Jung and Broadwater, 2014). However, the impact of long-term, decadal-scale variations in the wind climate is less well understood. This is partly due to a historical lack of data. Typically, when a site is considered for wind farm development, developers are often restricted to using statistical techniques to relate observational records from nearby stations to the site in question. Homogeneous data from any single station will usually only span a few years to a decade, but can be supplemented by data from a dedicated meteorological mast positioned on-site for a limited period of time such as 1–3 years (Petersen and Troen 2012; Liléo et al. 2013; Carta et al. 2013). In the absence of long-term data sets of wind speed itself, studies of long-term wind variability typically use pressure-based metrics as proxies for the wind (e.g. Palutikof et al, 1992), often combined with complex statistical procedures to relate to the wind speed at a site (e.g. Kirchner-Bossi et al. 2013, 2014). Around Europe, indices based on the North Atlantic Oscillation (NAO) have often been used Boccard (2009). Standard NAO indices correlate well with winter wind speeds in northern/western parts of Europe. However, this is not true more generally, such as at other times of the year or in other locations (Hurrell et al. 2003), and alternative indices must be used in these cases (e.g. Folland et al. 2009). Regardless of definition, the NAO does not capture the full variability seen in wind speeds. 
Thus, there is scope for improvement over all these techniques. Within the past decade, reanalysis data products have been able to extend such site assessment studies, allowing a description of a reasonable climatological period of around 30 years. The two main global reanalysis data sets used for this are the ECMWF Footnote 1 Re-Analysis Interim product (ERA-Interim, hereafter ERAI; Dee et al. 2011), and NASA's Modern Era Retrospective-analysis for Research and Applications (MERRA, Rienecker et al. 2011), which both cover the 'satellite era' (1979 onwards). Such data sets are necessarily produced at relatively low spatial resolution (e.g. grid sizes ∼0.7°), and cannot, on their own, be used to determine the likely wind speeds at a site. In combination with other techniques however, from simple rescaling, detailed statistical modelling or even full dynamical downscaling, reanalysis data can be a key source for obtaining a representative wind climatology for a specific site (Kiss et al. 2009; Petersen and Troen 2012; Kubik et al. 2013; Badger et al. 2014). Most recently, attempts at producing century-scale reanalyses have yielded results: the NOAAFootnote 2 Twentieth Century Reanalysis (hereafter 20CR, Compo et al. 2011) and ECMWF's ERA-20C (Poli et al. 2013; Dee et al. 2013) data sets provide ensemble realisations of the atmospheric state spanning over 100 years. However, as they are at even lower resolution (e.g. 1–2 °), and their early data is subject to substantial uncertainty, care must be taken when considering how to interpret their results in the context of wind farms. Concerns within the wind industry about the possible impacts of future climate change, along with greater availability of larger data sets, have motivated various studies resulting in a greater awareness of the risks of climate variability (whether anthropogenic or natural). In fact, unlike the situation for temperature, there is little evidence of any long-term trend in globally-averaged wind speeds—see, e.g. the Fourth and Fifth Assessment Reports (AR4/AR5 respectively) of the IPCC'sFootnote 3 Working Group I, Trenberth et al. (2007) and Hartmann et al. (2013). The low confidence in such assessments is due in part to difficulties with the historical observational record, coupled with the highly-variable nature of winds in both space and time. For example, various data sets have suggested a positive trend in wind speeds over the oceans, with significant regional variability (Tokinaga and Xie 2011; Young et al. 2011a; Young et al. 2011b; Wentz and Ricciardulli 2011; Young et al. 2012). Over land however, the situation is different: an apparent reduction in surface wind speeds (nicknamed 'global stilling') has been seen in recent decades in some data sets (McVicar et al. 2012, 2013), with studies suggesting that it could be due in part to anthropogenic factors, such as changes in land-use increasing the surface roughness (Vautard et al. 2010; Wever 2012), or aerosol emissions locally changing the thermal structure of the atmosphere (Bichet et al. 2012). It is important to note that stilling is not seen in reanalysis data, which use climatological aerosol levels and do not include land-use change. Over both the land and oceans, opposing trends in different regions and times of year will act to reduce any globally averaged trend signal. 
While further and better data is required to settle questions on the true scale, causes and interrelationships of changes in wind speeds over oceans and land, it is important to note that these observed trends are always much smaller than interannual variability. Given the uncertainties in trends in the historical wind climate, it is not surprising that projections of future wind climates should also be treated with caution. The review of Pryor and Barthelmie (2010) concluded that wind speeds over Europe would continue to be dominated by natural variability, although by the end of the century some differences could have emerged—although even the sign of the change was uncertain. The IPCC's Special Report on Renewable Energy Sources and Climate Change Mitigation (SRREN) came to a similar conclusion (Wiser et al. 2011), and the IPCC's AR4 (Meehl et al. 2007; Christensen et al. 2007) and AR5 (Collins et al. 2013; Christensen et al. 2013) noted that there is low confidence in any projected changes. Consequently, Pryor and Schoof (2010) and Dobrynin et al. (2012) found that the choice of emission scenario or concentration pathway has relatively little impact overall on the resulting wind climate. It is important to note that simulations of the historical climate over the twentieth century (from both atmosphere-only and ocean-coupled models) do not reproduce the observed variability in atmospheric circulation (Scaife et al. 2005; Scaife et al. 2009), so the uncertainties in these climate projections do not preclude large multi-decadal variations in the future. Overall, the effect of climate change on the annually-averaged wind resource is thought to be small, although the increased seasonality seen in some studies by 2100 could have a challenging impact on wind-dominated electricity networks (Hueging et al. 2012; Cradden et al. 2012). Thus, when seeking to improve assessments of future wind speeds over the lifetime of a turbine, there is more to be gained from an increased understanding of historical long-term wind variability than through climate change model runs. Given this context, we show in this paper how the new class of century-scale reanalyses can be linked to the more widely-used satellite-era reanalyses, thus allowing for information on the long-term decadal-scale variability in wind speeds to be propagated through the model chain when performing a wind site assessment. In Section 2, we describe the two main data sets we use, including their limitations. We compare them in detail in Section 3, and describe our procedure for relating the two. Section 4 shows results for the wind speed distribution over Europe, including long-term averages, variabilities and changes in the shape of the distribution over time for selected regions. We discuss our conclusions in Section 5. Reanalyses represent the most convenient data sets for assessing the long-term historical wind climate, in the sense that they aim to provide an optimal combination of observations and numerical model: the data provided in a reanalysis aims to give the best estimate of the "true" situation at any given point, as well as being homogeneous in time (e.g. free of systematic shifts), and complete in both space and time. However, in reality, biases and uncertainties inherent in both raw observations (due to location, frequency, instrumentation, etc.) and models (due to resolution, parametrisation schemes, etc.) mean that such data sets must be used with caution. 
This study primarily uses data from the Twentieth Century Reanalysis project (20CR), in conjunction with wind speeds from the ERA-Interim data set (ERAI) for validation and calibration of the 20CR data. We describe key aspects of these data sets in the following sections. Data from the 20CR ensemble system A full description of the ensemble reanalysis system used in the 20CR project is given in Compo et al. (2011). Here, we describe some key features that have important impacts on our analysis methods and results. The 20CR assimilates sea-level pressure and surface pressure observations alone (from the International Surface Pressure Databank, incorporating the ACRE Footnote 4 project, Allan et al. 2011), using observational fields of sea-surface temperature and sea-ice concentration (HadISST1.1, Rayner et al. 2003) as boundary conditions. It uses the April 2008 experimental version of the NCEPFootnote 5 Global Forecast System (GFS), a coupled atmosphere–land model produced by the NOAA NCEP Environmental Modelling Centre. The 20CR data assimilation system is based on an Ensemble Kalman Filter. The data are produced in a series of 5-yearFootnote 6 'streams' (independent runs, to simplify parallelisation), with 56 members in each stream. A consequence of this system is that ensemble members only remain temporally continuous for the 5-year duration of each stream. This has implications for how variability is assessed over long time periods; we discuss this in more detail in Section 4.1. As highlighted in Compo et al. (2011), when considering variability, it is important to use the ensemble members directly, rather than using the daily ensemble-mean time series alone. The increased uncertainty in the early period of the data leads to greater disagreement between the ensemble members, such that a time series of their mean will have much less variability than the members individually. This would lead to a spurious strong reduction in variability appearing at earlier times in the ensemble mean. We use the updated release of the 20CRv2 data (hereafter simply 20CR), spanning 142 years from 1st Jan 1871 to 31st Dec 2012. While it was produced on a T62 spectral grid with 28 vertical levels, we use the output data provided on a regular latitude–longitude grid with cell size 2°, at the the near-surface pressure level at σ:=P/P surface = 0.995 (around 40 m height). The σ = 0.995 level is a reasonable choice for turbines whose rotor hubs are expected to be some tens of metres above the surface; typical hub heights are between 40 and 100 m, but vary greatly (Wiser et al. 2011); we do not expect our conclusions to be qualitatively affected by the precise height above ground. More details on our choice of levels can be found in Appendix A. We use daily-mean wind speeds U, which we calculate by averaging the wind speed magnitudes from the 6-hourly u (zonal, i.e. westerly) and v (meridional, i.e. southerly) component fields. We are not considering sub-daily variability, as this is likely to be poorly represented with only four timesteps per day, in addition to the low horizontal resolution. Using daily means significantly reduces the amount of data that we need to analyse. However, calculating daily means using only four snapshots is likely to lead to some underestimation, as the wind distributions we are sampling tend to be positively skewed. 
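As a minimal sketch of this daily-averaging step (assuming the 6-hourly u and v components have been loaded with xarray; the file and variable names are assumptions about the archive layout rather than guaranteed names):

```python
import numpy as np
import xarray as xr

# hypothetical file containing 6-hourly sigma=0.995 winds for one year
ds = xr.open_dataset("20cr_sig995_uv_1871.nc")

# wind speed magnitude at each 6-hourly step, then the mean of the
# four snapshots in each calendar day
speed = np.sqrt(ds["uwnd"] ** 2 + ds["vwnd"] ** 2)
daily_mean_speed = speed.resample(time="1D").mean()
```

Taking the magnitude before averaging, rather than averaging u and v first, matches the description above and avoids winds of opposing direction cancelling in the daily mean.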
Using daily means also has an impact on the form of the resulting wind speed distribution, and on Weibull fits in particular; we discuss this in the Supplementary Information. Some recent studies have highlighted potential problems with the 20CR data set. Ferguson and Villarini (2012, 2014) have performed a detailed analysis of change points in the 20CR data, finding that, while these are in fact common in the data set overall, there are many areas, especially in the northern hemisphere, where the 20CR remains largely homogeneous for many decades. Their results emphasize that users of the 20CR data must be aware of possible—indeed, probable—inhomogeneities in the data, and the potential impact this could have on their analyses. Stickler and Brönnimann (2011) found very significant differences between 20CR winds and pilot balloon measurements in the West African Monsoon region over 1940–1957, and Liléo et al. (2013), using the 20CR to study interannual wind variability over Scandinavia, had to discard 20CR data prior to 1920 due to suspicious behaviour in some grid cells. Finally, there has been some debate on the consistency of long-term trends in storminess and extreme winds found in 20CR compared to observations (Donat et al 2011, Brönnimann et al 2012, Wang et al. 2013, 2014), and Krueger et al. (2014, 2013). These studies serve to emphasize the importance of being extremely careful with methodology when comparing reanalysis data with observations, and when identifying trends. Data fromERA-Interim The second source of data we use is the 60 m wind speed fields from the ERAI data set (Dee et al. 2011). This uses the ECMWF Integrated Forecasting System model (IFS), and assimilates observational data of many types, mostly coming from satellites. The atmospheric fields of ERA-Interim were calculated on a T255 spectral grid, with surface fields calculated on a reduced Gaussian grid. We use the 6-hourly wind speed data available on the regular latitude–longitude grid of cell-size 0.75°, and calculate daily-mean wind speeds as for the 20CR. A comparison of ERAI data at 60 and 10 m with the 20CR levels can be found in Appendix A. The reanalysis starts in 1979 and continues to the present; we use the data up to the end of 2013. Further details are available in Dee et al. (2011) and references therein, and the ERA-Interim Archive report (Berrisford et al. 2011). Stopa and Cheung (2014) compared ERAI wind speeds with those measured from buoys and satellite data, finding that the reanalysis performs very well in terms of homogeneity, but with a small negative bias and reduced variability compared to the observations. Szczypta et al. (2011) found that ERA-Interim tended to overestimate wind speeds over most of France, but underestimated it in mountainous areas, compared to the SAFRAN high resolution (8 km) reanalysis data set—although the authors note that the SAFRAN wind speed data is known to be biased low. As already discussed, it is known that reanalysis data sets including ERA-Interim do not exhibit the observed large-scale trends in wind speeds (see, e.g. McVicar et al. 2013, Mears 2013 and references therein), and the relatively low resolution of ERAI (and similar data sets) prevents it from being used directly as a proxy for observations at the scale of a wind farm (Kiss et al. 2009; Kubik et al. 2013). 
We will instead be using ERAI as an example of the kind of data currently used for providing a climatological basis for wind farm site assessments, the first link in the 'model chain' of dynamical and statistical downscaling for such studies: reanalyses are connected to mesoscale dynamical models, then in turn to microscale models and computational fluid dynamics (CFD) at the scale of a wind farm itself (Petersen and Troen 2012). Linking the reanalyses While the strength of the 20CR is its characterisation of real-world variability on long time scales, the ERA-Interim data set provides wind speeds that are at much higher spatial resolution, and are more tightly-constrained by observations. ERA-Interim is therefore much better suited for developing a climatology of wind speeds over small (sub-national) regions, or, in conjunction with additional dynamical or statistical downscaling techniques, at a point location. However, as it only spans ∼30 years, it cannot give a good indication of climate variability on multi-decadal timescales. In this section, we describe how we calibrate the 20CR wind speed data to produce a data set that has the same distribution of wind speeds in time as ERA-Interim (over their overlapping period), but with the long-term variability of 20CR. Comparison of the reanalyses We focus our study on Europe, and consider several small sub-regions for more detailed examination. To aid comparison, we regrid the ERAI data by area-averaging onto the 20CR's native 2° grid. The 20CR and ERAI data do not exhibit the same climatology in wind speeds over their period of intersection (1979–2012, 34 years). This is due to a number of factors. These include the structural differences (NWP model, data assimilation and reanalysis procedure), spatial resolution and the amount of orographic complexity resolved, the amount and type of observational data assimilated and the mismatch between vertical levels available for comparison. In this section, we denote ensemble-mean daily-mean wind speeds from 20CR (at its σ = 0.995 vertical level) and from ERAI (at its 60 m model level on the 20CR grid), by U 20CR and U ERAI respectively. As we are focusing on the later period of the 20CR data set for our calibration procedure, the ensemble spread is small, so it is acceptable to use the ensemble mean series in this case (this is not generally true for all time periods, or regions of the globe with fewer observations; see Compo et al. 2011). We consider the 'bias' between the 20CR and ERAI data in terms of the simple difference in wind speeds, $$ \beta := U^{\mathrm{20CR}} - U^{\text{ERAI}}, $$ and the day-to-day relative difference compared to ERAI, $$ \beta_{\text{rel}} := \left( U^{\mathrm{20CR}}-U^{\text{ERAI}}\right) / U^{\text{ERAI}}. $$ In Fig. 1, we showFootnote 7 the 34-year mean bias 〈β〉 and the mean of the day-to-day relative bias 〈β rel〉. The bias maps are all rather noisy, but over most of the land surface the bias is negative (i.e. U 20CR<U ERAI), with differences of up to ∼20 % of the ERAI wind speeds in many areas. There are some notable exceptions to this however, with positive biases (i.e. U 20CR>U ERAI): for example over Britain, wind speeds are up to 20 % higher in the 20CR data. Some areas have particularly strong negative bias, such as around the Czech Republic. The Strait of Gibraltar is particularly affected by the low spatial resolution, resulting in the lowest 20CR wind speeds compared to ERAI. We have used a t test to assess whether the data is consistent with 〈β〉=0 (i.e. 
no bias) at the 1 % level. When it is not consistent with zero, we say there is a significant bias; this is the case for most areas according to this test. Maps of the difference between wind speeds from 20CR and ERA-Interim; details as given in the panels. Crosshatched areas in the top panel are not significantly different from zero at the 1 % level, according to a t test In addition to the spatial variability, it is important to bear in mind that the difference between 20CR and ERAI does not have to be constant in time. Figure 2 shows the day-to day variability of β rel in terms of its standard deviation σ. There is a suggestion in the data of increased β rel around coastal regions, such as in large parts of the Mediterranean, as well as Norway and Britain. The relative-bias variability is generally around 15–30 %, which is a similar magnitude to the mean relative bias 〈β rel〉 shown in Fig. 1. Map of the variability of the daily relative 'bias' β rel between 20CR and ERAI wind speeds, in terms of its standard deviation Finally, we show the correlation between the daily wind speeds of the 20CR and ERAI in Fig. 3. The data are well-correlated in most places, but the correlation is particularly strong (≥0.9) in the Atlantic and northern Europe, including the British Isles. Map of the Pearson correlation between daily-mean wind speeds in 20CR and ERA-Interim. Procedure for calibration The goal of our calibration procedure is to generate a wind speed data set that retains the fluctuation patterns of the 20CR data over time, and between ensemble members, but whose probability density functions (PDFs) of the ensemble-mean wind speed in each grid cell match those of the ERA-Interim data during their overlapping time period. In particular, the PDFs do not have to match over other periods (e.g. if comparing the distribution over 142 years from 20CR to the 35 years from ERAI), the time series do not have to match in detail (although we have shown that they do tend to be well-correlated), and individual ensemble members do not need to match ERAI data—thus retaining the 20CR's important measure of uncertainty. We illustrate our procedure for the case of a particular grid cell in Fig. 4. Illustrating the calibration procedure, in terms of daily-mean wind speed probability distributions, for the single grid cell covering north-western Germany (centre: 8° E, 52° N); for the 20CR, the ensemble mean is used throughout for clarity. Top left: Daily-mean wind speed distributions for ERAI and 20CR over their intersecting time period. Top right: A visualisation of the conditional probability matrix P(U iERAI|U j20CR), such that each row j is a PDF of ERAI wind speeds, given a particular 20CR wind speed \(U^{\mathrm {20CR}}_{j}\) (i.e. the values along each row have the same sum). Darker colours indicate higher frequencies in each ERAI PDF. Bottom left: Wind speed distributions over the full 142 years. Bottom-right: The cumulative distribution function derived from the PDF histograms shown to the left. The dotted lines and arrows illustrate the interpolation in the quantile-matching procedure used to convert wind speeds from the original 20CR data to their calibrated counterparts. Here, a high wind speed for this cell from the original 20CR is transformed to its higher counterpart at the same level in the calibrated distribution Our method proceeds in two stages, and is performed on each grid cell independently. 
Firstly, a transfer matrix is obtained as the conditional probability density of ERAI wind speeds, given bins of 20CR ensemble mean, daily mean wind speeds for the overlapping period: $$ \mathcal{P}_{ij} := P(U^{\text{ERAI}}_{i}|U^{\mathrm{20CR}}_{j}), $$ where i and j index bins in wind speed for the data sets indicated. We use bins of 0.5 m s −1 covering the range 0–40 m s −1. This transfer matrix is applied to the full 142-year 20CR PDF, to obtain a calibrated PDF of 20CR wind speeds spanning 1871–2012: $$ P\left( U^{\mathrm{20CRc}}_{i}\right) = \sum\limits_{j} \mathcal{P}_{ij} P\left( U^{\mathrm{20CR}}_{j}\right). $$ Secondly, calibrated daily time series of wind speeds, U 20CRc(t), from all ensemble members, are obtained by quantile matching (e.g. Panofsky and Brier 1968): the cumulative distribution function (CDF) of the calibrated 20CR ensemble mean wind speeds is interpolated at the quantiles of each ensemble member's wind speed (see bottom-left panel in Fig. 4). Using the individual ensemble members in this step rather than the ensemble mean allows the ensemble spread to be transferred to the calibrated climatology. In some cases, there can be wind speeds present in the 142-year 20CR data that were greater than any in the 34-year period common with ERAI. This means that such wind speeds have no corresponding frequency in ERAI that we can calibrate to: the CDF corresponding to \(P\left (U^{\mathrm {20CRc}}_{i}\right )\) reaches its maximumFootnote 8 below that wind speed, so quantile matching by interpolating the CDF will fail. In this case, we instead perform a linear regression on the relationship between original and corrected winds up to this point (i.e. U 20CRc(U 20CR)=a U 20CR+b). We then extrapolate this model to obtain corrected wind speeds for the final few high wind days. We demonstrate our procedure for the case of a grid cell in north-western Germany in Fig. 4. This shows the different PDFs in question, the transfer matrix and the quantile matching. It is clear that in this case the ERAI wind climate largely represents a shift to higher wind speeds compared to 20CR (i.e. the 20CR winds are low compared to ERAI), and the calibrated 20CR reproduces this well. The PDFs of both the ERAI and 20CR wind speeds appear somewhat truncated at lower wind speeds, rather than reducing smoothly towards U = 0 m s −1. This is due to the daily averaging of the 6-hourly wind speeds, and has implications when attempting to fit Weibull functions to the wind speed distribution; we discuss this issue in detail in the Supplementary Information. It is important to note that the method we describe here is not unique. Many other techniques for calibrating one data set with another have been developed and used in climatological studies. These are usually designed to compare reanalysis or model data with observations, or climate model data at different spatial scales, such as a global run with regional model output; see (Teutschbein and Seibert 2012), Watanabe et al. (2012), Lafon et al. (2013) and references therein for recent reviews of methods. Compromises are reached between statistical complexity, data volumes, direct numerical simulation and time available. In our case, we have chosen a relatively simple statistical procedure. Results of calibration procedure Time series of annual mean wind speeds from both the original and calibrated 20CR data, and from ERA-Interim, are shown in Fig. 5 for a region covering Denmark and Northern Germany (using area-weighted averaging over the region). 
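To make the two-stage procedure concrete, here is a minimal single-grid-cell sketch in Python (not the code used to produce the data set): a histogram-based transfer matrix estimated over the 1979–2012 overlap, the calibrated CDF for the full 1871–2012 period, and the quantile matching applied to an individual ensemble member. The function and variable names are illustrative, and the linear-regression extrapolation for out-of-range wind speeds described above is only indicated.

```python
import numpy as np

BIN_EDGES = np.arange(0.0, 40.5, 0.5)              # 0.5 m/s bins covering 0-40 m/s
BIN_MIDS = 0.5 * (BIN_EDGES[:-1] + BIN_EDGES[1:])


def transfer_matrix(u20cr_overlap, uerai_overlap):
    """Conditional probabilities P(ERAI bin i | 20CR bin j) from the common period."""
    counts, _, _ = np.histogram2d(u20cr_overlap, uerai_overlap,
                                  bins=[BIN_EDGES, BIN_EDGES])
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)


def calibrated_cdf(P, u20cr_full):
    """Apply the transfer matrix to the full 142-year 20CR (ensemble-mean) PDF."""
    pdf, _ = np.histogram(u20cr_full, bins=BIN_EDGES)
    pdf = pdf / pdf.sum()
    return np.cumsum(pdf @ P)                      # CDF of the calibrated distribution


def quantile_match(u_member, u20cr_full, cdf_cal):
    """Map one ensemble member's daily wind speeds onto the calibrated distribution."""
    pdf, _ = np.histogram(u20cr_full, bins=BIN_EDGES)
    cdf_orig = np.cumsum(pdf / pdf.sum())
    ramp = 1e-12 * np.arange(len(BIN_MIDS))        # keep the CDFs strictly increasing
    quantiles = np.interp(u_member, BIN_MIDS, cdf_orig + ramp)
    u_calibrated = np.interp(quantiles, cdf_cal + ramp, BIN_MIDS)
    # Days with wind speeds beyond the calibrated range would be handled by the
    # linear-regression extrapolation described in the text (omitted here).
    return u_calibrated


# Per grid cell:
# P = transfer_matrix(u20cr_1979_2012, uerai_1979_2012)        # overlap period, ensemble mean
# cdf_cal = calibrated_cdf(P, u20cr_1871_2012)                 # full-period ensemble mean
# u20crc = quantile_match(u20cr_member_1871_2012, u20cr_1871_2012, cdf_cal)
```

Keeping the transfer matrix conditional on the 20CR bins, rather than simply remapping quantiles between the two overlap-period distributions, preserves the day-to-day relationship between the two reanalyses while still reproducing the ERAI climatology.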
The calibrated data retains the interannual variability of the original 20CR wind speeds, but with a climatology matching that of ERA-Interim over 1979–2012. Time series of annual mean wind speeds for a region covering 9° E– 15° E and 53° N– 57° N, showing the original 20CR data (purple), ERA-Interim (red) and calibrated 20CR (green). For the 20CR data, the ensemble members are plotted in paler colours, with the ensemble means of the annual mean data plotted in darker colours. Long-term averages are plotted as horizontal dashed lines We map the bias remaining after our procedure in Fig. 6. This can be compared to the original bias maps in Fig. 1—note that here the values are much smaller. The mean bias 〈β〉=〈U 20CRc−U ERAI〉 is consistent with zero almost everywhere (using a t test at a 1 % significance level, as before). An exception is a residual positive bias east of Gibraltar: we expect this area to be heavily affected by differences in how well the complex orography here is resolved between ERAI and 20CR. Two further exceptions occur in the central and eastern Mediterranean, which correspond to anomalies seen in other aspects of the 20CR data (see later sections), and which we discuss in more detail in Appendix C. Map of the remaining 'bias' after calibration. This can be compared to the map of the original bias in Fig. 1; note the colour scale covers much smaller values here. Crosshatched areas are not significantly different from zero at the 1 % level, using a t test The mean of the relative bias 〈β rel〉 (not shown) is ≤5 % almost everywhere. Finally, we note that the correlations between 20CR and ERAI after calibration (not shown) remain almost identical to those shown previously in Fig. 3. Analysis and results In this section, we use the 20CRc data to analyse the distribution of wind speeds over Europe in various complementary ways. The European context: maps of the long-term average, variability and trends The map of the 142-year mean wind speed in Fig. 7 gives an overview of the geographic distribution of wind speeds over Europe. There is a noticeable land–sea contrast, although it is the mountainous regions that have the lowest mean wind speed, just as is seen in the uncorrected 20CR data (Bett et al. 2013), and is inconsistent with observations. This erroneous behaviour is a known consequence of the orographic drag schemes in atmospheric models (Howard and Clark 2007), and is particularly apparent when (as here) the orographic variability is on a much smaller horizontal scale than the model grid cells. The spatial pattern in fact agrees very well with that derived by Kiss and Jánosi (2008) from the 10 m wind speeds covering 1958–2002 in the ERA-40 reanalysis (Uppala et al. 2005), although since they used winds at a lower level their mean values are correspondingly smaller. Long-term mean wind speed over Europe from the 20CRc data It is important to note that the wind speeds shown here apply to the particular spatial scale of this data set, which implies a certain amount of smoothing compared to values measured at a specific site. For example, Kirchner-Bossi et al. (2013) use a complex statistical procedure to relate sea-level pressure from 20CR to wind speed observations at a range of meteorological stations in Spain. Because they are statistically downscaling to this local scale, the mean wind speed they find is 2–3 m s −1 higher than we show in Fig. 7. We map the wind variability in terms of its standard deviation. 
The structure of the data set makes the calculation of the long-term standard deviation non-trivial: simply considering the ensemble-mean daily time series would result in a standard deviation that was negatively biased. Furthermore, the ensemble members' time series are only continuous in 5-year chunks, and using them as if they were continuous throughout could potentially inflate the apparent variability at the discontinuities (although in practice the impact of this is likely to be very small). To avoid such spurious signals and trends, we calculate the mean and standard deviation of daily wind speeds in each 5-year stream for each ensemble member, then take ensemble means for each period. We then combine these 5-yearly ensemble-mean standard deviations into single aggregate values for the full 142-year period, for each grid cell; see Appendix B for details. Since the standard deviation of wind speeds tends to correlate with the mean, we show in Fig. 8 the wind variability in terms of the coefficient of variation, the ratio of the standard deviation to the mean. This shows that, in most areas, the wind speed standard deviation is ∼40 % of the mean. The central Mediterranean has proportionally higher variability, with Greece, Turkey and the Alps (whose orography will be extremely poorly represented) showing lower variability. Map of the wind speed variability in terms of the coefficient of variation, i.e. the ratio of the standard deviation to the long-term mean The presence of any long-term trends in the mean or variability of wind speeds could have important consequences for wind farms, in terms of their future deployment, energy yield and maintenance requirements. Figure 9 maps the trends in both the ensemble-mean annual mean wind speed and the ensemble-mean annual standard deviation of daily wind speeds. The trends are found from the ensemble-mean annual time series using the Theil–Sen estimator (Theil 1950; Sen 1968). This is the median of the slopes between all pairs of points in the data set, and is more robust against outliers than simple linear regression, making it more suited to skew-distributed data such as wind speed. Map of the linear trend in the time series of ensemble-annual-mean wind speeds in each grid cell (top), and the ensemble-mean of the annual standard deviation of daily wind speeds (bottom), over 1871–2012. Crosshatched areas indicate where the trend is not significant at the 0.1 % level (see text for details) We test the significance of these trends at the 0.1 % level, using a Mann–Kendall test (Mann 1945; Kendall 1975) modified using the method of Hamed and Rao (1998) to account for autocorrelation in the data (following Sousa et al. 2011); as is the case with much meteorological data, we expect adjacent timesteps to be correlated. As with all significance tests, the result says whether the measured trend was unlikely, given the assumption of there being no true underlying physical trend. If the probability of measuring the trend we did was below 0.1 %, then we describe the trend as 'significant', otherwise we regard it as consistent with zero. We chose the particularly stringent threshold of 0.1 % to guard against detection of spurious trends; we only want to highlight trends we are very sure are present in the data. Some key points about long-term trends in European winds are immediately apparent from Fig. 9. 
Firstly, they are only on the order of a few centimetres per second per decade; and secondly, in most areas of the continent the trend is not significantly different from zero. The trends in standard deviation show a similar spatial pattern, although at an even lower magnitude. There are three areas of apparently significant trend in annual wind speed that merit looking at in more detail: the areas of positive trend in the Atlantic Ocean to the north and west of the British Isles, and the eastern Mediterranean around Crete; and the negative trend in an area of the central Mediterranean around the Italian peninsula and Sicily. The Mediterranean regions were also anomalous in terms of their bias with respect to ERA-Interim (see previous section). We look at the behaviour of wind speeds in these regions in more detail in Appendix C. Bett et al. (2013) used the same significance threshold for analysing trends in the uncorrected 20CR data, but measured trends using simple linear regression and t tests to establish significance. While we consider the present technique to be more robust, the magnitude and spatial patterns of the trends are similar, and similar regions are highlighted as significant, pointing to genuine features in the underlying 20CR data. As already discussed in the context of the mean wind speed, it is important to realise that these trends are those seen at the large scales of the 20CR data, and detailed physical or statistical modelling is required to downscale to a specific location. Considering again the example of Kirchner-Bossi et al. (2013), they find that the site in Spain they describe has a statistically significant negative trend in wind speed of around −0.01 m s−1 decade−1. In our results, the corresponding grid cell has a trend of around +0.01 m s−1 decade−1, and is consistent with zero according to our test. Wind distribution time series We use a region covering England and Wales to give an example of how wind speed distributions can vary with time. The time series of the area-averaged data from this region are shown in Fig. 10. The annual mean wind speed (panel a) shows both large interannual variability, and (when smoothed with a 5-year boxcar window) strong decadal-scale variation. For example, the smoothed series shows a clear increasing trend from around 1970 to a peak in the mid-1990s, followed by a return to near-average values after 2000. When seen in the 142-year context, however, these recent variations are not exceptional, and the year-to-year variability is always much greater. Note that, for this region, the year 2010 is the extreme low-wind year. This is linked to exceptionally cold months at the start and end of that calendar year, and a strongly negative NAO index in the 2009–2010 winter (Cattiaux et al. 2010; Osborn 2011; Brayshaw et al. 2012; Fereday et al. 2012; Maidens et al. 2013; Earl et al. 2013). The peak in wind speeds that occurs in the 1990s is another important feature in this region, and is also seen clearly in the observational record of wind speeds (Earl et al. 2013), in studies using geostrophic winds derived from pressure observations (Palutikof et al. 1992; Alexandersson et al. 2000; Wang et al. 2009), and is consistent with the large positive NAO in these years (e.g. Scaife et al. 2005, and references therein). Indeed, much of the variability of wind speeds in this region is likely to be related to modes of climate variability such as the NAO and Atlantic Multidecadal Oscillation (AMO, e.g.
Knight 2006); further consideration of this requires careful seasonal breakdowns of both wind speed and these climate indices however, and is beyond the scope of this paper. Time series of the wind speed distribution for a region covering 5° W– 1° E and 51° N– 55° N. In panels a–c, annual statistics are shown in light colours/shading, with darker lines showing the data smoothed with a 5-year boxcar window. Panel a: Ensemble-mean annual mean wind speed. Individual years are shown with shading indicating the 10th/90th percentiles of the ensemble spread in the annual means. Panel b: Ensemble means of the deciles of the daily wind speed distribution each year (i.e. the 10th to 90th percentiles). Panel c: Distribution half-widths, i.e. half the difference between symmetric decile pairs (as labelled); the standard deviation σ is also plotted, with its trend shown as a thin black dashed line. Panel d: The annual mean of the day-to-day standard deviation between ensemble members, as a fraction of the ensemble-mean annual mean wind speed Our results bear a remarkable qualitative resemblance to those produced over 20 years ago by Palutikof et al. (1992) using geostrophic wind speeds (1881–1989) adjusted to match wind speed observations from a station in England over 1975–1984. A key purpose of that study was to illustrate the long-term variability present in wind speeds, as it could have important implications for wind power production. With the advent of larger datasets and greater computational capacity, we are able to re-emphasize their conclusions and consider the decadal-scale behaviour of the wind more robustly and in greater detail. Considering the time series of the distribution as a whole (panel b), we can see that it follows the same decadal trends as the mean (panel a). The distribution width (panel c) highlights that while the outer reaches of the distribution are subject to much variability, with the distribution width growing and shrinking over decades, the inner parts of the distribution are much more constant. The standard deviation shown in that panel has a small but statistically significant positive trend, of 0.016 m s −1 decade −1. Finally, the bottom panel shows the relative uncertainty in the data, in terms of the annual mean of the day-to-day ensemble spread. As one looks further back, fewer observations are assimilated, and the ensemble members have more freedom to disagree with each other, resulting in increases in this measure of uncertainty. Two peaks are present that are related to the reduction in data from Atlantic shipping during the World Wars; these spikes in uncertainty are ubiquitous for near-Atlantic regions. In Appendix C, we show similar plots for other regions that show particular features of interest, as already discussed. Finally, we have given some consideration to the use of the Weibull (1951) distribution to concisely describe the wind speeds in our calibrated 20CR data. However, as already mentioned, our use of daily average wind speeds means that Weibull distributions tend to provide a poor description of the data. Nevertheless, the Weibull scale parameter, which is proportional to the mean of the distribution, does tend to behave in the same way as the mean wind speeds in terms of variability and trends. In particular, trends are of a similar magnitude and spatial pattern, and 'anomalous' regions in the central and eastern Mediterranean, noted in previous sections, are also present. 
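The annual distribution diagnostics described above (deciles, half-widths from symmetric decile pairs, the trend in the annual standard deviation, and the Weibull scale parameter) can be sketched as follows. This is an illustrative outline only, assuming Python with NumPy/SciPy and synthetic stand-in data; the significance testing with the modified Mann–Kendall test used in the paper is not shown.

```python
# Sketch of the distribution diagnostics described above (illustration only; the
# daily series `u` and `years` are assumed inputs, not the authors' data).
import numpy as np
from scipy import stats

def annual_distribution_stats(u, years):
    """u: daily-mean wind speeds (1D); years: matching array of calendar years.

    Returns, per year: mean, std, deciles (10th..90th), and the half-widths
    0.5*(q(100-k) - q(k)) for the symmetric decile pairs k = 10, 20, 30, 40."""
    yrs = np.unique(years)
    deciles = np.arange(10, 100, 10)
    out = {"year": yrs, "mean": [], "std": [], "deciles": [], "half_widths": []}
    for y in yrs:
        d = u[years == y]
        out["mean"].append(d.mean())
        out["std"].append(d.std())
        out["deciles"].append(np.percentile(d, deciles))
        out["half_widths"].append([0.5 * (np.percentile(d, 100 - k) - np.percentile(d, k))
                                   for k in (10, 20, 30, 40)])
    return {k: np.asarray(v) for k, v in out.items()}

def boxcar(x, n=5):
    """Simple n-year running mean (the '5-year boxcar window' used in the figures)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

# Synthetic stand-in data: ~142 years of daily wind speeds.
rng = np.random.default_rng(1)
years = np.repeat(np.arange(1871, 2013), 365)
u = rng.weibull(2.0, size=years.size) * 8.0

s = annual_distribution_stats(u, years)
# Theil-Sen trend of the annual standard deviation, as in panel c of Fig. 10.
slope, intercept, lo, hi = stats.theilslopes(s["std"], s["year"])
print("trend in sigma: %.4f m/s per decade" % (10 * slope))
# Weibull fit to the daily speeds of one year; the scale parameter tracks the mean.
shape, loc, scale = stats.weibull_min.fit(u[years == 2000], floc=0)
print("Weibull shape %.2f, scale %.2f" % (shape, scale))
```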
Additional details and discussion are presented in the Supplementary Information. Discussion and summary In this paper, we have demonstrated how century-scale reanalyses—in particular, the Twentieth Century Reanalysis, 20CR—can be used for assessing the long-term trends and variability of near-surface wind speeds over Europe, through a calibration procedure to relate it to a higher-resolution satellite-era climatology (such as ERA-Interim), and subsequent careful analysis. The long baseline of the 20CR means that it has great potential to inform wind speed assessments for the wind energy industry. In general, reanalysis data is used in conjunction with dynamical and/or statistical downscaling techniques in order to reach the spatial scale of wind farms, as part of the 'model chain' in such assessments. Often, it is the observation-rich and relatively high-resolution data sets of ERA-Interim and MERRA that provide that first reanalysis step. This limits any assessment of long-term variability, since they both only cover ∼3 decades. By calibrating the 20CR data to match the climatology of ERA-Interim over their period of overlap (1979–2012), this 142-year data set can be used in their place, providing a much more robust assessment of historic interannual and decadal variability in regions of Europe, and allowing the 'short-term' trends of the past 10–30 years to be put into the longer-term context. To emphasise this point, we show in Fig. 11 the distribution of the 109 34-consecutive-year trends (Footnote 9) in annual mean wind speed for the England and Wales region described in the previous section. The full 1871–2012 trend is indicated and, as already shown, is near zero. The trend from ERA-Interim for the 34 years of overlap is also marked, with a negative trend driven by the general reduction in wind speeds since the early 1990s. It is clear that the strong multi-decadal variability in wind speeds means that attempting to estimate the long-term trend from a ∼30-year sample can lead to misleading results. Distribution of trends for the England and Wales region. The 34-year trends in annual-mean wind speeds from the calibrated 20CR data are shown as the blue histogram, overplotted with the full 142-year trend (green arrow). The single 34-year trend from the overlapping period of ERA-Interim is shown as a red arrow The 20CR data is a rich source of information on the large decadal-scale variability of wind speeds. However, it is not without limitations, and hence it does need to be analysed with care. For example, in areas of complex orography, near-surface wind speeds are strongly reduced at the spatial scale of the 20CR, making their variability more difficult to interpret. As has been noted in other studies (Compo et al. 2011; Brönnimann et al. 2012), the ensemble nature of the 20CR needs to be taken into account when assessing long-term variability. Disagreement between ensemble members can be large, especially in the early period of the data. This leads to the daily ensemble-mean time series having less variability than the individual members, and can cause apparent trends in variability over time. Therefore, the daily ensemble-mean time series has little use in determining wind variability on long timescales, and the ensemble members should be used. Assessment of trends over the full 142 years of the 20CR is complicated by the fact that the mid-point of the time series, and hence of a simple linear trend, is the 1940s.
The reduction in ocean-based measurements during both the First and Second World Wars causes spikes in uncertainty, and in some cases systematic spikes in the wind speeds themselves (see Appendix C). Furthermore, the period after the Second World War corresponds to a large increase in national and international programmes collecting greater amounts of weather data. Taken together, these factors make the pre-1950s period much more susceptible to random and systematic uncertainties. Measured trends in the 20CR data should therefore be treated with caution. We have shown in fact that all trends in 20CR surface wind speeds over Europe are either consistent with zero (in most locations), so small as to be of little practical relevance (e.g. possibly in the North Atlantic), or due to systematic problems with the data (e.g. in the central and eastern Mediterranean and possibly the North Atlantic; see Appendix C). It is clear that, for most wind energy applications, interannual variability and the large decadal-scale variability are more important than the very small long-term trends in historical European wind speeds. Using century-scale reanalyses such as the 20CR allows wind resource assessment studies to incorporate more information on the historical decadal-scale variability at a site, which can reduce the uncertainties in the financial planning central to wind energy development. Abbreviations: ECMWF, European Centre for Medium-range Weather Forecasting; NOAA, National Oceanic and Atmospheric Administration; IPCC, Intergovernmental Panel on Climate Change; ACRE, Atmospheric Circulation Reconstructions over the Earth (http://www.met-acre.org/); NCEP, National Centres for Environmental Prediction. Footnotes: Streams 16 and 17 actually last 6 and 4 years respectively (see Table III in Compo et al. 2011). For simplicity, we assume 5-year streams throughout. Throughout this paper, we present maps on the 20CR's 2° grid in a Lambert Azimuthal Equal-Area projection centred on (10° E, 52° N), following, e.g. Annoni et al. (2003), code EPSG::3035. Calculations are performed on the regular lat.–lon. grid. Note that constructing the CDF in finite bins in wind speed, using a finite number of days, and relating the ensemble member time series to the ensemble mean distributions, means that sometimes the calibrated CDF does not quite reach unity. Footnote 9: i.e. the Theil–Sen trend for 1871–1904 inclusive, and 1872–1905, 1873–1906, …, and 1979–2012. Footnote 10: We use population statistics here rather than sample statistics because we use data from every day in each n-year period, rather than estimating the n-year standard deviation from a sample of days. References: Alexandersson H, Tuomenvirta H, Schmith T, Iden K (2000) Trends of storms in NW Europe derived from an updated pressure data set. Climate Res 14:71–73. doi:10.3354/cr014071 Allan R, Brohan P, Compo GP, Stone R, Luterbacher J, Brönnimann S (2011) The international atmospheric circulation reconstructions over the earth (ACRE) initiative. Bull Am Meteorol Soc 92(11):1421–1425. doi:10.1175/2011BAMS3218.1 Annoni A, Luzet C, Gubler E, Ihde J (eds) (2003) Map Projections for Europe. Proceedings of the "Map Projections for Europe" workshop, Marne-La Vallee, 14-15 December 2000, EUR 20120 EN, Institute for Environment and Sustainability, European Commission Joint Research Centre, http://www.ec-gis.org/document.cfm?id=425&db=document Badger J, Frank H, Hahmann AN, Giebel G (2014) Wind climate estimation based on mesoscale and microscale modelling: statistical-dynamical downscaling for wind energy applications. J Appl Meteorol Clim 53 (8):1901–1919.
doi:10.1175/jamc-d-13-0147.1 Berrisford P, Dee D, Poli P, Brugge R, Fielding K, Fuentes M, Kållberg P, Kobayashi S, Uppala S, Simmons A (2011) The ERA-Interim archive. ERA Report Series 1, ECMWF, Shinfield Park, Reading, http://old.ecmwf.int/publications/library/do/references/show?id=90276 Bett PE, Thornton HE, Clark RT (2013) European wind variability over 140 yr. Adv Sci Res 10:51–58. doi:10.5194/asr-10-51-2013 Bichet A, Wild M, Folini D, Schär C (2012) Causes for decadal variations of wind speed over land: sensitivity studies with a global climate model. Geophys Res Lett 39(11):L11,701+. doi:10.1029/2012GL051685 Boccard N (2009) Capacity factor of wind power realized values vs. estimates. Energy Policy 37(7):2679–2688. doi:10.1016/j.enpol.2009.02.046 Brayshaw DJ, Dent C, Zachary S (2012) Wind generation's contribution to supporting peak electricity demand—meteorological insights. Proc Inst Mech Eng O J Risk Reliab 226(1):44–50. doi:10.1177/1748006X11417503 Brönnimann S, Martius O, von Waldow H, Welker C, Luterbacher J, Compo GP, Sardeshmukh PD, Usbeck T (2012) Extreme winds at northern mid-latitudes since 1871. Meteorol Z 21(1):13–27. doi:10.1127/0941-2948/2012/0337 Carta JA, Velázquez S, Cabrera P (2013) A review of measure-correlate-predict (MCP) methods used to estimate long-term wind characteristics at a target site. Renew Sust Energy Rev 27:362–400. doi:10.1016/j.rser.2013.07.004 Cattiaux J, Vautard R, Cassou C, Yiou P, Masson-Delmotte V, Codron F (2010) Winter 2010 in Europe: a cold extreme in a warming climate. Geophys Res Lett 37(20):L20,704+. doi:10.1029/2010gl044613 Christensen J, Hewitson B, Busuioc A, Chen A, Gao X, Held I, Jones R, Kolli RK, Kwon WT, Laprise R, Rueda VM, Mearns L, Menndez CG, Rsnen J, Rinke A, Sarr A, Whetton P (2007) Regional climate projections. In: Solomon S, Qin D, Manning M, Chen Z, Marquis M, Averyt K, Tignor M, Miller H (eds) Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA., chap 11, http://www.ipcc.ch/publications_and_data/ar4/wg1/en/contents.html Christensen JH, Kumar KK, Aldrian E, An SI, Cavalcanti IFA, De Castro M, Dong W, Goswami P, Hall A, Kanyanga JK, Kitoh A, Kossin J, Lau NC, Renwick J, Stephenson D, Xie SP, Zhou T (2013) Climate Phenomena and their Relevance for Future Regional Climate Change. In: Stocker TF, Qin D, Plattner GK, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM (eds) Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (in press), Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA., chap 14, doi:10.1017/CBO9781107415324.028, http://www.ipcc.ch/report/ar5/wg1/ Collins M, Knutti R, Arblaster JM, Dufresne JL, Fichefet T, Friedlingstein P, Gao X, Gutowski WJ, Johns T, Krinner G, Shongwe M, Tebaldi C, Weaver AJ, Wehner M (2013) Long-term Climate Change: Projections, Commitments and Irreversibility. In: Stocker TF, Qin D, Plattner GK, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM (eds) Climate Change 2013: The Physical Science Basis. 
Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (in press), Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA., chap 12, doi:10.1017/CBO9781107415324.024, http://www.ipcc.ch/report/ar5/wg1/ Compo GP, Whitaker JS, Sardeshmukh PD, Matsui N, Allan RJ, Yin X, Gleason BE, Vose RS, Rutledge G, Bessemoulin P, Brönnimann S, Brunet M, Crouthamel RI, Grant AN, Groisman PY, Jones PD, Kruk MC, Kruger AC, Marshall GJ, Maugeri M, Mok HY, Nordli Ross TF, Trigo RM, Wang XL, Woodruff SD, Worley SJ (2011) The Twentieth Century Reanalysis project. Q J R Meteor Soc 137(654):1–28. doi:10.1002/qj.776 Cradden LC, Harrison GP, Chick JP (2012) Will climate change impact on wind power development in the UK? Clim. Chang. 115(3-4):837–852. doi:10.1007/s10584-012-0486-5 Dee DP, Uppala SM, Simmons AJ, Berrisford P, Poli P, Kobayashi S, Andrae U, Balmaseda MA, Balsamo G, Bauer P, Bechtold P, Beljaars ACM, Van de Berg L, Bidlot J, Bormann N, Delsol C, Dragani R, Fuentes M, Geer AJ, Haimberger L, Healy SB, Hersbach H, Hólm EV, Isaksen L, Kållberg P, Köhler M, Matricardi M, McNally AP, Monge-Sanz BM, Morcrette JJ, Park BK, Peubey C, De Rosnay P, Tavolato C, Thépaut JN, Vitart F (2011) The ERA-Interim reanalysis: configuration and performance of the data assimilation system. Q J R Meteor Soc 137(656):553–597. doi:10.1002/qj.828 Dee DP, Balmaseda M, Balsamo G, Engelen R, Simmons AJ, Thépaut JN (2013) Toward a consistent reanalysis of the climate system. Bull Am Meteorol Soc 95(8):1235–1248. doi:10.1175/bams-d-13-00043.1 Dobrynin M, Murawsky J, Yang S (2012) Evolution of the global wind wave climate in CMIP5 experiments. Geophys Res Lett 39(18):L18,606. doi:10.1029/2012gl052843 Donat MG, Renggli D, Wild S, Alexander LV, Leckebusch GC, Ulbrich U (2011) Reanalysis suggests long-term upward trends in European storminess since 1871. Geophys. Res. Lett 38(14):L14,703+. doi:10.1029/2011gl047995 Earl N, Dorling S, Hewston R, Von Glasow R (2013) 1980–2010 variability in U.K. surface wind climate. J. Climate 26(4):1172–1191. doi:10.1175/jcli-d-12-00026.1 Fereday DR, Maidens A, Arribas A, Scaife AA, Knight JR (2012) Seasonal forecasts of northern hemisphere winter 2009/10. Environ Res Lett 034(3):031+. doi:10.1088/1748-9326/7/3/034031 Ferguson C, Villarini G (2014) An evaluation of the statistical homogeneity of the Twentieth Century Reanalysis. Clim Dynam 42(11-12):2841–2866. doi:10.1007/s00382-013-1996-1 Ferguson CR, Villarini G (2012) Detecting inhomogeneities in the Twentieth Century Reanalysis over the central United States. J Geophys Res 117(D5):D05,123+. doi:10.1029/2011JD016988 Foley AM, Leahy PG, Marvuglia A, McKeogh EJ (2012) Current methods and advances in forecasting of wind power generation. Renew Energy 37(1):1–8. doi:10.1016/j.renene.2011.05.033 Folland CK, Knight J, Linderholm HW, Fereday D, Ineson S, Hurrell JW (2009) The summer North Atlantic Oscillation: past, present, and future. J Clim 22(5):1082–1103. doi:10.1175/2008jcli2459.1 Hamed KH, Rao AR (1998) A modified Mann–Kendall trend test for autocorrelated data. J Hydrol 204 (1-4):182–196. doi:10.1016/s0022-1694(97)00125-x Hartmann DL, Tank AMGK, Rusticucci M, Alexander LV, Brnnimann S, Charabi Y, Dentener FJ, Dlugokencky EJ, Easterling DR, Kaplan A, Soden BJ, Thorne PW, Wild M, Zhai PM (2013) Observations: Atmosphere and Surface. 
In: Stocker TF, Qin D, Plattner GK, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM (eds) Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA., chap 2, pp 159–254. doi:10.1017/CBO9781107415324.008. http://www.ipcc.ch/report/ar5/wg1/ Howard T, Clark P (2007) Correction and downscaling of NWP wind speed forecasts. Meteorol Appl 14 (2):105–116. doi:10.1002/met.12 Hueging H, Haas R, Born K, Jacob D, Pinto JG (2012) Regional changes in wind energy potential over Europe using regional climate model ensemble projections. J Appl Meteorol Clim 52(4):903–917. doi:10.1175/JAMC-D-12-086.1 Hurrell JW, Kushnir Y, Ottersen G, Visbeck M (2003) An Overview of the North Atlantic Oscillation, Geophysical Monograph, vol 134, American Geophysical Union, Washington, D.C., chap 1, pp 1–35. doi:10.1029/134gm01, http://www.cgd.ucar.edu/staff/jhurrell/naobook.html Jung J, Broadwater RP (2014) Current status and future advances for wind speed and power forecasting. Renew Sust Energy Rev 31:762–777. doi:10.1016/j.rser.2013.12.054 Kalnay E, Kanamitsu M, Kistler R, Collins W, Deaven D, Gandin L, Iredell M, Saha S, White G, Woollen J, Zhu Y, Leetmaa A, Reynolds R, Chelliah M, Ebisuzaki W, Higgins W, Janowiak J, Mo KC, Ropelewski C, Wang J, Jenne R, Joseph D (1996) The NCEP/NCAR 40-year reanalysis project. Bull Am Meteorol Soc 77(3):437–471. doi:10.1175/1520-0477(1996)077<0437:TNYRP>2.0.CO;2 Kendall MG (1975) Rank Correlation Methods, 4th. Charles Griffin & Company, London Kirchner-Bossi N, Prieto L, García-Herrera R, Carro-Calvo L, Salcedo-Sanz S (2013) Multi-decadal variability in a centennial reconstruction of daily wind. Appl. Energy 105:30–46. doi:10.1016/j.apenergy.2012.11.072 Kirchner-Bossi N, García-Herrera R, Prieto L, Trigo RM (2014) A long-term perspective of wind power output variability. Int J Climatol. doi:10.1002/joc.4161 Kiss P, Jánosi IM (2008) Comprehensive empirical analysis of ERA-40 surface wind speed distribution over Europe. Energy Convers Manag 49(8):2142–2151. doi:10.1016/j.enconman.2008.02.003 Kiss P, Varga L, Jánosi IM (2009) Comparison of wind power estimates from the ECMWF reanalyses with direct turbine measurements. J Renew Sustain Energy 033(3):105+. doi:10.1063/1.3153903 Knight JR, Folland CK, Scaife AA (2006) Climate impacts of the Atlantic multidecadal oscillation. Geophys Res Lett 33(17):L17,706+. doi:10.1029/2006gl026242 Krueger O, Schenk F, Feser F, Weisse R (2013) Inconsistencies between long-term trends in storminess derived from the 20CR reanalysis and observations. J Climate 26(3):868–874. doi:10.1175/jcli-d-12-00309.1 Krueger O, Feser F, Bärring L, Kaas E, Schmith T, Tuomenvirta H, Storch H (2014) Trends and low frequency variability of extra-tropical cyclone activity in the ensemble of twentieth century reanalysis by Xiaolan L. Wang, Y. Feng, G. P. Compo, V. R. Swail, F. W. Zwiers, R. J. Allan, and P. D. Sardeshmukh, Climate Dynamics, 2012. Clim Dynam 42(3-4):1127–1128. doi:10.1007/s00382-013-1814-9 Kubik ML, Brayshaw DJ, Coker PJ, Barlow JF (2013) Exploring the role of reanalysis data in simulating regional wind generation variability over Northern Ireland. Renew Energy 57:558–561. doi:10.1016/j.renene.2013.02.012 Lafon T, Dadson S, Buys G, Prudhomme C (2013) Bias correction of daily precipitation simulated by a regional climate model: a comparison of methods. 
Int J Climatol 33(6):1367–1381. doi:10.1002/joc.3518 Liléo S, Berge E, Undheim O, Klinkert R, Bredesen RE (2013) Long-term correction of wind measurements. state-of-the-art, guidelines and future work. Elforsk report 13:18, Elforsk, URL http://www.elforsk.se/Rapporter/?rid=13_18_ Maidens A, Arribas A, Scaife AA, MacLachlan C, Peterson D, Knight J (2013) The influence of surface forcings on prediction of the north atlantic oscillation regime of winter 2010/11. Mon Weather Rev 141 (11):3801–3813. doi:10.1175/mwr-d-13-00033.1 Mann HB (1945) Nonparametric tests against trend. Econometrica 13(3):245–259. http://www.jstor.org/stable/1907187 McVicar TR, Roderick ML, Donohue RJ, Li LT, Van Niel TG, Thomas A, Grieser J, Jhajharia D, Himri Y, Mahowald NM, Mescherskaya AV, Kruger AC, Rehman S, Dinpashoh Y (2012) Global review and synthesis of trends in observed terrestrial near-surface wind speeds: Implications for evaporation. J Hydrol 416-417:182–205. doi:10.1016/j.jhydrol.2011.10.024 McVicar TR, Vautard R, Thpaut JN, Berrisford P, Dunn RJH (2013) Land surface wind speed. In: Blunden J, Arndt DS (eds) State of the Climate in 2012, vol 94 (8), Bulletin American Meteorologia Society, chap 2, Global Climate, pp S27–S29, doi:10.1175/2013bamsstateoftheclimate.1 Mears C (2013) Ocean surface wind speed. In: Blunden J, Arndt D S (eds) State of the Climate in 2012, vol 94 (8), Bulletin American Meteorologia Society, chap 2, Global Climate, p S29, doi:10.1175/2013bamsstateoftheclimate.1 Meehl GA, Stocker TF, Collins WD, Friedlingstein P, Gaye AT, Gregory JM, Kitoh A, Knutti R, Murphy JM, Noda A, Raper SCB, Watterson IG, Weaver AJ, Zhao ZC (2007) Global climate projections. In: Solomon S, Qin D, Manning M, Chen Z, Marquis M, Averyt KB, Tignor M, Miller HL (eds) Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA., chap 10, http://www.ipcc.ch/publications_and_data/ar4/wg1/en/contents.html Osborn TJ (2011) Winter 2009/2010 temperatures and a record-breaking North Atlantic Oscillation index. Weather 66(1):19–21. doi:10.1002/wea.660 Palutikof JP, Guo X, Halliday JA (1992) Climate variability and the UK wind resource. J Wind Eng Ind Aerod 39(1-3):243–249. doi:10.1016/0167-6105(92)90550-T Panofsky HA, Brier GW (1968) Some Applications of Statistics to Meteorology. The Pennsylvania State University, University Park, Pennsylvania Petersen EL, Troen I (2012) Wind conditions and resource assessment. WIREs Energy Environ 1(2):206–217. doi:10.1002/wene.4 Pirazzoli PA, Tomasin A (2003) Recent near-surface wind changes in the central Mediterranean and Adriatic areas. Int J Climatol 23(8):963–973. doi:10.1002/joc.925 Poli P, Hersbach H, Tan D, Dee D, JN Thpaut, Simmons A, Peubey C, Laloyaux P, Komori T, Berrisford P, Dragani R, Trmolet Y, Holm E, Bonavita M, Isaksen L, Fisher M (2013) The data assimilation system and initial performance evaluation of the ECMWF pilot reanalysis of the 20th-century assimilating surface observations only (ERA-20C). ERA Report Series 14, ECMWF, Shinfield Park, Reading, http://old.ecmwf.int/publications/library/do/references/show?id=90833 Pryor SC, Barthelmie RJ (2010) Climate change impacts on wind energy: a review. Renew Sust Energy Rev 14(1):430–437. doi:10.1016/j.rser.2009.07.028 Pryor SC, Schoof JT (2010) Importance of the SRES in projections of climate change impacts on near-surface wind regimes. 
Meteorol Z 19(3):267–274. doi:10.1127/0941-2948/2010/0454 Rayner NA, Parker DE, Horton EB, Folland CK, Alexander LV, Rowell DP, Kent EC, Kaplan A (2003) Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. J. Geophys. Res 108(D14):4407+. doi:10.1029/2002JD002670 Rienecker MM, Suarez MJ, Gelaro R, Todling R, Bacmeister J, Liu E, Bosilovich MG, Schubert SD, Takacs L, Kim GK, Bloom S, Chen J, Collins D, Conaty A, da Silva A, Gu W, Joiner J, Koster RD, Lucchesi R, Molod A, Owens T, Pawson S, Pegion P, Redder CR, Reichle R, Robertson FR, Ruddick AG, Sienkiewicz M, Woollen J (2011) MERRA: NASA's Modern-Era Retrospective Analysis for Research and Applications. J Clim 24(14):3624–3648. doi:10.1175/jcli-d-11-00015.1 Scaife AA, Knight JR, Vallis GK, Folland CK (2005) A stratospheric influence on the winter NAO and North Atlantic surface climate. Geophys Res Lett 32(18):L18,715+. doi:10.1029/2005GL023226 Scaife AA, Kucharski F, Folland CK, Kinter J, Brönnimann S, Fereday D, Fischer AM, Grainger S, Jin EK, Kang IS, Knight JR, Kusunoki S, Lau NC, Nath MJ, Nakaegawa T, Pegion P, Schubert S, Sporyshev P, Syktus J, Yoon JH, Zeng N, Zhou T (2009) The CLIVAR C20C project: selected twentieth century climate events. Clim Dynam 33(5):603–614. doi:10.1007/s00382-008-0451-1 Sen PK (1968) Estimates of the regression coefficient based on Kendall's tau. J Am Stat Assoc 63(324):1379–1389. doi:10.1080/01621459.1968.10480934 Sousa PM, Trigo RM, Aizpurua P, Nieto R, Gimeno L, Garcia-Herrera R (2011) Trends and extremes of drought indices throughout the 20th century in the Mediterranean. Nat Hazards Earth Syst Sci 11 (1):33–51. doi:10.5194/nhess-11-33-2011 Stickler A, Brönnimann S (2011) Significant bias of the NCEP/NCAR and twentieth-century reanalyses relative to pilot balloon observations over the West African Monsoon region (1940–1957). Q J R Meteor Soc 137 (659):1400–1416. doi:10.1002/qj.854 Stopa JE, Cheung KF (2014) Intercomparison of wind and wave data from the ECMWF Reanalysis Interim and the NCEP Climate Forecast System Reanalysis. Ocean Model 75:65–83. doi:10.1016/j.ocemod.2013.12.006 Szczypta C, Calvet JC, Albergel C, Balsamo G, Boussetta S, Carrer D, Lafont S, Meurey C (2011) Verification of the new ECMWF ERA-Interim reanalysis over France. Earth Syst Sci 15(2):647–666. doi:10.5194/hess-15-647-2011 Teutschbein C, Seibert J (2012) Bias correction of regional climate model simulations for hydrological climate-change impact studies: review and evaluation of different methods. J Hydrol 456-457:12–29. doi:10.1016/j.jhydrol.2012.05.052 Theil H (1950) A rank-invariant method of linear and polynomial regression analysis. In: Proceedings of Koninalijke Nederlandse Akademie van Weinenschatpen A. doi:10.1007/978-94-011-2546-8_20, vol 53, pp 386–392 Tokinaga H, Xie SP (2011) Wave- and Anemometer-Based Sea Surface Wind (WASWind) for climate change analysis. J Climate 24(1):267–285. doi:10.1175/2010jcli3789.1 Trenberth KE, Jones PD, Ambenje P, Bojariu R, Easterling D, Tank AK, Parker D, Rahimzadeh F, Renwick JA, Rusticucci M, Soden B, Zhai P (2007) Observations: Surface and atmospheric climate change. In: Solomon S, Qin D, Manning M, Chen Z, Marquis M, Averyt K, Tignor M, Miller H (eds) Climate Change 2007: The Physical Science Basis. 
Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, chap 3., http://www.ipcc.ch/publications_and_data/ar4/wg1/en/contents.html Uppala SM, Kållberg PW, Simmons AJ, Andrae U, Bechtold FM, Gibson JK, Haseler J, Hernandez A, Kelly GA, Li X, Onogi K, Saarinen S, Sokka N, Allan RP, Andersson E, Arpe K, Balmaseda MA, Beljaars ACM, Berg BJ, Bormann N, Caires S, Chevallier F, Dethof A, Dragosavac M, Fisher M, Fuentes M, Hagemann S, Hólm E, Hoskins BJ, Isaksen L, Janssen PAEM, Jenne R, Mcnally AP, Mahfouf JF, Morcrette JJ, Rayner NA, Saunders RW, Simon P, Sterl A, Trenberth KE, Untch A, Vasiljevic D, Viterbo P, Woollen J (2005) The ERA-40 re-analysis. Q J R Meteor Soc 131(612):2961–3012. doi:10.1256/qj.04.176 Vautard R, Cattiaux J, Yiou P, Thepaut JN, Ciais P (2010) Northern hemisphere atmospheric stilling partly attributed to an increase in surface roughness. Nat Geosci 3(11):756–761. doi:10.1038/ngeo979 Wang XL, Zwiers FW, Swail VR, Feng Y (2009) Trends and variability of storminess in the Northeast Atlantic region, 1874–2007. Clim Dynam 33(7-8):1179–1195. doi:10.1007/s00382-008-0504-5 Wang, Wan H, Zwiers FW, Swail VR, Compo GP, Allan RJ, Vose RS, Jourdain S, Yin X (2011) Trends and low-frequency variability of storminess over western Europe, 1878–2007. Clim Dynam 37 (11-12):2355–2371. doi:10.1007/s00382-011-1107-0 Wang XL, Feng Y, Compo GP, Swail VR, Zwiers FW, Allan RJ, Sardeshmukh PD (2013) Trends and low frequency variability of extra-tropical cyclone activity in the ensemble of twentieth century reanalysis. Clim Dynam 40(11-12):2775–2800. doi:10.1007/s00382-012-1450-9 Wang XL, Feng Y, Compo GP, Zwiers FW, Allan RJ, Swail VR, Sardeshmukh PD (2014) Is the storminess in the Twentieth Century Reanalysis really inconsistent with observations? a reply to the comment by Krueger et al. (2013b). Clim Dynam 42(3-4):1113–1125. doi:10.1007/s00382-013-1828-3 Watanabe S, Kanae S, Seto S, Yeh PJF, Hirabayashi Y, Oki T (2012) Intercomparison of bias-correction methods for monthly temperature and precipitation simulated by multiple climate models. J Geophys Res 117(D23):D23,114+. doi:10.1029/2012jd018192 Watson S (2014) Quantifying the variability of wind energy. WIREs Energy Environ 3(4):330–342. doi:10.1002/wene.95 Weibull W (1951) A statistical distribution function of wide applicability. J Appl Mech 18(3):293–297 Wentz FJ, Ricciardulli L (2011) Comment on Global trends in wind speed and wave height. Science 334 (6058):905. doi:10.1126/science.1210317 Wever N (2012) Quantifying trends in surface roughness and the effect on surface wind speed observations. J Geophys Res 117(D11):D11,104+. doi:10.1029/2011JD017118 Wiser R, Yang Z, Hand M, Hohmeyer O, Infield D, Jensen PH, Nikolaev V, O'Malley M, Sinden G, Zervos A, Von Stechow C (2011) Wind energy. In: Edenhofer O, Pichs-Madruga R, Sokona Y, Seyboth K, Matschoss P, Kadner S, Zwickel T, Eickemeier P, Hansen G, Schlömer S (eds) IPCC Special Report on Renewable Energy Sources and Climate Change Mitigation, Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, chap 7., http://srren.ipcc-wg3.de/report/ Young IR, Babanin AV, Zieger S (2011b) Response to comment on global trends in wind speed and wave height. Science 334(6058):905. doi:10.1126/science.1210548 Young IR, Zieger S, Babanin AV (2011b) Global trends in wind speed and wave height. Science 332 (6028):451–455. 
doi:10.1126/science.1197219 Young IR, Vinoth J, Zieger S, Babanin AV (2012) Investigation of trends in extreme value wave height and wind speed. J Geophys Res 117(C11):C00J06. doi:10.1029/2011jc007753 Acknowledgements: PB would like to thank Adam Scaife, Chris Folland, Clive Wilson, Malcolm Lee, Jess Standen, Alasdair Skea and Doug McNeall, and the anonymous reviewer, for helpful comments and discussion. Support for the Twentieth Century Reanalysis Project dataset is provided by the U.S. Department of Energy, Office of Science Innovative and Novel Computational Impact on Theory and Experiment (DOE INCITE) program, and Office of Biological and Environmental Research (BER) and by the National Oceanic and Atmospheric Administration Climate Program Office. ERA-Interim data were obtained from the ECMWF archive and are used under license. Met Office Hadley Centre, FitzRoy Road, Exeter, EX1 3PB, UK: Philip E. Bett, Hazel E. Thornton & Robin T. Clark. Correspondence to Philip E. Bett. A: Choice of vertical level In Fig. 12, we compare the daily mean wind speeds from 20CR at the σ := P/P_surface = 0.995 level with those at the other available near-surface levels of P = 1000 hPa and 10 m, over an arbitrary period. They have very similar variability behaviour, with the 1000 hPa winds tending to be slightly higher, and the 10 m winds around 10–20 % lower. Demonstration of the daily mean wind speed at different near-surface levels, for the England and Wales region. Both panels show the results from the ensemble-mean 20CR winds at the σ = 0.995 level, at 1000 hPa, and at 10 m, as well as the 10 and 60 m winds from ERA-Interim after regridding to match 20CR. The top panel shows the daily-mean wind speeds as a ratio of the 20CR σ = 0.995 wind, and the bottom panel shows the actual wind speeds. Figure 12 also includes 10 and 60 m winds from ERA-Interim. The 60 m vertical level was chosen as it is roughly similar to the height expected at P = 0.995 P_surface. A similar alternative would have been the 30 m model level, but we chose the higher level as it would be (marginally) less impacted by surface roughness; 60 m is also closer to wind turbine hub heights and thus more likely to be used for site-selection studies for the wind power industry. Figure 12 also suggests that the 60 m winds provide a fairly good match to the 20CR σ = 0.995 winds by eye. B: Procedure for combining variances To avoid bias, we calculate variances of daily-mean wind speeds for each ensemble member separately, in consecutive n-year periods. In most cases, these periods are n = 5 years, corresponding to the production streams of the 20CR (see Section 4.1); the final period has n = 2 years, covering 2011 and 2012. These are combined into an aggregate population variance (Footnote 10) for the whole 142-year period over all ensemble members, using the following procedure. If we consider a single time series of daily-mean wind speeds U(t_j), at discrete timesteps labelled j, then we can divide it into a series of discrete n-year chunks labelled i, each containing N_i days (leap years and the final 2-year period mean that not all N_i are equal). For each n-year period i, we can calculate the mean \(\overline {U}_{i} = N_{i}^{-1} {\sum }_{j} U(t_{j})\), the mean of squares \(\overline {U^{2}}_{i} = N_{i}^{-1} {\sum }_{j} U^{2}(t_{j})\), and the variance \({\sigma _{i}^{2}} = \overline {U^{2}}_{i} - \overline {U}_{i}^{2}\). We store the mean and variance for each n-year period, for each ensemble member.
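The per-stream accumulation just described, and the aggregation it feeds into, can be sketched as follows for a single ensemble member. This is an assumed illustration (Python/NumPy, synthetic data), not the authors' code; in the real procedure the per-period statistics are first averaged over ensemble members before being combined via the formulas given next.

```python
# Sketch of Appendix B's procedure (illustration only; the data layout is assumed):
# per-stream means and variances of daily wind speed for one ensemble member, then the
# aggregate population variance over all streams via sigma^2 = <U^2> - <U>^2.
import numpy as np

def per_stream_stats(daily_speeds_by_stream):
    """daily_speeds_by_stream: list of 1D arrays, one per n-year stream (one member).

    Returns arrays (N_i, mean_i, var_i) with population variances (ddof=0), matching
    the use of every day in each period rather than a sample of days."""
    N = np.array([len(s) for s in daily_speeds_by_stream])
    means = np.array([s.mean() for s in daily_speeds_by_stream])
    variances = np.array([s.var(ddof=0) for s in daily_speeds_by_stream])
    return N, means, variances

def aggregate_variance(N, means, variances):
    """Combine per-stream stats into the full-period population variance."""
    w = N / N.sum()
    mean_all = np.sum(w * means)                       # aggregate <U>
    mean_sq_all = np.sum(w * (variances + means**2))   # aggregate <U^2>
    return mean_all, mean_sq_all - mean_all**2

# Example with synthetic streams (28 five-year chunks of daily data, final chunk 2 years):
rng = np.random.default_rng(2)
streams = [rng.weibull(2.0, size=365 * n) * 8.0 for n in ([5] * 28 + [2])]
N, m, v = per_stream_stats(streams)
mu, sigma2 = aggregate_variance(N, m, v)
print("142-year mean %.2f m/s, std %.2f m/s" % (mu, np.sqrt(sigma2)))
```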
The aggregate means over all n-year periods (i.e. the 142-year means in our case) are simply $$\begin{array}{@{}rcl@{}} \overline{U} &=& \frac{{\sum}_{i} N_{i} \overline{U}_{i}}{{\sum}_{i} N_{i}}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \overline{U^{2}} &=& \frac{{\sum}_{i} N_{i} \overline{U^{2}}_{i}}{{\sum}_{i} N_{i}}. \end{array} $$ We can use these to write the aggregate population variance in terms of the mean and variance in each period: $$\begin{array}{@{}rcl@{}} \sigma^{2} &=& \overline{U^{2}} - \overline{U}^{2} \end{array} $$ $$\begin{array}{@{}rcl@{}} &=& \frac{{\sum}_{i} N_{i} \overline{U^{2}}_{i}}{{\sum}_{i} N_{i}} - \overline{U}^{2} \end{array} $$ $$\begin{array}{@{}rcl@{}} &=& \frac{{\sum}_{i} N_{i} \left( {\sigma_{i}^{2}}+\overline{U}_{i}^{2} \right)}{{\sum}_{i} N_{i}} - \overline{U}^{2}. \end{array} $$ In practice, since we have stored the n-year means and variances for each ensemble member m, \(\overline {U}_{i,m}\) and \(\sigma ^{2}_{i,m}\), we take ensemble means to obtain \(\overline {U}_{i}\) and \({\sigma ^{2}_{i}}\) for each period. These are then used to calculate \(\overline {U}\) and σ 2 using Eq. 9. C: Additional regional time series In this section, we demonstrate the wind speed time series for some additional regions of interest, in the same manner as for the England and Wales results discussed in Section 4.2 (Fig. 10). The regions are defined in Table 1 and shown in Fig. 13, and were selected as areas of apparently 'significant' trends in wind speed (see Fig. 9). As elsewhere in this paper, trends are calculated using the Theil–Sen estimator, and their significance is tested using the modified Mann–Kendall test (see Section 4.1). Table 1 Definitions of regions used in this study. Coordinates are given as (° East, ° North). Results for the first two regions are given in the main body of this paper, and this Appendix describes the bottom three regions Regions used in this study, overlaid on the mean wind speed from 20CR on an arbitrary day. The regions defined in Table 1 (on a regular lat–lon grid) are marked with boxes Time series of the wind speed distribution for the North Atlantic region, following Fig. 10. The panels show annual values of mean wind speed (a), standard deviation of daily mean winds (b) and mean daily ensemble spread (c). Dark lines in (a, b) give 5-year rolling averages, and trendlines are shown with black dashed lines; they are significant at the 0.1 % level (see text for details) As Fig. 14 but for the wind speed distribution in the Sicily and Central Mediterranean region. While the annual mean wind speeds have a significant negative trend (black dashed line in panel a, see text for details), there is no significant trend in the standard deviation (panel b) As Fig. 14 but for the wind speed distribution for the Crete and Eastern Mediterranean region. Both the annual mean wind speeds (panel a) and the standard deviations (panel b) have statistically significant trends, marked as black dashed lines; see text for details Figure 14 shows the results for the North Atlantic region. As well as having much stronger and more variable wind speeds overall compared to England and Wales, there are also significant positive trends in the annual mean wind speed and annual standard deviation of daily winds. The increase in the uncertainty prior to the 1940s is much more striking than for the England and Wales region, and casts a degree of suspicion on the trend in the annual mean wind speed. 
It is plausible that the apparent trend is simply due to the winds prior to the 1940s in this location being systematically slightly lower than in the subsequent period, rather than being due to any true underlying physical mechanism. A possible cause—at least in part—could be a difference between the variance in the observations ingested by the reanalysis, and the preferred variance of the underlying NWP model. For example, if the observations are more variable than the model (e.g. if left running without assimilating data), then we might imagine that the 20CR data would have less variance at early times when there are far fewer observations. The skewed nature of wind speed distributions means that a trend in variance could lead to a trend in mean wind speeds too. However, the 20CR employs a covariance inflation process (see Compo et al. 2011 and references therein for details), which will act in the opposite direction. Without further detailed study of the model behaviour, these ideas remain at the level of speculation. The WASWind data set produced by Tokinaga and Xie (2011), based on ship-based measurements of wind and wave heights, has a negative trend in winds for the North Atlantic over 1950–2008. In our data, the trend over the 1950–2010 period is positive, but not significantly different from zero. The weakness of both trends, and difficulties with the observations in both cases, mean that it is hard to be conclusive about the 'true' situation. However, the negative trend we see between around 1990 and around 2005 is seen in the WASWind data, and Vautard et al. (2010) have shown that it is also present in the ERA-Interim data. Finally, Vautard et al. (2010) found a negligible trend in the North Atlantic in the NCEP/NCAR Reanalysis (Kalnay et al. 1996) over 1979–2008, which is also consistent with our results. Long-term trends in extreme wind speeds and storminess in the North Atlantic have been discussed in Wang et al. (2009, 2011, 2013), Krueger et al. (2013, 2014) and Wang et al. (2014). These studies relate extreme winds derived from long-term pressure records with those derived from the 20CR data set, and demonstrate both the decadal-scale variability that we see here, and the difficulty of drawing definitive conclusions from trend analysis with this data: different analysis methods can produce very different results, and the 20CR data prior to the 1950s should be treated both carefully and sceptically. The Sicily and Central Mediterranean region appeared to have a significant negative trend in wind speeds in Fig. 9; the time series for the annual mean wind speeds in that region is shown in Fig. 15. We can see again the high levels of uncertainty prior to the 1950s, and a particularly anomalous spike in wind speeds around 1940–1942. If we take that spike to be indicative of the kind of systematic errors that might be present in the early half of the data, but not captured by the ensemble spread, then it is not unreasonable to suppose that the entire period prior to the 1940s could be showing higher wind speeds than it should, and thus accentuating a negative trend. However, there does appear to be a more genuine negative trend in the data from the 1950s onwards, where the uncertainties are much more reasonable. We find that the Theil–Sen slope for the 1950–2012 period is very similar to that of the full 142 years, although in this case it is not significantly different from zero at the 0.1 % level.
However, as there are so few decades available from the 1950s, it is difficult to know how such an apparent trend relates to the decadal-scale oscillations that we see here, and in other regions. Overall, the uncertainties in the data make it extremely difficult to separate decadal climate variability, systematic errors, and a genuine long-term trend. Pirazzoli and Tomasin (2003) looked at trends in the observed wind speeds over a similar region using station data mostly covering 1951–2000. They found a mixture of trend behaviours: most stations showed a negative trend prior to the 1970s that then became positive; some stations showed no trend, or trends which became negative from the 1970s onwards. In our data, which will not be able to resolve the complex coastal and orographic features of the region, we can see that the 5-year running mean appears to be increasing from the 1950s, changing to a negative trend after the 1970s. While this clearly disagrees with the Pirazzoli and Tomasin (2003) results from some stations, it is unclear how the variety of different observed behaviours in this complex terrain should combine to produce an aggregate trend on the large scales of the 20CR. In any case, the trends in the 20CR data are extremely slight; the main conclusion from our data should be that interannual variability is vastly more important than any trend for this region over a period as short as 50 years. Finally, we show the time series for the Crete and Eastern Mediterranean region in Fig. 16. In this case, the apparent overall trend is positive. There is again a spike in wind speeds in the early 1940s, and a suggestion that the data prior to the 1950s could be systematically shifted relative to the later period. Another interesting feature is that the early period until around the 1920s shows a slight decrease over time; if we exclude the 1940s spike, this then appears to be followed by a long generally-increasing period until the 1980s, after which the wind speeds have been relatively constant. As before, the uncertainties in the data, both systematic and as seen in the ensemble spread, coupled with the expectation of decadal-scale variability and a time series that is 'only' 14 decades long, mean that it is impossible to know from this data alone how 'real' such a very long oscillation might be. If we allow for systematic shifts in the 1940s and before, the data is consistent with there being no long-term trend, but with decadal-scale variations underlying large interannual variability, as in other regions. What we can say with some certainty, however, is that the wind speeds in this region have been higher since the 1970s than they were in the 1950s and 1960s—with the caveat of there being strong interannual variability. Bett, P.E., Thornton, H.E. & Clark, R.T. Using the Twentieth Century Reanalysis to assess climate variability for the European wind industry. Theor Appl Climatol 127, 61–80 (2017). https://doi.org/10.1007/s00704-015-1591-y Issue Date: January 2017. Keywords: Ensemble Member, North Atlantic Oscillation Index, Wind Speed Data.
Numerical calculation of a quantum field's observables If you're looking for a formulation of QFT that resembles Schrödinger's equation in single-particle QM and that can be solved on an (infinitely fast) computer, here it is: Scalar fields For a single scalar field with Hamiltonian $$ \newcommand{\pl}{\partial} H \sim \int d^3x \Big[\big(\dot\phi(x)\big)^2 + \big(\nabla\phi(x)\big)^2 + V\big(\phi(x)\big)\Big], \tag{1} $$ simply replace continuous space with a finite lattice, treat $\phi(x)$ as a collection of independent real variables (one for each lattice site $x$), and use $$ \dot\phi(x)\propto \frac{\pl}{\pl \phi(x)} \tag{2} $$ normalized so that the (lattice version of the) canonical commutation relation holds. Then the equation of motion in the Schrödinger picture is $$ i\frac{\pl}{\pl t}\Psi[\phi,t] = H\Psi[\phi,t] \tag{3} $$ where the complex-valued state-function $\Psi$ depends on time $t$ and on all of the field variables $\phi(x)$, one for each lattice site. Gauge fields The lattice Schrödinger-functional formulation for gauge fields associates elements of the Lie group (not the Lie algebra!) with each link of the lattice (pair of neighboring sites). The analog of the differential operator (2), corresponding to the time-derivative of a gauge field, is nicely described in section 3.3 of https://arxiv.org/abs/1810.05338. The state-function $\Psi$ is a function of all of these group-valued link variables, and gauge invariance is expressed by a Gauss-law constraint. Fermion fields The lattice Schrödinger-functional formulation for fermion fields is conceptually the easiest of all, because the anticommutativity of fermion fields (Pauli exclusion principle) means that you only have a finite number of possible values associated with each lattice site (instead of, say, a continuous real variable like in the scalar-field case), so the Schrödinger equation (3) is just a gigantic matrix equation in this case. In practice, it's messy. Some complications Of course, there are a few complications:
- Solving a partial differential equation (3) with a ga-jillion independent variables takes an awful lot of computer power, far more than we currently have, unless we settle for compromises like using a lattice with only a handful of sites in each dimension.
- As far as I know, we don't yet know quite how to put chiral non-abelian gauge theories on a lattice. In particular, we don't yet know quite how to put the Standard Model on a lattice. (X-G Wen has suggested that we actually do know how to do it, at least in principle, but I haven't seen it spelled out yet in terms that I understand.) But we know how to put QCD and QED on a lattice, and it's done routinely, subject to the limitation noted in the first bullet above.
- Figuring out which states $\Psi$ represent single-particle states is a difficult problem, not to mention the multi-particle states that you'd need to do scattering calculations or to study bound-state properties (like hydrogen in quantum electrodynamics). There are QFTs in which it's easy, like non-relativistic QFTs and trivial relativistic QFTs, but it's surprisingly difficult in relativistic theories like quantum electrodynamics where an electron is always accompanied by an electric field and an arbitrary number of arbitrarily low-energy photons. This difficulty is related to the fact that observables in QFT are tied to regions of spacetime, not to particles. Particles are phenomena that the theory predicts, not inputs to the theory's definition.
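As a concrete, if tiny, illustration of Eqs. (1)–(3): the sketch below is an added toy example (not part of the original answer). It puts a free scalar field on three lattice sites, discretizes each field variable onto a small grid, builds the Hamiltonian as an explicit matrix, and takes one Schrödinger-picture time step. It assumes Python with NumPy/SciPy, and all parameter choices are arbitrary.

```python
# Toy illustration of Eqs. (1)-(3): a free scalar field on a tiny 1D periodic lattice,
# with the wavefunctional Psi[phi] stored as a vector over the discretized field values.
import numpy as np
from scipy.linalg import eigh

n_sites = 3            # lattice sites (state size grows as m**n_sites, so keep tiny)
m = 13                 # grid points used to discretize each field variable phi(x)
phi_max = 3.0
mass = 1.0             # V(phi) = (1/2) m^2 phi^2

phi = np.linspace(-phi_max, phi_max, m)
dphi = phi[1] - phi[0]

# Single-variable "kinetic" operator -(1/2) d^2/dphi^2 (central differences): this is
# the conjugate-momentum-squared piece, with -i d/dphi playing the role of Eq. (2).
K1 = (-0.5 / dphi**2) * (np.diag(np.ones(m - 1), 1)
                         + np.diag(np.ones(m - 1), -1)
                         - 2.0 * np.eye(m))

def kron_list(mats):
    out = mats[0]
    for a in mats[1:]:
        out = np.kron(out, a)
    return out

I = np.eye(m)
dim = m ** n_sites
H = np.zeros((dim, dim))
for x in range(n_sites):                       # kinetic term at each site
    H += kron_list([K1 if y == x else I for y in range(n_sites)])

# Gradient and potential terms of Eq. (1) are diagonal in the field basis.
grids = np.meshgrid(*([phi] * n_sites), indexing="ij")
diag = np.zeros([m] * n_sites)
for x in range(n_sites):
    xp = (x + 1) % n_sites                     # periodic neighbour
    diag += 0.5 * (grids[xp] - grids[x]) ** 2  # (grad phi)^2
    diag += 0.5 * mass**2 * grids[x] ** 2      # V(phi)
H += np.diag(diag.ravel())

# Diagonalize, then take one Schrodinger-picture step of Eq. (3):
# Psi(t+dt) = exp(-i H dt) Psi(t), done in the energy eigenbasis.
E, V = eigh(H)
psi = V[:, 0].astype(complex)                  # start from the lattice ground state
dt = 0.1
psi = V @ (np.exp(-1j * E * dt) * (V.T @ psi))
print("lowest few energies:", np.round(E[:4], 3))
print("norm after one step:", round(float(np.vdot(psi, psi).real), 6))
```

Even this toy shows the first complication above: the state vector has m**n_sites components, so adding sites or refining the field grid quickly becomes intractable.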
For more detail, I recommend the book Quantum Fields on a Lattice. There are several books about lattice QFT, but this is one of the most comprehensive. Strictly non-relativistic QFT Things are easier in strictly non-relativistic QFT, like the example shown in the question. In that case, the number of particles is fixed, and the model decomposes into separate non-relativistic QM problems for each fixed number of particles. Each one of these separate non-relativistic models has far fewer independent variables, proportional to the number of particles instead of to the number of points in space. Yes, QFT books are unfortunately often vague about the connection to reality until the chapters where they discuss scattering, in my experience. The measurement-related postulates in QFT are the same as in QM but with some difficulties. $| \langle p | \psi \rangle |^2 $ is the probability density of getting a momentum $p$ in an experiment measuring momentum; this is used in the derivation of the scattering matrix. From that, it follows that $\langle \psi | P^\mu | \psi \rangle$ is the expectation value of momentum in a given state, for example. Whether you then put the time dependence in $|\psi \rangle$, $P^\mu$, or both is up to the picture being used (Schrödinger/Heisenberg/interaction). One big difference from QM is that position space doesn't seem to have a satisfactory description in QFT (no Lorentz-invariant position-space state for >1 particle unless you have multi-time arguments). It is often argued that a position-space description is not necessary, with something vague like "no localization possible in QFT". I happen to not believe that is true, but I warn you that my view is not the majority view in that regard.
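As a simple single-particle check of the statement that $|\langle p|\psi\rangle|^2$ is a probability density, the following sketch (an added illustration, not from the answers above; it assumes Python/NumPy and an arbitrary Gaussian wave packet) verifies that the momentum-space density integrates to one and reproduces the expectation value of momentum.

```python
# Numerical check of the measurement statement above, in single-particle QM: for a
# Gaussian wave packet, |<p|psi>|^2 integrates to one and gives <P>. Uses hbar = 1.
import numpy as np

N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

x0, p0, sigma = 0.0, 1.5, 2.0                      # arbitrary packet parameters
psi_x = np.exp(-(x - x0) ** 2 / (4 * sigma**2) + 1j * p0 * x)
psi_x /= np.sqrt(np.sum(np.abs(psi_x) ** 2) * dx)  # normalise in position space

# <p|psi> via FFT; only |psi_p|^2 is used, so the overall phase convention is irrelevant.
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dp = 2 * np.pi / (N * dx)
psi_p = np.fft.fft(psi_x) * dx / np.sqrt(2 * np.pi)

prob_p = np.abs(psi_p) ** 2
print("total momentum-space probability:", round(float(np.sum(prob_p) * dp), 6))  # ~1
print("<P> from |<p|psi>|^2:", round(float(np.sum(p * prob_p) * dp), 4))          # ~p0
```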
European Journal of Sustainable Development Research, 2019 - Volume 3 Issue 4. Performance Study of an Advanced Micro-gasifier Stove with Coconut Shell. D. Sakthivadivel 1 *, P. Ganesh Kumar 2, V. S. Vigneswaran 2, M. Meikandan 3, S. Iniyan 2. 1 School of Mechanical Engineering (SMEC), Vellore Institute of Technology (VIT) University, Vellore, Tamil Nadu, INDIA; 2 Institute for Energy Studies, Department of Mechanical Engineering, CEG campus, Anna University, Chennai, INDIA; 3 Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, INDIA. Article No: em0101, https://doi.org/10.29333/ejosdr/5905, Published Online: 29 Aug 2019.
In this paper, an attempt has been made to study the ACS IES-15 micro-gasifier stove tested with coconut shell. The testing procedure followed to evaluate the performance of the stove is as per the standard protocol WBT 4.2.3, and the results are analysed in terms of thermal efficiency, fire power, specific fuel consumption, turndown ratio and specific energy consumption. It was found that the thermal efficiency of the fixed-bed advanced micro-gasifier cook stove ACS IES-15 is 36.7±0.4%. Experiments have also been conducted to provide data to investigate the performance parameters of the new stove. Notably, the turndown ratio was found to be 3.3, indicating better control of combustion in the new stove. Economic analysis of the stove reveals a better payback period for coconut shell.
Keywords: micro-gasifier stove, coconut shell, thermal efficiency, fire power, turndown ratio

Biomass is one of the predominant renewable energy resources existing all over the world. It plays a vital role in meeting the energy demand of developing countries. Biomass from agricultural waste and woody constituents is widely used as feedstock for energy (Pasangulapati et al., 2012). Almost 3 billion people (nearly 40% of the world's population) depend on the traditional use of biomass for cooking, and about half of these people live in developing countries and regions such as India, Brazil and Africa. Burning of biomass releases heat as well as significant amounts of emissions in terms of particulate matter (PM) and carbon dioxide (CO2), with incomplete combustion sometimes leading to carbon monoxide (CO) emissions. Arbex et al. (2017) described how these biomass emissions create a substantial health risk and also have a significant impact on climate change. Incomplete combustion of biomass fuels results in the emission of toxic smoke, reduced combustion efficiency and poor heat transfer rates (Tryner, 2016). Thus, traditional biomass cook stoves operate at significantly lower thermal efficiency (~16%), as reported by Raman et al. (2014), and emit more pollutants than LPG- and kerosene-based stoves, as described by Mukunda et al. (1988). Roth argues that the advanced cook stove (ACS) is the most viable option to replace traditional cook stoves, which suffer from lower thermal efficiency and incomplete combustion (Raman et al., 2013a, 2013b; Roth, 2011). Advanced forced-draft biomass cookstoves perform well owing to their increased heat transfer rate as well as higher combustion efficiency (Tryner, 2016). Fuel savings can be accomplished by ACS stoves compared with improved (ICS) and traditional (TCS) cook stoves in terms of efficiency and specific fuel consumption, as indicated by Rubab and Kandpal (1996) and Balakumar et al. (2015). A commercial version of the Oorja cook stove was designed by Mukunda et al. (2010) with an efficiency of about 50%, and Varunkumar (2012) subsequently conducted a detailed analysis of the Oorja stove by maintaining a constant air-fuel ratio for varying air flow rates. Alders (2007) and M/s Philips developed a forced-draft cook stove, and some experiments on it have also been reported by Raman et al. (2013a). In this paper a novel advanced micro-gasifier cook stove (called ACS IES-15) is proposed, with an optimally tilted secondary air injection of 45º into the combustion chamber (Sakthivadivel and Iniyan, 2017; Sakthivadivel et al., 2017, 2019). In this study a new approach is introduced in order to achieve a higher fuel burning rate, higher fire power and lower specific fuel consumption. The thermal efficiency, specific fuel consumption, fire power and turndown ratio of the newly developed ACS IES-15 cook stove are presented in detail. Furthermore, the economic analysis of the ACS IES-15 stove is elaborated.

DESIGN AND EXPERIMENTAL SETUP

Biomass gasification is the conversion of solid biomass fuel into combustible gases such as CO, H2 and CH4 by thermochemical conversion in the presence of limited oxygen and of the hydrocarbons in the fuel (Hindsgaul et al., 2000). The fabrication of the ACS IES-15 stove using low-cost materials is as per the theoretical design presented in Table 1 (Panwar and Rathore, 2008; Sakthivadivel and Iniyan, 2018a, 2018b, 2019).

Table 1. Design parameters of the ACS IES-15 stove: heat required for cooking the food (Qfd); efficiency of the cook stove (Raman et al.
2013a); total heat energy required from the fuel (Qf); total heat energy needed (Qn, kJ/hr); density of the fuel (ρ); calorific value of the fuel (MJ/kg); fuel consumption rate (FCR, kg/hr); size of the combustion chamber (Vcc); diameter and height of the combustion chamber: Dcc = 11.27 cm, Hcc = 15.06 cm.

Specific Fuel Consumption (SFC)

The total amount of fuel required to perform the cooking process of boiling water in the WBT 4.2.3 test is called the specific fuel consumption. It can be represented in a simplified equation given by Raman et al. (2013a) as follows:

\[SFC = \frac{\left[\frac{75}{T_{boil} - T_{start}}\right] \times \left[Mass_{mw} \times (1 - MC) - Mass_{fwe}\right] - 1.5 \times Mass_{char}}{Mass_{water\ remaining}}\] (1)

where the mass of fuel wood used to vaporise the water can be written as:

\[Mass_{fwe} = \frac{\left[Mass_{mw} \times MC \times 4.186 \times (T_{boil} - T_{room})\right] + 2257}{NCV_{fuel}}\] (2)

Specific fuel consumption (kg) is the amount of fuel required to boil (or simmer) 1 kg of water. The factor of 75 is the standard temperature increase from the starting temperature to the local boiling temperature.

Thermal Efficiency

Thermal efficiency is a measure of the heat liberated by the fuel and subsequently transferred to the water in the cooking vessel; the rest of the energy is wasted into the atmosphere (Lizette et al., 2018; WBT, 2014). The formula used to calculate the thermal efficiency is given in equation (3):

\[\eta_{th} = \frac{\left[4.186 \times (P_{wi} - P_{wf}) \times (T_{wf} - T_{wi})\right] + (2257 \times W_{v})}{f_{wd} \times NCV_{fuel}}\] (3)

Characterisation of the Fuel

The developed ACS IES-15 stove was tested at the Institute for Energy Studies (IES), Anna University, Chennai, Tamil Nadu. The performance test was conducted using coconut shell as fuel. Coconut shell was taken as the fuel for this study because it delivers higher fire power (W) than other solid biomass fuels. The local name of coconut shell sold in the fuel wood market in the state is 'Kottankuchi or Thotti or Serattai'. The cost of coconut shell is approximately ₹5000 per ton (Raman et al., 2013a). The proximate and ultimate analysis of the coconut shell fuel used in this experiment is presented in Table 2. Based on the selection of the species for combustion, the proximate and ultimate analysis of the fuel was carried out following the procedures specified by the ASTM standards (refer to Table 2).

Table 2. Physical and thermal properties of coconut shell fuel (a)
Size (cm3): 7.5 × 4.1 × 0.2
Bulk density (kg/m3)
GCV (MJ/kg): 17.37±1.2 (ASTM E711-87)
Moisture content (%): 10±0.01
Volatile matter (%): 72.05±0.85
Ash content (%) (ASTM D1102-84)
Fixed carbon (%): by difference
Carbon (%)
Hydrogen (%): 5.51±0.02
Nitrogen (%)
Oxygen (%)
Sulphur (%)
(a) The mean value ± standard deviation for three determinations.

The proximate analysis was carried out using a muffle furnace and weighing balance, while the ultimate analysis was performed with an organic elemental analyser (Thermo Scientific Flash 2000). The oxygen content was established by the difference method as proposed by the ASTM standard. The calorific value was estimated by igniting a 1 g pelletized sample in an oxygen bomb calorimeter under adiabatic conditions. The results shown in Table 2 were obtained by conducting the experiments in the laboratory on a dry basis.
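As a minimal illustration of how the WBT metrics defined in equations (1)-(3) can be evaluated, the sketch below implements them exactly as written above. All numerical inputs are placeholders rather than measured values from this study, the assumed NCV of 16 kJ/g is only indicative, and the firepower helper simply divides the fuel energy released by the burn duration, which is how WBT-style firepower is commonly computed.

# Sketch of the WBT 4.2.3 performance metrics, implemented as written above.
# All inputs are illustrative placeholders: masses in g, temperatures in deg C, NCV in kJ/g.

def mass_fuel_to_evaporate(mass_mw, mc, t_boil, t_room, ncv_fuel):
    # Equation (2): fuel-wood equivalent associated with the moisture in the wood.
    return (mass_mw * mc * 4.186 * (t_boil - t_room) + 2257.0) / ncv_fuel

def specific_fuel_consumption(t_boil, t_start, mass_mw, mc, mass_fwe, mass_char, mass_water_remaining):
    # Equation (1): temperature-corrected fuel used per unit of water remaining in the pot.
    corrected = (75.0 / (t_boil - t_start)) * (mass_mw * (1.0 - mc) - mass_fwe)
    return (corrected - 1.5 * mass_char) / mass_water_remaining

def thermal_efficiency(p_wi, p_wf, t_wf, t_wi, w_v, f_wd, ncv_fuel):
    # Equation (3): heat gained by the water (sensible + latent) over the fuel energy released.
    return (4.186 * (p_wi - p_wf) * (t_wf - t_wi) + 2257.0 * w_v) / (f_wd * ncv_fuel)

def fire_power_watts(burn_rate_g_per_min, ncv_kj_per_g):
    # Firepower as the rate of fuel energy release, returned in watts.
    # e.g. 19.8 g/min at an assumed 16 kJ/g gives about 5280 W (roughly 5.3 kW).
    return burn_rate_g_per_min * ncv_kj_per_g * 1000.0 / 60.0

print(fire_power_watts(19.8, 16.0))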
The secondary air inlet in the combustion chamber is tilted to an angle of 45º to ensure better turbulence during volatile combustion and even during the char burning mode (Sakthivadivel et al., 2017; Sakthivadivel and Iniyan, 2018a, 2018b).

EQUIPMENT AND INSTRUMENTATION OF THE ACS IES-15 STOVE

Combustion Chamber

The secondary air injection profile is modified as per the design in the newly developed combustion chamber (Sakthivadivel et al., 2017). The material used for fabricating the combustion chamber is carbon steel. A carbon steel sheet of thickness 2 mm is bent into a circular shape and the two ends are joined together by welding. The inner cylinder is formed in the same way, and the two cylinders are arranged concentrically and joined using another strip of metal by welding. A grate is placed 10 cm from the base of the combustion chamber to hold the fuel. The primary and secondary air inlets, of diameter 0.3 and 0.4 cm respectively, are drilled in the combustion chamber. The thermal lining material is prepared from a mixture of vermiculite (93%), glass wool (2%) and cement (5%) by weight, and this composite insulation is filled into the gap between the two layers of the combustion chamber. The thermal conductivity of this mixture was determined to be about 0.0472 W/m.K (Sakthivadivel and Iniyan, 2018a) by conducting a thermal conductivity test as directed by the BIS standard IS 9489 (BIS, 2015). Care is taken to make sure that there are no obstructions due to the thermal lining material in the primary and secondary air paths. Subsequently, the insulated combustion chamber is ready to be fitted into the stove body. The head of the stove is removed and the combustion chamber is placed inside to complete the arrangement of the ACS IES-15 stove. Figure 1 shows the schematic view of the combustion chamber and the experimental setup.

Figure 1. (a) Schematic view of the experimental setup; (b) combustion chamber.

Once the gasifier stove is ignited by spreading kerosene on top of the fuel bed, gasification and combustion begin and the flame front propagates continuously into the bed, sustained by the heat released from the volatile gas reactions and char oxidation. The whole process comprises sub-stoichiometric high-temperature oxidation and reduction reactions between the solid biomass fuel and air (the oxidant). The high-temperature combustible producer gases are burnt at the top of the fuel bed with an excess air (secondary air) supply. The performance parameters and the thermal efficiency are calculated using the WBT 4.2.3 standard protocol. A mercury column thermometer was used to measure the water temperature. A digital weighing balance was used to measure the amounts of water and wood spent during the water boiling experiment. The accuracy of the digital weighing balance used in the experiments is 5 g. All the values related to the performance parameters of the cook stove, i.e. fuel burning rate, fire power, specific fuel consumption, specific energy consumption and turndown ratio, for the three testing phases (cold start, hot start and simmering) are presented in Table 3.

Table 3. Performance of the ACS IES-15 stove with coconut shell fuel for the cold start, hot start and simmering phases: fuel burning rate (g/min), efficiency (%), fire power (W), specific fuel consumption (g/L) and turndown ratio (TDR).

Fuel Burning Rate

It can be observed from Table 3 that the fuel burning rate (FBR) during the high-power phases is comparatively higher than during the simmering phase.
Although the FBR during the low-power phase is lower than during the high-power phases, the duration of the simmering phase is much longer than that of the high-power phases. Therefore the total energy consumed during the simmering phase is much higher than during the high-power phases. Table 3 shows that there is a considerable variation in the burning rate during cold start and hot start, but in the simmering phase it is almost constant for all replicated tests. The reason is the regulated air supply into the combustion chamber for better combustion, as discussed by Raman et al. (2013a). Varunkumar (2012) established that FBR increases with a decrease in the ratio of combustion to gasification flow rates. In that work, in order to maintain the stoichiometric condition, the primary air was increased without changing the total flow when the transition to char mode occurred. A significant parameter of this mode of operation is preserving the ratio between the amount of combustible producer gases and the primary air supplied for gasification. Here, an attempt is made to achieve the stoichiometric condition without changing the air flow rates. Hence, the proper supply of secondary air leads to the higher burning rate and firepower shown in Table 3. The average fuel burning rate using coconut shell as fuel for the ACS stove is about 19.8 g/min. The burning rate of the fuel varies with the calorific value of the fuel used and with the way air is injected into the combustion chamber.

Thermal Efficiency of the Stoves

When coconut shell is used as the combustible fuel, the thermal efficiency of the ACS stove is found to be about 36.7±0.4% after conducting three replicated tests. The temperature of the water is continuously monitored using a thermometer during all three phases of the WBT test. The time taken to boil 5 liters of water is 13 min for the ACS stove during the cold start condition, whereas during the hot start the ACS stove takes about 11 min to boil 5 liters of water. Finally, the temperature of the water is maintained between 95-97 ºC for 45 min during the simmering phase, as suggested by WBT 4.2.3.

Specific Fuel Consumption

The specific fuel consumption of the ACS IES-15 cook stove during the high-power (cold start and hot start) and low-power (simmer) phases is shown in Table 3. The average SFC of the ACS IES-15 cook stove was 71.7 g/L. During cold start and hot start the cook stove consumed the same amount of fuel, unlike in the simmering phase. During cold and hot start, more than 75% of the combustion chamber is loaded. The hot inner surface of the combustion chamber, together with uniform air-fuel mixing due to turbulence, results in a better fuel burning rate and firepower and provides a low SFC. Since the combustion chamber is only half loaded in the simmer phase, the 45° air injection and the hot inner surface of the combustion chamber provide a better SFC (Sakthivadivel and Iniyan, 2017). Meanwhile, this leads to an increase in specific fuel consumption, higher efficiency and low fire power. The firepower of the cook stove increases with an increase in the calorific value of the fuel. This increase in firepower is due to an increase in the fuel burning rate and, as a result, an increase in temperature inside the combustion chamber. The flame temperature of an advanced micro-gasifier stove ranges between 800–1000 °C, whereas the flame temperature of a conventional cook stove is in the range of 700–800 °C (L'Orange, Volckens and De-Foort, 2012).
Hence, if the flame temperature of the stove increases, the heat transfer rate increases, and a higher efficiency is achieved for the advanced micro-gasifier stove than for improved and traditional stoves. In this study, coconut shell is taken as the fuel since it has a higher calorific value than the other solid biomass fuels available in the market. Burning coconut shell delivers more firepower in the TERI SPT-0610 cook stove than in the Philips and Oorja Plus stoves, as discussed by Raman et al. (2013a, b). From Table 3, it is evident that the ACS IES-15 cook stove delivers more firepower in both the high-power and low-power phases. As a result, the average thermal efficiency of the ACS stove is higher than the limit prescribed by MNRE (more than 35%) and also compares well with the TERI SPT-0610 (Raman et al., 2013a). Meanwhile, there is a significant improvement in efficiency in the low-power as well as the high-power tests of the ACS stove (refer to Table 3). During the simmering phase, the temperature of the water should be maintained between 95-97 °C, so the fuel feeding is limited to half and only about 50% of the combustion chamber (by volume) is filled with fuel. Due to the 45° secondary air supply in the ACS IES-15 stove, easy mixing of air and producer gas is achieved on top of the fuel bed. The proper supply of air and producer gas forms a uniform combustible mixture that provides clean combustion with high efficiency (Sakthivadivel and Iniyan, 2017).

Turndown Ratio

The turndown ratio is a measure related to the fuel saving achievable under real cooking conditions. Raman, Ram and Gupta (2014) report that a higher value of TDR indicates a higher ratio of high power to low power, and could indicate a greater range of power control in the stove. The duration of the simmering phase is nearly four times that of the hot-start phase. The ACS IES-15 stove thus combines a high thermal efficiency (36.7±1%) with a high TDR (3.3). However, the value of TDR only reflects the power control of the cook stove.

Specific Energy Consumption

The specific energy consumption of the developed cook stove was evaluated by observing the amount of fuel consumed during the three phases of WBT 4.2.3. It can be seen from Figure 2 that a high specific fuel consumption is found during the simmering phase, due to the constant energy supply needed to maintain the temperature of the water between 95-97 °C. Figure 2 also shows that the total energy consumed during the high-power phases is very low when compared to the low-power phase. The reason is that constant power is delivered during the simmer phase for a duration four times longer than the cold start and hot start.

Figure 2. The specific energy consumption of the developed cook stove.

Economic analysis is one of the major considerations when comparing different cooking options. The economic analysis used in this section is based on the data obtained during real cooking conditions; some data, such as fuel prices, are taken from the previous studies of different authors cited below. In this study, only the simple payback period (SPP) is used for the microeconomic analysis. The main objective of computing the SPP is to predict the time period required for the return on the investment to repay the cumulative sum of the actual outlay. A higher level of profitability can be expected if the SPP is shorter. The following equation (4) is used to calculate the SPP:

\[SPP = \frac{(C_{\text{acs}} - C_{\text{tcs}})}{S_{t}}\] (4)

Table 4.
Economics of the stove: investment cost of the ACS stove; investment cost of the TCS stove; cost of coconut shell ($/kg) (Raman et al., 2013a); fuel required for a family of four members, ACS stove (kg/month) and TCS stove (kg/month); simple payback period.

CONCLUSIONS

This study illustrates the performance of the newly developed ACS IES-15 stove, analysed on the basis of thermal efficiency, firepower, specific fuel consumption, turndown ratio and specific energy consumption. The important findings obtained from the study are:
The thermal efficiency of the ACS IES-15 stove is 36.7±0.4%, with a simple payback period of 4 months.
The ACS IES-15 delivers more firepower during the cold start and hot start phases than during the simmer phase for coconut shell fuel.
The specific fuel consumption of the simmering phase is much higher than that of the cold start and hot start, due to the constant energy supply required to maintain the suggested temperature.
Therefore the stove offers good robustness and low cost, and meets domestic energy requirements.

NOMENCLATURE
Qn - Energy needed (MJ h−1)
T - Duty hour
GCV - Gross calorific value (MJ kg−1)
NCV - Net calorific value (MJ kg−1)
FCR - Fuel consumption rate (kg h−1)
ηg - Gasification efficiency
Dcc - Reactor diameter (cm)
Hcc - Reactor height (cm)
SGR - Specific gasification rate (kg m−2 h−1)
ρwood - Wood density (kg m−3)
Masschar - Mass of the charcoal remaining after conducting the test
Massfwe - Mass of the fuel wood used to evaporate water
Massmw - Mass of the moist wood
Masswater - Mass of water remaining in the pot at the end of the test
Mwater,i - Initial mass of water with pot (grams)
Mwater,f - Final mass of water with pot (grams)
MC - Mass fraction of moisture content of the fuel on a wet basis
Tboil - Local boiling temperature of water (ºC)
Troom - Air temperature in the room (ºC)
Tstart - Starting temperature of the water (ºC)
Twi - Water temperature before the test (ºC)
Twf - Water temperature after the test (ºC)
Wv - Mass of water vaporized (grams)
Cacs - Investment cost of an ACS (₹)
Ctcs - Investment cost of a TCS (₹)
St - Saving in spending on fuel wood during period t (₹/month)

REFERENCES
Alders, J. (2007). The Philips Woodstove. ETHOS Conference, Kirkland. Available at: http://ethoscon.com/pdf/ETHOS/ETHOS2007/Sat_PM/Session_4/Alders_Philips_Woodstovev3.pdf (Accessed on 23.01.2017)
Arbex, M. A., Martins, L. C., de Oliveira, R. C., Pereira, L. A. M., Arbex, F. F. and Cancado, J. E. D. (2017). Air pollution from biomass burning and asthma hospital admissions in a sugar cane plantation area, Brazil. Journal of Epidemiology and Community Health, 61(5), 395-400. https://doi.org/10.1136/jech.2005.044743
Balakumar, A., Sakthivadivel, D., Iniyan, S. and Jyothi Prakash, E. (2015). Experimental Evaluation of a Forced Draft Micro Gasifier Cook Stove using Juliflora Wood and Coconut Shell. Journal of Chemical and Pharmaceutical Sciences, 1(7), 178-181.
Bilsback, K. R., Eilenberg, S. R., Good, N., Heck, L., Johnson, M., et al. (2018). The Firepower Sweep Test: A novel approach to cookstove laboratory testing. Indoor Air, 28(6), 936-949. https://doi.org/10.1111/ina.12497
BIS. (2015). Method of test for thermal conductivity of thermal insulation materials by means of heat flow meter (First Revision of IS 9489). Bureau of Indian Standards.
Chetan Singh, S. (2009). Renewable Energy Technologies - A Practical Guide to Beginners. PHI Learning Private Limited, New Delhi, pp. 60-68.
Hindsgaul, C., Schramm, J., Gratz, L., Henriksen, U. and Dall Bentzen, J. (2000). Physical and chemical characterization of particles in producer gas from wood chips. Bioresource Technology, 73, 147-155.
https://doi.org/10.1016/S0960-8524(99)00153-4
L'Orange, C., Volckens, J. and De-Foort, M. (2012). Influence of stove type and cooking pot temperature on particulate matter emissions from biomass cook stoves. Energy for Sustainable Development, 16, 448-455. https://doi.org/10.1016/j.esd.2012.08.008
Mukunda, H. S., Dasappa, S., Paul, P. J., Yagnaraman, M., Kumar, D. R. and Deogaonkar, M. (2010). Gasifier stove - science, technology and outreach. Current Science, 98(5), 627-638.
Mukunda, H. S., Shrinivasa, U. and Dasappa, S. (1988). Portable single-pan wood stoves for high efficiency. Sadhana, 13, 237-270. https://doi.org/10.1007/BF02759888
Panwar, N. L. and Rathore, N. S. (2008). Design and performance evaluation of a 5 kW producer gas stove. Biomass and Bioenergy, 32, 1349-1352. https://doi.org/10.1016/j.biombioe.2008.04.007
Pasangulapati, V., Ramachandriya, K. D., Kumar, A., Wilkins, M. R., Jones, C. L. and Huhnke, R. L. (2012). Effects of cellulose, hemicellulose and lignin on thermochemical conversion characteristics of the selected biomass. Bioresource Technology, 114, 663-669. https://doi.org/10.1016/j.biortech.2012.03.036
Raman, P., Murali, J., Sakthivadivel, D. and Vigneswaran, V. S. (2013a). Performance evaluation of three types of forced draft cook stoves using fuel wood and coconut shell. Biomass and Bioenergy, 49, 333-340. https://doi.org/10.1016/j.biombioe.2012.12.028
Raman, P., Murali, J., Sakthivadivel, D. and Vigneswaran, V. S. (2013b). Evaluation of Domestic Cookstove Technologies Implemented across the World to Identify Possible Options for Clean and Efficient Cooking Solutions. Journal of Energy and Chemical Engineering, 1(1), 15-26.
Raman, P., Ram, N. K. and Gupta, R. (2014). Development, design and performance analysis of a forced draft clean combustion cook stove powered by a thermoelectric generator with multi-utility options. Energy, 69, 813-825. https://doi.org/10.1016/j.energy.2014.03.077
Raman, P., Ram, N. K. and Murali, J. (2014). Improved test method for evaluation of bio-mass cook-stoves. Energy, 71, 479-495. https://doi.org/10.1016/j.energy.2014.04.101
Roth, C. (2011). Micro gasification: cooking with gas from biomass. 1st ed. Eschborn: GIZ HERA - Poverty-Oriented Basic Energy Service, pp. 100.
Rubab, S. and Chandra Kandpal, T. (1996). Biofuel mix for cooking in rural areas: Implications for financial viability of improved Cookstoves. Bioresource Technology, 56, 169-178. https://doi.org/10.1016/0960-8524(96)00015-6
Sakthivadivel, D. and Iniyan, S. (2017). Combustion characteristics of biomass fuels in a fixed bed micro-gasifier cook stove. Journal of Mechanical Science and Technology, 31(2), 995-1002. https://doi.org/10.1007/s12206-017-0152-y
Sakthivadivel, D. and Iniyan, S. (2018a). Characterization, density and size effects of fuels in an advanced micro-gasifier stove. Biofuels, 31(2), 995-1002. https://doi.org/10.1007/s12206-017-0152-y
Sakthivadivel, D. and Iniyan, S. (2018b). Experimental design and 4E (Energy, Exergy, Emission and Economical) analysis of a fixed bed advanced micro-gasifier stove. Environmental Progress and Sustainable Energy, 37(6), 2139-2147. https://doi.org/10.1002/ep.12882
Sakthivadivel, D. and Iniyan, S. (2019). Computational modeling and performance evaluation of an advanced micro-gasifier cookstove with optimum air injection. Biofuels. https://doi.org/10.1080/17597269.2019.1573606
Sakthivadivel, D., Balakumar, A. and Iniyan, S. (2017). Development of Advanced Cook Stove with Optimum Air Mixture using CFD.
Asian Journal of Research in Social Sciences and Humanities, 7(2), 384-392. https://doi.org/10.5958/2249-7315.2017.00097.1
Tryner, J. (2016). Combustion phenomena in biomass gasifier cookstoves. Colorado State University, pp. 201.
Varunkumar, S. (2012). Packed bed gasification-combustion in biomass based domestic stoves and combustion systems. Indian Institute of Science.
WBT. (2014). The Water Boiling Test Version 4.2.3: Cookstove Emissions and Efficiency in a Controlled Laboratory Setting, pp. 84.

This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The sensitivity of global temperature to CO2
September 13, 2011 / geoenergymath

I have been doing exploratory analysis with the CO2 perturbation data at the Wood For Trees data repository, and the results are very interesting. The following graph summarizes the results: rapid changes in CO2 levels track with global temperatures, with a variance reduction of 30% if d[CO2] derivative terms are included in the model. This increase in temperature is not caused by the temperature of the introduced CO2, but is likely due to a reduced capacity of the biosphere to take up the excess CO2.

This kind of modeling is very easy if you have any experience with engineering controls development. The model is of the type called Proportional-Derivative, and it essentially models a first-order equation

$$\Delta T = k[CO_2] + B \frac{d[CO_2]}{dt}$$

The key initial filter you have to apply is to average the Mauna Loa CO2 data over an entire year. This gets rid of the seasonal changes and the results just pop out. The numbers I used in the fit are B = 1.1 and k = 0.0062; the time units are months. The B coefficient is large because the CO2 impulse response has a steep downslope which then tails off:

$$ d[CO_2] \sim \frac{1}{\sqrt{t}}$$

Is this a demonstration of causality, +CO2 => +Temperature? If another causal chain supports the change in CO2, then likely. We have fairly good records of fossil fuel (FF) emissions over the years. The cross-correlation of the yearly changes, d[CO2] and d[FF], shows a zero-lag peak with a significant correlation (below right). The odds of this happening if the two time series were randomized are about 1 out of 50. Left chart from Detection of global economic fluctuations in the atmospheric CO2 record. This is not as good a cross-correlation as for the d[CO2] and dTemperature data (look at year 1998 in particular), but the zero-lag correlation is clearly visible in the chart. This is the likely causality chain:

$$d[FF] \longrightarrow d[CO_2] \longrightarrow dTemperature$$

If it were the other way around, an increase in temperature would have to lead to both CO2 and carbon emission increases independently. The CO2 increase could happen because of outgassing feedbacks (CO2 in the oceans is actually increasing despite the outgassing), but I find it hard to believe that the world economy would increase FF emissions as a result of a warmer climate.

What happens if there is a temperature forcing of CO2? With the PD model in place, the cross-correlation of d[CO2] against Temperature looks like the following. The Fourier Transform set shows that the two curves have the same slope in the spectrum and just a scale shift; the upper curve is the ratio between the two curves and is essentially level. The cross-correlation has zero lag and a strong correlation of 0.9. The model again is

$$ \Delta T = k[CO_2] + B \frac{d[CO_2]}{dt}$$

The first term is a Proportional term and the second is the Derivative term. I chose the coefficients to minimize the variance between the measured Temperature data and the model based on [CO2]. In engineering this is a common formulation for a family of feedback control algorithms called PID control (the I stands for integral). The question is what is controlling what. When I was working with vacuum deposition systems we used PID controllers to control the heat of our furnaces.
The difference is that in that situation the roles are reversed, with the process variable being a temperature reading off a thermocouple and the forcing function being the power supplied to a heating coil as a PID combination of T. So it is intuitive for me to immediately think that the [CO2] is the error signal, yet that gives a very strong derivative factor which essentially amplifies the effect. The only way to get a damping factor is by assuming that Temperature is the error signal, and then we use a Proportional and an Integral term to model the [CO2] response, which would then give a similar form and likely an equally good fit. It is really a question of causality, and the controls community has a couple of terms for this. There is the aspect of Controllability and that of Observability (due to Kalman).

Controllability: In order to be able to do whatever we (Nature) want with the given dynamic system under control input, the system must be controllable.
Observability: In order to see what is going on inside the system under observation, the system must be observable.

So it gets to the issue of two points of view:
1. The people that think that CO2 is driving the temperature changes have to assume that nature is executing a Proportional/Derivative controller on observing the [CO2] concentration over time.
2. The people that think that temperature is driving the CO2 changes have to assume that nature is executing a Proportional/Integral controller on observing the temperature change over time, and that the CO2 is simply a side effect.

What people miss is that it can potentially be a combination of the two effects. Nothing says that we can't model something more sophisticated like this:

$$ c \Delta T + M \int {\Delta T}dt = k[CO_2] + B \frac{d[CO_2]}{dt}$$

The Laplace transfer function Temperature/CO2 for this is:

$$ \frac {s(k + B s)}{c s + M} $$

Because of the s in the numerator, the derivative still dominates, but the other terms can modulate the effect. This blog post did the analysis a while ago; the image below is fascinating because the overlay between the dCO2 and Temperature anomaly matches to the point that the noise even looks similar. This doesn't go back to 1960. If CO2 does follow Temperature, due to the Clausius-Clapeyron and Arrhenius rate laws, a positive feedback will occur, as the released CO2 will provide more GHG which will then potentially increase temperature further. It is a matter of quantifying the effect. It may be subtle or it may be strong. From the best cross-correlation fit, the perturbation is either around (1) 3.5 ppm change per degree change in a year or (2) 0.3 degree change per ppm change in a year. (1) makes sense as a Temperature forcing effect, as the magnitude doesn't seem too outrageous and would work as a perturbation playing a minor role in the 100 ppm change in CO2 that we have observed in the last 100 years. (2) seems very strong in the other direction as a CO2 forcing effect. You can understand this if we simply made a 100 ppm change in CO2: then we would see a 30 degree change in temperature, which is pretty ridiculous, unless this is a real quick transient effect as the CO2 quickly disperses to generate less of a GHG effect. Perhaps this explains why the dCO2 versus Temperature data has been largely ignored. Even though the evidence is pretty compelling, it really doesn't further the argument on either side. On the one side interpretation #1 is pretty small, and on the other side interpretation #2 is too large, so #1 may be operational.
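For concreteness, here is a minimal sketch of the kind of least-squares fit described above for the Proportional-Derivative model, assuming the yearly-averaged CO2 series and the temperature anomaly series have already been prepared as equal-length monthly arrays (the file names below are placeholders, not actual data sources):

import numpy as np

# Placeholder inputs: yearly-averaged Mauna Loa CO2 (ppm) and temperature anomaly (deg C),
# both sampled monthly over the same period.
co2 = np.loadtxt("co2_yearly_averaged.txt")
temp = np.loadtxt("temperature_anomaly.txt")

dco2 = np.gradient(co2)  # d[CO2]/dt in ppm per month

# Fit dT = k*[CO2] + B*d[CO2]/dt + const by ordinary least squares.
X_pd = np.column_stack([co2, dco2, np.ones_like(co2)])
coef_pd, *_ = np.linalg.lstsq(X_pd, temp, rcond=None)
k, B, c0 = coef_pd

# Compare against a proportional-only fit to see how much variance the derivative term removes.
X_p = np.column_stack([co2, np.ones_like(co2)])
coef_p, *_ = np.linalg.lstsq(X_p, temp, rcond=None)
resid_pd = temp - X_pd @ coef_pd
resid_p = temp - X_p @ coef_p
print(f"k = {k:.4f}, B = {B:.2f}")
print(f"variance reduction from the derivative term: {1 - np.var(resid_pd) / np.var(resid_p):.0%}")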
One thing I do think this helps with is providing a good proxy for differential temperature measurements. There is a baseline increase of Temperature (and of CO2), and accurate dCO2 measurements can predict at least some of the changes we will see beyond this baseline. Also, and this is far out, but if #2 is indeed operational, it may give credence to the theory that we may be seeing the modulation of global temperatures over the last 10 years because of a plateauing in oil production. We will no longer see huge excursions in fossil fuel use as it gets too valuable to squander, and so the big transient temperature changes from the baseline no longer occur. That is just a working hypothesis. I still think that understanding dCO2 against Temperature will aid in making sense of what is going on. As a piece in the jigsaw puzzle it seems very important, although it manifests itself only as a second-order effect on the overall trend in temperature. In summary, as a feedback term for Temperature driving CO2 this is pretty small, but if we flip it and say it is 3.3 degrees change for every ppm change of CO2 in a month, it looks very significant. I think that order-of-magnitude effect more than anything else is what is troubling.

One more plot of the alignment. For this one, the periodic portion of the d[CO2] was removed by incorporating a sine wave with an extra harmonic and averaging that with a kernel function for the period. This is the Fourier analysis, with t = time starting from the beginning of the year:

$$ 2.78 \cos(2 \pi t - \theta_1) + 0.8 \cos(4 \pi t - \theta_2) $$
$$ \theta_1 = 2, \qquad \theta_2 = -0.56 $$

where the phase shifts are in radians. The yearly kernel function is calculated from this awk function:

# Read the monthly d[CO2] values (one per line) into an array.
{ n[I++] = $1 }
END {
    Groups = int(I / 12)
    ## Kernel function: average each calendar month over all years.
    for (i = 0; i <= 12; i++) {
        x = 0
        for (j = 0; j < Groups; j++) {
            x += n[j*12 + i]
        }
        G[i] = x / Groups
    }
    ## Remove the linear year-over-year trend from the 12-month kernel.
    Scale = G[12] - G[0]
    for (i = 0; i <= 12; i++) {
        Y[i] = (G[i] - G[0]) - i*Scale/12
    }
    ## Subtract the detrended kernel from every month to strip the seasonal cycle.
    for (j = 0; j < Groups; j++) {
        for (i = 0; i < 12; i++) {
            Diff = n[j*12 + i] - Y[i]
            print Diff
        }
    }
}

This was then filtered with a 12-month moving average. It looks about the same as the original one from Wood For Trees, with the naive filter applied at the source, and it has the same shape for the cross-correlation. Here it is in any case; I think the fine structure is a bit more apparent (the data near the end points is noisy because of how I applied the moving average).

How can the derivative of CO2 track the temperature so closely? My working theory assumes that new CO2 is the forcing function. An impulse of CO2 enters the atmosphere and it creates an impulse response function over time. Let's say the impulse response is a damped exponential and the atmospheric temperature responds quickly to this profile. The CO2 measured at the Mauna Loa station takes some time to disperse from the original source points. This implies that a smearing function would describe that dispersion, and we can model that as a convolution. The simplest convolution is an exponential with an exponential, as we just need to get the shape right. But what the convolution does is eliminate the strong early impulse response, and thus create a lagged response. As you can see from the Alpha plot below, the way we get the strong impulse back is to take the derivative. What this does is bring the CO2 signal above the sample-and-hold characteristic caused by the fat tail. The lag disappears and the temperature anomaly now tracks the d[CO2] impulses.
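A toy numerical illustration of that argument, with arbitrary time constants and nothing fitted to real data: smear a decaying impulse with an exponential dispersion kernel, and note that while the smeared record peaks later, the derivative of the smeared record peaks right at the start again.

import numpy as np

t = np.arange(0.0, 60.0, 1.0)            # months
impulse = np.exp(-t / 6.0)               # idealized fast response to a CO2 pulse (tau = 6 months, arbitrary)
kernel = np.exp(-t / 12.0)               # dispersion/smearing kernel (tau = 12 months, arbitrary)
kernel /= kernel.sum()

observed = np.convolve(impulse, kernel)[: len(t)]   # lagged, smoothed record, like the measured CO2
recovered = np.gradient(observed)                    # taking the derivative restores the early spike

print("impulse peaks at month", t[np.argmax(impulse)])
print("smeared record peaks at month", t[np.argmax(observed)])
print("derivative of smeared record peaks at month", t[np.argmax(recovered)])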
If we believe that CO2 is a forcing function for Temperature, then this behavior must happen as well; the only question is whether the effect is strong enough to be observable. Considering that the noisy data below is what we started with, and that we had to extract a non-seasonal signal from the green curve, one realizes that detecting that subtle a shift in magnitude is certainly possible.
Proceedings of the Fourteenth International Conference on Geometry, Integrability and Quantization
Fourteenth International Conference on Geometry, Integrability and Quantization, June 8-13, 2012, Varna, Bulgaria
Editors: Ivaïlo M. Mladenov, Andrei Ludu, Akira Yoshioka
Geom. Integrability & Quantization, 14: 272 pp. (2013). Geometry, Integrability and Quantization, Volume 14.
Rights: Copyright © 2013 Institute of Biophysics and Biomedical Engineering, Bulgarian Academy of Sciences. First available in Project Euclid: 13 July 2015.

Harmonic Spheres and Yang-Mills Fields
Armen Sergeev
Geometry, Integrability and Quantization Vol. 14, 11-33 (2013). https://doi.org/10.7546/giq-14-2013-11-33
We study a relation between harmonic spheres in loop spaces of compact Lie groups and Yang-Mills fields on the Euclidean four-space $\mathbb R^4$.

Schrödinger Minimum Uncertainty States of EM-Field in Nonstationary Media with Negative Differential Conductivity
Andrey Angelow, Dimitar Trifonov
Quantization of the electromagnetic field in non-stationary media (linear with respect to $E$, with negative differential conductivity) is investigated. The dynamical invariants and statistical properties of the field are found in such media. It is shown that in the eigenstates of the linear dynamical invariant, the Schrödinger uncertainty relation is minimized. The time evolution of the three independent second-order statistical moments (quantum fluctuations: covariance cov(q,p), var(q) and var(p)) is found.

Summary of a Non-Uniqueness Problem of the Covariant Dirac Theory and of Two Solutions of It
Mayeul Arminjon
We present a summary of: 1) the non-uniqueness problem of the Hamiltonian and energy operators associated, in any given coordinate system, with the generally-covariant Dirac equation, 2) two different ways to restrict the gauge freedom so as to solve that problem, 3) the application of these two ways to the case of a uniformly rotating reference frame in Minkowski spacetime. We find that a spin-rotation coupling term is there only with one of these two ways.

Harmonic Analysis on Lagrangian Manifolds of Integrable Hamiltonian Systems
Julia Bernatska, Petro Holod
For an integrable Hamiltonian system we construct a representation of the phase space symmetry algebra over the space of functions on a Lagrangian manifold. The representation is a result of the canonical quantization of the integrable system using separating variables. The variables are chosen in such a way that half of them parameterizes the Lagrangian manifold, which coincides with the Liouville torus of the integrable system. The obtained representation is indecomposable and non-exponentiated.

f-biharmonic Maps Between Riemannian Manifolds
Yuan-Jen Chiang
We show that if $\psi$ is an $f$-biharmonic map from a compact Riemannian manifold into a Riemannian manifold with non-positive curvature satisfying a condition, then $\psi$ is an $f$-harmonic map. We prove that if the $f$-tension field $\tau_f(\psi)$ of a map $\psi$ of Riemannian manifolds is a Jacobi field and $\phi$ is a totally geodesic map of Riemannian manifolds, then $\tau_f( \phi\circ \psi)$ is a Jacobi field.
We finally investigate the stress $f$-bienergy tensor, and relate the divergence of the stress $f$-bienergy of a map $\psi$ of Riemannian manifolds to the Jacobi field of the $\tau_f (\psi)$ of the map.

Multiparameter Contact Transformations
Bogdana Georgieva
Geometry, Integrability and Quantization Vol. 14, 87-102 (2013). https://doi.org/10.7546/giq-14-2013-87-102
This is a review of multiparameter families of contact transformations and their relationship with the generalized Hamiltonian system. We derive the integrability conditions for the generalized Hamiltonian system and show that when they are satisfied the solutions to this system determine a family of multiparameter contact transformations of the initial conditions. We prove a necessary and sufficient condition for a multiparameter family of contact transformations to be a group, and give a characterization of the function which describes the group multiplication rule.

Some Constraints and Symmetries in Dynamics of Homogeneously Deformable Elastic Bodies
Barbara Gołubowska, Vasyl Kovalchuk, Ewa E. Rożko, Jan J. Sławianowski
Geometry, Integrability and Quantization Vol. 14, 103-115 (2013). https://doi.org/10.7546/giq-14-2013-103-115
Our work has been inspired among others by the work of Arnold, Kozlov and Neihstadt. Our goal is to carry out a thorough analysis of the geometric problems we are faced with in the dynamics of affinely rigid bodies. We examine two models: the classical dynamics description by d'Alembert and the vakonomic one. We conclude that their results are quite different. It is not yet clear which model is practically better.

Green's Function, Wavefunction and Wigner Function of the MIC-Kepler Problem
Tomoyo Kanazawa
The phase-space formulation of nonrelativistic quantum mechanics is constructed on the basis of a deformation of classical mechanics by the $\ast$-product. We have taken up the MIC-Kepler problem, in which Iwai and Uwano have interpreted its wave-function as the cross section of a complex line bundle associated with a principal fibre bundle in the conventional operator formalism. We show that its Green's function, which is derived from the $\ast$-exponential corresponding to a unitary operator through the Weyl application, is equal to the infinite series that consists of its wave-functions. Finally, we obtain its Wigner function.

A Class of Localized Solutions of the Linear and Nonlinear Wave Equations
Lubomir M. Kovachev, Daniela A. Georgieva
Following the tradition of nano- and picosecond optics, basic theoretical studies continue to investigate the propagation of femtosecond and attosecond laser pulses through the corresponding envelope equation for narrow-band laser pulses, working in the paraxial approximation. We should point out here that this approximation is not valid for large-band pulses. In air, due to the small dispersion, the wave equation as well as the $3D+1$ amplitude equation describe the pulse dynamics more accurately. New exact localized solutions of the linear wave and amplitude equations are presented. The solutions reveal non-paraxial semi-spherical diffraction of single-cycle and half-cycle laser pulses and a new class of spherically symmetric solutions of the wave equation. The propagation of large-band optical pulses in nonlinear vacuum is also investigated in the frame of a system of nonlinear wave vector equations. We obtain an exact vector solution with its own angular momentum in the form of a shock wave.
Cylindrical Fluid Membranes and the Evolutions of Planar Curves
Petko Marinov, Mariana Hadzhilazova, Ivaïlo Mladenov
An interesting relation between the mKdV equation and the cylindrical equilibrium shapes of fluid membranes is observed. In our setup mKdV arises from the study of the evolution of planar curves in the normal direction.

Symmetry Properties of the Membrane Shape Equation
Vladimir Pulov, Edy Chacarov, Mariana Hadzhilazova, Ivaïlo Mladenov
Here we consider Helfrich's membrane shape model from a group-theoretical viewpoint. By making use of the conformal metric on the associated surface, the model is represented by a system of four second-order nonlinear partial differential equations. In order to construct the determining system for the symmetries of the metric we rely on the previously developed package LieSymm-PDE within ${\tt Mathematica}^{\!®}$. In this way we have obtained the determining system consisting of 206 equations. Using the above mentioned programs we have solved the equations in a semi-automatic way. As a result we end up with an infinite-dimensional symmetry Lie algebra of Helfrich's model in the conformal metric representation, which we present here in explicit form.

Some Remarks on the Exponential Map on the Groups SO(n) and SE(n)
Ramona-Andreea Rohan
The problem of describing or determining the image of the exponential map $ \exp :\mathfrak{g}\rightarrow G$ of a Lie group $G$ is important and it has many applications. If the group $G$ is compact, then it is well known that the exponential map is surjective, hence the exponential image is $G$. In this case the problem is reduced to the computation of the exponential, and the formulas strongly depend on the group $G$. In this paper we discuss the generalization of the Rodrigues formulas for computing the exponential map of the special orthogonal group ${\rm SO}(n)$, which is compact, and of the special Euclidean group ${\rm SE}(n)$, which is not compact but whose exponential map is surjective, in the case $n\geq 4$.

Conformal Form of Pseudo-Riemannian Metrics by Normal Coordinate Transformations II
Antonio de Siqueira
In this paper, we have reintroduced a new approach to conformal geometry developed and presented in two previous papers, in which we show that all $n$-dimensional pseudo-Riemannian metrics are conformal to a flat $n$-dimensional manifold as well as to an $n$-dimensional manifold of constant curvature when Riemannian normal coordinates are well-behaved at the origin and in its neighborhood. This was based on an approach developed by the French mathematician Elie Cartan. As a consequence of the geometry, we have reintroduced the classical and quantum angular momenta of a particle and present new interpretations. We also show that all $n$-dimensional pseudo-Riemannian metrics can be embedded in a hyper-cone of a flat $(n+2)$-dimensional manifold.

The Dynamics of the Field of Linear Frames and Gauge Gravitation
Jan J. Sławianowski, Agnieszka Martens
The paper is motivated by gauge theories of gravitation and condensed matter, tetrad models of gravitation and generalized Born-Infeld type nonlinearity. The main idea is that any generally-covariant and $\mathrm{GL}(n, \mathbb R)$-invariant theory of the n-leg field (tetrad field when $n=4$) must have the Born-Infeld structure. This means that the Lagrangian is given by the square root of the determinant of some second-order twice covariant tensor built in a quadratic way from the field derivatives.
It is shown that there exist interesting solutions of this group-theoretical structure. Some models of the interaction between gravitation and matter are suggested. It turns out that, in a sense, the space-time dimension $n=4$, the normal-hyperbolic signature and the velocity of light are integration constants of our differential equations.

On Multicomponent Derivative Nonlinear Schrödinger Equation Related to Symmetric Spaces
Tihomir I. Valchev
We study derivative nonlinear Schrödinger equations related to symmetric spaces of the type A.III. We discuss the spectral properties of the corresponding Lax operator and develop the direct scattering problem connected to it. By applying an appropriately chosen dressing factor we derive soliton solutions to the nonlinear equation. We find the integrals of motion by using the method of diagonalization of the Lax pair.

2D Novel Structures Along an Optical Fiber
Charles-Julien Vandamme, Stefan C. Mancas
By using spectral methods, we present seven classes of stable and unstable structures that occur in dissipative media. By varying parameters and initial conditions we find ranges of existence of stable structures (spinning elliptic, pulsating, stationary, organized exploding) and unstable structures (filament, disorganized exploding, creeping). By varying initial conditions, vorticity, and parameters of the equation, we find a richer behavior of solutions in the form of creeping-vortex (propellers), spinning rings and spinning "bean-shape" solitons. Each class is differentiated from the others by distinctive features of its shape and energy evolution, as well as its domain of existence.

Unduloid-Like Equilibrium Shapes of Single-Wall Carbon Nanotubes Under Pressure
Vassil M. Vassilev
In this work, a continuum model is used to determine in analytic form a class of unduloid-like equilibrium shapes of single-wall carbon nanotubes subjected to uniform hydrostatic pressure. The parametric equations of the profile curves of the foregoing shapes are presented in explicit form by means of elliptic functions and integrals.

Expansions Over Adjoint Solutions for the Caudrey-Beals-Coifman System with $\mathbb{Z}_p$ Reductions of Mikhailov Type
Alexander B. Yanovski
We consider the Caudrey-Beals-Coifman linear problem and the theory of the Recursion Operators (Generating Operators) related to it in the presence of $\mathbb{Z}_p$ reductions of Mikhailov type.
Genome-wide association study identifies favorable SNP alleles and candidate genes for frost tolerance in pea

Sana Beji (ORCID: orcid.org/0000-0001-8116-3706)1, Véronique Fontaine1, Rosemonde Devaux2, Martine Thomas2, Sandra Silvia Negro3, Nasser Bahrman1, Mathieu Siol4, Grégoire Aubert4, Judith Burstin4, Jean-Louis Hilbert1, Bruno Delbreil1 & Isabelle Lejeune-Hénaut1

Frost is a limiting abiotic stress for the winter pea crop (Pisum sativum L.), and identifying the genetic determinants of frost tolerance is a major issue for breeding varieties for cold northern areas. Quantitative trait loci (QTLs) have previously been detected from bi-parental mapping populations, giving an overview of the genome regions governing this trait. The recent development of high-throughput genotyping tools for pea brings the opportunity to undertake genetic association studies in order to capture a higher allelic diversity within large collections of genetic resources, as well as to refine the localization of the causal polymorphisms thanks to the high marker density. In this study, a genome-wide association study (GWAS) was performed using a set of 365 pea accessions. Phenotyping was carried out by scoring frost damages in the field and in controlled conditions. The association mapping collection was also genotyped using an Illumina Infinium® BeadChip, which allowed data to be collected for 11,366 single nucleotide polymorphism (SNP) markers. GWAS identified 62 SNPs significantly associated with frost tolerance and distributed over six of the seven pea linkage groups (LGs). These results confirmed 3 QTLs that had already been mapped in multiple environments on LG III, V and VI with bi-parental populations. They also allowed the identification of one locus, on LG II, which had not been detected before, and two loci, on LGs I and VII, which had formerly been detected in only one environment. Fifty candidate genes corresponding to annotated significant SNPs, or to SNPs in strong linkage disequilibrium with the former, were found to underlie the frost damage (FD)-related loci detected by GWAS. Additionally, the analyses allowed the definition of favorable haplotypes of markers for the FD-related loci and of their corresponding accessions within the association mapping collection. This study led to the identification of FD-related loci as well as corresponding favorable haplotypes of markers and representative pea accessions that might be used in winter pea breeding programs. Among the candidate genes highlighted at the identified FD-related loci, the results also encourage further attention to the presence of C-repeat Binding Factors (CBF) as potential genetic determinants of the frost tolerance locus on LG VI.

In 2018, the world area harvested of pea ranked behind soybean, common bean, chick pea and cow pea, while the world production of pea was fourth behind soybean, common bean and chick pea [1]. Average seed yield worldwide was about 1718 kg/ha in 2018, with the highest yields achieved in the Western European countries [1]. Dry peas are an important nutritional source which provides high quality protein for humans and for animal feeding [2]. In addition to the economic importance of pea seeds, pea crops have beneficial environmental impacts, mainly due to their ability to fix atmospheric nitrogen. They do not need nitrogen fertilizers and therefore help reduce N2O emissions [3]. For the past few decades, spring sowing has been the most common method of cultivation for dry pea.
However, the relatively short duration of the development cycle and various stresses such as biotic stresses, mainly Aphanomyces root rot and Ascochyta blight, as well as abiotic stresses, particularly hydric stress and high temperatures at the end of the development cycle, are at the origin of grain yield losses and variations [4]. Nowadays, winter peas are being developed in order to obtain higher and more stable yields. They are however limited by freezing temperatures during the winter time and the development of winter pea genotypes able to overcome freezing periods is thus desirable. Mechanisms of tolerance to freezing temperatures have already been reviewed in many plant species. Plants can tolerate freezing temperatures using non-exclusive strategies: freezing escape and cold acclimation. Indeed, plants can escape freezing stress by delaying sensitive phenological stages, particularly floral initiation and flowering, given that frost sensitivity increases after floral initiation [5, 6]. Cold acclimation is the process by which certain plants increase their frost tolerance in response to low non freezing temperatures [7,8,9]. The CBF/DREB (C-repeat Binding Factor/Dehydration Responsive Element Binding) transcription factors have an important role in plant cold acclimation. These genes have been isolated first from Arabidopsis thaliana and belong to the AP2/EREBP (APETALA2/Ethylene-Responsive Element Binding Protein) family of transcription factors [10, 11]. In Arabidopsis thaliana, the CBF pathway is characterized by rapid cold induction of CBF genes by altering the expression of CBF-targeted genes, known as the CBF regulon, which in turn contribute to an increase in freezing tolerance [12]. Many studies have reported the significant role of CBF genes in freezing tolerance of herbaceous and woody plant species. Among these studies, the biological role of the CBF pathway for freezing tolerance has also been underlined by the colocalization of CBF genes with freezing tolerance quantitative trait loci (QTL) (Arabidopsis: [13], temperate cereals: [14,15,16,17], forage grasses: [18], legumes: [19]). Moreover, within the temperate cereals, CBF genes underlying FR2, a major homeologous frost tolerance QTL in barley, diploid and hexaploid wheat, are known to be structured in a cluster of tandemly duplicated genes [20]. In legumes a similar feature was found in Medicago truncatula, where a cluster of 12 tandemly duplicated CBF genes was shown to match with a major freezing tolerance QTL on chromosome 6 [19]. The identification of genomic regions controlling frost tolerance has initially been completed for cultivated species through the assessment of mapping populations and QTL mapping. In pea, QTL mapping studies for frost damages have been conducted in multiple field environments as well as in controlled conditions [21,22,23]. QTLs were detected using two populations of recombinant inbred lines (RILs), namely Pop2 and Pop9, respectively derived from crosses between Champagne (frost tolerant) and Térèse (frost sensitive) [21, 22] and China (frost tolerant) and Caméor (frost sensitive) [23]. Four common QTLs were detected within both populations. The corresponding regions were located on linkage groups (LG) III (two independent positions), V and VI. They explained altogether a major part of the phenotypic variation and were repeatable across environmental conditions. 
Three other loci were specific to one or other of the two populations and detected in fewer environments (Pop2: one locus on LG I, Pop9: one locus on LG VII, detected in one environment; Pop9: one locus on LG V, detected in two environments). The resolution and accuracy with which QTL mapping can identify causal genetic determinism of the considered traits is limited by the high confidence intervals and the relatively low total number of recombination events in bi-parental mapping populations. In addition to QTL mapping, association studies have emerged as a complementary approach to dissect quantitative traits by exploiting natural genetic diversity and ancestral recombination events present in germplasm collections [24]. Used for more than two decades in human genetic research, association genetics have been adapted for genetic dissection in plants, taking advantage of the development of high-throughput genotyping resources in numerous species [24]. Genome-wide association studies (GWAS) aim at identifying genetic markers strongly associated with quantitative traits by using the linkage disequilibrium (LD) between candidate genes and markers. They rely on high-density genetic maps allowing an increased resolution of detection generating more precise QTL positions than bi-parental QTL mapping and give access to multiple allelic variation through the exploration of natural genetic diversity. In addition, GWAS can offer a powerful genomic tool for breeding plants by the identification of associated markers tightly linked to targeted genomic regions which can be used for marker-assisted selection. In the recent years, GWAS has been conducted in many plant species to dissect complex quantitative traits including winter survival and frost tolerance [25,26,27,28,29,30,31,32,33]. High throughput genotyping resources now available in pea [34] have also allowed to carry out GWAS in order to dissect the genetic determinism of resistance to Aphanomyces euteiches, plant architecture and frost tolerance. Desgroux et al. [35] studied associations for resistance to A. euteiches in 175 pea lines using a high-density SNP genotyping consisting of 13.2 K SNPs from the developed GenoPea Infinium® BeadChip [36]. Several markers significantly associated with resistance to A. euteiches harboring relevant putative candidate genes were identified. Significantly associated markers also allowed to refine the confidence interval of QTLs previously detected in bi-parental populations. Using the same SNP resource and a collection of 266 pea accessions, including the 175 former lines, Desgroux et al. also identified genomic intervals significantly associated with plant architecture and resistance to A. euteiches, of which 8 were overlapping for both traits [37]. In a different genetic background composed by a set of 672 pea accessions genotyped with 267 simple sequence repeat (SSR) markers, Liu et al. [38] detected 7 SSRs significantly associated with frost tolerance of which one was located on LG VI and was shown to colocalize with a gene involved in the metabolism of glycoproteins in response to chilling stress. The present study aimed at reconsidering the regions of the genome that control frost tolerance in pea, taking advantage of genome-wide distributed SNPs generated from the 13.2 K GenoPea Infinium® BeadChip [36]. 
Statistical analyses of phenotypic data
In order to undertake genome-wide association analyses, a collection of 365 pea accessions, hereafter referred to as the association mapping collection or the collection, was phenotyped for frost damage (FD) under field and controlled conditions. Statistical analyses of the frost damage scores showed highly significant genetic variation for all studied traits, and the coefficient of genetic variation ranged between 37.6 for FD_Field_date4 and 74.4 for FD_Field_date2 (Table 1). The estimates of broad-sense heritability (H2) were high for all traits, varying from 0.84 to 0.89. In addition, frost damages observed in controlled conditions and in the field at dates 3 and 4 showed the highest mean scores of 2.8, 2.4 and 2.4, respectively (Table 1). Frequency distributions of the best linear unbiased prediction (BLUP) values for each trait tended to fit normal curves within the collection (Additional file 1).
Table 1 Statistical parameters of the pea collection for the five observed traits
SNP genotyping of the association mapping collection
After quality control, the genotyping data comprised a total of 10,739 polymorphic SNPs with imputed missing data and a minor allele frequency (MAF) greater than 5%. Each linkage group contained 1533 SNPs on average. The distribution of SNPs varied within and between LGs (Additional files 2 and 3). Linkage group III showed the highest number of SNPs (1888 SNPs), while LG I was the least dense (1220 SNPs). The mean MAF in the association mapping collection varied from 0.29 on LG I to 0.31 on LG V and LG VI (Additional file 3). Only 751 SNPs had a MAF less than or equal to 0.1 (Additional file 4). The distribution of SNP markers across the different LGs was dense and no gap between adjacent SNPs exceeded 1.7 cM, except on LG I and LG V, which presented gaps of 2.3 cM (interval 48.4–50.7 cM) and 3.7 cM (interval 0.1–3.7 cM), respectively (Additional file 2). In addition, the map of the 10,739 SNPs used for GWAS showed an average of 28 SNPs mapped at the same genetic position.
LD analysis
The distribution of the linkage disequilibrium estimate (LD, r2) along the genetic positions of each linkage group, as well as for the whole genome, is presented in Additional file 5. The r2 value decreased rapidly as the genetic distance increased. The LD decay, estimated as the distance at which r2 decreases to half of its maximum level (0.22), was equal to 0.9 cM for the whole genome. Considering the LGs individually, the LD decay ranged from 0.3 cM for LG IV to 1.4 cM for LG V.
Population structure and kinship analyses
To avoid false positive results in the association analysis, the structure and kinship of the association mapping collection were analyzed using 2962 markers at non-redundant positions. The collection structure was studied using the discriminant analysis of principal components (DAPC) method. Following the analysis of the Bayesian Information Criterion (BIC) profile and using the 'a-score' criterion, the optimal number of clusters was fixed to 7 (Fig. 1a) and the optimal number of principal components (PCs) was set to 6 (Fig. 1b). These 6 PCs and 7 clusters were therefore used for the discriminant analysis of principal components. The distribution of individuals into the 7 clusters is represented along the first two axes of the DAPC (Fig. 1c). The main passport data (Additional file 6) appeared to be related to the discrimination of the clusters.
Cluster 1, comprising mainly wild peas and landraces from Africa and the Middle or Far East, was completely separated from the other clusters (Fig. 1d). The majority of accessions from clusters 2, 4, 5 and 6 were registered as spring sowing types, while the winter sowing types were essentially gathered in clusters 3 and 7. These last two clusters differed in their end-use, cluster 3 being mainly composed of field peas (81%) and cluster 7 of fodder peas (80%).
Fig. 1 Population structure of the pea association mapping collection based on Discriminant Analysis of Principal Components (DAPC). a Number of clusters vs BIC values. The x-axis shows the potential numbers of clusters representing the population structure. The y-axis represents the BIC value associated with each number of clusters. b Number of principal components (PCs) vs the a-score criterion. The x-axis shows the potential numbers of PCs used in the principal component analysis (PCA) step of DAPC. The y-axis gives the a-score criterion associated with each number of PCs. The optimal number of PCs, represented by a red bar, was obtained after 100 permutations. c Scatterplot showing the distribution of the association mapping collection along the first two principal components of the DAPC. Accessions are represented by dots and genetic clusters by inertia ellipses coded from 1 to 7. The bottom right inset shows the eigenvalues of the six principal components in relative magnitude, ordered from 1 to 6 from left to right. d Scatterplot showing the correspondence between the classification of accessions by cultivation status and the 7 clusters identified with DAPC; unknown accessions are shown as black dots
The dendrogram based on Nei genetic distances among accessions of the association mapping collection also revealed the presence of seven clusters (Additional file 7), as did the DAPC method. For 79.61% of the accessions, the assignment to clusters by the dendrogram corresponded to the allocation made by the DAPC analysis (Additional file 6). Within the kinship matrix (K), estimated for the whole genome, 85.7% of the kinship coefficient values were less than 0.1 and 1.6% of the values were larger than 0.5. For the seven kinship matrices specific to each linkage group (KLG), the kinship coefficient values ranged similarly to those of the K matrix (Additional file 8). These results indicated a weak relatedness between accessions and suggested that the majority of the accessions are genetically diverse, which was beneficial for subsequent GWAS mapping. Based on these results, the first two coordinates of the DAPC results (Q matrix) and the relatedness matrices (K matrix and KLG matrices) were used as covariates for subsequent association analyses.
Genome-wide association mapping
The comparison of the BIC values of the four GWA models tested showed that the linear mixed model including both the Q and KLG matrices as covariates was the optimal model for the traits FD_CC, FD_field_date1, FD_field_date2 and FD_field_date3, whereas the best-fitting model for FD_field_date4 was the linear mixed model including only the KLG matrices (Table 2). The Manhattan plots and their corresponding quantile-quantile plots of the association mapping results, run with the best model for each trait, are presented in Figs. 2 and 3. A total of 62 markers were significantly associated with at least one of the studied traits at the Bonferroni threshold -log10 (p) > 5.33. Frost damage (FD)-associated markers were distributed on all linkage groups except LG IV.
These SNPs exhibited minor allele frequencies ranging from 0.13 to 0.5. The numbers of markers associated with FD_CC, FD_field_date1, FD_field_date2, FD_field_date3 and FD_field_date4 were 40, 4, 6, 3 and 17, respectively. The most significant associations (lowest p-values) were shown by the significant loci located on LG V (9.67E-11) and LG VI (1.02E-10), respectively (Table 3).
Table 2 BIC-based comparison of the four models used to control the rate of false positive associations
Fig. 2 Manhattan plots of markers associated with the five frost damage traits. The plots show the p-values (p) for the association between a phenotypic trait and each tested marker (expressed as the negative decimal logarithm of p, y-axis) plotted against the linkage group position of the marker (x-axis). Points above the red horizontal line indicate genome-wide significance at the Bonferroni threshold (−log10 (p) > 5.33). a is the plot for the evaluation of frost damages in the controlled conditions experiment. b, c, d and e are the plots for the four evaluations of FD in the field experiment, corresponding to the 4 dates of damage observation
Fig. 3 Quantile-quantile plots of the association mapping results. The plots show the observed p-values (p) for the association between a phenotypic trait and each tested marker, expressed as -log10 of p (y-axis), plotted against the -log10 of the expected p-values (x-axis) under the null hypothesis of no association. a is the plot for the trait corresponding to the evaluation of frost damages (FD) in the controlled conditions experiment. b, c, d and e are the plots for the four traits corresponding to the evaluations of FD in the field experiment
Table 3 Significant associations detected in the association mapping collection (363 accessions) for the five observed traits
Exploration of favorable alleles for frost tolerance in pea
Twelve LD blocks were defined around the 62 significant FD-associated markers, each including all markers in LD (r2 > 0.8) with the FD-associated markers (Additional file 9). FD-associated markers which were not in significant LD with any other SNP, and thus did not constitute an LD block, were also kept for further analysis. Finally, 75 SNPs, covering six FD-related loci distributed on LG I, II, III, V, VI and VII, were kept to identify favorable and unfavorable haplotypes for frost tolerance. For each of the six FD-related loci, marker haplotypes (two to nine) and the corresponding representative accessions were identified (Additional file 10). For each locus, the effect of the different allele combinations was tested using an analysis of variance and a multiple comparison test of the phenotypic mean effects (Additional file 10). These analyses identified 7 favorable haplotypes over the 6 FD-related regions, carrying favorable alleles at each FD-associated marker except for haplotypes V.3 and VII.4, which each contained unfavorable alleles at 1 and 3 FD-associated markers, respectively. Accessions carrying favorable haplotypes presented lower frost damage values, ranging between 0.00 ± 0.00 (haplotype VII.4 for the trait FD_Field_date2) and 2.65 ± 0.05 (haplotype III.1 for the trait FD_CC). Six groups of accessions carrying unfavorable haplotypes were also identified, for which 100% unfavorable alleles were observed over the significant FD-associated markers detected by GWAS. Typical accessions carrying favorable haplotypes and showing a mean frost damage score ≤1 were mainly winter fodder peas (e.g. Black seeded, Champagne, Melrose, Blixt 7) (Additional files 6 and 10).
Those carrying unfavorable haplotypes, with a mean frost damage score ≥4, were mainly spring garden peas (e.g. Automobile, Caroubel, Cennia, Ersling) (Additional files 6 and 10). The sequences of the 75 SNP markers related to the frost tolerance trait are provided in Tayeh et al. [36] (Table S2).
Candidate genes
The projection of the 75 FD-related markers on the pea genome assembly [39] made it possible to define 2 Mb intervals on all the pea chromosomes except chromosome 4 (LG IV). Four FD-related markers were assigned to unanchored scaffolds, which were all less than 2 Mb long: in that case, annotated genes were listed for the whole scaffold. A particular case was encountered for the associated marker PsCam036704_21832_970, which is located on LG VI (51.1 cM) of the genetic consensus map [36] but whose projection on the pea genome assembly places it on chromosome 7, corresponding to LG VII (Table 3). Considering this ambiguous position, we listed the gene corresponding to this marker as a single candidate. We located a total of 867 annotated genes, among which 277 corresponded to genes with unknown function according to the pea genome assembly v.1a (Additional file 11). Among the remaining 590 genes, we focused on gene families identified as candidates in previous studies, i.e. CBF/DREB genes, genes coding for brassinosteroid receptors, genes involved in the production of gibberellin and genes involved in the synthesis of soluble sugars. Nine candidates corresponding to, or in the vicinity of, FD-related markers were found to be annotated as AP2 domain genes in pea, and this annotation could in some cases be refined thanks to the annotation of the homologous genes in M. truncatula (Additional file 11). The marker PsCam037030_22140_221 on LG VI (49.1 cM), which is in high LD with all the significant FD-associated markers belonging to the LD block VI.2, belongs to the gene Psat1g103560, annotated as a CBF gene according to the annotation of its homologous gene in M. truncatula (Additional file 11). This FD-related marker was also annotated as a CBF14 gene, following a blast search against M. truncatula carried out by Tayeh et al. [36]. Two other genes (Psat1g103560 and Psat1g103600) annotated as CBF genes were identified close to PsCam037030_22140_221, one of which was also precisely annotated as a CBF14 by Tayeh et al. [36]. In the LD block VI.2, 2 FD-associated markers, namely PsCam023246_13111_1125 and PsCam007060_5248_2156, were also found to be close to AP2/ERF (Ethylene-responsive transcription factor) genes, namely Psat1g097280 and Psat1g097280, for which a precise study of the sequence is needed to verify whether they belong to the CBF sub-family. Finally, 4 other potential candidate genes were found in the vicinity of the LD block VI.2, namely Psat1g103920, Psat1g106640, Psat1g115640 and Psat1g103680. These genes were annotated as DREB (Dehydration-responsive element-binding protein) genes, another term used to refer to CBF genes, based on their homologous genes in M. truncatula (Additional file 11). These 4 genes lie in an interval of 35 Mb, situated at a distance of 448 kb from the three other CBF genes mentioned above. One of them, Psat1g103920, contained the marker PsCam050192_32788_145, which was excluded from the GWA analysis because it showed a minor allele frequency lower than 0.05.
Three FD-associated markers, namely PsCam035617_20792_637 (LD block III.1), PsCam048068_30823_2326 (LD block V.1) and PsCam011774_8038_200 (LD block VI.8), were found to correspond to 3 genes, namely Psat5g299600, Psat3g087400 and Psat1g119400, encoding brassinosteroid receptors (Additional file 11). The FD-related marker PsCam037922_22979_691, which is only 0.1 cM apart from the PsCam035617_20792_637 marker just mentioned, was found to lie in Psat5g299720, a gene encoding the gibberellin 3beta-hydroxylase enzyme and shown to correspond to the dwarfism gene Le in pea (Additional file 11). Finally, three FD-associated markers belonging to the FD-related locus on LG VII, namely PsCam001108_940_48, PsCam037927_22984_97 and PsCam004928_3732_3087, were located within genes related to the synthesis of soluble sugars (Psat7g180280: endo-beta-1,3-glucanase, Psat7g193120: endo-1,4-beta-glucanase and Psat7g214880: beta-glucosidase G2, respectively) (Additional file 11).
GWAS brings new insights into the genetic determinism of frost tolerance in pea
In the present study, 75 markers associated with frost tolerance, i.e. 62 markers significantly detected by GWAS and 13 markers in high LD (r2 > 0.8) with one or the other of the 62 markers, were located on all the linkage groups of pea except LG IV (Table 3, Additional file 9). Comparing these map positions with those of previously described frost tolerance QTLs, 3 regions, corresponding to 62 of the 75 markers, were found to colocalize with 3 main QTLs previously detected by linkage mapping in two bi-parental populations for the same trait, namely WFD 3.2, WFD 5.1 and WFD 6.1 [21,22,23] (Fig. 4). These 3 QTLs were repeatedly detected in 5, 11 and 10 field conditions for WFD 3.2, WFD 5.1 and WFD 6.1, respectively. Moreover, the position corresponding to WFD 6.1 also seems to match an EST marker recently found to be associated with frost tolerance in pea [38]: indeed, Liu et al. identified 7 marker-trait associations within a collection of 672 accessions, among which the marker EST1109 was located on LG VI within a functional gene with high homology to a gene encoding an alpha-mannosidase in M. truncatula. We reviewed the 1646 transcript-derived SNP markers mapped on LG VI by Tayeh et al. [36] and found that a marker corresponding to an alpha-mannosidase-like protein was located at 43.2 cM on the consensus map; consequently, as the FD-associated markers detected on LG VI in the present study lie between 41.4 and 52.8 cM, it is likely that the LG VI positions of both association studies coincide. Similarly, this study validated the QTL previously detected on LG III (WFD 3.2 in Pop2 [21], III.2 in Pop9 [23]), with a higher resolution than previous linkage mapping studies, through the identification of three main significant FD-associated markers whose favorable alleles decreased the frost damage of accessions by an average of 0.33 (Table 3). All favorable alleles were carried by the sensitive genotypes Térèse and Caméor, unlike the tolerant genotypes Champagne and China, which carried all the unfavorable alleles at this locus (Table 3). Altogether, the consistency of the 3 positions detected by both bi-parental mapping and association genetics reinforces their interest for breeding. The results presented here constitute an additional step towards the identification of underlying genes potentially involved in the control of frost tolerance, thanks to the refined intervals provided by GWAS.
Fig. 4 Comparative genetic map of the genome-wide association study (GWAS) loci identified in the present study and the quantitative trait loci (QTL) previously detected for frost tolerance in pea. Only linkage groups (LGs) with significant frost damage (FD)-associated markers detected by GWAS are presented. On each LG, SNP markers are shown on the right and genetic distances between markers are indicated in cM on the left. Frost tolerance loci detected in the present study are shown in red: FD-associated markers are in a red and underlined font; markers in linkage disequilibrium (LD; r2 > 0.8) with associated marker(s) are in a red and non-underlined font; the LD blocks identified by GWAS are drawn as red bars on the right of each LG. QTLs represented by blue and green bars were detected in the Champagne x Térèse [21, 22] and China x Caméor [23] populations, respectively. For presentation purposes, only markers in the vicinity of significant loci and a few markers distributed along the LGs are shown
Unlike the correspondences with bi-parental mapping positions presented above, the present GWA study did not highlight any colocalization with the major QTL WFD3.1, which was however found to be responsible for up to 52 and 19% of the winter frost damage variation within the RIL populations derived from Champagne x Térèse (Pop2) and China x Caméor (Pop9), respectively [21, 23]. The flowering gene Hr (High response to photoperiod), an orthologue of the Arabidopsis Early flowering 3 gene, Elf3 [40], was shown to be a relevant candidate for this QTL, as it allows plants to be maintained in a vegetative state under short days and thus to escape the main winter freezing periods. It is likely that, in the case of the WFD3.1 position corresponding to the Hr gene, a strong correlation may have emerged between the population structure, possibly shaped by the allelic variation at the Hr locus, and the frost damage trait. This hypothesis relies on the observation that Hr may have been a target of natural selection for frost tolerance. Weller et al. [40] speculated that the hr mutation may have arisen within an ancestral pea lineage originating from the Near East domestication center and carrying the Hr allele. The hr mutation possibly permitted summer cropping in areas characterized by colder winters and is therefore highly represented in many domesticated lines of Pisum sativum at the origin of the current spring peas. To explore the hypothesis that a true association went undetected because of confounding with the population structure, we examined the distribution of the Hr alleles, represented by their Elf3 genotype, within the DAPC clusters of the association mapping population (Additional file 12). The Hr accessions, homozygous for the dominant Hr allele, are the main components of clusters 7 (96%) and 1 (84%) and represent 56.9% of cluster 4. They are thus overrepresented in the three clusters gathering most of the winter sowing-type accessions, which may have contributed to a correlation between the frost tolerance trait and the population structure. The hr accessions, homozygous for the recessive hr allele, are the main components of clusters 3 (100%), 2 (100%), 5 (98.7%) and 6 (95.7%), which mostly contain spring sowing-type accessions. Consequently, we suggest that a marker whose effect is confounded with the population structure in this way probably cannot be detected by an association analysis that corrects for structure (Q matrix) using this sample of accessions.
This kind of result has already been reported by Visioni et al. [27], who found that 2 of the 3 most significant SNPs of their study, tightly linked to major known genetic determinants of cold tolerance in barley, were not detected by GWAS when a correction for population structure was applied. This was particularly the case for a SNP linked to Vrn-H1, a developmental locus governing the barley vernalization requirement, which has long been a candidate for the frost tolerance locus Fr-H1 but whose effect was suspected to be confounded with the population structure. To overcome this issue, NAM-like linkage populations built from bi-parental crosses in a reference design could provide interesting plant material for association mapping, in order to minimize the population structure, which may be necessary for dissecting the most structured traits. In addition to the consistent QTLs previously detected in bi-parental populations, the present GWA study also pinpointed three loci which either had not been detected before (one region on LG II) or were formerly detected in only one environment (two regions located on LG I and LG VII, respectively). The significant marker on LG II is supported by only one experiment in controlled conditions and must therefore be used with caution in breeding programs, even though two distinct favorable and unfavorable haplotypes were identified (Additional file 10). The two markers significantly associated with frost tolerance on LG I are located at 47.5 cM on the consensus map, which lies in the projected confidence interval of a previous QTL, namely WFDcle.a, identified in one field condition with the Hr subpopulation extracted from Pop2 [22]. This colocalization slightly reinforces the consistency of this LG I position. In the same way, the 8 significant markers identified on LG VII in this study were located between 72.9 and 89.3 cM on the consensus map, which overlaps with a former QTL, namely FD.c, detected once in a controlled chamber experiment [22] with the same Hr subpopulation. Thus, the colocalizing region on LG VII is now supported by three independent experiments. In a panel of 672 accessions, Liu et al. [38] identified one marker on LG I and two markers on LG VII that were significantly associated with frost tolerance. It would be interesting to check the localization of these markers on the consensus map used here, as their position on the genetic map used by the authors [41] is not reported. In addition to these coincident positions for the frost tolerance trait on LG I and LG VII obtained under different experimental conditions, the LG I and LG VII regions detected in this study also seem to overlap with previously detected QTLs for resistance to Aphanomyces euteiches. Indeed, both markers of the FD-related LD block I.1 lie in the confidence interval of the QTL Ae-Ps1.1 identified by Hamon et al. [42], when the latter is projected on the consensus map used in the present work. Besides, the FD-related locus on LG VII includes the LD block VII.16 associated with resistance to Aphanomyces euteiches [35], with which it shares the FD-associated marker PsCam038378_23415_721. The FD-related locus on LG VII deserves particular attention for further use in breeding for frost tolerance, because accessions carrying the favorable haplotype underlying this locus (haplotype VII.4) show very low frost damage scores, ranging from 0 to 0.96, in both the field and controlled conditions experiments (Additional file 10).
The relationships between haplotypes at this locus and the values for frost tolerance and Aphanomyces euteiches resistance will however have to be explored more precisely, to check whether both traits can be bred favorably at this locus.
GWAS detected frost tolerance-associated markers included in relevant putative candidate genes
The projection of the 75 FD-related markers on the pea genome assembly [39] made it possible to identify 590 annotated genes with known putative protein functions, located in an interval of ±1 Mb on both sides of the FD-related markers (Additional file 11). Among the diverse protein functions predicted, some have already been related to the acquisition of frost tolerance in the literature. Comparison of map positions showed that the FD-related locus detected on LG VI in this study colocalizes with the previous QTL WFD 6.1, which is itself orthologous to a major QTL for frost tolerance in M. truncatula (Mt-FTQTL6) [43]. Tayeh et al. moreover showed that Mt-FTQTL6 covers a region containing a cluster of twelve tandemly duplicated CBF genes [19]. In the present study, 9 AP2 domain genes were found to correspond to, or to lie in the vicinity of, FD-related markers on LG VI. Among these genes, 7 are annotated as CBF or DREB genes. Given these results and the previous findings of Tayeh et al. concerning Mt-FTQTL6 [19], the CBF genes located in the LD block VI.2, or in its vicinity, are also relevant candidates for determining frost tolerance at this locus in pea. The potential role of CBF genes, and particularly CBF14, has already been highlighted in cereals. In wheat, Zhu et al. [44] showed that the natural variation for frost tolerance is mainly associated with the frost resistance 2 (FR2) locus, which includes tandemly duplicated CBF genes that regulate the expression of cold-regulated genes. Additionally, these authors showed that an increased copy number of CBF14 was frequently associated with the tolerant haplotype of the FR-A2 locus and with higher CBF14 transcript levels in response to cold. Novák et al. [45] showed that CBF14 genes contribute to enhanced frost tolerance during cold acclimation in cereals. Three candidate genes corresponding to FD-associated markers detected on LG III, LG V and LG VI, and annotated as brassinosteroid receptors, also appear in the literature to be involved in the crosstalk between plant hormone signaling in the cold stress response and the CBF regulon. In Arabidopsis, Eremina et al. [46] provided evidence that brassinosteroids contribute to the control of freezing tolerance. Indeed, these authors showed that brassinosteroid-deficient mutants of Arabidopsis were hypersensitive to freezing stress, whereas activation of the brassinosteroid signaling pathway increased freezing tolerance both before and after cold acclimation. Furthermore, two brassinosteroid-responsive transcription factors have also been characterized as direct regulators of CBF expression through their binding to the promoters of these genes [46, 47]. In cultivated plants, the role of brassinosteroids is so far documented mainly for the response to chilling stress, as reviewed by Anwar et al. [48]. Within the FD-related locus of LG III, a candidate gene was identified encoding the gibberellin 3beta-hydroxylase enzyme, which produces bioactive gibberellin and is also known as Le in pea (Additional file 11). Recessive le mutants at this locus are impaired in the production of gibberellin and produce a dwarf phenotype [49, 50]. In Arabidopsis, Achard et al.
[51] undertook a molecular and genetic approach to evaluate the interaction between the CBF1-dependent cold acclimation pathway and the gibberellin pathway. They proposed a model in which the induction of CBF1 expression by low temperature affects gibberellin metabolism via the upregulation of gibberellin 2-oxidase gene transcripts. The resulting reduction in bioactive gibberellin causes a higher accumulation of DELLAs, a family of nuclear growth-repressing proteins, which in turn restrains plant growth. The Le gene has already been proposed as a candidate for the WFD3.2 and III.3 QTLs identified in the Pop2 and Pop9 populations, respectively. We considered the haplotypes of the parents of both populations at the corresponding FD-associated locus in this study and found that the favorable haplotype (III.1, including the dwarf allele at the Le gene) was borne by Térèse and Caméor, while the unfavorable haplotype (III.2, including the wild-type allele at the Le gene) was carried by Champagne and China. This observation is consistent with the favorable and unfavorable alleles determined by QTL mapping in the bi-parental populations. As the three genes constituting the FD-related locus of LG III lie within a 1 cM interval, neither a synergistic effect nor linkage can be excluded. Finally, three candidates underlying the FD-related locus on LG VII (chromosome 7) corresponded to genes related to the synthesis of soluble sugars. As the accumulation of soluble sugars during cold acclimation is well documented in many plants, we can suggest that the LG VII locus may play a role in frost tolerance through the accumulation of sugars in plant tissues during cold acclimation. Several roles for sugars in protecting cells from freezing injury have been proposed, including functioning as cryoprotectants for specific enzymes, as molecules promoting membrane stability and as osmolytes preventing excessive dehydration during freezing, as reviewed by Xin and Browse [52].
GWAS provides new markers and new progenitors to breed for frost tolerance in pea
In the present study, 6 loci related to frost tolerance in pea were identified. At these 6 FD-related loci, 7 favorable haplotypes, carrying the highest number of favorable alleles at the FD-associated markers detected by GWAS, were significantly associated with the lowest frost damage scores, i.e. the highest levels of frost tolerance. We identified 12 accessions showing low mean frost damage scores, ranging from 0.13 to 1.04, and cumulating 6 (Glacier), 5 (Melrose, Blixt 7, Winterberger, Holly 9, Black seeded, Holly 17, Blixt 109, Fe and P1259) or 4 (Cote d'or and Picar) favorable haplotypes at the 6 FD-related loci (Additional file 13). All these accessions belong to cluster 7, which was shown to be totally isolated from the other clusters by the DAPC analysis. They are fodder peas, among which frost tolerant accessions have already been identified for breeding. The same 12 accessions also carry the Hr allele, which was formerly shown to be favorable to frost tolerance [21]. One common point of these accessions, except the accession named 'Glacier', is that they carry the unfavorable haplotype at the FD-related locus on LG III comprising the Le gene. Rather than indicating a minor effect of the favorable haplotype at this locus, genetically linked to the dwarf le allele, this feature should be related to the observation that autumn-sown Hr lines in the field remain dwarf until a longer spring daylength has triggered the switch from the prostrate to the erect growth habit.
This suggests an epistatic effect of Hr upon the expression of dwarfism [21]. In comparison with the above-described material, we also identified 16 accessions carrying all the unfavorable alleles at the FD-related loci located on LG V, VI and VII of the present study, in addition to the unfavorable hr allele. These accessions, mainly spring garden cultivars or breeding accessions, presented only 3 (Petit provencal, eM and Cador), 2 (Pi196033, Aldot, Chine-d368, 667, Merveille de Kelvedon, Miravil, Wav f502, Mingomark) or 1 (Ceia, Alaska, Finlande, Ersling and Automobile) favorable haplotype(s), identified on LG I, II and/or III, and showed higher mean frost damage scores ranging from 2.38 to 4.56. This suggests that the three loci on LG V, VI and VII play a larger part in frost tolerance in pea than the other FD-related loci located on LG I, II and III. Our results can help to choose tolerant progenitors and to follow favorable haplotypes through marker-assisted breeding. Furthermore, the FD-associated locus on LG V was found to overlap with the confidence interval of the frost tolerance QTL WFD 5.1 earlier detected within the Pop2 population [21] (Fig. 4). Compared with the linkage mapping method, with which a confidence interval of 16.6 cM was obtained, the GWA study made it possible to refine the confidence interval to 7.4 cM (53.6 to 61 cM on the consensus map). This refined LG V locus is of particular interest for breeding, as it may provide markers to break the genetic relationship between the frost tolerance position and the neighbouring locus governing seed trypsin activity. Trypsin inhibitors are known to be unfavorable for animal feed because they decrease protein digestibility [53]. The locus responsible for seed trypsin activity (Tri) has been mapped on LG V [54], within the confidence interval of WFD 5.1 [21]. The favorable alleles at both loci are generally in repulsion. On the consensus map, this locus is represented by three markers annotated as trypsin inhibitor genes [36] and located between 67.0 and 67.3 cM, 6 cM apart from the frost tolerance locus detected by GWAS. Thus, it seems possible to select favorable alleles for frost tolerance, corresponding to the FD-associated markers detected by GWAS on LG V, together with recessive alleles at the markers corresponding to the trypsin inhibitor genes. In the present study, GWAS made it possible to confirm QTLs significantly associated with frost tolerance, such as WFD 3.2, WFD 5.1 and WFD 6.1. It also identified one region on LG II, which had not been detected before, and provided significant associations for two regions on LG I and LG VII that were formerly detected in only one environment. The results showed that GWAS is an effective strategy to identify markers precisely defining frost tolerance loci, which can be useful to breed for antagonistic traits, as is the case for the frost tolerance and Tri loci on LG V, which are in linkage disequilibrium and in a repulsion phase. Our results also highlight that GWAS makes it possible to find new sources of frost tolerance within collections of pea genetic resources. Finally, the present GWA study also brought to light the presence of CBF transcription factors as potential genetic determinants of the frost tolerance locus on LG VI, with one CBF-annotated marker being in high LD with significant FD-associated markers of the locus and six additional CBF/DREB-annotated genes mapped in its vicinity.
As 12 tandemly duplicated CBF genes were already found to be relevant candidates underlying the orthologous frost tolerance QTL on Medicago truncatula chromosome 6, the hypothesis of a similar genomic organization in pea deserves to be tested.
The association mapping collection, also named the collection, consists of 365 accessions (Additional file 6) from the pea reference collection described in Burstin et al. [55]. The Pisum accessions of the collection represent a large genetic diversity, ranging from wild peas (Pisum fulvum, P. humile, P. elatius, P. speciosum, P. transcaucasicum and P. abyssinicum) and landraces to breeding lines and cultivars. This collection also represents a variability of genotypes with respect to the type of sowing (winter vs spring peas) and the type of end-use (fodder, field, mangetout, preserve and garden peas). Reference accessions which are the parents of the bi-parental populations formerly used in QTL studies for frost tolerance [21,22,23], i.e. Champagne, China, Térèse and Caméor, are included in the collection. All genotypes were purified for one generation by single seed descent (SSD) in an insect-proof glasshouse. After this SSD generation, seeds were multiplied for one generation in an insect-proof glasshouse. The seeds produced were sown in a nursery: tissue samples were harvested in bulk from 10 sister plants for DNA production, and the harvested seeds were used for phenotyping. When necessary, DNA was extracted again from the offspring of these plants. There is therefore zero or one generation between phenotyping and genotyping.
Frost tolerance of the collection was evaluated under field and controlled conditions. The field experiment was carried out at the INRAE (National Research Institute for Agriculture, Food and Environment) experimental station of Clermont-Ferrand Theix, France (45.72 °N latitude and 3.02 °E longitude, at an altitude of 890 m) during the 2007–2008 growing season. The sowing date was 09 October 2007 and the date of emergence was 26 October 2007. Plots were sown in a randomized complete block design with three replicates. Weeds and diseases were controlled chemically. The temperature records indicated that cold acclimation and freezing periods occurred during the experiment (Additional file 14). The collection was assessed for frost tolerance by visual estimation of winter frost damage after the main winter freezing periods had passed. As described in previous studies [21,22,23], a score was assigned to each plot as a whole, based on the extent of necrotic areas on the aerial parts of the plants. The scale ranged from 0 to 5, where 0 represented no damage and 5 a dead plant. Frost damage was scored at four dates in 2008: 4 and 15 January, 28 March and 10 April. A frost experiment was also conducted in a controlled environment chamber using the standardized test described previously by Dumont et al. [22], which mimics the successive periods of cold acclimation and frost generally encountered in the field by autumn-sown peas. Pea accessions were placed according to a randomized complete block design with three replicates. To provide three biological replicates, the experiment was carried out three times successively in the same controlled environment chamber. The temperature, light level and humidity were recorded and were similar during the three experiments. Briefly, plants at the 2nd - 3rd leaf stage were first submitted to a regime of 11 days of cold acclimation at 10 °C/2 °C (day/night) with a 10 h photoperiod.
The frost treatment was then carried out at 6 °C/− 8 °C with 8 h of daylight for 4 days. After the frost treatment, a recovery period was applied with a temperature regime of 16 °C/5 °C and 10 h of daylight for 8 days. Frost tolerance was evaluated by scoring frost damage at the end of the recovery period with the same scoring scale as the one used in the field experiment, except that scores were attributed to single plants instead of plots. Overall, 5 traits constituted the phenotyping data for the GWAS, abbreviated as follows: FD_CC, frost damages in the controlled conditions experiment; FD_Field_date1, FD_Field_date2, FD_Field_date3 and FD_Field_date4, winter frost damages evaluated in the field experiment at the first, second, third and fourth scoring date, respectively.
Phenotypic data analyses
The phenotypic data were analyzed with the R 3.5.0 software [56, 57] using a linear mixed model to obtain estimates of the variance components and heritability (H2), as well as best linear unbiased predictions (BLUPs) of the adjusted means. The following linear mixed model (LMM) was used: Yij = μ + genoi + repj + eij, where Yij is the frost damage value recorded for genotype i in replicate j, μ is the overall mean, genoi is the random genetic effect of genotype i, repj is the fixed effect of replicate j and eij is the residual. The model was fitted using the R function "lmer" of the package 'lme4' [58, 59]. Heritability (H2) was estimated using the following formula: \( \mathrm{H}^2 = \mathrm{V}_{\mathrm{g}} / \left( \mathrm{V}_{\mathrm{g}} + \mathrm{V}_{\mathrm{res}} / \mathrm{n}_{\mathrm{rep}} \right) \), where Vg is the genotypic variance component, Vres is the residual variance component and nrep is the number of replicates, taking the missing values into account. BLUPs for each genotype-trait combination were calculated from each LMM analysis using the function "ranef" implemented in the 'lme4' package of R [59] and were used for the GWA analysis.
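As an illustration of the phenotypic analysis described above, the following R sketch fits the same type of model with 'lme4' and derives H2 and the BLUPs. It is a minimal sketch rather than the original analysis script: the data frame 'pheno' and its columns 'genotype', 'replicate' and 'FD' are assumed names used only for illustration.

```r
# Minimal sketch of the phenotypic analysis described above.
# Assumption: a data frame 'pheno' with one row per scored plot (or plant),
# containing the columns 'genotype', 'replicate' and 'FD' (frost damage, 0-5 scale).
library(lme4)

# Yij = mu + geno_i (random) + rep_j (fixed) + e_ij
fit <- lmer(FD ~ factor(replicate) + (1 | genotype), data = pheno)

# Variance components
vc    <- as.data.frame(VarCorr(fit))
Vg    <- vc$vcov[vc$grp == "genotype"]
Vres  <- vc$vcov[vc$grp == "Residual"]
n_rep <- 3                                   # number of replicates in the design

# Broad-sense heritability: H2 = Vg / (Vg + Vres / n_rep)
H2 <- Vg / (Vg + Vres / n_rep)

# BLUPs of the genotype effects, used as phenotypes for the GWA analysis
blups <- ranef(fit)$genotype
```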
Genotyping and quality control
The collection was genotyped at 11,366 SNPs using the Illumina Infinium® BeadChip of 13.2 K SNPs, as described in [36]. These SNPs were all located in gene-context sequences and derived from separate transcripts [60]. The consensus genetic map from Tayeh et al. [36] was used as the genetic framework for the association analyses. This map was built on the basis of genotyping data collected for 12 pea recombinant inbred line populations. Considering the large collinearity between individual maps, a set of genotyping data for 15,352 markers from all populations was used to build the consensus map. The latter shows a cumulative total length of 794.9 cM and a mean inter-marker distance of 0.24 cM. The genotyping matrix, composed of a set of 11,366 SNPs and 365 pea accessions, was filtered using Plink v.1.9 software [61, 62]. Accessions and SNP markers with a call rate below 0.90, as well as SNP markers with a minor allele frequency (MAF) below 0.05, were excluded from the GWA analysis. After quality control, a genotyping matrix consisting of 10,739 SNPs and 363 accessions with 0.6% missing data was kept for further analyses. The resulting data set was further imputed using Beagle v.3.3.2 software [63]. Beagle applies a Markov model to the hidden states (the haplotype phase and the true genotype) along the chromosome, using an EM (Expectation-Maximization) algorithm that iteratively updates the model parameters to maximize the model likelihood until convergence is achieved. Finally, a genotyping matrix consisting of 10,739 SNPs and 363 accessions with no missing data was used for GWAS. Scripts from Negro et al. [64] were used for the quality control and imputation.
Linkage disequilibrium estimation
The estimates of linkage disequilibrium (LD) within the collection were determined as the squared allele-frequency correlations (r2) for pairs of loci, as described in Weir [65]. Linkage disequilibrium between pairs of SNP markers was calculated in a sliding window of 900 markers using Plink v.1.9 software [61, 62]. Then, intrachromosomal LD quantification and the graphical representation of LD decay were carried out using the R 3.5.0 software [56, 57]. The LD decay was measured as the genetic distance (cM) at which the average r2 decreased to half of its maximum value.
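The LD decay estimate described above can be illustrated with the following minimal R sketch. It is not the original script: the data frame 'ld', with a column 'dist_cM' for the inter-marker genetic distance and a column 'r2' for the squared allele-frequency correlation of each marker pair (assembled, for example, from the pairwise Plink output and the consensus map positions), is an assumed input, and the simple binning used here is only one possible way of averaging r2 along distance.

```r
# Minimal sketch of the LD decay estimate described above.
# Assumption: a data frame 'ld' with columns 'dist_cM' (genetic distance between
# the two markers of a pair) and 'r2' (squared allele-frequency correlation).
ld_decay_distance <- function(ld, n_bins = 100) {
  # Average r2 within distance bins to smooth the pairwise cloud
  ld$bin  <- cut(ld$dist_cM, breaks = n_bins)
  mean_r2 <- tapply(ld$r2, ld$bin, mean, na.rm = TRUE)
  bin_mid <- tapply(ld$dist_cM, ld$bin, mean, na.rm = TRUE)
  # LD decay = smallest distance at which the binned mean r2 falls
  # to half of its maximum level
  half_max <- max(mean_r2, na.rm = TRUE) / 2
  min(bin_mid[which(mean_r2 <= half_max)], na.rm = TRUE)
}

# Example call, assuming 'ld' has been built beforehand:
# ld_decay_distance(ld)
```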
Population structure and individual relatedness
To control false positive associations, population structure and individual relatedness (kinship) among accessions of the collection were taken into account by fitting marker-based structure and kinship matrices in the association models [66]. Kinship and population structure were estimated using a data matrix composed of 363 accessions and a set of 2962 markers without any missing data, corresponding to non-redundant genetic positions randomly selected on the consensus map. The coefficients of kinship between pairs of accessions were estimated using the realized relationship matrix approach implemented in the FaST-LMM software [67]. Two alternative approaches were considered to estimate the kinship matrix, as described by Rincent et al. [68]. In the first one, the kinship was estimated with all the markers that are not located on the same linkage group (LG) as the tested SNP. Thus, seven kinship matrices were estimated, each specific to a linkage group; these matrices were noted KLGx, with x corresponding to the linkage group tested. Such an approach aims at increasing the power of detection of significant markers in GWAS, particularly in regions of high LD. In the second approach, the kinship matrix, noted K, was estimated from all 2962 markers. The discriminant analysis of principal components (DAPC) method developed by Jombart et al. [69] and implemented in the 'adegenet' R package [70,71,72] was used to cluster accessions on the basis of their genotype. This method aims at identifying and describing clusters of genetically related individuals without prior knowledge of groups. First, the optimal number of genetic clusters (k) was determined through the 'K-means' method using the function "find.clusters". The number of clusters was allowed to vary from one to 20 during the determination of the optimal value of k, based on the Bayesian Information Criterion (BIC). The most likely number of clusters was chosen on the basis of the lowest associated BIC. Then, the principal component analysis (PCA) step of DAPC was performed through maximization of the 'a-score' criterion, and the optimal number of principal components (PCs) was obtained after 100 iterations using the function "optim.a.score" implemented in the 'adegenet' package of R. Finally, the DAPC was performed considering the most likely number of clusters (k) and the optimal number of PCs, using the function "dapc" implemented in the 'adegenet' R package [70,71,72]. To confirm the allocation of accessions to clusters by the DAPC analysis, a Nei genetic distance matrix [73] was calculated with the function "stamppNeisD" implemented in the 'StAMPP' package of R [74], using the genotyping data composed of 363 accessions and the set of 2962 SNPs. The resulting matrix was then plotted as a dendrogram using the Ward method with the 'cluster' package of R [75]. The first two coordinates of the DAPC results were used as covariates (Q matrix) in the GWAS to correct the association tests for false positives.
Association mapping
The BLUPs corresponding to the phenotypic data collected for each accession were used to identify marker-trait associations using linear mixed models (LMM) accounting for a kinship matrix (K or KLG), with or without the population structure matrix (Q), as covariates. Four models were therefore compared for their capacity to fit the data: (1) a LMM using the kinship matrix K, (2) a LMM corrected for kinship using the KLG matrices, (3) a LMM including the K and Q matrices and (4) a LMM using both the KLG and Q matrices. For each frost damage trait, the best model was chosen by comparing the likelihoods of the models using the Bayesian Information Criterion (BIC) [76]; the model with the smallest BIC was selected. All analyses were performed using the LMM implemented in the FaST-LMM algorithm [67]. The threshold to declare an association significant was set at a p-value below 4.65E-06, i.e. -log10 (p) > 5.33, which corresponded to the Bonferroni threshold (0.05 / number of tested SNPs). To represent the association results, Manhattan plots and their corresponding quantile-quantile plots were drawn using the 'qqman' package of R [77].
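As a companion to the association scan described above, the following minimal R sketch computes the Bonferroni threshold and draws the Manhattan and quantile-quantile plots with 'qqman'. The data frame 'gwas' and its column names are assumptions made for illustration; the association p-values themselves would come from the FaST-LMM runs.

```r
# Minimal sketch of the significance threshold and plots described above.
# Assumption: a data frame 'gwas' of association results with columns
# 'SNP', 'LG' (linkage group coded as an integer), 'pos_cM' and 'p'.
library(qqman)

n_snps    <- 10739
threshold <- 0.05 / n_snps        # Bonferroni threshold: 4.65e-06, i.e. -log10(p) > 5.33

manhattan(gwas, chr = "LG", bp = "pos_cM", snp = "SNP", p = "p",
          suggestiveline = FALSE,
          genomewideline = -log10(threshold))  # horizontal line at the Bonferroni threshold
qq(gwas$p)                                     # quantile-quantile plot

# Markers declared significantly associated with the trait
significant <- gwas[gwas$p < threshold, ]
```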
Local LD block estimation and favorable allele identification
Local LD analysis was used to define the LD blocks around the significantly associated markers detected by GWAS, using Plink v.1.9 software [61, 62]. For each associated marker, the markers in strong linkage disequilibrium (LD; r2 > 0.8) with it were identified to define an LD block. In this way, an LD block was defined as the interval including all markers in LD (r2 > 0.8) with the targeted associated marker(s). Unique associated markers which did not constitute an LD block were kept for further analyses. Thus, for each identified genomic region, the LD blocks and unique associated marker(s) composed a significant locus related to frost tolerance. At each significant locus, haplotypes were identified among the accessions of the collection according to the non-imputed genotyping data corresponding to the markers significantly detected by GWAS and the linked markers. Haplotypes showing missing data, as well as SNPs with heterozygous genotypic data, were excluded from further analysis. Besides, haplotypes represented by less than 5% of the total number of accessions were also removed from the analysis. Based on the results of association mapping, the allelic effect corresponding to the minor allele (aeff) of the markers significantly associated with the frost damage traits was analyzed: if aeff had a negative value, the minor allele of the associated marker was considered to decrease frost damage (favorable allele for frost tolerance); if aeff had a positive value, the minor allele of the associated marker was considered to increase frost damage (unfavorable allele for frost tolerance). For each significant locus and each corresponding trait, the frost damage values of the different haplotypes were compared using an analysis of variance with a nested 'haplotype/genotype' design, followed by a Student-Newman-Keuls (SNK) comparison test using the function "SNK.test" of the R package 'agricolae' [78]. Favorable and unfavorable haplotypes at each significant locus were defined as follows: favorable haplotypes should show a significantly lower frost damage mean score, while unfavorable haplotypes should show a significantly higher frost damage mean score. Finally, we listed representative accessions for each favorable and unfavorable haplotype based on the following condition: each accession should show a mean score of the considered associated trait(s) below 1 for favorable haplotypes and above 4 for unfavorable ones.
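The haplotype comparison described above can be sketched as follows in R. The data frame 'hap_data' and its columns 'haplotype', 'genotype' and 'FD' are assumed names, with one row per scored plant or plot; this is an illustrative sketch, not the original script.

```r
# Minimal sketch of the haplotype comparison described above.
# Assumption: a data frame 'hap_data' with one row per scored plant (or plot)
# and columns 'haplotype', 'genotype' (accession) and 'FD' (frost damage score).
library(agricolae)

# Analysis of variance with genotypes nested within haplotypes
fit <- aov(FD ~ haplotype / genotype, data = hap_data)

# Student-Newman-Keuls multiple comparison of haplotype means
snk <- SNK.test(fit, "haplotype", group = TRUE)
snk$groups   # haplotypes sharing a letter do not differ significantly
```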
Annotated genes underlying frost tolerance loci
To identify genes that may be associated with the frost damage phenotypes, a region encompassing 1 Mb flanking regions upstream and downstream of each FD-related marker, i.e. the significant GWAS markers and the markers in LD (r2 > 0.8) with them, was defined. This region was searched for genes annotated in the pea genome assembly v.1a developed by Kreplak et al. [39], using the genome JBrowse available at https://urgi.versailles.inra.fr/Species/Pisum.
Genetic positions and sequences of the markers used in the current study are available in Tayeh et al. [36]. The pea genome assembly v.1a explored in this study is available at https://urgi.versailles.inra.fr/Species/Pisum (Kreplak et al. [39]). The data supporting the findings of this study were used under license within the PeaMUST project and are not publicly available. Phenotyping and genotyping data, all other intermediate data files and scripts are however available from the authors upon reasonable request, subject to a data transfer agreement.
Abbreviations
AP2/EREBP: APETALA2/Ethylene Responsive Element Binding Protein
BIC: Bayesian Information Criterion
BLUPs: Best Linear Unbiased Predictions
CBF: C-repeat Binding Factor
cM: centiMorgan
COR: COld Regulated
CVg: Coefficient of Variation of the genetic variance
DAPC: Discriminant Analysis of Principal Components
DREB: Dehydration-Responsive Element-Binding
Elf3: Early flowering 3
FD: Frost Damages
FD_CC: Frost damages in the controlled conditions experiment
FD_Field_date1: Frost damages in the field experiment at the first date of scoring
FD_Field_date2: Frost damages in the field experiment at the second date of scoring
FD_Field_date3: Frost damages in the field experiment at the third date of scoring
FD_Field_date4: Frost damages in the field experiment at the fourth date of scoring
GWAS: Genome Wide Association Study
H2: Broad sense heritability
Hr: High response to photoperiod
INRAE: National Research Institute for Agriculture, Food and Environment
k: DAPC Cluster
K: Kinship matrix
KLG: Kinship matrix corresponding to a given Linkage Group
LD: Linkage Disequilibrium
LG: Linkage Group
LMM: Linear Mixed Model
MAF: Minor Allele Frequency
NAM: Nested Association Mapping
NIL: Near Isogenic Line
PCs: Principal Components
PCA: Principal Components Analysis
Q: Structure matrix
QTL: Quantitative Trait Locus
RIL: Recombinant Inbred Line
SE: Standard Error
SNK: Student-Newman-Keuls
SNP: Single Nucleotide Polymorphism
SSD: Single Seed Descent
SSR: Simple Sequence Repeat
Vg: Genetic variance
WFD: Winter Frost Damage
References
1. FAOSTAT. FAOSTAT database. Food and Agriculture Organization of the United Nations - Statistics Division. 2018. http://www.fao.org/faostat/en/#data/. Accessed 27 Mar 2020.
2. Graham P, Vance C. Legumes: importance and constraints to greater use. Plant Physiol. 2003;131:872–7. https://doi.org/10.1104/pp.017004.
3. Jeuffroy MH, Baranger E, Carrouee B, de Chezelles E, Gosme M, Henault C, et al. Nitrous oxide emissions from crop rotations including wheat, oilseed rape and dry peas. Biogeosciences. 2013;10(3):1787–97. https://doi.org/10.5194/bg-10-1787-2013.
4. Benezit M, Biarnes V, Jeuffroy MH. Impact of climate and diseases on pea yields: what perspectives with climate change? OCL. 2017;24(1):D103. https://doi.org/10.1051/ocl/2016055.
5. Lejeune-Hénaut I, Bourion V, Eteve G, Cunot E, Delhaye K, Desmyter C. Floral initiation in field-grown forage peas is delayed to a greater extent by short photoperiods, than in other types of European varieties. Euphytica. 1999;109(3):201–11. https://doi.org/10.1023/a:1003727324475.
6. Fowler DB, Breton G, Limin AE, Mahfoozi S, Sarhan F. Photoperiod and temperature interactions regulate low-temperature-induced gene expression in barley. Plant Physiol. 2001;127(4):1676–81. https://doi.org/10.1104/pp.010483.
7. Thomashow M. Plant cold acclimation, freezing tolerance genes and regulatory mechanisms. Annu Rev Plant Physiol Plant Mol Biol. 1999;50:571–99. https://doi.org/10.1146/annurev.arplant.50.1.571.
8. Ruelland E, Vaultier M, Zachowski A, Hurry V. Cold signalling and cold acclimation in plants. Adv Bot Res. 2009;49:35–146. https://doi.org/10.1016/S0065-2296(08)00602-2.
9. Rihan HZ, Al-Issawi M, Fuller MP. Advances in physiological and molecular aspects of plant cold tolerance. J Plant Interact. 2017;12(1):143–57. https://doi.org/10.1080/17429145.2017.1308568.
10. Stockinger E, Gilmour S, Thomashow M. Arabidopsis thaliana CBF1 encodes an AP2 domain-containing transcriptional activator that binds to the C-repeat/DRE, a cis-acting DNA regulatory element that stimulates transcription in response to low temperature and water deficit. Proc Natl Acad Sci U S A. 1997;94(3):1035–40. https://doi.org/10.1073/pnas.94.3.1035.
11. Gilmour SJ, Zarka DG, Stockinger EJ, Salazar MP, Houghton JM, Thomashow MF. Low temperature regulation of the Arabidopsis CBF family of AP2 transcriptional activators as an early step in cold-induced COR gene expression. Plant J. 1998;16(4):433–42. https://doi.org/10.1046/j.1365-313x.1998.00310.x.
12. Thomashow M. Molecular basis of plant cold acclimation: insights gained from studying the CBF cold response pathway. Plant Physiol. 2010;154(2):571–7. https://doi.org/10.1104/pp.110.161794.
13. Alonso-Blanco C, Gomez-Mena C, Llorente F, Koornneef M, Salinas J, Martinez-Zapater J. Genetic and molecular analyses of natural variation indicate CBF2 as a candidate gene for underlying a freezing tolerance quantitative trait locus in Arabidopsis. Plant Physiol. 2005;139(3):1304–12. https://doi.org/10.1104/pp.105.068510.
14. Toth B, Galiba G, Feher E, Sutka J, Snape JW. Mapping genes affecting flowering time and frost resistance on chromosome 5B of wheat. Theor Appl Genet. 2003;107(3):509–14. https://doi.org/10.1007/s00122-003-1275-3.
15. Vagujfalvi A, Galiba G, Cattivelli L, Dubcovsky J. The cold regulated transcriptional activator CBF3 is linked to the frost tolerance locus Fr-A2 on wheat chromosome 5A. Mol Gen Genomics. 2003;269(1):60–7. https://doi.org/10.1007/s00438-003-0806-6.
16. Knox AK, Li CX, Vagujfalvi A, Galiba G, Stockinger EJ, Dubcovsky J. Identification of candidate CBF genes for the frost tolerance locus Fr-A(m)2 in Triticum monococcum. Plant Mol Biol. 2008;67(3):257–70. https://doi.org/10.1007/s11103-008-9316-6.
17. Pasquariello M, Barabaschi D, Himmelbach A, Steuernagel B, Ariyadasa R, Stein N, et al. The barley frost resistance-H2 locus. Funct Integr Genomics. 2014;14(1):85–100. https://doi.org/10.1007/s10142-014-0360-9.
18. Sandve SR, Kosmala A, Rudi H, Fjellheim S, Rapacz M, Yamada T, et al. Molecular mechanisms underlying frost tolerance in perennial grasses adapted to cold climates. Plant Sci. 2011;180(1):69–77. https://doi.org/10.1007/s10142-014-0360-9.
19. Tayeh N, Bahrman N, Sellier H, Bluteau A, Blassiau C, Fourment J, et al. A tandem array of CBF/DREB1 genes is located in a major freezing tolerance QTL region on Medicago truncatula chromosome 6. BMC Genomics. 2013;14(1):814. https://doi.org/10.1186/1471-2164-14-814.
20. Tondelli A, Francia E, Barabaschi D, Pasquariello M, Pecchioni N. Inside the CBF locus in Poaceae. Plant Sci. 2011;180(1):39–45. https://doi.org/10.1016/j.plantsci.2010.08.012.
21. Lejeune-Hénaut I, Hanocq E, Béthencourt L, Fontaine V, Delbreil B, Morin J, et al. The flowering locus Hr colocalizes with a major QTL affecting winter frost tolerance in Pisum sativum L. Theor Appl Genet. 2008;116(8):1105–16. https://doi.org/10.1007/s00122-008-0739-x.
22. Dumont E, Fontaine V, Vuylsteker C, Sellier H, Bodele S, Voedts N, et al. Association of sugar content QTL and PQL with physiological traits relevant to frost damage resistance in pea under field and controlled conditions. Theor Appl Genet. 2009;118(8):1561–71. https://doi.org/10.1007/s00122-009-1004-7.
23. Klein A, Houtin H, Rond C, Marget P, Jacquin F, Boucherot K, et al. QTL analysis of frost damage in pea suggests different mechanisms involved in frost tolerance. Theor Appl Genet. 2014;127(6):1319–30. https://doi.org/10.1007/s00122-014-2299-6.
24. Gupta PK, Kulwal PL, Jaiswal V. Association mapping in crop plants: opportunities and challenges. Adv Genet. 2014;85:109–47. https://doi.org/10.1016/b978-0-12-800271-1.00002-0.
25. Li YL, Bock A, Haseneyer G, Korzun V, Wilde P, Schon CC, et al. Association analysis of frost tolerance in rye using candidate genes and phenotypic data from controlled, semi-controlled, and field phenotyping platforms. BMC Plant Biol. 2011;11:146. https://doi.org/10.1186/1471-2229-11-146.
26. Zhao YS, Gowda M, Wurschum T, Longin CFH, Korzun V, Kollers S, et al. Dissecting the genetic architecture of frost tolerance in central European winter wheat. J Exp Bot. 2013;64(14):4453–60. https://doi.org/10.1093/jxb/ert259.
27. Visioni A, Tondelli A, Francia E, Pswarayi A, Malosetti M, Russell J, et al. Genome-wide association mapping of frost tolerance in barley (Hordeum vulgare L.). BMC Genomics. 2013;14:424. https://doi.org/10.1186/1471-2164-14-424.
28. Tondelli A, Pagani D, Ghafoori IN, Rahimi M, Ataei R, Rizza F, et al. Allelic variation at Fr-H1/Vrn-H1 and Fr-H2 loci is the main determinant of frost tolerance in spring barley. Environ Exp Bot. 2014;106:148–55. https://doi.org/10.1016/j.envexpbot.2014.02.014.
29. Yu XQ, Pijut PM, Byrne S, Asp T, Bai GH, Jiang YW. Candidate gene association mapping for winter survival and spring regrowth in perennial ryegrass. Plant Sci. 2015;235:37–45. https://doi.org/10.1016/j.plantsci.2015.03.003.
30. Ali MBM, Welna GC, Sallam A, Martsch R, Balko C, Gebser B, et al. Association analyses to genetically improve drought and freezing tolerance of faba bean (Vicia faba L.). Crop Sci. 2016;56(3):1036–48. https://doi.org/10.2135/cropsci2015.08.0503.
31. Sallam A, Arbaoui M, El-Esawi M, Abshire N, Martsch R. Identification and verification of QTL associated with frost tolerance using linkage mapping and GWAS in winter faba bean. Front Plant Sci. 2016;7:1098. https://doi.org/10.3389/fpls.2016.01098.
32. Tumino G, Voorrips RE, Rizza F, Badeck FW, Morcia C, Ghizzoni R, et al. Population structure and genome-wide association analysis for frost tolerance in oat using continuous SNP array signal intensity ratios. Theor Appl Genet. 2016;129(9):1711–24. https://doi.org/10.1007/s00122-016-2734-y.
33. Wurschum T, Longin CFH, Hahn V, Tucker MR, Leiser WL. Copy number variations of CBF genes at the Fr-A2 locus are essential components of winter hardiness in wheat. Plant J. 2017;89(4):764–73. https://doi.org/10.1111/tpj.13424.
34. Tayeh N, Aubert G, Pilet-Nayel ML, Lejeune-Henaut I, Warkentin TD, Burstin J. Genomic tools in pea breeding programs: status and perspectives. Front Plant Sci. 2015;6:1037. https://doi.org/10.3389/fpls.2015.01037.
35. Desgroux A, L'Anthoene V, Roux-Duparque M, Riviere JP, Aubert G, Tayeh N, et al. Genome-wide association mapping of partial resistance to Aphanomyces euteiches in pea. BMC Genomics. 2016;17:124. https://doi.org/10.1186/s12864-016-2429-4.
36. Tayeh N, Aluome C, Falque M, Jacquin F, Klein A, Chauveau A, et al. Development of two major resources for pea genomics: the GenoPea 13.2K SNP Array and a high density, high resolution consensus genetic map. Plant J. 2015;84(6):1257–73. https://doi.org/10.1111/tpj.13070.
37. Desgroux A, Baudais VN, Aubert V, Le Roy G, de Larambergue H, Miteul H, et al. Comparative genome-wide-association mapping identifies common loci controlling root system architecture and resistance to Aphanomyces euteiches in pea. Front Plant Sci. 2018;8:2195. https://doi.org/10.3389/fpls.2017.02195.
38. Liu R, Fang L, Yang T, Zhang XY, Hu JG, Zhang HY, et al. Marker-trait association analysis of frost tolerance of 672 worldwide pea (Pisum sativum L.) collections. Sci Rep. 2017;7:5919. https://doi.org/10.1038/s41598-017-06222-y.
39. Kreplak J, Madoui M, Cápal P, Novák P, Labadie K, Aubert G, et al. A reference genome for pea provides insight into legume genome evolution. Nat Genet. 2019;51:1411–22. https://doi.org/10.1038/s41588-019-0480-1.
40. Weller JL, Liew LC, Hecht VFG, Rajandran V, Laurie RE, Ridge S, et al. A conserved molecular basis for photoperiod adaptation in two temperate legumes. Proc Natl Acad Sci U S A. 2012;109(51):21158–63. https://doi.org/10.1073/pnas.1207943110.
41. Sun X, Yang T, Hao J, Zhang X, Ford R, Jiang J, et al. SSR genetic linkage map construction of pea (Pisum sativum L.) based on Chinese native varieties. Crop J. 2014;2(2):170–4. https://doi.org/10.1016/j.cj.2014.03.004.
42. Hamon C, Baranger A, Coyne CJ, McGee RJ, Le Goff I, L'Anthoëne V, et al. New consistent QTL in pea associated with partial resistance to Aphanomyces euteiches in multiple French and American environments. Theor Appl Genet. 2011;123(2):261–81. https://doi.org/10.1007/s00122-011-1582-z.
43. Tayeh N, Bahrman N, Devaux R, Bluteau A, Prosperi J, Delbreil B, et al. A high-density genetic map of the Medicago truncatula major freezing tolerance QTL on chromosome 6 reveals colinearity with a QTL related to freezing damage on Pisum sativum linkage group VI. Mol Breeding. 2013;32:279–89. https://doi.org/10.1007/s11032-013-9869-1.
44. Zhu J, Pearce S, Burke A, See DR, Skinner DZ, Dubcovsky J, et al. Copy number and haplotype variation at the VRN-A1 and central FR-A2 loci are associated with frost tolerance in hexaploid wheat. Theor Appl Genet. 2014;127(5):1183–97. https://doi.org/10.1007/s00122-014-2290-2.
45. Novak A, Boldizsar A, Gierczik K, Vagujfalvi A, Adam E, Kozma-Bognar L, et al. Light and temperature signalling at the level of CBF14 gene expression in wheat and barley. Plant Mol Biol Rep. 2017;35(4):399–408. https://doi.org/10.1007/s11105-017-1035-1.
46. Eremina M, Unterholzner SJ, Rathnayake AI, Castellanos M, Khan M, Kugler KG, et al. Brassinosteroids participate in the control of basal and acquired freezing tolerance of plants. Proc Natl Acad Sci U S A. 2016;113(40):E5982–91. https://doi.org/10.1073/pnas.1611477113.
47. Li H, Ye K, Shi Y, Cheng J, Zhang X, Yang S. BZR1 positively regulates freezing tolerance via CBF-dependent and CBF-independent pathways in Arabidopsis. Mol Plant. 2017;10(4):545–59. https://doi.org/10.1016/j.molp.2017.01.004.
48. Anwar A, Liu YM, Dong RR, Bai LQ, Yu X, Li YS. The physiological and molecular mechanism of brassinosteroid in response to stress: a review. Biol Res. 2018;51:46. https://doi.org/10.1186/s40659-018-0195-2.
49. Lester DR, Ross JJ, Davies PJ, Reid JB. Mendel's stem length gene (Le) encodes a gibberellin 3 beta-hydroxylase. Plant Cell. 1997;9(8):1435–43. https://doi.org/10.1105/tpc.9.8.1435.
50. Martin DN, Proebsting WM, Hedden P. Mendel's dwarfing gene: cDNAs from the Le alleles and function of the expressed proteins. Proc Natl Acad Sci U S A. 1997;94(16):8907–11. https://doi.org/10.1073/pnas.94.16.8907.
51. Achard P, Gong F, Cheminant S, Alioua M, Hedden P, Genschik P. The cold-inducible CBF1 factor-dependent signaling pathway modulates the accumulation of the growth-repressing DELLA proteins via its effect on gibberellin metabolism. Plant Cell. 2008;20(8):2117–29. https://doi.org/10.1105/tpc.108.058941.
52. Xin Z, Browse J. Cold comfort farm: the acclimation of plants to freezing temperatures. Plant Cell Environ. 2000;23(9):893–902. https://doi.org/10.1046/j.1365-3040.2000.00611.x.
53. Wiseman J, Al-Mazooqi W, Welham T, Domoney C. The apparent ileal digestibility, determined with young broilers, of amino acids in near-isogenic lines of peas (Pisum sativum L.)
differing in trypsin inhibitor activity. J Sci Food Agri. 2003;83(7):644–51. https://doi.org/10.1002/jsfa.1340. Page D, Duc G, Lejeune-Henaut I, Domoney C. Marker-assisted selection of genetic variants for seed trypsin inhibitor contents in peas. Pisum Genetics. 2003;35:19–21 http://hermes.bionet.nsc.ru/pg/35/19.htm. Burstin J, Salloignon P, Chabert-Martinello M, Magnin-Robert JB, Siol M, Jacquin F, et al. Genetic diversity and trait genomic prediction in a pea diversity panel. BMC Genomics. 2015;16:105. https://doi.org/10.1186/s12864-015-1266-1. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2014. http://www.R-project.org/. Accessed 7 Jun 2016. Hervé M. RVAideMemoire: Diverse basic statistical and graphical functions. R package version 0.9–36. 2014. http://CRAN.R-project.org/package=RVAideMemoire. Accessed 7 Jun 2016. Bates D, Mächler M, Bolker BM, Walker SC. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67(1):1–48. https://doi.org/10.18637/jss.v067.i01. Bates D, Maechler M, Bolker B, Walker S, Christensen RHB, Singmann H, et al. Package lme4: linear mixed-effects models using Eigen and S4. R package version 1.1–18-1. 2018. https://CRAN.R-project.org/package=lme4. Accessed 17 Aug 2018. Alves-Carvalho S, Aubert G, Carrère S, Cruaud C, Brochot AL, Jacquin F, et al. Full-length de novo assembly of RNA-seq data in pea (Pisum sativum L.) provides a gene expression atlas and gives insights into root nodulation in this species. Plant J. 2015;84(1):1–19. https://doi.org/10.1111/tpj.12967. Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MAR, Bender D, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007;81(3):559–75. https://doi.org/10.1086/519795. Purcell SM. PLINK 1.9. https://www.cog-genomics.org/plink2. Accessed 13 Feb 2017. Browning BL, Browning SR. A unified approach to genotype imputation and haplotype-phase inference for large data sets of trios and unrelated individuals. Am J Hum Genet. 2009;84(2):210–23. https://doi.org/10.1016/j.ajhg.2009.01.005. Negro SS, Millet E, Madur D, Bauland C, Combes V, Welcker C, et al. Genotyping-by-sequencing and SNP-arrays are complementary for detecting quantitative trait loci by tagging different haplotypes in association studies. BMC Plant Biol. 2019;19:318. https://doi.org/10.1186/s12870-019-1926-4. Weir BS. Genetic data analysis II: methods for discrete population genetic data. 2nd ed. Sunderland, Massachussets: Sinauer Associates, Inc; 1996. Yu JM, Pressoir G, Briggs WH, Bi IV, Yamasaki M, Doebley JF, et al. A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nat Genet. 2006;38(2):203–8. https://doi.org/10.1038/ng1702. Lippert C, Listgarten J, Liu Y, Kadie CM, Davidson RI, Heckerman D. FaST linear mixed models for genome-wide association studies. Nat Methods. 2011;8(10):833–5. https://doi.org/10.1038/nmeth.1681. Rincent R, Moreau L, Monod H, Kuhn E, Melchinger AE, Malvar RA, et al. Recovering power in association mapping panels with variable levels of linkage disequilibrium. Genetics. 2014;197(1):375–87. https://doi.org/10.1534/genetics.113.159731. Jombart T, Devillard S, Balloux F. Discriminant analysis of principal components: a new method for the analysis of genetically structured populations. BMC Genet. 2010;11:94. https://doi.org/10.1186/1471-2156-11-94. Jombart T. 
Adegenet: a R package for the multivariate analysis of genetic markers. Bioinformatics. 2008;24(11):1403–5. https://doi.org/10.1093/bioinformatics/btn129. Jombart T, Ahmed I. Adegenet 1.3-1: new tools for the analysis of genome-wide SNP data. Bioinformatics. 2011;27(21):3070–1. https://doi.org/10.1093/bioinformatics/btr521. Jombart T, Kamvar Z, Collins C, Lustrik R, Beugin M, Knaus B, et al. adegenet: Exploratory analysis of genetic and genomic data version 2.1.1. 2018. https://cran.r-project.org/web/packages/adegenet. Accessed 08 Jun 2018. Nei M. Genetic distance between populations. Am Nat. 1972;106(949):283–92 https://www.jstor.org/stable/2459777. Pembleton LW, Cogan NOI, Forster JW. StAMPP: an R package for calculation of genetic differentiation and structure of mixed-ploidy level populations. Mol Ecol Resour. 2013;13(5):946–52 https://cran.r-project.org/web/packages/StAMPP/index.html. Accessed 05 Dec 2019. Maechler M, Rousseeuw P, Struyf A, Hubert M, Hornik K. cluster: Cluster analysis basics and extensions. R package version 2.1.0. 2019. https://cran.r-project.org/web/packages/cluster. Accessed 05 Dec 2019. Courtois B, Audebert A, Dardou A, Roques S, Ghneim-Herrera T, Droc G, et al. Genome-wide association mapping of root traits in a japonica rice panel. PLoS One. 2013;8(11):e78037. https://doi.org/10.1371/journal.pone.0078037. Turner S. qqman: Q-Q and manhattan plots for GWAS data. R package version 0.1.4. 2017. https://CRAN.R-project.org/package=qqman. Accessed 20 Aug 2018. de Mendiburu F. agricolae: Statistical procedures for agricultural research. R package version 1.2.8. 2017. https://cran.r-project.org/web/packages/agricolae/index.html. Accessed 07 Jun 2018. Yenne SP, Thill DC, Letourneau DJ, Auld DL, Haderlie LC. Techniques for selection of glyphosate-tolerant field pea, Pisum Sativum, Cultivars. Weed Technol. 1988;2(3):286–90. https://doi.org/10.1017/s0890037x00030608. The authors would like to thank the INRAE experimental units of Estrées-Mons, Dijon and Theix, France, for their contribution to the field experiments. They are also grateful to Catherine Desmetz (Agroécologie, INRAE) for additional genotyping of the reference collection, to Marianne Chabert-Martinello, Anthony Klein and Jean-Bernard Magnin-Robert (Agroécologie, INRAE) for providing informations about the constitution of the reference collection and to Renaud Rincent (GDEC, INRAE) for his advices in genome wide association analyses. This work was supported by the PhD fellowship of Sana Beji which was co-funded by the University of Lille, doctoral school for materials, radiation and environmental sciences (France) and by the region Hauts-de-France (France). The acquisition of the phenotyping data was funded by the SNPEA project (ANR06-GPLA019). The acquisition of the genotyping data was funded by the GENOPEA project (ANR-09-GENM-026). The analyzes, interpretation of data and writing of the manuscript were realized in the frame of the PeaMUST project (ANR11-BTBR-0002). The three project fundings come from the French government and were managed by the Research National Agency (ANR). The funding agency did not contribute to the design of the study or collection, nor to the analysis and interpretation of data or to the writing of the manuscript. BioEcoAgro, INRAE, Univ. Liège, Univ. Lille, Univ. 
Picardie Jules Verne, 2, Chaussée Brunehaut, F-80203, Estrées-Mons, France Sana Beji, Véronique Fontaine, Nasser Bahrman, Jean-Louis Hilbert, Bruno Delbreil & Isabelle Lejeune-Hénaut GCIE-Picardie, INRAE, F-80203, Estrées-Mons, France Rosemonde Devaux & Martine Thomas GQE - Le Moulon, INRAE, Univ. Paris-Sud, CNRS, AgroParisTech, Univ. Paris-Saclay, F-91190, Gif-sur-Yvette, France Sandra Silvia Negro Agroécologie, AgroSup Dijon, INRAE, Univ. Bourgogne, Univ. Bourgogne Franche-Comté, F-21000, Dijon, France Mathieu Siol, Grégoire Aubert & Judith Burstin Sana Beji Véronique Fontaine Rosemonde Devaux Martine Thomas Nasser Bahrman Mathieu Siol Grégoire Aubert Judith Burstin Jean-Louis Hilbert Bruno Delbreil Isabelle Lejeune-Hénaut VF, RD and MT collected phenotypic data in the field and controlled conditions experiments. GA and JB coordinated the genotyping of the association mapping collection and provided genotyping data. SB performed the phenotypic and genetic analyses, as well as association mapping and haplotypes analysis and wrote the manuscript. MS and SSN assisted in analysing genetic data by testing GWAS models and providing a part of R scripts, respectively. ILH and BD conceived and coordinated the study. ILH managed the technical and financial supplies of the study within the PeaMUST project. JLH, BD and ILH coordinated the overall progress and financial support of the study. ILH, BD and NB reviewed and contributed to draft the manuscript. All authors read and approved the final manuscript. Correspondence to Sana Beji. Distribution of Best Linear Unbiased Prediction (BLUP) values for the five traits observed within the pea collection. A: frost damages in the controlled conditions experiment. B, C, D and E: frost damages in the field experiment at the date 1, 2, 3 and 4 respectively. Distribution of 10,739 SNPs along the Pisum sativum linkage groups. Number of SNPs per position are indicated as grey horizontal bars. Genetic position in cM is shown on the y-axis and number of SNPs per position is shown on the x-axis. Description of the Pisum sativum linkage groups (LGs) used in the present study. The number of SNP markers, the genetic length (in cM, from Tayeh et al. [36]) and the average minor allele frequency (MAF) are shown for each LG. Distribution of minor allele frequencies (MAF) for 10,739 SNP markers within the 363 pea accessions. Scatterplot showing the linkage disequilibrium (LD) decay estimated in the association mapping collection. The LD decay across each linkage group (LG) and the overall LD decay across the genome (All LG) are shown. The r2 values of LD between pairs of markers considered are plotted as a function of the genetic position in cM. Red curves represent the estimated LD decay. Blue dashed horizontal lines represent half of the maximum LD value. Blue dashed vertical lines represent the estimated genetic distance (cM) at which the LD decay dropped to half of its maximum. LD decay rate is represented as the point of intersection between the two dashed lines. Description of the association mapping collection. This table presents the list of the pea accessions composing the association mapping collection with their end-use, cultivation status, geographical origin and sowing type. The 'DAPC_Cluster (k)' column shows assignation of the 363 pea accessions to a cluster based on the discriminant analysis of principal components (DAPC). 
The 'Dendrogram_Cluster (k)' column shows the allocation of individuals to clusters based on the dendrogram using Nei genetic distances between accessions. The description of the pea accessions is extracted from Burstin et al. [55]. CRB Code: Code used for the association mapping, also named collection of biological resources (CRB). *: sowing type modified for this accession, according to Yenne et al. [79]. Dendrogram from Nei genetic distance matrix for 363 genotypes of the pea reference collection. On the y-axis are represented the genetic distances between clusters or accessions. On the x-axis are represented, in red font, the clusters identified for a Nei genetic distance of 7. Distribution of the kinship coefficients between accessions of the association mapping collection. The first histogram (A) describes the distribution of the kinship coefficients within the K matrix, calculated with all markers of the genome. The remaining histograms (B, C, D, E, F, G and H) describe the kinship coefficients within each of the seven KLG matrices calculated as explained in the material and methods section (for example the kinship matrix KLG1 was estimated with all the markers except those that are located on the first linkage group). Description of linkage disequilibrium (LD) blocks per linkage group in the association mapping collection. A LD block consists in a series of at least 2 markers which are in significant LD (r2 > 0.8) with at least one trait-associated marker (underlined marker). LD blocks are named in consecutive numerical order following their linkage group (LG) name. cM*: genetic position, in centiMorgan of each marker along the genetic map of the corresponding linkage group; LD (r2) **: r2 value of each marker with the other markers of the same LD block. Additional file 10: Table S4. Marker haplotype analysis of the association mapping collection. For each linkage group (LG), the list of markers significantly detected by GWAS and markers in linkage disequilibrium (LD; r2 > 0.8) with the former ones is shown. The third line shows genetic positions from the consensus map of Tayeh et al. [36]. The fourth line indicates the LD blocks composed and named as mentioned in the legend of Additional file 9. The following lines show the allelic composition of haplotypes defined by LD blocks and individual associated markers at each of the 6 frost tolerance loci on linkage groups (LGs) I, II, III, V, VI and VII. For each frost damage (FD)-associated marker, the favorable allele is in red font and the unfavorable allele in blue font. Haplotypes are named in consecutive numerical order following their linkage group name; only haplotypes without missing values or heterozygous markers and carried by more than 3% of the lines from the association mapping collection are listed. For each haplotype, accessions and their mean phenotypic values ± standard error of the variables significantly associated with marker(s) in the linkage group are shown. Significant differences between haplotypes were assayed by a SNK means comparison test; favorable haplotypes are shown by a red background and unfavorable haplotypes by a blue background, regarding the SNK test. Haplotypes with a white background are classified in intermediate groups. List of annotated genes underlying genome-wide association loci of frost tolerance in pea. 
Genes that were located in an interval of ±1 Mb on both sides of markers significantly detected by GWAS and markers in linkage disequilibrium (LD; r2 > 0.8) with the former ones, are listed. For each identified gene, the nearest marker significantly detected by GWAS (underlined font) or marker in LD with associated marker(s) (non-underlined font) is shown. The annotation of genes was extracted from Pisum sativum v.1a genome JBrowse available at https://urgi.versailles.inra.fr/Species/Pisum [39]. Genes positions in the pea genome assembly v.1a are presented by their assigned chromosomes and physical positions indicated in bp. Genes which are not assigned to one of the seven chromosomes, are represented by their physical positions on unanchored scaffolds. a: Annotation refined with the homologous gene from Medicago truncatula available on the Pisum sativum v.1a genome JBrowse, and whose corresponding gene function was identified from the Medicago truncatula v4.0 genome JBrowse available on www.medicagogenome.org. b: Annotation refined with the homologous gene from Glycine max available on the Pisum sativum v.1a genome JBrowse and whose corresponding gene function was identified from the genome v9.0 assembly V1.1 available on http://soykb.org/gene_card.php. c: Annotation refined with the predicted protein function of transcript sequences corresponding to mapped SNPs, extracted from Table S10 in Tayeh et al. [36]. Description of the pea association mapping collection as described in Additional file 6 and the correspondence of the genotyping results of accessions at the Hr locus. Description of accessions of the pea reference collection for their haplotype at the frost damage (FD)-associated loci of the GWA study and at the Hr locus. At the linkage groups (LGs) I, II, III, V, VI and VII, the favorable haplotypes are shown by a red background, the unfavorable haplotypes by a blue background, as described in the legend of Additional file 10. Accessions with undefined haplotypes or intermediate haplotypes were not presented. The same colour code has been used to describe the favorable (red background: Hr) and unfavorable (blue background: hr) allele for the Hr gene, as determined in Lejeune-Hénaut et al. [21]. The mean frost damage score observed in the field experiment as well as its standard error (SE) are given. Frost scores are ranging from 0 (no damage) to 5 (dead plant). The passport data of accessions are extracted from the Additional file 6. Daily air and soil temperatures during the field experiment in Clermont-Ferrand Theix. Beji, S., Fontaine, V., Devaux, R. et al. Genome-wide association study identifies favorable SNP alleles and candidate genes for frost tolerance in pea. BMC Genomics 21, 536 (2020). https://doi.org/10.1186/s12864-020-06928-w DOI: https://doi.org/10.1186/s12864-020-06928-w Frost damages Genome wide association study (GWAS) Pea (Pisum sativum L.) Quantitative trait loci (QTL) Haplotypes of markers
CommonCrawl
9.5 Getting that perfect start: Guettler's diagram So far in this chapter we have been following the "physicist's agenda", using simplified models to look at one phenomenon at a time to build up a picture. But what about the "musician's agenda"? How much have we learned that will be interesting to a player of the violin or cello? If we are honest, the answer is "only a rather limited amount". Schelleng's diagram is relevant, particularly to beginners. Wolf notes can be of direct interest to all players, but they are somewhat of a niche interest since they only affect particular notes. But what players really care about, and spend a lot of time on, relates to transients. Hours of practice are devoted to mastering different bowing techniques, each associated with a particular way to start a note or to transition between notes. If an instrument had some subtle feature which made a particular tricky bowing gesture just a little easier or more reliable, it seems a good guess that a player would describe that instrument as being "easier to play". This possibility opens an interesting avenue of study, using computer simulation models of the kind we have been talking about. The earliest computer models date from the 1970s. A decade later, the increasing power of computers allowed these early bowed-string simulation models to be used to start exploring transient effects systematically [1,2]. How quickly, if at all, is the Helmholtz motion established after a given bow gesture? How does the transient length vary if you change parameters in the model that are relevant to a player, or an instrument maker? This early work established some useful methodology, but we needn't look at the detailed results because the studies suffered from major flaws that only became apparent later: we will meet them in the course of this section and the next. The people we have mentioned so far, like Raman, Cremer and Schelleng, have all been scientists or engineers with an interest in music. But the hero of this section was a musician who developed a strong interest in science. Knut Guettler was a virtuoso player and teacher of the double bass. He got interested in whether theoretical models and computer simulations could tell him things that would be useful in his teaching. Double bass players have a particular problem: some of the notes they play have such low frequencies that they can't afford a bowing transient that takes 10 or 20 period lengths to settle into Helmholtz motion: a short note may be over by then! So Guettler set himself the task of understanding what kind of bow gesture a player needed to perform in order to get a "perfect start" in which the Helmholtz sawtooth waveform was established right from the first slip of the string over the bowhair. Figure 1. Knut Guettler. Image copyright Anders Askenfelt, reproduced by permission. Guettler's initial step was to point out the first of the major flaws in the early computer studies: the transients used in those studies were all physically impossible! In the computer, it is easy to simulate "switch-on" transients in which the bow speed or force suddenly changes. But any physical transient cannot have jumps in either quantity: it must start with either the bow force or the speed (or both) equal to zero. If the bow is already in contact with the string with non-zero normal force, the speed must start from zero. 
On the other hand, if the bow is already moving when the bow makes contact with the string, as in a string-crossing gesture, then the normal force must build up from zero. This realisation led Guettler to study a more realistic family of transient gestures. The bow starts in contact with the string, and the force is held constant while the bow is accelerated from rest with a chosen value of acceleration. Guettler then used the simplest available model of bowed string motion to pursue his agenda of finding the conditions under which a "perfect start" was possible from one of these constant-acceleration gestures. He assumed an ideal "textbook" string, terminated in mechanical resistances or "dashpots". This simple model is yet another thing that goes back to Raman, and it is essentially the same model that Schelleng used in his discussion of bow force limits for steady Helmholtz motion. Some mathematical details of this "Raman model" are given in the next link. A virtue of this simple model is that it allowed Guettler to follow how the string motion develops during the early stages of a transient. The string is initially sticking to the bow, and it is pulled to one side as the bow moves. The first interesting thing to happen is that, sooner or later, the string releases from the bow. As sketched in Fig. 2, this release will give a similar effect to plucking the string. As we saw back in section 5.4, a pair of corners will then be created, travelling away from the bow symmetrically in both directions. One of these (the one travelling towards the bridge) looks very much like the Helmholtz corner we want, but the other one has the wrong sign. As Schelleng first pointed out [3], in order for a perfect start to occur, the "good" corner must survive and develop into the Helmholtz corner while the other one needs to disappear. Figure 2. Sketch of string displacement before the first slip at the bow (solid line), and at two times shortly after that (dashed lines). A pair of corners travel away from the bow, indicated by the red arrows. In a tour de force of analysis, Guettler was able to track the behaviour through the first few period-lengths after the first release, and he identified four things that might go wrong with the desired sequence of events [4]. For each of the four, he was able to find a criterion that would decide success or failure. The details of the calculation are quite messy, and we need not go into them, but we can show the key results in graphical form. All four criteria take the form of a critical value of the ratio of bow force to bow acceleration. For two of them, the ratio must be bigger than the critical value, while for the other two it must be less than the critical value. There is a simple way to represent the result, illustrated in Fig. 3. Each criterion corresponds to a straight line in the acceleration—force plane. The slopes of these lines are determined by the four critical values. So there are four radial lines in that plane, and for a perfect start we need to be above two of these lines (shown in blue), and below the other two (shown in red). Unless the criteria are inconsistent, the result is a wedge-shaped region in the plane (shaded yellow here) within which a perfect start might be possible. Such plots are now known, naturally enough, as "Guettler diagrams". This particular example is computed using the frequency and typical impedance of a violin G string (196 Hz and 0.363 Ns/m respectively), and the string is assumed to be undamped. 
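The way the four criteria combine is simple enough to sketch in code. The snippet below (an illustration added here, not taken from Guettler's paper) just checks whether a given pair of bow acceleration and bow force lies above the two lower lines and below the two upper lines; the slope values are invented placeholders, whereas the real critical ratios follow from Guettler's equations and depend on the string properties, the friction parameters and the bow position.

# Minimal sketch of how Guettler's four criteria define the wedge in Fig. 3.
# The slope values below are illustrative placeholders only; they are NOT
# Guettler's actual critical ratios, which must be computed from his formulas.

def in_guettler_wedge(accel, force,
                      lower_slopes=(0.2, 0.35),   # placeholder values, N per (m/s^2)
                      upper_slopes=(0.9, 1.4)):   # placeholder values, N per (m/s^2)
    """True if (acceleration, force) satisfies all four force/acceleration
    ratio criteria, i.e. lies inside the wedge-shaped region."""
    return (all(force > s * accel for s in lower_slopes) and
            all(force < s * accel for s in upper_slopes))

def wedge_exists(lower_slopes=(0.2, 0.35), upper_slopes=(0.9, 1.4)):
    """The wedge is non-empty only if every lower line lies below every upper line."""
    return max(lower_slopes) < min(upper_slopes)

print(in_guettler_wedge(1.0, 0.5))   # inside the illustrative wedge
print(in_guettler_wedge(1.0, 2.0))   # above both upper lines, so no perfect start
print(wedge_exists())

With the real critical ratios substituted for the placeholders, scanning such a test over a grid of accelerations and forces reproduces the wedge of Fig. 3.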
The assumed friction coefficients are taken from the measurement shown in Fig. 6 of section 9.2. The chosen bow position has $\beta=0.13$ (recall that this parameter "beta" specifies the position of the bowed point as a fraction of the string length). Figure 3. Example of Guettler's criteria for a perfect start. For a given bow acceleration, the force must lie above the two blue lines, and below the two red lines: in other words, it must lie in the shaded wedge-shaped region. All four of Guettler's boundary lines move in a rather complicated way when the bow position $\beta$ changes. An example of the variation of the slopes of the four lines is plotted in Fig. 4, using the same line colours and types as in Fig. 3. Logarithmic scales have been used for both axes here, to highlight an intriguing parallel with the Schelleng diagram. The Schelleng diagram shows that for a given bow speed, there are limits on the bow force in order for Helmholtz motion to be possible. If $\beta$ is decreased, both limits increase, and they get closer together and eventually meet. The new diagram says that for a given bow acceleration, there are limits on the bow force in order for a perfect start to be possible. These limits, too, increase as $\beta$ decreases, and get closer together and eventually meet. The pattern is more complicated than the Schelleng diagram, because there are two criteria for each of the upper and lower limits (shown in solid and dashed lines), and the lines cross so that all four play a role in determining the allowed region for some values of $\beta$. Figure 4. The variation with bow position $\beta$ of the slopes of the four lines from Fig. 3, using the same line colour convention. The curves are all taken from equations given in Guettler's study [4]. The two solid lines represent Guettler's equation (8b), the dashed red line is for his equation (10b) and the blue dashed line is for his equation (12). The region shaded in yellow is where a perfect start might be possible. The example from Fig. 3 had $\beta=0.13$, so that the allowed region was between the dashed blue line and the solid red line. Just as we did with Schelleng's diagram, we can compare Guettler's predictions with measurements using the Galluzzo bowing machine described in section 9.3.2. For each value of $\beta$, the machine bowed the open D string of a cello 400 times, in a $20\times 20$ grid in the Guettler plane. The bridge force from each note was recorded, and analysed using an automated procedure that attempted to find the length of transient before Helmholtz motion was established (if it ever was established, of course). This automated analysis is fallible, there is no doubt about that, but the same routine has been used in all cases (and will be used again when we come to compare with simulated results) so the comparison between cases should be fair. Figure 5 shows the results, for 6 particular values of $\beta$. It is immediately clear that the results give at least qualitative support for Guettler's predictions. There are not very many perfect starts (which appear as white pixels), but the successful transients (shown in colours other than black) are confined in each case to a vaguely wedge-shaped region. As $\beta$ increases, the boundaries rotate downwards and the wedge tends to get broader (although in the bottom left-hand plot it seems to have got narrower again). For the smallest value of $\beta$ shown here, there are very few coloured pixels. 
For even smaller values of $\beta$ the results were not worth showing, because they show virtually no coloured pixels at all. The left-hand plot in the bottom row shows a lot of black pixels within the wedge region. The main reason for this is something we saw earlier: with this particular value of $\beta$ the string often chooses to vibrate with S-motion rather than Helmholtz motion. Figure 5. Guettler diagrams measured with the Galluzzo bowing machine, on the open D string of a cello. The string was bowed with a rosin-coated rod for these tests: we will see in section 9.7 how the results change when a conventional cello bow was used. The values of $\beta$ are 0.0499 and 0.0566 (top row); 0.0714 and 0.0899 (middle row); 0.113 and 0.18 (bottom row). The colour scale indicates the length of transient, in period lengths, before Helmholtz motion is established. White pixels marked $\times$ indicate that there was insufficient data in the measurement to give a reliable estimate, because the delay until the first slip was too long. We can see and hear a few sample waveforms. They are all drawn from one particular column of the right-hand Guettler diagram in the middle row of Fig. 5, corresponding to $\beta=0.0899$. Figure 6 shows three waveforms, corresponding to pixels 2, 9 and 17 of the 9th column of that figure, counting everything from the bottom left-hand corner. The value of bow acceleration is 1.39 m/s$^2$, and the three force values are 0.55 N, 1.58 N and 2.76 N respectively. At the top, plotted in black, is the waveform for the highest force. It appears as a black pixel in Fig. 5, but it looks as if it is close to settling into the Helmholtz sawtooth by the end of the time shown here. You can hear it in Sound 1: it has a rather "scratchy" sound. In the middle, plotted in red, is a "perfect start" which you can hear in Sound 2. At the bottom, plotted in blue, is a case with a slow transient, settling into double-slipping ("surface sound"). You can hear it in Sound 3. All the sounds are very short, only 1/4 s for each one. You should be able to hear noticeable differences between all three, but the quality difference between the second and third sound may not come across very clearly. Figure 6. Measured waveforms for three transients from the Guettler diagram corresponding to $\beta=0.0899$ in Fig. 5. They are all drawn from the 9th column, with acceleration 1.39 m/s$^2$. They are laid out in the same sense as in Fig. 5: counting from the bottom, they correspond to pixels 2, 9 and 17. Sound 1. The sound of the waveform plotted in black in Fig. 6, corresponding to the 17th pixel from the bottom in Fig. 5. Sound 2. The sound of the waveform plotted in red in Fig. 6, corresponding to the 9th pixel from the bottom in Fig. 5. This is the sound of a "perfect start". Sound 3. The sound of the waveform plotted in blue in Fig. 6, corresponding to the 2nd pixel from the bottom in Fig. 5. It produces double slipping motion, "surface sound", rather than Helmholtz motion. To harness the full potential of theoretical bowed-string models for exploring issues of playability, qualitative agreement with measurements is not enough. We would like to use computer models to find out how transient lengths change as a result of changing parameters relevant to players and instrument makers. Well, in order for that agenda to be possible, the model must be sufficiently complete and realistic that it contains all those parameters. It must also be reliable enough to capture the influence of changing them. 
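Whatever simulation model is eventually used, its output has to be passed through the same kind of automated classification as the measured bridge-force records, so it is worth making that step concrete. The sketch below is a deliberately crude stand-in for such a routine, not the one actually used for Fig. 5 or for the simulated diagrams later in this section: it counts large downward jumps of the bridge force as slips, and reports the first period after which exactly one slip per period persists, as expected for Helmholtz motion. The jump threshold, the function names and the synthetic test signal are all invented for the example.

import numpy as np

def transient_length(bridge_force, samples_per_period, jump_fraction=0.5):
    """Number of periods before one-slip-per-period behaviour sets in,
    or None if it is never reached (a 'black pixel' in a Guettler diagram)."""
    f = np.asarray(bridge_force, dtype=float)
    jumps = np.diff(f)
    threshold = jump_fraction * (f.max() - f.min())     # what counts as a slip here
    slip_positions = np.flatnonzero(jumps < -threshold)
    n_periods = len(f) // samples_per_period
    slips_per_period = np.zeros(n_periods, dtype=int)
    for p in slip_positions:
        k = p // samples_per_period
        if k < n_periods:
            slips_per_period[k] += 1
    # The final period is ignored: the slip that terminates it falls outside the record.
    for start in range(n_periods - 1):
        if np.all(slips_per_period[start:n_periods - 1] == 1):
            return start
    return None

# Synthetic test: two irregular periods followed by a clean Helmholtz-like sawtooth.
T = 100
signal = np.tile(np.linspace(-1.0, 1.0, T), 8)
signal[:2 * T] = np.random.default_rng(0).uniform(-1.0, 1.0, 2 * T)
print(transient_length(signal, T))      # typically reports 2 for this example

A pixel in a Guettler diagram is then just the value returned by such a routine for one (acceleration, force) pair, with the black pixels corresponding to the value None here.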
A first step would be to demonstrate quantitative agreement with measurements. As we will see, this proves to be a tall order. What might we need to include in such a model? We have already met several factors that proved to be significant when looking at plucked strings, and it seems a fair bet that those will all be relevant to bowed strings too: the effect of the string's bending stiffness and its intrinsic damping; the effect on frequencies and damping of coupling to the instrument body; the influence of the second polarisation of string motion, perpendicular to the bowing direction. These factors can indeed all be included [5]: the next link gives some technical details. In addition, there is a new factor that was not relevant to plucked strings. Transverse string vibration is not the only thing to be driven by the force that a violin bow applies to the string. That force is applied tangentially to the surface of the string, so it can also excite torsional motion of the string, as sketched in Fig. 7 and explained in more detail in the next link. Such torsional string motion probably isn't directly responsible for a lot of sound from the instrument, but it can still be very important because of the way it interacts with transverse motion. It can be incorporated in the simulation model with no difficulty, although at some cost in complication [6]. Figure 7. Sketch to show how the friction force from the bow (red arrow) can produce both transverse and torsional motion of the string (blue arrows). To see an important example of this interaction, think what happens when the string is sticking to the bow. If we only consider transverse motion, as in the discussion up to now, that means that the string just under the bow must be moving at the speed of the bow. But when torsional motion is also allowed, the string can roll on the sticking bow. This means, for example, that Schelleng ripples which were previously "trapped" on the finger side of the bow (see Fig. 13 of section 9.2) can now "leak" past the sticking bow and show up in the bridge force: see the previous link for more detail. There is just one context where torsional string vibration is directly relevant to a violinist. Some violins are prone to a phenomenon called the "E string whistle". Occasionally, when playing the open E string, a high-pitched sound is obtained at a pitch unrelated to the expected note at 660 Hz: it occurs at a frequency more like 5 kHz. This is the expected fundamental frequency of the first torsional mode of an E string, and Bruce Stough [7] has convincingly demonstrated that the whistle is indeed caused by torsional motion. Most violin strings have rather high damping of torsional modes, and the player's finger will contribute further damping in a stopped note. But the E string of a violin is usually a steel monofilament, with no over-wrapping. So for an open E string, neither of these factors is present, and torsional modes can have very low damping. This allows torsional vibration to be directly excited by bowing. Some string manufacturers now offer E strings with a layer of over-wrapping, specifically to add torsional damping in order to suppress the whistle. There is one more layer of potential complications in a realistic simulation model, associated with the details of a conventional violin or cello bow. The ribbon of bow hair has a finite width in contact with the string, rather than the single-point contact we have been assuming. 
Both the hair and the stick of a bow have vibration behaviour of their own, and those might influence the behaviour of the bowed string. But we can duck all those issues for the moment: the Galluzzo experiment deliberately used a rosin-coated rod in place of a conventional bow, and we will first try to match those results. In section 9.7 we will have a careful look at how things might change with a real bow. For the particular cello string used in the Galluzzo measurements, we have a fairly complete set of measured properties relating to its transverse and torsional vibration behaviour. This means that we can formulate a computer simulation based on reliable, calibrated values of all the relevant parameters [5,6]. We can also incorporate a reasonably realistic model of the cello body, by extending the approach used for the wolf note in section 9.4 to include more body resonances. Putting all this together, we can assemble a simulation model that includes a good representation of the behaviour of the cello and its string. We can "bow" this simulated string with the same friction curve we have used before, derived from the steady-sliding measurements and shown in Fig. 6 of section 9.2. We can then run a set of Guettler transients, with the same set of bow forces and accelerations as the measured set we saw in Fig. 5. We can process the result using the same automatic classification routine that we used for the experiments. A typical result, compared with the corresponding measurement, is shown in Fig. 8, and it is very disappointing! No aspect of the simulated pattern gives a convincing match to the measurement. This simulated string would be far harder to play than the real cello: there are fewer coloured pixels, and many of them are scattered around in a speckly pattern rather than coalescing to give broad areas of bright yellow such as we see in the measurement. The only continuous patch of pale-coloured pixels is a narrow wedge running diagonally across the plot. This wedge is indeed reminiscent of Guettler's prediction, but we see no trace of a directly corresponding feature in the measurement. We will explore some of the reasons for this in the subsequent discussion. Figure 8. Guettler diagrams for the case with $\beta=0.0899$: (left) measured, as in Fig. 5; (right) simulated with the model described in the text. To see a little more detail of how the simulation model behaves, Fig. 9 shows the results of some model variations. The top row shows results without including the effect of the string's bending stiffness, while the bottom row has it included. The left-hand column omits the effect of torsional motion in the string, while the right-hand column includes it. So the top left plot is without bending stiffness or torsion, while the bottom right plot includes both, and is the case shown in Fig. 8. Figure 9. Variations in simulated Guettler diagrams. All four cases use the same model and post-processing routine, but the top row omits the effect of string bending stiffness, while the left-hand column omits the effect of torsional motion of the string. The bottom right plot, including both effects, is the same as the right-hand plot in Fig. 8. To glimpse what lies behind these results, Fig. 10 shows a set of simulated waveforms for one particular pixel of these Guettler plots. Counting from the bottom left-hand corner, it is the 9th pixel in the horizontal direction corresponding to acceleration 1.39 m/s$^2$, and the 6th pixel in the vertical direction corresponding to force 1.14 N. 
This is a bright yellow pixel in the bottom right-hand Guettler plot of Fig. 9 and a slightly darker colour in the upper right-hand plot. These correspond to the waveforms plotted in black and red respectively. It appears as a black pixel in the two left-hand diagrams, and we can see why in the two waveforms plotted in blue and green. The blue curve, corresponding to the case with neither torsion nor bending stiffness, settles into regular double-slipping motion rather than Helmholtz motion. The green curve, with bending stiffness but no torsion, does something similar, but the waveform shows more persistent irregularity from cycle to cycle. Figure 10. Simulated waveforms corresponding to pixel (9,6) of the four Guettler diagrams shown in Fig. 9. This pixel has bow acceleration 1.39 m/s$^2$ and bow force 1.14 N. The blue curve omits both torsion and bending stiffness. The red curve has torsion but no bending stiffness. The green curve has bending stiffness but no torsion. The black curve has both effects included, so that it should be the most realistic of the four. Figure 9 tells us several interesting things. First, none of the plots look anything like the measured case. They all look too "speckly" for comfort: even for cases where a particular gesture gave a satisfactory transient leading quickly to Helmholtz motion, there are often neighbouring pixels that are either black, or at least dark red. The interpretation is that if the player tried to do the same gesture twice, they are likely to get very different results because of inevitable tiny differences between the two gestures. This "twitchy" behaviour should sound familiar, if you remember the discussion of chaotic systems in Chapter 8. It looks very much as if all four cases explored in Fig. 9 are exhibiting "sensitive dependence on initial conditions", one of the hallmarks of a chaotic system. The measured Guettler diagram also has some evidence of "twitchiness", with occasional black pixels in the middle of the bright yellow patch, but the effect seems far less extreme. Indeed, sensitive dependence in the measured responses can be demonstrated directly. Paul Galluzzo made repeat measurements of the Guettler plane under nominally identical conditions, and 9 examples of these are illustrated in Fig. 11. Although the qualitative picture remains the same, individual pixels change between takes. Figure 11. Nine repeat scans of the Guettler plane using the Galluzzo experimental rig, illustrating a degree of sensitivity in the response of the real bowed string to small variations. Image copyright Paul Galluzzo, reproduced by permission. We can hazard a guess about where the twitchiness is coming from in the simulated results. When we looked at the phenomenon of chaos in section 8.4, we found that sensitive dependence was intimately tied up with the presence of saddle points in the phase space: two trajectories that are initially very close together can approach a saddle point, and be sprayed apart so that they end up in very different parts of the phase space. Well, our bowed-string model contains a mechanism that could do a similar job, separating two initially similar transients so that they diverge. Figure 12 shows a copy of Fig. 8 from section 9.2, and it reminds us of the abrupt jumps that are inevitably generated by using this friction curve. We can imagine two transients which start out similar, and then come close to one of the critical points at which a jump is triggered. 
If one transient falls just short of the critical point but the other one passes it, one will have a jump and the other will not, and their subsequent development might be quite different. This idea points us towards the second major flaw in the early computer models: in the next section we will have a critical look at this "friction curve model" and discover that the actual frictional behaviour of a rosin-coated violin bow does not follow any single friction–velocity curve: it is all much more complicated! Figure 12. A copy of Fig. 8 from section 9.2, reminding us that this friction model involves sudden jumps in the force and velocity predictions. This may be the origin of the "twitchiness" seen in the simulation results of Fig. 8. Before that, we should note something else about Fig. 9. At least for these particular cases, we see that both bending stiffness and string torsion have a significant effect on the results: including torsion makes things better (more coloured pixels), while including bending stiffness makes things worse. The extent of these differences is perhaps another manifestation of "chaotic twitchiness": as well as sensitive dependence on details of the bowing transient, we are also seeing sensitive dependence on changes to modelling details, by including or excluding various effects. [1] J. Woodhouse; "On the playability of violins, Part II minimum bow force and transients"; Acustica 78, 137–153 (1993). [2] R. T. Schumacher and J. Woodhouse, "The transient behaviour of models of bowed-string motion"; Chaos 3, 509–523 (1995). [3] J. C. Schelleng; "The bowed string and the player", Journal of the Acoustical Society of America 53, 26–41 (1973). [4] Knut Guettler, "On the creation of the Helmholtz motion in bowed strings"; Acta Acustica united with Acustica, 88, 970–985 (2002). [5] Hossein Mansour, Jim Woodhouse and Gary P. Scavone, "Enhanced wave-based modelling of musical strings, Part 1 Plucked strings"; Acta Acustica united with Acustica, 102, 1082–1093 (2016). [6] Hossein Mansour, Jim Woodhouse and Gary P. Scavone, "Enhanced wave-based modelling of musical strings, Part 2 Bowed strings"; Acta Acustica united with Acustica, 102, 1094–1107 (2016). [7] Bruce Stough, "E string whistles"; Catgut Acoustical Society Journal (Series II), 3, 7, 28—33 (1999).
CommonCrawl
Terms in this set (25)
Electromagnetic Radiation: Electromagnetic radiation (EM radiation or EMR) is the radiant energy released by certain electromagnetic processes. Visible light is one type of electromagnetic radiation; other familiar forms are invisible to the human eye, such as radio waves, infrared light and X-rays.
Photon: a particle representing a quantum of light or other electromagnetic radiation. A photon carries energy proportional to the radiation frequency but has zero rest mass.
Radiation Pressure: Radiation pressure is defined as the force per unit area exerted by electromagnetic radiation; it corresponds to the rate at which the radiation delivers momentum p per unit area, and for radiation that is absorbed it equals the energy flux divided by the speed of light c.
Spectroscopy: the branch of science concerned with the investigation and measurement of spectra produced when matter interacts with or emits electromagnetic radiation.
Continuous Spectrum: an emission spectrum that consists of a continuum of wavelengths.
Spectroscope: an apparatus for producing and recording spectra for examination.
Dark-line Absorption Spectrum: an electromagnetic spectrum in which a decrease in intensity of radiation at specific wavelengths or ranges of wavelengths characteristic of an absorbing substance (as chlorophyll) is manifested especially as a pattern of dark lines or bands.
Bright-line Emission Spectrum: the continuous spectrum or pattern of bright lines or bands seen when the electromagnetic radiation emitted by a substance is passed into a spectrometer. The spectrum is characteristic of the emitting substance and the type of excitation to which it is subjected.
Doppler Effect: an increase (or decrease) in the frequency of sound, light, or other waves as the source and observer move toward (or away from) each other. The effect causes the sudden change in pitch noticeable in a passing siren, as well as the redshift seen by astronomers.
Refracting Telescopes: a telescope that uses a converging lens to collect light.
Chromatic Aberration: the material effect produced by the refraction of different wavelengths of electromagnetic radiation through slightly different angles, resulting in a failure to focus. It causes colored fringes in the images produced by uncorrected lenses.
Reflecting Telescope: a telescope in which a mirror is used to collect and focus light.
Radio Telescope: an instrument used to detect radio emissions from the sky, whether from natural celestial objects or from artificial satellites.
Radio Interferometer: any of several different types of instrumentation designed to observe interference patterns of electromagnetic radiation at radio wavelengths, used in the discovery and measurement of radio sources.
Photosphere: the luminous envelope of a star from which its light and heat radiate.
Granule: a small compact particle of a substance.
Chromosphere: a reddish gaseous layer immediately above the photosphere of the sun or another star. Together with the corona, it constitutes the star's outer atmosphere.
Corona: the rarefied gaseous envelope of the sun and other stars. The sun's corona is normally visible only during a total solar eclipse, when it is seen as an irregularly shaped pearly glow surrounding the darkened disk of the moon.
Solar Wind: the continuous flow of charged particles from the sun that permeates the solar system.
Sunspot: a spot or patch appearing from time to time on the sun's surface, appearing dark by contrast with its surroundings.
Prominences: the fact or condition of standing out from something by physically projecting or being particularly noticeable.
Solar Flare: a brief eruption of intense high-energy radiation from the sun's surface, associated with sunspots and causing electromagnetic disturbances on the earth, as with radio frequency communications and power line transmissions.
Auroras: a natural electrical phenomenon characterized by the appearance of streamers of reddish or greenish light in the sky, usually near the northern or southern magnetic pole.
Nuclear Fusion: a nuclear reaction in which atomic nuclei of low atomic number fuse to form a heavier nucleus with the release of energy.
Proton-proton Chain: one of the two (known) sets of fusion reactions by which stars convert hydrogen to helium; it dominates in stars the size of the Sun or smaller.
Verified questions
It is possible to derive the age of the universe given the value of the Hubble constant and the distance to a galaxy, again with the assumption that the value of the Hubble constant has not changed since the Big Bang. Consider a galaxy at a distance of 400 million light-years receding from us at a velocity, v. If the Hubble constant is 20 km/s per million light-years, what is its velocity? How long ago was that galaxy right next door to our own Galaxy if it has always been receding at its present rate? Express your answer in years. Since the universe began when all galaxies were very close together, this number is a rough estimate for the age of the universe.
Two stars are in a visual binary star system that we see face on. One star is very massive whereas the other is much less massive. Assuming circular orbits, describe their relative orbits in terms of orbit size, period, and orbital velocity.
Below is a table of four stars along with their apparent and absolute magnitudes. Use this table to answer the following question.
$$
\begin{matrix}
\text{ } & \text{Apparent Magnitude} & \text{Absolute Magnitude} & \text{Distance}\\
\text{Star A:} & \text{0} & \text{0} & \text{ }\\
\text{Star B:} & \text{0} & \text{2} & \text{ }\\
\text{Star C:} & \text{5} & \text{4} & \text{ }\\
\text{Star D:} & \text{4} & \text{4} & \text{ }\\
\end{matrix}
$$
Which object is more luminous: Star C, Star D, or neither? Explain your reasoning.
When it is noon for the observer, which constellation will be behind the Sun?
CommonCrawl
Improvement of FPPR method to solve ECDLP
Yun-Ju Huang1, Christophe Petit2, Naoyuki Shinohara3 & Tsuyoshi Takagi4,5,6
Solving the elliptic curve discrete logarithm problem (ECDLP) by using Gröbner bases has recently appeared as a new threat to the security of elliptic curve cryptography and pairing-based cryptosystems. At Eurocrypt 2012, Faugère, Perret, Petit and Renault proposed a new method (FPPR method) using a multivariable polynomial system to solve ECDLP over finite fields of characteristic 2. At Asiacrypt 2012, Petit and Quisquater showed that this method may beat generic algorithms for extension degrees larger than about 2000. In this paper, we propose a variant of FPPR method that practically reduces the computation time and memory required. Our variant is based on the idea of symmetrization. This idea already provided practical improvements in several previous works for composite-degree extension fields, but its application to prime-degree extension fields has been more challenging. To exploit symmetries in an efficient way in that case, we specialize the definition of factor basis used in FPPR method to replace the original polynomial system by a new and simpler one. We provide theoretical and experimental evidence that our method is faster and requires less memory than FPPR method when the extension degree is large enough.
1 Introduction
In the last two decades, elliptic curves have become increasingly important. In 2009, the American National Security Agency (NSA) advocated the use of elliptic curves for public key cryptography [14]; these schemes rely on the hardness of the elliptic curve discrete logarithm problem (ECDLP) or of other hard problems defined on elliptic curves. Elliptic curves used in practice are defined either over a prime field \(\mathbb {F}_{p}\) or over a binary field \(\mathbb {F}_{2^{n}}\). Like any other discrete logarithm problem, ECDLP can be solved with generic algorithms such as the Baby-step Giant-step algorithm, Pollard's ρ method and their variants [1,16,17,19]. These algorithms can be parallelized very efficiently, but the parallel versions still have an exponential complexity in the size of the parameters. Better algorithms based on the index calculus framework have long been known for discrete logarithm problems over multiplicative groups of finite fields or hyperelliptic curves, but generic algorithms have remained the best algorithms for solving ECDLP until recently.
A key step of an index calculus algorithm for solving ECDLP is to solve the point decomposition problem, namely to write a given point as a sum of points whose x-coordinates lie in a chosen subset V of the base field K. In 2004, Semaev introduced the summation polynomials (also known as Semaev's polynomials) to solve this problem. Solving Semaev's polynomials is not a trivial task in general, in particular if K is a prime field. For extension fields \(K=\mathbb {F}_{q^{n}}\), Gaudry and Diem [2,9] independently proposed to define V as the subfield \(\mathbb {F}_{q}\) and to apply a Weil descent to further reduce the resolution of Semaev's polynomials to the resolution of a polynomial system of equations over \(\mathbb {F}_{q}\). Diem generalized these ideas by defining V as a vector subspace of \(\mathbb {F}_{q^{n}}\) [3]. Using generic complexity bounds on the resolution of polynomial systems, these authors provided attacks that can beat generic algorithms and can even have subexponential complexity for specific families of curves [2].
At Eurocrypt 2012, Faugère, Perret, Petit and Renault re-analyzed Diem's attack [3] in the case \(\mathbb {F}_{2^{n}}\) (denoted as the FPPR method in this work), and showed that the systems arising from the Weil descent on Semaev's polynomials are much easier to solve than generic systems [7]. Later at Asiacrypt 2012, Petit and Quisquater provided heuristic evidence that ECDLP is subexponential for that very important family of curves, and would beat generic algorithms when n is larger than about 2000 [15]. In 2013, Shantz and Teske provided further experimental results using the so-called "delta method" with a smaller factor base to solve the FPPR system [7,20].
Even though these recent results suggest that ECDLP is weaker than previously expected for binary curves, the attacks are still far from being practical. This is mainly due to the large memory and time required to solve the polynomial systems arising from the Weil descent in practice. In particular, the experimental results presented in [15] for prime n were limited to n=17. In order to validate the heuristic assumptions taken in Petit and Quisquater's analysis and to estimate the exact security level of binary elliptic curves in practice, experiments on larger parameters are definitely required.
In this paper, we focus on Diem's version of index calculus for ECDLP over a binary field of prime extension degree n [3,7,15]. In that case, the Weil descent is performed on a vector space that is not a subfield of \(\mathbb {F}_{2^{n}}\), and the resulting polynomial system cannot be re-written in terms of symmetric variables only. We therefore introduce a different method to take advantage of symmetries even in the prime degree extension case. While Shantz and Teske use the same multivariate system as the FPPR method [7,20], in this work we re-write the system with both symmetric and non-symmetric variables. The total number of variables is increased compared to [7,15], but we limit this increase as much as possible thanks to an appropriate choice of the vector space V. On the other hand, the use of symmetric variables in our system allows reducing the degrees of the equations significantly. Our experimental results show that our systems can be solved faster than the original systems of [7,15] as long as n is large enough.
Notations. In this work, we are interested in solving the elliptic curve discrete logarithm problem on a curve E defined over a finite field \(\mathbb {F}_{2^{n}}\), where n is a prime number. We denote by $E_{\alpha,\beta}$ the elliptic curve over \(\mathbb {F}_{2^{n}}\) defined by the equation $y^2+xy=x^3+\alpha x^2+\beta$. For a given point $P\in E$, we use $x(P)$ and $y(P)$ to indicate the x-coordinate and y-coordinate of P respectively. From now on, we use the specific symbols P, Q and k for the parameters and solution of the ECDLP: $P\in E$, $Q\in\langle P\rangle$, and k is the smallest non-negative integer such that $Q=[k]P$. We assume that the order of $\langle P\rangle$ is prime here. We identify the field \(\mathbb {F}_{2^{n}}\) with \(\mathbb {F}_{2}[\omega ] / h(\omega)\), where h is an irreducible polynomial of degree n. Any element \(e \in \mathbb {F}_{2^{n}}\) can then be represented as $poly(e):=c_0+c_1\omega+\ldots+c_{n-1}\omega^{n-1}$ where \(c_{i} \in \mathbb {F}_{2}\). For any set S, we use the symbol $\#S$ to denote the order of S. We denote the degree of regularity by $D_{reg}$, which is the maximum degree appearing when solving the multivariate polynomial system with a Gröbner basis routine.
Outline. The remainder of this paper is organized as follows.
In Section 2, we recall previous index calculus algorithms for ECDLP, in particular the FPPR attack on binary elliptic curves and previous work exploiting the symmetry of Semaev's polynomials when the extension degree is composite. In Section 3, we describe our variant of the FPPR method taking advantage of the symmetries even when the extension degree is prime. In Section 4, we provide experimental results comparing our method with the original FPPR attack. Finally in Section 5, we conclude the paper and we introduce further work.
Remark. This is a full version of the paper [10] published at the 8th International Workshop on Security (IWSEC 2013), held at Okinawa, Japan.
2 Index calculus for elliptic curves
2.1 The index calculus method
For a given point $P\in E_{\alpha,\beta}$, let Q be a point in $\langle P\rangle$. The index calculus method can be adapted to elliptic curves to compute the discrete logarithm of Q with respect to P. As shown in Algorithm 1, we first select a factor base $F\subset E_{\alpha,\beta}$ and we perform a relation search expressed as the loop between lines 3 and 7 of Algorithm 1. This part is currently the efficiency bottleneck of the algorithm. For each step in the loop, we compute $R:=[a]P+[b]Q$ for random integers a and b and we apply the Decompose function on R to find all tuples $(P_{j_1},P_{j_2},\ldots,P_{j_m})$ of m elements $P_{j_{\ell}}\in F$ such that $P_{j_1}+P_{j_2}+\cdots+P_{j_m}+R=O$. Note that we may obtain several decompositions for each point R. In line 6, the AddRelationToMatrix function encodes every decomposition of a point R into a row vector of the matrix M. More precisely, the first $\#F$ columns of M correspond to the elements of F, the last two columns correspond to P and Q, and the coefficients corresponding to these points are encoded in the matrix. In line 8, the ReducedRowEchelonForm function reduces M into a row echelon form. When the rank of M reaches $\#F+1$, the last row of the reduced M is of the form $(0,\cdots,0,a',b')$, which implies that $[a']P+[b']Q=O$. From this relation, we obtain $k=-a'/b' \bmod \#\langle P\rangle$. A straightforward method to implement the Decompose function would be to exhaustively compute the sums of all m-tuples of points in F and to compare these sums to R. However, this method would not be efficient enough.
2.2 Semaev's polynomials
Semaev's polynomials [18] allow replacing the complicated addition law involved in the point decomposition problem by a somewhat simpler polynomial equation over \(\mathbb {F}_{2^{n}}\).
Definition 1. The m-th Semaev's polynomial $s_m$ for $E_{\alpha,\beta}$ is defined as follows: $s_2:=x_1+x_2$, $s_3:=(x_1x_2+x_1x_3+x_2x_3)^2+x_1x_2x_3+\beta$, and $s_m:=\mathrm{Res}_X\big(s_{j+1}(x_1,\ldots,x_j,X),\ s_{m-j+1}(x_{j+1},\ldots,x_m,X)\big)$ for $m\geq 4$, $2\leq j\leq m-2$.
The polynomial $s_m$ is symmetric and has degree $2^{m-2}$ with respect to each variable. Definition 1 provides a straightforward method to compute it. In practice, computing large Semaev's polynomials may not be a trivial task, even if the symmetry of the polynomials can be used to accelerate it [12]. Semaev's polynomials have the following property (Proposition 1): we have $s_m(x_1,x_2,\ldots,x_m)=0$ if and only if there exist \(y_{j}\in \mathbb {F}_{2^{n}}\) such that $P_j=(x_j,y_j)\in E_{\alpha,\beta}$ and $P_1+P_2+\ldots+P_m=O$.
In his seminal paper [18], Semaev proposed to choose the factor base F in Algorithm 1 as $$F_{V}:=\{(x,y)\in E_{\alpha,\beta} \mid x\in V\} $$ where V is some subset of the base field of the curve.
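To make the recursive definition above concrete, here is a small SymPy sketch (our illustration, not code from the paper) that builds $s_4$ from two copies of $s_3$ by taking a resultant in the auxiliary variable $X$. The curve coefficient $\beta$ is kept as a symbol, and integer coefficients are reduced modulo 2 after the resultant is computed; this is legitimate because the Sylvester determinant commutes with the reduction $\mathbb{Z}\to\mathbb{F}_2$ and the leading coefficients do not vanish.

```python
# Sketch: the resultant recursion for Semaev's polynomials in characteristic 2.
from sympy import symbols, resultant, Poly, expand, Mul

x1, x2, x3, x4, X, beta = symbols('x1 x2 x3 x4 X beta')
GENS = (x1, x2, x3, x4, X, beta)

def s3(a, b, c):
    # third Semaev polynomial for y^2 + x*y = x^3 + alpha*x^2 + beta,
    # with the square already expanded in characteristic 2
    return a**2*b**2 + a**2*c**2 + b**2*c**2 + a*b*c + beta

def mod2(expr):
    # keep only the monomials whose integer coefficient is odd
    p = Poly(expand(expr), *GENS)
    return sum(Mul(*[g**e for g, e in zip(GENS, monom)])
               for monom, coeff in p.terms() if coeff % 2)

# s_4(x1, x2, x3, x4) = Res_X( s_3(x1, x2, X), s_3(x3, x4, X) ) over F_2
s4 = mod2(resultant(s3(x1, x2, X), s3(x3, x4, X), X))
print(Poly(s4, x1).degree())   # expected: 4 = 2^(4-2), matching the degree bound above
```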
According to Proposition 1, finding a decomposition of a given point $R=[a]P+[b]Q$ is then reduced to first finding $x_i\in V$ such that $$s_{m+1}(x_{1}, x_{2},\ldots, x_{m}, x(R)) = 0, $$ and then finding the corresponding points $P_j=(x_j,y_j)\in F_V$. A straightforward Decompose function using Semaev's polynomials is described in Algorithm 2. In this algorithm, Semaev's polynomials are solved by a naive exhaustive search method. Since every x-coordinate corresponds to at most two points on the elliptic curve $E_{\alpha,\beta}$, each solution of $s_{m+1}(x_1,x_2,\ldots,x_m,x(R))=0$ may correspond to up to $2^m$ possible solutions in $E_{\alpha,\beta}$. These potential solutions are tested in line 5 of Algorithm 2. As such, Algorithm 2 still involves some exhaustive search and can clearly not solve ECDLP faster than generic algorithms.
2.3 FPPR method
At Eurocrypt 2012, following similar approaches by Gaudry [9] and Diem [2,3], the FPPR method provided V with the structure of a vector space, to reduce the resolution of Semaev's polynomial to a system of multivariate polynomial equations. They then solved this system using Gröbner basis algorithms [7]. More precisely, the FPPR method suggested to fix V as a random vector subspace of \(\mathbb {F}_{2^{n}}/\mathbb {F}_{2}\) with dimension n′. If \(\{v_{1},\ldots,v_{n^{\prime }}\}\) is a basis of this vector space, the resolution of Semaev's polynomial is then reduced to a polynomial system as follows. For any fixed $P'\in F_V$, we can write $x(P')$ as $$x(P')=\bar{c}_{1}v_{1} + \bar{c}_{2}v_{2} + \ldots + \bar{c}_{n'}v_{n'} $$ where \(\bar {c}_{\ell }\in \mathbb {F}_{2}\) are known elements. Similarly, we can write all the variables $x_j\in V$ in \(s_{m+1}\mid _{x_{m+1} = x(R)}\) as $$\left\{\begin{array}{ll} x_{j} = c_{j,1}v_{1} + c_{j,2}v_{2} + \ldots +c_{j,n'}v_{n'}, & 1 \le j \le m,\\ x_{m+1} = r_{1}v_{1} + r_{2}v_{2} + \ldots + r_{n}v_{n}, & \end{array}\right. $$ where $c_{j,\ell}$ are binary variables and \(r_{\ell }\in \mathbb {F}_{2}\) are known. Using these equations to substitute the variables $x_j$ in $s_{m+1}$, we obtain an equation $$s_{m+1} = f_{1}(c_{j,\ell})v_{1} + f_{2}(c_{j,\ell})v_{2} + \ldots +f_{n}(c_{j,\ell})v_{n}, $$ where $f_1,f_2,\ldots,f_n$ are polynomials in the binary variables $c_{j,\ell}$, $1\le j\le m$, $1\le \ell\le n'$. We have \(s_{m+1}\mid _{x_{m+1} = x(R)} = 0\) if and only if each binary coefficient polynomial $f_\ell$ is equal to 0. Solving Semaev's polynomial $s_{m+1}$ is now equivalent to solving the binary multivariable polynomial system $f_1=f_2=\ldots=f_n=0$ in the variables $c_{j,\ell}$, $1\le j\le m$, $1\le \ell\le n'$. The Decompose function using this system is described in Algorithm 3. We first substitute $x_{m+1}$ with $x(R)$ in $s_{m+1}$. The TransFromSemaevToBinary function transforms the equation \(s_{m+1}\mid _{x_{m+1} = x(R)}=0\) into the system $f_1,f_2,\ldots,f_n$ as described above. To solve this system, we compute its Gröbner basis with respect to a lexicographic ordering using an algorithm such as the F4 or F5 algorithm [4,5]. A Gröbner basis of the system we solve here always contains some univariate polynomial (the polynomial 1 when there is no solution) with lexicographic ordering, and the solutions of $f_1,f_2,\ldots,f_n$ can be obtained from the roots of this polynomial.
However, since it is much more efficient to compute a Gröbner basis for a graded-reverse lexicographic ordering than for a lexicographic ordering, a Gröbner basis of $f_1,f_2,\ldots,f_n$ is first computed for a graded-reverse lexicographic ordering and then transformed into a Gröbner basis for a lexicographic ordering using the FGLM algorithm [6]. After getting the solutions of $f_1,f_2,\ldots,f_n$, we find the corresponding solutions over $E_{\alpha,\beta}$. As before, this requires checking whether $P_1+P_2+\ldots+P_m+R=O$ for all the potential solutions in line 6 of Algorithm 3.
Although the FPPR approach provides a systematic way to solve Semaev's polynomials, their algorithm is still not practical. Petit and Quisquater estimated that the method could beat generic algorithms for extension degrees n larger than about 2000 [15]. This number is much larger than the parameter n=160 that is currently used in applications. In fact, the degrees of the equations in $f_1,f_2,\ldots,f_n$ grow quadratically with m, and the number of monomial terms in the equations is exponential in this degree. In practice, the sole computation of Semaev's polynomial $s_{m+1}$ seems to be a challenging task for m larger than 7. Because of the large computation costs (both in time and memory), no experimental result has been provided in [7] for n larger than 20. In this work, we provide a variant of the FPPR method that practically improves its complexity. Our method exploits the symmetry of Semaev's polynomials to reduce both the degree of the equations and the number of monomial terms appearing during the computation of a Gröbner basis of the system $f_1,f_2,\ldots,f_n$.
2.4 Use of symmetries in previous works
The symmetry of Semaev's polynomials has been exploited in previous works, but always for finite fields \(\mathbb {F}_{p^{n}}\) with composite extension degrees n. The approach was already described by Gaudry [9] as a means to accelerate the Gröbner basis computations. The symmetry of Semaev's polynomials has also been used by Joux and Vitse to establish new ECDLP records for composite extension degree fields [12,13]. Extra symmetries resulting from the existence of a rational 2-torsion point have also been exploited by Faugère et al. for twisted Edwards curves and twisted Jacobi curves [8]. In all these approaches, exploiting the symmetries of the system allows reducing the degrees of the equations and the number of monomials involved in the Gröbner basis computation, hence it reduces both the time and the memory costs.
To exploit the symmetry in ECDLP index calculus algorithms, we first rewrite Semaev's polynomial $s_{m+1}$ with the elementary symmetric polynomials.
Definition 2. Let $x_1,x_2,\ldots,x_m$ be m variables, then the elementary symmetric polynomials are defined as $$ \left\{ \begin{array}{l} \sigma_{1} := \sum_{1\le j_{1} \le m}{x_{j_{1}}} \\ \sigma_{2} := \sum_{1\le j_{1} < j_{2} \le m}{x_{j_{1}}x_{j_{2}}} \\ \sigma_{3} := \sum_{1\le j_{1} < j_{2} < j_{3} \le m}{x_{j_{1}}x_{j_{2}}x_{j_{3}}} \\ \hspace*{3em}\vdots \\ \sigma_{m} := \prod_{1\le j \le m}{x_{j}} \\ \end{array}\right. $$
Any symmetric polynomial can be written as an algebraic combination of these elementary symmetric polynomials. We denote the symmetrized version of Semaev's polynomial $s_m$ by \(s^{\prime }_{m}\). For example, for the curve $E_{\alpha,\beta}$ in characteristic 2, we have $$s_{3} = (x_{1}x_{2} + x_{1}x_{3} + x_{2}x_{3})^{2} + x_{1}x_{2}x_{3} + \beta, $$ where $x_3$ is supposed to be fixed to some $x(R)$.
The elementary symmetric polynomials are $$\left\{ \begin{array}{l} \sigma_{1} = x_{1} + x_{2}, \\ \sigma_{2} = x_{1}x_{2}. \\ \end{array}\right. $$ The symmetrized version of s 3 is therefore $$s_{3}' = (\sigma_{2}+\sigma_{1}x_{3})^{2} + \sigma_{2}x_{3} + \beta.$$ Since x 3 is fixed and the squaring is a linear operation over \(\mathbb {F}_{2}\), we see that symmetrization leads to a much simpler polynomial. Let us now assume that n is a composite number with a non-trivial factor n ′. In this case, we can fix the vector space V as the subfield \(\mathbb {F}_{p^{n^{\prime }}}\phantom {\dot {i}\!}\) of \(\mathbb {F}_{p^{n}}\). We note that all arithmetic operations are closed on the elements of V for this special choice. In particular, we have $$ if\kern1em {x}_i\in V\kern1em then\kern1em {\sigma}_i\in V\kern1em . $$ Let now \(\{v_{1}, v_{2}, \ldots, v_{n/n^{\prime }}\}\phantom {\dot {i}\!}\) be a basis of \(\mathbb {F}_{p^{n}} / \mathbb {F}_{p^{n'}}\phantom {\dot {i}\!}\). We can write $$\begin{array}{ll} \sigma_{j} = d_{j,0}\ for\ 1 \le j \le m, \\ x_{m+1} = r_{1}v_{1} + r_{2}v_{2} + \ldots + r_{n/n'}v_{n/n'},& \\ \end{array} $$ where \(r_{\ell }\in \mathbb {F}_{p^{n'}}\) are known and the variables d j,0 are defined over \(\mathbb {F}_{p^{n'}}\). These relations can be substituted in the equation \(s^{\prime }_{m+1}\mid _{x_{m+1} = x(R)}=0\) to obtain a system of n/n ′ equations in the m variables d j,0 only. Since the total degree and the degree of \(s^{\prime }_{m}\) with respect to each symmetric variable σ i are lower than those of s m with respect to all non-symmetric variables x i , the degrees of the equations in the resulting system are also lower and the system is easier to solve. As long as n/n ′≈m, the system has a reasonable chance to have a solution. Given a solution (σ 1,…,σ m ) for this system, we can recover all possible corresponding values for the variables x 1,…,x m (if there is any) by solving the system given in Definition 2, or equivalently by solving the symmetric polynomial equation $$ x^{m}+\sum_{i=1}^{m}\sigma_{i}x^{m-i}=x^{m}+\sigma_{1}x^{m-1}+\sigma_{2}x^{m-2}+\ldots+\sigma_{m}. $$ Note that the existence of a non-trivial factor of n and the special choice for V are crucial here. Indeed, they allow building a new system that only involves symmetric variables and that is significantly simpler to solve than the previous one. 3 Using symmetries with prime extension degrees When n is prime, the only subfield of \(\mathbb {F}_{2^{n}}\) is \(\mathbb {F}_{2}\), but choosing \(V=\mathbb {F}_{2}\) would imply to choose m=n, hence to work with Semaev's polynomial s n+1 which would not be practical when n is large. In Diem's and FPPR attacks [3,7], the set V is therefore a generic vector subspace of \(\mathbb {F}_{2^{n}}/\mathbb {F}_{2}\) with dimension n ′. In that case, Implication (2) does not hold, but we now show how to nevertheless take advantage of symmetries in Semaev's polynomials. 3.1 A new system with both symmetric and non-symmetric variables Let n be an arbitrary integer (possibly prime) and let V be a vector subspace of \(\mathbb {F}_{2^{n}}/\mathbb {F}_{2}\phantom {\dot {i}\!}\) with dimension n ′. Let \(\{v_{1},\ldots,v_{n^{\prime }}\}\phantom {\dot {i}\!}\) be a basis of V. We can write $$\left\{ \begin{array}{l} x_{j} = c_{j,1}v_{1} + c_{j,2}v_{2} + \ldots +c_{j,n'}v_{n'}, \ for\ 1\leq j\leq m\\ x_{m+1} = r_{1}v_{1} + r_{2}v_{2} + \ldots + r_{n}v_{n}, \\ \end{array}\right. 
$$ where c j,ℓ with 1≤j≤m and 1≤ℓ≤n ′ are variables but r ℓ , 1≤ℓ≤n are known elements in \(\mathbb {F}_{2}\). Like in the composite extension degree case, we can use the elementary symmetric polynomials to write Semaev's polynomial s m+1 as a polynomial \(s^{\prime }_{m+1}\) in the variables σ j only. However since V is not a field anymore, constraining x j in V does not constrain σ j in V anymore. Since \(\sigma _{j}\in \mathbb {F}_{2^{n}}\), we can however write $$\left\{ \begin{array}{l} \sigma_{1} = d_{1,1}v_{1} + d_{1,2}v_{2} +\ldots +d_{1,n}v_{n}, \\ \sigma_{2} = d_{2,1}v_{1} + d_{2,2}v_{2} + \ldots +d_{2,n}v_{n},\\ \hspace*{6em}\vdots \\ \sigma_{m} = d_{m,1}v_{1} + d_{m,2}v_{2} + \ldots +d_{m,n}v_{n}. \\ \end{array}\right. $$ where d j,ℓ with 1≤j≤m and 1≤ℓ≤n are binary variables. Using these equations, we can substitute σ j in \(s^{\prime }_{m+1}\) to obtain $$s'_{m+1} = f'_{1}v_{1} + f'_{2}v_{2} + \ldots +f'_{n}v_{n} $$ where \(f^{\prime }_{1}, f'_{2},\ldots, f'_{n}\) are polynomials in the binary variables d j,ℓ . Applying a Weil descent on the symmetrized Semaev's polynomial equation \(s^{\prime }_{m+1}=0\), we therefore obtain a polynomial system \(f^{\prime }_{1}=f'_{2}=\ldots =f'_{n}=0\) in the mn binary variables d j,ℓ . The variables d j,ℓ must also satisfy certain constraints provided by System (1). More precisely, substituting both the x j and the σ j variables for binary variables in the equation $$\sigma_{j}=\sum_{\substack{I\subset\{1,\ldots,m\}\\\#I=j}}\ \prod_{k\in I}x_{k}\, $$ we obtain $$\begin{array}{@{}rcl@{}} \lefteqn{d_{j,1}v_{1} + d_{j,2}v_{2}+ \ldots +d_{j,n}v_{n} = \sigma_{j}} \\ &=&\sum_{\substack{I\subset\{1,\ldots,m\}\\\#I=j}}\ \prod_{k\in I}\ \sum_{\ell=1}^{n'}c_{k,\ell}v_{\ell}\\ &=&g_{j,1}v_{1} + g_{j,2}v_{2} + \ldots +g_{j,n}v_{n} \end{array} $$ where g j,ℓ are polynomials in the m n ′ binary variables c j,ℓ only. In other words, applying a Weil descent on each equation of System (1), we obtain mn new equations $$d_{j,\ell}=g_{j,\ell} $$ in the m n+m n ′ binary variables c j,ℓ and d j,ℓ . The resulting system $$\left\{ \begin{array}{ll} f'_{\ell}=0, &1\leq \ell\leq n,\\ d_{j,\ell}=g_{j,\ell}, &1\leq j\leq m, 1\leq \ell\leq n, \end{array}\right. $$ has m n+n equations in m n+m n ′ binary variables. As before, the system is expected to have solutions if m n ′≈n, and it can then be solved using a Gröbner basis algorithm. In comparison with the FPPR [7], the number of variables is multiplied by a factor roughly (m+1). However, the degrees of our equations are also decreased thanks to the symmetrization, and this may decrease the degree of regularity of the system. In order to compare the time and memory complexities of both approaches, let D FPPR and D Ours be the degrees of regularity of the corresponding systems. The time and memory costs are respectively roughly #var\(^{2D_{reg}}\phantom {\dot {i}\!}\) and #var\(^{3D_{reg}}\phantom {\dot {i}\!}\). Assuming that neither D FPPR nor D Ours depends on n (as suggested by Petit and Quisquater's experiments [15]), that D Ours <D FPPR (thanks to the use of symmetric variables) and that m is small enough, then the extra (m+1) factors in the number of variables will be a small price to pay for large enough parameters. In practice, experiments are limited to very small n and m values. For these small parameters, we could not observe any significant advantage of this variant with respect to FPPR. However, the complexity can be improved even further in practice with a clever choice of vector space. 
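As a quick sanity check of the symmetrization step recalled in Section 2.4 (our own sketch, not part of the paper), one can verify with SymPy that substituting $\sigma_1=x_1+x_2$ and $\sigma_2=x_1x_2$ into $s'_3$ recovers $s_3$ exactly:

```python
from sympy import symbols, expand

x1, x2, x3, beta, sg1, sg2 = symbols('x1 x2 x3 beta sigma1 sigma2')

s3  = (x1*x2 + x1*x3 + x2*x3)**2 + x1*x2*x3 + beta   # Semaev's s_3
s3p = (sg2 + sg1*x3)**2 + sg2*x3 + beta               # symmetrized s_3'

# substituting the elementary symmetric polynomials recovers s_3
print(expand(s3p.subs({sg1: x1 + x2, sg2: x1*x2}) - s3))   # -> 0
```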
3.2 A special vector space
In the prime degree extension case, V cannot be a subfield, hence the symmetric variables $\sigma_j$ are not restricted to V. This led us to introduce mn variables $d_{j,\ell}$ instead of only m variables in the composite extension degree case. However, we point out that some vector spaces may be "closer to a subfield" than other ones. In particular, if V is generated by the basis \(\{1,\omega,\omega ^{2},\ldots,\omega ^{n^{\prime }-1}\},\) then we have $$\text{if } x_{j}\in V \text{ then } \sigma_{2}\in V', $$ where $V'\supset V$ is generated by the basis \(\{1,\omega,\omega ^{2},\ldots,\omega ^{2n^{\prime }-2}\}.\) More generally, we can write $$\left\{ \begin{array}{l} \sigma_{1} = d_{1,0} + d_{1,1}\omega + \ldots +d_{1,n^{\prime}-1}\omega^{n^{\prime}-1}, \\ \sigma_{2} = d_{2,0} + d_{2,1}\omega + \ldots +d_{2,2n^{\prime}-2}\omega^{2n^{\prime}-2},\\ \hspace*{3em}\vdots \\ \sigma_{m} = d_{m,0} + d_{m,1}\omega + \ldots +d_{m,n-m}\omega^{n-m}. \\ \end{array}\right. $$ Applying a Weil descent on \(s^{\prime }_{m+1}\mid _{x_{m+1}=x(R)}\) and each equation of System (1) as before, we obtain a new polynomial system $$\left\{ \begin{array}{ll} f'_{\ell}=0, &0\leq \ell\leq n-1,\\ d_{j,\ell}=g_{j,\ell}, &1\leq j\leq m, 0\leq \ell\leq j(n'-1), \end{array}\right. $$ in \(n+(n'-1)\frac {m(m+1)}{2} + m\) equations and \(n'm+ (n'-1)\frac {m(m+1)}{2} + m\) variables. When m is large and $mn'\approx n$, the number of variables is decreased by a factor 2 if we use our special choice of vector space instead of a random one. For m=4 and $n\approx 4n'$, the number of variables is reduced from about 5n to about 7n/2. For m=3 and $n\approx 3n'$, the number of variables is reduced from about 4n to about 3n thanks to our special choice for V. In practice, this improvement turns out to be significant.
Table 1 compares the different strategies used in the decomposition algorithm. Note that the degree of regularity is decreased from 7 to 4 when m=3 by rewriting $s_{m+1}$ as $s'_{m+1}$ using the symmetric functions. So far, it is difficult to estimate by how much the degree of regularity is reduced for values of m other than 3, since we do not have enough experimental results: the polynomial systems are large and our computational resources are limited. Our experimental results in Section 4 suggest that the heuristic $D_{Ours}<D_{FPPR}$ should hold for any m, as long as $s'_{m+1}$ has a simpler structure and smaller degree than $s_{m+1}$. The lack of data on the degree of regularity for m>3 makes it difficult to predict the degree of regularity as a function of m. This makes a complexity analysis following the steps of [15] impossible, even for a restricted model. If we build a model for a fixed m=3, then the algorithm behaves more like an exhaustive search than a subexponential algorithm. We leave the estimation of the degree of regularity as future work.
Table 1 Comparison of different multivariate polynomial systems by experimental results
3.3 New decomposition algorithm
Our new algorithm for the decomposition problem therefore uses a new multivariate polynomial system, obtained by adopting the symmetric functions and the special vector space V described above; we denote it by ThisWork. The only difference between FPPR and ThisWork comes from a different TransFromSemaevToBinary function in line 1 of Algorithm 3.
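For a rough feeling of the sizes involved, the following back-of-the-envelope helper (ours, not from the paper) evaluates the equation and variable counts quoted above for the three Boolean systems: the original FPPR descent, our variant with a generic vector space V, and our variant with the special basis $\{1,\omega,\ldots,\omega^{n'-1}\}$. The sample parameters are only illustrative.

```python
# Equation/variable counts for the three Weil-descent systems discussed above.
def system_sizes(m, n, nprime):
    fppr    = {'eqs': n,         'vars': m * nprime}
    generic = {'eqs': m * n + n, 'vars': m * n + m * nprime}
    tri = (nprime - 1) * m * (m + 1) // 2 + m          # shared triangular block
    special = {'eqs': n + tri,   'vars': nprime * m + tri}
    return {'FPPR': fppr, 'generic V': generic, 'special V': special}

for n, nprime in [(17, 6), (29, 10), (47, 16)]:         # illustrative parameters, m = 3
    print(n, nprime, system_sizes(3, n, nprime))
```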
Although the system solved in ThisWork contains more variables and equations than the system solved in FPPR, the degrees of the equations are smaller and they involve fewer monomial terms. We now describe our experimental results.
4 Experimental results
To validate our analysis and experimentally compare our method with FPPR, we implemented both algorithms in Magma. All our experiments were conducted on a machine with four AMD Opteron 6276 processors (16 cores each), running at 2.3 GHz with an L3 cache of 16 MB. The operating system was Linux Mint 14, and the machine had 512 GB of memory. The programming platform was Magma V2.18-9 in its 64-bit version. Gröbner bases were computed with the GroebnerBasis function of Magma. Our implementations of FPPR and ThisWork share the same program, except for the different TransFromSemaevToBinary function at line 1 of Algorithm 3. We first focus on the relation search, then we describe experimental results for a whole ECDLP computation.
4.1 Relation search
The relation search is the core of both FPPR and our variant. In our experiments, we considered a fixed randomly chosen curve $E_{\alpha,\beta}$, a fixed ECDLP with respect to P, and a fixed m=3 for all values of the parameters n and n′. For random integers a and b, we used both FPPR and ThisWork to find factor base elements $P_j\in F_V$ such that $P_1+\cdots+P_m=[a]P+[b]Q$. We focused on m=3 (fourth Semaev's polynomial) in our experiments. Indeed, there is no hope to solve ECDLP faster than with generic algorithms using m=2 because of the linear algebra stage at the end of the index calculus algorithm$^{a}$. On the other hand, the method appears impractical for m=4 even for very small values of n because of the exponential increase with m of the degrees in Semaev's polynomials. The experimental results are given in Tables 2 and 3. For most values of the parameters n and n′, the experiment was repeated 200 times and average values are presented in the table. For the largest value n′=6, the experiment was only repeated 3 times due to the long execution time.
Table 2 Comparison of the relation search with systems having solutions
Table 3 Comparison of the relation search with systems having no solution
We noticed that the time required to solve one system varied significantly depending on whether it had solutions or not. Tables 2 and 3 therefore present results for the two cases separately. Each table contains the following information: $D_{reg}$ is the degree of regularity; $t_{trans}$ and $t_{groe}$ are respectively the time (in seconds) needed to transform the polynomial $s_{m+1}$ into a binary system and to compute a Gröbner basis of this system; mem is the memory required by the experiment (in MB). The experiments show that the degrees of regularity of the systems occurring during the relation search are decreased from values between 6 and 7 in FPPR to values between 3 and 4 in our method. This is particularly important since the complexity of Gröbner basis algorithms is exponential in this degree. As noticed in Section 3, this huge advantage of our method comes at the cost of a significant increase in the number of variables, which itself tends to increase the complexity of Gröbner basis algorithms. However, while our method may require more memory and time for small parameters (n,n′), it becomes more efficient than FPPR when the parameters increase. We remark that although the time required to solve the system may be larger with our method than with the FPPR method for small parameters, the time required to build this system is always smaller.
This is due to the much simpler structure of \(s^{\prime }_{m+1}\) compared to $s_{m+1}$ (lower degrees and fewer monomial terms). Our method seems to work particularly well compared to FPPR when there is no solution for the system, which will happen most of the time when solving an ECDLP instance.
4.2 Whole ECDLP computation
As a next step, we also implemented the whole ECDLP algorithm with the two strategies FPPR and ThisWork. For the specified n, we ran the whole attack using m=3 and several values for n′. The orders of the curves we picked in our experiments are shown in Table 4 together with the experimental results for the best value of n′, which turned out to be 3 in all cases. Timings provided in the table are in seconds. Table 4 clearly shows that ThisWork is more efficient than FPPR.
Table 4 Comparison of ECDLP (m = 3, n' = 3)
It may look strange at first sight that n′=3 leads to optimal timings. Indeed, the ECDLP attacks described above use $mn'\approx n$, and a constant value for n′ leads to a method close to exhaustive search. However, this is consistent with the observation already made in [7,15] that exhaustive search is more efficient than index calculus for small parameters. Table 5 also shows that while increasing n′ increases the probability of having solutions, it also increases the complexity of the Gröbner basis algorithm. This increase turns out to be significant for small parameters.
Table 5 Trade-off for choosing m and n'
5 Conclusion and future work
In this paper, we proposed a variant of the FPPR attack on the binary elliptic curve discrete logarithm problem (ECDLP). Our variant takes advantage of the symmetry of Semaev's polynomials to compute relations more efficiently. While symmetries had also been exploited in similar ECDLP algorithms for curves defined over finite fields with composite extension degrees, our method is the first one in the case of extension fields with prime extension degrees, which is the most interesting case for applications. At Asiacrypt 2012, Petit and Quisquater estimated that the FPPR method would beat generic discrete logarithm algorithms for any extension degree larger than roughly 2000. We provided heuristic arguments and experimental data showing that our method reduces both the time and the memory required to compute a relation in FPPR, unless the parameters are very small. Our results therefore imply that Petit and Quisquater's bound can be lowered a little. Our work raises several interesting questions. On the theoretical side, it would be interesting to prove that the degrees of regularity of the systems appearing in the relation search will not rise when n increases. It would also be interesting to provide a more precise analysis of our method and to precisely estimate for which values of the parameters it becomes better than FPPR. On the practical side, it would be interesting to improve the resolution of the systems even further. One idea in that direction is to precompute the parts of the algorithm that do not change from one relation to the next, such as the transformation and the Gröbner basis of part of the system. In fact, even the resolution of the system could potentially be improved using special Gröbner basis algorithms such as the F4 trace algorithm [4,11]. Using Gröbner basis algorithms to solve ECDLP is a very recent idea. We expect that the index calculus algorithms that have recently appeared in the literature will be subject to further theoretical improvements and practical optimizations in the near future.
a In fact, even m=3 would require a double large prime variant of the index calculus algorithm described above in order to beat generic discrete logarithm algorithms [9]. Brent, R.P: An improved Monte Carlo factorization algorithm. BIT Numerical Mathematics. 20, 176–184 (1980). Diem, C.: An index calculus algorithm for plane curves of small degree. In: Hess, F., Pauli, S., Pohst, M.E (eds.) ANTS. Lecture Notes in Computer Science, vol 4076, pp. 543–557. Springer, New York (2006). Diem, C.: On the discrete logarithm problem in elliptic curves. Compositio Mathematica. 147, 75–104 (2011). Faugère, J.-C.: A new efficient algorithm for computing Gröbner bases (F4). Journal of Pure and Applied Algebra. 139(1-3), 61–88 (1999). Faugère, J.C: A new efficient algorithm for computing Gröbner bases without reduction to zero (F5). In: Proceedings of the 2002 International Symposium on Symbolic and Algebraic Computation. ISSAC '02, pp. 75–83. ACM, New York, NY, USA (2002). Faugère, J.C, Gianni, P., Lazard, D., Mora, T.: Efficient computation of zero-dimensional Gröbner bases by change of ordering. Journal of Symbolic Computation. 16(4), 329–344 (1993). Faugère, J.-C., Perret, L., Petit, C., Renault, G.: Improving the complexity of index calculus algorithms in elliptic curves over binary field. In: Proceedings of Eurocrypt 2012. Lecture Notes in Computer Science, vol 7237, pp. 27–44. Springer, London (2012). Faugère, J.-C., Gaudry, P., Huot, L., Renault, G.: Using symmetries in the index calculus for elliptic curves discrete logarithm. IACR Cryptology ePrint Archive. 2012, 199 (2012). MATH Google Scholar Gaudry, P.: Index calculus for abelian varieties of small dimension and the elliptic curve discrete logarithm problem. Journal of Symbolic Computation. 44(12), 1690–1702 (2009). Huang, Y.-J., Petit, C., Shinohara, N., Takagi, T.: Improvement of Faugère et al.'s method to solve ECDLP. In: Sakiyama, K., Terada, M. (eds.) IWSEC. Lecture Notes in Computer Science, vol 8231, pp. 115–132. Springer, New York (2013). Joux, A., Vitse, V.: A variant of the F4 algorithm. In: Kiayias, A. (ed.) CT-RSA. Lecture Notes in Computer Science, vol 6558, pp. 356–375. Springer, New York (2011). Joux, A., Vitse, V.: Elliptic Curve Discrete Logarithm Problem over Small Degree Extension Fields - Application to the Static Diffie-Hellman Problem on \(E(\mathbb {F}_{q^{5}})\). J. Cryptology. 26(1), 119–143 (2013). Joux, A., Vitse, V.: Cover and decomposition index calculus on elliptic curves made practical - application to a previously unreachable curve over \(\mathbb {F}_{p^{6}}\). In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT. Lecture Notes in Computer Science, vol 7237, pp. 9–26. Springer, New York (2012). National Security Agency: The Case for Elliptic Curve Cryptography (2009). https://www.nsa.gov/business/programs/elliptic_curve.shtml. Petit, C., Quisquater, J.-J.: On polynomial systems arising from a Weil descent. In: Wang, X., Sako, K. (eds.) Advances in Cryptology - ASIACRYPT 2012. Lecture Notes in Computer Science, vol 7658, pp. 451–466. Springer, New York (2012). Pollard, J.M: A Monte Carlo method for factorization. BIT Numerical Mathematics. 15(3), 331–334 (1975). Pollard, J.M: Kangaroos, monopoly and discrete logarithms. Journal of Cryptology. 13, 437–447 (2000). Semaev, I.: Summation polynomials and the discrete logarithm problem on elliptic curves. IACR Cryptology ePrint Archive. 2004, 31 (2004). Shanks, D.: Class number, a theory of factorization, and genera. In: 1969 Number Theory Institute (Proc. Sympos. 
Pure Math., Vol. XX, State Univ. New York, Stony Brook, N.Y., 1969), pp. 415–440, Providence, R.I. (1971). Shantz, M., Teske, E.: Solving the elliptic curve discrete logarithm problem using semaev polynomials, weil descent and gröbner basis methods - an experimental study. In: Fischlin, M., Katzenbeisser, S. (eds.) Number Theory and Cryptography. Lecture Notes in Computer Science, vol 8260, pp. 94–107. Springer, New York (2013). This research was done while the second author was an FRS-FNRS research collaborator at Université catholique de Louvain. Graduate School of Mathematics, Kyushu University, 744, Motooka, Nishi-ku, 819-0395, Fukuoka, Japan Yun-Ju Huang University College London, Gower Street, WC1E 6BT, London, United Kingdom Christophe Petit National Institute of Information and Communications Technology, 4-2-1, Nukui-Kitamachi, Koganei, 184-8795, Tokyo, Japan Naoyuki Shinohara Institute of Systems, Information Technologies and Nanotechnologies, Fukuoka SRP Center Building 7F, 2-1-22, Momochihama, Sawara-ku, 814-0001, Fukuoka, Japan Tsuyoshi Takagi CREST, Japan Science and Technology Agency, K's Gobancho 6F, 7, Gobancho, Chiyoda-ku, 102-0076, Tokyo, Japan Institute of Mathematics for Industry, Kyushu University, 744, 582 Motooka, Nishi-ku, 819-0395, Fukuoka, Japan Correspondence to Yun-Ju Huang. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. Huang, YJ., Petit, C., Shinohara, N. et al. Improvement of FPPR method to solve ECDLP. Pac. J. Math. Ind. 7, 1 (2015). https://doi.org/10.1186/s40736-015-0012-6 Elliptic curve Discrete logarithm problem Index calculus Multivariable polynomial system Gröbner basis
CommonCrawl
Holomorphic h-principle for compact manifolds

The Oka principle for Stein manifolds says (roughly) that the only obstructions for "things" are topological obstructions (for instance every smooth complex vector bundle admits a holomorphic structure, etc). Is there a similar principle (at least in some cases) for compact complex manifolds? Or at least some version of an h-principle for compact manifolds?

Tags: complex-geometry, h-principle. Asked by Vamsi; edited by Willie Wong.

Comment (Jason Starr): Hi Vamsi, As Johannes states, of course the answer is "no" in general. However, for fiber bundles of very special type, and up to replacing holomorphic section by meromorphic section, there are some such results in the compact case. The basic example is Tsen's theorem: a $\mathbb{P}^1$-bundle over a Riemann surface always has a holomorphic section.

Answer (Johannes Ebert): I don't think you get an $h$-principle for compact complex manifolds. Example: Given a complex line bundle $L \to M$, it admits a holomorphic structure iff the image of its Chern class in $H^2 (M;\mathcal{O})$ is zero. Similarly, the group of holomorphic line bundles which are topologically trivial is the cokernel of the homomorphism $H^1 (M; \mathbb{Z}) \to H^1 (M;\mathcal{O})$. Complex tori show that Oka's principle fails for compact complex manifolds. The Gromov-Phillips h-principle for closed manifolds is false as well; immersions are the only special case which applies to closed manifolds I am aware of. All other versions (e.g. submersions, symplectic structures, positively or negatively curved metrics) fail, and each of them fails in a fairly spectacular manner. There are some exceptions to the rule, but in general I would say that one needs noncompactness to push away all possible obstructions to infinity.
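Added note (not part of the original question or answers): both computations in the answer come from the long exact cohomology sequence of the exponential sheaf sequence, which may be worth recording here:

$$0\to\mathbb{Z}\to\mathcal{O}_M\xrightarrow{\ \exp\ }\mathcal{O}_M^*\to 0,\qquad
\cdots\to H^1(M;\mathbb{Z})\to H^1(M;\mathcal{O}_M)\to H^1(M;\mathcal{O}_M^*)\xrightarrow{\ c_1\ } H^2(M;\mathbb{Z})\to H^2(M;\mathcal{O}_M)\to\cdots$$

Since $H^1(M;\mathcal{O}_M^*)$ classifies holomorphic line bundles, a topological class $c\in H^2(M;\mathbb{Z})$ is realized by a holomorphic bundle exactly when its image in $H^2(M;\mathcal{O}_M)$ vanishes, and the topologically trivial holomorphic bundles form $H^1(M;\mathcal{O}_M)/\mathrm{im}\,H^1(M;\mathbb{Z})$, as stated above.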
CommonCrawl
CAT 2017 Shift-1 Question 70
A man leaves his home and walks at a speed of 12 km per hour, reaching the railway station 10 minutes after the train had departed. If instead he had walked at a speed of 15 km per hour, he would have reached the station 10 minutes before the train's departure. The distance (in km) from his home to the railway station is
Correct Answer: 20
Solution: We see that the man saves 20 minutes by changing his speed from 12 km/hr to 15 km/hr. Let d be the distance.
$$\frac{d}{12} - \frac{d}{15} = \frac{1}{3}$$
$$\frac{d}{60} = \frac{1}{3}$$
d = 20 km.
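A quick script check of the equation above (added for verification; not part of the original solution):

```python
from sympy import symbols, Rational, solve

d = symbols('d')
print(solve(d/12 - d/15 - Rational(1, 3), d))   # [20]
```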
CommonCrawl
Uniform weighted estimates on pre-fractal domains
Raffaela Capitanelli and Maria Agostina Vivaldi
Dipartimento di Scienze di Base e Applicate per l'Ingegneria, Università di Roma "Sapienza", Via A. Scarpa 16, 00161 Roma, Italy
September 2014, 19(7): 1969-1985. doi: 10.3934/dcdsb.2014.19.1969
Received April 2013 Revised January 2014 Published August 2014
We establish uniform estimates in weighted Sobolev spaces for the solutions of the Dirichlet problems on snowflake pre-fractal domains.
Keywords: boundary value problems for second-order elliptic equations, regularity, fractals.
Mathematics Subject Classification: Primary: 28A80; Secondary: 35J25, 35D3.
Citation: Raffaela Capitanelli, Maria Agostina Vivaldi. Uniform weighted estimates on pre-fractal domains. Discrete & Continuous Dynamical Systems - B, 2014, 19 (7) : 1969-1985. doi: 10.3934/dcdsb.2014.19.1969
CommonCrawl
Vector fields

1 What are vector fields?
2 Motion under forces: a discrete model
3 The algebra and geometry of vector fields
4 Summation along a curve: flow and work
5 Line integrals: work
6 Sums along closed curves reveal exactness
7 Path-independence of integrals
8 How a ball is spun by the stream
9 The Fundamental Theorem of Discrete Calculus of degree $2$
10 Green's Theorem: the Fundamental Theorem of Calculus for vector fields in dimension $2$

What are vector fields?

The first metaphor for a vector field is a hydraulic system.

Example. Suppose we have a system of pipes with water flowing through them. We model the process with a partition of the plane with its edges representing the pipes and nodes representing the junctions. Then a number is assigned to each edge representing the strength of the flow (in the direction of one of the axes). Such a system may look like this: Here the strength of the flow is shown as the thickness of the arrow. This is a real-valued $1$-form. Furthermore, there may be leakage. In addition to the amount of water that actually passes all the way through the pipe, we can record the amount that is lost. That's another real-valued $1$-form. If we assume that the direction of the leakage is perpendicular to the pipe, the two numbers can be combined into a vector. The result is a vector-valued $1$-form. $\square$

Warning: the two real-valued $1$-forms are re-constructed from the vector-valued $1$-form, not as its two components but as its projections on the corresponding edges.

The second metaphor for a vector field is a flow-through.

Example. The data from the last example can be used to illustrate a flow of liquid or another material from compartment to compartment through walls. A vector-valued $1$-form may look like this: The situation is reversed in comparison to the last example: the component perpendicular to the edge is the relevant one. This interpretation changes in dimension $3$, however: the component perpendicular to the face is the relevant one. $\square$

The third metaphor for a vector field is velocities of particles.

Example. Imagine little flags placed on the lawn; then their directions form a vector field, while the air flow that produced it remains invisible. Each flag shows the direction (if not the magnitude) of the velocity of the flow at that location. Such flags are also placed on a model airplane in a wind-tunnel. A similar idea is used to model a fluid flow. The dynamics of each particle is governed by the velocity of the flow, at each location, the same at every moment of time. In other words, the vector field supplies a direction to every location. How do we trace the path of a particle? Let's consider this vector field: $$V(x,y)=<y,-x>.$$ Even though the vector field is continuous, the path can be approximated by a parametric curve over a partition of an interval, as follows. At our current location and current time, we examine the vector field to find the velocity and then move accordingly to the next location. We start at this location: $$X_0=(0,2).$$ We substitute these two numbers into the equations: $$V(0,2)=<2,0>.$$ This is the direction we will follow. Our next location on the $xy$-plane is then: $$X_1=(0,2)+<2,0>=(2,2).$$ We again substitute these two numbers into $V$: $$V(2,2)=<2,-2>,$$ leading to the next step.
Our next location on the $xy$-plane is: $$X_2=(2,2)+<2,-2>=(4,0).$$ One more step: $X_2$ is substituted into $V$ and our next location is: $$X_3=(4,0)+<0,-4>=(4,-4).$$ The sequence is spiraling away from the origin. Let's now carry out this procedure with a spreadsheet (with a smaller time increment). The formulas for $x_n$ and $y_n$ are respectively: $$\texttt{=R[-1]C+R[-1]C[1]*R3C1}, \qquad \texttt{=R[-1]C-R[-1]C[-1]*R3C1}.$$ These are the results: In general, a vector field $V(x,y)=<f(x,y),\ g(x,y)>$ is used to create a system of two ordinary differential equations (ODEs): $$X'(t)=V(X(t))\quad\text{ or }\quad <x'(t),\ y'(t)>=V(x(t),\ y(t))\quad\text{ or }\quad \begin{cases} x'(t)&=f(x(t),\ y(t)),\\ y'(t)&=g(x(t),\ y(t)). \end{cases}$$ Its solution is a pair of functions $x=x(t)$ and $y=y(t)$ that satisfy the equations for every $t$. The equations mean that the vectors of the vector field are tangent to these trajectories. ODEs are discussed in Chapter 24. $\square$ The fourth metaphor for vector fields is a location-dependent force. Example (gravity). Recall from Chapter 21 that Newton's Law of Gravity states that the force of gravity between two objects is given by the formula: $$f(X) = G \frac{mM}{r^2},$$ where: $f$ is the magnitude of the force between the objects; $G$ is the gravitational constant; $m$ is the mass of the first object; $M$ is the mass of the second object; $r$ is the distance between the centers of the masses. Now, let's assume that the first object is located at the origin. Then the vector of location of the second object is $X$ and the force is a multiple of this vector. If $F(X)$ is the vector of the force at the location $X$, then: $$F(X)=-G mM\frac{X}{||X||^3}.$$ That's the vector form of the law! We plot the magnitude of the force as a function of two variables: And this is the resulting vector field: The motion is approximated in the manner described in the last example with the details provided in this chapter. $\square$ When the initial velocity of an object is zero, it will follow the direction of the force. For example, on object will fall directly on the surface of the Earth. This idea bridges the gap between velocity fields and force fields. Definition. A vector field is a function defined on a subset of ${\bf R}^n$ with values in ${\bf R}^n$. Warning: Though unnecessary mathematically, for the purposes of visualization and modeling we think of the input of vector fields as points and outputs as vectors. But what about the difference of a functions of several variables? It's not vector-valued! Some vector fields however might have the difference behind them: the projection $p$ of a vector field $V$ on a partition is a function defined at the secondary nodes of the partition as the dot product of the vectors with the corresponding oriented edges: $$p(C)=V(C)\cdot E,$$ where $C$ is the secondary node of the edge $E$. When the projection of $V$ is the difference of some function, we call $V$ gradient. When no secondary nodes are specified, the formula $p(E)=V(E)\cdot E$ makes a real-valued $1$-form from a vector-valued one. Motion under forces: a discrete model Suppose we know the forces affecting a moving object. How can we predict its dynamics? We simply generalize the $1$-dimensional analysis from Chapter 10 to the vector case. Assuming a fixed mass, the total force gives us our acceleration. We to compute: the velocity from the acceleration, and then the location from the velocity. 
A fixed time increment $\Delta t$ is supplied ahead of time even though it can also be variable. We start with the following three quantities that come from the setup of the motion: the initial time $t_0$, the initial velocity $V_0$, and the initial location $P_0$. They are placed in the consecutive cells of the first row of the spreadsheet: $$\begin{array}{c|c|c|c|c} &\text{iteration } n&\text{time }t_n&\text{acceleration }A_n&\text{velocity }V_n&\text{location }P_n\\ \hline \text{initial:}&0&3.5&--&<33,44>&<22,11>\\ \end{array}$$ As we progress in time and space, new numbers are placed in the next row of our spreadsheet. There is a set of columns for each vector, two or three depending on the dimension. Just as before, we rely on recursive formulas. The current acceleration $A_0$ given in the first cells of the second row. The current velocity $V_1$ is found and placed in the second pair (or triple) of cells of the second row of our spreadsheet: current velocity $=$ initial velocity $+$ current acceleration $\cdot$ time increment. The second quantity we use is the initial location $P_0$. The following is placed in the third set of cells of the second row: current location $=$ initial location $+$ current velocity $\cdot$ time increment. This dependence is shown below: $$\begin{array}{c|c|c|cccc} &\text{iteration } n&\text{time }t_n&\text{acceleration }A_n&&\text{velocity }V_n&&\text{location }P_n\\ \hline \text{initial:}&0&3.6&--&&<33,44>&&<22,11>\\ &&&& &\downarrow& &\downarrow\\ \text{current:}&1&t_1&<66,77>&\to&V_1&\to&P_1\\ \end{array}$$ We continue with the rest in the same manner. As we progress in time and space, numbers and vectors are supplied and placed in each of the four sets of columns of our spreadsheet one row at a time: $$t_n,\ A_n,\ V_n,\ P_n,\ n=1,2,3,...$$ The first quantity in each row we compute is the time: $$t_{n+1}=t_n+\Delta t.$$ The next is the acceleration $A_{n+1}$. Where does it come from? It may come as pure data: the column is filled with number ahead of time or it is being filled as we progress in time and space. Alternatively, there is an explicit, functional dependence of the acceleration (or the force) on the rest of the quantities. The acceleration may be a function of the following: 1. the current time, e.g., $A_{n+1}=<\sin t_{n+1},\ \cos t_{n+1}>$, such as when we speed up the car, or 2. the last location, such as when the gravity depends on the distance to the planet (below), or 3. the last velocity, e.g., $A_{n+1}=-V_n$ such as when the air resistance works in the opposite direction of the velocity, or all three. The $n$th iteration of the velocity $V_n$ is computed: current velocity $=$ last velocity $+$ current acceleration $\cdot$ time increment, $V_{n+1}=V_n+A_n\cdot \Delta t$. The values of the velocity are placed in the second set of columns of our spreadsheet. The $n$th iteration of the location $P_n$ is computed: current location $=$ last location $+$ current velocity $\cdot$ time increment, $P_{n+1}=P_n+V_n\cdot \Delta t$. The values of the location are placed in the third set of columns of our spreadsheet. 
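A minimal sketch of this recursive scheme in Python, rather than in a spreadsheet (our own illustration, not part of the text; the update order follows the verbal description above, and the data are those of the cannon example with $g\approx 32$ feet per second squared):

  import numpy as np

  def simulate(A, t0, V0, P0, dt, steps):
      # A(t, P, V) returns the acceleration vector at time t, location P, velocity V
      t, V, P = t0, np.array(V0, float), np.array(P0, float)
      history = [(t, V.copy(), P.copy())]
      for _ in range(steps):
          t = t + dt                  # current time = last time + time increment
          V = V + A(t, P, V) * dt     # current velocity = last velocity + acceleration * increment
          P = P + V * dt              # current location = last location + current velocity * increment
          history.append((t, V.copy(), P.copy()))
      return history

  # a cannonball: no horizontal force, constant vertical acceleration -g
  g = 32.0
  path = simulate(lambda t, P, V: np.array([0.0, -g]),
                  t0=0.0, V0=[200.0, 0.0], P0=[0.0, 200.0], dt=0.1, steps=50)

Each entry of the resulting list corresponds to one row of the spreadsheet described above.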
The result is a growing table of values: $$\begin{array}{c|c|c|c|c|c} &\text{iteration } n&\text{time }t_n&&\text{acceleration }A_n&\text{velocity }V_n&\text{location }P_n\\ \hline \text{initial:}&0&3.5&&--&<33,44>&<22,11>\\ &1&3.6&&<66,77>&<38.5,45.1>&<25.3,13.0>\\ &...&...&&...&...&...\\ &1000&103.5&&<666,777>&<4,1>&<336,200>\\ &...&...&&...&...&...\\ \end{array}$$ The result may be seen as four sequences $t_n,\ A_n,\ V_n,\ P_n$ or as the table of values of three vector-valued functions of $t$. Exercise. Implement a variable time increment: $\Delta t_{n+1}=t_{n+1}-t_n$. Example. A rolling ball is unaffected by horizontal forces. Therefore, $A_n=0$ for all $n$. The recursive formulas for the horizontal motion simplify as follows: the velocity $V_{n+1}=V_n+A_n\cdot \Delta t=V_n=V_0$ is constant; the position $P_{n+1}=P_n+V_n\cdot \Delta t=P_n+V_0\cdot \Delta t$ grows at equal increments. In other words, the position depends linearly on the time. $\square$ Example. A falling ball is unaffected by horizontal forces and the vertical force is constant: $A_n=A$ for all $n$. The first of the two recursive formulas for the vertical motion simplifies as follows: the velocity $V_{n+1}=V_n+A_n\cdot \Delta t=V_n+A\cdot \Delta t$ grows at equal increments; the position $P_{n+1}=P_n+V_n\cdot \Delta t$ grows at linearly increasing increments. In other words, the position depends quadratically on the time. $\square$ Example. A falling ball is unaffected by horizontal forces and the vertical force is constant: $$A_{n}=<0,-g>.$$ Now recall the setup considered previously: from a $200$ feet elevation, a cannon is fired horizontally at $200$ feet per second. The initial conditions are: the initial location, $P_0=<0,200>$; the initial velocity, $V_0=<200,0>$. Then we have recursive vector equations: $$V_{n+1}=V_n+<0,-g>\Delta t\ \text{ and }\ P_{n+1}=P_n+V_n\Delta t.$$ Implemented with a spreadsheet, the formulas produce these results: $\square$ Example. Let's apply what we have learned to planetary motion. The problem above about a ball thrown in the air has a solution: its trajectory is a parabola. However, we also know that if we throw really-really hard (like a rocket), the ball will start to orbit the Earth following an ellipse. The motion of two planets (or the sum and a planet, or a planet and a satellite, etc.) is governed by the Newton Law of Gravity. From this law, another law of motion can be derived. Consider the Kepler's Laws of Planetary Motion: 1. The orbit of a planet is an ellipse with the Sun at one of the two foci. 2. A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time. 3. The square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit. To confirm the law, we use the formulas above but this time the acceleration depends on the location, as follows: The resulting trajectory does seem to be an ellipse (confirmed by finding its foci): Note that the Second Kepler's Law implies that the motion is different from one provided by the standard parametrization of the ellipse. Our computation can produce other kinds of trajectories such as a hyperbola: Example. The Earth revolves around the Sun and the Moon revolves around the Earth. The result derived from such a generic description should look like the one on left. Now, let's use the actual data: (1) The average distance between the Earth and the Sun is $149.60$ million km. (2) The average distance between the Moon and the Earth is $385,000$ km. 
(3) The Moon orbits Earth one revolution in $27.323$ days. The paths are plotted on right. As you can see, not only the Moon never goes backwards but also its orbit is in fact convex! (By "convex orbit" we mean "convex region inside the orbit": any two points inside are connected by the segment that is also inside.) $\square$ Example. Below we have: a hypothetical star (orange) is orbited by a planet (blue) which is also orbited by its moon (purple). Now we vary the number of times per year the moon orbits the planet, from $20$ to $1/3$. The algebra and geometry of vector fields Vector fields appear in all dimensions. The idea is the same: there is a flow of liquid or gas and we record how fast a single particle at every location is moving. Example (dimension $1$). The flow is in a pipe. The same idea applies to a canal with the water that has the exact same velocity at all locations across it. Of course these are just numerical functions: This is just another way to visualize them. $\square$ Example (dimension $2$). Not every vector field of dimension $n>1$ is gradient and, therefore, some of them cannot be visualized as flows on a surface under nothing but gravity. A vector field of dimension $n=2$ is then seen as a flow on the plane: liquid in a pond or the air over a surface of the Earth. The metaphor applies under the assumption that the air or water has the exact same velocity at every locations regardless of the elevation. $\square$ Example (dimension $3$). This time, a vector field is thought of as a flow without any restrictions on the velocities of the particles. A model of stock prices as a flow will lead to $10,000$-dimensional vector field. This necessitates our use of vector notation. We also start thinking of the input, just as the output, to be vectors (of the same dimension). For example, the two "radial" vector fields in the last section have the same representation: $$V(X)=X.$$ An even simpler vector field is a constant: $$V(X)=V_0.$$ Each vector field is just a vector -- at a fixed location. Then it is just a location-dependent (but time-independent!) vector but still a vector. That is why all algebraic operation for vectors are applicable to vector fields. First, addition. Imagine that that we have a river -- with the velocities of the water particles represented by vector field $V$ -- and then wind starts -- with the velocities of the air particles represented by vector field $W$. One can argue that the resulting dynamics of water particles will be represented by the vector field $V+U$. Second, scalar multiplication. If the velocities of the water particles in a pipe are represented by vector field $V$ and we then double the pressure (i.e., pump twice as much water), we expect the new velocities to be represented by the vector field $2V$. Reversing the flow will be represented by the vector field $-V$. Furthermore, the scalar might also be location-dependent, i.e., we are multiplying our vector field in ${\bf R}^n$ by a (scalar) function of $n$ variables. Example. The computations with specific vector fields are carried out one location at a time and component at a time. Now geometry. What is the magnitude of a vector? As a function, it takes a vector as an input and produces a number as the output. It's just another function of $n$ variables. We can apply it to vector fields via composition: $$f(X)=||V(X)||.$$ The result is a function of $n$ variables that gives us the magnitude of the vector $V(X)$ at location $X$. 
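These operations can be expressed very directly if a vector field is represented as a function of the location; below is a small Python sketch of ours (the specific fields are just examples, not from the text):

  import numpy as np

  def radial(X):                      # the radial vector field V(X) = X
      return np.asarray(X, float)

  def constant(X):                    # a constant vector field V(X) = V_0
      return np.array([1.0, 2.0])

  def add(V, W):                      # (V + W)(X) = V(X) + W(X), location by location
      return lambda X: V(X) + W(X)

  def scale(c, V):                    # (cV)(X) = c * V(X)
      return lambda X: c * V(X)

  def magnitude(V):                   # f(X) = ||V(X)||, a function of n variables
      return lambda X: np.linalg.norm(V(X))

  F = add(radial, scale(-2.0, constant))
  print(F([3.0, 4.0]), magnitude(radial)([3.0, 4.0]))    # [1. 0.] 5.0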
The construction is exemplified by the "scalar" version of the Newton's Law of Gravity. Furthermore, we can use this function to modify vector fields in a special way: $$W(X)=\frac{V(X)}{||V(X)||}.$$ The result is a new vector fields with the exactly same directions of the vectors but with unit length. The domain of the new vector field might change as it is undefined at those $X$ where $V(X)=0$. This construction is called normalization. Example. The "accelerated outflow" presented in the first section is no longer accelerated after normalization: $$W(X)=\frac{X}{||X||}.$$ The speed is constant! The price we pay for making the vector field well-behaved is the appearance of a hole in the domain, $X\ne 0$. $\square$ Exercise. Show that the hole can't be repaired, in the following sense: there is no such vector $U$ that $||W(X)-U||\to 0$ as $X\to 0$ (i.e., this is a non-removable discontinuity). Exercise. What if we do the dot product of two vector fields? If can we rotate a vector, we can rotate vector fields $V$? In dimension $2$, the normal vector field of a vector field $V=<u,v>$ on the plane is given by $$V^\perp=<u,v>^\perp=<-v,u>.$$ We have then a special operation on vectors fields. For example, rotating a constant vector field is also constant. However, the normal of the rotation vector field is the radial vector field. Summation along a curve: flow and work Example. We look at this as a system of pipes with the numbers indicating the rate of the flow in each pipe (along the directions of the axes). What is the total flow along this "staircase"? We simply add the values located on these edges: $$W=1+0+0+2 +(-1)+1+(-2).$$ But these edges just happen to be positively oriented. What if we, instead, go around the first square? We have the following: $$W=1+0-2-0=-1.$$ Going against one of the oriented edges, makes us count the flow with the opposite sign. $\square$ Recall that an oriented edge $E_i$ of a partition in ${\bf R}^n$ is a vector that goes with or against the edge and any collection of such edges $C=\{E_i:\ i=0,1,...,n\}$ is seen as an oriented curve. Definition. Suppose $C$ is an oriented curve in ${\bf R}^n$ that consists of oriented edges $E_i,\ i=1,...,n$, of a partition in ${\bf R}^n$. If a function $G$ defined at the secondary nodes at the edges of the partition in ${\bf R}^n$ and, in particular, at the edges $\{Q_i\}$ of the curve, then the sum of $G$ along curve $C$ is defined and denoted to be the following: $$\sum_C G=\sum_{i=1}^n G(Q_i).$$ When the secondary nodes aren't specified, this sum is the sum of the real-valued $1$-form $G$. Unlike the arc-length, the sum depends on the direction of the trip. This dependence is however very simple: the sign is reversed when the direction is reversed. Theorem (Negativity). $$\sum_{-C} G =-\sum_C G .$$ Theorem (Linearity). For any two functions $F$ and $G$ defined at the secondary nodes at the edges of the partition in ${\bf R}^n$ and any two numbers $\lambda$ and $\mu$, we have: $$\sum_C(\lambda F+\mu G)=\lambda\sum_CF+\mu\sum_{C}G.$$ Theorem (Additivity). For any two oriented curves of edges $C$ and $K$ with no edges in common and that together form an oriented curve of edges $C\cup K$, we have: $$\sum_{C\cup K}F=\sum_CF+\sum_K F.$$ Let's examine another problem: the work of a force. Suppose a ball is thrown. This force is directed down, just as the movement of the ball. 
The work done on the ball by this force as it falls is equal to the (signed) magnitude of the force, i.e., the weight of the ball, multiplied by the (signed) distance to the ground, i.e., the displacement. All horizontal motion is ignored as unrelated to the gravity. Moving an object up from the ground the work performed by the gravitational force is negative. Of course, we are speaking of vectors. In the $1$-dimensional case, suppose that the force $F$ is constant and the displacement $D$ is along a straight line. Then the work $W$ is equal to their product: $$W=F\cdot D.$$ The force may vary however with location: spring, gravitation, air pressure. Example. In the case of an object attached to a spring, the force is proportional to the (signed) distance of the object to its equilibrium: $$F=-kx.$$ In summary, if a function $F$ on segment $[a,b]$ is called a force function then its Riemann integral $\int_a^bF\, dx$ is called the work of the force over interval $[a,b]$. Let's now proceed to the $n$-dimensional case but start with a constant force and linear motion... This time, the force and the displacement may be misaligned. In addition to motion "with the force" and "against the force", the third possibility emerges: what if we move perpendicular to the force? Then the work is zero. This is the case of horizontal motion under gravity force, which is constant close to the surface of the Earth. What if the direction of our path varies but only within the standard square grid on the plane? We realize that there is a force vector associated with each edge of our trip and possibly with every edge of the grid. However, only one of these vector components matters: the horizontal when the edge is horizontal and the vertical when the edge is vertical. It is then sufficient to assign this single number to each edge to indicate the force applied to this part of the trip. Example. As a familiar interpretation, we can look at this as a system of pipes with the numbers indicating the speed of the flow in each pipe (along the directions of the axes). If, for example, we are moving through a grid with $\Delta x\times \Delta y$ cells, the work along the "staircase" is $$W=1\cdot \Delta x+0\cdot \Delta y+0\cdot \Delta x+2\cdot \Delta y+(-1)\cdot \Delta x+1\cdot \Delta y+(-2)\cdot \Delta x.$$ When $\Delta x=\Delta y=1$, this is simply the sum of the values provided: $$W=1+0+0+2+(-1)+1+(-2)=1.$$ What if we, instead, go around the first square? Then $$W=1+0-2-0=-1.$$ Going against one of the oriented edges, makes us count the work with the opposite sign. In other words, the edge and the displacement are multiples of each other. $\square$ When the direction of the force isn't limited to the grid anymore, it can take, of course, one of the diagonal directions. In fact, there is a whole circle of possible directions. The vector of the force, too, can take all available directions. In order to find and discard the irrelevant part of the force $F$, we decompose it into parallel and normal components relative to the displacement: $$F=F_\perp+F_{||}.$$ The relevant ("collinear") component of the force $F$ is the projection on the displacement vector: $$F_{||}=||F||\cos \alpha,$$ where $\alpha$ is the angle of $F$ with $D$. Of course, we are taking about the dot product. The work of the force vector $F$ along the displacement vector $D$ is defined to be their dot product: $$W=F\cdot D.$$ The work is proportional to the magnitude of the force and to the magnitude of the displacement. 
It is also proportional to the projection of the former on the latter (the relevant part of the force) and the latter on the former (the relevant part of the displacement). It makes sense. In our interpretation of a vector field as a system of pipes, there is a vector associated with each pipe indicating the speed of the flow in the pipe (along the direction of the pipe) as well as the leakage (perpendicular to this direction). Then, the relevant part of the force is found as the (scalar) projection of the vector of the force on the vector of displacement. The difference is between real-valued and vector-valued $1$-forms. Thus, the work is represented as the dot product of the vector of the force and the vector of displacement. Definition. Suppose $C$ is an oriented curve in ${\bf R}^n$ that consists of oriented edges $E_i,\ i=1,...,n$, of a partition in ${\bf R}^n$. If a vector field $F$ is defined at the secondary nodes at the edges of the partition in ${\bf R}^n$ and, in particular, at the edges $\{Q_i\}$ of the curve, then the Riemann sum of $F$ along curve $C$ is defined and denoted to be the following: $$\sum_C F \cdot \Delta X=\sum_{i=1}^n F(Q_i)\cdot E_{i}.$$ In other words, the Riemann sum of a vector field $F$ is the sum of a certain real-valued function, $F\cdot E$, along a curve as defined in the beginning of the section. When the vector field $F$ is called a force field, then the sum of $F$ along $C$ is also called the work of force $F$ along curve $C$. Note that only the part of the force field passed through affects the work. The properties follow the ones above. Theorem (Negativity). $$\sum_{-C} F \cdot \Delta X=-\sum_C F \cdot \Delta X.$$ Theorem (Linearity). For any two vector fields $F$ and $G$ defined at the secondary nodes at the edges of the partition in ${\bf R}^n$ and any two numbers $\lambda$ and $\mu$, we have: $$\sum_C(\lambda F+\mu G)\cdot \Delta X=\lambda\sum_CF\cdot \Delta X+\mu\sum_{C}G\cdot \Delta X.$$ Theorem (Additivity). For any two oriented curves $C$ and $K$ with only finitely many points in common and that together form an oriented curve $C\cup K$, we have: $$\sum_{C\cup K}F\cdot \Delta X=\sum_CF\cdot \Delta X+\sum_K F\cdot \Delta X.$$ Line integrals: work A more general setting is that of a motion through space, ${\bf R}^n$, with a continuously changing force. We first assume that we move from point to point along a straight line. Example. Away from the ground, the gravity is proportional to the reciprocal of the square of the distance of the object to the center of the planet: $$F(X)=-\frac{kX}{||X||^3}.$$ The pressure and, therefore, the medium's resistance to motion may change arbitrarily. Multiple springs create a $2$-dimensional variability of forces: The definition of work applies to straight travel... or to travel along multiple straight edges: If these segments are given by the displacement vectors $D_1,...,D_n$ and the force for each is given by the vectors $F_1,...,F_n$, then the work is defined to be the simple sum of the work along each: $$W=F_1\cdot D_1+...+F_n\cdot D_n.$$ Example. If the force is constant $F_i=F$, we simplify, $$W=F\cdot D_1+...+F\cdot D_n=F\cdot (D_1+...+D_n),$$ and discover that the total work is the dot product of the force and the total displacement. This makes sense. This is a simple example of "path-independence". Furthermore, the round trip will require zero work... unless one has to walk to school "$5$ miles -- uphill both ways!"
The issue isn't as simple as it seems: even though it is impossible to make round trip while walking uphill, it is possible during this trip to walk against the wind even though the wind doesn't change. It all depends on the nature of the vector field. $\square$ Example. In order to compute the work of a vector field along a curve made of straight edges, all we need is the formula: $$W=F_1\cdot D_1+...+F_n\cdot D_n.$$ In order for the computation to make sense, the edges of the path and the vectors of the force have to be paired up! Here's a simple example: We pick the value of the force from the initial point of each edge: $$W=<-1,0>\cdot <0,1>+<0,2>\cdot <1,0>+<1,2>\cdot <1,1>=3.$$ Example. It is possible that there is no vector field and the force is determined entirely by our motion. For example, the air or water resistance is directed against our velocity (and is proportional to the speed). The computations remain the same. $\square$ The general setup for defining and computing work along a curve is identical to what we have done several times. Suppose we have a sequence of points $P_i,\ i=0,1,...,n$, in ${\bf R}^n$. We will treat this sequence as an oriented curve $C$ by representing it as the path of a parametric curve as follows. Suppose we have a sampled partition of an interval $[a,b]$: $$a=t_0\le c_1\le t_1\le ... \le c_n\le t_n=b.$$ We define a parametric curve by: $$X(t_i)=P_i,\ i=0,1,...,n.$$ However, it doesn't matter how fast we go along this path. It is the path itself -- the locations we visit -- that matters. The direction of the trip matters too. This is then about an oriented curve. In the meantime, a non-constant vectors along the path typically come from a vector field, $F=F(X)$. If its vectors change incrementally, one may be able to compute the work by a simple summation, as above. We then find a regular parametrization of the latter: a parametric curve $X=X(t)$ defined on the interval $[a,b]$. We divide the path into small segments with end-points $X_i=X(t_i)$ and then sample the force at the points $Q_i=X(c_i)$. Then the work along each of these segments is approximated by the work with the force being constantly equal to $F(Q_i)$: $$\text{ work along }i\text{th segment}\approx \text{ force }\cdot \text{ length}=F(Q_i)\cdot \Delta X_i,$$ where $\Delta X_i$ is the displacement along the $i$th segment. Then, $$\text{total work }\approx \sum_{i=1}^n F(Q_i)\cdot (X_{i+1}- X_i)=\sum_{i=1}^n F(X(c_i))\cdot (X(t_{i+1})-X(t_i)).$$ This is the formula that we have used and will continue to use for approximations. Note that this is just the sum of a discrete $1$-form. Example. Estimate the work of the force field $$F(x,y)=<xy,\ x-y>$$ along the upper half of the unit circle directed counterclockwise. First we parametrize the curve: $$X(t)=<\cos t,\ \sin t>,\ 0\le t\le \pi.$$ We choose $n=4$ intervals of equal length with the left-ends as the secondary nodes: $$\begin{array}{lll} x_0=0& x_1=\pi/4& x_2=\pi/2& x_3=3\pi/4&x_4=\pi\\ c_1=0& c_2=\pi/4& c_3=\pi/2& c_4=3\pi/4&\\ X_0=(1,\ 0)& X_1=(\sqrt{2}/2,\ \sqrt{2}/2)& X_2=(0,1)& X_3=(-\sqrt{2}/2,\ \sqrt{2}/2)& X_4=(-1,0)\\ Q_1=(1,\ 0)& Q_2=(\sqrt{2}/2,\ \sqrt{2}/2)& Q_3=(0,1)& Q_4=(-\sqrt{2}/2,\ \sqrt{2}/2)\\ F(Q_1)=<0,\ 1>& F(Q_2)=<1/2,\ 0>& F(Q_3)=<0,\ -1>& F(Q_4)=<-1/2,\ -\sqrt{2}> \end{array}$$ Then, $$\begin{array}{lll} W&\approx <0,1>\cdot <\sqrt{2}/2-1,\ \sqrt{2}/2> + <1/2,0>\cdot <-\sqrt{2}/2,\ 1-\sqrt{2}/2>\\ &+<0,-1>\cdot <-\sqrt{2}/2,\ \sqrt{2}/2-1> + <-1/2,\ -\sqrt{2}>\cdot <-1+\sqrt{2}/2,\ -\sqrt{2}/2>\\ &=.. . 
\end{array}$$ $\square$ To bring the full power of the calculus machinery, we, once again, proceed to convert the expression into the Riemann sum of a certain function over this partition: $$\text{total work }\approx \sum_{i=1}^n F(X(c_i))\cdot \frac{X(t_{i+1})-X(t_i)}{t_{i+1}-t_i}(t_{i+1}-t_i)=\sum_a^b\left((F\circ X)\cdot \frac{\Delta X}{\Delta t}\right)\Delta t.$$ Then, we define the work of the force as the limit, if it exists, of these Riemann sums, i.e., the Riemann integral. Definition. Suppose $C$ is an oriented curve in ${\bf R}^n$. For a vector field $F$ in ${\bf R}^n$, the line integral of $F$ along $C$ is denoted and defined to be the following: $$\int_CF\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt,$$ where $X=X(t),\ a\le t\le b$, is a regular parametrization of $C$. When the vector field $F$ is called a force field, then the integral of $F$ along $C$ is also called the work of force $F$ along curve $C$. The first term in the integral shows how the force varies with time during our trip. Just as always, the Leibniz notation reveals the meaning: $$\int_CF\cdot dX=\int_a^b (F\circ X)\cdot \frac{dX}{dt} dt,$$ Once all the vector algebra is done, we are left with just a familiar numerical integral from Chapter 10. Furthermore, when $n=1$, the integral is the familiar numerical integral from Chapter 10. Indeed, suppose $x=F(t)$ is just a numerical function and $C$ is the interval $[A,B]$ in the $x$-axis. Then we have: $$\int_CF\cdot dX=\int_{x=A}^{x=B} F\, dx=\int_{t=a}^{t=b}F(x(t))x'(t)\, dt,$$ where $x=x(t)$ serves as a parametrization of this interval so that $x(a)=A$ and $x(b)=B$. This is just an interpretation of the integration by substitution formula. Example. Compute the work of a constant vector field, $F=<-1,2>$, along a straight line, the segment from $(0,0)$ to $(1,3)$. First parametrize the curve and find its derivative: $$X(t)=<1,3>t,\ 0\le t\le 1,\ \Longrightarrow\ X'(t)=<1,3>.$$ Then, $$W=\int_CF\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt=\int_0^1<-1,2>\cdot <1,3>\, dt=\int_0^1 5\, dt=5.$$ $\square$ Example. Compute the work of the radial vector field, $F(X)=X=<x,y>$, along the upper half-circle from $(1,0)$ to $(-1,0)$. First parametrize the curve and find its derivative: $$X(t)=<\cos t,\ \sin t >,\ 0\le t\le \pi,\ \Longrightarrow\ X'(t)=<-\sin t,\cos t>.$$ Then, $$\begin{array}{lll} W&=\int_CF\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt\\ &=\int_0^\pi <\cos t,\ \sin t >\cdot <-\sin t,\ \cos t>\, dt\\ &=\int_0^\pi (\cos t (-\sin t)+\sin t\cos t)\, dt\\ &=0. \end{array}$$ $\square$ Theorem. The work is independent of parametrization. Thus, just as we used parametric curves to study a function of several variables, we use them to study a vector field. Note however, that only the part of the vector field visited by the parametric curve affects the line integral. Unlike the arc-length, the work depends on the direction of the trip. Theorem (Negativity). $$\int_{-C}F\cdot dX=-\int_CF\cdot dX.$$ Example. Is the work positive or negative? When all the angles are acute, it's positive. $\square$ Exercise. Finish the example. Exercise. How much work does it take to move an object attached to a spring $s$ units from the equilibrium? Exercise. How much work does it take to move an object $s$ units from the center of a planet? Exercise. What is the value of the line integral of the gradient of a function along one of its level curves? Theorem (Linearity). 
For any two vector fields $F$ and $G$ and any two numbers $\lambda$ and $\mu$, we have: $$\int_{C}(\lambda F+\mu G)\cdot dX=\lambda\int_CF\cdot dX+\mu\int_{C}G\cdot dX.$$ Theorem (Additivity). For any two oriented curves $C$ and $K$ with only finitely many points in common and that together form an oriented curve $C\cup K$, we have: $$\int_{C\cup K}F\cdot dX=\int_CF\cdot dX+\int_K F\cdot dX.$$ Let's look at the component representation of the integral. Starting with dimension $n=1$, the definition, $$\int_C F\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt,$$ becomes ($F=f,\ X=x,\ C=[A,B]$): $$\int_A^B f(x)\, dx=\int_a^b f(x(t)) x'(t)\, dt,$$ where $A=x(a)$ and $B=x(b)$. In ${\bf R}^2$, we have the following component representation of a vector field $F$ and the increment of $X$: $$F=<p,q> \text{ and } dX=<dx,dy>.$$ Then the line integral of $F$ along $C$ is denoted by: $$\int_C F\cdot dX=\int_C <p,q>\cdot <dx,dy>=\int_C p\, dx+ q\, dy.$$ Here, the integrand is a differential form of degree $1$: $$p\, dx+ q\, dy$$ The notation matches the formula of the definition. Indeed, the curve's parametrization $X=X(t),\ a\le t\le b$, has a component representation: $$X=<x,y>,$$ therefore, $$\int_a^bF(X(t))\cdot X'(t)\, dt=\int_a^bF(x(t),y(t))\cdot <x'(t),y'(t)>\, dt=\int_a^b p(x(t),y(t))x'(t)\, dt+\int_a^b q(x(t),y(t))y'(t)\, dt.$$ Similarly, in ${\bf R}^3$, we have a component representation of a vector field $F$ and the increment of $X$: $$F=<p,q,r> \text{ and } dX=<dx,dy,dz>.$$ Then the line integral of $F$ along $C$ is denoted by: $$\int_C F\cdot dX=\int_C p\, dx+ q\, dy+ r\, dz.$$ Let's review the recent integrals that involve parametric curves. Suppose $X=X(t)$ is a parametric curve on $[a,b]$. $\bullet$ The first is the (component-wise) integral of the parametric curve: $$\int_a^bX(t)\, dt,$$providing the displacement from the known velocity, as functions of time. $\bullet$ The second is the arc-length integral: $$\int_C f\, ds=\int_a^bf(X(t))||X'(t)||\, dt,$$ providing the mass of a curve of variable density. $\bullet$ The third is the line integral along an oriented curve: $$\int_C F\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt,$$ providing the work of the force field. The main difference between the first and the other two is that in the former case the parametric curve is the integrand (and the output is another parametric curve) and in the latter it provides the domain of integration (and the output is a number). Sums along closed curves reveal exactness Example. Let's consider the curve around a single square of the partition. Suppose $G$ is constant function on the partition (left): it has same value for each horizontal edge and same for each vertical edge. Then the flow along the curve is zero! Note that $G$ is exact: $G=\Delta f$ with the values of $f$ given by: $$f=\left[ \begin{array}{ll}\hline 2&3\\1&2\\ \hline\end{array} \right].$$ Suppose $G$ is rotational (right). Then the flow is not zero! Note that $G$ isn't exact, as demonstrated in the first section of the chapter. $\square$ Suppose $C$ is an oriented curve that consists of oriented edges $Q_i,\ i=1,...,m$, of a partition of a region $D$ in ${\bf R}^n$. Suppose a function defined on the secondary nodes $F$ is the difference in $D$, $G=\Delta f$, of some function $f$ defined on the primary nodes of the partition. 
We carry out a familiar computation: we just add all of these and cancel the repeated nodes: $$\begin{array}{lll} \sum_{C} G&=G(Q_1)&+G(Q_2)&+...&+G(Q_m)\\ &=G(P_{0}P_{1})&+G(P_{1}P_{2})&+...&+G(P_{m-1}P_{m})\\ &=\big[f(P_{1})-f(P_{0})\big]&+\big[f(P_{2})-f(P_{1})\big]&+...&+\big[f(P_{m})-f(P_{m-1})\big]\\ &=-f(P_0)&&&+f(P_m)\\ &=f(B)-f(A). \end{array}$$ We have proven the following. Theorem (Fundamental Theorem of Calculus for differences II). Suppose a function $G$ defined on the secondary nodes is exact, i.e., $G=\Delta f$ for some function $f$ defined on the primary nodes of the partition of region $D$. If an oriented curve $C$ in $D$ starts at node $A$ and ends at node $B$, then we have: $$\sum_C G=f(B)-f(A).$$ Now, the sum on the right is independent of our choice of $C$ as long as it is from $A$ to $B$! We formalize this property below. Definition. A function defined on the secondary nodes of a partition of a region $D$ in ${\bf R}^n$ is called path-independent over $D$ if its sum along any oriented curve depends only on the start- and the end-points of the curve; i.e., $$\sum_C G=\sum_K G,$$ for any two curves of edges $C$ and $K$ from node $A$ to node $B$ that lie entirely in $D$. What can we say about the sums of such functions along closed curves? The path-independence allows us to compare the curve to any curve with the same end-points. What is the simplest one? Consider this: if there are no pipes, there is no flow! We are talking about a special kind of path, a constant curve: $K=\{A\}$. Let's compare it to a closed curve $C$ from $A$ to $A$. The curve $K$ is trivial; therefore, we have: $$\sum_C G=\sum_K G=0.$$ So, path-independence implies zero sums along any closed curve. The converse is also true. Suppose we have two curves $C$ and $K$ from $A$ to $B$. We create a new, closed curve from them. We glue $C$ and the reversed $K$ together: $$Q=C\cup -K.$$ It goes from $A$ to $A$. Then, from Additivity and Negativity we have: $$0=\sum_Q G=\sum_C G+\sum_{-K} G=\sum_C G-\sum_{K} G.$$ Therefore, $$\sum_C G=\sum_{K} G.$$ In summary, we have the following. Theorem (Path-independence). A function defined on the secondary nodes of a partition of a region $D$ in ${\bf R}^n$ is path-independent if and only if all of its sums along closed curves in the partition are equal to zero. Suppose we have a path-independent function $G$ defined on edges of a partition of some set $D$ in ${\bf R}^n$. We expect it to be exact, but how do we find $f$ with $\Delta f=G$? The idea comes from Chapter 11. First, we choose an arbitrary node $A$ in $D$ and then carry out a summation along every possible curve from $A$. We define for each $X$ in $D$: $$f(X)=\sum_C G,$$ where $C$ is any curve from $A$ to $X$. A choice of $C$ doesn't matter because $G$ is path-independent. To ensure that this function is well defined we need an extra requirement. Theorem (Fundamental Theorem of Calculus for differences I). On a partition of a path-connected region $D$ in ${\bf R}^n$, if $G$ is path-independent, the function below is well-defined for a fixed $A$ in $D$: $$g(X)=\sum_C G,$$ where $C$ is any curve from $A$ to $X$ within the partition of $D$, and, furthermore, $$\Delta g=G.$$ Proof. Because the region is path-connected, there is always a curve from $A$ to any $X$, and the path-independence of $G$ guarantees that the value of $g$ doesn't depend on the choice of $C$. $\blacksquare$ What about vector fields? If $F$ is a vector field, we apply the above analysis to its projection $G=F\cdot \Delta X$. The sums become Riemann sums... Example.
Suppose $C$ is an oriented curve that consists of oriented edges $Q_i,\ i=1,...,m$, of a partition of a region $D$ in ${\bf R}^n$ and $$Q_i=P_{i-1}P_i\text{ with } P_0=P_m=A.$$ Suppose $F$ is constant vector field in $D$: $F(X)=G$ for all $X$ in $D$. Then the work of $G$ along $C$ is the following Riemann sum: $$\begin{array}{ll} \sum_C F \cdot \Delta X&=\sum_{i=1}^m F(Q_i)\cdot Q_{i}\\ &=\sum_{i=1}^m F\cdot Q_{i}\\ &=F\cdot \sum_{i=1}^m P_{i-1}P_i\\ &=F\cdot \sum_{i=1}^m (P_i-P_{i-1})\\ &=F\cdot \big[(P_1-P_{0})+(P_2-P_{1})+...+(P_m-P_{m-1})\big]\\ &=F\cdot \big[-P_{0}+P_m\big]\\ &=0. \end{array}$$ It is zero! $\square$ Example. The story is the exact opposite for the rotation vector field: $$F=<-y,x>.$$ Let's consider a single square of the partition; for example, $S=[1,2]\times [1,2]$. Suppose curve $C$ goes counterclockwise and the secondary nodes are the starting points of the edges. Then the work of $G$ along $C$ is the following Riemann sum: $$\begin{array}{ll} \sum_C F \cdot \Delta X&=\sum_{i=1}^4 F(Q_i)\cdot Q_{i}\\ &=F(1,1)\cdot <1,0>+F(2,1)\cdot <0,1>+F(2,2)\cdot <-1,0>+F(1,2)\cdot <0,-1>\\ &=<-1,1>\cdot <1,0>+<-1,2>\cdot <0,1>+<-2,2>\cdot <-1,0>+<-2,1>\cdot <0,-1>\\ &=-1+2+2-1\\ &=2. \end{array}$$ It is not zero! $\square$ The above formula for differences takes the following form: $$\sum_C F\cdot \Delta X=f(B)-f(A).$$ Not only the proof but also the formula itself looks like the familiar Fundamental Theorem of Calculus for numerical integrals from Chapter 11. Definition. A vector field $F$ defined on the secondary nodes of a partition of a region $D$ in ${\bf R}^n$ is called path-independent if its projection $F\cdot \Delta X$ is; i.e., the Riemann sum along any oriented curve depends only on the start- and the end-points of the curve: $$\sum_C F\cdot \Delta X=\sum_K F\cdot \Delta X,$$ for any two curves of edges $C$ and $K$ from node $A$ to node $B$ that lie entirely in $D$. For the sum along a closed curve, we note once again: if we stay home, we don't do any work! We have for a path-independent vector field $F$: $$\sum_C F\cdot \Delta X=\sum_K F\cdot \Delta X=0.$$ Conversely, suppose we have two curves $C$ and $K$ from $A$ to $B$. We create a new, closed curve from them, from $A$ to $A$, by gluing $C$ and the reversed $K$ together: $$Q=C\cup -K.$$ From the corresponding result for differences we derive the following. Theorem (Path-independence of vector fields). A vector field defined on the secondary nodes of a partition of a region $D$ in ${\bf R}^n$ is path-independent if and only if all of its Riemann sums along closed curves of edges in $D$ are equal to zero. Path-independence of integrals Again, let's consider constant force fields along closed curves, i.e., parametrized by some $X=X(t),\ a\le t\le b$, with $X(a)=X(b)=A$. Line integrals along closed curves have special notation: $$\oint_C F\cdot dX.$$ Example. Once again, what is the work of a constant force field along a closed curve such as a circle? Consider two diametrically opposite points on the circle. The directions of the tangents to the curve are opposite while the vector field is the same. Therefore, the terms $F\cdot X'$ in the work integral are negative of each other. So, because of this symmetry, two opposite halves of the circle will have work negative of each other and cancel. The work must be zero! 
Let's confirm this for $F=<p,q>$ and the standard parametrization of the circle: $$\begin{array}{ll} W&=\oint_C F\cdot dX=\int_a^b F(X(t))\cdot X'(t)\, dt\\ &=\int_0^{2\pi} <p,q>\cdot <\cos t,\ \sin t>'\, dt\\ &=\int_0^{2\pi}<p,q>\cdot <-\sin t,\ \cos t>\, dt\\ &=\int_0^{2\pi}(-p\sin t+q\cos t)\, dt\\ &=p\cos t\bigg|_0^{2\pi}+q\sin t\bigg|_0^{2\pi}\\ &=0+0=0. \end{array}$$ So, work cancels out during this round trip. $\square$ Example. Now consider the rotation vector field, $F=<-y,x>$, along the same circle traversed counterclockwise. Consider any point. The direction of the tangent to the curve is the same as the direction of the vector field. Therefore, the terms $F\cdot X'$ cannot cancel. The work is not zero! Let's confirm this result: $$\begin{array}{ll} W&=\oint_C F\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt\\ &=\int_0^{2\pi}<-y,x>\bigg|_{x=\cos t,\ y=\sin t}\cdot <\cos t,\ \sin t>'\, dt\\ &=\int_0^{2\pi}<-\sin t,\ \cos t>\cdot <-\sin t,\ \cos t>\, dt\\ &=\int_0^{2\pi}(\sin^2 t+\cos^2 t)\, dt\\ &=\int_0^{2\pi}1\, dt\\ &=2\pi. \end{array}$$ We have walked against the wind all the way in this round trip! The same logic applies to any location-dependent multiple of $F$ as long as the symmetry is preserved. For example, the familiar one below qualifies: $$G(X)=\frac{F(X)}{||X||^2}.$$ Even though, as we know, this vector field passes the Gradient Test, it has a positive line integral over a circle: $$W=\oint_C G\cdot dX=\int_a^b G(X(t))\cdot X'(t)\, dt>0,$$ because the integrand is positive. $\square$ The difference between the two outcomes may be explained by the fact that the constant vector field is gradient: $$<p,q>=\nabla f, \text{ where } f(x,y)=px+qy,$$ while the rotation vector field is not: $$<-y,x>\ne\nabla f, \text{ for any } z=f(x,y).$$ Is there anything special about line integrals of gradient vector fields over curves that aren't closed? We reach the same conclusion as in the discrete case: the line integral depends only on the potential function of $F$. But the latter is an antiderivative of $F$! This idea shows that this is just an analog of the original Fundamental Theorem of Calculus II (there will be FTC I later). Theorem (Fundamental Theorem of Calculus of gradient vector fields II). If on a subset of ${\bf R}^n$, we have $F=\nabla f$ and an oriented curve $C$ in ${\bf R}^n$ starts at point $A$ and ends at $B$, then $$\int_C F\cdot dX=f(B)-f(A).$$ Proof. Suppose we have: $$F=\nabla f,$$ and an oriented curve $C$ in ${\bf R}^n$ that starts at point $A$ and ends at $B$: Then, after parametrizing $C$ with $X=X(t),\ a\le t\le b$, we have via the Fundamental Theorem of Calculus (Chapter 11) and the Chain Rule (Chapter 8): $$\begin{array}{lll} W&=\int_C F\cdot dX\\ &=\int_a^b F(X(t))\cdot X'(t)\, dt\\ &=\int_a^b \nabla f(X(t))\cdot X'(t)\, dt&\text{...we recognize the integrand as a part of CR...}\\ &=\int_a^b \frac{d}{dt} f(X(t))\, dt&\text{...we apply now FTC II...}\\ &=f(X(t))\bigg|_a^b\\ &=f(X(b))-f(X(a))\\ &=f(B)-f(A). \end{array}$$ $\blacksquare$ For dimension $n=1$, we just take $y=F(x)$ to be a numerical function with antiderivative $f$ and $C$ is the interval $[A,B]$ in the $x$-axis. We also choose $x=x(t)$ to be a parametrization of this interval so that $x(a)=A$ and $x(b)=B$. Then we have from above: $$\int_CF\, dX=\int_{x=A}^{x=B} F\, dx=\int_{t=a}^{t=b}F(x(t))x'(t)\, dt=f(x(t))\bigg|_{t=a}^{t=b}=f(x(b))-f(x(a))=f(B)-f(A).$$ We have another interpretation of substitution in definite integrals. Not only the proof but also the formula itself looks like the familiar Fundamental Theorem of Calculus for numerical integrals from Chapter 11.
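As a numerical sanity check of this theorem (a Python sketch of ours, with a potential function chosen only for illustration), the line integral of a gradient field can be approximated by a Riemann sum and compared with $f(B)-f(A)$:

  import numpy as np

  def f(X):                              # a potential function of our choosing: f(x,y) = xy + y^2
      x, y = X
      return x*y + y**2

  def gradf(X):                          # its gradient: <y, x + 2y>
      x, y = X
      return np.array([y, x + 2*y])

  # the quarter-circle from A = (1,0) to B = (0,1)
  t = np.linspace(0.0, np.pi/2, 10001)
  X = np.column_stack((np.cos(t), np.sin(t)))

  dX = np.diff(X, axis=0)                        # the displacements X_{i+1} - X_i
  F = np.array([gradf(P) for P in X[:-1]])       # the field sampled at the start of each segment
  work = np.sum(F * dX)                          # the Riemann sum of F . dX

  print(work, f(X[-1]) - f(X[0]))                # both are close to f(0,1) - f(1,0) = 1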
Because it is restricted to gradient vector fields, this is just a preliminary version. Warning: Before applying the formula, confirm that the vector field is gradient! The example of $F=<-y,x>$ is to be remembered at all times. So, if $F$ is a gradient vector field then $$\oint_C F\cdot dX=0.$$ Therefore, the work is zero on net so that there is no gain or loss of energy. This is the reason why gradient vector fields are also called conservative. Example. Consider this rotation vector field, $V=<-y,x>,$ and especially its multiple: $$F=\frac{V}{||V||^2}=\frac{1}{x^2+y^2}<-y,\ x>=\left< \frac{-y}{x^2+y^2},\ \frac{x}{x^2+y^2}\right>=<p,q>.$$ We previously demonstrated the following: $$\begin{array}{lll} p_y=\frac{\partial}{\partial y}\frac{-y}{x^2+y^2}=-\frac{1\cdot (x^2+y^2)-y\cdot 2y}{(x^2+y^2)^2}=\frac{y^2-x^2}{(x^2+y^2)^2}\\ q_x=\frac{\partial}{\partial x}\frac{x}{x^2+y^2}=\frac{1\cdot (x^2+y^2)-x\cdot 2x}{(x^2+y^2)^2}=\frac{y^2-x^2}{(x^2+y^2)^2}\\ \end{array}\ \Longrightarrow\operatorname{rot}F=q_x-p_y=0$$ So, the rotor of the vector field is zero and it passes the Gradient Test; however, is it gradient? We demonstrate now that it is not. Indeed, suppose $X=X(t)$ is a counterclockwise parametrization of the circle. Then $F(X(t))$ is parallel to $X'(t)$. Therefore, $F(X(t))\cdot X'(t)>0$. It follows that the line integral along the circle is positive, while for a gradient vector field it would have to be zero: $$0=\oint_C F\cdot dX=\int_a^b F(X(t))\cdot X'(t)\, dt>0.$$ It is as if we have climbed a spiral staircase! A contradiction. $\square$ Not only is the expression on the right $$\int_C F\cdot dX=f(B)-f(A)$$ independent of the parametrization of the curve $C$, it is independent of our choice of $C$ as long as it is from $A$ to $B$! Definition. A vector field defined on a subset $D$ of ${\bf R}^n$ is called path-independent if its line integral along any curve depends only on the start- and the end-points of the curve; i.e., $$\int_C F\cdot dX=\int_K F\cdot dX,$$ for any two curves $C$ and $K$ from point $A$ to point $B$ that lie entirely in $D$. What if $A=B$? What can we say about line integral along a closed curve $C$? As an example, consider this: if we stay home, we don't do any work! We are talking about a constant curve, $K=\{A\}$. Let's compare it to a closed curve $C$ from $A$ to $A$. The parametrization of $K$ is trivial: $X(t)=A$ on the whole interval $[a,b]$. Therefore, $X'(t)=0$ and we have: $$\int_C F\cdot dX=\int_K F\cdot dX=\int_a^b F(X(t))\cdot X'(t)\, dt=\int_a^b F(X(t))\cdot 0\, dt=0.$$ So, path-independence implies zero line integrals along closed curves. The converse is also true. Suppose we have two curves $C$ and $K$ from $A$ to $B$. Just as in the last section, we create a new, closed curve from them. We glue $C$ and the reversed $K$ together: $$Q=C\cup -K.$$ It goes from $A$ to $A$. Then, from Additivity and Negativity we have: $$0=\int_Q F\cdot dX=\int_C F\cdot dX+\int_{-K} F\cdot dX=\int_C F\cdot dX-\int_{K} F\cdot dX.$$ Therefore, $$\int_C F\cdot dX=\int_{K}F\cdot dX.$$ In summary, we have the following. Theorem. A vector field defined on a subset $D$ of ${\bf R}^n$ is path-independent if and only if it has all of its line integrals along closed curves in $D$ equal to zero. We have established the following. Theorem. All gradient vector fields are path-independent. Proof. If $F$ is a gradient vector field, then $\oint_C F\cdot dX=0$ for every closed curve $C$ and, therefore, $F$ is path-independent. $\blacksquare$ Recall that we considered the Riemann integral (the area under the graph) but with a variable upper limit. It is illustrated below: $x$ runs from $a$ to $b$ and beyond.
Then the Fundamental Theorem of Calculus II states that for any continuous function $F$ on $[a,b]$, the function defined by $$\int_{a}^{x} F \, dx $$ is an antiderivative of $F$ on $(a,b)$. In the new setting, we have a path-independent vector field $F$ defined on some set $D$ in ${\bf R}$ and we need to find its potential function, i.e., a function the gradient of which is $F$, $\nabla f=F$. First, we choose an arbitrary point $A$ in $D$ and then do a lot of line integration. We define for each $X$ in $D$: $$f(X)=\int_CF\cdot dX,$$ where $C$ is any curve from $A$ to $X$. A choice of $C$ doesn't matter because $F$ is path-independent by assumption. There is an extra requirement. Theorem (Fundamental Theorem of Calculus of gradient vector fields I). For any gradient vector field $F$ defined on a path-connected region in ${\bf R}^n$, the function defined for a fixed $A$ in $D$ by: $$f(X)=\int_CF\cdot dX,$$ where $C$ is any curve from $A$ to $X$ within $D$, is a potential function of $F$ on $D$. How a ball is spun by the stream Suppose we have a vector field that describes the velocity field of a fluid flow. Let's place a ping-pong ball within the flow. We put it on a pole so that the ball remains fixed while it can freely rotate. We see the particles bombarding the ball and think of the vector field of the flow as a force field. Due to the ball's rough surface, the fluid flowing past it will make it spin around the pole. It is clear that a constant vector field will produce no spin or rotation. However, it is not the rotation of the vectors that we speak of. We are not asking: is a specific particle of water making a circle? but rather: does the combined motion of the particles make the ball spin? For example, this is what we see in the image on right. The ball in the center is in the middle of a whirl and will be clearly spun in the counterclockwise direction. The ball at the bottom is in the part of the stream with a constant direction but not magnitude. Will it spin? The ball at the top is being pushed in various directions at the same time and its spin seems very uncertain. How do we predict and how do we measure the amount of rotation? Example. The answer is simple when the force is applied to just one side of the ball as in the case of all racket sports: Let's take a closer look at the ball in the stream. For simplicity, let's assume that we can detect only four distinct values of the vector field on the four sides (on the grid) of the ball. We also assume at first that these four vectors are tangent to the surface of the ball. In other words, this is just a vector field. What is the net effect of these forces on the ball? Think of the ball as a tiny wind-mill with the four forces pushing (or pulling) its four blades. We just go around the ball (counterclockwise starting at the bottom) adding these numbers: $$1+1-2+1=1>0.$$ The ball will spin counterclockwise! In order to measure the amount of spin, let's assume that this is a unit square. Then, of course, the sum above is just a line sum from Chapter 21 representing the work performed by the force of the flow to spin the ball. Let's look at this quantity from the coordinate point of view. We observe that the forces with the same direction but on the opposite sides are cancelled. 
We see this effect if we re-arrange the terms: $$W=\text{horizontal: } 1-2\ +\text{ vertical: } 1+1.$$ We then represent each vector in terms of its $x$ and $y$ components: $$\text{force }=\quad\begin{array}{|lcr|} \hline \bullet-& \to\to&-\bullet\\ |&&|\\ \downarrow& &\uparrow\\ |&&|\\ \bullet-&\to&-\bullet\\ \hline \end{array} \quad=\quad \begin{array}{|lcr|} \hline \bullet-& 2&-\bullet\\ |&&|\\ -1& &1\\ |&&|\\ \bullet-&1&-\bullet\\ \hline \end{array}$$ The expression can then be seen as: $W=-$(the vertical change of the horizontal values) $+$ (the horizontal change of the vertical values). According to the Exactness Test for dimension $2$, a function $G$ defined on the edges of a partition is not exact when $\Delta_y p\ne\Delta_x q$. We form the following function to study this further. Definition. For a function $G$ defined on the edges of a partition of the $xy$-plane, the difference of $G$ is a function of two variables defined at the $2$-cells of the partition and denoted by: $$\Delta G=\Delta_x q-\Delta_y p,$$ where $p$ and $q$ are the $x$- and $y$-components of $G$ (i.e., its values on the horizontal and vertical edges respectively). It is as if we cover the whole stream with those little balls and study their rotation. Definition. If its difference is zero, a function defined on the edges of a partition is called closed. The negative rotation simply means rotation in the opposite direction. Example. All vector fields have vectors that change directions, i.e., rotate. What if they don't? Let's consider a flow with a constant direction but variable magnitude: $$\text{force }=\quad\begin{array}{|lcr|} \hline \bullet-& \to\to&-\bullet\\ |&&|\\ \cdot& &\cdot\\ |&&|\\ \bullet-&\to&-\bullet\\ \hline \end{array} \quad=\quad \begin{array}{|lcr|} \hline \bullet-& 2&-\bullet\\ |&&|\\ 0& &0\\ |&&|\\ \bullet-&1&-\bullet\\ \hline \end{array}$$ The rotor is $-1$ but where is rotation? Well, the speed of the water on one side is faster than on the other and this difference is the cause of the ball's spinning. $\square$ With this new concept, we can restate the Exactness Test. Theorem (Exactness Test dimension $2$). If $G$ is exact, it is closed; briefly: $$\Delta (\Delta h)=0.$$ Let's try a more general point of view: vector fields. Example. We represent each vector in terms of its $x$- and $y$-components: $$\text{force }=\quad\begin{array}{|lcr|} \hline \bullet-& \to\to&-\bullet\\ |&&|\\ \downarrow& &\uparrow\\ |&&|\\ \bullet-&\to&-\bullet\\ \hline \end{array} \quad=\quad \begin{array}{|lcr|} \hline \bullet-& <2,0>&-\bullet\\ |&&|\\ <0,-1>& &<0,1>\\ |&&|\\ \bullet-&<1,0>&-\bullet\\ \hline \end{array}$$ The expression can then be seen as: $W=-$(the vertical change of the horizontal vectors) $+$ (the horizontal change of the vertical vectors); or: $$W=\text{horizontal: } -(2-1)+\ \text{ vertical: } 1-(-1).$$ Of course, only the vertical/horizontal components of the vectors acting along the vertical/horizontal edges matter! So the result should remain the same if we modify make the other components non-zero: Then, we have: $$F=\begin{array}{|lcr|} \hline \bullet-& <2,0>&-\bullet\\ |&&|\\ <1/2,-1>& &<1,1>\\ |&&|\\ \bullet-&<1,-1>&-\bullet\\ \hline \end{array}$$ The value of $W$ above remains the same even though the forces are directed off the tangent of the ball! The difference is between a real-valued $1$-form and a vector-valued $1$-form. 
If $F=<p,q>$, we have component-wise: $$p=\begin{array}{|lcr|} \hline \bullet-& 2&-\bullet\\ |&&|\\ 1/2& &1\\ |&&|\\ \bullet-&1&-\bullet\\ \hline \end{array}\quad\leadsto\ \Delta_y p =2-1=1,\qquad q= \begin{array}{|lcr|} \hline \bullet-& 0&-\bullet\\ |&&|\\ -1& &1\\ |&&|\\ \bullet-&-1&-\bullet\\ \hline \end{array}\leadsto\ \Delta_x q =1-(-1)=2.$$ Then, $$W=-\Delta_y p+\Delta_x q=-1+2=1.$$ This is the familiar rotor from Chapter 21! Here is another way to arrive at this quantity. If $C$ is the border of the square oriented in the counterclockwise direction, the line sum along $C$ gives us the following: $$\begin{array}{ll} W&=\sum_C F\cdot \Delta X\\ &=\begin{array}{|cccc|} \hline & <2,0>\cdot<-1,0>&+\\ <1/2,-1>\cdot<0,-1>& + &<1,1>\cdot<0,1>\\ +&<1,-1>\cdot<1,0>\\ \hline \end{array}\\ &=\begin{array}{|cccc|} \hline & -2&+\\ 1& + &1\\ +&1\\ \hline \end{array}\\ &=1. \end{array}$$ $\square$ According to the Gradient Test for dimension $2$, a vector field $F=<p,q>$ is not gradient when $\frac{\Delta p}{\Delta y}\ne\frac{\Delta q}{\Delta x}$. We form the following function of two variables to study this further. Definition. For a vector field $F$ defined on the secondary nodes (the $1$-cells) of a partition of a region in the $xy$-plane, the rotor of $F$ is a function defined on the tertiary nodes (the $2$-cells) of the partition and denoted by: $$\operatorname{rot} F=\frac{\Delta q}{\Delta x}-\frac{\Delta p}{\Delta y},$$ where $p$ and $q$ are the $x$- and $y$-components of $F$ (i.e., its values on the horizontal and vertical edges respectively). Definition. If the rotor is zero, the vector field is called irrotational. One can see a high value of the rotor in the center and zero around it in the following example: Example. From the equality of the mixed partial difference quotients, it follows that the rotor of the gradient of a function gives values exactly equal to $0$. $\square$ With this new concept, we can restate the Gradient Test. Corollary (Gradient Test dimension $2$). If a vector field is gradient, then it's irrotational. What about the $3$-dimensional space? Once again, we place a small ball within the flow in such a way that the ball remains fixed while being able to rotate. If the ball has a rough surface, the fluid flowing past it will make it spin. In the discrete case, each face, i.e., a $2$-cell, of the partition is subject to the $2$-dimensional analysis presented above. In other words, the ball located within a face rotates around the axis perpendicular to the face. According to the Exactness Test for dimension $3$, $G=<p,q,r>$ is not exact when one of these fails: $$\Delta_y p=\Delta_x q,\ \Delta_z q=\Delta_y r,\ \Delta_x r=\Delta_z p.$$ We form the following vector field to study this further. Definition. For a function $G$ defined on the secondary nodes (edges) of a partition of the $xyz$-space, the difference of $G$ is a function defined at the tertiary nodes ($2$-cells) of a partition of a region in the $xyz$-space and denoted by: $$\Delta G=\begin{cases} \Delta_y r-\Delta_z q&\text{ on the faces parallel to the }yz\text{-plane},\\ \Delta_z p-\Delta_x r&\text{ on the faces parallel to the }xz\text{-plane},\\ \Delta_x q-\Delta_y p&\text{ on the faces parallel to the }xy\text{-plane}, \end{cases}$$ where $p$, $q$, and $r$ are the $x$-, $y$-, and $z$-components of $G$ respectively. If the difference is zero, $G$ is called closed. Of course, the $3$-dimensional difference is made of the three $2$-dimensional ones with respect to each of the three pairs of coordinates. Same statement as for dimension $2$!
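Going back to dimension $2$ for a moment: to "cover the whole stream with little balls", the rotor can be computed for every cell of a grid at once from the values of $p$ on the horizontal edges and $q$ on the vertical edges. Here is a Python sketch of ours (the single-cell data is the ball example above):

  import numpy as np

  def rotor(p, q, dx=1.0, dy=1.0):
      # p[j, i]: value on the i-th horizontal edge in the j-th row of edges (x-component)
      # q[j, i]: value on the i-th vertical edge in the j-th row of cells (y-component)
      dq_dx = (q[:, 1:] - q[:, :-1]) / dx   # horizontal change of q, one value per cell
      dp_dy = (p[1:, :] - p[:-1, :]) / dy   # vertical change of p, one value per cell
      return dq_dx - dp_dy                  # rot F on the 2-cells

  # the single square of the ball example: bottom 1, top 2, left -1, right 1
  p = np.array([[1.0], [2.0]])
  q = np.array([[-1.0, 1.0]])
  print(rotor(p, q))                        # [[1.]], the counterclockwise spin found above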
According to the Gradient Test for dimension $3$, a vector field $F=<p,q,r>$ is not gradient when one of these fails: $$\frac{\Delta p}{\Delta y}=\frac{\Delta q}{\Delta x},\ \frac{\Delta q}{\Delta z}=\frac{\Delta r}{\Delta y} ,\ \frac{\Delta r}{\Delta x}=\frac{\Delta p}{\Delta z}.$$ Definition. For a vector field $F$ defined on the edges of a partition of the $xyz$-space, the curl of $F$ is a function of three variables defined at the $2$-cells of a partition of a cell in the $xyz$-space and denoted by: $$\operatorname{curl} F=\begin{cases} \frac{\Delta r}{\Delta y}-\frac{\Delta q}{\Delta z}&\text{ on the faces parallel to the }yz\text{-plane},\\ \frac{\Delta p}{\Delta z}-\frac{\Delta r}{\Delta x}&\text{ on the faces parallel to the }xz\text{-plane},\\ \frac{\Delta q}{\Delta x}-\frac{\Delta p}{\Delta y}&\text{ on the faces parallel to the }xy\text{-plane}, \end{cases}$$ where $p$, $q$, and $r$ are the $x$-, $y$-, and $z$-components of $F$ respectively. If the curl is zero, $F$ is called irrotational. Of course, the curl is made of the three rotors with respect to the three pairs of coordinates. Same statement! The two theorems can be restated in an even more concise form, in terms of the compositions of these functions of functions: $$\Delta\Delta=0.$$ When no secondary nodes are specified, we deal with discrete forms. Then, if we travel along the following diagram, we end up at zero no matter what the starting point is: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llll} 0\text{-forms }&\ra{\Delta}&\ 1\text{-forms }& \ra{\Delta}&\ 2\text{-forms}. \end{array}$$ The Fundamental Theorem of Discrete Calculus of degree $2$ Suppose curve $C$ is the border of the rectangle $R$ oriented in the counterclockwise direction. Suppose the flow is given by these numbers as defined on each of the edges of the rectangle: $$G=\begin{array}{|ccc|} \hline \bullet& p_3&\bullet\\ q_4& &q_2\\ \bullet&p_1&\bullet\\ \hline \end{array},\quad C=\begin{array}{|ccc|} \hline \bullet& \leftarrow &\bullet\\ \downarrow& &\uparrow\\ \bullet&\to&\bullet\\ \hline \end{array}$$ Then the line sum along $C$ is the following: $$\begin{array}{ll} W&=\sum_C G\\ &\begin{array}{cccc} =&&& -p_3&\\ &+&-q_4& + &q_2\\ &+&&p_1 \end{array}\\ &=-p_3-q_4+ q_2+p_1\\ &=(q_2-q_4)-(p_3-p_1)\quad (\text{i.e., horizontal change of }q\text{ and vertical change of }p)\\ &=\Delta_x q- \Delta_y p\\ &=\Delta G. \end{array}$$ As you can see, rearranging the four terms of the work that come from the trip around the square creates the following. First, it is the difference of the vertical flow on the two sides of the ball and, second, it is the difference of the horizontal flow on the other two sides. Finally, the difference of these two quantities appears and it indicates the total flow. It is the difference of $G$. We have a preliminary result below. Theorem. In a partition of a plane region $R$, if $C$ is a simple closed curve that constitutes the boundary of a single $2$-cell $D$ of the partition by going counterclockwise around $D$, we have the following for any function $G$ defined on the secondary nodes of the partition: $$\sum_{C} G=\Delta G.$$ What if we have a more complex object in the stream? How do we measure the amount of flow around it? We approach the problem as follows: we suppose that there are many little balls in the flow forming some shape and then find the amount of the flow around the balls.
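Both the identity $\Delta\Delta=0$ and the single-cell theorem above are easy to check numerically. The following Python sketch does so on a small grid with made-up sample values (the array names, the grid size, and the use of NumPy are arbitrary choices made only for this check):

import numpy as np

rng = np.random.default_rng(0)
nx, ny = 5, 4                          # nodes in the x and y directions
f = rng.normal(size=(nx, ny))          # a 0-form: values at the nodes

# Delta f as a 1-form: values on the horizontal and vertical edges.
P = f[1:, :] - f[:-1, :]               # horizontal edges: change of f in x
Q = f[:, 1:] - f[:, :-1]               # vertical edges: change of f in y

# Delta(Delta f) on every 2-cell: horizontal change of Q minus vertical change of P.
dd = (Q[1:, :] - Q[:-1, :]) - (P[:, 1:] - P[:, :-1])
print(np.allclose(dd, 0))              # True: Delta Delta = 0

# Single-cell check of sum_C G = Delta G for an arbitrary 1-form G = (Ph, Qv):
Ph = rng.normal(size=(nx - 1, ny))     # values on horizontal edges
Qv = rng.normal(size=(nx, ny - 1))     # values on vertical edges
i, j = 2, 1                            # pick one 2-cell
boundary_sum = Ph[i, j] + Qv[i + 1, j] - Ph[i, j + 1] - Qv[i, j]
delta_G = (Qv[i + 1, j] - Qv[i, j]) - (Ph[i, j + 1] - Ph[i, j])
print(np.isclose(boundary_sum, delta_G))   # True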
Note that every ball will try to rotate all of its adjacent balls in the same direction at the same speed with no more flow required. This idea of cancellation of spin takes an algebraic form below. We will start with a single rectangle and then build more and more complex regions on the plane from the rectangles of our grid -- as if each contains a ball -- while maintaining the formula. Let's put two rectangles together. Suppose we have two adjacent ones, $R_1$ and $R_2$, bounded by curves $C_1$ and $C_2$. We write the Fundamental Theorem for each and then add the two: $$\begin{array}{lll} &\sum_{C_1} G&=\sum_{R_1} \Delta G\\ +\\ &\sum_{C_2} G&=\sum_{R_2} \Delta G\\ \hline &\sum_{C_1\cup C_2} G&=\sum_{R_1\cup R_2} \Delta G \end{array}$$ In the right-hand side, we have a single sum according to Additivity of sums and in the left-hand side, we have a single sum according to Additivity. Here $C_1\cup C_2$ is the curve that consists of $C_1$ and $C_2$ traveled consecutively. Now, this is an unsatisfactory result because ${C_1\cup C_2}$ doesn't bound ${R_1\cup R_2}$. Fortunately, the left-hand side can be simplified: the two curves share an edge but travel it in the opposite directions. We have a cancellation according to Negativity for sums. The result is: $$\sum_{\partial D} G=\sum_{D} \Delta G,$$ where $D$ is the union of the two rectangles and $\partial D$ is its boundary. We have constructed the Fundamental Theorem for this more complex region! We continue on adding one rectangle at a time to our region $D$ and cancelling the edges shared with others, producing a bigger and bigger curve $C=\partial D$ that bounds $D$: We can add as many rectangles as we like, producing a larger and larger region made of the rectangles and bounded by a single closed curve made of edges... unless we circle back! Then the boundary curve might break into two... We will ignore this possibility for now and state the second preliminary version of the main theorem. Theorem. In a partition of a plane region $R$, if $C$ is a simple closed curve that constitutes the boundary of $R$ by going counterclockwise around $R$, we have for any function $G$ defined on the secondary nodes of the partition: $$\sum_{C=\partial R} G=\sum_{R} \Delta G.$$ What if the function is a difference, i.e., $G=\Delta f$ for some $f$? Then its difference is zero and, therefore, our formula takes this form: $$\sum_{\partial D} G=\sum_{D} \Delta G=\sum_{D}0 =0.$$ The sum along any closed curve is then zero and, according to the Path-independence Theorem, $G$ is path-independent. Then, $$\sum_{C} G=f(B)-f(A),$$ for any curve $C$ from $A$ to $B$, where $f$ is a potential function of $G$. We have arrived at the Fundamental Theorem of Calculus for differences. It follows that the Fundamental Theorem is its generalization. However, as the Fundamental Theorem of Calculus for parametric curves, i.e., degree $1$, indicates, there is more than one fundamental theorem for each dimension! What if our function doesn't depend on $y$, i.e., $G(x,y)=q(x)$, while $R$ is a rectangle $[a,b]\times [c,d]$?
In the left-hand side of the formula, the sums along the two horizontal sides of $R$ cancel each other: $$\sum_{C} G=q(b)-q(a).$$ In the right-hand side of the formula, we have: $$\sum_{R} \Delta G=\sum_{[a,b]\times [c,d]} \Delta_x G=\sum_{[a,b]} \Delta q.$$ We have arrived at the original Fundamental Theorem of Discrete Calculus (degree $1$) from Chapter 11: $$q(b)-q(a)=\sum_{[a,b]} \Delta q.$$ Not only have we derived the degree $1$ from degree $2$, but also both theorems have the same form! We realize that in the above formula, $$\sum_{\{a,b\}}q=\sum_{[a,b]} \Delta q,$$ the right-hand side is a sum of a $1$-form over a ($1$-dimensional) region, $R=[a,b]$, while the left-hand side is a sum of a $0$-form over the boundary, $\partial R=\{a,b\}$, properly oriented, of that region. Now, what if the boundary curve does break into two when we add a new square? In the example below the square is added along with four of its edges. As a result, we add the two vertical edges while the two horizontal ones are cancelled as before. Thus a new square is seamlessly added but we also see the appearance of a hole: The difference is dramatic: not only is the boundary of the region now made of two curves, but the one outside goes counterclockwise (as before) while the one inside goes clockwise! However, either curve has the region to its left. Our formula, $$\sum_{C} G=\sum_{R} \Delta G,$$ doesn't work anymore, even though the meaning of the right-hand side is still clear. But what should be the meaning of the left-hand side? It should be the total sum of $G$ over all boundary curves of $R$, correctly oriented! Thus, what is fundamental is the relation between a region $R$ in a partition and its boundary $\partial R$. Theorem (Fundamental Theorem of Discrete Calculus of degree $2$). In a partition of a plane region $R$, we have the following for any function $G$ defined on the secondary nodes of the partition: $$\sum_{\partial R} G=\sum_{R} \Delta G.$$ Example. We know that for a region bounded by a simple closed curve, the sum along any closed curve is $0$. Let's take a look at what happens in regions with holes. Consider this rotation function $G$: Its values are $\pm 1$ with directions indicated except for the four edges in the middle with values of $\pm 3$. The function is defined on the $3\times 3$ region $R$ that excludes the middle square. By direct examination we show that the difference of $G$ is zero at every face of $R$: $$\Delta G=0.$$ So, $G$ passes the Exactness Test; however, is it exact? We demonstrate now that it is not. Indeed, the sum of $G$ along the outer boundary of $R$ isn't zero: $$\sum_C G=12.$$ How does it work with our theorem: $$\sum_{C} G=\sum_{R} \Delta G?$$ It seems that the left-hand side is positive while the right-hand side is zero... What we have overlooked is that $G$ and, therefore, its difference are undefined at the middle square! So, $C$ doesn't bound $R$. In fact, the boundary of $R$ includes another curve, $C'$, going clockwise. Then, $$\sum_{C} G+\sum_{C'} G=\sum_{R} \Delta G=0.$$ Therefore, we have: $$\sum_{C} G=\sum_{-C'} G.$$ So, moving from the larger path to the smaller (or vice versa) doesn't change the sum! Also notice that the sums from one corner to the opposite are $6$ and $-6$. There is no path-independence!
$\square$ To summarize, even when the difference is -- within the region -- zero, the sum along a path that goes around the hole may be non-zero. Furthermore, the sum remains the same for all closed curves as long as they make exactly the same number of turns around the hole! The meaning of path-independence changes accordingly; it all depends on how the curve goes between the holes. Next, we consider the relation between this line integral, which represents the work performed by the flow to spin the ball, and the rotor of the vector field. Recall that we have a vector field $F=<p,q>$ representing the velocity field of a fluid flow with a ping-pong ball within it that can freely rotate but not move. We measure the amount of rotation as the work performed by the force of the flow rotating the ball. Let's first suppose we have a grid on the plane with rectangles: $$\Delta x\times \Delta y.$$ Suppose that the flow, given by a vector field $F=<p,q>$, rotates this rectangle just like the ball before. Thus, the Riemann sum of the vector field along the boundary of a rectangle is equal to the (double) Riemann sum of the rotor over this rectangle and, furthermore, over any region made of such rectangles. Theorem (Fundamental Theorem of Discrete Calculus for vector fields). In a partition of a plane region $R$, we have the following for any vector field $F$ defined on the secondary nodes of the partition: $$\sum_{\partial R} F\cdot \Delta X=\sum_{R} \operatorname{rot} F\, \Delta A.$$ Proof. The proof is independent of the last theorem. Suppose curve $C$ is the border of the rectangle $R$ oriented in the counterclockwise direction. Suppose the vector field is given by these vectors as defined on each of the edges of the rectangle, which is shown on the right: $$F=\begin{array}{|ccc|} \hline \bullet& <p_3,q_3>&\bullet\\ <p_4,q_4>& &<p_2,q_2>\\ \bullet&<p_1,q_1>&\bullet\\ \hline \end{array},\quad \Delta X= \begin{array}{|ccc|} \hline \bullet& <-\Delta x,0>&\bullet\\ <0,-\Delta y>& &<0,\Delta y>\\ \bullet&<\Delta x,0>&\bullet\\ \hline \end{array}$$ Then the Riemann sum along $C$ is: $$\begin{array}{ll} W&=\sum_C F\cdot \Delta X\\ &\begin{array}{cccc} =&&& <p_3,q_3>\cdot<-\Delta x,0>&\\ &+&<p_4,q_4>\cdot<0,-\Delta y>& + &<p_2,q_2>\cdot<0,\Delta y>\\ &+&&<p_1,q_1>\cdot<\Delta x,0>\\ =&&& -p_3\Delta x&\\ &+&-q_4\Delta y& + &q_2\Delta y\\ &+&&p_1\Delta x \end{array}\\ &=-p_3\Delta x-q_4\Delta y+ q_2\Delta y+p_1\Delta x\\ &=(q_2-q_4)\Delta y-(p_3-p_1)\Delta x\\ &=\frac{q_2-q_4}{\Delta x}\Delta x\Delta y-\frac{p_3-p_1}{\Delta y}\Delta y\Delta x\\ &=\left(\frac{\Delta q}{\Delta x}-\frac{\Delta p}{\Delta y}\right)\Delta x\Delta y. \end{array}$$ $\blacksquare$ Green's Theorem: the Fundamental Theorem of Calculus for vector fields in dimension $2$ According to the Gradient Test for dimension $2$, a vector field $F=<p,q>$ is not gradient when $p_y\ne q_x$. We form the following function of two variables to study this further (as if we cover the whole stream with those little balls). Definition. The rotor of a vector field $F=<p,q>$ differentiable on an open region of the plane is a function of two variables defined on the region and denoted by $$\operatorname{rot}F=q_x-p_y.$$ Example. Vector fields typically have vectors that change directions, i.e., "rotate". What if they don't? Let's consider a vector field with a constant direction but variable magnitude. Let's try: $$F(x,y)=<y^2,0>.$$ Then $$\operatorname{rot}F=q_x-p_y=0-2y\ne 0.$$ The rotation is again non-zero.
In fact, the graph of the rotor shows that the rotation will be clockwise above the $x$-axis and counterclockwise below it. The effect is seen when a person lies on the top of two adjacent -- up and down -- escalators: With this new concept, we can restate the Gradient Tests from Chapter 21. Theorem (Gradient Test dimension $2$). Suppose $F$ is a vector field on an open region in ${\bf R}^2$ with continuously differentiable component functions. If $F$ is gradient (i.e., $F=\operatorname{grad}h$), then it's irrotational: $\operatorname{rot}F=0$; briefly: $$\operatorname{rot}(\operatorname{grad}h)=0.$$ What about $3$-dimensional vector fields? Once again, suppose we have a vector field that describes the velocity field of a fluid flow. We place a small ball within the flow in such a way that the ball remains fixed while being able to rotate. If the ball has a rough surface, the fluid flowing past it will make it spin. The ball can rotate around any axis. We can restate the Gradient Test for dimension $3$ as follows. Theorem (Gradient Test dimension $3$). Suppose $F$ is a vector field on an open region in ${\bf R}^3$ with continuously differentiable component functions. If $F$ is gradient (i.e., $F=\operatorname{grad}h$), then it's irrotational with respect to all three pairs of coordinates: $$\operatorname{rot}_{y,z}<q,r>=0,\ \operatorname{rot}_{z,x}<r,p>=0,\ \operatorname{rot}_{x,y}<p,q>=0.$$ The subscripts indicate with respect to which two variables we differentiate while the third is kept fixed. In fact, we can form the following vector field called the curl of $F$ that takes care of all three rotors: $$\operatorname{curl}F=\operatorname{rot}_{y,z}<q,r>i+\operatorname{rot}_{z,x}<r,p>j+\operatorname{rot}_{x,y}<p,q>k=<r_y-q_z,p_z-r_x,q_x-p_y>.$$ In particular, when the vector field $F=pi+qj+rk$ has a zero $z$-component, $r=0$, while $p$ and $q$ don't depend on $z$, the curl is reduced to the rotor: $$\operatorname{curl}(pi+qj)= \operatorname{rot}<p,q>k.$$ Exercise. Define a $4$-dimensional analog of the rotor. The two theorems can be restated in an even more concise form, in terms of the compositions of these functions of functions: $$\operatorname{rot}\operatorname{grad}=0\text{ and } \operatorname{curl}\operatorname{grad}=0.$$ Once again, we end up at zero no matter what the starting point is: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llll} \text{ functions of two variables }&\ra{\operatorname{grad}}& \text{ vector fields in }{\bf R}^2 & \ra{\operatorname{rot}}&\text{ functions of two variables}, \\ \text{ functions of three variables }& \ra{\operatorname{grad}}&\text{ vector fields in }{\bf R}^3 &\ra{\operatorname{curl}}&\text{ vector fields}. \end{array}$$ The analysis of the work integral in the continuous case is similar to the one for the discrete case. How do we measure the amount of rotation of an object placed in the flow, i.e., the work performed by the force of the flow rotating it? We suppose that there are many little balls in the flow forming some shape and then find the amount of their total rotation, i.e., the work performed by the force of the flow rotating the balls. Just as before, we start with a single rectangle and then build more and more complex regions on the plane from the rectangles of our grid -- as if each contains a ball -- while maintaining the formula.
If we have just two adjacent squares, $R_1$ and $R_2$, bounded by curves $C_1$ and $C_2$, we write Green's formula for each and then add the two: $$\begin{array}{lll} &\oint_{C_1} F\cdot dX&=\iint_{R_1} \operatorname{rot} F\, dA\\ +\\ &\oint_{C_2} F\cdot dX&=\iint_{R_2} \operatorname{rot} F\, dA\\ \hline &\oint_{C_1\cup C_2} F\cdot dX&=\iint_{R_1\cup R_2} \operatorname{rot} F\, dA \end{array}$$ In the right-hand side, we have a single integral according to Additivity of double integrals and in the left-hand side, we have a single integral according to Additivity of line integrals. Here $C_1\cup C_2$ is the curve that consists of $C_1$ and $C_2$ traveled consecutively. The left-hand side is simplified: the two curves share an edge but travel it in the opposite directions. We have a cancellation according to Negativity for line integrals. The result is: $$\oint_{\partial D} F\cdot dX=\iint_{D} \operatorname{rot} F\, dA,$$ where $D$ is the union of the two rectangles and $\partial D$ is its boundary. We continue on adding one rectangle at a time to our region $D$ and cancelling the edges shared with others, producing a bigger and bigger curve $C=\partial D$ that bounds $D$: Or we can add whole regions... It is possible, however, that the boundary curve might cease to be a single closed curve! Theorem (Fundamental Theorem of Calculus for vector fields). Suppose a plane region $R$ is bounded by a piecewise differentiable curve $C$ (possibly made of several disconnected pieces). Then for any vector field $F$ with continuously differentiable components on an open set containing $R$, we have: $$\oint_{C} F\cdot dX=\iint_{R} \operatorname{rot} F\, dA.$$ Proof. We only demonstrate the proof for a region $R$ that has a partition that also produces a partition of $C$. We sample $F$ at the secondary nodes of the partition of $C$ and $\operatorname{rot} F$ at the tertiary nodes of the partition of $R$. We then use the Fundamental Theorem of Discrete Calculus for vector fields: $$\sum_{\partial R} F\cdot \Delta X=\sum_{R} \operatorname{rot} F\, \Delta A.$$ We take the limits of these two Riemann sums over the partitions with the mesh approaching zero. $\blacksquare$ This is also known as Green's Formula. Written component-wise, it takes the following form: $$\int_C p\, dx+ q\, dy=\iint_{R} (q_x-p_y)\, dxdy.$$ Let's trace the theorem back to some familiar things. What if the vector field is gradient? Then its rotor is zero and, therefore, our formula takes this form: $$\oint_{\partial D} F\cdot dX=\iint_{D} \operatorname{rot} F\, dA=\iint_{D}0\, dA =0.$$ The line integral along any closed curve is then zero and, according to the Path-independence Theorem, $F$ is path-independent. Then, $$\int_{C} F\cdot dX=f(B)-f(A),$$ for any curve $C$ from $A$ to $B$, where $f$ is a potential function of $F$. We have arrived at the Fundamental Theorem of Calculus for gradient vector fields. It follows that Green's Theorem is its generalization. This confirms the role of Green's Theorem as the Fundamental Theorem of Calculus for all vector fields in dimension $2$. What if the vector field doesn't depend on $y$, i.e., $F(x,y)=F(x)=<p(x),q(x)>$, while $R$ is a rectangle $[a,b]\times [c,d]$? First the left-hand side of the formula... The line integrals along the two horizontal sides of $R$ cancel each other. We are left with: $$\oint_{\partial D} F\cdot dX=F(b)\cdot A-F(a)\cdot A,$$ where $A$ is the vector that represents the vertical sides of $R$ (oriented vertically).
Then, $$\oint_{\partial D} F\cdot dX=(q(b)-q(a))(d-c).$$ Now the right-hand side of the formula... The rotor is simply $q'(x)$. Then, $$\iint_{R} \operatorname{rot} F\, dA=\iint_{[a,b]\times [c,d]} q'(x)\, dxdy=\int_a^b\int_c^d q'(x)\, dy\, dx=\int_a^b q'(x)\, dx\, (d-c).$$ We have arrived at the original Fundamental Theorem of Calculus from Chapter 11: $$q(b)-q(a)=\int_a^b q'(x)\, dx.$$ Example. Let's, again, consider this rotation vector field: $$F=\frac{1}{x^2+y^2}<y,\ -x>=\left< \frac{y}{x^2+y^2},\ -\frac{x}{x^2+y^2}\right>=<p,q>,$$ which is irrotational: $$\operatorname{rot}F=q_x-p_y=0.$$ Even though it passes the Gradient Test, it is not gradient. Indeed, if $X=X(t)$ is a counterclockwise parametrization of the circle, $F(X(t))$ is anti-parallel to $X'(t)$, and, therefore, the line integral along the circle is negative and, in particular, non-zero: $$\int_a^b F(X(t))\cdot X'(t)\, dt<0.$$ On the other hand, according to our theorem, we have: $$\oint_{C} F\cdot dX=\iint_{R} \operatorname{rot} F\, dA=0.$$ So, $C$ doesn't bound $R$. A hole is what makes a spiral staircase possible by providing a place for the pole. Now, we'd need $R$ to be a ring so that the boundary of $R$ would include another curve, maybe a smaller circle, $C'$, going clockwise. Then, $$\oint_{C} F\cdot dX+\oint_{C'} F\cdot dX=\iint_{R} \operatorname{rot} F\, dA=0.$$ Therefore, we have: $$\oint_{C} F\cdot dX=\oint_{-C'} F\cdot dX.$$ So, moving from the larger circle to the smaller (or vice versa) doesn't change the line integral, i.e., the work. A remarkable result! It is seen as even more remarkable once we realize that the integral remains the same for all closed curves as long as they make exactly the same number of turns around the origin! $\square$ To summarize, even when the rotor is -- within the region -- zero, the line integral along a curve that goes around the hole may be non-zero. Furthermore, the integral remains the same for all closed curves as long as they make exactly the same number of turns around the origin! The meaning of path-independence changes accordingly; it all depends on how the curve goes between the holes: Example. Imagine that we need to find the area of a piece of land we have no access to, such as a fortification or a pond. Conveniently, Green's Formula allows us to compute the area of a region without visiting the inside but by just taking a trip around it. We just need to pick an appropriate vector field: $$F=<0,x>\ \Longrightarrow\ p=0,\ q=x\ \Longrightarrow\ p_y=0,\ q_x=1.$$ Then the formula takes the following form: $$\begin{array}{ll} \iint_{R} (q_x-p_y)\, dxdy&=\int_C p\, dx&+ q\, dy,\\ \iint_{R} 1\, dxdy&=\int_C 0\, dx&+ x\, dy,\\ \text{area of }R&=& \int_C x\, dy.\\ \end{array}$$ For example, the area of the disk $R$ of radius $r$ is a certain line integral around the circle $C$. We take $C$ to be parametrized the usual way: $$x=r\cos t,\ y=r\sin t.$$ Then, $$\text{area of the disk}= \int_C x\, dy=\int_0^{2\pi}r\cos t(r\sin t)'\, dt=r^2\int_0^{2\pi}\cos^2 t\, dt=r^2\Big(\frac{t}{2}+\frac{\sin 2t}{4}\Big)\Big|_0^{2\pi}=r^2\cdot\frac{2\pi}{2}=\pi r^2.$$ $\square$
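The last computation is easy to sanity-check numerically: the short Python sketch below approximates $\int_C x\, dy$ by a Riemann sum over the parametrization above and compares it with $\pi r^2$ (the radius and the number of sample points are arbitrary choices made only for this check):

import math

r = 2.0
n = 100_000                                   # sample points on the circle
area = 0.0
for k in range(n):
    t = 2 * math.pi * k / n
    x = r * math.cos(t)                       # x(t) = r cos t
    dy = r * math.cos(t) * (2 * math.pi / n)  # y'(t) dt with y(t) = r sin t
    area += x * dy

print(area, math.pi * r**2)                   # both are approximately 12.566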
Ethical issues also arise with the use of drugs to boost brain power. Their use as cognitive enhancers isn't currently regulated. But should it be, just as the use of certain performance-enhancing drugs is regulated for professional athletes? Should universities consider dope testing to check that students aren't gaining an unfair advantage through drug use? Intrigued by old scientific results & many positive anecdotes since, I experimented with microdosing LSD - taking doses ~10μg, far below the level at which it causes its famous effects. At this level, the anecdotes claim the usual broad spectrum of positive effects on mood, depression, ability to do work, etc. After researching the matter a bit, I discovered that as far as I could tell, since the original experiment in the 1960s, no one had ever done a blind or even a randomized self-experiment on it. There is an ancient precedent to humans using natural compounds to elevate cognitive performance. Incan warriors in the 15th century would ingest coca leaves (the basis for cocaine) before battle. Ethiopian hunters in the 10th century developed coffee bean paste to improve hunting stamina. Modern athletes ubiquitously consume protein powders and hormones to enhance their training, recovery, and performance. The most widely consumed psychoactive compound today is caffeine. Millions of people use coffee and tea to be more alert and focused. Nootropics are a responsible way of using smart drugs to enhance productivity. As defined by Giurgea in the 1960's, nootropics should have little to no side-effects. With nootropics, there should be no dependency. And maybe the effects of nootropics are smaller than for instance Adderall, you still improve your productivity without risking your life. This is what separates nootropics from other drugs. At small effects like d=0.07, a nontrivial chance of negative effects, and an unknown level of placebo effects (this was non-blinded, which could account for any residual effects), this strongly implies that LLLT is not doing anything for me worth bothering with. I was pretty skeptical of LLLT in the first place, and if 167 days can't turn up anything noticeable, I don't think I'll be continuing with LLLT usage and will be giving away my LED set. (Should any experimental studies of LLLT for cognitive enhancement in healthy people surface with large quantitative effects - as opposed to a handful of qualitative case studies about brain-damaged people - and I decide to give LLLT another try, I can always just buy another set of LEDs: it's only ~$15, after all.) Taurine (Examine.com) was another gamble on my part, based mostly on its inclusion in energy drinks. I didn't do as much research as I should have: it came as a shock to me when I read in Wikipedia that taurine has been shown to prevent oxidative stress induced by exercise and was an antioxidant - oxidative stress is a key part of how exercise creates health benefits and antioxidants inhibit those benefits.
Bought 5,000 IU soft-gels of Vitamin D-333 (Examine.com; FDA adverse events) because I was feeling very apathetic in January 2011 and not getting much done, even slacking on regular habits like Mnemosyne spaced repetition review or dual n-back or my Wikipedia watchlist. Introspecting, I was reminded of depression & dysthymia & seasonal affective disorder. Barbara Sahakian, a neuroscientist at Cambridge University, doesn't dismiss the possibility of nootropics to enhance cognitive function in healthy people. She would like to see society think about what might be considered acceptable use and where it draws the line – for example, young people whose brains are still developing. But she also points out a big problem: long-term safety studies in healthy people have never been done. Most efficacy studies have only been short-term. "Proving safety and efficacy is needed," she says. Companies already know a great deal about how their employees live their lives. With the help of wearable technologies and health screenings, companies can now analyze the relation between bodily activities — exercise, sleep, nutrition, etc. — and work performance. With the justification that healthy employees perform better, some companies have made exercise mandatory by using sanctions against those who refuse to perform. And according to The Kaiser Family Foundation, of the large U.S. companies that offer health screenings, nearly half of them use financial incentives to persuade employees to participate. Yes, according to a new policy at Duke University, which says that the "unauthorized use of prescription medicine to enhance academic performance" should be treated as cheating." And no, according to law professor Nita Farahany, herself based at Duke University, who has called the policy "ill-conceived," arguing that "banning smart drugs disempowers students from making educated choices for themselves." Another empirical question concerns the effects of stimulants on motivation, which can affect academic and occupational performance independent of cognitive ability. Volkow and colleagues (2004) showed that MPH increased participants' self-rated interest in a relatively dull mathematical task. This is consistent with student reports that prescription stimulants make schoolwork seem more interesting (e.g., DeSantis et al., 2008). To what extent are the motivational effects of prescription stimulants distinct from their cognitive effects, and to what extent might they be more robust to differences in individual traits, dosage, and task? Are the motivational effects of stimulants responsible for their usefulness when taken by normal healthy individuals for cognitive enhancement? Never heard of OptiMind before? This supplement promotes itself as an all-natural nootropic supplement that increases focus, improves memory, and enhances overall mental drive. The product first captured our attention when we noticed that their supplement blend contains a few of the same ingredients currently present in our editor's #1 choice. So, of course, we grew curious to see whether their formula was as (un)successful as their initial branding techniques. Keep reading to find out what we discovered… Learn More... My answer is that this is not a lot of research or very good research (not nearly as good as the research on nicotine, eg.), and assuming it's true, I don't value long-term memory that much because LTM is something that is easily assisted or replaced (personal archives, and spaced repetition). 
For me, my problems tend to be more about akrasia and energy and not getting things done, so even if a stimulant comes with a little cost to long-term memory, it's still useful for me. I'm going continue to use the caffeine. It's not so bad in conjunction with tea, is very cheap, and I'm already addicted, so why not? Caffeine is extremely cheap, addictive, has minimal effects on health (and may be beneficial, from the various epidemiological associations with tea/coffee/chocolate & longevity), and costs extra to remove from drinks popular regardless of their caffeine content (coffee and tea again). What would be the point of carefully investigating it? Suppose there was conclusive evidence on the topic, the value of this evidence to me would be roughly $0 or since ignorance is bliss, negative money - because unless the negative effects were drastic (which current studies rule out, although tea has other issues like fluoride or metal contents), I would not change anything about my life. Why? I enjoy my tea too much. My usual tea seller doesn't even have decaffeinated oolong in general, much less various varieties I might want to drink, apparently because de-caffeinating is so expensive it's not worthwhile. What am I supposed to do, give up my tea and caffeine just to save on the cost of caffeine? Buy de-caffeinating machines (which I couldn't even find any prices for, googling)? This also holds true for people who drink coffee or caffeinated soda. (As opposed to a drug like modafinil which is expensive, and so the value of a definitive answer is substantial and would justify some more extensive calculating of cost-benefit.) "A system that will monitor their behavior and send signals out of their body and notify their doctor? You would think that, whether in psychiatry or general medicine, drugs for almost any other condition would be a better place to start than a drug for schizophrenia," says Paul Appelbaum, director of Columbia University's psychiatry department in an interview with the New York Times. Powders are good for experimenting with (easy to vary doses and mix), but not so good for regular taking. I use OO gel capsules with a Capsule Machine: it's hard to beat $20, it works, it's not that messy after practice, and it's not too bad to do 100 pills. However, I once did 3kg of piracetam + my other powders, and doing that nearly burned me out on ever using capsules again. If you're going to do that much, something more automated is a serious question! (What actually wound up infuriating me the most was when capsules would stick in either the bottom or top try - requiring you to very gingerly pull and twist them out, lest the two halves slip and spill powder - or when the two halves wouldn't lock and you had to join them by hand. In contrast: loading the gel caps could be done automatically without looking, after some experience.) On the other hand, sometimes you'll feel a great cognitive boost as soon as you take a pill. That can be a good thing or a bad thing. I find, for example, that modafinil makes you more of what you already are. That means if you are already kind of a dick and you take modafinil, you might act like a really big dick and regret it. It certainly happened to me! I like to think that I've done enough hacking of my brain that I've gotten over that programming… and that when I use nootropics they help me help people. 
The peculiar tired-sharp feeling was there as usual, and the DNB scores continue to suggest this is not an illusion, as they remain in the same 30-50% band as my normal performance. I did not notice the previous aboulia feeling; instead, around noon, I was filled with a nervous energy and a disturbingly rapid pulse which meditation & deep breathing did little to help with, and which didn't go away for an hour or so. Fortunately, this was primarily at church, so while I felt irritable, I didn't actually interact with anyone or snap at them, and was able to keep a lid on it. I have no idea what that was about. I wondered if it might've been a serotonin storm since amphetamines are some of the drugs that can trigger storms but the Adderall had been at 10:50 AM the previous day, or >25 hours (the half-lives of the ingredients being around 13 hours). An hour or two previously I had taken my usual caffeine-piracetam pill with my morning tea - could that have interacted with the armodafinil and the residual Adderall? Or was it caffeine+modafinil? Speculation, perhaps. A house-mate was ill for a few hours the previous day, so maybe the truth is as prosaic as me catching whatever he had. A television advertisement goes: "It's time to let Focus Factor be your memory-fog lifter." But is this supplement up to task? Focus Factor wastes no time, whether paid airtime or free online presence: it claims to be America's #1 selling brain health supplement with more than 4 million bottles sold and millions across the country actively caring for their brain health. It deems itself instrumental in helping anyone stay focused and on top of his game at home, work, or school. Learn More... Among the questions to be addressed in the present article are, How widespread is the use of prescription stimulants for cognitive enhancement? Who uses them, for what specific purposes? Given that nonmedical use of these substances is illegal, how are they obtained? Furthermore, do these substances actually enhance cognition? If so, what aspects of cognition do they enhance? Is everyone able to be enhanced, or are some groups of healthy individuals helped by these drugs and others not? The goal of this article is to address these questions by reviewing and synthesizing findings from the existing scientific literature. We begin with a brief overview of the psychopharmacology of the two most commonly used prescription stimulants. Some suggested that the lithium would turn me into a zombie, recalling the complaints of psychiatric patients. But at 5mg elemental lithium x 200 pills, I'd have to eat 20 to get up to a single clinical dose (a psychiatric dose might be 500mg of lithium carbonate, which translates to ~100mg elemental), so I'm not worried about overdosing. To test this, I took on day 1 & 2 no less than 4 pills/20mg as an attack dose; I didn't notice any large change in emotional affect or energy levels. And it may've helped my motivation (though I am also trying out the tyrosine). Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I'm doing a long experiment and I should be able to detect a large effect size with >75% chance. 
So, the formula is NPV of the difference between taking and not taking, times quality of information, times expectation: \frac{10 - 0}{\ln 1.05} \times 0.75 \times 0.40 = 61.4, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit. Gibson and Green (2002), talking about a possible link between glucose and cognition, wrote that research in the area …is based on the assumption that, since glucose is the major source of fuel for the brain, alterations in plasma levels of glucose will result in alterations in brain levels of glucose, and thus neuronal function. However, the strength of this notion lies in its common-sense plausibility, not in scientific evidence… (p. 185). Bacopa Monnieri is probably one of the safest and most effective memory and mood enhancer nootropic available today with the least side-effects. In some humans, a majorly extended use of Bacopa Monnieri can result in nausea. One of the primary products of AlternaScript is Optimind, a nootropic supplement which mostly constitutes of Bacopa Monnieri as one of the main ingredients. Both nootropics startups provide me with samples to try. In the case of Nootrobox, it is capsules called Sprint designed for a short boost of cognitive enhancement. They contain caffeine – the equivalent of about a cup of coffee, and L-theanine – about 10 times what is in a cup of green tea, in a ratio that is supposed to have a synergistic effect (all the ingredients Nootrobox uses are either regulated as supplements or have a "generally regarded as safe" designation by US authorities) Organizations, and even entire countries, are struggling with "always working" cultures. Germany and France have adopted rules to stop employees from reading and responding to email after work hours. Several companies have explored banning after-hours email; when one Italian company banned all email for one week, stress levels dropped among employees. This is not a great surprise: A Gallup study found that among those who frequently check email after working hours, about half report having a lot of stress. Federal law classifies most nootropics as dietary supplements, which means that the Food and Drug Administration does not regulate manufacturers' statements about their benefits (as the giant "This product is not intended to diagnose, treat, cure, or prevent any disease" disclaimer on the label indicates). And the types of claims that the feds do allow supplement companies to make are often vague and/or supported by less-than-compelling scientific evidence. "If you find a study that says that an ingredient caused neurons to fire on rat brain cells in a petri dish," says Pieter Cohen, an assistant professor at Harvard Medical School, "you can probably get away with saying that it 'enhances memory' or 'promotes brain health.'" On the plus side: - I noticed the less-fatigue thing to a greater extent, getting out of my classes much less tired than usual. (Caveat: my sleep schedule recently changed for the saner, so it's possible that's responsible. I think it's more the piracetam+choline, though.) - One thing I wasn't expecting was a decrease in my appetite - nobody had mentioned that in their reports.I don't like being bothered by my appetite (I know how to eat fine without it reminding me), so I count this as a plus. 
- Fidgeting was reduced further Most diehard nootropic users have considered using racetams for enhancing brain function. Racetams are synthetic nootropic substances first developed in Russia. These smart drugs vary in potency, but they are not stimulants. They are unlike traditional ADHD medications (Adderall, Ritalin, Vyvanse, etc.). Instead, racetams boost cognition by enhancing the cholinergic system. The effect? 3 or 4 weeks later, I'm not sure. When I began putting all of my nootropic powders into pill-form, I put half a lithium pill in each, and nevertheless ran out of lithium fairly quickly (3kg of piracetam makes for >4000 OO-size pills); those capsules were buried at the bottom of the bucket under lithium-less pills. So I suddenly went cold-turkey on lithium. Reflecting on the past 2 weeks, I seem to have been less optimistic and productive, with items now lingering on my To-Do list which I didn't expect to. An effect? Possibly. Smart pills have revolutionized the diagnosis of gastrointestinal disorders and could replace conventional diagnostic techniques such as endoscopy. Traditionally, an endoscopy probe is inserted into a patient's esophagus, and subsequently the upper and lower gastrointestinal tract, for diagnostic purposes. There is a risk of perforation or tearing of the esophageal lining, and the patient faces discomfort during and after the procedure. A smart pill or wireless capsule endoscopy (WCE), however, can easily be swallowed and maneuvered to capture images, and requires minimal patient preparation, such as sedation. The built-in sensors allow the measurement of all fluids and gases in the gut, giving the physician a multidimensional picture of the human body. The truth is that, almost 20 years ago when my brain was failing and I was fat and tired, I did not know to follow this advice. I bought $1000 worth of smart drugs from Europe, took them all at once out of desperation, and got enough cognitive function to save my career and tackle my metabolic problems. With the information we have now, you don't need to do that. Please learn from my mistakes!
2018 South Central USA Regional Contest Problem L Unterwave Distance (Figure: the first sample input.) The year is 2312, and Humanity's Rush to the Stars is now $200$ years old. The problem of real-time communication between star systems remains, however, and the key is the unterwave. Unterwave (UW) communication works as follows: direct UW links have been set up between certain pairs of star systems, and two systems without a direct link communicate by hopping along existing links through intermediary systems. Although no system has more than $20$ direct links to other systems, any two systems can communicate, either directly or indirectly. You have been asked to select two star systems for your company's new home offices—one in a human system, and one in an alien system. (Oh yeah, there are aliens.) You are to select them so that the UW distance (a measure of communication latency) is minimized. Every star system has a gravity value that affects UW communication. Just as no two snowflakes on earth have the same crystalline structure, no two star systems in the universe have the same gravity. If a communication path from one system to another encounters the sequence $G = g_1, g_2, \ldots , g_ n$ of gravity values, including the source and destination of the message, then three sequences of $n-1$ terms each may be defined, relating to physical characteristics of UW communication: \begin{align*} \operatorname {cap}(G) & = g_2+g_1,\ g_3+g_2,\ \ldots ,\ g_ n+g_{n-1} & \textrm{(Capacitance)} \\ \operatorname {pot}(G) & = g_2-g_1,\ g_3-g_2,\ \ldots ,\ g_ n-g_{n-1} & \textrm{(Potential)} \\ \operatorname {ind}(G) & = g_2\cdot g_1,\ g_3\cdot g_2,\ \ldots ,\ g_ n\cdot g_{n-1} & \textrm{(Inductance)} \end{align*} For two sequences $A=a_1,\ a_2,\ \ldots ,\ a_ m$ and $B=b_1,\ b_2,\ \ldots ,\ b_ m$ we define: \begin{align*} A\cdot B & = a_1\cdot b_1,\ a_2\cdot b_2,\ \ldots ,\ a_ m\cdot b_ m & A-B & = a_1- b_1,\ a_2- b_2,\ \ldots ,\ a_ m- b_ m \end{align*} Finally, the UW distance is given by the absolute value of the sum of the values of the sequence \[ \operatorname {pot}(G)\cdot \left(\left[\operatorname {cap}(G)\cdot \operatorname {cap}(G)\right]-\operatorname {ind}(G)\right). \] The UW distance can be quite large. Fortunately, your company has a gravity dispersal device, which may be placed in any single system (and remain there). It reduces by $1$ the gravity of that system, but increases by $1$ the gravities of all systems directly linked to it. Use of this device is optional. The first line has a single integer $2\leq n\leq 50\, 000$, giving the number of star systems. The next $n$ lines describe the star systems, numbered $1$ to $n$. Each line has the form $g\ d$ where $g$ is the gravity value of that system ($1\leq g\leq 1\, 000\, 000$) and $d$ is either 'h' or 'a', designating the system as human or alien (there is at least one system of each type). All values of $g$ are distinct. The next line has a positive integer $e$ giving the number of direct links between systems. Each of the next $e$ lines contains two integers separated by a single space, which are the numbers of two systems that are directly linked. Links are bidirectional, and no link appears more than once in the input. No system links directly to more than $20$ other systems. Output the minimum UW distance that can be achieved between an alien and human system.
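As an illustration of the distance formula only (this is not a solution to the problem; it ignores the choice of path and the gravity dispersal device, and the function name and sample values are made up), the UW distance along one fixed path follows directly from the definitions:

def uw_distance(g):
    """UW distance of a path whose gravity values are g[0], ..., g[n-1]."""
    total = 0
    for a, b in zip(g, g[1:]):             # consecutive pairs (g_i, g_{i+1})
        cap = b + a                         # capacitance term
        pot = b - a                         # potential term
        ind = b * a                         # inductance term
        total += pot * (cap * cap - ind)    # pot(G) . ([cap(G).cap(G)] - ind(G))
    return abs(total)

# A direct link between systems with gravities 3 and 5:
# pot = 2, cap = 8, ind = 15, so the distance is |2 * (8*8 - 15)| = 98.
print(uw_distance([3, 5]))                  # 98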
Problem ID: unterwavedistance CPU Time limit: 5 seconds Memory limit: 1024 MB Author: Robert Hochberg Source: 2018 ICPC South Central USA Regional Contest
Malnutrition among hospitalized children 12–59 months of age in Abyan and Lahj Governorates / Yemen Ali Ahmed Al-Waleedi1 & Abdulla Salem Bin-Ghouth2 The analysis of acute malnutrition in 2018 for the Integrated Phase Classification of Food Security in Yemen shows that high malnutrition rates are present in Abyan governorate (23%) and Lahj governorate (21%). This analysis was community based addressed all children and mostly due to problems related to food intake. The role of diseases was not yet addressed in Yemen. The aim of this study is to assess acute and chronic malnutrition among hospitalized children at 12–59 months of age in Lahj and Abyan governorates in Yemen. A cross-sectional, multi-center study is designed. The assessment of the nutritional status was measured by standardized anthropometry of 951 sick children at 12–59 months of age. The prevalence of Global acute malnutrition (GAM) among the sick children seeking care in health facilities in Lahj and Abyan is 21%. More specifically; the prevalence of moderate acute malnutrition (MAM) is 15.1% while the prevalence of severe acute malnutrition (SAM) is 6.2%. The prevalence of acute malnutrition (wasting) among the studied sick children in lahj is 23.4% while in Abyan is 19.3%. The prevalence of MAM in Lahj is 17.7% and the prevalence of SAM is 5.7%. The prevalence of acute malnutrition (wasting) in Abyan is 12.6% while the prevalence of SAM in Abyan is 6.7%. The prevalence of acute malnutrition among male children (25.2%) is significantly higher than among female children (17.5%). The prevalence of the chronic malnutrition (Stunting) in the studied sick children is 41.3%; the prevalence of stunting in Lahj is 41% while in Abyan is 41.7%. High acute and chronic malnutrition rates were identified among sick children seeking care in health facilities in lahj and Abyan, and higher than the SPHERE indicators of malnutrition. Boys are more exposed than girls to acute and chronic malnutrition. Malnutrition in children is of high concern in developing countries like Yemen. However, malnutrition is multifactorial. Malnutrition in low-income countries is often, but not solely, be attributable to limited access to food and/or medical care, it is often triggered by disease [1]. In one study among 3101 hospitalized children in nine countries in sub-Saharan Africa in 2019; it was found that 24.6% of the hospitalized children had moderate wasting, and 39·3% had severe wasting with death rate of 11·3% [2]. Most of the local and international reports described the situation in Yemen is the worst humanitarian crises in the world. Malnutrition among Yemeni children is one of the painful crises. In 2015, UNICEF's report concludes that a striking ten of Yemen's 22 governorates are on the edge of famine, as defined by the five-point Integrated Food Security Phase Classification (IPC) scale [3]. In 2017; a study published in the lancet indicated that according to organizations working to end hunger, about 370 000 of Yemen's children are suffering from severe malnutrition. Additionally, one million children younger than five years old are at risk of acute malnutrition [4]. The rate of child malnutrition in Yemen is one of the highest in the world and the nutrition situation continues to deteriorate. World food program (WFP) reported that about one third of families have gaps in their diets, and hardly ever consume foods like pulses, vegetables, fruit, dairy products or meat. 
Malnutrition rates among children in Yemen remain among the highest in the world, with 2.3 million children under five years requiring treatment for acute malnutrition [5]. Another study among children under five years of age identified that the high malnutrition level (the prevalence of stunting was 47%, wasting was 16%, and underweight was 39%) [6]. Recent study in 2022 reported that more than 2.3 million children under the age of five in Yemen suffer from acute malnutrition. Approximately 450,000 are expected to suffer from severe acute malnutrition and may die if they do not receive urgent treatment [7]. Cases of acute malnutrition among children under five have risen to the highest levels recorded in parts of Yemen. More than half a million cases recorded in the southern districts. Analysis of acute malnutrition in 2018 in Yemen for the Integrated Phase Classification of Food Security issued by the Organization Food and Agriculture of the United Nations (FAO), the United Nations Children's Fund (UNICEF), the World Food Program and their partner identified high rate of malnutrition. The most affected areas included in this analysis are Abyan governorate (23%), Lahj governorate (21%) [8], in another two studies the global acute malnutrition in Abyan was 10% [9]. And in lahj was 27.3% [10]. Malnourished children are more vulnerable to illnesses, including diarrhea, respiratory infections, and malaria, which are a major concern in Yemen [11] Disease-related malnutrition in children is the consequence of different factors. For example, food intake due to anorexia, feeding difficulties or the effects of medications or due to the hyper metabolic state caused by the underlying disease [12,13,14]. Identification of malnutrition among hospitalized children is important because most pediatricians have no concern on the impact of malnutrition on the clinical outcome of the sick child. Mostly, they neglect the malnutrition as a determinant of the disease prognosis. This study aimed is to assess the malnutrition among sick children in two governorates in the southern Yemen of high malnutrition prevalence. The specific objective is to assess the prevalence of acute and chronic malnutrition among children aged 12 to 59 months seeking outpatient care and their association with governorate, health facility, gender, residency, family income, availability to drinking water. A cross-sectional, multi-center study was designed in to determine the prevalence of malnutrition and related morbidity among hospitalized children 12–59 months of age in Abyan and Lahj governorates. In Yemen, there are 22 governorates; since The Civil War in 2015; twelve governorates in the south are under the control of International recognized government (IRG) including lahj and Abyan governorates. The total population of Yemen in 2021 is 31,153,000 people. The total population of Lahj in 2021 is 1,070,000 people including 115,685 children at 12–59 months while population of Abyan is 609,000 people including 55,547 children at age of 12–59 months [15]. In each governorate there is one government hospital and, in each district, there is one district hospitals. In each district there is a main city and a group of villages, in a big village there is one primary health care (PHC) center. Figure 1 Map of Yemen the coloured areas are Lahj (left) and Abyan (Right) governorates. 
Source: https://en.wikipedia.org/wiki/Lahij_Governorate & https://en.wikipedia.org/wiki/Abyan_Governorate The assessment of the nutritional status was measured by standardized anthropometry at attendance in the outpatient clinic. Wasting, measured by weight for height/length or MUAC, and stunting, measured by height/length for age (SDS, WHO reference), are the primary outcome variables. Gender, family residency, family income, and availability of drinking water in the house are the independent variables. The study population are children at 12–59 months of age who attend the health facility to seek care for a certain health problem. Mothers were interviewed while a trained nurse measured the weight, height and mid-upper arm circumference (MUAC) of the sick child. From each governorate, five health facilities were selected. These facilities were: the main governorate hospital, two district hospitals and two health centers from two different villages in two different districts. There is only one governorate hospital in every governorate. The other four health facilities were selected based on the selection of the districts. From every governorate, two districts were selected by simple random sampling out of 12 districts in every governorate. From each district we selected the district hospital (there is only one district hospital in each district) and two villages out of 8–12 villages in each district. From each village we selected the village PHC center (there is one center in every village). Data were collected through a group of enumerators and two field supervisors. A two-day training was conducted in Lahj (Al-Hottah city) on the 28th of February, 2022 and in Abyan (Zunjibar city) on the 3rd of March, 2022, where enumerators were trained on the questionnaire and the selection of the targeted children (sick children seeking care in the selected health facility within 12–59 months of age). IT personnel trained the enumerators on using the KOBO toolbox and uploading the digital questionnaire to their mobiles. This method is the most effective way to let the research team monitor the process of data collection on a daily basis. Sample size calculation The formula that is used to calculate the sample size is Daniel's formula for a cross-sectional study in an infinite population [16]. The following simple formula (Daniel, 1999) can be used: $$N=\frac{{z}^{2}P\left(1-P\right)}{{d}^{2}}$$ where \(N\) = sample size, \(z\) = z statistic for a level of confidence (1.96), \(P\) = expected prevalence or proportion, here 10% based on the prevalence of malnutrition in Abyan [9], and \(d\) = precision (\(d\) = 0.02, i.e., 2%). Accordingly, the sample size will be: $$N=\frac{1.96^{2}\times 0.10\times\left(1-0.10\right)}{0.02^{2}}=\frac{0.3457}{0.0004}\approx 864$$ We add 10% to avoid non-response, so the final sample size was 864 + 86 = 950. The sample size was distributed equally between Lahj and Abyan (475 from each) and then distributed proportionally to health facility category, based on the flow of the sick children to health facilities (37% in hospitals, 21% in district hospitals and 42% in health centers). So, the sample was distributed as 175 children from each governorate hospital, 100 children from each district hospital and 50 children from each health center. Anthropometric measurements [17] Weight: Children were weighed standing on the weight scale to the nearest 0.1 kg. For the children who could not stand, weight was measured on an infant weight scale. Height/Length: Height and length of children were measured using a height scale and recorded to the nearest 0.1 cm.
Children less than 87.0 cm were measured lying down, and children 87.0 cm or taller were measured in a standing position. MUAC: Mid-upper arm circumference measurements were made using a flexible, non-stretch tape. MUAC measurements were taken at the mid-point of the left upper arm. All the selected sick children aged 12–59 months were measured to the nearest 0.1 cm. The MUAC tape was interpreted using both its graduations and its colour bands: MUAC ≥ 115 mm and < 125 mm was considered moderate malnutrition, while the green band (MUAC ≥ 125 mm) was categorized as normal according to the WHO classification. Operational definition of the outcome indicators [18] Wasting: Weight-for-height (wasting) provides the clearest picture of acute malnutrition. Moderate Acute Malnutrition (MAM) is identified by moderate wasting, WFH ≤ -2 z-score and ≥ -3 z-score, for children 0–59 months (or, for children 6–59 months, MUAC ≥ 115 mm and < 125 mm). Table 1. Table 1 Anthropometric measurements and indicators Severe Acute Malnutrition (SAM) is identified by severe wasting, WFH < -3 z-score, for children 12–59 months (or, for children 12–59 months, MUAC < 115 mm) or the presence of bilateral pitting edema. Global Acute Malnutrition (GAM) is the presence of both MAM and SAM in a population. A GAM value of more than 10% indicates an emergency. GAM exceeding 15% is considered critical, GAM of 11–14% is considered severe, and GAM above 5% but below 10% is considered a poor indicator. Chronic malnutrition (stunting) (height-for-age Z score (HAZ)): The HAZ measure indicates whether a child of a given age is chronically malnourished (stunted). The height-for-age index of a child from the studied population is expressed as a Z-score (HAZ). The indicators: the proportion of wasting (MAM, SAM and GAM) among the hospitalized children in Lahj and Abyan = number of hospitalized children with MAM / all hospitalized children under study, and = number of hospitalized children with SAM / all hospitalized children under study. The proportion of stunting (chronic malnutrition) among the hospitalized children in Lahj and Abyan = number of hospitalized children with stunting / all hospitalized children under study. Data were transferred from the KOBO application data set to an Excel file, then to the Statistical Package for Social Sciences (SPSS) version 24. Descriptive statistics (means and standard deviations) were used to describe the quantitative variables, while frequencies and percentages were used to describe the qualitative variables. The chi-square test was used as the inferential statistic to assess the significance of the association between malnutrition and the independent variables of gender, residency, family income, and availability of drinking water in the house. A cut-off point of 0.05 was used as the significance level. Fisher's exact test was used as an alternative to the chi-square test when expected cell counts were less than 5. For the purposes of bivariate and logistic regression analysis, the dependent variables were re-classified into two categories: acute malnutrition was re-classified into acute malnutrition and normal, where acute malnutrition included MAM and SAM children. The chronic malnutrition (stunting) dependent variable was re-classified into two categories, chronic malnutrition (stunting) and normal, where chronic malnutrition included moderate and severe stunting. The socioeconomic variables with more than two categories were also re-classified.
The independent variables were re-classified into two categories. For example, residency was re-classified as resident or non-resident, where the non-resident group includes IDPs, refugees and marginalized people. The availability of water in the house was re-classified as available or not available, where available includes both regular and irregular supply. The outcome variables, acute and chronic malnutrition, were each classified into two categories. In all bivariate and logistic regression analyses, the significance level was 0.05, and the odds ratio (OR) with 95% confidence interval (95%CI) was used to assess the strength of association. Socio-demographic characteristics of the studied children The total number of sick children seeking care in the selected 10 health facilities in Lahj and Abyan during the study period (1–13 March 2022) was 951 children aged 12–59 months. The mean age was 29.5 months (± 14 months), with a range from 12 to 59 months. A total of 491 children (51.6%) were female and 460 (48.4%) were male. There were 474 children from Lahj governorate (49.8%) and 477 children from Abyan governorate (50.2%). Most of the children's mothers were either illiterate (37.4%) or had primary/essential education (36.1%), while most of the fathers had primary or essential education (39.2%). About 64% of the children's fathers were employed and about 35% were unemployed; most of the mothers reported that their family monthly income was not enough (88.1%). About 75% of the children were from resident families and 23.7% from internally displaced people (IDPs). Most of the children's households (62%) had an irregular drinking water supply (Table 2). Table 2 Socio-demographic characteristics of the sick children involved in the study (N = 951) Prevalence of acute malnutrition among sick children The prevalence of global acute malnutrition (GAM) among the sick children seeking care in health facilities in Lahj and Abyan was 21.3% (203/951) (Fig. 2). More specifically, the prevalence of moderate acute malnutrition (MAM) was 15.1% (144/951), while the prevalence of severe acute malnutrition (SAM) was 6.2% (59/951) (Fig. 3). Fig. 2 Prevalence of acute malnutrition (wasting) among 951 sick children seeking care in health facilities in Lahj and Abyan, March 2022. Fig. 3 Prevalence of MAM and SAM among sick children seeking care in health facilities in Lahj and Abyan, March 2022. The prevalence of global acute malnutrition (wasting) among the studied sick children in Lahj was 23.4%, while in Abyan it was 19.3%. The prevalence of MAM in Lahj was 17.7% and the prevalence of SAM was 5.7%. The prevalence of MAM among the studied sick children in Abyan was 12.6%, while the prevalence of SAM in Abyan was 6.7%, but these differences are not significant (P-value 0.113). The prevalence of chronic malnutrition (stunting) The prevalence of chronic malnutrition (stunting) among the studied sick children was 41.3%; the prevalence of stunting in Lahj was 41% while in Abyan it was 41.7%. The prevalence of moderate stunting among all the studied sick children was 24.3% and the prevalence of severe stunting was 17.2%. The prevalence of moderate stunting was higher in Abyan (26.4%), while the prevalence of severe stunting was higher in Lahj (19.2%), but these differences are not significant (P-value 0.117) (Table 3).
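For readers who want to reproduce this kind of governorate comparison, the following is a minimal sketch using scipy. The counts are approximate, reconstructed from the reported percentages and sample sizes (474 children in Lahj, 477 in Abyan); the authors' exact counts and test options may differ, so the p-value here is only indicative.

```python
from scipy.stats import chi2_contingency

# Approximate GAM counts reconstructed from the reported percentages:
# Lahj: 23.4% of 474 ~ 111 children; Abyan: 19.3% of 477 ~ 92 children.
table = [[111, 474 - 111],   # Lahj: GAM, not GAM
         [ 92, 477 -  92]]   # Abyan: GAM, not GAM
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```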
A total of 65 out of 951 children were found to have a concurrent form of malnutrition (wasting and stunting), giving a prevalence of concurrent malnutrition of 6.8%. Table 3 Prevalence of acute malnutrition among the sick children seeking care in health facilities by governorate, March 2022 Variations in prevalence of malnutrition by gender The prevalence of acute malnutrition among male children (25.2%) is significantly higher than the prevalence among female children (17.5%). Moreover, the prevalence of MAM and SAM among males (17.6% and 7.6%, respectively) is significantly higher than among females (12.8% and 4.9%, respectively) (P-value 0.004). The prevalence of stunting in males (45.3%) is significantly higher than in females (37.7%). Moreover, the prevalence of moderate stunting and severe stunting among males (25.5% and 19.8%, respectively) is significantly higher than among females (22.6% and 15.1%, respectively) (P-value 0.05). Table 4. Table 4 Gender and malnutrition The socio-economic characteristics and malnutrition For the purpose of bivariate analysis, the socioeconomic variables were re-classified as dichotomous variables, for example residency and availability of water (see the statistical analysis section). Table 5 presents the bivariate analysis between the socio-economic factors (independent variables) and acute malnutrition (dependent variable). In the bivariate analysis, only sex and type of health facility had a significant association with acute malnutrition. Table 5 shows that the prevalence of acute malnutrition among males (25.2%) is significantly higher than among females (17.5%) (P-value 0.004). The prevalence of acute malnutrition in sick children seeking care in hospital clinics (24.8%) is significantly higher than in sick children seeking care in PHC centers (16.3%) (p-value 0.002). Table 6 presents the bivariate analysis between the socio-economic factors (independent variables) and chronic malnutrition (dependent variable). In the bivariate analysis, sex, type of health facility and residency are significantly associated with chronic malnutrition (stunting). Table 6 shows that the prevalence of stunting is higher in males (45.2%) than in females (37.7%) (p-value 0.021). The prevalence of stunting in sick children seeking care in hospital clinics (47.8%) is significantly higher than in sick children seeking care in PHC centers (32.3%) (p-value 0.000). The prevalence of stunting in non-residents (IDPs, refugees and marginalized groups) is significantly higher (51.7%) than the prevalence of stunting in residents (37.9%) (P-value 0.000). Table 6. Table 5 Bivariate analysis of association of acute malnutrition and socio-economic characteristics Table 6 Bivariate analysis of association of chronic malnutrition and socio-economic characteristics In the logistic regression, gender (male) and type of health facility (hospital) are significantly associated with acute malnutrition (wasting). The odds of a male child having acute malnutrition are 60% higher than those of a female child (adjusted OR 1.6, 95% CI 0.161–0.797, p = 0.003) when other variables are held constant. Also, the odds of a child seeking care at a hospital having acute malnutrition are 45% lower than those of a child from a PHC facility (adjusted OR 0.55, 95% CI -0.952 to 0.239, p = 0.001) when other variables are held constant.
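The following is a minimal sketch of how adjusted odds ratios of the kind reported above could be estimated with a logistic regression. The predictor names mirror the study's variables, but the data frame is randomly generated for illustration only, so the fitted odds ratios will not match the paper's values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration: predictor names mirror the study (male sex,
# hospital vs. PHC facility, non-resident family), but the data are
# randomly generated, so the fitted odds ratios will not match the paper.
rng = np.random.default_rng(0)
n = 951
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "hospital": rng.integers(0, 2, n),
    "non_resident": rng.integers(0, 2, n),
})
# Simulate wasting with effect directions loosely similar to those reported.
linpred = -1.8 + np.log(1.6) * df["male"] + np.log(0.55) * df["hospital"]
df["wasted"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

fit = smf.logit("wasted ~ male + hospital + non_resident", data=df).fit(disp=False)
print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the OR scale
```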
Regarding chronic malnutrition (stunting), gender, type of health facility (hospital) and residency are significantly associated with chronic malnutrition. The odds of a male child having chronic malnutrition are 40% higher than those of a female child (adjusted OR 1.4, 95% CI 0.082–0.625, P-value 0.010) when other variables are held constant. The odds of a child seeking care at a hospital having chronic malnutrition are 80% higher than those of a child from a PHC facility (adjusted OR 1.8, 95% CI 0.345–0.947, P-value 0.000) when other variables are held constant. The odds of a child from a non-resident family having chronic malnutrition are 39% higher than those of a child from a resident family (adjusted OR 1.39, 95% CI 0.009–0.652, P-value 0.040) when other variables are held constant (Table 7). Table 7 Association of acute/chronic malnutrition and socio-economic characteristics as a result of logistic regression This health facility-based study was conducted in two southern governorates of Yemen (Lahj and Abyan) that have faced armed conflict since 2015, which has negatively affected the provision of health services and exacerbated the existing malnutrition problem among children under five years. Unlike routine community-based measurements of malnutrition, this study focused on sick children seeking care in health facilities for health problems other than malnutrition. The results revealed a high prevalence of acute and chronic malnutrition (21.3% and 41.3%, respectively) among the studied sick children aged 12–59 months in Lahj and Abyan governorates. This study focused on sick children observed in the outpatient clinics of both primary health care centers and hospitals. The rates of malnutrition reported in this study are higher than those reported in other developed or developing countries. Studies in developed countries have reported a significant proportion of malnutrition among hospitalized patients: Groleau V (2014), in their study in Canada, reported that the prevalence of acute and chronic malnutrition among hospitalized children was 13.3% [19]. In another study in Canada in 2014, the prevalence of acute malnutrition among children admitted to the pediatric department was 6.9%, while the prevalence of chronic malnutrition was 13.4% [20]. Hulst et al. (2004), in their study in the Netherlands among critically ill patients, found that the prevalence of malnutrition among children admitted to intensive care units (ICU) was 24% [21]. In developing countries, one study in Malaysia reported that the prevalence of acute and chronic under-nutrition among hospitalized children was 11% and 14%, respectively [22]. In Pakistan, the prevalence of stunting among children attending out-patient clinics was 21% [23]. In one contrasting result from Tanzania, the prevalence of stunting and wasting was 8.37% and 1.41%, respectively, among children attending hospitals and primary care centers, with boys' malnutrition predominating over girls' [24]. This unusually low prevalence is due to a methodological issue: the investigators targeted all children attending the health facilities, whether seeking care or attending the well-child clinic for vaccination. Severe acute malnutrition (SAM) contributes significantly to child death if untreated, and may exceed the minimum Sphere standard (< 10%), especially in developing countries like Yemen and Ethiopia [25, 26].
In this study, the prevalence of SAM and MAM in sick children was 6.2% and 12.8%, respectively; these figures are higher than the same indicators from a community-based survey in Yemen (SAM 4.9%, MAM 8.4%) [27] and than the reported prevalence in children under 5 years in Ethiopia (SAM 3.6%, MAM 10.6%) [28]. Male children are more exposed to both acute and chronic malnutrition than females [29, 30]. In one study in Pakistan, the authors reported a significantly higher prevalence of stunting in male than in female children [23]. Concurrent wasting and stunting is an important problem among children under five years and is considered a risk factor for child mortality [31]. In this study, the prevalence of concurrent wasting and stunting was 6.85%, which is similar to the prevalence in other developing countries: 6.2% in Senegal [31], 5.8% in Ethiopia [32] and 5% in Uganda [33]. Poverty is a critical determinant of malnutrition [34,35,36]. Yemen is a poor country, and poverty rates in Yemen have increased in recent years; in 2018, for example, the country ranked 178th out of 188 countries in the global Human Development Index. Since 2015, Yemen has faced a dramatic situation due to war, multiple epidemics and poverty [37]. Poverty can be both a cause and a consequence of malnutrition [38]. In this study, a significantly higher prevalence of chronic malnutrition was found among non-resident groups (IDPs, refugees and marginalized groups, known in Yemen as Al-Mahmasheen). The implication of this study is that more attention should be given to screening sick children for malnutrition as part of routine examination. Studies have reported that this screening is ignored in the routine medical care of sick children in many developing countries; in one study in Burundi (2019), it was found that only 3% of health workers screened children (6–59 months) for malnutrition [39]. There are certain limitations to this study. The study was limited to selected health facilities in two governorates in southern Yemen due to logistics and accessibility issues. It was also limited to patients attending outpatient clinics at health facilities, so critically ill children were not included. High acute and chronic malnutrition rates were identified among sick children seeking care in health facilities in Lahj and Abyan governorates in Yemen. These high malnutrition rates exceeded the Sphere indicators of malnutrition. Boys are more exposed than girls to acute and chronic malnutrition. Gender (male) and type of health facility (hospital) are predictors of acute malnutrition (wasting), while gender, type of health facility (hospital) and residency (non-resident) are predictors of chronic malnutrition (stunting). The authors advise that early detection of malnutrition in children at outpatient clinics should not be neglected. To ensure appropriate treatment and to reduce mortality, the authors recommend that every sick child observed in outpatient or in-patient pediatric departments should be screened for malnutrition. All data sets are available and can be shared upon request from the corresponding author by email.
GAM: Global Acute Malnutrition
H/A: Height for Age
HAZ: Height for Age Z score
HUCOM: Hadhramout University College of Medicine
FAO: Food and Agriculture Organization of the United Nations
IDPs: Internally Displaced Persons
IPC: Integrated Phase Classification of Food Security
IRG: Internationally Recognized Government
IRVD: The International War and Disaster Victims Protection Association
MAM: Moderate Acute Malnutrition
OR: Odds Ratio
95%CI: 95% Confidence Interval
PHC: Primary Health Care
SAM: Severe Acute Malnutrition
SPSS: Statistical Package for Social Sciences
UNICEF: United Nations Children's Fund
WFP: World Food Programme
W/H: Weight for Height
McCarthy A, Delvin E, Marcil V, et al. Prevalence of Malnutrition in Pediatric Hospitals in Developed and In-Transition Countries: The Impact of Hospital Practices. Nutrients. 2019;11(2):236. Published 2019 Jan 22. doi:https://doi.org/10.3390/nu11020236 Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6412458/ The Childhood Acute Illness and Nutrition (CHAIN) Network. Childhood mortality during and after acute illness in Africa and south Asia: a prospective cohort study. Lancet Glob Health 2022; 10: e673–84. Available at: https://www.thelancet.com/action/showPdf?pii=S2214-109X%2822%2900118-8 Accessed 4/5/2022 UNICEF. Yemen Humanitarian Situation Report. http://www.unicef.org/mena/UNICEF_Yemen_Crisis_SitRep_-_8_July_to_21_July_2015.pdf. (accessed Feb 28, 2022). Abdulaziz M Eshaq, Ahmed M Fothan, Elyse C Jensen, Tehreem A Khan, Abdulhadi A AlAmodi. Malnutrition in Yemen: an invisible crisis. The Lancet. 2017. (389): 10064:31–32. DOI: https://doi.org/10.1016/S0140-6736(16)32592-2 Available at: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(16)32592-2/fulltext Accessed Feb 11, 2022 WFP. Yemen emergency. Available at: https://www.wfp.org/emergencies/yemen-emergency Accessed Feb 11, 2022 Al-Zangabila K, Poudel Adhikari S, Wang Q, Sunil TS, Rozelle S, Zhou H. Alarmingly high malnutrition in childhood and its associated factors: A study among children under 5 in Yemen. Medicine (Baltimore). 2021;100(5): e24419. https://doi.org/10.1097/MD.0000000000024419. PMID:33592890; PMCID:PMC7870187. Nélio Barreto Vieira, Sionara Melo Figueiredo de Carvalho, Modesto Leite Rolim Neto, Hildson Leandro de Menezes. The silence of the lambs: Child morbidity and mortality from malnutrition in Yemen. Journal of Pediatric Nursing. January 05, 2022. DOI:https://doi.org/10.1016/j.pedn.2021.12.006 (Article in Press). Available at: https://www.pediatricnursing.org/article/S0882-5963(21)00374-2/pdf Accessed Feb 11, 2022 UNICEF. Increasing the cases of malnutrition in young children in Yemen within the deteriorated situation. Available at: https://www.unicef.org/yemen/ar/ Accessed 11/2/2022 (in Arabic) UNICEF & Action contre la Faim. NUTRITION AND RETROSPECTIVE MORTALITY SURVEY HIGHLANDS AND LOWLANDS LIVELIHOOD ZONES OF ABYAN GOVERNORATE. Final survey report. 2018. Available at: https://reliefweb.int/sites/reliefweb.int/files/resources/smart_survey_abyan_jan_2018.pdf Accessed Feb 12, 2022. OCHA. NUTRITION AND RETROSPECTIVE MORTALITY SURVEY HIGHLANDS AND LOWLANDS LIVELIHOOD ZONES OF LAHJ GOVERNORATE. Final survey report. 2018. Available at: https://www.humanitarianresponse.info/en/operations/yemen/document/smart-survey-lahj-jul-2017 Accessed Feb 21, 2022 Alves RNP, de Vasconcelos CAC, Vieira NB, Pereira YTG, Feitosa PWG, Maia MAG, de Carvalho SMF, Neto MLR, de Menezes HL. The silence of the lambs: Child morbidity and mortality from malnutrition in Yemen. J Pediatr Nurs. 2022 Jan 5:S0882–5963(21)00374–2. doi: https://doi.org/10.1016/j.pedn.2021.12.006. Epub ahead of print. PMID: 34998655.
Available at: https://pubmed.ncbi.nlm.nih.gov/34998655/ Accessed April 16, 2022 Hecht C, Weber M, Grote V, Daskalou E, Dell'Era L, Flynn D, Gerasimidis K, Gottrand F, Hartman C, Hulst J, et al. Disease associated malnutrition correlates with length of hospital stay in children. Clin Nutr. 2015;34:53–9. https://doi.org/10.1016/j.clnu.2014.01.003. Pawellek I, Dokoupil K, Koletzko B. Prevalence of malnutrition in paediatric hospital patients. Clin Nutr. 2008;27:72–6. https://doi.org/10.1016/j.clnu.2007.11.001. Marino LV, Thomas PC, Beattie RM. Screening tools for paediatric malnutrition: Are we there yet? Curr Opin Clin Nutr Metab Care. 2018;21:184–94. https://doi.org/10.1097/MCO.0000000000000464. Central Statistics Organization, Yemen. Population projection. Available at: http://www.cso-yemen.com/content.php?lng=arabic&id=553 Accessed 1/5/2022). L. Naing, T. Winn, B.N. Rusli. Practical Issues in Calculating the Sample Size for Prevalence Studies. Archives of Orofacial Sciences 2006; 1: 9–14. Available at: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.504.2129&rep=rep1&type=pdf Accessed Feb 12, 2022 MOPH&P. GUIDELINES FOR THE MANAGEMENT OF THE SEVERELY MALNOURISHED IN YEMEN. 1st version. 2008 WHO. Understanding the Difference Among MAM, SAM, and GAM and their Importance on a Population Basis in: WHO. 2000. The Management of Nutrition in Major Emergencies. Available at: https://www.globalhealthlearning.org/sites/default/files/page-files/MAM%2C%20SAM%2C%20and%20GAM.pdf Accessed Feb 13, 2022 Groleau V, Thibault M, Doyon M, Brochu EE, Roy CC, Babakissa C. Malnutrition in hospitalized children: prevalence, impact, and management. Can J Diet Pract Res. 2014 Spring;75(1):29–34. doi: https://doi.org/10.3148/75.1.2014.29. PMID: 24606957. Baxter JA, Al-Madhaki FI, Zlotkin SH. Prevalence of malnutrition at the time of admission among patients admitted to a Canadian tertiary-care pediatric hospital. Pediatric Child Health. 2014;19(8):413–7. https://doi.org/10.1093/pch/19.8.413. Hulst J et al. Malnutrition in critically ill children from admission to 6 months after discharge. Clinical nutrition. 2004. 23:223–232. Available at: https://d1wqtxts1xzle7.cloudfront.net/44323778/Malnutrition_20critically Accessed 13/4/2022. Lee WS, Ahmad Z. The prevalence of undernutrition upon hospitalization in children in developing country: a single hospital study from Malaysia. Pediatric and neonatology. 2017. 58; 5: 415–420 DOI: https://doi.org/10.1016/j.pedneo.2016.08.010 available at: https://www.pediatr-neonatol.com/article/S1875-9572(17)30101-8/fulltext Accessed 13/4/2022. Fatima, Sehrish et al. "Stunting and associated factors in children of less than five years: A hospital-based study." Pakistan journal of medical sciences vol. 36,3 (2020): 581–585. doi:https://doi.org/10.12669/pjms.36.3.1370 Juma OA, Enumah ZO, Wheatley H, et al. Prevalence and assessment of malnutrition among children attending the Reproductive and Child Health clinic at Bagamoyo District Hospital. Tanzania BMC Public Health. 2016;16:1094. https://doi.org/10.1186/s12889-016-3751-0. Bitew ZW, Ayele EG, Worku T, et al. Determinants of mortality among under-five children admitted with severe acute malnutrition in Addis Ababa. Ethiopia Nutr J. 2021;20:94. https://doi.org/10.1186/s12937-021-00750-0. Organization WH. Pocket book of hospital care for children: guidelines for the management of common childhood illnesses: World Health Organization; 2013. https://apps.who.int/iris/handle/10665/81170. Dureab F, Al-Falahi E, Ismail O, et al. 
An Overview on Acute Malnutrition and Food Insecurity among Children during the Conflict in Yemen. Children (Basel). 2019;6(6):77. Published 2019 Jun 5. doi:https://doi.org/10.3390/children6060077 Yeshaneh A, Mulu T, Gasheneit A, Adane D. Prevalence of wasting and associated factors among children aged 6–59 months in Wolkite town of the Gurage zone, Southern Ethiopia, 2020. A cross-sectional study. PLoS ONE. 2022;17(1): e0259722. https://doi.org/10.1371/journal.pone.0259722. Dukhi N. Global Prevalence of Malnutrition: Evidence from Literature. Open access peer-reviewed chapter. Available at: https://www.intechopen.com/chapters/71665 DOI: https://doi.org/10.5772/intechopen.92006. Sand A, Kumar R, Shaikh BT, Somrongthong R, Hafeez A, Rai D. Determinants of severe acute malnutrition among children under five years in a rural remote setting: A hospital-based study from district Tharparkar-Sindh. Pakistan Pak J Med Sci. 2018;34(2):260–5. https://doi.org/10.12669/pjms.342.14977. Garenne M, Myatt M, Khara T, Dolan C, Briend A. Concurrent wasting and stunting among under-five children in Niakhar, Senegal. Matern Child Nutr. 2019 Apr;15(2):e12736. doi: https://doi.org/10.1111/mcn.12736. Epub 2018 Nov 25. PMID: 30367556; PMCID: PMC6587969. Accessed 4/5/2022 Roba AA, Assefa N, Dessie Y, Tolera A, Teji K, Elena H, Bliznashka L, Fawzi W. Prevalence and determinants of concurrent wasting and stunting and other indicators of malnutrition among children 6–59 months old in Kersa, Ethiopia. Matern Child Nutr. 2021 Jul;17(3):e13172. doi: https://doi.org/10.1111/mcn.13172. Epub 2021 Mar 16. PMID: 33728748; PMCID: PMC8189198. Accessed 4/5/2022 Iversen PO, Ngari M, Westerberg AC, Muhoozi G, Atukunda P. Child stunting concurrent with wasting or being overweight: A 6-y follow up of a randomized maternal education trial in Uganda. Nutrition. 2021 Sep;89:111281. doi: https://doi.org/10.1016/j.nut.2021.111281. Epub 2021 Apr 16. PMID: 34090214. Accessed 4/5/2022 Siddiqui F, Salam RA, Lassi ZS, Das JK. The Intertwined Relationship Between Malnutrition and Poverty. Front Public Health. 2020;28(8):453. https://doi.org/10.3389/fpubh.2020.00453. PMID:32984245; PMCID:PMC7485412. Panda BK, Mohanty SK, Nayak I, et al. Malnutrition and poverty in India: does the use of public distribution system matter? BMC Nutr. 2020;6:41. https://doi.org/10.1186/s40795-020-00369-0. Rahman MA, Halder HR, Rahman MS, Parvez M. Poverty and childhood malnutrition: Evidence-based on a nationally representative survey of Bangladesh. PLoS ONE. 2021;16(12): e0261420. https://doi.org/10.1371/journal.pone.0261420. Ghouth ASB. The Multi-Epidemics in Yemen: the Ugly Face of the War. Ann Infect Dis Epidemiol. 2018; 3(2): 1033. Available at: http://www.remedypublications.com/open-access/the-multi-epidemics-in-yemen-the-ugly-face-of-the-war-1171.pdf YEMEN MULTISECTORAL NUTRITION ACTION PLAN 2020–2023. Official report. 2020. Available at: https://mqsunplus.path.org/wp-content/uploads/2020/08/Yemen-MSNAP-FINAL_29April2020.pdf accessed 14/4/2022. Nimpagaritse M, Korachais C, Nsengiyumva G, et al. Addressing malnutrition among children in routine care: how is the Integrated Management of Childhood Illnesses strategy implemented at health centre level in Burundi? BMC Nutr. 2019;5:22. https://doi.org/10.1186/s40795-019-0282-y. This study was implemented in Lahj and Abyan governorates in Yemen and aimed to investigate malnutrition among children aged 1–4 years seeking care in health facilities.
This work could not have been accomplished without cooperation and coordination among different actors, from funding through implementation to the finalization of this report. The investigators appreciate the support of The International War and Disaster Victims Protection Association (IRVD), which funded this study, and thank this funding agency for its great support. The investigators thank Mr. Ahmed Qiad for his great work in preparing the digital questionnaire using the KOBO application. Mr. Ahmed trained the enumerators in Lahj and Abyan on the electronic monitoring system he developed, which made our daily follow-up easy, flexible, and quick. Regarding the fieldwork at the governorate level, the investigators appreciate the great role of Mr. Fahd Abdu (Lahj supervisor) and Mr. Kamal Jubran (Abyan supervisor) for their close supervision of the fieldwork. Our thanks also extend to the enumerators, who worked hard on data collection; they collected data from mothers and took the anthropometric measurements of the sick children who attended the selected health facilities in Abyan and Lahj governorates. Finally, the investigators thank the mothers who participated in this study by giving valuable data about their children, and hope their children receive good care and are cured. The research team obtained funding from The International War and Disaster Victims Protection Association (IRVD). Department of Public Health and Community Medicine, Faculty of Medicine, University of Aden, Aden, Yemen Ali Ahmed Al-Waleedi Department of Community Medicine, Hadhramout University College of Medicine (HUCOM), Hadhramout University, 8892, Mukalla, Fwah, Yemen Abdulla Salem Bin-Ghouth Al-Waleedi AA and Bin-Ghouth AS participated in proposal development. Bin-Ghouth designed the questionnaire and participated in the training of data collectors. Al-Waleedi reviewed the questionnaire and organized and supervised the fieldwork. Both authors participated in data analysis and in writing the first draft of the final report, and both reviewed and approved the manuscript. Correspondence to Abdulla Salem Bin-Ghouth. The research proposal was approved by the Research Ethics Committee of Hadhramout University College of Medicine (HUCOM). The objectives of the study were clarified for the participants. We ensured that the information of those who agreed to participate in this study was kept in the strictest confidence and used for the benefit of the community. Written informed consent was obtained from the mothers of the studied children. All methods were carried out in accordance with relevant guidelines and regulations. The authors declare that there is no conflict of interest. Al-Waleedi, A.A., Bin-Ghouth, A.S. Malnutrition among hospitalized children 12–59 months of age in Abyan and Lahj Governorates / Yemen. BMC Nutr 8, 78 (2022). https://doi.org/10.1186/s40795-022-00574-z DOI: https://doi.org/10.1186/s40795-022-00574-z Sick children; Wasting
Took pill 1:27 PM. At 2 my hunger gets the best of me (despite my usual tea drinking and caffeine+piracetam pills) and I eat a large lunch. This makes me suspicious it was placebo - on the previous days I had noted a considerable appetite-suppressant effect. 5:25 PM: I don't feel unusually tired, but nothing special about my productivity. 8 PM; no longer so sure. Read and excerpted a fair bit of research I had been putting off since the morning. After putting away all the laundry at 10, still feeling active, I check. It was Adderall. I can't claim this one either way. By 9 or 10 I had begun to wonder whether it was really Adderall, but I didn't feel confident saying it was; my feeling could be fairly described as 50%. Sarter is downbeat, however, about the likelihood of the pharmaceutical industry actually turning candidate smart drugs into products. Its interest in cognitive enhancers is shrinking, he says, "because these drugs are not working for the big indications, which is the market that drives these developments. Even adult ADHD has not been considered a sufficiently attractive large market." Low-tech methods of cognitive enhancement include many components of what has traditionally been viewed as a healthy lifestyle, such as exercise, good nutrition, adequate sleep, and stress management. These low-tech methods nevertheless belong in a discussion of brain enhancement because, in addition to benefiting cognitive performance, their effects on brain function have been demonstrated (Almeida et al., 2002; Boonstra, Stins, Daffertshofer, & Beek, 2007; Hillman, Erickson, & Kramer, 2008; Lutz, Slagter, Dunne, & Davidson, 2008; Van Dongen, Maislin, Mullington, & Dinges, 2003). Integrity & Reputation: Go with a company that sells more than just a brain formula. If a company is just selling this one item,buyer-beware!!! It is an indication that it is just trying to capitalize on a trend and make a quick buck. Also, if a website selling a brain health formula does not have a highly visible 800# for customer service, you should walk away. Want to try a nootropic stack for yourself? Your best bet is to buy Smart Drugs online. You can get good prices and have the supplements delivered to your home. This means no hassle for you. And after you get them in the mail, you can start to see the benefits for yourself. If you're going to order smart drugs on the internet, it's important to go with one of the top manufacturers so that you get the best product possible. I took the first pill at 12:48 pm. 1:18, still nothing really - head is a little foggy if anything. later noticed a steady sort of mental energy lasting for hours (got a good deal of reading and programming done) until my midnight walk, when I still felt alert, and had trouble sleeping. (Zeo reported a ZQ of 100, but a full 18 minutes awake, 2 or 3 times the usual amount.) The title question, whether prescription stimulants are smart pills, does not find a unanimous answer in the literature. The preponderance of evidence is consistent with enhanced consolidation of long-term declarative memory. For executive function, the overall pattern of evidence is much less clear. Over a third of the findings show no effect on the cognitive processes of healthy nonelderly adults. 
Of the rest, most show enhancement, although impairment has been reported (e.g., Rogers et al., 1999), and certain subsets of participants may experience impairment (e.g., higher performing participants and/or those homozygous for the met allele of the COMT gene performed worse on drug than placebo; Mattay et al., 2000, 2003). Whereas the overall trend is toward enhancement of executive function, the literature contains many exceptions to this trend. Furthermore, publication bias may lead to underreporting of these exceptions. Smart Pill is formulated with herbs, amino acids, vitamins and co-factors to provide nourishment for the brain, which may enhance memory, cognitive function, and clarity. In a natural base containing potent standardized extract 24% flavonoid glycosides. Fast acting super potent formula. A unique formulation containing a blend of essential nutrients, herbs and co-factors. "I love this book! As someone that deals with an autoimmune condition, I deal with sever brain fog. I'm currently in school and this has had a very negative impact on my learning. I have been looking for something like this to help my brain function better. This book has me thinking clearer, and my memory has improved. I'm eating healthier and overall feeling much better. This book is very easy to follow and also has some great recipes included." Up to 20% of Ivy League college students have already tried "smart drugs," so we can expect these pills to feature prominently in organizations (if they don't already). After all, the pressure to perform is unlikely to disappear the moment students graduate. And senior employees with demanding jobs might find these drugs even more useful than a 19-year-old college kid does. Indeed, a 2012 Royal Society report emphasized that these "enhancements," along with other technologies for self-enhancement, are likely to have far-reaching implications for the business world. The majority of nonmedical users reported obtaining prescription stimulants from a peer with a prescription (Barrett et al., 2005; Carroll et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; McCabe & Boyd, 2005; Novak et al., 2007; Rabiner et al., 2009; White et al., 2006). Consistent with nonmedical user reports, McCabe, Teter, and Boyd (2006) found 54% of prescribed college students had been approached to divert (sell, exchange, or give) their medication. Studies of secondary school students supported a similar conclusion (McCabe et al., 2004; Poulin, 2001, 2007). In Poulin's (2007) sample, 26% of students with prescribed stimulants reported giving or selling some of their medication to other students in the past month. She also found that the number of students in a class with medically prescribed stimulants was predictive of the prevalence of nonmedical stimulant use in the class (Poulin, 2001). In McCabe et al.'s (2004) middle and high school sample, 23% of students with prescriptions reported being asked to sell or trade or give away their pills over their lifetime.
And in his followup work, An opportunity cost model of subjective effort and task performance (discussion). Kurzban seems to have successfully refuted the blood-glucose theory, with few dissenters from commenting researchers. The more recent opinion seems to be that the sugar interventions serve more as a reward-signal indicating more effort is a good idea, not refueling the engine of the brain (which would seem to fit well with research on procrastination).↩ I decided to try out day-time usage on 2 consecutive days, taking the 100mg at noon or 1 PM. On both days, I thought I did feel more energetic but nothing extraordinary (maybe not even as strong as the nicotine), and I had trouble falling asleep on Halloween, thinking about the meta-ethics essay I had been writing diligently on both days. Not a good use compared to staying up a night. Schroeder, Mann-Koepke, Gualtieri, Eckerman, and Breese (1987) assessed the performance of subjects on placebo and MPH in a game that allowed subjects to switch between two different sectors seeking targets to shoot. They did not observe an effect of the drug on overall level of performance, but they did find fewer switches between sectors among subjects who took MPH, and perhaps because of this, these subjects did not develop a preference for the more fruitful sector. Christopher Wanjek is the Bad Medicine columnist for Live Science and a health and science writer based near Washington, D.C. He is the author of two health books, "Food at Work" (2005) and "Bad Medicine" (2003), and a comical science novel, "Hey Einstein" (2012). For Live Science, Christopher covers public health, nutrition and biology, and he occasionally opines with a great deal of healthy skepticism. His "Food at Work" book and project, commissioned by the U.N.'s International Labor Organization, concerns workers health, safety and productivity. Christopher has presented this book in more than 20 countries and has inspired the passage of laws to support worker meal programs in numerous countries. Christopher holds a Master of Health degree from Harvard School of Public Health and a degree in journalism from Temple University. He has two Twitter handles, @wanjek (for science) and @lostlenowriter (for jokes). Take at 11 AM; distractions ensue and the Christmas tree-cutting also takes up much of the day. By 7 PM, I am exhausted and in a bad mood. While I don't expect day-time modafinil to buoy me up, I do expect it to at least buffer me against being tired, and so I conclude placebo this time, and with more confidence than yesterday (65%). I check before bed, and it was placebo. Instead, I urge the military to examine the use of smart drugs and the potential benefits they bring to the military. If they are safe, and pride cognitive enhancement to servicemembers, then we should discuss their use in the military. Imagine the potential benefits on the battlefield. They could potentially lead to an increase in the speed and tempo of our individual and collective OODA loop. They could improve our ability to become aware and make observations. Improve the speed of orientation and decision-making. Lastly, smart drugs could improve our ability to act and adapt to rapidly changing situations. Adderall is an amphetamine, used as a drug to help focus and concentration in people with ADHD, and promote wakefulness for sufferers of narcolepsy. Adderall increases levels of dopamine and norepinephrine in the brain, along with a few other chemicals and neurotransmitters. 
It's used off-label as a study drug, because, as mentioned, it is believed to increase focus and concentration, improve cognition and help users stay awake. Please note: Side Effects Possible. Stayed up with the purpose of finishing my work for a contest. This time, instead of taking the pill as a single large dose (I feel that after 3 times, I understand what it's like), I will take 4 doses over the new day. I took the first quarter at 1 AM, when I was starting to feel a little foggy but not majorly impaired. Second dose, 5:30 AM; feeling a little impaired. 8:20 AM, third dose; as usual, I feel physically a bit off and mentally tired - but still mentally sharp when I actually do something. Early on, my heart rate seemed a bit high and my limbs trembling, but it's pretty clear now that that was the caffeine or piracetam. It may be that the other day, it was the caffeine's fault as I suspected. The final dose was around noon. The afternoon crash wasn't so pronounced this time, although motivation remains a problem. I put everything into finishing up the spaced repetition literature review, and didn't do any n-backing until 11:30 PM: 32/34/31/54/40%. "There seems to be a growing percentage of intellectual workers in Silicon Valley and Wall Street using nootropics. They are akin to intellectual professional athletes where the stakes and competition is high," says Geoffrey Woo, the CEO and co-founder of nutrition company HVMN, which produces a line of nootropic supplements. Denton agrees. "I think nootropics just make things more and more competitive. The ease of access to Chinese, Russian intellectual capital in the United States, for example, is increasing. And there is a willingness to get any possible edge that's available." Capsule Connection sells 1000 00 pills (the largest pills) for $9. I already have a pill machine, so that doesn't count (a sunk cost). If we sum the grams per day column from the first table, we get 9.75 grams a day. Each 00 pill can take around 0.75 grams, so we need 13 pills. (Creatine is very bulky, alas.) 13 pills per day for 1000 days is 13,000 pills, and 1,000 pills is $9 so we need 13 units and 13 times 9 is $117. But he has also seen patients whose propensity for self-experimentation to improve cognition got out of hand. One chief executive he treated, Ngo said, developed an unhealthy predilection for albuterol, because he felt the asthma inhaler medicine kept him alert and productive long after others had quit working. Unfortunately, the drug ended up severely imbalancing his electrolytes, which can lead to dehydration, headaches, vision and cardiac problems, muscle contractions and, in extreme cases, seizures. Most of the most solid fish oil results seem to meliorate the effects of age; in my 20s, I'm not sure they are worth the cost. But I would probably resume fish oil in my 30s or 40s when aging really becomes a concern. So the experiment at most will result in discontinuing for a decade. At $X a year, that's a net present value of sum $ map (\n -> 70 / (1 + 0.05)^n) [1..10] = $540.5. Two additional studies used other spatial working memory tasks. Barch and Carter (2005) required subjects to maintain one of 18 locations on the perimeter of a circle in working memory and then report the name of the letter that appeared there in a similarly arranged circle of letters. d-AMP caused a speeding of responses but no change in accuracy. Fleming et al. (1995) referred to a spatial delay response task, with no further description or citation. 
They reported no effect of d-AMP in the task except in the zero-delay condition (which presumably places minimal demand on working memory). Cost-wise, the gum itself (~$5) is an irrelevant sunk cost and the DNB something I ought to be doing anyway. If the results are negative (which I'll define as d<0.2), I may well drop nicotine entirely since I have no reason to expect other forms (patches) or higher doses (2mg+) to create new benefits. This would save me an annual expense of ~$40 with a net present value of <820 ($); even if we count the time-value of the 20 minutes for the 5 DNB rounds over 48 days (0.2 \times 48 \times 7.25 = 70), it's still a clear profit to run a convincing experiment. Smart drugs act within the brain speeding up chemical transfers, acting as neurotransmitters, or otherwise altering the exchange of brain chemicals. There are typically very few side effects, and they are considered generally safe when used as indicated. Special care should be used by those who have underlying health conditions, are on other medications, pregnant women, and children, as there is no long-term data on the use and effects of nootropics in these groups. How exactly – and if – nootropics work varies widely. Some may work, for example, by strengthening certain brain pathways for neurotransmitters like dopamine, which is involved in motivation, Barbour says. Others aim to boost blood flow – and therefore funnel nutrients – to the brain to support cell growth and regeneration. Others protect brain cells and connections from inflammation, which is believed to be a factor in conditions like Alzheimer's, Barbour explains. Still others boost metabolism or pack in vitamins that may help protect the brain and the rest of the nervous system, explains Dr. Anna Hohler, an associate professor of neurology at Boston University School of Medicine and a fellow of the American Academy of Neurology. Running low on gum (even using it weekly or less, it still runs out), I decided to try patches. Reading through various discussions, I couldn't find any clear verdict on what patch brands might be safer (in terms of nicotine evaporation through a cut or edge) than others, so I went with the cheapest Habitrol I could find as a first try of patches (Nicotine Transdermal System Patch, Stop Smoking Aid, 21 mg, Step 1, 14 patches) in May 2013. I am curious to what extent nicotine might improve a long time period like several hours or a whole day, compared to the shorter-acting nicotine gum which feels like it helps for an hour at most and then tapers off (which is very useful in its own right for kicking me into starting something I have been procrastinating on). I have not decided whether to try another self-experiment. Swanson J, Arnold LE, Kraemer H, Hechtman L, Molina B, Hinshaw S, Wigal T. Evidence, interpretation and qualification from multiple reports of long-term outcomes in the Multimodal Treatment Study of Children With ADHD (MTA): Part II. Supporting details. Journal of Attention Disorders. 2008;12:15–43. doi: 10.1177/1087054708319525. [PubMed] [CrossRef] If stimulants truly enhance cognition but do so to only a small degree, this raises the question of whether small effects are of practical use in the real world. Under some circumstances, the answer would undoubtedly be yes. Success in academic and occupational competitions often hinges on the difference between being at the top or merely near the top. A scholarship or a promotion that can go to only one person will not benefit the runner-up at all. 
Hence, even a small edge in the competition can be important. The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reportedly moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So \(\frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 = 512\)! The experiment probably used up no more than an hour or two total. Even the best of today's nootropics only just barely scratch the surface. You might say that we are in the "Nokia 1100" phase of taking nootropics, and as better tools and more data come along, the leading thinkers in the space see a powerful future. For example, they are already beginning to look past biochemistry to the epigenome. Not only is the epigenome the code that runs much of your native biochemistry, we now know that experiences in life can be recorded in your epigenome and then passed onto future generations. There is every reason to believe that you are currently running epigenetic code that you inherited from your great-grandmother's life experiences. And there is every reason to believe that the epigenome can be hacked – that the nootropics of the future can not only support and enhance our biochemistry, but can permanently change the epigenetic code that drives that biochemistry and that we pass onto our children. This is why many healthy individuals use nootropics. They have great benefits and can promote brain function and reduce oxidative stress. They can also improve sleep quality. It is a known fact that cognitive decline is often linked to aging. It may not be as visible as skin aging, but the brain does in fact age. Often, cognitive decline is not noticeable because it could be as mild as forgetting names of people. However, research has shown that even in healthy adults, cognitive decline can start as early as in the late twenties or early thirties. Popular smart drugs on the market include methylphenidate (commonly known as Ritalin) and amphetamine (Adderall), stimulants normally used to treat attention deficit hyperactivity disorder or ADHD. In recent years, another drug called modafinil has emerged as the new favourite amongst college students. Primarily used to treat excessive sleepiness associated with the sleep disorder narcolepsy, modafinil increases alertness and energy.
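For concreteness, here is a minimal sketch of that back-of-the-envelope value-of-information calculation, using the figures quoted above (a $200/year benefit treated as a perpetuity at a 5% discount rate, a 50% prior that Adderall is useful, and a 25% penalty for the informal quality of the self-experiment):

```python
from math import log

# Value-of-information figure quoted above: a $200/year benefit treated as a
# perpetuity at a 5% discount rate, scaled by the 50% prior that Adderall is
# useful and a 25% penalty for the informal quality of the self-experiment.
annual_benefit = 200.0
npv = (annual_benefit - 0) / log(1.05)   # ~ $4,099 net present value
value_of_experiment = npv * 0.50 * 0.25  # ~ $512
print(round(value_of_experiment))
```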
Situation awareness modeling for emergency management on offshore platforms

Syed Nasir Danial, Jennifer Smith, Faisal Khan (ORCID: orcid.org/0000-0002-5638-4299) & Brian Veitch

Situation awareness is the first and most important step in emergency management. It is a dynamic step involving evolving conditions and environments, and it is an area of active research. This study presents a Markov Logic Network to model SA, focusing on fire accidents and emergency evacuation. The model has been trained using empirical data obtained from case studies. The case studies involved human participants who were trained to respond to emergencies involving fire and smoke using a virtual environment. The simulated (queried) and empirical findings are reasonably consistent. The proposed model enables implementing an agent that exploits environmental cues and cognitive states to determine the type of emergency currently being faced. Considering each emergency type as a situation, the model can be used to develop a repertoire of situations for agents so that the repertoire can act as an agent's experience for later decision-making.

The present work proposes a model based on a Markov Logic Network (MLN) [16] for representing emergency situations involving smoke and fire on offshore petroleum platforms. The model is tested for two important situations, FIRE and EVACUATE. In the FIRE situation, fire is observed due to smoke at some place on the platform, and all workers need to muster at their primary muster station. In the EVACUATE situation, the fire has escalated so that some escape routes to the primary muster station are blocked, and all personnel need to muster at the lifeboat or alternative muster station. The purpose of this work is to have a model that can be used by a software agent so that the agent can exhibit human-like situation awareness (SA). Such agents can subsequently be used, for example, in training simulators to enrich trainees' experience by showing them various scenarios in which the agent shows recognition of different situations (to make various decisions). A participant can learn from the agent what information is important in a given scenario for correct SA.

Representing the emergency response of agents operating in a virtual environment (VE) is a challenging and active research area. Emergencies on board can arise from several factors, among which accidents rank highest [28]. The Cullen Report [10] following the Piper Alpha disaster has clear recommendations for operators to perform a risk assessment of ingress of smoke or gas into the accommodation areas. Klein [31] says that VE training is important for the crew in many respects, for example, because trainees get opportunities to learn from and about each other as a team, and also to learn about the cues that unfold in an evolving training scenario. Thus, a VE has an essential role as a training environment, and agents are important elements of VE fidelity [36].

Situations are highly structured parts of the world that span a limited space and time, and people talk about them using language. They are composed of objects having properties such that the objects stand in relations with one another [4]. An agent's world can be considered as a collection of situations, and the agent should be able to discriminate among them.
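To make this object-and-relation view of situations concrete for an agent program, a situation can be represented as the set of relational items it supports. The following is a minimal Python sketch of such a representation; the class names and the example relations (Smoke, AlarmSounding, Blocked) are illustrative assumptions and are not part of the cited formalism.

from dataclasses import dataclass, field
from typing import Tuple, Set

@dataclass(frozen=True)
class Infon:
    # An informational item: objects standing (or not standing) in a relation.
    relation: str
    objects: Tuple[str, ...]
    polarity: bool = True  # True: the relation holds; False: it does not

@dataclass
class Situation:
    # A limited part of the world, described by the infons it supports.
    name: str
    infons: Set[Infon] = field(default_factory=set)

    def supports(self, infon: Infon) -> bool:
        return infon in self.infons

# Illustrative example: a small FIRE-like situation on a platform deck
fire_situation = Situation(
    name="FIRE",
    infons={
        Infon("Smoke", ("Stairwell",)),
        Infon("AlarmSounding", ("GPA",)),
        Infon("Blocked", ("PrimaryRoute",), polarity=False),
    },
)

print(fire_situation.supports(Infon("AlarmSounding", ("GPA",))))  # True

A frozen dataclass is used so that infons are hashable and can be stored in a set, which makes the supports check a simple membership test.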
Devlin [14] extends Barwise and Perry's Situation Theory [3, 5] and proposes a representation using a concept called infon, which is an informational item of the form "objects a1,…, an do/do not stand in the relation P". A situation, formally, is then some part of the world that is supported by a set of infons. This work considers SA as being a phenomenon that refers to the information flow [13] from a situation to a subject such that the subject can reason about the situation. Endsley's [17] model of human SA describes this information flow as a process with three successive levels. Level-1 begins when a person starts perceiving information as environmental cues. This part of Endsley's SA model has a direct resemblance with acquiring information about the presence of object a1…an for developing relevant infons in a situation. Level-2 in Endsley's model explains that the person should be able to extract meaning from what has already been perceived. Level-3 of the model says that the meaning of cues should enable a person to foresee something shortly. Kokar et al. [32] developed an ontology, called situation theory ontology (STO), that defines semantics for situation theory by including a meta-class describing the types of things (individuals, individual's properties and relations among them) that constitute a situation as a type in accord with Barwise and Devlin's situation semantics. Inference on the available facts (infons) with some background knowledge about the objects and their relations within the ontological framework not only supports level-2 of Endsley's SA model but also gives potential to achieve level-3 SA. For example, if an agent knows that fire lit in an oil container should not be put out with water, only then can the agent preempt somebody from doing so. For that, the agent should project the current information about the position of the fire and the water source approaching the oil container into a future state using a rule that exploits some predicate like fireEscalates(oil, water). STO satisfies many characteristics of Endsley's SA model, and it was implemented in the Web Ontology Language (OWL) using the full profile (OWL-Full). Now that OWL changed in 2009 and the support for OWL-Full, which is required to fulfill the theoretical requirements of Barwise and Devlin's approach to situation modeling, is unavailable, STO is difficult for use as a platform for modeling SA. The concept of context in the literature related to artificial intelligence (AI) is similar to the situation in the SA literature. Sowa [60, 61] uses conceptual graphs (CG) to represent context or situations. CGs are an extension of Peirce's existential graphs (c. 1882) with features taken from semantic networks of AI and linguistics. CGs are bipartite graphs where boxes are used to represent concepts, and circles are used to show relations. As a simple example, a situation "Cat is on mat" can be represented in a CG using a linear notation as: [Cat] → (On) → [Mat], where Cat and Mat are two concepts (each for one object/individual in the real world) related to each other by the relation On. Sowa [61], and Akman and Surav [1] say that both context and situation are the same notions. Kokar et al. [32] report that contexts (situations) in AI are dealt with using predicates such as isa(c, p) to mean that the proposition p holds true in the context c. Predicates in First-Order-Logic (FOL) are building blocks of the system based on it. CG is computationally equivalent to FOL [61]. 
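For concreteness, the "Cat is on mat" example above can be written side by side in situation-theoretic notation and in FOL. The display below is an illustrative sketch using Devlin-style infon notation, where the final argument is the polarity and s ⊨ σ reads "situation s supports infon σ"; the particular symbols are assumptions made for illustration.

$$\sigma = \langle\langle \mathrm{On},\, \mathrm{cat},\, \mathrm{mat};\, 1 \rangle\rangle, \qquad s \models \sigma$$

$$\text{FOL counterpart: } \mathrm{On}(\mathrm{Cat},\, \mathrm{Mat}); \qquad \text{context form: } isa(c,\, \mathrm{On}(\mathrm{Cat},\, \mathrm{Mat}))$$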
Rules in FOL are considered as hard constraints in that a world is thought to exist only when the rules are valid. This is contrary to situations in real life. A rule like smoke causes cancer in FOL is always valid, so an agent that smokes certainly has cancer. But this is not the situation in the real world where rules are violated, and the violation is only a matter of limitation regarding the frequency of cases where the rule is not observed. Domingos and Lowd [15] consider FOL rules as hard constraints that limit the progress in AI research, and offer a method to describe soft rules using MLNs. Soft rules are formed by assigning weights to the FOL rules in MLNs. The weights determine how likely the entities of the world might follow a rule. The higher the value of the weight, the harder the rule becomes. The present work uses MLNs to construct a model for situations in emergency scenarios, particularly those arising on offshore petroleum platforms. The purpose is to create software agents for training in VEs, where an agent exploits environmental cues to understand different emergency situations. This way, the agent can be given an ability to construct a repertoire of situations that it observes. Such agents can be expected to make experience-based decisions when exposed to emergencies in a solo or a group training environment. Applications of such agent models can be found in many fields, including pilot behavior modeling [24] during midair encounter, game programming, and so on. Being aware of a situation is not merely an outcome of a typical feature matching mechanism, as some authors suggest [43]. Awareness helps categorization of things according to certain common grounds. In other words, recognition of a situation, should mean first, to model a situation using a knowledge representation schema, and second, to devise a mechanism whereby inference can be performed on the stored knowledge to extract new knowledge. Since MLNs support inference—even on incomplete data—the resulting model of SA has some resemblance to Endsley's SA model. Moreover, as MLNs allow conflicting rules, it is a more natural choice for modeling situations in which cues at different times and space could take different meanings. Social agents can interact with human participants during an emergency egress scenario to form a group-training situation to learn from human responses and then to guide other computing modules for evaluation of human responses. Participants can also learn from these agents to respond in a scenario. The use of these agents in training exercises reduces the necessity of having a large number of real people in a large-scale group training [40]. Also, the rehearsals with agents are more effective than with human counterparts because of the consistent, usually scripted, agent behavior. A more realistic approach is to replace the scripted agent's behavior to more natural, human-like behavior so that a participant can trust the agent responses and may consider it a colleague, rather than a robot. The works in [11, 12] focus on route learning for agents and propose a model where an agent can exhibit behavior that is similar to a human participant while learning a new escape route. Risks associated with human responses during an evolving emergency are assessed in [42]. 
The authors assert that hazards (such as fire and smoke), weather conditions, malfunctioning equipment, and inadequate emergency preparedness, such as that related to the recognition of platform alarms, are important factors that affect the human response. Musharraf et al. [38] propose a methodology to account for individual differences in agent modeling for emergency response training. The problem of modeling SA for such agents is yet another important area that has potential implications for the way agents make decisions in evolving emergencies. Chowdhury [8] explores various situations that occur on offshore rigs, platforms, and installations. The author explains how fire and evacuation situations are indicated on different platforms. The "Previous works" section describes some recent work in situation awareness. The "A method to model situation awareness" section describes the proposed methodology to model SA based on MLN. The "Case studies: SA during offshore emergency scenarios" section describes a case study and experimental results that serve to assess the validity of the proposed model. The "Results and discussion" section contains a discussion of the results, and the "Conclusion" section presents concluding remarks and future directions.

With the increasing demand for intelligent systems, ranging from smart cars to smart homes, situation recognition has become a focal point of research because of its importance in enabling artificial intelligence. Récopé et al. [52] attempt to discover the reasons for interindividual differences in volleyball players' defensive behavior during identical situations. The authors raised an important question, "Might other dimensions of situation assessment, which have so far not been studied to any great extent, be involved?" Based on an experiment involving two volleyball teams, the authors conclude that an individual's activity is governed by a specific norm that organizes, orients, and enhances understanding of the actions as a coherent totality. In other words, there is a subconscious sensemaking that individuals use in order to determine the relevance of cues corresponding to different situations. To assess network security within the Internet of Things (IoT), Xu et al. [70] propose an ontology-based SA model for IoT network security. Again, ontological knowledge helps identify concepts and relations in order to understand what type of situation is currently being observed. An IoT security situation is described by employing knowledge about the context, attack, vulnerability, and network flow. A model of how SA spreads among agents in a multiagent system is presented in [6]. Nasar and Jaffry [41] study this work [6] and extend it, using Agent Based Modeling (ABM) and Population Based Modeling (PBM) techniques, by incorporating trust in the SA model. Thus, the resulting agents' beliefs and decisions about the environment have been shown to be affected by their trust in other agents. Johnson et al. [27] addressed the issue of decreased SA when the flight control mode changes from automatic to manual. The authors proposed a cognitive model based on a "perceive-think-decide-do" scheme that estimates the effects of a change in the flight mode on operator behavior. The primary contribution of the proposed model is an attention executive module, which is responsible for detecting changes in attention on specific control loops based on changes in priorities.
The authors of [30] develop a model that processes social media posts, by clustering consistent posts, in such a way that a user can gain better insights by reading the different views (or world views) that the system has generated. This approach is not specific to modeling situation awareness for agents; however, people can better assess a situation described through posts by reading the world views about the posts on Twitter or any other social media platform that exploits the proposed technique. Yang et al. [71] develop a probabilistic model for robots to decide on a role that would otherwise have been fulfilled by a human in the same situation. Situations are classified here as easy, medium, and hard. The model takes 2D and 3D images as input; the robot first determines its role and then decides upon actions according to that role and the situation recognized through the images. Roles are recognized by fusing the results of two indicators, the distance-based inference (DBI) and the knowledge-based inference (KBI). The DBI uses the relative distance between humans and mission-critical objects to determine the probability of a possible role. The KBI uses a Bayesian network that integrates human actions and object existence to determine a possible role. The final role is determined as a fusion of DBI and KBI by using an information entropy measure. The actions of the person detected as the target, because he is carrying the mission-critical object, are a major contributor to changes in the situation. Situation levels are determined by a Bayesian network that uses the target person's actions (moving, stationary) and the relative positions of several mission-related entities at some time t. Actions are decided based on the situation level and the inferred role. The proposed approach is robust in recognizing roles because of the fusion of different inference results, but it is useful only if the situations to be encountered are of fundamentally the same type, so that they can be classified as easy, medium, and hard. For example, what would a robot do if the situation is complex, as in the case of an offshore emergency where the environment is cluttered with many objects, crew, alarms, exit signs, announcements, and so on? In such conditions, different situations are possible, and classifying a situation as easy, medium, or hard seems an idealistic assumption. Hu et al. [24] developed a model for predicting pilot behavior during midair encounters based on the recognition-primed decision model. Features extracted from the environment are compared with the stored attributes of situations, and an already encoded situation is retrieved using a Bayesian classifier as the similarity criterion. Naderpour et al. [39] developed a cognition-driven SA support system for safety-critical environments using Bayesian networks. The system consists of four major components that deal with (1) receiving cues from the environment, (2) assessing the situation based on a dynamic Bayesian network and a fuzzy risk estimation method, (3) recovering from a situation, which advises measures to reduce the risk of the situation, and (4) an interface for better interaction with people. Another study [59] categorizes maritime anomalies, such as speeding of a vessel, according to the levels in the JDL data fusion model [35]. Szczerbak et al. [63] use conceptual graphs to represent ordinary real-world situations and introduce a method to reason about similar situations. Liu et al.
[34] propose a three-layer information fusion model for event recognition in a smart space, where sensory data are collected in the first layer and context is represented as an MLN in the second layer. The third layer maps the contextual information of the second layer to corresponding events. To fuse uncertain knowledge and evidence, Snidaro et al. [58] develop an MLN-based SA model for maritime events. Gayathri et al. [19] use an MLN to develop an ontology that can be used to recognize activities in smart homes. The purpose is to detect an abnormal activity (or a situation) and inform the remote caretaker. Using a technique called Event Pattern Activity Modeling [20], observations collected through sensors are parsed into concepts in an ontology, and the relevant description logic rules are generated. These rules are then converted into FOL equivalents, and weights are assigned to the FOL rules to develop the MLN-based activity model. Given the observations through sensors, the MLN activity model can be used to suggest different interpretations of the observed data in a probabilistic sense. The use of MLNs enables the representation of cyclic dependencies among the rules, which is a major advantage of MLNs over Bayesian networks.

A method to model situation awareness

Take S to be a countable set and let ℘(S) denote the set of all subsets of S, where the points of S are sites, each of which can be either empty or occupied by an object (such as a formula in a logical framework or a particle, as in the statistical mechanics literature). The sites of S can be represented by binary variables X1, X2, …, Xn. The subset Λ ∈ ℘(S) is regarded as describing a situation when the points of Λ are occupied and the points of S − Λ are not. The elements of ℘(S) are sometimes called configurations. The set S, representing the sites, may have some additional structure. As sites are connected, S can be considered as forming an undirected graph G [48], so the points of S are the vertices of some finite graph G(S, E), where E is the set of edges. The present work involves modeling a probability measure (defined in the following subsections), restricted to the sample space Σ = {0, 1}^S, having a kind of spatial Markov property given in terms of the neighbor relations of G [22], called a Markov random field [25, 29, 45]. G(S, E) is countable and does not contain multiple edges or loops. If x, y ∈ S and there is an edge of the graph G between x and y, then x and y are considered neighbors of each other [48]. Formally, the function f: S × S → {0, 1} is given by

$$f(x, y) = \begin{cases} 1 & \text{if } x \text{ and } y \text{ are neighbors,} \\ 0 & \text{otherwise.} \end{cases}$$

If Λ ∈ ℘(S), then the boundary ∂Λ ∈ ℘(S) is defined as:

$$\partial \Lambda = \{ y \in S - \Lambda \mid f(x, y) = 1 \text{ for some } x \in \Lambda \}$$

A Markov network (MN) is composed of G and a set of potential functions ϕk. G has a node for each variable, and the MN has a potential function for each clique in G (Footnote 1). A potential function is a non-negative real-valued function of the configuration or state of the variables in the corresponding clique.
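Before moving on to the joint distribution, the neighbor function f and the boundary ∂Λ just defined can be illustrated with a few lines of Python; the toy graph and site names below are arbitrary examples, not taken from the paper.

# Toy undirected graph G(S, E): sites are vertices, edges define neighbors.
S = {"x1", "x2", "x3", "x4"}
E = {("x1", "x2"), ("x2", "x3"), ("x3", "x4")}

def f(x: str, y: str) -> int:
    """Neighbor indicator: 1 if x and y share an edge, 0 otherwise."""
    return int((x, y) in E or (y, x) in E)

def boundary(lam: set) -> set:
    """Sites outside lam that are adjacent to at least one site in lam."""
    return {y for y in S - lam if any(f(x, y) == 1 for x in lam)}

lam = {"x1", "x2"}      # occupied sites describing a situation
print(boundary(lam))    # {'x3'}: the only unoccupied neighbor

Here the occupied sites in lam play the role of a configuration describing a situation, and boundary(lam) returns the unoccupied sites adjacent to it.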
The joint distribution of the variables X1, X2, …, Xn can be developed to understand the influence of a site, i.e., a variable, on its neighbors [50] as defined below: $$P\left( {X = x} \right) = \frac{1}{Z}\mathop \prod \limits_{k} \phi_{k} \left( {x_{\left[ k \right]} } \right)$$ where x[k] is the configuration of the kth clique, i.e., the values of the variables in the kth clique. Z is partition function for normalization, \(Z = \mathop \sum \nolimits_{x \in \Omega } \mathop \prod \nolimits_{k} \phi_{k} \left( {x_{\left[ k \right]} } \right)\). Markov Logic Network Because a random variable assigned with a value can be considered as a proposition [23], Domingos and Richardson [16] define MN by first considering the variables as rules/formulas in FOL. Unlike FOL, a formula in MLN is assigned a weight (a real number), not just the Boolean true or false. Formally, an MLN L is defined as a set of pairs (Fi, wi) with Fis being the formulas and wis being the weights assigned to the formulas. If C = {c1, c2, …, c|C|} is the set of constants or ground predicates (the facts), then L induces a Markov network ML,C such that the probability distribution over possible worlds x is given by: $$P\left( {X = x} \right) = \frac{1}{Z}\exp \left( {\mathop \sum \limits_{i} w_{i} n_{i} \left( x \right)} \right) = \frac{1}{Z}\mathop \prod \limits_{i} \phi_{i} \left( {x_{\left[ i \right]} } \right)^{{n_{i} \left( x \right)}}$$ where ni(x) is the number of true groundings of Fi in x, x[i] is the state or configuration (i.e., the truth assignments) of the predicates in Fi, and \(\phi_{i} \left( {x_{\left[ i \right]} } \right) = e^{{w_{i} }}\). The FIRE and EVACUATE emergency situations Fire and evacuate are among the important types of emergencies that occur on offshore petroleum installations [62]. Chowdhury [8] describes various emergencies, such as fire/blowout, evacuate, H2S release, and the types of alarms used on different offshore rigs. A fire may erupt due to many reasons, such as a gas release near an igniting source, or an electrical spark near a fuel line. Explosions also result in fires. In any case, if a fire event occurs a fire alarm is raised, and people on board must leave their work and report to their designated muster station, which is usually their primary muster station. This type of situation is called a FIRE situation, and it will end when an all-clear alarm sounds, which means that the fire has been taken care of and the people can now return to their duties. In case a FIRE situation escalates, meaning that the fire spreads and blocks various paths so that personnel's safety could be further compromised, an EVACUATE situation may come into effect, and this new situation is communicated to people by another alarm, different from the fire alarm. In the EVACUATE situation, people must report to their designated secondary muster station, the lifeboat station, from where the final evacuation from the platform can proceed. Knowledge representation of emergency situations An interesting aspect of modeling a situation is to identify the factors that lead to the situation of interest. Typically, a situation involves preconditions or events, some of which are observable, and some are not directly visible [58]. Since MLNs are based on FOL rules, the basic methodology as described in [15, 16], and followed here, requires developing FOL rules, followed by assigning the weights, and finally performing the required inference. 
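To illustrate how the distribution above assigns probabilities to possible worlds, the following Python sketch enumerates the worlds of a deliberately tiny MLN with a single weighted formula, Smoke(x) ⇒ Fire(x), over one constant A. The formula, the weight value, and the constant are illustrative assumptions and are not the rules of Table 2.

import itertools, math

w = 1.5  # weight of the single formula Smoke(x) => Fire(x)
atoms = ["Smoke(A)", "Fire(A)"]  # ground atoms for the single constant A

def n_true_groundings(world: dict) -> int:
    # With one constant there is one grounding; the implication fails only when
    # Smoke(A) is true and Fire(A) is false.
    return 0 if (world["Smoke(A)"] and not world["Fire(A)"]) else 1

worlds = [dict(zip(atoms, values))
          for values in itertools.product([False, True], repeat=len(atoms))]
unnormalized = [math.exp(w * n_true_groundings(x)) for x in worlds]
Z = sum(unnormalized)  # partition function over all possible worlds

for world, u in zip(worlds, unnormalized):
    print(world, round(u / Z, 3))

The one world that violates the implication receives weight e^0 = 1 instead of e^w, so its probability is lower but not zero, which is exactly the soft-constraint behavior described above.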
Nonetheless, there is no straightforward way of writing FOL rules for a knowledge domain. Writing FOL rules requires experience and thorough domain knowledge. Also, the developed FOL rules must fulfill some criteria of acceptance. For example, a rule like "smoke causes cancer" has been given serious attention among medical practitioners [9] since the constitution of a study group in 1957 [56]. This group was appointed by several institutes, including the National Cancer Institute, and it concluded, after considering the scientific evidence, that cigarette smoking is a causative factor in the rapid increase in the incidence of human epidermoid carcinoma of the lung. Figure 1 proposes a methodology that incorporates the basic steps of constructing an MLN iteratively so that each rule can be judged against some heuristic criteria of acceptance, for example, by assigning weights to the rules from empirical findings using a learning algorithm [53] and then checking whether the weights make sense. In any case, if many of the rules come up as negatively weighted, then such a knowledge base will have little practical value, and one must look into the training samples and/or the rules themselves. In the former case, it is possible that the training sample includes little evidence where the rules were successful. In the latter case, it is possible that the rules were not constituted correctly with regard to the specification of the predicates, their connections through logical connectives, and their implication of a consequent. In short, one must go back and update the rules and/or the training–testing data sample, as shown in Fig. 1, until the desired results are obtained. The choice of a learning algorithm is also a point to consider. Since discriminative learning does not model dependencies between inputs within the training sample, it often produces better results than generative learning techniques [53]. Using the testing samples as evidence, the probability that a query predicate holds is estimated by employing an inference mechanism, such as the MC-SAT algorithm [46].

Fig. 1 The proposed methodology to develop a situationally aware agent model based on MLN

Table 1 lists the variables studied in this work for SA about the situations discussed earlier in "The FIRE and EVACUATE emergency situations" section: the FIRE situation, which asks all personnel to move to the primary muster station, and the EVACUATE situation, which involves escalation of a fire into a larger fire that obstructs the primary escape route leading to the primary muster station, thereby necessitating re-routing to the alternative or lifeboat station. A set of FOL rules is proposed in Table 2 so that an agent recognizes these situations in the way a human counterpart recognizes them. The preconditions (antecedents of the FOL rules) used here are common among experts and have been suggested in earlier studies [8, 18, 49, 55, 57, 62, 64, 65, 66, 68]. The query predicates determine the probability of recognizing alarms, having a FIRE situation, having an EVACUATE situation, and having some (unknown) situation given the evidence predicates.

Table 1 Variable/predicate names and description

Table 2 The FOL rules constituting the knowledge base for basic emergency preparedness

The variability in the emergency alarm systems and indicators used at different offshore installations is a source of confusion when a real emergency occurs, especially for personnel who frequently move from one platform to another to perform special tasks.
Alarm recognition is considered a major contributor to the awareness of an emergency type [8]. Different alarms mean different situations requiring a different course of actions by the personnel onboard. The scope of the present work is limited to SA and does not extend to finding a suitable course of action in case of an emergency. Recognition of alarms is something that cannot directly be observed unless the person is asked, so a search for further factors that indicate that an alarm has been recognized is required. An alarm cannot be recognized if it was not heard, whereas listening needs attention towards the alarm signal [51]. Emergency alarm signals are so loud that it is hard not to hear them, but that does not mean that people will always recognize which situation the present alarm is for. An agent can exploit rule # 1 in Table 2 to express the behavior of not recognizing an alarm if, for any reason, such as the inertial tendency of people to keep doing what they are doing [69], the agent does not listen to it. Several studies [49, 65] show that people do not start evacuating a building or moving to a muster location automatically when they hear alarms unless they are trained to do so, and there are some other factors or cues that lead them to act as needed in that situation. Rule#2 uses two more factors to frame the conclusion of recognizing an alarm beside just listening. The first factor reflects a person's ability to develop the intention of moving to the required muster station. The required muster station is referred to by the variable mloc that takes values from the set {MESSHALL, LIFEBOAT}. Literature shows that intention is an important cognitive state that affects one's ability to participate in a decision-making process [7, 64]. Intention is modeled here as a predicate HITR that takes a value true if the agent develops the intention to move to mloc during a time interval t. An agent's intention can be inferred by observing which route is taken up immediately after listening to the alarm. The agent can also be delayed in developing the intention to reach mloc and may require other cues for building up this intention. Therefore, to know if an alarm is recognized without the help of other cues, such as observing smoke, it is necessary to know when the agent develops the intention of moving to the required muster station after listening to an alarm. HITR is used in conjunction with the predicate BST that ensures the intention of moving to the muster location is developed before seeing a threat because if an agent sees a threat, it would be unclear if its intention of moving to mloc is due to the threat or the alarm. The probability of recognizing the alarm is determined by using the conjunction of the three predicates. If any of the antecedent predicates fail, the chances of recognizing the alarm will be reduced. The variable ST (see Table 1) is used to indicate that the agent observes a threat. An agent who sees a threat (such as smoke or blowout) is highly likely to discover the type of emergencies involved (FIRE or EVACUATE). Rules # 3 and 4 say that an agent will be aware of 'some' emergency if it just listens to an alarm or observes a threat. Public address (PA) announcements are also important cues for getting to know details about a developing situation [8, 18, 62, 68]. PAs are verbal announcements with clear words detailing the situation. The details include the location of a threat or hazard, what actions are needed, and what areas are affected. 
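Since Table 2 itself is not reproduced in this extract, the display below sketches how rules #1 and #2, as just described, might look as weighted FOL formulas. The exact syntax, the predicate arities for HITR and BST, and the weight symbol w2 are assumptions made for illustration; rule #1 is shown as a hard constraint, consistent with how it is treated later in the weight table.

$$\text{Rule 1 (hard):}\quad \neg L(ag, al, t) \Rightarrow \neg R(ag, al, t)$$

$$\text{Rule 2 (weight } w_{2}\text{):}\quad L(ag, al, t) \wedge HITR(ag, mloc, t) \wedge BST(ag, al, t) \Rightarrow R(ag, al, t)$$

In MLN terms, the hard constraint behaves like a rule with effectively infinite weight, whereas the finite weight on rule 2 allows its conclusion to fail occasionally, matching the empirical observation that listening, intention, and timing do not always lead to recognition.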
The agent can take advantage of the PA to learn about a developing emergency. However, this needs a focus on the words in the PA. The literature on distraction explains how people get distracted in different situations. Tutolo [66] says that children's ability to listen without being distracted improves with age. Inattention to the available information has been studied for the offshore drilling environment in [57]. The authors discuss other factors, such as stress, that influence focus of attention by producing a narrowing or tunneling effect so that a person is left focusing on only a limited number of cues under some stressors. Tversky and Kahneman [67] call this cognitive tunnel vision. The predicate HFO is true when the agent has a focus on a PA being uttered. An agent that is engaged in all activities except what is communicated in the PA is defined to have no focus, whereas one that suspends its current engagements and begins performing the actions according to the PA is considered to have focused on the PA. Similarly, if an agent, while moving, suddenly changes its course because of instructions given in the PA a moment before, this also considered to have exhibited a clear sign of responding to the PA. In general, gestures can be noticed to determine if an agent has a focus on an ongoing PA or not. The predicate FPA is used to demonstrate the requirement of following the PA. If HFO is true, but FPA is false, it means that, though the agent had focused on the PA's words, it is confused or does not have an understanding of the situation, and therefore, the agent is unable to follow the PA. Rule#5 is a disjunction of three different rules: the first determines SA about the emergency based on focus and understanding of PA, the second uses direct exposure to the threat/hazard, and the third is based on the recognition of alarms. This last disjunct in rule#5 uses the predicate KETA to link an alarm to the corresponding situation or emergency type because that is needed to conclude in the consequent predicate HSES. Rules # 6 & 7 are to ensure that FIRE and EVACUATE are two distinct types of situations, besides that EVACUATE may occur because of a fire [8, 62]. Rule # 8 says that if during some initial time interval t0 a FIRE situation is observed, and during some later interval t1 (where t0 \(\prec\) t1) this situation escalates to EVACUATE, then the FIRE situation will no longer exist during t1, although one may witness real fires during the EVACUATE situation. Case studies: SA during offshore emergency scenarios This work uses two case studies developed using the experiment performed in [55] to acquire training and testing data for SA during offshore platform egress scenarios so that the proposed model (in Table 2) can be judged against the empirical data. The objective of Smith's experiment was to assess VE training effects on people's ability to learn and respond during offshore egress scenarios involving fire hazards. The distribution of training of the participants and testing their performance is shown in Fig. 2. The experiment targeted six learning objectives: (1) establish spatial awareness of the environment, (2) routes and mapping, (3) emergency alarm recognition, (4) continually assess situation and avoid hazards on route, (5) register at temporary refuge, and (6) general safe practices such as closing the doors when there is an emergency alarm in effect due to fire or smoke hazard. There were three sessions with increasing complexity. 
Session 1 (S1) involved training, practice, and testing for learning objectives 1, 2, 5, and 6; session 2 (S2) used scenarios involving learning objectives 3, 5, and 6; and session 3 (S3) targeted objectives 3, 4, 5, and 6. The experiment involved 36 participants divided into two groups: Group 1 contained 17 participants, and Group 2 contained 19. Group 1 was trained in several sessions, whereas Group 2 participants received only a single training session. The VE used in this experiment was the All-hands Virtual Emergency Response Trainer (AVERT). AVERT is a research simulator of an offshore petroleum facility. It is used to train participants to improve their response should they face an emergency such as a fire or an explosion. The present work uses only the third and fourth learning objectives because they deal with the SA the participants exhibited during each scenario. The data was obtained by carefully reading the log files and watching the replay videos of session S3 recorded for each participant during the testing phase of the relevant scenarios.

Fig. 2 Training exposure of participants across sessions S1, S2, and S3 (adapted from [55]). The datasets are obtained from S3 for both groups

Situations in experimental scenarios

Smith's experiment [55] involves emergencies in which, initially, there is a fire in the galley. After some time, the fire escalates so that the primary muster station, which is the mess hall on deck A of the platform, becomes compromised. An audible fire alarm (the General Platform Alarm, GPA), followed by the relevant PA, is activated right after the initial fire event. The escalation of the fire in the galley to fire in the mess hall is then announced by a Prepare to Abandon Platform Alarm (PAPA), followed by another PA. Initially, a participant is situated in their cabin (see the floor map in Fig. 3-1) when the GPA activates, followed by a platform announcement. The PA announcement directs the participant to muster at their designated muster station, which is the mess hall on A-deck for a FIRE situation. Upon hearing the GPA, the participant needs to move out of the cabin and choose between the primary route (the solid lines, which go through the main stairwell) and the secondary escape route (the dotted lines, which use the external stairwell) to reach A-deck. The participants were trained to deal with these situations earlier using escape-route training videos and instructions in training session S1. While moving toward the mess hall, after a fixed interval of time t0, the participant receives a call to abandon the platform. This is the PAPA alarm, which indicates to the participants that they should immediately move to the secondary or alternative muster location, which is the lifeboat station at the starboard side of the platform (see Fig. 3-2). The time interval from when the PAPA is activated to the end of a scenario is termed t1. Thus, t0 is the time interval in which the participants get all cues related to the FIRE emergency, such as smoke in the stairwell, the GPA alarm, and the PA announcement that includes the words "fire in the galley". Similarly, t1 is the time interval that starts when t0 expires and ends at the end of the scenario. During the t1 period, the participant receives cues related to an EVACUATE situation. The PAs use clear words as to what needs to be done in an emergency and what parts of the escape route are expected to be blocked due to fire or smoke.
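As an illustration of how such logged observations could be encoded for the model, the Python sketch below maps a participant's cues in t0 and t1 to ground evidence atoms written in the positive/negated style used later in the paper (e.g., L(P1G1, GPA, t0) and its negation). The event dictionary, the helper function, and the predicate arities shown here are illustrative assumptions based on the predicate descriptions; the exact evidence-file format expected by the learning and inference tools should be taken from the Alchemy documentation.

# Hypothetical per-participant observations extracted from log files and replay videos.
observations = {
    ("P1G1", "t0"): {"L": ["GPA"], "HITR": ["MESSHALL"], "ST": []},
    ("P1G1", "t1"): {"L": [], "HITR": ["MESSHALL"], "ST": ["SMK_STAI", "SMK_MSHA"]},
}

ALARMS = ["GPA", "PAPA"]

def to_evidence(obs: dict) -> list:
    """Build ground evidence atoms; alarms not listened to are written as negated
    atoms (prefixed with '!'), while absent intentions and threats are omitted."""
    atoms = []
    for (person, interval), cues in obs.items():
        for alarm in ALARMS:
            prefix = "" if alarm in cues["L"] else "!"
            atoms.append(f"{prefix}L({person}, {alarm}, {interval})")
        for mloc in cues["HITR"]:
            atoms.append(f"HITR({person}, {mloc}, {interval})")
        for threat in cues["ST"]:
            atoms.append(f"ST({person}, {threat}, {interval})")
    return atoms

for atom in to_evidence(observations):
    print(atom)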
Although the GPA and PAPA are activated at different times, indicating two different situations, the other environmental cues can be observed at any time during their lifetimes. For example, smoke in the main stairwell is considered a cue for a FIRE situation. Some participants reached this spot in the main stairwell after the PAPA was activated. Situations like these are complex because of confusion due to conflicting cues.

Fig. 3 Floor map for decks A and C in the AVERT simulator. A participant starts from the cabin (S) in part (1) and ends either at the mess hall or the lifeboat station in part (2) using the external stairwell or the main stairwell. The dotted lines show the alternate route, and the solid lines refer to the primary route

Data set for training and testing the model

Empirical data set (D1)

The empirical dataset D1 comprises the data collected from 17 participants in Group 1. For brevity, the data from only two participants are shown in Table 3. Each predicate takes typed variables, so the corresponding ground atoms are shown in the second and third columns of the table. The data set D1 is split into two parts. Based on the methodology in Fig. 1, the model in Table 2 was trained with different training/testing splits, such as 50/50, 60/40, and 80/20. Eventually, an 80/20 split of D1 was found to produce good results. That is, 80% of the data in D1 was used for training the rules in Table 2, and 20% of the data was used for testing the model.

Table 3 A sample of validation data for two participants, P1G1 and P2G1

Empirical dataset (D2)

The empirical dataset D2 comprises the data collected from all 19 participants in Group 2. Again based on the methodology in Fig. 1, different sample sizes were tried for partitioning the dataset D2; the 80/20 ratio for training and testing samples was used here.

Setting up the model

We use the closed-world assumption for all predicates except KETA, KETT, and KETPA. The predicates KETA, KETT, and KETPA employ the open-world assumption because they are designed to be present in the model as containers for background knowledge. KETA is true when the agent has knowledge about which alarm is for which emergency situation type, i.e., the fact that the GPA alarm sounds for the FIRE type emergency, and the PAPA alarm is activated for the EVACUATE type. KETT indicates which type of threat is associated with which emergency. For example, a fire confined to a small area could, at most, mean moving to the primary muster station. Three types of threats are considered in this study. The threat smoke in the stairwell (SMK_STAI) should be recognized as a FIRE type emergency. If an agent sees smoke coming out of the mess hall vent (SMK_VENT), or the agent enters the mess hall and sees smoke there (SMK_MSHA), it means the situation is of type EVACUATE because the primary muster station is compromised. If KETT is true, it means that the agent knows the relationship between a threat and the possible type of emergency situation that could originate from this threat. Similarly, the KETPA predicate is true if the agent knows which words in the PA would lead to a particular emergency type. For example, the phrases "a fire in the galley" or "move to primary muster station" mean that the emergency type is FIRE. On the other hand, the words "primary escape route is blocked" or "a fire has escalated" mean that the situation is EVACUATE. This knowledge was given to the participants of Smith's experiment as part of the training curriculum.
Therefore, during training of the model, the truth values of KETA, KETT, and KETPA are taken as true, meaning that agents based on the proposed model have this background knowledge.

Calculating the model weights

We use the software package Alchemy 2.0 [2] for developing the proposed MLN model. The non-evidence predicates used for both D1 and D2 are R, HES, and HSES. The model is trained separately for datasets D1 and D2 using a discriminative learning method so that weights can be assigned to the rules presented in Table 2. It was observed that some participants did not listen to an alarm even though it was audible. The use of Listens (L) as a predicate (see Table 2) arose from these empirical observations, in which the predicate takes a false value for some participants. On the other hand, if Hears were used instead of Listens, then there would not be any case with a false value for Hears, because all the participants had hearing abilities in the normal range. Similar considerations were applied to the other rules. Table 4 shows the weights. A portion of the ground MN obtained by grounding rules #2–5 is depicted in Fig. 4, which shows how the nodes corresponding to each predicate are related.

Table 4 Weights assigned to rules using datasets D1 and D2

Fig. 4 A portion of the ground MN obtained by grounding the predicates in rules 2–5

Querying the proposed MLN-based model of agent SA is the same as querying a knowledge base. We use the MC-SAT algorithm with the Alchemy inference engine for querying. Now, if the model is used in an agent program as part of its situation assessment logic, the evidence would come via the available sensors. Given the evidence predicates, the agent can determine the chances that a query predicate is true in the present conditions. The most important things an agent seeks in an evolving emergency are the recognition of alarms and the determination of the type of emergency it is in at a given time. For this reason, the query predicates are obtained by grounding the predicates R, HES, and HSES (5), where the predicate R is read as "the agent, ag, recognizes an alarm, al, during the time interval t". HES means that the agent, ag, has an emergency, e, of type emgSitType, during time t, and the predicate HSES represents an agent, ag, that has some sense of an emergency. If the truth value of HSES is true and HES is false, it means that the agent is unable to determine the type of emergency despite having sensed the emergency situation. The predicates obtained by grounding the predicates listed in Table 2, other than the query predicates in (5), are used as the evidence predicates that need to be provided to the inference engine to obtain the results of the queries in (5). Table 5 presents the probabilities estimated against the queries for the cases in the testing datasets. The test datasets were formed by taking 20% of the total samples from D1 and D2, respectively, as reported in the "Data set for training and testing the model" section.

Table 5 Query results

With regard to the training and testing datasets for the model, the total duration each participant spends during a training or testing session has been divided into two intervals. The first is the interval t0, which runs from the beginning of a session until the GPA alarm stops. The second interval, termed t1, follows immediately after t0 ends and lasts until the end of the session.
t0 covers the period when there is a FIRE type emergency, and t1 covers the duration when there is an EVACUATE type emergency. This division of time is important for assessing the importance of cues relevant to each emergency type. For example, if an agent observes smoke in the central stairwell, then this is an important cue for a FIRE type emergency because, in that case, the agent should move to the primary muster station, the mess hall. On the other hand, smoke in the central stairwell should not be considered during t1, or when the PAPA alarm sounds, because the PAPA alarm is a call to gather at the secondary, or alternative, muster station, the LIFEBOAT station. Often in such cases, the primary muster station may have been compromised, or the routes that lead to the primary muster station may have been blocked. Table 5 presents the results obtained for seven participants: P1G1, P2G1, P3G1, P1G2, P2G2, P3G2, and P4G2. The names of these participants are withheld for privacy. The information obtained by watching the replay videos and examining the log files is divided into two columns: the predicates used as part of the evidence in the inference algorithm are kept under the heading of evidence, and those used to query the model are kept as empirical results. Both columns contain the empirical results obtained from Smith's experiment. The truth values of the empirical results are used for validating the model output, which is shown in the last column of Table 5.

Simulation results against the participant P1G1

Now consider the case when the participant P1G1 was tested in AVERT. The evidence predicates suggest that immediately after hearing the alarm, P1G1 developed the intention to move to the mess hall, the primary muster station, which was correct, but the participant spent more time than needed and so reached the mess hall when t0 had already expired. On the other hand, this also means that P1G1 recognized the GPA alarm, R(P1G1, GPA, t0), and developed awareness about the FIRE situation, HES(P1G1, FIRE, t0), during the initial time interval t0. But as a slow mover, P1G1 observed the smoke in the stairwell, the smoke in the mess hall, and the smoke coming through the mess hall ventilation during t1. P1G1 also did not pay attention to the PAPA alarm, which was activated when P1G1 was still in the main stairwell; this is the reason for ¬L(P1G1, PAPA, t1). P1G1 took about 20 s more in t1, ignoring the fact that the PAPA alarm implies a re-route towards the lifeboat station through the secondary escape route. So, having failed to notice the PAPA alarm and the relevant PA, P1G1 entered the mess hall and saw thick smoke. Studies [47, 54] suggest that humans give visual information dominance over other types of sensory cues, such as auditory information. Observing smoke drew P1G1's attention to the smoke, and he instantly realized the need to move out of the mess hall, which he did by re-routing to the lifeboat. But this realization of the situation came only when P1G1 saw the smoke; it was not due to the PAPA alarm or the relevant PA. In a real situation, entering an area filled with smoke due to fire or any other toxic element could be lethal. Also, observing a fire or smoke is a natural cue that would develop awareness about a fire situation. It is, nevertheless, hard to develop awareness about an evacuation situation by watching a fire or smoke unless the relevant alarms and/or platform announcements are heard and recognized.
This is the reason why P1G1, although mustered at the lifeboat station, is considered to be poor in responding to the evacuation situation, and that is why we have ¬R(P1G1, PAPA, t1) and ¬HES(P1G1, EVACUATE, t1) in the empirical results for P1G1. Similarly, P1G1 spent a fraction of the interval t1 maintaining the impression of a fire situation, although the fire situation had already been escalated to an evacuation situation, which is why we have a predicate HES(P1G1, FIRE, t1) in the empirical results. The model output is probabilities obtained against the query predicates, as shown in the last column of Table 5. Ideally, a high probability is a good fit for a queried predicate when the corresponding empirical result has a truth value of true. Similarly, a low output probability should serve a good fit for the queries predicate when its empirical truth value is false. This is very much evident for P1G1. Given the listed evidence for P1G1, the probability that an agent would recognize a GPA is 0.91, and the probability the same agent would get immediate fire emergency awareness is 0.92. However, there are fewer chances (only 16%) that the agent would respond to the escalating situation from FIRE to EVACUATE because the likelihood of recognition of the PAPA alarm is zero, as the agent does not listen to or has no focus on the sounding alarm. In any case, if we change the evidence truth value for the predicate 1.10 in Table 5 from false to true, the corresponding probability of recognizing PAPA during t1 would increase from 0.0 to 0.48. The reason for getting a zero probability is due to the hard constraint (rule#1) listed in Table 4. Similarly, if P1G1 realized the presence of smoke in the stairwell during t0 rather than t1, for example, if P1G1 had moved fast, then the chances for having a FIRE situation during t1 would have been lowered from 0.74 to 0.46, and the chances for getting awareness about the EVACUATE situation would be increased from 16 to 23% during t1. This is because the SMK_STAI, i.e., seeing smoke in the stairs, is a positive cue for a fire situation, but when one observes it in the presence of a cue that is for an evacuation situation, for example, a PAPA alarm, the two conflicting cues would cause confusion, and the agent needs to decide which cue should be considered. P1G1 preferred SMK_STAI during t1 over the PAPA alarm and so entered the mess hall, although this decision was wrong as it wasted egress time and exposed the participant to a hazard. The case of participant P2G1 shows a slight deviation between the model output and the empirical results at only one place (see empirical result # 2.3 and corresponding model output probability in Table 5). The model output probability of keeping the impression of a fire situation, though the situation had turned into an evacuation situation, is a bit high (0.29) compared to the empirical result where the truth value of the involved predicate, HES(P2G1, FIRE, t1), was false. The rest of the model output probabilities, estimated for modeling P2G1's behavior, are reasonable. Simulation results against the participants P3G1 and P1G2 The only thing participant P3G1 took into consideration during t0 was the smoke coming out from the mess hall ventilation. P3G1 did not recognize the GPA alarm nor heed the PA for the FIRE emergency. P3G1 never had any intention to move to the mess hall. 
The model output for recognizing the GPA alarm (0.49) during t0 is reasonable because the time when the GPA starts sounding is the time when the participant is in the cabin, and there are no other available cues except the alarm sound and the relevant PA. The model output probabilities are in good agreement with the empirical results except for a slightly larger value of 0.44 for the probability of having awareness about FIRE emergency during t0, whereas P3G1 remained unaware about the fire emergency, and from the beginning of the scenario P3G1 had decided to muster at the LIFEBOAT station. The results obtained against the evidence for the participant P1G2 are all in good agreement with the empirical values. By giving the evidence of P2G2, the model recognizes the fire alarm during t0 with 0.87 probability. P2G2 did not recognize the PAPA during the experiment, and the model output is 0.49 for the predicate R(P2G2, PAPA, t1). The reason for having a probability near 0.5 is that when the interval shifted from t0 to t1, there are only two cues suggesting that the situation has escalated from FIRE to EVACUATE (smoke from the vents and the smoke in the mess hall) and the smoke in the stairwell is a cue for moving to the mess hall. This is a conflicting situation. Moreover, as P2G2 moved into the mess hall while the PAPA alarm was still on along with the relevant PA, the predicate BST(P2G2, PAPA, t1) takes a false value in the evidence that reduced the probability of recognizing PAPA during t1 from 0.94 (if BST(P2G2, PAPA, t1) is true) to 0.49 when the predicate BST is false, as in the case of P2G2. Similar reasoning is true for recognizing the FIRE and EVACUATE situations during t1. If we set BST(P2G2, PAPA, t1) true in the evidence dataset for P2G2, then the new values for probabilities for having awareness about FIRE and EVACUATE situations during t0 and t1 come out to be 0.94 for a FIRE at t0 and 0.96 for EVACUATE at t1. This shows the importance of recognizing the alarm before seeing any real threat. The participant P3G2 did not recognize the GPA alarm, and the model probability against the query predicate is 0.5 for similar reasons we observed in the case of P3G1. The rest of the results for P3G2, as reported in Table 5, support the empirical results for P3G2. Similar reasons are there for the results obtained against the query predicates for P4G2. A MLN-based model of SA for agents in a VE is proposed in this work. The methodology used here involves assessing the environmental and cognitive factors, such as alarms, fire/smoke, intention, and focus of attention, for potential impact on awareness of emergencies. The proposed model has been used to represent two case studies that involve fire and evacuation situations on an offshore petroleum platform. The case studies were carried out in a VE with real people. Data obtained from the case studies are used to validate the model output. Empirical and simulated results agree in asserting the importance of alarm recognition and focus of attention for awareness about the emergency situations involving smoke and fire. Endsley's SA model describes how people get awareness about a situation, but it does not provide how such a model can be used for software agents [32]. The present work shows a potential approach to modeling SA for software agents. Agents based on this model can be used in several application areas. 
For example, one can exploit such agents by treating different situations as different experiences, so that a repertoire of situations can serve as a basis for decision-making about which actions to choose in a given situation. Virtual training environments are good examples of using such agents for cohort training, where agents based on the proposed methodology can exhibit different behaviors in different situations for training purposes. Due to the inherent stochasticity of the proposed approach, the model is dynamic, and it has an advantage over other models, such as ontology-based SA models [32, 33, 37] and case-based SA models [44], in that it can recognize a situation even if some of the FOL rules are violated. This work has the potential to be used in Naturalistic Decision-Making (NDM) environments, where situations are central entities in decision making [21]. Another application is in intelligent tutoring, where the model can be used to build student models in a VE for training people in different tasks of SA. Different kinds of agents can be developed for tutoring different behaviors, even without using training and testing samples, by manually selecting weights [26]. For example, an agent with poor capabilities of recognizing alarms should use a small positive real number (near zero) as the weight for rule #2. Similarly, an agent that acts as an expert should have high weights in the rules, and its evidence database should contain as much of the needed information as possible so that the agent acts as an expert in retrieving cues from the environment. The training and testing samples used for the validation of the MLN-based SA model are publicly available. Full datasets are provided here as supplementary files. The replay videos used to create the training and testing samples have restricted access because the AVERT simulator is not available for public use. A clique of a graph G is a complete subgraph of G.
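The effect of the rule weights and hard constraints described above on the query probabilities can be made concrete with a small, self-contained sketch. The example below is not the Alchemy model used in this work; it is a hypothetical two-atom Markov logic network (a soft rule linking listening to an alarm with recognizing it, plus a near-hard rule that recognition requires listening), evaluated by brute-force enumeration of possible worlds using P(x) proportional to exp(sum_i w_i n_i(x)). All predicate names and weights here are illustrative assumptions.

```python
import itertools
import math

# Toy ground atoms: L = "agent listens to the PAPA alarm", R = "agent recognizes PAPA".
ATOMS = ["L", "R"]

def n_true_groundings(world):
    """n_i(x) for two illustrative formulas:
       f1: L => R  (soft rule, analogous in spirit to rule #2)
       f2: R => L  (near-hard rule: recognition requires listening)"""
    L, R = world["L"], world["R"]
    f1 = 1 if (not L) or R else 0
    f2 = 1 if (not R) or L else 0
    return [f1, f2]

def query_prob(weights, evidence, query_atom):
    """P(query_atom = True | evidence) by enumerating all possible worlds."""
    num = den = 0.0
    for values in itertools.product([False, True], repeat=len(ATOMS)):
        world = dict(zip(ATOMS, values))
        if any(world[a] != v for a, v in evidence.items()):
            continue  # world contradicts the evidence database
        score = math.exp(sum(w * n for w, n in zip(weights, n_true_groundings(world))))
        den += score
        if world[query_atom]:
            num += score
    return num / den

HARD = 50.0  # a large weight approximating a hard constraint
for w1, label in [(0.1, "poor alarm-recognition agent"), (3.0, "expert-like agent")]:
    p = query_prob([w1, HARD], evidence={"L": True}, query_atom="R")
    print(f"{label}: P(R | L=True) = {p:.2f}")

# Falsifying the "listens" evidence shows how the hard constraint
# forces the recognition probability to (essentially) zero.
print("P(R | L=False) =", round(query_prob([3.0, HARD], {"L": False}, "R"), 3))
```

With a small weight on the soft rule the recognition probability stays close to chance, while a large weight produces expert-like behavior; and, as with rule #1 in Table 4, the hard constraint drives the recognition probability to essentially zero once the evidence rules out listening.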
ABM: agent based modeling
AI: artificial intelligence
AVERT: All-hands Virtual Emergency Response Trainer
BST: the predicate "Before seeing a threat"
the set of ground predicates or facts
CG: conceptual graph
the dataset from Group 1 participants
DBI: distance based inference
E: the set of edges for graph G
EVACUATE: an evacuate situation
FIRE: a fire situation
FOL: First-Order-Logic
FPA: the predicate "Follows public address"
G: the finite graph having S nodes and E edges
GPA: general platform alarm
Gt: the predicate "Greater"
H2S: hydrogen sulphide
HES: the predicate "Has emergency situation"
HFO: the predicate "Has focus on"
HITR: the predicate "Has intention to reach"
HSES: the predicate "Has some emergency situation"
IoT: Internet of Things
KBI: knowledge based inference
KETA: the predicate "Knows emergency type for an alarm"
SA: situation awareness
SMK_STAI: constant for smoke in stairwell
SMK_VENT: constant for smoke coming out from a vent in MSH
SMK_MSH: constant for smoke in MSH
ST: the predicate "Sees threat"
STO: situation theory ontology
t0: constant for time duration in FIRE emergency
KETPA: the predicate "Knows emergency type for a PA"
KETT: the predicate "Knows emergency type for a threat"
the predicate "Listens"
the name of the MLN developed in this work
LIFEBOAT or LFB: the secondary or alternative muster station
MCMC: Markov chain Monte Carlo algorithm
MC-SAT: Markov Chain SATisfiability algorithm
MESSHALL (also, MSH): the primary muster station
MLN: Markov Logic Network
mloc: the variable that contains the name of the required muster station, used in the predicates
MN: Markov network
NDM: naturalistic decision-making
OWL: Web Ontology Language
P(.): a probability function
PA: public address
PA_GPA: PA announcement after GPA
PA_PAPA: PA announcement after PAPA
PAPA: prepare to abandon platform alarm
PBM: population based modeling
Q: the set of questions/queries
R: the predicate "Recognizes"
Rj: the set of rules containing j number of rules, where j > 0
S: the set of sites containing objects or empty [objects may be formulas representing situations]
t1: constant for time duration in EVACUATE
testing data set
training data set
VE: virtual environment
Wk: the set of k weights, w1, w2, …, wk
Z: the partition function for normalization
Λ: the subset of S such that sites are occupied; for S − Λ the sites are not occupied
℘: the power set notation
wi: the weight for the ith formula
\(\phi_{k}\): the potential function for the kth clique of the MN comprising G
k: the running variable used in numbering the cliques of graph G
ni(x): the number of true groundings of the ith formula
Akman V, Surav M (1996) Steps toward formalizing context. AI Mag. https://doi.org/10.1609/aimag.v17i3.1231 Alchemy (2012) Alchemy: a software for statistical relational learning and probabilistic logic inference based on Markov logic representation. Washington DC Barwise J (1981) Scenes and other situations. J Philos 78(7):369. https://doi.org/10.2307/2026481 Barwise J, Perry J (1980) The situation underground. Stanford Cognitive Science Group 1980, Section D, California Barwise J, Perry J (1983) Situations and attitudes. MIT Press, Cambridge Bosse T, Mogles N (2014) Spread of situation awareness in a group: Population-based vs. agent-based modelling. In: Proceedings—2014 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology—Workshops, WI-IAT 2014, vol 3, pp 1117–1124. https://doi.org/10.1109/wi-iat.2014.169 Bratman M (1987) Intention, plans, and practical reason.
Harvard University Press, Cambridge Chowdhury S (2016) Optimization and business improvement: studies in upstream oil and gas industry. Wiley, New Jersey Cornfield J, Haenszel W, Hammond EC, Lilienfeld AM, Shimkin MB, Wynder EL (2009) Smoking and lung cancer: recent evidence and a discussion of some questions. Int J Epidemiol 38(5):1175–1191. https://doi.org/10.1093/ije/dyp289 Cullen LWD (1993) The public inquiry into the Piper Alpha disaster. Drill Contract 49:4 Danial SN, Khan F, Veitch B (2018) A Generalized Stochastic Petri Net model of route learning for emergency egress situations. Eng Appl Artif Intell 72:170–182 Danial SN, Smith J, Khan F, Veitch B (2019) Human-like sequential learning of escape routes for virtual reality agents. Fire Technol 55(3):1057–1083. https://doi.org/10.1007/s10694-019-00819-7 Devlin KJ (1991) Logic and information. Cambridge University Press, Cambridge Devlin KJ (1991) Situations as mathematical abstractions. Situat Theory Appl 1:25–39 Domingos P, Lowd D (2009) Markov logic: an interface layer for Artificial Intelligence. In: Brachman RJ, Dietterich T (eds) Synthesis lectures on artificial intelligence and machine learning. Morgan & Claypool Publishers, Seattle Domingos P, Richardson M (2007) Markov logic: a unifying framework for statistical relational learning. In: Getoor L, Taskar B (eds) Introduction to statistical relational learning. MIT Press, Cambridge Endsley M (1988) Design and evaluation for situation awareness enhancement. In: Proceedings of the human factors and ergonomics society annual meeting, vol 32. https://doi.org/10.1177/154193128803200221 ExxonMobil (2010) OIMS: system 10-2 emergency preparedness and response. https://www.cnsopb.ns.ca/sites/default/files/inline/12450_so41877.1_spill_response_soei_0.pdf. Accessed 15 Sept 2018 Gayathri KS, Easwarakumar KS, Elias S (2017) Probabilistic ontology based activity recognition in smart homes using Markov Logic Network. Knowl-Based Syst 121:173–184. https://doi.org/10.1016/j.knosys.2017.01.025 Gayathri KS, Elias S, Shivashankar S (2014) An ontology and pattern clustering approach for activity recognition in smart environments. https://doi.org/10.1007/978-81-322-1771-8_72 Gore J, Flin R, Stanton N, Wong BLW (2015) Applications for naturalistic decision-making. J Occup Organ Psychol 88(2):223–230. https://doi.org/10.1111/joop.12121 Grimmett G (2010) Probability on graphs: random processes on graphs and lattices. Cambridge University Press, Cambridge Halpern JY (2003) Reasoning about uncertainty. MIT Press, Cambridge Hu Y, Li R, Zhang Y (2018) Predicting pilot behavior during midair encounters using recognition primed decision model. Inf Sci 422:377–395. https://doi.org/10.1016/j.ins.2017.09.035 Isham V (1981) An introduction to spatial point processes and Markov random fields. Int Stat Rev 49(1):21. https://doi.org/10.2307/1403035 Jain D (2011) Knowledge engineering with Markov logic networks: a review. In: Beierle C, Kern-Isberner G (eds), Proceedings of evolving knowledge in theory and applications. 3rd workshop on dynamics of knowledge and belief (DKB-2011) at the 34th annual German conference on artificial intelligence, KI-2011, vol 361. Berlin: Fakultät für Mathematik und Informatik, FernUniversität in Hagen, pp 16–30 Johnson AW, Duda KR, Sheridan TB, Oman CM (2017) A closed-loop model of operator visual attention, situation awareness, and performance across automation mode transitions.
Hum Factors J Hum Factors Ergon Soc 59(2):229–241. https://doi.org/10.1177/0018720816665759 Khan B, Khan F, Veitch B, Yang M (2018) An operational risk analysis tool to analyze marine transportation in Arctic waters. Reliab Eng Syst Saf 169:485–502. https://doi.org/10.1016/j.ress.2017.09.014 Kindermann R, Snell JL (1980) Markov random fields and their applications. In: Science, vol 1. https://doi.org/10.1109/tvcg.2009.208 Kingston C, Nurse JRC, Agrafiotis I, Milich AB (2018) Using semantic clustering to support situation awareness on Twitter: the case of world views. Hum-Centric Comput Inf Sci 8(1):22. https://doi.org/10.1186/s13673-018-0145-6 Klein GA (1998) Sources of power. MIT Press, Cambridge Kokar MM, Matheus CJ, Baclawski K (2009) Ontology-based situation awareness. Inf Fusion 10(1):83–98. https://doi.org/10.1016/j.inffus.2007.01.004 Kokar MM, Shin S, Ulicny B, Moskal J (2014) Inferring relations and individuals relevant to a situation: An example. In: 2014 IEEE international inter-disciplinary conference on cognitive methods in situation awareness and decision support (CogSIMA), pp 18–194. https://doi.org/10.1109/cogsima.2014.6816561 Liu F, Deng D, Li P (2017) Dynamic context-aware event recognition based on Markov Logic Networks. Sensors 17(3):491. https://doi.org/10.3390/s17030491 Llinas J, Bowman C, Rogova G, Steinberg A, Waltz E, White F (2004) Revisiting the JDL Data Fusion Model II (2004). In: Svensson P, Schubert J (eds), Proceedings of the seventh international conference on information fusion (FUSION 2004), June 28–July 1, 2004. Stockholm, Sweden Luck M, Aylett R (2000) Applying artificial intelligence to virtual reality: intelligent virtual environments. Appl Artif Intell 14(1):3–32. https://doi.org/10.1080/088395100117142 Malizia A, Onorati T, Diaz P, Aedo I, Astorga-Paliza F (2010) SEMA4A: an ontology for emergency notification systems accessibility. Expert Syst Appl 37(4):3380–3391. https://doi.org/10.1016/j.eswa.2009.10.010 Musharraf M, Smith J, Khan F, Veitch B, MacKinnon S (2018) Incorporating individual differences in human reliability analysis: an extension to the virtual experimental technique. Saf Sci 107:216–223. https://doi.org/10.1016/j.ssci.2017.07.010 Naderpour M, Lu J, Zhang G (2014) An intelligent situation awareness support system for safety-critical environments. Decis Support Syst 59:325–340. https://doi.org/10.1016/j.dss.2014.01.004 Nakanishi H, Shimizu S, Isbister K (2005) Sensitizing social agents for virtual training. Appl Artif Intell 19(3–4):341–361. https://doi.org/10.1080/08839510590910192 Nasar Z, Jaffry SW (2018) Trust-based situation awareness: comparative analysis of agent-based and population-based modeling. Complexity 2018:1–17. https://doi.org/10.1155/2018/9540726 Article MATH Google Scholar Norazahar N, Smith J, Khan F, Veitch B (2018) The use of a virtual environment in managing risks associated with human responses in emergency situations on offshore installations. Ocean Eng 147:621–628. https://doi.org/10.1016/j.oceaneng.2017.09.044 Nowroozi A, Shiri ME, Aslanian A, Lucas C (2012) A general computational recognition primed decision model with multi-agent rescue simulation benchmark. Inf Sci 187:52–71. https://doi.org/10.1016/j.ins.2011.09.039 Nwiabu N, Allison I, Holt P, Lowit P, Oyeneyin B (2012) Case-based situation awareness. In: 2012 IEEE international multi-disciplinary conference on cognitive methods in situation awareness and decision support. 6–8 March 2012, pp 22–29. 
https://doi.org/10.1109/cogsima.2012.6188388 Pearl J (1988) Probabilistic reasoning in intelligent systems: networks of plausible inferences. Morgan Kaufmann, San Mateo Poon H, Domingos P (2006) Sound and efficient inference with probabilistic and deterministic dependencies. In: Proceedings of the 21st national conference on artificial intelligence, vol 1, pp 458–463. https://homes.cs.washington.edu/~pedrod/papers/aaai06a.pdf Posner MI, Nissen MJ, Klein RM (1976) Visual dominance: an information-processing account of its origins and significance. Psychol Rev 83(2):157–171. https://doi.org/10.1037/0033-295X.83.2.157 Preston CJ (1974) Gibbs states on countable sets. Cambridge University Press, Cambridge Proulx G (2007) Response to fire alarms. Fire Protect Eng 33:8–14 Raedt L, De Kersting K, Natarajan S, Poole D (2016) Statistical relational artificial intelligence: logic, probability, and computation. In: Synthesis lectures on artificial intelligence and machine learning, vol 10. https://doi.org/10.2200/s00692ed1v01y201601aim032 Reason J (1990) Human error. Cambridge University Press, Cambridge Récopé M, Fache H, Beaujouan J, Coutarel F, Rix-Lièvre G (2019) A study of the individual activity of professional volleyball players: situation assessment and sensemaking under time pressure. Appl Ergon 80:226–237. https://doi.org/10.1016/j.apergo.2018.07.003 Singla P, Domingos P (2005) Discriminative training of Markov Logic Networks. In: Proceedings of the 20th national conference on artificial intelligence, vol 2, pp 868–873. https://homes.cs.washington.edu/~pedrod/papers/aaai05.pdf Sinnett S, Spence C, Soto-Faraco S (2007) Visual dominance and attention: the Colavita effect revisited. Percept Psychophys 69(5):673–686. https://doi.org/10.3758/BF03193770 Smith J (2015) The effect of virtual environment training on participant competence and learning in offshore emergency egress scenarios. Memorial University of Newfoundland, St. John's Smoking and Health: Joint Report of the Study Group on Smoking and Health (1957) Science 125(3258):1129–1133. https://doi.org/10.1126/science.125.3258.1129 Sneddon A, Mearns K, Flin R (2013) Stress, fatigue, situation awareness and safety in offshore drilling crews. Saf Sci 56:80–88. https://doi.org/10.1016/j.ssci.2012.05.027 Snidaro L, Visentini I, Bryan K (2015) Fusing uncertain knowledge and evidence for maritime situational awareness via Markov Logic Networks. Inf Fusion 21:159–172. https://doi.org/10.1016/j.inffus.2013.03.004 Snidaro L, Visentini I, Bryan K, Foresti GL (2012) Markov Logic Networks for context integration and situation assessment in maritime domain. In: 2012 15th international conference on information fusion, pp 1534–1539 Sowa JF (1984) Conceptual structures: information processing in mind and machine. Addison-Wesley, Reading Sowa JF (2000) Knowledge representation: logical, philosophical and computational foundations. Brooks/Cole Thomson Learning, Pacific Grove Spouge J (1999) A guide to quantitative risk assessment for offshore installations. CMPT Publication, Aberdeen Szczerbak M, Bouabdallah A, Toutain F, Bonnin J-M (2013) A model to compare and manipulate situations represented as semantically labeled graphs. In: Pfeiffer HD, Ignatov DI, Poelmans J, Gadiraju N (eds) Conceptual structures for STEM research and education. Springer, Berlin, pp 44–57 Thilakarathne DJ (2015) Modelling of situation awareness with perception, attention, and prior and retrospective awareness. Biol Inspired Cogn Architect 12:77–104. 
https://doi.org/10.1016/j.bica.2015.04.010 Tong D, Canter D (1985) The decision to evacuate: a study of the motivations which contribute to evacuation in the event of fire. Fire Saf J 9(3):257–265. https://doi.org/10.1016/0379-7112(85)90036-0 Tutolo D (1979) Attention: necessary aspect of listening. Lang Arts 56(1):34–37 Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science 185(4157):1124–1131. https://doi.org/10.1126/science.185.4157.1124 Wankhede A (2017) Different types of alarms on ships. Mar Insight. https://www.marineinsight.com/marine-safety/different-types-of-alarms-on-ship/. Accessed 28 Sept 2018 Winerman L (2004) Fighting fire with psychology. Monitor Psychol 35(8):28 Xu G, Cao Y, Ren Y, Li X, Feng Z (2017) Network security situation awareness based on semantic ontology and user-defined rules for internet of things. IEEE Access 5:21046–21056. https://doi.org/10.1109/ACCESS.2017.2734681 Yang C, Wang D, Zeng Y, Yue Y, Siritanawan P (2019) Knowledge-based multimodal information fusion for role recognition and situation assessment by using mobile robot. Inf Fusion 50:126–138. https://doi.org/10.1016/j.inffus.2018.10.007 The authors would like to thank Engr. Saqib Munawar for sharing his experience of emergency egress on various offshore rigs. In particular, his knowledge of alarm systems and fire drills was important for this study. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC)-Husky Energy Industrial Research Chair in Safety at Sea, and the Canada Research Chair Program in Offshore Safety and Risk Engineering. Centre for Risk, Integrity and Safety Engineering (C-RISE), Faculty of Engineering and Applied Science, Memorial University of Newfoundland, St. John's, NL, Canada Syed Nasir Danial, Jennifer Smith, Faisal Khan & Brian Veitch SND performed the design and implementation of the MLN model and extracted the empirical data from Smith's [55] experiment for validation of the model. JS performed the experiment presented in [55] and verified the data extracted from the experiment. FK supervised the MLN model development. BV supervised the entire study and performed the editorial process. All authors read and approved the final draft. Correspondence to Faisal Khan. Danial, S.N., Smith, J., Khan, F. et al. Situation awareness modeling for emergency management on offshore platforms. Hum. Cent. Comput. Inf. Sci. 9, 37 (2019). https://doi.org/10.1186/s13673-019-0199-0 Situation modeling Markov Logic Networks Application Agent situation awareness
CommonCrawl
Title: Weak gauge principle and electric charge quantization
Authors: E. Minguzzi, C. Tejero Prieto, A. Lopez Almorox
(Submitted on 3 Oct 2005 (this version), latest version 28 Jun 2006 (v2))
Abstract: We review the argument that relates the quantization of electric charge to the topology of the spacetime manifold starting from the gauge principle. We formulate it in the language of Cech cohomology so that its generalization to cases that do not involve a monopole field becomes straightforward. We consider two different formulations of the gauge principle, the usual (strong) version and a weaker version in which the transition functions can differ from matter field to matter field. From both versions it follows that the charges are quantized if the electromagnetic field is not exact. The weak case is studied in detail. To each pair of particles there corresponds an interference class $k \in H^{1}(M,U(1))$ that controls the different behavior of the particles under topological Aharonov-Bohm experiments. If this class is trivial the phenomenology reduces to that of the usual strong gauge principle case. It is shown that the theory may give rise to two natural quantization units that we identify with the quantization unit (realized inside the quarks) and the electric charge. Then we show that the color charge can have a topological origin, the number of colors being related to the order of the torsion subgroup of $H^{2}(M,\mathbb{Z})$. We also point out that the quantization of charge may be due to a weak non-exact component of the electromagnetic field extended over cosmological scales, if at those scales a non-trivial topology of the spacetime manifold arises. This component could have formed in the initial instants of the Universe, when its topology acquired a final form. Then the expansion of the Universe would have decreased its magnitude, making it undetectable in today's experiments.
Comments: Revtex4, 13 pages
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
Cite as: arXiv:hep-th/0510016 (or arXiv:hep-th/0510016v1 for this version)
From: Ettore Minguzzi
[v1] Mon, 3 Oct 2005 16:34:08 UTC (30 KB)
[v2] Wed, 28 Jun 2006 10:22:32 UTC (32 KB)
CommonCrawl
A Graphical Introduction to Lattices
Here is my (extended) family tree: Everyone in the tree shares at least one common ancestor and at least one common descendant. This makes my family tree a lattice, an important mathematical structure. While lattices are often presented in abstract algebraic form, they have a simple graphical representation called a Hasse diagram, which is similar to a family tree. Because most lattice theory assumes a strong background in algebra, I think the results are not as well known as they should be. I hope to give a sampling of some lattices here, and a hint of their power.
What are Lattices?
A lattice is a structure with two requirements: Every two elements have a "least upper bound." In the example above, this is the "most recent common ancestor". Every two elements have a "greatest lower bound." In the example above, this is the "oldest common descendant". Note that the bound of some elements can be themselves; e.g. the most recent common ancestor of me and my mother is my mother. Lattices are a natural way of describing partial orders, i.e. cases where we sometimes know which element came "first", but sometimes don't. For example, because the most recent common ancestor of my mother and myself is my mother, we know who came "first" - my mother must be older. Because the least upper bound of my mother and my father is some third person, we don't know which one is older. Here's an example of four different ways to fill your shopping cart: The lines between two sets indicate preference: one apple is better than nothing, but one apple and one banana is even better than one apple. (Note that the arrows aren't directed, because every relation has a dual [e.g. the "better than" relation has a dual relation "worse than"]. So whether you read the graph top-to-bottom or bottom-to-top, it doesn't really matter. By convention, things on the bottom are "less than" things on the top.) Now, some people might prefer apples to bananas, and some might prefer bananas to apples, so we can't draw any lines between the "one apple" and the "one banana" situations. Nonetheless, we can still say that you prefer having both to just one, so this order is pretty universal. The least upper bound in this case is "the worst shopping cart which is still preferred or equal to both things" (doesn't quite roll off the tongue, does it?), and the greatest lower bound is "the best shopping cart which is still worse than or equal to both things". Because these two operations exist, this means that shopping carts (or rather the goods that could be in shopping carts) make up a lattice. A huge swath of economic and ethical problems deals with preferences which can be put into lattices like this, which makes lattice theory a powerful tool for solving these problems. This is a more classical "math" lattice: Here a line between two integers indicates that the lower one is a factor of the higher one. The least upper bound in this lattice is the least common multiple (lcm) and the greatest lower bound is the greatest common divisor (gcd; some people call this the "greatest common factor"). The greatest common divisor of 4 and 10 is 2, and the least common multiple of 2 and 3 is 6. Again we don't have a total ordering - 2 isn't a factor of 3 or vice versa - but we can still say something about the order. An important set of questions about lattices deals with operations which don't change the lattice structure.
For example, $k\cdot\gcd(x,y)=\gcd(kx,ky)$, so multiplying by an integer "preserves" this lattice. Multiplying the lattice by three still preserves the divisibility relation. A lot of facts about gcd/lcm in integer lattices are true in all lattices; e.g. the fact that $x\cdot y=\gcd(x,y)\cdot \text{lcm}(x,y)$. Here is the simplest example of a lattice you'll probably ever see: Suppose we describe this as saying "False is less than True". Then the operation AND becomes equivalent to the operation "min", and the operation OR becomes equivalent to the operation "max": A AND B = min{A, B} A OR B = max{A, B} Note that this holds true of more elaborate equations, e.g. A AND (B OR C) = min{A, max{B, C}}. In fact, even more complicated Boolean algebras are lattices, so we can describe complex logical "gates" using the language of lattices.
Everything is Addition
I switch now from examples of lattices to a powerful theorem: [Holder]: Every operation which preserves a lattice and doesn't use "incomparable" objects is equivalent to addition.[1] The proof of this is fairly complicated, but there's a famous example which shows that multiplication is equivalent to addition: logarithms. The relevant fact about logarithms is that $\log(x\cdot y)=\log(x)+\log(y)$, meaning that the problem of multiplying $x$ and $y$ can be reduced to the problem of adding their logarithms. Older readers will remember that this trick was used by slide rules before there were electronic calculators. Holder's theorem shows that similar tricks exist for any lattice-preserving operation.
Everything is a Set
Consider our division lattice from before (I've cut off a few numbers for simplicity): Now replace each number with the set of all its factors: We now have another lattice, where the relationship between each node is set inclusion. E.g. {2,1} is included in {4,2,1}, so there's a line between the two. You can see that we've made an equivalent lattice. This holds true more generally: any lattice is equivalent to another lattice where the relationship is set inclusion.[2]
Max and Min Revisited
Consider the following statements from various areas of math:
$$\begin{array}{lclclcll}
\max\{x,y\} & = & x & + & y & - & \min\{x,y\} & \text{(Basic arithmetic)} \\
P(x\text{ OR } y) & = & P(x) & + & P(y) & - & P(x\text{ AND } y) & \text{(Probability)} \\
I(x; y) & = & H(x) & + & H(y) & - & H(x,y) & \text{(Information theory)} \\
\gcd(x,y) & = & x & \cdot & y & \div & \text{lcm}(x,y) & \text{(Basic number theory)}
\end{array}$$
When laid out like this, the similarities between these seemingly disconnected areas of math are obvious - these results all come from the basic lattice laws. It turns out that merely assuming a lattice-like structure for probability results in the sum, product and Bayes' rule of probability, giving an argument for the Bayesian interpretation of probability. The problem with abstract algebraic results is that they require an abstract algebraic explanation. I hope I've managed to give you a taste of how lattices can be used, without requiring too much background knowledge. If you're interested in learning more: Most of what I know about lattices comes from Glass' Partially Ordered Groups, which is great if you're already familiar with group theory, but not so great otherwise. Rota's The Many Lives of Lattice Theory gives a more technical overview of lattices (as well as an overview of why everyone who doesn't like lattices is an idiot) and J.B.
Nation has some good notes on lattice theory, both of which require slightly less background. Literature about specific uses of lattices, such as in computer science or logic, also exists.
[1] Formally, every l-group with only trivial convex subgroups is l-isomorphic to a subgroup of the reals under addition. Holder technically proved this fact for ordered groups, not lattice-ordered groups, but it's an immediate consequence.
[2] By "equivalent" I mean l-isomorphic.
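To make the meet/join viewpoint concrete, here is a short sketch (mine, not the original author's) that uses Python's math.gcd as the meet on the divisibility lattice, min/max on the Boolean lattice, and checks the identities discussed above.

```python
from math import gcd

def lcm(x, y):
    # join in the divisibility lattice; gcd(x, y) is the meet
    return x * y // gcd(x, y)

# x * y = gcd(x, y) * lcm(x, y), the divisibility analogue of
# max{x, y} = x + y - min{x, y}
for x, y in [(4, 10), (2, 3), (12, 18)]:
    assert x * y == gcd(x, y) * lcm(x, y)
    assert max(x, y) == x + y - min(x, y)

# Boolean lattice with False < True: AND is min, OR is max
for A in (False, True):
    for B in (False, True):
        for C in (False, True):
            assert (A and B) == min(A, B)
            assert (A or B) == max(A, B)
            assert (A and (B or C)) == min(A, max(B, C))

print("all lattice identities check out")
```

Running the script simply prints a confirmation; the point is that the familiar gcd/lcm and AND/OR facts are instances of the same two lattice operations.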
CommonCrawl
Identification of low frequency and rare variants for hypertension using sparse-data methods
Ji-Hyung Shin, Ruiyang Yi and Shelley B. Bull
Availability of genomic sequence data provides opportunities to study the role of low-frequency and rare variants in the etiology of complex disease. In this study, we conduct association analyses of hypertension status in the cohort of 1943 unrelated Mexican Americans provided by Genetic Analysis Workshop 19, focusing on exonic variants in MAP4 on chromosome 3. Our primary interest is to compare the performance of standard and sparse-data approaches for single-variant tests and variant-collapsing tests for sets of rare and low-frequency variants. We analyze both the real and the simulated phenotypes.
Keywords: Minor Allele Frequency, Rare Variant, Standard Score Test, Sequence Kernel Association Test, Simulated Phenotype
Despite the success of genome-wide association studies, much of the genetic contribution to complex diseases and traits remains unexplained. Therefore, an increasing number of studies have turned to low-frequency and rare variant association analysis for additional explanation of disease risk or trait variability. For binary phenotypes, single-variant analyses of low-frequency and rare variants are challenging because the conventional logistic regression approaches often violate the large-sample-size assumption for test statistics, resulting in poor type 1 error control or low statistical power [1, 2]. The standard score test, in particular, can be extremely anticonservative under the null [3]. Variant-collapsing methods across multiple variants or sparse-data methods for single-variant analysis offer an alternative [1–5]. Furthermore, depending on the linkage disequilibrium (LD) structure, it is possible that even nonfunctional low-frequency or common variants can capture functional rare variant signals [4]. On the other hand, because power is higher for a variant with a higher minor allele frequency (MAF), a common functional variant will usually be better detected by a single-variant test rather than as part of a collapsing test that incorporates nonfunctional variants. In this report, we analyze the exome-sequence data and both the real and simulated phenotype data of the unrelated Mexican American sample to evaluate and compare the performance of single-variant and variant-collapsing methods for association analysis. To relate genotypes to hypertension, we consider the logistic regression model $$ \mathrm{logit}\left(P\left(HTN_i=1 \mid \text{covariates}\right)\right) = \beta_0 + AGE_i\beta_a + SEX_i\beta_s + \boldsymbol{G}_i\boldsymbol{\beta}_g, $$ where i = 1, …, 1943 indexes the individuals, \(HTN_i\) indicates hypertension status of the ith individual (1 if the individual is hypertensive and 0 otherwise); \(AGE_i\) is the age at the time of examination, \(SEX_i\) is the gender of the individual, and \( \boldsymbol{G}_i=\left(G_{i_1},G_{i_2}, \dots,\ G_{i_m}\right) \) indicates the vector containing the numbers of copies of the nonreference alleles at the m variants (ie, additively coded genotype), and \( \boldsymbol{\beta}_g' = \left(\beta_1,\beta_2,\dots,\ \beta_m\right) \) is the vector of the associated parameters. For a single-variant analysis with m = 1, we apply 2 types of nonstandard approaches, Firth-type penalized logistic regression likelihood ratio (LR) tests [2, 6–8] and small-sample adjusted score tests [9], and compare them to standard LR and score tests.
The LR and score tests are asymptotically equivalent but may be discrepant in finite samples. The penalized LR test is based on the penalized log-likelihood function $$ {l}_p\left(\boldsymbol{\beta} \right)=l\left(\boldsymbol{\beta} \right)+\frac{1}{2} \log \left(\left|i\left(\boldsymbol{\beta} \right)\right|\right), $$ where i(β) is the Fisher information matrix. This is a generalization of Haldane's statistic for sparse 2 × 2 table analysis, where \( \frac{1}{2} \) is added to each cell. For the small-sample-adjusted single-variant score tests, we apply an approach to adjust the null distribution of the test statistic by incorporating small-sample variance and/or kurtosis (see Lee et al. [9], pp. 226–227); this approach was originally recommended for variant-collapsing tests. For variant-collapsing analysis, we consider a MAF-based weighted burden test [1], a nonburden sequence kernel association test (SKAT) and a unified approach (SKAT-O) that optimally combines a burden test and a SKAT (eg, Lee et al. [9]). For these tests, we first define K subregions, then pool the variants within each subregion, and test K null hypotheses \( {H}_{0_K}:\left({\beta}_1,{\beta}_2,\dots,\ {\beta}_{m_k}\right)\hbox{'}=\left(0,\ 0,\dots,\ 0\right)\hbox{'} \), where m k indicates the number of variants within the k-th subregion (k = 1,…, K). For convenience, we determine the subregions on the basis of physical proximities among the variants. Applying these methods, we analyzed exonic variants within MAP4 gene on chromosome 3 in the real and the simulated phenotype data sets. For the imputed variants, we analyzed the predicted dosages rather than their best-guess genotypes. In addition, we examined all polymorphic variants, including the singletons to assess the extremes at which the tests break down. For the standard and penalized logistic regression tests, we used the R glm function and pmlr (Penalized Multinomial Logistic Regression) package [10], respectively. For the small-sample-adjusted score test and the variant-collapsing tests, we used the R package SKAT [11], with analytical variance estimates and empirical kurtosis estimates based on 10,000 bootstrap replicates. For the variant-collapsing methods, we let K = 6 based on a visual inspection of the physical positions of the variants (Fig. 1a). Pairwise LD measures for markers within MAP4 region on chromosome 3 in 1943 unrelated samples. The hg19 genome assembly was used for annotation. In panel (a), each pixel represents pairwise LD, measured by the squared allelic correlation coefficient r 2 between 2 markers. In panel (b), LD is measured by Lewontin's |D '|. The latter are generally higher because |D '| takes into account that the correlation is constrained by the allele frequencies. As indicated by the color key, stronger LD is represented by red and weaker by white. The LD plot was produced using the LDheatmap package [16] In the real data set, we defined the hypertension phenotype using the conventional diagnostic criteria: a systolic blood pressure (SBP) greater than 140 mm Hg or a diastolic blood pressure (DBP) greater than 90 mm Hg. We also defined individuals on antihypertensive medication to be hypertensive regardless of their SBP and DBP levels. For the simulated phenotypes, 2 data sets were available, "SIMQ1" and "SIMPHEN," each with 200 replicates. SIMQ1, designed for evaluating type 1 error rates, contained normally distributed Q 1 generated under no genetic effects. 
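To make the penalized log-likelihood above concrete, here is a rough, self-contained sketch (not the pmlr implementation used in this analysis, and with made-up data rather than the GAW19 sample) that fits a one-variant logistic model by maximizing both the ordinary log-likelihood and the Firth-type penalized version l(β) + ½ log|i(β)| with scipy; with only a handful of carriers, the penalty pulls the variant's log odds ratio toward zero.

```python
import numpy as np
from scipy.optimize import minimize

# A tiny hypothetical cohort: 500 people, 8 carriers of a rare variant,
# of whom 1 is a case; overall prevalence is about 20%.
n = 500
g = np.zeros(n)
g[:8] = 1                      # additively coded genotype (carriers first)
y = np.zeros(n, dtype=int)
y[:1] = 1                      # the single carrier who is a case
y[8:107] = 1                   # 99 non-carrier cases
X = np.column_stack([np.ones(n), g])

def neg_loglik(beta, penalized=False):
    eta = X @ beta
    ll = np.sum(y * eta - np.logaddexp(0, eta))   # Bernoulli log-likelihood
    if penalized:
        p = 1 / (1 + np.exp(-eta))
        W = p * (1 - p)
        info = X.T @ (X * W[:, None])             # Fisher information X'WX
        ll += 0.5 * np.linalg.slogdet(info)[1]    # Firth-type penalty
    return -ll

start = np.zeros(2)
mle = minimize(neg_loglik, start, args=(False,), method="BFGS").x
firth = minimize(neg_loglik, start, args=(True,), method="BFGS").x
print("variant log-OR, standard ML:     %.3f" % mle[1])
print("variant log-OR, Firth-penalized: %.3f" % firth[1])
```

Covariates such as AGE and SEX are omitted here only to keep the sketch short; adding them just means adding columns to X.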
Because SIMQ1 did not have binary phenotypes, we dichotomized Q1 to create hypothetical disease status Q2, letting Q2 correlate with AGE and SEX through Q1. We let Q2 = 1 if Q1 was greater than 51.2 and 0 otherwise, such that the disease prevalence for Q2 was 17.8 %, the same as the prevalence of hypertension in SIMPHEN, which we used for evaluating power. The hypertension phenotype was derived from blood pressure phenotypes generated under a model with more than 1000 variants in more than 200 genes [12].
MAP4 variants in the unrelated sample
Of the 409 exonic MAP4 variants, only 90 were polymorphic in the sample of 1943 unrelated individuals. These variants had MAFs ranging from 0.00027 to 0.34. As expected, rare variants (MAF <1 %) were most prevalent in the sample; except for 4 common variants, all variants had MAF less than 5 % (Fig. 2, Table 1). As expected for rare variants (eg, Pritchard [13]), the pairwise LD in the 90 variants was generally weak, with the exception of a few variants in strong LD in an upstream region (see Fig. 1). However, the strong LD seems to arise because of their physical proximities (all the markers in the LD block are located within 39 bases).
Fig. 2 Distribution of the frequency for the 90 polymorphic MAP4 variants according to the number of individuals with genotype dosage G > 0. Height of the bars indicates the total number of variants for a given count of observations with G > 0, and red bars indicate the counts for the 26 functional variants used in the simulation model
Table 1 Frequencies of rare (MAF <1 %), low-frequency (1 % ≤ MAF <5 %) and common (MAF ≥5 %) variants in K = 6 subregions within MAP4 region on chromosome 3 (columns: Variant IDs, Rare(a), Low-frequency(a), Common(a), Total(a)). (a) The values in parentheses indicate the numbers of variants designated as functional in the simulation study
Analysis of the real phenotype data
We found that the standard score test rejects the null hypothesis far more often than the other single-variant tests (results not shown), suggesting that it may be anticonservative. This agrees with published simulations under a case-control design [3] and is confirmed by our own unpublished simulation studies under a cohort design at the observed hypertension prevalence of 26 %. After correcting for multiple testing, no single-variant tests identified any association (minimum unadjusted p value = 0.006). The burden, SKAT and SKAT-O (optimal sequence kernel association test) tests, each of which pooled all polymorphic variants within the K = 6 subregions defined in Table 1 and Fig. 1a, did not find the MAP4 gene to be significant either (minimum unadjusted p values = 0.12, 0.24, and 0.20, respectively).
Test size and type I error Examination of quantile–quantile (Q-Q) plots of the single-variant test p values for rare variants revealed departures from the expected distribution under the null hypothesis of no genetic effects with some discrepancies among tests. For example, for var_3_47660325, with \( \tilde{MAC}=1, \) all the single-variant tests showed unusual departures from the expected (Fig. 3a). For low-frequency variants, the p value distributions were close to the expected, except in the upper tail where all tests seemed to be anticonservative (eg, Fig. 3b). As expected, the common variant test p values were close to the null distribution with no discrepancy among the tests (eg, Fig. 3c). Q-Q plots of p values from the single-variant tests under the null hypothesis. The p values from the standard likelihood ratio test (LRT), penalized likelihood ratio test (PLRT), standard score test (Score) and small-sample-adjusted score tests (Score-Var-Adj and Score-Var-Kurt-Adj) are indicated by yellow squares, black circles, red point-down triangles, purple diamonds, and green point-up triangles, respectively. Panels (a), (b), and (c) show a rare, a low-frequency, and a common variant with \( \tilde{MAC} \) = 1, 87, and 1065 observations with genotype dosage G > 0. The results are based on 200 replicates of the null binary phenotypes Q 2 Examination of empirical type 1 error rates for the single-variant tests demonstrates that no method performed uniformly better than others for the rare variants with very low MACs (Fig. 4). For example, when \( \tilde{MAC} \) is less than 15, the standard score test tended to be anticonservative at a significance testing level of 0.01 (Fig. 4), but was conservative at the less stringent significance level of 0.05 (results not shown). The standard LR test tended to be conservative for low \( \tilde{MAC} \) (eg, <10), but could be anticonservative when this count was between 10 and 20. Although the 2 small-sample score tests could also be anticonservative, and the penalized LR test tended to be conservative in general, the type 1 error rates of these tests were closer to the nominal level than the standard tests. When \( \tilde{MAC} \) is 66 or greater (or MAF >1 %), all the single-variant tests seem to control type 1 error reasonably well. Empirical type 1 error rates of the single-variant tests at significance level of 0.01, according to the number \( \tilde{MAC} \) of observations with genotype dosage G > 0 in the 1862 individuals with complete information on AGE and SEX. For the count-specific assessment, the results were pooled for the variants with the same \( \tilde{MAC} \) value, and the proportions of the p values <0.01 were then computed. Tests are the standard likelihood ratio test (LRT), penalized likelihood ratio test (PLRT), standard score test (Score) and small-sample-adjusted score tests (Score-Var-Adj and Score-Var-Kurt-Adj), which are indicated by yellow squares, black circles, red point-down triangles, purple diamonds, and green point-up triangles, respectively. The vertical line segments indicate ±2 simulation error bars, which were calculated based on the total number of the polymorphic variants in each \( \tilde{MAC} \) -specific group. 
For example, for the variants with \( \tilde{MAC} \) = 1, the error bars were obtained based on 200 × 44 = 8800 simulated data sets All the single-variant tests had power of less than 20 % to detect each of the rare variants, but had 100 % power for the low-frequency and the common variants at the significance levels of 0.01 and 0.05. For the low-frequency variants, tests had discrepant p values, and differential power at a stricter significance level. For example, in Fig. 5b, all the tests for var_3_47957996 (MAF = 0.0024) had p values of less than 0.01; however, the 2 LR tests had consistently lower p values than the 3 score tests. At a significance level of 1e–06, the standard and penalized LR tests had 91 and 82 % power, respectively, whereas the standard, the small-sample-variance, and the small-sample-variance-kurtosis score tests had less than 10 % power. Q-Q plots of the p values from the single-variant tests and variant-collapsing tests of markers in subregion 4 (see Table 1 and Fig. 2a). Panels (a) to (c) show the p values from the single-variant tests for a rare, low-frequency, and common functional variant, respectively; the tests are the standard likelihood ratio test (LRT), penalized likelihood ratio test (PLRT), standard score test (Score) and small-sample-adjusted score tests (Score-Var-Adj and Score-Var-Kurt-Adj), which are indicated by yellow squares, black circles, red point-down triangles, purple diamonds, and green point-up triangles, respectively. Panels (d) to (f) show the results from the variant-collapsing tests when they include the rare variants, rare and low-frequency variants, and all the variants within the region. The p values from the weighted burden tests, SKAT, and SKAT-O are, respectively, represented by pink circles, blue cross marks, and green diamonds. The calculations are based on 200 simulated data sets in SIMPHEN Among the 4 subregions with at least 1 functional variant, power was nonnegligible only in subregions 4 and 6 (Table 2). Figure 5d-f shows the results from the variant-collapsing tests of the markers in subregion 4, which contains all 3 types of functional variants (rare, low-frequency, and common). As expected, the burden test tended to have lower power than SKAT or SKAT-O because the subregion includes both protective and deleterious variants. These tests all had low power when the subregion includes only rare variants (eg, Fig. 5d). The power improved when the subregion included both the rare and the low-frequency functional variants (eg, Fig. 5e). When, however, the common variants were additionally included, the power did not seem to improve further (Fig. 5f). When compared with the single-variant tests of markers in the same subregion (Fig. 5b and c), the results suggest that this subregion would have been detected by some of the single-variant tests, as well, even at the genome-wide significance level of 5e–08. 
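The empirical type 1 error and power figures discussed above are simple proportions over simulation replicates, with roughly ±2 binomial standard errors as error bars. The sketch below reproduces that bookkeeping on made-up p values (uniform under the null, a skewed toy distribution under the alternative); it is illustrative only and does not use the GAW19 replicates.

```python
import numpy as np

rng = np.random.default_rng(7)

def empirical_rate(pvalues, alpha):
    """Proportion of replicate p values below alpha, with a ~2 SE half-width."""
    pvalues = np.asarray(pvalues)
    rate = np.mean(pvalues < alpha)
    se = np.sqrt(rate * (1 - rate) / len(pvalues))
    return rate, 2 * se

# Under the null, p values are approximately uniform; pooling 200 replicates
# over 44 variants gives 8800 tests, as in the text above.
null_p = rng.uniform(size=200 * 44)
for alpha in (0.05, 0.01):
    rate, half = empirical_rate(null_p, alpha)
    print(f"alpha={alpha}: empirical type 1 error = {rate:.4f} +/- {half:.4f}")

# Power is the same proportion computed from p values simulated under an
# alternative; here a Beta(0.2, 1) draw just stands in for small p values.
alt_p = rng.beta(0.2, 1.0, size=200)
print("toy power at alpha=0.01:", empirical_rate(alt_p, 0.01)[0])
```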
Table 2 Empirical power estimates of the collapsing-variant tests (Burden(a), SKAT(a), SKAT-O(a)) based on 200 simulated data sets in SIMPHEN, according to the MAP4 subregions containing at least 1 functional variant; the only cell preserved here is 0.260(b), for the rare & low-frequency analysis. (a) For each subregion, power was estimated when a test includes only rare, rare and low-frequency, and all variants. (b) A result of 2 nonfunctional common variants that were in LD (r2 > 0.3) with the 2 functional common variants in subregion 4
Discussion and conclusions
In this article, we evaluated standard and sparse-data methods for single-variant and variant-collapsing tests to examine the association between a hypertension phenotype and exonic variants in the MAP4 gene on chromosome 3, using both the real and the simulated phenotypes in unrelated Mexican Americans. In the analysis of the real phenotype data, none of the single-variant and variant-collapsing methods detected MAP4 variants significantly associated with hypertension. A limitation of our analysis is that we did not make any adjustment for ancestry admixture/population structure. In genetic association studies of admixed populations such as Mexican Americans, addressing differential ancestral backgrounds is important to avoid false positive or negative association signals [14, 15]. In our simulation investigation, we found that the sparse-data approaches improve type 1 error control, but their power remains low for detecting the rare variant effects. Because power of the association tests depends on both the frequency and the effect size of rare variants, even with large effects, the tests may detect rare variants only in studies with large samples. We may be more successful in identifying rare variants when we use joint or meta-analyses combining data or summary statistics from different studies (eg, Ma et al. [3]). For the low-frequency variants, all the single-variant tests seem to have improved type 1 error rates and power. It seems that the LR tests have higher power than the score tests at a stringent significance level. However, we cannot make any concrete conclusions because of the limited number of replications provided in the simulation design. Although more thorough investigation is necessary, overall, the penalized LR test and the score test with small-sample variance and kurtosis seem to be better choices than the standard tests for the analyses of rare and low-frequency variants. Moreover, caution is indicated when different tests of the same hypothesis give inconsistent p values, as this suggests that large-sample approximations for test statistics may be invalid. Although previous simulation studies have shown that collapsing tests can have greater power than single-variant tests (see, eg, Madsen and Browning [1]), our investigation suggests that the power of collapsing tests can be low when the tests include only the rare variants (see, eg, Fig. 5d). In addition to MAF and effect size, the power of collapsing tests depends on the number of associated variants, the number of neutral variants, and whether the direction of effects is consistent within a gene, so that selection of good binning and weighting strategies may boost power for detecting regions containing only rare variants.
We thank the reviewers for their careful reading of the manuscript and thoughtful comments. This work was supported in part by grants from the MITACS Network of Centres of Excellence in Mathematical Sciences and the Natural Sciences and Engineering Research Council of Canada.
This article has been published as part of BMC Proceedings Volume 17 Supplementary 7, 2016: Genetic Analysis Workshop 19: Sequence, Blood Pressure and Expression Data. Summary articles. The full contents of the supplement are available online at www.biomedcentral.com/bmcgenet/supplements/17/S2. Publication of the proceedings of Genetic Analysis Workshop 19 was supported by National Institutes of Health grant R01 GM031575. JS and SBB designed the overall study and drafted the manuscript. JS and RY conducted statistical analyses. All authors read and approved the final manuscript. Lunenfeld-Tanenbaum Research Institute, Sinai Health System, University of Toronto, Toronto, ON, M5T 3L9, Canada Madsen BE, Browning SR. A groupwise association test for rare mutations using a weighted sum statistic. PLoS Genet. 2009;5(2):e1000384. Heinze G, Schemper M. A solution to the problem of separation in logistic regression. Stat Med. 2002;21(16):2409–19. Ma C, Blackwell T, Boehnke M, Scott LJ, the GoT2D investigators. Recommended joint and meta-analysis strategies for case-control association testing of single low-count variants. Genet Epidemiol. 2013;37(6):539–50. Kinnamon DD, Hershberger RE, Martin ER. Reconsidering association testing methods using single-variant test statistics as alternatives to pooling tests for sequence data with rare variants. PLoS One. 2012;7(2):e30238. Kosmidis I. Bias in parametric estimation: reduction and useful side-effects. Wiley Interdiscip Rev Comput Stat. 2014;6:185–96. Firth D. Bias reduction of maximum likelihood estimates. Biometrika. 1993;80:27–38. Bull SB, Mak C, Greenwood CM. A modified score function estimator for multinomial logistic regression in small samples. Comput Stat Data Anal. 2002;39:57–74. Bull SB, Lewinger JP, Lee SS. Confidence intervals for multinomial logistic regression in sparse data. Stat Med. 2007;26(4):903–18. Lee S, Emond MJ, Bamshad MJ, Barnes KC, Rieder MJ, Nickerson DA, Christiani DC, Wurfel MM, Lin X. Optimal unified approach for rare-variant association testing with application to small-sample case-control whole-exome sequencing studies. Am J Hum Genet. 2012;91(2):224–37. Colby S, Lee S, Lewinger PJ, Bull SB: pmlr: Penalized Multinomial Logistic Regression. R package version 1.0; 2010. http://CRAN.R-project.org/package=pmlr. Lee S, Miropolsky L, Wu M: SKAT: SNP-Set (Sequence) Kernel Association Test. R package version 0.95; 2014. http://CRAN.R-project.org/package=SKAT. Blangero J, Teslovich TM, Sim X, Almeida MA, Jun G, Dyer TD, Johnson M, Peralta JM, Manning AK, Wood AR, et al. Omics squared: Human genomic, transcriptomic, and phenotypic data for Genetic Analysis Workshop 19. BMC Proc. Pritchard JK. Are rare variants responsible for susceptibility to complex diseases? Am J Hum Genet. 2001;69(1):124–37. O'Connor TD, Kiezun A, Bamshad M, Rich SS, Smith JD, Turner E, NHLBIGO Exome Sequencing Project; ESP Population Genetics, Statistical Analysis Working Group, Leal SM, Akey JM. Fine-scale patterns of population stratification confound rare variant association tests. PLoS One. 2013;8(7):e65834. Bermejo JL.
Above and beyond state-of-the-art approaches to investigate sequence data: Summary of methods and results from the Population-based Association Group at the GAW 19. BMC Genet. 2015;16 Suppl 3:S1. Shin J-H, Blay S, McNeney B, Graham J: LDheatmap: an R function for graphical display of pairwise linkage disequilibria between single nucleotide polymorphisms. J Stat Softw. 2006;16: Code Snippet 3.
CommonCrawl
Why doesn't the standard analysis of set cover $H_n$ greedy extend to partial cover?
Several authors, starting with Slavik, have noted that the classical analysis of the set cover $H_n$ greedy algorithm does not readily extend to the set partial cover problem, where the goal is to pick a minimum-cost family of sets to cover $p \cdot n$ of the $n$ elements, where $0<p<1$ is a constant. But it sure seems to! Greedy: repeatedly choose the most cost-effective set, i.e., one minimizing $c(S) /\min(|S-C|,pn-|C|)$, where $C$ is the set of elements covered so far. That is, the standard set cover greedy's cost-effectiveness definition is modified so that the benefit of a set is the min of # new elements and # of additional elements you still need to get. Then it would seem that you can just say: number the elements $e_1,...,e_{pn}$ in order covered (ignoring any additional ones covered--we'll allocate all the costs to these first $pn$ elements), and argue that at the moment when greedy covers $e_k$, choosing all of $OPT$ would take care of your outstanding $\ge pn-k+1$ element needs, with cost per "satisfied element need" of at most $\alpha = OPT/(pn-k+1)$, so there's got to be a set that's at least that good, so greedy's going to choose one at least that good, which gives us a total bound of $OPT \sum_{k=1}^{pn} 1/(pn-k+1) = H_{pn} OPT$. But apparently this argument is flawed. How so? (Slavik writes in his thesis, "Even though [the algorithms] are quite similar, it turns out that the approach used by Chvatal, Lovasz, or Johnson cannot be used to establish a reasonable bound on the performance [...]. The reasons are that only a fraction of points of the set $U$ are covered and that the part of $U$ covered by the optimum partial cover can be completely different from the part covered by the greedy cover. This makes the analysis of the performance bound [...] quite complicated." http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.5734&rep=rep1&type=pdf And Kearns proved a $2H_n+3$ bound, and presumably not because he simply overlooked the obvious approach.)
ds.algorithms approximation-algorithms set-cover
Matt
Wolsey gave a generalization of set cover: "minimizing a linear function subject to a submodular constraint" reference. Partial cover is (I think?) a special case: "choose a collection $S$ of sets of minimum total cost s.t. $f(S) \ge p\cdot n$." Where $$f(S) = \min\Big(p\cdot n, \big|\cup_{s\in S} s\big|\Big)$$ is the size of the union of the sets in $S$, or your $p\cdot n$, whichever is less. For integer-valued submodular functions $f$ such as this, his bound is $H_k$, where $$k=\max_s f(\{s\})-f(\{\}) \le n.$$ – Neal Young Feb 11 '14 at 3:30
Thanks Neal, I know the result is true, what I don't understand is why the simple adaptation of the classical argument I rehearse above is invalid, as several authors have stated. E.g., Elomaa & Kujala (cs.uleth.ca/~benkoczi/files/papers/psetcover-2010.pdf) also write, "The straightforward analysis of the greedy method for Partial Cover becomes quite complicated because the optimal solution may cover a different set than those chosen by the greedy algorithm. Thus, the methods used by Johnson, Lovasz, and Chvatal ...do not directly generalize..." Where does it actually break?
– Matt Feb 11 '14 at 4:38
The "standard" analysis of set cover via the LP relaxation is messy to generalize to partial cover because the LP relaxation for partial cover is not as clean as that for the standard set cover problem. What is the primal and dual you are referring to when you talk about the allocation of costs? – Chandra Chekuri Feb 11 '14 at 15:35
@Chandra I'm not referring to any primal/dual at all here, just naively adapting the direct analysis (as in e.g. ch.2 of the Vazirani text), i.e., just charging equal shares of $c(S)$ for each chosen $S$ to the helpful elements that $S$ gives us, and then bounding and summing up those charge values. I'm not able to find the bug in that analysis. – Matt Feb 11 '14 at 19:15
Matt, rereading more carefully, I don't see any flaw in your proof. However, the standard greedy set-cover analysis proves the tighter ratio of $H_d$ where $d$ is the maximum set size. (The argument there is that you can charge all of greedy's cost to the elements so that each set $S$ in OPT is charged at most $H_{|S|}c_S$.) For partial cover, the $H_d$ bound holds too, but I don't see how your argument extends to give it, as you would be charging greedy's cost to elts that OPT might not cover. This wouldn't explain why Kearns settled for $2H_n+3$ though. – Neal Young Feb 12 '14 at 9:27
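For concreteness, here is a short sketch (mine, not from any of the cited papers) of the greedy rule described in the question: repeatedly pick the set minimizing $c(S)/\min(|S \setminus C|, pn-|C|)$ until $\lceil pn \rceil$ elements are covered. The instance at the bottom is made up purely for illustration.

```python
import math

def greedy_partial_cover(universe, sets, costs, p):
    """Greedy for partial set cover: cover ceil(p * |universe|) elements.

    sets  : dict mapping a set id to a frozenset of elements
    costs : dict mapping the same ids to positive costs
    """
    target = math.ceil(p * len(universe))
    covered, chosen, total = set(), [], 0.0
    while len(covered) < target:
        need = target - len(covered)

        def ratio(sid):
            # benefit is capped at the number of elements still needed
            gain = min(len(sets[sid] - covered), need)
            return costs[sid] / gain if gain > 0 else float("inf")

        best = min(sets, key=ratio)
        if ratio(best) == float("inf"):
            raise ValueError("remaining sets cannot cover p*n elements")
        chosen.append(best)
        total += costs[best]
        covered |= sets[best]
    return chosen, total

U = set(range(10))
S = {"a": frozenset({0, 1, 2, 3}), "b": frozenset({3, 4, 5}),
     "c": frozenset({6, 7, 8, 9}), "d": frozenset({0, 5, 9})}
c = {"a": 1.0, "b": 1.0, "c": 2.0, "d": 0.5}
print(greedy_partial_cover(U, S, c, p=0.7))   # e.g. (['d', 'a', 'b'], 2.5)
```

The capped gain min(|S\C|, pn-|C|) is exactly the modification the question describes; the open issue above concerns the analysis, not the algorithm itself.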
Intuitionism in the Philosophy of Mathematics
First published Thu Sep 4, 2008; substantive revision Wed Aug 14, 2013
Intuitionism is a philosophy of mathematics that was introduced by the Dutch mathematician L.E.J. Brouwer (1881–1966). Intuitionism is based on the idea that mathematics is a creation of the mind. The truth of a mathematical statement can only be conceived via a mental construction that proves it to be true, and the communication between mathematicians only serves as a means to create the same mental process in different minds. This view on mathematics has far reaching implications for the daily practice of mathematics, one of its consequences being that the principle of the excluded middle, $(A \vee \neg A)$, is no longer valid. Indeed, there are propositions, like the Riemann hypothesis, for which there exists currently neither a proof of the statement nor of its negation. Since knowing the negation of a statement in intuitionism means that one can prove that the statement is not true, this implies that both $A$ and $\neg A$ do not hold intuitionistically, at least not at this moment. The dependence of intuitionism on time is essential: statements can become provable in the course of time and therefore might become intuitionistically valid while not having been so before.
Besides the rejection of the principle of the excluded middle, intuitionism strongly deviates from classical mathematics in the conception of the continuum, which in the former setting has the property that all total functions on it are continuous. Thus, unlike several other theories of constructive mathematics, intuitionism is not a restriction of classical reasoning; it contradicts classical mathematics in a fundamental way. Brouwer devoted a large part of his life to the development of mathematics on this new basis. Although intuitionism has never replaced classical mathematics as the standard view on mathematics, it has always attracted a great deal of attention and is still widely studied today.
In this entry we concentrate on the aspects of intuitionism that set it apart from other branches of constructive mathematics, and the part that it shares with other forms of constructivism, such as foundational theories and models, is discussed only briefly.
1. Brouwer
2. Intuitionism
2.1 The two acts of intuitionism
2.2 The creating subject
3. Mathematics
3.1 The BHK-interpretation
3.2 Intuitionistic logic
3.3 The natural numbers
3.4 The continuum
3.5 Continuity axioms
3.6 The bar theorem
3.7 Choice axioms
3.8 Descriptive set theory, topology, and topos theory
4. Constructivism
5. Meta-mathematics
5.1 Arithmetic
5.2 Analysis
5.3 Lawless sequences
5.4 Foundations and models
5.5 Reverse mathematics
Luitzen Egbertus Jan Brouwer was born in Overschie, the Netherlands. He studied mathematics and physics at the University of Amsterdam, where he obtained his PhD in 1907. In 1909 he became a lecturer at the same university, where he was appointed full professor in 1912, a position he held until his retirement in 1951. Brouwer was a brilliant mathematician who did ground-breaking work in topology and became famous already at a young age. All his life he was an independent mind who pursued the things he believed in with ardent vigor, which brought him in conflict with many a colleague, most notably with David Hilbert. He had admirers as well, and in his house "the hut" in Blaricum he welcomed many well-known mathematicians of his time.
Towards the end of his life he became more isolated, but his belief in the truth of his philosophy never wavered. He died in a car accident at the age of 85 in Blaricum, seven years after the death of his wife Lize Brouwer. At the age of 24 Brouwer wrote the book Life, Art and Mysticism (Brouwer 1905), whose solipsistic content foreshadows his philosophy of mathematics. In his dissertation the foundations of intuitionism are formulated for the first time, although not yet under that name and not in their final form. In the first years after his dissertation most of Brouwer's scientific life was devoted to topology, an area in which he is still known for his theory of dimension and his fixed point theorem. This work is part of classical mathematics; according to Brouwer's later view, his fixed point theorem does not hold, although an analogue cast in terms of approximations can be proved to hold according to his principles. From 1913 on, Brouwer increasingly dedicated himself to the development of the ideas formulated in his dissertation into a full philosophy of mathematics. He not only refined the philosophy of intuitionism but also reworked mathematics, especially the theory of the continuum and the theory of sets, according to these principles. By then, Brouwer was a famous mathematician who gave influential lectures on intuitionism at the scientific meccas of that time, Cambridge, Vienna, and Göttingen among them. His philosophy was considered awkward by many, but treated as a serious alternative to classical reasoning by some of the most famous mathematicians of his time, even when they had a different view on the matter. Kurt Gödel, who was a Platonist all his life, was one of them. Hermann Weyl at one point wrote "So gebe ich also jetzt meinen eigenen Versuch Preis und schließe mich Brouwer an" ("Thus I now abandon my own attempt and join Brouwer") (Weyl 1921, 56). And although he rarely practised intuitionistic mathematics later in life, Weyl never stopped admiring Brouwer and his intuitionistic philosophy of mathematics. The life of Brouwer was laden with conflicts, the most famous one being the conflict with David Hilbert, which eventually led to Brouwer's expulsion from the board of the Mathematische Annalen. This conflict was part of the Grundlagenstreit that shook the mathematical community at the beginning of the 20th century and that emerged as a result of the appearance of paradoxes and highly nonconstructive proofs in mathematics. Philosophers and mathematicians were forced to acknowledge the lack of an epistemological and ontological basis for mathematics. Brouwer's intuitionism is a philosophy of mathematics that aims to provide such a foundation. According to Brouwer mathematics is a languageless creation of the mind. Time is the only a priori notion, in the Kantian sense. Brouwer distinguishes two acts of intuitionism: The first act of intuitionism is: Completely separating mathematics from mathematical language and hence from the phenomena of language described by theoretical logic, recognizing that intuitionistic mathematics is an essentially languageless activity of the mind having its origin in the perception of a move of time. This perception of a move of time may be described as the falling apart of a life moment into two distinct things, one of which gives way to the other, but is retained by memory. If the twoity thus born is divested of all quality, it passes into the empty form of the common substratum of all twoities. And it is this common substratum, this empty form, which is the basic intuition of mathematics.
(Brouwer 1981, 4–5) As will be discussed in the section on mathematics, the first act of intuitionism gives rise to the natural numbers but implies a severe restriction on the principles of reasoning permitted, most notably the rejection of the principle of the excluded middle. Owing to the rejection of this principle and the disappearance of the logical basis for the continuum, one might, in the words of Brouwer, "fear that intuitionistic mathematics must necessarily be poor and anaemic, and in particular would have no place for analysis" (Brouwer 1952, 142). The second act, however, establishes the existence of the continuum, a continuum having properties not shared by its classical counterpart. The recovery of the continuum rests on the notion of choice sequence stipulated in the second act, i.e. on the existence of infinite sequences generated by free choice, which therefore are not fixed in advance. The second act of intuitionism is: Admitting two ways of creating new mathematical entities: firstly in the shape of more or less freely proceeding infinite sequences of mathematical entities previously acquired …; secondly in the shape of mathematical species, i.e. properties supposable for mathematical entities previously acquired, satisfying the condition that if they hold for a certain mathematical entity, they also hold for all mathematical entities which have been defined to be "equal" to it …. (Brouwer 1981, 8) The two acts of intuitionism form the basis of Brouwer's philosophy; from these two acts alone Brouwer creates the realm of intuitionistic mathematics, as will be explained below. Already from these basic principles it can be concluded that intuitionism differs from Platonism and formalism, because neither does it assume a mathematical reality outside of us, nor does it hold that mathematics is a play with symbols according to certain fixed rules. In Brouwer's view, language is used to exchange mathematical ideas but the existence of the latter is independent of the former. The distinction between intuitionism and other constructive views on mathematics, according to which mathematical objects and arguments should be computable, lies in the freedom that the second act allows in the construction of infinite sequences. Indeed, as will be explained below, the mathematical implications of the second act of intuitionism contradict classical mathematics, and therefore do not hold in most constructive theories, since these are in general part of classical mathematics. Thus Brouwer's intuitionism stands apart from other philosophies of mathematics; it is based on the awareness of time and the conviction that mathematics is a creation of the free mind, and it therefore is neither Platonism nor formalism. It is a form of constructivism, but only so in the wider sense, since many constructivists do not accept all the principles that Brouwer believed to be true. The two acts of intuitionism do not in themselves exclude a psychological interpretation of mathematics. Although Brouwer only occasionally addressed this point, it is clear from his writings that he did consider intuitionism to be independent of psychology. Brouwer's introduction of the creating subject (Brouwer 1948) as an idealized mind in which mathematics takes place already abstracts away from inessential aspects of human reasoning such as limitations of space and time and the possibility of faulty arguments.
Thus the intersubjectivity problem, which asks for an explanation of the fact that human beings are able to communicate, ceases to exist, as there exists only one creating subject. The notion has become known in the literature as the creative subject, but here Brouwer's terminology is used. In Niekus 2010, it is argued that Brouwer's creating subject does not involve an idealized mathematician. For a phenomenological analysis of the creating subject as a transcendental subject in the sense of Husserl see van Atten 2007. In most philosophies of mathematics, for example in Platonism, mathematical statements are tenseless. In intuitionism truth and falsity have a temporal aspect; an established fact will remain so, but a statement that becomes proven at a certain point in time lacks a truth-value before that point. In the formalization of the notion of creating subject, which was not formulated by Brouwer but only later by others, the temporal aspect of intuitionism is incorporated in the axioms (here $\Box_n A$ denotes that the creating subject experiences the truth of $A$ at time $n$, or, in other words, that it has a proof of $A$ at time $n$):
(CS1) $\Box_n A \vee \neg \Box_n A$ (it can be decided whether or not the creating subject knows $A$ at time $n$)
(CS2) $\Box_m A \rightarrow \Box_{m+n}A$ (what the creating subject knows remains known to him)
(CS3) $\exists n \Box_n A \leftrightarrow A$ (what is true will be discovered to be so by the creating subject, and it cannot know what is not true)
The first axiom is a form of the principle of the excluded middle concerning the knowledge of the creating subject. The second axiom clearly uses the fact that the creating subject is an idealization since it expresses that proofs will always be remembered. The last axiom is not one a mathematician using classical reasoning would adhere to. In fact, Gödel's incompleteness theorem indicates that the principle is false when $\Box_n A$ would be interpreted as being provable in a reasonable proof system, which, however, is certainly not what Brouwer had in mind. Brouwer used arguments that involve a creating subject to construct counterexamples to certain intuitionistically unacceptable statements. Whereas the weak counterexamples, to be discussed below, only show that certain statements cannot, at present, be accepted intuitionistically, the notion of the idealized mind proves certain classical principles to be false. One can, for example, given a statement $A$ that does not contain any reference to time, i.e. no occurrence of $\Box_n$, define an infinite sequence (what will later be called a choice sequence) according to the following rule (Brouwer 1953): \[ \alpha(n) = \begin{cases} 0 & \text{if } \neg\Box_n A \\ 1 & \text{if } \Box_n A. \end{cases} \] From this follows the principle known as Kripke's schema, \[ \exists \alpha(A \leftrightarrow \exists n\, \alpha(n) = 1). \] In van Dalen 1978 a model for the axioms of the creating subject is given in the context of arithmetic and choice sequences, thus proving this notion to be consistent with intuitionistic arithmetic and certain parts of analysis. Important as the arguments using the notion of creating subject might be for the further understanding of intuitionism as a philosophy of mathematics, its role in the development of the field has been less influential than that of the two acts of intuitionism, which directly lead to the mathematical truths Brouwer and those coming after him were willing to accept.
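To make the step to Kripke's schema explicit (the following unpacking is mine, not part of the original entry): by CS1 the sequence $\alpha$ is well defined, since at every stage $n$ it is decided whether $\Box_n A$ holds, and by construction $\alpha(n)=1$ holds exactly when $\Box_n A$ does. Hence \[ \exists n\, \alpha(n)=1 \;\leftrightarrow\; \exists n\, \Box_n A \;\leftrightarrow\; A, \] where the second equivalence is CS3, and this particular $\alpha$ therefore witnesses $\exists \alpha(A \leftrightarrow \exists n\, \alpha(n) = 1)$.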
Although Brouwer's development of intuitionism played an important role in the foundational debate among mathematicians at the beginning of the 20th century, the far reaching implications of his philosophy for mathematics became only apparent after many years of research. The two most characteristic properties of intuitionism are the logical principles of reasoning that it allows in proofs and the full conception of the intuitionistic continuum. Only as far as the latter is concerned does intuitionism become incomparable with classical mathematics. In this entry the focus is on those principles of intuitionism that set it apart from other mathematical disciplines, and therefore its other constructive aspects will be treated in less detail. In intuitionism, knowing that a statement $A$ is true means having a proof of it. In 1934 Arend Heyting, who had been a student of Brouwer, introduced a form of what later became known as the Brouwer-Heyting-Kolmogorov-interpretation, which captures the meaning of the logical symbols in intuitionism, and in constructivism in general as well. It defines in an informal way what an intuitionistic proof should consist of by indicating how the connectives and quantifiers should be interpreted.
$\bot$ is not provable.
A proof of $A\wedge B$ consists of a proof of $A$ and a proof of $B$.
A proof of $A \vee B$ consists of a proof of $A$ or a proof of $B$.
A proof of $A \rightarrow B$ is a construction which transforms any proof of $A$ into a proof of $B$.
A proof of $\exists xA(x)$ is given by presenting an element $d$ of the domain and a proof of $A(d)$.
A proof of $\forall x A(x)$ is a construction which transforms every proof that $d$ belongs to the domain into a proof of $A(d)$.
The negation $\neg A$ of a formula $A$ is proven once it has been shown that there cannot exist a proof of $A$, which means providing a construction that derives falsum from any possible proof of $A$. Thus $\neg A$ is equivalent to $A \rightarrow \bot$. The BHK-interpretation is not a formal definition because the notion of construction is not defined and therefore open to different interpretations. Nevertheless, already on this informal level one is forced to reject one of the logical principles ever-present in classical logic: the principle of the excluded middle $(A\vee \neg A)$. According to the BHK-interpretation this statement holds intuitionistically if the creating subject knows a proof of $A$ or a proof that $A$ cannot be proved. In the case that neither for $A$ nor for its negation a proof is known, the statement $(A \vee \neg A)$ does not hold. The existence of open problems, such as the Goldbach conjecture or the Riemann hypothesis, illustrates this fact. But once a proof of $A$ or a proof of its negation is found, the situation changes, and for this particular $A$ the principle $(A \vee \neg A)$ is true from that moment on. Brouwer rejected the principle of the excluded middle on the basis of his philosophy, but Arend Heyting was the first to formulate a comprehensive logic of principles acceptable from an intuitionistic point of view. Intuitionistic logic, which is the logic of most other forms of constructivism as well, is often referred to as "classical logic without the principle of the excluded middle". It is denoted by IQC, which stands for Intuitionistic Quantifier Logic, but other names occur in the literature as well.
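As a small worked illustration of the BHK clauses (an example of my own, not part of the entry): a proof of $A \rightarrow \neg\neg A$ must transform any proof $p$ of $A$ into a proof of $\neg\neg A$, that is, into a construction turning any proof $q$ of $\neg A$ into a proof of $\bot$. Such a construction is immediate: given $q$, which by definition converts proofs of $A$ into proofs of $\bot$, simply apply $q$ to $p$; symbolically, $p \mapsto (q \mapsto q(p))$. The converse direction $\neg\neg A \rightarrow A$ is not validated by this reading, since a proof of $\neg\neg A$ need not contain, or yield, a proof of $A$; this asymmetry is closely related to the rejection of the principle of the excluded middle.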
A possible axiomatization in Hilbert style consists of the principles
$A \wedge B \rightarrow A$
$A \wedge B \rightarrow B$
$A \rightarrow A \vee B$
$B \rightarrow A \vee B$
$A \rightarrow (B \rightarrow A)$
$\forall x A(x) \rightarrow A(t)$
$A(t) \rightarrow \exists x A(x)$
$\bot \rightarrow A$
$(A \rightarrow (B \rightarrow C)) \rightarrow ((A \rightarrow B) \rightarrow (A \rightarrow C))$
$A \rightarrow (B \rightarrow A \wedge B)$
$(A \rightarrow C) \rightarrow ( (B \rightarrow C) \rightarrow (A \vee B \rightarrow C))$
$\forall x (B \rightarrow A(x)) \rightarrow (B \rightarrow \forall x A(x))$
$\forall x (A(x) \rightarrow B) \rightarrow (\exists x A(x) \rightarrow B)$
with the usual side conditions for the last two axioms, and the rule Modus Ponens, \[ \text{from $A$ and $(A \rightarrow B)$ infer $B$}, \] as the only rule of inference. Intuitionistic logic has been an object of investigation ever since Heyting formulated it. Already at the propositional level it has many properties that set it apart from classical logic, such as the Disjunction Property: (DP) $\text{IQC} \vdash A \vee B \text{ implies IQC} \vdash A \text{ or IQC }\vdash B.$ This principle is clearly violated in classical logic, because classical logic proves $(A \vee \neg A)$ also for formulas that are independent of the logic, i.e. for which neither $A$ nor $\neg A$ is a tautology. The inclusion of the principle Ex Falso Sequitur Quodlibet, $(\bot \rightarrow A)$, in intuitionistic logic is a point of discussion for those studying Brouwer's remarks on the subject; in van Atten 2008, it is argued that the principle is not valid in intuitionism and that the logical principles valid according to Brouwer's views are those of relevance logic. See van Dalen 2004 for more on Brouwer and Ex Falso Sequitur Quodlibet. Although to date all the logic used in intuitionistic reasoning is contained in IQC, it is in principle conceivable that at some point there will be found a principle acceptable from the intuitionistic point of view that is not covered by this logic. For most forms of constructivism the widely accepted view is that this will not ever be the case, and thus IQC is considered to be the logic of constructivism. For intuitionism the situation is less clear because it cannot be excluded that at some point our intuitionistic understanding might lead us to new logical principles that we did not grasp before. One of the reasons for the widespread use of intuitionistic logic is that it is well-behaved from both the proof-theoretic and the model-theoretic point of view. There exist a great many proof systems for it, such as Gentzen calculi and natural deduction systems, as well as various forms of semantics, such as Kripke models, Beth models, Heyting algebras, topological semantics and categorical models. Several of these semantics are, however, only classical means to study intuitionistic logic, for it can be shown that an intuitionistic completeness proof with respect to them cannot exist (Kreisel 1962). It has, however, been shown that there are alternative, slightly less natural models with respect to which completeness does hold constructively (Veldman 1976). The constructive character of intuitionistic logic becomes particularly clear in the Curry-Howard isomorphism that establishes a correspondence between derivations in the logic and terms in simply typed $\lambda$-calculus, that is, between proofs and computations.
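As an illustration of how derivations in this Hilbert-style system look, and of the correspondence just mentioned, here is the standard textbook derivation of $A \rightarrow A$ (this worked example is mine, not taken from the entry):
1. $A \rightarrow ((A \rightarrow A) \rightarrow A)$ (instance of $A \rightarrow (B \rightarrow A)$)
2. $(A \rightarrow ((A \rightarrow A) \rightarrow A)) \rightarrow ((A \rightarrow (A \rightarrow A)) \rightarrow (A \rightarrow A))$ (instance of $(A \rightarrow (B \rightarrow C)) \rightarrow ((A \rightarrow B) \rightarrow (A \rightarrow C))$)
3. $(A \rightarrow (A \rightarrow A)) \rightarrow (A \rightarrow A)$ (Modus Ponens, 1 and 2)
4. $A \rightarrow (A \rightarrow A)$ (instance of $A \rightarrow (B \rightarrow A)$)
5. $A \rightarrow A$ (Modus Ponens, 4 and 3)
Under the Curry-Howard isomorphism the two axioms used correspond to the combinators $K$ and $S$, and the derivation as a whole to the term $S K K$, which reduces to the identity $\lambda x.x$, the program extracted from this proof of $A \rightarrow A$.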
The correspondence preserves structure in that reduction of terms corresponds to normalization of proofs. The existence of the natural numbers is given by the first act of intuitionism, that is by the perception of the movement of time and the falling apart of a life moment into two distinct things: what was, 1, and what is together with what was, 2, and from there to 3, 4, ... In contrast to classical mathematics, in intuitionism all infinity is considered to be potential infinity. In particular this is the case for the infinity of the natural numbers. Therefore statements that quantify over this set have to be treated with caution. On the other hand, the principle of induction is fully acceptable from an intuitionistic point of view. Because of the finiteness of a natural number in contrast to, for example, a real number, many arithmetical statements of a finite nature that are true in classical mathematics are so in intuitionism as well. For example, in intuitionism every natural number has a prime factorization; there exist computably enumerable sets that are not computable; $(A \vee \neg A)$ holds for all quantifier free statements $A$. For more complex statements, such as van der Waerden's theorem or Kruskal's theorem, intuitionistic validity is not so straightforward. In fact, the intuitionistic proofs of both statements are complex and deviate from the classical proofs (Coquand 1995, Veldman 2004). Thus in the context of the natural numbers, intuitionism and classical mathematics have a lot in common. It is only when other infinite sets such as the real numbers are considered that intuitionism starts to differ more dramatically from classical mathematics, and from most other forms of constructivism as well. In intuitionism, the continuum is both an extension and a restriction of its classical counterpart. In its full form, both notions are incomparable since the intuitionistic real numbers possess properties that the classical real numbers do not have. A famous example, to be discussed below, is the theorem that in intuitionism every total function on the continuum is continuous. That the intuitionistic continuum does not satisfy certain classical properties can be easily seen via weak counterexamples. That it also contains properties that the classical reals do not possess stems from the existence, in intuitionism, of choice sequences. Weak counterexamples The weak counterexamples, introduced by Brouwer in 1908, are the first examples that Brouwer used to show that the shift from a classical to an intuitionistic conception of mathematics is not without consequence for the mathematical truths that can be established according to these philosophies. They show that certain classical statements are presently unacceptable from an intuitionistic point of view. As an example, consider the sequence of real numbers given by the following definition: \[ r_n = \begin{cases} 2^{-n} & \text{if } \forall m \leq n A(m) \\ 2^{-m} & \text{if } \neg A(m) \wedge m \leq n \wedge \forall k \lt m A(k). \end{cases} \] Here $A(n)$ is a decidable property for which $\forall n A(n)$ is not known to be true or false. Decidability means that at present for any given $n$ there exists (can be constructed) a proof of $A(n)$ or of $\neg A(n)$. At the time of this writing, we could for example let $A(n)$ express that $n$, if greater than 2, is the sum of three primes; $\forall n A(n)$ then expresses the (original) Goldbach conjecture that every number greater than 2 is the sum of three primes.
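To unpack the case distinction (this gloss is mine, not part of the entry): as long as $A(m)$ has been verified for all $m \leq n$, the definition yields $r_n = 2^{-n}$, so the sequence keeps halving; should a counterexample ever be found, with least witness $m$, then from stage $m$ onward every $r_n$ equals $2^{-m}$ and the sequence stops moving. In either case $|r_n - r_{n'}| \leq 2^{-n}$ for all $n' \geq n$, so $\langle r_n \rangle$ is a Cauchy sequence with an explicit modulus of convergence and therefore determines a real number.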
The sequence $\langle r_n \rangle$ defines a real number $r$ for which the statement $r=0$ is equivalent to the statement $\forall n A(n)$. It follows that the statement $(r = 0 \vee r \neq 0)$ does not hold, and therefore that the law of trichotomy $\forall x \forall y(x \lt y \vee x=y \vee x \gt y)$ is not true on the intuitionistic continuum. Note the subtle difference between "$A$ is not intuitionistically true" and "$A$ is intuitionistically refutable": in the first case we know that $A$ cannot have an intuitionistic proof, while the second statement expresses that we have a proof of $\neg A$, i.e. a construction that derives falsum from any possible proof of $A$. For the law of trichotomy we have just shown that it is not intuitionistically true. Below it will be shown that even the second, stronger form, saying that the law is refutable, holds intuitionistically. This, however, is not true for all statements for which there exist weak counterexamples. For example, the Goldbach conjecture is a weak counterexample to the principle of the excluded middle, since $\forall n A(n)$ as above is at present not known to be true or false, and thus we cannot assert $\forall n A(n) \vee \neg \forall n A(n)$ intuitionistically, at least not at this moment. But the refutation of this statement, $\neg (\forall n A(n) \vee \neg \forall n A(n))$, is not true in intuitionism, as one can show that for any statement $B$ a contradiction can be derived from the assumption that $\neg B$ and $\neg\neg B$ hold (and thus also from $B$ and $\neg B$). In other words, $\neg\neg (B \vee \neg B)$ is intuitionistically true, and thus, although there exist weak counterexamples to the principle of the excluded middle, its negation is false in intuitionism, that is, it is intuitionistically refutable. The existence of real numbers $r$ for which the intuitionist cannot decide whether they are positive or not shows that certain classically total functions cease to be so in an intuitionistic setting, such as the piecewise constant function \[ f(r) = \begin{cases} 0 & \text{if } r \geq 0 \\ 1 & \text{if } r \lt 0. \end{cases} \] There exist weak counterexamples to many classically valid statements. The construction of these weak counterexamples often follows the same pattern as the example above. For example, the argument that shows that the intermediate value theorem is not intuitionistically valid runs as follows. Let $r$ be a real number in [−1,1] for which $(r\leq 0 \vee 0 \leq r)$ has not been decided, as in the example above. Define the uniformly continuous function $f$ on $[0,3]$ by \[ f(x) = \text{min}(x-1,0) + \text{max}(0,x-2) + r. \] Clearly, $f(0) = -1 +r$ and $f(3) = 1 + r$, whence $f$ takes the value 0 at some point $x$ in [0,3]. If such $x$ could be determined, either $1 \leq x$ or $x \leq 2$. Since $f$ equals $r$ on $[1,2]$, in the first case $r \leq 0$ and in the second case $0\leq r$, contradicting the undecidability of the statement $(r\leq 0 \vee 0 \leq r)$. These examples seem to indicate that in the shift from classical to intuitionistic mathematics one loses several fundamental theorems of analysis. This however is not so, since in many cases intuitionism regains such theorems in the form of an analogue in which existential statements are replaced by statements about the existence of approximations within arbitrary precision, as in this classically equivalent form of the intermediate value theorem that is constructively valid: Theorem.
For every continuous real-valued function $f$ on an interval $[a,b]$ with $a \lt b$, for every $c$ between $f(a)$ and $f(b)$, the following holds: \[ \forall n \exists x \in [a,b] \, |f(x)-c| \lt 2^{-n}. \] Weak counterexamples are a means to show that certain mathematical statements do not hold intuitionistically, but they do not yet reveal the richness of the intuitionistic continuum. Only after Brouwer's introduction of choice sequences did intuitionism obtain its particular flavor and became incomparable with classical mathematics. Choice sequences Choice sequences were introduced by Brouwer to capture the intuition of the continuum. Since for the intuitionist all infinity is potential, infinite objects can only be grasped via a process that generates them step-by-step. What will be allowed as a legitimate construction therefore decides which infinite objects are to be accepted. For example, in most other forms of constructivism only computable rules for generating such objects are allowed, while in Platonism infinities are considered to be completed totalities whose existence is accepted even in cases when no generating rules are known. Brouwer's second act of intuitionism gives rise to choice sequences, that provide certain infinite sets with properties that are unacceptable from a classical point of view. A choice sequence is an infinite sequence of numbers (or finite objects) created by the free will. The sequence could be determined by a law or algorithm, such as the sequence consisting of only zeros, or of the prime numbers in increasing order, in which case we speak of a lawlike sequence, or it could not be subject to any law, in which case it is called lawless. Lawless sequences could for example be created by the repeated throw of a coin, or by asking the creating subject to choose the successive numbers of the sequence one by one, allowing it to choose any number to its liking. Thus a lawless sequence is ever unfinished, and the only available information about it at any stage in time is the initial segment of the sequence created thus far. Clearly, by the very nature of lawlessness we can never decide whether its values will coincide with a sequence that is lawlike. Also, the free will is able to create sequences that start out as lawlike, but for which at a certain point the law might be lifted and the process of free choice takes over to generate the succeeding numbers, or vice versa. According to Brouwer every real number is presented by a choice sequence, and the choice sequences enabled him to capture the intuitionistic continuum via the controversial continuity axioms. Brouwer first spoke of choice sequences in his inaugural address (Brouwer 1912), but at that time he did not yet treat them as a fundamental part of his mathematics. Gradually they became more important and from 1918 on Brouwer started to use them in a way explained in the next section. The acceptance of the notion of choice sequence has far-reaching implications. It justifies, for the intuitionist, the use of the continuity axioms, from which classically invalid statements can be derived. The weakest of these axioms is the weak continuity axiom: (WC-N) $\forall\alpha\exists n A(\alpha,n) \rightarrow \forall\alpha\exists m\exists n \forall\beta\in\alpha(\overline{m})A(\beta,n).$ Here $n$ and $m$ range over natural numbers, $\alpha$ and $\beta$ over choice sequences, and $\beta\in\alpha(\overline{m})$ means that the first $m$ elements of $\alpha$ and $\beta$ are equal. 
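A toy instance (my own, not from the entry) may help to see what the axiom asserts: take $A(\alpha,n)$ to be the decidable statement $n = \alpha(0)+\alpha(1)$. Then $\forall\alpha\exists n A(\alpha,n)$ clearly holds, and the witness $n$ depends only on the first two values of $\alpha$: taking $m=2$, every $\beta\in\alpha(\overline{m})$ satisfies $A(\beta,\alpha(0)+\alpha(1))$. The weak continuity axiom asserts that every statement of the form $\forall\alpha\exists n A(\alpha,n)$ that the intuitionist can establish admits such a modulus $m$, also when $A$ is far less transparent than in this example.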
Although no completely satisfactory justification of most continuity axioms for arbitrary choice sequences has ever been given, not even by Brouwer, when restricted to the class of lawless sequences the argument supporting the validity of the weak continuity axiom runs as follows. When could a statement of the form $\forall\alpha\exists n A(\alpha,n)$ be established by the intuitionist? By the very nature of the notion of lawless sequence, the choice of the number $n$ for which $A(\alpha,n)$ holds has to be made after only a finite initial segment of $\alpha$ is known. For we do not know how $\alpha$ will proceed in time, and we therefore have to base the choice of $n$ on the initial segment of $\alpha$ that is known at that point in time where we wish to fix $n$. This implies that for every lawless sequence $\beta$ with the same initial segment as $\alpha$, $A(\beta,n)$ holds as well. The weak continuity axiom has been shown to be consistent, and is often applied in a form that can be justified, namely in the case in which the predicate $A$ only refers to the values of $\alpha$, and not to the higher order properties that it possibly possesses. The details of the argument will be omitted here, but it contains the same ingredients as the justification of the principle for lawless sequences, and can be found in van Atten and van Dalen 2002. Weak continuity does not exhaust the intuitionists' intuition about the continuum, for given the weak continuity axiom, it seems reasonable to assume that the choice of the number $m$ such that $\forall\beta\in\alpha(\overline{m})A(\beta,n)$ could be made explicit. Thus $\forall\alpha\exists n A(\alpha,n)$ implies the existence of a continuous functional $\Phi$ that for every $\alpha$ produces the $m$ that fixes the length of $\alpha$ on the basis of which $n$ is chosen. More formally, let $\mathcal{CF}$ be the class of continuous functionals $\Phi$ that assign natural numbers to infinite sequences, i.e. that satisfy \[ \forall\alpha\exists m\forall\beta\in\alpha(\overline{m})\Phi(\alpha)=\Phi(\beta). \] The full axiom of continuity, which is an extension of the weak continuity axiom, can then be expressed as: (C-N) $\forall\alpha\exists n A(\alpha,n) \rightarrow \exists \Phi \in \mathcal{CF}\,\forall\alpha A(\alpha,\Phi(\alpha)).$ There exist stronger forms of continuity that occur in intuitionistic analysis and the theory of lawless sequences. Through the continuity axiom certain weak counterexamples can be transformed into genuine refutations of classically accepted principles. For example, it implies that the quantified version of the principle of the excluded middle is false: \[ \neg\forall\alpha(\forall n\alpha (n)=0 \vee \neg \forall n\alpha (n)=0). \] Here $\alpha(n)$ denotes the $n$-th element of $\alpha$. To see that this negation holds, suppose, arguing by contradiction, that $\forall\alpha(\forall n\alpha (n)=0 \vee \neg \forall n\alpha (n)=0)$ holds. This implies that \[ \forall\alpha\exists k((\forall n\alpha (n)=0 \wedge k=0) \vee (\neg \forall n\alpha (n)=0 \wedge k=1)). \] By the weak continuity axiom, for $\alpha$ consisting of only zeros there exists a number $m$ that fixes the choice of $k$, which means that for all $\beta\in\alpha(\overline{m})$, $k=0$. But the existence of sequences whose first $m$ elements are 0 and that contain a 1 shows that this cannot be.
This example, showing that the principle of the excluded middle not only does not hold but is in fact false in intuitionism, leads to the refutation of many basic properties of the continuum. Consider for example the real number $r_\alpha$ that is the limit of the sequence consisting of the numbers $r_n$ as given in the section on weak counterexamples, where the $A(m)$ in the definition is taken to be the statement $\alpha(m)=0$. Then the refutation above implies that $\neg\forall\alpha(r_\alpha=0 \vee r_\alpha\neq 0)$, and it thereby refutes the law of trichotomy: \[ \forall x \forall y (x \lt y \vee x=y \vee y \lt x). \] The following theorem is another example of the way in which the continuity axiom refutes certain classical principles. Theorem ${\bf (C\mbox{-}N)}$ Every total real function is continuous. Indeed, a classical counterexample to this theorem, the nowhere continuous function \[ f(x) = \begin{cases} 0 & \text{if $x$ is a rational number} \\ 1 & \text{if $x$ is an irrational number} \end{cases} \] is not a legitimate function from the intuitionistic point of view since the property of being rational is not decidable on the real numbers. The theorem above implies that the continuum is not decomposable, and in van Dalen 1997, it is shown that this even holds for the set of irrational numbers. The two examples above are characteristic for the way in which the continuity axioms are applied in intuitionistic mathematics. They are the only axioms in intuitionism that contradict classical reasoning, and thereby represent the most colorful as well as the most controversial part of Brouwer's philosophy. Neighborhood Functions There is a convenient representation of continuous functionals that has been used extensively in the literature, though not by Brouwer himself. Continuous functionals that assign numbers to infinite sequences can be represented by neighborhood functions, where a neighborhood function $f$ is a function on the natural numbers satisfying the following two properties ($\cdot$ denotes concatenation and $f(\alpha(\overline{n}))$ denotes the value of $f$ on the code of the finite sequence $\alpha(\overline{n})$): \[ \forall\alpha\exists n\, f(\alpha(\overline{n})) \gt 0 \ \ \ \ \forall n\forall m (f(n) \gt 0 \rightarrow f(n\cdot m) = f(n)). \] Intuitively, if $f$ represents $\Phi$ then $f(\alpha(\overline{n}))=0$ means that $\alpha(\overline{n})$ is not long enough to compute $\Phi(\alpha)$, and $f(\alpha(\overline{n}))=m+1$ means that $\alpha(\overline{n})$ is long enough to compute $\Phi(\alpha)$ and that the value of $\Phi(\alpha)$ is $m$. If $\mathcal{K}$ denotes the class of neighborhood functions, then the continuity axiom ${\bf (C\mbox{-}N)}$ can be rephrased as \[ \forall \alpha\exists n A(\alpha,n) \rightarrow \exists f \in \mathcal{K}\, \forall m(f(m) \gt 0 \rightarrow \forall \beta \in m A(\beta,f(m-1))), \] where $\beta \in m$ means that the code of the initial segment of $\beta$ is $m$. Brouwer introduced choice sequences and the continuity axioms to capture the intuitionistic continuum, but these principles alone do not suffice to recover that part of traditional analysis that Brouwer considered intuitionistically sound, such as the theorem that every continuous real function on a closed interval is uniformly continuous. For this reason Brouwer proved the so-called bar theorem. It is a classically valid statement, but the proof Brouwer gave is by many considered to be no proof at all since it uses an assumption on the form of proofs for which no rigorous argument is provided.
This is the reason that the bar theorem is also referred to as the bar principle. The most famous consequence of the bar theorem is the fan theorem, which suffices to prove the aforementioned theorem on uniform continuity, and which will be treated first. Both the fan and the bar theorem allow the intuitionist to use induction along certain well-founded sets of objects called spreads. A spread is the intuitionistic analogue of a set, and captures the idea of infinite objects as ever growing and never finished. A spread is essentially a countably branching tree labelled with natural numbers or other finite objects and containing only infinite paths. A fan is a finitely branching spread, and the fan principle expresses a form of compactness that is classically equivalent to König's lemma, the classical proof of which is unacceptable from the intuitionistic point of view. The principle states that for every fan $T$ in which every branch at some point satisfies a property $A$, there is a uniform bound on the depth at which this property is met. Such a property is called a bar for $T$. (FAN) $\forall \alpha \in T\exists n A(\alpha(\overline{n})) \rightarrow \exists m \forall \alpha \in T \exists n \leq m A(\alpha(\overline{n})).$ Here $\alpha \in T$ means that $\alpha$ is a branch of $T$. The principle FAN suffices to prove the theorem mentioned above: Theorem (FAN) Every continuous real function on a closed interval is uniformly continuous. Brouwer's justification for the fan theorem is his bar principle for the universal spread: (BI$_D$) $\big[\forall\alpha\forall n \big( A(\alpha(\overline{n})) \vee \neg A(\alpha(\overline{n})) \big) \wedge \forall\alpha\exists n A(\alpha(\overline{n})) \wedge \forall\alpha\forall n \big( A(\alpha(\overline{n})) \rightarrow B(\alpha(\overline{n})) \big) \wedge \forall\alpha\forall n \big( \forall m B(\alpha(\overline{n})\cdot m) \rightarrow B(\alpha(\overline{n})) \big)\big] \rightarrow B(\varepsilon).$ Here $\varepsilon$ stands for the empty sequence, $\cdot$ for concatenation, BI for Bar Induction, and the subscript D refers to the decidability of the predicate $A$. The bar principle provides intuitionism with an induction principle for trees; it expresses a well-foundedness principle for spreads with respect to decidable properties. Extensions of this principle in which the decidability requirement is weakened can be extracted from Brouwer's work but will be omitted here. Continuity and the bar principle are sometimes captured in one axiom called the bar continuity axiom. There is a close connection between the bar principle and the neighborhood functions mentioned in the section on continuity axioms. Let $\mathcal{IK}$ be the inductively defined class of neighborhood functions, consisting of all constant non-zero sequences $\lambda m.n+1$, and such that if $f(0)=0$ and $\lambda m.f(x\cdot m)\in \mathcal{IK}$ for all $x$, then $f \in \mathcal{IK}$. The statement $\mathcal{K}=\mathcal{IK}$, that is, the statement that the neighborhood functions can be generated inductively, is equivalent to BI$_D$. Brouwer's proof of the bar theorem is remarkable in that it uses well-ordering properties of hypothetical proofs. It is based on the assumption that any proof that a property $A$ on sequences is a bar can be decomposed into a canonical proof that is well-ordered.
Although it is classically valid, Brouwer's proof of the principle shows that the reason for accepting it as a valid principle in intuitionism differs fundamentally from the argument supporting its acceptability in classical mathematics. The axiom of choice in its full form is unacceptable from a constructive point of view, at least in the presence of certain other central axioms of set theory, such as extensionality (Diaconescu 1975). For let $A$ be a statement that is not known to be true or false. Then membership of the following two sets is undecidable. \begin{align} X &= \{ x \in \{0,1\} \mid x=0 \vee (x=1 \wedge A) \} \\ Y &= \{ y \in \{0,1\} \mid y=1 \vee (y=0 \wedge A) \} \end{align} The existence of a choice function $f:\{X,Y\} \rightarrow \{0,1\}$ choosing an element from $X$ and $Y$ would imply $(A \vee \neg A)$. For if $f(X)\neq f(Y)$, it follows that $X\neq Y$, and hence $\neg A$, whereas $f(X)=f(Y)$ implies $A$. Therefore a choice function for $\{X,Y\}$ cannot exist. There are, however, certain restrictions of the axiom that are acceptable for the intuitionist, for example the axiom of countable choice, also accepted as a legitimate principle by the semi-intuitionists to be discussed below: (AC-N) $\forall R \subseteq \mathbb{N} \times \mathbb{N} \big( \forall m\exists n\, mRn \rightarrow \exists \alpha \in \mathbb{N}^\mathbb{N} \forall m\, mR\alpha(m) \big).$ This scheme may be justified as follows. A proof of the premise should provide a method that given $m$ provides a number $n$ such that $mRn$. Thus the function $\alpha$ on the natural numbers $\mathbb{N}$ can be constructed step-by-step: first an element $m_0$ is chosen such that $0Rm_0$, which will be the value of $\alpha(0)$. Then an element $m_1$ is chosen such that $1Rm_1$, which will be the value of $\alpha(1)$, and so on. Several other choice axioms can be justified in a similar way. Only one more will be mentioned here, the axiom of dependent choice: (DC-N) $\forall R \subseteq \mathbb{N} \times \mathbb{N} \big( \forall m\exists n\, mRn \rightarrow $ $\forall k \exists \alpha \in \mathbb{N}^\mathbb{N} \big( \alpha(0)=k \wedge \forall i\geq 0\, \alpha(i)R\alpha(i+1) \big) \big).$ Also in classical mathematics the choice axioms are treated with care, and it is often explicitly mentioned how much choice is needed in a proof. Since the axiom of dependent choice is consistent with an important axiom in classical set theory (the axiom of determinacy) while the full axiom of choice is not, special attention is payed to this axiom and in general one tries to reduce the amount of choice in a proof, if choice is present at all, to dependent choice. Brouwer was not alone in his doubts concerning certain classical forms of reasoning. This is particularly visible in descriptive set theory, which emerged as a reaction to the highly nonconstructive notions occurring in Cantorian set theory. The founding fathers of the field, including Émile Borel and Henri Lebesgue as two of the main figures, were called semi-intuitionists, and their constructive treatment of the continuum led to the definition of the Borel hierarchy. From their point of view a notion like the set of all sets of real numbers is meaningless, and therefore has to be replaced by a hierarchy of subsets that do have a clear description. 
In Veldman 1999, an intuitionistic equivalent of the notion of Borel set is formulated, and it is shown that classically equivalent definitions of the Borel sets give rise to a variety of intuitionistically distinct classes, a situation that often occurs in intuitionism. For the intuitionistic Borel sets an analogue of the Borel Hierarchy Theorem is intuitionistically valid. The proof of this fact makes essential use of the continuity axioms discussed above and thereby shows how classical mathematics can guide the search for intuitionistic analogues that, however, have to be proved in a completely different way, sometimes using principles unacceptable from a classical point of view. Another approach to the study of subsets of the continuum, or of a topological space in general, has appeared through the development of formal or abstract topology (Fourman 1982, Martin-Löf 1970, Sambin 1987). In this constructive topology the role of open sets and points is reversed; in classical topology an open set is defined as a certain set of points, in the constructive case open sets are the fundamental notion and points are defined in terms of them. Therefore this approach is sometimes referred to as point-free topology. Intuitionistic functional analysis has been developed far and wide by many after Brouwer, but since most approaches are not strictly intuitionistic but also constructive in the wider sense, this research will not be addressed any further here. Intuitionism shares a core part with most other forms of constructivism. Constructivism in general is concerned with constructive mathematical objects and reasoning. From constructive proofs one can, at least in principle, extract algorithms that compute the elements and simulate the constructions whose existence is established in the proof. Most forms of constructivism are compatible with classical mathematics, as they are in general based on a stricter interpretation of the quantifiers and the connectives and the constructions that are allowed, while no additional assumptions are made. The logic accepted by almost all constructive communities is the same, namely intuitionistic logic. Many existential theorems in classical mathematics have a constructive analogue in which the existential statement is replaced by a statement about approximations. We saw an example of this, the intermediate value theorem, in the section on weak counterexamples above. Large parts of mathematics can be recovered constructively in a similar way. The reason not to treat them any further here is that the focus in this entry is on those aspects of intuitionism that set it apart from other constructive branches of mathematics. For a thorough treatment of constructivism the reader is referred to the corresponding entry in this encyclopedia. Although Brouwer developed his mathematics in a precise and fundamental way, formalization in the sense as we know it today was only carried out later by others. Indeed, according to Brouwer's view that mathematics unfolds itself internally, formalization, although not unacceptable, is unnecessary. Others after him thought otherwise, and the formalization of intuitionistic mathematics and the study of its meta-mathematical properties, in particular of arithmetic and analysis, have attracted many researchers. The formalization of intuitionistic logic on which all formalizations are based has already been treated above. 
Heyting Arithmetic HA as formulated by Arend Heyting is a formalization of the intuitionistic theory of the natural numbers (Heyting 1956). It has the same non-logical axioms as Peano Arithmetic PA but it is based on intuitionistic logic. Thus it is a restriction of classical arithmetic, and it is the accepted theory of the natural numbers in almost all areas of constructive mathematics. Heyting Arithmetic has many properties that reflect its constructive character, for example the Disjunction Property that holds for intuitionistic logic too. Another property of HA that PA does not share is the numerical existence property: ($\overline{n}$ is the numeral corresponding to natural number $n$) (NEP) ${\bf HA} \vdash \exists x A(x) \Rightarrow \exists n \in {\mathbb N} \, {\bf HA} \vdash A(\overline{n}).$ That this property does not hold in PA follows from the fact that PA proves $\exists x (A(x) \vee \forall y \neg A(y))$. Consider, for example, the case that $A(x)$ is the formula $T(e,e,x)$, where $T$ is the decidable Kleene predicate expressing that $x$ is the code of a terminating computation of the program with code $e$ on input $e$. If for every $e$ there would exist a number $n$ such that ${\bf PA}\vdash T(e,e,n) \vee \forall y \neg T(e,e,y)$, then by checking whether $T(e,e,n)$ holds it would be decided whether a program $e$ terminates on input $e$. This, however, is in general undecidable. Markov's rule is a principle that holds both classically and intuitionistically, but only for HA the proof of this fact is nontrivial: (MR) $ {\bf HA} \vdash \forall x (A(x) \vee \neg A(x)) \wedge \neg\neg\exists x A(x) \Rightarrow {\bf HA} \vdash \exists x A(x).$ Since HA proves the law of the excluded middle for every primitive recursive predicate, it follows that for such $A$ the derivability of $\neg\neg \exists x A(x)$ in HA implies the derivability of $\exists x A(x)$ as well. From this it follows that PA is $\Pi^0_2$-conservative over HA. That is, for primitive recursive $A$: \[ {\bf PA} \vdash \forall x \exists y A(x,y) \Rightarrow {\bf HA} \vdash \forall x \exists y A(x,y). \] Thus the class of provably recursive functions of HA coincides with the class of provably recursive functions of PA, a property that, on the basis of the ideas underlying constructivism and intuitionism, may not come as a surprise. The formalization of intuitionistic mathematics covers more than arithmetic. Large parts of analysis have been axiomatized from a constructive point of view (Kleene 1965, Troelstra 1973). The constructivity of these systems can be established using functional, type theoretic, or realizability interpretations, most of them based on or extensions of Gödel's Dialectica interpretation (Gödel 1958, Kreisel 1959), Kleene realizability (Kleene 1965), or type theories (Martin-Löf 1984). In these interpretations the functionals underlying constructive statements, such as for example the function assigning a $y$ to every $x$ in $\forall x\exists y A(x,y)$, are made explicit in various ways. In Scott 1968 and 1970, a topological model for the second-order intuitionistic theory of analysis is presented where the reals are interpreted as continuous functions from Baire space into the classical reals. In this model Kripke's schema as well as certain continuity axioms hold. In Moschovakis 1973, this method is adapted to construct a model of theories of intuitionistic analysis in terms of choice sequences. Also in this model Kripke's schema and certain continuity axioms hold. 
In van Dalen 1978 Beth models are used to provide a model of arithmetic and choice sequences that satisfy choice schemata, instances of weak continuity and Kripke's schema. In this model the domains at every node are the natural numbers, so that one does not have to use nonstandard models, as in the case of Kripke models. Moreover, the axioms CS1–3 of the creating subject can be interpreted in it, thus showing this theory to be consistent. There exist axiomatizations of the lawless sequences, and they all contain extensions of the continuity axioms (Kreisel 1968, Troelstra 1977), in particular in the form of the Axiom of Open Data, stating that for $A(\alpha)$ not containing other nonlawlike parameters besides $\alpha$: \[ A(\alpha) \rightarrow \exists n \forall \beta \in \alpha (\overline{n}) A(\beta). \] In Troelstra 1977, a theory of lawless sequences is developed (and justified) in the context of intuitionistic analysis. Besides axioms for elementary analysis it contains, for lawless sequences, strengthened forms of the axioms of open data, continuity, decidability and density (density says that every finite sequence is the initial segment of a lawless sequence). What is especially interesting is that in these theories quantifiers over lawless sequences can be eliminated, a result that can also be viewed as providing a model of lawlike sequences for such theories. Other classical models of the theory of lawless sequences have been constructed in category theory in the form of sheaf models (van der Hoeven and Moerdijk 1984). In Moschovakis 1986, a theory for choice sequences relative to a certain set of lawlike elements is introduced, along with a classical model in which the lawless sequences turn out to be exactly the generic ones. 5.4 Foundations and models Formalizations that are meant to serve as a foundation for constructive mathematics are either of a set-theoretic (Aczel 1978, Myhill 1975) or type-theoretic (Martin-Löf 1984) nature. The former theories are adaptations of Zermelo-Fraenkel set theory to a constructive setting, while in type theory the constructions implicit in constructive statements are made explicit in the system. Set theory could be viewed as an extensional foundation of mathematics whereas type theory is in general an intensional one. In recent years many models of parts of such foundational theories for intuitionistic mathematics have appeared, some of which have been mentioned above. Especially in topos theory (van Oosten 2008) there are many models that capture certain characteristics of intuitionism. There are, for example, topoi in which all total real functions are continuous. Functional interpretations such as realizability as well as interpretations in type theory could also be viewed as models of intuitionistic mathematics and most other constructive theories. In reverse mathematics one tries to establish for mathematical theorems which axioms are needed to prove them. In intuitionistic reverse mathematics one has a similar aim, but now with respect to intuitionistic theorems: working over a weak intuitionistic theory, axioms and theorems are compared to each other. The typical axioms with which one wishes to compare theorems are the fan principle and the bar principle, Kripke's schema and the continuity axioms. In Veldman 2011 (Other Internet Resources), equivalents of the fan principle over a basic theory called Basic Intuitionistic Mathematics are studied.
It is shown that the fan principle is equivalent to the statement that the unit interval [0,1] has the Heine-Borel property, and from this many other equivalents are derived. In Veldman 2009, the fan principle is shown to also be equivalent to Brouwer's Approximate Fixed-Point Theorem. In Lubarsky et al. 2012, reverse mathematics is applied to a form of Kripke's schema, which is shown to be equivalent to certain topological statements. There are many more such examples from intuitionistic reverse mathematics. Especially in the larger field of constructive reverse mathematics there are many results of this nature that are also relevant from the intuitionistic point of view.
Bibliography
Aczel, P., 1978, 'The type-theoretic interpretation of constructive set theory,' in Logic Colloquium '77, A. Macintyre, L. Pacholski, J. Paris (eds.), North-Holland.
van Atten, M., 2004, On Brouwer, (Wadsworth Philosophers Series), Belmont: Wadsworth/Thomson Learning.
–––, 2007, Brouwer meets Husserl (On the phenomenology of choice sequences), Dordrecht: Springer.
–––, 2008, 'On the hypothetical judgement in the history of intuitionistic logic,' in Logic, Methodology, and philosophy of science XIII: Proceedings of the 2007 International Congress in Beijing, C. Glymour and W. Wang and D. Westerståhl (eds.), London: King's College Publications.
van Atten, M. and D. van Dalen, 2002, 'Arguments for the continuity principle,' Bulletin of Symbolic Logic, 8(3): 329–374.
Beth, E.W., 1956, 'Semantic construction of intuitionistic logic,' KNAW Afd. Let. Med., Nieuwe serie, 19/11: 357–388.
Brouwer, L.E.J., 1975, Collected works I, A. Heyting (ed.), Amsterdam: North-Holland.
–––, 1976, Collected works II, H. Freudenthal (ed.), Amsterdam: North-Holland.
–––, 1905, Leven, kunst en mystiek, Delft: Waltman.
–––, 1907, Over de grondslagen der wiskunde, Ph.D. Thesis, University of Amsterdam, Department of Physics and Mathematics.
–––, 1912, 'Intuïtionisme en formalisme', Inaugural address at the University of Amsterdam, 1912. Also in Wiskundig tijdschrift, 9, 1913.
–––, 1925, 'Zur Begründung der intuitionistischen Mathematik I,' Mathematische Annalen, 93: 244–257.
–––, 1925, 'Zur Begründung der intuitionistischen Mathematik II,' Mathematische Annalen, 95: 453–472.
–––, 1948, 'Essentially negative properties', Indagationes Mathematicae, 10: 322–323.
–––, 1952, 'Historical background, principles and methods of intuitionism,' South African Journal of Science, 49 (October-November): 139–146.
–––, 1953, 'Points and Spaces,' Canadian Journal of Mathematics, 6: 1–17.
–––, 1981, Brouwer's Cambridge lectures on intuitionism, D. van Dalen (ed.), Cambridge: Cambridge University Press.
–––, 1992, Intuitionismus, D. van Dalen (ed.), Mannheim: Wissenschaftsverlag.
Brouwer, L.E.J. and C.S. Adama van Scheltema, 1984, Droeve snaar, vriend van mij – Brieven, D. van Dalen (ed.), Amsterdam: Uitgeverij de Arbeiderspers.
Coquand, T., 1995, 'A constructive topological proof of van der Waerden's theorem,' Journal of Pure and Applied Algebra, 105: 251–259.
van Dalen, D., 1978, 'An interpretation of intuitionistic analysis', Annals of Mathematical Logic, 13: 1–43.
–––, 1997, 'How connected is the intuitionistic continuum?,' Journal of Symbolic Logic, 62(4): 1147–1150.
–––, 1999/2005, Mystic, geometer and intuitionist, Volumes I (1999) and II (2005), Oxford: Clarendon Press.
–––, 2001, L.E.J. Brouwer (een biografie), Amsterdam: Uitgeverij Bert Bakker.
–––, 2004, 'Kolmogorov and Brouwer on constructive implication and the Ex Falso rule' Russian Math Surveys, 59: 247–257. van Dalen, D. (ed.), 2001, L.E.J. Brouwer en de grondslagen van de wiskunde, Utrecht: Epsilon Uitgaven. Diaconescu, R., 1975, 'Axiom of choice and complementation,' in Proceedings of the American Mathematical Society, 51: 176–178. Fourman, M., and R. Grayson, 1982, 'Formal spaces,' in The L.E.J. Brouwer centenary symposium, A.S. Troelstra and D. van Dalen (eds.), Amsterdam: North-Holland. Gentzen, G., 1934, 'Untersuchungen über das logische Schließen I, II,' Mathematische Zeitschrift, 39: 176–210, 405–431. Gödel, K., 1958, 'Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes,' Dialectia, 12: 280–287. Heyting, A., 1930, 'Die formalen Regeln der intuitionistischen Logik,' Sitzungsberichte der Preussischen Akademie von Wissenschaften. Physikalisch-mathematische Klasse, 42–56. –––, 1956, Intuitionism, an introduction, Amsterdam: North-Holland. van der Hoeven, G., and I. Moerdijk, 1984, 'Sheaf models for choice sequences,' Annals of Pure and Applied Logic, 27: 63–107. Kleene, S.C., and R.E. Vesley, 1965, The foundations of intuitionistic mathematics, Amsterdam: North-Holland. Kreisel, G., 1959, 'Interpretation of analysis by means of constructive functionals of finite type,' in Constructivity in mathematics, A. Heyting (ed.), Amsterdam: North-Holland. –––, 1962, 'On weak completeness of intuitionistic predicate logic,' Journal of Symbolic Logic, 27: 139–158. –––, 1968, 'Lawless sequences of natural numbers,' Compositio Mathematica, 20: 222–248. Kripke, S.A., 1965, 'Semantical analysis of intuitionistic logic', in Formal systems and recursive functions, J. Crossley and M. Dummett (eds.), Amsterdam: North-Holland. Lubarsky, R., F. Richman, and P. Schuster 2012, 'The Kripke schema in metric topology', Mathematical Logic Quarterly, 58(6): 498–501. Maietti, M.E., and G. Sambin, 2007, 'Toward a minimalist foundation for constructive mathematics,' in From sets and types to topology and analysis: toward a minimalist foundation for constructive mathematics, L. Crosilla and P. Schuster (eds.), Oxford: Oxford University Press. Martin-Löf, P., 1970, Notes on constructive mathematics, Stockholm: Almqvist & Wiskell. –––, 1984, Intuitionistic type theory, Napoli: Bibliopolis. Moschovakis, J.R., 1973, 'A topological interpretation of second-order intuitionistic arithmetic,' Compositio Mathematica, 26(3): 261–275. –––, 1986, 'Relative lawlessness in intuitionistic analysis,' Journal of Symbolic Logic, 52(1): 68–87. Myhill, J., 1975, 'Constructive set theory,' Journal of Symbolic Logic, 40: 347–382. Niekus, J., 2010,'Brouwer's incomplete objects' History and Philosophy of Logic 31: 31–46. van Oosten, J., 2008, Realizability: An introduction to its categorical side, (Studies in Logic and the Foundations of Mathematics: Volume 152), Amsterdam: Elsevier. Sambin, G., 1987, 'Intuitionistic formal spaces,' in Mathematical Logic and its Applications, D. Skordev (ed.), New York: Plenum. Scott, D., 1968, 'Extending the topological interpretation to intuitionistic analysis,' Compositio Mathematica, 20: 194–210. –––, 1970, 'Extending the topological interpretation to intuitionistic analysis II', in Intuitionism and proof theory, J. Myhill, A. Kino, and R. Vesley (eds.), Amsterdam: North-Holland. Tarski, A., 1938, 'Der Aussagenkalkül und die Topologie,' Fundamenta Mathematicae, 31: 103–134. 
Troelstra, A.S., 1973, Metamathematical investigations of intuitionistic arithmetic and analysis, (Lecture Notes in Mathematics: Volume 344), Berlin: Springer. –––, 1977, Choice sequences (Oxford Logic Guides), Oxford: Clarendon Press. Troelstra, A.S., and D. van Dalen, 1988, Constructivism I and II, Amsterdam: North-Holland. Veldman, W., 1976, 'An intuitionistic completeness theorem for intuitionistic predicate logic,' Journal of Symbolic Logic, 41(1): 159–166. –––, 1999, 'The Borel hierarchy and the projective hierarchy in intuitionistic mathematics,' Report Number 0103, Department of Mathematics, University of Nijmegen. [available online] –––, 2004, 'An intuitionistic proof of Kruskal's theorem,' Archive for Mathematical Logic, 43(2): 215–264. –––, 2009, 'Brouwer's Approximate Fixed-Point Theorem is Equivalent to Brouwer's Fan Theorem,' in Logicism, Intuitionism, and Formalism - Synthese Library 341, S. Lindström, E. Palmgren, K. Segerberg, V. Stoltenberg-Hansen (eds.): 277–299. –––, 2014, 'Brouwer's Fan Theorem as an axiom and as a contrast to Kleene's Alternative,' in Archive for Mathematical Logic, to appear. Weyl, H., 1921, 'Über die neue Grundlagenkrise der Mathematik,' Mathematische Zeitschrift, 10: 39–70. Luitzen Egbertus Jan Brouwer, biography at Mac Tutor History of Mathematics website. Brouwer, Luitzen Egbertus Jan | category theory | choice, axiom of | Gödel, Kurt | Hilbert, David | Hilbert, David: program in the foundations of mathematics | logic, history of: intuitionistic logic | logic: classical | logic: intuitionistic | mathematics, philosophy of: formalism | mathematics: constructive | phenomenology | Platonism: in metaphysics | set theory | type theory I thank Sebastiaan Terwijn, Mark van Atten, and an anonymous referee for their useful comments on an earlier draft of this entry. Rosalie Iemhoff <[email protected]>
Generalized bootstrap for estimating equations Snigdhansu Chatterjee and Arup Bose More by Snigdhansu Chatterjee More by Arup Bose We introduce a generalized bootstrap technique for estimators obtained by solving estimating equations. Some special cases of this generalized bootstrap are the classical bootstrap of Efron, the delete-d jackknife and variations of the Bayesian bootstrap. The use of the proposed technique is discussed in some examples. Distributional consistency of the method is established and an asymptotic representation of the resampling variance estimator is obtained. Ann. Statist., Volume 33, Number 1 (2005), 414-436. First available in Project Euclid: 8 April 2005 Primary: 62G09: Resampling methods 62E20: Asymptotic distribution theory Secondary: 62G05: Estimation 62F12: Asymptotic properties of estimators 62F40: Bootstrap, jackknife and other resampling methods 62M99: None of the above, but in this section Estimating equations resampling generalized bootstrap jackknife Bayesian bootstrap wild bootstrap paired bootstrap M-estimation nonlinear regression generalized linear models dimension asymptotics Chatterjee, Snigdhansu; Bose, Arup. Generalized bootstrap for estimating equations. Ann. Statist. 33 (2005), no. 1, 414--436. doi:10.1214/009053604000000904. https://projecteuclid.org/euclid.aos/1112967711 Basawa, I. V., Godambe, V. P. and Taylor, R. L., eds. (1997). Selected Proceedings of the Symposium on Estimating Functions. IMS, Hayward, CA. Bickel, P. J. and Freedman, D. A. (1983). Bootstrapping regression models with many parameters. In A Festschrift for Erich L. Lehmann (P. J. Bickel, K. A. Doksum and J. L. Hodges, Jr., eds.) 28--48. Wadsworth, Belmont, CA. Borovskikh, Yu. V. and Korolyuk, V. S. (1997). Martingale Approximation. VSP, Utrecht. Bose, A. and Chatterjee, S. (2002). Comparison of bootstrap and jackknife variance estimators in linear regression: Second order results. Statist. Sinica 12 575--598. Bose, A. and Kushary, D. (1996). Jackknife and weighted jackknife estimation of the variance of $M$-estimators in linear regression. Technical Report 96-12, Dept. Statistics, Purdue Univ. Chatterjee, S. (1999). Generalised bootstrap techniques. Ph.D. dissertation, Indian Statistical Institute, Calcutta. Efron, B. (1979). Bootstrap methods: Another look at the jackknife. Ann. Statist. 7 1--26. Ferguson, T. S. (1996). A Course in Large Sample Theory. Chapman and Hall, London. Freedman, D. A. and Peters, S. C. (1984). Bootstrapping a regression equation: Some empirical results. J. Amer. Statist. Assoc. 79 97--106. Godambe, V. P., ed. (1991). Estimating Functions. Clarendon, Oxford. Hu, F. (2001). Efficiency and robustness of a resampling $M$-estimator in the linear model. J. Multivariate Anal. 78 252--271. Digital Object Identifier: doi:10.1006/jmva.2000.1951 Hu, F. and Kalbfleisch, J. D. (2000). The estimating function bootstrap (with discussion). Canad. J. Statist. 28 449--499. Huet, S., Bouvier, A., Gruet, M. and Jolivet, E. (1996). Statistical Tools for Nonlinear Regression. Springer, New York. Lahiri, S. N. (1992). Bootstrapping $M$-estimators of a multiple linear regression parameter. Ann. Statist. 20 1548--1570. Lele, S. (1991). Resampling using estimating functions. In Estimating Functions (V. P. Godambe, ed.) 295--304. Clarendon, Oxford. Liu, R. Y. and Singh, K. (1992). Efficiency and robustness in resampling. Ann. Statist. 20 370--384. Lo, A. Y. (1991). Bayesian bootstrap clones and a biometry function. Sankhyā Ser. A 53 320--333. Mammen, E. (1989). 
Asymptotics with increasing dimension for robust regression with applications to the bootstrap. Ann. Statist. 17 382--400. Mammen, E. (1992). When Does Bootstrap Work? Asymptotic Results and Simulations. Lecture Notes in Statist. 77. Springer, New York. Mammen, E. (1993). Bootstrap and wild bootstrap for high-dimensional linear models. Ann. Statist. 21 255--285. Myers, R. H., Montgomery, D. C. and Vining, G. G. (2002). Generalized Linear Models. Wiley, New York. Newton, M. A. and Raftery, A. E. (1994). Approximate Bayesian inference with the weighted likelihood bootstrap (with discussion). J. Roy. Statist. Soc. Ser. B 56 3--48. Ortega, J. M. and Rheinboldt, W. C. (1970). Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York. Præstgaard, J. and Wellner, J. A. (1993). Exchangeably weighted bootstraps of the general empirical process. Ann. Probab. 21 2053--2086. Rao, C. R. and Zhao, L. C. (1992). Approximation to the distribution of $M$-estimates in linear models by randomly weighted bootstrap. Sankhyā Ser. A 54 323--331. Rubin, D. B. (1981). The Bayesian bootstrap. Ann. Statist. 9 130--134. Serfling, R. J. (1980). Approximation Theorems of Mathematical Statistics. Wiley, New York. Wu, C.-F. J. (1986). Jackknife, bootstrap and other resampling methods in regression analysis (with discussion). Ann. Statist. 14 1261--1350. Zheng, Z. and Tu, D. (1988). Random weighting method in regression models. Sci. Sinica Ser. A 31 1442--1459.
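To make the weighted-resampling idea in the abstract above concrete, the following is a minimal, hypothetical sketch (not code from the paper; the estimating equation, data, and weight choices are illustrative only) of a generalized bootstrap for a one-parameter estimating equation, with Efron's multinomial weights and Bayesian-bootstrap Dirichlet weights as two special cases of the random weighting scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_weighted_ee(x, w):
    # Estimating equation sum_i w_i * (x_i - theta) = 0, i.e. a weighted mean.
    return np.sum(w * x) / np.sum(w)

def generalized_bootstrap(x, n_boot=2000, scheme="bayesian"):
    n = len(x)
    theta_hat = solve_weighted_ee(x, np.ones(n))      # original estimator (all weights 1)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        if scheme == "efron":
            # Classical bootstrap: multinomial resampling counts used as weights.
            w = rng.multinomial(n, np.full(n, 1.0 / n)).astype(float)
        else:
            # Bayesian bootstrap: Dirichlet(1, ..., 1) weights, rescaled to sum to n.
            w = rng.dirichlet(np.ones(n)) * n
        reps[b] = solve_weighted_ee(x, w)
    return theta_hat, reps.var(ddof=1)                # resampling variance estimator

x = rng.normal(loc=5.0, scale=2.0, size=200)
theta_hat, v_boot = generalized_bootstrap(x)
print(theta_hat, v_boot, x.var(ddof=1) / len(x))      # compare with the usual sigma^2/n estimate
```

Changing the distribution of the weights changes the resampling scheme, while the weighted estimating equation is solved in the same way each time; this is the sense in which the classical bootstrap, the Bayesian bootstrap and related schemes are special cases of one construction.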
Would this experiment disprove that consciousness causes collapse? Thread starter: john taylor A double slit experiment is taking place. There are detectors placed inside both of the slits. On the first run, if a particle travels through one of the slits, the detector registers that it has detected a particle, but doesn't specify which slit it has travelled through (to the conscious experimenter). Moreover, the potential for having knowledge as to which slit the particle went through is totally removed, since the detector is set up in such a manner that it is only possible to know that a particle was detected and not which slit it went through. On the second run of the experiment, however, the potential for the experimenter to know which slit it went through is given, by the detector revealing which slit it went through (which would in turn give the experimenter the potential to know which slit). If there is no interference pattern on the second run, but there is one on the first run, this would imply that consciousness causes collapse, since it shows that when the potential for human knowledge is there the particle collapses to an eigenstate, as opposed to not collapsing to an eigenstate on the first run, when a measurement is being made but without the potential for a conscious observer to know the result. Would this prove or disprove (in principle) that consciousness causes collapse? Mentz114 The detector, if it gives enough information for location, will cause decoherence which inhibits the interference. No consciousness required. DrChinese john taylor said: On the first run, if a particle travels through one of the slits, the detector registers that it has detected a particle, but doesn't specify which slit it has travelled through (to the conscious experimenter). Moreover, the potential for having knowledge as to which slit the particle went through is totally removed, since the detector is set up in such a manner that it is only possible to know that a particle was detected and not which slit it went through. Actually, there are versions of the double slit in which this occurs. There is NO interference pattern, essentially proving that human consciousness does NOT cause collapse. http://sciencedemonstrations.fas.harvard.edu/files/science-demonstrations/files/single_photon_paper.pdf "An apparatus for a double-slit interference experiment in the single-photon regime is described. The apparatus includes a which-path marker that destroys the interference... " The markers are never examined to determine which slit the particle traversed, and yet the interference disappears. When the marker is eliminated, interference is restored. However, the apparatus itself is the same in both versions; the marker might be considered "virtual". StevieTNZ Would this prove or disprove (in principle) that consciousness causes collapse? atyy No, as @StevieTNZ said. DrChinese said: This is wrong, because there is a "collapse" even if there is no interference pattern. Or to be more strictly correct, there is no collapse in either version of the experiment, whether there is an interference pattern or not. atyy said: Not really sure what you mean here. Perhaps it is nomenclature. Usually, in a double slit setup, collapse refers to there being a single slit being traversed unambiguously. Obviously, there is also a degree of collapse when the particle is detected on the screen as well.
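A toy calculation makes the point about the unread marker quantitative (a sketch only; the model and numbers are illustrative, not those of the Harvard apparatus linked above): the interference term on the screen is weighted by the overlap of the two marker states, so orthogonal marker states remove the fringes whether or not anyone ever reads the marker.

```python
import numpy as np

# Toy model of a which-path marker: the joint state behind the slits is
#   (|L>|m_L> + |R>|m_R>) / sqrt(2).
# Tracing out the marker, the screen intensity is
#   I(x) = ( |psi_L(x)|^2 + |psi_R(x)|^2 ) / 2 + Re[ psi_L(x) * conj(psi_R(x)) * <m_R|m_L> ],
# so the fringe visibility is controlled by the marker overlap <m_R|m_L>,
# independently of whether the marker is ever read out.

def screen_pattern(x, d=2.0, sigma=1.0, k=10.0, marker_overlap=1.0):
    psi_L = np.exp(-((x - d / 2) ** 2) / (4 * sigma ** 2)) * np.exp(1j * k * x)
    psi_R = np.exp(-((x + d / 2) ** 2) / (4 * sigma ** 2)) * np.exp(-1j * k * x)
    return (np.abs(psi_L) ** 2 + np.abs(psi_R) ** 2) / 2 \
        + np.real(psi_L * np.conj(psi_R) * marker_overlap)

x = np.linspace(-4.0, 4.0, 801)
fringes = screen_pattern(x, marker_overlap=1.0)      # identical marker states: interference
no_fringes = screen_pattern(x, marker_overlap=0.0)   # orthogonal marker states: no interference
print(fringes.max() - fringes.min(), no_fringes.max() - no_fringes.min())
```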
In the standard interpretation (with respect to which the OP question makes sense, since in this interpretation measurement has a special status and collapse is defined) collapse is used when describing the probabilities of measurement outcomes when there are successive measurements. A measurement is when there is a definite outcome; equivalently it is used when one applies the Born rule. I should also say I am not fond of the terminology of "which path" experiments. In the standard interpretation, particles are not assigned paths. In the Bohmian interpretation, they are - but the "which path" language does not use the Bohmian interpretation. Instead it uses an influential "which path" terminology developed by Scully and colleagues, which is not part of the standard interpretation, and is flawed. [quant-ph/0010020] Quantum trajectories, real, surreal or an approximation to a deeper process? "In this paper we have shown that the claim that we can meaningfully talk about particle trajectories in an interferometer such as the one shown in figure 1 within quantum mechanics made by ESSW2 [8] and by Scully [9] does not follow from the standard (Copenhagen) interpretation. An additional assumption must be made, namely, that the cavity and the atom can only exchange energy when the atom actually passes through the cavity. Here the position of the particle becomes an additional parameter, which supplements the wave function and therefore is not part of the orthodox interpretation. Furthermore we have shown that this way of introducing the position coordinate leads to a contradiction as we are forced to conclude that although the atom follows one path, it behaves as if it went down both paths." See also: https://advances.sciencemag.org/content/2/2/e1501466 Also nice: https://arxiv.org/abs/1707.07884 Reactions: Mentz114 This paper by Fankhauser is funny ;-)). He envokes the collapse hypothesis in a way just to prove his proposition (in my opinion nevertheless very valid point that nothing is retrocausal in the quantum-eraser experiments). This just solidifies my prejudice against any kind of collapse interpretation. Since it's an unnecessary unsharply defined esoterical happening in the minds of philsophers, which is not of course not described nor describable mathematically within the formalism, you can massage it in any way you like. In the minimal interpretation there's no problem whatsoever, of course. It's just a wise advice by the author to stop reading at the end of Sect. 2. Then you are condemned as an "instrumentalist", but so what? It's the only consistent interpretation without adhoc esoterics like collapses or Bohmian paths, where the Bohmian theory cannot even consistently formulated (namely for photons which are massless and thus as much relativistic as anything can get and have no well-defined position observable to begin with). All there is, is quantum entanglement describing correlations due to state preparation, which are not possible in any local deterministic hidden variable theory (according to Bell). Since the correlations are due to state preparation, i.e., before the measurement procedure on the so prepared object starts, there's neither an instantaneous action at a distance nor any possibility for retrocausation. 
As the author correctly states, the selection of whether you want to realize the "which-way-information" setup or the "interference-pattern setup" can be made after the experiment has done and all photons are gone by just choosing the corresponding subensembles of photon pairs according to the appropriate detection events as fixed in the measurement protocol. There's nothing mysterious in this within the minimal interpretation: One just has to accept that quantum theory has taught us more details about how nature works than our "common sense" built solely from our every-day experience with macroscopic objects which almost always seeem to follow the rules of classical physics (except the quantum specific fact that there's stable matter at all ;-)), which they do because we don't (and often cannot) look close enough but only look at coarse-grained macroscopic observables usually not resolving even the quantum and and even the much larger thermal fluctuations. This paper by Fankhauser is funny ;-)). He envokes the collapse hypothesis in a way just to prove his proposition (in my opinion nevertheless very valid point that nothing is retrocausal in the quantum-eraser experiments). The collapse interpretation is instrumentalist - everything is the same whether one takes collapse to be real or not real - but it is a required mathematical operation. He simply shows that there is nothing retrocausal in the standard instrumentalist interpretation. Reactions: vanhees71 and Demystifier Ok, I can accept this, but I still don't get why you always insist on the collapse. What's done in this eraser exp. doesn't anywhere need a collapse: (a) Preparation (defining the state of the two photons): laser light excites an appropriate birefringent crystal which fluoresces a pair of polarization (and momentum) entangled photons (parametric down conversion). Nowhere has anything collapsed. (b) You register photons at D0 either (b.1) coincidently with D3 or D4, either of which implies that you have which-way information, and no interference pattern is found at D0 (b.2) coincidently with D1 or D2, either of which implies that there's no which-way information, and the double-slit interference pattern is found at D0 Just looking at D0 does never produce an interference pattern, and just the choice of the partial ensemble according to (b.1) or (b.2) lead to either which-way information and no interference pattern or no which-way information and an interference pattern. All photons at D0 together don't give the interference pattern. There has nothing collapsed either (except when the photons are registered, where you might call the absorption of a photon in the detector a "collapse", but it's nothing else than the interaction of photons with (the electrons of) the matter making up the detector, which is not outside of QT either). The fact that you can choose whether you have which-way information about the photon registered at D0 or not via manipulations of the other photon is not caused by any interactions of either photon with the equipment but by the preparation of this photon pair by parametric down conversion in the crystal, and the correlations needed are due to these correlations. You could even just make a measurement protocol, where you consider all events at registering a photon at D0 and also marking which of the Detectors D1-D4 has clicked coincidently (and you only register such events). 
Then you can reconstruct the interference pattern of photons registered by just looking at events where the 2nd photon hit D1 (or D2) or get which-way information by just looking at events where the 2nd photon hit D3 (or D4). It is also clear that the detection of the one photon at D1-D4 doesn't do anything causally with the other photon registered at D0. You can, e.g., put the part of manipulating and detecting the 2nd photon as far away from D0 as you want. As long you make sure that nothing disturbes the photon on its way to the equipment, i.e., no decoherence happens, you still can "post-select" whether you want which-way information or an interference pattern by the above defined coincidence measurements. So there's no collapse needed to explain the results of the experiment and the possibility of post-selection or erasure of which-way information. It's just the simple fact that there are correlations which are described by the entanglement of the two photons. Skipping the intermediate discussion, I would repeat: the Rueckner/Peidle reference (#3) demonstrates clearly that human consciousness is not required to cause the double slit interference to disappear. The relative settings of the polarizers is the only thing that is varied to create or eliminate the interference. Yes, when the registrations are coincident, there is no need for collapse. But if the registrations are not coincident, then the first measurement will collapse the state. It's exactly the same as in spacelike separated Bell tests. Because of the spacelike separation, there is a frame in which Alice and Bob measure at the same time - in this frame there is no collapse. In other frames, the first measurement collapses the state. So collapse is frame-dependent. Why is there need for collapse if you simply perform another measurement, i.e., not a coincidence measurement? Then there's even nothing to be surprised about, because then you simply measure all photons arriving at D0, and as QT predicts, there's no interference pattern there. Your entire posting contradicts itself: Indeed if the registration events at A and B's sites are space-like separated there cannot by construction be any causal connection between these events. You explained this yourself by the fact that you can always find reference frames, where the causal order is the opposite from that in the original one and also one where both events are at the same coordinate time of this frame. This clearly shows that the events cannot be causally connected. Now also the usual relativistic QFT (of which QED is the paradigmatic example) without the unnecessary additional collapse postulates does not contradict this causality structure built in in the relativistic spacetime model (Minkowski space). The QFT is tailored such that this cannot happen, because any local observables (i.e., a operators ##\hat{O}_k(x)=\hat{O}_k(t,\vec{x})## built from the fundamental field operators) commute at spacelike separations of their arguments. Particularly that's the case for the interaction-Hamilton density between photons and charged particles and thus the detector material. Thus this interaction cannot cause anything at space-like distances and thus no causal effect spreads larger than the speed of light. This is by construction! Thus the collapse hypothesis bluntly contradicts the theory it is supposed to interpret. This makes no sense, and it is not necessary here: The correlations described by the entangled state are due to a (also local!) 
interaction preparing the entangled two-photon state in the very beginning of the experiment, i.e., before any of the detector clicks, and here indeed before means that the "preparation event" and all detector-click events are really time-like separated, and this is a true cause for the observed correlations, i.e., that the two photons were entangled before any measurement was done on them by A and B at their sites. So there's no need to explain through a mysterious collapse the findings of the corresponding correlations in coincidence experiments, and you can only find them by doing coincidence experiments. The commutation of spacelike separated observables means that no information can be transmitted faster than light. The collapse is consistent with this, as it does not allow information to be transmitted faster than light. But if the registrations are not coincident, then the first measurement will collapse the state. It's exactly the same as in spacelike separated Bell tests. Because of the spacelike separation, there is a frame in which Alice and Bob measure at the same time - in this frame there is no collapse. In other frames, the first measurement collapses the state. So collapse is frame-dependent. I'm gonna disagree here. There is no evidence that the first measurement causes collapse, as opposed to the second measurement. Such is strictly by assumption and nothing more. It makes equal "sense" to say the second measurement causes collapse. Your statement is not a part of the predictive science. (It is relevant to some interpretations.) Further, such experiments CAN be performed in the same reference frame, so that there is no ambiguity that one occurs first. There is no frame dependence for the results, nor on the interpretation of the results, OTHER than by assumption, hypothesis or interpretation. Reactions: vanhees71 The collapse is not consistent with this, because it's assumed that the state collapses instantaneously. You need a lot of handwaving to make collapse consistent with QFT, if there's any convincing formulation at all (I've seen none so far). The good thing is, that the collapse assumption is not needed at all. Yes, indeed, and that's why you just give up the unnecessary collapse assumptions. It's nowhere needed to understand quantum experiments, including those of this kind with entangled parts of quantum systems (like the two photons in this example). The collapse is consistent with relativity, eg. see Figure 1 of https://arxiv.org/abs/0706.1232. This is not available free online, but you might be able to access it: https://link.springer.com/chapter/10.1007/BFb0104397. Reactions: DrChinese and vanhees71 keithdow You should look up Wigner's friend. Simply stated, put someone inside a space suit next to Schrodingers cat. It won't affect the outcome of the experiment. For the external observer, the wave function only collapses when he looks. For the friend, the observation in continuous. I've no clue what you want to tell me. 
If you believe in a collapse (which in my opinion is pretty absurd), then the effect of the presence of the 2nd observer on the external observer (let's call them Alice and Bob for convenience) is this: if Bob knows that the cat is continuously watched by Alice, "collapsing" the poor creature to a definite state "alive" or "dead", he'll use the stat op $$\hat{\rho}=p_{\text{alive}} |\text{alive} \rangle \langle \text{alive}| + p_{\text{dead}} |\text{dead} \rangle \langle \text{dead}|.$$ (With the setup at hand, $p_{\text{alive}}=p_{\text{dead}}=1/2$.) That's because then he knows that the cat is no longer entangled with the state of the unstable particle ("still there" vs. "decayed"). In the other case, i.e., if Alice is not present, he'd use the pure state $$\hat{\rho}_{\text{particle}+\text{cat}}=|\Psi \rangle \langle \Psi | \quad \text{with} \quad |\Psi \rangle=\frac{1}{\sqrt{2}} \left( |\text{particle there},\text{cat alive} \rangle + |\text{particle decayed},\text{cat dead} \rangle \right).$$ Now the funny thing is, ignoring the particle, Bob will also assign the state $$\hat{\rho}_{\text{cat}}'=\mathrm{Tr}_{\text{particle}} \hat{\rho}_{\text{particle}+\text{cat}} = \hat{\rho}.$$ So definitely nothing has changed for Bob. It's just different ways to describe the cat's state given the knowledge Bob has due to the knowledge about the setup of Schrödinger's devilish experiment. Even if you believe in collapse, Alice's presence has no effect on the assignment of a state to the cat by Bob, because that's the description Bob has to choose given the setup of the experiment (or in formalistic terms the preparation of the particle and the cat). The whole cat thing started out as an obviously absurd parody but unfortunately morphed into a rather serious teaching concept that frankly just makes physics look silly to the public. bob012345 said: So are you denying quantum theory applies to macroscopic objects, or ...? "I've no clue what you want to tell me." The wave function collapses when a measurement is made. For the external observer, it collapses when he checks to see if the cat is dead. For the internal observer it is collapsing all the time because the internal observer is making a continuous measurement. Do you get it now? So collapse is subjective?
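A quick numerical check of the partial-trace statement above (an illustrative sketch in plain NumPy, not part of the original thread):

```python
import numpy as np

# Basis: particle {|there>, |decayed>} (x) cat {|alive>, |dead>}.
there, decayed = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])

psi = (np.kron(there, alive) + np.kron(decayed, dead)) / np.sqrt(2.0)
rho_total = np.outer(psi, psi.conj())                       # pure entangled particle+cat state

# Partial trace over the particle (the first tensor factor of the 2x2 (x) 2x2 system).
rho_cat = rho_total.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

rho_mixture = 0.5 * np.outer(alive, alive) + 0.5 * np.outer(dead, dead)
print(np.allclose(rho_cat, rho_mixture))                    # True: identical predictions for the cat
```

Tracing the particle out of the entangled pure state gives exactly the 50/50 mixture for the cat, so Bob's predictions about the cat alone do not depend on whether anyone "collapsed" anything.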
Alkali treatment–acid leaching of rare earth elements from phosphogypsum fertilizer: insight for additional resource of valuable components M. S. Gasser1, Z. H. Ismail1, E. M. Abu Elgoud ORCID: orcid.org/0000-0002-7207-70451, F. Abdel Hai2, I. O. Ali2 & H. F. Aly ORCID: orcid.org/0000-0003-2387-76331 Phosphogypsum (PG) is the main by-product of phosphoric acid production by the wet process, in which phosphate rock is attacked with sulfuric acid. This by-product, which contains around 2.0% phosphoric acid, is used as a low-cost soil fertilizer, PGF. PGF consists mainly of gypsum (CaSO4·2H2O), P2O5, SiO2, and other impurities, including a minor amount of rare earth elements, REEs. In general, phosphate rocks contain from about 0.04 to 1.0% REEs, which are precipitated with the PG. REEs are now considered strategic elements, and PG is therefore regarded as a secondary source of REEs. This paper addresses a process for separating REEs from PGF while recovering sodium sulphate as a product. The process is based on the metathesis of the bulk of the PGF with sodium carbonate to obtain a calcium carbonate precipitate containing the REEs, with sodium sulphate obtained as a product in solution. The REEs were then leached from the calcium carbonate with either citric acid, as a green acid, or nitric acid. Under optimum conditions, the maximum leaching of REEs from CaCO3 after one cycle with 3.0 mol/L nitric acid at L/S = 3/1, an agitation time of 180.0 min and a temperature of 25 °C was 75.1% (361.10 mg/kg) of the total REEs present in PGF, while one cycle with 1.0 mol/L citric acid at L/S = 5/1, an agitation time of 15.0 min and 85 °C leached 87.4% (420.2 mg/kg) of the REEs from the CaCO3. The REEs obtained in the citrate leach solutions were purified by solvent extraction using 10% di-2-ethylhexyl phosphoric acid, HDEHP, in kerosene. The extracted REEs were stripped with 0.5 mol/L H2SO4, and the stripped solutions were further treated with 10.0% oxalic acid to precipitate the REEs. The developed procedure can recover REEs from PGF with an efficiency of 85.2% and a purity of 97.7%. Phosphogypsum (PG) is a byproduct generated during the industrial wet process of phosphoric acid production, in which sulfuric acid is used to digest phosphate rock. Gypsum (CaSO4·2H2O), the main component of PG, usually accounts for 65.0 to 95.0% of PG by weight. There are small quantities of impurities in PG, such as phosphates (H3PO4, Ca(H2PO4)2·H2O, CaHPO4·2H2O, and Ca3(PO4)2), fluorides (NaF, Na2SiF6, Na3AlF6, Na3FeF6, and CaF), sulfates, trace metals, and radioactive elements [1]. PG is generated on a large scale, over 100–280 Mt/yr worldwide [2, 3], but only about 15.0% is reused as building material, agricultural fertilizer, or soil stabilization amendment [4]. The remaining 85% is considered waste that requires large disposal areas and may cause serious environmental problems because of the high content of metals and impurities [5, 6]. Therefore, most common waste treatment practices have traditionally concentrated on limiting the release of contaminants by covering PG piles with impermeable materials and collecting acid effluents for further treatment. On the other hand, PG is regarded as an important secondary resource of REEs. The waste typically contains 0.04 to 1.0% REEs. These elements are critical materials for green energy development due to their essential roles in items like lamp phosphors, permanent magnets, catalysts, and rechargeable batteries [7, 8].
Although research has been conducted, a technology that allows the developer to economically recover these REE elements from the PG waste has not yet been developed [9,10,11,12,13,14,15,16]. Furthermore, the existence of radioactivity overwhelmingly restricts PG utilization. In the United States, the use of PG was banned in 1990 [17], and in the European Union, it was discontinued in 1992 because of the potential radiological impact. Research-based on hydrometallurgical focused on methods of recovering REEs in PG [18, 19]. The recovery of REEs could be considered a promising, economic, and environmentally friendly solution for the management of these wastes. However, the huge volume of PG landfilled near fertilizer industries may contain enough REEs to be mined if selective retrieval methods are advanced [13, 14, 20,21,22,23,24,25]. Lütke et al. [26] investigated the leaching of rare earth elements from PG by using citric and sulfuric acid. They reported that the leaching efficiency values of total rare earth elements were 62.0% and 89.7% for citric and sulfuric acid, respectively. Cánovas et al. [27] studied the leaching of REEs from PG with nitric and sulfuric acid. The obtained results indicated that the high leaching efficiency of REEs above 80.0% was achieved by using 3.0 mol/L nitric acid. While the leaching efficiency by using 0.50 mol/L sulfuric acid is in the range of 46.0–58.0%. Ennaciri et al. [28] developed a process for the production of K2SO4 by the conversion of phosphogypsum (CaSO4. 2H2O) and potassium carbonate (K2CO3). The obtained result showed that the reaction was conducted with stoichiometric ratios between PG and potassium carbonate and the high conversion of PG was achieved at 80 ◦C. Production of rare earth elements from PG after treatment with sodium chloride followed by sodium carbonate has been studied by Hammas-Nasri et al. [29]. They found that the total rare earth enrichment of about 84% was achieved in the final solid by using a washing step with (25 g/L) NaCl followed by leaching the residue with (60 g/L) Na2CO3 at 90 °C for 60.0 min. Leaching of rare earth elements from PG using different mineral acids (HCl, H2SO4, and HNO3) has been examined by Walawalkar et al. [30]. They reported the leaching efficiency of REEs was 57.0%, 51.0%, and 32.0% for HNO3, HCl, and H2SO4, respectively. Hammas-Nasri et al. [31] employed dilute sulfuric acid for the leaching of REEs from PG waste by a two-step leaching method. Their work showed that the leaching efficiency of REEs was about 50.0% by using double leaching with a 10.0% sulfuric acid solution at 60 °C for 1.0–2.0 h and a liquid/solid ratio of 1.3. Guan et al. [32] evaluated the behavior of hydrochloric acid for the leaching of REEs from PG. The experimental results showed that the maximum leaching efficiency for REEs was 65.6% at operating conditions (acid concentration of 1.65 mol/L, S/L ratio of 1/10, and reaction temperature of 60 °C). Recently, we developed a process for REEs with citric acid as a green acid, by direct leaching of PGF and PG containing 2.0% H3PO4 with a leaching efficiency of more than 84.0%. [13] In this work, this process was modified to enable efficient recovery of REEs, which were purified and separated. In this concern, a metathesis reaction based on the transformation of the precipitated calcium sulfate-free from REEs in the PGF to calcium carbonate precipitate containing the REEs by sodium carbonate with the release of the sodium sulfate into the solution. 
Further, the REEs precipitated with the calcium carbonate were then leached out with either citric acid, as a green acid, or nitric acid and further purified by solvent extraction. Chemicals and reagents All chemicals used were of analytical grade unless stated otherwise. Citric acid, AR, was supplied by Adwic, Egypt. Different REEs, AR, were obtained as oxides from Fluka. The extractant HDEHP was purchased from Aldrich. Odorless kerosene was used as a diluent for the extractant and was obtained from Misr Petroleum Company, Egypt. PGF characteristics PGF samples were obtained from Abu-Zaabal Fertilizers and Chemicals Company, Egypt. In previous work [14], PGF was characterized using X-ray fluorescence spectrometry (XRF), X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), and inductively coupled plasma optical emission spectrometry (ICP-OES). The major elemental chemical analysis of PGF, obtained by XRF, is given in Table 1. The total REE content of the PGF sample was 481.0 ± 5 mg/kg (Table 2). Table 1 Chemical analysis of PGF by X-ray fluorescence (XRF) Table 2 Chemical analysis of REEs in PGF by ICP–OES Thermal analysis Differential thermal analysis (DTA) and thermal gravimetric analysis (TGA) were performed on a Shimadzu DTG–60/60 H with a heating rate of 20 °C/min under N2 flow. The DTA curve of the PGF sample shows the presence of two endothermic peaks (Fig. 1a). The first one occurred at 160.5 °C and may be related to the loss of 1.5 mol of H2O from dihydrate calcium sulphate (CaSO4·2H2O) and the formation of hemihydrate calcium sulphate (CaSO4·1/2H2O) according to Eq. (1) [33]: $$\mathrm{CaSO_4\cdot 2H_2O} \xrightarrow{\Delta,\ \sim 160.5\,^\circ\mathrm{C}} \mathrm{CaSO_4\cdot \tfrac{1}{2}H_2O} + \tfrac{3}{2}\,\mathrm{H_2O}$$ Fig. 1 Thermal analysis of PGF: a DTA, b TGA The TGA curve of the PGF sample (Fig. 1b) shows a weight loss of 15%. Part of this weight loss may be due to humidity, and the other part corresponds to the endothermic DTA peaks. The phase transition of hydrated calcium sulfate (CaSO4·2H2O) in PG to hemihydrate and anhydrous calcium sulfate has previously been followed by DTA and TGA analysis [33, 34]; in those studies the two endothermic peaks appeared at 151 °C and 180 °C, while the weight loss was 18.2%. The shift in peaks and the difference in weight loss may be attributed to the purity of the PGF, the amount of residual acid present, and the origin of the phosphate rock used for phosphoric acid production. Leaching investigation The PGF sample was dried at 200 °C for 2 h and then analyzed. The chemical analysis of the dried sample is shown in Table 3 and indicates that there was no change in the chemical composition of PGF due to heating. Table 3 Chemical analysis of PGF by XRF, after drying Leaching process Unless otherwise stated, leaching experiments were carried out by placing a known volume of the leaching solution in a polyethylene vial with 1.0 g of PGF and mixing thoroughly for a predetermined period. The mixture was separated by filtration and the total concentration of the leached REEs (mg/L) in the leaching solution was determined colorimetrically by the Arsenazo-III method [35].
The Shimadzu UV–visible spectrophotometer model UV-160, Japan, was used to measure the concentrations of total REEs in samples after investigation. Individual REEs were determined by ICP-OES. Dried PGF sample was mixed with a certain volume of 0.4 mol/L Na2CO3 for 120.0 min. at 25 °C, the formed mixture was filtrated and the solid residue was treated with nitric acid or citric acid. In this concern, 3.0 mol/L of nitric acid was used with L/S ratio of 3/1 at a temperature of 25 °C and a contact time of 180.0 min. While 1.0 mol/L citric acid was used at L/S ratio of 5/1 at a temperature of 85 °C and a contact time of 15.0 min. The total percent of REEs leached (total% of REEs leached) was calculated using the Eq. (2); $${\text{Total}}\,{\text{percentage}}\,{\text{of}}\,{\text{REEs}}\,{\text{leached}}\, = \left[ {{\text{C}}_{f} /{\text{C}}_{o} } \right] \times \,100$$ where, Co is the concentration of the total REEs (mg/L) actually present in 1.0 g of PGF. To determine Co, 1.0 g of PGF was completely dissolved in aqua regia and evaporated until dryness. [13] Leaching of REEs from PGF with 1.0 mol/L citric acid at an L/S ratio of 5/1, a temperature of 85 °C, and equilibrium time of 15.0 min was carried out. The obtained leaching solution was contacted with an equal volume of organic solution with a known HDEHP concentration in kerosene. The two phases were shaken for a predetermined period in a thermostated mechanical shaker. After equilibration, the two phases are separated using a separating funnel. The REEs concentration extracted in the organic phase was calculated by the difference between its concentration in the aqueous phase before and after extraction. In the previous work, the leaching behavior of the total lanthanides, REEs, from PGF has been examined using nitric acid, hydrochloric acid, and sulfuric acid [14]. Recovery was highest when the PGF was leached with 3.0 mol/L HNO3. In the last work, some organic acids, namely boric acid, malic acid, and citric acid were used to leach REEs (Ln-Y) from PGF [13]. It was concluded that the 1.0 mol/L citric acid solution was the most effective leaching solution for REEs from PGF compared to other acids. Based on the aforementioned results, the combined process for leaching the REEs from PGF was developed. The process was based on the alkaline dissolution of the PGF by sodium carbonate solution to form soluble sodium sulphate as product and a precipitate of calcium carbonate together with the different REEs. This is followed by one-cycle leaching of the REEs from the carbonate precipitate with either nitric acid or citric acid. In this respect, the PGF sample was treatment with sodium carbonate (0.40 mol/L) [36] for 120.0 min at 25 °C to produce sodium sulphate according to the following equation: $${\text{CaSO}}_{{4}} \, + \,{\text{Na}}_{{2}} {\text{CO}}_{{3}} \, \to \,{\text{Na}}_{{2}} {\text{SO}}_{{4}} \, + \,{\text{CaCO}}_{{3}} \, \downarrow$$ There are several uses for sodium sulphate as a filler in powdered home laundry detergents and other uses. REEs are associated with CaCO3 and an analysis of the total REEs present in sodium sulphate solution was found to be less than 1.0% as indicated in Table 4. Table 4 The concentration of REEs present in a sodium sulfate solution Alkali treatment-nitric acid leaching In previous work [14], nitric acid was used to leach REEs directly from PGF by three-cycle. The maximum leaching efficiency of REEs was 66.0% using 3.0 mol/L nitric acid. 
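Before turning to the present results, the bookkeeping behind Eq. (2) and the by-difference extraction balance described above can be written as two small helper functions; this is an illustrative sketch only (the sample values in the example are invented, not measurements from this study):

```python
def leaching_efficiency(c_leachate_mg_per_l, volume_l, total_ree_mg_per_kg, sample_mass_kg=0.001):
    """Total % REEs leached, in the spirit of Eq. (2): 100 * (REEs in leach liquor) / (REEs in the PGF sample)."""
    leached_mg = c_leachate_mg_per_l * volume_l
    total_mg = total_ree_mg_per_kg * sample_mass_kg
    return 100.0 * leached_mg / total_mg

def extraction_percent(c_aq_before_mg_per_l, c_aq_after_mg_per_l):
    """% extraction into the organic phase, computed by difference for equal phase volumes."""
    return 100.0 * (c_aq_before_mg_per_l - c_aq_after_mg_per_l) / c_aq_before_mg_per_l

# Hypothetical example: 1.0 g PGF (481 mg/kg total REEs) leached with 5.0 mL of acid,
# giving a leachate that contains 84.0 mg/L REEs before extraction and 2.0 mg/L after.
print(leaching_efficiency(84.0, 0.005, 481.0))   # ~87% of the REEs leached
print(extraction_percent(84.0, 2.0))             # ~97.6% taken up by the HDEHP phase
```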
Nevertheless, when nitric acid was used in the present work to leach the calcium carbonate containing the REEs, the recovery was found to be higher than 75.0% under similar conditions. After filtration, the obtained leaching solution was analyzed as illustrated in Fig. 2. From this figure, it is clear that the total REEs obtained in one cycle of leaching by nitric acid amount to 75.1% of the total REEs present in PGF. Fig. 2 Concentrations of different REEs (ppm) in PGF and in the one-cycle leachate obtained with 3.0 mol/L nitric acid solution (agitation time 180.0 min, L:S = 3:1, 25 °C) Alkali treatment–citric acid leaching In previous work [13], citric acid was utilized to leach REEs directly from PGF in three cycles. The maximum leaching efficiency of REEs was 83.4% using 1.0 mol/L citric acid. Moreover, when citric acid was used in the present work to leach the calcium carbonate containing the REEs, the recovery was found to be higher than 87.0% under similar conditions. The optimum conditions were 1.0 mol/L citric acid, an L/S ratio of 5/1 and an equilibrium time of 15.0 min at 85 °C. After filtration, the obtained leaching solution was analyzed as given in Fig. 3. From this figure, it is clear that the total REEs obtained from one cycle of leaching by citric acid amount to 87.4% of the total REEs present in PGF. Fig. 3 Concentrations of different REEs (ppm) in PGF and in the one-cycle leachate obtained with 1.0 mol/L citric acid solution (agitation time 15.0 min, L:S = 5:1, 85 °C) The residue that remains after the citric acid treatment was analyzed by XRF (Table 5). The obtained result indicated that < 1% of the CaO was dissolved. Table 5 Chemical analysis by XRF of the precipitate obtained after PGF treatment with Na2CO3 followed by leaching with citric acid REEs purification Based on the analysis of the precipitate obtained after leaching out the REEs with citric acid (Table 5), it is clear that some impurities such as Ca, Sr and Fe are present in the REE leach solution obtained with citric acid. Therefore, solvent extraction was used to purify the REE citrate leach solution from these impurities. In this respect, di-2-ethylhexyl phosphoric acid (HDEHP, H2R2) is widely used in the extraction and purification of REEs present in different acidic media [37]. A simulated solution containing REEs in the same ratios as present in the PGF sample was prepared in a citrate medium. Extraction of REEs with different concentrations of HDEHP in kerosene was carried out at an equilibrium time of 15.0 min and 25 °C. The results obtained are presented graphically in Fig. 4 as a relation between % E and HDEHP concentration, and indicate that 10.0% is the proper concentration of HDEHP in kerosene for almost quantitative extraction of REEs from the citrate medium. Fig. 4 Effect of HDEHP concentration on the extraction of total REEs from the simulated solution (T = 25 °C, L:S ratio = 1:1, pH = 3.0) The effect of the aqueous pH on the extraction is given in Fig. 5; a pH in the range of 3.0 to 4.0 was found to be the most suitable for quantitative extraction of REEs. The extraction equilibrium is as given in Eq. (4) [38]: $$\mathrm{REE^{3+}}\,(\mathrm{aq}) + 3\,\mathrm{H_2R_2}\,(\mathrm{org}) \rightleftharpoons \mathrm{REE(HR_2)_3}\,(\mathrm{org}) + 3\,\mathrm{H^+}\,(\mathrm{aq})$$ Fig. 5 Effect of pH on the extraction of REEs from the simulated solution.
[HDEHP] = 5.0% contact time = 15.0 min The REEs leaching solution, obtained from the treatment of PGF with sodium carbonate and then with citric acid was contaminated with other elements, as previously mentioned. This solution was purified by extracting REEs with 10.0% HDEHP in kerosene at pH 3.0 for 15.0 min at 25 °C The extracted REEs were stripped by 0.5 mol/L H2SO4. The stripped solution was analyzed by ICP-OES to determine REE concentration, Table 6. Also, XRF analysis was carried out to determine the major impurities present in REEs. The obtained result is given in Table 7. Comparing this table with that of the original solution, Table 1, it is clear that the REEs produced were found to be free from fluoride and aluminum. The stripped solution contained no more than 0.4% calcium, whereas the unpurified REEs contained 35.9%. In addition, silica decreased from 9.95% to 0.3%. Other impurities are not more than 0.1%. Table 6 Chemical analysis of REEs in the stripping solution by ICP–OES Table 7 Chemical analysis of stripped solution by X- ray fluorescence (XRF) The stripped and the simulated solutions were further treated with 10.0% oxalic acid to precipitate the REEs to be analyzed by XRF (Fig. 6a and b, respectively). X- ray fluorescence analysis of precipitate by oxalic acid of a simulated solution, b stripped solution From Tables 6, 7 and Fig. 6, it is concluded that the developed procedure can recover REEs from PGF with an efficiency of 85% and a purity of 97.7%. The summary of the main procedures developed was given in Table 8. The different leaching processes presented in the table indicate that a combined pre-treatment with alkali followed by one cycle with citric acid is so far the most efficient process for the REEs leaching from the PGF matrix. Table 8 The summary of the main procedures developed A proposed flow sheet for the process based on nitric acid as well as citric acid is given in Fig. 7. Flow diagram for REEs leaching from PGF with nitric acid or citric acid after carbonate precipitation The total REE content in PGF is about 481.0 mg/kg. The major components of the REEs are Ce, La, Er, Pr, and Y. Alkali treatment of PGF produces soluble sodium sulfate as a product and a precipitate of calcium carbonate containing REEs. REEs was recovered from CaCO3 by leaching with HNO3 acid or citric acid. Based on the obtained results, maximum leaching of 75.1%, 361.10 mg/kg of REEs from CaCO3 after one cycle leaching by 3.0 mol/L nitric acid at L/S = 3/1, agitation time of 180.0 min., and at a temperature of 25 °C. In this respect, La is the most leached element from PGF with an efficiency of more than 81.7%, followed by 76.9% for Ce, 75.0% for Er, 65.3% for Pr, and finally 37.9% for Y. While, the maximum leaching of 87.4%, 420.2 mg/kg of REEs from CaCO3 after one cycle leaching by 1.0 mol/L citric acid, L/S = 5/1, agitation time of 15.0 min., and 85 °C. The leaching efficiency of citric acid in the final leach solution followed the order; Ce (92.0%) > Er (87.7%) > La (86.1%) > Pr (77.1%) > Y (63.5%). Purification of REEs from citrate leach solution was carried out by 10% HDEHP in kerosene at pH 3.0 and shaking time of 0.25 h at room temperature. The extracted REEs were stripped by 0.5 H2SO4. This procedure can recover REEs from PGF with an efficiency of 85.0% and purity of 97.7%. All data generated or analyzed during this study are included in this article. Rutherford PM, Dudas MJ, Arocena JM. 
Consumer demand heterogeneity and valuation of value-added pulse products: a case of precooked beans in Uganda Paul Aseete1 nAff6, Enid Katungi2, Jackline Bonabana-Wabbi3, Eliud Birachi4 & Michael Adrogu Ugen5 This study investigated consumer demand heterogeneity and valuation of a processed bean product—"precooked beans" with substantially reduced cooking time. Common bean is the most important source of protein for low- and middle-income households in Uganda. Its consumption is, however, constrained by long cooking time, high cooking energy and water requirements. As consumption dynamics change due to a rapid expansion of urban populations, rising incomes and high costs of energy, demand for fast-cooking processed foods is rising. An affordable, on-the-shelf bean product that requires less time, fuel and water to cook is thus inevitable. A choice experiment was used to elicit consumer choices and willingness to pay for precooked beans. Data used were collected from 558 households from urban, peri-urban and rural parts of central Uganda and analyzed using a latent class model which is suitable when consumer preferences for product attributes are heterogeneous. Study results revealed three homogeneous consumer segments with one accounting for 44.3% comprising precooked bean enthusiasts. Consumers derive high utility from a processed bean product with improved nutrition quality, reduced cooking time and hence save water and fuel. The demand for the processed bean is driven by cost saving and preference for convenience, which are reflected in willingness to pay a premium to consume it. Heterogeneity in attribute demand is explained by sex and education of the respondents, volumes of beans consumed, location and sufficiency in own bean supply. Our findings suggest that exploring avenues for nutritionally enhancing while optimizing processing protocols to make precooked beans affordable will increase consumer demand. These results have implications for market targeting, product design and pricing of precooked beans. Common bean (Phaseolus vulgaris L.) is a part of agricultural systems and diets of urban and rural populations across Sub-Saharan Africa (SSA). The crop is an important rotation crop and intercrop and adds nitrogen to the soil [1]. It is rich in cholesterol-free dietary proteins, energy, folic acid, fiber and micronutrients (iron and zinc)—thus a strategic remedy for hidden hunger and healthy eating for children and women of reproductive age in households with limited sources of protein [2]. Regular consumption of beans decreases the risk of coronary heart disease, diabetes, colorectal cancer and helps with weight management [3]. In SSA, consumption of bean and its contribution to protein intake is among the highest in the world [4]. Although bean consumption demand has been stable since the 1980s, the crop is consumed as dry grain, which takes longer to cook. Cooking time depends on the crop variety, the cooking method, quantity cooked and length of grain storage. It thus ranges from 120 to 180 min when beans are cooked without presoaking or catalyst and 58–107 min when they are presoaked in water [5,6,7]. In Uganda, beans are cooked without presoaking using wood fuel products. This poses potential challenges to bean consumption given increasing fuel costs, rapid urbanization and income growth that are transforming consumer preferences to more convenient and easy-to-prepare foods [8, 9]. 
While breeders have introduced new bean varieties that cook fast, consumers continue to show a desire for bean varieties that cook even faster [10]. Thus, long cooking time coupled with the changing consumption patterns will drive future demand away from bean consumption. To reduce the cooking time of dry bean, the National Agriculture Research Organization of Uganda and of Kenya, the International Center for Tropical Agriculture and the private sector are exploring industrial base solutions. The intervention entails processing dry bean grains under high temperature and pressure to produce a transformed value-added bean product referred to as the "precooked bean." Such processing methods improve nutrient availability in beans [11, 12]. Once precooked, one only needs to add water and cook for 10–15 min, nearly a 90% reduction in cooking time. While processed bean products with short cooking time already exist on the Ugandan food market, their market share remains small for several reasons. Most processed beans are not affordable to most consumers. For example, canned beans imported from Rwanda, Egypt and the United Arab Emirates [13] cost about UGX 8100-9250Footnote 1 per kilogram over three times higher than the cost (UGX 2000-2700 per kilogram) of unprocessed dry bean grain [14]. Other processed bean products like chilled beans need preservation, through refrigeration, a constraint to many consumers. Inadequate demand for processed bean products has thus been a disincentive for private sector investment into bean processing [13]. Like other processed bean products, the potential demand for the precooked bean is a big question for private investors. Their willingness to pay for its different traits will essentially govern consumers' demand for the precooked bean. It is thus important to understand consumer preferences for different attributes of the precooked bean product, estimate their willingness to pay for each and document factors that will affect demand. This is key given that it involves large investments to develop the product and information is needed to design effective marketing strategies. The best way of assessing the effective demand for the desired traits is to quantify the implicit prices of the desired traits. Hence, this study sought to understand consumer preference for different traits of the precooked bean and to estimate the implicit price of each trait. Based on a latent class model (LCM), the study further sought to find consumer segments that are likely to switch from consuming unprocessed bean grain to precooked beans and determine factors that will facilitate or constrain demand. The next section of the paper discusses the theoretical framework including the applications of LCM in demand analysis and the framework for using choice experiments. "Methods" section describes the design of the choice experiment, its implementation, the sampling strategy and characteristics of the interviewed households. Model estimates and other results are reported and discussed in "Results and discussion" section while the paper concludes with a summary of the key findings and policy implications. Analytical framework Since the precooked bean was not on the market, the study used a choice experiment approach to elicit choices and investigate how consumers will value and trade-off product attributes. A choice experiment is a stated preference method that derives from the Lancaster [15] consumer choice theory to value non-market goods or those goods not on the market. 
According to Lancaster's consumer theory, the choices that consumers make can be modeled based on the utility from attributes embodied in the good rather than from the good itself. The method further draws from the random utility approach [16] for econometric modeling of the choices made to account for possible unobserved heterogeneity. Individual preferences are heterogeneous as they depend on socioeconomic characteristics, individual objectives or resource endowment [16]. Commonly used models to account for heterogeneity in preferences include the random parameter logit (RPL) [17, 18] and the latent class model [19, 20]. Both the RPL and LCM incorporate heterogeneity in attributes, the systematic component of utility, but are based on different assumptions about the heterogeneity distribution. The RPL assumes a continuous distribution of the parameters to introduce heterogeneity while the LCM assumes a discrete distribution over unobservable endogenous (latent) classes of the respondents [21]. The LCM assumes that preferences are homogeneous within each class but can differ across classes, also known as segments [22]. The number of segments and membership are simultaneously determined with the analysis of choices. The LCM is robust in modeling heterogeneity because it has fewer restrictions and is less prone to biases that are often associated with model assumptions such as linear relationships and normal distributions [19]. The case in the LCM is one in which an individual resides in a latent class, s (not revealed to the analyst), and there is a fixed number of classes, S. Denote by \(s\) a class of individuals with homogeneous preferences. Also, let \(U_{jit}\) be the utility individual \(i\) in class \(s\) derives from choosing precooked bean alternative \(j\) in choice situation t and \(Z_{jit}\) a vector of attributes embodied in the precooked bean product. Thus, individuals maximize utility given by: $$U_{jit} = \beta_{s} Z_{jit} + \varepsilon_{jit}$$ where \(\beta_{s}\) is a vector of segment-specific parameter coefficients to be estimated and \(\varepsilon_{jit}\) is the random component of utility for each segment. When the error terms are assumed to be independently and identically distributed (IID) according to a Type 1 extreme value distribution, the probability that option j is selected by a respondent i belonging to segment s is given by: $$P_{ij/s} = \frac{\exp (\beta_{s}^{t} Z_{jit})}{\sum_{j = 1}^{J} \exp (\beta_{s}^{t} Z_{jit})}$$ Denote by \(H^{*}\) the latent membership function that classifies respondents into one segment with probability \(P_{is}\). The membership likelihood is a function of individual characteristics in vector \((X)\) as in (Eq. 3). Such individual characteristics could include demographic, social and economic factors, bean production and consumption and perceptions on processed foods. Thus, $$H^{*} = \lambda_{s} X_{i} + \alpha_{is}$$ where \(\alpha_{is}\) is the error term, assumed to be IID across consumers and segments and to follow a Gumbel distribution. The likelihood of an individual \(i\) being a member of a segment \(s\) is expressed as: $$P_{is} = \frac{\exp (\lambda_{s}^{t} X_{i})}{\sum_{s = 1}^{S} \exp (\lambda_{s}^{t} X_{i})}$$ As noted earlier, the class membership is not observed.
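To make the two logit components concrete before they are combined below, the following minimal sketch evaluates the within-class choice probabilities of (Eq. 2) and the class-membership probabilities of (Eq. 4) for numerically coded inputs. It is an illustration only, not the authors' estimation code; the array names, shapes and example numbers are assumptions.

```python
import numpy as np

def class_choice_probs(beta_s, Z):
    """Eq. (2): probability of each alternative j, conditional on class s.
    Z is a (J, K) array of attribute levels for one choice set; beta_s is the
    length-K coefficient vector of class s."""
    v = Z @ beta_s              # systematic utility of each alternative
    v = v - v.max()             # shift for numerical stability (leaves the logit unchanged)
    e = np.exp(v)
    return e / e.sum()

def membership_probs(lam, x_i):
    """Eq. (4): probability that respondent i belongs to each of the S classes.
    lam is an (S, M) array of class-specific coefficients (one row normalised to zero);
    x_i is the length-M vector of respondent characteristics."""
    v = lam @ x_i
    v = v - v.max()
    e = np.exp(v)
    return e / e.sum()

# Illustrative call: 3 alternatives x 2 attributes, and 3 classes x 2 characteristics
print(class_choice_probs(np.array([0.8, -0.4]),
                         np.array([[1.0, 2.5], [0.5, 3.5], [0.0, 2.5]])))
print(membership_probs(np.array([[0.2, -0.1], [0.0, 0.0], [-0.3, 0.4]]),
                       np.array([1.0, 0.5])))
```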
Thus, the joint probability that individual i belongs to segment s and chooses precooked bean alternative j is given by $$P_{ijs} = P_{ij/s} \cdot P_{is} = \left( \frac{\exp (\beta_{s}^{t} Z_{jit})}{\sum_{j = 1}^{J} \exp (\beta_{s}^{t} Z_{jit})} \right) \cdot \left( \frac{\exp (\lambda_{s}^{t} X_{i})}{\sum_{s = 1}^{S} \exp (\lambda_{s}^{t} X_{i})} \right)$$ In estimating the LCM (Eq. 5), as adopted by Zhu [23], we also model the allocation of a respondent to a segment as conditional on their preferences, which, in turn, depend on their characteristics. Although (Eq. 5) is estimated by the Maximum Likelihood Method, the LCM does not guarantee that the solution generated will be the maximum likelihood solution. Its maximum often converges on a local rather than the global maximum [24]. To minimize this problem, we used a tighter convergence criterion and minimized the number of classes to avoid over-fitting the model [25]. After estimation of the attribute coefficients in the LCM, willingness to pay was measured as the ratio of the marginal utility of an attribute to the price coefficient, as in (Eq. 6). The negative of the disutility from price (cost) was used as a surrogate for the marginal utility of income [26] because we did not have an accurate measure of income in the data. $${\text{WTP}}_{\text{attribute}} = - \frac{\beta_{\text{attribute}}}{\beta_{\text{price}}}$$ Confidence intervals were then calculated using the delta method [27]. Choice experiment (CE) design The precooked bean can be described in terms of its attributes and the levels they take. The most important attributes and their respective levels considered in the CE were selected through a stepwise process. The first step involved reviewing literature on important bean consumption attributes [28, 29], brainstorming among the research team and consultations with the precooked bean processor. The literature provides important attributes considered by bean consumers such as taste, low flatulence, appeal and less cooking time [10, 30]. However, this literature did not specify how reduced cooking time benefits the users, which we also sought to address in this study. Consultations with the processor revealed that after processing, the cooking time for the precooked bean is about 10–15 min, which lowers the fuel and water quantities required for cooking. Shorter cooking time also means that the time spent in the kitchen cooking reduces, increasing convenience for persons who cook the beans. The processor also revealed that attributes like taste, color, and flavor will remain unchanged after processing. The study thus excluded them in the design of the choice experiment. In the second step, consultations with communities in study areas during study design revealed that the cooking time for unprocessed beans presoaked overnight averaged 55 min, about 266% longer than the time needed to cook precooked beans. To capture the demand for these benefits of processing beans, we considered five attributes for the CE design: cooking time, nutritional enhancement, fuel saving, water saving and price. These attributes, embodied in dry bean grain, are altered during processing to create convenience and savings for consumers. The study defined cooking time (TIME) as the duration (minutes) it takes to boil beans to a point when they are ready for seasoning.
This attribute was coded as: fast cooking (15 min) for precooked bean, intermediate cooking (35 min) for beans soaked and cooked with a fast cooking method and long cooking (55 min) with only soaking and no fast cooking method as the base. The a priori expectation was that consumers will choose short cooking time and thus precooked beans over dry grain. The fuel attribute (FUEL) reflects the cost of fuel a household incurs in cooking beans which depends on the fuel used, and type of bean cooked. It was difficult to quantify saving from precooked beans because information on volumes and fuel cost per cooking of beans was not available. We, however, used relative percentage reduction in cooking time based on a household's context to define this attribute. For consumers who combine soaking of dry beans and fast cooking methods, adoption of precooked bean will reduce their cooking time by up to 50%, while those who only soak beans, reduction will be up to 20%. This attribute was defined to have three levels: low saving (20%) as the base for households that cook unprocessed dry beans with only soaking, moderate saving (50%) for cooking soaked beans with a faster cooking technology and high saving (80%) for the precooked bean. Given that the cost of cooking in urban areas of Uganda is going up [31, 32], the expectation was that consumers will choose the high fuel saving option. Consumers who pay high costs for water because they buy or get it from long distances will enjoy low water requirement of precooked beans. The attribute water requirement (WATER) captured whether consumers will pay for precooked beans because they are water saving. This attribute was effects coded as: maintaining the status quo (high cooking water requirement = − 1) or choosing precooked beans (low cooking water requirement = 1Footnote 2.) The attribute nutritional enhancement (NUTRI) captures the nutritional quality of bean varieties selected for processing. Integrating nutrition in agricultural innovations to improve nutrition has gained popularity and in response, bean varieties selected for precooking were those with higher protein and iron levels. The attribute (NUTRI) was added to enable us test whether consumers value nutritional enhancement and will pay for it. This will inform whether product labeling with nutritional information and fortification of the product is necessary. Effects coding was used for the NUTRI attribute as: choosing a nutritionally enhanced bean product (Yes = 1) and Otherwise (No = − 1). Following Chowdhury et al. [34] and Birol et al. [35] who reported that consumers are willing to pay premium prices to consume biofortified foods, our expectation was that consumers will have high demand for a nutritionally enhanced product. The study added price per kilogram of beans as an attribute (PRICE) to allow computation of the implicit prices of precooked bean product attributes. Price levels were derived from the average annual prices of beans in study sites [14] and then stepped up by increments of 40% to reflect proposed price changes due to processing as per the processors' perspective. Price was defined at four levels: UGX 2500, 3500, 4500 and 5500 with 2500 serving as the base price. The expectation here was that ceteris peribus, consumers would choose a cheaper product set. The five attributes and their levels were combined into choice sets using the computer-aided discrete choice design in JMP 12, which generated 21 choice sets of three options each. 
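The effects coding described for the WATER and NUTRI attributes can be expressed generically as in the short sketch below, where the base level is coded -1 in every column instead of being dropped as in dummy coding. This is a generic illustration of the coding scheme, not the design script actually used; the tiny example series is made up.

```python
import pandas as pd

def effects_code(series, base):
    """Effects coding: one column per non-base level, equal to 1 for that level,
    -1 for the base level and 0 otherwise."""
    out = pd.DataFrame(index=series.index)
    for level in [lv for lv in series.unique() if lv != base]:
        out[f"{series.name}_{level}"] = (series == level).astype(int) - (series == base).astype(int)
    return out

# WATER as described above: low requirement coded 1, high requirement (the base) coded -1
water = pd.Series(["low", "high", "low", "high"], name="WATER")
print(effects_code(water, base="high"))
```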
A profile with a complete list of blocked choice sets used in the choice experiment is supplied as an additional file (see Additional file 1). Options A and B showed an altered product while option C represented the status quo. The option for maintaining the status quo reflects a shopping choice for consumers who may prefer not to consume precooked beans. The alternative specific constant (ASC) was chosen to equal to 1 when the respondents selected options A or B and 0 for the option C [20]. If the ASC is negative and significant, then the propensity of the consumer to choose the status quo is high and vice versa. The choice experiment was broken down into three blocks (A, B and C) of seven choices sets each. Blocking improves the quality of choice data without compromising the diversity of choices, minimizes respondent fatigue and improves the cognitive ability of the respondents [36]. To improve the visual appeal of choice sets, attributes were illustrated using images on cards (Fig. 1). Sample of the choice set (single card) subjected to respondents Study area and survey implementation The districts of Kampala, Mukono, Wakiso, Buikwe, and Luwero in Central Uganda made up the study sites. These were selected because of their high population density and levels of urbanization, high cost of energy for cooking [37] and the importance of common bean in household diets [38]. Although agriculture employs up to 74% of the households in the study sites, they purchase 54% of the food consumed [39]—depicting a high potential for food trade. They thus make up the most probable market for the precooked bean. From the selected districts, sub-counties and divisions (for Kampala) formed the second stage of sampling from which the most urbanized/peri-urban sub-county and at least one rural sub-county was selected. A list of villages in the sub-counties/divisions from sub-county and division administrative headquarters then aided the choice of villages. Because the list of households per village was not available, community leaders provided the estimated number of households used in designing the sampling. Where the number of households in the village divided by the desired village sample size provided the sampling interval. The first household interviewed was selected using a random starting point and all subsequent households assigned using the sampling interval through systematic random sampling. This process led to 558 households (capped due to budget limitations) from which one principle decision maker in a household, either the household head or spouse responded to the full survey. In cases where the household head was absent the spouse was interviewed. Households were randomly assigned to the three choice experiment blocks with each block receiving an equal number of households. Figure 2 shows the distribution and locations of households. Location of households sampled for the survey Since the product was not on the market by the time of the study, product profiles and the "cheap talk" also used by Kikulwe et al. [20] served as an introduction, and descriptor of the hypothetical product. The cheap talk was used to explain and simplify choice scenarios to the respondents. It reduces hypothetical biases for information collected with little prior product knowledge [34]. Trained enumerators using computer-assisted personal interviewing techniques uploaded with a structured questionnaire collected survey data. 
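The household selection described above (a random starting point followed by a fixed sampling interval) can be sketched as follows. The village list and sample size are placeholders; the actual household counts came from community leaders.

```python
import random

def systematic_sample(households, n, seed=None):
    """Systematic random sampling: interval = N / n, a random start within the
    first interval, then every interval-th household thereafter."""
    rng = random.Random(seed)
    N = len(households)
    interval = N / n
    start = rng.uniform(0, interval)
    return [households[int(start + k * interval) % N] for k in range(n)]

village_list = [f"HH_{i:03d}" for i in range(1, 181)]   # placeholder village of 180 households
print(systematic_sample(village_list, n=12, seed=42))
```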
Besides choice data, the study elicited information on household characteristics including demographics, bean production and consumption dynamics, bean preparation methods, perceptions of food processing, market access, incomes, and employment. Selected sample characteristics The sample was skewed toward urban areas, but with a sufficient number of observations from rural areas. Out of 558 households interviewed, slightly more than half (58.9%) were from an urban setting while the remainder was split between rural and peri-urban locations (Table 1). The average household size was 6.30 people, with rural households being the largest. On average, the number of household members above 14 years was 3.02 people, with half being potential workers and the other half dependents. Urban household heads were more likely to be engaged in off-farm employment and earned higher incomes than those from rural areas (Table 1). Most households (98.4%) reported frequent consumption of beans, with an average consumption frequency of 4.2 bean meals/week and a quantity of 0.64 kg/meal/household. Average per capita bean consumption was 22.41 kg/person/year, which is close to the 19 kg/person/year reported in previous studies [13]. Bean consumption was significantly higher in rural areas (25.12 kg/person/year) compared to urban areas (21.54 kg/person/year). Table 1 Summary statistics of sampled households. Urban households consumed 9.7% of beans from own production while those from rural areas consumed up to 74.2% of beans from own production. The proportion for rural households is higher than the 56.8% reported in [38] because this study was conducted in July, which comes after the harvesting season. Reliance on the market by urban consumers implies that precooked beans stand a high chance of being demanded in these localities. Firewood and charcoal were the most common fuels used to cook beans, used by 87.9% of rural households and 79.1% of urban households, respectively. For the average quantity of dry beans (0.64 kg) consumed by a household in one bean meal, a household spent 113.62 min of cooking time (without presoaking) and UGX 1703 on fuel. Households that adopted time-saving cooking measures (presoaking, using catalysts or cooking fresh beans) cooked for an average of 78.5 min and spent an average of UGX 1390. The study used factor analysis to understand consumer knowledge, attitudes, and perceptions toward processed foods. Using a cutoff factor loading value of 0.4 and eigenvalues above 1, three factors that accounted for 49.7% of the observed variance were identified (Table 2). The first factor was termed "processing benefits" (PB) because it had the highest loadings on the questions related to the benefits consumers expect to enjoy from processed foods. This captures the altruistic interests and awareness of consumers of the environmental, employment and other benefits associated with agro-processing. This is consistent with findings by Khachatryan and Zhou [40] and Hu et al. [41], who noted that consumers are often willing to take up new services because of their desire to contribute to society. The second factor, termed "social influences" (SI), had high loadings on statements that reflect consumer concerns about societal influence, including culture. The third, "availability and safety" (AS), captures concerns about product availability and safety (Table 2). The three factor indices created were tested for their suitability in the LCM and only the PB index was used because it fitted the model well.
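A rough sketch of this factor-analysis step is shown below: factors are retained when their correlation-matrix eigenvalues exceed 1 (the Kaiser rule) and loadings below the 0.4 cutoff are suppressed. It uses scikit-learn's unrotated FactorAnalysis purely for illustration; the software and rotation actually used in the study are not stated here, and the input matrix of Likert scores is simulated.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def perception_factors(X, loading_cutoff=0.4):
    """X: (respondents x statements) matrix of Likert responses.
    Returns thresholded loadings and per-respondent factor scores (indices)."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)             # standardise each statement
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    n_factors = int((eigvals > 1).sum())                 # Kaiser criterion: eigenvalues above 1
    fa = FactorAnalysis(n_components=n_factors).fit(X)
    loadings = fa.components_.T                          # (statements x factors)
    loadings = np.where(np.abs(loadings) >= loading_cutoff, loadings, 0.0)
    return loadings, fa.transform(X)                     # scores can serve as factor indices

rng = np.random.default_rng(0)
loadings, scores = perception_factors(rng.normal(size=(200, 9)))   # simulated 9-statement survey
print(loadings.shape, scores.shape)
```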
Table 2 Factor loading for consumer perceptions on processed foods Consumer segmentation and preference analysis Estimation of the LCM to determine the optimal number of segments was based on a balanced assessment of the log-likelihood function and full information maximum likelihood [42]. The four criteria used include: Akaike's information criterion (AIC), Bayesian information criterion (BIC), log-likelihood (LL) and McFadden pseudo R2 (ρ2). The AIC and BIC were minimized, and LL and ρ2 were maximized at three segments (Table 3). Andrews [43] noted that AIC and BIC never under-fit the number of segments but may over-fit them leading to larger parameter biases. Since the three-segment model best described the sample, thus the best fitting LCM, consumers were categorized into three homogeneous segments. Table 3 Determination of the optimal number of segments A multinomial logit model (MNL), which gives the unconditional probability for the choice of a product attribute, was run as the starting point to check for parameter fit and as a precursor for further iterations. Compared to MNL, the LCM had a higher log-likelihood (− 2738.23 vs − 3210.53) and the adjusted R2 (0.359 vs 0.174), thus a better specification to describe the data (Table 4). Table 4 LCM estimates of precooked beans product attributes and segment membership Looking at the alternative specific constant (ASC), consumers in segment 3 exhibited a positive and significant propensity to switch to consumption of precooked beans since they valued options A and B in the choice experiment over option C (Table 4). Consumers in segment 1 and 2 have a negative and significant ASC, which represents a preference for the status quo (option C) over the processed product. Valuation of product attributes nutrition, fuel saving, and price was significant across all segments with a priori expected signs (Table 4). Although the respective coefficients of these three attributes differ in magnitudes across segments, suggestive of varying weights, they are important to consumers. Consumers would derive utility from nutritious bean products and fuel saving but are sensitive to price changes. The consciousness toward own health and the need to stay healthy could be the motivation to the high value attached to nutritional enhancement of precooked beans [44,45,46]. The significant valuation of fuel saving as a benefit of precooked beans reflects the growing cost of fuel for cooking [31, 32] associated with increases in population pressure and urban population. Over 90% of Ugandans use energy from biomass exploitation (firewood and charcoal) which is becoming scarce and expensive [31]. Consumers in segment one, derive higher utility from all attributes especially enhanced quality of nutrition attribute. We therefore term consumers in segment one the "nutrition enhancement lovers" because they derive the highest utility from nutritional enhancement. The probability of belonging to this segment was influenced by self-reliance on the supply of beans for consumption, the quantities consumed, sex of respondent and education. The likelihood of membership in segment one is 53% higher for individuals who have attained primary level of education or above compared to those with no education. However, being male reduced the probability of membership in segment one by 48% while individuals from households that are self-sufficient in supply were 82% likely to cluster in segment one. 
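The class-number comparison summarised in Table 3 rests on simple bookkeeping of the log-likelihood, sketched below. The null (constants-only) log-likelihood, parameter count and number of choice observations in the example call are placeholders rather than values from the study; only the 3-class log-likelihood quoted above is taken from the text.

```python
import numpy as np

def selection_criteria(loglik, loglik_null, n_params, n_obs):
    """Criteria used to choose the number of latent classes."""
    aic = -2.0 * loglik + 2.0 * n_params                 # Akaike information criterion
    bic = -2.0 * loglik + n_params * np.log(n_obs)       # Bayesian information criterion
    rho2 = 1.0 - loglik / loglik_null                    # McFadden pseudo R-squared
    return {"AIC": aic, "BIC": bic, "rho2": rho2}

# 3-class log-likelihood from the text; the other inputs are placeholders
print(selection_criteria(loglik=-2738.23, loglik_null=-4290.0, n_params=40, n_obs=3906))
```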
For one additional kg of beans consumed per week, the probability of being a member of segment one reduced by 0.82 points (Table 4). Like households in segment one, consumers in segment two ranked nutrition as their most preferred attribute but are less likely to consume precooked beans (Table 4). This might be because they are very averse to price increases, as revealed by the absolute coefficient on price, which was the second largest within the segment and the largest across segments. The probability of membership in segment two was positively influenced by self-sufficiency in bean supply but negatively by the quantities of beans consumed and the attitude toward indirect benefits of food processing (Table 4). Given their price-conscious, self-reliant nature, we term members of this group "conservative self-reliant bean consumers." All the attributes were important for consumers in segment three (Table 4). This segment of consumers derives higher utility from the nutrition enhancement, water saving and reduced cooking time attributes. Since members of this segment had a high propensity to choose precooked beans over the status quo and have a balanced demand for its attributes, we term them "precooked bean enthusiasts." Membership coefficients in segment three are interpreted implicitly in relation to the signs of the estimated statistically significant parameters in segments one and/or two [47]. Based on this approach, consumers in segment three rely on beans purchased from the market for home consumption because supply from own production was not sufficient, although they consume larger quantities of beans per week. It is important to note that consumers in urban areas are heterogeneous and belong to all three segments in almost equal proportions, contrary to the expectation that urbanites would switch en masse to consumption of processed beans. This shows a diversity of people with different socioeconomic characteristics including, among others, variations in wealth status, incomes, time and cooking constraints, and perceptions. Profiles of consumers based on segment membership To profile consumers in each segment, we first calculate the probability of a consumer belonging to a segment using the estimated LCM coefficients (inserted in Eq. 5). Then, each household was assigned to the segment where it exhibited the highest probability of membership; this assignment step is sketched in code at the end of this passage. Following this procedure, membership placement showed that 44.3% of the sample belonged to the segment of precooked bean enthusiasts while 47.7% and 8.1% are nutrition enhancement lovers and conservative self-reliant bean consumers, respectively (Table 5). Table 5 Profiles of consumers in different segments. Precooked bean enthusiasts have small households of 6.1 people, on average, with bean consumption of 0.54 kg/week, the second highest household-level quantity among the sample households. Thus, the per-member cost of preparing beans could be high in this segment, which probably drives their decision to consume the proposed product. They reported the highest expense (UGX 1686.73) for preparing a meal of beans (Table 5). Their choice could thus reflect cost-saving behavior since they are already buying the beans they consume. Moreover, the average distance to the nearest bean market and water sources was greatest in this segment. The consumers in this segment have the lowest supply of beans from their own production and largely depend on the market to satisfy their bean consumption demand.
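The assignment step referenced just above can be written compactly: each household's class-conditional choice likelihoods are weighted by its prior membership probabilities, normalised into posteriors, and the household is placed in the segment with the highest posterior. The input arrays are assumed outputs of a fitted LCM; the numbers in the example call are invented.

```python
import numpy as np

def assign_segments(choice_lik, member_prob):
    """choice_lik: (N, S) likelihood of each household's observed choices under each class.
    member_prob: (N, S) prior membership probabilities from the membership equation.
    Returns the posterior probabilities and the most likely segment per household."""
    joint = choice_lik * member_prob                      # per-class numerator of Eq. (5)
    posterior = joint / joint.sum(axis=1, keepdims=True)
    return posterior, posterior.argmax(axis=1)

# Tiny invented example: three households, three segments
post, seg = assign_segments(
    np.array([[0.20, 0.05, 0.60], [0.30, 0.30, 0.10], [0.05, 0.50, 0.10]]),
    np.array([[0.40, 0.30, 0.30]] * 3))
print(post.round(2), seg)
```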
Approximately 57% of its members were in the urban and earn their income from a variety of sources including agriculture. This is a sign that potential consumers of precooked beans are spread in terms of location and have variable socioeconomic characteristics—which will require integrated marketing strategies to reach them with precooked bean products. Households in this segment were headed by individuals in their mid-40s, with 10 years of education and the majority were men. Members in the segment of "nutrition enhancement lovers" are the most educated with a significant proportion (44.2%) depending on salaried jobs as their major source of income. Households in this segment mostly use charcoal for cooking and consume significantly small quantities of beans (2.73 kg/week) compared to members in other segments (Table 5). This group is less likely to consume precooked beans probably because they depend on their own production, which is just enough (80.5%) for their needs. This group resides in rural areas with small trading centers and part-time in farming combined with other everyday jobs to earn a living. Conservative-self-reliant bean consumers enjoy relatively lower market prices spent on beans and fuels. Table 5 shows that members in this segment faced the lowest bean prices on the market and spend UGX 1445.45 on fuel for preparing beans. About 33% of members in this segment belong to households that reported self-sufficiency in bean supply while 54.2% supplemented their own production with beans from the market. The segment has the largest rural population and least educated members. Given that members of this group produce their own beans dominated by rural households, they have a lot to lose if they are to sell and buy back beans. Willingness to pay for precooked beans with attribute trade-off Table 6 shows Consumer's marginal willingness to pay (WTP) a premium (positive WTP values) or a discount (negative WTP values) to consume precooked beans. There was a positive WTP for the precooked bean. All consumers attached high importance to nutritional enhancement, fuel and time-saving attributes. Their WTP for these attributes varied by consumer segment, with nutrition enhancement lovers willing to pay the highest premiums for precooked beans. Since precooked bean enthusiasts make up the potential market for precooked beans, their willingness to pay can serve as a reference for making pricing decisions for the product. Table 6 Marginal WTP for precooked bean attributes Precooked bean enthusiasts are willing to pay an average increase of 31.21% in bean prices to consume precooked beans and the highest acceptable price was an increase of 40.36% over the prevailing market price. Consumer willingness to pay premiums for value addition and innovative food products has been reported by Ofuoku and Akusu [48] and Geethalakshmi et al. [49]. It is, however, important to note that consumers may under or overstate their intentions making it unclear if the same willingness to pay will be replicated when they face real product demand [50]. Study findings revealed significant heterogeneity in consumer valuation and preference of precooked bean attributes. Consumer heterogeneity is mostly explained by income abilities and to some extent by sufficiency in bean supply, sex and education of the respondent. Product demand will thus depend on the individual consumer's context meaning the precooked beans should be marketed as a product with diverse benefits and for targeted consumer needs. 
Such benefits include; contribution to environmental conservation through reduced fuel use, monetary saving from less fuel used, employment opportunities created, and incomes earned by suppliers of the raw materials who are mostly smallholder women. Consumers in different segments are willing to pay for precooked beans especially for attributes of enhanced nutrition, fuel and time saving. While increased prices for the product reduces such willingness. These attributes will drive demand for the product and should inform product pricing decisions and mechanisms for communicating paybacks from higher prices charged. Mechanisms such as innovative labeling, branding and product differentiation in the market will be key in marketing since precooked bean attributes like nutrition, benefits from using less fuel and timesaving are invisible and are rarely rewarded by product markets. Consumer enthusiasm for nutritional quality through enhancement is a unique selling point. Marketing strategies should thus make nutritional content explicit and part of promotional campaigns to boost demand for the precooked beans. Findings of the study are indicative of positive opinions that consumers have on value addition of beans and the demand for the product once on the market. While this is evident, the choice experiment used in the study is a none-market method, but the best available to elicit potential demand for precooked beans. Willingness to pay and valuation estimates thus remain hypothetical, so the results reported in this study are not conclusive. A follow-up study with the real precooked bean product on the market may be necessary. Uganda Shilling (UGX) to US $ rate was 1 USD = UGX 3240.65. https://www.bou.or.ug/bou/rates_statistics/statistics.html. Effect coding was chosen for coding nutritional enhancement and water requirement over the dummy coding scheme because effect codding avoids overestimates of WTP and minimizes the effect of boundary value estimates [33]. ASC: alternative specific constant choice experiment LCM: latent class model random parameter logit SSA: WTP: willingness to pay Lupwayi NZ, Kennedy AC, Chirwa M. Grain legume impacts on soil biological processes in Sub-Saharan Africa. Afr J Plant Sci. 2011;5(1):1–7. Singh U, Singh B. Tropical grain legumes as important human foods. Econ Bot. 1992;46(3):310–21. Leterme P, Muũoz LC. Factors influencing pulse consumption in Latin America. Br J Nutr. 2002;88(S3):251–4. Nedumaran S, Abinaya P, Shraavya B, Rao PP, Bantilan MC. Grain legumes production, consumption, and trade trends in developing countries-an assessment and synthesis, socioeconomics. Discussion paper series number 3; 2013. Kahenya P. Effects of soaking on the cooking time of different common bean (Phaseolus vulgaris L.) varieties grown in Kenya. In: Scientific conference proceedings, 2014 May 28. Romero Del Castillo R, Costell E, Plans M, Simó J, Casañas F. A standardized method of preparing common beans (Phaseolus vulgaris L.) for sensory analysis. J Sens Stud. 2012;27(3):188–95. Castellanos JZ, Guzmán-Maldonado H, Acosta-Gallegos JA, Kelly JD. Effects of hardshell character on cooking time of common beans grown in the semiarid highlands of Mexico. J Sci Food Agric. 1995;69(4):437–43. Popkin BM. The nutrition transition and obesity in the developing world. J Nutr. 2001;131(3):871S–3S. De Haen H, Stamoulis K, Shetty P, Pingali P. The world food economy in the twenty-first century: challenges for international co-operation. Dev Policy Rev. 2003;21(5–6):683–96. 
Katungi E, Kikulwe E, Emongor R. Analysis of farmers valuation of common bean attributes and preference heterogeneity under environmental stresses of Kenya. Afr J Agric Res. 2015;10(30):2889–901. Tharanathan RN, Mahadevamma S. Grain legumes—a boon to human nutrition. Trends Food Sci Technol. 2003;14(12):507–18. Wang N, Hatcher DW, Tyler RT, Toews R, Gawalko EJ. Effect of cooking on the composition of beans (Phaseolus vulgaris L.) and chickpeas (Cicer arietinum L.). Food Res Int. 2010;43(2):589–94. Kilimo Trust. Development of inclusive markets in agriculture and trade (DIMAT): the nature and markets of bean value chains in Uganda. 2012. http://www.undp.org/content/dam/uganda/docs/UNDP%20Uganda_PovRed%20-%20Beans%20Value%20Chain%20Report%202013.pdf. Accessed 22 Dec 2016. Statistical abstract. Uganda Bureau of Statistics. UBOS. 2016. Lancaster KJ. A new approach to consumer theory. J Polit Econ. 1966;74(2):132–57. McFadden D. Conditional logit analysis of qualitative choice behavior. In: Zarembka P, editor. Frontiers in econometrics. New York: Academic Press; 1973. p. 105–42. Greene WH, Hensher DA. A latent class model for discrete choice analysis: contrasts with mixed logit. Transp Res Part B Methodol. 2003;37(8):681–98. Oparinde A, Birol E. Farm households' preferences for cash-based compensation versus livelihood-enhancing programmes: a choice experiment to inform Avian Flu (HPAI H5N1) compensation policy in Nigeria. J Afr Econ. 2012;21(4):637–68. Louviere JJ, Hensher DA, Swait JD. Stated choice methods: analysis and applications. Cambridge: Cambridge University Press; 2000. Kikulwe EM, Birol E, Wesseler J, Falck-Zepeda J. A latent class approach to investigating demand for genetically modified banana in Uganda. Agric Econ. 2011;42(5):547–60. Kamakura WA, Wedel M. Market segmentation: conceptual and methodological foundations. New York: Kluwer Academic Press; 1999. Hynes S, Hanley N. Analysing preference heterogeneith using random parameter logit and latent class modelling techniques. Working paper no. 0091. National University of Ireland Galway, Department of Economics; 2005. Zhu Q, Zhang Z. On using individual characteristics in the MNL latent class conjoint analysis: an empirical comparison of the nested approach versus the regression approach. Mark Bull. 2009;1:20. Magidson J, Vermunt JK. Latent class models. The Sage handbook of quantitative methodology for the social sciences. Thousand Oaks: SAGE; 2004. p. 175–98. Uebersax J. A brief study of local maximum solutions in latent class analysis. 2000. http://ourworld.compuserve.com/homepages/jsuebersax/local.htm. Accessed 22nd Dec 2016. Greene WH. NLOGIT 5 reference guide. Plainview, NY: Econometric Software. Inc.; 2012. Carson RT, Czajkowski M. A new baseline model for estimating willingness to pay from discrete choice models. In: International choice modeling conference, Sydney (2013). Sperling L, Loevinsohn ME, Ntabomvura B. Rethinking the farmer's role in plant breeding: local bean experts and on-station selection in Rwanda. Exp Agric. 1993;29(4):509–19. Katungi E, Sperling L, Karanja D, Beebe S. Relative importance of common bean attributes and variety demand in the drought areas of Kenya. J Dev Agric Econ. 2011;3(8):411–22. Amane MI, Dias DJ, Chirwa R, Rubyogo JC, Tembo F. Using innovative approaches in selecting and disseminating bean varieties in Mozambique: lessons learnt. In: 10th African crop science conference proceedings, Maputo, Mozambique, 10–13 October 2011. African Crop Science Society (2011). pp. 283–286. 
Bizzarri M, Bellamy C, Patrick E, Roth C. Safe access to firewood and alternative energy in Uganda: an appraisal report. Rome: WFP; 2009. Mwaura FR, Okoboi GE, Ahaibwe GE. Determinants of household's choice of cooking energy in Uganda. Research report series no. 114. EPRC. 2014. Hasan-Basri B, Karim MZ. The effects of coding on the analysis of consumer choices of public parks. World Appl Sci J. 2013;22(4):500–5. Chowdhury S, Meenakshi JV, Tomlins KI, Owori C. Are consumers in developing countries willing to pay more for micronutrient-dense biofortified foods? Evidence from a field experiment in Uganda. Am J Agr Econ. 2011;93(1):83–97. Birol E, Meenakshi JV, Oparinde A, Perez S, Tomlins K. Developing country consumers' acceptance of biofortified foods: a synthesis. Food Secur. 2015;7(3):555–68. Kuhfeld WF. Marketing research methods in SAS. Experimental design, choice, conjoint, and graphical techniques. Cary, NC: SAS-Institute; 2005. Damanin P. Exploring livelihoods of the urban poor in Kampala, Uganda: an institutional, community and household contextual analysis. ACF. 2012. https://www.actionagainsthunger.org/sites/default/files/publications/ACF_Uganda_Kampala_Urban_Study-2012.pdf. Accessed 24th Jan 2017. Larochelle C, Katungi E, Beebe S. Disaggregated analysis of Bean consumption demand and contribution to household food security in Uganda. Cali: International Center for Tropical Agriculture (CIAT); 2015. Statistical Abstract. Kampala: Uganda Bureau of Statistics. UBOS. 2013. Khachatryan H, Zhou G. Preferences for sustainable lawn care practices: the choice of lawn fertilizers. In: Agricultural & applied economics association's 2014 annual meeting; 2014. Hu W, Woods T, Bastin S, Cox L, You W. Assessing consumer willingness to pay for value-added blueberry products using a payment card survey. J Agric Appl Econ. 2011;43(2):243. Hurvich CM, Tsai CL. Regression and time series model selection in small samples. Biometrika. 1989;76(2):297–307. Andrews RL, Currim ISA. comparison of segment retention criteria for finite mixture logit models. J Mark Res. 2003;40(2):235–43. Huffman SK, Jensen HH. Demand for enhanced foods and the value of nutritional enhancements of food: the case of margarines. In: AAEA meetings, Denver, CO; 2004. p. 1–4. Birol E, Asare-Marfo D, Karandikar B, Roy D. A latent class approach to investigating farmer demand for biofortified staple food crops in developing countries: The case of high-iron pearl millet in Maharashtra. India: International Food Policy Research Institute (IFPRI); 2011. Bouis HE, Saltzman A. Improving nutrition through biofortification: a review of evidence from HarvestPlus, 2003 through 2016. Glob Food Secur. 2017;12:49–58. Kontoleon A, Yabe M. Market Segmentation Analysis of Preferences for GM Derived Animal Foods in the UK. J Agric Food Ind Organ. 2006;4(1):1–38. Ofuoku AU, Akusu MO. Preference and willingness of consumers to pay for value-added poultry products in niger delta region of Nigeria. J Northeast Agric Univ (Engl Ed). 2016;23(4):82–92. Geethalakshmi V, Ashaletha S, Raj DA, Nasser M. Consumer preference and willingness to pay for value-added fish products in Palakkad. Kerala: ICAR; 2013. Klein R, Sherman R. Estimating new product demand from biased survey data. J Econom. 1997;76(1):53–76. PA conceived and designed the study, collected and analyzed data. He drafted and coordinated the write up of the manuscript. 
EK designed the study, supervised implementation and reviewed the manuscript. JBW designed the study, supervised implementation and reviewed the manuscript. EB reviewed data analysis tools and reviewed the manuscript. MAU conceived the study, reviewed data collection tools, supervised the implementation of the study and reviewed the manuscript. All authors read and approved the final manuscript. Study findings presented in this paper were implemented by the National Crops Resources Research Institute under an initiative supported by IDRC/ACIAR. The authors would like to thank IDRC and ACIAR for the funding. The authors are grateful to the team of dedicated research assistants and staff that supported data collection. The team is also indebted to the anonymous reviewers for their valuable comments and to those who helped proofread the manuscript for their insights. The authors declare that they do not have any competing interests. Data that were used to generate these results are available upon request from the authors. Study respondents were assured of anonymity, and their consent was sought orally before the survey was undertaken. Approval to undertake this research was sought from the Uganda National Council for Science and Technology under the project "Precooked beans for improving food and nutrition security and income generation and conservation of natural resources." The study was funded by the International Development Research Centre (IDRC) and the Australian Centre for International Agricultural Research (ACIAR) under the Cultivate Africa's Future Fund. The funding agency reviewed the study protocols used. Paul Aseete (present address: Department of Agricultural Economics, Kansas State University, Waters Hall, Room 400, Manhattan, KS, 66506, USA), National Crops Resources Research Institute, P.O. Box 7084, Kampala, Uganda. Enid Katungi, International Center for Tropical Agriculture, P.O. Box 6247, Kampala, Uganda. Jackline Bonabana-Wabbi, Makerere University, P.O. Box 7062, Kampala, Uganda. Eliud Birachi, International Center for Tropical Agriculture, P.O. Box 823-00621, Nairobi, Kenya. Michael Adrogu Ugen, National Semi Arid Resources Research Institute, P.O. Box 56, Soroti, Uganda. Correspondence to Paul Aseete. Additional file 1. List of blocked choice sets used in the choice experiment. Aseete, P., Katungi, E., Bonabana-Wabbi, J. et al. Consumer demand heterogeneity and valuation of value-added pulse products: a case of precooked beans in Uganda. Agric & Food Secur 7, 51 (2018). doi:10.1186/s40066-018-0203-3 Precooked beans Preference heterogeneity
Is excess lift or excess power needed for a climb? As answered in this question, aircraft need excess power - not excess lift - to climb. This is plausible when the aircraft's thrust vector has a vertical component (its nose and engine points upwards), but I challenge the requirement of excess power for every case. Please take a look at the following cart. The thrust gets delivered by a propeller at the rear and the thrust vector is always horizontal. A wing attached to a vertical beam is free to move up and down. When the cart gets accelerated and reaches a certain speed, the lift acting onto the wing gets greater than the wing's weight, leading to a climb of the wing. Please notice that - because thrust is horizontal - the chemical energy burned goes into kinetic energy of the cart and/or heat energy (due to overcoming drag). No power invested by the propeller goes into potential energy of the wing; the climb of the wing is done purely by lift. aerodynamics lift aircraft-physics climb $\begingroup$ You did an excellent job of illustrating your question! I wish others would pose their question with such clarity. $\endgroup$ – Peter Kämpf May 31 '15 at 12:36 $\begingroup$ The first thing you missed is drag: Once the wing moves through air, it will create not only lift, but also drag, and that drag will be higher when the wing accelerates up the pole. This drag increase will at least reduce the acceleration the cart receives from the engine. If the wing would not produce lift, the cart would accelerate more quickly and would settle at a higher speed. $\endgroup$ – Peter Kämpf May 31 '15 at 18:00 $\begingroup$ I really like these these illustrations! Did you make them yourself? If so, what tools did you use, I need those skills as well ! $\endgroup$ – DeltaLima May 31 '15 at 18:27 $\begingroup$ @DeltaLima: I made them with SketchUp. $\endgroup$ – Chris Jun 1 '15 at 10:09 $\begingroup$ The drag of the wing changes with the square of the speed of the car, and when the wing moves up or down, it changes in addition with the third power of the angle given by the ratio of vertical to horizontal speed. A square comes from the amount of lift created, and must be multiplied by the angle again to account for its change of direction - therefore the third power. $\endgroup$ – Peter Kämpf Jun 5 '15 at 18:29 As the answers to your original question already explained, you do need extra lift to accelerate upwards. Once the wing is set into a vertical motion, however, lift again exactly equals weight to keep the wing at a constant vertical speed (if we neglect thrust and drag for a moment). No extra lift is needed to maintain that vertical speed. Only when you want to accelerate further up, extra lift is needed. The increase in potential energy comes indeed from the propeller, because the lift vector of the climbing wing is tilted backwards, adding a horizontal component that needs to be compensated by extra propeller thrust. Now let's look at your experiment in detail: I assume the wing has some mass, is rotationally locked and slides up and down that pole without friction. If you accelerate the car, at some point its speed will just be right for the wing to create exactly the lift to cancel out its own weight. At this speed the wing will be stable at any position along the pole. If it slides down a little, its angle of attack $\alpha$ will increase and create more lift, stopping the downward motion. The reverse is true for any upward motion. See below for an illustration of the principle. 
The cyan vector is the vector sum of the flow due to forward motion (blue) and vertical motion (red), and this is what the wing will "notice". When the car accelerates further, the lift will increase and now become greater than the weight. The wing will accelerate upwards until its vertical speed will reduce its angle of attack by enough to reduce the vertical aerodynamic forces to exactly equal its weight. Now you have the same situation as before, but not at zero vertical speed, but at a positive vertical speed which will make sure that the wing pops out at the top of the pole unless there is some stop. When the wing hits the stop, the vertical motion ceases, the angle of attack increases and the wing will lift up not only itself, but also part of the car's weight. Note that I now spoke of the vertical components of the aerodynamic forces, not lift. When drag is added, it will add a vertical component when the wing is in motion. Lift is defined as the sum of aerodynamic forces perpendicular to the flow direction at infinity and drag parallel to it. This cumbersome definition makes sure that local distortions in the flow field do not impact the direction of lift and drag. The direction of lift for the climbing wing will point slightly backwards and the direction of drag slightly downwards. This will add some drag component to the sum of the vertical aerodynamic forces, and lift needs to increase to compensate for this. The horizontal component of lift will now add to the drag and the forces on the pole, so more force from the propeller is needed to push the climbing wing through the air. This extra force is needed to increase the potential energy of the wing on its way up. For a descending wing, the reverse is true: Now drag will add some vertical component and lift will be slightly slower. The forward component of lift will now push against the pole, reducing the force the propeller needs to provide. The reduction in potential energy now reduces the horizontal aerodynamic forces. An airplane is slightly different, because it is free to pitch up or down and the angle of thrust will pitch with it. This will enable the pilot to select the flight path and the amount of lift the wing creates, but again the vertical motion will make sure that any excess lift will translate into increased vertical speed and a lower angle of attack, so the excess lift vanishes. In a climb, thrust needs to be bigger than drag in order to increase the potential energy of the airplane, and now the vertical component of the tilted thrust vector will support some weight, reducing the amount of lift needed to support the weight. $\begingroup$ If the wing doesn't rotate and the airflow is at a constant angle (i.e., in the observer's frame of reference, still air and the cart is moving along horizontal ground), how can the angle of attack change? $\endgroup$ – David Richerby Jun 1 '15 at 6:52 $\begingroup$ @DavidRicherby: Due to the wing's motion. I guess I'll update the answer with a sketch - that will be better than a wordy explanation here. $\endgroup$ – Peter Kämpf Jun 1 '15 at 8:43 $\begingroup$ Ah, I understand now: the angle of attack is the same whenever the wing is stationary but it alters while the wing is moving up or down. $\endgroup$ – David Richerby Jun 1 '15 at 10:16 $\begingroup$ @PeterKämpf: The change of the angle of attack is actually a change in the direction of the relative wind (as seen from the perspective of the wing). 
Since lift is perpendicular to relative wind, there is an additional lift-induced drag onto the cart when the wing accelerates up. $\endgroup$ – Chris Jun 1 '15 at 10:29 $\begingroup$ @PeterKämpf: With vertical drag I mean the component of the drag force acting opposite to the vertical movement of the wing, i.e. the drag force that acts downwards when the wing moves up. $\endgroup$ – Chris Jun 8 '15 at 10:37 When you say, No power invested by the propeller goes into potential energy of the wing; the climb of the wing is done purely by lift. you're missing where the energy of the wing comes from. Lift isn't a magical power that creates potential energy out of nothing: it just turns airspeed (kinetic energy) into height (potential energy). In your example, the power invested by the propeller turns into kinetic energy of the whole cart, including the wing. That's how the energy gets from the propeller (or its fuel) into the potential energy of the wing. You need to use more thrust to drive the cart with the wing attached, than you would if you took the wing away. There are two ways to look at the forces produced during a climb. Remember that as a wing produces more lift, it also produces more induced drag. That's why you need excess thrust, to generate the excess lift. For a certain power setting, you can fly level at a certain speed. If you pitch up, the wings will create excess lift, but also more drag. Even though some of your thrust is acting vertically, there isn't any excess thrust, because the drag is greater. You'll slow down, the lift will decrease, and you'll stop climbing. Instead, you can keep the aircraft level, and add more thrust. This will increase your speed, which will also increase the lift from the wings. This in turn increases the induced drag, which will eventually balance the excess thrust at a new, higher airspeed. Because you've increased the lift by doing this, you'll climb, even though your wings are level. You can only do this because you added power in the first place. (I feel obliged to point out that you wouldn't usually climb like this: to get a better rate of climb, you'd generally add power and also pitch up, letting your airspeed decrease to the speed where the wings produce the most lift for the least drag.) Dan HulmeDan Hulme $\begingroup$ Induced drag in a steep climb is actually less than in a shallow climb, simply because lift is less (more of the net upwards force generated by thrust). By definition, Lift and Drag are perpendicular and parallel to the flight path (relative wind), not the earth horizontal plane. The increase in lift (and Di) s only momentary, to accelerate to create an upwards velocity as indicated by Peter Kämpfs answer $\endgroup$ – Waked May 31 '15 at 16:38 $\begingroup$ Believe it or not, induced drag goes down when speed increases. $\endgroup$ – Peter Kämpf May 31 '15 at 17:54 $\begingroup$ @PeterKämpf Because the angle of attack is decreasing, you mean? That's a point. I'd hoped to keep the explanation simpler than that, but maybe I tried to make it too simple. $\endgroup$ – Dan Hulme Jun 1 '15 at 9:53 $\begingroup$ Induced drag goes down when speed increases because wingtip vortices decrease at higher speed. $\endgroup$ – Chris Jun 1 '15 at 11:19 $\begingroup$ @DanHulme: "You need to use more thrust to drive the cart with the wing attached, than you would if you took the wing away." Of course, the reason is additional drag, which dissipates to heat. I am fully aware that this violates conservation of energy. 
But remember that energy conservation is a "macro principle" that gets induced by more basic principles, e.g. mechanics. You have to give mechanical reasons to show that energy conservation is in place. $\endgroup$ – Chris Jun 1 '15 at 11:40 I sort of feel that the rest of the answers are unnecessarily complex, given how simple the fundamentals here are: Question: Is it necessary that L>m.g (or as you put it, an excess of lift) in order to climb? Answer: No, at least not a sustained excess of lift. Newton's Laws state that an object in motion will remain in that state unless a force acts upon it. A force imbalance is required to set the aircraft into a climb, but once this has been achieved the forces can be balanced and the aircraft will continue to climb. As such, an excess of lift is not a condition required for an aircraft to sustain a climb. Question: Is it necessary that we add energy to the system (in the form of increasing our power output) in order to climb? Answer: Yes, if energy is conserved then in order to gain altitude (and by extension gravitational potential energy), we must add energy. We could add no energy, not increase the power output of our engines, and simply pull up, increasing AoA but also drag, and we would climb for a short time as we trade kinetic energy for gravitational potential energy, however we would find that our aircraft quickly slows and we are required to dive to below our original altitude to return to steady level flight. Hence a power excess is necessary for climb, but a sustained lift excess is not. thepowerofnonethepowerofnone $\begingroup$ I disagree with your statement that "an excess of lift is not a condition required for an aircraft to sustain a climb." Do you have any authorities on that which you can provide. Conventionally, an excess of lift results in a climb, and a short fall of lift results in a descent, compared to balanced lift and weight. Accordingly, I also look for an authority on your concluding statement that sustained left does not require a "power excess." $\endgroup$ – mongo Aug 5 '17 at 18:55 $\begingroup$ In a stabilized climb (constant airspeed, constant direction of flight path through space), lift is LESS than weight. See my answer. $\endgroup$ – quiet flyer Oct 15 '18 at 8:58 The simple answer is easy to demonstrate. Start with an aircraft TRIMMED for straight and level flight. For example 1000 feet, 100 mph, 1500 rpm fixed pitch prop. Lift = aircraft weight and thrust = aircraft drag. Now increase engine rpm by 150 rpm (10% more thrust), which increases thrust. The aircraft will for a moment accelerate, the increased airflow over the wing and stabilizer increases lift and the aircraft will gain altitude. In a few seconds the system will balance once again, the airspeed will return to the trimmed 100 mph, and the excess thrust will show up as climb rate. The aircraft will now be slightly pitched up, but the angle of attack remains constant since it is controlled by the stabilizer trim setting, which we did not touch. Next roll the elevator trim forward, which will lower the nose a bit. The airspeed will increase slightly and the climb rate will reduce. When trimmed once again to straight and level flight the aircraft rate of climb will be 0, the airspeed will be above 100 mph. Now the extra thrust shows up as increased speed. To continue the example, reduce the rpm back to the original 1500 rpm. leave the trim alone. The aircraft should now show a decent rate, at the new slightly higher airspeed. 
All this was done without input from the control stick. Anytime the pilot maneuvers the primary flight controls, there is a nearly instant trade between angle of attack, speed, lift, drag, inertia, and climb or descent rate. Jerry S. $\begingroup$ Lift = aircraft weight is valid only in a specific scenario (and the requirement for trimmed flight does not cover it): pitch angle null and engine mounting pitch null; alternatively, thrust vector "pitch" null. In any other case, including trimmed conditions, lift != aircraft weight $\endgroup$ – Federico♦ May 31 '15 at 18:37 $\begingroup$ @Federico, in a systemic sense, lift caused by engine pitch is lift. Just as body lift, tail lift (or negative lift) all sum to the body or system lift. If the aggregate lift goes up, the airplane can climb. If it becomes less than the weight of the aircraft, the aircraft descends. $\endgroup$ – mongo Aug 5 '17 at 19:01 The above answers beautifully explain the theoretical solution to your problem, but since you haven't accepted any one of them as of now, I'll illustrate the solution numerically. Let's assume that your cart is moving with a constant velocity $v$. Then the kinetic energy is $KE = \frac{1}{2}mv^2$, the drag is $D = \frac{1}{2}\rho v^2 S C_d$, and the total energy is $E = KE + D \cdot d$ (assuming frictionless interaction of surfaces everywhere). Now, $C_d = C_{d0} + K C_l^2$ and $d = v t$, so $$E = \frac{1}{2}v^2\left(m + \rho S v t \left(C_{d0} + K C_l^2\right)\right)$$ Here it can be seen that the total energy is being used for the kinetic-energy part of the cart and for the lift-coefficient part of the cart's wing. The lift-coefficient part is hence responsible for the energy used up in lifting the wing upwards, and hence the whole system obeys conservation of energy. Victor Juliet Updated simulation without vertical drag force In this situation, for the wing only, lift in a climb is greater than the weight. The vertical force stabilises to equal the weight, but since the lift vector is tilted backwards slightly due to the upward velocity, aerodynamic lift increases. Peter Kämpf's answer describes what happens to the wing in this situation, but what we did not have was a quantification. I've run a real-time simulation of the forces on the wing in the drawing of the OP, as a function of airspeed $V_{air}$ and vertical wing velocity $\dot{z}$. The forces on the wing are drawn below; I've taken a NACA 0012 profile with an $\alpha_0$ of 2 degrees: $$L = C_L \cdot \frac{1}{2} \cdot \rho \cdot V^2 \cdot A \tag{1}$$ $$D = C_D \cdot \frac{1}{2} \cdot \rho \cdot V^2 \cdot A \tag{2}$$ For NACA 0012, $C_L$ is proportional to $\alpha$: $C_L$ = 1 at $\alpha$ = 10 degrees, hence $$C_L = k_L \cdot \alpha \tag{3}$$ When the wing goes up, the angle of attack changes: $$\Delta \alpha = \arctan\left(\frac{\dot{z}}{V_{air}}\right) \tag{4}$$ We now lump all the constants together: $K_L = k_L \cdot \frac{1}{2} \cdot \rho \cdot A$ and $K_D = 0.01 \cdot \frac{1}{2} \cdot \rho \cdot A$ ($C_D$ = 0.01 for standard roughness at Re = 6 × $10^6$, for angles up to 4 deg). The lift at this angle of attack is found by combining (1), (3) and (4): $$L = K_L \cdot (\alpha_0 - \Delta \alpha) \cdot V^2 \tag{5}$$ The resulting force $F$ is divided by mass to give the wing acceleration, which is then integrated with a digital Euler integrator to yield $\dot{z}$. L and D are aligned with the free stream vector V, while the weight is always aligned with the vertical.
We take the cosine of the L vector minus the sine of the D vector $$ F_{up} = L \cdot cos(\Delta \alpha) - D \cdot sin(\Delta \alpha) \tag{6}$$ Now for: m = 1 kg A = 1 $m^2$ $\alpha_0 = 2 deg$ $k_L$ = 0.1 We get L = 9.81 N at $V_{air}$ = 8.949 m/s. If we then increase $V_{air}$ from 8.949 to 10.5 m/s in 1.5 second, the wing gets an initial acceleration upwards. After 2.4 seconds acceleration is zero, the wing goes up with constant velocity $\dot{z}$ = 0.1 m/s. The angle of attack has then reduced from 2 deg to 1.45 deg Values printed for beginning of test through to 3 seconds: There are some 2nd order effects in the response which may be from digital instability due to the large time step of the Euler integrator. Time to check this is not available at the moment. So in the end situation, L is 9.82 N which is larger than weight in a climb due to increase in airspeed. Not by much - the lift vector is tilted backwards at a small angle, determined by the ratio of $\dot{z}$ and V which is 0.01. The total vertical force is $ L \cdot cos\alpha - D \cdot sin\alpha - W$ $\begingroup$ You start from wrong assumptions. Drag is always perpendicular to lift - it's defined this way. There cannot be a drag component aligned with lift. Next, when the wing climbs, the lift force vector tilts backward and the drag vector downward due to the change in angle of attack. Now some of the drag is in the direction of weight but still perpendicular to lift. What is missing is the horizontal force of the beam which counteracts drag. $\endgroup$ – Peter Kämpf Jul 19 '17 at 18:59 $\begingroup$ @Peter I've thought long and hard about your comment. It makes sense to always define lift and drag in the direction of the airstream, *regardless of the direction of the airstream itself", is basically what you are saying. I agree with that, and will rework the simulation model. However, there is a lingering issue that multiple persons here have been trying to express: we accept that in a climb, power is required to overcome the increase in potential energy from the gravitational field. However, where is the power accounted for that needs to overcome aerodynamic resistance in vertical... $\endgroup$ – Koyovis Jul 21 '17 at 4:51 $\begingroup$ ...direction. A helicopter taking off vertically needs increased lift to offset the vertical fuselage drag. A rocket taking off needs to provide thrust equal to (weight + drag). Why is this for fixed wing only valid in thrust direction, and not in lift direction? $\endgroup$ – Koyovis Jul 21 '17 at 4:55 $\begingroup$ Thrust is opposite to drag (roughly) but larger in a climb. So all vertical drag is compensated by thrust, and some vertical thrust remains to reduce demand for lift. Please see the drawing here and look how the force vectors compare. In a descent, thrust is smaller, leaving some vertical drag uncompensated which now reduces the required lift. Yes, in a descent drag helps to reduce lift. Thrust is in the direction of drag while lift is orthogonal to both. So thrust compensates $\endgroup$ – Peter Kämpf Jul 21 '17 at 21:00 $\begingroup$ drag, not lift. In vertical ascent (rocket or helicopter) it is thrust, not lift, which compensates for drag (and needs to be a bit larger than weight). Both produce thrust to counteract weight, and only when the helicopter adds some forward speed, lift will be created in addition to thrust. $\endgroup$ – Peter Kämpf Jul 21 '17 at 21:08 There is a lot more to this question than initially meets the eye-- it is quite an interesting question. 
Normally, in the context of fixed-wing flight, the thrust vector acts roughly parallel to the flight path through the airmass. When the thrust vector is exactly parallel to the flight path through the airmass, and the flight path is linear rather than curving up or down or to either side, then the vector diagram of forces in climb looks like this (left-hand case-- climb angle of 45 degrees-- right-hand case-- climb angle of 90 degrees): Powered climb at climb angles of 45 and 90 degrees: We can see that Lift = Weight * cosine (climb angle). In the left-hand diagram, the climb angle is 45 degrees and Lift = .707 * Weight. In the right-hand diagram, the climb angle is 90 degrees and Lift is zero. But, these diagrams assume that the Thrust vector acts parallel to the flight path through the airmass. Obviously, if this isn't true, thrust equation lift = weight * cosine (climb angle) is also no longer true. To take an extreme case, note that when the exhaust nozzles of a Harrier "jump jet" are pointed straight down, the wing is "unloaded"-- the plane can hover at zero airspeed with zero lift, supported entirely by thrust. Conversely, during a glider winch launch, the towline pulls steeply downward on the glider. This too can be viewed as a form of "vectored thrust"-- but now the load on the wing is increased, rather than decreased, so the wings must generate a lift force that is much greater than the aircraft's weight. In the case presented in this question, Thrust does NOT act along the flight path of the "aircraft", if we consider the wing to be the "aircraft". When the wing is rising up the pole, the vertical motion causes a change in the direction of the wing's trajectory through the airmass and also a change in the direction of the "relative wind", but there is no corresponding change in the direction of the Thrust vector. Thus the thought experiment presented in the question is NOT representative of the typical situation in fixed-wing flight. The thrust vector is NOT fixed in direction relative to the chord line of the wing, and is NOT acting roughly parallel to the direction of the "relative wind" experienced by the wing, and the direction of the wing's flight path through the airmass. Furthermore, the basic mechanism that governs the airspeed of a fixed-wing aircraft is absent. Normally, as an aircraft climbs, if Lift exceeds Weight, the flight path will curve upward, causing the Weight vector to have a greater component acting parallel to the direction of the aircraft's flight path through the airmass, causing a decrease in speed. But in this experiment, since the wing is "locked" into position on the cart in the fore-and-aft sense, if the wing's trajectory curves upward, it appears that the cart will provide however much thrust is needed to hold the horizontal component of the wing's velocity vector component exactly constant. Assuming, that is, that the wing's drag is trivial compared to drag from other sources such wheel drag and wheel bearing drag from the cart, so that variations in drag from the wing have essentially no effect on the airspeed and groundspeed of the cart. So the forces acting on the wing in this thought experiment will be very different from the forces typically acting on a fixed-wing aircraft in actual flight. It should not come as surprise to discover that in the case of this thought experiment, lift actually must be greater than weight in order for the wing to climb up the pole. We really could end this answer right here. 
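To put numbers on that force balance, here is a minimal Python sketch for the conventional case just described (thrust parallel to a straight flight path); it only evaluates Lift = Weight · cos(climb angle) and the along-path weight component that excess thrust has to carry, reproducing the 0.707·W and zero-lift figures quoted above.

```python
import math

def steady_climb_forces(weight, climb_angle_deg):
    """Steady climb with thrust parallel to the flight path:
    lift carries W*cos(gamma); thrust minus drag carries W*sin(gamma)."""
    gamma = math.radians(climb_angle_deg)
    lift = weight * math.cos(gamma)                 # perpendicular to the flight path
    weight_along_path = weight * math.sin(gamma)    # carried by excess thrust
    return lift, weight_along_path

for angle in (0, 45, 90):
    lift, extra = steady_climb_forces(weight=1.0, climb_angle_deg=angle)
    print(f"climb angle {angle:2d} deg: lift = {lift:.3f} W, thrust - drag = {extra:.3f} W")
```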
But it's rather interesting to look a little more deeply at the forces acting on the wing in the thought experiment. What are some of the notable features of the thought experiment? As we've already noted, the wing is locked in place relative to the cart in the fore-and-aft direction. The wing cannot speed up or slow down in the fore-and-aft direction, relative to the cart. Furthermore, IF the drag from the wing is trival compared to the other sources of drag acting on the cart, so that the drag from the wing has essentially no effect on the cart's airspeed, this means that the wing cannot speed up or slow down in the fore-and-aft direction relative to the airmass (or relative to the ground). The cart will transmit to the wing however much thrust is needed to hold constant the fore-and-aft component of the wing's airspeed vector. This is very different from the typical situation in fixed-wing flight. Also. as the thought experiment was originally worded, the wing is locked into a constant pitch attitude. Here is a vector diagram illustrating the situation when the cart is moving at some constant airspeed that is NOT high enough to allow the wing to lift off the cart: The forces illustrated include Lift (L), Drag (D), Weight (W), Thrust (T), and the upward force (C) exerted by the cart on the wing as the wing rests on the cart. Net force is zero. The L/D ratio is 10:1. Now assume that we hold the wing down with a catch as we increase power and accelerate to some higher airspeed, and then allow everything to stabilize. Then we unlatch the catch. The diagram below shows the situation at the instant we unlatch the catch-- The wing has not yet begun to rise upward, so there is no change in the direction of the wing's trajectory through the airmass, or the direction of the lift and drag vectors. The wing's angle-of-attack has not changed, so the lift and drag coefficients have not changed, so the L/D ratio is still 10/1. The dashed line represents the net force vector, which is simply the vector sum of all the other force vectors. Acceleration = force / mass, so we can also label the net force vector as "Acceleration * mass". What happens as the wing starts to rise (accelerate) up the pole? The wing's upward velocity causes a change in the "relative wind" experienced by the wing. The wing's angle-of-attack immediately decreases or becomes negative, so the lift coefficient decreases, and the L/D ratio decreases. (The drag coefficient might decrease too, but not as much as the lift coefficient.) If the wing has a cambered, non-symmetrical airfoil, it will still produce lift at some small negative angle-of-attack, but not very much-- the lift coefficient will be low. When the wing reaches some given upward vertical velocity, Lift will have decreased to the point such that the vertical component of the net aerodynamic force acting on the wing will no longer be larger than Weight, the net force on the wing will drop to zero, and the wing will no longer be able to accelerate, but rather will move up the pole with a constant vertical velocity. The figure below illustrates this situation: The direction of the wing's path through the airmass is parallel to (and opposite to) the direction of the Drag vector. The climb angle-- labelled "C" in the diagram-- is the acute angle formed between the Thrust and Drag vectors, and also between the Lift and Weight vectors. This is also the angle between the Drag vector and the horizon, and also the angle between the Lift vector and the vertical direction. 
The vectors can be all arranged head-to-tail in a closed figure, so the net force is zero. Lift is slightly larger than Weight, and Thrust is quite a bit greater than Drag. If the wing is mounted on the cart with zero incidence, then the wing's angle-of-attack must be slightly negative-- in fact it must be equal to negative "C" degrees. We've drawn the L/D ratio as 2/1, to represent the decrease in the wing's lift coefficient caused by the change in angle-of-attack. The wing is moving up the pole at a constant velocity. Interestingly, this situation is virtually identical to the situation experienced by the rising wing as an aircraft rolls to a steeper bank angle, especially if the roll is driven by a spoiler deployed on the descending wing with no modification to the shape of the rising wing. The change in angle-of-attack caused by the wing's rising motion through the airmass limits the vertical speed that the rising wing can attain-- this is called "roll damping". The lift and drag vectors are "twisted aft" or "twisted backwards" from the direction they pointed before the rolling motion started-- you can also see this "twist" illustrated in this diagram https://www.av8n.com/how/img48/adverse-yaw-steady.png from this section https://www.av8n.com/how/htm/yaw.html#sec-adverse-yaw of the excellent "See How It Flies" website https://www.av8n.com/how/ . The situation is also exactly like the situation we'd have if we were towing a glider under the following conditions: 1) We have a very long tow rope-- so long that the angle of the tow rope relative to the horizon is not influenced at all by the glider's climb rate relative to the tow plane. 2) The towplane is flying in such direction such that the glider's end of the rope is pulling exactly horizontally on the glider 3) The glider's drag is trivial compared to the towplane's thrust and drag, so the glider's aerodynamic situation has no influence on the towplane's airspeed. 4) The glider pilot is giving pitch control inputs in such a way as to force the glider's pitch attitude to stay constant relative to the horizon, regardless of climb rate. Now, what if we modify the experiment by allowing the wing's pitch attitude to vary, while holding the wing's angle-of-attack constant relative to the wing's trajectory through the airmass-- perhaps by adding a stabilizing vane or tail to the back of the wing? Now what happens when the wing starts to rise? The figure below illustrates a situation where lift is exactly equal to weight. The drag vector is horizontal, so the wing cannot be rising or falling through the air-- it must be staying in a fixed position on the pole. Note that we've chosen to illustrate a 5:1 L/D ratio for this version of the thought experiment. Now what if we give the wing the tiniest upward push, so that it starts to rise? As soon as it starts to rise, its velocity through the airmass is augmented by its vertical motion. And now the wing is free to pivot in such a way that its angle-of-attack can stay constant, so we don't have the "damping" effect that we had in the earlier version of the experiment. The resulting increase in airspeed and lift is much like we see when a glider rises on a winch tow, except that in the case of the wing on the imaginary cart, the thrust vector stays horizontal, rather than pointing partly downwards. The wing's climb angle through the airmass will get steeper and steeper as its vertical speed increases. This causes the airspeed to increase, which causes the Lift vector to get larger. 
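Before looking at the 60-degree case below, it may help to quantify how quickly things grow once the wing is free to hold its angle of attack. A minimal sketch, assuming only that the horizontal component of airspeed stays fixed and that lift scales with the square of airspeed at constant angle of attack:

```python
import math

def lift_growth(climb_angle_deg, horizontal_speed=1.0):
    """Horizontal airspeed component held fixed, angle of attack constant:
    airspeed along the climb path is V = Vh / cos(gamma), and lift scales with V squared."""
    gamma = math.radians(climb_angle_deg)
    airspeed = horizontal_speed / math.cos(gamma)
    return airspeed, airspeed ** 2   # airspeed factor, lift factor relative to level path

for angle in (0, 30, 45, 60):
    v, lift_factor = lift_growth(angle)
    print(f"{angle:2d} deg: airspeed x{v:.2f}, lift x{lift_factor:.2f}")
```

At a 60-degree climb angle the airspeed has doubled and the lift has quadrupled, which is exactly the case illustrated in the next figure.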
The figure below illustrates the situation that we see when the wing's climb angle through the airmass reaches 60 degrees. Again, the dashed line represents the net force vector, which is simply the vector sum of all the other force vectors. Acceleration = force / mass, so we can also label the net force vector as "Acceleration * mass". In this particular case, we've sized the lift and weight vectors to represent the situation where the horizontal component of the wing's speed through the airmass is exactly the same as it was in the previous diagram above, where lift was exactly equal to weight in the case where the wing's trajectory was horizontal. Simply by rising upward, the wing has experienced a doubling of airspeed and a four-fold increase in the magnitude of the lift vector. The sum of the vertical components of the lift and drag vectors is now 1.3 times the weight of the wing. Of course, we could modify the diagram to represent a case where the wing was experiencing a net upward force even before it started accelerating upward, simply by decreasing the size of the weight vector relative to the other vectors. If the cart's velocity stays constant, and cart is able to transfer however much thrust is needed to the wing to keep it locked in place in the fore-and-aft direction relative to the cart, will the wing keep accelerating faster and faster up the pole? It turns out it will not. Even if the wing has no weight at all, it will stop accelerating upwards once its climb angle equals the inverse tangent of the L/D ratio. For the 5/1 case illustrated here, that climb angle is 78.7 degrees. If the wing does have weight, the maximum achievable climb angle will be less. In the particular case illustrated above, where the Weight vector is exactly equal to the Lift vector that existed when the wing had zero upward velocity, the maximum achievable climb angle is somewhere between 70 and 75 degrees. Above this maximum achievable climb angle, the vertical components of Lift and Drag no longer add up to a value that is greater than Weight. So even when the wing is free to pivot to maintain a constant angle-of-attack, and the cart has infinite thrust available to allow it to maintain a constant airspeed in spite of changes in the wing's drag force, there is a limit to the climb angle that the wing can achieve. Here's an interesting table:

ca    cos     sin     aspd    L       D       vcL     vcD     net aero vc   net vert
0     1.00    0.00    1.00    1.00    0.200   1.00    0.00    1.00          0.00
30    0.866   0.500   1.15    1.33    0.267   1.15    0.133   1.02          0.021
45    0.707   0.707   1.41    2.00    0.400   1.41    0.283   1.13          0.13
70    0.342   0.940   2.92    8.55    1.71    2.92    1.61    1.32          0.32
75    0.259   0.966   3.86    14.9    2.99    3.86    2.88    0.980         -0.02
80    0.174   0.985   5.76    33.2    6.63    5.76    6.53    -0.773        -1.773

Assumptions-- Constant horizontal component of airspeed in all cases; 5/1 L/D ratio. The last column (but only the last column) assumes that the value of Weight is selected in such a way that Weight is exactly equal to the Lift vector in the particular case where the climb angle is zero, meaning that Weight and Lift are exactly in balance in this case. The value of Weight has no effect on any of the other columns.
ca= climb angle in degrees cos= cosine of climb angle sin= sine of climb angle airspd= speed of wing through air in arbitrary units L= Lift in arbitrary units D= Drag in the same units as L We've chosen an L/D ratio of 5/1 vcL= vertical component of lift (acts upwards) = L * cosine (climb angle) vcD= vertical component of drag (acts downwards) = D * sine (climb angle) net aero vc = vertical component of net aerodynamic force= (vcL-vcD) -- a positive sign means that the net aerodynamic force acts upwards, while a negative sign means that the net aerodynamic force acts downwards. net vert = net vertical force = (net aero vc - weight), assuming that Weight is selected in such a way that Weight is exactly equal to the value of the Lift vector in the case where the climb angle is zero. If the last column (net vert) is negative, this means that in the case where Weight is set to the particular value described above, the climb rate must decelerate (and the climb angle must decrease). If the second-to-last column is negative, the climb rate must decelerate (and the climb angle must decrease) even if Weight is zero. This version of the thought experiment-- where the pitch attitude of the wing is free to vary to maintain a constant angle-of-attack-- is somewhat like what happens at the start of glider winch tow, especially near the beginning of the tow when the towline is very long and the tow force stays almost horizontal for a while, even if the glider starts to climb rapidly. Finally-- the original question contained the following line: " Please notice that - because thrust is horizontal - the chemical energy burned goes into kinetic energy of the cart and/or heat energy (due to overcoming drag). No power invested by the propeller goes into potential energy of the wing; the climb of the wing is done purely by lift." Thrust certainly does do work along the direction of the flight path through the airmass, which is never purely vertical. The situation appears to be analogous to a lightweight cube (say made of balsa wood) being blown up a slippery ramp by a wind that is blowing horizontally. Is the wind increasing the potential energy of the cube? For more on the more "conventional" climbing situation-- a fixed-wing aircraft with thrust acting parallel to the direction of the flight path-- see these related answers to related questions: :Does lift equal weight in a climb?" "What produces Thrust along the line of flight in a glider?" "'Gravitational' power vs. engine power" "Descending on a given glide slope (e.g. ILS) at a given airspeed— is the size of the lift vector different in headwind versus tailwind?" "Are we changing the angle of attack by changing the pitch of an aircraft?" quiet flyerquiet flyer The other answer (and the basic premise) are misleading. Aviation is a careful balance of just about everything, and you rarely get more of one without losing some of the other. Consider a rocket. No wings, and thus no lift. Sea level, 0m/s to orbit on 8 minutes. Everything done with a rather ridiculous amount of power. Now consider an airship. Also no wings, but excess lift. Goes up all by itself. The engines are purely for maneuvering, if you remove them we usually call the thing a balloon. One of the most ridiculous things I ever heard was a flight instructor who claimed that the throttle controls altitude and the elevator controls speed. 
I called complete BS on the statement and asked if he found himself flying toward some cumulo-granitus would he prefer to add power or yank back rather sharply on the controls? What he was trying to get across was that in the extremely narrow regime of straight and level flight*, adjusting power will affect speed which affects lift, and the plane will eventually stabilize at a different altitude after a power adjustment. Adjusting pitch will change your speed almost immediately, but your altitude will change as well. You could arrange a demonstration of adjusting power and pitch simultaneously and having nothing else change, but that basically proves my point. If you want to slow down (for example, landing) do you leave the throttle at full and pull the stick all the way back? Of course not. It's a very delicate balance, and it's something pilots spend a lot of time learning. Or if you work for certain asian airlines you just have some very expensive computers handle it for you. Yes, we spend a lot of time there, but consider how precise you need to be. A fraction of a degree, uncorrected, in any axis will result in you crashing before you run out of fuel. $\begingroup$ "The other answer" which one? There are now three other answers and there may well be more by the time you read this. $\endgroup$ – David Richerby Jun 1 '15 at 6:55 $\begingroup$ Also, -1 for the racist reference to "certain asian airlines". $\endgroup$ – David Richerby Jun 1 '15 at 6:57 $\begingroup$ Not only that, but there were at least two answers at this time this was posted. $\endgroup$ – a CVn Jun 1 '15 at 8:34 $\begingroup$ @paul OK. But, as you say, they just happen to be located in Asia. Saying they're Asian doesn't identify them and their ethnic origin isn't relevant to their preference for autoland. Using the word "Asian" as the only description makes it look like you're talking about Asian airlines in general so it's best avoided. $\endgroup$ – David Richerby Jun 1 '15 at 14:55 $\begingroup$ @DavidRicherby That's why I said "certain asian airlines" and not "asian airlines". I wonder what your reaction would have been if I said "certain european airlines". $\endgroup$ – paul Jun 1 '15 at 23:04
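Tying back to the Euler-integration simulation described several answers above (equations (1)–(6), NACA 0012, m = 1 kg, A = 1 m², α0 = 2 deg, kL = 0.1 per degree, CD = 0.01), here is a minimal sketch of that kind of time stepping. The air density of 1.225 kg/m³ and the use of the total relative-wind speed in the lift and drag terms are assumptions on my part rather than values stated in that answer.

```python
import math

# Assumed constants (rho and the use of total relative-wind speed are assumptions;
# the rest are the values quoted in the simulation answer).
RHO, MASS, AREA = 1.225, 1.0, 1.0          # kg/m^3, kg, m^2
ALPHA0_DEG, KL_PER_DEG, CD = 2.0, 0.1, 0.01
G = 9.81
K_L = KL_PER_DEG * 0.5 * RHO * AREA        # lumped lift constant (per degree)
K_D = CD * 0.5 * RHO * AREA                # lumped drag constant

def vertical_speed_step(v_air, z_dot, dt):
    """One Euler step of the wing's vertical motion on the pole."""
    d_alpha = math.degrees(math.atan2(z_dot, v_air))                 # eq. (4)
    v = math.hypot(v_air, z_dot)                                     # relative-wind speed
    lift = K_L * (ALPHA0_DEG - d_alpha) * v ** 2                     # eq. (5)
    drag = K_D * v ** 2                                              # eq. (2), lumped
    phi = math.radians(d_alpha)
    f_net = lift * math.cos(phi) - drag * math.sin(phi) - MASS * G   # eq. (6) minus weight
    return z_dot + (f_net / MASS) * dt

dt, z_dot, v_air = 0.001, 0.0, 8.949
for _ in range(3000):                                        # 3 s of simulated time
    v_air = min(10.5, v_air + (10.5 - 8.949) / 1.5 * dt)     # ramp V_air over 1.5 s
    z_dot = vertical_speed_step(v_air, z_dot, dt)
print(f"vertical speed after 3 s: {z_dot:.3f} m/s")
```

With these numbers the vertical speed should settle near the 0.1 m/s quoted in that answer once the ramp from 8.949 to 10.5 m/s is complete.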
Eco-friendly consolidated process for co-production of xylooligosaccharides and fermentable sugars using self-providing xylonic acid as key pretreatment catalyst Xin Zhou1,2,3 & Yong Xu ORCID: orcid.org/0000-0002-8106-326X1,2,3 Obtaining high-value products from lignocellulosic biomass is central for the realization of industrial biorefinery. Acid pretreatment has been reported to yield xylooligosaccharides (XOS) and improve enzymatic hydrolysis. Moreover, xylose, an inevitable byproduct, can be upgraded to xylonic acid (XA). The aim of this study was to valorize sugarcane bagasse (SB) by starting with XA pretreatment for XOS and glucose production within a multi-product biorefinery framework. SB was primarily subjected to XA pretreatment to maximize the XOS yield by the response surface method (RSM). A maximum XOS yield of 44.5% was achieved by acid pretreatment using 0.64 M XA for 42 min at 154 °C. Furthermore, XA pretreatment can efficiently improve enzymatic digestibility, and achieved a 90.8% cellulose conversion. In addition, xylose, the inevitable byproduct of the acid-hydrolysis of xylan, can be completely converted to XA via bio-oxidation of Gluconobacter oxydans (G. oxydans). Subsequently, XA and XOS can be simultaneously separated by electrodialysis. XA pretreatment was explored and exhibited a promising ability to depolymerize xylan into XOS. Mass balance analysis showed that the maximum XOS and fermentable sugars yields reached 10.5 g and 30.9 g per 100 g raw SB, respectively. In summary, by concurrently producing XOS and fermentable sugars with high yields, SB was thus valorized as a promising feedstock of lignocellulosic biorefinery for value-added products. The global overconsumption of fossil fuels has driven the development of the lignocellulosic biorefinery concept, an industrial process that converts sustainably sourced lignocellulosic biomass into energy, chemicals, and fuels [1, 2]. A suitable candidate for such processing is sugarcane bagasse (SB), which is one of the most abundant types of lignocellulosic biomass in China and in other parts of the world [3]. SB is comprised of all biomass refuse after sugarcane harvest, and is usually discarded or burned at the field [4]. Both of these ongoing practices represent environmentally imprudent or even harmful usage of a readily available biomass resource [3,4,5]. The results of previous studies on the biorefining of SB indicated SB as well-suited for the production of value-added products especially due to its high polysaccharide content. Specifically, several sources of SB have been shown to consist of 40–50% cellulose and 20–30% hemicelluloses based on their dry mass [6, 7]. However, the natural structure of SB (and in general, of all lignocellulosic biomass) is strongly recalcitrant to enzymatic hydrolysis of polysaccharides (cellulose in particular). This characteristic thus necessitates the implementation of economical pretreatment that deconstructs the lignocellulosic entanglement without inducing damaging its constituents. To date, the pretreatment technologies that have received the most attention include hydrothermal, steam explosion, diluted acid, and diluted alkaline pretreatment [5, 8, 9]. Of particular note is that diluted acid pretreatment is not only beneficial for enzymatic hydrolysis, but it is also an effective mean to obtain desirable xylooligosaccharide (XOS) products from SB [10, 11]. This is because SB hemicelluloses are predominately composed of xylan. 
Here, XOS, from degradation of xylan, are short-chain oligosaccharides that are composed of 2–7 xylose molecules, tethered by β-(1,4) glycosidic linkages [12]. The most attractive biological feature of XOS is that it cannot be digested or absorbed by the human digestive system when consumed. Furthermore, it promotes the growth of beneficial intestinal bacteria. Therefore, XOS with a low degree of polymerization (DP), can be considered as key ingredients toward the improvement of gut health. Ingestion leads to a boost in calcium absorption, lowers cholesterol, improves the immune system, and reduces the risk of colon cancer [13,14,15,16]. In addition, the rapid growth of the functional food/feed in markets has boosted the demand for high-quality XOS, which the current market prices reflect as much as $22–50/kg [17,18,19]. Thus, it is beneficial to implement diluted acid pretreatment in a biorefinery process to optimize XOS production. Doing so would add an additional profit-generating product to the product portfolio of a biorefinery with the potential to achieve a high price. Pretreatments with diluted acid use mineral or organic acid reagents, both of which have been reported to be capable of yielding XOS [20,21,22]. However, it has been suggested that mineral acids promote an increased extent of xylose and furfural production, which directly translates to lower XOS yields [20]. In contrast, organic acids tend to favor XOS generation, while also yielding additional benefits such as little furfural yield, lower corrosiveness and decreased generation of enzymatic hydrolysis inhibitors [11, 23, 24]. Various organic acids, such as acetic acid, oxalic acid, and gluconic acid, have already been explored as reagents for XOS production during acid pretreatment [25, 26]. Lin et al. [23] developed a microwave-induced hydrolysis of beechwood xylan with oxalic acid strategy to produce XOS, which achieved a yield of 39.31%. Zhang et al. [11] developed an acetic acid pretreatment method to produce XOS and improve enzymatic hydrolysis efficiency from corncob, and the acetic acid hydrolysis achieved XOS yield of 45.9%. In addition, Zhou et al. [24] found that gluconic acid exhibited a good ability to degrade SB xylan into XOS with a yield of 53.2%. It is important to note that this pretreatment technology effectively reduces biomass' recalcitrance to enzymatic digestion [11, 27]. The key concern during diluted acid pretreatment for XOS production is to control how much xylose will be generated, which is an inevitable chemical reaction given the lability of glycosidic bonds in acidic media. Interestingly, previous reports suggested and successfully demonstrated that this xylose could be upgraded to xylonic acid (XA) and recovered as an additional product stream via electrodialysis [28, 29]. XA, with a structure similar to gluconic and derived from the oxidation of xylose, can also release H+ to depolymerize xylan [30]. In addition, it has been demonstrated that acidic pretreatment can efficiently weaken the hydrolyzed glycosidic bond of hemicellulose and lignin–hemicellulose bonds, which leads to sugar dissolution in the hemicellulose and to an increased porosity of the plant cell wall [6]. Overall, it is feasible to assume that XA can be used as an acid catalyst during diluted acid pretreatment. This study aspired to further optimize the acid pretreatment for obtain high value-added XOS that utilizes XA as key reagent. 
Thus, the influences of XA concentration, hydrolysis duration, and temperature on the composition of obtained hydrolysates were primarily assessed to maximize the XOS yield. Furthermore, the enzymatic hydrolysis efficiency of XA pretreated SB solids was also investigated. In summary, this work demonstrates an integrated biorefinery process that utilizes a reagent derived from biomass itself to produce XOS, a substrate that remains easily digestible by cellulolytic enzyme systems. Response surface method optimization of XOS yields The main purpose of this study aspired to maximize the XOS yield due to the high value and the proportion of the soluble compounds (XOS) generally depends on the operation conditions. Here, temperature, acid concentration, and reaction time are crucial parameters in the hydrolysis of hemicelluloses. This is because each of these factors affects both hydrolysis rate and selectivity. Consequently, each of these three parameters was primarily optimized to achieve the highest XOS yield. RSM is an effective statistical procedure that uses a minimum set of experiments to determine the coefficients of a mathematical model as well as the optimum conditions. Thus, in the present study, SB samples were pretreated at different temperatures (130–170 °C) over a time range of 15–75 min with 3.0 g of SB and 30 mL XA solutions with different XA concentrations. Values of independent process variables were studied and the responses obtained from 13 different combinations of reaction conditions are shown in Table 1. Table 1 Independent variables of the central composite design and results of response surface analysis [xylooligosaccharide (XOS) yields] After the XA hydrolysis of SB hemicelluloses was completed, the hydrolysis products in the hyrolysate of each sample were analyzed. Although XOS with DP larger than 6 could be generated, each sample reached up to DP 6, due to the lack of > DP6 of the standards, such as X7 and X8. Thus, X2–X6 were calculated for XOS yield, which is listed in Table 1. In addition, the relative contents of X2–X6, xylose, and furfural are shown in Fig. 1. According to the fit summary reports, the following regression equation represented the XOS yield from the experimental responses: $$Y = - 859.7 + 10.65a_{1} + 1.92a_{2} + 127.75a_{3} - 0.01a_{1} a_{2} + 0.16a_{1} a_{3} - 0.34a_{2} a_{3} - 0.03a_{1}^{2} - 0.003a_{2}^{2} - 108.07a_{3}^{2}$$ Yields of furfural, xylose, X2, X3, X4, X5, and X6 in hydrolysate produced from SB with different acid concentrations and times at a 130 °C, b 150 °C, and c 170 °C In the formula, a1, a2, and a3 represent temperature, hydrolysis time, and XA concentration, respectively. The determination coefficient R2 was used as correlation measure to test the goodness-of-fit of the regression equation, which is defined as the ratio of the explained variation to the total variation, and demonstrates the agreement between the observed and predicted results [31]. In the present study, the R2 for XOS yield was 0.974. The relatively high R2 values of the models indicate close agreement between the experimental results and the predicted values provided by the model. This similarity can also be verified by high correlations between the observed and predicted values (curves are supplied in Additional file 1). The results from analysis of variance yield a model's F-value of 12.48 and a P-value of 0.031, indicating that the model's quadratic terms accurately fitted the experimental data. 
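For readers who want to probe the fitted surface directly, the following is a minimal sketch that encodes the regression equation exactly as printed above; because the printed coefficients are rounded, its absolute predictions are indicative only and will not reproduce the reported optimum of 45.2% exactly.

```python
def predicted_xos_yield(a1, a2, a3):
    """Quadratic response-surface model as printed above.
    a1 = temperature (deg C), a2 = time (min), a3 = XA concentration (mol/L).
    Coefficients are copied verbatim from the text; they are rounded,
    so predictions are indicative only."""
    return (-859.7 + 10.65 * a1 + 1.92 * a2 + 127.75 * a3
            - 0.01 * a1 * a2 + 0.16 * a1 * a3 - 0.34 * a2 * a3
            - 0.03 * a1 ** 2 - 0.003 * a2 ** 2 - 108.07 * a3 ** 2)

# Evaluate at the optimum reported in the text (154 deg C, 42 min, 0.64 M);
# because of the coefficient rounding, the value printed here will differ
# from the 45.2% obtained with the full-precision model in Design-Expert.
print(predicted_xos_yield(154, 42, 0.64))
```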
Analysis of XOS yield showed that the P-values of a1, a1a2, a12, and a32 were < 0.05, indicating that the independent variable a1 and the quadratic terms of a1a2, a12, and a32 exerted significant effects on XOS yield. Two-dimensional contour plots and three-dimensional response surface plots were generated by Design-Expert software and is showed in Fig. 2a–c. The analysis results also demonstrated that the effects of the reaction temperature was more significant for XOS yield than that of hydrolysis time and acid concentration. Response surface showing the effects of independent variables on XOS yields. a Reaction temperature (°C) and time (min); b XA concentration (mol/L) and reaction temperature (°C); c XA concentration (mol/L) and reaction time (min) A relatively low temperature and short treatment time resulted in very low xylose and XOS yields. This is due to the time it takes to hydrolyze hemicelluloses into oligosaccharides. However, the XOS yield increased strongly from 1.3% (at 130 °C, 15 min, and 1.0 M) to 31.2% at elevated temperature (170 °C, 15 min, and 1.0 M). This was because the higher temperature accelerated the degradation of xylan-type hemicelluloses. Since the DP decreased with increasing acid concentration and hydrolysis time, the formation of monomers in the hydrolysate was inevitable. The results indicated that higher temperature and longer reaction times were conducive to the further degradation of XOS. With increased reaction temperature and acid concentration, X5, X6, and > X6 continued to be hydrolyzed into smaller oligosaccharides such as X2, X3, and X4. A higher level of X2 and X3 concentrations and lower amounts of X5 and X6 were observed at higher temperature with long reaction time. Although higher XOS yield (> 40%) also could be achieved by higher XA concentration with longer reaction time, more xylose and furfural were generated (150 °C, 95 min, and 0.625 M). As shown in Fig. 1a–c, the results indicated that the levels of both xylose and furfural increased during hydrolysis with increasing temperature and retention time. The red zone in Fig. 2a–c shows the optimal condition for the production of XOS. The hydrolytic depolymerization of xylan-rich hemicelluloses proceeded most efficiently at 154 °C and 42 min with 0.64 M XA, with a predicted optimum XOS yield of 45.2%. The real contents of X2–X6 and xylose in hydrolysate by the optimized condition pretreatment were 3.22 g/L, 2.53 g/L, 2.37 g/L, 1.38 g/L, 0.96 g/L, and 7.11 g/L. Namely, the experimentally obtained XOS yield under this optimized condition was 44.5%. Clearly, the experimental values for XOS yields were found to be close to the predicted values obtained by the fitted model. The predicted optimum exactly matches the experimental optimum, thus verifying the accuracy of the established response surface. In addition, 0.28 g/L furfural was very low with this condition. Thus, condition using 154 °C and 42 min with 0.64 M XA is feasible for squeezing the maximum quantity of profit-generating products. Xylonic acid fermentation and separation XOS are the prior target of this study since these are higher value-added products. The results presented above indicated that acid hydrolysis at high temperature can effectively depolymerize xylan-type hemicelluloses into XOS and achieve high yield. The results also proved useful to demonstrate XA as an effective catalyst for XOS production. However, xylose was also simultaneously generated during acid pretreatment. 
Under the optimum conditions for XOS production, 10.5 g/(100 g SB) XOS could be produced, and 7.1 g/(100 g SB) xylose was released into the hydrolysate. Moreover, approximately 102.1 g/L XA (as catalyst) still remained in the hydrolysate. In general, a commercial XOS product mainly consists of DP 2–6 with a purity of 70–95% [17]. Thus, both xylose and the catalyst (XA) need to be removed to produce XOS with sufficient purity. Previously, Cao et al. successfully purified sodium xylonate (XA·Na) broth via electrodialysis; however, the broth still contains residual sugar (xylose). Their results showed that XA, xylose, and NaOH could be separated simultaneously [29]. In addition, xylose can be converted into bio-oxidized XA by G. oxydans with a yield close to 100% [28, 32]. Overall, xylose can be converted to XA, which can then be separated by electrodialysis. After XA pretreatment, the pH of pre-hydrolysate was adjusted to 6.0 using NaOH. pH adjustment is a requirement because a low pH inhibits the activity of G. oxydans cells. Next, the pre-hydrolysate was directly subjected to G. oxydans for the conversion of xylose into XA. During XA fermentation, NaOH was used to maintain the fermentation pH so that XA·Na remained in the hydrolysate. After 24 h of fermentation, approximately 109.1 g/L XA accumulated in the broth, with a corresponding yield of 95.1% (Fig. 3). Xylose could be completely and efficiently converted to XA by G. oxydans, even without addition of any further nutrients or inorganic salts to the hydrolysate. These observations also suggested that xylose was exclusively oxidized to XA without further catabolism. In addition, X2–X6 curves (Fig. 3) and chromatograms (Fig. 4a, b) confirmed that xylose was bio-converted into XA and XOS was not utilized by G. oxydans. This enabled the preservation of XOS and XA·Na for their downstream purification/separation. Thus, the XA-pre-hydrolysate was subjected to electrodialysis for XOS and XA separation and recovery. Electrodialysis was performed at a current density of 50 mA/cm2 and for about 30 min; 96.8% XA of the fermentation broth was recycled and close to 100% XOS were retained in the broth. In summary, electrodialysis with bipolar membranes is a very promising and prospective method for the simultaneous separation of ionic salts and for the recycling of sugar compounds for other uses. Xylose bioconversion by Gluconobacter oxydans Chromatogram of high-performance anion exchange chromatography (HPAEC) analysis: a hydrolysate from xylonic acid (XA) pretreatment of sugarcane bagasse (SB); b hydrolysate after xylose bio-oxidation by G. oxydans Enzymatic hydrolysis of pretreated solids In the present study, XA (as an organic acid) was introduced to produce XOS, which also greatly changed the composition of SB. After XA pretreatment at RSM optimized conditions, the pretreated solids contained 54.6% glucan, 10.3% xylan, and 26.3% lignin. The hemicellulose fraction was mostly hydrolyzed during acid pretreatment, and higher cellulose and acid-insoluble lignin contents were recovered in the acid-pretreated solid residue. Furthermore, to investigate whether XA pretreatment can improve enzymatic hydrolysis, both raw and pretreated SB were subjected to enzymatic hydrolysis for 48 h at 5% w/v of the solid. Enzymatic hydrolysis was performed at 50 °C for 48 h and the enzymatic hydrolysis yield was expressed as the yield of glucose and cellobiose released into the enzymatic hydrolysate. 
Figure 5 showed that the XA pretreatment solid achieved a higher enzymatic hydrolysis yield. The enzymatic hydrolysis yield improved to 90.8% compared with the control without pretreatment (22.4%). It is noted that dilute acid pretreatments normally are capable of removing hemicellulosic fraction, but not lignin [33, 34]. Although the lignin cannot be effectively removed or degraded, another process occurring during acid pretreatment is lignin depolymerization and repolymerization through the formation of a common carbocation intermediate [35]. It has been proved that redistribution of lignin and degradation of hemicellulose jointly results in greatly altering the pore size distribution and the accessible surface area [36, 37]. As shown in Fig. 6, XA-pretreated SB featured a particularly higher quantity of newly exposed surfaces and generally rougher surface. All evidences indicated that XA pretreatment effectively improved the efficiency of enzymatic hydrolysis compared with the raw material. Yield of enzymatic hydrolysis after XA pretreatment Scanning electron micrographs of treated and untreated SB The mass balances were systematically analyzed during the co-production of glucose and XOS (Fig. 7). Using pretreated SB after XOS extraction as substrate and starting from 100 g of raw and dry SB, enzymatic hydrolysis obtained approximately 30.9 g of glucose and 10.5 g of XOS. The glucan and xylan recovery rates were 77.6% and 44.5%, respectively. These results clearly indicated pretreatment with XA as a promising option for the concurrent maximization of the economic value of SB by waste upgrading and the promotion of the current XOS and fermentable sugars co-production. Mass balance for co-production of XOS and fermentable sugars This study used an agricultural waste (SB) as biorefinery feedstock for the production of high-value XOS and fermentable sugars. XA pretreatment was explored and exhibited great potential for the depolymerization of xylan into XOS. Mass balance analysis showed that the maximum yields of XOS and fermentable sugars reached 10.5 g and 30.9 g per 100 g of raw SB. In addition, xylose as byproduct from xylan hydrolysis can be converted into XA, which is able to be separated and recycled for running new acid pretreatment process. In summary, by concurrently producing XOS and glucose with XA pretreatment, SB was valorized as a promising feedstock for lignocellulosic biorefinery for the generation of value-added products. Raw material and chemical composition analysis SB was collected in Hainan Province, China, in the summer of 2019. In the laboratory, the material was broken into particles through 40 mesh. These particles were then oven-dried at 105 °C for 24 h until a constant weight was reached. The chemical composition (wt%) of SB was determined as follows: glucan 39.8%, xylan 23.6%, and lignin 22.8%. Experimental design for xylonic acid pre-hydrolysis The hydrolysis parameters were optimized using RSM. A central composite design with three factors and three replicates at the center point was employed. Temperature (a1), hydrolysis time (a2), and XA concentration (a3) were chosen as independent variables. The ranges of these three independent variables and the center point values are listed in Table 1. In total, 13 experimental runs were conducted in three replicates with five center points, according to the Box–Behnken Design matrix. XOS yields were determined as the response variable (Y). 
The relationship between the independent variables and the response variable was calculated by a quadratic polynomial equation: $$Y = b_{0} + \sum b_{i} a_{i} + \sum b_{ii} a_{i}^{2} + \sum b_{ij} a_{i} a_{j}$$ Here, Y represents the predicted XOS yield (%), $b_{0}$ represents a constant term, $a_{i}$ and $a_{j}$ represent the independent variables, and $b_{i}$, $b_{ii}$, and $b_{ij}$ represent the coefficients of the linear, quadratic, and interaction terms, respectively. The statistical software Design-Expert (Version 11.0) was used for a regression analysis of the experimental data and for response surface plots. Analysis of variance was used to estimate statistical parameters. Xylonic acid pretreatment of sugarcane bagasse SB (3.0 ± 0.1 g) was mixed with 30 mL of XA solution of different concentrations (prepared via xylose bio-oxidation) in a 50-mL 316 stainless steel tube reactor (inner height × diameter of 80.0 × 28.0 mm, 5.0-mm wall), which was also capped with a 316 stainless steel cap. After loading, the sealed stainless steel tube reactor was immersed in a preheated oil bath (dimethyl silicone oil) at the desired temperature, where it remained for varying durations depending on the experimental design. The temperature of the oil bath was controlled by a Parr PID controller. When the reaction finished, the stainless steel tube reactor was immediately removed from the heater, cooled with cold water, and then opened. The resultant solid and liquid fractions were then separated and harvested by filtration. The separated liquid fraction was used to determine the XOS yields, while the solid fraction was subjected to component analysis and enzymatic hydrolysis. Xylose bioconversion Gluconobacter oxydans (ATCC 621H) was used to convert xylose into XA and was maintained on plates (sorbitol 50 g/L, yeast extract 5 g/L, and agar 20 g/L) at 4 °C [29, 38]. The seed medium was prepared in a 500 mL Erlenmeyer flask containing 100 mL of medium (sorbitol 100 g/L and yeast extract 10 g/L), where it was cultured for 24 h at 220 rpm and 30 °C. Cell pellets of G. oxydans were harvested by centrifugation at 6000 rpm for 5 min. The pH of the hydrolysate was adjusted to 6.0 with NaOH, and the hydrolysate was then filtered through a 0.45-μm Millipore filter prior to inoculation. The fermentation assay was performed at 220 rpm and 30 °C in a 1000 mL Erlenmeyer flask containing 200 mL of hydrolysate and 2 g/L of G. oxydans cells (calculated as dry cell weight). Electrodialysis and separation of the fermented broth A bipolar membrane electrodialyzer (GJBMED-90 × 210-5) from Xinke Co. (Liaoning, China) was used in this study; it consisted of a control unit (adjustable outputs of voltage 60 V and current 10 A). The parameters of the electrodialysis were as follows: single membrane effective area, 90 × 210 mm2; number of membrane stack repeat units, 15 pairs; electrode material, titanium-coated electrode plate; membrane material, polyphenylene oxide [29]. In addition, the electrodialysis equipment was composed of four chambers (acid, alkali, salt, and electrode chambers), and the working solution was transported online across the bipolar membrane to the acid chamber [39]. The electrodialysis stack was connected to a peristaltic pump. When starting the electrodialysis, sodium xylonate (XA·Na) in the salt chamber was directly pumped into the bipolar membrane electrodialysis stack. As a result, XA·Na was converted to XA by the protons produced from the water dissociation, which were transported into the compartment, while the Na+ was removed from the compartment.
After ion exchange, XA accumulated in the acidic chamber, while the formed NaOH accumulated in the alkaline chamber. Finally, the aqueous solution (containing XOS) could be retained in salt chamber (The schematic of bipolar membrane electrodialysis was showed in Additional file 1) [29]. In addition, in the electrode chambers, 0.3 mol/L sodium sulfate solution was added to improve the conductivity and to reduce the resistance of the membrane stack. Enzymatic hydrolysis of pretreated sugarcane bagasse Prior to enzymatic hydrolysis, solid residues from the XA pretreatment were washed with deionized water and were dried at room temperature until constant weight. Enzymatic hydrolysis was conducted in 150 mL screw-capped bottles at 50 °C, pH 4.8 (0.1 mol/L sodium acetate buffer), and 150 rpm with 5% solid loading and constant cellulases concentration (Cellic CTec2, Novozymes, NA, Franklinton, USA) of 15 FPU/g glucan. After enzymatic hydrolysis, the rendered enzymatic hydrolysate was collected by centrifugation. The chemical composition of SB (cellulose, hemicelluloses, and lignin) was determined using a standard protocol provided by the National Renewable Energy Laboratory [40]. Carbohydrate contents (glucan and xylan) and lignin (acid-insoluble lignin and acid-soluble lignin) were analyzed for the raw material and the pretreated samples following two-step sulfuric acid pretreatment. Briefly, milled samples (20–80 meshes) were first hydrolyzed by 72% (w/w) sulfuric acid at 30 °C water bath for 1 h with frequent mixing. Then, the slurry was diluted to a concentration of 4% (w/w) sulfuric acid by adding deionized water and hydrolyzed at 121 °C for 1 h. The autoclaved slurry was cooled and filtered by filtering crucibles. The separated liquid fraction was used for analysis carbohydrates and acid-soluble lignin. The carbohydrates were analyzed by high performance liquid chromatography (HPLC) (Agilent 1260, USA) equipped with an Aminex Bio-Rad HPX-87H column (Bio-Rad Laboratories, USA). HPLC was performed at 50 °C, with 0.005 M sulfuric acid as eluent at a flow rate of 0.6 mL/min. The acid-soluble lignin was determined at 240 nm by UV spectrophotometer (UV-1800, Shimadzu, Japan). The separated solid fraction was dried at 105 °C oven until constant weight. After record the weight, the dry solid was heated at 575 °C muffle oven for 4 h to calculate the insoluble lignin. Microscopic images of the raw and pretreated SB were captured by a scanning electron microscope (FEI Quanta 400, Hitachi, Japan), which was operated at a voltage of 15 kV. Prior to observation, all samples were sputter-coated with gold. SEM photomicrographs were taken at a magnification of 500×. XA, xylose, xylobiose (X2), xylotriose (X3), xylotetraose (X4), xylopentaose (X5), and xylohexaose (X6) were analyzed by high-performance anion exchange chromatography (HPAEC) (Dionex ICS-5000, ThermoFisher, USA) coupled with a CarboPac™ PA200 column (ThermoFisher, USA) [41]. Glucose, cellobiose, and furfural were measured according to the HPLC method described above. 
The XOS yield, enzymatic hydrolysis yield [42], and XA yield were calculated as follows:

$$\text{XOS yield}\ (\%) = \frac{(\text{X2}+\text{X3}+\text{X4}+\text{X5}+\text{X6})\ \text{in hydrolysate (g)}}{\text{initial xylan content in the raw material (g)}} \times 100\%$$

$$\text{Enzymatic hydrolysis yield}\ (\%) = \frac{(\text{glucose} + 1.053 \times \text{cellobiose})\ \text{in enzymatic hydrolysate (g)}}{1.111 \times \text{glucan content after XA hydrolysis (g)}} \times 100\%$$

$$\text{XA yield}\ (\%) = \frac{\text{XA concentration (g/L)} \times 0.918}{\text{xylose in hydrolysate (g/L)}} \times 100\%$$

Abbreviations: XOS: xylooligosaccharides; XA: xylonic acid; SB: sugarcane bagasse; XA·Na: sodium xylonate; DP: degrees of polymerization; X2: xylobiose; X3: xylotriose; X4: xylotetraose; X5: xylopentaose; X6: xylohexaose; X7: xyloheptaose; X8: xylooctaose; RSM: response surface methodology; HPLC: high-performance liquid chromatography; HPAEC: high-performance anion exchange chromatography.

References: Lancefield CS, Panovic I, Deuss PJ, Barta K, Westwood NJ. Pre-treatment of lignocellulosic feedstocks using biorenewable alcohols: towards complete biomass valorisation. Green Chem. 2017;19:202–14. Khazraie T, Zhang Y, Tarasov D, Gao W, Price J, Demartini N, Hupa L, Fatehi P. A process for producing lignin and volatile compounds from hydrolysis liquor. Biotechnol Biofuels. 2017;10:47. Biswas R, Uellendahl H, Ahring BK. Wet explosion pretreatment of sugarcane bagasse for enhanced enzymatic hydrolysis. Biomass Bioenerg. 2014;61:104–13. Vargas Betancur GJ, Pereira N Jr. Sugarcane bagasse as feedstock for second generation ethanol production: part I: diluted acid pretreatment optimization. Electron J Biotechnol. 2011;13:1–9. Tang S, Liu R, Sun FF, Dong C, Wang R, Gao Z, Zhang Z, Xiao Z, Li C, Li H. Bioprocessing of tea oil fruit hull with acetic acid organosolv pretreatment in combination with alkaline H2O2. Biotechnol Biofuels. 2017;10:86. Jiang LQ, Fang Z, Li XK, Luo J, Fan SP. Combination of dilute acid and ionic liquid pretreatments of sugarcane bagasse for glucose by enzymatic hydrolysis. Process Biochem. 2013;48:1942–6. Pandey A, Soccol CR, Nigam P, Soccol VT. Biotechnological potential of agro-industrial residues. I: sugarcane bagasse. Bioresour Technol. 2000;74:69–80. Alvira P, Tomás-Pejó E, Ballesteros M, Negro MJ. Pretreatment technologies for an efficient bioethanol production process based on enzymatic hydrolysis: a review. Bioresour Technol. 2010;101:4851–61. Yang B, Wyman CE. Pretreatment: the key to unlocking low-cost cellulosic ethanol. Biofuel Bioprod Bioresour. 2010;2:26–40. Zhao X, Morikawa Y, Feng Q, Jing Z, Liu D. A novel kinetic model for polysaccharide dissolution during atmospheric acetic acid pretreatment of sugarcane bagasse. Bioresour Technol. 2014;151:128–36. Zhang H, Xu Y, Yu S. Co-production of functional xylooligosaccharides and fermentable sugars from corncob with effective acetic acid prehydrolysis. Bioresour Technol. 2017;234:343–9. Otieno DO, Ahring BK. The potential for oligosaccharide production from the hemicellulose fraction of biomasses through pretreatment processes: xylo-oligosaccharides (XOS), arabino-oligosaccharides (AOS), and manno-oligosaccharides (MOS).
Carbohydr Res. 2012;360:84–92. Vázquez MJ, Alonso JL, Domı́Nguez H, Parajó JC. Xylooligosaccharides: manufacture and applications. Trends Food Sci Tech. 2000;11:387–93. Katrien S, Courtin CM, Delcour JA. Non-digestible oligosaccharides with prebiotic properties. Crit Rev Food Sci. 2006;46:459–71. Singh RD, Banerjee J, Sasmal S, Muir J, Arora A. High xylan recovery using two stage alkali pre-treatment process from high lignin biomass and its valorisation to xylooligosaccharides of low degree of polymerisation. Bioresour Technol. 2018;256:110–7. Huang C, Wang X, Liang C, Jiang X, Yang G, Xu J, Yong Q. A sustainable process for procuring biologically active fractions of high-purity xylooligosaccharides and water-soluble lignin from Moso bamboo prehydrolyzate. Biotechnol Biofuels. 2019;12:189. Moure A, Dominguez GH, Parajo JC. Advances in the manufacture, purification and applications of xylo-oligosaccharides as food additives and nutraceuticals. Process Biochem. 2006;41:1913–23. Jain I, Kumar V, Satyanarayana T. Xylooligosaccharides: an economical prebiotic from agroresidues and their health benefits. Indian J Exp Biol. 2015;53:131–42. Lai C, Jia Y, Wang J, Wang R, Zhang Q, Chen L, Shi H, Huang C, Li X, Yong Q. Co-production of xylooligosaccharides and fermentable sugars from poplar through acetic acid pretreatment followed by poly (ethylene glycol) ether assisted alkali treatment. Bioresour Technol. 2019;288:121569. Kootstra AMJ, Beeftink HH, Scott EL. Comparison of dilute mineral and organic acid pretreatment for enzymatic hydrolysis of wheat straw. Biochem Eng J. 2009;46:126–31. Akpinar O, Erdogan K, Bostanci S. Production of xylooligosaccharides by controlled acid hydrolysis of lignocellulosic materials. Carbohydr Res. 2009;344:660–6. Bian J, Peng P, Peng F, Xiao X, Xu F, Sun RC. Microwave-assisted acid hydrolysis to produce xylooligosaccharides from sugarcane bagasse hemicelluloses. Food Chem. 2014;156:7–13. Lin Q, Li H, Ren J, Deng A, Li W, Liu C, Sun R. Production of xylooligosaccharides by microwave-induced, organic acid-catalyzed hydrolysis of different xylan-type hemicelluloses: optimization by response surface methodology. Carbohydr Polym. 2017;157:214–25. Zhou X, Zhao J, Zhang X, Xu Y. An eco-friendly biorefinery strategy for xylooligosaccharides production from sugarcane bagasse using cellulosic derived gluconic acid as efficient catalyst. Bioresour Technol. 2019;289:121755. Qin L, Liu ZH, Li BZ, Dale BE, Yuan YJ. Mass balance and transformation of corn stover by pretreatment with different dilute organic acids. Bioresour Technol. 2012;112:319–26. Deng A, Ren J, Wang W, Li H, Lin Q, Yan Y, Sun R, Liu G. Production of xylo-sugars from corncob by oxalic acid-assisted ball milling and microwave-induced hydrothermal treatments. Ind Crop Prod. 2016;79:137–45. Amnuaycheewa P, Hengaroonprasan R, Rattanaporn K, Kirdponpattara S, Cheenkachorn K, Sriariyanun M. Enhancing enzymatic hydrolysis and biogas production from rice straw by pretreatment with organic acids. Ind Crop Prod. 2016;87:247–54. Zhou X, Zhou X, Xu Y. Improvement of fermentation performance of Gluconobacter oxydans by combination of enhanced oxygen mass transfer in compressed-oxygen-supplied sealed system and cell-recycle technique. Bioresour Technol. 2017;244:1137–41. Cao R, Xu Y. Efficient preparation of xylonic acid from xylonate fermentation broth by bipolar membrane electrodialysis. Appl Biochem Biotech. 2019;187:396–406. Strobel BW. 
Influence of vegetation on low-molecular-weight carboxylic acids in soil solution: a review. Geoderma. 2001;99:169–98. Nath A, Chattopadhyay PK. Optimization of oven toasting for improving crispness and other quality attributes of ready to eat potato-soy snack using response surface methodology. J Food Eng. 2007;80:1282–92. Zhou X, Xin Z, Liu G, Yong X, Balan V. Integrated production of gluconic acid and xylonic acid using dilute acid pretreated corn stover by two-stage fermentation. Biochem Eng J. 2018;137:18–22. Foston M, Ragauskas AJ. Changes in the structure of the cellulose fiber wall during dilute acid pretreatment in populus studied by 1H and 2H NMR. Energy Fuel. 2010;24:5677–85. Samuel R, Foston M, Jiang N, Allison L, Ragauskas AJ. Structural changes in switchgrass lignin and hemicelluloses during pretreatments by NMR analysis. Polym Degrad Stabil. 2011;96:2002–9. Sannigrahi P, Ragauskas AJ, Miller SJ. Effects of two-stage dilute acid pretreatment on the structure and composition of lignin and cellulose in loblolly pine. Bioenergy Res. 2008;1:205–14. Rollin JA, Zhu Z, Sathitsuksanoh N, Zhang YHP. Increasing cellulose accessibility is more important than removing lignin: a comparison of cellulose solvent-based lignocellulose fractionation and soaking in aqueous ammonia. Biotechnol Bioeng. 2011;108:22–30. Vinícius L, Gurgel A, Marabezi K, Arcia M, Zanbom D, Aprigio A, Curvelo S. Dilute acid hydrolysis of sugar cane bagasse at high temperatures: a kinetic study of cellulose saccharification and glucose decomposition. Part I: sulfuric acid as the catalyst. Ind Eng Chem Res. 2012;51:1173–85. Zhou X, Xu Y. Integrative process for sugarcane bagasse biorefinery to co-produce xylooligosaccharides and gluconic acid. Bioresour Technol. 2019;282:81–7. Prochaska K, Staszak K, Woźniak-Budych MJ, Regel-Rosocka M, Adamczak M, Wiśniewski M, Staniewski J. Nanofiltration, bipolar electrodialysis and reactive extraction hybrid system for separation of fumaric acid from fermentation broth. Bioresour Technol. 2014;167:219–25. Sluiter AD, Hames B, Ruiz R, Scarlata C, Sluiter J, Templeton DW, Crocker D. Determination of structural carbohydrates and lignin in biomass. Lab Anal Proced. 2012;1617:1–16. Yong X, Li F, Xing W, Qiang Y, Yu SY. Simultaneous separation and quantification of linear xylo- and cello-oligosaccharides mixtures in lignocellulosics processing products on high-performance anion-exchange chromatography coupled with pulsed amperometric detection. BioResources. 2013;8:3247–59. Bhagia S, Dhir R, Kumar R, Wyman CE. Deactivation of cellulase at the air-liquid interface is the main cause of incomplete cellulose conversion at low enzyme loadings. Sci Rep. 2018;8:1350. The authors acknowledge the financial support from National Natural Science Foundation of China and National Key R&D Program of China. The research would also like to acknowledge the Scientific Research Start-up Funds of Nanjing Forestry University. This study was funded by the National Key R&D Program of China (2017YFD0601001), the National Natural Science Foundation of China (31901270), and the Scientific Research Start-up Funds of Nanjing Forestry University, China (163030127). 
Key Laboratory of Forestry Genetics & Biotechnology (Nanjing Forestry University), Ministry of Education, Nanjing, 210037, People's Republic of China Xin Zhou & Yong Xu Jiangsu Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, Nanjing Forestry University, Nanjing, 210037, People's Republic of China College of Chemical Engineering, Nanjing Forestry University, No. 159 Longpan Road, Nanjing, 210037, People's Republic of China XZ developed the idea for the study, performed the research and data analysis, and prepared the manuscript. YX helped to analyze data and revise the manuscript. Both authors read and approved the final manuscript. Correspondence to Yong Xu. Additional file 1. Actual vs. predicted xylooligosaccharide (XOS) yields from XA hydrolysis of SB. Schematic of bipolar membrane electrodialysis. Zhou, X., Xu, Y. Eco-friendly consolidated process for co-production of xylooligosaccharides and fermentable sugars using self-providing xylonic acid as key pretreatment catalyst. Biotechnol Biofuels 12, 272 (2019) doi:10.1186/s13068-019-1614-5 Xylose Enzymatic hydrolysis
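The Methods above define a quadratic response-surface model and three yield equations. The following short Python sketch restates them for illustration; it is not part of the original article, the function names and all numerical values are placeholder assumptions of mine, and only the formulas themselves are taken from the text.

```python
# Hedged sketch of the yield equations and the quadratic RSM model from the Methods.
# Function names and the example numbers are illustrative assumptions, not data from the paper.
import numpy as np

def xos_yield(xos_in_hydrolysate_g, initial_xylan_g):
    """XOS yield (%) = (X2+...+X6 in hydrolysate, g) / (initial xylan in raw material, g) * 100."""
    return xos_in_hydrolysate_g / initial_xylan_g * 100.0

def enzymatic_hydrolysis_yield(glucose_g, cellobiose_g, glucan_after_pretreatment_g):
    """Enzymatic hydrolysis yield (%) = (glucose + 1.053*cellobiose) / (1.111*glucan) * 100."""
    return (glucose_g + 1.053 * cellobiose_g) / (1.111 * glucan_after_pretreatment_g) * 100.0

def xa_yield(xa_g_per_l, xylose_g_per_l):
    """XA yield (%) = (XA concentration, g/L) * 0.918 / (xylose in hydrolysate, g/L) * 100."""
    return xa_g_per_l * 0.918 / xylose_g_per_l * 100.0

def design_matrix(X):
    """Columns: 1, x_i, x_i^2, x_i*x_j (i<j) -- the terms of Y = a0 + sum ai*xi + sum aii*xi^2 + sum aij*xi*xj."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

def fit_quadratic_rsm(X, Y):
    """Ordinary least-squares estimate of a0, ai, aii, aij from coded factor levels X and responses Y."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(X), Y, rcond=None)
    return coeffs

if __name__ == "__main__":
    # Placeholder inputs, purely for illustration.
    print(round(xos_yield(0.35, 0.70), 1))                        # 50.0
    print(round(enzymatic_hydrolysis_yield(1.00, 0.05, 1.20), 1))
    print(round(xa_yield(45.0, 50.0), 1))                         # 82.6

    # Recover known coefficients from synthetic data (3 hypothetical coded factors, 20 runs).
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(20, 3))
    true = rng.normal(size=design_matrix(X).shape[1])
    Y = design_matrix(X) @ true
    print(np.allclose(fit_quadratic_rsm(X, Y), true))             # True
```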
CommonCrawl
January 2013, 18(1): 57-94. doi: 10.3934/dcdsb.2013.18.57

Weak KAM theory for nonregular commuting Hamiltonians

Andrea Davini 1, and Maxime Zavidovique 2,
Dip. di Matematica, Università di Roma "La Sapienza", P.le Aldo Moro 2, 00185 Roma, Italy
IMJ, Université Pierre et Marie Curie, Case 247, 4 place Jussieu, F-75252 Paris Cédex 05, France

Received May 2011 Revised June 2012 Published September 2012

In this paper we consider the notion of commutation for a pair of continuous and convex Hamiltonians, given in terms of commutation of their Lax–Oleinik semigroups. This is equivalent to the solvability of an associated multi-time Hamilton–Jacobi equation. We examine the weak KAM theoretic aspects of the commutation property and show that the two Hamiltonians have the same weak KAM solutions and the same Aubry set, thus generalizing a result recently obtained by the second author for Tonelli Hamiltonians. We make a further step by proving that the Hamiltonians admit a common critical subsolution, strict outside their Aubry set. This subsolution can be taken of class $C^{1,1}$ in the Tonelli case. To prove our main results in full generality, it is crucial to establish suitable differentiability properties of the critical subsolutions on the Aubry set. These latter results are new in the purely continuous case and of independent interest.

Keywords: Commuting Hamiltonians, viscosity solutions, Aubry-Mather theory, weak KAM theory, multi-time Hamilton-Jacobi equation.

Mathematics Subject Classification: Primary: 35F21; Secondary: 49L25, 37J5.

Citation: Andrea Davini, Maxime Zavidovique. Weak KAM theory for nonregular commuting Hamiltonians. Discrete & Continuous Dynamical Systems - B, 2013, 18 (1) : 57-94. doi: 10.3934/dcdsb.2013.18.57
CommonCrawl
Ariel's Red Tachyons

Why there's no multiagent policy gradient theorem

Ariel Kwiatkowski Last updated on Oct 18, 2021 3 min read

Ever since I started working on multiagent reinforcement learning, I felt somewhat underwhelmed by the algorithms that exist, especially in the policy gradient world. Sure, there's some approaches with really cool tricks, but ultimately it all seems to be "Take a single-agent algorithm and use it to optimize the policy/policies, adding some extra tweaks to make it behave better."

But multiagent learning has such a large potential for nontrivial behaviors that surely there must be some property that allows for a fundamentally different algorithm to be applied. We're using a hammer to put in a screw - but I want to make a screwdriver.

I've been scouring the literature, had a few false positives, but ultimately wasn't able to find anything that would fit the bill. I even had a semi-specific idea in my mind of what that algorithm might look like from the mathematical side - Spinning Up has a really nice derivation of the Policy Gradient theorem for single-agent RL. So what if we follow the same lead, but adjust the assumption about the environment only being a function of a single agent?

Yesterday I finally sat down and did the math. And realized why "nobody came up with it" - it's just trivial.

Let's take a look at the original derivation they presented (it appears as an image in the original post). In the entire proof, except for the very last line, it stays on the level of trajectories with $P(\tau|\theta)$ or $R(\tau)$, which is good - it's independent of however many agents we have, so we can just use all of that. But this doesn't lead us to a specific implementation - that happens in the last line. In fact, all we need to make it multiagent is change the expanded expression for $P(\tau | \theta)$ so that it uses the joint policy $\bar{\pi}_\theta(\bar{a}_t|\bar{s}_t)$ (working with homogeneous agents).

What is $\bar{\pi}_\theta(\bar{a}_t|\bar{s}_t)$? Well, it's the probability that each agent takes a certain action, which can be expressed as a product of the individual probabilities: $\bar{\pi}_\theta(\bar{a}_t|\bar{s}_t) = \prod\limits_{i=0}^{N} \pi_\theta(a_t^i | s_t^i)$

Naturally, working with logprobs this turns into: $\log \bar{\pi}_\theta(\bar{a}_t|\bar{s}_t) = \sum\limits_{i=0}^{N} \log\pi_\theta(a_t^i | s_t^i)$

Producing a final equation for the gradient estimate: $\nabla_\theta J(\pi_\theta) = \mathop{\text{E}}\limits_{\tau\sim\pi_\theta} \left[ \sum\limits_{t=0}^T \sum\limits_{i=0}^N \nabla_\theta \log \pi_\theta(a_t^i | s_t^i) R(\tau) \right] $

Yep, it's just a linear combination of all agents' experiences. If you take the simple approach to optimizing a multiagent homogeneous policy by taking all agents' experiences and concatenating them, at some point you're going to average all the gradients which, surprise surprise, does exactly the same thing (well, there might be a constant scaling factor, but that's hardly interesting). Because as it turns out, summation is commutative. I know, tripped me up the first few times too.

What does that teach us? For me it certainly is a lesson that once I get a "great" idea for which I just need to figure out the mathematical details, it's better to just jump to the math as soon as possible - at least it will spare me the disappointment.

And now, back to thinking "How the hell do I publish a paper in this economy?"
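To see the "summation is commutative" point numerically, here is a small sketch of my own (not from the original post). It assumes a shared tabular softmax policy and a fake rollout with made-up shapes and a single scalar return; everything named below is an illustrative assumption.

```python
# Hedged illustration: the "multiagent" gradient estimate
#   sum_t sum_i grad log pi(a_t^i | s_t^i) * R(tau)
# equals the gradient obtained by concatenating all agents' (state, action) pairs.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_agents, horizon = 4, 3, 5, 10
theta = rng.normal(size=(n_states, n_actions))  # shared tabular softmax policy

def grad_log_pi(s, a):
    """Gradient w.r.t. theta of log softmax(theta[s])[a] for a tabular policy."""
    probs = np.exp(theta[s] - theta[s].max())
    probs /= probs.sum()
    g = np.zeros_like(theta)
    g[s] = -probs
    g[s, a] += 1.0
    return g

# Fake rollout: each agent has its own states/actions; one shared return R(tau).
states = rng.integers(0, n_states, size=(n_agents, horizon))
actions = rng.integers(0, n_actions, size=(n_agents, horizon))
R = 1.7

# (1) "Multiagent" estimator: sum over agents and time of grad log pi * R.
g_multi = sum(grad_log_pi(states[i, t], actions[i, t]) * R
              for i in range(n_agents) for t in range(horizon))

# (2) Concatenate all agents' (s, a) pairs and treat them as one big batch.
g_concat = sum(grad_log_pi(s, a) * R
               for s, a in zip(states.ravel(), actions.ravel()))

print(np.allclose(g_multi, g_concat))  # True: same terms, just reordered.
```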
CommonCrawl
DOE PAGES Journal Article: The Impact of White Dwarf Luminosity Profiles on Oscillation Frequencies

KIC 08626021 is a pulsating DB white dwarf (WD) of considerable recent interest, and the first of its class to be extensively monitored by Kepler for its pulsation properties. Fitting the observed oscillation frequencies of KIC 08626021 to a model can yield insights into its otherwise-hidden internal structure. Template-based WD models choose a luminosity profile where the luminosity is proportional to the enclosed mass, $${L}_{r}\,\propto \,{M}_{r}$$, independent of the effective temperature Teff. Evolutionary models of young WDs with Teff ≳ 25,000 K suggest that neutrino emission gives rise to luminosity profiles with $${L}_{r}\not\propto {M}_{r}$$. We explore this contrast by comparing the oscillation frequencies between two nearly identical WD models: one with an enforced $${L}_{r}\propto {M}_{r}$$ luminosity profile, and the other with a luminosity profile determined by the star's previous evolution history. We find that the low-order g-mode frequencies differ by up to ≃70 μHz over the range of Kepler observations for KIC 08626021. This suggests that by neglecting the proper thermal structure of the star (e.g., accounting for the effect of plasmon neutrino losses), the model frequencies calculated by using an $${L}_{r}\propto {M}_{r}$$ profile may have uncorrected, effectively random errors at the level of tens of μHz. A mean frequency difference of 30 μHz, based on linearly extrapolating published results, suggests a template model uncertainty in the fit precision of ≃12% in WD mass, ≃9% in the radius, and ≃3% in the central oxygen mass fraction.

Timmes, Frances X. [1]; Townsend, Richard H. D. [2]; Bauer, Evan B. [3]; Thoul, Anne [4]; Fields Jr., Carl Edward [5]; Wolf, William M. [6]

Arizona State Univ., Tempe, AZ (United States); Joint Inst. for Nuclear Astrophysics (JINA), East Lansing, MI (United States). Center for the Evolution of the Elements (JINA-CEE) Univ. of Wisconsin, Madison, WI (United States) Univ. of California, Santa Barbara, CA (United States) Univ. of Liege (Belgium) Joint Inst. for Nuclear Astrophysics (JINA), East Lansing, MI (United States). Center for the Evolution of the Elements (JINA-CEE); Michigan State Univ., East Lansing, MI (United States); Los Alamos National Lab. (LANL), Los Alamos, NM (United States) Arizona State Univ., Tempe, AZ (United States) Los Alamos National Lab. (LANL), Los Alamos, NM (United States) USDOE National Nuclear Security Administration (NNSA); National Science Foundation (NSF)

Report Number(s): LA-UR-18-29196 Journal ID: ISSN 2041-8213; TRN: US2000582 89233218CNA000001 The Astrophysical Journal. Letters Journal Volume: 867; Journal Issue: 2; Journal ID: ISSN 2041-8213 79 ASTRONOMY AND ASTROPHYSICS; stars: evolution; stars: individual (KIC 08626021); stars: interiors; stars: oscillations; white dwarfs

Timmes, Frances X., Townsend, Richard H. D., Bauer, Evan B., Thoul, Anne, Fields Jr., Carl Edward, and Wolf, William M. The Impact of White Dwarf Luminosity Profiles on Oscillation Frequencies. United States: N. p., 2018. Web. doi:10.3847/2041-8213/aae70f.
CommonCrawl
1.8 Stirling numbers

$\def\sone#1#2{\left[#1\atop #2\right]} \def\stwo#1#2{\left\{#1\atop #2\right\}}$

In exercise 4 in section 1.4, we saw the Stirling numbers of the second kind. Not surprisingly, there are Stirling numbers of the first kind. Recall that Stirling numbers of the second kind are defined as follows:

Definition 1.8.1 The Stirling number of the second kind, $S(n,k)$ or $\stwo{n}{k}$, is the number of partitions of $[n]=\{1,2,\ldots,n\}$ into exactly $k$ parts, $1\le k\le n$.

Before we define the Stirling numbers of the first kind, we need to revisit permutations. As we mentioned in section 1.7, we may think of a permutation of $[n]$ either as a reordering of $[n]$ or as a bijection $\sigma\colon [n]\to[n]$. There are different ways to write permutations when thought of as functions. Two typical and useful ways are as a table, and in cycle form. Consider this permutation $\sigma\colon [5]\to[5]$: $\sigma(1)=3$, $\sigma(2)=4$, $\sigma(3)=5$, $\sigma(4)=2$, $\sigma(5)=1$. In table form, we write this as $\left(1\; 2\; 3\; 4\; 5\atop 3\; 4\; 5\; 2\; 1\right)$, which is somewhat more compact, as we don't write "$\sigma$'' five times. In cycle form, we write this same permutation as $(1,3,5)(2,4)$. Here $(1,3,5)$ indicates that $\sigma(1)=3$, $\sigma(3)=5$, and $\sigma(5)=1$, while $(2,4)$ indicates $\sigma(2)=4$ and $\sigma(4)=2$. This permutation has two cycles, a 3-cycle and a 2-cycle. Note that $(1,3,5)$, $(3,5,1)$, and $(5,1,3)$ all mean the same thing. We allow 1-cycles to count as cycles, though sometimes we don't write them explicitly. In some cases, however, it is valuable to write them to force us to remember that they are there. Consider this permutation: $\left(1\; 2\; 3\; 4\; 5\; 6\atop 3\; 4\; 5\; 2\; 1\; 6\right)$. If we write this in cycle form as $(1,3,5)(2,4)$, which is correct, there is no indication that the underlying set is really $[6]$. Writing $(1,3,5)(2,4)(6)$ makes this clear. We say that this permutation has 3 cycles, even though one of them is a trivial 1-cycle.

Now we're ready for the next definition.

Definition 1.8.2 The Stirling number of the first kind, $s(n,k)$, is $(-1)^{n-k}$ times the number of permutations of $[n]$ with exactly $k$ cycles. The corresponding unsigned Stirling number of the first kind, the number of permutations of $[n]$ with exactly $k$ cycles, is $|s(n,k)|$, sometimes written $\sone{n}{k}$. Using this notation, $s(n,k)=(-1)^{n-k}\sone{n}{k}$.
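As a small illustration of the table-form-to-cycle-form conversion just described, here is a short Python sketch; it is not part of the text, and the function name is mine. It recovers the cycles of a permutation, and hence the cycle count used in Definition 1.8.2.

```python
# Sketch: convert a permutation given in table form (sigma[i-1] is the image of i)
# into cycle form and count its cycles.
def cycles(sigma):
    """sigma: tuple giving sigma(1), ..., sigma(n). Returns the list of cycles."""
    n = len(sigma)
    seen, result = set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = sigma[x - 1]
        result.append(tuple(cycle))
    return result

# The example from the text: the permutation with table form (1 2 3 4 5 / 3 4 5 2 1).
print(cycles((3, 4, 5, 2, 1)))      # [(1, 3, 5), (2, 4)]          -> 2 cycles
# Writing the trivial 1-cycle (6) explicitly when the underlying set is [6]:
print(cycles((3, 4, 5, 2, 1, 6)))   # [(1, 3, 5), (2, 4), (6,)]    -> 3 cycles
```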
Note that the use of $\sone{n}{k}$ conflicts with the use of the same notation in section 1.7; there should be no confusion, as we won't be discussing the two ideas together.

Some values of $\sone{n}{k}$ are easy to see; if $n\ge 1$, then $$\sone{n}{n}=1 \qquad\quad \sone{n}{k}=0, \;\mbox{ if $k>n$}\qquad\quad \sone{n}{1}=(n-1)! \qquad\quad \sone{n}{0}=0$$ It is sometimes convenient to say that $\sone{0}{0}=1$. These numbers thus form a triangle in the obvious way, just as the Stirling numbers of the second kind do. Here are lines 1–5 of the triangle: $$\matrix{ 1\cr 0&1\cr 0&1&1\cr 0&2&3&1\cr 0&6&11&6&1\cr 0&24&50&35&10&1\cr }$$ The first column is not particularly interesting, so often it is eliminated.

In exercise 4 in section 1.4, we saw that $$\eqalignno{ \stwo{n}{k}&=\stwo{n-1}{k-1}+k\cdot\stwo{n-1}{k}. &(1.8.1)\cr }$$ The unsigned Stirling numbers of the first kind satisfy a similar recurrence.

Theorem 1.8.3 $\sone{n}{k}=\sone{n-1}{k-1} + (n-1)\cdot\sone{n-1}{k}$, $k\ge 1$, $n\ge1$.

The proof is by induction on $n$; the table above shows that it is true for the first few lines. We split the permutations of $[n]$ with $k$ cycles into two types: those in which $(n)$ is a 1-cycle, and the rest. If $(n)$ is a 1-cycle, then the remaining cycles form a permutation of $[n-1]$ with $k-1$ cycles, so there are $\sone{n-1}{k-1}$ of these. Otherwise, $n$ occurs in a cycle of length at least 2, and removing $n$ leaves a permutation of $[n-1]$ with $k$ cycles. Given a permutation $\sigma$ of $[n-1]$ with $k$ cycles, $n$ can be added to any cycle in any position to form a permutation of $[n]$ in which $(n)$ is not a 1-cycle. Suppose the lengths of the cycles in $\sigma$ are $l_1,l_2,\ldots,l_k$. In cycle number $i$, $n$ may be added after any of the $l_i$ elements in the cycle. Thus, the total number of places that $n$ can be added is $l_1+l_2+\cdots+l_k=n-1$, so there are $(n-1)\cdot\sone{n-1}{k}$ permutations of $[n]$ in which $(n)$ is not a 1-cycle. Now the total number of permutations of $[n]$ with $k$ cycles is $\sone{n-1}{k-1}+ (n-1)\cdot\sone{n-1}{k}$, as desired. $\square$

Corollary 1.8.4 $s(n,k) = s(n-1,k-1) - (n-1)s(n-1,k)$.

The Stirling numbers satisfy two remarkable identities. First a definition:

Definition 1.8.5 The Kronecker delta $\delta_{n,k}$ is 1 if $n=k$ and 0 otherwise.

Theorem 1.8.6 For $n\ge 0$ and $k\ge 0$, $$\eqalign{ \sum_{j=0}^n s(n,j)S(j,k) &= \sum_{j=0}^n (-1)^{n-j}\sone{n}{j}\stwo{j}{k} =\delta_{n,k}\cr \sum_{j=0}^n S(n,j)s(j,k) &= \sum_{j=0}^n (-1)^{j-k}\stwo{n}{j}\sone{j}{k} = \delta_{n,k}\cr }$$

We prove the first version, by induction on $n$. The first few values of $n$ are easily checked; assume $n>1$. Now note that $\sone{n}{0}=0$, so we may start the sum index $j$ at 1. When $k>n$, $\stwo{j}{k}=0$, for $1\le j\le n$, and so the sum is 0. When $k=n$, the only non-zero term occurs when $j=n$, and is $(-1)^0\sone{n}{n}\stwo{n}{n}=1$, so the sum is 1. Now suppose $k< n$. When $k=0$, $\stwo{j}{k}=0$ for $j>0$, so the sum is 0, and we assume now that $k>0$.
We begin by applying the recurrence relations: $$\eqalign{ \sum_{j=1}^n &(-1)^{n-j}\sone{n}{j}\stwo{j}{k}= \sum_{j=1}^n (-1)^{n-j}\left(\sone{n-1}{j-1}+(n-1)\sone{n-1}{j}\right) \stwo{j}{k}\cr &=\sum_{j=1}^n (-1)^{n-j}\sone{n-1}{j-1}\stwo{j}{k}+ \sum_{j=1}^n (-1)^{n-j}(n-1)\sone{n-1}{j}\stwo{j}{k}\cr &=\sum_{j=1}^n (-1)^{n-j}\sone{n-1}{j-1}\left(\stwo{j-1}{k-1}+ k\stwo{j-1}{k}\right) + \sum_{j=1}^n (-1)^{n-j}(n-1)\sone{n-1}{j}\stwo{j}{k}\cr &=\sum_{j=1}^n (-1)^{n-j}\sone{n-1}{j-1}\stwo{j-1}{k-1}+ \sum_{j=1}^n (-1)^{n-j}\sone{n-1}{j-1}k\stwo{j-1}{k}\cr &\qquad+ \sum_{j=1}^n (-1)^{n-j}(n-1)\sone{n-1}{j}\stwo{j}{k}.\cr }$$ Consider the first sum in the last expression: $$ \eqalign{ \sum_{j=1}^n (-1)^{n-j}\sone{n-1}{j-1}\stwo{j-1}{k-1} &=\sum_{j=2}^n (-1)^{n-j}\sone{n-1}{j-1}\stwo{j-1}{k-1}\cr &=\sum_{j=1}^{n-1} (-1)^{n-j-1}\sone{n-1}{j}\stwo{j}{k-1}\cr &=\delta_{n-1,k-1}=0, }$$ since $k-1< n-1$ (or trivially, if $k=1$). Thus, we are left with just two sums. $$\eqalign{ \sum_{j=1}^n &(-1)^{n-j}\sone{n-1}{j-1}k\stwo{j-1}{k} +\sum_{j=1}^n (-1)^{n-j}(n-1)\sone{n-1}{j}\stwo{j}{k}\cr &=k\sum_{j=1}^{n-1} (-1)^{n-j-1}\sone{n-1}{j}\stwo{j}{k} -(n-1)\sum_{j=1}^{n-1} (-1)^{n-j-1}\sone{n-1}{j}\stwo{j}{k}\cr &=k\delta_{n-1,k}-(n-1)\delta_{n-1,k}. }$$ Now if $k=n-1$, this is $(n-1)\delta_{n-1,n-1}-(n-1)\delta_{n-1,n-1}=0$, while if $k< n-1$ it is $k\delta_{n-1,k}-(n-1)\delta_{n-1,k}=k\cdot 0-(n-1)\cdot 0=0$. If we interpret the triangles containing the $s(n,k)$ and $S(n,k)$ as matrices, either $m\times m$, by taking the first $m$ rows and columns, or even the infinite matrices containing the entire triangles, the sums of the theorem correspond to computing the matrix product in both orders. The theorem then says that this product consists of ones on the diagonal and zeros elsewhere, so these matrices are inverses. Here is a small example: $$ \pmatrix{ 1& 0& 0& 0& 0& 0\cr 0& 1& 0& 0& 0& 0\cr 0& -1& 1& 0& 0& 0\cr 0& 2& -3& 1& 0& 0\cr 0& -6& 11& -6& 1& 0\cr 0& 24& -50& 35& -10& 1\cr } \pmatrix{ 1& 0& 0& 0& 0& 0\cr 0& 1& 0& 0& 0& 0\cr 0& 1& 1& 0& 0& 0\cr 0& 1& 3& 1& 0& 0\cr 0& 1& 7& 6& 1& 0\cr 0& 1& 15& 25& 10& 1\cr } = \pmatrix{ 1& 0& 0& 0& 0& 0\cr 0& 1& 0& 0& 0& 0\cr 0& 0& 1& 0& 0& 0\cr 0& 0& 0& 1& 0& 0\cr 0& 0& 0& 0& 1& 0\cr 0& 0& 0& 0& 0& 1\cr } $$ Exercises 1.8 Ex 1.8.1 Find a simple expression for $\sone{n}{n-1}$. Ex 1.8.2 Find a simple expression for $\sone{n}{1}$. Ex 1.8.3 What is $\sum_{k=0}^n \sone{n}{k}$? Ex 1.8.4 What is $\sum_{k=0}^n s(n,k)$? Ex 1.8.5 Show that $x^{\underline n}=\prod_{k=0}^{n-1}(x-k)=\sum_{i=0}^n s(n,i)x^i$, $n\ge 1$; $x^{\underline n}$ is called a falling factorial. Find a similar identity for $x^{\overline n}=\prod_{k=0}^{n-1}(x+k)$; $x^{\overline n}$ is a rising factorial. Ex 1.8.6 Show that $\ds \sum_{k=0}^n \stwo{n}{k} x^{\underline k} = x^n$, $n\ge 1$; $x^{\underline k}$ is defined in the previous exercise. The previous exercise shows how to express the falling factorial in terms of powers of $x$; this exercise shows how to express the powers of $x$ in terms of falling factorials. Ex 1.8.7 Prove: $\ds S(n,k)=\sum_{i=k-1}^{n-1} {n-1\choose i}S(i,k-1)$. Ex 1.8.8 Prove: $\ds \sone{n}{k}=\sum_{i=k-1}^{n-1} (n-i-1)! {n-1\choose i}\sone{i}{k-1}$. Ex 1.8.9 Use the previous exercise to prove $\ds s(n,k)=\sum_{i=k-1}^{n-1} (-1)^{n-i-1}(n-i-1)! {n-1\choose i}s(i,k-1)$. Ex 1.8.10 We have defined $\sone{n}{k}$ and $\stwo{n}{k}$ for $n,k\ge 0$. We want to extend the definitions to all integers. Without some extra stipulations, there are many ways to do this. 
Let us suppose that for $n\not=0$ we want $\sone{n}{0}=\sone{0}{n}=\stwo{n}{0}=\stwo{0}{n}=0$, and we want the recurrence relations of equation 1.8.1 and in theorem 1.8.3 to be true. Show that under these conditions there is a unique way to extend the definitions to all integers, and that when this is done, $\stwo{n}{k}=\sone{-k}{-n}$ for all integers $n$ and $k$. Thus, the extended table of values for either $\sone{n}{k}$ or $\stwo{n}{k}$ will contain all the values of both $\sone{n}{k}$ and $\stwo{n}{k}$.

Ex 1.8.11 Under the assumptions that $s(n,0)=s(0,n)=0$ for $n\not=0$, and $s(n,k) = s(n-1,k-1) - (n-1)s(n-1,k)$, extend the table for $s(n,k)$ to all integers, and find a connection to $S(n,k)$ similar to that in the previous problem.

Ex 1.8.12 Prove corollary 1.8.4.

Ex 1.8.13 Prove the remaining part of theorem 1.8.6.
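The recurrences (1.8.1) and theorem 1.8.3, together with the boundary values listed above, make both triangles easy to generate by computer. The following is only a minimal sketch; the text itself contains no code, so the choice of Python and the function names c (for $\sone{n}{k}$) and S (for $\stwo{n}{k}$) are ours. Its output reproduces rows 0–5 of the triangle displayed above.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def c(n, k):
    """Unsigned Stirling numbers of the first kind, via theorem 1.8.3."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return c(n - 1, k - 1) + (n - 1) * c(n - 1, k)

@lru_cache(maxsize=None)
def S(n, k):
    """Stirling numbers of the second kind, via recurrence (1.8.1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return S(n - 1, k - 1) + k * S(n - 1, k)

# Rows 0 through 5 of the triangle of unsigned Stirling numbers of the
# first kind; this reproduces the triangle displayed in the text.
for n in range(6):
    print([c(n, k) for k in range(n + 1)])
```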
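Theorem 1.8.6 can be checked numerically in the same spirit: put the signed numbers $s(n,k)=(-1)^{n-k}\sone{n}{k}$ and the numbers $S(n,k)$ into lower-triangular matrices and multiply, as in the $6\times 6$ example above. The sketch below is again only illustrative; it reuses the functions c and S from the previous sketch.

```python
m = 6  # size of the truncated triangles, matching the 6x6 example above

# Signed Stirling numbers of the first kind: s(n, k) = (-1)^(n-k) * c(n, k).
s_mat = [[c(n, k) if (n - k) % 2 == 0 else -c(n, k) for k in range(m)]
         for n in range(m)]
S_mat = [[S(n, k) for k in range(m)] for n in range(m)]

# Matrix product; by theorem 1.8.6 it should be the identity matrix.
product = [[sum(s_mat[n][j] * S_mat[j][k] for j in range(m)) for k in range(m)]
           for n in range(m)]

assert all(product[n][k] == (1 if n == k else 0)
           for n in range(m) for k in range(m))
for row in product:
    print(row)
```

The same check passes for any truncation size m, which is a concrete way of seeing that the two infinite triangles are inverse to one another.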
CommonCrawl
Simultaneous analysis of eight bioactive steroidal saponins in Gongxuening capsules by HPLC-ELSD and HPLC-MSn. Acta Chromatographica, https://doi.org/10.1556/achrom.21.2009.2.11. Authors: X. X. Liu, L. Wang, J. Yang, T. T. Zhang, X. D. Deng, and Q. Wang.
Rapid high-performance liquid chromatographic methods with evaporative light scattering detection (HPLC-ELSD) and electrospray ionization multistage mass spectrometry (HPLC-ESI-MSn) have been established and validated for simultaneous qualitative and quantitative analysis of eight steroidal saponins in ten batches of Gongxuening capsule (GXN), a widely commercially available traditional Chinese preparation. The optimum chromatographic conditions entailed use of a Kromasil C18 column with acetonitrile-water (30:70 to 62:38, v/v) as mobile phase at a flow rate of 1.0 mL min^-1. The drift tube temperature of the ELSD was 102°C and the nebulizing gas flow rate was 2.8 L min^-1. Separation was successfully achieved within 25 min. LC-ESI-MSn was used for unequivocal identification of the constituents of the samples by comparison with reference compounds. The assay was fully validated for precision, repeatability, accuracy, and stability, then successfully applied to quantification of the eight compounds in samples. The method could be effective for evaluation of the clinical safety and efficacy of GXN.

Calorimetric study and thermal analysis of crystalline nicotinic acid. https://doi.org/10.1023/b:jtan.0000027833.24442.a0. Authors: S. Wang, Z. Tan, Y. Di, F. Xu, M. Wang, L. Sun, and T. Zhang.
As one primary component of Vitamin B3, nicotinic acid [pyridine 3-carboxylic acid] was synthesized, and a calorimetric study and thermal analysis of this compound were performed. The low-temperature heat capacity of nicotinic acid was measured with a precise automated adiabatic calorimeter over the temperature range from 79 to 368 K. No thermal anomaly or phase transition was observed in this temperature range. A solid-to-solid transition at T_trs = 451.4 K, a solid-to-liquid transition at T_fus = 509.1 K and a thermal decomposition at T_d = 538.8 K were found through the DSC and TG-DTG techniques. The molar enthalpies of these transitions were determined to be Δ_trs H_m = 0.81 kJ mol^-1, Δ_fus H_m = 27.57 kJ mol^-1 and Δ_d H_m = 62.38 kJ mol^-1, respectively, from the integrals of the peak areas of the DSC curves.

Sesame oil protects against hepatic injury via the inhibition of neutrophil activation in thioacetamide-treated rats. Acta Phytopathologica et Entomologica Hungarica, https://doi.org/10.1556/aphyt.47.2012.2.3. Authors: P. Chu, D. Hsu, L. Wang, and M. Liu.
Thioacetamide (TAA) is a potent hepatotoxicant in acute and chronic hepatic injury. The study examined the protective effect of sesame oil against TAA-induced hepatic injury in rats. Hepatic injury was induced by intraperitoneal injection of 100 mg/kg of TAA for 24 h. Triple doses of sesame oil (1, 2, or 4 mL/kg) were given orally 0, 6, and 12 h after TAA treatment. TAA significantly increased serum aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels. Sesame oil decreased serum AST and ALT levels and significantly inhibited hepatic lipid peroxidation and nitric oxide levels compared with the TAA-alone group. Further, sesame oil significantly inhibited the TAA-induced hepatic neutrophil activation marker myeloperoxidase activity.
However, sesame oil did not affect hepatic tumor necrosis factor, IL-1β and IL-10 generation in the TAA-treated group. In conclusion, sesame oil protects against TAA-induced hepatic injury and oxidative stress via the inhibition of neutrophil activation. However, inflammatory cytokines may not be involved in sesame-oil-associated hepatic protection against TAA in rats.

Dynamics in simultaneous bio-electro-generative leaching for pyrite-MnO2. Authors: L. Xiao, Y. Li, S. Wang, Z. Fang, and G. Qiu.
The principle of electro-generative simultaneous leaching (EGSL) is applied to simultaneous leaching of pyrite-MnO2 in this paper. A galvanic system for bio-electro-generative simultaneous leaching (BEGSL) has been set up. The equation of electric quantity vs. time is used to study the effect of produced sulfur on electro-generative efficiency and quantity. It has been shown that the resistance decreased in the presence of Acidithiobacillus thiooxidans (A. thiooxidans) with the increase of electro-generative efficiency. The effects of temperature and grain size on the rate of ferrous extraction from pyrite were studied in both the presence and the absence of A. thiooxidans. The changes in the extraction rate of Fe2+ with particle size in the presence of A. thiooxidans were more evident than those in its absence, which indicated that the extraction in bio-electro-generative leaching was remarkably affected by particle size. Around the optimum culture temperature for A. thiooxidans, the conversion rate of Fe2+ depended more strongly on temperature. The charge transferred in BEGSL includes the conversion of part of the S0 to sulfate in the presence of A. thiooxidans, which is called the biologic electric quantity; the biologic electric quantity reached 58.10% of the total transferred charge within 72 h.

The Carbon Depositing Behavior and its Kinetic Research in Benzene Alkylation Process Over High Silicate ZSM-5 Zeolite. https://doi.org/10.1023/a:1010107306028. Authors: J. Liu, W. Chen, Q. Wang, and L. Xu.
In our invention, FCC (fluid catalytic cracking) dry gas could be used to react with benzene without any special purification, and more than 90% of the ethylene was converted to ethylbenzene. The phenomenon of carbon deposition over the catalyst surface was obvious and led to deactivation of the catalyst, so it is important to study the carbon depositing behavior of the catalyst during alkylation of benzene. The influence of several factors, such as temperature, reaction time, and reactant concentration, on the amount and kinetics of carbon deposition was investigated, and carbon depositing rate equations were obtained for the different reactants.

Study of the Kinetics of the Combustion Reaction on Shuangya Mountain Coal Dust by TG. Authors: J. Liu, D. He, L. Xu, H. Yang, and Q. Wang.
The combustion behavior of Shuangya Mountain (SYM) coal dust has been investigated by means of TG in this paper. The reaction fraction α can be obtained from isothermal TG data. Regressions of g(α), an integral function of α, vs. t were performed for different reaction mechanisms. The mechanism of nucleation and nuclei growth is determined as the controlling step of the coal dust combustion reaction by the correlation coefficient of the regression, and the kinetic equation of the SYM coal dust combustion reaction has been established.

DSC study of stabilization reactions in poly(acrylonitrile-co-itaconic acid) with peak-resolving method. Authors: Q. Ouyang, L. Cheng, H. Wang, and K. Li.
The effect of itaconic acid (IA) content and heating rate on the stabilization reactions in poly(acrylonitrile-co-itaconic acid) (P(AN-co-IA)) was investigated by differential scanning calorimetry (DSC) with a peak-resolving method. Increasing IA content was effective in decreasing the initial temperature and the heat evolved, and was found to enhance oxidative reactions to some extent. Meanwhile, increasing the heating rate resulted in a shift of the exotherm to a higher temperature and a more rapid liberation of heat. The percentage of area of the first exothermic peak increased with increasing heating rate, which could be attributed to the enhancement of the free radical cyclization reactions.

Studies on thermochemical properties of ionic liquids based on transition metal. Authors: W. Guan, L. Li, H. Wang, J. Tong, and J. Yang.
A brown and transparent ionic liquid (IL), [C4mim][FeCl4], was prepared by mixing anhydrous FeCl3 with 1-butyl-3-methylimidazolium chloride ([C4mim][Cl]) in a 1/1 molar ratio under stirring in a glove box filled with dry argon. The molar enthalpies of solution, Δ_s H_m, of [C4mim][FeCl4] in water at various molalities were determined with a solution-reaction isoperibol calorimeter at 298.15 K. Considering the hydrolysis of the anion [FeCl4]− in the dissolution process of the IL, a new method of determining the standard molar enthalpy of solution, Δ_s H_m^0, was put forward on the basis of the Pitzer solution theory of mixed electrolytes. The value of Δ_s H_m^0 and the sums of Pitzer parameters $(4\beta_{Fe,Cl}^{(0)L} + 4\beta_{C_4mim,Cl}^{(0)L} + \Phi_{Fe,C_4mim}^{L})$ and $(\beta_{Fe,Cl}^{(1)L} + \beta_{C_4mim,Cl}^{(1)L})$ were obtained, respectively. In terms of a thermodynamic cycle and the lattice energy of the IL calculated by Glasser's lattice energy theory of ILs, the dissociation enthalpy of the anion [FeCl4]−, ΔH_dis ≈ 5650 kJ mol^-1, for the reaction [FeCl4]−(g) → Fe3+(g) + 4Cl−(g), was estimated. It is shown that the large hydration enthalpies of the ions are compensated by the large dissociation enthalpy of the [FeCl4]− anion, Δ_d H_m, in the dissolution process of the IL.

A kinetic analysis of thermal decomposition of polyaniline/ZrO2 composite. Authors: S. Wang, Z. Tan, Y. Li, L. Sun, and Y. Li.
Synthesis, characterization and thermal analysis of polyaniline (PANI)/ZrO2 composite and PANI were reported in our earlier work. In the present work, the kinetic analysis of the decomposition process for these two materials was performed under non-isothermal conditions. The activation energies were calculated through the Friedman and Ozawa-Flynn-Wall methods, and the possible kinetic model functions have been estimated through the multiple linear regression method.
The results show that the kinetic models for the decomposition process of the PANI/ZrO2 composite and PANI are both D3, and the corresponding function is f(α) = 1.5(1−α)^{2/3}[1−(1−α)^{1/3}]^{−1}. The correlated kinetic parameters are E_a = 112.7±9.2 kJ mol^-1, ln A = 13.9 and E_a = 81.8±5.6 kJ mol^-1, ln A = 8.8 for the PANI/ZrO2 composite and PANI, respectively.

Rapid analysis of multiple components in Radix et Rhizoma Rhei using ultraperformance liquid chromatography tandem mass spectrometry. Authors: G.-Yin. Wang, L.-W. Xu, and Y.-P. Shi.
A simple and rapid method, using online ultraperformance liquid chromatography with photodiode array detection and electrospray ionization mass spectrometry (UPLC-PDA-eλ-ESI-MS/MS), was developed for the in-depth analysis of 50 batches of Radix et Rhizoma Rhei. The analysis was performed on a UPLC BEH C18 column using a gradient elution system. Baseline separation could be achieved in less than 7.5 min. At the same time, on the basis of the 50 batches of samples collected from representative cultivated regions, a novel chromatographic fingerprint was devised by UPLC-PDA, in which 27 common peaks were detected and identified by the developed UPLC-MS/MS method step by step according to fragmentation mechanisms, MS/MS data, standards, and relevant literature. Many active components gave prominent [M - H]− ions in the ESI mass spectra. These components include anthraquinones, sennosides, stilbenes, glucose gallates, naphthalenes, and catechins. Furthermore, based on the information on these Radix et Rhizoma Rhei components and combined with discriminant analysis, a novel discriminant analysis equation (DAE) was established for the quality control of Radix et Rhizoma Rhei for the first time.
CommonCrawl
Arnaud Ducrot, University of Le Havre, France. Some Results on an Evolutionary-epidemic Problem Arising in Plant Disease. Thursday, May 16, 2019, 5:00pm. Ungar Room 402.
Abstract: In this talk we discuss various properties of an evolutionary-epidemic system modelling a plant disease epidemic and incorporating the ability of the pathogen to adapt to the environment by mutation. The resulting problem consists of an integro-differential system of equations that typically depends on a small parameter $\varepsilon>0$ that describes the dispersion of the pathogen in the phenotype trait space. In the first part of this talk, we show that the system asymptotically stabilizes toward its unique endemic equilibrium and we describe, using a small-parameter ($\varepsilon$) asymptotic, a possibly long transient behaviour before reaching the endemic equilibrium. In the second part, the above problem is extended to the case where the populations are also structured with respect to physical space and where the infection is able to disperse. In that setting, we discuss the spatio-temporal evolution of the disease by studying some properties of the travelling wave solutions of this system, which model the spatial spread of the disease.

Pierre Magal, University of Bordeaux, France. Existence of Wave Trains for the Gurtin-McCamy Equation. Tuesday, May 14, 2019, 5:00pm.
Abstract: This work is mainly motivated by the study of periodic wave train solutions for the so-called Gurtin-McCamy equation. To that aim we construct a smooth center manifold for a rather general class of abstract second order semi-linear differential equations involving non-densely defined operators. We revisit results on commutative sums of linear operators using the integrated semigroup theory. These results are used to reformulate the notion of the weak solutions of the problem. We also derive a suitable fixed point formulation for the graph of the local center manifold that allows us to conclude the existence and smoothness of such a local invariant manifold. Then we derive a Hopf bifurcation theorem for second order semi-linear equations. This result is applied to study the existence of periodic wave trains for the Gurtin-McCamy problem, that is for a class of non-local age structured equations with diffusion.

Dr. Alex Iosevich. Analytic and Combinatorial Aspects of Finite Point Configurations. Thursday, April 18, 2019, 5:00pm.
Abstract: We are going to discuss the following basic question: how large does a subset of a vector space need to be to ensure that it determines a positive proportion of all possible point configurations of a given type, where the notion of large depends on the structure of the underlying field? We shall discuss the analytic and combinatorial aspects of this problem, describe some recent results and applications to problems in classical analysis involving the existence and non-existence of exponential bases and frames.
Dr. Andrew Morozov. Towards Constructing a Mathematically Rigorous Framework for Modelling Evolutionary Fitness.
Abstract: In modelling biological evolution, a major mathematical challenge consists in an adequate quantification of selective advantages of species. Current approaches to modelling natural selection are often based on the idea of maximization of a certain prescribed criterion – evolutionary fitness. This paradigm was inspired by Darwin's seminal idea of the 'survival of the fittest'. However, the concept of evolutionary fitness is still somewhat vague, intuitive and is often subjective. On the other hand, by using different definitions of fitness one can predict conflicting evolutionary outcomes, which is obviously unfortunate. In this talk, I present a novel axiomatic approach to model natural selection in dynamical systems with inheritance in an arbitrary function space. For a generic self-replication system, I introduce a ranking order of inherited units following the underlying measure density dynamics. Using such a ranking, it becomes possible to derive a generalized fitness function whose maximization predicts the long-term evolutionary outcome. The approach justifies the variational principle of determining evolutionarily stable behavioural strategies. I demonstrate a new technique allowing one to derive evolutionary fitness for population models with structuring (e.g. in models with time delay), which has so far been a mathematical challenge. Finally, I show how the method can be applied to a von Foerster continuous stage population model.

Alexander Volberg. Poincaré Inequalities on Hamming Cube and Related Combinatorial and Probabilistic Problems. Thursday, March 28, 2019, 5:00pm.
Abstract: Geometric inequalities on the Hamming cube imply corresponding isoperimetric inequalities in Gaussian spaces. Inequalities in the discrete setting (on the Hamming cube) are usually more difficult and deeper. In particular, Poincaré inequalities on the Hamming cube give sharp lower estimates for the product measure of the boundaries of arbitrary sets of the Hamming cube. Such estimates were used by Margulis in his famous network connectivity theorem. We will survey such estimates obtained by Margulis, Bobkov, Ledoux, Lust-Piquard. Recently the constant in the L1 discrete Poincaré inequality was improved. The sharp constant remains unknown (unlike the Gaussian case, where it was found by Maurey–Pisier and then Ledoux), but we will show the idea of the improvement.

Professor Fedor Bogomolov. On Projective Invariants of k-tuples of Torsion Points on Elliptic Curves. Tuesday, March 19, 2019, 5:00pm.
Abstract: Every complex elliptic curve $E$ has a natural (so-called hyperelliptic) involution $\theta$ (as an abelian group and algebraic curve) with $4$ fixed points and $P^1$ as quotient $E/\theta$. The fixed points of $\theta$ can be identified with a subgroup of points of order $2$ on $E$ within the torsion subgroup $Q/Z+ Q/Z\subset E$. The problem which I am going to consider is about the variation of the images of collections of points from $Q/Z+ Q/Z$ in $P^1$ which occur under the variation of the elliptic curve $E_t$. We are considering more precisely the variation of projective invariants of such collections. Thus the problem becomes interesting when the number of corresponding points in $P^1$ is $\geq 4$. In our joint work with Yuri Tschinkel and Hang Fu we formulated several conjectures concerning the behavior of such sets.
In our recent article with Hang Fu we managed to describe all collections of $4$-tuples for which the variation of projective invariants is trivial. I will discuss the proof and description of such $4$-tuples in my talk and several other general concepts and results related to the subject.

Sergey Fomin. Morsifications and Mutations. Friday, March 1, 2019, 5:00pm.
Abstract: I will discuss a new and somewhat mysterious connection between singularity theory and cluster algebras, more specifically between the topology of isolated singularities of plane curves and the mutation equivalence of quivers associated with their morsifications. This is joint work with Pavlo Pylyavskyy, Eugenii Shustin, and Dylan Thurston.

David Zureick-Brown. Diophantine and Tropical Geometry. Thursday, February 21, 2019, 5:00pm.
Abstract: Diophantine geometry is the study of integral solutions to a polynomial equation. For instance, for integers a,b,c ≥ 2 satisfying 1/a + 1/b + 1/c < 1, Darmon and Granville proved that the individual generalized Fermat equation x^a + y^b = z^c has only finitely many coprime integer solutions. Conjecturally something stronger is true: for a,b,c ≥ 3 there are no non-trivial solutions. I'll discuss various other Diophantine problems, with a focus on the underlying intuition and conjectural framework. I will especially focus on the uniformity conjecture, and will explain new ideas from tropical geometry and our recent partial proof of the uniformity conjecture.

Residual Intersections, Old and New. Friday, February 15, 2019, 5:00pm.
Abstract: Two general quadric hypersurfaces in complex 3-space that contain a line intersect in the line and also in a curve of degree 3, the "residual intersection". I'll describe the 19th-century motivations and origins of the theory of residual intersections, and also some recent work in the area.

Dr. Akram S. Alishahi, Ritt Assistant Professor at Columbia University. Homological Knot Invariants, Relations and Applications. Wednesday, December 12, 2018, 3:00pm.
Abstract: Knot theory is about studying knots, i.e., images of smooth injective maps from the circle to R^3. In this talk, we will start by sketching some problems in knot theory. Then we will discuss two knot invariants, Khovanov homology and knot Floer homology, and we will explain how they can be used to answer some of these questions. Khovanov homology and knot Floer homology are algebraic knot invariants that are defined combinatorially and analytically, respectively. Despite their very different definitions, the two invariants seem to contain a great deal of the same information and are conjectured to be related. In parallel, we will discuss some of their similarities. This talk is based on joint works with Nathan Dowlin and Eaman Eftekhary.

Dr. Jeffrey Meier. Generalized Square Knots and Homotopy 4-spheres. Monday, December 10, 2018, 5:00pm.
Abstract: Perhaps the most elusive open problem in low-dimensional topology is the smooth 4-dimensional Poincare Conjecture, which asserts that any 4-manifold with the homotopy type of the 4-sphere is diffeomorphic to the 4-sphere. For the last forty years, potential counterexamples to this conjecture have been constructed, illustrated, and subsequently standardized. Many of these examples are geometrically simply connected, meaning they can be built without 1-handles. If such a homotopy 4-sphere is built with only one 2-handle, then it must be the 4-sphere; this is a consequence of David Gabai's solution to the Property R Conjecture.
In this talk, I will discuss work to understand geometrically simply connected homotopy 4-spheres that are built with two 2-handles. In the case that one of the 2-handles is attached along a fibered knot, we obtain strong results about the nature of the second component. Building on this, we use the beautiful periodic structure of torus knots to classify the attaching curve of the second component when the first component is a generalized square knot (a torus knot summed with its mirror). Finally, we prove that for an infinite family of such links, the corresponding homotopy 4-sphere is the standard one, proving the Poincare Conjecture in this setting. We also give intriguing new potential counterexamples to the Poincare Conjecture coming from these families. This talk is based on joint work with Alex Zupan. Dr. Libin Rong Modeling HIV Dynamics under Treatment Thursday, December 6, 2018, 5:00pm Abstract: Highly active antiretroviral therapy has successfully controlled HIV replication in many patients. The treatment effectiveness depends on many factors, such as pharmacokinetics/pharmacodynamics of drugs and the intracellular stages of the viral replication cycle inhibited by antiretroviral drugs. In this talk, I will present some recent work on studying HIV dynamics under treatment. Using multi-stage models, I will show that drugs from different classes have different influence on HIV decay dynamics. Using models that combine pharmacodynamics and virus dynamics, I will show that pharmacodynamic profiles of drugs can significantly affect the outcome of either early or late treatment of HIV infection. Dr. Marco A. M. Guaraco L. E. Dickson Instructor at the University of Chicago Member of the Institute for Advanced Study Phase Transitions and Minimal Hypersurfaces Monday, December 3, 2018, 5:00pm Abstract: Long standing questions in the theory of minimal hypersurfaces have been solved in the past few years. This progress can be explained and enriched through a strong analogy with the theory of phase transitions. I will present the current state of these ideas, discuss my contributions to the subject and share directions for future developments. Dr. Christopher Scaduto Joint Assistant Professor and NSF Postdoctoral Fellow Simons Center for Geometry and Physics at Stony Brook University Instantons and Lattices of Smooth 4-manifolds with Boundary Thursday, November 29, 2018, 5:00pm Abstract: A classical invariant associated to a 4-manifold is the intersection form on its second homology group, which is an integral lattice. A famous result of Donaldson from the early 1980s says that a definite lattice of a smooth compact 4-manifold without boundary is diagonalizable over the integers. What if there is non-empty boundary? This talk surveys recent advances on this problem which have been obtained using Yang-Mills instanton Floer theory. Dr. Siyuan Lu Hill Assistant Professor Isometric Embedding and Quasi-local Mass Tuesday, November 27, 2018, 5:00pm Abstract: In this talk, we will first review the classic result of isometric embedding of (S^2,g) into 3-dimensional Euclidean space by Nirenberg and Pogorelov. We will then discuss how to apply it to define quasi-local mass in general relativity. In particular, the positivity of Brown-York quasi-local mass proved by Shi-Tam is equivalent to the Riemannian Positive mass theorem by Schoen-Yau and Witten. We will then discuss the recent progress in isometric embedding of (S^2,g) into general Riemannian manifold. 
We will also discuss the recent work on a localized Riemannian Penrose inequality, which is equivalent to the Riemannian Penrose inequality. Thursday, November 8, 2018, 5:00pm David Herzog Ergodicity and Lyapunov Functions for Langevin Dynamics with Singular Potentials Thursday, September 27, 2018, 5:00pm Abstract: We discuss Langevin dynamics of N particles on R^d interacting through a singular repulsive potential, e.g. the well-known Lennard-Jones type, and show that the system converges to the unique invariant Gibbs measure exponentially fast in a weighted total variation distance. The proof of the result turns on an explicit construction of a Lyapunov function. In contrast to previous results for such systems, our result implies geometric convergence to equilibrium starting from an essentially optimal family of initial distributions. Dr. Damian Brotbek Hyperbolicity and Jet Differentials Tuesday, June 5, 2018, 2:30pm Abstract: In the 60's, Kobayashi introduced on any complex manifold X an intrinsic pseudo-metric generalizing the Poincaré metric on the unit disc. When this pseudo metric is in fact a metric, the manifold X is said to be hyperbolic in the sense of Kobayashi. By a result of Brody, when X is compact, then it is hyperbolic if and only if it does not contain any entire curve (a non constant holomorphic map from the complex plane to X). A fruitful way to study hyperbolicity problems is to use jet differential equations. Those objects, generalizing symmetric differential forms, provide obstructions to the existence of entire curves and can be used in some particular situation to prove that some given varieties is hyperbolic. The purpose of this talk, aimed at a general audience, is to give an overview on hyperbolicity and the theory of jet differentials. Michael Larsen The Circle Method in Algebraic Geometry Monday, May 7, 2018, 5:00pm Abstract: Certain counting problems in group theory can be formulated either in terms of varieties over finite fields, or (dually) in terms of irreducible character values. By comparing the two points of view, one can either use geometry to give character estimates or (what I will mostly talk about) character estimates to prove theorems in geometry. Ayelet Lindenstrauss Cohomology Theories and Topological Hochschild Homology Abstract: Cohomology theories assign to every space a graded abelian group, satisfying appropriate axioms. Particularly nice ones have a product, that is: they assign to every space a graded ring. It turns out that such theories can be described in terms of ring spectra. Modern constructions let us define actual products that are both associative and unital on the ring spectra that correspond to multiplicative cohomology theories, which is harder than one would think due to some "flabbiness" in the definition of the ring spectra. Using these products, though, we can look at the ring spectra of discrete rings (corresponding to the usual cohomology of a space with coefficients in that ring) and replicate homological algebra constructions on the discrete rings with topological constructions on the corresponding ring spectra. The integers are an initial object in the category of discrete unital rings; in the category of ring spectra, that role is played by the sphere spectrum. So even if we are only interested in understanding discrete rings, doing homological algebra with their ring spectra over the sphere spectrum turns out to give new and interesting constructions. 
Topological Hochschild homology is the ring spectrum version of Hochschild homology, with tensor products taken over the sphere spectrum rather than over the integers. It is a finer invariant of discrete rings, and the Dennis trace map from algebraic K-theory to Hochschild homology can be applied to ring spectra and thus factors through Topological Hochschild homology. I will discuss the topological Hochschild homology of rings including number rings (joint with Ib Madsen) and maximal orders in simple algebras over the rationals (joint with Henry Chan). Professor Takayuki Hibi Reflexive Polytopes Monday, April 30, 2018, 5:00pm Abstract: A lattice polytope P of dimension d in the d-dimensional euclidean space is called reflexive if the origin is contained in the interior of P and if the dual polytope of P is again a lattice polytope. For example, the triangle in the euclidean plane with the vertices (-1,-1), (-1,2) and (2,-1) is reflexive. It turns out that reflexive polytopes play an important role in various areas of mathematics. One of the questions in combinatorics is how to construct reflexive polytopes in natural ways. In my talk, in the frame of Gröbner bases, a technique to yield reflexive polytopes will be discussed. No special knowledge will be required to understand my talk. James McKernan Symmetries of Polynomials Abstract: Symmetries of polynomials are closely connected to the geometry of the variety of zeroes of the polynomials. Varieties come in three types and all three types have very different symmetry groups. We review some results, both old and new, which place bounds on the size of the symmetry group. Dr. Boris Botvinnik Conformal Geometry and Topology of Manifolds Abstract: I will start with basics on conformal geometry, we will discuss the Einstein-Hilbert functional and Yamabe problem. Then I plan to discuss the problem of existence of metrics with positive scalar curvature for simply connected spin manifolds. At the end I would like to describe some recent results on the space of metrics with positive scalar curvature. Mary Ann Horn Professor and Chair, Department of Mathematics Applied Mathematics and Statistics Using Mathematical Modeling to Understand the Role of Diacylglycerol (DAG) as a Second Messenger Abstract: Diacylgylcerol (DAG) plays a key role in cellular signaling as a second messenger. In particular, it regulates a variety of cellular processes and the breakdown of the signaling pathway that involves DAG contributes to the development of a variety of diseases, including cancer. A mathematical model of the G-protein signaling pathway in RAW 264.7 macrophages downstream of P2Y6 activation by the ubiquitous signaling nucleotide uridine 5'-diphosphate is presented. The primary goal is to better understand the role of diacylglycerol in the signaling pathway and the underlying biological dynamics that cannot always be easily measured experimentally. The model is based on time-course measurements of P2Y6 surface receptors, inositol trisphosphate, cytosolic calcium, and with a particular focus on differential dynamics of multiple species of diacylglycerol. When using the canonical representation, the model predicted that key interactions were missing from the current pathway structure. 
Indeed, the model suggested that to accurately depict experimental observations, an additional branch to the signaling pathway was needed, whereby an intracellular pool of diacylglycerol is immediately phosphorylated upon stimulation of an extracellular receptor for uridine 5'-diphosphate and subsequently used to aid replenishment of phosphatidylinositol. As a result of sensitivity analysis of the model parameters, key predictions can be made regarding which of these parameters are the most sensitive to perturbations and are therefore most responsible for output uncertainty. (Joint work with Hannah Callender, University of Portland, and the H. Alex Brown Lab, Vanderbilt.) Iosif Polterovich Professor, Universite de Montreal Canada Research Chair in Geometry and Spectral Theory Sloshing, Steklov, and Corners Thursday, April 5, 2018, 5:00pm Abstract: The sloshing problem is a Steklov type eigenvalue problem describing small oscillations of an ideal fluid. We will give an overview of some latest advances in the study of Steklov and sloshing spectral asymptotics, highlighting the effects arising from corners, which appear naturally in the context of sloshing. In particular, we will discuss the proofs of the conjectures posed by Fox and Kuttler back in 1983 on the asymptotics of sloshing frequencies in two dimensions. We will also outline an approach towards obtaining sharp asymptotics for Steklov eigenvalues on polygons. The talk is based on a joint work with M. Levitin, L. Parnovski and D. Sher. Professor Fu Liu Ehrhart Positivity Tuesday, April 3, 2018, 5:00pm Abstract: The Ehrhart polynomial counts the number of lattice points inside dilation of an integral polytope, that is, a polytope whose vertices are lattice points. We say a polytope is Ehrhart positive if its Ehrhart polynomial has positive coefficients. In the literature, different families of polytopes have been shown to be Ehrhart positive using different techniques. We will survey these results in the first part of the talk, after giving a brief introduction to polytopes and Ehrhart polynomials. Through work of Danilov/McMullen, there is an interpretation of Ehrhart coefficients relating to the normalized volumes of faces. In the second part of the talk, I will discuss joint work with Castillo in which we try to make this relation more explicit in the case of regular permutohedra. The motivation is to prove Ehrhart positivity for generalized permutohedra. If time permits, I will also discuss some other related questions. Robert Bryant Phillip Griffiths Professor of Mathematics On Self-Dual Curves Fieldhouse at the Watsco Center Abstract: An algebraic curve in the projective plane (or, more generally in a higher dimensional projective space) is said to be 'self-dual' if it is projectively equivalent to its dual curve (after, possibly, an automorphism of the curve). Familiar examples are the nonsingular conics (or, more generally, rational normal curves in higher dimensions) and the 'binomial curves' y^a = x^b, but there are many more such curves, even in the plane. I'll survey some of the literature on these curves, particularly in the plane and 3-space, and some of what is known about their classification and moduli, including their connection with contact curves in certain contact 3-folds, some of which are singular. I'll also provide what appear to be some new examples of these curves. 
Michael Li, Applied Math Institute. Nonidentifiability Issue in Parameter Estimation of Differential Equation Models.
Abstract: Transmission models for infectious diseases using differential equations are frequently confronted with disease data. For instance, these types of models are used to analyze HIV surveillance data, for the assessment of HIV epidemics and estimation of the true burden of HIV in terms of incidence, prevalence and the size of the undiagnosed HIV-positive population. Parameter estimation when fitting the model to data is a key step of the modelling approach, and can often be complicated by the presence of nonidentifiable parameters. Nonidentifiability results in multiple, often infinitely many, parameter values for which the model fits the data equally well, while different choices of these best-fit parameter values can produce very different model predictions for unobserved quantities of public health interest. In this talk, I will begin by discussing various notions of nonidentifiability: structural vs. practical, local vs. global, etc., and different methods to detect and diagnose nonidentifiability. Then I will present a new mathematical approach to study the issue of nonidentifiability based on singular value decomposition and variance decomposition. I will then illustrate our approach with a case study of HIV estimation using an ordinary differential equation model. Some open questions for infinite dimensional models such as delay differential equation and partial differential equation models will also be discussed.

Dr. Simon Gindikin. Curved Version of the Radon Inversion Formula.
Abstract: 100 years ago Radon published his famous formula for the reconstruction of functions on the plane through their integrals along lines. Is it possible to replace the lines in this construction with other curves? Only a few such examples are known, such as geodesics or horocycles on the hyperbolic plane. These formulas usually look similar to Radon's formula. We give a universal reconstruction formula, as a closed differential form on the manifold of all curves, whose restriction to different cycles of curves gives specific examples of inversion formulas for curves. It is possible to interpret this construction as a real Cauchy integral formula.

Professor Yipeng Jing, Academician, The Chinese Academy of Sciences. Probing the Cosmic Expansion with Large Scale Galaxy Surveys: Prospects and Challenges. Tuesday, January 30, 2018, 5:00pm.
Abstract: The Universe is found to be expanding in an accelerating phase. Mysterious dark energy is one possible solution to explain the acceleration, and modifying the gravity theory is another. In order to find out what is driving the cosmic accelerating expansion, one has to first answer the question whether General Relativity is still valid on cosmic scales. Astronomers are carrying out this test by observing many millions of galaxies. I will introduce the basic information about these observations and summarize the current status. I will also introduce the future observations, and finally outline theoretical and observational challenges.

Professor Ziv Ran. Filling Groovy: The Goodness of Generic Projections. Thursday, December 14, 2017, 5:00pm.
Abstract: The classical process of projection amounts to selecting a linear subspace W from a given vector space V of functions on a geometric object X. This is analogous to vision, where W is the 2-dimensional space of coordinate functions on a retina.
What information is lost by passing from V to W when W is selected randomly among subspaces of given dimension? We will describe some progress on a version of this problem in complex algebraic geometry.

Steven Lu. The Structure of Semi-hyperbolic Projective Algebraic Manifolds.
Abstract: S. Kobayashi coined the term hyperbolic for a compact complex manifold M without nontrivial holomorphic images of C and conjectured the positivity of the canonical bundle of M. In particular, M would be projective if true. But the conjecture is still wide open for projective manifolds beyond dimension two. A spectacular advance in this direction is the resolution in the projective case by D. Wu-S.T. Yau (Invent. 2016) of the differential geometric analog of the conjecture, due to S.T. Yau. The analog pertains to compact Kähler manifolds with negative holomorphic curvature and the said advance resolves in particular the abundance conjecture, a key conjecture for the classification of algebraic varieties, for such a manifold. In this talk, I will mainly focus on a recent joint paper with G. Heier, B. Wong and F.Y. Zheng that offers a structure theorem for projective Kähler manifolds with negative holomorphic curvature, assuming the abundance conjecture. The analysis involves a careful study of the rank of the said curvature, and offers relationships to the global abundance problem.

Dr. Nancy Rodriguez. On the Obstruction and Propagation of Entire Solutions to a Non-local Reaction Diffusion Equation with a Gap.
Abstract: In this talk I will discuss the propagation properties of a bistable spatially heterogeneous reaction-diffusion equation where the diffusion is generated by a jump process. Here the spatial heterogeneity is due to a small region with decay. First, I will focus on the existence and uniqueness of a "generalized transition front". Then I will give some partial results about propagation and obstruction of the transition front. Throughout the talk I will point out many interesting differences between the non-local and local reaction-diffusion equations.

Pengfei Guan. Isometric Embeddings, Geometric Inequalities and Nonlinear PDEs. Wednesday, April 12, 2017, 5:00pm.
Abstract: In the 1950s, Nirenberg and Pogorelov solved the classical Weyl problem regarding the isometric embedding of a positively curved compact surface into R^3. The solution of Weyl's problem was crucial to the definition of the Brown-York quasi-local mass in general relativity in the 1990s, and also plays a key role in the recent works of Liu-Yau and Wang-Yau. The development brings renewed focus on the Weyl problem of isometric embedding of surfaces into general 3D ambient spaces. We will discuss the elliptic PDEs involved in the problem and recent work on this type of nonlinear equations. We also discuss a new proof of isoperimetric-type inequalities using parabolic PDEs. The talk tries to illustrate some beautiful interactions of nonlinear PDE, differential geometry and general relativity. It is accessible to a general mathematical audience.

Dr. Alexander Kiselev. Enhancement of Biological Reactions by Chemotaxis.
Abstract: Many reactions and processes in nature take place in fluid and in the presence of both fluid flow and chemotaxis: directed motion of cells or species guided by an attractive (or repulsive) chemical. One example of such a process is broadcast spawning by corals, the way corals reproduce. Models of this process based on pure reaction-diffusion tend to dramatically underestimate the fertilization success rate.
I will discuss a simplified 2D single equation model which incorporates fluid flow and chemotaxis effects. In the framework of this model built on the basis of the well known Keller-Segel equation, the role of chemotaxis turns out to be crucial. In the presence of a sufficiently strong chemotaxis, even weakly coupled reaction can lead to high fertilization rate on a fixed time scale. If time permits, I will discuss some progress in a more sophisticated model which involves a system of two equations. Novel mathematical tools used in this work involve sharp convergence to equilibrium estimates for a class of Fokker-Planck operators with logarithmic-type potentials. Anders Björner KTH Royal Institute of Technology, Sweden Around Codimension One Embeddings Tuesday, March 7, 2017, 5:00pm Abstract: Being drawable in the plane without intersecting edges is a very important and much studied graph property. Euler observed in 1752 that planarity implies a linear upper bound on the number of edges of a graph (which otherwise is quadratic in the number of vertices). Several ways of characterizing planar graphs have been given during the previous century. Planarity is, of course, a special case of a general notion of embedding a simplicial d-complex into real k-space. The k=d+1 and k=2d cases are of particular interest in higher dimensions, since they both generalize planarity. Embedding a space into some manifold is a much studied question in geometry/topology. For instance, van Kampen showed that in the k=2d case there is a very useful cohomological obstruction to embeddability. Higher-dimensional embeddability has been studied also from the combinatorial point of view, in a tradition inspired by Euler. In this talk I will survey a few topics from the combinatorial study of embeddings, such as bounds for the number of maximal faces and algorithmic questions. I will end with mention of some joint work with A. Goodarzi concerning an obstruction to k=d+1 embeddings. The talk will not presuppose previous familiarity with the topic. Simon A. Levin Collective Motion, Collective Decision-making, and Collective Action Abstract: There exists a rich history of research on the mathematical modeling of animal populations. The classical literature, however, is inadequate to explain observed spatial patterning, or foraging and anti-predator behavior, because animals actively aggregate. This lecture will begin from models of animal aggregation, the role of leadership in collective motion and the evolution of collective behavior, and move from there to implications for decision-making in human societies. Ecological and economic systems are alike in that individual agents compete for limited resources, evolve their behaviors in response to interactions with others, and form exploitative as well as cooperative interactions as a result. In these complex-adaptive systems, macroscopic properties like the flow patterns of resources like nutrients and capital emerge from large numbers of microscopic interactions, and feedback to affect individual behaviors. I will explore common features of these systems, especially as they involve the evolution of cooperation in dealing with public goods, common pool resources and collective movement across systems; Examples and lessons will range from bacteria and slime molds to groups to insurance arrangements in human societies and international agreements on environmental issues. 
Ron Adin Bar Ilan University, Israel Cyclic Descents, Toric Schur Functions and Gromov-Witten Invariants Tuesday, February 28, 2017, 5:00pm Abstract: Descents of permutations have been studied since Euler. This notion has been vastly generalized in several directions, and in particular to the context of standard Young tableaux (SYT). More recently, cyclic descents of permutations were introduced by Cellini and further studied by Dilks, Petersen and Stembridge. Looking for a corresponding notion for SYT, Rhoades found a very elegant solution for rectangular shapes. In an attempt to extend this concept, explicit combinatorial definitions for two-row and certain other shapes have been found, implying the Schur-positivity of various quasi-symmetric functions. In all cases, the cyclic descent set admits a cyclic group action and restricts to the usual descent set when the letter $n$ is ignored. Consequently, the existence of a cyclic descent set with these properties was conjectured for all shapes, even the skew ones. This talk will report on the surprising resolution of this conjecture: Cyclic descent sets do exist for nearly all skew shapes, with an interesting small set of exceptions. The proof applies nonnegativity properties of Postnikov's toric Schur polynomials and a new combinatorial interpretation of certain Gromov-Witten invariants. We shall also comment on issues of uniqueness. Joint with Sergi Elizalde, Vic Reiner and Yuval Roichman. Enrico Arbarello Università degli Studi di Roma "La Sapienza" Hyperplane Sections of K3 Surfaces Friday, February 3, 2017, 5:00pm Abstract: K3 surfaces and their hyperplane sections play a central role in algebraic geometry. This is a survey of the work done during the past five years to characterize which smooth curves lie on a K3 surface. Related topics will be discussed. These are joint works with a combination of the following authors: Andrea Bruno, Gavril Farkas, Edoardo Sernesi and Giulia Saccà. Yuri Tschinkel Arithmetic and Geometry of Fano Varieties Thursday, February 2, 2017, 5:00pm Abstract: I will discuss recent advances in the theory of Fano varieties over nonclosed fields (joint with B. Hassett, A. Kresch, and A. Pirutka). Dr. Tye Lidman Left-orderability and Three-manifolds Thursday, January 26, 2017, 5:00pm Abstract: A group is called left-orderable if it can be given a left-invariant total order. We will discuss the question of when the fundamental group of a three-manifold Y is left-orderable. Orderability is known to be related to certain topological aspects of Y, such as the surfaces which sit inside it. We will discuss a conjectural relationship between left-orderability and the solutions to a certain nonlinear PDE on Y. Joint Math-Physics Colloquium Professor Luc Vinet Centre de Recherches Mathématiques, Montreal The Bannai-Ito Algebra in Many Guises Physics Conference Room, 3rd Floor Abstract: This talk will offer a review of the Bannai-Ito algebra and of its higher rank extension. It will first be explained that it is in a Schur-Weyl duality with the super algebra osp(1,2). Its occurrence as the symmetry algebra of the Dirac-Dunkl equation will be discussed and its relation to orthogonal polynomials will also be presented. Spin Manifolds of Dimensions at Most 4 Wednesday, January 25, 2017, 5:00pm Abstract: Low dimensional spin manifolds are interesting objects with close connections to quadratic forms. The first part of the talk will provide an overview of these manifolds and their invariants. 
The second part of the talk will use the understanding of low dimensional spin manifolds to give a detailed description of invariants that for each topological space X detect all 3- and 4-dimensional spin manifolds mapping to X up to spin bordism.

Dr. Pierre Magal. Final Size of an Epidemic for a Two Group SIR Model.
Abstract: In this talk we consider a two-group SIR epidemic model. We study the final size of the epidemic for each sub-population. The qualitative behavior of the infected classes at the early stage of the epidemic is described with respect to the basic reproduction number. Numerical simulations are also performed to illustrate our results.

Dr. Andrea Tellini, École des Hautes Études en Sciences Sociales. Enhancement of Fisher-KPP Propagation through Lines and Strips of Fast Diffusion.
Abstract: In this talk I will present some systems of reaction-diffusion equations which take into account the presence of diffusion (and/or reaction) heterogeneities in some regions of the domain. The motivation is the modelling of the invasion of an environment by a species whose individuals can move (and/or reproduce) faster in some regions of the habitat. I will describe the dynamics of such systems and focus on the qualitative properties of the propagation speed, showing in particular when it is larger than in the case of a homogeneous environment. These are joint works with H. Berestycki, L. Rossi and E. Valdinoci.

Dr. Xi Huo. Zika Outbreaks in a Highly Heterogeneous Environment: Insights from Dynamical Modelling.
Abstract: Zika virus is in the family of Flaviviridae, and is often transmitted to humans by Aedes aegypti, a common vector for transmitting several tropical fevers, including dengue and chikungunya. The environmental heterogeneity and intervention strategies of Zika spread also involve seasonality, co-circulation of other vector-borne diseases, and demographic structures of the mosquito population. We have been developing a variety of dynamical models to understand the transmission dynamics with a focus on different aspects of environmental heterogeneities. We first consider the co-infection and co-circulation of dengue and Zika and the implications of a dengue vaccination program for Zika control in the presence of experimentally reported antibody-dependent enhancement. We then consider the impact of heterogeneity of vector demographics on the initial outbreak rate and outbreak potential using age-structured partial differential equation systems and calculating the relevant threshold using non-linear semigroup theory and spectral theory. We also examine both numerically and analytically the mechanisms for potential nonlinear oscillations using the global bifurcation theory in delay differential equations.

Dr. William Christopher Strickland. Modeling Invasive Dispersal at Multiple Scales. Thursday, December 8, 5:00pm.
Abstract: Biological invasions represent an interesting challenge to model mathematically. Landscape heterogeneity, non-local and temporally dependent spreading mechanisms, coarse data, and the presence of long-distance transportation connections are but a few of the complications that can greatly affect our understanding of invasive spread. In this talk, I will look at dispersal from a multi-scale perspective in an attempt to address some of these challenges. To begin, I will introduce a generalization of Mollison's stochastic contact birth process (J R Stat Soc 39(3):283, 1977) which is robust to non-local distribution kernels and heterogeneity in the landscape.
By interpreting the quantity of interest as species occurrence probability rather than population size, I will describe how this process may also be approximated and simulated deterministically, using niche modeling tools to characterize landscape heterogeneity. Adding to this is a method for considering the effects of a disease-vector transportation network, which can unwittingly transport a biological invader to distant sites. Finally, I will shift focus to the initial stages of an invasion and concentrate on the local and mesoscale by considering the intentional release of a parasitoid wasp biocontrol agent. Results indicate that the fluid physics of air above the landscape likely plays a critical role in the dispersal process. Numerical results will be included throughout the talk, including simulations for the cheatgrass (Bromus tectorum) invasion in Rocky Mountain National Park and the initial spread of parasitoid wasps (Eretmocerus hayati) during a biocontrol introduction.

Dr. Jan Sbierski. Strong Cosmic Censorship and the Wave Equation in the Interior of Black Holes. Wednesday, December 7, 2016, 5:00pm.
Abstract: The Einstein equations admit a locally well-posed initial value problem. However, there are explicit black hole solutions of the Einstein equations for which global uniqueness fails. The strong cosmic censorship conjecture in general relativity states that global uniqueness should hold generically, which implies the expectation that small perturbations of the initial data for the above black hole solutions should give rise to a spacetime which is globally uniquely determined. In this talk I will explain how this motivates the study of the wave equation in the interior of black holes, present an overview of previous results, and discuss a recent instability result, obtained in collaboration with Jonathan Luk, in more detail.

Dr. Jessica Lin. Stochastic Homogenization for Reaction-Diffusion Equations.
Abstract: One way of modeling phenomena in "typical" physical settings is to study PDEs in random environments. The subject of stochastic homogenization is concerned with identifying the asymptotic behavior of solutions to PDEs with random coefficients. Specifically, we are interested in the following: if the random effects are microscopic compared to the lengthscale at which we observe the phenomena, can we predict the behavior which takes place on average? For certain models of PDEs and under suitable hypotheses on the environment, the answer is affirmative. In this talk, I will focus on the stochastic homogenization for reaction-diffusion equations with both KPP and ignition nonlinearities. In the large-scale-large-time limit, the behavior of typical solutions is governed by a simple deterministic Hamilton-Jacobi equation modeling front propagation. Such models are relevant for predicting the evolution of a population or the spread of a fire in a heterogeneous medium. This talk is based on joint work with Andrej Zlatos.

Dr. Verónica Quítalo, CoLab UT Austin-Portugal Program. Systems of Partial Differential Equations Arising from Population Dynamics and Neuroscience: Free Boundary Problems as a Result of Segregation. Friday, December 2, 2016, 5:00pm.
Abstract: In this talk we will motivate and present our recent results on two phase free boundary problems arising from population dynamics. We will focus on systems with fully nonlinear diffusion and local interaction, and linear systems with a (nonlocal) long-range interaction.
In the long-range model, the growth of a population at a point is inhibited by the other populations in a full area surrounding that point. This will force the populations to stay at distance one from each other in the limit configuration. In this way we obtain, for the first time, a free boundary problem with a gap of no-man's land between the regions where the populations exist. This is a joint work with Luis Caffarelli and Stefania Patrizi. We will also present briefly some models of differential equations arising from neuroscience and share our current research on propagation of activity in the brain. We will motivate the need to incorporate the "volume" conductivity, as well as that of the neurons, into a model. This is a joint work with Aaron Yip, Zoltan Nadasdy and Silvia Barbeiro.

Dr. Nathan Totz Global Flows with Invariant Measures for a Family of Almost Inviscid SQG Equations Abstract: We present a new result, joint with Andrea Nahmod, Natasa Pavlovic, and Gigliola Staffilani, in which very low regularity flows are constructed globally in time almost surely for a family of modified SQG equations using a Gibbs measure; the resulting flows leave this Gibbs measure invariant. The family of equations we treat is formed by adding a small amount of smoothing to the active scalar of the standard inviscid SQG equation. We find that global solutions can be constructed almost surely for any nonzero amount of smoothing.

Dr. Michele Coti-Zelati Deterministic and Stochastic Aspects of Fluid Mixing Monday, November 28, 2016, 5:00pm Abstract: The process of mixing of a scalar quantity into a homogeneous fluid is a familiar physical phenomenon that we experience daily. In applied mathematics, it is also relevant to the theory of hydrodynamic stability at high Reynolds numbers - a theory that dates back to the 1830's and yet has only recently been developed in a rigorous mathematical setting. In this context, mixing acts to enhance, in certain senses, the dissipative forces. Moreover, there is also a transfer of information from large length-scales to small length-scales vaguely analogous to, but much simpler than, that which occurs in turbulence. In this talk, we focus on the study of the implications of these fundamental processes in linear settings, with particular emphasis on the long-time dynamics of deterministic systems (in terms of sharp decay estimates) and their stochastic perturbations (in terms of invariant measures).

Friday, October 7, 2016, 4:00pm

Frank Lutz On the Topology of Steel Thursday, October 6, 2016, 5:00pm Abstract: Polycrystalline materials, such as metals, are composed of crystal grains of varying size and shape. Typically, the occurring grain cells have the combinatorial types of 3-dimensional simple polytopes, and together they tile 3-dimensional space. We will see that some of the occurring grain types are substantially more frequent than others - where the frequent types turn out to be "combinatorially round". Here, the classification of grain types gives us, as an application of combinatorial low-dimensional topology, a new starting point for a topological microstructure analysis of steel.

Ryan Hynd Partial Differential Equations in Finance Thursday, August 25, 2016, 5:00pm Abstract: Starting with the Black & Scholes equation, I will give a tour of partial differential equations arising in various financial models. My goal is to emphasize what is mathematically interesting about each equation and how solutions are used in applications.
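For reference (an editorial addition, not part of the original abstract), the Black & Scholes equation mentioned above is, in its standard form, the backward parabolic PDE satisfied by the price V(S,t) of a European option on an asset with price S, volatility σ and risk-free rate r:

\[ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0. \]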
Dr. Daozhou Gao Shanghai Normal University The Importance of Synchrony in Mass Drug Administration Friday, July 8, 2016, 4:30pm Abstract: Mass drug administration (MDA), a strategy in which all individuals in a population are subject to treatment without individual diagnosis, has been recommended by the World Health Organization for controlling and eliminating several neglected tropical diseases. In this talk, I will present some results arising from mass treatment of trachoma with azithromycin. In the first part, we compare three typical drug distribution strategies (regardless of health status): constant treatment, impulsive synchronized MDA, and impulsive non-synchronized treatment. We show that the synchronized and constant strategies are, respectively, the most and least effective treatments in disease control. Elimination through synchronized treatment is always possible when adequate drug efficacy and coverage are achieved and sustained. In the second part, the optimal seasonal timing of mass administration of azithromycin for maximum antimalarial benefit has been established. This is joint work with Thomas M. Lietman and Travis C. Porco.

Dr. Wan-Tong Li Lanzhou University, China Nonlocal Effects and Nonlocal Dispersal Thursday, July 7, 2016, 4:30pm Abstract: This talk is concerned with some aspects of nonlocal dispersal equations. It consists of three parts. In the first part, I will present some relations between local (random) and nonlocal dispersal problems. In the second part, I will report our recent results on traveling waves and entire solutions of nonlocal dispersal equations. The third part is devoted to some problems on traveling waves and entire solutions of nonlocal dispersal equations.

Professor Guo Lin Traveling Wave Solutions of Evolutionary Models without Monotonicity Abstract: This talk is concerned with the traveling wave solutions of evolutionary systems including delayed reaction-diffusion systems and integrodifference equations. Even if the general monotone conditions fail, the existence of traveling wave solutions is studied by generalized upper and lower solutions. The asymptotic behavior is established by the idea of contracting rectangles. In the study of classical Lotka-Volterra competitive systems, we obtain the existence of nonmonotone traveling wave solutions, which weakly confirms the conjecture by Tang and Fife [ARMA, 1980].

Dr. Dmitri Vassiliev Spectral Asymptotics for First Order Systems Abstract: In layman's terms, a typical problem in this subject area is formulated as follows. Suppose that our universe has finite size but does not have a boundary. An example of such a situation would be a universe in the shape of a 3-dimensional sphere embedded in 4-dimensional Euclidean space. And imagine now that there is only one particle living in this universe, say, a massless neutrino. Then one can address a number of mathematical questions. How does the neutrino field (solution of the massless Dirac equation) propagate as a function of time? What are the eigenvalues (energy levels) of the particle? Are there nontrivial (i.e. without obvious symmetries) special cases when the eigenvalues can be evaluated explicitly? What is the difference between the neutrino (positive energy) and the antineutrino (negative energy)? What is the nature of spin? Why do neutrinos propagate with the speed of light? Why are neutrinos and photons (solutions of the Maxwell system) so different and, yet, so similar?
The speaker will approach the study of first order systems of partial differential equations from the perspective of a spectral theorist using techniques of microlocal analysis and without involving geometry or physics. However, a fascinating feature of the subject is that this purely analytic approach inevitably leads to differential geometric constructions with a strong theoretical physics flavour.

Rectifiability of Harmonic Measure Abstract: In a recent multi-authored paper by J. Azzam, S. Hofmann, J.-M. Martell, S. Mayboroda, M. Mourgoglou, X. Tolsa and myself the following result is proved: for an arbitrary open set in $\mathbb{R}^d$ such that its boundary carries harmonic measure $\omega$, and for any Borel set $E$ with $H^{d-1}(E)<\infty$, if the harmonic measure $\omega|_E$ is absolutely continuous with respect to $H^{d-1}|_E$, then it is rectifiable. This result solves a long-standing conjecture of Chris Bishop and generalizes the result of N. Makarov for $d=2$ and simply connected open sets, obtained in the early 80's. We will also give an overview of harmonic measure results of the last 30 years.

Weiyi Zhang J-holomorphic Subvarieties Friday, March 18, 2016, 5:45pm Abstract: In this talk, we discuss J-holomorphic subvarieties in a 4-dimensional symplectic manifold. We will start by showing that a subvariety of a complex rational surface in an exceptional rational curve class could have higher genus components. On the other hand, such an exotic phenomenon won't happen for (any tamed almost complex structures on) ruled surfaces. We will then show a general cone theorem as in Mori theory and determine the moduli space of a spherical class in a rational or ruled surface, which could be decomposed as linear systems.

Comparing Torsion Points of Elliptic Curves Abstract: In this talk we consider subsets $P^1_{E}\subset P^1$ which are the images of the torsion points of a complex elliptic curve $E$ under the projection defined as the quotient map for the standard involution. The corresponding map $E\to P^1$, and hence $P^1_{E}\subset P^1$, is uniquely defined modulo a projective transformation of $P^1$. It is known that the intersection of the subsets $P^1_{E}, P^1_{E'}$ is finite apart from the case when they coincide. However, the question is whether there is a universal constant bounding this intersection. This problem is somewhat related to Serre's conjecture on the image of the Galois action on torsion points. I am going to discuss some results concerning the matter. In particular, we show that the intersection contains exactly one point for some families of elliptic curves, that there is an infinite set of pairs $E,E'$ where the intersection consists of $10$ points, and also provide an example with $14$ intersection points. This is joint work with Hang Fu.

Dr. Phillip Griffiths Institute for Advanced Study, Princeton The Hodge Conjecture Abstract: The Hodge conjecture is one of the Millennium Prize problems. In this talk I will discuss the questions: • What is it? • Why is it interesting? • What is known about it? • Why has there been almost no progress in solving it? As little background as possible will be assumed, and what is needed will be informally explained. Based on joint work with Mark Green, Radu Laza and Colleen Robles.

Ilya Kachkovskiy Almost Commuting Operators Abstract: The classical question of whether a pair of matrices or operators with small commutator is close to a commuting pair dates back to von Neumann, Rosenthal, and Halmos.
In particular, a dimension-uniform version of this question for Hermitian matrices was open until 1993 when Lin proved the existence of a uniform distance estimate using deep C*-algebraic arguments. Lin's method did not allow one to obtain any quantitative results beyond existence. In 2001, Davidson and Szarek conjectured that the distance is bounded by some universal constant times the square root of the norm of the commutator. In this talk, I will explain the main ideas of the proof of Davidson and Szarek's conjecture (joint work with Y. Safarov), including the infinite-dimensional version. I will also talk about the relation with the Brown-Douglas-Fillmore theorem, and the case of the Hilbert-Schmidt norm for matrices. If time permits, I will discuss some work in progress that includes a possible generalization to the case of unitary matrices (where an additional topological obstruction arises), and some applications to mathematical physics.

Dr. Antoine Choffrut The Analysis of Partial Differential Equations: A Case Study with the Euler Equations of Fluid Mechanics Abstract: In this talk I will give an overview of my work on the incompressible Euler equations. It can be categorized in two classes: the use of convex integration to prove h-principles, and structure theorems for two-dimensional stationary flows. These are two very different types of results. However, I will try to highlight a unifying principle in this body of work.

Mihaela Ifrim Two Dimensional Water Waves in Holomorphic Coordinates Abstract: This is joint work with Daniel Tataru, and in parts with Benjamin Harrop-Griffiths and John Hunter. My talk is concerned with the infinite depth water wave equation in two space dimensions, with either gravity or surface tension. I will also make some remarks on the finite depth case, and on the infinite depth case in which constant vorticity and only gravity are assumed. We consider this problem expressed in position-velocity potential holomorphic coordinates. Viewing these problems as quasilinear dispersive equations, we develop new methods which will be used to prove enhanced lifespan of solutions and also global solutions for small and localized data. For the gravity water waves there are several results available; they have been recently obtained by Wu, Alazard-Burq-Zuily and Ionescu-Pusateri using different coordinates and methods. In the capillary water waves case, we were the first to establish a global result (two months later, Ionescu-Pusateri also announced a related result). Our goal is to improve the understanding of these problems by providing a single setting for all the above cases, and presenting simpler proofs. The talk will try to be self-contained.

Mimi Dai Regularity for the 3D Navier-Stokes Equations and Related Problems Abstract: As one of the most significant problems in the study of partial differential equations arising in fluid dynamics, Leray's conjecture from the 1930's regarding the appearance of singularities for the 3-dimensional (3D) Navier-Stokes equations (NSE) has been neither proved nor disproved. The problems of blow-up have been extensively studied for decades using different techniques. By using a method of wavenumber splitting which originated from Kolmogorov's theory of turbulence, we obtained a new regularity criterion for the 3D NSE. The new criterion improves the classical Prodi-Serrin, Beale-Kato-Majda criteria and their extensions. Related problems, such as the well/ill-posedness, will be discussed as well.
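For context (an editorial addition, not part of the abstract), the classical criteria referred to above are usually stated along the following lines: a Leray-Hopf weak solution $u$ of the 3D NSE is regular on $(0,T]$ provided

\[ u \in L^r(0,T;L^s(\mathbb{R}^3)) \quad \text{with } \frac{2}{r}+\frac{3}{s}=1,\ 3<s\le\infty \quad \text{(Prodi-Serrin)}, \]

or provided the vorticity $\omega=\nabla\times u$ satisfies

\[ \int_0^T \|\omega(t)\|_{L^\infty}\,dt<\infty \quad \text{(Beale-Kato-Majda)}. \]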
Dr. Eduardo Sontag Qualitative Features of Transient Responses A Case Study: Scale-invariance Abstract: A ubiquitous property of sensory systems is "adaptation": a step increase in stimulus triggers an initial change in a biochemical or physiological response, followed by a more gradual relaxation toward a basal, pre-stimulus level. Adaptation helps maintain essential variables within acceptable bounds and allows organisms to readjust themselves to an optimum and non-saturating sensitivity range when faced with a prolonged change in their environment. It has been recently observed that some adapting systems, ranging from bacterial chemotaxis pathways to signal transduction mechanisms in eukaryotes, enjoy a remarkable additional feature: scale invariance or "fold change detection", meaning that the initial, transient behavior remains approximately the same even when the background signal level is scaled. I will review the biological phenomenon, and formulate a theoretical framework leading to a general theorem characterizing scale invariant behavior by equivariant actions on sets of vector fields that satisfy appropriate Lie-algebraic non-degeneracy conditions. The theorem allows one to make experimentally testable predictions, and I will discuss the validation of these predictions using genetically engineered bacteria and microfluidic devices, as well as their use as a "dynamical phenotype" for model invalidation. I will conclude by briefly engaging in some wild and irresponsible speculation about the role of the shape of transient responses in immune system self/other recognition and in evaluating the initial effects of immunotherapy.

Hodge Theory and Moduli Wednesday, September 9, 2015, 5:00pm Abstract: This talk will give an overview of the connection between the above two topics in the first non-classical case of H-surfaces.

Professor Wayne Lawton Multivariate Prediction and Spectral Factorization Thursday, September 3, 2015, 5:00pm Abstract: Prediction theory studies stationary random functions on ordered groups. Time series are functions on the integer group and are characterized by classical harmonic analysis results such as Szego's spectral factorization theorem. Images are functions on higher rank groups. We use results about entire functions and ergodic theory to derive new results for these functions.

Dr. Dimitri Gurevich Valenciennes University, France From Quantum Groups to Noncommutative Geometry Tuesday, May 5, 2015, 4:00pm Abstract: Since the creation of the theory of Quantum Groups, numerous attempts to elaborate an appropriate differential calculus have been undertaken. Recently, a new type of Noncommutative Geometry has been obtained in this way. Namely, we have succeeded in introducing the notions of partial derivatives on the enveloping algebras U(gl(m)) and constructing the corresponding de Rham complexes. All objects arising in our approach are deformations of their classical counterparts. In my talk I plan to introduce some basic notions of the theory of Quantum Groups and to exhibit possible applications of this type of Noncommutative Geometry to the quantization of certain dynamical models.

Institute for Advanced Study, Princeton Hodge Theory: Some History and a Possible Future Direction Tuesday, April 21, 2015, 5:00pm Abstract: Hodge theory is a central part of algebraic geometry. The subject has a long and rich history and continues today to be at the forefront of much current work. This talk will discuss some of its history and one possible, less well known, area for future work.
In brief: where did the subject come from, and where might it be heading?

Professor Yuval Roichman Fine Sets and Schur Positivity Abstract: Characters of symmetric groups and Iwahori-Hecke algebras may be evaluated by signed (and slightly weighted) enumerations of various classes of permutations, such as conjugacy classes, Knuth classes and inverse descent classes. We propose an abstract framework for such classes, which we call "fine sets". It will be shown that fine sets can be characterized by Schur positivity of the associated quasi-symmetric functions. The proof involves asymmetric Walsh-Hadamard type matrices. Applications include the equivalence of classical theorems of Lusztig-Stanley and Foata-Schutzenberger. Time allowing, some very recent developments will be described -- in particular, a generalization to type B and applications to grid classes and derangements, related to results of Desarmenien-Wachs and others. Based on joint works with R. Adin, C. Athanasiadis and S. Elizalde.

Professor Alexei Kovalev 8-dimensional Manifolds with Holonomy Spin(7) Monday, April 6, 2015, 4:00pm Abstract: Spin(7) is one of the "exceptional" holonomy groups of Riemannian metrics appearing in Berger's classification. Riemannian manifolds with holonomy Spin(7) occur in dimension 8, are Ricci-flat and have parallel spinor fields. We construct examples of asymptotically cylindrical Riemannian 8-manifolds with holonomy group Spin(7). To our knowledge, these are the first such examples. This leads to a new, connected sum construction of compact holonomy-Spin(7) manifolds from asymptotically cylindrical pairs. We show in examples that the construction produces a "pulling-apart" deformation of compact Spin(7)-manifolds previously given by Joyce, as well as topologically new Spin(7)-manifolds.

Professor Xiuxiong Chen On the Kaehler Ricci Flow Abstract: There is a long-standing conjecture on the Kaehler Ricci flow on Fano manifolds that the Ricci flow converges sub-sequentially to a Kaehler Ricci soliton with at most codimension 4 singularities, with perhaps a different complex structure (the so-called "Hamilton-Tian conjecture"). In this lecture, we will outline a proof of this conjecture. This is a joint work with Bing Wang.

Professor Mario Milman Florida Atlantic University Uncertainty Inequalities in Metric Measure Spaces Abstract: I will show an extension of the classical uncertainty inequalities to connected metric measure spaces. The idea is to exploit the connection with isoperimetry. I will introduce a new class of weights (*isoperimetric weights*) and prove a new local Poincare inequality formulated in terms of the isoperimetric profile. A reinterpretation of the notion of *isoperimetric weights* in terms of function spaces provides a different method to attack the problem and also allows for explicit computation of these weights for log concave measures and other model geometries. I will try to explain all the terms in the abstract, provide examples and show connections with classical inequalities by Hardy and Strichartz.

Professor King-Yeung Lam On Global Dynamics of Competitive Systems in Ordered Banach Spaces Abstract: A well-known result in [Hsu-Smith-Waltman, Trans.
AMS (1996)] states that in a competitive semiflow defined on the product of two cones in respective Banach spaces, one of the following outcomes is possible for the two competitors: either there is at least one stable coexistence steady state, or else one of the exclusion states attracts all trajectories initiating in the order interval bounded by the two exclusion states. However, none of the exclusion states can be globally asymptotically stable if we broaden our scope to the entire positive cone. In this talk, we discuss two sufficient conditions that guarantee, in the absence of coexistence steady states, the global asymptotic stability of one of the exclusion states. Our results complement the counterexample mentioned in the above paper and are frequently applicable in practice.

Dr. Alexander Volberg Why the Oracle May Not Exist: Ergodic Families of Jacobi Matrices, Absolute Continuity without Almost Periodicity Thursday, March 5, 2015, 5:00pm Abstract: We will explain the recent solution of Kotani's problem pertinent to the existence/non-existence of an "oracle" (almost periodicity) for ergodic families of Jacobi matrices (discrete Schroedinger operators). Kotani suggested that such families are subject to the following implication: if the family has a non-trivial absolutely continuous spectrum (this happens almost surely), then almost surely it consists of almost periodic matrices (hence the possibility to predict the future by the past). Kotani proved an important positive result of this sort. Recently, Artur Avila and, independently, Peter Yuditskii and myself disproved this conjecture of Kotani (by two different approaches). We will show the hidden singularity that determines whether such a Kotani oracle exists or not.

S.R.S. Varadhan Frank J. Gould Professor of Science Courant Institute at New York University Large Deviations for Brownian Occupation Times, Revisited Abstract: Brownian motion on R^d is not positive recurrent and the Large Deviation estimate holds only in a weak form. To get a strong version a compactification is needed. In the application of interest we need a translation invariant compactification and that is carried out.

Dr. Lingjiong Zhu Self-Exciting Point Processes Abstract: Self-exciting point processes are simple point processes that have been widely used in neuroscience, sociology, finance and many other fields. In many contexts, self-exciting point processes can model complex systems in the real world better than standard Poisson processes. We will discuss the Hawkes process, the most studied self-exciting point process in the literature. We will talk about the limit theorems and asymptotics in different regimes. Extensions to Hawkes processes and applications to finance will also be discussed.

Dr. Karim Adiprasito Toric Chordality Monday, February 16, 2015, 5:00pm Abstract: We put the fundamental graph-theoretic notion of chordality into a proper context within the weight algebra of McMullen and framework rigidity. Moreover, we generalize some of the classical results of graph chordality to this context, including the fundamental relation to the Leray property. Our main focus is the relation of higher chordality to the Hard-Lefschetz Theorem for simplicial polytopes of Saito, McMullen and others.
Homological chordality allows us to state a powerful Quantitative Lower Bound Theorem which relates the "defect" to a chordal complex to the g-numbers of the same polytope, thereby providing an immediate combinatorial interpretation of the g-numbers and, more refinedly, the cohomology classes of the associated projective toric variety, in terms of induced subcomplexes of the simplicial complex. While most of our results follow quite easily once we established stress spaces as the right setting for the study of higher chordality, we provide a central propagation theorem for toric chordality. As an application, we resolve a variety of interesting questions in combinatorics of polytopes. Dr. Hamed Amini Swiss Federal Institute of Technology (EPFL), Lausanne Systemic Risk in Financial Networks Abstract: After the last financial crisis, the monitoring and managing of systemic risk has emerged as one of the most important concerns for regulators, governments and market participants. In this talk, we propose two complementary approaches to modeling systemic risk. In the first approach, we model the propagation of balance-sheet or cash-flow insolvency across financial institutions as a cascade process on a network representing their mutual exposures. We derive rigorous asymptotic results for the magnitude of contagion in a large financial network and give an analytical expression for the asymptotic fraction of defaults, in terms of network characteristics. We also introduce a criterion for the resilience of a large inhomogeneous financial network to initial shocks that can be used as a tool for monitoring systemic risk. Using an equilibrium approach, in the second part of the talk, we apply a coherent systemic risk measure to examine the effects on systemic risk and liquidation losses of multilateral clearing via a central clearing counterparty (CCP). We provide sufficient conditions in terms of the CCP's fee and guarantee fund policy for a reduction of systemic risk. Dr. Jin Hyuk Choi Taylor Approximation of Incomplete Radner Equilibrium Models Abstract: We will first explain the notion of Radner equilibria by using the one-period binomial model. We will see how the complete market assumption simplifies the mathematical structure. In the next part we will consider the setting of exponential investors and uncertainty governed by Brownian motions. Here we will show the existence of an equilibrium for a general class of incomplete models. Finally, we will show that the general incomplete equilibrium can be approximated by a tractable equilibrium stemming from exponential-quadratic models. Dr. Volker Schlue Non-existence of Time-periodic Dynamics in General Relativity Monday, February 2, 2015, 5:00pm Abstract: General Relativity presents us with a theory for the dynamics of space and time itself. In this lecture I will revisit the expectation that in contrast to Newton's theory of gravity there does not exist time-periodic motion. In particular, I will present a recent result that establishes purely on the basis of knowing the gravitational waves emitted from any self-gravitating system, that any time-periodic space-time is necessarily "stationary", that is time-independent. Moreover, I will elaborate on novel unique continuation results for hyperbolic partial differential equations that play an important role in the proof, and have been obtained jointly with Spyros Alexakis and Arick Shao. Dr. 
Alejandro Morales Matrices over Finite Fields, Polytopes, and Symmetric Functions Related to Rook Placements Abstract: Permutation matrices are fundamental objects in combinatorics and algebra. Such matrices can be viewed as placements of non-attacking rooks on a square board. If we restrict the support of the matrix we obtain rook placements that in computational complexity are an example of problems where finding a solution is easy but the enumeration is difficult. In this talk we look at various recent results where rook placements appear. The first instance is counting invertible matrices over a finite field of size q with support that avoids some entries. The number of such matrices is a q-analogue of rook placements but this number may not be a polynomial in q (Stembridge). We give polynomial formulas when the forced zeroes are in the diagonal and on the inversions of some permutations. We also relate the latter to the zoo of combinatorial objects in Postnikov's study of the positive Grassmannian. (Joint works with A. Klein, J. Lewis, R. Liu, G. Panova, S. V Sam, Y. Zhang.) The second instance is as vertices of the Chan-Robbins-Yuen polytope, a face of the Birkhoff polytope of doubly stochastic matrices. This polytope has a remarkable volume formula proved by Zeilberger. We discuss three variants of this polytope. (Joint works with K. Meszaros, B. Rhoades and J. Striker.) The third instance is as coefficients in relations among Stanley's chromatic symmetric function and Shareshian-Wachs chromatic quasisymmetric functions on a Catalan family of graphs. (Ongoing work with M. Guay-Paquet and F. Saliola.) Rigidity and Flexibility for the Euler Equations: The h-principle and the Method of Convex Integration Abstract: Certain problems in Differential Geometry have long been observed to feature a dichotomy "rigidity vs. flexibility". Rigidity refers to the fact that smooth solutions are unique, while flexibility means that the set of solutions with low regularity can be extremely large. With the groundbreaking work of De Lellis and Szekelyhidi on the incompressible Euler equations of fluid mechanics, it came as an enormous surprise that such a dichotomy may also hold for problems from Mathematical Physics. Flexibility results of interest to us fall under the name of h-principles and are established using the method of convex integration. In this talk I will give a bird's eye view of these notions and explain a motivating conjecture due to Onsager. I will conclude with future problems which deserve special research efforts. Dr. Angela Hicks Combining the Classical and the Rational: Re-expressing Rational Shuffle Conjecture Friday, January 23, 2015, 5:00pm Abstract: Recent results have place the classical shuffle conjecture in the context of an infinite family of conjectures about parking functions in any rectangular lattice. The classical case describes three traditional statistics which have varying degrees of complication: area, dinv, and pmaj. Combining the work of many authors in several fields gives two of these statistics in the more general case. After introducing the conjectures, this talk will cover how these statistics can be expressed in a manner consistent with the current classical literature and why this is important. Additionally, this talk will include a conjectured generalization of the third statistic, pmaj, in a number of new cases and time permitting, explain its connection to the sandpile model. Dr. 
Nicolas Addington Recent Developments in Rationality of Cubic 4-folds Abstract: The question of which cubic 4-folds are rational is one of the foremost open problems in algebraic geometry. I'll start by explaining what this means and why it's interesting; then I'll discuss three approaches to solving it (including one developed in the last year), my own work relating the three approaches to one another, and the troubles that have befallen each approach.

Dr. Kristin Shaw Technical University of Berlin Tropical Curves and Surfaces Abstract: Tropical geometry is a relatively new field of mathematics which provides us with piecewise linear models for classical geometry. Often these models encode properties of the original objects in a more combinatorial form. The first major application of the field to classical geometry came by way of Mikhalkin's correspondence theorem between planar tropical and complex curves. This correspondence has served to answer questions in enumerative geometry; how many curves of a given degree and genus satisfy an appropriate set of constraints? Various other applications have been to moduli problems, linear series on curves, intersection theory, and computations of Hodge numbers. Tropical geometry has also been suggested as an explanation for mirror symmetry. Abstract tropical varieties in higher dimensions have not been studied to the extent that curves have. Applications of tropical geometry to classical geometry often rely on establishing correspondences between the two worlds. In this talk we will see that upon leaving the plane for other surfaces or varieties of higher dimension these correspondences are harder to establish and sometimes do not exist. We will explain how intersection theory and tropical homology can be used to study curves and also tropical Picard groups in surfaces and show how operations known as tropical modification and summation can construct and help study interesting examples of tropical surfaces.

Dr. Morgan Brown Rationality of Algebraic Varieties Abstract: An algebraic variety is called rational if it has a parametrization by rational functions which is one-to-one almost everywhere. Determining whether a variety is rational or not is an old and very difficult problem, even when that variety is a smooth hypersurface in projective space. I will give a brief history of this problem, and explain some of the modern techniques for showing that varieties are irrational, including how certain aspects of the derived category might reflect rationality.

Dr. Bruno Benedetti Diameter of Polytope Graphs Abstract: "Discrete" geometry focuses on geometric objects that have facets, corners, vertices, rather than on objects with a uniform smooth structure. A standard result is Balinski's theorem, "the graph of every convex d-dimensional polytope is d-connected". But given a d-polytope with n facets, how many edges do we have to walk along (at most), if we want to go from one vertex to another? This "combinatorial diameter" question has relevance in optimization, being related to the simplex algorithm. Hirsch conjectured in 1957 that the answer should be "at most n-d", but the conjecture was disproved by Santos in 2010. I will explain two recent positive results: (1) The Hirsch conjecture holds for all flag polytopes. The proof uses methods from differential geometry. (Joint work with K. Adiprasito.) (2) The notion of dual graph naturally extends to projective curves, or to arbitrary algebraic varieties.
Surprisingly, a broader version of Balinski's theorem still holds. (Joint work with M. Varbaro, and work in progress with B. Bolognese.)

Dr. Xing Liang Chinese University of Science and Technology Spreading Speeds of Monostable Parabolic Equations in Periodic Habitat with Free Boundary Abstract: In this talk, we will show the existence of the spreading speeds of monostable parabolic equations in spatially and temporally periodic habitats. We mainly focus on the case where the equation is not linearly determinate.

Dr. James Keesling An Agent Based Microsimulation Model for the Spread of Citrus Greening Abstract: The citrus industry is a pillar of Florida's economy. It is hard to imagine that it may disappear in short order. However, due to the devastating effects of Citrus Greening, this may very well be the case. Citrus Greening (or Huang Long Bing) is a disease of citrus caused by a bacterium (Candidatus Liberibacter asiaticus). It is spread by a psyllid, Diaphorina citri. In this talk we will describe a simulation model for the spread of this disease. The model is based on a detailed analysis of the biology of the psyllid vector. Current work expanding and applying the model is being funded by the Citrus Research and Development Foundation as part of the Psyllid Shield Project.

Professor John Stillwell What Does "Depth" Mean in Math? History, Foundations, and Logic Witten Learning Center 160 Abstract: Every mathematician believes that certain theorems are "deep," but the concept of depth does not have a formal definition. By looking at some famous theorems, ancient and modern, we will study some candidates for "depth" at various levels, particularly the undergraduate level. With these examples in hand we hope to discuss whether any concepts of logic now available can give "depth" a precise meaning.

Anatoly Libgober Landau-Ginzburg/Calabi-Yau and McKay Correspondences for Elliptic Genus Monday, October 27, 2014, 5:00pm Abstract: I will discuss the elliptic genus of singular varieties and its extension to Witten's phases of N=2 theories. In particular, the McKay correspondence for the elliptic genus will be described. As one of the applications, I will show how to derive relations between elliptic genera of Calabi-Yau manifolds and related Witten phases using the equivariant McKay correspondence for the elliptic genus.

Dr. Vyacheslav Shokurov Weak Kawamata Conjecture Wednesday, October 15, 2014, 5:00pm Abstract: The talk will explain the finiteness of weak log canonical models related by bounded flops.

Dr. Kimihiko Motegi Nihon University, Tokyo, Japan Twisted Families of L-space Knots Wednesday, September 10, 2014, 5:00pm Abstract: A knot in the 3-sphere is called an L-space knot if it admits a nontrivial Dehn surgery yielding an L-space, which is a generalization of a lens space from the algebraic viewpoint of Heegaard Floer homology. Given an L-space knot K, can we obtain an infinite family of L-space knots by twistings of K along a suitably chosen unknotted circle? We will discuss this question in the case where K admits a Seifert surgery, and give a sufficient condition on such an unknotted circle. If K is a torus knot, then we have an unknotted circle c such that twistings along c produce an infinite family of hyperbolic, L-space knots. In particular, for the trivial knot we can take infinitely many such unknotted circles.
We also demonstrate that there are infinitely many hyperbolic, L-space knots with tunnel number greater than one, each of which arises from a trefoil knot by alternate twistings along two unknotted circles.

Dr. Ryan Hynd Option Pricing in the Large Risk-aversion, Small-transaction-cost Limit Abstract: We discuss an alternative to the well-known Black-Scholes option pricing model and characterize the asking price of a risk-averse seller, in the limit of small transaction costs. The resulting mathematical problem is one of asymptotic analysis of partial differential equations and the real challenge involves a particular eigenvalue problem.

Professor Andrea Nahmod University of Massachusetts, Amherst Randomization and Long Time Dynamics in Nonlinear Evolution PDE Abstract: The field of nonlinear dispersive equations has undergone significant progress in the last twenty years thanks to the influx of tools and ideas from nonlinear Fourier and harmonic analysis, geometry and analytic number theory, into the existing functional analytic methods. This body of work has primarily focused on deterministic aspects of wave phenomena and answered fundamental questions such as existence and long time behavior of solutions, in various regimes. Yet there remain some important obstacles and open questions. A natural approach to tackle some of them, and one which has recently seen a growing interest, is to consider certain evolution equations from a non-deterministic point of view (e.g. the random data Cauchy problem, invariant measures, etc.) and to incorporate into the deterministic toolbox powerful but still classical tools from probability as well. Such an approach goes back to seminal work by Bourgain in the mid 90's where global well-posedness of certain periodic Hamiltonian PDEs was studied in the almost sure sense via the existence and invariance of their associated Gibbs measures. In this talk we will explain these ideas, describe some recent work and future directions with an emphasis on the interplay of deterministic and probabilistic approaches.

Professor Ian Hambleton Manifolds and Symmetry Abstract: This will be a survey talk about connections between the topology of a manifold and its group of symmetries. I will illustrate this theme by discussing finite group actions on compact manifolds, such as spheres or products of spheres, and infinite discrete groups acting properly discontinuously on non-compact manifolds, such as products of spheres and Euclidean spaces.

Dr. Andras Nemethi Hungarian Academy of Sciences Lattice Cohomology and the Geometric Genus of Surface Singularities Abstract: The link of a complex normal surface singularity is an oriented 3-manifold. We will review the definition of the lattice cohomology associated with such a plumbed 3-manifold. It makes a connection between low-dimensional topology and singularity theory. E.g., its Euler characteristic is the Seiberg-Witten invariant of the link, while in 'nice' cases it also provides the geometric genus of the analytic structure. We will focus on such connections and interplays, together with a historical overview of the topological characterizations of the geometric genus.

On Homomorphisms between Multiplicative Groups of Fields Wednesday, March 19, 2014, 5:00pm Abstract: I am going to describe a proof of the theorem which shows that any such homomorphism which respects algebraic dependence between elements is related to a nonarchimedean valuation of the initial field.
The only additional condition is that the homomorphism has at least two algebraically independent elements in the image and a nontrivial kernel (not equal to a subfield). This result can be considered as a rational version of the so-called Grothendieck section conjecture for function fields of transcendence degree greater than or equal to two.

Dr. John Shareshian Coset Posets and Probabilistic Zeta Functions of Finite Groups, and a Problem on Binomial Coefficients Monday, March 17, 2014, 5:00pm Abstract: Philip Hall introduced the general Möbius inversion formula in order to enumerate the set of k-tuples (g_1,...,g_k) from a finite group G that contain a generating set for G. One can substitute an arbitrary complex number, rather than just a positive integer k, into the formula obtained by Hall, and the resulting function is called the probabilistic zeta function for G. Serge Bouc observed that evaluation of the probabilistic zeta function at -1 yields the reduced Euler characteristic of a topological space naturally associated to G, namely, the order complex of the poset of all cosets of all proper subgroups of G. Ken Brown investigated this complex and was led to ask whether it is ever contractible. In joint work with Russ Woodroofe, we show that this complex has nontrivial homology, and is therefore not contractible, whenever G has no alternating group as a composition factor. Our efforts to understand alternating composition factors led to a fascinating (at least to me) elementary problem about prime divisors of binomial coefficients.

Dr. Priyanga Amarasekare Temperature Effects on Population Dynamics and Species Interactions: A Trait-based Perspective Abstract: Populations and communities are complex systems whose properties result from the interplay between non-linear feedbacks that are intrinsic to the system (e.g., biotic interactions that lead to density- and frequency-dependence) and external inputs (e.g., abiotic factors) that are outside the feedback structure of the system. Understanding this interplay requires that we understand the mechanisms by which the effects of external inputs on lower levels of the system (e.g., traits of organisms) influence properties at higher levels (e.g., population viability, species diversity). Using temperature as the axis of abiotic variation, I develop a mechanistic theoretical framework for elucidating how abiotic effects on traits influence population dynamics and species interactions, and how these ecological dynamics in turn feed back into the trait response, causing trait evolution. I test model predictions with data on insects. The integration of theory and data paves the way for making testable predictions about the effects of climate warming on population viability, biodiversity and the control of invasive species.

Dr. Gustavo Ponce On Unique Continuation Properties of Solutions to Some Dispersive Equations Abstract: We shall discuss results concerning unique continuation properties of solutions to some canonical dispersive equations and their relation with decay and persistence properties of the corresponding solution flows. These canonical dispersive models include the generalized Korteweg-de Vries equation, the nonlinear Schrödinger equation and the Benjamin-Ono equation.
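For reference (an editorial addition, not taken from the abstract), these canonical models are usually written, up to choices of sign and normalization, as

\[ \partial_t u + \partial_x^3 u + u^k \partial_x u = 0 \quad \text{(generalized Korteweg-de Vries)}, \]
\[ i\,\partial_t u + \Delta u \pm |u|^{p-1}u = 0 \quad \text{(nonlinear Schr\"odinger)}, \]
\[ \partial_t u + \mathcal{H}\,\partial_x^2 u + u\,\partial_x u = 0 \quad \text{(Benjamin-Ono)}, \]

where $\mathcal{H}$ denotes the Hilbert transform.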
Dr. Emil Wiedemann PIMS Postdoctoral Fellow Convex Integration for Nonlinear Partial Differential Equations Abstract: The so-called method of convex integration has been recognized as a powerful tool in various fields of mathematics, including geometry and, more recently, partial differential equations and the calculus of variations. I will discuss the method and two of its recent applications: On the one hand, starting with the recent work of C. De Lellis and L. Szekelyhidi, convex integration methods have been used to construct solutions of equations from fluid dynamics with surprising properties. On the other hand, together with K. Koumatos and F. Rindler, we were able to construct unexpected solutions to first-order equations involving the Jacobian determinant, motivated by questions arising in nonlinear elasticity theory.

Dusa McDuff Helen Lyttle Kimmel '42 Professor of Mathematics Barnard College, Columbia University Symplectic Embeddings in Dimensions 4 and Above Abstract: Symplectic geometry is a fascinating mix of flexibility and rigidity. One way in which this is shown is in the properties of symplectic embeddings. After explaining what a symplectic structure is, I will describe some recent (and not so recent) results about this question.

Dr. Ting Zhou C.L.E. Moore Instructor On Transformation-Optics Based Invisibility Monday, January 27, 2014, 5:00pm Abstract: In this talk, I shall discuss the transformation optics based design of electromagnetic invisible cloaks from the inverse problems point of view. In order to avoid the difficulty posed by the singular structure required for ideal cloaking, we study the regularized approximate cloaking in prototypical models, for the time harmonic Maxwell's equations in R^3 and the scalar Helmholtz equations in R^2. In particular, as the regularization parameter converges to zero, i.e., as the approximate cloaking converges to the ideal one, we will see that different types of boundary conditions appear at the interior of the cloaking interface. Some of them are of non-local pseudo-differential type.

Mihaela Ignatova On Well-posedness for Free Boundary Fluid-structure Interaction Models Abstract: We address a fluid-structure interaction model describing the motion of an elastic body immersed in a viscous incompressible fluid. The model consists of the Navier-Stokes equations (for the fluid) and a linear system of elasticity (for the elastic solid) coupled on the free moving interface via natural dynamic and transmission boundary conditions. We first derive a priori estimates for the local existence of solutions for a class of initial data which also guarantees uniqueness, which leads to the local well-posedness of the model with less regular initial data than previously known. Moreover, under additional interior damping and stabilization terms on the free interface, we prove the global existence and exponential decay of solutions provided the initial data is sufficiently small. Collaborators: I. Kukavica, I. Lasiecka, and A. Tuffaha

Gregory Pearlstein Singular Metrics and the Hodge Conjecture Abstract: A Hodge class on a smooth complex projective variety gives rise to an associated hermitian line bundle on a Zariski open subset of a complex projective space P^n. I will discuss recent work with P. Brosnan which shows that the Hodge conjecture is equivalent to the existence of a particular kind of degenerate behavior of this metric near the boundary.

Professor M.
Teicher Computational Aspects of the Braid Group and Applications Abstract: In the talk we will present the 3 most difficult problems in the braid group – "The Word Problem", "The Conjugacy Problem", "The Hurwitz Equivalence Problem" – partial solutions, and applications to cryptography. Dr. Vedran Sohinger On Some Problems Concerning the Nonlinear Schrodinger Equation Abstract: In this talk, we will summarize several recent results concerning the nonlinear Schrodinger equation (NLS). The first part of the talk is dedicated to the study of the low-to-high frequency cascade which occurs as a result of the NLS evolution. In particular, one wants to look at how the frequency support of a solution evolves from the low to the high frequencies. This phenomenon can be quantitatively described as the growth in time of the high Sobolev norms of the solution. We present a method to bound this growth using the idea of an almost conservation law, which was previously used in the low regularity context in the work of Bourgain and Colliander, Keel, Staffilani, Takaoka, and Tao. In the second part of the talk, we will study the Gross-Pitaevskii hierarchy. This is an infinite system of linear partial differential equations which occurs in the derivation of the nonlinear Schrodinger equation from the dynamics of N-body Bose systems. We will study this hierarchy on the three-dimensional torus. We will show a conditional uniqueness result for the hierarchy in a class of density matrices of regularity strictly greater than 1. Our result builds on the previous study of this problem on R^3 by Erdos, Schlein, and Yau, as well as by Klainerman and Machedon and on the study of this problem on T^2 by Kirkpatrick, Schlein, and Staffilani. Finally, we will apply randomization techniques in order to study randomized forms of the Gross-Pitaevskii hierarchy at low regularities, as was done in the setting of nonlinear dispersive equations starting with the work of Bourgain. The second part of the talk is based on joint work with Philip Gressman and Gigliola Staffilani. Dr. King-Yeung Lam Evolution of Dispersal: A Reaction-Diffusion Approach Abstract: We consider a reaction-diffusion model of two competing species for the evolution of conditional dispersal in a spatially varying but temporally constant environment. Two species are different only in their dispersal strategies, which are a combination of random dispersal and biased movement upward along the resource gradient. In the absence of biased movement or advection, A. Hastings showed that dispersal is selected against in spatially varying environments. When there is a small amount of biased movement or advection, we show the existence of a positive Evolutionarily Stable Strategy in diffusion rates, which is a form of Nash Equilibrium of the underlying population game.Our analysis of the model suggests that a balanced combination of random and biased movement might be a better habitat selection strategy for populations. Dr. Dimitri R. Yafaev University of Rennes, France Convolutions and Hankel Operators Abstract: We compare two classes of integral operators: convolutions in L2(R) and Hankel operators in L2(R+). Convolutions have integral kernels b(x - y); they can be standardly diagonalized by the Fourier transform. Hankel operators H can be realized as integral operators with kernels h(t+s). They do not admit an explicit diagonalization. Nevertheless we show that they can be quasi-diagonalized as H = L*ΣL. 
Here L is the Laplace transform, Σ is the operator of multiplication by a function σ(λ), λ > 0. We find a scale of spaces of test functions where L acts as an isomorphism. Then L* acts as an isomorphism in the corresponding spaces of distributions. We show that h = L*σ, which yields a one-to-one correspondence between kernels and sigma-functions of Hankel operators. The sigma-function σ(λ) of a self-adjoint Hankel operator H contains substantial information about its spectral properties. Thus we show that the operators H and Σ have the same numbers of positive and negative eigenvalues. In particular, we find necessary and sufficient conditions for sign-definiteness of Hankel operators. These results are illustrated by examples of quasi-Carleman operators, which generalize the classical Carleman operator with kernel h(t) = t^{-1} in various directions.

Dr. Erik Lundberg Statistics on Hilbert's Sixteenth Problem Tuesday, December 3, 2013, 5:00pm Abstract: We start with a question motivated by the fundamental theorem of algebra: How many zeros of a random polynomial are real? We discuss three Gaussian ensembles that lead to three different answers. For a polynomial in several variables, the real section of its zero set is much more complicated. Hilbert's sixteenth problem asks one to study the possible arrangements of the connected components, and is especially concerned with the case of many components. I will describe a probabilistic approach to studying the topology, volume, and arrangement of the zero set (in real projective space) for a Gaussian ensemble of homogeneous polynomials. We will emphasize a model for random polynomials that is built from a basis of spherical harmonics (eigenfunctions of the spherical Laplacian). This is joint work with Antonio Lerario.

Dr. Paul Kirk Indiana University, Bloomington Looking at Knots and 3-manifolds from the Perspective of the Space of Representations of Their Fundamental Group Friday, November 8, 2013, 4:00pm Abstract: A fruitful way to study 3-dimensional manifolds, such as the complement of a knot in a 3-sphere, is to study the topological space (algebraic variety) of conjugacy classes of representations of its fundamental group to a Lie group such as SU(2) or SL(2,C). I will describe part of the history and success of this approach, with a focus on illustrating low dimensional examples.

Dr. Gregory Pearlstein Normal Functions and the Hodge Conjecture Tuesday, September 24, 2013, 5:00pm Abstract: The theory of normal functions and the Hodge conjecture have their origin in the study of algebraic cycles by Lefschetz and Poincare. I will sketch the history of the subject and discuss some of my recent work relating singularities of normal functions to the Hodge conjecture and the zero locus of a normal function to a conjectural filtration of Bloch and Beilinson.

Dr. Shuangjie Peng Huazhong Normal University, Wuhan, China On Schrodinger Systems with Nonlinear or Linear Coupling Tuesday, August 20, 2013, 11:00am Abstract: We will talk about how to get infinitely many positive non-radial vector solutions which are synchronized or segregated for a Schrodinger system with nonlinear coupling and radially symmetric potentials. We also discuss the multiplicity of vector solutions which are segregated for a Schrodinger system with linear coupling.
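As a schematic illustration (an editorial addition; the precise systems considered in the talk may differ), a prototypical cubic Schrodinger system with nonlinear coupling studied in this literature has the form

\[ -\Delta u + P(|x|)\,u = \mu_1 u^3 + \beta u v^2, \qquad -\Delta v + Q(|x|)\,v = \mu_2 v^3 + \beta u^2 v \quad \text{in } \mathbb{R}^3, \]

while linear coupling replaces the interaction terms $\beta u v^2$, $\beta u^2 v$ by $\lambda v$ and $\lambda u$. Synchronized vector solutions have components concentrating at the same locations, whereas segregated ones have components concentrating in disjoint regions.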
Dr. Yinbin Deng Huazhong Normal University, Wuhan, China On the Positive Radial Solutions of a Class of Singular Semilinear Elliptic Equations Monday, August 19, 2013, 4:00pm Abstract: In this talk, we are concerned with the following elliptic equation: div(A(|x|)∇u) + B(|x|)u^p = 0 in R^n, (0.1) where p > 1, n ≥ 3, A(|x|) > 0 is differentiable in R^n \ {0} and B(|x|) is a given nonnegative Hölder continuous function in R^n \ {0}. The asymptotic behavior at infinity and the structure of the separation property of positive radial solutions of (0.1) with different initial data are discussed. Moreover, the existence and separation property of infinitely many positive solutions for the Hardy equation and an equation related to the Caffarelli-Kohn-Nirenberg inequality are obtained, respectively, as special cases.

Professor Erwan Rousseau LATP CMI Aix-Marseille University Complex Hyperbolicity, Differential Equations and Automorphic Forms Abstract: I will explain how the study of entire curves in complex projective manifolds is related to differential equations and automorphic forms.

Dr. Mu-Tao Wang A Minkowski Inequality and a Penrose Inequality Friday, April 5, 2013, 5:00pm Abstract: The classical inequality of Minkowski relates the total mean curvature of a convex surface to the area of the surface. I shall discuss a newly discovered Minkowski type inequality which can be interpreted as the Penrose inequality for collapsing shells in general relativity. This is joint work with Simon Brendle and Pei-Ken Hung.

Persi Diaconis Mary V. Sunseri Professor of Statistics and Mathematics Shuffling Cards, Breaking Rocks and Hopf Algebras NOTE: Professor Diaconis will also be delivering the McKnight-Zame Distinguished Lecture on Wednesday, March 27. Abstract: Hopf algebras are combinatorial objects introduced by topologists to study the topology of classical groups. In joint work with Amy Pang and Arun Ram, we show that the Hopf square map (coproduct followed by product) often has a simple probabilistic interpretation. It allows us to explicitly diagonalize well-known Markov chains: the classical model of riffle shuffling cards and a rock breaking model of Kolmogorov among others. I will try to present all of this in mathematical English, explaining all the words above.

Professor of Statistics Mathematical and Statistical Approaches to Heterogeneous Data: Challenges from the Human Microbiome Abstract: Through new sequencing technologies, we can make a census of the bacteria living in the human gut. We also have phylogenetic information about the taxa present and clinical information about the subjects. Distance based methods allow us to create useful representations integrating all this data and make useful visualizations that allow us to discover bacterial markers for disease and follow the dynamics of the bacterial communities. This is joint work with David Relman and his Stanford Lab.

Dr. Richard Stanley Norman Levinson Professor of Applied Mathematics Polynomial Sequences of Binomial Type

Mihalis Dafermos Professor of Mathematical Physics A Scattering Theory Construction of Dynamical Black Holes

Dr. Joshua Greene Dehn Surgery and Floer Homology Abstract: Dehn surgery is a natural operation occurring in low-dimensional topology. It gives a method to create 3-manifolds out of knots and links, and there are many very attractive results and problems about it. One central problem is the Berge conjecture, which predicts when Dehn surgery along a knot in the three-sphere can produce a lens space.
Floer homology has played a prominent role in low-dimensional topology during the past twenty-five years. It developed out of gauge theory and symplectic geometry, and one of its versions associates invariants to 3-manifolds and other objects in low-dimensional topology. I will discuss the central role that Floer homology has played in the study of Dehn surgery, and in particular what it tells us about the Berge conjecture. Dr. Eugene Gorsky Compactified Jacobians, q,t-Catalan Numbers and Knot Invariants Abstract: Campillo, Delgado and Gusein-Zade proved that the semigroup of a plane curve singularity encodes the information about the Alexander polynomial of its link. Oblomkov and Shende conjectured an extension of their result to the HOMFLY polynomial, which uses the Hilbert schemes and compactified Jacobians of a singular curve. I will explain the combinatorics of their construction in the simplest example of torus knots, and relate it to the generalization of the q,t-Catalan numbers due to Garsia and Haiman. The talk is based on joint work with M. Mazin. Dr. Fernando Schwartz Geometric Inequalities for Hypersurfaces Thursday, October 11, 2012, 5:00pm Abstract: We revisit some classic estimates for the capacity as well as a version of the Alexandrov-Fenchel inequality for hypersurfaces of Euclidean space. We provide new, more general proofs of these inequalities, and include some rigidity statements. The results are joint work with Alexandre Freire. Professor Lars Andersson Albert Einstein Institute Max Planck Institute for Gravitational Physics Cosmological Models and Stability Abstract: In this talk I will discuss some mathematical results on inhomogeneous cosmological models, focusing on late-time behavior and the issues of nonlinear stability versus instability. Professor Fernando Coda Marques IMPA, Brazil Min-max Minimal Surfaces and the Willmore Conjecture Abstract: In 1965, T. J. Willmore conjectured that the integral of the square of the mean curvature of a torus immersed in Euclidean three-space is at least 2π². In this talk we will discuss a proof of this conjecture that uses the min-max theory of minimal surfaces. This is joint work with Andre Neves of Imperial College (UK). Professor Ernesto Lupercio Center for Research and Advanced Studies of the National Polytechnic Institute (Cinvestav-IPN) Virtual Orbifold Cohomology Abstract: In this talk, of a mostly expository nature, I will explain the orbifolding procedure for topological field theories and a new family of examples. I will first introduce the concepts of topological field theory and orbifold. This is joint work with Gonzalez, Segovia, and Uribe. Dr. Sebastian Schreiber An SDE Perspective on the Ecology and Evolution of Movement Abstract: All populations, whether they be plants, animals, or viruses, live in spatially and temporally variable environments. Understanding how this variability influences population persistence and the evolution of movement is a fundamental issue of practical and theoretical importance in population biology. Prior work (including important contributions due to Chris Cosner and Steve Cantrell) has shown that spatial variability, in and of itself, enhances persistence and selects against random movement as well as movement into sink habitats (places unable to harbor a self-sustaining population). Alternatively, temporal variability, in and of itself, inhibits persistence and exerts no selective pressures on movement. 
The combined effects, however, of spatial and temporal variability are remarkably complex. This combined variability can select for movement into sink habitats and allow for populations to persist in landscapes comprised solely of sink habitats. In this talk, I will discuss recent analytic results in which populations living in patchy environments are modeled using stochastic differential equations (SDEs). These results provide a diversity of new insights into population persistence and the evolution of movement. Part of this work was done in collaboration with Steve Evans (Berkeley), Peter Ralph (Davis), and Arnab Sen (Cambridge). Dr. Ryan Derby-Talbot Are Complicated 3-manifolds Complicated? Abstract: In this talk we will discuss several ways that one can construct 3-manifolds (shapes like our spatial universe), and various ways that these constructions can be made complicated. Surprisingly, a 3-manifold made complicated by one construction may actually be simple in another construction, and necessarily so. We will explore different ways that complicating 3-manifolds in some ways simplifies them in others. Lots of fun pictures are promised. Hidden Symmetries and Conserved Charges Abstract: Test fields with non-zero spin, eg. Maxwell and linearized gravity provide an important model problem for black hole stability. Fields with non-zero spin admit non-radiating modes which must be eliminated in order to prove decay. In this talk I will discuss the relation between conserved charges and hidden symmetries for linearized gravity on Minkowski space and vacuum spaces of Petrov type D and outline the application of these ideas in proving estimates for the higher spin fields on the Kerr background. Dr. Andras Stipsicz Renyi Institute of Mathematics (Budapest) Institute for Advanced Study (Princeton) Computations of Heegaard Floer Homologies Abstract: Heegaard Floer homology groups were recently introduced by Ozsvath and Szabo to study properties of 3-manifolds and knots in them. The definition of the invariants rests on delicate holomorphic geometry, making the actual computations cumbersome. In the lecture we will recall the basic definitions and theorems of the theory, and show how to define the simplest version in a purely combinatorial manner. For a special class of 3-manifolds the more general version will be presented by simple combinatorial ideas through lattice homology of Nemethi. Professor Ross Pinsky Technion – Israel Institute of Technology Probabilistic and Combinatorial Aspects of the Card-Cyclic to Random Insertion Shuffle Abstract: Consider a permutation σj ∈ Sn as a deck of cards numbered from 1 to n and laid out in a row, where σj denotes the number of the card that is in the j-th position from the left. We study some probabilistic and combinatorial aspects of the shuffle on Sn defined by removing and then randomly reinserting each of the n cards once, with the removal and reinsertion being performed according to the original left to right order of the cards. The novelty here in this nonstandard shuffle is that every card is removed and reinserted exactly once. The bias that remains turns out to be quite strong and possesses some surprising features. Dr. Carmen Coll Instituto de Matematica Multidisciplinar ITME Visiting Scholar Identifiability of Parameters for Structured Systems Abstract: Mathematical models have been successfully developed to study real processes. 
Usually, the situation is as follows: the inputs, u, can be controlled and the outputs, y, can be observed, but the description of what happens inside the system is unknown. This situation is called the input-output behavior of the system. In general, the state-space representation of these processes involves coefficient matrices, and the equations of the model have unknown parameters that can be determined from experimental data. A good formulation of the model is essential because it allows us to predict or control the behavior of the real process. Moreover, it can be used to estimate quantities that cannot be measured directly from observations. In this case, it is useful to obtain these parameters in order to make the model accurate. The problem of identifying the unknown parameters within the model uniquely from the experiment considered is called the identifiability problem. That is, a system is identifiable if the relationship between the set of possible parameter values and the set of possible input-output behaviors is one-to-one. On the other hand, many of these models have a fixed structure, and we want to solve the identifiability problem when this structure is imposed. The structural identifiability analysis of a system is a preliminary analysis of the model structure oriented toward parameter and model identification. The problem of the structural identifiability of the model consists of the determination of all parameter sets which give the same input-output structure. In this talk, parametric systems with different matrix structures are considered. The structural properties of the model are studied, and some conditions to ensure structural identifiability are given. These results guarantee the existence of only one solution for the parameters of the system. In practice, systems with one of these structures arise, for example, via discretization or finite difference methods for solving boundary and initial value problems involving ordinary or partial differential equations. Dr. Huaiping Zhu Forecasting Mosquito Abundance and West Nile Virus Risk Using Weather and Environmental Conditions Abstract: It has been observed over the last few decades that climate change has a great impact on the emergence and reemergence of vector-borne diseases, yet it must be admitted that the actual impact of climate change on vector populations and disease transmission is still far from clear. In this talk, I will present a modeling study of the West Nile virus in the Peel region of Ontario, Canada. Using surveillance data, weather data and land use information, we develop both statistical and dynamical models incorporating weather conditions and land use information for vector mosquito abundance and risk assessment of West Nile virus. I will discuss the statistical properties of the dynamical models and present a collaborative effort with the Peel region and the Public Health Agency of Canada in developing tools for forecasting mosquito abundance and virus risk. Dr. Lev Ginzburg Department of Ecology and Evolution Life is 4D: Allometric Slopes Are Understandable when Viewed in 4D Professor Roger Arditi Université Pierre et Marie Curie, Paris How Species Interact Abstract: Understanding the functioning of ecosystems requires understanding the interactions between consumer species and their resources. How do these interactions affect the variations of population abundances? How do population abundances determine the impact of predators on their prey? 
The authors defend the view that the "null model" that most ecologists tend to use (derived from the Lotka-Volterra equations) is inappropriate because it assumes that the amount of prey consumed by each predator is insensitive to the number of conspecifics. The authors argue that the amount of prey available per predator (rather than the absolute abundance of prey) is the basic determinant of the dynamics of predation. This so-called ratio dependence is shown to be a much more reasonable "null model". Lessons can be drawn from a similar debate that took place in microbiology in the 1950's. Currently, populations of bacteria are known to follow the analogue of ratio dependence when growing in real-life conditions. Three kinds of arguments are developed. First, it is shown that available direct measurements of prey consumption are "in the middle" but most are close to ratio dependence and all are clearly away from the usual Lotka-Volterra relationship; an example is the system of wolves and moose on Isle Royale. Second, indirect evidence is based on the responses of food chains to nutrient enrichment: all empirical observations at the community level agree very well with the ratio-dependent view. Third, mechanistic approaches explain how ratio dependence emerges at the global scale, even when assuming Lotka-Volterra interactions at the local scale; this is illustrated by microcosm experiments, by individual-based models and by mathematical models. Changing the fundamental paradigm of the predator-prey interaction has far-reaching consequences, ranging from the logical consistency of theoretical ecology to practical questions of eco-manipulation, biological control, conservation ecology. This work is in collaboration with Lev Ginzburg. Professor Larry Shepp Patrick T. Harker Professor Wharton School, University of Pennsylvania Board of Governor's Professor Is Mathematical Modeling Able to Give Insight into Current Questions in Finance, Economics, and Politics? Abstract: Part I. I argue that rigorous mathematics gives insight into the current question of whether taxation helps or hinders employment. Part II. I compute (within one simple model) how much money future knowledge, obtained either via insider information or via high-frequency trading, of future stock prices brings to a possessor of such knowledge. Professor Sergiu Klainerman Higgins Professor of Mathematics On the Bounded L2 Curvature Conjecture as a Breakdown Criteria for the Einstein Vacuum Equations Abstract: I will talk about my recent work with Rodnianski and Szeftel concerning a solution of the conjecture. I will also compare the result with the other known breakdown criteria in GR. Dr. Michael Eichmair Isoperimetric Structure of Initial Data Sets Friday, December 16, 2011, 4:00pm Abstract: I will present joint work with Jan Metzger. A basic question in mathematical relativity is how geometric properties of an asymptotically flat manifold (or initial data set) encode information about the physical properties of the space time that it is embedded in. For example, the square root of the area of the outermost minimal surface of an initial data with non-negative scalar curvature provides a lower bound for the "mass" of its associated space time, as was conjectured by Penrose and proven by Bray and Huisken-Ilmanen. Other special surfaces that have been studied in this context include stable constant mean curvature surfaces and isoperimetric surfaces. 
I will explain why positive mass has the effect that large stable constant mean curvature surfaces are always isoperimetric. This answers an old conjecture of Bray's and complements the results by Huisken-Yau and Qing-Tian on the "global uniqueness problem for stable CMC surfaces" in initial data sets with positive scalar curvature. Time permitting, I will sketch applications related to G. Huisken's isoperimetric mass and very recent related results with S. Brendle on further isoperimetric features of the exact spatial Schwarzschild metric. Dr. Martin Bootsma Utrecht University, The Netherlands The Spreading Capacity of Methicillin-resistant Staphylococcus aureus (MRSA) Abstract: In my talk I will discuss an example of cross-fertilization between medicine and mathematics. Data on the size of outbreaks of methicillin-resistant Staphylococcus aureus (MRSA) in Dutch hospitals were collected to estimate the transmissibility of the two most relevant MRSA strains in the Netherlands. Analysis of these data led to a relation between the epidemiological model for the spread of MRSA in hospitals and queuing theory, and to a new estimation method for the transmissibility of pathogens in outbreak settings with contact screening. In very recent theoretical work, Amaury Lambert and Pieter Trapman derive an improved estimator when easily collectable data are available. These data are now being collected in an ongoing study in the Netherlands. University of Bordeaux Segalen, France P-gp Transfer and Acquired Multi-drug Resistance in Tumor Cells Abstract: Multi-drug resistance in cancer cells has been a serious issue for several decades. In the past, many models have been proposed to describe this problem. These models use a discrete structure for the cancer cell population, and they may include classes of resistant, non-resistant, and acquired-resistant cells. Recently, this problem has received a more detailed biological description, and it turns out that in 40% of cancers the resistance to treatments is due to a protein called P-glycoprotein (P-gp). Moreover, some new biological experiments show that transfers can occur by means of tunneling nanotubes built between cells (direct transfers). Transfers can also occur through microparticles (containing P-gp) released by overexpressing cells into the liquid surrounding these cells. These microparticles can then diffuse and be recaptured by the cells (indirect transfers). These transfers turn out to be responsible for the acquired resistance of sensitive cells. The goal of this talk is to introduce this problem and to present a cell population dynamics model with continuous P-gp structure. Professor DaGang Yang Einstein 4-Manifolds with Pinched Sectional Curvature Abstract: Let (M, g) be a compact, simply connected n-dimensional Riemannian manifold with sectional curvature K. (M, g) is said to be pointwise ε-pinched for some constant 1 ≥ ε > 0 if there is a positive function K0 on M such that K0 ≥ K > εK0. Question: For what values of ε can one expect the Ricci flow initiated from g to converge to a metric of constant sectional curvature, so that M is diffeomorphic to the standard sphere Sn? The 1/4-pinched differentiable sphere theorem, by H.W. Chen for n = 4, and by S. Brendle and R. M. Schoen for n ≥ 5, says that ε = 1/4 is the smallest possible value. For ε < 1/4, P. Petersen and T. 
Tao have shown that there is a constant εn, 1/4 > εn > 0, for each dimension n such that, if 1 ≥ K > εn, then the Ricci flow initiated from g will converge to a metric which is either of constant sectional curvature or is a compact rank one symmetric space. It is therefore natural to propose the following question: For each n ≥ 4, what is the smallest pinching constant εn > 0 such that, if K0 ≥ K > εnK0, then the Ricci flow initiated from g can still be expected to converge to a metric of constant sectional curvature or to a metric of a compact rank one symmetric space? In other words, are there any other models of Einstein manifolds with pinched positive sectional curvature in each dimension n ≥ 4? In this talk, I shall discuss some old and new results in this area for n=4. Dr. Jan Medlock Issues in the Ecology and Evolution of Dengue Abstract: Dengue is a mosquito-borne viral pathogen that causes large amounts of disease in the tropics and sub-tropics. Dengue viruses are divided into four large clades, called serotypes: infection with a virus produces complete immunity to viruses within that same serotype, but increases the risk of severe disease upon infection with a virus from a different serotype. Multiple mechanisms have been hypothesized for this interaction between serotypes in the human immune system, which, combined with seasonal oscillations in mosquito abundances, lead to complex behavior in mathematical models. In addition, two new interventions for dengue are currently in intense development: a vaccine that protects against all four serotypes and transgenic mosquitoes that are less-suitable vectors. In this talk, I will discuss a model for evolution of dengue viruses in response to these new interventions and work in progress on the best population groups to target with vaccine to minimize disease burden. Dr. Jim Haglund Macdonald Polynomials and the Hilbert Series of the Quotient Ring of Diagonal Coinvariants Abstract: Macdonald polynomials are symmetric functions in a set of variables X which also depend on two parameters q,t. In this talk we describe how a formula of Haiman for the Hilbert series of the quotient ring of diagonal coinvariants in terms of Macdonald polynomials implies a much simpler expression for the Hilbert series involving matrices satisfying certain constraints. Dr. Marcus Khuri The Positive Mass Theorem with Charge Revisited Abstract: In the early 80's Hawking et al. generalized the positive mass theorem to include charge. It was conjectured that the case of equality should occur only for the extremal black hole solutions known as Majumdar-Papapetrou spacetimes. Chrusciel et al. confirmed this under extra assumptions. In this talk we will show how these extra hypotheses may be removed. This is joint work with Gilbert Weinstein. Richard P. Stanley A Survey of Alternating Permutations Professor Xiaodong Wang Volume Entropy and Ricci Curvature Abstract: The volume entropy is a very interesting invariant of a Riemannian manifold. When the Ricci curvature has a negative lower bound, there is a sharp lower bound for the volume entropy. I will discuss why the equality case characterizes hyperbolic manifolds. In certain cases, we can also prove that the manifold is close to a hyperbolic manifold in the Gromov-Hausdorff sense if the volume entropy is close to the sharp lower bound. The method involves the Busemann compactification and Patterson-Sullivan measure. This is a joint work with Francois Ledrappier. Dr. 
David Smith Center for Disease Dynamics Economics and Policy Washington, DC Recasting the Theory of Transmission by Mosquitoes Abstract: Mathematical modeling for mosquito-borne diseases has been used to develop theory and guide disease control for more than a century, but the demands on models have been changing. Analysis of a comprehensive review of mosquito-borne transmission models demonstrated that mosquito-borne disease models follow the conventions of the Ross-Macdonald model and that there has been little innovation in modeling transmission. A new mathematical description of transmission was based on mosquito movement, aquatic ecology and blood feeding behavior. Mosquito movement can be described concisely as a random walk on a bipartite graph. Transmission also depends on the ways that mosquitos allocate bites on humans and the way humans allocate their time at risk. This framework provides a starting point for reformulating a new theory of transmission that captures other aspects of mosquito behavior that are important for transmission but absent from the Ross-Macdonald model. Dr. Xinzhi Liu Department of Applied Mathematics Waterloo, Canada Epidemic Models with Switching Parameters Wednesday, February 23, 2011, 5:00pm Abstract: Epidemic models are vital for implementing, evaluating, and optimizing control schemes in order to eradicate a disease. These mathematical models may be oversimplified, but they are useful for gaining knowledge of the underlying mechanics driving the spread of a disease, and for estimating the number of vaccinations required to eradicate a disease. This talk discusses some epidemic models with switching parameters. Both constant control and pulse control schemes are examined, and, in doing so, we hope to gain insight into the effects of a time-varying contact rate on critical control levels required for eradication. Professor Tatiana Toro Potential Theory Meets Geometric Measure Theory Abstract: A central question in Potential Theory is the extent to which the geometry of a domain influences the boundary regularity of solutions to divergence form elliptic operators. To answer this question one studies the properties of the corresponding elliptic measure. On the other hand one of the central questions in Geometric Measure Theory (GMT) is the extent to which the regularity of a measure determines the geometry of its support. The goal of this talk is to present a few instances in which techniques from GMT and Harmonic Analysis come together to produce new results in both of these areas. Research and Advanced Studies Center of the National Polytechnic Institute of Mexico (Cinvestav-IPN) Winner of the 2009 Srinivasa Ramanujan Prize The Moduli Space of (Non-commutative) Toric Varieties Abstract: In this talk I will describe my work in progress with Laurent Meersseman and Alberto Verjovvsky on the moduli space of Toric Manifolds. Using specific families of foliations and the Gale transform we describe some basic geometric and topological properties of this moduli space. Professor Herbert S. Wilf Thomas A. Scott Emeritus Professor of Mathematics There's Plenty of Time for Evolution Abstract: Those who are skeptical of the Darwinian view of evolution often argue that since there are K^n possible n letter words over a K letter alphabet, it must take an exponentially long time before random mutations of the letters will produce "the right word." 
We show that if the effects of natural selection are taken into account in a reasonable way, the K^n time estimate can be replaced by Kn log n. As a byproduct we obtain the mean of the largest of many geometrically distributed random variables. This is joint work with Warren Ewens. Dr. Nathan Geer The Colored Jones Polynomial and Some of Its Relatives Abstract: In this talk I will discuss a problem in quantum topology called the Volume Conjecture. This conjecture relates the colored Jones polynomials with the hyperbolic volume of the knot complement. As I will explain, the Volume Conjecture links together elements of topology, geometry and algebra. I will begin with a gentle introduction to knot theory and the definition of the Jones polynomial. Then I will show how to compute the colored Jones polynomial using algebra. Finally, after stating the conjecture, I will discuss some related topological invariants. Some New Probability Problems; Some New Solutions Abstract: I will first update the situation discussed last year regarding the artificial pancreas project. The problem is now to upgrade the sensor by getting a better algorithm for recalibration. This is a really important and beautiful problem. I will tell you about my recent work on it; it's still very open. The second update is a beautiful theoretical problem recently posed by Mike Steele. The problem is this: consider a sequence of n iid uniform variables on [0,1]. Call a subsequence "upsy-downsy" if no three successive terms of the subsequence are monotonic. It is known (Houdre-Stanley-Widom) that if one can search all subsequences, then the expected length of the longest upsy-downsy subsequence is asymptotic to n times 2/3. Steele asked a question of interest to people in stochastic optimization: what is the length of the upsy-downsy subsequence if each term is optimally chosen without knowing the rest of the sequence, as in the famous secretary problem? We showed the answer is asymptotic to n times c, where c = 2 − √2 = 0.586… < 2/3. I will indicate two approaches to this problem, each of which gives the right answer, but only one of which I regard as mathematically legitimate. Dr. Slawomir Kwasik Souls of Manifolds via Curvature and Surgery Abstract: Deep connections between topology and geometry will be discussed in the case of manifolds with non-negative (sectional) curvature. A historical perspective on these connections and new developments will be presented. Dr. Chuan Xue Mathematical Biosciences Institute at Ohio State A Mathematical Model of Chronic Wounds Abstract: Chronic wound healing is a staggering public health problem, affecting 6.5 million individuals annually in the U.S. Ischemia, caused primarily by peripheral artery disease, represents a major complicating factor in the healing process. In this talk, I will present a mathematical model of chronic wounds that represents the wounded tissue as a quasi-stationary Maxwell material and incorporates the major biological processes involved in wound closure. The model was formulated in terms of a system of partial differential equations with the surface of the open wound as a free boundary. Simulations of the model demonstrate how oxygen deficiency caused by ischemia limits macrophage recruitment to the wound site and impairs wound closure. The results are in tight agreement with recent experimental findings in a porcine model. 
I will also show analytical results for the model on the large-time asymptotic behavior of the free boundary under different ischemic conditions of the wound. Dr. Valerie Hower A Shape-based Method for Determining Protein Binding Sites in a Genome Abstract: We present a new algorithm for the identification of bound regions from ChIP-Seq experiments. ChIP-Seq is a relatively new assay for measuring the interactions of proteins with DNA. The binding sites for a given protein in a genome are "peaks" in the data, which is given by an integer-valued height function defined on the genome. Our method for identifying statistically significant peaks is inspired by the notion of persistence in topological data analysis and provides a non-parametric approach that is robust to noise in experiments. Specifically, our method reduces the peak calling problem to the study of tree-based statistics derived from the data. The software T-PIC (Tree shape Peak Identification for ChIP-Seq) is available at http://math.berkeley.edu/~vhower/tpic.html and provides a fast and accurate solution for ChIP-Seq peak finding. Dr. Peter Kim Imatinib Dynamics and Cancer Vaccines: From Agent-Based Models to PDEs Abstract: Various models exist for the interaction between the drug imatinib and chronic myelogenous leukemia. However, the role of the immune response during imatinib treatment remains unclear. Based on experimental data, we hypothesize that imatinib gives rise to a brief anti-leukemia immune response as patients enter remission. We propose that cancer vaccinations during imatinib treatment can boost the existing immune response and lead to a sustained remission or a potential cure. To examine this hypothesis, we take a model by Michor et al. and extend it to a delay differential equation (DDE) model by incorporating an anti-leukemia immune response. We show that properly timed vaccines can sustain the immune response to potentially prolong remission or eliminate cancer. For comparison, we analyze an agent-based model developed independently by Roeder et al. We develop a partial differential equation (PDE) model that captures the same behavior as the Roeder agent-based model and extend it by incorporating an immune response. We conclude that both the DDE and PDE models exhibit similar behaviors with regard to cancer remission, implying that anti-leukemia immune responses may play a role in leukemia treatment. Professor Nicolai Reshetikhin Understanding Random Surfaces Abstract: There is a bijection between a class of piecewise-linear surfaces and dimer configurations on planar graphs. A dimer configuration on a graph is a perfect matching of the vertices along edges. Dimers are well known in biology, chemistry and statistical mechanics. For certain very natural probability measures on dimer configurations, important correlation functions can be computed as Pfaffians of N × N matrices. This reduces the statistics of such special random surfaces to a reasonable problem in linear algebra and allows one to study the random surfaces corresponding to large graphs. The talk will outline this story, and at the end the discussion will focus on the "continuum limit" of such random surfaces. Professor Pierre Magal Bifurcation Problems for Structured Population Dynamics Models Abstract: This presentation is devoted to bifurcation problems for some classes of PDE arising in the context of population dynamics. 
The main difficulty in such a context is to understand the dynamical properties of a PDE with non-linear and non-local boundary conditions. A typical class of examples is the so-called age-structured models. Age-structured models have been well understood in terms of existence, uniqueness, and stability of equilibria since the 1980s. Nevertheless, until recently, the bifurcation properties of the semiflow generated by such a system have been only poorly understood. In this presentation, we will start with some results about existence and smoothness of the center manifold, and we will present some general Hopf bifurcation results applying to age-structured models. Then we will turn to normal form theory in such a context. The point here is to obtain formulas to compute the first-order terms of the Taylor expansion of the reduced system. Dr. Steven White Centre for Ecology & Hydrology Wallingford, Oxon, UK Controlling Mosquitoes by Classical or Transgenic Sterile Insect Techniques Abstract: For centuries, humans have attempted to control insect populations. This is in part because of the significant mortality and morbidity burden associated with insect vector-borne diseases, but also due to the huge economic impact of insect pests leading to losses in global food production. The development of transgenic technologies, coupled with sterile insect techniques (SIT), is being explored in relation to new approaches for the biological control of insect pests. In this talk, I explore the impact of two control strategies (classical SIT and transgenic late-acting bisex lethality) using a stage-structured mathematical model parameterized for the mosquito Aedes aegypti, which can spread yellow fever, dengue fever and Chikungunya disease. Counter to the majority of studies, I use realistic pulsed release strategies and incorporate a fitness cost, which is manifested as a reduction in male mating competitiveness. I will explore the timing of control release in constant and cyclic wild-type mosquito populations, and demonstrate that this timing is critical for effective pest management. Furthermore, I will incorporate these control strategies into an integrated pest management (IPM) program and find the optimal release strategy. Finally, I will extend the models to a spatial context, determining conditions for the prevention of mosquito invasion by the use of a barrier wall. Dr. Kate Petersen The Euclidean Algorithm and Primitive Roots Abstract: Artin's famous primitive root conjecture states that if n is an integer other than -1 or a square, then there are infinitely many primes p such that n is a primitive root modulo p. Although this conjecture is not known to hold for any value of n, Hooley proved it to be true under the assumption of the generalized Riemann hypothesis (GRH). We will discuss a number field version of this conjecture and its connection to the following Euclidean algorithm problem. Let O be the ring of integers of a number field K. It is well known that if O is a Euclidean domain, then O is a unique factorization domain. With the exception of the imaginary quadratic number fields, it is conjectured that the reverse implication is true. This was proven by Weinberger under the assumption of the GRH. We will discuss recent progress towards the unconditional resolution of the Euclidean algorithm problem and the related primitive root problem. This is joint work with M. Ram Murty. Dr. 
Shiwang Ma Nankai University, China Bounded and Unbounded Motions for Asymmetric Oscillators at Resonance Monday, November 8, 2010, 4:30pm Abstract: In this talk, we consider the boundedness and unboundedness of solutions for the asymmetric oscillator x'' + ax^+ - bx^- + g(x) = p(t), where x^+ = max{x, 0}, x^- = max{-x, 0}, a and b are two positive constants, p(t) is a 2π-periodic smooth function and g(x) satisfies lim_{|x|→+∞} x^{-1}g(x) = 0. We establish some sharp sufficient conditions concerning the boundedness of all the solutions and the existence of unbounded solutions. Unlike many existing results in the literature, where the function g(x) is required to be a bounded function with asymptotic limits, here we allow g(x) to be unbounded or oscillatory without asymptotic limits. Some critical cases will also be considered. Dr. Igor Rodnianski Evolution Problem in General Relativity Wednesday, November 3, 2010, 5:00pm Abstract: The talk will introduce basic mathematical concepts of General Relativity and review the progress, main challenges and open problems, viewed through the prism of the evolution problem. I will illustrate the interaction of geometry and PDE methods in the context of General Relativity on examples ranging from incompleteness theorems and the formation of trapped surfaces to stability problems. Dr. Lars Andersson The Black Hole Stability Problem Friday, October 22, 2010, 4:00pm Abstract: The problem of nonlinear stability for the Kerr model of a rotating black hole is one of the central problems in general relativity. The analysis of linear fields on the Kerr spacetime is an important model problem for full nonlinear stability. In this talk, I will present recent work with Pieter Blue which makes use of the hidden symmetry related to the Carter constant to circumvent these difficulties and give a "physical space" approach to estimates for the wave equation, including energy bounds, trapping, and dispersive estimates. I will also discuss the field equations for higher spin fields, including linearized gravity. Dr. Yuan Lou Persistence of a Single Phytoplankton Species Abstract: Phytoplankton need light to grow. However, most phytoplankton are heavier than water, so they sink. How can phytoplankton persist? We investigate a nonlocal reaction-diffusion-advection equation which models the growth of a single phytoplankton species in a water column where the species depends solely on light for its metabolism. We study the effect of sinking rate, water column depth and vertical turbulent diffusion rate on the persistence of a single phytoplankton species. This is based upon joint work with Sze-Bi Hsu, National Tsing-Hua University. Professor Nick Loehr Macdonald Polynomials in Representation Theory and Combinatorics Abstract: This talk surveys some recent work in algebraic combinatorics that illustrates surprising connections between representation theory and enumerative combinatorics. We describe how to calculate the Hilbert series of various spaces of polynomials (harmonics, diagonal harmonics, and Garsia-Haiman modules) using combinatorial statistics on permutations and parking functions. This leads to a discussion of the algebraic and combinatorial significance of the Macdonald polynomials, which have played a central role in the theory of symmetric functions for the past two decades. Brian J. Coburn, Ph.D. 
Center for Biomedical Modeling Semel Institute of Neuroscience and Human Behavior David Geffen School of Medicine Modeling Approaches for Influenza and HIV Abstract: In this talk, I will present a survey of research projects on different mathematical models for influenza and HIV. For influenza, I will discuss two different modeling approaches. In the first approach, I will present a multi-strain/multi-host (MSMH) model that tracks the spread of inter-species strains between birds, pigs and humans. In the MSMH model, pigs are "mixing vessels" between avian and human strains and are capable of producing super-strains as a consequence of genetic recombination of these strains. I will show how specific subtypes can cause an epidemic then virtually disappear for years or even decades before reemerging (e.g., the case of H1N1). In the second approach, I will present a model that tracks the spread of influenza within flight transmission. A plane flight is much shorter scale than influenza's infectious duration; hence, we use methods from microbial risk management to assess the number of potential infections. We show that the flight duration along with the compartment will ultimately determine the passenger's risk. For HIV, I will present cross-sectional data on HIV prevalence in Lesotho, a small sub-Saharan African nation with HIV prevalence at approximately 23%. I will present our current progress on data analysis from the Health and Demographic Survey (DHS) to develop risk maps by district based on prevalence and treatment, feasibility analysis of a clinical trial, and efficacy of male circumcision as prevention for HIV. Dr. Andrew Noble A Non-neutral Theory of Dispersal-limited Community Dynamics Abstract: We introduce the first analytical model of a dispersal-limited, niche-structured community to yield Hubbell's neutral theory in the limit of functional equivalence among all species. Dynamics of the multivariate species abundance distribution (SAD) for an asymmetric local community are modeled explicitly as a dispersal-limited sampling of the surrounding metacommunity. Coexistence may arise either from approximate functional equivalence or a competition-colonization tradeoff. At equilibrium, these symmetric and asymmetric mechanisms both generate unimodal SADs. Multiple modes only arise in asymmetric communities and provide a strong indication of non-neutral dynamics. Although these stationary distributions must be calculated numerically in the general theory, we derive the first analytical sampling distribution for a nearly neutral community where symmetry is broken by a single species distinct in ecological fitness and dispersal ability. Novel asymptotic expansions of hypergeometric functions are developed to make evaluations of the sampling distribution tractable for large communities. In this regime, population fluctuations become negligible. A calculation of the macroscopic limits for the symmetric and asymmetric theories yields a new class of deterministic competition models for communities of fixed-size where rescue effects facilitate coexistence. For nearly neutral communities where the asymmetric species experiences linear density-dependence in ecological fitness, strong Allee-type effects emerge from a saddle-node bifurcation at a critical point in dispersal limitation. The bistable dynamics governing a canonical Allee effect are modified by a constant influx of migrants, which raises the lower stable fixed point above zero. 
In the stochastic theory, a saddle-node bifurcation corresponds to the development of bimodal stationary distributions and the emergence of inflection points in plots of mean first-time to extirpation as a function of abundance. Dr. Hao Wang The Role of Light and Nutrients in Aquatic Trophic Interactions Abstract: Carbon (C), nitrogen (N), and phosphorus (P) are vital constituents in biomass: C supplies energy to cells, N is essential to build proteins, and P is an essential component of nucleic acids. The scarcity of any of these elements can severely restrict organism and population growth. Thus in nutrient deficient environments, the consideration of nutrient cycling, or stoichiometry, may be essential for population models. To show this idea, I will present two case studies in this talk. We carried out a microcosm experiment evaluating competition of an invasive species Daphnia lumholtzi with a widespread native species, Daphnia pulex. We applied two light treatments to these two different microcosms and found strong context-dependent competitive exclusion in both treatments. To better understand these results we developed and tested a mechanistically formulated stoichiometric model. This model exhibits chaotic coexistence of the competing species of Daphnia. The rich dynamics of this model as well as the experiment allow us to suggest some plausible strategies to control the invasive species D. lumholtzi. We modeled bacteria-algae interactions in the epilimnion with the explicit consideration of carbon (energy) and phosphorus (nutrient). We hypothesized that there are three dynamical scenarios determined by the basic reproductive numbers of bacteria and algae. Effects of key environmental conditions were examined through these scenarios. Competition of bacterial strains were modeled to examine Nishimura's hypothesis that in severely P-limited environments such as Lake Biwa, P-limitation exerts more severe constraints on the growth of bacterial groups with higher nucleic acid contents, which allows low nucleic acid bacteria to be competitive. Dr. Sanja Zivanovic Centrum Wiskunde en Informatica (CWI), Amsterdam, Netherlands Numerical Solutions to Noisy Systems Abstract: We study input-affine systems where input represents some bounded noise. The system can be rewritten as differential inclusion describing the evolution. Differential inclusions are a generalization of differential equations with multivalued right-hand side. They have applications in many areas of science, such as mechanics, electrical engineering, the theory of automatic control, economical, biological, and social macrosystems. A numerical method for rigorous over-approximation of a solution set of input-affine system will be presented. The method gives high order error for a single time step and a uniform bound on the error over the finite time interval. The approach is based on the approximations of inputs by piecewise linear functions. Ecological and Evolutionary Consequences of Dispersal in Multi-trophic Communities Abstract: I investigate the effects of non-random dispersal strategies on coexistence and species distributions in multi-trophic communities with competition and predation. I conduct a comparative analysis of dispersal strategies with random and fitness-dependent dispersal at the extremes and two intermediate strategies that rely on cues (density and habitat quality) that serve as proxies for fitness. The most important finding is an asymmetry between consumer species in their dispersal effects. 
The dispersal strategy of inferior resource competitors that are less susceptible to predation has a large effect on both coexistence and species distributions, but the dispersal strategy of the superior resource competitor that is more susceptible to predation has little or no effect. I explore the consequences of this asymmetry for the evolution of dispersal. Dr. Herbert Wilf How to Lose as Little as Possible Abstract: Suppose Alice has a coin with heads probability q and Bob has one with heads probability p > q. Now each of them will toss their coin n times, and Alice will win iff she gets more heads than Bob does. Of course, the game favors Bob, but for the given p, q, what is the choice, N(q,p), of n that maximizes Alice's chances of winning? The analysis uses the multivariate form of Zeilberger's algorithm, so a portion of the talk will be a review of the ideas underlying symbolic summation. Dr. Mario Milman Sobolev Inequalities on Probability Metric Spaces Abstract: To formulate new Sobolev inequalities one needs to answer questions like: what is the role of dimension? What norms are appropriate to measure the integrability gains? Just to name a few... For example, in contrast to the Euclidean case, the integrability gains in Gaussian measure are logarithmic but dimension free (log Sobolev inequalities). So it is easy to understand the difficulties in deriving a general theory. I will discuss some new methods to prove general Sobolev inequalities that unify the Euclidean and the Gaussian cases, as well as several important model manifolds. Dr. Larry Shepp Member, National Academy of Sciences (NAS) Member, Institute of Medicine (IOM) Problems in Probability Abstract: Several problems will be discussed: 1) What is the distribution of the empirical correlation coefficient of two (actually independent) Wiener processes? It is far from zero: correlation is induced by the arcsine law property of the sample paths. This is used by (bad) statisticians to show correlation between time series when none exists. It is a non-trivial calculation to find the actual distribution. 2) What is the relationship between the coefficients of a polynomial of degree n and the number of its real zeros? Descartes had something to say about it, but Mark Kac showed that probability theory can add a lot of insight. 3) An update on the situation discussed last year regarding the artificial pancreas project. Dr. Richard Schoen Bass Professor of Humanities and Sciences Riemannian Manifolds of Constant Scalar Curvature Abstract: The problem of constructing Riemannian metrics of constant scalar curvature is called the Yamabe problem. It is an important variational problem in conformal geometry, and it also relates directly to the Einstein equations of general relativity. We will give a brief history and introduction to this problem and describe some new phenomena which have been discovered recently concerning issues of singular behavior and blow-up of such metrics. Dr. Pengzi Miao Critical Metrics for the Volume Functional on Compact Manifolds with Boundary Abstract: It is known that, on closed manifolds, Einstein metrics of negative scalar curvature are critical points of the usual volume functional constrained to the space of metrics of constant scalar curvature. In this talk, I will discuss how this variational characterization of Einstein metrics can be localized to compact manifolds with boundary. I will derive the critical point equation and focus on geometric properties of its solutions. 
In particular, if a solution has zero scalar curvature and the boundary of the manifold can be isometrically embedded into Euclidean space as a convex hypersurface, I will show that the volume of such a critical metric is always greater than or equal to the Euclidean volume enclosed by the image of the isometric embedding, and that the two volumes are the same if and only if the critical metric is isometric to the Euclidean metric on a round ball. I will also give a classification of all conformally flat critical metrics. This is joint work with Luen-Fai Tam. Dr. Brian J. Weber RTG/Simons Center Postdoc Einstein Metrics, the Bach Tensor, and Metric Degenerations Abstract: One might search for "canonical metrics," such as Einstein metrics, on a manifold by trying to prove the convergence of a sequence of metrics that minimize some functional, although such a direct approach usually fails. In this talk we present an indirect approach which has been successful in some cases. A local obstruction to finding an Einstein metric in a conformal class is the non-vanishing of the Bach tensor, defined to be the gradient of the Weyl curvature functional $\int |W|^2$. On a Kaehler manifold there are no other obstructions, and any Bach-flat Kaehler metric is locally conformally Einsteinian. Additionally, the conformal factor is geometrically interesting and sometimes controllable. This talk will describe the results of a 2008 paper with X. Chen and C. LeBrun, where circumstances under which a Kaehler manifold is Bach-flat were established, and where it was shown that these conditions hold for a certain Kaehler metric on $CP^2 \# 2\overline{CP}^2$ with non-zero conformal factor, establishing for the first time an Einstein metric on $CP^2 \# 2\overline{CP}^2$. Dr. Hans Boden Metabelian SL(n,C) Representations of Knot Groups Abstract: In this talk, which represents joint work with Stefan Friedl, we will present a classification of irreducible metabelian SL(n,C) representations of knot groups. Under a mild hypothesis, we prove that such representations factor through a finite group, hence they are all conjugate to unitary representations, and we give a simple formula for the number of conjugacy classes. For knots with nontrivial Alexander polynomial, we discuss an existence result for irreducible metabelian representations. Given a knot group, its SL(n,C) character variety admits a natural action by the cyclic group of order n, and we show how to identify the fixed points of this action with characters of metabelian representations. If time permits, we will describe conditions under which such points are simple points in the character variety, using a deformation argument of Abdelghani, Heusener, and Jebali. Dr. Brett L. Kotschwar Backwards Uniqueness for the Ricci Flow and the Non-expansion of the Isometry Group Abstract: One of the fundamental properties of the Ricci flow (an evolution equation for Riemannian metrics) is that of isometry preservation, namely, that an isometry of the initial metric remains an isometry of the solution, at least as long as the curvature remains bounded. In this talk, I will take up the complementary problem of isometry development under the flow. While the solution may acquire new isometries in the limit, one does not expect the flow to sponsor their generation within the lifetime of the solution. 
The impossibility of such a phenomenon is equivalent to a backwards uniqueness (or unique-continuation) property for the equation: two solutions which agree at some non-initial time must agree identically at all previous times. I will discuss recent work which establishes this property for complete solutions of bounded curvature and, additionally, prohibits a solution from becoming Einstein or self-similar in finite time. Dr. Stephen Gourley University of Surrey, UK Impulsive Delay Equation Models for the Control of Vector-borne Diseases Abstract: Delay equation models for the control of a vector-borne disease such as West Nile virus will be presented. The models make it possible to compare the effectiveness of larvicides and adulticides in controlling mosquito populations. The models take the form of autonomous delay differential equations with impulses (if the adult insects are culled) or a system of nonautonomous delay differential equations where the time-varying coefficients are determined by the culling times and rates (in the case where the insect larvae are culled). Sufficient conditions can be derived which ensure eradication of the disease. Eradication of vector-borne diseases is possible by culling the vector at either the immature or the mature phase. Very infrequent culling can actually lead to the mean insect population being increased rather than decreased. Professor Philippe LeFloch University of Paris 6 and CNRS Einstein Spacetimes with Bounded Curvature Abstract: I will present recent results on Einstein spacetimes of general relativity when the curvature is solely assumed to be bounded and no assumption on its derivatives is made. One such result, in joint work with B.-L. Chen, concerns the optimal regularity of pointed spacetimes in which, by definition, an "observer" has been specified. Under geometric bounds on the curvature and injectivity radius near the observer, there exists a CMC (constant mean curvature) foliation as well as CMC-harmonic coordinates, defined in geodesic balls of definite size depending only on the assumed bounds, such that the components of the Lorentzian metric have optimal regularity in these coordinates. The proof combines geometric estimates (Jacobi fields, comparison theorems) and quantitative estimates for nonlinear elliptic equations with low regularity. Dr. Zhilan Feng Evolutionary Implications of Influenza Medication Strategies Abstract: Patients at risk for complications of influenza are commonly treated with antiviral medications, which, however, could also be used to control outbreaks. The adamantanes and neuraminidase inhibitors are active against influenza A, but avian influenza (H5N1) is resistant to oseltamivir and swine influenza (H1N1) to the adamantanes (but see postscript). To explore influenza medication strategies (pre-exposure or prophylaxis, post-exposure/pre-symptom onset, and treatment at successive clinical stages) that may affect the evolution of resistance (select for resistant strains within hosts or facilitate their spread between hosts), we elaborated a published transmission model and chose parameters from the literature. Then we derived the reproduction numbers of sensitive and resistant strains, peak and final sizes, and time to peak. Finally, we made these results accessible via user-friendly Mathematica notebooks. 
(Joint work with Rongsong Liu, Dashun Xu, Yiding Yang, and John Glasser) Professor Sergiy Koshkin University of Houston-Downtown Gauge Theory of Faddeev-Skyrme Functionals Friday, November 20, 2009, 3:30pm Abstract: We study geometric variational problems for a class of nonlinear sigma-models in quantum field theory. Mathematically, one needs to minimize an energy functional on homotopy classes of maps from closed 3-manifolds into compact homogeneous spaces G/H, similar to the case of harmonic maps. The minimizers are known as Hopfions and exhibit localized knot-like structure. Our main results include proving the existence of Hopfions as finite-energy Sobolev maps in each (generalized) homotopy class when the target space is a symmetric space. For more general spaces we obtain a weaker result on the existence of minimizers in each 2-homotopy class. Our approach is based on representing maps into G/H by equivalence classes of flat connections. The equivalence is given by gauge symmetry on pullbacks of G → G/H bundles. We work out a gauge calculus for connections under this symmetry, and use it to eliminate non-compactness from the minimization problem by fixing the gauge. Dr. Alexander Engström Miller Research Fellow Graph Theoretic Methods in Algebraic Statistics Abstract: First I will review how methods from commutative algebra, for example Gröbner bases and toric ideals, can be used in statistics. Then I will describe two applications of graph theoretic methods in this context: my proof of Sturmfels and Sullivant's conjecture on cut ideals, and the ideals of graph homomorphisms introduced together with Patrik Noren. Dr. Daniel Ruberman Slice Knots and the Alexander Polynomial Abstract: A knot in the 3-sphere is slice if it bounds an embedded disk in the 4-ball. The disk may be topologically embedded, or we may require the stronger condition that it be smoothly embedded; the knot is said to be (respectively) topologically or smoothly slice. It has been known since the early 1980s that there are knots that are topologically slice but not smoothly slice. These result from Freedman's proof that knots with trivial Alexander polynomial are topologically slice, combined with gauge-theory techniques originating with Donaldson. In joint work with C. Livingston and M. Hedden, we answer the natural question of whether Freedman's result is responsible for all topologically slice knots. We show that the group of topologically slice knots, modulo those with trivial Alexander polynomial, is infinitely generated. The proof uses Heegaard Floer theory.
CommonCrawl
Proceedings of the Joint International GIW & ABACBS-2019 Conference: bioinformatics (part 2) DeepSuccinylSite: a deep learning based approach for protein succinylation site prediction Niraj Thapa1, Meenal Chaudhari1, Sean McManus1, Kaushik Roy2, Robert H. Newman3, Hiroto Saigo4 & Dukka B. KC5 BMC Bioinformatics volume 21, Article number: 63 (2020) Protein succinylation has recently emerged as an important and common post-translational modification (PTM) that occurs on lysine residues. Succinylation is notable both in its size (e.g., at 100 Da, it is one of the larger chemical PTMs) and in its ability to modify the net charge of the modified lysine residue from + 1 to − 1 at physiological pH. The gross local changes that occur in proteins upon succinylation have been shown to correspond with changes in gene activity and to be perturbed by defects in the citric acid cycle. These observations, together with the fact that succinate is generated as a metabolic intermediate during cellular respiration, have led to suggestions that protein succinylation may play a role in the interaction between cellular metabolism and important cellular functions. For instance, succinylation likely represents an important aspect of genomic regulation and repair and may have important consequences in the etiology of a number of disease states. In this study, we developed DeepSuccinylSite, a novel prediction tool that uses deep learning methodology along with embedding to identify succinylation sites in proteins based on their primary structure. Using an independent test set of experimentally identified succinylation sites, our method achieved efficiency scores of 79%, 68.7% and 0.48 for sensitivity, specificity and MCC respectively, with an area under the receiver operator characteristic (ROC) curve of 0.8. In side-by-side comparisons with previously described succinylation predictors, DeepSuccinylSite represents a significant improvement in overall accuracy for prediction of succinylation sites. Together, these results suggest that our method represents a robust and complementary technique for advanced exploration of protein succinylation. Protein post-translational modifications (PTMs) are important cellular regulatory processes that occur after protein synthesis. PTMs increase the functional diversity of the proteome through the covalent addition of functional moieties to proteins and the proteolytic cleavage of regulatory subunits, and they play important roles in signaling for the degradation of entire proteins. PTMs include phosphorylation, glycosylation, ubiquitination and relatively recently described modifications, such as succinylation. Succinylation is a PTM that occurs through the addition of a succinyl group (−CO-CH2-CH2-CO2H) to the ε-amino group of target lysine residues. Protein PTMs have been detected by a variety of experimental techniques [1], including mass spectrometry (MS) [2, 3], liquid chromatography [4], radioactive chemical labeling [5] and immunological detection, such as chromatin immunoprecipitation [6] and western blotting [7]. Generally, the experimental analysis of PTMs requires time-consuming, labor- and capital-intensive techniques and the use of hazardous/expensive chemical reagents. Due to the importance of PTMs in both disease states and normal biological functions, it is imperative to invest in developing options that can screen proteins for potential PTM sites in a rapid, cost-effective manner.
In recent years, machine learning has become a cost-effective method for prediction of different PTM sites. Some of the machine learning based succinylation site prediction approaches are iSuc-PseAAC [8], iSuc-PseOpt [9], pSuc-Lys [10], SuccinSite [11], SuccinSite2.0 [12], GPSuc [13] and PSuccE [14]. Although results have been promising, the potential for bias is present due to manual selection of features, along with the possible absence of unknown features that contribute to succinylation. Moreover, the prediction performance of these methods is not yet satisfactory enough to be used in high throughput studies. Recently, deep learning (DL) approaches have been developed to elucidate putative PTM sites in cellular proteins. For instance, MusiteDeep [15] and DeepPhos [16] have been developed to predict phosphorylation sites, while Fu et al. [17] and Wu et al. [18] used DL-based methods to identify putative ubiquitination and acetylation sites, respectively. These DL methods have achieved relative improvement in aggregate measures of method performance, such as the area under curve (AUC) and Matthews Correlation Coefficient (MCC). Typically, these models utilize some combination of one-hot encoding and extracted features as an input, largely trying to avoid reliance on manual feature extraction. To the best of our knowledge, DL models have not been applied previously for prediction of succinylation sites. In this study, we developed a succinylation site predictor, termed DeepSuccinylSite, based on a convolutional neural network (CNN) deep learning framework [19] using the Keras library [20]. Benchmark dataset In this study, we used the same training and independent dataset, collected from experimentally derived lysine succinylation sites, as in Hasan et al. [13] and Ning et al. [14]. Ning et al. used the UniProtKB/Swiss-Prot database and the NCBI protein sequence database, as did Hasan et al., to create the succinylation dataset. After removing proteins that have more than 30% sequence identity using CD-HIT, 5009 succinylation sites and 53,542 sites not known to be succinylated remained. Of these, 4755 succinylation sites and 50,565 non-succinylation sites were used for the training set, and 254 succinylation sites and 2977 non-succinylation sites were used for the independent test. Moreover, because the optimal window size for our approach turned out to be 33 and some of the sequences contained other characters, we lost 5 (out of 4755) positive sites in the training set. For the training and test sets, data were balanced using under-sampling. The final training dataset contained 4750 positive and 4750 negative sites, whereas the independent test dataset contained 254 positive and 254 negative sites after balancing. Table 1 shows the final dataset for training and independent test after balancing. In order to generate a local representation of the protein and to optimize the model, a window of fixed size was set around each lysine (K) of interest. If the number of residues to the left or right of K was less than half the window size, the pseudo residue "-" was used as padding in order to retain all the positive sites. Table 1 Number of positive and negative sites for training and testing dataset In contrast to traditional machine learning methods, our DL-based method takes sequence data in the form of windows directly as an input, reducing the need for hand-crafted feature extraction. A prerequisite for this approach is that the sequence data must be encoded in a form that is readable by our DL model.
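To make the windowing step concrete, the following minimal Python sketch shows one way the fixed-length, '-'-padded windows and their integer encoding could be produced; the function and variable names are illustrative rather than taken from the published implementation, and the alphabetical residue ordering simply mirrors the one-hot convention described below. Windows produced in this way still need to be encoded numerically before they can be fed to the network.

```python
# Minimal sketch of window extraction around candidate lysines (K).
# Residue alphabet: 20 amino acids plus the pseudo residue "-" used for padding.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"               # alphabetical by one-letter code
ALPHABET = AMINO_ACIDS + "-"                       # 21 symbols in total
TO_INT = {aa: i for i, aa in enumerate(ALPHABET)}  # integers 0..20

def extract_window(sequence, site, window_size=33):
    """Return the window of length `window_size` centred on `site`
    (0-based index of a lysine), padded with '-' at both ends."""
    half = window_size // 2                        # 16 residues on each side for size 33
    padded = "-" * half + sequence + "-" * half
    centre = site + half
    return padded[centre - half : centre + half + 1]

def encode_window(window):
    """Map a residue window to a list of integers (the input to the embedding layer)."""
    return [TO_INT.get(res, TO_INT["-"]) for res in window]

# Example: a toy sequence with a lysine near the N-terminus.
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
window = extract_window(seq, seq.index("K"))
print(window, encode_window(window))
```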
Accordingly, we have utilized two types of encoding: (i) one-hot encoding and (ii) an embedding layer. Compared with DL approaches for other types of post-translational modification site prediction, one of the major differences of our method is the embedding encoding. One-hot encoding One-hot encoding converts categorical variables into binary indicator variables. We implemented one-hot encoding in a manner similar to that used during the development of MusiteDeep [15]. In order to convert the 20 common amino acids and our pseudo residue "-" into numerical values, these 21 characters are converted into integers ranging from 0 to 20. Every amino acid is then represented by a binary code consisting of a sequence of zeros and a single one, the location of which encodes the identity of the amino acid. In our study, the binary representation was based on alphabetical order. For example, Alanine (A) is represented as 100000000000000000000, Arginine (R) is represented as 010000000000000000000, and so on. Accordingly, in our model, a window of size N corresponds to an input of size N × 21. One of the primary drawbacks of one-hot encoding is that the mapping is completely uniform; amino acids with similar properties are therefore not placed together in vector space. Embedding layer One of the highlights of our approach is the embedding layer. The second type of encoding that we utilize is the embedding encoding [20, 21]. Embedding finds the best representation for the amino acid sequence, as in DeepGO [22], to overcome the shortcomings of one-hot encoding. Briefly, the 20 amino acid residues and 1 pseudo residue were first converted into integers ranging from 0 to 20. These integers are provided as input to the embedding layer, which lies at the beginning of our DL architecture. The embedding layer is initialized with random weights and then learns better vector-based representations over subsequent epochs during training. Each vectorization is an orthogonal representation in another dimension, thus preserving its identity, which makes it more dynamic than the static one-hot encoding. In our study, the embedding encoding (word to vec) for K is: [− 0.03372079, 0.01156038, − 0.00370798, 0.00726882, − 0.00323456, − 0.00622324, 0.01516087, 0.02321764, 0.00389882, − 0.01039953, − 0.02650939, 0.01174229, − 0.0204078, − 0.06951248, − 0.01470334, − 0.03336572, 0.01336034, − 0.00045607, 0.01492316, 0.02321628, − 0.02551141] in 21-dimensional vector space after training. Embedding groups commonly co-occurring items together in the vector space. Two key arguments must be specified in the embedding layer: output_dim, the size of the vector space, and input_length, the size of the input, which is the window size. Training and testing datasets The training dataset was further sub-divided into 80% training and 20% validation sets. The model was trained on 80% of the training data, with validation performed in every epoch using the remaining 20% of the training dataset. This validation approach was used to track training progress and to identify overfitting. Overfitting was identified when validation accuracy started decreasing while training accuracy continued to increase. A checkpointer was utilized to select the optimal model from the epochs based on validation accuracy; this approach also helped to minimize any potential overfitting. The model generated was then used for independent testing with the independent test dataset.
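The 80/20 split and checkpointing strategy described above map directly onto standard Keras utilities. The sketch below is illustrative only: X (integer-encoded windows), y (two-node one-hot labels), build_model (a constructor such as the one sketched in the architecture section below), the checkpoint file name, and the epoch and batch-size values are assumptions rather than details reported in the text.

```python
from tensorflow.keras.callbacks import ModelCheckpoint

# X: (n_samples, 33) integer-encoded windows; y: (n_samples, 2) one-hot labels.
# build_model() is assumed to return a compiled network (see the next section).

def train_with_checkpoint(build_model, X, y, epochs=50, batch_size=64):
    model = build_model()
    # Keep only the weights of the epoch with the best validation accuracy,
    # which also limits the impact of overfitting in later epochs.
    checkpoint = ModelCheckpoint(
        "best_model.h5",
        monitor="val_accuracy",      # "val_acc" in older Keras versions
        save_best_only=True,
        mode="max",
    )
    history = model.fit(
        X, y,
        validation_split=0.2,        # 80% training / 20% validation, as in the text
        epochs=epochs,
        batch_size=batch_size,
        callbacks=[checkpoint],
        verbose=2,
    )
    model.load_weights("best_model.h5")
    return model, history
```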
The main advantage of using DL over traditional machine learning approaches is the exclusion of manual feature extraction. The input for our DL approach is the sequence windows in FASTA format. For example, for a window size of 33, the input dimension would be 33 × 21 for one-hot encoding. For embedding with the same window size, the input dimension would also be 33 × 21, given an embedding output dimension of 21. DeepSuccinylSite architecture The overall architecture of DeepSuccinylSite is shown in Fig. 1. (a) A window of size 33 in FASTA format is the input; it is converted into integers and then encoded using either one-hot encoding or the embedding layer, and this forms the input to the CNN layers. (b) The output from either encoding is fed into the deep learning architecture; after the flattening and fully connected layers, the final output layer contains two nodes, with output [0 1] for positive and [1 0] for negative sites. After encoding the input data, the encoded data was fed into the network. The same architecture was utilized for both encoding methods, except for the inclusion of an embedding layer and a lambda layer in the case of the embedding encoding. The next layer is the convolutional layer. Previous DL-based models for phosphorylation sites (MusiteDeep [15], DeepPhos [16]) have used 1-D (one-dimensional) convolutional layers, whereas we have used a 2-D (two-dimensional) convolutional layer, which gives us more flexibility in choosing the 2-D filter size. If we used a 1-D convolutional layer instead, we would not be able to extract as much feature information, because the x-axis is fixed (it stays at 21) and the filter strides only vertically. The other layers were therefore also chosen to be 2-D. We used a 2-D convolutional layer to prioritize the inclusion of a filter size of 17 × 3 (for window size 33, the PTM site lies at the 17th position), so that the PTM site is included in every stride. The use of this filter size, along with the disabling of padding, allowed the model to be optimized for training time without compromising performance. A relatively high dropout rate of 0.6 was used to avoid overfitting. Moreover, a rectified linear unit (ReLU) was used as the activation function for all layers. ReLU was deemed an optimal activation function due to its sparse activation, which minimized the possibility of overfitting and maximized the predictive power of the model. We used two convolutional layers, one maxpooling layer, a fully connected layer with two dense layers, and an output layer. The parameters used in the model are given in Table 2. Table 2 Parameters in DeepSuccinylSite Adam optimization was used as the optimizer for our architecture, as described previously by Kingma et al. [23]. Adam uses an adaptive learning rate methodology to calculate individual learning rates for each parameter. Adam is different from classical stochastic gradient descent in that stochastic gradient descent maintains a single, constant learning rate for all weight updates during training [24]. Specifically, Adam combines the benefits of the adaptive gradient algorithm and root mean square propagation, allowing for efficient training of the model. Since this study is a binary classification problem, binary cross-entropy (a measure of the uncertainty associated with a given distribution), or log loss, was used as the loss function.
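The following Keras sketch assembles the layers described above (an embedding of dimension 21, two 2-D convolutions with 17 × 3 filters and no padding, max-pooling, a dropout rate of 0.6, dense layers of 768 and 256 units, and a two-node output) and compiles the model with Adam and binary cross-entropy. It is a reconstruction for illustration, not the authors' code: the number of convolutional filters, the pooling size and the softmax output activation are not reported in this excerpt and are placeholders, and a Reshape layer stands in for the lambda layer mentioned in the text.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Embedding, Reshape, Conv2D, MaxPooling2D,
                                     Dropout, Flatten, Dense)

WINDOW = 33      # window size
EMB_DIM = 21     # embedding dimension
N_FILTERS = 64   # placeholder: the filter count is not reported in this excerpt

def build_model():
    model = Sequential([
        # Integer-encoded window -> learned 21-dimensional residue vectors.
        Embedding(input_dim=21, output_dim=EMB_DIM, input_length=WINDOW),
        # Add a channel axis so 2-D convolutions can be applied.
        Reshape((WINDOW, EMB_DIM, 1)),
        # 17 x 3 filters with no padding, so the central lysine is covered by every stride.
        Conv2D(N_FILTERS, kernel_size=(17, 3), activation="relu"),
        Dropout(0.6),
        Conv2D(N_FILTERS, kernel_size=(17, 3), activation="relu"),
        MaxPooling2D(pool_size=(1, 2)),   # placeholder pooling size
        Flatten(),
        Dense(768, activation="relu"),
        Dense(256, activation="relu"),
        Dense(2, activation="softmax"),   # [0 1] = positive, [1 0] = negative
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

build_model().summary()
```

With these settings the first 17 × 3 valid-padding convolution reduces the 33 × 21 input to a 17 × 19 feature map, so the central (17th) position of the window is covered by every filter placement.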
The binary cross-entropy is given by: $$ -\frac{1}{N}\sum \limits_{i=1}^N\left[{y}_i\mathit{\log}\left({\hat{y}}_i\right)+\left(1-{y}_i\right)\mathit{\log}\left(1-{\hat{y}}_i\right)\right] $$ where y is the label (1 for positive and 0 for negative) and \( {\hat{y}}_i \) is the predicted probability of the site being positive for all N points. For each positive site (y = 1), it adds \( \log \left({\hat{y}}_i\right) \) to the loss, that is, the log probability of it being positive. Conversely, for each negative site (y = 0), it adds \( \log \left(1-{\hat{y}}_i\right) \), that is, the log probability of it being negative. The fully connected layers contained two dense layers with 768 and 256 nodes, respectively, with the final output layer containing 2 nodes. Model evaluation and performance metrics In this study, 10-fold cross validation was used to evaluate the performance of the model. In 10-fold cross validation, the data are partitioned into 10 equal parts. Then, one-part is left out for validation and training is performed on remaining 9 parts. This process is repeated until all parts are used for validation. Confusion Matrix (CM), Matthew's Correlation Coefficient (MCC) and Receiver Operating Characteristics (ROC) curve were used as performance metrics. The ROC curve is a graphical plot that illustrates the diagnostic ability of a binary classifier whereas area under curve (AUC) represents the degree or measure of separability. Since identification of succinylation sites is a binary classification problem, the confusion matrix size is 2 × 2 composed of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). Other metrics calculated using these variables were accuracy, sensitivity (i.e., the true positive rate) and specificity (i.e., the true negative rate). $$ Accuracy=\frac{TP+ TN}{TP+ TN+ FP+ FN}\times 100 $$ $$ Sensitivity=\frac{TP}{TP+ FN}\times 100 $$ $$ Specificity=\frac{TN}{TN+ FP}\times 100 $$ $$ MCC=\frac{(TP)(TN)-(FP)(FN)}{\sqrt{\left( TP+ FP\right)\left( TP+ FN\right)\left( TN+ FP\right)\left( TN+ FN\right)}} $$ Optimal window size and encoding Initially, window sizes from 9 to 45 were tested with both one-hot encoding and embedding. For example, for a window size of 9, the lysine (K) residue was set in the middle of the window with 4 amino acid residues upstream and 4 amino acid residues downstream. A window size of 33 yielded the highest MCC for both one-hot encoding and embedding, with further increases in window size resulting in reductions in MCC (Table 3). Likewise, the highest specificity and AUC were achieved using a window size of 33, with only a marginal reduction in sensitivity when using embedding (Table 3 and Fig. 2). Hence, a window size of 33 was considered as the optimal window size for this study. Interestingly, a window size of 33 was also utilized by Wang et al. for phosphorylation site prediction using one-hot encoding [15]. It is worth noting that the consistency in window size between this study and the previous study by Wang et al. correlates with the known range for many inter-protein amino acid interactions. Importantly, with only a few exceptions, embedding performed better than one-hot encoding for every window size tested. Therefore, for this study, embedding was chosen for encoding. Table 3 Performance metrics for different window sizes. The highest values in each category are highlighted in boldface. 
MCC: Matthew's Correlation Coefficient ROC curve for different window sizes for embedding Identification of optimal embedding dimension Next, we sought to identify the optimal embedding dimension. To this end, dimensions ranging from 9 to 33 were tested for embedding. It is important to note that increasing the dimension of embedding will result in higher computational cost. Therefore, we aimed to identify the smallest dimension that struck a balance across all metrics. Because MCC is often used as a surrogate of overall model performance, it was prioritized slightly over the other parameters. While both dimension sizes of 15 and 21 struck such a balance, the performance metrics were generally better using a dimension size of 21. Indeed, a dimension size of 21 achieved the highest MCC, with sensitivity and specificity scores that were within 7% of the maximum scores achieved in these areas (Table 4). Consistently, dimension size of 15 and 21 achieved the highest AUC score (Fig. 3). Taken together, these data suggest that a dimension size of 21 is optimal using our architecture. Therefore, a dimension size of 21 was selected for model development. The dimension size is consistent with the fact that 20 amino acid residues and 1 pseudo residue were present in each vector. Table 4 Performance metrics for different embedding dimensions. The highest values in each category are shown in bold. MCC: Matthew's Correlation Coefficient ROC curves for different embedding dimensions Cross-validation and alternative classifiers Our final model, which we termed DeepSuccinylSite, utilizes embedding with window and dimension sizes of 33 and 21, respectively. Based on five rounds of 10-fold cross-validation, DeepSuccinylSite exhibited robustness with consistent performance metrics with an average MCC of 0.519 +/− 0.023 and an AUC of 0.823 (Additional file 1: Table S3). We also implemented additional Deep Learning architectures and different machine learning models where the input was hand-crafted 'physico-chemical' based features rather than the protein sequence alone. Essentially, this implementation takes various physiochemical features combined with XGBoost to extract prominent features. We excluded any sequences with '-', while calculating the features. We then used XGBoost to extract prominent features, which provided better accuracy and obtained a total of 160 features at threshold of 0.00145. Interestingly, the performance of the methods using these approaches were not as good as DeepSuccinylSite, whose input is protein sequence alone (Additional file 1: Table S2). Further information on performance of our model are included in Additional file 1. Additionally, the results of feature-based Deep Learning architecture is shown in Additional file 1: Figure S1. Comparison with other deep learning architectures Other DL architectures, such as Recurrent Neural Network (RNN) [25] and Long Short-Term Memory (LSTM) [26], as well as the combined model, LSTM-RNN, were also implemented for one-hot encoding (DeepSuccinylSite-one_hot) and compared with the independent test result of DeepSuccinylSite (Table 5). Additionally, we implemented an additional DL architecture, where the input includes other features beyond the primary amino acid sequence. 
Specifically, this implementation utilizes a combination of 1) physiochemical features, such as Pseudo Amino Acid Composition (PAAC) and k-Spaced Amino Acid Pairs (AAP); 2) autocorrelation features, such as Moreau-Broto autocorrelation and Composition, Transition and Distribution (CTD) features; and 3) entropy features, such as Shannon entropy, relative entropy, and information gain. We excluded any sequences containing '-' while calculating the features. We then used XGBoost to extract prominent features, which provided better accuracy, and obtained a total of 160 features at a threshold of 0.00145. The version of the algorithm using features is termed DeepSuccinylSite-feature based. Table 5 Comparison of DeepSuccinylSite with other deep learning architectures for window size 33. The highest value in each category is shown in bold. MCC: Matthew's Correlation Coefficient; RNN: Recurrent neural network; LSTM: Long short-term memory model For a fair comparison, we used the same balanced training and testing dataset for a window size of 33 and one-hot encoding for these three DL architectures. The results are shown in Table 5 and the ROC curve is shown in Fig. 4. The results for our DL model with embedding (DeepSuccinylSite) are also shown. The detailed architectures of these models, including results for other window sizes, are discussed in Additional file 1, and the performance of these methods is presented in Additional file 1: Table S1. For one-hot encoding, DeepSuccinylSite achieved better MCC and AUC scores than the other DL architectures. Likewise, our final model using embedding achieved the highest MCC and AUC scores of any model (Table 5). ROC curve for different deep learning architectures Independent test comparisons with existing models Next, the performance of DeepSuccinylSite was compared with other succinylation site predictors using an independent test set, as mentioned in the benchmark dataset section earlier. During these analyses, some of the most widely used tools for succinylation site prediction, such as iSuc-PseAAC [8], iSuc-PseOpt [9], pSuc-Lys [10], SuccinSite [11], SuccinSite2.0 [12], GPSuc [13] and PSuccE [14], were considered. All these methods use the same training and independent test data sets as in Table 6. The performance metrics for these previously published methods were taken from their respective manuscripts, mainly based on the comparison done in PSuccE [14]. Table 6 Comparison of DeepSuccinylSite with existing predictors using an independent test dataset. The highest value in each category is shown in bold DeepSuccinylSite achieved a 58.3% higher sensitivity score than the next highest performing model (Table 6). In contrast, our model exhibited the lowest specificity score of all models tested. However, the specificity score achieved by DeepSuccinylSite was only 22.2% lower than that of the top-ranked methods. Consequently, DeepSuccinylSite achieved a significantly higher performance as measured by MCC. Indeed, DeepSuccinylSite exhibited an ~ 62% increase in MCC when compared to the next highest method, GPSuc. Taken together, the novel architecture we have described, termed DeepSuccinylSite, shows significantly improved performance for precise and accurate prediction of succinylation sites. Discussion Succinylation is a relatively newly discovered PTM that is garnering interest due to the biological implications of introducing a large (100 Da) chemical moiety that changes the charge of the modified residue. Experimental detection of succinylation is labor intensive and expensive.
Due to the availability of a relatively large dataset containing 4750 positive sites for training, it was possible for us to implement different DL architectures. The model optimization process described in this paper led to a significant improvement in precise prediction of succinylation sites when compared to models previously described in the literature. Two types of encoding were considered for this study, one-hot encoding and embedding. Our results suggest that embedding is an optimal approach, as it allows the model to learn representations similar to the amino acid features, which results in further improvements in the ability to identify putative sites of modification. Furthermore, DeepSuccinylSite corroborates previous indications in the literature that have suggested a window size of 33 optimally reflects local chemical interactions in proteins that predict sites of PTM due to its performance in metrics like MCC. One of the important parameters was embedding dimension. DeepSuccinylSite was trained with different dimensions ranging from 9 to 33. With increase in dimension, training time also increased. Though there was not a significant difference between dimension sizes 15 and 21, considering the number of amino acid residues and slightly better result, 21 was chosen as the embedding dimension for this study. Finally, for window size 33 with embedding dimension 21, DeepSuccinylSite achieved efficiency scores of 0.79, 0.69 and 0.48 for sensitivity, specificity and MCC, respectively. For further improvements, instead of current protein sequence-based window sequence, we can extract structure-based window sequence centered around the site of interest and use that window as the input. When the structure of the protein is not available, protein structure prediction pipelines like I-TASSER [27] or ROSETTA [28], can first be used to predict the structure. Since the structure of the proteins are more conserved than sequence, we hope to capture evolutionary information better and thus obtain better prediction accuracy. Moreover, we could also improve the performance of the approach by creating multiple models using sequence-based windows, structure-based windows, physiochemical properties and then utilize voting approaches. Lastly, multi-window input, as done in DeepPhos [16], using our encoding technique can improve the performance. However, more datasets are required for these schemes and once more experimental data becomes available, we could explore this in more detail. We also explored the effects of data size on prediction performance (Additional file 1: Table S4 and Additional file 1: Figure S2). These studies suggest that, initially, the performance of our model increases with the increasing data size before reaching a plateau. This is somewhat contrary to the general consensus in deep learning that performance keeps increasing with the data size according to a power law. However, with more experimental data likely to be available in the future, we could perform a more comprehensive study on how performance scales with increasing data size. Perhaps, this might also suggest that with increasing data we might have to develop more complex deep learning models. Utilizing the unique architecture described in this paper, the DeepSuccinylSite model shows a substantial improvement in predictive quality over existing models. The utility of this model is in its ability to predict lysine residues that are likely to be succinylated. 
Accordingly, this model could be utilized to optimize workflows for experimental verification of succinylation sites. Specifically, use of this model could significantly reduce the time and cost of identification of these sites. This model may also have some utility in hypothesis generation when a PTM presents itself as a likely explanation for an observed biological phenomenon. Conclusions In this study, we describe the development of DeepSuccinylSite, a novel and effective deep learning architecture for the prediction of succinylation sites. The primary advantage of using this model over other machine learning architectures is the elimination of feature extraction. As a consequence, this model could easily be applied to other PTM sites. Since this model only utilizes two convolutional layers and one max-pooling layer to avoid overfitting for the current data, provision of new data sources may allow for further modification of this model in the future. In conclusion, DeepSuccinylSite is an effective deep learning architecture with best-in-class results for prediction of succinylation sites and potential for use in general PTM prediction problems. Availability of data and materials The datasets and models analyzed during the current study, along with the supplementary materials, are available at https://github.com/dukkakc/DeepSuccinylSite. Abbreviations AUC: Area under ROC curve; DL: Deep learning; LSTM: Long short-term memory; MCC: Matthews correlation coefficient; PTM: Post-translational modification; ReLU: Rectified linear unit; RNN: Recurrent neural network; ROC: Receiver operator characteristics References Hasan MM, Khatun MS. Prediction of protein Post-Translational Modification sites: An overview. Ann Proteom Bioinform. 2018;2:049-57. https://doi.org/10.29328/journal.apb.1001005. Medzihradszky KF. Peptide sequence analysis. Methods Enzymol. 2005;402:209–44. Agarwal KL, Kenner GW, Sheppard RC. Feline gastrin. An example of peptide sequence analysis by mass spectrometry. J Am Chem Soc. 1969;91(11):3096–7. Welsch DJ, Nelsestuen GL. Amino-terminal alanine functions in a calcium-specific process essential for membrane binding by prothrombin fragment 1. Biochemistry. 1988;27(13):4939–45. Slade DJ, Subramanian V, Fuhrmann J, Thompson PR. Chemical and biological methods to detect post-translational modifications of arginine. Biopolymers. 2014;101(2):133–43. Umlauf D, Goto Y, Feil R. Site-specific analysis of histone methylation and acetylation. Methods Mol Biol. 2004;287:99–120. Jaffrey SR, Erdjument-Bromage H, Ferris CD, Tempst P, Snyder SH. Protein S-nitrosylation: a physiological signal for neuronal nitric oxide. Nat Cell Biol. 2001;3(2):193–7. Xu Y, Ding YX, Ding J, Lei YH, Wu LY, Deng NY. iSuc-PseAAC: predicting lysine succinylation in proteins by incorporating peptide position-specific propensity. Sci Rep. 2015;5:10184. Jia J, Liu Z, Xiao X, Liu B, Chou KC. iSuc-PseOpt: identifying lysine succinylation sites in proteins by incorporating sequence-coupling effects into pseudo components and optimizing imbalanced training dataset. Anal Biochem. 2016;497:48–56. Jia J, Liu Z, Xiao X, Liu B, Chou KC. pSuc-Lys: predict lysine succinylation sites in proteins with PseAAC and ensemble random forest approach. J Theor Biol. 2016;394:223–30. Hasan MM, Yang S, Zhou Y, Mollah MNH. SuccinSite: a computational tool for the prediction of protein succinylation sites by exploiting the amino acid patterns and properties. Mol BioSyst. 2016;12(3):786–95. Hasan MM, Khatun MS, Mollah MNH, Yong C, Guo D. A systematic identification of species-specific protein succinylation sites using joint element features information.
Int J Nanomedicine. 2017;12:6303–15. Hasan MM, Kurata H. GPSuc: global prediction of generic and species-specific Succinylation sites by aggregating multiple sequence features. PLoS One. 2018;13(10):e0200283. Ning Q, Zhao X, Bao L, Ma Z, Zhao X. Detecting Succinylation sites from protein sequences using ensemble support vector machine. BMC Bioinformatics. 2018;19(1):237. Wang D, Zeng S, Xu C, Qiu W, Liang Y, Joshi T, et al. MusiteDeep: a deep-learning framework for general and kinase-specific phosphorylation site prediction. Bioinformatics. 2017;33(24):3909–16. Luo F, Wang M, Liu Y, Zhao XM, Li A. DeepPhos: prediction of protein phosphorylation sites with deep learning. Bioinformatics. 2019;35(16):2766–73. Fu H, Yang Y, Wang X, Wang H, Xu Y. DeepUbi: a deep learning framework for prediction of ubiquitination sites in proteins. BMC Bioinformatics. 2019;20(1):86. Wu M, Yang Y, Wang H, Xu Y. A deep learning method to more accurately recall known lysine acetylation sites. BMC Bioinformatics. 2019;20(1):49. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436. Chollet F, et al. Keras; 2015. https://keras.io. Bengio Y, Ducharme R, Vincent P. A neural probabilistic language model; 2001. Kulmanov M, Khan MA, Hoehndorf R. DeepGO: predicting protein functions from sequence and interactions using a deep ontology-aware classifier. Bioinformatics. 2017;34(4):660–8. Kingma DP, Ba J. Adam: a method for stochastic optimization. arXiv e-prints [Internet]. 2014. Available from: https://ui.adsabs.harvard.edu/abs/2014arXiv1412.6980K. Kiefer J, Wolfowitz J. Stochastic estimation of the maximum of a regression function. Ann Math Stat. 1952;23(3):462–6. Jain LC, Medsker LR. Recurrent neural networks: design and applications. CRC Press, Inc.; 1999. 416 p. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9(8):1735–80. Roy A, Kucukural A, Zhang Y. I-TASSER: a unified platform for automated protein structure and function prediction. Nat Protoc. 2010;5(4):725–38. DiMaio F, Leaver-Fay A, Bradley P, Baker D, Andre I. Modeling symmetric macromolecular structures in Rosetta3. PLoS One. 2011;6(6):e20450. About this supplement This article has been published as part of BMC Bioinformatics Volume 21 Supplement 3, 2020: Proceedings of the Joint International GIW & ABACBS-2019 Conference: bioinformatics (part 2). The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-21-supplement-3. Funding This work was supported by National Science Foundation (NSF) grant nos. 1901793, 1564606 and 1901086 (to DK). RHN is supported by an HBCU-UP Excellence in Research Award from NSF (1901793) and an SC1 Award from the National Institutes of Health National Institute of General Medical Science (5SC1GM130545). HS was supported by JSPS KAKENHI Grant Numbers JP18H01762 and JP19H04176. Department of Computational Science and Engineering, North Carolina A&T State University, Greensboro, NC, USA Niraj Thapa, Meenal Chaudhari & Sean McManus Department of Computer Science, North Carolina A&T State University, Greensboro, NC, USA Kaushik Roy Department of Biology, North Carolina A&T State University, Greensboro, NC, USA Robert H.
Newman Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan Hiroto Saigo Electrical Engineering and Computer Science Department, Wichita State University, Wichita, KS, USA Dukka B. KC Niraj Thapa Meenal Chaudhari Sean McManus DK, SH, RN, KR conceived of and designed the experiments. NT and MC performed the experiments and data analysis. NT, DK, SMM and MC wrote the paper. RN, SH, DK, KR and SMM revised the manuscript. All authors read and approved the final manuscript. Correspondence to Dukka B. KC. Contains supplementary tables and figures referred to in the text. We describe various other deep learning architectures, other machine learning architectures, cross-validation results and independent test results for different sample sizes. Table S1. Independent Test Results. Table S2. Independent test result for different machine learning architectures. Figure S1. ROC curve for feature based DL-model. Table S3. Cross-validation (CV) results for different run. Table S4. Independent test results for different sample sizes. Figure S2. MCC and AUC for independent test for different sample sizes. Thapa, N., Chaudhari, M., McManus, S. et al. DeepSuccinylSite: a deep learning based approach for protein succinylation site prediction. BMC Bioinformatics 21, 63 (2020). https://doi.org/10.1186/s12859-020-3342-z Succinylation
CommonCrawl
Quantum Phases of the Shraiman-Siggia Model Subir Sachdev Physics , 1993, DOI: 10.1103/PhysRevB.49.6770 Abstract: We examine phases of the Shraiman-Siggia model of lightly-doped, square lattice quantum antiferromagnets in a self-consistent, two-loop, interacting magnon analysis. We find magnetically-ordered and quantum-disordered phases both with and without incommensurate spin correlations. The quantum disordered phases have a pseudo-gap in the spin excitation spectrum. The quantum transition between the magnetically ordered and commensurate quantum-disordered phases is argued to have the dynamic critical exponent $z=1$ and the same leading critical behavior as the disordering transition in the pure $O(3)$ sigma model. The relationship to experiments on the doped cuprates is discussed. Quantum phase transitions in spins systems and the high temperature limit of continuum quantum field theories Abstract: We study the finite temperature crossovers in the vicinity of a zero temperature quantum phase transition. The universal crossover functions are observables of a continuum quantum field theory. Particular attention is focussed on the high temperature limit of the continuum field theory, the so-called ``quantum-critical'' region. Basic features of crossovers are illustrated by a simple solvable model of dilute spinless fermions, and a partially solvable model of dilute bosons. The low frequency relaxational behavior of the quantum-critical region is displayed in the solution of the transverse-field Ising model. The insights from these simple models lead to a fairly complete understanding of the system of primary interest: the two-dimensional quantum rotor model, whose phase transition is expected to be in the same universality class as those in antiferromagnetic Heisenberg spin models. Recent work on the experimental implications of these results for the cuprate compounds is briefly reviewed. Quantum Phase Transitions and Conserved Charges Physics , 1993, DOI: 10.1007/BF01317409 Abstract: The constraints on the scaling properties of conserved charge densities in the vicinity of a zero temperature ($T$), second-order quantum phase transition are studied. We introduce a generalized Wilson ratio, characterizing the non-linear response to an external field, $H$, coupling to any conserved charge, and argue that it is a completely universal function of $H/T$: this is illustrated by computations on model systems. We also note implications for transitions where the order parameter is a conserved charge (as in a $T=0$ ferromagnet-paramagnet transition). Universal, finite temperature, crossover functions of the quantum transition in the Ising chain in a transverse field Physics , 1995, DOI: 10.1016/0550-3213(95)00657-5 Abstract: We consider finite temperature properties of the Ising chain in a transverse field in the vicinity of its zero temperature, second order quantum phase transition. New universal crossover functions for static and dynamic correlators of the ``spin'' operator are obtained. The static results follow from an early lattice computation of McCoy, and a method of analytic continuation in the space of coupling constants. The dynamic results are in the ``renormalized classical'' region and follow from a proposed mapping of the quantum dynamics to the Glauber dynamics of a classical Ising chain.
Theory of finite temperature crossovers near quantum critical points close to, or above, their upper-critical dimension Physics , 1996, DOI: 10.1103/PhysRevB.55.142 Abstract: A systematic method for the computation of finite temperature ($T$) crossover functions near quantum critical points close to, or above, their upper-critical dimension is devised. We describe the physics of the various regions in the $T$ and critical tuning parameter ($t$) plane. The quantum critical point is at $T=0$, $t=0$, and in many cases there is a line of finite temperature transitions at $T = T_c (t)$, $t < 0$ with $T_c (0) = 0$. For the relativistic, $n$-component $\phi^4$ continuum quantum field theory (which describes lattice quantum rotor ($n \geq 2$) and transverse field Ising ($n=1$) models) the upper critical dimension is $d=3$, and for $d<3$, $\epsilon=3-d$ is the control parameter over the entire phase diagram. In the region $|T - T_c (t)| \ll T_c (t)$, we obtain an $\epsilon$ expansion for coupling constants which then are input as arguments of known {\em classical, tricritical,} crossover functions. In the high $T$ region of the continuum theory, an expansion in integer powers of $\sqrt{\epsilon}$, modulo powers of $\ln \epsilon$, holds for all thermodynamic observables, static correlators, and dynamic properties at all Matsubara frequencies; for the imaginary part of correlators at real frequencies ($\omega$), the perturbative $\sqrt{\epsilon}$ expansion describes quantum relaxation at $\hbar \omega \sim k_B T$ or larger, but fails for $\hbar \omega \sim \sqrt{\epsilon} k_B T$ or smaller. An important principle, underlying the whole calculation, is the analyticity of all observables as functions of $t$ at $t=0$, for $T>0$; indeed, analytic continuation in $t$ is used to obtain results in a portion of the phase diagram. Our method also applies to a large class of other quantum critical points and their associated continuum quantum field theories. Condensed matter and AdS/CFT Physics , 2010, DOI: 10.1007/978-3-642-04864-7_9 Abstract: I review two classes of strong coupling problems in condensed matter physics, and describe insights gained by application of the AdS/CFT correspondence. The first class concerns non-zero temperature dynamics and transport in the vicinity of quantum critical points described by relativistic field theories. I describe how relativistic structures arise in models of physical interest, present results for their quantum critical crossover functions and magneto-thermoelectric hydrodynamics. The second class concerns symmetry breaking transitions of two-dimensional systems in the presence of gapless electronic excitations at isolated points or along lines (i.e. Fermi surfaces) in the Brillouin zone. I describe the scaling structure of a recent theory of the Ising-nematic transition in metals, and discuss its possible connection to theories of Fermi surfaces obtained from simple AdS duals. Quantum phase transitions of antiferromagnets and the cuprate superconductors Abstract: I begin with a proposed global phase diagram of the cuprate superconductors as a function of carrier concentration, magnetic field, and temperature, and highlight its connection to numerous recent experiments. The phase diagram is then used as a point of departure for a pedagogical review of various quantum phases and phase transitions of insulators, superconductors, and metals. 
The bond operator method is used to describe the transition of dimerized antiferromagnetic insulators between magnetically ordered states and spin-gap states. The Schwinger boson method is applied to frustrated square lattice antiferromagnets: phase diagrams containing collinear and spirally ordered magnetic states, Z_2 spin liquids, and valence bond solids are presented, and described by an effective gauge theory of spinons. Insights from these theories of insulators are then applied to a variety of symmetry breaking transitions in d-wave superconductors. The latter systems also contain fermionic quasiparticles with a massless Dirac spectrum, and their influence on the order parameter fluctuations and quantum criticality is carefully discussed. I conclude with an introduction to strong coupling problems associated with symmetry breaking transitions in two-dimensional metals, where the order parameter fluctuations couple to a gapless line of fermionic excitations along the Fermi surface. Where is the quantum critical point in the cuprate superconductors? Physics , 2009, DOI: 10.1002/pssb.200983037 Abstract: Transport measurements in the hole-doped cuprates show a "strange metal" normal state with an electrical resistance which varies linearly with temperature. This strange metal phase is often identified with the quantum critical region of a zero temperature quantum critical point (QCP) at hole density x=x_m, near optimal doping. A long-standing problem with this picture is that low temperature experiments within the superconducting phase have not shown convincing signatures of such a optimal doping QCP (except in some cuprates with small superconducting critical temperatures). I review theoretical work which proposes a simple resolution of this enigma. The crossovers in the normal state are argued to be controlled by a QCP at x_m linked to the onset of spin density wave (SDW) order in a "large" Fermi surface metal, leading to small Fermi pockets for x< x_m in the underdoped regime, so that SDW order is only present for x Nonzero temperature transport near fractional quantum Hall critical points Abstract: In an earlier work, Damle and the author (Phys. Rev. B in press; cond-mat/9705206) demonstrated the central role played by incoherent, inelastic processes in transport near two-dimensional quantum critical points. This paper extends these results to the case of a quantum transition in an anyon gas between a fractional quantized Hall state and an insulator, induced by varying the strength of an external periodic potential. We use the quantum field theory for this transition introduced by Chen, Fisher and Wu (Phys. Rev. B 48, 13749 (1993)). The longitudinal and Hall conductivities at the critical point are both $e^2/ h$ times non-trivial, fully universal functions of $\hbar \omega / k_B T$ ($\omega$ is the measuring frequency). These functions are computed using a combination of perturbation theory on the Kubo formula, and the solution of a quantum Boltzmann equation for the anyonic quasiparticles and quasiholes. The results include the values of the d.c. conductivities ($\hbar \omega /k_B T \to 0$); earlier work had been restricted strictly to T=0, and had therefore computed only the high frequency a.c. conductivities with $\hbar \omega / k_B T \to \infty$. 
Conductivity of thermally fluctuating superconductors in two dimensions Physics , 2003, DOI: 10.1016/j.physc.2004.02.078 Abstract: We review recent work on a continuum, classical theory of thermal fluctuations in two dimensional superconductors. A functional integral over a Ginzburg-Landau free energy describes the amplitude and phase fluctuations responsible for the crossover from Gaussian fluctuations of the superconducting order at high temperatures, to the vortex physics of the Kosterlitz-Thouless transition at lower temperatures. Results on the structure of this crossover are presented, including new results for corrections to the Aslamazov-Larkin fluctuation conductivity.
CommonCrawl
One or two questions about so-called "absolute" set theories Nearly fifty years ago Takeuti called attention to a phenomenon that occurs in connection with the construction of set theories such as ZF that result in a hierarchy of sets (indexed by ordinal numbers). The phenomenon is that the collection of theories (regarded as sets of sentences belonging to a formalized first order language) is much smaller than the collection of sets (belonging to the universe of ZF) which could serve as models for these theories. More precisely, let L denote a classical first order language in which ZF is formalized. The only "non-logical" symbols that occur in ZF are the symbol for "membership" and the symbol for "equality". For each ordinal number z, let R(z) denote the set of all sets of rank z belonging to the universe of (some extension of) ZF. Assume that this extension of ZF is consistent, so that each particular set R(z) either is or is not a model of any particular sentence of L. The collection of sentences of L is denumerably infinite. Each consistent theory formalizable in L can be identified with the collection of its theorems which are sentences of L. Clearly, the cardinal number of the collection of all such consistent theories cannot exceed the cardinal number of the continuum. Consequently there exists at least one (consistent) theory T, formalizable in L, and an unbounded collection Q of ordinal numbers such that y belongs to Q if and only if the set R(y) is a model of the theory T. Takeuti called theories such as T "absolute" set theories and believed that only one could exist. That one would be the "ultimate" or "best possible" set theory, in which all the "meaningful" questions that could be asked about sets would be answered. Have there been any further investigations of these so-called "absolute" set theories and if so, have these yielded any interesting mathematical theorems? In particular, is it known whether more than one of them can or must exist? Garabed Gulbenkian There must be something missing. Clearly there are infinitely many such theories. For each $n \in \omega$ consider the statement, "There is a largest limit ordinal $\alpha$ and exactly $n$ ordinals above it." – Monroe Eskew Jul 6 '14 at 20:33 It's unclear what you mean by 'ordinal number $z$'. If you mean an object $z$ in a model $V$ of ZF such that $V \vDash$ '$z$ is an ordinal', then any extension of ZF is "absolute". However, it sounds that you mean something else. – François G. Dorais♦ Jul 6 '14 at 20:37 I've never heard of this, and googling 'Takeuti "absolute theories"' yields no hits; could you give a citation? – Noah Schweber Jul 6 '14 at 21:14 "Absolute" set theory (or theories) are defined and discussed on pages 79-80 of the book "Foundations of Mathematics: Symposium Papers Commemorating the Sixtieth Birthday of Kurt Godel", which was published in 1969 by Springer Verlag. The article by Takeuti entitled "The Universe of Set Theory" appears on pages 74-128 of this book. After defining "absolute" set theories, Takeuti does not say much more about them in this article. That is why I am asking whether there has been any further work in this area. – Garabed Gulbenkian Jul 8 '14 at 12:51 There is indeed a crucial point missing in your description. Takeuti's idea is described in his article as follows: "Let us fix a language of set theory e.g.
the first order language with one constant $\epsilon$. Let $\mathfrak{T}_\alpha$ be the set theory of $R(\alpha)$ w.r.t. this language. Since the creation of ordinals is endless and the cardinality of the set of all possible theories in our fixed language is bounded, some theory must appear endlessly many times among [the] $\mathfrak{T}_\alpha$s. We believe that only one theory will appear overwhelmingly densely among them. We wish to define the absolute set theory to be this theory." (pg. 80, emphasis added) The emphasized part is the whole piece that makes this meaningful, since as noted in the comments, plenty of theories will occur unboundedly often. Now, what does Takeuti mean by "overwhelmingly densely?"$^*$ This is the crucial point! Note, by the way, that everything here takes place within a fixed model. So once we specify some appropriate and reasonably definable notion of "overwhelming density," each model will have an object it thinks of as "the absolute theory." In particular, there is no a priori reason to expect sentences of the fomr "$\varphi$ is in the absolute theory" to be provable in ZF(C), even after we've managed to define a notion of "absolute theory." One common notion of "overwhelmingly dense" is club (closed unbounded): a class of ordinals $\mathcal{C}\subseteq ON$ is club if it is unbounded and for every $\alpha$, if $sup(\mathcal{C}\cap\alpha)=\alpha$ then $\alpha\in\mathcal{C}$. Now it is certainly true that at most one theory will occur club frequently in this sense. However, it is not obvious to me that there need be a theory with this property! For example, in ZFC any uncountable regular $\kappa$ can be partitioned into $\kappa$-many disjoint stationary sets, none of which can contain a club (since their complements are each stationary). So if we want a universe with choice in the background, it seems extremely odd to assume that partitioning the ordinals into continuum-many pieces will yield a club-containing piece (and in fact I suspect this is simply provably false, but I don't have time to work it out right now). However, perhaps it is consistent with $ZF+DC$ that on any uncountable ordinal with uncountable cofinality, the filter generated by club sets is an ultrafilter ($ZF+AD$ does imply that the club filter on $\omega_1$ is an ultrafilter, but I don't believe it implies this for arbitrary ordinals); more generally, it may be consistent with $ZF+DC$ that any class partition of the universe into set-many pieces has a club-containing piece. And, of course, there's the fact that we're not partitioning the universe in some random way, so maybe some appropriate axioms about reflection properties would lead to the theory of $V$ itself being $V$'s absolute theory. So I think where this leaves your question is: The notion of "absolute set theory" depends on a notion of "overwhelming density," and it is not at all clear what this should be, especially if we want different models to have at least some agreement on their absolute theories. This means that the question of whether such theories can exist is too vague. The simplest precise form of this question would be for "overwhelmingly dense" to mean "contains a club," in which case I suspect the answer is that it is consistent to have such a theory around a model of ZF, but I don't know how to prove it. As to the question of whether people continued to look at these, I suspect the answer is somewhat yes, if indirectly, but I don't know. 
Certainly I've never heard someone explicitly talking about this idea, but on the other hand it seems closely related both to reflection and to inner model theory (e.g., $0^\#$ can be thought of as "the absolute theory of $L$ in $V$" if it exists). $^*$ Takeuti actually does go on to talk about what "overwhelmingly dense" could mean; but I haven't had time to more than skim the paper, so I don't want to say something wrong about what he does. Let me add, unrelatedly, that I think the ideas developed in this article in the first few pages, on the complexity of the continuum function, etc., are quite interesting! Basically, Takeuti asks "what is the (ordinal-)computability-theoretic complexity of the function $f: \alpha\mapsto \beta\iff 2^{\aleph_\alpha}=\aleph_\beta$?" (and related questions). I don't know how much this sort of thing has been pursued, since most of the ordinal computability I'm aware of takes place within $L$, but I'd be very interested to hear about it. Noah Schweber It's not consistent with ZF+DC that every partition of Ord into a set of (definable) classes has a club in one of the pieces. The reason is that DC (or even countable choice) implies that $\omega_1$ is regular, and so every club will contain ordinals of cofinality $\omega$ and ordinals of cofinality $\omega_1$. This doesn't quite kill the idea of taking "overwhelmingly dense" to mean containing a club. That's because "$z$ has cofinality $\omega$" might not be expressible as a first-order property in $R(z)$. – Andreas Blass Aug 2 '14 at 2:55
CommonCrawl
Dual category 2010 Mathematics Subject Classification: Primary: 18A05 to a category $\mathcal{C}$ The category $\mathcal{C}^o$ with the same objects as $\mathcal{C}$ and with morphism sets $\mathrm{Hom}_{\mathcal{C}^o}(A,B) = \mathrm{Hom}_{\mathcal{C}}(B,A)$ ( "reversal of arrows" ). Composition of two morphisms $u$ and $v$ in the category $\mathcal{C}^o$ is defined as composition of $v$ with $u$ in $\mathcal{C}$. The concepts and the statements in the category $\mathcal{C}$ are replaced by dual concepts and statements in $\mathcal{C}^o$. Thus, the concept of an epimorphism is dual to the concept of a monomorphism; the concept of a projective object is dual to that of an injective object, that of the direct product to that of the direct sum, etc. A contravariant functor on $\mathcal{C}$ becomes covariant on $\mathcal{C}^o$. A dual category may sometimes have a direct realization; thus, the category of discrete Abelian groups is equivalent to the category dual to the category of compact Abelian groups (Pontryagin duality), while the category of affine schemes is equivalent to the category dual to the category of commutative rings with a unit element. The dual category to a category $\mathcal{C}$ is also called the opposite category, and one also uses the notation $\mathcal{C}^{\mathrm{op}}$ (see Category). Dual category. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Dual_category&oldid=42900 This article was adapted from an original article by V.I. Danilov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
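As a worked illustration of the two defining clauses above: if $u \colon A \to B$ and $v \colon B \to C$ are morphisms of $\mathcal{C}$, and $u^o \colon B \to A$, $v^o \colon C \to B$ denote the corresponding morphisms of $\mathcal{C}^o$, then composition in $\mathcal{C}^o$ reads $u^o \circ v^o = (v \circ u)^o \colon C \to A$. Accordingly, a contravariant functor $F \colon \mathcal{C} \to \mathcal{D}$ is the same thing as a covariant functor $\mathcal{C}^o \to \mathcal{D}$; for example, $\mathrm{Hom}_{\mathcal{C}}(-,X)$ defines a covariant functor from $\mathcal{C}^o$ to the category of sets.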
Performance of unanchored matching-adjusted indirect comparison (MAIC) for the evidence synthesis of single-arm trials with time-to-event outcomes Yawen Jiang ORCID: orcid.org/0000-0002-0498-06621 & Weiyi Ni2 The objectives of the present study were to evaluate the performance of a time-to-event data reconstruction method, to assess the bias and efficiency of unanchored matching-adjusted indirect comparison (MAIC) methods for the analysis of time-to-event outcomes, and to propose an approach to adjust the bias of unanchored MAIC when omitted confounders across trials may exist. To evaluate the methods using a Monte Carlo approach, a thousand repetitions of simulated data sets were generated for two single-arm trials. In each repetition, researchers were assumed to have access to individual-level patient data (IPD) for one of the trials and the published Kaplan-Meier curve of another. First, we compared the raw data and the reconstructed IPD using Cox regressions to determine the performance of the data reconstruction method. Then, we evaluated alternative unanchored MAIC strategies with varying completeness of covariates for matching in terms of bias, efficiency, and confidence interval coverage. Finally, we proposed a bias factor-adjusted approach to gauge the true effects when unanchored MAIC estimates might be biased due to omitted variables. Reconstructed data sufficiently represented raw data in the sense that the difference between the raw and reconstructed data was not statistically significant over the one thousand repetitions. Also, the bias of unanchored MAIC estimates ranged from minimal to substantial as the set of covariates became less complete. More, the confidence interval estimates of unanchored MAIC were suboptimal even using the complete set of covariates. Finally, the bias factor-adjusted method we proposed substantially reduced omitted variable bias. Unanchored MAIC should be used to analyze time-to-event outcomes with caution. The bias factor may be used to gauge the true treatment effect. Comparative effectiveness evidence is essential for clinical decision and formulary policy making, heath technology assessments, and economic evaluations. When direct comparisons and network meta-analyses (NMA) are infeasible, population-adjusted indirect comparison methods may be used for evidence syntheses of comparative effectiveness [1]. Such methods include matching-adjusted indirect comparison (MAIC), simulated treatment comparisons (STC), and multi-level network meta regression (MLNMR) [1, 2], among which MAIC is relatively popular [1, 3]. The process of conducting an MAIC has been described extensively in a number of previous studies [3,4,5]. At its core, the MAIC method utilizes the individual-level patient data (IPD) from the trial of an intervention (usually a manufacturer's own product) and the published aggregate data from the trial of a comparator intervention, and re-weights the patients with IPD such that their characteristics are balanced with those of the patients from the aggregate data of the comparator's trial [3]. The weights can be obtained using propensity scores estimated with method of moments or entropy balancing, either of which is calculated using the observed characteristics that need to be balanced [3, 5]. The outcome of the patients with IPD calculated with re-weighting is then compared with that of the published aggregate data to obtain the relative effect [3]. 
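To make the re-weighting step concrete, the sketch below (Python) implements the method-of-moments weighting on centred covariates; entropy balancing, the alternative mentioned above, targets the same moments. The function names are illustrative rather than taken from any MAIC software, and the Kish approximation of the effective sample size, which is computed directly from these weights, is included for convenience.

```python
import numpy as np
from scipy.optimize import minimize

def maic_weights(X_ipd, target_means):
    """Method-of-moments MAIC weights.

    X_ipd        : (n, k) array of IPD covariates to be balanced
    target_means : (k,) aggregate means reported for the comparator trial
                   (they must lie inside the range of the IPD covariates)
    Returns weights, scaled to unit mean, such that the weighted IPD
    covariate means equal the aggregate target means.
    """
    Xc = X_ipd - np.asarray(target_means)   # centre covariates at the target
    # At the minimum of sum(exp(Xc @ a)) the gradient Xc.T @ exp(Xc @ a) is zero,
    # which is exactly the moment condition: weighted means = target means.
    obj = lambda a: np.sum(np.exp(Xc @ a))
    grad = lambda a: Xc.T @ np.exp(Xc @ a)
    a_hat = minimize(obj, np.zeros(Xc.shape[1]), jac=grad, method="BFGS").x
    w = np.exp(Xc @ a_hat)
    return w / w.mean()

def effective_sample_size(w):
    """Kish approximation: ESS = (sum w)^2 / sum(w^2)."""
    w = np.asarray(w)
    return w.sum() ** 2 / np.sum(w ** 2)
```

Balancing on the variance of a continuous covariate, as is done for age later in this paper, amounts to adding a squared-covariate column (centred at the reported second moment) to `X_ipd`.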
MAIC has received increasing popularity in the evidence-based medicine community and health technology assessment agencies [1, 6, 7]. It has been mostly implemented in an anchored approach in which a common comparator, such as the placebo group, is available across trials [6]. The relative effect in the analyses of time-to-event outcomes or survival analyses in anchored MAIC, usually quantified as hazard ratio (HR), is calculated by taking the ratio of HRs from different trials or the difference of logHRs [8]. Because of the common comparator, anchored MAIC estimates are theoretically not biased by the existence of unbalanced prognostic variables that are not effect modifiers [6]. In the less frequently used unanchored MAIC approach, a common comparator is not available and the outcomes from the re-weighted IPD and the published aggregate data must be compared directly. Hence, a key difference of unanchored MAIC from anchored MAIC is that the former compares outcomes across trials whereas the latter conceptually compares treatment effect across trials. However, at least two additional complexities arise in the unanchored analyses of time-to-event outcomes that may potentially nullify the properties of anchored MAIC. First, unbalanced prognostic variables can themselves contribute to the outcome and may become confounders without adjustment. Realizing such potential drawbacks, Phillipo et al. recommended that unanchored MAIC was not always advisable [1]. Second, HRs are estimated using regression techniques such as Cox regression that requires the IPD of the published study instead of the published aggregate data, which distinguished itself from unanchored MAIC for linear-scale outcomes in which the aggregate outcomes were compared after reweighting. Such data is typically not available to researchers and is obtained through reconstruction of digitized Kaplan-Meier (K-M) curves [9, 10]. For ease of distinction, we refer to the reconstructed IPD using digitized K-M curves as RIKM hereinafter. Previous discussion of and studies on the properties of MAIC have focused on anchored analyses of linear-scale outcomes [3,4,5,6]. The properties of unanchored MAIC in the context of time-to-event analysis have not been investigated so far yet the literature is gradually picking up the approach without appreciating the unique profiles of this estimator [11,12,13]. This represents a major methodological gap that needs to be filled. Specifically, single-arm trials accounted for 50% of all US Food and Drug Administration (USFDA) accelerated hematology and oncology approvals in 2015, which continued to surge to 80% in 2018 [14]. In light of this, it is expected that more comparative effectiveness studies and economic evaluations have to utilize unanchored MAIC on time-to-event outcomes at the absence of common comparators. However, due to the two aforementioned complexities, several questions that are related to the properties of unanchored MAIC on time-to-event outcomes remain to be answered. First, does RIKM represent the original survival data well enough? This is the premise that unanchored MAIC based on RIKM can be used for indirect comparison. Although this has been partially addressed when the data reconstruction method was originally proposed, the validation of data reproducibility was only conducted by comparing the summary measures of survival data underlying one single graph to the summary measures of the reconstructed version of the same graph [9]. 
Surprisingly, there has been an absence of attempt to validate the reconstruction method using simulation, which was likely due to the requirement of labor-intensive manual operation. Specifically, such a simulation analysis involves digitizing the curve of each repetition that mandates manually defining the coordinates, identifying the curve, and exporting the data. This process is unlike typical simulation studies that can be fully automated with programming. Hence, a simulation-based evaluation of the performance of the reconstruction approach is needed to verify its utility. Second, what are the properties of unanchored MAIC on time-to-event outcomes with respect to bias and efficiency in different scenarios? For example, is it unbiased if all prognostic variables are captured in the creation of weight as in the case of linear outcomes regardless of fundamentally different statistical processes? Also, is it unbiased if prognostic factors and effect modifiers are unbalanced across trials? A simulation study may be effective to reveal its performance in such scenarios. Third, is there a statistical approach to estimate the boundary of the true effect if unanchored MAIC estimates are indeed biased by unbalanced and unobserved covariates? We examined if the concept of bias factor that is borrowed from observational cohort studies can be used for this purpose. To answer such questions, we conducted a simulation study to investigate the properties of unanchored MAIC on time-to-event outcomes in different scenarios. The results can shed light on the above-mentioned issues and help to guide the appropriate use of the unanchored MAIC on time-to-event outcomes. Simulated data Two scenarios were simulated to investigate the properties of unanchored MAIC on time-to-event outcomes. Under each scenario, hypothetical data of breast cancer patients were simulated for two single-arm trials. For simplicity, the interventions in the trials were called treatment A and B, respectively. It was assumed that researchers had access to the IPD of treatment A but not to that of comparator B. The purpose of unanchored MAIC was, therefore, to compare the effectiveness of B versus A. The outcome in the trials was recurrence-free survival time (RFS), which was defined as the time from the start of the intervention to the earlier of all-cause death and disease recurrence. In both scenarios, unbalanced covariates across trials were simulated by design. In the present study, an effect modifier was defined as a variable that interacted with the intervention in the data generation of time to event and a prognostic factor was a variable that itself loaded on time to event. In the first scenario, effect modifiers of treatments were not included. The same set of prognostic factors were simulated for each arm (or trial), which were age, an indicator variable for menopausal status (postmenopausal vs. not), and indicator variables for tumor grades (1, 2, and 3). The B arm data also contained an indicator of treatment B. The estimated effect of B in relation to A using unanchored MAIC was captured by the HR associated with this indicator. The prognostic factors of the A arm were set so such that the patients in the A arm were in less severe conditions than the B arm. In other words, the A arm had lower average values of the prognostic factors that were negatively associated with RFS. 
Age was a continuous variable and was simulated using normal distributions truncated at the lower and upper bounds, while the other prognostic factors were indicators simulated using Bernoulli distributions. The specifications of the distributions used for the prognostic factors of the two arms are listed in Table 1. The sample sizes of the A and B arms were arbitrarily set at 1000 and 800, respectively. Table 1 Parameters used in the simulation of the A and B arms The RFS were simulated using Weibull distributions. The shape and scale parameters of the Weibull distribution and the coefficients of the prognostic factors in the linear component of the Weibull distribution are displayed in Table 1. Random censoring was included based on a uniform distribution that was truncated at 2500 days. In the second scenario, not only the prognostic factors were unbalanced but also menopausal status was both a prognostic factor and an effect modifier of B. There was no change to the A arm. Hence, the A arm was directly taken from the simulation in scenario 1. Whereas the coefficients of the other prognostic factors remained the same in the B arm across the scenarios, that of the B indicator was changed from − 0.5 in the first scenario to − 0.4 in the second scenario. In addition, an interaction term of the B indicator and menopausal status was included in the linear component to incorporate the modification effect. The coefficient of the interaction term was set at − 0.2 such that the expected treatment effect was the same across the two scenarios. The simulated data in the B arms were used to generate aggregate characteristics and K-M curves, which correspond to the published data of typical single-arm clinical trials. One-thousand sets of triplet time-to-event data (one for the A arm, one for the B arm without any effect modifiers, and one for the B arm with an effect modifier) were simulated. Subsequent analyses of the statistical performance of unanchored MAIC using different analytic strategies were conducted between the A arm and the B arms within each set. The results from the 1000 repetitions formed the distributions of the estimates using alternative MAIC strategies which are described later. As mentioned previously, each repetition in the present study involved digitizing the hypothetically published K-M curve of the B arm and required heavy manual operation. Hence, the number of repetitions was restricted to 1000. Validation of digitization-based reconstruction method The validation of the reconstruction method was based on comparison of the 1000 repetitions of RIKM and the simulated data for the generation of the curves. RIKM of both B arms (with and without the effect modifier) were compared to the corresponding simulated raw data using Cox regressions with an indicator of being reconstructed data and a variable representing the time-varying effect of the "being reconstructed" indicator. The hypotheses were that the HRs of the indicator and the time-varying effect would both equal one if the reconstructed data sufficiently mirrored the raw data. 
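Returning to the data-generating model just described, the following is a minimal Python sketch of how one arm can be simulated. Only the treatment coefficients (−0.5 in scenario 1; −0.4 plus a −0.2 interaction with menopausal status in scenario 2), the arm sizes (1000 and 800), and the 2500-day censoring bound come from the text; the Weibull shape and scale, the covariate coefficients, and the covariate distributions stand in for the unreported Table 1 values and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2015)

def truncated_normal(mean, sd, low, high, size):
    """Draw from a normal distribution truncated to [low, high] by rejection."""
    out, filled = np.empty(size), 0
    while filled < size:
        draw = rng.normal(mean, sd, size)
        draw = draw[(draw >= low) & (draw <= high)]
        take = min(size - filled, draw.size)
        out[filled:filled + take] = draw[:take]
        filled += take
    return out

def simulate_arm(n, coefs, shape, scale, treat_b=0, effect_mod=0.0):
    """Weibull proportional-hazards simulation of recurrence-free survival.

    `coefs` holds the prognostic-factor coefficients on the log-hazard scale;
    the numeric values used below are placeholders, not the Table 1 values.
    """
    age = truncated_normal(mean=55, sd=10, low=18, high=90, size=n)
    meno = rng.binomial(1, 0.5, n)           # post-menopausal indicator
    grade2 = rng.binomial(1, 0.3, n)
    grade3 = rng.binomial(1, 0.2, n)
    lp = (coefs["age"] * age + coefs["meno"] * meno
          + coefs["grade2"] * grade2 + coefs["grade3"] * grade3
          + coefs["trt"] * treat_b + effect_mod * treat_b * meno)
    u = rng.uniform(size=n)
    t = (-np.log(u) / (scale * np.exp(lp))) ** (1.0 / shape)  # latent RFS time
    c = rng.uniform(0, 2500, n)              # random censoring, capped at 2500 days
    return np.minimum(t, c), (t <= c).astype(int)

# Scenario 2, B arm (n = 800): treatment effect -0.4 plus a -0.2 interaction
# with menopausal status; scenario 1 would use trt = -0.5 and effect_mod = 0,
# and the A arm (n = 1000) would use treat_b = 0.
coefs = {"age": 0.01, "meno": 0.3, "grade2": 0.2, "grade3": 0.4, "trt": -0.4}
time_b, event_b = simulate_arm(800, coefs, shape=1.2, scale=1e-4,
                               treat_b=1, effect_mod=-0.2)
```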
To quantify the assessment, the mean HRs were estimated as $$ {\overline{\mathrm{HR}}}_{\mathrm{k}}^{rc}=\frac{1}{\mathrm{N}}\sum \limits_{j=1}^N\left(\hat{{\mathrm{HR}}_{\mathrm{j},\mathrm{k}}^{rc}}\right) $$ $$ {\overline{\mathrm{HR}}}_{\mathrm{k}}^{tv}=\frac{1}{\mathrm{N}}\sum \limits_{j=1}^N\left(\hat{{\mathrm{HR}}_{\mathrm{j},\mathrm{k}}^{tv}}\right) $$ where \( {\overline{\mathrm{HR}}}_{\mathrm{k}}^{rc} \) and \( {\overline{\mathrm{HR}}}_{\mathrm{k}}^{tv} \) are respectively the mean HRs of the reconstruction indicator and the time-varying effect, N is the number of repetitions in each scenario, and \( \hat{{\mathrm{HR}}_{\mathrm{j},\mathrm{k}}^{rc}} \) and \( \hat{{\mathrm{HR}}_{\mathrm{j},\mathrm{k}}^{tv}} \) are respectively the estimated HRs of the reconstruction indicator and the time-varying effect from the jth repetition of the kth (1st or 2nd) scenario. Also, the percentages of the 95% confidence intervals (CIs) that covered one were calculated for both estimates in both scenarios. It was expected that the percentages were at least 95%. Strategies of unanchored MAIC on time-to-event outcomes The general data analytic steps of unanchored MAIC are 1) balancing the IPD with the aggregate data to obtain weights; 2) digitization for RIKM; and 3) pooling the IPD and the RIKM to conduct weighted survival analysis. Two methods have been proposed to balance the prognostic factors of the IPD data with those of aggregate data, namely propensity score matching using a method-of-moments logistic regression and entropy balancing [1, 15]. Phillipo et al. noticed that the two methods are equivalent in reducing bias yet the latter generates smaller standard errors [1]. Also, entropy balancing generates equal weighted sample sizes of the two groups [16]. In the present study, entropy balancing was used to balance the prognostic factors and to obtain weights. Both the mean and the variance of age were used for balancing. This reflects real-world practice because the variance of characteristics such as that of age is usually reported in the publication of clinical trials. The other covariates including the indicator of menopausal status and the indicators of tumor grades were balanced on the percentages. These variables were dichotomous variables of which the balance of the second moment follows that of the first moment [17]. Table 2 lists a statistical summary of the prognostic factors of the A arm before and after balancing as well as the target aggregate data of the B arm using one of the repetitions as an example. Three analytic strategies were evaluated for unanchored MAIC in the first scenario. The first strategy was an unweighted analysis ignoring the unbalanced prognostic factors across trials. In the second strategy, all prognostic variables were included when conducting entropy balancing to create weight. In the third strategy, the indictors for menopausal status and tumor grades were omitted in the creation of weight. Table 2 Example of entropy balancing In the second scenario, four analytic strategies were evaluated. Similar to the first scenario, the first and second strategies were an unweighted analysis and a weighted analysis using all prognostic factors, respectively. 
The third strategy was a weighted analysis omitting the effect modifier (menopausal status) in the creation of weight, while the fourth strategy further dropped tumor grade indicators from the balanced variable list.In all analyses, Cox regressions were used to estimate the HRs, and the comparison of strategies was based on logHRs. Performance of unanchored MAIC To quantify the performance of unanchored MAIC in the analysis of time-to-event outcomes, the bias, the Monte-Carlo variance (MCV), the mean squared error (MSE), and the percentages of CI coverage of the estimates were evaluated for each strategy. Among these, MCV is the squared of the empirical standard errors [18], which is a measure of the efficiency of the estimator. The bias was calculated as $$ \frac{1}{\mathrm{N}}\sum \limits_{j=1}^N\left(\log \hat{{\mathrm{HR}}_{\mathrm{j}}}-\left(-0.5\right)\right), $$ the MCV was calculated as $$ \frac{1}{\mathrm{N}-1}\sum \limits_{j=1}^N{\left(\log \hat{{\mathrm{HR}}_{\mathrm{j}}}-\log \overline{\mathrm{HR}}\right)}^2, $$ and the MSE was calculated as $$ \frac{1}{\mathrm{N}}\sum \limits_{j=1}^N{\left(\log \hat{{\mathrm{HR}}_{\mathrm{j}}}-\left(-0.5\right)\right)}^2 $$ where \( \hat{{\mathrm{HR}}_{\mathrm{j}}} \) is the estimated HR from the jth repetition in each scenario and \( \overline{\mathrm{HR}} \) is the mean of \( \hat{{\mathrm{HR}}_{\mathrm{j}}} \) over N repetitions. In addition to these quantities, the effective sample size (ESS) was also calculated [19]. Although not an indicator of the performance of MAIC estimates, ESS was informative in that its value should be close to the true sample size of the B arm when the characteristics of the two arms were balanced without having to rely on extreme weights [3]. A flowchart of the overall process of simulation, analysis, and comparison is illustrated in Fig. S1. Engauge Digitizer 10.11 [20] was used to digitize the K-M curves (screenshots of digitizing and exporting displayed in Figs. S2, S3, S4). Reconstruction of RIKM was implemented using Stata ipdfc routine and all statistical analyses were conducted using Stata 14 (StataCorp LLC, College Station, Texas, the United States of America) [10]. Using the bias factor to estimate the boundary of the true effect For an exposure E, an unmeasured dichotomous confounder U, and an outcome D, VanderWeele et al. has shown that $$ \frac{H{R}_{obs}}{H{R}_{true}}\le bias\ factor, $$ where HRobs is the observed effect and HRtrue is the true effect. The bias factor is calculated as $$ \left(H{R}_{UD}\times R{R}_{EU}\right)/\left(H{R}_{UD}+R{R}_{EU}-1\right) $$ where HRUD is the maximal possible effect of U on D and RREU is the risk ratio of U = 1 of the exposed group to the non-exposed group [21, 22]. As such, the inequality \( H{R}_{true}\ge \frac{H{R}_{obs}}{bias\ factor} \) suggests that HRtrue should not be smaller than \( \frac{L{L}_{\hat{HR}}}{bias\ factor} \) in 95% of the repetitions in which \( L{L}_{\hat{HR}} \) is the lower limit of the 95% CI. If so, the strongest plausible effect can be estimated using HRUD and RREU. The former can be estimated using the IPD of the trial that the researchers can access, the latter can be based on assumptions or external sources. 
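The bias-factor calculation itself is simple enough to sketch in a few lines of Python. The HR_UD and RR_EU values in the example call below are purely illustrative and are not the values used in the simulation, which by design gave bias factors of 1.10 and 1.05 in the two scenarios.

```python
def bias_factor(hr_ud, rr_eu):
    """Bias factor for an unmeasured binary confounder:
    (HR_UD * RR_EU) / (HR_UD + RR_EU - 1)."""
    return (hr_ud * rr_eu) / (hr_ud + rr_eu - 1.0)

def bias_adjusted(hr_obs, hr_ud, rr_eu):
    """Divide an observed HR (point estimate or lower 95% CI limit)
    by the bias factor to gauge the plausible true effect."""
    return hr_obs / bias_factor(hr_ud, rr_eu)

# Purely illustrative numbers: HR_UD = 1.5 and RR_EU = 1.3 give a bias factor of about 1.08
print(round(bias_factor(1.5, 1.3), 3))          # 1.083
print(round(bias_adjusted(0.62, 1.5, 1.3), 3))  # 0.572
```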
We calculated \( \frac{\hat{HR}}{bias\ factor}\ and\frac{L{L}_{\hat{HR}}}{bias\ factor} \) for the two scenarios by setting menopausal status as U, following which we summarized the mean bias of \( \log \frac{\hat{HR}}{bias\ factor} \) and the percentages of the repetitions in which \( \frac{L{L}_{\hat{HR}}}{bias\ factor} \) was smaller than the true value. By the set-up of the data simulation, the bias factor was 1.10 and 1.05 in the two scenarios, respectively (the calculation was illustrated in online supplementary materials part II). The results of comparing the raw data and RIKM of the B arms as a validation of the reconstruction method are listed in Table 3. A graphical example of a raw survival curve and the counterpart using the digitization and reconstruction method is presented in Fig. S5. The mean HRs of the recovered indicator in the first and the second scenarios were correspondingly 0.959 and 0.960, whereas the mean HRs of time-varying effect in both scenarios were 1.00. Also, the percentages of repetitions in which the 95% CI covered one were 100% in both scenarios. Table 3 Agreement between the raw data and the reconstructed data of the B arms The results of the performance evaluation of unanchored MAIC for survival outcomes in scenario 1 are presented in Table 4. In the first scenario, which did not involve any effect modifiers, the bias of the logHRs using the unweighted Cox regressions was 0.164. By contrast, the bias of the weighted Cox regressions that used all prognostic factors in entropy balancing was substantially smaller at 0.027. Although less than the unweighted analyses, the bias of the weighted analyses when the indicators of menopausal status and tumor grades were dropped from entropy balancing was 0.114. More, the MCV of the estimates was the same across all analytic strategies, which was 0.002. Even more, the MSEs of the three analytic strategies were 0.029, 0.003 and 0.015, respectively. Finally, the percentages of repetitions in which the 95% CI covered the true value were 11.2, 93.8 and 39.1% for the unweighted, fully weighted, and partially weighted strategies. None of the coverage reached the expected 95% although the fully weighted approach reached a close approximation. Table 4 Estimates of log hazard ratio in scenario 1 (without effect modifiers) The performance evaluation results related to scenario 2 are listed in Table 5. In the second scenario, the unweighted analysis had a bias of 0.173. The fully weighted analyses had a bias of 0.035. In addition, the bias of the weighted analyses omitting the effect modifier was 0.079. More, the weighted analyses omitting both the indicator of menopausal status and the indicators of tumor grades had a bias of 0.122. The MCV of the unweighted estimator and the weighted approach omitting both the indicator of menopausal status and the indicators of tumor grades was 0.002 whereas that of the other two analytic strategies was 0.003. The MSEs of these four analytic strategies were 0.032, 0.004, 0.009 and 0.017, respectively. The percentages of repetitions in which the 95% CI covered the true value were 7.7, 89.9, 68.7 and 34.1% for the four strategies correspondingly. Similar to scenario 1, the fully weighted approach in scenario 2 was the closest to the threshold of 95% but had an even greater shortage in coverage compared with scenario 1. Table 5 Estimates of log hazard ratio in scenario 2 (with an effect modifier) By the study design, the ESS of the fully weighted approach was the same in the two scenarios. 
Specifically, the ESS was 791 when all covariates were balanced, which was close to the true sample size of the B arm. As expected, the ESS was greater when the list of covariates for balancing was shorter. The performance of adjustment methods using bias factors are presented in Table 6. The mean bias of the bias factor-adjusted HRs in the log scale was − 0.025 and 0.030 in the two scenarios, respectively. The magnitude of bias of the adjusted HRs were comparable to that of the fully weighted approaches in both scenarios. The corresponding percentages of repetitions of which the true value was not less than the adjusted lower limit (LL) were 93.3 and 91.8%, respectively. These percentages were close to but did not reach the expectation of 95%. Table 6 Bias factor-adjusted HR and lower limits of the 95% CI of HR using menopausal status as an omitted variable In the present analysis, we examined the performance of alternative unanchored MAIC approaches to analyze time-to-event outcomes under the scenarios with and without an effect modifier. The results contribute to the information basis for the appropriate use of unanchored MAIC. With a simulation, the present study confirmed that RIKM using the method proposed by Guyot et al. may sufficiently represent the raw time-to-event data [9]. This finding has two practical implications. First, secondary analyses using reconstructed IPD is a viable solution when raw data cannot be accessed. Second, and in a reversed perspective, studies on properties of methods related to reconstructed IPD may rely on simulated raw data instead of reconstructed data. Our findings also revealed several important properties of unanchored MAIC. First and foremost, unanchored MAIC does have the potential to generate unbiased estimates when used to analyze time-to-event outcomes if all factors that impact either the outcome or the treatment effect are captured. That is, not only the effect modifiers but also the non-effect-modifying prognostic factors have to be balanced. Consistent with intuition, dropping some of the prognostic factors in balancing causes greater bias than balancing with full information but less bias than the unweighted approach. Second, and unlike the anchored counterpart, prognostic factors are important in unanchored MAIC analysis of time-to-event outcomes even though not being effect modifiers at the same time. In our simulation analyses, bias arose when not balancing on the prognostic factors even when there were no effect modifiers. Also, omitting the effect modifier which was also a prognostic factor led to nontrivial bias in the scenario of having an effect modifier. On top of bias, the confidence interval estimates of this approach was far from acceptable. As such, the results of the second scenario indicate that balancing both prognostic factors and effect modifiers is crucial in unanchored MAIC on time-to-event outcomes. Third, there may be a trade-off between bias and precision, yet the fully weighted approach constantly outperformed other approaches when MSE was used to evaluate the methods whereas the unweighted approach consistently ranked the worst. Therefore, the benefit of reducing bias with the fully weighted approach outweigh the precision loss in the setting of the present simulation analysis. Fourth, the uncertainty of unanchored MAIC may be biased or underestimated even when the bias of the relative effect is not a prominent problem because the coverage of the CIs never reached 95% across all strategies. 
This property of MAIC has not been spotted in literature previously and should be discussed in future applications of unanchored MAIC on time-to-event outcomes. The reasons of this property could be multifaceted. The loss of information in the data reconstruction step, although minuscule, may have contributed to this. More, entropy balancing followed by a Cox regression may not have fully accounted for bias. The compound of these sources of uncertainty may result in the imperfect confidence interval estimates. In addition, we proposed an approach to estimate the boundary of the true effect when unanchored MAIC on time-to-event outcomes is likely biased due to omitted covariates. Simulation results showed that the proposed method was imperfect but were not necessarily unusable. This approach involves calculating the bias factor, which requires knowledge or assumptions of the extent of omitted variable unbalance (RREU). In practice, the extent of omitted variable unbalance may be unknown in most situations. Possible solutions including using external data to estimate a plausible value of RREU or calculating the adjusted HRs and the boundaries by toggling RREU over a possible range. For example, if the percentages of post-menopausal individuals among breast cancer patients range from 35 to 65% across different trials and observational studies, then the range can be used to obtain the extremes of RREU estimates, and, for that matter, the boundaries of bias factor-adjusted treatment effect estimates. Of note, when the treatment effect is suspected to be overestimated rather than underestimated, the upper limit should be multiplied by the bias factor to estimate the boundary. Several limitations should be noted when interpreting the results. First, we only used Weibull distribution to simulate the data sets. The data generation process in the real world may not necessarily approximate a Weibull distribution. Especially, survival curves in oncology are sometimes characterized by a high death rate due to nonresponse at the beginning or a plateau at the tail due to cure [23], which may not be sufficiently represented by single-index survival functions. As such, the generalizability of our findings is possibly limited due to the specific situations. Second, only 1000 repetitions were conducted for each scenario of data generation due to the labor-demanding process of manually completing part of the K-M curve digitization. Although the number of repetitions matches that of a previous simulation study in the realm of MAIC [5], the possibility of insufficient repetitions to reveal the properties could not be fully ruled out. Third, the same specification of shape and scale parameters of the Weibull distribution was used in the simulation of both A and B arms, which may be reasonable if the populations are adequately homogenous across trials. However, the scenarios of different underlying survival distributions across trials were not probed. Such complexity almost infinitely complicates the examination and discussion of any evidence synthesis methods. Fourth, the scenarios we explored were not exhaustive. For example, the coefficient specifications of covariates, the differences in the prognostic factors across trials, and the treatment effect were not extensively varied to examine the performance of unanchored MAIC under other scenarios. Such practice was largely hampered by the hefty manual work required to digitize the graphs. 
A byproduct of the limited number of scenarios for characteristic differences was that the impact of extreme weights due to larger differences in the IPD and the aggregate data could not be investigated, which was also reflected by the ESS of the fully weighted approach. Finally, censoring was simulated using a uniform distribution that was unrelated to the treatment, the effect modifier, and the prognostic factors. How non-random censoring impacts the performance of unanchored MAIC and MAIC in general in the analysis of time-to-event outcomes should be investigated in future. Reconstructed IPD from digitized K-M curves may sufficiently represent the raw time-to-event data. Also, unanchored MAIC may be used in the analysis of time-to-event outcomes across single-arm trials. However, it should be used with caution of unmeasured prognostic factors and effect modifiers as well as suboptimal CIs. More, the bias factor-adjusted estimate can be used as an approximation of the boundary of the true effect at the presence of omitted variables. The study did not collect primary data. Program code files used for data simulation and analyses in the submitted work can be accessed at https://doi.org/10.17632/6dvrxd7xpn.2. Hazard ratio IPD: Individual-level patient data K-M: Kaplan-meier LL: MAIC: Matching-adjusted indirect comparison MCV: Monte-carlo variance MSE: Mean squared error Network meta-analyses RREU : The extent of omitted variable unbalance RFS: Recurrence-free survival time RIKM: Reconstructed IPD using digitized K-M curves USFDA: Phillippo DM, Ades AE, Dias S, Palmer S, Abrams KR, Welton NJ. Methods for population-adjusted indirect comparisons in health technology appraisal. Med Decis Mak. 2018;38(2):200–11. Phillippo DM, Dias S, Ades AE, Belger M, Brnabic A, Schacht A et al. Multilevel network meta-regression for population-adjusted treatment comparisons. J R Stat Soc Ser A. 2020;183(3):1189–1210. https://doi.org/10.1111/rssa.12579. Signorovitch JE, Sikirica V, Erder MH, Xie J, Lu M, Hodgkins PS, et al. Matching-adjusted indirect comparisons: a new tool for timely comparative effectiveness research. Value Health. 2012;15(6):940–7. Signorovitch J, Erder MH, Xie J, Sikirica V, Lu M, Hodgkins PS, et al. Comparative effectiveness research using matching-adjusted indirect comparison: an application to treatment with guanfacine extended release or atomoxetine in children with attention-deficit/hyperactivity disorder and comorbid oppositional defiant disorder. Pharmacoepidemiol Drug Saf. 2012;21:130–7. Petto H, Kadziola Z, Brnabic A, Saure D, Belger M. Alternative weighting approaches for anchored matching-adjusted indirect comparisons via a common comparator. Value Health. 2019;22(1):85–91. Phillippo D, Ades T, Dias S, Palmer S, Abrams KR, Welton N. NICE DSU technical support document 18: methods for population-adjusted indirect comparisons in submissions to NICE. 2016. Committee PBA. Guidelines for preparing submissions to the pharmaceutical benefits advisory Committee (version 5.0. 2016). Canberra: Pharmaceutical Benefits Advisory Committee; 2016. Malangone E, Sherman S. Matching-adjusted indirect comparison analysis using common SAS® 9.2: procedures. 2016. https://support.sas.com/resources/papers/proceedings11/228-2011.pdf. Accessed Aug 18 2017. Guyot P, Ades AE, Ouwens MJ, Welton N. Enhanced secondary analysis of survival data: reconstructing the data from published Kaplan-Meier survival curves. BMC Med Res Methodol. 2012;12(1):9. Wei Y, Royston P. 
Reconstructing time-to-event data from published Kaplan-Meier curves. Stata J. 2017;17(4):786–802. Ishak KJ, Rael M, Hicks M, Mittal S, Eatock M, Valle JW. Relative effectiveness of sunitinib versus everolimus in advanced pancreatic neuroendocrine tumors: an updated matching-adjusted indirect comparison. J Comp Eff Res. 2018;7(10):947–58. https://doi.org/10.2217/cer-2018-0020. Epub 2018 Aug 31. PMID: 30168349. Sherman S, Amzal B, Calvo E, Wang X, Park J, Liu Z, et al. An indirect comparison of Everolimus versus Axitinib in US patients with advanced renal cell carcinoma in whom prior Sunitinib therapy failed. Clin Ther. 2015;37(11):2552–9. Atkins MB, Tarhini A, Rael M, Gupte-Singh K, O'Brien E, Ritchings C, et al. Comparative efficacy of combination immunotherapy and targeted therapy in the treatment of BRAF-mutant advanced melanoma: a matching-adjusted indirect comparison. Immunotherapy. 2019;11. https://doi.org/10.2217/imt-2018-0208.. U.S. Food and Drug Administration. Hematology/Oncology (Cancer) Approvals & Safety Notifications. 2019. https://www.fda.gov/drugs/resources-information-approved-drugs/hematologyoncology-cancer-approvals-safety-notifications. Accessed Mar 14 2019. Hainmueller J. Entropy balancing for causal effects: a multivariate reweighting method to produce balanced samples in observational studies. Polit Anal. 2011;20. https://doi.org/10.2139/ssrn.1904869. Hainmueller J, Xu Y. Ebalance: a stata package for entropy balancing. J Stat Softw. 2013;54(7):18. https://doi.org/10.18637/jss.v054.i07. Greene WH. Econometric analysis. Boston: Prentice Hall Inc.; 2012. Morris TP, White IR, Crowther MJ. Using simulation studies to evaluate statistical methods. Stat Med. 2019;38(11):2074–102. https://doi.org/10.1002/sim.8086. Kish L. Survey sampling. vol 04; HN29, K5.: 1965. Mitchell M, Muftakhidinov B, Winchen T, Jędrzejewski-Szmek Z, Trande A, Weingrill J et al. Engauge Digitizer Software. 2019. http://markummitchell.github.io/engauge-digitizer. Accessed Apr 27 2019. VanderWeele T, Ding P, Mathur M. Technical considerations in the use of the E-value. J Causal Inference. 2019. https://doi.org/10.1515/jci-2018-0007. VanderWeele TJ, Ding P. Sensitivity analysis in observational research: introducing the E-value. Ann Intern Med. 2017;167(4):268–74. https://doi.org/10.7326/m16-2607. Farewell VT. Mixture models in survival analysis: Are they worth the risk? 1986;14(3):257–62. doi:https://doi.org/10.2307/3314804. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. School of Public Health (Shenzhen), Sun Yat-sen University, Room 215, Mingde Garden #6, 132 East Outer Ring Road, Pan-yu District, Guangzhou, Guangdong, China Yawen Jiang Department of Pharmaceutical and Health Economics, University of Southern California, 635 Downey Way, Verna & Peter Dauterive Hall (VPD) Suite 210, Los Angeles, CA, 90089-3333, USA Weiyi Ni YJ contributed to the conceptualization, methodology, software, formal analysis, data interpretation, and manuscript drafting of the study. WN contributed to the validation of the analysis, data interpretation, and review and editing of the manuscript. Both authors read and approved the final manuscript. Correspondence to Yawen Jiang. Additional file 1. Jiang, Y., Ni, W. Performance of unanchored matching-adjusted indirect comparison (MAIC) for the evidence synthesis of single-arm trials with time-to-event outcomes. BMC Med Res Methodol 20, 241 (2020). 
https://doi.org/10.1186/s12874-020-01124-6
Mean platelet volume dynamics as a prognostic indicator in pediatric surgical intensive care unit: a descriptive observational study Iman Riad M. Abdel-Aal1, Akram Shahat El Adawy1, Hany Mohammed El-Hadi Shoukat Mohammed ORCID: orcid.org/0000-0003-0736-22181 & Ahmed Nabil Mohamed Gabah1 Ain-Shams Journal of Anesthesiology volume 12, Article number: 32 (2020) Cite this article Platelet size and activity have a close correlation. The mean platelet volume (MPV) is related to the disease severity and prognosis, especially in critically ill patients. To study the relation between MPV changes and postoperative morbidities and mortality in pediatric surgical intensive care unit (PSICU). Methods and material We enrolled in this descriptive observational study one hundred PSICU children aged from 1 month to 18 years and stayed for > 48 h for peri-operative or post-trauma management. The 1ry outcome was the association between percentage change in MPV (ΔMPV) value and mortality. We recorded MPV, ΔMPV, and platelet count as a baseline, at day 0, 1st, 2nd, 3rd, 5th, and 7th days and then once weekly until patients were discharged, died, or reached a maximum of 90 days in ICU stay. Statistical analysis used We used statistical package for the social science (SPSS) version 22. Non-parametric Mann-Whitney test made comparisons between quantitative variables. Repeated measures analysis of variance (ANOVA), non-parametric Friedman, and Wilcoxon signed-rank tests made the comparison within the same patients. We used receiver operating characteristic (ROC) curves for the detection of sensitivity and specificity. Patients who developed ICU complications showed higher ΔMPV compared with non-complicated cases, and this was statistically significant on days 2, 3, 5, and 7 of ICU stay. ROC curve analysis showed a sensitivity of 57.2% and 73% on days 2 and 3 and a specificity of 76.6% and 71% on days 2 and 3, respectively. MPV dynamics have a prognostic role and worth a value in predicting several complications in PSICU. Mean platelet volume (MPV) dynamics worth a value in predicting several complications in critically ill pediatric surgical patients. However, platelet count seemed to be a more specific and sensitive tool to detect complications than MPV dynamics. Platelets have a major role in hematopoiesis, fibrin deposition, and inflammation (Archana, Vijaya, & Jayalakshmi, 2014). The platelet count is dynamic (Greinacher & Selleng, 2010). In critically ill patients, platelet consumption is frequent and associated with a poorer prognosis (Zampieri et al., 2014). As platelet size correlates to activity, the mean platelet volume (MPV) is a reflection of its size, function, and activation (Karadag-Oncel et al., 2013; Sezgi et al., 2014; Slavka et al., 2011). MPV seems to be a marker of platelet production, consumption, and disease severity (Archana et al., 2014). So, changes in MPV could be used for disease prognosis and mortality in ICU (Sezgi et al., 2014). We hypothesized that MPV changes could be used as a prognostic tool in the pediatric surgical intensive care unit. We conducted this descriptive observational study on one hundred pediatric surgical patients who were admitted to pediatric surgical intensive care units (ICUs) at University Hospitals started from January 6, 2015, to January 12, 2015. This study aimed to assess the prognostic value of mean platelet volume (MPV) dynamics in the pediatric surgical intensive care unit. 
After approval by the departmental Research Ethics Committee, we obtained informed consent from the parents or next-of-kin of each patient before commencement, and 100 pediatric cases aged from 1 month to 18 years who were admitted to pediatric surgical ICUs and stayed for > 48 h for perioperative or post-trauma management were enrolled in the study. We excluded children who stayed for < 48 h in the ICU and those who had congenital heart disease, a hematological disorder, or a current diagnosis of cancer. We calculated the sample size according to the equation $$ n=\frac{Z_{1-\alpha /2}^{2}\,{\mathrm{SD}}^{2}}{d^{2}} $$ where $Z_{1-\alpha/2}$ is the standard normal variate (1.96 at p < 0.05), SD is the standard deviation of the variable (taken as 5 from a previous study [4]), and d is the absolute error or precision (set at 1). The power of the study, the chance of successfully demonstrating the "true" result, was 80%; the alpha (α) error, the probability of falsely rejecting a true null hypothesis, was 5%; and the beta (β) error, the probability of failing to reject a false null hypothesis, was 20%. We used G*Power version 3.1.9.2 (Franz Faul, Universität Kiel, Germany, Copyright (C) 1992–2014) for the sample size calculation. After patient admission to the ICU, we collected the following data: age in months, gender, weight in kilograms, the name of the ICU to which the patient was admitted, the primary reason for admission, and the source of referral (operating room (OR), emergency room, or inpatient ward). Also, we recorded the duration of surgery in surgical patients, the need for intra-operative blood or platelet transfusion, and the number of units transfused as an indicator of significant blood loss. We recorded intra-operative complications including hypotension (defined as a reduction ≥ 20% in systolic blood pressure (SBP) from the baseline reading), significant blood loss, desaturation (defined as oxygen saturation < 95% for more than 10 s), and arrhythmias. Moreover, we recorded the length of stay (LOS) in the ICU and the fate of the patient afterward (discharge or death). The following laboratory data were obtained and recorded: mean platelet volume (MPV) in femtoliters (fl) and platelet count (PLC) in 10³ per microliter (10³/μL). The attending intensivist obtained a blood sample of 2 ml by venipuncture, arterial puncture, or through a central catheter if one was in situ to obtain accurate complete blood count (CBC) results. Samples were then collected in ethylene diamine tetra-acetic acid (EDTA) tubes and analyzed in the hospital's hematology laboratory on an automated hematology analysis system (Sysmex NE 8000 autoanalyzer, Sysmex Europe GmbH, Norderstedt, Germany) that measures platelet size using aperture-impedance technology. All patient samples were processed within an hour of collection, as recommended in the literature, to avoid bias due to excessive platelet swelling. The normally accepted values for MPV at our University Hospitals' hematology laboratory ranged from 7 to 11 fl. All laboratory parameters were recorded at baseline (pre-operative values), on the day of ICU admission (day 0), on the 1st, 2nd, 3rd, 5th, and 7th days, and then once weekly until patients were discharged, died, or reached a maximum of 90 days of ICU stay. To measure the daily MPV changes, we constructed and computed ΔMPV.
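As a quick numerical check, the Python sketch below reproduces the sample-size formula above with the stated inputs (Z = 1.96, SD = 5, d = 1) and anticipates the ΔMPV percentage change defined in the next paragraph; the MPV values passed to it are made up purely for illustration.

```python
import math

def sample_size(sd, d, z=1.96):
    """n = Z^2 * SD^2 / d^2 for estimating a mean to absolute precision d."""
    return math.ceil(z ** 2 * sd ** 2 / d ** 2)

def delta_mpv(mpv_day_x, mpv_day_0):
    """Percentage change in MPV relative to the ICU-admission (day 0) value."""
    return (mpv_day_x - mpv_day_0) / mpv_day_0 * 100.0

print(sample_size(sd=5, d=1))   # 97; the study enrolled 100 patients
print(delta_mpv(9.2, 8.0))      # 15.0 (%), using made-up MPV values in fl
```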
We defined ΔMPV as: ΔMPV = ([MPV day(X) − MPV day (0)]/MPV day (0) × 100% where MPV day(X) was the MPV value for day(X) in ICU while MPV day (0) was the MPV value for that collected at ICU admission (day 0). ΔMPV recorded at days 1 (24 h after ICU admission), 2, 3, 5, and 7, then once weekly until patients had been discharged, died, or reached the greatest of 90 days in ICU stay. Pediatric Index of Mortality (PIM) score was calculated from data collected on admission day (day 0) to ICU and computed electronically from the following website: SFAR Société Française d'Anesthésie et de Réanimation (http://www.sfar.org/scores2/pim2.html#underlying) for prediction of mortality. We also calculated the Pediatric Logistic Organ Dysfunction (PELOD) Score for each child daily. The 1ry outcome of this study was the association between percentage change in MPV (ΔMPV) value and patient mortality in pediatric surgical patients admitted to pediatric surgical ICUs at University Hospital between January 6, 2015, and January 12, 2015. Secondary outcomes included as follows: The association between percentage change in MPV (ΔMPV) value starting from the day of ICU admission and postoperative morbidities that were recorded in PSICU and included fever, surgical bleeding, sepsis, pneumonia, the need of mechanical ventilation, the use of vasopressor agents, and the necessity of blood or platelet transfusion. We defined fever as a core body temperature > 38°C, while the intensivist made the diagnosis of sepsis and pneumonia according to ICU definitions. Surgical bleeding is described as a bleeding episode resulting in drop-in hemoglobin of > 2 g/dl within 24 h, bleeding events requiring local tamponade or blood transfusion within 24 h. And the association between platelets count (PLC) starting from the day of ICU admission and postoperative morbidities and mortality. Data were coded and entered using the statistical package SPSS (Statistical Package for the Social Science; SPSS Inc., Chicago, IL, USA) version 22. Data were summarized using mean, standard deviation, median, minimum, and maximum in quantitative data and using frequency (count) and relative frequency (percentage) for categorical data. Comparisons between quantitative variables were made using the non-parametric Mann-Whitney test. Probability (p) values less than 0.05 were considered statistically significant, and p value ˂ 0.001 was considered highly significant. Repeated measures (ANOVA), non-parametric Friedman, and Wilcoxon signed-rank tests made the comparison of serial measurements within the same patients for quantitative variables. We used receiver operating characteristic (ROC) curves for the detection of sensitivity and specificity of different parameters. In this descriptive observational study, we have enrolled one hundred patients who fulfilled the inclusion criteria. Tables 1 and 2 show patients' characteristics and demographic data, while Table 3 shows reported intraoperative complications. Table 1 Patients' demographic data. Data presented as mean ± standard deviation (SD), count, and frequency Table 2 Patients' characteristic. Data presented as mean ± standard deviation (SD), count, and frequency Table 3 Intraoperative complications. Data presented as mean ± standard deviation (SD), count, and frequency Platelet count (PLC) showed a gradual decrease in number over the first few days in ICU compared to admission day (Fig. 1); then, it started to increase from day 7. 
Although the changes in platelet count (PLC) during the 1st week of ICU stay were within a normal range, these changes showed a significant difference (p value = 0.00) compared to the preoperative platelet count for surgical cases. Also, these changes showed a significant difference (p value = 0.00) compared to day 0 except for day 7, the p value was 0.623. PLC changes during 1st week of ICU stay. Data presented as means; error bars represent standard deviation. p < 0.001 = highly significant. The number sign (#) indicates highly significant difference compared to preoperative PLC. The asterisk (*) indicates highly significant difference compared to day 0 PLC Consequently, mean platelet volume (MPV) showed a gradual increase in the amount over the first few days in ICU compared to admission day (Fig. 2); then, it started to decrease again. Although the changes of MPV during the 1st week of ICU stay were within a normal range, these changes showed a significant difference (p value = 0.00) compared to the preoperative MPV value in surgical cases except for day 7 in which p value = 0.077. Also, these changes of MPV at different days showed a significant difference (p value = 0.00) compared to day 0 MPV value. MPV changes during 1st week of ICU stay. Data presented as means; error bars represent standard deviation. p < 0.001 = highly significant. The number sign (#) indicates highly significant difference compared to preoperative MPV. The asterisk (*) indicates highly significant difference compared to day 0 MPV Percentage changes in MPV (Delta ΔMPV) increased in day 2 and day 3 in comparison to day 1 changes, then decreased again in day 5 and day 7. Percentage changes in MPV (Delta ΔMPV) at different days of the first week of ICU stay showed a significant difference compared to day 1 ΔMPV value, as shown in Fig. 3. Delta MPV during 1st week of ICU stay. Data presented as means; error bars represent standard deviation. p < 0.001 = highly significant. The number sign (#) indicates highly significant difference compared to day 1 delta MPV value Among the studied one hundred patients, fifty-four patients (54%) had complications that developed during their ICU stay. These complications varied from sepsis, pneumonia, fever, use of vasopressor agents, and need to mechanical ventilation to the need for blood or platelets transfusion (Fig. 4). Incidence and type of ICU complications. Data presented as percentages PLC among ICU complicated versus non-complicated patients during the first week of ICU stay presented in Fig. 5. Patients who developed ICU complications showed lower PLC compared to non-complicated cases. This association was statistically significant on days 1, 2, and 3 of ICU stay, but it is insignificant on days 5 (p value = 0.861) and 7 (p value = 0.247). PLC among ICU complicated versus non-complicated patients. Data presented as means, and error bars represent standard deviation When comparing PLC among ICU complicated versus non-complicated cases at day 1 by ROC curve analysis, there was a significant difference of platelets count between complicated cases and non-complicated cases (p ˂ 0.001; area under the curve = 0.789; 95% confidence interval = 0.688–0.890). According to ROC curve analysis, the sensitivity of platelet counts to detect complications at day 1 was 81.4%, but the specificity of platelet counts to detect complications at day 1 was 71.9%. 
ROC curve analysis at day 2 proved a significant difference of PLC between complicated cases and non-complicated cases (p value ˂ 0.001; area under the curve = 0.889; 95% confidence interval = 0.812–0.965). The sensitivity of PLC to detect complications at day 2 was 81.1%, but the specificity of PLC to detect complications at day 2 was 100%. Patients who developed ICU complications showed a higher Delta MPV compared to non-complicated cases. This association was statistically significant on days 2, 3, 5, and 7 of ICU stay, but it is insignificant on day 1 (Fig. 6). Percentage changes in MPV (Delta MPV) among ICU complicated versus non-complicated patients during the 1st week of ICU stay. Data presented as means, and error bars represent standard deviation. The asterisk (*) indicates significant difference of Delta MPV between complicated and non-complicated cases When comparing Delta MPV between ICU complicated and non-complicated cases at day 1 by ROC curve analysis, there was an insignificant difference of Delta MPV between complicated cases and non-complicated cases (p = 0.691; area under the curve = 0.523; 95% confidence interval = 0.409–0.637). ROC curve analysis of Delta MPV at day 2 showed a significant difference of Delta MPV between complicated cases and non-complicated cases (p = 0.035; area under the curve = 0.623; 95% confidence interval = 0.514–0.732). The sensitivity of Delta MPV to detect complications at day 2 was 57.2%, but the specificity of it to recognize complications at day 2 was 76.6% (Fig. 7). Receiver operating characteristic (ROC) curve for detection of sensitivity and specificity of Delta MPV at day 2 ROC curve analysis of Delta MPV at day 3 demonstrated a significant difference of Delta MPV between complicated cases and non-complicated cases (p < 0.001; area under the curve = 0.794; 95% confidence interval = 0.700–0.888). According to ROC curve analysis, the sensitivity of Delta MPV to detect complications at day 3 was 73%, while the specificity of it to detect complications at day 3 was 71%. ROC curve analysis of Delta MPV (Fig. 8) of the day 5 showed a significant difference of Delta MPV between complicated cases and non-complicated cases (p ˂ 0.001; area under the curve = 0.810; 95% confidence interval = 0.697–0.922). The sensitivity of Delta MPV to detect complications at day 5 was 100%, but the specificity of Delta MPV to recognize complications at day 5 was 76.2%. ROC curve analysis of Delta MPV at day 7 revealed a significant difference of Delta MPV between complicated cases and non-complicated cases (p = 0.018; area under the curve = 1.000; 95% confidence interval = 1.000–1.000). According to ROC curve analysis, the sensitivity of Delta MPV to detect complications at day 7 was 100%, and the specificity of Delta MPV to detect complications at day 7 was 100%. In this study, there was only one reported case of mortality. That mortality case associated with a daily reduction of platelet counts compared to the remaining non-mortality cases. Although of this finding, this difference was statistically insignificant because of the presence of only one case of mortality within the one hundred cases and thus making the comparison very difficult. This mortality case was a 5-year-old child who admitted to the hospital after a road traffic accident with Glasgow Coma Scale three and a large extradural hematoma. He underwent evacuation of that hematoma and was admitted after that to ICU for mechanical ventilation and vasopressor drug support. 
This patient died on the 3rd postoperative day. We found that the percentage change in MPV increased daily in the mortality case compared with the remaining non-mortality cases. That finding was also statistically insignificant (p value = 0.086). This insignificance may be explained by the presence of only one mortality case within the 100 cases, thus making the comparison very difficult. Mean platelet volume (MPV) is a reflection of platelet size and, consequently, platelet function and activation (Machlus & Italiano, 2013; Yuri Gasparyan, Ayvazyan, P Mikhailidis, & D Kitas, 2011). MPV is an essential, simple, readily available, and cost-effective tool (Van der Lelie & Von dem Borne, 1983). Platelet count has an inverse relationship with MPV (Kim et al., 2015). Although we failed to find a significant correlation between MPV and mortality, the current study suggests that trends in changes in MPV may be more reliable markers of poor prognosis than the corresponding absolute values. To the best of our knowledge, it is the first time to study the relationship between change in MPV and mortality and morbidities that occurred at pediatric surgical ICU. Several studies assessed MPV in critical illness. Zampieri et al. (2014) found an increase in MPV in the first 24 h after admission independently associated with increased mortality. Van der Lelie and Von dem Borne (1983) showed a higher MPV in patients with sepsis than in patients with localized infection and suggested that an increase of MPV in patients with bacterial infection could indicate the occurrence of septicemia. Kim et al. (2015) noted that continuous measurement of MPV could be useful in determining mortality risk in patients with sepsis and septic shock. Guclu, Durmaz, and David (2013) reported low platelet counts and higher MPV in patients with severe sepsis compared with other control patients. Aydemir, Piskin, Akduman, Kokturk, and Aktas (2012) said that fungal sepsis has a stronger association with thrombocytopenia and increased MPV. Unal, Ozen, Kocabeyoglu, et al. (2013) stated a clear association of preoperative MPV and hematocrit levels with post-coronary artery bypass grafting (CABG) adverse events. The prognostic value of these measures was independent of other well-defined individual risk factors. In contrast, white blood cell (WBC) count, including differential leukocyte count, failed to demonstrate a significant relationship with post-CABG adverse events. In an extensive prospective study, Oncel et al. (2012) also reported high MPVs in septic newborns. This study was the first one to demonstrate a statistically significant difference concerning baseline MPV values (day 0 value) between neonates with sepsis (proven or clinical) and healthy controls. Rowe, Buckner, and Newmark (1975) examined 93 postoperative pediatric surgical patients and found that 71% of the patients with Gram-negative sepsis had platelet counts ≤ 100,000, whereas all the platelet counts in the non-septic or Gram-positive sepsis patients were ≥ 150,000. They also noted a rise in platelet counts when patients efficiently treated for sepsis. Also, Rastogi, Olmez, Bhutada, and Rastogi (2011) demonstrated a significant association of mortality and significant morbidities in preterm newborns below 28 weeks gestations, and platelets drop in the first 7 days of life. Agrawal and Sachdev (2008) stated that thrombocytopenic children have a higher incidence of bleeding, longer ICU stay, and higher mortality. 
These results are consistent with the current study findings, and together these data suggest that continuous monitoring of changes in MPV and platelet counts may play a role in risk stratification for morbidity and mortality in the pediatric surgical ICU. There are, however, some contradictory observations, such as those of Becchi et al. (2006), who evaluated the impact of MPV and platelet count on the prognosis of critically ill septic patients and concluded that lower MPVs on admission were associated with increased mortality. They reported a threefold increase in the probability of death for patients with MPV < 9.7 fl at the time of recruitment. This contradiction can be explained in part by the limited size of Becchi's study population, which may not have been representative, and probably also by differences in the conditions of the included patients. Yilmaz, Kara, Gumusdere, Arslan, and Ustebay (2015) stated that MPV and other platelet indices would not help to separate true appendicitis from suspected appendicitis.
The present study has some limitations, such as the scarcity of mortality cases, which made comparisons among the study patients very difficult. Therefore, further research is required to determine the precise mechanisms underlying the association between MPV and mortality in critically ill patients in the pediatric surgical ICU. We also recommend performing further multicenter studies and investigating the prognostic role of MPV in comparison with the PELOD and PIM scores.
Mean platelet volume (MPV) dynamics and platelet count have a prognostic role and are valuable for detecting several complications in critically ill pediatric surgical patients. However, platelet count appeared to be a more specific and sensitive tool for detecting complications than MPV dynamics. MPV should be used in combination with other prognostic tests to achieve a better outcome for children in the pediatric ICU.
Data supporting the findings can be obtained from the corresponding author.
CBC: Complete blood count; MPV: Mean platelet volume; PELOD: Pediatric Logistic Organ Dysfunction Score; PIM: Pediatric Index of Mortality; PLC: Platelet count; PRBCs: Packed red blood cells; PSICU: Pediatric surgical intensive care unit; WBC: White blood cell
Agrawal S, Sachdev AGD (2008) Platelet counts and outcome in the paediatric intensive care unit. Indian Journal of Critical Care Medicine 12(3):102–108
Archana S, Vijaya C, Jayalakshmi V (2014) Comparison of mean platelet volume, platelet count, total leucocyte and neutrophil counts in normoglycemics, impaired fasting glucose, and diabetics. International Journal of Scientific Study 2:24–27
Aydemir H, Piskin N, Akduman D, Kokturk F, Aktas E (2012) Platelet and mean platelet volume kinetics in adult patients with sepsis. Platelets 26(4):331–335
Becchi C, Al Malyan M, Fabbri LP, Marsili M, Boddi V, Boncinelli S (2006) Mean platelet volume trend in sepsis: is it a useful parameter? Minerva Anestesiologica 72:749–756
Greinacher A, Selleng K (2010) Consultative hematology: hemostasis and thrombosis: thrombocytopenia in the intensive care unit patient. Hematology 3:10–15
Guclu E, Durmaz Y, David S (2013) Effect of severe sepsis on platelet count and their indices. African Health Sciences 13:333–338
Karadag-Oncel E, Ozsurekci Y, Kara A, Karahan S, Cengiz A, Ceyhan M (2013) The value of mean platelet volume in the determination of community-acquired pneumonia in children. Italian Journal of Pediatrics 39(1):16
Kim CH, Kim SJ, Lee MJ, Kwon YE, Kim YL, Park KS et al (2015) An increase in mean platelet volume from baseline is associated with mortality in patients with severe sepsis or septic shock. PLoS One 10(3):678
Machlus K, Italiano J (2013) The incredible journey: from megakaryocyte development to platelet formation. The Journal of Experimental Medicine 210(7):2107OIA17
Oncel M, Ozdemir R, Yurttutan S, Canpolat F, Erdeve O, Oguz S et al (2012) Mean platelet volume in neonatal sepsis. Journal of Clinical Laboratory Analysis 26(6):493–496
Rastogi S, Olmez I, Bhutada A, Rastogi D (2011) Drop in platelet counts in extremely preterm neonates and its association with clinical outcomes. Journal of Pediatric Hematology/Oncology 33(8):580–584
Rowe M, Buckner D, Newmark S (1975) The early diagnosis of gram-negative septicemia in the pediatric surgical patient. Annals of Surgery 182(3):280–286
Sezgi C, Taylan M, Kaya H, Selimoglu Sen H, Abakay O, Demir M et al (2014) Alterations in platelet count and mean platelet volume as predictors of patient outcome in the respiratory intensive care unit. The Clinical Respiratory Journal 9(4):403–408
Slavka G, Perkmann T, Haslacher H, Greisenegger S, Marsik C, Wagner O et al (2011) Mean platelet volume may represent a predictive parameter for overall vascular mortality and ischemic heart disease. Arteriosclerosis, Thrombosis, and Vascular Biology 31(5):1215–1218
Unal EU, Ozen A, Kocabeyoglu S et al (2013) Mean platelet volume may predict early clinical outcome after coronary artery bypass grafting. Journal of Cardiothoracic Surgery 8:9195
Van der Lelie J, Von dem Borne AK (1983) Increased mean platelet volume in septicemia. Journal of Clinical Pathology 36:693–696
Yilmaz Y, Kara F, Gumusdere M, Arslan H, Ustebay S (2015) The platelet indices in pediatric patients with acute appendicitis. International Journal of Research in Medical Sciences 3(6):1388–1391
Gasparyan AY, Ayvazyan L, Mikhailidis DP, Kitas GD (2011) Mean platelet volume: a link between thrombosis and inflammation? Current Pharmaceutical Design 17(1):47–58
Zampieri FG, Ranzani OT, Sabatoski V, de Souza HP, Barbeiro H, da Neto LMC et al (2014) An increase in mean platelet volume after admission is associated with higher mortality in critically ill patients. Annals of Intensive Care 4:20–27
The authors want to acknowledge Dr. Mohammed Ali, MD, Department of Community and Public Health, Faculty of Medicine, Cairo University, for his significant contribution to the statistical analysis for this work.
Faculty of Medicine, Cairo University, El Saray Street, El Manial, Cairo, 11956, Egypt
Iman Riad M. Abdel-Aal, Akram Shahat El Adawy, Hany Mohammed El-Hadi Shoukat Mohammed & Ahmed Nabil Mohamed Gabah
HM developed the idea of the study. IA and HM designed the study. AG, AE, and HM carried out the implementation, data collection, and analysis and took the lead in writing the manuscript. All authors performed the literature search, manuscript preparation, and revision. All authors read and approved the final manuscript before submission.
Correspondence to Hany Mohammed El-Hadi Shoukat Mohammed.
Approval from the local ethics committee was obtained, and written informed consent was obtained from all parents before the study. This study was approved by the Research Ethics Committee, Department of Anesthesia, Faculty of Medicine, Cairo University.
Abdel-Aal, I.R.M., El Adawy, A.S., Mohammed, H.M.EH.S. et al. Mean platelet volume dynamics as a prognostic indicator in pediatric surgical intensive care unit: a descriptive observational study. Ain-Shams J Anesthesiol 12, 32 (2020). https://doi.org/10.1186/s42077-020-00082-x
Surgical intensive care unit
CommonCrawl
Diurnal cloud cycle biases in climate models
Jun Yin1,2 (ORCID: orcid.org/0000-0003-2706-0620) & Amilcare Porporato1,2 (ORCID: orcid.org/0000-0001-9378-207X)
Nature Communications volume 8, Article number: 2269 (2017)
Atmospheric dynamics; Climate and Earth system modelling; Projection and prediction
Clouds' efficiency at reflecting solar radiation and trapping the terrestrial radiation is strongly modulated by the diurnal cycle of clouds (DCC). Much attention has been paid to mean cloud properties due to their critical role in climate projections; however, less research has been devoted to the DCC. Here we quantify the mean, amplitude, and phase of the DCC in climate models and compare them with satellite observations and reanalysis data. While the mean appears to be reliable, the amplitude and phase of the DCC show marked inconsistencies, inducing overestimation of radiation in most climate models. In some models, DCC appears slightly shifted over the ocean, likely as a result of tuning and fortuitously compensating the large DCC errors over the land. While this model tuning does not seem to invalidate climate projections because of the limited DCC response to global warming, it may potentially increase the uncertainty of climate predictions.
As efficient modulators of the Earth's radiative budget, clouds play a crucial role in making our planet habitable1. Their response to the increase in anthropogenic emissions of greenhouse gases will also have a substantial effect on future climates, although it is highly uncertain whether this will contribute to intensify or alleviate the global warming threat2. Such uncertainties are well recognized in the state-of-the-art general circulation models (GCMs)3 and are typically associated with their performance in simulating some critical cloud features, such as cloud structure and coverage4. Among these features, perhaps the most overlooked one is the diurnal cycle of clouds (DCC), describing how certain cloud properties (e.g., cloud coverage) change throughout the day at a given location. Due to the clouds' interference with the diurnal fluctuations of solar and terrestrial radiation, shifts in DCC have the potential to strongly affect the Earth's energy budget, even when on average the daily cloud coverage is the same5. Although the diurnal cycle of atmospheric convection has recently attracted more research attention6,7,8,9,10,11,12, DCC has yet to be analyzed at a global scale to fully understand its radiative effects on the Earth's energy budget.
In this work, to assess the degree to which climate models capture the key features of the DCC, we calculate three main statistics describing the typical DCC in each season in climate model outputs and compare them with those obtained from satellite observations and reanalysis data. We show how the resulting DCC model discrepancies influence the global radiation balance, contributing to increased uncertainties in climate projections.
Errors of DCC
We focus on the total cloud coverage (a brief discussion of the effects of other cloud properties can be found in the last section), whose diurnal cycle is closely related to that of total cloud water path13, 14 and thus plays a critical role in the energy budget.
To avoid dealing with higher harmonics of a Fourier decomposition of the DCC for cases with significant deviations from sinusoidal shapes15, here we focus on the standard deviation (σ), centroid (c), and mean (μ) to capture the amplitude, phase, and the daily average of cloud coverage (Fig. 1a and "Methods" section). Note, however, that the centroid and standard deviation are usually very similar to the amplitude and phase of the first harmonic (see a comparison map in Supplementary Fig. 1). These three indexes of cloud climatology are computed for the outputs of the GCMs participating in the Fifth Phase of the Coupled Model Intercomparison Project, and then are compared with those from the International Satellite Cloud Climatology Project (ISCCP)16 and from the European Centre for Medium-Range Weather Forecasts (ECMWF) twentieth century reanalysis (ERA-20C)17, all of which have high-frequency (3-h) global coverage for the period 1986–2005. ISCCP records were obtained from both visible and infrared channels; the latter is used to derive the cloud coverage as the infrared is measured throughout the whole diurnal cycle18, 19. While we are well aware that the ISCCP satellite records contain artifacts that may affect long-term trends20, 21, it is important to emphasize that they do provide very useful information about the cloud climatology of interest here. In fact, it has been shown that the DCC climatology from these ISCCP records is generally consistent with the observations from stationary weatherships and some other satellite records19, 22, 23. Regarding ERA-20C, it is the ECMWF's state-of-the-art reanalysis designed for climate applications17. Both ERA-20C and CNRM-CM5 climate models rely on the integrated forecast system from ECMWF17, 24, so that some common elements of cloud climatology may be expected. Diurnal cycle of cloud climatology and its indexes. a Examples of diurnal cycle of average cloud coverage near Guangde, Anhui, China (30.7N, 119.2E) in summer (June, July, and August) averaged over 1986–2005. The vertical dot lines and horizontal dash lines show the centroid and mean of the diurnal cycle climatology; the shaded blue areas indicate plus and minus one standard deviation. More examples are presented in Supplementary Fig. 2. Empirical probability density function (PDF) of b mean (μ), c standard deviation (σ), and d centroid (c) of diurnal cycle of cloud coverage climatology over the land (black solid lines) and ocean (blue dash lines) in all four seasons over 60S–60N. The data sources (satellite observations, reanalysis, and climate models) are indicated on the left side of the figure (see Supplementary Table 1 for a list of these) Figure 1a shows an example of DCC climatology and the corresponding indexes for a subtropical monsoon climate zone in Eastern China in summer, characterized by clear mornings and frequent afternoon thunderstorms. This type of diurnal cycle is evident from the satellite (Fig. 1a ISCCP), reanalysis data (Fig. 1a ERA-20C), and CNRM-CM5 (Fig. 1a CNRM-CM5), while most of the GCMs show inconsistencies (see more examples in Supplementary Fig. 2). To explore these discrepancies globally, we calculated the DCC indexes at each grid point in each season from each data source. The most striking feature of DCC indexes is the land/ocean patterns, reflecting the contrasting mechanisms of atmospheric convection, although these geographical patterns are less coherent in the GCM outputs (see Supplementary Figs. 3–13). 
For this reason, we compare the empirical distributions of DCC indexes over the land and the ocean in Fig. 1. The satellite, reanalysis, and CNRM-CM5 clearly show larger μ, smaller σ, and earlier c over the ocean. A consistent pattern is found in all GCM outputs for the mean cloud coverage (Fig. 1b). However, the DCC amplitude σ generally shows no significant land–ocean contrast and a number of GCMs erroneously suggest stronger DCC amplitude over the oceans (Fig. 1c), while regarding the phase c, the land–ocean contrast is underestimated with most GCMs not even capturing the afternoon cloud peaks (Fig. 1d). Overall, only CNRM-CM5 shows reasonable simulation of DCC over the land, likely due to its detailed convective schemes and model resolution24. A detailed analysis of these differences is given in Fig. 2, which compares the root-mean-square deviation (RMSD, see "Methods" section) of μ, σ, and c in climate models and reanalysis data with the standard values from ISCCP records. Regarding the mean, μ, the discrepancies over the land and ocean are relatively similar. The corresponding Taylor diagrams25 further suggest that the mean cloud coverage is much better simulated than the rest DCC indexes (see Supplementary Figs. 14 and 15). As for σ and c, the RMSDs between ISCCP records and the other data sets over the ocean are clearly smaller than those over the land. In these continental regions, the CNRM-CM5 model and the reanalysis data ERA-20C show relatively smaller RMSD. In general, models with obvious similarities in code produce a similar cloud climatology and RMSD (e.g., CNRM-CM5 and ERA-20C; GFDL-ESM2M and GFDL-ESM2G). A more detailed comparison between each data source is presented in Supplementary Fig. 16. Root-mean-square deviation of DCC indexes between satellite observations and climate model outputs. a, d RMSD of mean cloud coverage, μ, over the land and ocean; b, e RMSD of cloud coverage standard deviation, σ, over the land and ocean; c, f RMSD of the cloud coverage centroid, c, over the land and ocean Controls of cloud cycle on radiation budget Given the discrepancies in DCC predictions, it is logical to wonder how they may influence the Earth's radiation budget. To address this point, we follow and extend the approach of the so-called cloud radiative effects (CRE; see "Methods" section for details), which has been conventionally used to diagnose the effects of clouds by comparing all-sky and clear-sky radiative fluxes at the top of the atmosphere (TOA)26,27,28. We first analyze the diurnal cycle of global mean CRE at sub-daily timescale (see Eq. (10) in "Methods" section). We use data from ERA-20C reanalysis because its radiative flux data are similar to those from climate model outputs and satellite observations (see Supplementary Fig. 17). Figure 3a, b shows the diurnal cycle of CRE climatology over the land and ocean. The shortwave CRE, which is in phase with the incoming solar radiation, has a more marked diurnal variation than the longwave one. The CRE cycle is also stronger over the ocean due to the contrasting sea surface/cloud albedo that enhances the cloud effects. As explained in detail in the "Methods" section, we then use these CRE cycles to analyze the TOA reference irradiance as a function of the diurnal variations in cloud coverage. Such reference irradiance provides a consistent approach to evaluate the potential radiative impacts of the biases in DCC. 
Figure 3c, d shows a heatmap of the TOA reference irradiance as a function of the DCC indices c and c v = σ/μ for a sinusoidal form of diurnal cloud coverage (see Eq. (15) in "Methods" section). The reference irradiance is symmetric with respect to the centroid at noon (c = 12 h) and has higher gradients over the ocean. As one would expect, the enhanced CRE cycle over the ocean (compare Fig. 3a, b), due to its lower surface albedo, increases the DCC radiative impacts. Moreover, earlier cloud phases (i.e., corresponding to values of c before sunrise) inevitably induce warming effects as clouds trap radiation at night regardless of cloud type and structure; similarly, midday cloud peaks typically induce cooling effects as clouds usually reflect more solar radiation at noon. Such impacts of the phase (c) become more significant for larger relative amplitude (c v ) and over the ocean. For example, for c v = 0.1, the reference irradiance over the land increases by 6.4 W m−2 in response to a shift of the centroid from noon to midnight (i.e., for c going from 12 to 24), while for c v = 0.2, the increase of irradiance becomes 12.7 W m−2 for the same centroid shift. The corresponding increases over the ocean are even larger (11.0 and 21.7 W m−2). These large changes over both land and ocean are consistent with the values reported in a prior study5 and are due to the significant and systematic variations of DCC. Cloud radiative effects and reference irradiance at the top of the atmosphere. Global diurnal cycle of cloud radiative effects (CRE) climatology over the land (a) and ocean (b) from reanalysis outputs separated into shortwave and longwave components; heatmap of TOA reference irradiance as a function of coefficient of variation and centroid of cloud diurnal cycles over the land (c) and ocean (d). The crosses specify the 25th, 50th, and 75th percentiles of the c and c v from climate models (open markers, see details of the markers in Supplementary Table 1), satellite records (filled hexagram), and reanalysis data (filled circle) Radiative effects of cloud cycle errors To assess the radiative impacts of DCC errors, we first superimpose the c and c v from different GCMs onto the heatmap in Fig. 3c, d. Over the land, the indexes appear much more scattered due to larger discrepancies of both c and c v among the data sources, as already illustrated in Fig. 2. When the continental clouds tend to peak in the afternoon, as observed in ISCCP and simulated in ERA-20C and CNRM-CM5, they reflect more solar radiation and result in climates corresponding to the cold zone of the heatmap (Fig. 3c). Over the ocean, the indexes c and c v are much more clustered. This however does not necessarily imply small radiative impacts, because the marine heatmap has steeper gradients (Fig. 3d). In summary, Fig. 3c, d shows potentially strong effects of DCC cloud errors in GCMs in both phase (c) and variability (c v ). By focusing on the departure of cloud coverage from its mean, f(t) = μ + f DCC(t), we can isolate the f DCC effects, without the sinusoidal approximation that was necessary for Fig. 3 (see Eq. (14) in "Methods" section). Accordingly, we calculate the TOA reference irradiance at each grid point in each season using μ from the ERA-20C reanalysis data and the f DCC from each GCMs outputs. In this way, μ from ERA-20C reanalysis is set as the baseline to compare the radiative impacts of f DCC from the climate models in terms of global mean TOA reference irradiance. The results, displayed in Fig. 
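To make the dependence of the daily-mean radiation on the DCC phase and relative amplitude concrete, the reference irradiance of Eq. (13) in the Methods can be evaluated numerically for the sinusoidal cloud cycle of Eqs. (14)–(15). The short Python sketch below is purely illustrative: the clear-sky and cloudy-sky profiles R_clr and R_cld are hypothetical placeholder arrays standing in for the ERA-20C climatologies, and the printed numbers are not values from the paper.

```python
import numpy as np

TAU = 24.0
t = np.arange(3, 27, 3.0)   # 3-hourly local solar times (h): 3, 6, ..., 24

# Hypothetical diurnal profiles of net TOA flux (W m^-2), standing in for the
# ERA-20C clear-sky and cloudy-sky climatologies used in the paper.
R_clr = np.array([-80., -60., 150., 400., 450., 250., -40., -80.])
R_cld = np.array([-40., -30.,  60., 180., 200., 110., -20., -40.])

def daily_mean_reference_irradiance(mu, cv, c):
    """Daily-mean TOA reference irradiance for a sinusoidal DCC, Eqs. (13)-(15)."""
    sigma = cv * mu
    f = mu + np.sqrt(2.0) * sigma * np.cos(2.0 * np.pi * (t - c) / TAU)  # Eqs. (14)-(15)
    f = np.clip(f, 0.0, 1.0)            # keep the cloud fraction in [0, 1]
    R = f * R_cld + (1.0 - f) * R_clr   # Eq. (13) at each 3-h time step
    return R.mean()

# Cloud peak at noon (c = 12 h) versus at midnight (c = 24 h):
for c in (12.0, 24.0):
    print(c, round(daily_mean_reference_irradiance(mu=0.6, cv=0.2, c=c), 1))
```

With these placeholder profiles, shifting the centroid from local noon towards midnight raises the daily-mean irradiance, qualitatively reproducing the warming effect of nocturnal cloud peaks discussed above.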
4, show that over the land the lack of cloud peaks around afternoon in most GCMs (see Fig. 1) implies more solar radiation, so that the reference irradiance is higher than that from the standard ERA-20C. The inter-model difference of reference irradiance can be as large as 1.8 W m−2 between climate models CNRM-CM5 and GFDL-CM3, and reaches the maximum 2.7 W m−2 between GFDL-CM3 and ISCCP. Over the ocean, the relatively smaller DCC discrepancies (see Fig. 2) are amplified by their larger impacts (see the larger CRE over ocean in Fig. 3b), thus again resulting in considerably large inter-model differences in reference irradiance. The largest one occurs between CMCC-CM and GFDL-CM3 with a difference of 2.1 W m−2. In CMCC-CM and INM-CM4, the surplus reference irradiance due to the lack of cloud peaks around noon over the land is somewhat compensated by the effects of the slightly later cloud peaks over the ocean. Other GCMs instead have a larger reference irradiance that is likely compensated by model tuning. Regarding the ISCCP records, it is lower over the land and higher over the ocean when compared with ERA-20C. An in-depth investigation of the DCC discrepancies between the multiple satellite observations and reanalyses is outside the scope of the work here, but it is worth mentioning that it could be useful as a starting point to formulate a standard DCC to be used as a reference for cloud parametrization in climate models. To verify that these results are robust to the selection of the baseline μ, we also obtained it using data available from the CanAM4 climate model (this is the only GCM with sub-daily CRE outputs at the global scale). As shown in Supplementary Fig. 18, the results are similar to the ones with ERA-20C, thus confirming our findings. Reference irradiance. Top-of-the-atmosphere reference irradiance over a land, b ocean, and c whole Earth calculated using the mean cloud coverage (μ) from reanalysis data and the departure of mean cloud coverage (f DCC) from each data source. The dash lines show the top-of-the-atmosphere reference irradiance from reanalysis data. Note that the reanalysis is not designed to conserve the energy balance48, 49; therefore, the deviations from the standard reanalysis, rather than their absolute values, are more meaningful to quantify the radiative impacts of the DCC Implication of cloud cycle errors for climate projection Since the total radiative effects of DCC biases may be very important, as shown by the previous analysis, it is likely that the GCM tuning, which is done to be able to reproduce the observed surface temperature climatology, may be in part linked to the DCC biases. It is thus crucial to try to understand the related consequences for climate predictions. First of all, climate models may have poor performance in simulating not only DCC, but also other cloud variables, such as structure and liquid water path29, 30. The errors in these climate variables may also interact and induce substantial biases on the radiation balance. For example, the well-documented problem of "too few, too bright," whereby the underestimation of cloudiness is often compensated by an overestimation of cloud albedo31, 32, may further enhance the CRE33,34,35, thus increasing the radiative effects of DCC errors. Furthermore, the overall radiation budget may be achieved by different tunings36, 37 (e.g., on multiple climate variables or parameters in different times or locations). 
For instance, the "too few, too bright" problem may also be compensated by adjusting the cloud structure and its microphysics31, 34. Similarly, as discussed above, the effects of large DCC errors over the land may be compensated by the opposite effects of small DCC errors over the ocean in some climate models (e.g., see Fig. 4). To specifically assess the potential impacts of DCC errors on climate projection, we first consider the DCC changes from current to future simulations. Following the same approach as in Fig. 4, we calculate the future TOA reference irradiance with GCM outputs from the RCP45 experiment during 2081–2100. The differences between future and present TOA reference irradiance (ΔR), averaged over the land, ocean, and the whole Earth, are summarized in Fig. 5. As can be seen, |ΔR| ≪ |R|, indicating that the DCC responses to climate change have much smaller radiative impacts than its errors do in the current climate models. As a result, each GCM maintains consistent (albeit affected by errors) DCC simulations in future climate conditions. Thus, on the one hand, consistent biases in DCC between present and future climates give rise to similar TOA reference irradiance, so that the model tuning made for current climate conditions still remains largely effective for the global mean temperature projections. On the other hand, consistent biases have the potential to increase the uncertainty of climate projections. In fact, model tuning for extra TOA radiation is primarily conducted by adjusting cloud-related parameters37, which may result in overestimation of CRE33,34,35. A large CRE likely strengthens the absolute values of cloud feedbacks38, 39 and thus contributes to the large spread of climate projection among different GCMs. Moreover, while the effects of large DCC errors over land are compensated by the effects of small bias over the ocean, this compensation disrupts the spatial patterns of the energy distribution and may influence the land–ocean–atmosphere interaction, with potentially significant impacts on the climate projections40. It is therefore likely that improving the resolution of most climate model simulations, so that atmospheric convection is better resolved or at least more easily parameterized, will significantly improve DCC simulations8, 10, 41. This might also be the reason for the good DCC results of CNRM-CM5.
Change in reference irradiance in response to climate change. Change of TOA reference irradiance over a land, b ocean, and c whole Earth from 1986–2005 to 2081–2100. The 1986–2005 reference irradiance is calculated with GCM outputs from the historical experiment as reported in Fig. 4, whereas the 2081–2100 irradiance is calculated with the same approach but with climate model outputs from the RCP45 experiment
We have investigated the radiative effects of DCC errors in terms of total cloud coverage without identifying specifically the impacts of the diurnal cycle errors of in-cloud properties (e.g., cloud vertical structure, optical depth, and liquid/ice water path), which are all critical to the Earth's energy budget1. For example, while low clouds usually have higher cloud-top temperature and thus emit more longwave radiation, higher clouds emit less longwave radiation due to their lower temperature42. As shown in Fig. 3 and another prior independent study13, the longwave CRE has a much weaker cycle than its shortwave counterpart, indicating that DCC modulates Earth's energy primarily through the shortwave radiation.
This suggests that the diurnal cycle errors of cloud structure will tend to have limited longwave radiative impacts (note that this should not be confused with the daily mean errors of cloud structure, which have long been recognized1, 42 to have significant impacts on the Earth's energy balance). Differently from the longwave radiation, the marked cycle of shortwave CRE (Fig. 3a, b) may indeed be influenced by cloud properties beyond the cloud coverage on which we focused here. For example, in-cloud water path, liquid/ice water content, and aerosol can influence the cloud albedo and thus adjust the CRE cycles43, 44. Once the GCMs provide detailed in-cloud properties in each grid point at sub-daily timescale, these potential impacts could be easily investigated with a similar approach to the one adopted here. In summary, we have quantified the discrepancies of the DCC among current climate models, satellite observations, and reanalysis data. In general, climate models have better and more consistent performance in simulating mean cloud coverage, while most GCMs present considerable discrepancies in the standard deviation (σ) and centroid (c) of cloud cycles. The evident errors are the smaller σ and earlier c over the land, leading to an overestimation of net radiation as indicated by the CRE analysis. The smaller errors over the ocean also induce significant radiative impacts, as its relatively larger marine CRE amplifies the effects of DCC errors. Model tuning used to compensate for these errors results in shifts of the DCC phase over the ocean and even larger DCC biases over the land. Thanks to the limited responses of DCC to global warming, such biases do not seem to invalidate future climate projection; however, they may induce an overestimation of cloud-feedback strength and distort the patters of land–ocean–atmosphere interaction. Improving resolution and parameterizations of atmospheric convection may help reduce the reliance of model tuning and provide more accurate climate projections. The time series of cloud coverage at each grid box (i) in each season (j) from each data source (m) were analyzed as follows. For the period 1986–2005, in each day the cloud coverage is given at 3-h interval (e.g., at local solar time t 1 = 3 h, t 2 = 6 h,…,t k = 3·k hr,…,t 8 = 24 h). We first average, by season, these series to obtain a typical DCC coverage, $$\begin{array}{*{20}{l}} {\overline f _{mij}(t_1),} \hfill & {\overline f _{mij}(t_2),} \hfill & {...} \hfill & {\overline f _{mij}(t_k),} \hfill & {...} \hfill & {\overline f _{mij}(t_8)} \hfill \end{array},$$ where subscripts m, i, j, and k represent the data source index, grid location index, season index, and discrete time index, respectively. To characterize climatology of DCC, we define three indexes: the mean, amplitude, and phase. The mean of the DCC is directly defined as the expectation $$\mu _{mij} = \frac{1}{8}\mathop {\sum}\limits_{k = 1}^8 {\overline f _{mij}(t_k)} .$$ The amplitude of the DCC is quantified by its corrected sample standard deviation as: $$\sigma _{mij} = \sqrt {\frac{1}{7}\mathop {\sum}\limits_{k = 1}^8 {\left[ {\overline f _{mij}(t_k) - \mu _{mij}} \right]} ^2} .$$ The coefficient of variation can be expressed as $$\left( {c_v} \right)_{mij} \,\,= \frac{{\sigma _{mij}}}{{\mu _{mij}}}.$$ The latter is useful to analyze the impact of relative amplitude of DCC across different models. 
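As a minimal illustration of Eqs. (2)–(4), the following Python sketch computes the mean, corrected sample standard deviation, and coefficient of variation of a seasonal DCC climatology from its eight 3-hourly values; the input array is hypothetical and only stands in for a single grid cell and season.

```python
import numpy as np

# Hypothetical seasonal DCC climatology at one grid cell: cloud fractions at
# local solar times 3 h, 6 h, ..., 24 h (eight 3-hourly values).
f_bar = np.array([0.35, 0.30, 0.40, 0.55, 0.70, 0.60, 0.45, 0.40])

mu = f_bar.mean()           # Eq. (2): mean of the diurnal cycle
sigma = f_bar.std(ddof=1)   # Eq. (3): corrected sample standard deviation
cv = sigma / mu             # Eq. (4): coefficient of variation
print(round(mu, 3), round(sigma, 3), round(cv, 3))
```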
The phase of the DCC is given by the centroid of t k weighted by the probability distribution of cloud coverage during one diurnal cycle $$p_{mij}(t_k) = \frac{{\overline f _{mij}(t_k)}}{{\mathop {\sum}\limits_{k = 1}^8 {\overline f _{mij}(t_k)} }}.$$ Since t k within one diurnal period can be treated as a circular quantity, the calculation of centroid (c) uses the circular statistics45, $$c_{mij} = \frac{\tau }{{2\pi }}{\mathrm{arg}}\left[ {\mathop {\sum}\limits_{k = 1}^8 {p_{mij}(t_k){\mathrm{exp}}\left( {{\bf{i}}\frac{{2\pi t_k}}{\tau }} \right)} } \right].$$ where i is the imaginary unit and arg[·] is the argument of a complex number. As can be seen in the examples of Fig. 1 and Supplementary Fig. 2, the centroid is located around the timing of the most cloudiness in one typical day. The RMSD of μ between data source m 1 and m 2 is defined as: $$R_\mu (m_1,m_2) = \sqrt {\frac{1}{{JI}}\mathop {\sum}\limits_j {\mathop {\sum}\limits_i {(\mu _{m_1ij} - \mu _{m_2ij})^2} } } ,$$ where I and J are the numbers of grid boxes and seasons considered in the calculation of the corresponding RMSD. Similarly, the RMSD of σ is $$R_\sigma (m_1,m_2) = \sqrt {\frac{1}{{JI}}\mathop {\sum}\limits_j {\mathop {\sum}\limits_i {(\sigma _{m_1ij} - \sigma _{m_2ij})^2} } } ,$$ and the RMSD of c is $$R_c(m_1,m_2) = \sqrt {\frac{1}{{JI}}\mathop {\sum}\limits_j {\mathop {\sum}\limits_i {\left( {c_{m_1ij} - c_{m_2ij} + n\tau } \right)^2} } ,}$$ where n is an integer and τ is the length of one diurnal cycle (24 h). The integer n is properly chosen such that the centroid difference (\(c_{m_1ij} - c_{m_2ij} + n\tau\)) is within [−τ/2, τ/2]. TOA reference irradiance The CRE are conventionally defined as the difference of TOA all-sky (R) and clear-sky (R clr) net radiative fluxes26,27,28, $${\mathrm{CRE}} = R - R_{{\mathrm{clr}}}.$$ This quantity can also be expressed in terms of cloud coverage42, $${\mathrm{CRE}} = f(R_{{\mathrm{cld}}} - R_{{\mathrm{clr}}}),$$ where R cld is the cloudy-sky radiative flux. Combining Eqs. (10) and (11), R cld can thus be expressed as: $$R_{{\mathrm{cld}}} = \frac{1}{f}\left[ {R - (1 - f)R_{{\mathrm{clr}}}} \right].$$ Since all-sky, clear-sky radiative fluxes, and total cloud coverage are often provided in GCM outputs, R cld can be calculated directly from GCM outputs from Eq. (12). With known values of R cld, it is now possible to recalculate the TOA radiative flux as a function of cloud coverage and its properties, by solving Eq. (12) for R, $$R = fR_{{\mathrm{cld}}} + (1 - f)R_{{\mathrm{clr}}},$$ where all the variables R, R cld, R clr, and f are time dependent. Specifically, with Eq. (13) we can use the R clr and R cld provided by GCMs or other data sources to analyze the impacts of diurnal variations of cloud coverage on TOA radiation. To conduct this analysis, we decompose the cloud coverage first into a mean μ and fluctuations f DCC around it $$f(t) = \mu + f_{\rm{DCC}}(t).$$ We may also approximate the latter with its first harmonic14, 15 $$f_{{\mathrm{DCC}}} \approx \sqrt 2 \sigma {\mathrm{cos}}\left[ {w(t - c)} \right],$$ to directly link the reference irradiance to the phase (c) and the amplitude (σ). Next, we substitute the sinusoidal approximation Eqs. (14) and (15) into Eq. (13). With μ, R cld, and R clr from ERA-20C reanalysis and σ and c from the GCM outputs, we are thus able to isolate the radiative impacts of DCC phase and amplitude from each climate model (e.g., Fig. 3). 
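The circular centroid of Eqs. (5)–(6) and the wrap-around handling of centroid differences used in Eq. (9) can be sketched as follows; the input values are again hypothetical and serve only to illustrate the circular statistics.

```python
import numpy as np

TAU = 24.0                   # length of the diurnal cycle (h)
t = np.arange(3, 27, 3.0)    # local solar times 3 h, 6 h, ..., 24 h

def centroid(f_bar):
    """Circular centroid of the diurnal cycle, Eqs. (5)-(6), mapped to [0, 24)."""
    p = f_bar / f_bar.sum()                            # weights, Eq. (5)
    z = np.sum(p * np.exp(1j * 2.0 * np.pi * t / TAU))
    return (TAU / (2.0 * np.pi)) * np.angle(z) % TAU   # Eq. (6)

def centroid_difference(c1, c2):
    """Signed centroid difference wrapped into [-12, 12) h, as used in Eq. (9)."""
    return (c1 - c2 + TAU / 2.0) % TAU - TAU / 2.0

# Hypothetical afternoon-peaked climatology: the centroid falls in the late afternoon.
f_bar = np.array([0.35, 0.30, 0.40, 0.55, 0.70, 0.60, 0.45, 0.40])
print(round(centroid(f_bar), 1))        # roughly 16 h for this example
print(centroid_difference(2.0, 23.0))   # 2 h and 23 h are only 3 h apart
```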
Alternatively, focusing on the overall effect of DCC fluctuations (f DCC), without the sinusoidal approximations Eq. (15), we can substitute Eq. (14) directly into Eq. (13), with μ, R cld, and R clr obtained from ERA-20C reanalysis and f DCC from GCMs (see Fig. 4). Both these two versions of daily mean TOA irradiance computed from Eq. (13) are referred to as TOA reference irradiance in the main text. It is worth mentioning here that R cld and R clr are obtained from ERA-20C for assessing TOA reference irradiance in Figs 4 and 5 of the main text (alternative results using CanAM4 outputs are shown in Supplementary Figs. 18 and 19). In this way, we follow an approach which is similar in spirit to the standard radiative kernel approach used for estimating climate feedbacks46. A single set of radiative kernels usually is good enough for assessing climate feedback from different GCMs47. Similarly, the selection of standard R cld and R clr also has limited impacts on their inter-model patterns of TOA reference irradiance, which have been used for assessing the DCC radiative impacts (e.g., compare the results from ERA-20C in Figs 4 and 5 and from CanAM4 outputs in Supplementary Figs. 18 and 19). Code availability Models used in this paper are available from the corresponding author on request. The ISCCP satellite records are available from NASA Atmospheric Science Data Center (http://isccp.giss.nasa.gov/). The ERA-20C reanalysis data can be obtained from the European Centre for Medium-Range Weather Forecasts (http://www.ecmwf.int). The climate model data can be downloaded from the fifth phase of the Coupled Model Intercomparison Project website (http://cmip-pcmdi.llnl.gov). Boucher, O. et al. in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds Stocker, T. F. et al.) 571–658 (Cambridge University Press, Cambridge, UK, 2013). Dessler, A. E. A determination of the cloud feedback from climate variations over the past decade. Science 330, 1523–1527 (2010). ADS CAS Article PubMed Google Scholar Zhang, M. et al. CGILS: results from the first phase of an international project to understand the physical mechanisms of low cloud feedbacks in single column models. J. Adv. Model Earth Syst. 5, 826–842 (2013). ADS Article Google Scholar Zelinka, M. D., Klein, S. A. & Hartmann, D. L. Computing and partitioning cloud feedbacks using cloud property histograms. Part II: attribution to changes in cloud amount, altitude, and optical depth. J. Clim. 25, 3736–3754 (2012). Bergman, J. W. & Salby, M. L. The role of cloud diurnal variations in the time-mean energy budget. J. Clim. 10, 1114–1124 (1997). Minnis, P. & Harrison, E. F. Diurnal variability of regional cloud and clear-sky radiative parameters derived from GOES data. Part III: November 1978 radiative parameters. J. Clim. Appl. Meteorol. 23, 1032–1051 (1984). Yang, G.-Y. & Slingo, J. The diurnal cycle in the tropics. Mon. Weather Rev. 129, 784–801 (2001). Clark, A. J., Gallus, W. A. & Chen, T.-C. Comparison of the diurnal precipitation cycle in convection-resolving and non-convection-resolving mesoscale models. Mon. Weather Rev. 135, 3456–3473 (2007). Pfeifroth, U., Hollmann, R. & Ahrens, B. Cloud cover diurnal cycles in satellite data and regional climate model simulations. Meteorol. Z. 21, 551–560 (2012). Langhans, W., Schmidli, J., Fuhrer, O., Bieri, S. & Schär, C. 
Long-term simulations of thermally driven flows and orographic convection at convection-parameterizing and cloud-resolving resolutions. J. Appl. Meteorol. Climatol. 52, 1490–1510 (2013). Walther, A., Jeong, J.-H., Nikulin, G., Jones, C. & Chen, D. Evaluation of the warm season diurnal cycle of precipitation over Sweden simulated by the Rossby Centre regional climate model RCA3. Atmos. Res. 119, 131–139 (2013). Gustafson, W. I., Ma, P.-L. & Singh, B. Precipitation characteristics of CAM5 physics at mesoscale resolution during MC3E and the impact of convective timescale choice. J. Adv. Model Earth Syst. 6, 1271–1287 (2014). Webb, M. J. et al. The diurnal cycle of marine cloud feedback in climate models. Clim. Dyn. 44, 1419–1436 (2015). Wood, R., Bretherton, C. S. & Hartmann, D. L. Diurnal cycle of liquid water path over the subtropical and tropical oceans. Geophys. Res. Lett. 29, 2092 (2002). ADS Google Scholar O'Dell, C. W., Wentz, F. J. & Bennartz, R. Cloud liquid water path from satellite-based passive microwave observations: a new climatology over the global oceans. J. Clim. 21, 1721–1739 (2008). Rossow, W. B. & Schiffer, R. A. ISCCP cloud data products. Bull. Am. Meteorol. Soc. 72, 2–20 (1991). Poli, P. et al. ERA-20C: an atmospheric reanalysis of the twentieth century. J. Clim. 29, 4083–4097 (2016). Rossow, W. B. & Schiffer, R. A. Advances in understanding clouds from ISCCP. Bull. Am. Meteorol. Soc. 80, 2261–2287 (1999). Rozendaal, M. A., Leovy, C. B. & Klein, S. A. An observational study of diurnal-variations of marine stratiform cloud. J. Clim. 8, 1795–1809 (1995). Klein, S. A. & Hartmann, D. L. Spurious changes in the ISCCP dataset. Geophys. Res. Lett. 20, 455–458 (1993). Evan, A. T., Heidinger, A. K. & Vimont, D. J. Arguments against a physical long-term trend in global ISCCP cloud amounts. Geophys. Res. Lett. 34, L04701 (2007). Cairns, B. Diurnal variations of cloud from ISCCP data. Atmos. Res. 37, 133–146 (1995). Wylie, D. P. & Woolf, H. M. The diurnal cycle of upper-tropospheric clouds measured by GOES-VAS and the ISCCP. Mon. Weather Rev. 130, 171–179 (2002). Voldoire, A. et al. The CNRM-CM5.1 global climate model: description and basic evaluation. Clim. Dyn. 40, 2091–2121 (2013). Taylor, K. E. Summarizing multiple aspects of model performance in a single diagram. J. Geophys. Res. Atmos. 106, 7183–7192 (2001). Cess, R. D. et al. Interpretation of cloud-climate feedback as produced by 14 atmospheric general circulation models. Science 245, 513–516 (1989). Cess, R. D. et al. Intercomparison and interpretation of climate feedback processes in 19 atmospheric general circulation models. J. Geophys. Res. Atmos. 95, 16601–16615 (1990). Cess, R. D. et al. Cloud feedback in atmospheric general circulation models: an update. J. Geophys. Res. Atmos. 101, 12791–12794 (1996). Jiang, J. H. et al. Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA 'A-Train' satellite observations. J. Geophys. Res. Atmos. 117, D14105 (2012). Perez, J., Menendez, M., Mendez, F. J. & Losada, I. J. Evaluating the performance of CMIP3 and CMIP5 global climate models over the north-east Atlantic region. Clim. Dyn. 43, 2663–2680 (2014). Webb, M., Senior, C., Bony, S. & Morcrette, J.-J. Combining ERBE and ISCCP data to assess clouds in the Hadley Centre, ECMWF and LMD atmospheric climate models. Clim. Dyn. 17, 905–922 (2001). Zhang, M. H. et al. Comparing clouds and their seasonal variations in 10 atmospheric general circulation models with satellite measurements. J. Geophys. Res. 
Atmos. 110, D15S02 (2005). Williams, K. D. & Webb, M. J. A quantitative performance assessment of cloud regimes in climate models. Clim. Dyn. 33, 141–157 (2009). Nam, C., Bony, S., Dufresne, J.-L. & Chepfer, H. The 'too few, too bright' tropical low-cloud problem in CMIP5 models. Geophys. Res. Lett. 39, L21801 (2012). Tsushima, Y. et al. Robustness, uncertainties, and emergent constraints in the radiative responses of stratocumulus cloud regimes to future warming. Clim. Dyn. 46, 3025–3039 (2016). Hourdin, F. et al. LMDZ5B: the atmospheric component of the IPSL climate model with revisited parameterizations for clouds and convection. Clim. Dyn. 40, 2193–2222 (2013). Mauritsen, T. et al. Tuning the climate of a global model. J. Adv. Model Earth Syst. 4, M00A01 (2012). Karlsson, J., Svensson, G. & Rodhe, H. Cloud radiative forcing of subtropical low level clouds in global models. Clim. Dyn. 30, 779–788 (2008). Brient, F. & Bony, S. How may low-cloud radiative properties simulated in the current climate influence low-cloud feedbacks under global warming? Geophys. Res. Lett. 39, L20807 (2012). Stevens, B. & Bony, S. What are climate models missing? Science 340, 1053–1054 (2013). Brisson, E. et al. How well can a convection-permitting climate model reproduce decadal statistics of precipitation, temperature and cloud characteristics? Clim. Dyn. 47, 3043–3061 (2016). Ramanathan, V. et al. Cloud-radiative forcing and climate: results from the Earth radiation budget experiment. Science 243, 57–63 (1989). Senior, C. A. & Mitchell, J. F. B. Carbon dioxide and climate. The impact of cloud parameterization. J. Clim. 6, 393–418 (1993). Donovan, D. P. Ice-cloud effective particle size parameterization based on combined lidar, radar reflectivity, and mean Doppler velocity measurements. J. Geophys. Res. Atmos. 108, 4573 (2003). Jammalamadaka, S. R. & Sengupta, A. Topics in Circular Statistics, Vol 5 (World Scientific, Singapore, 2001). Soden, B. J. et al. Quantifying climate feedbacks using radiative kernels. J. Clim. 21, 3504–3520 (2008). Vial, J., Dufresne, J.-L. & Bony, S. On the interpretation of inter-model spread in CMIP5 climate sensitivity estimates. Clim. Dyn. 41, 3339–3362 (2013). Berrisford, P. et al. The ERA-Interim Archive. ERA Report Series 1 Technical Report. 16 (European Centre for Medium-Range Weather Forecasts, Reading, 2009). Dee, D. P. et al. The ERA-interim reanalysis: configuration and performance of the data assimilation system. Q. J. R. Meteorol. Soc. 137, 553–597 (2011). We acknowledge support from the USDA Agricultural Research Service cooperative agreement 58-6408-3-027; and the National Science Foundation (NSF) grants EAR-1331846, EAR-1316258, FESD EAR-1338694, and the Duke WISeNet Grant DGE-1068871. Department of Civil and Environmental Engineering, Princeton University, Princeton, NJ, 08544, USA Jun Yin & Amilcare Porporato Princeton Environmental Institute, Princeton University, Princeton, NJ, 08544, USA Jun Yin Amilcare Porporato A.P. and J.Y. conceived and designed the study. J.Y. wrote an initial draft of the paper, to which both authors contributed edits throughout. Correspondence to Amilcare Porporato. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Peer Review File Yin, J., Porporato, A. Diurnal cloud cycle biases in climate models. Nat Commun 8, 2269 (2017). 
https://doi.org/10.1038/s41467-017-02369-4
CommonCrawl
amegonz13 Cell membranes are asymmetrical. Which of the following statements is the most likely explanation for the membrane's asymmetrical nature? The two sides of a cell membrane face different environments and carry out different functions. In facilitated diffusion, what is the role of the transport protein? Transport proteins provide a hydrophilic route for the solute to cross the membrane. Which of the following molecular movements is due to diffusion or osmosis? Cells of the pancreas secrete insulin into the bloodstream. When a plant cell is placed in concentrated salt water, water moves out of the cell. The sodium-potassium pump pumps three sodium ions out of a neuron for every two potassium ions it pumps in. Which of the following processes includes all others? transport of an ion down its electrochemical gradient diffusion of a solute across a membrane facilitated diffusion passive transport Which of the following factors would tend to increase membrane fluidity? a greater proportion of saturated phospholipids a relatively high protein content in the membrane a lower temperature a greater proportion of unsaturated phospholipids a greater proportion of relatively large glycolipids compared with lipids having smaller molecular masses When molecules move down their concentration gradient, they move from where they are _____________ to where they are______________ Diffusion across a biological membrane is called ______________ more concentrated less concentrated. passive transport. Which of the following statements about osmosis is correct? Osmosis is the diffusion of water from a region of lower water concentration to a region of higher water concentration. Osmotic movement of water into a cell would likely occur if the cell accumulates water from its environment. If a solution outside the cell is hypertonic compared to the cytoplasm, water will move into the cell by osmosis. If a cell is placed in an isotonic solution, more water will enter the cell than leaves the cell. The presence of aquaporins (proteins that form water channels in the membrane) should speed up the process of osmosis. Select the correct statement about osmosis. Osmosis is the diffusion of water molecules across a selectively permeable membrane. Osmotic equilibrium cannot be reached unless solute concentrations equalize across the membrane. If a dead cell is placed in a solution hypotonic to the cell contents, osmosis will not occur. What name is given to the process by which water crosses a selectively permeable membrane? Endocytosis moves materials _____ a cell via _____. into ... membranous vesicles You can recognize the process of pinocytosis when _____. the cell is engulfing extracellular fluid A white blood cell engulfing a bacterium is an example of _____. Which statements about the fluid mosaic structure of a membrane are correct? Select the three correct statements. -Membranes include a mosaic, or mix, of carbohydrates embedded in a phospholipid bilayer. -The diverse proteins found in and attached to membranes perform many important functions. -Because membranes are fluid, membrane proteins and phospholipids can drift about in the membrane. -The framework of a membrane is a bilayer of phospholipids with their hydrophilic heads facing the aqueous environment inside and outside of the cell and their hydrophobic tails clustered in the center. -The kinky tails of some proteins help keep the membrane fluid by preventing the component molecules from packing solidly together. 
Which of the following would be a factor that determines whether the molecule selectively enters the target cells? the nonpolar, hydrophobic nature of the drug molecule the concentration of the drug molecule that is transported in the blood the similarity of the drug molecule to other molecules that are transported into the target cells the phospholipid composition of the target cells' plasma membrane Which of the following factors does not affect membrane permeability? The polarity of membrane phospholipids The saturation of hydrocarbon tails in membrane phospholipids The amount of cholesterol in the membrane How can a lipid be distinguished from a sugar? Lipids are mostly nonpolar. True or false? Osmosis is a type of diffusion. What property of dishwashing liquid (detergent) makes it useful to wash grease from pans? Amphipathic nature Which of the following particles could diffuse easily through a cell membrane? Sodium ion (Na+) Hydrogen ion (H+) True or false? The water-soluble portion of a phospholipid is the polar head, which generally consists of a glycerol molecule linked to a phosphate group. If a red blood cell is placed in a salt solution and bursts, what is the tonicity of the solution relative to the interior of the cell? Hypotonic If the concentration of phosphate in the cytosol is higher than the concentration of phosphate in the surrounding fluid, how could the cell increase the concentration of phosphate in the cytosol? What happens when two solutions separated by a selectively permeable membrane reach osmotic equilibrium? Water molecules move between the two solutions, but there is no net movement of water across the membrane. Which of the following statements about a typical plasma membrane is correct? The two sides of the plasma membrane have different lipid and protein composition. Carbohydrates on the membrane surface are important in determining the overall bilayer structure. Phospholipids are the primary component that determines which solutes can cross the plasma membrane. The hydrophilic interior of the membrane is composed primarily of the fatty acid tails of the phospholipids. The plasma membrane is a covalently linked network of phospholipids and proteins that controls the movement of solutes into and out of a cell. Which of the following best describes the structure of a biological membrane? a fluid structure in which phospholipids and proteins move freely between sides of the membrane two layers of phospholipids with proteins either crossing the layers or on the surface of the layers two layers of phospholipids (with opposite orientations of the phospholipids in each layer) with each layer covered on the outside with proteins two layers of phospholipids with proteins embedded between the two layers a mixture of covalently linked phospholipids and proteins that determines which solutes can cross the membrane and which cannot The permeability of a biological membrane to a specific polar solute may depend on which of the following? the types of transport proteins in the membrane Which of the following correctly describes some aspect of exocytosis or endocytosis? Both processes provide a mechanism for exchanging membrane-impermeable molecules between the organelles and the cytosol. The inner surface of a transport vesicle that fuses with or buds from the plasma membrane is most closely related to the inner surface of the plasma membrane. Exocytosis and endocytosis temporarily change the surface area of the plasma membrane. 
Endocytosis and exocytosis involve passive transport. These two processes require the participation of mitochondria. Which statement is correct? A solution of seawater is hypertonic. The contents of a red blood cell are hyperosmotic to distilled water. A solution of distilled water is hypotonic
CommonCrawl
Improved approximate rips filtrations with shifted integer lattices and cubical complexes Aruni Choudhary ORCID: orcid.org/0000-0002-9225-08291, Michael Kerber2 & Sharath Raghvendra3 Journal of Applied and Computational Topology volume 5, pages 425–458 (2021)Cite this article Rips complexes are important structures for analyzing topological features of metric spaces. Unfortunately, generating these complexes is expensive because of a combinatorial explosion in the complex size. For n points in \(\mathbb {R}^d\), we present a scheme to construct a 2-approximation of the filtration of the Rips complex in the \(L_\infty \)-norm, which extends to a \(2d^{0.25}\)-approximation in the Euclidean case. The k-skeleton of the resulting approximation has a total size of \(n2^{O(d\log k +d)}\). The scheme is based on the integer lattice and simplicial complexes based on the barycentric subdivision of the d-cube. We extend our result to use cubical complexes in place of simplicial complexes by introducing cubical maps between complexes. We get the same approximation guarantee as the simplicial case, while reducing the total size of the approximation to only \(n2^{O(d)}\) (cubical) cells. There are two novel techniques that we use in this paper. The first is the use of acyclic carriers for proving our approximation result. In our application, these are maps which relate the Rips complex and the approximation in a relatively simple manner and greatly reduce the complexity of showing the approximation guarantee. The second technique is what we refer to as scale balancing, which is a simple trick to improve the approximation ratio under certain conditions. Context. Persistent homology (Carlsson 2009; Edelsbrunner and Harer 2010; Edelsbrunner et al. 2002) is a technique to analyze data sets using topological invariants. The idea is to build a multi-scale representation of data sets and to track its homological changes across the scales. A standard construction for the important case of point clouds in Euclidean space is the Vietoris-Rips complex (usually abbreviated as simply the Rips complex): for a scale parameter \(\alpha \ge 0\), it is the collection of all subsets of points with diameter at most \(\alpha \). When \(\alpha \) increases from 0 to \(\infty \), the Rips complexes form a filtration, an increasing sequence of nested simplicial complexes whose homological changes can be computed and represented in terms of a barcode. The computational drawback of Rips complexes is their sheer size: the k-skeleton of a Rips complex (that is, where only subsets of size at most \(k+1\) are considered) for n points consists of \(\Theta (n^{k+1})\) simplices because every \((k+1)\)-subset joins the complex for a sufficiently large scale parameter. This size bound makes barcode computations for large point clouds infeasible even for low-dimensional homological featuresFootnote 1. This difficulty motivates the question of what we can say about the barcode of the Rips filtration without explicitly constructing all of its simplices. We address this question using approximation techniques. The space of barcodes forms a metric space: two barcodes are close if similiar homological features occur on roughly the same range of scales. More precisely, the bottleneck distance is used as a distance metric between barcodes. 
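To illustrate the combinatorial explosion described above, the following brute-force Python sketch enumerates the k-skeleton of a Vietoris-Rips complex directly from its definition, keeping every subset of at most k+1 points whose diameter is at most alpha (here in the L∞-norm, the metric also used for the paper's main result). This naive construction only illustrates the definition and the Θ(n^(k+1)) size bound; it is not the approximation scheme developed in the paper.

```python
from itertools import combinations
import numpy as np

def rips_skeleton(points, alpha, k):
    """All simplices of dimension <= k in the Vietoris-Rips complex at scale alpha:
    subsets of at most k+1 points whose pairwise L-infinity distances are <= alpha."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.max(np.abs(pts[:, None, :] - pts[None, :, :]), axis=-1)  # L_inf metric
    simplices = []
    for dim in range(k + 1):
        for sigma in combinations(range(n), dim + 1):
            if all(dist[i, j] <= alpha for i, j in combinations(sigma, 2)):
                simplices.append(sigma)
    return simplices

# For a large enough scale every (k+1)-subset appears, so the k-skeleton has
# Theta(n^(k+1)) simplices -- the combinatorial explosion mentioned above.
pts = np.random.default_rng(0).random((8, 3))
print(len(rips_skeleton(pts, alpha=0.5, k=2)))
```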
The first approximation scheme by Sheehy (2013) constructs a \((1+\varepsilon )\)-approximation of the k-skeleton of the Rips filtration using only \(n(\frac{1}{\varepsilon })^{O(\lambda k)}\) simplices for arbitrary finite metric spaces, where \(\lambda \) is the doubling dimension of the metric. Further approximation techniques for Rips complexes (Dey et al. 2014) and the closely related Čech complexes (Botnan and Spreemann 2015; Cavanna et al. 2015; Kerber and Sharathkumar 2013) have been derived subsequently, all with comparable size bounds. More recently, we constructed an approximation scheme (Choudhary et al. 2019) for the Čech filtrations of n points in \(\mathbb {R}^d\) that had size \(n\left( \frac{1}{\varepsilon }\right) ^{O(d)}2^{O(d\log d +dk)}\) for the k-skeleton, improving the size bound from previous work. In Choudhary et al. (2017b), we constructed an approximation scheme for the Rips filtration in Euclidean space that yields a worse approximation factor of O(d), but uses only \(n2^{O(d\log k +d)}\) simplices. In Choudhary et al. (2017b), we also show a lower bound result on the size of approximations: for any \(\varepsilon < 1/\log ^{1+c} n\) with some constant \(c\in (0,1)\), any \(\varepsilon \)-approximate filtration has size \(n^{\Omega (\log \log n)}\).

There has also been work on using cubical complexes to compute persistent homology, such as in Wagner et al. (2012). Cubical complexes are typically smaller than their simplicial counterparts, simply because they avoid triangulations. However, to our knowledge, there has been no attempt to utilize them in computing approximations of filtrations. Also, while there are efficient methods to compute persistence for simplicial complexes connected with simplicial maps (Dey et al. 2014; Kerber and Schreiber 2017), we are not aware of such counterparts for cubical complexes.

Our contributions. For the Rips filtration of n points in \(\mathbb {R}^d\) with distances taken in the \(L_\infty \)-norm, we present a 2-approximation whose k-skeleton has size at most \(n6^{d-1}(2k+4)(k+3)! \left\{ \begin{array}{c}d\\ k+2\end{array}\right\} =n2^{O(d\log k + d)}\) where \( \left\{ \begin{array}{c}a\\ b\end{array}\right\} \) denotes Stirling numbers of the second kind. This translates to a \(2d^{0.25}\)-approximation of the Rips filtration in the Euclidean metric and hence improves the asymptotic approximation quality of our previous approach (Choudhary et al. 2017b) with the same size bound. Our scheme gives the best size guarantee over all previous approaches.

On a high level, our approach follows a straightforward approximation scheme: given a scaled and appropriately shifted integer grid on \(\mathbb {R}^d\), we identify those grid points that are close to the input points and build an approximation complex using these grid points. The challenge lies in how to connect these grid points to a simplicial complex such that close-by grid points are connected, while avoiding too many connections to keep the size small. Our approach first selects a set of active faces in the cubical complex defined over the grid, and defines the approximation complex using the barycentric subdivision of this cubical complex. We also describe an output-sensitive algorithm to compute our approximation.
By randomizing the aforementioned shifts of the grids, we obtain a worst-case running time of \(n2^{O(d)}\log \Delta +2^{O(d)}M\) in expectation, where \(\Delta \) is the spread of the point set (that is, the ratio of the diameter to the closest distance of two points) and M is the size of the approximation.

Additionally, this paper makes the following technical contributions.

We follow the standard approach of defining a sequence of approximation complexes and establishing an interleaving between the Rips filtration and the approximation. We realize our interleaving using chain maps connecting a Rips complex at scale \(\alpha \) to an approximation complex at scale \(c\alpha \), and vice versa, with \(c\ge 1\) being the approximation factor. Previous approaches (Choudhary et al. 2017b; Dey et al. 2014; Sheehy 2013) used simplicial maps for the interleaving, which induce an elementary form of chain maps and are therefore more restrictive. The explicit construction of such maps can be a non-trivial task. The novelty of our approach is that we avoid this construction by using acyclic carriers (Munkres 1984). In short, carriers are maps that assign subcomplexes to subcomplexes under some mild extra conditions. While they are more flexible, they still certify the existence of suitable chain maps, as we exemplify in Sect. 2. We believe that this technique is of general interest for the construction of approximations of cell complexes.

We exploit a simple trick that we call scale balancing to improve the quality of approximation schemes. In short, if the aforementioned interleaving maps from and to the Rips filtration do not increase the scale parameter by the same amount, one can simply multiply the scale parameter of the approximation by a constant. Concretely, given maps
$$\begin{aligned} \phi _\alpha :\mathcal {R}_\alpha \rightarrow \mathcal {X}_\alpha \qquad \psi _\alpha :\mathcal {X}_\alpha \rightarrow \mathcal {R}_{c\alpha } \end{aligned}$$
interleaving the Rips complex \(\mathcal {R}_\alpha \) and the approximation complex \(\mathcal {X}_\alpha \), we can define \(\mathcal {X}'_\alpha :=\mathcal {X}_{\alpha /\sqrt{c}}\) and obtain maps
$$\begin{aligned} \phi '_\alpha :\mathcal {R}_\alpha \rightarrow \mathcal {X}'_{\sqrt{c}\alpha }\qquad \psi '_\alpha :\mathcal {X}'_\alpha \rightarrow \mathcal {R}_{\sqrt{c}\alpha } \end{aligned}$$
which improves the interleaving from c to \(\sqrt{c}\). While it has been observed that the same trick can be used for improving the worst-case distance between Rips and Čech filtrations, our work seems to be the first to make use of it in the context of approximations.

We extend our approximation scheme to use cubical complexes instead of simplicial complexes, thereby achieving a marked reduction in size complexity. To connect the cubical complexes at different scales, we introduce the notion of cubical maps, which is a simple extension of simplicial maps to the cubical case. While we do not know of an algorithm that can compute persistence for the case of cubical complexes with cubical maps, we believe that this is a first step towards advocating the use of cubical complexes as approximating structures.

Our technique can be combined with dimension reduction techniques in the same way as in Choudhary et al. (2017b) (see Theorems 19, 21, and 22 therein), with improved logarithmic factors. We state the main results in the paper, while omitting the technical details.

Updates from the conference version.
An earlier version of this paper appeared at the 25th European Symposium on Algorithms (Choudhary et al. 2017a). In that version, we achieved a \(3\sqrt{2}\)-approximation of the \(L_\infty \) Rips filtration and correspondingly, a \(3\sqrt{2}d^{0.25}\)-approximation of the \(L_2\) case. In this version, we improve the weak interleaving of Choudhary et al. (2017a) to a strong interleaving to get improved approximation factors. We expand upon the details of scale balancing, among other proofs that were missing from the conference version. We add the case of cubical complexes in this version. There is a subtle yet important distinction between the approximation complexes used in the conference version and the current result. In the conference version, our simplicial complex was built using only active faces, while the current version uses both active and secondary faces (please see Sect. 4 for definitions). This makes it easier to relate the simplicial and the cubical complexes in the current version. On the other hand, the complexes are different, hence the associated proofs have been adapted accordingly.

Outline. We start by explaining the relevant topological concepts in Sect. 2. We give details of the integer grids that we use in Sect. 3. In Sect. 4 we present our approximation scheme that uses the barycentric subdivision, and present the computational aspects in Sect. 5. The extension to cubical complexes is presented in Sect. 6. We discuss practical aspects of our scheme and conclude in Sect. 7. Some details of the strong interleaving from Sect. 4 are deferred to Appendix A.

We briefly review the essential topological concepts needed. More details are available in standard references (see Bubenik et al. 2015; Chazal et al. 2009; Edelsbrunner and Harer 2010; Hatcher 2002; Munkres 1984).

Simplicial complexes. A simplicial complex K on a finite set of elements S is a collection of subsets \(\{\sigma \subseteq S\}\) called simplices such that each subset \(\tau \subset \sigma \) is also in K. The dimension of a simplex \(\sigma \in K\) is \(k:=|\sigma |-1\), in which case \(\sigma \) is called a k-simplex. A simplex \(\tau \) is a sub-simplex of \(\sigma \) if \(\tau \subseteq \sigma \). We remark that a sub-simplex is commonly called a "face" of a simplex, but we reserve the word "face" for a different structure. For the same reason, we do not introduce the common notation of "vertices" and "edges" of simplicial complexes, but rather refer to 0- and 1-simplices throughout. The k-skeleton of K consists of all simplices of K whose dimension is at most k. For instance, the 1-skeleton of K is a graph defined by its 0-simplices and 1-simplices.

Given a point set \(P\subset \mathbb {R}^d\) and a real number \(\alpha \ge 0\), the (Vietoris-)Rips complex on P at scale \(\alpha \) consists of all simplices \(\sigma =(p_0,\cdots ,p_k)\subseteq P\) such that \(diam(\sigma )\le \alpha \), where diam denotes the diameter. In this work, we write \(\mathcal {R}_\alpha \) for the Rips complex at scale \(2\alpha \) with the Euclidean metric, and \(\mathcal {R}^{\infty }_\alpha \) when using the metric of the \(L_\infty \)-norm. Either way, a Rips complex is an example of a flag complex, which means that whenever a set \(\{p_0,\cdots ,p_k\}\subseteq P\) has the property that every 1-simplex \(\{p_i,p_j\}\) is in the complex, then the k-simplex \(\{p_0,\cdots ,p_k\}\) is also in the complex.
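To make the flag-complex description concrete, here is a small Python sketch (our own illustration, not code from the paper) that builds the k-skeleton of \(\mathcal {R}^{\infty }_\alpha \) by brute force: it computes the 1-skeleton from pairwise \(L_\infty \) distances and then adds every higher simplex all of whose 1-simplices are present. Following the convention above, the diameter threshold is \(2\alpha \); the \(\Theta (n^{k+1})\) running time is exactly the blow-up that the approximation scheme avoids.

```python
from itertools import combinations
import numpy as np

def linf_rips_skeleton(points, alpha, k):
    """k-skeleton of the L_inf Rips complex R^inf_alpha.

    Following the paper's convention, R^inf_alpha contains all simplices
    of L_inf-diameter at most 2*alpha.  Brute force: O(n^(k+1)) simplices.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # pairwise L_inf distances
    dist = np.max(np.abs(pts[:, None, :] - pts[None, :, :]), axis=2)
    edge = dist <= 2 * alpha
    simplices = [frozenset([i]) for i in range(n)]
    for dim in range(1, k + 1):
        for sigma in combinations(range(n), dim + 1):
            # flag condition: every pair of vertices forms a 1-simplex
            if all(edge[i, j] for i, j in combinations(sigma, 2)):
                simplices.append(frozenset(sigma))
    return simplices

if __name__ == "__main__":
    P = [(0.0, 0.0), (1.0, 0.2), (0.3, 1.1), (4.0, 4.0)]
    print(len(linf_rips_skeleton(P, alpha=0.6, k=2)))   # 4 vertices + 3 edges + 1 triangle
```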
A related complex is the Čech complex of P at scale \(\alpha \), which consists of simplices of P for which the radius of the minimum enclosing ball is at most \(\alpha \). We do not study Čech complexes in this paper, but we mention them briefly while showing a connection with the Rips complex later in this section.

A simplicial complex \(K'\) is a subcomplex of K if \(K'\subseteq K\). For instance, \(\mathcal {R}_{\alpha }\) is a subcomplex of \(\mathcal {R}_{\alpha '}\) for \(0\le \alpha \le \alpha '\). Let L be a simplicial complex. Let \(\hat{\varphi }\) be a map which assigns a vertex of L to each vertex of K. A simplicial map is a map \(\varphi :K\rightarrow L\) induced by a vertex map \(\hat{\varphi }\), such that for every simplex \(\{p_0,\cdots ,p_k\}\) in K, the set \(\{\hat{\varphi }(p_0),\cdots ,\hat{\varphi }(p_k)\}\) is a simplex of L. For \(K'\) a subcomplex of K, the inclusion map \(inc:K'\rightarrow K\) is an example of a simplicial map. A simplicial map is completely determined by its action on the 0-simplices of K.

Chain complexes. A chain complex \(\mathcal {C}_*=(\mathcal {C}_p,\partial _p)\) with \(p\in \mathbb {Z}\) is a collection of abelian groups \(\mathcal {C}_p\) and homomorphisms \(\partial _p:\mathcal {C}_p\rightarrow \mathcal {C}_{p-1}\) such that \(\partial _{p-1}\circ \partial _{p}=0\). A simplicial complex K gives rise to a chain complex \(\mathcal {C}_*(K)\) for a fixed base field \(\mathcal {F}\): define \(\mathcal {C}_p\) for \(p\ge 0\) as the set of formal linear combinations of p-simplices in K over \(\mathcal {F}\), and \(\mathcal {C}_{-1}:=\mathcal {F}\). The boundary of a k-simplex with \(k\ge 1\) is the (signed) sum of its sub-simplices of co-dimension one; the boundary of a 0-simplex is simply set to 1. The homomorphisms \(\partial _p\) are then defined as the linear extensions of this boundary operator. Note that \(\mathcal {C}_*(K)\) is sometimes called the augmented chain complex of K, where the augmentation refers to the addition of the non-trivial group \(\mathcal {C}_{-1}\).

A chain map \(\phi :\mathcal {C}_*\rightarrow D_*\) between chain complexes \(\mathcal {C}_*=(\mathcal {C}_p,\partial _p)\) and \(D_*=(D_p,\partial '_p)\) is a collection of group homomorphisms \(\phi _p:\mathcal {C}_p\rightarrow D_p\) such that \(\phi _{p-1}\circ \partial _{p}=\partial '_{p}\circ \phi _{p}\). For simplicial complexes K and L, we call a chain map \(\phi :\mathcal {C}_*(K)\rightarrow \mathcal {C}_*(L)\) augmentation-preserving if \(\phi _{-1}\) is the identity. A simplicial map \(\varphi :K\rightarrow L\) between simplicial complexes induces an augmentation-preserving chain map \(\bar{\varphi }:\mathcal {C}_*(K)\rightarrow \mathcal {C}_*(L)\) between the corresponding chain complexes. This construction is functorial, meaning that for \(\varphi \) the identity function on a simplicial complex K, \(\bar{\varphi }\) is the identity function on \(\mathcal {C}_*(K)\), and for composable simplicial maps \(\varphi ,\varphi '\), we have that \(\overline{\varphi \circ \varphi '}= \bar{\varphi }\circ \bar{\varphi '}\).
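As a concrete illustration of the augmented chain complex \(\mathcal {C}_*(K)\) (a sketch of our own, with the base field fixed to \(\mathcal {F}=\mathbb {Z}/2\) so that signs disappear): the boundary homomorphisms become matrices over \(\mathbb {Z}/2\), the augmentation \(\partial _0\) maps every 0-simplex to \(1\in \mathcal {C}_{-1}\), and the defining identity \(\partial _{p-1}\circ \partial _p=0\) can be checked directly.

```python
from itertools import combinations
import numpy as np

def boundary_matrices(simplices):
    """Boundary matrices of the augmented chain complex C_*(K) over F = Z/2.

    `simplices` is a list of frozensets forming a simplicial complex (closed
    under taking subsets).  Returns a dict p -> matrix of partial_p, where
    partial_0 is the augmentation map onto C_{-1} = F.
    """
    by_dim = {}
    for s in simplices:
        by_dim.setdefault(len(s) - 1, []).append(s)
    index = {p: {s: i for i, s in enumerate(sorted(by_dim[p], key=sorted))}
             for p in by_dim}
    boundary = {}
    # augmentation: every 0-simplex maps to 1 in C_{-1}
    boundary[0] = np.ones((1, len(by_dim[0])), dtype=np.uint8)
    for p in range(1, max(by_dim) + 1):
        mat = np.zeros((len(by_dim[p - 1]), len(by_dim[p])), dtype=np.uint8)
        for s, col in index[p].items():
            for facet in combinations(sorted(s), p):   # co-dimension-1 sub-simplices
                mat[index[p - 1][frozenset(facet)], col] = 1
        boundary[p] = mat
    return boundary

if __name__ == "__main__":
    # the full 2-simplex on {0, 1, 2}
    K = [frozenset(s) for r in range(1, 4) for s in combinations(range(3), r)]
    d = boundary_matrices(K)
    for p in range(1, max(d) + 1):
        assert not (d[p - 1] @ d[p] % 2).any()   # partial_{p-1} o partial_p = 0
    print("boundary of boundary vanishes over Z/2")
```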
Homology. The p-th homology group \(H_p(\mathcal {C}_*)\) of a chain complex is defined as \(\mathrm {ker}\,\partial _p/\mathrm {im}\,\partial _{p+1}\). The p-th homology group of a simplicial complex K, \(H_p(K)\), is the p-th homology group of its induced chain complex \(\mathcal {C}_*(K)\). Note that this definition is commonly referred to as reduced homology, but we ignore this distinction and consider reduced homology throughout. \(H_p(\mathcal {C}_*)\) is an \(\mathcal {F}\)-vector space because we have chosen our base ring \(\mathcal {F}\) as a field. Intuitively, when the chain complex is generated from a simplicial complex, the dimension of the p-th homology group counts the number of p-dimensional holes in the complex. We write \(H(\mathcal {C}_*)\) for the direct sum of all \(H_p(\mathcal {C}_*)\) for \(p\ge 0\). A chain map \(\phi :\mathcal {C}_*\rightarrow D_*\) induces a linear map \(\phi ^*: H(\mathcal {C}_*)\rightarrow H(D_*)\) between the homology groups. Again, this construction is functorial, meaning that it maps identity maps to identity maps, and it is compatible with compositions.

Acyclic carriers. We call a simplicial complex K acyclic if K is connected and all homology groups \(H_p(K)\) are trivial. For simplicial complexes K and L, an acyclic carrier \(\Phi \) is a map that assigns to each simplex \(\sigma \) in K a non-empty acyclic subcomplex \(\Phi (\sigma )\subseteq L\), such that whenever \(\tau \) is a sub-simplex of \(\sigma \), then \(\Phi (\tau )\subseteq \Phi (\sigma )\). We say that a chain \(c\in \mathcal {C}_p(K)\) is carried by a subcomplex \(K'\), if c takes value 0 except for p-simplices in \(K'\). A chain map \(\phi :\mathcal {C}_*(K)\rightarrow \mathcal {C}_*(L)\) is carried by \(\Phi \), if for each simplex \(\sigma \in K\), \(\phi (\sigma )\) is carried by \(\Phi (\sigma )\). We state the acyclic carrier theorem (Munkres 1984, Thm 13.3), adapted to our notation:

Theorem 1 Let \(\Phi :K\rightarrow L\) be an acyclic carrier. Then,

1. There exists an augmentation-preserving chain map \(\phi :\mathcal {C}_*(K)\rightarrow \mathcal {C}_*(L)\) carried by \(\Phi \).

2. If two augmentation-preserving chain maps \(\phi _1,\phi _2:\mathcal {C}_*(K)\rightarrow \mathcal {C}_*(L)\) are both carried by \(\Phi \), then \(\phi _1^*=\phi _2^*\).

We remark that "augmentation-preserving" is crucial in the statement: without it, the trivial chain map (that maps everything to 0) turns the first statement trivial and easily leads to a counter-example for the second claim.

Filtrations and towers. Let \(I\subseteq \mathbb {R}\) be a set of real values which we refer to as scales. A filtration is a collection of simplicial complexes \((K_\alpha )_{\alpha \in I}\) such that \(K_\alpha \subseteq K_{\alpha '}\) for all \(\alpha \le \alpha '\in I\). For instance, \((\mathcal {R}_\alpha )_{\alpha \ge 0}\) is a filtration which we call the Rips filtration. A (simplicial) tower is a sequence \((K_\alpha )_{\alpha \in J}\) of simplicial complexes with J being a discrete set (for instance \(J=\{2^k\mid k\in \mathbb {Z}\}\)), together with simplicial maps \(\varphi _\alpha :K_\alpha \rightarrow K_{\alpha '}\) between complexes at consecutive scales. For instance, the Rips filtration can be turned into a tower by restricting to a discrete range of scales, and using the inclusion maps as \(\varphi \). The approximation constructed in this paper will be another example of a tower. We say that a simplex \(\sigma \) is included in the tower at scale \(\alpha '\), if \(\sigma \) is not in the image of the map \(\varphi _{\alpha }:K_\alpha \rightarrow K_{\alpha '}\), where \(\alpha \) is the scale preceding \(\alpha '\) in the tower. The size of a tower is the number of simplices included over all scales. If a tower arises from a filtration, its size is simply the size of the largest complex in the filtration (or infinite, if no such complex exists).
However, this is not true in general for simplicial towers, because simplices can collapse in the tower and the size of the complex at a given scale may not take into account the collapsed simplices which were included at earlier scales in the tower.

Barcodes and interleavings. A collection of vector spaces \((V_\alpha )_{\alpha \in I}\) connected with linear maps \(\lambda _{\alpha _1,\alpha _2}:V_{\alpha _1}\rightarrow V_{\alpha _2}\) is called a persistence module, if \(\lambda _{\alpha ,\alpha }\) is the identity on \(V_\alpha \) and \(\lambda _{\alpha _2,\alpha _3}\circ \lambda _{\alpha _1,\alpha _2} =\lambda _{\alpha _1,\alpha _3}\) for all \(\alpha _1\le \alpha _2\le \alpha _3\in I\) for the index set I. We generate persistence modules using the previous concepts. Given a simplicial tower \((K_\alpha )_{\alpha \in I}\), we generate a sequence of chain complexes \((\mathcal {C}_*(K_\alpha ))_{\alpha \in I}\). By functoriality, the simplicial maps \(\varphi \) of the tower give rise to chain maps \(\overline{\varphi }\) between these chain complexes. Using functoriality of homology, we obtain a sequence \((H(K_\alpha ))_{\alpha \in I}\) of vector spaces with linear maps \(\overline{\varphi }^*\), forming a persistence module. The same construction applies to filtrations as a special case.

Persistence modules admit a decomposition into a collection of intervals of the form \([\alpha ,\beta ]\) (with \(\alpha ,\beta \in I\)), called the barcode, subject to certain tameness conditions. The barcode of a persistence module characterizes the module uniquely up to isomorphism. If the persistence module is generated by a simplicial complex, an interval \([\alpha ,\beta ]\) in the barcode corresponds to a homological feature (a "hole") that comes into existence at complex \(K_\alpha \) and persists until it disappears at \(K_\beta \).

Two persistence modules \((V_\alpha )_{\alpha \in I}\) and \((W_\alpha )_{\alpha \in I}\) with linear maps \(\phi _{\cdot ,\cdot }\) and \(\psi _{\cdot ,\cdot }\) are said to be weakly (multiplicatively) c-interleaved with \(c\ge 1\), if there exist linear maps \(\gamma _\alpha :V_\alpha \rightarrow W_{c\alpha }\) and \(\delta _\alpha :W_\alpha \rightarrow V_{c\alpha }\), called interleaving maps, such that the corresponding diagram commutes, that is, \(\psi =\gamma \circ \delta \) and \(\phi = \delta \circ \gamma \) for all \(\{\dots ,\alpha /c^2,\alpha /c,\alpha ,c\alpha ,\dots \}\in I\) (we have skipped the subscripts of the maps for readability). In such a case, the barcodes of the two modules are 3c-approximations of each other in the sense of Chazal et al. (2009). We say that two towers are c-approximations of each other if their persistence modules are c-approximations.

Under the more stringent conditions of strong interleaving, the approximation ratio can be improved. Two persistence modules \((V_\alpha )_{\alpha \ge 0}\) and \((W_\alpha )_{\alpha \ge 0}\) with respective linear maps \(\phi _{\cdot ,\cdot }\) and \(\psi _{\cdot ,\cdot }\) are said to be (multiplicatively) strongly c-interleaved if there exists a pair of families of linear maps \(\gamma _\alpha :V_{\alpha }\rightarrow W_{c\alpha }\) and \(\delta _\alpha :W_{\alpha }\rightarrow V_{c\alpha }\) for \(c>0\), such that Diagram (2) commutes for all \(0\le \alpha \le \alpha '\) (the subscripts of the maps are excluded for readability). In such a case, the persistence barcodes of the two modules are said to be c-approximations of each other in the sense of Chazal et al. (2009).
Finally, we mention a special case that relates equivalent persistence modules (Carlsson and Zomorodian 2005; Goodman et al. 2017). Two persistence modules \(\mathbb {V}=(V_\alpha )_{\alpha \in I}\) and \(\mathbb {W}=(W_\alpha )_{\alpha \in I}\) that are connected through linear maps \(\phi ,\psi \) respectively are isomorphic if there exists an isomorphism \(f_\alpha :V_\alpha \rightarrow W_\alpha \) for each \(\alpha \in I\) for which the corresponding square of maps (formed by \(\phi ,\psi \) and \(f_\alpha ,f_\beta \)) commutes for all \(\alpha \le \beta \in I\). Isomorphic persistence modules have identical persistence barcodes.

Scale balancing. Let \(\mathbb {V}=(V_\alpha )_{\alpha \in I}\) and \(\mathbb {W}=(W_\alpha )_{\alpha \in I}\) be two persistence modules with linear maps \(f_v,f_w\), respectively. Let there be linear maps \(\phi :V_{\alpha /\varepsilon _1}\rightarrow W_{\alpha }\) and \(\psi :W_{\alpha }\rightarrow V_{\alpha \varepsilon _2}\) for \(1\le \varepsilon _1,\varepsilon _2\) such that all \(\alpha ,\alpha /\varepsilon _1,\alpha \varepsilon _2\in I\). Suppose that the corresponding interleaving diagram, Diagram (4), commutes for all \(\alpha \in I\). Let \(\varepsilon :=\max (\varepsilon _1,\varepsilon _2)\). Then, by replacing \(\varepsilon _1,\varepsilon _2\) by \(\varepsilon \) in Diagram (4), the diagram still commutes, so \(\mathbb {V}\) is a \(3\varepsilon \)-approximation of \(\mathbb {W}\).

We define a new vector space \(V'_{c \alpha }:=V_\alpha \), where \(c=\sqrt{\frac{\varepsilon _1}{\varepsilon _2}}\) and \(c\alpha \in I\). This gives rise to a new persistence module, \(\mathbb {V}'=(V'_{c\alpha })_{\alpha \in I}\). The maps \(\phi \) and \(\psi \) can then be interpreted as \(\phi :V'_{\alpha /\sqrt{\varepsilon _1\varepsilon _2}}\rightarrow W_{\alpha }\), or \(\phi :V'_{\alpha }\rightarrow W_{\alpha \sqrt{\varepsilon _1\varepsilon _2}}\), and \(\psi :W_{\alpha }\rightarrow V'_{\alpha \sqrt{\varepsilon _1\varepsilon _2}}\). Then, Diagram (4) can be re-interpreted accordingly, and the resulting diagram still commutes. Therefore, \(\mathbb {V}'\) is a \(3\sqrt{\varepsilon _1\varepsilon _2}\)-approximation of \(\mathbb {W}\), which is an improvement over \(\mathbb {V}\), since \(\sqrt{\varepsilon _1\varepsilon _2}\le \max (\varepsilon _1,\varepsilon _2)\). \(\mathbb {V}\) and \(\mathbb {V}'\) have the same barcode up to a scaling factor.

This scaling trick also works when \(\mathbb {V}\) and \(\mathbb {W}\) are strongly interleaved. If the analogous commutative diagrams hold (where we have skipped the maps for readability), then \(\mathbb {V}\) and \(\mathbb {W}\) are \(\max (\varepsilon _1,\varepsilon _2)\)-approximations of each other. By defining \(\mathbb {V}'\) as before, the corresponding diagrams commute for \(d=c\varepsilon _2=\sqrt{\varepsilon _1\varepsilon _2}\), so we can improve a \(\max (\varepsilon _1,\varepsilon _2)\)-approximation to a \(\sqrt{\varepsilon _1\varepsilon _2}\)-approximation.

We end the section by discussing a basic but important relation between Čech and Rips filtrations. It is well-known that for any \(\alpha \ge 0\), \(\mathcal {C}_\alpha \subseteq \mathcal {R}_{\alpha }\subseteq \mathcal {C}_{\sqrt{2}\alpha }\) (Edelsbrunner and Harer 2010). This gives a strong interleaving between the towers \((\mathcal {C}_\alpha )_{\alpha \ge 0}\) and \((\mathcal {R}_\alpha )_{\alpha \ge 0}\) with \(\varepsilon _1=1\) and \(\varepsilon _2=\sqrt{2}\).
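As a quick numeric illustration of scale balancing (our own helper, not part of the paper's construction): given the two one-sided losses \(\varepsilon _1,\varepsilon _2\), the rescaling constant is \(c=\sqrt{\varepsilon _1/\varepsilon _2}\) and the balanced interleaving factor is \(\sqrt{\varepsilon _1\varepsilon _2}\). For the Čech-Rips interleaving just mentioned (\(\varepsilon _1=1\), \(\varepsilon _2=\sqrt{2}\)), this yields \(2^{1/4}\), which is the statement that follows.

```python
import math

def balance(eps1, eps2):
    """Scale balancing for an (eps1, eps2)-interleaving.

    Rescaling one module by c = sqrt(eps1/eps2) turns an interleaving with
    one-sided losses eps1 and eps2 into a symmetric sqrt(eps1*eps2)-interleaving,
    which is never worse than max(eps1, eps2).
    """
    c = math.sqrt(eps1 / eps2)
    return c, math.sqrt(eps1 * eps2)

# Cech-Rips interleaving: C_a <= R_a <= C_{sqrt(2) a}, i.e. eps1 = 1, eps2 = sqrt(2)
c, factor = balance(1.0, math.sqrt(2.0))
print(c, factor)   # factor == 2**0.25, improving on max(1, sqrt(2)) = sqrt(2)
```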
Applying the scale balancing technique, we get the following:

Lemma The scaled Čech persistence module \((H(\mathcal {C}_{\root 4 \of {2}\alpha }))_{\alpha \ge 0}\) and the Rips persistence module \((H(\mathcal {R}_{\alpha }))_{\alpha \ge 0}\) are \(\root 4 \of {2}\)-approximations of each other.

Shifted integer lattices

In this section, we take a look at simple modifications of the integer lattice. We denote by \(I:=\{\alpha _s:=\lambda 2^s\mid s\in \mathbb {Z}\}\) with \(\lambda >0\), a discrete set of scales. For each scale in I, we define grids which are scaled and translated (shifted) versions of the integer lattice.

Definition 1 (Scaled and shifted grids) For each scale \(\alpha _s\in I\), we define the scaled and shifted grid \(G_{\alpha _s}\) inductively as:

For \(s=0\), \(G_{\alpha _{s}}\) is simply the scaled integer grid \(\lambda \mathbb {Z}^d\), where each basis vector has been scaled by \(\lambda \).

For \(s\ge 0\), we choose an arbitrary point \(O_{\alpha _{s}}\in G_{\alpha _{s}}\) and define
$$\begin{aligned} G_{\alpha _{s+1}} = 2\left( G_{\alpha _{s}}-O_{\alpha _{s}}\right) +O_{\alpha _{s}}+ \frac{\alpha _s}{2}\left( \pm 1,\cdots ,\pm 1\right) , \end{aligned}$$
where the signs of the components of the last vector are chosen independently and uniformly at random (and the choice is independent for each s).

For \(s\le 0\), we define
$$\begin{aligned} G_{\alpha _{s-1}} = \frac{1}{2}\left( G_{\alpha _{s}}-O_{\alpha _{s}}\right) +O_{\alpha _{s}}+ \frac{\alpha _{s-1}}{2}\left( \pm 1,\cdots ,\pm 1\right) , \end{aligned}$$
where the last vector is chosen as in the case of \(s\ge 0\).

Equations (8) and (9) are consistent at \(s=0\). A simple example of the above construction is the sequence of grids with \(G_{\alpha _{s}}:=\alpha _s\mathbb {Z}^d\) for even s, and \(G_{\alpha _{s}}:=\alpha _s\mathbb {Z}^d + \frac{\alpha _{s-1}}{2}(1,\cdots ,1)\) for odd s.

Next, we motivate the shifting of the grids. Let \(\mathrm {Vor}_{G_{\alpha _{s}}}(x)\) denote the Voronoi cell of any point \(x\in G_{\alpha _{s}}\) with respect to the point set \(G_{\alpha _{s}}\). It is clear that the Voronoi cell is a cube of side length \(\alpha _s\) centered at x. The shifting of the grids ensures that each \(x\in G_{\alpha _{s}}\) lies in the Voronoi region of a unique \(y\in G_{\alpha _{s+1}}\). Using an elementary calculation, we show a stronger statement:

Lemma 2 Let \(x\in G_{\alpha _{s}}, y\in G_{\alpha _{s+1}}\) be such that \(x\in \mathrm {Vor}_{G_{\alpha _{s+1}}}(y)\). Then,
$$\begin{aligned} \mathrm {Vor}_{G_{\alpha _{s}}}(x)\subset \mathrm {Vor}_{G_{\alpha _{s+1}}}(y). \end{aligned}$$

Proof Without loss of generality, we can assume that \(\alpha _s=2\) and x is the origin, using an appropriate translation and scaling. Also, we assume for the sake of simplicity that \(G_{\alpha _{s+1}}=2G_{\alpha _{s}} + (1,\cdots ,1)\); the proof is analogous for any other translation vector. In that case, it is clear that \(y=(1,\cdots ,1)\). Since \(G_{\alpha _{s}}=2\mathbb {Z}^d\), the Voronoi region of x is the set \([-1,1]^d\). Since \(G_{\alpha _{s+1}}\) is a translated version of \(4\mathbb {Z}^d\), the Voronoi region of y is the cube \([-1,3]^d\), which covers \([-1,1]^d\). The claim follows. For an example, see Fig. 1. \(\square \)

Fig. 1: \(G_{\alpha _{s}}\) is represented by small disks (yellow), while \(G_{\alpha _{s+1}}\) is represented by larger disks (green). Possible locations of x are indicated with their Voronoi regions. The Voronoi regions of the larger grid contain those of x.
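Definition 1 and Lemma 2 are easy to prototype. The following Python sketch (entirely our own, with hypothetical names) keeps, for every scale, an offset vector such that \(G_{\alpha _s}=\alpha _s\mathbb {Z}^d+o_s\), draws the random shift of Definition 1 with \(O_{\alpha _s}\) fixed to the grid point \(o_s\), snaps a point of \(\mathbb {R}^d\) to its closest grid point, and spot-checks the nesting of Voronoi cubes claimed in Lemma 2. Snapping a grid point of \(G_{\alpha _s}\) to \(G_{\alpha _{s+1}}\) is then exactly the vertex map used in the next section.

```python
import numpy as np

rng = np.random.default_rng(0)
d, max_scale = 3, 6
alpha = [2.0 ** s for s in range(max_scale + 1)]          # alpha_s = 2^s  (lambda = 1)

# offsets o_s with G_{alpha_s} = alpha_s * Z^d + o_s,
# following Definition 1 with O_{alpha_s} chosen as the grid point o_s
offset = [np.zeros(d)]
for s in range(max_scale):
    shift = rng.choice([-1.0, 1.0], size=d) * alpha[s] / 2.0
    offset.append(offset[s] + shift)

def nearest(x, s):
    """Closest grid point of G_{alpha_s} to x (center of the Voronoi cube containing x)."""
    return offset[s] + alpha[s] * np.round((x - offset[s]) / alpha[s])

# Lemma 2: the Voronoi cube of p in G_{alpha_s} lies inside the Voronoi cube
# of q = nearest(p, s+1) in G_{alpha_{s+1}}.  Spot-check with random interior samples.
for s in range(max_scale):
    p = nearest(rng.uniform(-10, 10, size=d), s)
    q = nearest(p, s + 1)
    samples = p + rng.uniform(-0.499, 0.499, size=(100, d)) * alpha[s]
    assert all(np.array_equal(nearest(y, s + 1), q) for y in samples)
print("Voronoi cubes are nested across scales")
```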
Cubical complex of \(\mathbb {Z}^d\)

The integer grid \(\mathbb {Z}^d\) naturally defines a cubical complex, where each element is an axis-aligned, k-dimensional cube with \(0\le k\le d\). To define it formally, let \(\square \) denote the set of all integer translates of faces of the unit cube \([0,1]^d\), considered as a convex polytope in \(\mathbb {R}^d\). We call the elements of \(\square \) faces of \(\mathbb {Z}^d\). Each face has a dimension k; the 0-faces, or vertices, are exactly the points in \(\mathbb {Z}^d\). The facets of a k-face E are the \((k-1)\)-faces contained in E. We call a pair of facets of E opposite facets if they are disjoint. Naturally, these concepts carry over to scaled and shifted versions of \(\mathbb {Z}^d\), so we define \(\square _{\alpha _{s}}\) as the cubical complex defined by \(G_{\alpha _{s}}\).

We define a map \(g_{\alpha _{s}}: \square _{\alpha _{s}}\rightarrow \square _{\alpha _{s+1}}\) as follows: for vertices of \(\square _{\alpha _{s}}\), we assign to \(x\in G_{\alpha _{s}}\) the (unique) vertex \(y\in G_{\alpha _{s+1}}\) such that \(x\in \mathrm {Vor}_{G_{\alpha _{s+1}}}(y)\) (see Lemma 2). For a k-face f of \(\square _{\alpha _{s}}\) with vertices \((p_1,\cdots ,p_{2^k})\) in \(G_{\alpha _{s}}\), we set \(g_{\alpha _{s}}(f)\) to be the convex hull of \(\{g_{\alpha _{s}}(p_1),\cdots ,g_{\alpha _{s}}(p_{2^k})\}\); the next lemma shows that this is a well-defined map. In this paper, we sometimes call \(g_{\alpha _{s}}\) a cubical map, since it is a counterpart of simplicial maps for cubical complexes.

Lemma 3 Let f be a k-face of \(\square _{\alpha _{s}}\) with vertices \(\{ p_1,\cdots ,p_{2^k} \}\subset G_{\alpha _{s}}\). Then:

1. The set of vertices \(\{g_{\alpha _{s}}(p_1),\cdots ,g_{\alpha _{s}}(p_{2^k})\}\) forms a face e of \(\square _{\alpha _{s+1}}\).

2. For every face \(e_1 \subset e\), there is a face \(f_1 \subset f\) such that \(g_{\alpha _s}(f_1)=e_1\).

3. If \(e_1,e_2\) are any two opposite facets of e, then there exists a pair of opposite facets \(f_1,f_2\) of f such that \(g_{\alpha _{s}}(f_1)=e_1\) and \(g_{\alpha _{s}}(f_2)=e_2\).

Proof First claim: We prove the first claim by induction on the dimension of faces of \(G_{\alpha _{s}}\). Base case: for vertices, the claim is trivial using Lemma 2. Induction case: let the claim hold true for all \((k-1)\)-faces of \(G_{\alpha _{s}}\). We show that the claim holds true for all k-faces of \(G_{\alpha _{s}}\). Let f be a k-face of \(G_{\alpha _{s}}\). Let \(f_1\) and \(f_2\) be opposite facets of f, along the m-th coordinate. Let us denote the vertices of \(f_1\) by \((p_1,\cdots ,p_{2^{k-1}})\) and those of \(f_2\) by \((p_{2^{k-1}+1},\cdots ,p_{2^{k}})\) taken in the same order, that is, \(p_j\) and \(p_{2^{k-1}+j}\) differ in only the m-th coordinate for all \(1\le j\le 2^{k-1}\). By definition, all vertices of \(f_1\) share the m-th coordinate, and we denote this coordinate value by z. Then, the m-th coordinate of all vertices of \(f_2\) equals \(z+\alpha _s\). Then \(g_{\alpha _{s}}(p_j)\) and \(g_{\alpha _{s}}(p_{2^{k-1}+j})\) have the same coordinates, except possibly the m-th coordinate. By the induction hypothesis, \(e_1=g_{\alpha _{s}}(f_1)\) and \(e_2=g_{\alpha _{s}}(f_2)\) are two faces of \(G_{\alpha _{s+1}}\). This implies that \(e_2\) is a translate of \(e_1\) along the m-th coordinate. There are two cases: if \(e_1\) and \(e_2\) share the m-th coordinate, then \(e_1=e_2\) and therefore \(g_{\alpha _{s}}(f)=e_1=e_2=e\), so the claim follows.
On the other hand, if \(e_1\) and \(e_2\) do not share the m-th coordinate, then they are two faces of \(\square _{\alpha _{s+1}}\) which differ in only one coordinate by \(\alpha _{s+1}\). So they are opposite facets of a face e of \(G_{\alpha _{s+1}}\). Using induction, the claim follows.

Second claim: We prove the claim by induction over the dimension of \(e_1\). Base case: \(e_1\) is a vertex. The vertices of f in the Voronoi region of \(e_1\) form \(f_1\). Since f is an axis-parallel face and the Voronoi region is also axis-parallel, it is immediate that \(f_1\) is a face of f. Assume that the claim is true up to dimension i. For \(e_1\) a face of dimension \(i+1\), consider opposite facets \(e_a\) and \(e_b\) of \(e_1\). By the induction claim, there exist faces \(f_a,f_b\subset f\) that satisfy \(g_{\alpha _s}(f_a)=e_a, g_{\alpha _s}(f_b)=e_b\). \(f_a\) and \(f_b\) are disjoint since otherwise \(g_{\alpha _s}(f_a\cap f_b)\) would be common to both \(e_a\) and \(e_b\), a contradiction. If \(e_a\) is a translate of \(e_b\) along the m-th coordinate, then \(f_a\) is also a translate of \(f_b\) along the same coordinate. Therefore \(f_a\) and \(f_b\) are opposite faces of a face \(f_1\) and \(g_{\alpha _s}(f_1)=e_1\).

Third claim: Without loss of generality, assume that \(x_1\) is the direction in which \(e_2\) is a translate of \(e_1\). Using the second claim, let h denote the maximal face of f such that \(g_{\alpha _{s}}(h)=e_1\). Clearly, \(h\ne f\), since that would imply \(g_{\alpha _{s}}(f)=e_1=e\), which is a contradiction. Suppose h has dimension less than \(k-1\). Let \(h'\) be the facet of f that contains h and has the same \(x_1\)-coordinate for all vertices. Then \(g_{\alpha _s}(h')=e_1\), which contradicts the maximality of h. Therefore, the only possibility is that h is a facet \(f_1\) of f such that \(g_{\alpha _{s}}(f_1)=e_1\). Let \(f_2\) be the opposite facet of \(f_1\). From the proof of the first claim, it is easy to see that \(g_{\alpha _{s}}(f_2)=e_2\). The claim follows. \(\square \)

Barycentric subdivision

We discuss a special triangulation of \(\square _{\alpha _{s}}\). A flag in \(\square _{\alpha _{s}}\) is a set of faces \(\{f_0,\cdots ,f_k\}\) of \(\square _{\alpha _{s}}\) such that
$$\begin{aligned} f_0\subseteq \cdots \subseteq f_k. \end{aligned}$$
The barycentric subdivision of \(\square _{\alpha _{s}}\), denoted by \(sd_{\alpha _{s}}\), is the (infinite) simplicial complex whose simplices are the flags of \(\square _{\alpha _{s}}\) (Munkres 1984). In particular, the 0-simplices of \(sd_{\alpha _{s}}\) are the faces of \(\square _{\alpha _{s}}\). An equivalent geometric description of \(sd_{\alpha _{s}}\) can be obtained by defining the 0-simplices as the barycenters of the faces in \(\square _{\alpha _{s}}\), and introducing a k-simplex between \((k+1)\) barycenters if the corresponding faces form a flag. For a simple example, see Figs. 2 and 3. It is easy to see that \(sd_{\alpha _{s}}\) is a flag complex. Given a face f in \(\square _{\alpha _{s}}\), we write sd(f) for the subcomplex of \(sd_{\alpha _{s}}\) consisting of all flags that are formed only by faces contained in f.

Fig. 2: A portion of the grid in two dimensions. The dots are the grid points which form the 0-faces of the cubical complex.

Fig. 3: The barycentric subdivision of the grid. The tiny squares are barycenters of the 1-faces and 2-faces of the cubical complex.

Approximation scheme with simplicial complexes

We define our approximation complex for a finite set of points in \(\mathbb {R}^d\).
Recall from Definition 1 that we can define a collection of scaled and shifted integer grids \(G_{\alpha _s}\) over a collection of scales \(I:=\{\alpha _s= 2^s \mid s\in \mathbb {Z}\}\) in \(\mathbb {R}^d\). To make the exposition simple, we define our complex in a slightly generalized form.

Barycentric spans

Fix some \(s\in \mathbb {Z}\) and let V denote any non-empty subset of \(G_{\alpha _{s}}\).

Vertex span. We say that a face \(f\in \square _{\alpha _{s}}\) is spanned by V, if the set of vertices \(V(f):=f\cap V\) is non-empty and not contained in any facet of f. Trivially, the vertices of \(\square _{\alpha _{s}}\) which are spanned by V are precisely the points in V. Any face of \(\square _{\alpha _{s}}\) which is not a vertex must contain at least two vertices of V in order to be spanned. We point out that the set of spanned faces of \(\square _{\alpha _{s}}\) is not closed under taking sub-faces. For instance, if V consists of two antipodal points of a d-cube, the only faces spanned by V are the d-cube and the two vertices; all other faces of the d-cube contain at most one vertex and hence are not spanned.

It is simple to test whether any given k-face \(f\in \square _{\alpha _{s}}\) is spanned by the set of points V(f). Let \(T\subseteq \{1,\cdots ,d\}\) be the set of coordinate directions in which the points of V(f) do not all agree. Since \(V(f)\subseteq f\), every direction in T is a direction in which f extends; V(f) spans f if and only if T consists of exactly the k coordinate directions in which f extends. T can be computed in \(|V(f)|O(d)=O(2^{k}d)\) time by a linear scan of the coordinates. The coordinate directions spanned by f can also be found and compared with T within the same time bound.

Barycentric span. The barycentric span of V is the subcomplex of \(sd_{\alpha _{s}}\) obtained by taking the union of the complete barycentric subdivisions of the maximal faces of \(\square _{\alpha _{s}}\) that are spanned by V. The barycentric span of V is indeed a simplicial complex by definition. Moreover, the barycentric span is a flag complex, and for any face \(f\in \square _{\alpha _s}\), the barycentric span of V(f) is either empty or acyclic. Furthermore, for any non-empty subset \(W\subseteq V\), the faces of \(\square _{\alpha _{s}}\) that are spanned by W are also spanned by V. Consequently, the barycentric span of W is a subcomplex of the barycentric span of V.

Approximation complex

We denote by \(P\subset \mathbb {R}^d\) a finite set of points. We define two maps:

\(a_{\alpha _{s}}:P\rightarrow G_{\alpha _{s}}\): for each point \(p\in P\), we let \(a_{\alpha _{s}}(p)\) denote the grid point in \(G_{\alpha _{s}}\) that is closest to p, that is, \(p\in \mathrm {Vor}_{G_{\alpha _{s}}}(a_{\alpha _{s}}(p))\). We assume for simplicity that this closest point is unique, which can be ensured using well-known methods (Edelsbrunner and Mücke 1990). We define the active vertices of \(G_{\alpha _s}\) as
$$\begin{aligned} V_{\alpha _{s}}:=\mathrm {im}\left( a_{\alpha _{s}}\right) =a_{\alpha _{s}}(P)\subset G_{\alpha _{s}}, \end{aligned}$$
that is, the set of grid points that have at least one point of P in their Voronoi cells.

\(b_{\alpha _{s}}:V_{\alpha _{s}}\rightarrow P\): the map \(b_{\alpha _{s}}\) takes an active vertex of \(G_{\alpha _{s}}\) to its closest point in P. By taking an arbitrary total order on P to resolve multiple assignments, we ensure that this assignment is unique. Naturally, \(b_{\alpha _{s}}(v)\) is a point inside \(\mathrm {Vor}_{G_{\alpha _{s}}}(v)\) for any \(v\in V_{\alpha _{s}}\).
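The two ingredients just defined, the active vertices \(V_{\alpha _s}=a_{\alpha _s}(P)\) and the span test, are straightforward to prototype. The sketch below is our own illustration (hypothetical names, grid taken as \(\alpha \mathbb {Z}^d\) for simplicity, i.e. the unshifted scale): the map \(a_{\alpha }\) is nearest-grid-point snapping, and the span test uses the equivalent coordinate criterion described above.

```python
from itertools import product
import numpy as np

def active_vertices(points, alpha):
    """Map a_alpha: snap each input point to its closest vertex of the grid alpha*Z^d.

    Returns the set of active vertices V_alpha (as integer coordinate tuples)
    and the assignment point -> active vertex.
    """
    assignment = {tuple(p): tuple(np.round(np.asarray(p) / alpha).astype(int))
                  for p in points}
    return set(assignment.values()), assignment

def is_spanned(face_vertices, active):
    """Span test for a face of the cubical complex, given as its set of grid vertices.

    The face is spanned iff V(f) is non-empty and, in every direction in which
    the face extends, the active vertices of the face do not all share the same
    coordinate (equivalently, V(f) lies in no facet of f).
    """
    verts = np.array(sorted(face_vertices))
    vf = np.array([v for v in face_vertices if v in active])
    if len(vf) == 0:
        return False
    free = [i for i in range(verts.shape[1]) if len(set(verts[:, i])) > 1]
    return all(len(set(vf[:, i])) > 1 for i in free)

if __name__ == "__main__":
    P = [(0.1, 0.2), (0.9, 1.1), (1.2, 0.1)]
    V, a = active_vertices(P, alpha=1.0)          # active vertices of Z^2
    unit_square = set(product((0, 1), repeat=2))  # a 2-face of the cubical complex
    print(V, is_spanned(unit_square, V))
```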
It follows that the map \(b_{\alpha _{s}}\) is a section of \(a_{\alpha _{s}}\), that is, \(a_{\alpha _{s}}\circ b_{\alpha _{s}}:V_{\alpha _{s}} \rightarrow V_{\alpha _{s}}\) is the identity on \(V_{\alpha _s}\). However, this is not true for \(b_{\alpha _{s}}\circ a_{\alpha _{s}}\) in general. Recall that the map \(g_{\alpha _{s}}:\square _{\alpha _{s}}\rightarrow \square _{\alpha _{s+1}}\) takes grid points of \(G_{\alpha _{s}}\) to grid points of \(G_{\alpha _{s+1}}\). Using Lemma 2, it follows at once that:

Lemma 4 For all \(\alpha _{s}\in I\) and each \(x\in V_{\alpha _{s}}\), \(g_{\alpha _{s}}(x)=(a_{\alpha _{s+1}}\circ b_{\alpha _{s}})(x)\).

Recall that \(\mathcal {R}^{\infty }_\alpha \) denotes the Rips complex at scale \(2\alpha \) in the \(L_\infty \)-norm. The next statement is a direct application of the triangle inequality; let \(diam_\infty ()\) denote the diameter in the \(L_\infty \)-norm.

Lemma 5 Let \(Q\subseteq P\) be a non-empty subset such that \(diam_\infty (Q)\le \alpha _s\). Then, the set of grid points \(a_{\alpha _{s}}(Q)\) is contained in a face of \(\square _{\alpha _{s}}\). Equivalently, for any simplex \(\sigma =(p_0,\cdots ,p_k)\in \mathcal {R}^{\infty }_{\alpha _s/2}\) on P, the set of active vertices \(\{a_{\alpha _{s}}(p_0),\cdots ,a_{\alpha _{s}}(p_k)\}\) is contained in a face of \(\square _{\alpha _{s}}\).

Proof We prove the claim by contradiction. Suppose that the set of active vertices \(a_{\alpha _{s}}(Q)\) is not contained in a face of \(\square _{\alpha _{s}}\). Then, there exists at least one pair of points \(\{x,y\}\subseteq Q\) such that \(a_{\alpha _{s}}(x)\), \(a_{\alpha _{s}}(y)\) are not in a common face of \(\square _{\alpha _{s}}\). By the definition of the grid \(G_{\alpha _{s}}\), the grid points \(a_{\alpha _{s}}(x)\), \(a_{\alpha _{s}}(y)\) therefore have \(L_\infty \)-distance at least \(2\alpha _s\). Moreover, x has \(L_\infty \)-distance less than \(\alpha _s/2\) from \(a_{\alpha _{s}}(x)\), and the same is true for y and \(a_{\alpha _{s}}(y)\). By the triangle inequality, the \(L_\infty \)-distance of x and y is more than \(\alpha _{s}\), which is a contradiction to the fact that \(diam_\infty (Q)\le \alpha _{s}\). \(\square \)

We now define our approximation tower. For any scale \(\alpha _{s}\), we define \(\mathcal {X}_{\alpha _{s}}\) as the barycentric span of the active vertices \(V_{\alpha _{s}}\subset G_{\alpha _{s}}\). See Figs. 4, 5 and 6 for a simple illustration.

Fig. 4: A two-dimensional grid, shown along with its cubical complex. The green points (small dots) denote the points in P and the red vertices (encircled) are the active vertices (color figure online).

Fig. 5: The active faces are shaded. The closure of the active faces forms the cubical complex.

Fig. 6: The generated approximation complex, whose vertices consist of those of the cubical complex and the blue vertices (small dots), which are the barycenters of active and secondary faces.

To simplify notation, we denote the faces of \(\square _{\alpha _{s}}\) spanned by \(V_{\alpha _{s}}\) as active faces, and the faces of active faces that are not spanned by \(V_{\alpha _{s}}\) as secondary faces. To complete the description of the approximation tower, we need to define simplicial maps of the form \(\tilde{g}_{\alpha _{s}}:\mathcal {X}_{\alpha _{s}}\rightarrow \mathcal {X}_{\alpha _{s+1}}\), which connect the simplicial complexes at consecutive scales. We show that such maps are induced by \(g_{\alpha _{s}}\).

Lemma 6 Let f be any active face of \(\square _{\alpha _{s}}\).
Then, \(g_{\alpha _{s}}(f)\) is an active face of \(\square _{\alpha _{s+1}}\).

Proof Using Lemma 3, \(e:=g_{\alpha _{s}}(f)\) is a face of \(\square _{\alpha _{s+1}}\). If e is a vertex, then it is active, because f contains at least one active vertex v, and \(g_{\alpha _{s}}(v)=e\) in this case. If e is not a vertex, we assume for a contradiction that it is not active. Then, it contains a facet \(e_1\) that contains all active vertices in e. Let \(e_2\) denote the opposite facet of \(e_1\) in e. By Lemma 3, f contains opposite facets \(f_1\), \(f_2\) such that \(g_{\alpha _{s}}(f_1)=e_1\) and \(g_{\alpha _{s}}(f_2)=e_2\). Since f is active, both \(f_1\) and \(f_2\) contain active vertices; in particular, \(f_2\) contains an active vertex v. But then the active vertex \(g_{\alpha _{s}}(v)\) must lie in \(e_2\), contradicting the fact that \(e_1\) contains all active vertices of e. \(\square \)

As a result, g is well defined for each face \(e\in \square _{\alpha _{s}}\), since there exists some active face \(e'\in \square _{\alpha _{s}}\) with \(e\subseteq e'\), and \(g(e)\subseteq g(e')\). By definition, a simplex \(\sigma \in \mathcal {X}_{\alpha _s}\) is a flag \((f_0\subseteq \cdots \subseteq f_k)\) of faces in \(\square _{\alpha _{s}}\). We set
$$\begin{aligned} \tilde{g}_{\alpha _{s}}(\sigma ):=\left( g_{\alpha _{s}}\left( f_0\right) ,\cdots ,g_{\alpha _{s}}\left( f_k\right) \right) , \end{aligned}$$
where \((g_{\alpha _{s}}(f_0)\subseteq \cdots \subseteq g_{\alpha _{s}}(f_k))\) is a flag of faces in \(\square _{\alpha _{s+1}}\) by Lemma 6, and hence is a simplex in \(\mathcal {X}_{\alpha _{s+1}}\). It follows that \(\tilde{g}_{\alpha _s}:\mathcal {X}_{\alpha _{s}}\rightarrow \mathcal {X}_{\alpha _{s+1}}\) is a simplicial map. This completes the description of the simplicial tower
$$\begin{aligned} \left( \mathcal {X}_{\alpha _{s}}\right) _{s\in \mathbb {Z}}. \end{aligned}$$

Interleaving with the Rips module

First, we show that our tower is a constant-factor approximation of the \(L_\infty \)-Rips filtration of P. We then show the relation between our approximation tower and the Euclidean Rips filtration of P. We start by defining two acyclic carriers. To simplify notation, we set \(\lambda =1\) and abbreviate \(\alpha :=\alpha _s=2^s\).

\(C_1^\alpha :\mathcal {R}^{\infty }_{\alpha /2} \rightarrow \mathcal {X}_{\alpha }\): for any simplex \(\sigma =(p_0,\cdots ,p_k)\) in \(\mathcal {R}^{\infty }_{\alpha /2}\), we set \(C_1^\alpha (\sigma )\) as the barycentric span of \(U:=\{a_{\alpha }(p_0),\cdots ,a_{\alpha }(p_k)\}\), which is a subcomplex of \(\mathcal {X}_{\alpha }\). Using Lemma 5, U lies in a maximal active face f of \(\square _\alpha \), so that \(C_1^\alpha (\sigma )\) is acyclic. The barycentric span of any subset of U is a subcomplex of the barycentric span of U, so \(C_1^\alpha \) is a carrier. Therefore, \(C_1^\alpha \) is an acyclic carrier.

\(C_2^\alpha :\mathcal {X}_{\alpha }\rightarrow \mathcal {R}^{\infty }_{\alpha }\): let \(\sigma \) be any flag of \(\mathcal {X}_{\alpha }\) and let E be the smallest active face of \(\square _\alpha \) that contains \(\sigma \) (we break ties by making use of an arbitrary global order \(\succ \) on P). We collect all the points of P that map to vertices of E under the map \(a_{\alpha }\) and set \(C_2^{\alpha }(\sigma )\) as the simplex on this set of points.
By an application of the triangle inequality, we see that the \(L_\infty \)-diameter of \(C_2^{\alpha }(\sigma )\) is at most \(2\alpha \), so \(C_2^{\alpha }(\sigma )\in \mathcal {R}^{\infty }_{\alpha }\) and is acyclic. It is also clear that \(C_2^{\alpha }(\tau )\subseteq C_2^{\alpha }(\sigma )\) for each \(\tau \subseteq \sigma \), so \(C_2^{\alpha }\) is an acyclic carrier.

Using the acyclic carrier theorem (Theorem 1), there exist augmentation-preserving chain maps
$$\begin{aligned} c_1^\alpha :\mathcal {C}_*\left( \mathcal {R}^{\infty }_{\alpha /2}\right) \rightarrow \mathcal {C}_*\left( \mathcal {X}_{\alpha }\right) \quad \text {and}\quad c_2^\alpha :\mathcal {C}_*\left( \mathcal {X}_\alpha \right) \rightarrow \mathcal {C}_*\left( \mathcal {R}^{\infty }_{\alpha }\right) , \end{aligned}$$
between the chain complexes, which are carried by \(C_1^\alpha \) and \(C_2^\alpha \) respectively, for each \(\alpha \in I\). We obtain a diagram, Diagram (10), of augmentation-preserving chain maps, where inc corresponds to the chain map for inclusion maps, and \(\tilde{g}\) also denotes the chain map induced by the corresponding simplicial map \(\tilde{g}\) (we removed indices of the maps for readability). The chain complexes give rise to a diagram, Diagram (11), of the corresponding homology groups, connected by the induced linear maps \(c_1^*,c_2^*,inc^*,\tilde{g}^*\).

Lemma 7 For all \(\alpha \in I\), the linear maps in the lower triangle of Diagram (11) commute, that is,
$$\begin{aligned} \tilde{g}^*=c_1^*\circ c_2^*. \end{aligned}$$

Proof We look at the corresponding triangle in Diagram (10). We show that the (augmentation-preserving) chain maps \(\tilde{g}\) and \(c_1\circ c_2\) are both carried by an acyclic carrier \(D:\mathcal {X}_\alpha \rightarrow \mathcal {X}_{2\alpha }\). The claim then follows from the acyclic carrier theorem. Let \(\sigma \in \mathcal {X}_{\alpha }\) be any flag and let \(E\in \square _{\alpha }\) denote the minimal active face containing \(\sigma \). Let \(\{q_1,\dots ,q_k \}\) be the active vertices of E. Let \(\{ p_1,\dots , p_m \}\) be the set of points of P that map to \(\{q_1,\dots ,q_k \}\) under the map \(a_{\alpha }\). Since the \(L_\infty \)-diameter of \(\{ p_1,\dots , p_m \}\) is at most \(2\alpha \), using Lemma 5 we see that \(\{ a_{2\alpha }(p_1),\dots ,a_{2\alpha }(p_m) \}\) is contained in a face of \(\square _{2\alpha }\). We set \(D(\sigma )\) as the barycentric span of \(\{ a_{2\alpha }(p_1),\dots ,a_{2\alpha }(p_m) \}\). It follows that D is an acyclic carrier. Further, \(\{ a_{2\alpha }(p_1),\dots ,a_{2\alpha }(p_m) \}= \{ g_{\alpha }(q_1),\dots ,g_{\alpha }(q_k) \}\) by Lemma 2, so \(D(\sigma )\) is the barycentric subdivision of \(g_{\alpha }(E)\). As a result, \(D=C_1\circ C_2\), so D carries \(c_1\circ c_2\). We show that D also carries the map \(\tilde{g}\). By definition, for each face \(e\subseteq E\), \(g(e)\subseteq g(E)\) and \(\tilde{g}(sd(e))\subseteq \tilde{g}(sd(E))\). This means that \(\tilde{g}(\sigma )\) is contained in g(E). This shows that \(\tilde{g}(\sigma )\in C_1\circ C_2 (\sigma )\), implying that \(\tilde{g}\) is carried by \(C_1\circ C_2\), as required. \(\square \)

Lemma 8 For all \(\alpha \in I\), the linear maps in the upper triangle of Diagram (11) commute, that is,
$$\begin{aligned} inc^*=c_2^*\circ c_1^*. \end{aligned}$$

Proof The proof technique is analogous to the proof of Lemma 7. We define an acyclic carrier \(D:\mathcal {R}^{\infty }_{\alpha }\rightarrow \mathcal {R}^{\infty }_{2\alpha }\) which carries inc and \(c_2\circ c_1\), both of which are augmentation-preserving.
Let \(\sigma =(p_0,\cdots ,p_k)\in \mathcal {R}^{\infty }_{\alpha }\) be any simplex. The set of active vertices
$$\begin{aligned} U:=\left\{ a_{2\alpha }\left( p_0\right) ,\cdots ,a_{2\alpha }\left( p_k\right) \right\} \subset G_{2\alpha } \end{aligned}$$
lies in a face f of \(G_{2\alpha }\), using Lemma 5. We can assume that f is active, as otherwise, we argue about a facet of f that contains U. We set \(D(\sigma )\) as the simplex on the subset of points in P whose closest grid point in \(G_{2\alpha }\) is any vertex of f. Using the triangle inequality, we see that \(D(\sigma )\in \mathcal {R}^{\infty }_{2\alpha }\), so D is an acyclic carrier. The vertices of \(\sigma \) are a subset of \(D(\sigma )\), so D carries the map inc. Showing that D carries \(c_2\circ c_1\) requires further explanation. Let \(\delta \) be any simplex in \(\mathcal {X}_{2\alpha }\) for which the chain \(c_1(\sigma )\) takes a non-zero value. Since \(c_1(\sigma )\) is carried by \(C_1(\sigma )\), we have that \(\delta \in C_1(\sigma )\), which is the barycentric span of U. Furthermore, for any \(\tau \in C_1(\sigma )\), \(C_2(\tau )\) is a simplex on a subset of the points \(\{p \in P \mid a_{2\alpha }(p)\in V(f)\}\). It follows that \(C_2(\tau )\subseteq D(\sigma )\). In particular, since \(c_2\) is carried by \(C_2\), \(c_2(c_1(\sigma ))\subseteq D(\sigma )\) as well. \(\square \)

Using Lemmas 7 and 8, we see that the two persistence modules \(\left( H(\mathcal {X}_{{\alpha _{s}}})\right) _{s\in \mathbb {Z}}\) and \(\left( H(\mathcal {R}^{\infty }_\alpha )\right) _{\alpha \ge 0}\) are weakly 2-interleaved. With elementary modifications in the definition of \(\mathcal {X}\) and \(\tilde{g}\), we can get a tower of the form \((\mathcal {X}_\alpha )_{\alpha \ge 0}\). Furthermore, with minor changes in the interleaving arguments, we show that the corresponding persistence module is strongly 4-interleaved with the \(L_\infty \)-Rips module. Using scale balancing, this result improves to a strong 2-interleaving (see Lemma 16). Since the techniques used in the proof are very similar to the concepts used in this section, for the sake of brevity we defer all further details to Appendix A. Using the strong stability theorem for persistence modules and taking scale balancing into account, we immediately get:

Theorem 2 The scaled persistence module \(\big (H(\mathcal {X}_{2\alpha })\big )_{\alpha \ge 0}\) and the \(L_\infty \)-Rips persistence module \(\big (H(\mathcal {R}^{\infty }_\alpha )\big )_{\alpha \ge 0}\) are 2-approximations of each other.

For any pair of points \(p,p'\in \mathbb {R}^d\), it holds that
$$\begin{aligned} \Vert p-p'\Vert _\infty \le \Vert p-p'\Vert _2 \le \sqrt{d}\,\Vert p-p'\Vert _\infty . \end{aligned}$$
This in turn shows that the \(L_2\)- and the \(L_\infty \)-Rips filtrations are strongly \(\sqrt{d}\)-interleaved. Using the scale balancing technique for strongly interleaved persistence modules, we get:

Lemma 9 The scaled persistence module \((H(\mathcal {R}_{\alpha /d^{0.25}}))_{\alpha \ge 0}\) and \((H(\mathcal {R}^{\infty }_\alpha ))_{\alpha \ge 0}\) are strongly \(d^{0.25}\)-interleaved.

Using Theorem 2, Lemma 9 and the fact that interleavings satisfy the triangle inequality (Bubenik and Scott 2014, Theorem 3.3), we see that the module \((H(\mathcal {X}_{2\alpha }))_{\alpha \ge 0}\) is strongly \(2d^{0.25}\)-interleaved with the scaled Rips persistence module \((H(\mathcal {R}_{\alpha /d^{0.25}}))_{\alpha \ge 0}\).
We can remove the scaling in the Rips filtration simply by multiplying the scales on both sides with \(d^{0.25}\) and obtain our final approximation result:

The module \(\big (H(\mathcal {X}_{2\root 4 \of {d}\alpha })\big )_{\alpha \ge 0}\) and the Euclidean Rips persistence module \(\big (H(\mathcal {R}_{\alpha })\big )_{\alpha \ge 0}\) are \(2d^{0.25}\)-approximations of each other.

In this section, we discuss the computational aspects of constructing the approximation tower. In Sect. 5.1 we discuss the size complexity of the tower. An algorithm to compute the tower efficiently is presented in Sect. 5.2.

Range of relevant scales. Set \(n:=|P|\) and let CP(P) denote the closest pair distance of P. At scale \(\alpha _0:=\frac{CP(P)}{3d}\) and lower, no two active vertices lie in the same face of the grid, so the approximation complex consists of n isolated 0-simplices. At scale \(\alpha _m:=diam(P)\) and higher, all points of P map to active vertices of a common face (by Lemma 5), so the generated complex is acyclic. We inspect the range of scales \([\alpha _0,\alpha _m]\) to construct the tower, since the barcode is explicitly known for scales outside this range. For this, we set \(\lambda =\alpha _0\) in the definition of the scales. The total number of scales is
$$\begin{aligned} \lceil \log _2 \alpha _m/\alpha _0\rceil =\left\lceil \log _2 \frac{diam(P)3d}{CP(P)} \right\rceil =\lceil \log _2\Delta +\log _2 3d\rceil =O(\log \Delta +\log d), \end{aligned}$$
where \(\Delta =\frac{diam(P)}{CP(P)}\) is the spread of the point set.

Size of the tower

The size of a tower is the number of simplices that do not have a preimage, that is, the number of simplex inclusions in the tower. We start by counting the number of active faces used in the tower.

Lemma 10 The number of active faces without pre-image in the tower is at most \(n3^d\).

Proof At scale \(\alpha _0\), there are n inclusions of 0-simplices in the tower, due to n active vertices. Using Lemma 2, g is surjective on the active vertices of \(\square \) (for any scale). Hence, no further active vertices are added to the tower. It remains to count the maximal active faces of dimension \(\ge 1\) without preimage. We will use a charging argument, charging the existence of such an active face to one of the points in P. We show that each point of P is charged at most \(3^{d}-1\) times, which proves the claim. For that, we first fix an arbitrary total order \(\prec \) on P. Each active vertex on any scale has a non-empty subset of P in its Voronoi region; we call the maximal such point with respect to the order \(\prec \) the representative of the active vertex. For each active face f of dimension at least one, we define the signature of f as the set of representatives of the active vertices of f. If for any set of active vertices \(u_1,\dots ,u_k\) we have that \(v=g(u_1)=\dots =g(u_k)\), then the representative of v is one of the representatives of \(u_1,\dots ,u_k\), using Lemma 2. Therefore, the signatures of the active faces that are images of f under g are subsets of the signature of f. This implies that each maximal active face that is included has a unique maximal signature. We bound the number of maximal signatures to get a bound on the number of maximal active face inclusions. We charge the addition of each maximal signature to the lowest ordered point according to \(\prec \). Each signature contains representatives of active vertices from a face of \(\square _\alpha \).
Since each active vertex v has \(3^d-1\) neighboring vertices in the grid that lie in a common face, the representative p of v can be charged at most \(3^d-1\) times. There is a canonical isomorphism between the neighboring vertices of v at each scale. Then, for p to be charged more times, the image of v and some neighboring vertex u must be identical under g at some scale. But then, the representative of \(g(v)=g(u)\) is not p anymore, since p was the lowest ranked point in its neighborhood, hence the representative changes when the Voronoi regions are combined. So, p could not have been charged in such a case. Therefore, each point \(p\in P\) is indeed charged at most \(3^d-1\) times. There are n active faces of dimension 0 and at most \(n(3^d-1)\) active faces of higher dimension. The upper bound is \(n+n(3^d-1)=n3^{d}\), as claimed. \(\square \)

The k-skeleton of the tower has size at most
$$\begin{aligned} n6^{d-1}(2k+4)(k+3)! \left\{ \begin{array}{c}d\\ k+2\end{array}\right\} =n2^{O(d\log k + d)}, \end{aligned}$$
where \( \left\{ \begin{array}{c}a\\ b\end{array}\right\} \) denotes the Stirling number of the second kind.

Proof Each k-simplex that is included in the tower at any given scale \(\alpha \) is a part of the barycentric subdivision of an active face that is also included at \(\alpha \). Therefore, we can account for the inclusion of this simplex by including the barycentric subdivision of its parent active face. From Lemma 10, at most \(n3^d\) active faces are included in the tower over all dimensions. We bound the number of k-simplices in the barycentric subdivision of a d-cube. Multiplying with \(n3^{d}\) gives the required bound.

Let c be any d-cube of \(\square _\alpha \). To count the number of flags of length \((m+1)\) contained in c that start with some vertex and end with c, we use similar ideas as in Edelsbrunner and Kerber (2012): first, we fix any vertex v of c and count the flags of the form \(v\subseteq \cdots \subseteq c\). Every \(\ell \)-face in c incident to v corresponds to a subset of \(\ell \) coordinate indices, in the sense that the coordinates not chosen are fixed to the coordinates of v for the face. With this correspondence, a flag from v to c of length \((m+1)\) corresponds to an ordered m-partition of \(\{1,\cdots ,d\}\). The number of such partitions is known to be m! times the quantity \(\left\{ \begin{array}{c}d\\ m\end{array}\right\} \), the Stirling number of the second kind (Rennie and Dobson 1969), and is upper bounded by \(2^{O(d\log m)}\). Since c has \(2^d\) vertices, the total number of flags \(v\subseteq \cdots \subseteq c\) of length \((m+1)\) with any vertex v is hence \(2^d m! \left\{ \begin{array}{c}d\\ m\end{array}\right\} \).

We now count the number of flags of length \(k+1\). Each such flag is a \((k+1)\)-subset of some flag of length \(k+3\) that starts with a vertex and ends with c. There are \(2^d (k+2)! \left\{ \begin{array}{c}d\\ k+2\end{array}\right\} \) such flags and each of them has \(\left( {\begin{array}{c}k+3\\ k+1\end{array}}\right) =(k+3)(k+2)/2\) subsets of size \((k+1)\). The number of \((k+1)\)-flags is upper bounded by \(2^d (k+2)! \left\{ \begin{array}{c}d\\ k+2\end{array}\right\} \frac{(k+3)(k+2)}{2} =2^{d-1} (k+2)(k+3)! \left\{ \begin{array}{c}d\\ k+2\end{array}\right\} \). The k-skeleton has size at most
$$\begin{aligned} n3^{d}2^{d-1} (k+2)(k+3)! \left\{ \begin{array}{c}d\\ k+2\end{array}\right\} =n6^{d-1}(2k+4)(k+3)! \left\{ \begin{array}{c}d\\ k+2\end{array}\right\} . \end{aligned}$$
\(\square \)
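For concreteness, the bound in the statement above can be evaluated numerically. The helper below (our own, not from the paper) computes Stirling numbers of the second kind via the standard recurrence \(S(n,k)=kS(n-1,k)+S(n-1,k-1)\) and plugs them into \(n6^{d-1}(2k+4)(k+3)!\left\{ \begin{array}{c}d\\ k+2\end{array}\right\} \).

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: partitions of an n-set into k non-empty blocks."""
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def skeleton_size_bound(n, d, k):
    """Evaluate the stated upper bound n * 6^(d-1) * (2k+4) * (k+3)! * S(d, k+2)."""
    return n * 6 ** (d - 1) * (2 * k + 4) * factorial(k + 3) * stirling2(d, k + 2)

# e.g. 10^6 points in dimension 6, 2-skeleton (enough for homology up to dimension 1)
print(skeleton_size_bound(n=10**6, d=6, k=2))
```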
\end{aligned}$$ \(\square \) Computing the tower From Sect. 3, we know that \(G_{\alpha _{s+1}}\) is built from \(G_{\alpha _{s}}\) by making use of an arbitrary translation vector \((\pm 1,\cdots ,\pm 1)\in \mathbb {Z}^d\). In our algorithm, we pick the components of this translation vector uniformly at random from \(\{+1,-1\}\), and independently for each scale. The choice behind choosing this vector randomly becomes more clear in the next lemma. From the definition, the cubical maps \(g_{\alpha _{s}}:\square _{\alpha _{s}}\rightarrow \square _{\alpha _{s+1}}\) can be composed for multiple scales. For a fixed \({\alpha _{s}}\), we denote by \(g^{(j)}:\square _{\alpha _{s}}\rightarrow \square _{\alpha _{s+j}}\) the j-fold composition of g, that is, $$\begin{aligned} g^{(j)}=g_{\alpha _{s+j-1}}\circ g_{\alpha _{s+j-2}}\circ \cdots \circ g_{\alpha _{s+1}}\circ g_{\alpha _{s}}, \end{aligned}$$ for \(j\ge 1\). For any k-face \(f\in \square _{\alpha _{s}}\) with \(1\le k\le d\), let Y denote the minimal integer j such that \(g^{(j)}(f)\) is a vertex, for a given choice of the randomly chosen translation vectors. Then, the expected value of Y satisfies $$\begin{aligned} \mathbb {E}[Y]\le 3\log k, \end{aligned}$$ which implies that no face of \(\square _{\alpha _{s}}\) survives more than \(3\log d\) scales in expectation. Without loss of generality, assume that the grid under consideration is \(\mathbb {Z}^d\) and f is the k-face spanned by the vertices \(\{\underbrace{\{0,1\},\cdots ,\{0,1\}}_{k},0,\cdots ,0\}\), so that the origin is a vertex of f. The proof for the general case is analogous. Let \(y_1\in \{-1,1\}\) denote the randomly chosen first coordinate of the translation vector, so that the corresponding shift is one of \(\{ -1/2,1/2 \}\). If \(y_1=1\), then the grid \(G'\) on the next scale has some grid point with \(x_1\)-coordinate 1/2. Clearly, the closest grid point in \(G'\) to the origin is of the form \((+1/2,\pm 1/2,\cdots ,\pm 1/2)\), and thus, this point is also closest to \((1,0,0,\cdots ,0)\). The same is true for any point \((0,*,\cdots ,*)\) and its corresponding point \((1,*,\cdots ,*)\) on the opposite facet of f. Hence, for \(y_1=1\), g(f) is a face where all points have the same \(x_1\)-coordinate. On the other hand, if \(y_1=-1\), the origin is mapped to some point which has the form \((-1/2,\pm 1/2,\cdots ,\pm 1/2)\) and \((1,0,\cdots ,0)\) is mapped to \((3/2,\pm 1/2,\cdots ,\pm 1/2)\), as one can directly verify. Hence, in this case, in g(f), points do not all have the same \(x_1\) coordinate. We say that the \(x_1\)-coordinate collapses in the first case and survives in the second. Both events occur with the same probability 1/2. Because the shift is chosen uniformly at random for each scale, the probability that \(x_1\) did not collapse after j iterations is \(1/2^{j}\). f spans k coordinate directions, so it must collapse along each such direction to contract to a vertex. Once a coordinate collapses, it stays collapsed at all higher scales. As the random shift is independent for each coordinate direction, the probability of a collapse is the same along all coordinate directions that f spans. Using the union bound, the probability that \(g^j(f)\) has not collapsed to a vertex is at most \(k/2^j\). With Y as in the statement of the lemma, it follows that $$\begin{aligned} P(Y\ge j)\le k/2^j. 
\end{aligned}$$ Hence, $$\begin{aligned} \mathbb {E}[Y]=&\sum _{j=1}^{\infty } j P(Y=j) = \sum _{j=1}^{\infty } P(Y\ge j) \\&\le \log k + \sum _{c=1}^{\infty }\sum _{j=c\log k}^{(c+1)\log k} P(Y\ge j)\\&\le \log k + \sum _{c=1}^{\infty }\sum _{j=c\log k}^{(c+1)\log k} P(Y\ge c\log k)\\&\le \log k+ \sum _{c=1}^{\infty }\log k\frac{k}{2^{c \log k}} \\&\le \log k+ \log k\sum _{c=1}^{\infty } \frac{1}{k^{c-1}} \\&\le \log k + 2\log k \le 3 \log k. \end{aligned}$$ As a consequence of the lemma, the expected "lifetime" of k-simplices in our tower with \(k>0\) is rather short: given a flag \(e_0\subseteq \cdots \subseteq e_\ell \), the face \(e_\ell \) will be mapped to a vertex after \(O(\log d)\) steps, and so will be all its sub-faces, turning the flag into a vertex. It follows that summing up the total number of k-simplices with \(k>0\) over \(\mathcal {X}_\alpha \) for all \(\alpha \ge 0\) yields an upper bound of \(n2^{O(d\log k +d)}\) as well. Recall that a simplicial map can be written as a composition of simplex inclusions and contractions of vertices (Dey et al. 2014; Kerber and Schreiber 2017). That means, given the complex \(\mathcal {X}_{\alpha _s}\), to describe the complex at the next scale \(\alpha _{s+1}\), it suffices to specify which pairs of vertices in \(\mathcal {X}_{\alpha _s}\) map to the same image under \(\tilde{g}\), and which simplices in \(\mathcal {X}_{\alpha _{s+1}}\) are included at scale \(\mathcal {X}_{\alpha _{s+1}}\). The input is a set of n points \(P\subset \mathbb {R}^d\). The output is a list of events, where each event is of one of the three following types: A scale event defines a real value \(\alpha \) and signals that all upcoming events happen at scale \(\alpha \) (until the next scale event). An inclusion event introduces a new simplex, specified by the list of vertices on its boundary (we assume that every vertex is identified by a unique integer). A contraction event is a pair of vertices (i, j) from the previous scale, and signifies that i and j are identified as the same from that scale. In a first step, we estimate the range of scales that we are interested in. We compute a 2-approximation of diam(P) by taking any point \(p\in P\) and calculating \(\max _{q\in P}\Vert p-q\Vert \). Then we compute CP(P) using a randomized algorithm in \(n2^{O(d)}\) expected time (Khuller and Matias 1995). Next, we proceed scale-by-scale and construct the list of events accordingly. On the lowest scale, we simply compute the active vertices by point location for P in a cubical grid, and enlist n inclusion events (this is the only step where the input points are considered in the algorithm). For the data structure, we use an auxiliary container S and maintain the invariant that whenever a new scale is considered, S consists of all simplices of the previous scale, sorted by dimension. In S, for each vertex, we store an id and a coordinate representation of the active face to which it corresponds. Every \(\ell \)-simplex with \(\ell >0\) is stored just as a list of integers, denoting its boundary vertices. We initialize S with the n active vertices at the lowest scale. Let \(\alpha <\alpha '\) be any two consecutive scales with \(\square ,\square '\) the respective cubical complexes and \(\mathcal {X},\mathcal {X}'\) the approximation complexes, with \(\tilde{g}:\mathcal {X}\rightarrow \mathcal {X}'\) being the simplicial map connecting them. Suppose we have already constructed all events at scale \(\alpha \). First, we enlist the scale event for \(\alpha '\). 
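Before continuing with the per-scale processing of contraction and inclusion events below, the first step of the algorithm can be made concrete with a minimal sketch. The event classes, the brute-force distance computations and the grid snapping below are illustrative simplifications and not the actual implementation: the paper uses the randomized closest-pair algorithm of Khuller and Matias (1995) and shifted grids, both of which are omitted here.

```python
# Sketch of the first step of the tower algorithm (illustrative simplification).
import math
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ScaleEvent:        # all following events happen at this scale
    alpha: float

@dataclass
class InclusionEvent:    # a new simplex, specified by its vertex ids
    vertices: Tuple[int, ...]

@dataclass
class ContractionEvent:  # vertices i and j are identified from this scale on
    i: int
    j: int

def scale_range(P, d):
    """alpha_0 = CP(P)/(3d) and a 2-approximation of diam(P); the closest pair
    is computed by brute force here instead of the randomized algorithm."""
    cp = min(math.dist(p, q) for i, p in enumerate(P) for q in P[i + 1:])
    diam2 = 2 * max(math.dist(P[0], q) for q in P)
    return cp / (3 * d), diam2

def initial_events(P, alpha_0):
    """Point location of P in the grid of side alpha_0 (grid shifts omitted):
    one 0-simplex inclusion per active vertex."""
    events, ids = [ScaleEvent(alpha_0)], {}
    for p in P:
        key = tuple(round(x / alpha_0) for x in p)   # nearest grid vertex of p
        if key not in ids:
            ids[key] = len(ids)
            events.append(InclusionEvent((ids[key],)))
    return events, ids

P = [(0.0, 0.1), (0.9, 0.2), (0.5, 0.8)]
alpha_0, alpha_m = scale_range(P, d=2)
num_scales = math.ceil(math.log2(alpha_m / alpha_0))   # O(log(spread) + log d) scales
events, ids = initial_events(P, alpha_0)
print(alpha_0, alpha_m, num_scales, events)
```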
Then, we enlist the contraction events. For that, we iterate through the vertices of \(\mathcal {X}\) and compute their value under g, using point location in a cubical grid. We store the results in a list \(S'\) (which contains the simplices of \(\mathcal {X}'\)). If for a vertex j, g(j) is found to be equal to g(i) for a previously considered vertex i, we choose the minimal such i and enlist a contraction event for (i, j). We turn to the inclusion events: We start with the case of vertices. Every vertex of \(\mathcal {X}'\) is either an active face or a secondary face of \(\square '\). Each active face must contain an active vertex, which is also a vertex of \(\mathcal {X}'\). We iterate through the elements in \(S'\). For each active vertex v encountered, we go over all faces of the cubical complex \(\square '\) that contain v as a vertex, and check whether they are active. For every active face E encountered that is not in \(S'\) yet, we add it to \(S'\) and enlist an inclusion event of a new 0-simplex. Additionally, we go over each face of E, add it to \(S'\) and enlist a vertex inclusion event, thereby enumerating the secondary faces that are in E. At termination, all vertices of \(\mathcal {X}'\) have been detected. Next, we iterate over the simplices of S of dimension \(\ge 1\), and compute their image under \(\tilde{g}\) using the pre-computed vertex map; we store the result in \(S'\). To find the simplices of dimension \(\ge 1\) included at \(\mathcal {X}'\), we exploit our previous insight that they contain at least one vertex that is included at the same scale (see the proof of Theorem 4). Hence, we iterate over the vertices included in \(\mathcal {X}'\) and find the included simplices inductively in dimension. Let v be the current vertex under consideration; assume that we have found all \((p-1)\)-simplices in \(\mathcal {X}'\) that contain v. Each such \((p-1)\)-simplex \(\sigma \) is a flag of length p in \(\square '\). We iterate over all faces e that extend \(\sigma \) to a flag of length \(p+1\). If e is active, we have found a p-simplex in \(\mathcal {X}'\) incident to v. If this simplex is not in \(S'\) yet, we add it and enlist an inclusion event for it. We also enqueue the simplex in our inductive procedure, to look for \((p+1)\)-simplices in the next round. At the end of the procedure, we have detected all simplices in \(\mathcal {X}'\) without preimage, and \(S'\) contains all simplices of \(\mathcal {X}'\). We set \(S\leftarrow S'\) and proceed to the next scale. This ends the description of the algorithm. To compute the k-skeleton, the algorithm takes $$\begin{aligned} n2^{O(d)}\log \Delta + 2^{O(d)}M \end{aligned}$$ time in expectation and M space, where M denotes the size of the tower. In particular, the expected time is bounded by $$\begin{aligned} n2^{O(d)}\log \Delta + n2^{O(d\log k +d)} \end{aligned}$$ and the space is bounded by \(n2^{O(d\log k +d)}\). In the analysis, we ignore the costs of point locations in grids, checking whether a face is active, and searches in data structures S, since all these steps have negligible costs when appropriate data structures are chosen. Computing the image of a vertex of \(\mathcal {X}\) costs \(O(2^d)\) time. Moreover, there are at most \(n2^{O(d)}\) vertices altogether in the tower in expectation (using Lemma 10), so this bound in particular holds on each scale. Hence, the contraction events on a fixed scale can be computed in \(n2^{O(d)}\) time. 
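The contraction-event computation just analysed admits an equally short sketch. The helper below maps a vertex of the grid \(\mathbb {Z}^d\) to its closest vertex of the twice-coarser grid \(2\mathbb {Z}^d+y/2\), shifted by a random vector \(y\in \{-1,+1\}^d\) as in the proof of Lemma 11; coordinates are normalised so that the finer grid has unit spacing, and the code is an illustration rather than the production algorithm.

```python
# Sketch of contraction detection between two consecutive scales (illustration).
import random

def g_vertex(v, y):
    """Closest vertex of the coarser grid 2*Z^d + y/2 to the grid vertex v in Z^d,
    where y is the random shift vector with entries in {-1, +1}."""
    return tuple((yi / 2) + 2 * round((xi - yi / 2) / 2) for xi, yi in zip(v, y))

def contraction_events(active_vertices, y):
    """Enlist a pair (i, j) whenever vertex j has the same image under g as an
    earlier vertex i; i is the representative kept at the next scale."""
    events, first_with_image = [], {}
    for j, v in enumerate(active_vertices):
        img = g_vertex(v, y)
        if img in first_with_image:
            events.append((first_with_image[img], j))
        else:
            first_with_image[img] = j
    return events

random.seed(1)
d = 2
y = tuple(random.choice((-1, 1)) for _ in range(d))
vertices = [(0, 0), (1, 0), (0, 1), (3, 2)]
print(y, contraction_events(vertices, y))
```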
Finding new active vertices requires iterating over the cofaces of a vertex in a cubical complex. There are \(3^d\) such cofaces for each vertex. This has to be done for a subset of the vertices in \(\mathcal {X}'\), so the running time is also \(n2^{O(d)}\). Further, for each new active face, we go over its \(2^{O(d)}\) faces to enlist the secondary faces, so this step also consumes \(n2^{O(d)}\) time. Since there are \(O(\log \Delta +\log d)\) scales considered, these steps require \(n2^{O(d)}\log \Delta \) over all scales. Computing the image of \(\tilde{g}\) for a fixed scale costs at most \(O(2^d|\mathcal {X}|)\). M is the size of the tower, that is, the simplices without preimage, and I is the set of scales considered. The expected bound for \(\sum _{\alpha \in I} |\mathcal {X}_\alpha |=O(\log d M)\), because every simplex has an expected lifetime of at most \(3\log d\) by Lemma 11. Hence, the cost of these steps is bounded by \(2^{O(d)}M\). In the last step of the algorithm, we find the simplices of \(\mathcal {X}'\) included at \(\alpha '\). We consider a subset of simplices of \(\mathcal {X}'\), and for each, we iterate over a collection of faces in the cubical complex of size at most \(2^{O(d)}\). Hence, this step is also bounded by \(2^{O(d)}|\mathcal {X}|\) per scale, and hence bounded \(2^{O(d)}M\) as well. For the space complexity, the auxiliary data structure S gets as large as \(\mathcal {X}\), which is clearly bounded by M. For the output complexity, the number of contraction events is at most the number of inclusion events, because every contraction removes a vertex that has been included before. The number of inclusion events is the size of the tower. The number of scale events as described is \(O(\log \Delta +\log d)\). However, it is simple to get rid of this factor by only including scale events in the case that at least one inclusion or contraction takes place at that scale. The space complexity bound follows. \(\square \) Dimension reduction When the ambient dimension d is large, our approximation scheme can be combined with dimension reduction techniques to reduce the final complexity, very similar to the application in Choudhary et al. (2017b). For a set of n points \(P\subset \mathbb {R}^d\), we apply the dimension reduction schemes of Johnson-Lindenstrauss (JL) (Johnson et al. 1986), Matoušek (MT) (Matoušek 1990), and Bourgain's embedding (BG) (Bourgain 1985). We then compute the approximation on the lower-dimensional point set. We only state the main results in Table 1, leaving out the proofs since they are very similar to those from Choudhary et al. (2017b). Table 1 Comparison of dimension reduction techniques: here the approximation ratio is for the Rips persistence module, and the size refers to the size of the k-skeleton of the approximation Approximation scheme with cubical complexes We extend our approximation scheme to use cubical complexes in place of simplicial complexes. We start by detailing a few aspects of cubical complexes. Cubical complexes We now briefly describe the concept of cubical complexes, essentially expanding upon the contents of Sect. 3.1. For a detailed overview of cubical homology, we refer to Kaczynski et al. (2004). We define cubical complexes over the grids \(G_{\alpha _{s}}\). For any fixed \(\alpha _{s}\), the grids \(G_{\alpha _{s}}\) defines a natural collection of cubes. 
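As an aside, the count of \(3^d\) cofaces per grid vertex used in the running-time analysis above can be verified directly from a per-coordinate encoding: every face of the grid containing a vertex v either fixes a coordinate at \(v_i\) or extends it to one of the two incident intervals. The following minimal sketch uses the interval notation that is defined formally right below.

```python
# Sketch: enumerate the 3^d grid faces that contain a given grid vertex v.
from itertools import product

def cofaces_of_vertex(v, a):
    """Each coordinate is either degenerate at v_i (0), or extends upward (+1)
    to [v_i, v_i + a], or downward (-1) to [v_i - a, v_i]."""
    for signs in product((-1, 0, 1), repeat=len(v)):
        yield tuple(
            (vi, vi) if s == 0 else ((vi, vi + a) if s == 1 else (vi - a, vi))
            for vi, s in zip(v, signs)
        )

v = (0, 0, 0)
faces = list(cofaces_of_vertex(v, a=1))
assert len(faces) == 3 ** len(v)      # 3^d cofaces, including v itself
print(len(faces))
```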
An elementary cube \(\gamma \) is a product of intervals \(\gamma =I_1\times I_2\times \cdots \times I_d\), where each interval is of the form \(I_j=(x_j,x_j+m_j)\), such that the vertex \((x_1,\cdots ,x_m)\in G_{\alpha _{s}}\) and each \(m_j\) is either 0 or \(\alpha _s\). That means, an (elementary) cube is simply a face of a d-cube of the grid. An interval \(I_j\) is said to be degenerate if \(m_j=0\). The dimension of \(\gamma \) is the number of non-degenerate intervals that defines it. We define the boundary of any interval as the two degenerate intervals that form its endpoints and denote this by \(\partial (I_j)=(x_j,x_j) + (x_j+m_j,x_j+m_j)\). Taking the boundary of any fixed subset of the intervals defining \(\gamma \) consecutively gives a sum of faces of \(\gamma \). A cubical complex of \(G_{\alpha _{s}}\) is a finite collection of cubes of \(G_{\alpha _{s}}\). We define chain complexes for the cubical case in the same way as in simplicial complexes. The chain complexes are connected by boundary homomorphisms, where the boundary of a cube is defined as: $$\begin{aligned} \partial \left( I_1\times \cdots \times I_d\right) = \left( \partial (I_1)\times I_2 \times \cdots \times I_d\right) + \cdots + \left( I_1\times \cdots \times I_{d-1}\times \partial (I_d)\right) , \end{aligned}$$ where \((I_1\times \cdots \times \partial (I_j)\times \cdots \times I_d)\) denotes the sum $$\begin{aligned} \left( I_1\times \cdots \times \left( x_i,x_i\right) \times \cdots \times I_d\right) + \left( I_1\times \cdots \times \left( x_i+m_i,x_i+m_i\right) \times \cdots \times I_d\right) . \end{aligned}$$ It can be quickly verified that for each cube \(\gamma \), \(\partial \circ \partial (\gamma ) =0\) since each term appears twice in the expression and the addition is over \(\mathbb {Z}_2\). Cubical maps and induced homology Let \(T_{\alpha _{s}}\) and \(T_{\alpha _{t}}\) denote the cubical complexes defined by the grids \(G_{\alpha _{s}}\) and \(G_{\alpha _{t}}\), respectively, for \(s\le t\). We use the vertex map \(g:G_{\alpha _{s}}\rightarrow G_{\alpha _{t}}\) to define a map between the cubical complexes. Note that if (a, b) are vertices of a cube of \(T_{\alpha _{s}}\) that differ in one coordinate, then (g(a), g(b)) are vertices of a cube of \(T_{\alpha _{t}}\) that differ in at most one coordinate. A cubical map is a map \(f:T_{\alpha _{s}}\rightarrow T_{\alpha _{t}}\) defined using g, such that for each cube \(\gamma =[a_1,b_1]\times \cdots \times [a_d,b_d]\) of \(T_{\alpha _{s}}\), \(f(\gamma ):=[g(a_1),g(b_1)]\times \cdots \times [g(a_d),g(b_d)]\) spans a cube of \(T_{\alpha _{t}}\). The cubical map can also be restricted to sub-complexes of \(T_{\alpha _{s}}\) and \(T_{\alpha _{t}}\), provided that the image \(f(\gamma )\) is well-defined. Each cubical map also defines a corresponding continuous map between the underlying spaces of the respective complexes. Let \(x\in |\gamma |\) be a point in \(\gamma \). Then, the coordinates of x can be uniquely written as \(x=[\lambda _1 a_1 + (1-\lambda _1) b_1,\cdots , \lambda _d a_d + (1-\lambda _d) b_d]\) where each \(\lambda _i\in [0,1]\). The image of x under the continuous extension of f is the point \([\lambda _1 g(a_1) + (1-\lambda _1) g(b_1),\cdots , \lambda _d g(a_d) + (1-\lambda _d) g(b_d)]\) in the cube \(g(\gamma )\). The cubical map f gives rise to a chain map \(f_\#:C_p(T_{\alpha _{s}}) \rightarrow C_p(T_{\alpha _{t}})\) between the p-th chain groups of the complexes, for each \(p\in [0,\cdots ,d]\). 
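Before the chain map \(f_\#\) is spelled out below, the boundary formula just given can be checked on a toy example. In the following sketch (an illustration, not library code), an elementary cube is a tuple of intervals and a \(\mathbb {Z}_2\) chain is a set of cubes, so that addition over \(\mathbb {Z}_2\) becomes the symmetric difference; the assertion verifies \(\partial \circ \partial =0\) for a square.

```python
# Sketch: elementary cubes as tuples of intervals, Z_2 chains as sets of cubes.
def boundary(cube):
    """Z_2 boundary of an elementary cube: replace each non-degenerate
    interval in turn by its two endpoints."""
    chain = set()
    for j, (a, b) in enumerate(cube):
        if a == b:
            continue                      # degenerate interval: no contribution
        for endpoint in (a, b):
            face = cube[:j] + ((endpoint, endpoint),) + cube[j + 1:]
            chain ^= {face}               # addition over Z_2
    return frozenset(chain)

def boundary_of_chain(chain):
    out = set()
    for cube in chain:
        out ^= boundary(cube)             # symmetric difference = Z_2 sum
    return frozenset(out)

# A 2-cube (square) spanned by the intervals [0,1] x [0,1]:
square = ((0, 1), (0, 1))
d_square = boundary(square)               # its four edges
assert boundary_of_chain(d_square) == frozenset()   # boundary of boundary = 0
print(len(d_square), "edges; d(d(square)) =", boundary_of_chain(d_square))
```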
For each cube \(\gamma \), \(f_\#(\gamma )=f(\gamma )\) if \(dim(\gamma )=dim(f(\gamma ))\) and 0 otherwise. For any chain \(c=\sum _i \gamma _i\), the chain map is defined linearly \(f_\#(c)=\sum _i f_\#(\gamma _i)\). It is simple to verify that \(\partial \circ f_\# = f_\#\circ \partial \), so this gives a homomorphism between the chain groups. Moving to the homology level, we get the respective homology groups \(H(T_{\alpha _{s}})\) and \(H(T_{\alpha _{t}})\) and the chain map from above induces a linear map between them. The concept of reduced homology and augmentation maps is also applicable to the cubical chain complexes. For a sequence of cubical complexes connected with cubical maps, this generates a persistence module. Cubical filtrations and towers are defined in a similar manner to the simplicial case. A cubical filtration is a collection of cubical complexes \((T_\alpha )_{\alpha \in I}\) such that \(T_\alpha \subseteq T_\alpha '\) for all \(\alpha \le \alpha '\in I\). A (cubical) tower is a sequence \((T_\alpha )_{\alpha \in J}\) of cubical complexes with J being an index set together with cubical maps between complexes at consecutive scales. A cubical tower can be written as a sequence of inclusions and contractions, where an inclusion refers to the addition of a cube and a contraction refers to collapsing a cube along a coordinate direction to either of the endpoints of the interval. We choose the simplest possible cubical complex to define our approximation cubical tower: for each scale \(\alpha _{s}\), we define the cubical complex \(U_{\alpha _{s}}\) as the set of active faces and secondary faces spanned by \(V_{\alpha _{s}}\). Hence the cubical complex is closed under taking faces and is well-defined. See Fig. 5 for a simple example. Recall from Sect. 4 that for each \(s\in \mathbb {Z}\), \(U_{\alpha _{s}}\) and \(U_{\alpha _{s+1}}\) are related by a cubical map \(g_{\alpha _{s}}\), which gives rise to the cubical tower $$\begin{aligned} \left( U_{\alpha _{s}}\right) _{s\in \mathbb {Z}}. \end{aligned}$$ We extend this to a tower \((U_{\alpha })_{\alpha \ge 0}\) by using techniques from Appendix A. In Sect. 4 we saw that the tower \((\mathcal {X}_{\alpha })_{\alpha \ge 0}\) gives an approximation to the Rips filtration. The relation between the simplicial and cubical towers is trivial: \(\mathcal {X}_{\alpha _{s}}\) is simply a triangulation of \(|U_{\alpha _{s}}|\). Hence \(\mathcal {X}_{\alpha _{s}}\) and \(U_{\alpha _{s}}\) have the same homology (Munkres 1984). Moreover, the simplicial map is derived from an application of the cubical map. In particular, the continuous versions of both maps are the same. For any \(0\le \alpha \le \beta \), let \(f_1:H_*(U_{\alpha })\rightarrow H_*(U_{\beta })\) denote the homomorphism induced by the cubical map, \(f_2:H_*(\mathcal {X}_{\alpha })\rightarrow H_*(\mathcal {X}_{\beta })\) denote the homomorphism induced by the simplicial map, and \(f_0:H_*(|\mathcal {X}_{\alpha }|=|U_{\alpha }|)\rightarrow H_*(|\mathcal {X}_{\beta }|=|U_{\beta }|)\) denote the homomorphism induced by the common continuous map. It is well-established that \(f_1=f_0\) (Kaczynski et al. 2004, Chapter. 6) and \(f_2=f_0\) (Munkres 1984, Chapter. 2). Therefore, we conclude that the persistence modules \(\big (H(U_{\alpha })\big )_{\alpha \ge 0}\) and \(\big (H(\mathcal {X}_{\alpha })\big )_{\alpha \ge 0}\) are persistence-equivalent. 
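The cubical map and the induced chain map described above can be sketched in the same interval representation; the toy vertex map g below is an arbitrary stand-in for the grid vertex map of the construction, so the snippet is illustrative only. A cube is sent to the product of the images of its interval endpoints, and the chain map drops it whenever the image has smaller dimension.

```python
# Sketch: the cubical map f induced by a vertex map g, and the chain map f_#.
def dim(cube):
    return sum(1 for a, b in cube if a != b)

def cubical_map(cube, g):
    """f(gamma) = [g(a_1), g(b_1)] x ... x [g(a_d), g(b_d)], with g applied to the
    lower and upper corner vertices coordinate-wise."""
    los = g(tuple(a for a, b in cube))
    his = g(tuple(b for a, b in cube))
    return tuple((min(l, h), max(l, h)) for l, h in zip(los, his))

def chain_map(chain, g):
    out = set()
    for cube in chain:
        image = cubical_map(cube, g)
        if dim(image) == dim(cube):   # degenerate images are mapped to 0
            out ^= {image}            # addition over Z_2
    return frozenset(out)

g = lambda v: tuple(round(x / 2) for x in v)   # toy stand-in for the grid vertex map
collapsing_edge = ((0, 1), (2, 2))             # image is a vertex -> mapped to 0
surviving_edge = ((0, 2), (2, 2))              # image is again an edge -> kept
print(chain_map({collapsing_edge, surviving_edge}, g))
```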
Combining the persistence-equivalence observation above with the result of Theorem 3, we get: The scaled persistence modules \(\big (H(U_{2\alpha })\big )_{\alpha \ge 0}\) and the \(L_\infty \)-Rips module \(\big (H(\mathcal {R}^{\infty }_\alpha )\big )_{\alpha \ge 0}\) are 2-approximations of each other, and \(\big (H(U_{2\root 4 \of {d}\alpha })\big )_{\alpha \ge 0}\) and the Rips module \(\big (H(\mathcal {R}_{\alpha })\big )_{\alpha \ge 0}\) \(2d^{0.25}\)-approximate each other. To compute the cubical tower, we simply re-use the algorithm for the simplicial case, with small changes: In the simplicial case, we used a container S to hold the simplices from the previous scale. We alter S to store the cubes from the previous scale. For each interval, we store an id and its coordinates. Each cube is stored as the set of ids of the intervals that define it. At each scale, we enumerate the image of the cubical map by computing the image of each interval, and then use this pre-computed map to compute the image of \((\ge 1)\)-dimensional cubes. For the inclusions, we find all the active and secondary faces but do not compute the simplices. The inclusions in the cubical tower correspond exactly to the inclusions of active and secondary faces in the simplicial tower, so this enumerates all inclusions correctly. From Lemma 10, at most \(n3^d\) active faces are added to the tower. Hence at most \(n3^d3^d=n6^d\) active and secondary faces are added to the tower. Computing the tower takes time as in Theorem 5 by replacing M with the size bound. We conclude that: The cubical tower has size at most \(n6^{d}\) and takes at most \(n6^{d}\log \Delta \) time in expectation to compute, where \(\Delta \) is the spread of the point set. We now touch upon the practical aspects of our constructions. An implementation of our approximation scheme would be a tool that computes the (approximate) persistence barcode for any input data set. For any scheme to be useful in practice, it should be able to compute sufficiently close approximations using a reasonable amount of resources. Our cubical tower consists of cubical complexes connected via cubical maps. To our knowledge, there are no algorithms to compute barcodes in this setting where the cubical maps are more than just trivial inclusions. As such, although our cubical scheme has an exponentially smaller theoretical size bound than the simplicial tower, we cannot hope to test it in practice unless the appropriate primitives are available. It could be an interesting research direction to develop this primitive and in particular to investigate whether the techniques used in computing persistence barcodes for a simplicial tower allow a generalization to the cubical case. It makes more sense to inspect the simplicial tower. We saw in Theorem 4 that the size of the tower is \(n6^{d-1}(2k+4)(k+3)! \left\{ \begin{array}{c}d\\ k+2\end{array}\right\} \). Unfortunately, this bound is already so large that the storage requirement of the algorithm (Theorem 7) explodes exponentially. Let us assume a conservative estimate of 1 byte of memory per simplex. For a point set in \(d=8\) dimensions and \(k=4\), the complexity bound is already at least 4000 Terabytes, before factoring in n. For a point set in \(d=10\) dimensions and \(k=5\), this explodes to \(10^{20}\) Terabytes. While these are only upper bounds, the actual complexity would need to be many orders of magnitude smaller to be feasible in practice, which is unlikely. Even with conservative estimates, our storage requirement is impractical.
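The displayed bound from Theorem 4 can be evaluated directly for concrete values of d and k. The following sketch computes the per-point factor of that bound via the standard recurrence for Stirling numbers of the second kind and contrasts it with the cubical bound \(6^{d}\) stated above; the absolute byte figures quoted in the text depend in addition on the assumed memory accounting, so the sketch is meant only as a way to explore the growth of the bound.

```python
# Sketch: evaluate the per-point k-skeleton bound 6^(d-1) * (2k+4) * (k+3)! * S(d, k+2).
from math import factorial

def stirling2(n, k):
    """Stirling number of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def skeleton_bound_per_point(d, k):
    return 6 ** (d - 1) * (2 * k + 4) * factorial(k + 3) * stirling2(d, k + 2)

for d, k in [(8, 4), (10, 5)]:
    print(f"d={d}, k={k}: simplicial ~ {skeleton_bound_per_point(d, k):.3e}, cubical = {6 ** d}")
```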
Therefore we are not very hopeful that implementing the scheme in its current state will provide any useful insight for high-dimensional approximations. Making it implementation-worthy demands more optimizations and tools at the algorithmic level. This is worth a separate algorithm engineering project in its own right. We plan to pursue this line of research in the future. Since the focus of this paper is on the theoretical aspects of approximations, we exclude experimental results from the current work. We hope that a more careful, implementation-focused approach may prove more practical. On the other hand, the upper bound for the cubical case is simply \(n6^{d}\). Even for \(d=10\), the storage requirement would be less than 100 Megabytes before factoring in n. This is far more attractive than the simplicial case. As such, it may make more sense to invest time and effort in developing tools to compute barcodes in the cubical setup. We presented an approximation scheme for the Rips filtration with better approximation ratio, size and computational complexity than previous approaches for the case of high-dimensional point clouds. In particular, we are able to achieve a marked reduction in the size of the approximation by using cubical complexes in place of simplicial complexes. This is in contrast to all other previous approaches, which used simplicial complexes as approximating structures. An important technique that we used in our scheme is the application of acyclic carriers to prove interleaving results. An alternative would be to explicitly construct chain maps between the Rips and the approximation towers; unfortunately, this makes the interleaving analysis significantly more complex. While the proof of the interleaving in Sect. 4.3 is still technically challenging, it is greatly simplified by the use of acyclic carriers. There is also no benefit in knowing the interleaving maps explicitly, because they are only required for the analysis of the interleaving and not for the actual computation of the approximation tower. We believe that this technique is of general interest for the construction of approximations of cell complexes. Our simplicial tower is connected by simplicial maps; there are (implemented) algorithms to compute the barcode of such towers (Dey et al. 2014; Kerber and Schreiber 2017). It is also quite easy to adapt our tower construction to a streaming setting (Kerber and Schreiber 2017), where the output list of events is passed to an output stream instead of being stored in memory. An exception are point clouds in \(\mathbb {R}^2\) and \(\mathbb {R}^3\), for which alpha complexes (Edelsbrunner and Harer 2010) are an efficient alternative. Ulrich Bauer, private communication. To avoid thinking about orientations, it is often assumed that \(\mathcal {F}=\mathbb {Z}_2\) is the field with two elements. In the language of Munkres (1984), this result is stated as the existence of a chain homotopy between \(\phi _1\) and \(\phi _2\). As is evident from Munkres (1984), Theorem 12.4, this implies that the induced linear maps are the same. We define an order between the active faces of \(\square _{\alpha }\), using \(\succ \): for each active face \(F\in \square _{\alpha }\), there are at least two points of P whose images under \(g_{\alpha }\) are vertices of F; say \(\{q_1\succ q_2\succ \cdots \succ q_m\}\subseteq P\) are the points that map to F. We assign to F the string of length n: . Each active face has a unique string associated to it.
A total order on the faces is obtained by taking the lexicographic orders of the strings of each active face. Botnan, M., Spreemann, G.: Approximating persistent homology in Euclidean space through collapses. Appl. Algebra Eng. Commun. Comput. 26(1–2), 73–101 (2015) Article MathSciNet Google Scholar Bourgain, J.: On Lipschitz embedding of finite metric spaces in Hilbert space. Israel J. Math. 52(1–2), 46–52 (1985) Bubenik, P., Scott, J.A.: Categorification of persistent homology. Discrete Comput. Geom. 51(3), 600–627 (2014) Bubenik, P., de Silva, V., Scott, J.: Metrics for generalized persistence modules. Found. Comput. Math. 15(6), 1501–1531 (2015) Carlsson, G.: Topology and data. Bull. Am. Math. Soc. 46, 255–308 (2009) Carlsson, G., Zomorodian, A.: Computing persistent homology. Discrete Comput. Geom. 33(2), 249–274 (2005) Cavanna, N., Jahanseir, M., Sheehy, D.: A Geometric perspective on sparse filtrations. In: Proceedings of the 27th Canadian Conference on Computational Geometry (CCCG), pp. 116–121 (2015) Chazal, F., Cohen-Steiner, D., Glisse, M., Guibas, L., Oudot, S.: Proximity of persistence modules and their diagrams. In: ACM Symposium on Computational Geometry (SoCG), pp. 237–246 (2009) Choudhary, A., Kerber, M., Raghavendra, S.: Improved approximate rips filtrations with shifted integer lattices. In: Proceedings of the 25th Annual European Symposium on Algorithms (ESA), pp. 28:1–28:13 (2017) Choudhary, A., Kerber, M., Raghavendra, S.: Polynomial-sized topological approximations using the permutahedron (extended version). Discrete Comput. Geom. (2017) Choudhary, A., Kerber, M., Raghavendra, S.: Improved topological approximations by digitization. In: Proceedings of the Symposium on Discrete Algorithms (SODA), pp. 448:1–448:14 (2019) Dey, T.K., Fan, F., Wang, Y.: Computing topological persistence for simplicial maps. In: Proceedings of the 30th Annual Symposium on Computational Geometry (SoCG), pp. 345–354 (2014) Edelsbrunner, H., Harer, J.: Computational Topology—An Introduction. American Mathematical Society, New York (2010) MATH Google Scholar Edelsbrunner, H., Kerber, M.: Dual complexes of cubical subdivisions of \(\mathbb{R}^n\). Discrete Comput. Geom. 47(2), 393–414 (2012) Edelsbrunner, H., Mücke, E.P.: Simulation of simplicity: a technique to cope with degenerate cases in geometric algorithms. ACM Trans. Gr. 66–104 (1990) Edelsbrunner, H., Letscher, D., Zomorodian, A.: Topological persistence and simplification. Discrete Comput. Geom. 28(4), 511–533 (2002) Goodman, J.E., O'Rourke, J., Tóth, C.D. (eds.): Handbook of Computational Geometry. CRC Press, Boca Raton (2017) Hatcher, A.: Algebraic Topology. Cambridge University Press, Cambridge (2002) Johnson, W.B., Lindenstrauss, J., Schechtman, G.: Extensions of Lipschitz maps into Banach spaces. Israel J. Math. 54(2), 129–138 (1986) Kaczynski, T., Mischaikow, K., Mrozek, M.: Computational Homology. Applied Mathematical Sciences, Springer, New York (2004) Kerber, M., Schreiber, H.: Barcodes of towers and a streaming algorithm for persistent homology. In: Proceedings of 33rd International Symposium on Computational Geometry (SoCG), pp. 57:1–57:15 (2017) Kerber, M., Sharathkumar, R.: Approximate Čech complex in low and high dimensions. In: Algorithms and Computation—24th International Symposium (ISAAC), pp. 666–676 (2013) Khuller, S., Matias, Y.: A simple randomized sieve algorithm for the closest-pair problem. Inf. Comput. 118(1), 34–37 (1995) Matoušek, J.: Bi-Lipschitz embeddings into low-dimensional Euclidean spaces. 
Commentationes Mathematicae Universitatis Carolinae (1990) Munkres, J.R.: Elements of Algebraic Topology. Westview Press, Milton Park (1984) Rennie, B.C., Dobson, A.J.: On stirling numbers of the second kind. J. Comb. Theory 7(2), 116–121 (1969) Sheehy, D.: Linear-size approximations to the Vietoris-rips filtration. Discrete Comput. Geom. 49(4), 778–796 (2013) Wagner, H., Chen, C., Vuçini, E.: Efficient Computation of Persistent Homology for Cubical Data, pp. 91–106. Springer, Berlin (2012) We would like to thank the reviewers end editors for their feedback, which was very helpful in improving the presentation. Institut für Informatik, Freie Universität Berlin, Berlin, Germany Aruni Choudhary Institut für Geometrie, Technische Universität Graz, Graz, Austria Michael Kerber Department of Computer Science, Virginia Tech, Blacksburg, VA, USA Sharath Raghvendra Correspondence to Aruni Choudhary. On behalf of all authors, the corresponding author states that there is no conflict of interest. Aruni Choudhary is supported in part by European Research Council StG 757609. Michael Kerber is supported by Austrian Science Fund (FWF) grant number P 29984-N35. Sharath Raghvendra acknowledges support of NSF CRII grant CCF-1464276. Strong interleaving for barycentric scheme Recall that we build the approximation tower over the set of scales \(I:=\{\alpha _s = 2^s \mid s\in \mathbb {Z}\}\). The tower \((\mathcal {X}_{\alpha })_{\alpha \in I}\) connected with the simplicial map \(\tilde{g}\) can be extended to the set of scales \(\{\alpha \ge 0\}\) with simple modifications: for \(\alpha \in I\), we define \(\mathcal {X}_\alpha \) in the usual manner. The map \(\tilde{g}\) stays the same as before for complexes at such scales. for all \(\alpha \in [\alpha _s,\alpha _{s+1}) \), we set \(\mathcal {X}_\alpha =\mathcal {X}_{\alpha _s}\), for any \(\alpha _s\in I\). That means, the complex stays the same in the interval between any two scales of I, so we define \(\tilde{g}\) as the identity within this interval. These give rise to the tower \((\mathcal {X}_\alpha )_{\alpha \ge 0}\), that is connected with the simplicial map \(\tilde{g}\). This modification helps in improving the interleaving with the Rips persistence module. First, we extend the acyclic carriers \(C_1\) and \(C_2\) from before to the new case: \(C_1^\alpha :\mathcal {R}^{\infty }_{\alpha }\rightarrow \mathcal {X}_{4\alpha },\alpha >0\): we define \(C_1\) as before, simply changing the scales in the definition. It is straightforward to see that \(C_1\) is still a well-defined acyclic carrier. \(C_2^\alpha :\mathcal {X}_{\alpha } \rightarrow \mathcal {R}^{\infty }_{\alpha },\alpha \ge 0\): this stays the same as before. It is simple to check that \(C_2\) is still a well-defined acyclic carrier. These give rise to augmentation-preserving chain maps between the chain complexes: $$\begin{aligned} c_1^\alpha : \mathcal {C}_*\left( \mathcal {R}^{\infty }_{\alpha }\right) \rightarrow \mathcal {C}_*\left( \mathcal {X}_{4\alpha }\right) \qquad \text {and} \qquad c_2^\alpha : \mathcal {C}_*\left( \mathcal {X}_{\alpha }\right) \rightarrow \mathcal {C}_*\left( \mathcal {R}^{\infty }_{\alpha }\right) , \end{aligned}$$ using the acyclic carrier theorem as before (Theorem 1). The diagram commutes on the homology level, for all \(0\le \alpha \le \alpha '\). Consider the acyclic carrier \(C_1\circ inc:\mathcal {R}^{\infty }_{\alpha } \rightarrow \mathcal {X}_{4\alpha '} \). 
It is simple to verify that this carrier carries both \(c_1\circ inc\) and \(\tilde{g}\circ c_1\), so the induced diagram on the homology groups commutes, from Theorem 1. \(\square \) We construct an acyclic carrier \(D: \mathcal {X}_\alpha \rightarrow \mathcal {R}^{\infty }_{\alpha '}\) which carries \(inc\circ c_2\) and \(c_2\circ \tilde{g}\), thereby proving the claim (Theorem 1). Consider any simplex \(\sigma \in \mathcal {X}_{\alpha }\) and let \(E\in \square _{\alpha }\) be the minimal active face of containing \(\sigma \). We set \(D(\sigma )\) as the simplex on the set of input points of P, which lie in the Voronoi regions of the vertices of g(E). By the triangle inequality, \(D(\sigma )\) is a simplex of \(\mathcal {R}^{\infty }_{\alpha '}\), so that D is a well-defined acyclic carrier. It is straightforward to verify that D carries both \(c_2\circ \tilde{g}\) and \(inc\circ c_2\). \(\square \) The diagram is essentially the same as the lower triangle of Diagram 10, with a change in the scales. As a result, the proof of Lemma 7 also applies for our claim directly. \(\square \) The diagram can be re-interpreted as: The modified diagram is essentially the same as the upper triangle of Diagram 10, with a change in the scales and a replacement of \(c_1\) with \(\tilde{g}\circ c_1\), that is equivalent to the chain map at the scale \(\alpha '\). Hence, the proof of Lemma 8 also applies for our claim directly. \(\square \) Using Lemmas 12, 13, 14, 15, and the scale balancing technique for strongly interleaved persistence modules, it follows that The persistence modules \(\big (H(\mathcal {X}_{2\alpha })\big )_{\alpha \ge 0}\) and \(\big (H(\mathcal {R}^{\infty }_\alpha )\big )_{\alpha \ge 0}\) are strongly 2-interleaved. Choudhary, A., Kerber, M. & Raghvendra, S. Improved approximate rips filtrations with shifted integer lattices and cubical complexes. J Appl. and Comput. Topology 5, 425–458 (2021). https://doi.org/10.1007/s41468-021-00072-4 Revised: 20 February 2021 Issue Date: September 2021 Persistent homology Rips filtrations Approximation algorithms Topological data analysis Mathematics Subject Classification F.2.2
Integral sentences and numerical comparative calculations for the validity of the dispersion model for air pollutants AUSTAL2000 Rainer Schenk ORCID: orcid.org/0000-0003-3759-57922 nAff1 The authors (Janicke and Janicke (2002). Development of a model-based assessment system for machine-related immission control. IB Janicke Dunum) developed an expansion model under the name AUSTAL2000. This becomes effective in the Federal Republic of Germany with the entry into force of TA Luft (BMU (2002) First general administrative regulation for the Federal Immission Control Act (technical instructions for keeping air TA air clean) from July 24, 2002. GMBL issue 25–29 S: 511–605) declared binding in 2002. Immediately after publication, the first doubts about the validity of the reference solutions are raised in individual cases. The author of this article, for example, is asked by senior employees of the immission control to express their opinions. However, questions regarding clarification in the engineering office Janicke in Dunum remain unanswered. In 2014, the author of this article was again questioned by interested environmental engineers about the validity of the reference solutions of the AUSTAL dispersion model. In the course of a clarification, the company WESTKALK, United Warstein Limestone Industry, later placed an order to develop expertise on this model development, Schenk (2014) Expertise on Austal 2000. Report on behalf of the United Warstein Limestone Industry, Westkalk Archives and IBS). The results of this expertise form the background of all publications on the criticism of Schenk's AUSTAL expansion model. It is found that all reference solutions violate all main and conservation laws. Peculiar terms used spread confusion rather than enlightenment. For example, one confuses process engineering homogenization with diffusion. When homogenizing, one notices strange vibrations at the range limits, which cannot be explained further. It remains uncertain whether this is due to numerical instabilities. However, it is itself stated that in some cases the solutions cannot converge. The simulations should then be repeated with different input parameters. Concentrations are calculated inside AUSTAL. In this context, it is noteworthy that no publication by the AUSTAL authors specifies functional analysis, e.g. for stability, convergence and consistency. Concentrations are calculated inside closed buildings. It is explained that dust particles cannot "see" vertical walls and therefore want to pass through them. One calculates with "volume sources over the entire computing area". However, such sources are unknown in the theory of modeling the spread of air pollutants. Deposition speeds are defined at will. 3D wind fields should be used for validation. The rigid rotation of a solid in the plane is actually used. You not only deliver yourself, but also all co-authors and official technical supporters of the comedy. Diffusion tensors are formulated without demonstrating that their coordinates have to comply with the laws of transformation and cannot be chosen arbitrarily. Constant concentration distributions only occur when there are no "external forces". It is obviously not known that the relevant model equations are mass balances and not force equations. AUSTAL also claims to be able to perform non-stationary simulations. One pretends to have calculated time series. 
However, it is not possible to find out in all reports which time-dependent analytical solution the algorithm could have been validated with. A three-dimensional control room is described, but only zero and one-dimensional solutions are given. All reference examples with "volume source distributed over the entire computing area" turn out to be useless trivial cases. The AUSTAL authors believe that "a linear combination of two wind fields results in a valid wind field". Obviously, one does not know that wind fields are only described by second-degree momentum equations, which excludes any linear combinations. It is claimed that Berljand profiles have been recalculated. In fact, one doesn't care about three-dimensional concentration distributions. On the one hand, non-stationary tasks are described, but only stationary solutions are discussed. In another reference, non-stationary solutions are explained in reverse, but only stationary model equations are considered. Further contradictions can be found in the original literature by the AUSTAL authors. The public is misled. The aim of the present work is to untangle the absent-mindedness of the AUSTAL authors by means of mathematics and mechanics, to collect, to order and to systematize the information. This specifies the relevant tasks for the derivation of stationary and non-stationary reference solutions. They can be compared to the solutions of the AUSTAL authors. These results should make it possible to make clear conclusions about the validity of the AUSTAL model. Using the example of deriving reference solutions for spreading, sedimentation and deposition, the author of this work describes the necessary mathematical and physical principles. This includes the differential equations for stationary and non-stationary tasks as well as the relevant initial and boundary conditions. The valid initial boundary value task is explained. The correct solutions are given and compared to the wrong algorithms of the AUSTAL authors. In order to check the validity of the main and conservation laws, integral equations are developed, which are subsequently applied to all solutions. Numerical comparative calculations are used to check non-stationary solutions, for which an algorithm is independently developed. The analogy to the impulse, heat and mass transport is also used to analyze the reference solutions of the AUSTAL authors. If one follows this analogy, all reference solutions by the AUSTAL authors comparatively violate Newton's 3rd axiom. As a result, the author of this article comes to the conclusion that all reference solutions by the AUSTAL authors violate the mass conservation law. Earlier statements on this are confirmed and substantiated further. All applications with "volume source distributed over the entire computing area" turn out to be useless zero-dimensional trivial cases. The information provided by the AUSTAL authors on non-stationary solutions has not been documented throughout. The authors of AUSTAL have readers puzzled about why, for example, the stationary solution should have set in after 10 days for each reference case. It turns out that no non-stationary calculations could be carried out at all. In order to gain in-depth knowledge of the development of AUSTAL, the author of this article deals with his life story. It begins according to (Axenfeld et al. (1984) Development of a model for the calculation of dust precipitation. 
Environmental research plan of the Federal Minister of the Interior for Air Pollution Control, research report 104 02 562, Dornier System GmbH Friedrichshafen, on behalf of the Federal Environment Agency), according to which one is under deposition loss and not Storage understands. In the end, the AUSTAL authors take refuge in (Trukenmüller (2016) equivalence of the reference solutions from Schenk and Janicke. Treatise Umweltbundesamt Dessau-Rosslau S: 1–5) in incomprehensible evidence. How Trukenmüller gets more and more involved in contradictions can be found in (Trukenmüller (2017) Treatises of the Federal Environment Agency from February 10th, 2017 and March 23rd, 2017. Dessau-Rosslau S: 1–15). The author of this article comes to the conclusion that the dispersion model for air pollutants AUSTAL is not validated. Dispersion calculations for sedimentation and depositions cannot be carried out with this model. The authors of AUSTAL have to demonstrate how one can recalculate nature experiments with a dispersion model that contradicts all valid principles. Applications important for health and safety, e.g. Security analyzes, hazard prevention plans and immission forecasts are to be checked with physically based model developments. Court decisions are also affected. By the authors Janicke et al. (2002) a dispersion model is developed under the name AUSTAL2000. In the Federal Republic of Germany, this became binding in 2002 when the Technical Instructions for Air Quality Control (TA Luft), BMU (2002), came into force. Other model developments have to prove their equivalence to the reference solutions of AUSTAL. Immediately after publication, individual employees of immission control and later also environmental engineers raise doubts about the validity of the reference solutions. For the purpose of clarification, the author of this article in 2014 was commissioned by the company WESTKALK, United Warstein Limestone Industry, to develop expertise on this expansion model according to Schenk (2014). The author of this article comes to the conclusion that all reference solutions from AUSTAL violate mass conservation and the second law of thermodynamics and are therefore not usable. The use of critical terms also leads to the conclusion that the AUSTAL authors are not very familiar with the theory of modeling the spread of air pollutants. The results of this expertise are published in Schenk (2015a). They form the background of all criticism. In Trukenmüller et al. (2015) is strongly contradicted. However, the authors of this publication are forced to publish the derivation of their reference solutions for the first time in 31 years. The development of the AUSTAL dispersion model is based on the work of Axenfeld et al. (1984). 31 years had passed until 2015. In the solution process of the reference solutions one refers to an alleged "usual Convention", which could be found everywhere in "listed standard literature". With this convention, which is later referred to as the Janicke Convention, the speed of deposition is mistakenly understood as a proportionality factor and not as a material constant. The following replica Schenk (2015b) demonstrates that the one described in Trukenmüller et al. (2015) specified algorithm is incorrect. The initial boundary value tasks responsible for spreading, sedimentation and deposition cannot be solved without contradiction. The authors resist again and claim in Trukenmüller (2016) that there is equivalence to the correct solutions described in Schenk (2015b). 
The author of this article is clearly against this claim. It is not credible that this claim can only be traced back to ignorance. It is more likely that one is pursuing an intention to deceive here, as will be understood later. For example, the claim that Venkatram et al. (1999) also proves to be devoid of purpose. The publication Schenk (2017) proves that it is solely an unfounded evidence. In Trukenmüller (2017) i.a. tried again to save Janicke's Convention. One almost conjures up the author of this article that he should "… recognize the correct boundary condition, and this follows from the definition of the deposition speed". It simply "… parameterizes the mass balance at the bottom of the model…", which actually leads to a loss of mass, as was already the case in Axenfeld et al. (1984) must admit. "Worldwide, the dispersion models are based on the definition of the speed of deposition that is recognized in the literature", you can read. However, studying literature has shown that the opposite is correct. You obviously only use the reputation of authorities to distract yourself from your ignorance. This allegation will also be justified later. Because of the demand for equivalence of other model developments to AUSTAL, non-university research is blocked rather than promoted. How should new model developments be able to demonstrate equivalence if the necessary reference solutions contradict all principles of mathematics and mechanics. The Schenk publication (2018a) shows which faults the demand for equivalence leads to. Not only is the AUSTAL dispersion model not validated. The authors of other model developments are forced to question their excellent algorithms, e.g. can be found in Schorling (2009). Finally, Schenk (2018b) proves, for example, that the authors of AUSTAL have compared the results of Venkatram et al. (1999) understand deposition as loss rather than storage. All incantations in Trukenmüller (2017) are questioned. At the request of authorities and other interested parties, the AUSTAL authors are currently spreading the Trukenmüller (2016) deception regarding the validity of the AUSTAL expansion model. They don't care that this already contradicts Trukenmüller (2017). Because once Trukenmüller denies al al. (2015) the correctness of the solutions according to Schenk (2015a). And another time, Trukenmüller (2016) wants to demonstrate equivalence to it. The public is confused and misled. The aim of the present work is to untangle this embarrassment of the AUSTAL authors. For this purpose, all information provided by the AUSTAL authors in all available publications is collated, arranged and systematized. Optionally, stationary and non-stationary tasks are considered and the associated solutions are described. They can be compared to the solutions of the AUSTAL authors. Integral rates for mass balance and numerical comparative calculations are used for this. It turns out that all of Schenk's criticism of the AUSTAL expansion model is justified and cannot be invalidated. In Sect. "Methods and material" of this work an overview of the contents of the literature used is given. The author of this article studies past and current literature by the AUSTAL authors. The basic knowledge of mathematics and mechanics is described in textbooks and monographs. The fact that Trukenmüller (2016) is intended to deceive is deepened. The accusation that Trukenmüller (2017) tries to distract from one's own ignorance and uses the reputation of other authors is justified. 
Section "Berljand's boundary condition, initial boundary value problem and integral theorems" provides the mathematical and physical foundations for deriving, analyzing and evaluating AUSTAL's reference solutions. This includes the derivation of the boundary conditions valid for spreading, sedimentation and deposition, the description of the relevant model equations as well as the development of integral sentences for the establishment of mass balances. A comparison of the contradictory solutions of the author of this article with the wrong algorithms is made in Sect. "Calculation of concentration, sedimentation and deposition for a one-dimensional spread of air pollutants". This section also explains how the Janicke Convention was created and used. It is differentially emphasized that their use leads to a mass deficit. In Sect. "Reference solutions for dispersion, deposition, sedimentation and homogeneity" the contradictory and wrong solutions are optionally given for stationary and non-stationary considerations for all reference cases for dispersion, sedimentation, deposition and homogeneity. Their validity is checked using the developed integral rates. The reference solutions of the AUSTAL authors comparatively contradict Newton's 3rd axiom. This statement is made in Sect. "The analogy to the impulse, heat and mass transfer". How can it happen that the AUSTAL expansion model has been misleading the public from 1984 to the present? The author of this article deals with this question in Sect. "The life stories of the AUSTAL dispersion model". Methods and material In the present case, it should be checked on the basis of generally valid integral sentences for each individual case of the reference solutions of the dispersion model AUSTAL whether the mass conservation law or the II. Law of thermodynamics are violated. It is also necessary to clarify how stationary and non-stationary calculations were carried out. For this purpose, numerical and analytical algorithms have to be developed and applied to the spread, sedimentation, deposition and homogeneity of the AUSTAL authors in each individual case. Mathematics and mechanics alone are the methods used for clarification. Literature studies are required to get to know the mathematics and mechanics of the AUSTAL dispersion model. The work of von Axenfeld et al. (1984) must be studied. In cooperation with the first author of AUSTAL, Janicke, a model for calculating the dust precipitation is developed. The so-called Janicke Convention, which can be explained later in Sect. "Contradictory solution using the Janicke Convention according to Janicke (2002) and the difference to Berljand's boundary condition" is already there in the developed algorithms used. The thought model used describes deposition as loss and not as preservation. With the scientific manual according to the VDI Commission for Air Pollution Control (1988) one wants to refer to the work Axenfeld et al. (1984) establish a new propagation theory. The reference solutions and graphics belonging to the tasks for dispersion, sedimentation, deposition and homogeneity are explained in Janicke (2000). With the intention of developing a national dispersion model, the model developed in 1984 for the calculation of dust precipitation in Janicke (2001) is further developed to the "mother model" LASAT. The work Janicke (2002) describes tasks and tables for the calculation of dispersion, sedimentation, deposition and homogeneity. 
The "Development of a model-based assessment system for immission control for companies" is described in Janicke et al. (2002) with the name AUSTAL2000 presented to the public. The BMU publication (2002) declares this model binding for all expert dispersion calculations. All other dispersion models have to prove their equivalence. The work Trukenmüller et al. (2015) must be studied to get to know the derivation of the reference solutions declared binding for the first time. There the author of this article recognizes that all algorithms for this are wrong. The reader has to laboriously collect the physical and mathematical foundations of model equations, tasks, solution algorithms, graphics and tables from seven publications individually. Other publications deal with applications and further developments at AUSTAL. The publications Janicke (2009) and Janicke (2015) claim that the spread of radionuclides and aviation pollutants can be calculated. However, this would require non-stationary dispersion calculations, which AUSTAL is not able to do. The author of this publication also studies Schorling (2009). With WinKFZ, the author develops an excellent model for calculating the spread of air pollutants, but it is discredited by court rulings because there is no equivalence to AUSTAL. The author subsequently wants to bring them about. However, it turns out that an approximate agreement can only be recognized visually. An actual equivalence cannot be inferred, since only unknown dimensionless pollutant concentrations are used. A clarification cannot be brought about. The author reckons with the superficiality of administrations rather than denying his excellent algorithms. With the publications Trukenmüller (2016) one wants to achieve an equivalence to the correct reference solution according to Schenk (2018b). The author of this article looks at this publication and notes that it is simply a deception, as will be explained in more detail. The AUSTAL authors equate their wrong reference solution with the correct one. You get a simple algebraic equation and realize that there is no identity. You now rename variables and refer to the deposition rate \(v_{d} \, \left[ {\text{m/s}} \right]\) of your wrong solution from now on \(v_{d}^{Janicke} \, \left[ {\text{m/s}} \right]\). The algebraic equation is now solved after the second deposition rate \(v_{d} \,\) of the correct solution. At the end of the invoice, it will be renamed \(v_{d}^{Schenk} \, \left[ {\text{m/s}} \right]\). The accusation of an intention to deceive is well founded. According to Trukenmüller et al. (2015) is known that both solutions are different. With the intention of manipulation, they are still equated. Left and right of the algebraic equation are the deposition velocities twice \(v_{d}\). The own deposition speeds \(v_{d}\) are renamed with the intention to pretend equivalence in \(v_{d}^{Janicke}\). After the second deposition rate \(v_{d}\) the algebraic equation is solved. At the end the second deposition speed \(v_{d}\) is cleverly renamed to \(v_{d}^{Schenk}\). The accusation of deception is well founded. This castling can be studied in detail in Schenk (2017). The fact that the difference between a numerical and analytical solution was still not understood in 2017 can be seen in Janicke et al. (2017) read. The heading shows that analytical methods are used for approximate solutions and numerical algorithms for exact solutions. The opposite is true. 
The publication Trukenmüller (2017) summarizes the exchange of views held with the UBA regarding the validity of all reference solutions. Because the AUSTAL dispersion model is used in all areas of the economy, such as city and community planning, traffic planning, landscape design and also hazard prevention, there is a high level of public interest in correctly performed immission forecasts. For this reason, the public also has a right to be involved in discussions about the validity of this model development. There are no objections to publications on this. In the publication mentioned, those responsible for dispersion calculations according to TA Luft set out how they see their responsibility for promoting and accompanying model developments. In scientific discussions, however, they obviously rely more on the reputation of other well-known and valued authors than on their own competence. In this way, attention is to be diverted from their own ignorance. This wording is not very friendly. However, it is correct in every respect and concerns not only the content but also the form of this publication. As far as the content is concerned, in connection with the definition of the deposition velocity, reference is made in turn to authors such as Pasquill, Chamberlein, Berljand, Wiedensohler, Zhang, Slinn, Kumar, Cunningham, Monin, Kasanski, Bonka, Sehmen, Hodgson, Seinfeld, Pandis, Nicholson, Simpson and Travnikov. If the work Trukenmüller (2016) is added, the list is completed by the authors Venkatram and Pleim. Without a doubt, these authors have earned varying degrees of merit in the modeling of dispersion, deposition and sedimentation and can point to an excellent reputation. However, they would certainly object if their research results were invoked in Trukenmüller (2017) to claim equivalence. In the case of the first and the last of the authors cited, the ignorance of the AUSTAL authors can easily be demonstrated. Pasquill (1962), for example, gives an excellent description of atmospheric diffusion, but in the relevant section "6.2 Deposition of airborne material", across 14 pages and 19 formulas, not a single statement can be found that could justify the violation of mass conservation or the Janicke Convention. The ignorance of the AUSTAL authors consists in being unable to use the excellent physics described there to develop a suitable thought model that would be accessible to a contradiction-free mathematical description. In the last of the cases cited, the author of this work deals intensively with the publication Venkatram et al. (1999) in Schenk (2018b). The ignorance of the AUSTAL authors consists in not having understood that the connection between sedimentation and deposition found in Venkatram et al. (1999) is only applicable for the special case of a vanishing soil concentration, \(c_{0} \left[ {\mu {\text{g/m}}^{3} } \right] = 0\), which, because then \(F_{c} = v_{d} \cdot c_{0} \equiv 0\), calls into question not only all dispersion calculations but also all other explanations by the AUSTAL authors on the validity of the Janicke Convention. According to the authors of AUSTAL, \(F_{c} \left[ {\mu g/(m^{2} \cdot s)} \right]\) means the total emission in the study area. It is also unlikely that the authors cited in the list believe that "… a column standing on the surface of the earth, which contains the material capable of deposition, runs empty through deposition", as is claimed in Axenfeld et al. (1984). Also in the works Simpson et al. (2012) and Travnikov et al.
(2005) there is no indication from which one could conclude that the Janicke Convention is valid. The accusation of ignorance is well founded. With regard to form, the style and mode of expression of Trukenmüller (2017) snub every German authority. In UBA (2018) the AUSTAL authors complacently describe their history of the AUSTAL dispersion model. The publications, research projects, papers and studies mentioned here form the material that was to be analyzed using the methods described. Basic knowledge can be found in the literature references Albring (1961), Бepлянд (1975), Boŝnjakoviĉ (1971), Graedel et al. (1994), Gröber et al. (1955), Janenko (1968), Kneschke (1968), Naue (1967), Stephan et al. (1992), Schlichting (1964), Schüle (1930), Truckenbrodt (1983) and, for example, also in Westphal (1959). These references are given to show that traditional mathematics and mechanics are used as far as possible. Important physical basics and mathematical algorithms relevant to AUSTAL are part of standard textbook knowledge. Further literature was also studied. For example, Abas et al. (2019) describe brilliantly that environmental protection is an international task. The calculation of cross-border pollutant flows allows a scientifically based cause analysis and promotes international cooperation. Cross-border pollutant flows can only be calculated using high-quality, scientifically based and validated dispersion models. The works by Schenk et al. (1979) and Schenk (1989) are of interest here. The work of Rafique et al. (2019) shows convincingly that population growth, energy policy and environmental protection are closely connected. Political decisions cannot ignore this link. The development of the AUSTAL dispersion model was also accompanied by political decisions. If air quality monitoring is required, active measurement methods are often used. Using a pump, ambient air is drawn into the mini-volume collector (Mini-VS) and the dust contained in it is separated.
Berljand's boundary condition, initial boundary value problem and integral theorems
Boundary condition
The spread of air pollutants is described by the initial boundary value problem of momentum, heat and mass transport. This includes the differential equation of mass transport (1), $$\frac{{\partial {\text{c}}}}{{\partial {\text{t}}}} + {\text{v}}_{{\text{i}}} \cdot \frac{{\partial {\text{c}}}}{{\partial {\text{x}}_{{\text{i}}} }} = \frac{\partial }{{\partial {\text{x}}_{{\text{i}}} }}\left( {K \cdot \frac{{\partial {\text{c}}}}{{\partial {\text{x}}_{{\text{i}}} }}} \right) + \dot{q}({\text{t}}),$$ which can be solved with suitable initial and boundary conditions. In this equation, \(c\left[ {\mu g/m^{3} } \right]\) denotes the concentration, \(x_{i} \left[ m \right]\) the coordinates in the different spatial directions, \(K\left[ {m^{2} /s} \right]\) the diffusion coefficient in the free atmosphere, \(\dot{q}(t) \, \left[ {\mu g/(m^{3} \cdot s)} \right]\) the source term, \(v_{i} \left[ {m/s} \right]\) the flow velocity and \(t\left[ s \right]\) the time coordinate. In the case of one-dimensional, non-stationary propagation, the differential Eq. (2) $$\frac{\partial c}{\partial t} - v_{s} \cdot \frac{\partial c}{\partial z} = K \cdot \frac{{\partial^{2} c}}{{\partial z^{2} }} + \dot{q}(t)$$ is obtained from Eq. (1), and in the stationary case, if the source term \(\dot{q}(t) = 0\) is missing, the relationship (3).
$$- v_{s} \cdot \frac{\partial c}{\partial z} = K \cdot \frac{{\partial^{2} c}}{{\partial z^{2} }}.$$ In these equations, in addition to the quantities already known, \(v_{s} \left[ {m/s} \right]\) denotes the sedimentation velocity and \(z\left[ m \right]\) the vertical position coordinate. With a view to later applications, it enters with a negative sign. For further considerations, various simplifications of Eq. (1) are of interest. $$\frac{dc}{dt} = \dot{q}(t) \,$$ Equation (4) describes the simple temporal development of a uniformly distributed initial concentration \(c_{A} \, \left[ {\mu {\text{g/m}}^{3} } \right]\), neglecting all convective and conductive material flows. This equation is obtained, for example, if no spatial concentration changes occur, \(\partial /\partial x_{i} = 0\). In the case of a time-independent source term \(\dot{q}(t) = \dot{q} = const.\), the relationships (5) $${\text{c(t)}} = {\text{c}}_{\text{A}} + \int\limits_{0}^{t} {\dot{q}} \cdot dt\quad {\text{c(t)}} = {\text{c}}_{\text{A}} + \dot{q} \cdot t$$ describe a linear increase in concentration as a solution of (4), where \(T_{E} \left[ s \right]\) denotes the end of the emission. The boundary condition belonging to Eq. (1) is derived from the constancy of mass at the control boundary between atmosphere and soil. It is known as the Berljand boundary condition. The relevant relationships are shown in Fig. 1. All representations have been selected so that they can be applied directly to the study areas of the AUSTAL authors to derive the reference solutions. The ordinate \(x_{i}\) is directed into the free atmosphere and the coordinate \(x_{i}^{*} \left[ m \right]\) points from the depth of the soil towards the boundary. At \(x_{i} = 0\) and \(x_{i}^{*} = T\) soil and atmosphere touch. In order to establish a relationship with the reference solutions of the AUSTAL authors, the coordinate notation \(x_{3} = z\) and \(x_{3}^{*} = z^{*}\) is used for \(i = 3\) below. Thus \(\dot{m}^{A} = \dot{m}_{z}^{A} = \int {d\dot{m}_{z}^{A} } \, [\mu g/(m^{2} \cdot s)]\) denotes the conductive material flow in the free atmosphere and \(\dot{m}^{B} = \dot{m}_{z}^{B} = \int {d\dot{m}_{z}^{B} } \, [\mu g/(m^{2} \cdot s)]\) that in the depth of the soil. There is a surface source in the atmosphere. The pollutants emitted there move convectively and conductively towards the ground. The sedimentation flow \(\dot{m}_{{}}^{S} [\mu g/(m^{2} \cdot s)]\) is calculated as the product of concentration and sedimentation velocity, \(\dot{m}_{{}}^{S} (z) = - c \cdot v_{s}\). The conductive material flows are represented as products of the diffusion coefficients and the concentration gradients, \(\dot{m}_{{}}^{A} (z) = - K \cdot \partial c/\partial z\) and \(\dot{m}_{{}}^{B} (z^{*} ) = - K_{B} \cdot \partial c/\partial z^{*}\), where \(K_{B} \left[ {m^{2} /s} \right]\) is the diffusion coefficient in the soil. At the lower boundary of the study area, identical conductive material flows are obtained for \(z_{{}}^{*} = T\) and \(z = 0\). $$\dot{m}_{{}}^{A} (z = 0) = \dot{m}_{{}}^{B} (z^{*} = T)$$ \(T\left[ m \right]\) denotes the depth in the soil.
General validity of Berljand's boundary condition
In the soil itself the sedimentation velocity is identically zero. With \(v_{s} = 0\), Eq. (3) yields the simple relationship \(\partial^{2} c/\partial z^{*2} = 0\).
The boundary conditions \(c(z^{*} = T) = c_{0}\) and \(c(z^{*} = 0) = c_{T}\) result in a linear concentration distribution in the soil, which is described by Eq. (7). $$c = \frac{{c_{0} - c_{T} }}{T} \cdot z^{*} + c_{T} .$$ Here \(c_{T} [\mu g/m^{3} ]\) denotes the concentration at great depth in the soil and \(c_{0} [\mu g/m^{3} ]\) the soil concentration. Because mass is conserved, the conductive material flows at the ground must be identical. This gives $$\begin{aligned} K \cdot \frac{\partial c}{\partial z}(0) &= K_{B} \cdot \frac{\partial c}{{\partial z^{*} }}(T) = \frac{{K_{B} }}{T} \cdot (c_{0} - c_{T} ) \\ &\approx \frac{{K_{B} }}{T} \cdot c_{0} = v_{d} \cdot c_{0} \end{aligned}$$ $$v_{d} = \frac{{K_{B} }}{T}.$$ Equation (8) also provides the definition of the deposition velocity, as can be seen from Eq. (9). Equation (8) assumes that the soil can absorb material capable of deposition without restriction, which is why one can set \(c_{T} \approx 0\). Equation (8) finally yields Berljand's boundary condition (10). $$K \cdot \frac{\partial c}{\partial z}\left( 0 \right) - v_{d} \cdot c_{0} = 0 .$$ It is identical to Eq. (11) $$K \cdot \frac{\partial c}{{\partial x_{i} }}\left( 0 \right) - \beta_{i} \cdot c_{0} = 0,$$ as can be found in Бepлянд (1975). Here \(\beta_{i} \left[ {m/s} \right]\) is the mass transfer coefficient. In the case of deposition, \(v_{d} = \beta_{3}\) is the deposition velocity, where \(i = 3\) is the direction in which the pollutants are deposited. The Berljand boundary condition is well known and is used in particular to describe dispersion, deposition and sedimentation. This applies above all to the research field of air pollutant dispersion, but it is also not unknown in other disciplines, such as fluid mechanics, thermodynamics and process engineering, for the calculation of convective and conductive material flows.
Initial boundary value problem
The boundary value problem for one-dimensional dispersion, sedimentation and deposition is described by the balance Eq. (2) and by the boundary condition (10). In the case of an initial boundary value problem, the initial condition (12) $$c\left( {x_{1} ,x_{2} ,x_{3} ,t = 0} \right) = c_{A} \left( {x_{1} ,x_{2} ,x_{3} } \right)$$ has to be added. \(x_{1,2,3} \left[ m \right]\) denote the three spatial coordinates. The closed initial boundary value problem is therefore given by (13) $$\begin{aligned} & \frac{\partial c}{\partial t} - v_{s} \cdot \frac{\partial c}{\partial z} = K \cdot \frac{{\partial^{2} c}}{{\partial z^{2} }} + \dot{q}(t) \\ & {\text{Model equation}} \\ & K \cdot \frac{\partial c}{\partial z}\left( 0 \right) - v_{d} \cdot c_{0} = 0 \\ & {\text{Boundary condition}} \\ & c\left( {x_{1} ,x_{2} ,x_{3} ,t = 0} \right) = c_{A} \left( {x_{1} ,x_{2} ,x_{3} } \right) \\ & {\text{Initial condition}} \\ \end{aligned}$$
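Where non-stationary solutions of this problem are needed later in this article, they are obtained numerically. To make the structure of the closed problem (13) tangible, the following minimal Python sketch integrates the model equation (2) with the Berljand condition (10) at the ground, a vanishing concentration at the upper boundary and the initial condition \(c_{A} = 0\). It is only an illustration with freely chosen grid and model parameters (an explicit finite-difference scheme with an area source of 1 µg/(m²·s) placed at 200 m, \(v_{s} = v_{d} = 0,05\) m/s and \(K = 1\) m²/s); it is not the solution method of Schenk (1980) referred to later in the text.

```python
# Minimal explicit finite-difference sketch of the initial boundary value problem (13).
# Illustration only: grid, time step and source placement are chosen freely here.
H, dz, dt = 400.0, 5.0, 5.0             # domain height [m], grid step [m], time step [s]
K, v_s, v_d = 1.0, 0.05, 0.05           # diffusion [m^2/s], sedimentation and deposition [m/s]
F_c = 1.0                               # area source strength [ug/(m^2 s)], placed at 200 m

n = int(H / dz) + 1
c = [0.0] * n                           # initial condition c_A = 0
q = [0.0] * n
q[int(200.0 / dz)] = F_c / dz           # volumetric source term in the cell at z = 200 m

for _ in range(int(10 * 3600 / dt)):    # integrate for several hours, close to steady state
    new = c[:]
    for j in range(1, n - 1):           # Eq. (2): dc/dt = vs*dc/dz + K*d2c/dz2 + q
        adv = v_s * (c[j + 1] - c[j - 1]) / (2 * dz)
        dif = K * (c[j + 1] - 2 * c[j] + c[j - 1]) / dz ** 2
        new[j] = c[j] + dt * (adv + dif + q[j])
    new[0] = new[1] * K / (K + v_d * dz)    # discrete Berljand condition (10)
    new[-1] = 0.0                           # vanishing concentration far above the source
    c = new

print(round(c[0], 1), round(max(c), 1))     # ground value and maximum, roughly 10 and 20 here
```

The printed ground value approaches \(Q/(v_{s} + v_{d} )\), in line with the stationary solution discussed in the following sections.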
Volume and area integrals
To prove mass conservation, one starts from the differential Eq. (1) and forms volume integrals according to Eq. (14) $$\begin{aligned} & \int\limits_{V} {\frac{\partial c}{\partial t} \cdot dV} + \int\limits_{V} {v_{i} \cdot \frac{\partial c}{{\partial x_{i} }} \cdot dV} \\ &\quad= K \cdot \int\limits_{V} {\frac{{\partial^{2} c}}{{\partial x_{i} \partial x_{i} }}} \cdot dV + \int\limits_{V} {\dot{q}(t) \cdot dV} . \end{aligned}$$ Using the Gaussian theorem, volume integrals can be converted into surface integrals. This leads to the relationship (15) with two closed surface integrals. $$\int\limits_{V} {\frac{\partial c}{\partial t} \cdot dV} + \oint_{A} {v_{i} \cdot c} \cdot dA_{i} = K \cdot \oint_{A} {\frac{\partial c}{{\partial x_{i} }}} \cdot dA_{i} + \int\limits_{V} {\dot{q}(t) \cdot dV} .$$ In this equation, \(A\left[ {m^{2} } \right]\) denotes the surface and \(V\left[ {m^{3} } \right]\) the volume of the control region. Integration over the surface of the study area leads to Eq. (16). $$\begin{aligned} &\int\limits_{V} {\frac{\partial c}{\partial t} \cdot dV} + v_{i} (0) \cdot c(0) \cdot \int\limits_{U} {dA_{i} } + v_{i} (h) \cdot c(h) \cdot \int\limits_{O} {dA_{i} } \\ &\quad = K \cdot \frac{\partial c}{{\partial x_{i} }}(0) \cdot \int\limits_{U} {dA_{i} } + K \cdot \frac{\partial c}{{\partial x_{i} }}(h) \cdot \int\limits_{O} {dA_{i} } + \int\limits_{V} {\dot{q}(t) \cdot dV} \end{aligned}$$ The integration can be carried out because the integrands are constant over the respective boundary surfaces below (U) and above (O). An integration over the side surfaces can be dispensed with, since, in the absence of flow velocities and concentration gradients there, no mass transfer can take place. Taking into account that all surface vectors point positively outwards, the scalar products can be formed. This results in Eq. (17). $$\begin{aligned} &\int\limits_{V} {\frac{\partial c}{\partial t} \cdot dV} + v_{s} \cdot c_{0} \cdot A_{U} - v_{s} \cdot c_{h} \cdot A_{O} \\ &\quad= - K \cdot \frac{\partial c}{\partial z}(0) \cdot A_{U} + K \cdot \frac{\partial c}{\partial z}(h) \cdot A_{O} + \int\limits_{V} {\dot{q}(t) \cdot dV} . \end{aligned}$$ Here \(A = A_{O} = A_{U} \left[ m^2 \right]\) denote the control areas at the upper and lower boundaries and \(c_{h} \left[ {\mu g/m^{3} } \right] = c(z = h)\) the concentration at the upper boundary. \(h\left[ m \right]\) is the vertical extent of the study area. Dividing by \(A\) and inserting the boundary condition (10), Eq. (18) is obtained. $$\begin{aligned} & \int\limits_{0}^{h} {\frac{\partial c}{\partial t} \cdot dz} + v_{s} \cdot c_{0} - v_{s} \cdot c_{h} + v_{d} \cdot c_{0} \\ &\quad- K \cdot \frac{\partial c}{\partial z}(h) - \int\limits_{0}^{h} {\dot{q} \cdot dz = 0} . \end{aligned}$$ This equation can be used to check the validity of all reference solutions with regard to mass conservation. For later considerations, Eq. (19) $$Q = 1/A \cdot \int\limits_{V} {\dot{q} \cdot dV} = \int\limits_{0}^{h} {\dot{q} \cdot dz}$$ is of interest. \(Q\left[ {\mu g/(m^{2} \cdot s)} \right]\) is to be understood as the area-related source term. In the case of steady-state dispersion, the mass balance (20) $$v_{s} \cdot c_{0} - v_{s} \cdot c_{h} + v_{d} \cdot c_{0} - K \cdot \frac{\partial c}{\partial z}(h) = 0.$$ is obtained from a comparison between Eqs. (2) and (3), because \(\partial {\text{c/}}\partial {\text{t = 0}}\).
Calculation of concentration, sedimentation and deposition for a one-dimensional spread of air pollutants
Conflict-free solution using the Berljand boundary condition according to Schenk (2018b)
The correct solution of the differential Eq. (3) can be found in Schenk (2018b). It is described by Eqs. (21) $$c\left( z \right) = c_{0} \cdot \frac{{v_{s} + v_{d} }}{{v_{s} }} \cdot \left[ {1 - \frac{{v_{d} }}{{v_{s} + v_{d} }} \cdot \exp \left( { - \frac{{v_{s} }}{K} \cdot z} \right)} \right] \,$$ and (22).
Equation (21) explains the course of the solution as a function of the deposition and sedimentation velocities \(v_{d}\) and \(v_{s}\), the height coordinate \(z\), the diffusion coefficient \(K\) and the soil concentration \(c_{0}\), which can be determined using Eq. (22). $$c_{0} = \frac{Q}{{(v_{s} + v_{d} )}} \, .$$ With known model parameters, concentration distributions, deposition and sedimentation flows as well as soil concentrations can be calculated. For later use, Eq. (21) also gives the first derivative (23) $$\frac{\partial c}{\partial z} = c_{0} \cdot \frac{{v_{d} }}{K} \cdot \exp \left( { - \frac{{v_{s} }}{K} \cdot z} \right) \,$$ and, for \(z = 0\), Eq. (24). $$\frac{\partial c}{\partial z}(0) = c_{0} \cdot \frac{{v_{d} }}{K} \, .$$ Equation (24) proves that solution (21) fulfills Berljand's boundary condition (10). With this boundary condition, deposition is understood as storage and not as loss.
Contradictory solution using Janicke's Convention according to Janicke (2002) and the difference to the Berljand boundary condition
The incorrect solution is given in Trukenmüller et al. (2015) by the relationships (25) and (26). $$c(z) = c_{0} \cdot \exp \left( { - z \cdot \frac{{v_{s} }}{K}} \right) + \frac{{F_{c} }}{{v_{s} }} \cdot \left[ {1 - \exp \left( { - z \cdot \frac{{v_{s} }}{K}} \right)} \right],$$ $$F_{c} = c_{0} \cdot v_{d} .$$ The AUSTAL authors use Eq. (25) to calculate the wrong concentration distribution, and with Eq. (26) the confusion begins. First, \(F_{c} \left[ {\mu g/(m^{2} \cdot s)} \right]\) has, according to Eq. (26), the meaning of a deposition flow and later, according to Eq. (30), that of a sedimentation flow. The AUSTAL authors do not realize that both interpretations are wrong. In the end they make a decision and, following VDI 3945 Part 3 (2000) and Janicke (2002), mean "the mass flow density deposited on the ground" in the sense of Eq. (6) and Eq. (27). $$\dot{m}_{{}}^{B} = F_{c} .$$ Equation (26) is used to calculate the soil concentration according to Eq. (28), $$c_{0} = \frac{{F_{c} }}{{v_{d} }},$$ without any concern for what happens if, with \(v_{d} \equiv 0\), there is no deposition flow. It is of interest to learn how the Janicke Convention is to be understood. In the course of the derivation of Eq. (25), the AUSTAL authors obtain the relationship (29). $$F_{c} = K \cdot \frac{\partial c}{\partial z} + v_{s} \cdot c.$$ It results from a single integration of the differential Eq. (3), where \(F_{c}\) has the meaning of an integration constant, which should have been determined using Berljand's boundary condition. Instead, the AUSTAL authors use a constant concentration distribution as a special solution for \(F_{c}\) according to Eq. (30). $$F_{c} = v_{s} \cdot c_{i} = const.$$ With the specification of this special solution \(c_{i} \left[ {\mu g/m^{3} } \right]\), it can subsequently be seen from Eq. (31) $$c_{i} = c(z) = c(0) = c_{0} = const.$$ that the concentration value \(c_{i}\) is also the soil concentration \(c_{0}\). This gives the relationship (32). $$F_{c} = v_{s} \cdot c_{0} .$$ Equations (25) and (30) can be used to demonstrate the already mentioned worthlessness of the solution function (25) according to Eq. (33).
$$\begin{aligned} c(z) &= c_{0} \cdot \exp \left( { - z \cdot \frac{{v_{s} }}{K}} \right) + \frac{{c_{i} \cdot v_{s} }}{{v_{s} }} \cdot \left[ {1 - \exp \left( { - z \cdot \frac{{v_{s} }}{K}} \right)} \right] \\ &= c_{0} \cdot \exp \left( { - z \cdot \frac{{v_{s} }}{K}} \right) + \frac{{c_{0} \cdot v_{s} }}{{v_{s} }} \cdot \left[ {1 - \exp \left( { - z \cdot \frac{{v_{s} }}{K}} \right)} \right] = c_{0} \end{aligned}$$ This integral of the differential Eq. (3) cannot be used to perform simulations for determining concentration distributions. The AUSTAL authors recognize the uselessness of the special solution (31) they used. Instead of changing the solution method, for example according to Kneschke (1968), they replace the sedimentation flow \(v_{s} \cdot c_{0}\) with the deposition flow \(v_{d} \cdot c_{0}\) without justification, refer to their self-written convention in VDI 3945 Part 3 (2000) and claim that it is universally valid. Instead of Eq. (30), Eq. (26), \(F_{c} = v_{d} \cdot c_{0}\), is used without justification. In response to criticism, Trukenmüller (2017) asserts that this switch is also used by the authors Simpson et al. (2012) and Venkatram et al. (1999). "Worldwide, the dispersion models are based on the definition of the deposition speed that is recognized in the literature", the AUSTAL authors affirm in Trukenmüller (2017), but this is not confirmed. One should know that sedimentation and deposition flows, \(v_{s} \cdot c_{0}\) and \(v_{d} \cdot c_{0}\), have different physical explanations. They cannot be interchanged at will. Incidentally, replacing \(v_{s} \cdot c_{0}\) by \(v_{d} \cdot c_{0}\) violates the law of mass conservation in differential form, as was demonstrated in Schenk (2018b). With this poorly thought-out reasoning, the AUSTAL authors finally obtain the wrong Janicke Convention (34) from Eqs. (26) and (29) for \(z = 0\), which is used as a boundary condition. $$F_{c} = K \cdot \frac{\partial c}{\partial z} + v_{s} \cdot c = v_{d} \cdot c_{0} .$$ Trukenmüller (2017) later asserts that Eq. (34) is the "true" definition of the deposition velocity; it is said to represent deposition flows in parameterized form. If one adds the two other definitions given initially in Trukenmüller (2016), this is now the third definition. One does not want to accept that the deposition velocity \(v_{d} = K_{B} /T\) according to Eq. (9) can be regarded as a material constant. Here too, for subsequent use, the first derivatives of Eq. (25) are of interest, $$\frac{\partial c}{\partial z} = \frac{1}{K} \cdot \exp \left( { - \frac{{v_{s} }}{K} \cdot z} \right) \cdot (F_{c} - c_{0} \cdot v_{s} )$$ $$\frac{\partial c}{\partial z}(0) = \frac{1}{K} \cdot (F_{c} - c_{0} \cdot v_{s} )$$ Equation (36) proves that the solution (25) of the AUSTAL authors does not satisfy Berljand's boundary condition (10). Taking Eq. (26), \(F_{c} = c_{0} \cdot v_{d}\), into account, Eq. (36) is identical to the Janicke Convention. The difference between the Janicke Convention and Berljand's boundary condition can be seen in the comparison of Eqs. (10) and (34). It is described by formula (37). It can be seen that this convention results in a mass deficit of \(- c_{0} \cdot v_{s}\) at the boundary surface between atmosphere and ground. $$\begin{aligned} & K \cdot \frac{\partial c}{\partial z}(0) - v_{d} \cdot c_{0} = 0 \\ & {\text{Berljand's boundary condition}} \\ & K \cdot \frac{\partial c}{\partial z}(0) - v_{d} \cdot c_{0} = - c_{0} \cdot v_{s} \\ & {\text{Janicke Convention}} \\ \end{aligned}$$
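The difference expressed in Eq. (37) can also be made concrete numerically. The following short Python sketch is an illustration only: the parameter values \(v_{s} = v_{d} = 0,05\) m/s, \(K = 1\) m²/s and \(Q = F_{c} = 1\) µg/(m²·s) are chosen freely here, and the function names are not part of AUSTAL. It evaluates the ground gradient of the conflict-free profile (21)–(22) via Eq. (24) and that of the profile (25)–(26) via Eq. (36), and inserts both into the left-hand side of Berljand's condition (10).

```python
# Compare the boundary residual K*dc/dz(0) - vd*c0 of Eq. (37) for both stationary profiles.
v_s, v_d, K, Q = 0.05, 0.05, 1.0, 1.0        # illustrative values (the text writes 0,05 for 0.05)

def ground_berljand():
    """Conflict-free profile (21)-(22): c0 from Eq. (22), gradient from Eq. (24)."""
    c0 = Q / (v_s + v_d)
    return c0, c0 * v_d / K

def ground_janicke():
    """Profile (25)-(26): c0 from Eq. (28) with Fc = Q, gradient from Eq. (36)."""
    c0 = Q / v_d
    F_c = v_d * c0                            # Eq. (26)
    return c0, (F_c - c0 * v_s) / K

for name, (c0, grad) in [("Berljand", ground_berljand()), ("Janicke", ground_janicke())]:
    residual = K * grad - v_d * c0            # left-hand side of boundary condition (10)
    print(f"{name:8s}: c0 = {c0:5.1f}, K*dc/dz(0) - vd*c0 = {residual:6.2f}")
# Berljand: residual 0            -> boundary condition (10) fulfilled
# Janicke : residual -c0*vs = -1  -> the mass deficit of Eq. (37)
```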
Reference solutions for dispersion, deposition, sedimentation and homogeneity
Assessment of the tasks
The AUSTAL authors have failed to explain their tasks, model parameters and algorithms for deriving the reference solutions in a single publication. The reader must collect all information about the tasks, the solution algorithms as well as the numerical and graphical evaluation from various publications. This confusion, together with trust in the authority of the administration, may explain why in the past only a few critics have concerned themselves with the theoretical foundations of AUSTAL. Only after 31 years can the derivation of the reference solutions be read for the first time, in Trukenmüller et al. (2015). In this section, the examples "Sedimentation without deposition", "Deposition with sedimentation" and "Homogeneity" are used to check all information provided by the AUSTAL authors for credibility and to show contradictions. The tasks set by the AUSTAL authors differ only slightly. A uniform three-dimensional control volume is considered, although only zero-dimensional and one-dimensional propagation processes are involved. Time-dependent simulation results are given throughout. In all cases it is said that time series over 10 days were calculated. The emission occurs only in the first hour of the first day, and the stationary solutions are said to have appeared after 10 days in all cases. Algorithms and graphics for non-stationary calculations are not described. The AUSTAL authors provide incorrect stationary solutions for all reference cases. Non-stationary calculations are not carried out at all, although simulation results are also given for them. In order to provide credible evidence for this, stationary and non-stationary calculations are carried out here for all case studies. A first and a second option distinguish between non-stationary and stationary calculations. The correct solutions are compared with the wrong ones.
Sedimentation without deposition
The task for the "Sedimentation without deposition" propagation process according to Fig. 2 is taken from the literature reference Janicke (2002).
Task of the AUSTAL authors "Sedimentation without deposition"
The model parameters and simulation results can be summarized as follows:
a) "The emission occurs only in the first hour of the first day", which means \(T_{E} = 3600\).
b) The simulation is completed on the "10th day", with \(t = 240{\text{ h}}\).
c) A "time series over 10 days" is calculated.
d) The size of the control volume is specified by the geometric lengths \(L_{x} \left[ m \right] = 1000\), \(L_{y} \left[ m \right] = 1000\) and \(L_{z} \left[ m \right] = 200\).
e) There is no "mass flow density forced by the source", \(F_{c} = 0\).
f) "Volume source distributed over the entire computing area".
g) In the literature reference Janicke (2000) one learns for this dispersion case that the mean concentration is \(\bar{c}\left[ {\mu g/m^{3} } \right] = 500\).
h) The sedimentation velocity and the diffusion coefficient are \(v_{s} = 0,01\) and \(K = 1\).
i) Because "a mass flow density enforced by the source" does not exist, \(F_{c} = 0\), the deposition velocity vanishes, \(v_{d} = 0\), since only \(c_{0} \ne 0\) can hold for the soil concentration.
From the task as described, it appears clearly that a non-stationary propagation process is meant, for which only the differential Eq. (2) is applicable. Regardless of this, the AUSTAL authors assume in their solution procedure, according to Eq. (3), a stationary propagation process. A solution (25) is also given for this. It is not clear who is supposed to understand this.
Correct non-stationary and stationary solution taking into account the Berljand boundary condition according to Schenk (2018b)
First option, non-stationary consideration
In the case of a non-stationary consideration, the differential Eq. (2) applies, with the initial condition (12) and \(c_{A} = 0\). The total emission \(m_{E} \left[ {kg} \right]\) can be determined from the specified mean concentration \(\overline{c}\). The geometric information d) on the size of the study area then gives the numerical expression (38). $$\begin{aligned} m_{E} &= \overline{c} \cdot V = \overline{c} \cdot L_{x} \cdot L_{y} \cdot L_{z} \\ &= 500 \cdot 1000 \cdot 1000 \cdot 200 \cdot \frac{1}{{10^{9} }} = 100 \end{aligned}$$ According to specification a), the emission ends after \(1{\text{ h}}\). This enables the source term \(\dot{q} = const.\) for \(0 < t \le T_{E}\) of the differential Eq. (2) to be determined. Together with \(V = L_{x} \cdot L_{y} \cdot L_{z} = 2 \cdot 10^{8}\), the numerical value can be given using Eq. (39). $$\begin{aligned}& \dot{q} = \frac{{m_{E} }}{{V \cdot T_{E} }} = \frac{100}{{2 \cdot 10^{8} \cdot 3600}} \cdot 10^{9} = 0,139 \quad 0 \le t \le T_{E} \,\\ & \dot{q} = 0\quad{\text{ t}} > {\text{T}}_{\text{E}} . \end{aligned}$$ In addition, specification f) must be taken into account, namely that the "volume source is distributed over the entire computing area", which means that there are no spatial concentration gradients, \(\partial c/\partial x_{i} = \partial c/\partial z = 0\). This simplifies Eq. (2) to Eq. (4). A simple integration with the initial condition \(c_{A} = 0\) gives the calculated value and Eq. (40). $$\begin{aligned}& {\text{c(t)}} = {\text{c}}_{\text{A}} + \dot{q} \cdot t\\ &{\text{ c(T}}_{\text{E}} )= {\text{c}}_{\text{A}} + \dot{q} \cdot T_{E} = 0 + 0,139 \cdot 3600 \approx 500 . \end{aligned}$$ For \(t > T_{E}\), because of Eq. (39) and Eq. (4), \(\dot{q} = 0\) and \(dc/dt = 0\), this solution does not develop any further. The concentration of \(c = 500\) that has been reached remains constant over time. Equation (40) describes a zero-dimensional propagation with the time coordinate as the only independent variable. The results are shown in graphs A and B of Fig. 3. Graph A describes the time-dependent course of the filling in the interval \(0 \le t \le T_{E}\) and for \(t > T_{E}\). Graph B further shows that there is no vertical concentration gradient, \(\partial c/\partial z = 0\). The concentrations remain spatially and temporally unchanged for all simulation times. The results prove that statement b) of the AUSTAL authors, according to which the steady state is only reached after 10 days, is not correct. The concentration value of \(c = 500\) is already established after 1 h, \(T_{E} = 3600\). According to c), a time series of 10 days is said to have been calculated, which cannot be confirmed either. The trivial solution (40) is comparable to filling different containers with different media.
The task of the AUSTAL authors trivially describes the filling of containers
It remains to be proven that the solution according to the first option fulfills the law of mass conservation. The integral Eq. (18) is used for this.
According to h), the sedimentation velocity in the entire control volume is \(v_{s} = 0,01\). The concentrations are spatially constant at all simulation times, so that the identity \(c_{0} (t) \equiv c_{h} (t)\) can be assumed. In addition, the integrals of Eq. (18) can be calculated with \(\partial c/\partial t = \dot{q}\). According to task item i), no deposition is supposed to take place, \(v_{d} = 0\). Because of the spatially constant concentration, \(\partial c/\partial z(h) = 0\) also holds. The integral Eq. (41) $$\begin{aligned} & \int\limits_{0}^{h} {\frac{\partial c}{\partial t} \cdot dz} + v_{s} \cdot c_{0} - v_{s} \cdot c_{h} + v_{d} \cdot c_{0} - K \cdot \frac{\partial c}{\partial z}(h) - \int\limits_{0}^{h} {\dot{q} \cdot dz} = 0 \\ & \dot{q} \cdot h + v_{s} \cdot c_{0} (t) - v_{s} \cdot c_{h} (t) + 0 \cdot c_{0} (t) - K \cdot 0 - \dot{q} \cdot h \equiv 0 \end{aligned}$$ shows that the law of mass conservation is fulfilled for all simulation times if the solutions are correct. Because of \(\partial c/\partial z(z,t) \equiv 0\) according to Eq. (40), and with Eq. (6), \(\dot{m}_{{}}^{A} = \dot{m}_{{}}^{B}\), no deposition takes place according to Eq. (42). $$\dot{m}_{{}}^{B} (t) = - K \cdot \frac{\partial c}{\partial z}(0,t) \equiv 0 .$$ The deposition flow \(\dot{m}_{{}}^{B}\) and the conductive material flow \(- K \cdot \partial c/\partial z(0)\) are identically zero. They coincide in magnitude and direction. The second law of thermodynamics is fulfilled.
Second option, stationary consideration
The second option alternatively considers a stationary propagation process. The corresponding correct stationary solution is described by Eqs. (21) and (22). According to item e) of the task, there are no mass flow densities, which means \(F_{c} = Q = 0\). Accordingly, no pollutant can be found in the study area, which is also confirmed by the trivial solutions (43) $$c_{0} = \frac{Q}{{(v_{s} + v_{d} )}} \, = \frac{ 0}{(0,01 + 0)} = 0$$ and (44). $$\begin{aligned} c\left( z \right) &= c_{0} \cdot \frac{{v_{s} + v_{d} }}{{v_{s} }} \cdot \left[ {1 - \frac{{v_{d} }}{{v_{s} + v_{d} }} \cdot \exp \left( { - \frac{{v_{s} }}{K} \cdot z} \right)} \right] \\ &= 0 \cdot \frac{{0,01 + 0}}{{0,01}} \cdot \left[ {1 - \frac{0}{{0,01 + 0}} \cdot \exp \left( { - \frac{{0,01}}{1} \cdot z} \right)} \right] = 0 \end{aligned}$$ The result is shown in Fig. 3, graph C. For the sake of completeness, it should also be proven here that the law of mass conservation is fulfilled. Equation (20) can be used as the starting point. $$\begin{aligned} & v_{s} \cdot c_{0} - v_{s} \cdot c_{h} + v_{d} \cdot c_{0} - K \cdot \frac{\partial c}{\partial z}(h) = 0 \\ & 0,01 \cdot 500 - 0,01 \cdot 500 + 0 \cdot 500 - 1 \cdot 0 \equiv 0 \end{aligned}$$ With the corresponding calculation parameters, mass conservation is guaranteed. Because of \(\partial c/\partial z(0) = 0\) according to Eq. (44), and considering Eq. (6), \(\dot{m}_{{}}^{A} = \dot{m}_{{}}^{B}\), Eq. (46) $$\dot{m}^{B} = - K \cdot \frac{\partial c}{\partial z}(0) = - 1 \cdot 0 = 0$$ results. Accordingly, there is no conductive mass transfer, \(\dot{m}^{B} = 0\). According to item i) of the task, no deposition is supposed to take place, \(F_{c} = 0\). Because of the missing potential gradient, \(\partial c/\partial z(0) = 0\), none takes place either, \(\dot{m}^{B} = - K \cdot \partial c/\partial z(0) = 0\). The second law of thermodynamics is fulfilled.
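For readers who wish to retrace the numbers, the following minimal Python sketch (values taken from items a)–i) of the task; variable names are chosen here for illustration) reproduces the source term (39), the filling solution (40) and the stationary balance (45).

```python
# Numerical cross-check for "Sedimentation without deposition" (task values a)-i)).
L_x, L_y, L_z, T_E = 1000.0, 1000.0, 200.0, 3600.0   # m, m, m, s
c_bar, v_s, v_d, K = 500.0, 0.01, 0.0, 1.0           # ug/m^3, m/s, m/s, m^2/s

V = L_x * L_y * L_z                                  # 2e8 m^3
m_E = c_bar * V / 1e9                                # Eq. (38): total emission, 100 kg
q = m_E / (V * T_E) * 1e9                            # Eq. (39): 0.139 ug/(m^3 s)
c_TE = 0.0 + q * T_E                                 # Eq. (40): ~500 ug/m^3 after one hour
print(m_E, round(q, 3), round(c_TE))                 # 100.0 0.139 500

# Stationary balance (45) with c0 = ch = 500, vd = 0 and dc/dz(h) = 0:
c0 = ch = c_TE
print(v_s * c0 - v_s * ch + v_d * c0 - K * 0.0)      # 0.0 -> mass conservation fulfilled
```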
Faulty non-stationary and stationary solution taking into account the Janicke Convention according to Trukenmüller et al. (2015)
First, a non-stationary consideration is assumed. Since, according to a), "the emission occurs only in the first hour of the first day", according to b) the stationary solution is reached on the "10th day" and according to c) a "time series over 10 days" is calculated, it is a non-stationary task. However, no solution algorithms or concentration profiles are described for this. For this reason, the integral Eqs. (18) and (20), $$\begin{aligned} & \int\limits_{0}^{h} {\frac{\partial c}{\partial t} \cdot dz} + v_{s} \cdot c_{0} - v_{s} \cdot c_{h} + v_{d} \cdot c_{0} \\ &\quad- K \cdot \frac{\partial c}{\partial z}(h) - \int\limits_{0}^{h} {\dot{q} \cdot dz = 0} \end{aligned}$$ cannot be used. The AUSTAL authors fail to explain why a stationary concentration distribution should have established itself only after 10 days. The second option describes a stationary consideration. The AUSTAL authors start from the stationary differential Eq. (3) and state the solution functions (25) and (26). First, the soil concentration would again have to be calculated according to Eq. (26). However, because of e), \(F_{c} = 0\), and, without deposition according to i), \(v_{d} = 0\), an indeterminate expression is obtained for the soil concentration \(c_{0}\), namely \(c_{0} = 0/0\). Because of e), Eq. (25) simplifies to the exponential function (48). $$c(z) = c_{0} \cdot \exp \left( { - z \cdot \frac{{v_{s} }}{K}} \right).$$ Because the soil concentration \(c_{0}\) cannot be calculated according to Eq. (26), a volume source according to f) is introduced without further ado. According to g), the pollutant particles with a concentration of \(\overline{c} = 500\) are in thermodynamic equilibrium. These are now speculatively redistributed so that they follow the exponential function (48). This already violates the second law of thermodynamics, because mass transport only takes place against the concentration gradient and not the other way round. One does not even have to cite the textbooks, for example Westphal (1959), page 265, according to which mass transport can "… never by itself in the reverse sense" be observed. The AUSTAL authors reverse all of this basic knowledge and speculatively calculate, with Eq. (49), a soil concentration of \(c_{0} = 1100,6\): $$\begin{aligned} c_{0} &= c(z = 5) = \bar{c} \cdot \frac{{v_{s} \cdot L_{z} }}{K} \cdot \frac{1}{{\left[ {1 - \exp \left( { - \frac{{v_{s} \cdot L_{z} }}{K}} \right)} \right]}} \cdot \exp \left( { - \frac{{v_{s} }}{K} \cdot 5} \right) \\ &= 500 \cdot \frac{{0,01 \cdot 200}}{1} \cdot \frac{1}{{\left[ {1 - \exp \left( { - \frac{{0,01 \cdot 200}}{1}} \right)} \right]}} \cdot \exp \left( { - \frac{{0,01 \cdot 5}}{1}} \right) \\ &= 1156,52 \cdot \exp \left( { - 0,05} \right) = 1100,6 \end{aligned}$$ This concentration value can also be found in Fig. 2, column V. The calculation Eq. (49) was kept hidden for 31 years, and it was left to the public to decipher this puzzling algorithm; its reconstruction is described in Schenk (2018b). The course of the solution is shown in Fig. 4. According to e) and i), no deposition is supposed to take place, but this contradicts the course of the solution function. Because of the negative concentration gradient, there is a conductive mass transfer at the ground directed towards the free atmosphere.
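The value \(c_{0} = 1100,6\) of Eq. (49) and the ground gradient that follows from Eq. (36) can be retraced with a few lines of Python; the parameter values are those of the task, the variable names are chosen here for illustration.

```python
import math

# Speculative redistribution of Eq. (49): mean concentration 500 ug/m^3, vs = 0.01 m/s,
# K = 1 m^2/s, Lz = 200 m, Fc = 0 (the text writes 0,01 for 0.01).
c_bar, v_s, K, L_z, F_c = 500.0, 0.01, 1.0, 200.0, 0.0

a = v_s * L_z / K                                        # dimensionless exponent vs*Lz/K = 2
c0 = c_bar * a / (1.0 - math.exp(-a)) * math.exp(-v_s * 5.0 / K)  # Eq. (49) at z = 5 m
print(round(c0, 1))                                      # ~1100, quoted as 1100,6 in Fig. 2, column V

dcdz0 = (F_c - c0 * v_s) / K                             # ground gradient of profile (25), Eq. (36)
print(round(dcdz0, 2))                                   # ~ -11: a conductive flux at the ground,
                                                         # although no deposition is supposed to occur
```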
Pollutant particles are speculatively redistributed by the AUSTAL authors
Equation (20) can be used to show that the physics of the AUSTAL authors violates mass conservation. The soil concentration is \(c_{0} = 1100,6\), and the concentration at the upper boundary of the study area is calculated according to Eqs. (25) and (50) as \(c_{h} = 148,95\). $$\begin{aligned} c_{h} & = c_{0} \cdot \exp \left( { - h \cdot \frac{{v_{s} }}{K}} \right) + \frac{{F_{c} }}{{v_{s} }} \cdot \left[ {1 - \exp \left( { - h \cdot \frac{{v_{s} }}{K}} \right)} \right] \\ &= 1100,6 \cdot \exp \left( { - 200 \cdot \frac{{0,01}}{1}} \right) + \frac{0}{{0,01}} \cdot \left[ {1 - \exp \left( { - 200 \cdot \frac{{0,01}}{1}} \right)} \right] = 148,95 \end{aligned}$$ According to Eqs. (35) and (51), the concentration gradient at the upper boundary is \(\partial c/\partial z(h) = - 1,48\). $$\begin{aligned} \frac{\partial c}{\partial z}(h) &= \frac{1}{K} \cdot \exp \left( { - \frac{{v_{s} }}{K} \cdot h} \right) \cdot (F_{c} - c_{0} \cdot v_{s} ) \\ &= \frac{1}{1} \cdot \exp \left( { - \frac{{0,01}}{1} \cdot 200} \right) \cdot (0 - 1100,6 \cdot 0,01) = - 1,48 \end{aligned}$$ With the further parameters according to h) and i) of the task, the mass balance (52) $$\begin{aligned} & v_{s} \cdot c_{0} - v_{s} \cdot c_{h} + v_{d} \cdot c_{0} - K \cdot \frac{\partial c}{\partial z}(h) = 0 \\ & 0,01 \cdot 1100,6 - 0,01 \cdot 148,9 + 0 \cdot 1100,6 + 1 \cdot 1,489 \ne 0 \end{aligned}$$ is obtained with Eq. (20). The law of mass conservation is violated. Taking into account Eqs. (36) and (6), \(\dot{m}_{{}}^{A} = \dot{m}_{{}}^{B}\), Eq. (53) $$\begin{aligned} \dot{m}^{B} &= - K \cdot \frac{\partial c}{\partial z}(0) = - (F_{c} - c_{0} \cdot v_{s} )\\ & = - \left( {0 - 1100,6 \cdot 0,01} \right) = 11,006 , \end{aligned}$$ results for the calculation of the deposition flow. There is therefore a conductive mass transfer, \(\dot{m}^{B} = 11,006\). However, the AUSTAL authors stipulate, according to e) and i), that no deposition is supposed to take place, \(F_{c} = 0\). This contradiction could only be resolved by assuming either that the diffusion coefficient is identically zero, \(K = 0\), or that, despite an existing potential gradient, \(\partial c/\partial z(0) \ne 0\), no material flow takes place, contrary to \(\dot{m}^{B} = - K \cdot \partial c/\partial z(0)\). The first case is excluded because the diffusion coefficient is a material parameter. The second case applies and justifies why the second law of thermodynamics is violated. Otherwise the pollutant particles would have to rearrange themselves, contrary to Fick's law as described in Häfner et al. (1992), so that \(\partial c/\partial z(0) = 0\) would hold at the ground.
Deposition with sedimentation
Figure 5 describes the task for the dispersion case "Deposition with sedimentation". The input parameters are described by the following information a) to g). The task and parameters have been taken from the literature reference Janicke (2002).
Task of the AUSTAL authors "Deposition with sedimentation"
"The emission is continuous at 1 g/s". The size of the control volume is specified with the geometric lengths \(L_{x} = 1000\), \(L_{y} = 1000\) and \(L_{z} = 200\). The deposition and sedimentation velocities are \(v_{d} = 0,05\) and \(v_{s} = 0,05\). The diffusion coefficient is \(K = 1\).
There is a "mass flow density forced by the source", \(F_{c} = Q = 1\). The source is at an altitude of \(h\left[ m \right] = 200\). According to f) the area source is at a height of \(h = 200\). As described under e), the emission takes place through a "mass flow density forced by the source" with \(F_{c} = 1\). According to b) and g) the task is again based on a non-stationary approach. In contrast, the AUSTAL authors only carry out stationary examinations. Algorithms and solution functions for non-stationary examinations are also unknown here. The AUSTAL authors relate their calculations to the validity of the differential Eq. (3) and use the wrong solution functions (25) and (26). In order to gain certainty about the validity of all approaches, non-stationary and stationary simulations are also carried out here and the results compared with Janicke's solutions. Correct non-stationary and stationary solution taking into account the Berljand boundary condition according to Schenk (2018b) According to the first option, it is a non-stationary task, which is described by the differential Eqs. (2) with the initial condition (12) \(c_{A} = 0\). Analytical solutions are not available for this, which is why the solution method according to Schenk (1980) was used here. The results for this are shown in Fig. 6 with the graphs A and B. The deposition and sedimentation speed as well as the diffusion coefficient are specified according to d) the task. The source height is taken into account according to f) and is at a height of 200 m. At the lower boundary, the Berljand boundary condition according to Eq. (10) was fulfilled. At the upper limit, it was assumed that pollutant concentrations can no longer be measured at a sufficiently high level, \(c_{H} = c(H) = 0\). So that the height of the source can be included with sufficient accuracy, the height of the study area was increased from to \(H\left[ m \right] = 400\). In graph A, the temporal development of the concentration distribution in the interval \(0 \le t \le T_{E}\) was evaluated. After a simulation time of \(T_{E} = 2,6h\) the stationary solution is reached. The maximum concentration at source height is \(c_{h} = c(200) \approx 20\) and the soil concentration is \(c_{0} \approx 10\). The error deviation \(\varepsilon = \left| {c_{An} - c(2,6h)} \right|/c_{An} \cdot 100\left[ \% \right]\) compared to analytical solutions is below \(\varepsilon < 0,1\). The analytical solution is to be understood under \(c_{An} \left[ {\mu g/m^{3} } \right]\). Correct stationary and unsteady solution for the spreading case "Sedimentation with Deposition" High demands are placed on reference solutions. The mass consistency must be demonstrated for all simulation times. For this purpose, all required balance sheet quantities according to Eq. (18) must be carried along during the calculation. These include the production term \(\int {\partial c/\partial t} \cdot dz\), the convective and conductive material flows at the boundary surfaces \(v_{s} \cdot c_{0} , \, v_{s} \cdot c_{H} , \, v_{d} \cdot c_{0} {\text{ und K}} \cdot \partial {\text{c/}}\partial {\text{z(H)}}\) as well as the source term \(Q = F_{c} = \int {\dot{q}} \cdot dz = 1\) according to Eqs. (19) and e). These terms were determined numerically as a function of time and their course is shown graphically in graphic B in Fig. 6. The integral Eq. 
(54) $$\begin{aligned} X)\;& \int\limits_{0}^{H} {\frac{\partial c}{\partial t} \cdot dz} + v_{s} \cdot c_{0} - v_{s} \cdot c_{H} + v_{d} \cdot c_{0} - K \cdot \frac{\partial c}{\partial z}(H) - \int\limits_{0}^{H} {\dot{q} \cdot dz} = 0 \\ Y)\;& {\text{5,32E-01}} + {\text{2,31E-01}} - 0,05 \cdot 0 + {\text{2,31E-01}} + {\text{2,58E-05}} - 1 \approx 0 \\ Z)\;& {\text{1,75E-02}} + {\text{4,85E-01}} - 0,05 \cdot 0 + {\text{4,85E-01}} + {\text{4,94E-05}} - 1 \approx 0 \end{aligned}$$ gives an exemplary numerical evaluation of the mass balance (18) for two different simulation times. In it, X) describes the integral mass balance (18), while Y) and Z) give the evaluation at the real times \(t = 1,16\,h\) and \(t = 2,60\,h\). The steady state is reached approximately after \(t = 2,60\,h\) and not only after 10 days, as the AUSTAL authors claim. The law of mass conservation is fulfilled. Because of \(\partial c/\partial z(0,t) \ne 0\) according to Fig. 6, graph A, the deposition flow is non-zero at all simulation times, \(\dot{m}_{{}}^{B} (t) = - K \cdot \partial c/\partial z(0,t) \ne 0\). For \(t \to \infty\) one obtains a concentration distribution that no longer changes over time. It is approximately identical to the later stationary solution according to Eq. (24), if that equation is transformed to \(- K \cdot \partial c/\partial z(0)\). This gives Eq. (55). $$\begin{aligned} \dot{m}^{B} (z = 0,t \to \infty ) &= - K \cdot \frac{\partial c}{\partial z}(0) \cong - c_{0} \cdot v_{d} \cdot \exp \left( { - \frac{{v_{s} }}{K} \cdot z} \right) \\ &= - 10 \cdot 0,05 \cdot \exp \left( { - \frac{{0,05}}{1} \cdot 0} \right) = - 0,5 . \end{aligned}$$ In the stationary case, \(t \to \infty\), Eqs. (55) and (60) are identical. The deposition flow \(\dot{m}_{{}}^{B} < 0\) is directed against the positive potential gradient \(\partial c/\partial z(0) > 0\) at all simulation times. The second law of thermodynamics is fulfilled. In the stationary case, Eqs. (21) and (22) are to be used. First, the soil concentration is calculated according to Eq. (56) $$c_{0} = \frac{Q}{{(v_{s} + v_{d} )}} = \frac{1}{{(0,05 + 0,05)}} = 10$$ with the information from d) and e). Equation (57) gives the maximum concentration \(c_{h} = 20\) at the source height \(h = 200\). $$\begin{aligned} c_{h} = c\left( {200} \right) &= c_{0} \cdot \frac{{v_{s} + v_{d} }}{{v_{s} }} \cdot \left[ {1 - \frac{{v_{d} }}{{v_{s} + v_{d} }} \cdot \exp \left( { - \frac{{v_{s} }}{K} \cdot 200} \right)} \right] \\ &= 10 \cdot \frac{{0,05 + 0,05}}{{0,05}} \cdot \left[ {1 - \frac{{0,05}}{{0,05 + 0,05}} \cdot \exp \left( { - \frac{{0,05}}{1} \cdot 200} \right)} \right] = 20 \end{aligned}$$ The concentration curve calculated with this equation can be seen in graph C of Fig. 6. The excellent agreement between the analytical and the numerical solution in the range \(z \le 200\), which was achieved with the method of Schenk (1980), should be emphasized. Here too it must be demonstrated that the law of mass conservation is fulfilled. The integral Eq. (20) is again decisive. To a good approximation, no conductive material flow is observed at the upper boundary of the study area at \(h = 200\) according to Eq. (23), which is proven with Eq. (58).
$$K \cdot \frac{\partial c}{\partial z}(h) = v_{d} \cdot c_{0} \cdot \exp \left( { - \frac{{v_{s} }}{K} \cdot h} \right) = 0,05 \cdot 10 \cdot \exp \left( { - \frac{{0,05}}{1} \cdot 200} \right) = 2,27{\text{E-05}} \approx 0$$ In addition, the specification \(v_{s} = 0,05\) according to d) must be observed: $$\begin{aligned} & v_{s} \cdot c_{0} - v_{s} \cdot c_{h} + v_{d} \cdot c_{0} - K \cdot \frac{\partial c}{\partial z}(h) = 0 \\ & 0,05 \cdot 10 - 0,05 \cdot 20 + 0,05 \cdot 10 - 1 \cdot 2,27 \cdot 10^{ - 5} \approx 0 \end{aligned}$$ The balance Eq. (59) proves that mass conservation is guaranteed. Equation (23) gives the relationship (60). $$\begin{aligned} \dot{m}_{{}}^{B} &= - K \cdot \frac{\partial c}{\partial z}(0) = - c_{0} \cdot v_{d} \cdot \exp \left( { - \frac{{v_{s} }}{K} \cdot z} \right) \\ &= - 10 \cdot 0,05 \cdot \exp \left( { - \frac{{0,05}}{1} \cdot 0} \right) = - 0,5 . \end{aligned}$$ There is therefore a conductive mass transfer, \(\dot{m}^{B} = - 0,5\). According to e), deposition is supposed to take place, \(F_{c} \ne 0\). Owing to the existing potential gradient, \(\partial c/\partial z(0) \ne 0\), deposition does indeed take place, \(\dot{m}^{B} = - K \cdot \partial c/\partial z(0) \ne 0\). The second law of thermodynamics is fulfilled.
Faulty non-stationary and stationary solution taking into account the Janicke Convention according to Trukenmüller et al. (2015)
Again, according to b) and g) of the task, it must be assumed that the AUSTAL authors considered non-stationary conditions. However, no solution algorithms or concentration profiles are given for this. The AUSTAL authors did not perform non-stationary calculations. Owing to the lack of non-stationary solution curves, the integral Eq. (18) cannot be used to check mass conservation. The AUSTAL authors do not provide any simulation results. When the AUSTAL authors state that a stationary solution would have appeared after 10 days, the public is again deceived. The AUSTAL authors only provide stationary solutions for this task. For this they use their incorrect solutions (25) and (26). Using Eq. (26), \(F_{c} = v_{d} \cdot c_{0}\), they calculate the soil concentration according to Eq. (62) $$c_{0} = \frac{{F_{c} }}{{v_{d} }} = \frac{1}{0,05} = 20$$ and with Eq. (25) a constant concentration distribution of \(c(z) = 20 = const.\) over the entire study area according to Eq. (63). $$\begin{aligned} c(z) &= c_{0} \cdot \exp \left( { - z \cdot \frac{{v_{s} }}{K}} \right) + \frac{{F_{c} }}{{v_{s} }} \cdot \left[ {1 - \exp \left( { - z \cdot \frac{{v_{s} }}{K}} \right)} \right] \\ &= c_{0} \cdot \exp \left( { - z \cdot \frac{{v_{s} }}{K}} \right) + \frac{{0,05 \cdot c_{0} }}{{0,05}} \cdot \left[ {1 - \exp \left( { - z \cdot \frac{{v_{s} }}{K}} \right)} \right] = c_{0} = 20 \end{aligned}$$ The result of this calculation is shown in Fig. 7. In contrast to the correct solution with \(c_{0} = 10\), the AUSTAL authors calculate the wrong concentration distribution \(c(z) = 20 = const.\). This untrue result is also highlighted in column V of Fig. 5. As specification f) shows, a source should be located at 200 m, which, however, in contrast to Fig. 6, graph C, cannot be seen in Fig. 7 of the AUSTAL authors.
The concentration distribution shows that no deposition can take place
It can easily be demonstrated that Eq.
(25) is an incorrect solution of the differential Eq. (3). For this purpose, Eq. (20) is used again. With the simulation results already described and taking into account Eq. (36), \(\partial c/\partial z(h) \sim (F_{c} - c_{0} \cdot v_{s} ) = 1 - 20 \cdot 0,05 = 0\), the expression (64) $$\begin{aligned} & v_{s} \cdot c_{0} - v_{s} \cdot c_{h} + v_{d} \cdot c_{0} - K \cdot \frac{\partial c}{\partial z}(h) = 0 \\ & 0,05 \cdot 20 - 0,05 \cdot 20 + 0,05 \cdot 20 - 1 \cdot 0 \ne 0 \end{aligned}$$ results. The law of mass conservation is therefore also violated for the "Deposition with sedimentation" dispersion case. Equation (6), \(\dot{m}^{A} = \dot{m}^{B}\), can be used to prove that the second law of thermodynamics is also violated. Equation (36), evaluated as (65), $$K \cdot \frac{\partial c}{\partial z}(0) = (F_{c} - c_{0} \cdot v_{s} ) = (1 - 20 \cdot 0,05) \equiv 0$$ is required to calculate the conductive flow. The deposition flow $$\dot{m}_{{}}^{B} = - K \cdot \frac{\partial c}{\partial z}(0) = - (F_{c} - c_{0} \cdot v_{s} ) = - \left( {1 - 20 \cdot 0,05} \right) = 0 .$$ is calculated using Eq. (65). Accordingly, there is no conductive mass transfer. However, the AUSTAL authors state, according to e), that deposition is supposed to take place, \(F_{c} \ne 0\). This contradiction could only be resolved by assuming either that the diffusion coefficient tends to infinity, \(K \to \infty\), or that, contrary to \(\dot{m}^{B} = - K \cdot \partial c/\partial z(0)\), a material flow follows a non-existent potential gradient, \(\partial c/\partial z(0) = 0\). The first case is excluded because the diffusion coefficient is a finite material parameter. The second case applies and justifies why the second law of thermodynamics is violated. Otherwise the pollutant particles would have to rearrange themselves, contrary to Fick's law according to Häfner et al. (1992), so that the concentration gradient at the ground would not be equal to zero, \(\partial c/\partial z(0) \ne 0\).
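The two stationary balances (59) and (64) can be verified directly. The short Python sketch below uses the task parameters (\(v_{s} = v_{d} = 0,05\) m/s, \(K = 1\) m²/s, \(Q = F_{c} = 1\) µg/(m²·s), \(h = 200\) m); the helper function is introduced here only for illustration. It evaluates Eq. (20) once for the conflict-free solution (56)–(58) and once for the constant profile (62)–(63) of the AUSTAL authors.

```python
import math

v_s, v_d, K, Q, h = 0.05, 0.05, 1.0, 1.0, 200.0       # task parameters

def balance(c0, ch, dcdz_h):
    """Stationary balance (20): vs*c0 - vs*ch + vd*c0 - K*dc/dz(h)."""
    return v_s * c0 - v_s * ch + v_d * c0 - K * dcdz_h

# Conflict-free solution (56)-(58)
c0 = Q / (v_s + v_d)                                              # Eq. (56): 10
ch = c0 * (v_s + v_d) / v_s * (1 - v_d / (v_s + v_d) * math.exp(-v_s * h / K))  # Eq. (57): ~20
dcdz_h = c0 * v_d / K * math.exp(-v_s * h / K)                    # Eq. (23) at z = h, ~2.27e-5
print(round(balance(c0, ch, dcdz_h), 6))                          # ~0 -> mass conservation fulfilled

# Constant AUSTAL profile (62)-(63): c(z) = Fc/vd = 20, dc/dz(h) = 0
c0_j = Q / v_d
print(balance(c0_j, c0_j, 0.0))                                   # 1.0 = vd*c0 -> residual of Eq. (64)
```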
Homogeneity tests
In order to derive reference solutions for homogeneity, the AUSTAL authors describe the so-called "Homogeneous turbulence, constant step size", "Homogeneous turbulence, variable step size", "Inhomogeneous turbulence, constant step size" and "Inhomogeneous turbulence, variable step size" as four separate test cases. However, as will be shown, all these test cases can be traced back to a single trivial task and solution. The model parameters for all tasks are given uniformly by a) to g). The tasks of the AUSTAL authors are shown in graphs A to D of Fig. 8. The only difference is that in the two cases of so-called "Homogeneous turbulence" the conductive transport is described by a constant, while in the two other examples of so-called "Inhomogeneous turbulence" a location-dependent diffusion is used. As already described for the other cases of sedimentation and deposition, the AUSTAL authors again assume, according to a), b) and c), a non-stationary approach. While they pretend to carry out three-dimensional calculations, in all four cases only a zero-dimensional propagation with the time coordinate as the only variable is considered. The task therefore describes the filling of an arbitrary container with different media.
Identical tasks for four supposedly different homogeneity tests
Special mention should be made of specification e), "Volume source distributed over the entire computing area". It is identical to specification f) of the task "Sedimentation without deposition". The tasks were taken from the reference Janicke (2002). The results can be found in the publication Janicke (2000). "The total emission is 100 kg". "The mean concentration is \(\bar{c} = 500\)". Non-stationary propagation processes are described by the differential Eq. (2). With the model parameters described, Eq. (67) $$\frac{\partial c}{\partial t} - v_{s} \frac{\partial c}{\partial z} = \frac{{\partial K_{zz} (z)}}{\partial z} \cdot \frac{\partial c}{\partial z} + K_{zz} (z) \cdot \frac{{\partial^{2} c}}{{\partial z^{2} }} + \dot{q}(t)$$ results, where \(K_{zz} (z)\left[ {m^{2} /s} \right]\) is to be understood here as the approach for describing the so-called "Homogeneous turbulence" or the so-called "Inhomogeneous turbulence". In the case of so-called "Homogeneous turbulence", \(K_{zz} = const\) applies, and in the case of so-called "Inhomogeneous turbulence" a dependency on z must be taken into account, \(K_{zz} (z)\). It is therefore generally valid to replace the expression \(K_{zz} \cdot \partial^{2} c/\partial z^{2}\) in the differential Eq. (2) with \(\partial /\partial z(K_{zz} (z) \cdot \partial c/\partial z)\), which results in Eq. (67). In the case of so-called "Homogeneous turbulence", the AUSTAL authors choose the simple approach (68) to describe the effective diffusion. $$K_{zz} = 1 = const.$$ In the case of so-called "Inhomogeneous turbulence", the relationships (69), (70) and (71) $$\sigma_{w} (z) = 0,5 - 0,4 \cdot \sin \left( {\frac{z \cdot \pi }{2 \cdot h}} \right) ,$$ $$T_{w} (z) = 1 + 20 \cdot \sin \left( {\frac{z \cdot \pi }{2 \cdot h}} \right) ,$$ $$K_{zz} (z) = \left[ {\sigma_{w} (z)} \right]^{2} \cdot T_{w} (z)$$ are used. In these equations, \(\sigma_{w} \left[ {m/s} \right]\) denotes the standard deviation of the wind speed fluctuations and \(T_{w} \left[ s \right]\) the Lagrangian correlation time. In connection with the solution of the differential Eq. (67), ultimately only the approach (71) is of interest. In addition, it must be noted that, according to e) of the task, a "volume source over the entire computing area" is assumed for all four so-called homogeneity cases. However, this assumption means that the mass of \(m_{E} = 100\) according to f), with a concentration of \(c(z) = \bar{c} = 500 = const.\) according to g), fills the entire control volume evenly. This means that no concentration changes can occur in the study area, i.e. \(\partial c/\partial z = 0\). A look at Eq. (67) then already yields the trivial relationship \(\partial c/\partial t = \dot{q}(t)\). According to a), the source term \(\dot{q}(t)\) of Eq. (67) is constant over time in the interval \(0 < t \le T_{E} = 1\,h\), \(\dot{q}(t) = \dot{q} = const.\), and can be calculated according to Eq. (72). $$\begin{aligned} \dot{q} &= \frac{{m_{E} }}{{V \cdot T_{E} }} = \frac{100}{{2 \cdot 10^{8} \cdot 3600}} \cdot 10^{9} = 0,139 \quad 0 \le t \le T_{E} \,\\ \dot{q} &= 0\quad{\text{ t}} > {\text{T}}_{\text{E}} . \end{aligned}$$ Because of \(\partial c/\partial x_{i} = \partial c/\partial z = 0\), Eq. (67) simplifies to Eq. (4), \(dc/dt = \dot{q}\). A simple integration \(c(t) = c_{A} + \int {\dot{q}} \cdot dt\) with the initial condition \(c_{A} = 0\) gives Eq. (73) with the calculated value
Eq. (73) with the calculated value at the end of the emission: $$\begin{aligned} c(t) &= c_{A} + \dot{q} \cdot t, \\ c(T_{E}) &= c_{A} + \dot{q} \cdot T_{E} = 0 + 0,139 \cdot 3600 \approx 500 . \end{aligned}$$ Equation (73) is identical to Eq. (40) in the case of "sedimentation without deposition". It can ultimately be seen that, because of the vanishing concentration gradients, \(\partial c/\partial z = 0\), the relationships (68) for the calculation of a so-called homogeneous turbulence \(K_{zz}\), (69) for the calculation of a so-called speed fluctuation \(\sigma_{w}\), (70) for the calculation of the so-called Lagrangian correlation time \(T_{w}\), and (71) for the calculation of a so-called inhomogeneous turbulence \(K_{zz} (z)\) can have no influence on the course of the solution. The solution is independent of these parameters, which the AUSTAL authors either did not recognize out of ignorance or intentionally concealed. It would be interesting to find out what the specialist reviewers have to say about this. The correct solutions according to Eq. (73) are shown in Fig. 9. One can see the filling of the control volume for the time intervals \(0 \le t \, [h] \le 1\) and \(1 < t \, [\text{days}] \le 10\). For all four case studies, the task of the AUSTAL authors thus trivially describes the filling of a container. Contrary to the claims of the AUSTAL authors according to b), that the simulation is only completed on the "10th day", the mean concentration of \(\bar{c} = 500\) is already reached after 1 h. According to c), a "time series over 10 days" could therefore not have been calculated. Here too, the solution (73) describes, for all four test cases, only a zero-dimensional propagation with the time coordinate as the only independent variable. This result can again only be compared with the filling of a container, which means that no dispersion models can be validated with it. The results of the AUSTAL authors are shown in Fig. 10 with graphics A to D. A comparison with the correct solution according to Fig. 9 shows that all non-stationary simulation results of the authors of AUSTAL according to b) and c) are wrong: non-stationary calculations have not taken place, and the AUSTAL authors only provide stationary solutions. One peculiarity cannot be overlooked. The specification e), "Volume source distributed over the entire computing area", is not only applicable to all four homogeneity tests; it is also used for the trivial case of "sedimentation without deposition". Thus, the authors of AUSTAL provide five different reference solutions for one and the same task according to the differential Eq. (4) and the initial condition (12), namely according to Figs. 4 and 10: "Sedimentation without deposition", Fig. 4; "Homogeneous turbulence, constant time step", Fig. 10, graph A; "Inhomogeneous turbulence, constant time step", Fig. 10, graph B; "Homogeneous turbulence, variable time step", Fig. 10, graph C; and "Inhomogeneous turbulence, variable time step", Fig. 10, graph D. Nevertheless, they claim that these are different reference solutions. The publication by Janicke (2000) shows that the AUSTAL authors actually mean different solutions, where adventurous physical interpretations are given for each of the minor deviations. Different physics are faked.
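The trivial character of the four homogeneity tests can likewise be retraced numerically. The sketch below evaluates the source term of Eq. (72), integrates Eq. (4) to obtain the filling curve (73), and evaluates the turbulence profiles (69)-(71) at a few relative heights z/h; because ∂c/∂z = 0, the value of K_zz never enters the concentration curve. The numbers (m_E = 100 kg, V = 2·10^8 m3, T_E = 1 h) are those quoted in the text; the script is an illustrative sketch and not part of AUSTAL.

```python
import math

# Source term, Eq. (72): q_dot = m_E / (V * T_E), converted from kg to ug.
m_E = 100.0          # emitted mass [kg]
V   = 2.0e8          # volume of the computing area [m3]
T_E = 3600.0         # emission time, 1 h [s]
q_dot = m_E / (V * T_E) * 1.0e9     # [ug/(m3 s)] -> 0.139

# Filling curve, Eq. (73): c(t) = c_A + q_dot * t for 0 <= t <= T_E,
# constant afterwards because q_dot = 0 for t > T_E.
def c(t, c_A=0.0):
    return c_A + q_dot * min(t, T_E)

# Turbulence profiles (69)-(71); zeta = z/h runs from 0 (ground) to 1 (top).
def K_zz(zeta):
    sigma_w = 0.5 - 0.4 * math.sin(zeta * math.pi / 2.0)   # Eq. (69)
    T_w     = 1.0 + 20.0 * math.sin(zeta * math.pi / 2.0)  # Eq. (70)
    return sigma_w**2 * T_w                                 # Eq. (71)

print(f"q_dot            = {q_dot:.3f} ug/(m3 s)")
print(f"c(T_E = 1 h)     = {c(T_E):.1f} ug/m3   (mean concentration 500)")
print(f"c(10 days)       = {c(10 * 86400.0):.1f} ug/m3 (unchanged after 1 h)")
for zeta in (0.0, 0.5, 1.0):
    print(f"K_zz(z/h = {zeta:.1f}) = {K_zz(zeta):.3f} m2/s  (does not enter c(t))")
```

Whatever profile is chosen for K_zz, the printed concentration history is the same linear filling to the mean value of 500, which is the point of the comparison with Fig. 9.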
The analogy to the momentum, heat and mass transport: Textbooks on physics and thermodynamics, as well as on process engineering, like to refer to the existing analogy between momentum, heat and mass transport. In the case of momentum, it is Newton's stress approach, \(\tau = \eta \cdot \partial u/\partial z\). In the case of heat, it is Fourier's law of heat conduction, \(N = - \lambda \cdot \partial \vartheta /\partial z\). In the case of admixtures, the analogy concerns Fick's law, \(\dot{m}^{B} = - K \cdot \partial c/\partial z\). The analogy is founded on these conductive approaches; the currents of momentum, energy and mass caused by them are collectively referred to as conductive transport. Here, \(\tau [N/m^{2}]\) denotes the shear stress, \(\eta [kg/(m \cdot s)]\) the dynamic viscosity, \(u[m/s]\) the velocity, \(N[W/m^{2}]\) the specific heat flux, \(\lambda [W/(m \cdot \text{Kelvin})]\) the thermal conductivity and \(\vartheta [\text{Kelvin}]\) the temperature. If one refers to the conductive material flow and considers the analogy to the heat flow, one would have to exchange the concentration distribution for a temperature distribution in the case of "sedimentation without deposition" in Fig. 4 of the AUSTAL authors. According to Fourier's law of heat conduction, a conductive heat flow then takes place, analogously to Eq. (53), from a higher temperature level towards a lower ambient temperature. The authors of AUSTAL would now have to explain why, despite an analogous negative temperature gradient, \(\partial \vartheta/ \partial z < 0\), and therefore analogously to \(N > 0\) and \(\dot{m}^{B} > 0\), there should be no analogous heat flow, \(N \equiv 0\) and \(\dot{m}^{B} = 0\). It would then also have to be explained why, analogously to \(F_{c} = 0\) and \(N = 0\), there is a heat flow according to Fig. 4 even though there is no heat source. The second law of thermodynamics is violated. In the case of "deposition with sedimentation", the concentration would likewise have to be exchanged for the temperature in Fig. 7. According to Fourier's law of heat conduction, no conductive heat flow then takes place, analogously to Eq. (66). The authors of AUSTAL would now have to explain why, despite a vanishing analogous temperature gradient, \(\partial \vartheta/ \partial z = 0\), and consequently analogously to \(N = 0\) and \(\dot{m}^{B} = 0\), an analogous heat flow \(N \ne 0\) and \(\dot{m}^{B} \ne 0\) should result towards the ground. It would also have to be explained why, analogously to \(F_{c} = 1\) and \(N = 1\), there should be no heat flow at all according to Fig. 7, despite the existing heat source. The second law of thermodynamics is violated. If one considers the analogy to momentum transport, the concentration distributions would have to be exchanged for flow velocities, from which the stress distributions in the fluid can be calculated. According to Schlichting (1964) there is a direct proportionality between stress and deformation. In the present case, however, this proportionality would be reversed: stress and deformation would not correspond to each other, but would be opposed. Newton's 3rd axiom is violated.
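The sign logic of this analogy argument can be made explicit with a short sketch. Assuming only the three gradient laws quoted above (Newton, Fourier, Fick), the code below computes the conductive fluxes for prescribed gradients and shows which gradient sign is compatible with a flux directed towards the ground (negative z direction). The numerical values of η, λ and K are placeholders chosen purely for illustration; they are not taken from AUSTAL or from the reference solutions.

```python
# Conductive fluxes from the three analogous gradient laws; z points upward,
# so a negative flux is directed towards the ground.

def newton_shear(eta, du_dz):   # Newton:  tau = eta * du/dz      [N/m2]
    return eta * du_dz

def fourier_heat(lam, dT_dz):   # Fourier: N = -lambda * dT/dz    [W/m2]
    return -lam * dT_dz

def fick_mass(K, dc_dz):        # Fick:    m_B = -K * dc/dz       [ug/(m2 s)]
    return -K * dc_dz

eta, lam, K = 1.8e-5, 0.026, 1.0      # placeholder material parameters

for dc_dz in (-1.0, 0.0, +1.0):
    m_B = fick_mass(K, dc_dz)
    direction = "towards the ground" if m_B < 0 else ("upward" if m_B > 0 else "no flux")
    print(f"dc/dz = {dc_dz:+.1f}  ->  m_B = {m_B:+.2f}  ({direction})")

print(f"analogous heat flux for dT/dz = -1 K/m : N = {fourier_heat(lam, -1.0):+.4f} W/m2")
print(f"analogous shear stress for du/dz = 0.5 1/s : tau = {newton_shear(eta, 0.5):.1e} N/m2")
```

A downward (deposition-like) conductive flux requires a positive concentration gradient, and with a vanishing gradient the conductive flux vanishes as well; this is the core of the second-law argument made above.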
The life stories of the AUSTAL dispersion model: Preliminary remarks. The author of this article has taken a close look at the validity of all reference solutions given by the AUSTAL authors. He concludes that all of the physics and mathematics of the AUSTAL authors must be questioned. All doubts about credibility, honesty and scientific thoroughness deepen. This distrust was the occasion to investigate the life story and all the strange circumstances surrounding this model development. A true and an elitist life story stand face to face. The real life story: The real life story begins in 1984, when Axenfeld et al. (1984) described a model for calculating dust precipitation. The theoretical basis is explained by a thought model, which defines the deposition speed as the speed at which "… a column standing on the surface of the earth, which contains the material capable of deposition, runs empty through deposition". Deposition here means loss and not retention. However, the authors of AUSTAL are in prominent company with this opinion; one can later read, e.g. in Graedel et al. (1994), p. 144, that material capable of deposition is lost: "… deposition occurs when a gas molecule comes into contact with a surface and is lost on it". In 1988, with reference to Axenfeld et al. (1984) and by means of the Janicke Convention, a new theory of the spread of air pollutants is established in VDI (1988). The physics and mathematics of the AUSTAL authors are adopted without criticism. In 2001, this model is further developed in Janicke (2001) into the LASAT model. This dispersion model is later promisingly referred to as the "parent model" for all dispersion calculations. In 2002, the authors Janicke et al. (2002) develop the "Model-Based Assessment System for Immission Control in Industry and Economy", called AUSTAL. The faulty algorithms for deposition and sedimentation of Axenfeld et al. (1984) are not corrected. In 2009, AUSTAL is further developed into LASAIR, a model for the calculation of the spread of radionuclides for defense against nuclear-specific hazards, according to Janicke (2009). Ensuring security is an interdisciplinary task. Nuclear technicians, thermodynamics engineers, materials scientists, and specialists in solid-state mechanics and, for example, fluid mechanics are responsible for the safety of their nuclear power plants. They can expect that efforts are also made to protect citizens and the environment outside of nuclear power plants, and they do not want their efforts to be frivolously gambled away. In 2011, AUSTAL is developed further to calculate the spread of different substances according to TA Luft and the spread of odors, according to Janicke et al. (2011). The substance-specific peculiarities of the spread of odors, which have already been described e.g. in Westphal (1959), are not taken into account. The algorithms for this are unknown and remain unpublished; one refers only to the source code, as if users were supposed to read the physics used from the source code. After AUSTAL had been published in BMU (2002), those responsible for pollution control and environmental engineers raised doubts about the validity of the AUSTAL reference solutions. In 2014, the author of this article was therefore commissioned by the company WESTKALK, United Warstein Limestone Industry, to prepare an expertise on this dispersion model, according to Schenk (2014). The author of this expertise comes to the conclusion that all reference solutions of AUSTAL violate mass conservation and the second law of thermodynamics. The way in which critical terms are used also leads to the conclusion that the AUSTAL authors are not very familiar with the theory of modeling the spread of air pollutants. The results of this expertise are published in Schenk (2015a); they form the background of all criticism of the AUSTAL dispersion model. In 2015, AUSTAL is further developed into a model called LASPORT for calculating the spread of airport-specific pollutants, according to Janicke (2015). However, the spread of aviation pollutants generally requires a non-stationary treatment; the validation of time-dependent reference solutions is nevertheless not considered necessary.
Also in 2015, interested environmental engineers recognize the contradictions between the different reference solutions. In Schenk (2015a) it is recognized, for the first time in 31 years, that all reference solutions are faulty: the mass conservation law and the second law of thermodynamics are violated. The authors of AUSTAL make themselves completely ridiculous, for example, with the claim that 3D wind fields can be validated with the rigid rotation of a solid body in the plane. In the same year, 14 authors contradict all of these objections in Trukenmüller et al. (2015). The AUSTAL authors want to prove the opposite, but they rely more on the authority of their offices than on mathematics and mechanics, and they remain fully convinced of themselves even while publishing false reference solutions. The authors include sworn and non-sworn experts, protagonists and expert advisors of AUSTAL, office managers and administrative workers. They also include a nuclear optician with a doctorate from 1997 who, together with a plasma physicist, is one of the actual authors of AUSTAL. How they obtained the basic knowledge in the field of coupled momentum, heat and mass transport required to model the spread of air pollutants is unknown. Also in 2015, with the wording "As a closer analysis shows, the results of AUSTAL2000 are correct, while the contradictions highlighted by R. Schenk are based on fundamental errors in his evidence…", the Federal Environment Agency Dessau-Roßlau publicly published false reports on its website according to UBA (2015a). In Schenk (2015b) the author then deals with the explanations in Trukenmüller et al. (2015): the errors of all described reference solutions are analyzed, and the correct reference solutions and their derivation are given. The publication Schenk (2015a) is obviously viewed as an industrial accident. A new editorial team is appointed for the magazine IMMISSIONSCHUTZ at the end of the year, and the editorial area for the spread of air pollutants is filled with an administrative officer for meteorology. With regard to the AUSTAL topic, the editorial team announces immediately after the new appointment that "the space that this technical discussion had taken up in the magazine was more than exhausted". Later it is communicated again that "the discourse on AUSTAL2000 has ended from the editorial point of view". However, it is not claimed that this reshuffle is intended to deliberately prevent further such accidents. In 2016, in reply to Schenk (2015b), all criticism is rejected in Trukenmüller (2016). With differently defined deposition speeds, the aim is to achieve equivalence between the reference solutions of AUSTAL and those of Schenk (2015b); with little physics, attempts are made to prove the correctness of one's own reference solutions. Trukenmüller (2016) turns out to be an outright deception. In 2017, finally, Trukenmüller (2017) again contests all of the objections justified in Schenk (2015b). The Janicke Convention is claimed to be universal and to be justified, for example, by Venkatram et al. (1999). In the absence of physical insight, reference is made to the authority of other scientists, who are said to also use Janicke's Convention, which is not true. The author of this article is asked to agree with the incorrect views on sedimentation and deposition. The reputation of 20 internationally recognized and esteemed authors in the field of modeling of dispersion, sedimentation and deposition is used, and one's own ignorance is hidden behind it. In 2017, the authors of AUSTAL publish in Janicke et al.
(2017), under the heading "Precise numerical solution and analytical approximation for the wind profile over flat terrain", an attempt to validate AUSTAL with a wind field. The AUSTAL authors obviously want to react to the criticism in Schenk (2015a), but they prove that, even in 2017, they have not understood the difference between numerical and analytical solutions: it is numerical algorithms that provide approximate solutions, not analytical methods, and not the other way around. In 2018, Schenk (2018b) deals with the results of Venkatram et al. (1999). In fact, the authors of Venkatram et al. (1999) are more concerned with deriving analytical relationships between deposition and sedimentation velocities than with justifying any conventions. In 2020, the AUSTAL authors send the deceptive treatise Trukenmüller (2016) to administrative offices and authorities of the Federal Republic of Germany on request; they abuse the authority of their office and position. The real life story is thus a teaching example of how truths could be suppressed for 36 years, from 1984 to 2020. The elite life story of the AUSTAL authors: The authors of AUSTAL write the other life story themselves, according to UBA (2018), and explain how their dispersion model came about: "The history of AUSTAL2000 started almost exactly 21 years ago. At the NATO-CCMS conference in San Francisco at the end of August 1981, I had just presented my approach to Lagrangian modeling in inhomogeneous turbulence, at the same time as the corresponding work by Wilson and Legg & Raupach, and thus fulfilled a promise that I had given Hanna at the previous year's conference in Amsterdam. Preparations for TA Luft 1983 were still going on, but the parties involved were already considering how to proceed with TA Luft in the medium and long term. So after the conference, we sat down in the small town of Kirkwood, in the mountains east of Jackson, to summarize our ideas for a concept in a workshop (as part of the UBA project "Handbook of Immission Forecasting"). These were: Werner Klug, Paul Lühring, Rainer Stern, Robert Yamartino and I. The key points of the long-term concept, which should extend 5 to 7 years into the future, included: … Now, after 21 years, with the new TA Luft, which on October 1st, 2002 realizes key points of the concept developed at that time, maybe one should meet again in the mountains to think about how the TA Luft dispersion model would have to be developed in the next 20 years" (Lutz Janicke, September 30, 2002). Unquestionably, the authors present themselves in public in an inflated manner. Summary and discussion of the results: The author of this article has dealt with the faulty algorithms of the AUSTAL dispersion model. Since 2002, according to VDI 3945 Part 3 (2000), this dispersion model with its reference solutions has been declared binding for all model development in the Federal Republic of Germany; other model developments have to prove their equivalence against the fixed reference solutions. Because of the high public importance attached to this dispersion model, the discussion of its physical and mathematical foundations is justified. Every effort is justified, and the public should also be involved. Berljand's boundary condition: Initially, this article explains in detail the initial boundary value problem for the description of the spread of air pollutants. It consists of the mass transfer Eq. (1), the initial condition (12) and Berljand's boundary condition (10).
Because of its general validity for all stationary and non-stationary tasks of momentum, heat and mass transport, this boundary condition is derived in detail according to Fig. 1. It can thus be used for all tasks of the AUSTAL authors in order to derive the reference solutions. Integral sentences: In Schenk (2015a) the accusation is raised that all reference solutions of the authors of AUSTAL violate the mass conservation law and the second law of thermodynamics. The general validity of these allegations is demonstrated in Schenk (2015b) and heavily disputed in Trukenmüller (2017). For this reason, the author of this article develops the integral Eqs. (18) and (20), which are applied directly to all individual cases of the reference solutions; the validity of the second law of thermodynamics can also be checked with them. On the basis of the initial boundary value problem described, and taking Berljand's boundary condition into account, the correct solutions according to Eqs. (21) and (22) are compared with the incorrect reference solutions (25) and (26). The defective Janicke Convention (34) is subjected to criticism, and it is shown that, in contrast to Berljand's boundary condition, deposition is treated there as loss and not the other way around. In order to be able to judge in the individual cases whether the second law of thermodynamics is fulfilled or not, the derivations (23) and (35) for the calculation of the concentration gradients are given. For the reference solution "sedimentation without deposition", considered correctly, it is first shown that the task according to Fig. 2 is reduced to a trivial task and solution because of the "volume source over the entire computing area", Eq. (40). The results for stationary and non-stationary calculations are shown in Fig. 3. The mass conservation law and the second law of thermodynamics are fulfilled, in the case of a non-stationary calculation according to Eqs. (41) and (42) and in the case of a stationary solution according to Eqs. (45) and (46). The course of the solution according to Fig. 3 is comparable to the filling of an arbitrary container with different media and cannot be related to tasks of modeling the spread of air pollutants. The stationary state is reached after filling, i.e. after 1 h, and not only after 10 days as the AUSTAL authors claim. In the case of the faulty reference solution of the AUSTAL authors, no calculation equation is available for stationary considerations because of Janicke's Convention, and an indeterminate expression results for the calculation of the ground concentration. The pollutant particles in the control volume must be redistributed against the existing potential gradient so that they follow the faulty exponential function (48). The ground concentration is calculated speculatively using Eq. (49). In detail, Eq. (52) is used to prove that the mass conservation law is violated. The conductive material flow is directed into the free atmosphere according to Eq. (53), whereas deposition flows point towards the ground; the second law of thermodynamics is violated. The incorrect concentration curve is shown in Fig. 4. Despite a stationary treatment, the AUSTAL authors questionably report time-dependent simulation results. The stationary solution would supposedly have been reached after 10 days, and a time series over 10 days would have been calculated, which is not true. The information on non-stationary solutions can presumably only be described as inventions of the AUSTAL authors.
The differential Eq. (3) is available for determining stationary solutions, but it is ignored. For the spreading case "deposition with sedimentation", the task according to Fig. 5, a correct treatment is again assumed first. The differential Eq. (2) is available for non-stationary calculations. The emission source is at an altitude of \(200\,m\). No analytical solution is available for this differential equation, which is why a numerical algorithm must be used. The method used here is based on the intermediate step method according to Janenko (1968), which was further developed in Schenk (1980) for tasks related to the spread of air pollutants. The results of this non-stationary calculation are shown in Fig. 6, graphs A and B. Graph A describes the non-stationary course of the propagation, and graph B shows the calculated integrals; they are required to prove the validity of the conservation laws and of the laws of thermodynamics. The stationary final state is reached after \(2,6\,h\) and not only after 10 days, as the AUSTAL authors claim. The conservation of mass is fulfilled according to Eq. (59). The deposition flow coincides with the conductive material flow and is directed into the ground according to Eq. (60); the second law of thermodynamics is fulfilled. The stationary solution is shown in graph C of Fig. 6. A comparison between graphs A and C shows an excellent agreement between the numerical and analytical calculations for the stationary final states. The effect of the elevated source can be clearly seen in both the non-stationary and the stationary case. In the case of the incorrect reference solution of the AUSTAL authors, Eq. (64) proves that the mass conservation law is violated. According to Eq. (66) there is no conductive material flow, whereas the AUSTAL authors calculate an alleged deposition flow. The conductive material flow and the deposition flow are not identical, which is why the second law of thermodynamics is also violated here. The results of the AUSTAL authors are shown in Fig. 7; it cannot be seen there that the authors of AUSTAL considered a source at \(200\,m\). The AUSTAL authors also report non-stationary simulation results for this spreading case. The steady state would again have been reached only after 10 days, but this could not be confirmed, and time series were not calculated either. This information, too, can only be described as idiosyncrasies of the AUSTAL authors. They lose all credibility.
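The non-stationary calculation mentioned above relies on the intermediate step (fractional step) method of Janenko (1968), which is not reproduced in the text. The following Python sketch only illustrates the general idea of such a splitting for the one-dimensional Eq. (2)/(67): each time step is divided into an advection (settling) sub-step, a diffusion sub-step and a source sub-step. The grid, the time step, the simplified boundary conditions and the source placement near 200 m are illustrative assumptions; the sketch is neither the scheme of Schenk (1980) nor part of AUSTAL.

```python
import numpy as np

# Illustrative fractional-step (intermediate step) integration of
#   dc/dt = v_s * dc/dz + d/dz(K * dc/dz) + q(z, t)
# on a vertical column 0 <= z <= H. All settings are placeholder assumptions.
H, nz = 1000.0, 201                  # column height [m], number of nodes
z = np.linspace(0.0, H, nz)
dz = z[1] - z[0]
v_s, K = 0.05, 1.0                   # sedimentation velocity [m/s], diffusivity [m2/s]
dt = 0.4 * min(dz / v_s, dz**2 / (2 * K))   # explicit stability limit

c = np.zeros(nz)                     # initial concentration
q = np.zeros(nz)
q[np.argmin(np.abs(z - 200.0))] = 1.0e-3    # point source near 200 m [ug/(m3 s)]

def step(c):
    # (1) advection sub-step: settling transports material downward (upwind)
    c_adv = c.copy()
    c_adv[:-1] += dt * v_s * (c[1:] - c[:-1]) / dz
    # (2) diffusion sub-step: explicit second difference, zero-gradient ends
    c_dif = c_adv.copy()
    c_dif[1:-1] += dt * K * (c_adv[2:] - 2 * c_adv[1:-1] + c_adv[:-2]) / dz**2
    c_dif[0], c_dif[-1] = c_dif[1], c_dif[-2]   # simplified boundary conditions
    # (3) source sub-step
    return c_dif + dt * q

t, t_end = 0.0, 3600.0               # integrate for one hour
while t < t_end:
    c = step(c)
    t += dt

print(f"dt = {dt:.2f} s, surface concentration after 1 h: {c[0]:.3e} ug/m3")
```

In the calculations described in the text, Berljand's boundary condition (10) would take the place of the simplified zero-gradient treatment used in this sketch.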
In addition to the sedimentation and deposition studies, four so-called homogeneity tests are also carried out. The tasks are described in Fig. 8 with graphics A to D. These are the so-called test cases "Homogeneous turbulence, constant time step", "Homogeneous turbulence, variable time step", "Inhomogeneous turbulence, constant time step" and "Inhomogeneous turbulence, variable time step". The wording of all these tasks is identical; the only difference is that in the case of so-called homogeneous turbulence a constant effective mass transfer coefficient according to Eq. (68) is used, and in the case of so-called inhomogeneous turbulence a variable effective mass transfer coefficient according to Eq. (71). Process-engineering homogenization is confused with Fickian diffusion: while in the case of homogenization the concentration equalization is brought about by an energy input, such as stirring, in the case of diffusion an existing potential gradient is responsible for the concentration equalization, which the AUSTAL authors do not understand. All tasks assume a "volume source distributed over the entire computing area". With this assumption it can be proven, using Eqs. (40) and (73), that all of these tasks can be traced back to a single trivial dispersion calculation with the solution (4). However, this only describes the filling of different containers with different media; it is a zero-dimensional spread with the time coordinate as the only independent variable. The simulation results apply to all dispersion cases, as can be seen in Fig. 9. With the end of the emission, the filling is completed and the steady state is reached. In contrast, the AUSTAL authors provide, in Fig. 10 with graphics A to D, four different solutions for one and the same task. For the discernible, subtle differences in the solution behavior, the authors of AUSTAL give detailed physical reasons and thereby show that they actually assumed different solutions. They explain these deviations incomprehensibly, for example, with periodic boundaries or with different force effects that would allegedly act in the study area. The AUSTAL authors also do not recognize that all solutions have to be independent of any material parameters; they explain their results with drift velocities that do not exist. It was not recognized that all solutions should actually describe identical concentration courses. The lack of knowledge of the AUSTAL authors is convincingly demonstrated by this example. Here, too, the AUSTAL authors report non-stationary simulation results for all four propagation cases. The stationary final states would again have been reached only after 10 days, and it is also reiterated that a time series had been calculated. Not a single simulation result is true. All of the tasks described for sedimentation, deposition and homogeneity have in common that they start from a three-dimensional investigation area. However, the differential Eq. (3) used by the AUSTAL authors only describes a one-dimensional propagation process. The reader is misled. All solutions and algorithms given by the authors of AUSTAL are wrong. Their train of thought cannot be understood with mathematics and mechanics; confusion is created deliberately. The analogy of the momentum, heat and mass transport: Textbooks on physics, thermodynamics and process engineering like to refer to the existing analogy between momentum, heat and mass transport. Looking at this analogy, the authors of AUSTAL would have to claim, for example, that heat and material can be transferred from a lower energy level to a higher one. Because of the contradictions between stress and deformation, Newton's 3rd axiom would not be valid either. All principles of mathematics and mechanics are called into question with AUSTAL. The AUSTAL authors have to explain how experiments in nature can be recalculated with dispersion models that contradict all recognized principles. The real life story is a prime example of how truths could be suppressed for 36 years. In contrast, the history of science in all disciplines proves that truths cannot be suppressed in the long run. The elitist life story is a teaching example of how the public has been misled for more than 36 years, from 1984 to 2020. A prologue proves that AUSTAL is not validated; the simulation results for the reference solutions are wrong without exception. An epilogue has not yet been written. The authors of AUSTAL have to demonstrate how experiments in nature can be calculated using dispersion models that contradict all recognized principles.
All hazard prevention plans, safety analyses and immission forecasts that have been determined with AUSTAL must be checked. Court rulings are also affected. Data and material are freely available. List of symbols:
A [m2]: Area of the control volume
Ci [μg/m3]: Special solution
C [μg/m3]: Concentration
CT [μg/m3]: Concentration at the soil depth T
\({\bar{\text{C}}}[\mu {\text{g}}/{\text{m}}^{ 3} ]\): Mean concentration
Ch [μg/m3]: Concentration at the upper limit h
CAn [μg/m3]: Analytical solution
CA [μg/m3]: Initial concentration
Fc [μg/(m2 s)]: Area source
H [m]: Upper limit of the study area
KB [m2/s]: Diffusion coefficient, ground
Kzz [m2/s]: Diffusion coefficient, z direction
K [m2/s]: Diffusion coefficient, atmosphere
Lx,y,z [m]: Extension of the study area
\({\dot{\text{m}}}^{\text{A}}\) [μg/(m2 s)]: Conductive material flow, atmosphere
\({\dot{\text{m}}}^{\text{B}}\) [μg/(m2 s)]: Conductive material flow, ground
\({\dot{\text{m}}}_{\text{z}}\) [μg/(m2 s)]: Sedimentation flow
N [W/m2]: Specific heat flux
\({\dot{\text{q}}}\)(t) [μg/(m3 s)]: Differential source term
Q [μg/(m2 s)]:
t [s]: Time coordinate
TE [s]: End time of the simulation
T [m]: Soil depth
Tw [s]: Lagrangian correlation time
u [m/s]: Velocity
Vi,x,y,z [m/s]: Velocities in the spatial directions
Vs [m/s]: Sedimentation velocity
Vd [m/s]: Deposition velocity
\({\text{V}}_{\text{d}}^{\text{Schenk}}\) [m/s]: Deposition velocity as defined by Schenk
\({\text{V}}_{\text{d}}^{\text{Janicke}}\) [m/s]: Deposition velocity as defined by Janicke
\({\text{X}}_{\text{i}}^{ *}\) [m]: Ground coordinate
\({\text{X}}_{\text{i}}\) [m]: Location coordinate
Z [m]: Vertical coordinate
βi,z [m/s]: Mass transfer velocity
η [kg/(m s)]: Dynamic viscosity
ϑ [K]: Temperature
λ [W/(m K)]: Thermal conductivity
ε [%]: Numerical error
σw [m/s]: Spread of the wind speed fluctuations
τ [N/m2]: Shear stress
Abas N, Saleem MS, Kalair E (2019) Cooperative control of regional transboundary air pollutants. Environ Syst Res 8:10 Albring W (1961) Angewandte Strömungslehre. Akademie Verlag: Berlin Axenfeld F, Janicke L, Münch J (1984) Entwicklung eines Modells zur Berechnung des Staubniederschlages. Umweltforschungsplan des Bundesministers des Innern Luftreinhaltung, Forschungsbericht 104 02 562, Dornier System GmbH Friedrichshafen, im Auftrag des Umweltbundesamtes BMU (2002) Erste Allgemeine Verwaltungsvorschrift zum Bundes-Immissionsschutzgesetz (Technische Anleitung zur Reinhaltung der Luft-TA Luft) Vom 24. Juli 2002. GMBL Heft 25-29 S: 511-605 Boŝnjakoviĉ F (1971) Technische Thermodynamik. Verlag Theodor und Steinkopf Dresden, 5. Auflage Graedel TE, Crutzen PJ (1994) Chemie der Atmosphäre. Spektrum Akademischer Verlag, Heidelberg, Berlin, Oxford Gröber, Erk, Grigull (1955) Grundgesetze der Wärmeübertragung. Springer-Verlag, Berlin, Göttingen, Heidelberg Häfner F, Sames D, Voigt HD (1992) Wärme- und Stofftransport. Springer-Verlag, Berlin, Heidelberg, New York, London, Paris, Tokyo, Hong Kong, Barcelona, Budapest Janenko NN (1968) Die Zwischenschrittmethode zur Lösung mehrdimensionaler Probleme der mathematischen Physik. Springer-Verlag, Berlin Janicke L (2000) IBJparticle Eine Implementierung des Ausbreitungsmodells. Bericht IBB Janicke, Dunum Janicke (2001) Ausbreitungsmodell LASAT Referenzbuch zur Version 2.10. Dunum Janicke (2002) AUSTAL 2000 Programmbeschreibung. Forschungskennzahl des Umweltbundesamtes UFOPLAN 200 43 256, Dunum Janicke L (2009) Ein Programmsystem LASAIR in der nuklearspezifischen Gefahrenabwehr, Vorhaben 3607S04553 im Auftrag des Bundesministeriums für Umwelt, Naturschutz und Reaktorsicherheit.
urn: nbn: de: 0221-2009011255, BfS- RESFOR-/06/09 Janicke L (2015) LASPORT Ein Programmsystem zur Berechnung von Emissionen und Immissionen flughafenbezogener Quellsysteme in der unteren Atmosphäre. Janicke Consulting, Dunum Janicke U, Janicke L (2002) Entwicklung eines Modellgestützten Beurteilungssystems für den Anlagenbezogenen Immissionsschutz. IBJanicke Dunum Janicke U, Janicke L (2011) AUSTAL2000 Stoffe nach TA Luft im Auftrag des Umweltbundesamtes Dessau-Roßlau, Geruchsausbreitung im Auftrag der Landesanstalt für Umweltschutz Karlsruhe, des Niedersächsischen Landesamtes für Ökologie Hildesheim und des Landesamtes NRW Essen. IB Janicke, Überlingen Janicke U, Janicke L (2017) Genaue numerische Lösung und analytische Näherung für das Windprofil über ebenem Gelände. Berichte zur Umweltphysik, Number 8 S. 1–19 Kneschke A (1968) Differentialgleichungen und Randwertprobleme. Teubner Verlagsgesellschaft, Leipzig Naue G (1967) Einführung in die Strömungsmechanik Vorlesung an der Technischen Hochschule Leuna-Merseburg. VEB Reprocolor Leipzig, Werk III/18/6 Nr. 3162/67 Pasquill F (1962) Atmospheric diffusion: The dispersion of windborne material from industrial and other sources. London van Nostrand Rafique M, Nawaz H, Rafique H, Shahid M (2019) Material and method selection for efficient solid oxide fuel cell anode: recent advancements and reviews. Int J Energy Res 43(7):2423–2446 Schenk R (1979) Ein Modell zur Berechnung des grenzüberschreitenden Schadstofftransports. Konferenzmaterial der ehem. DDR zum Umweltschutzkongress auf hoher Ebene in Genf S: 11–22 Schenk R (1980) Numerische Behandlung nichtstationärer Transportprobleme. Habilitation, TU Dresden Schenk R (2014) Expertise zu Austal 2000. Bericht im Auftrag der Vereinigten Warsteiner Kalksteinindustrie, Archiv Westkalk und IBS Schenk R (2015a) AUSTAL2000 ist nicht validiert. Immissionsschutz 01.15 S: 10–21 Schenk R (2015b) Replik auf den Beitrag "Erwiderung der Kritik von Schenk an AUSTAL2000 in Immissionsschutz 01/2015". Immissionsschutz 04.15 S. p. 189–191 Schenk R (2017) The pollutant spreading model AUSTAL2000 Is Not Validated. Environ Ecol Res 5(1):45–58 Schenk R (2018a) Not Only AUSTAL2000 is Not Validated. Environ Ecol Res 6(3):187–202 Schenk R (2018b) Deposition Mans Storage And Not Loss. Environmental Systems Research16. p: 1–14 Schenk R., Andrasch U. (1989) Numerische Simulation von Schadstofftransportvorgängen durch Lösung der Transportgleichung. Zeitschrift für Meteorologie, 30(1989)3 S: 169–175 Schlichting H (1964) Grenzschicht -Theorie. Verlag G, Braun Karlsruhe Schorling M (2009) WinKFZ Verifikation nach VDI 3945. Ingenieurbüro Schorling & Partner, Vagen Schüle W (1930) Technische Thermodynamik. Verlag von Julius Springer, Berlin Simpson D, Benedictow A, Berge H, Bergström R, Emberson LD, Fagerli H, Flechard CR, Hayman GD, Gauss M, Jonson JE, Jenkin ME, Hyiri A, Richter C, Semeena VS, Tsyro S, Tuovinen JP, Valdebenito A, Wind P (2012) The EMEP MSC-W chemical transport model—techical description. Atmos Chem Phys 12:7825–7865 Stephan K, Mayinger F (1992) Thermodynamik, Grundlagen und Technische Anwendungen. Berlin, Heidelberg, New York, London, Paris, Tokyo, Hong Kong, Barcelona, Budapest, 13. Auflage Travnikov O, Ilyin I (2005) Regional Model MSCE-HM of Heavy Metal Transboundary Air Pollution in Europe. EMEP/MSC-E Technical Report 6/2005 Truckenbrodt E (1983) Lehrbuch der angewandten Fluidmechanik. 
Springer-Verlag, Berlin , Heidelberg, New York, Tokyo Trukenmüller A (2016) Äquivalenz der Referenzlösungen von Schenk und Janicke. Abhandlung Umweltbundesamt Dessau-Rosslau S: 1–5 Trukenmüller A (2017) Abhandlungen des Umweltbundesamtes vom 10.02.2017 und 23.03.2017. Dessau-Rosslau S: 1-15 Trukenmüller A, Bächlin W, Bahmann W, Förster A, Hartmann U, Hebbinghaus H, Janicke U, Müller WJ, Nielinger J, Petrich R, Schmonsees N, Strotkötter U, Wohlfahrt T, Wurzler S (2015) Erwiderung der Kritik von Schenk an AUSTAL2000 in Immissionsschutz 01/2015. Immissionsschutz 03/2015 S: 114–126 UBA (2015a) https://www.umweltbundesamt.de/themen/luft/regelungen- strategien/ausbreitungsmodelle-fuer-anlagenbezogene/faq#textpart-1 UBA (2015) https://www.umweltbundesamt.de/themen/luft/regelungen-trategien/ausbreitungsmodelle-fuer-anlagenbezogene/faq#a13-wie-ist-die-kritik-von-r-chenk-in-quotimmissionsschutzquot-012015-zu-bewerten UBA (2018) https://www.umweltbundesamt.de/themen/luft/regelungen-trategien/ausbreitungsmodelle-fuer-anlagenbezogene/uebersicht-geschichte VDI Kommission Reinhaltung der Luft (1988) Stadtklima und Luftreinhaltung. Springer Verlag VDI 3945 Blatt3 (2000) Umweltmeteorologie—Atmosphärisches Ausbreitungsmodell—Partikelmodell. Beuth Verlag Berlin Venkatram A, Pleim J (1999) The electrical analogy does not apply to modeling dry deposition of particles. Atmos Environ 33:3075–3076 Westphal WH (1959) Physik. Springer-Verlag Berlin, Göttingen, Heidelberg. 20. und 21. Auflage Бepлянд ME (1975) Coвpeмeнныe пpoблeмы диффyзии и зaгpязнeния aтмocфepы. Издaтeльcтвo Гидpoмeтeoиздaт The author thanks the meteorologist Dr.rer.nat. Klaus Schiller from the State Environment Agency of Saxony-Anhalt, Halle. He was a participant in the UBA workshops on the presentation of AUSTAL2000 on November 20, 2000 and on January 15 and March 28, 2001. After a thorough review and analysis, he was the first and only one of the participants in the workshop to recognize that all the graphics for spreading, sedimentation and deposition are wrong. He also recognized that the calculated concentration distributions cannot be used. Together, he and the author of this article asked the authors of the AUSTAL in Dunum for clarification. With the explanation that the engineering office Janicke from the Federal Environment Agency, Germany, was not only commissioned with the development but also with the quality assurance at the same time, any demand would be pointless. The author of this contribution continues to thank the environmental engineer Mr. Bergeassessor Dipl.-Ing. Peter Dolch from the company WESTKALK from Warstein. He asked the author of this article again for clarification because of the incomprehensible description of the spread, sedimentation and deposition by AUSTAL. This demand prompted the author of this article to take a closer look at the mathematics and physics of AUSTAL2000 from 2014. In this context, the author would also like to thank the company WESTKALK for the assignment to carry out the first "EXPERTISE ZU AUSTAL2000". Obviously not close enough to Dessau-Rosslau Dr.oec. Ursula Andrasch, MA, worked in the Wittenberg Center for Environmental Design. She graduated from the Leningrad Elite State University in Zhdanov, now St. Petersburg State University, and received her doctorate. The author of this publication thanks for their valuable contributions in the field of modeling propagation, sedimentation and deposition as well as the development of algorithms for the calculation of cross-border pollutant flows. 
The author of this article also thanks the process engineer Dr.-Ing. Ingwalt Friedemann, former Technical College Merseburg. He has proofread all AUSTAL2000 publications since 2014 and followed everything that happened with interest. Since 1972 he was also a member of the main research direction "Air pollution control" of the former GDR. Special thanks to the author of this contribution also go to Prof. Dr. rer.nat. habil. Prof. Ursula Stephan of the former Academy Institute for Chemical Toxicology in Leipzig. Your current job at the Hazardous Substances Office in Halle has shown the dangers to people and the environment if, for example, radioactive or toxic deposits are incorrectly calculated. The author of this publication also thanks the publisher of the German edition of the textbook Бepлянд (1975), Prof. Dr.Ing.habil. Horst Ihlenfeld. For decades he was head of the wind tunnel at the Technical University of Dresden. The author of this article connects him with a fruitful scientific collaboration and personal friendship. He was also a member of the main research in keeping the air clean. Finally, the author thanks Mr. Alfred Trukenmüller from the Federal Environment Agency Dessau-Roßlau. His acquaintance has contributed greatly to the understanding of the AUSTAL dispersion model and its authors. 1968 doctorate to Dr.-Ing. at the Technical University of Merseburg. 1968–1970 additional studies in the field of "Computational Fluid Dynamics" at the Academy of Sciences of the former USSR in Novosibirsk, Akademgorodok. 1970 Lecturer in Theoretical Fluid Mechanics at the Technical University of Merseburg. Since 1972 active in the field of modeling of the spread of air pollutants at the Technical University of Merseburg and member of the main research area air pollution control at the Academy of Sciences of the former GDR. 1978 Calculation of transboundary pollutant flows and international cooperation with the Meteorological Institute of Leningrad University and with the NILU Institute Oslo. 1979 Calculation of long-distance transport Europe. 1979 Development of a 24 h forecast model and application by the Meteorological Service of the former GDR. 1980 Habilitation and scientific work in the field of numerical fluid mechanics and modeling of the spread of air pollutants under the direction of full members of the Academies of Sciences of the former USSR and former GDR Akademik JANENKO, Novosibirsk, and OM ALBRING, Dresden. 1980 participation in the construction of a data center east. 1980 Head of Environmental Monitoring at the Center for Environmental Design Wittenberg. 1982 Lecturer and University Professor of Fluid Mechanics at the Technical University of Zittau. 1985 Model for the calculation of the spread of radionuclides. 2004 honorary professor at the Technical University of Dresden, IHI Zittau. 1996 Research project model for the calculation of the spread of traffic emissions on behalf of the Ministry of the Environment Saxony-Anhalt. 2005 Research project model for the calculation of the expansion of heavy gases and vapors on behalf of the Ministry of the Environment of Saxony-Anhalt. 2007 Research project Mobile Environmental Data AVIS on behalf of the Arbeitsgemeinschaft für industrielle Forschung Berlin. 2008 Research Project Instruments Pollutant Prediction on behalf of the Arbeitsgemeinschaft für industrielle Forschung Berlin. 2010 Model for the calculation of the spread of traffic emissions taking into account moving point sources. 
2015 Software developments for the evaluation of meteorological measurement series and for the development of cause analyses. The work is self-financed. Rainer Schenk. Present address: 06193 Wettin-Löbejün, Germany. Dresden University of Technology, International University Institute Zittau, Zittau, Sachsen, Germany. This article was published only in "Environmental Systems Research". All authors read and approved the final manuscript. Correspondence to Rainer Schenk. Permission to review this work by an ethics committee is granted, and publication of this work is approved. "… as the responsible member of the Federal Environment Agency, I welcome and support constructive discussions about the TA Luft dispersion model", writes Trukenmüller (2017). The UBA will invite to a nationwide congress on the topic "Modeling and calculation of the spread of air pollutants". Until then, it will spread silence rather than information. Schenk, R. Integral sentences and numerical comparative calculations for the validity of the dispersion model for air pollutants AUSTAL2000. Environ Syst Res 9, 28 (2020). https://doi.org/10.1186/s40068-020-00181-6. Keywords: AUSTAL2000, Dispersion calculations, Particle model
CommonCrawl
Effusive crises at Piton de la Fournaise 2014–2015: a review of a multi-national response model A. J. L. Harris1,2, N. Villeneuve3, A. Di Muro3, V. Ferrazzini3, A. Peltier3, D. Coppola4, M. Favalli5, P. Bachèlery1,2, J.-L. Froger1,2, L. Gurioli1,2, S. Moune1,2, I. Vlastélic1,2, B. Galle6 & S. Arellano6 Many active European volcanoes and volcano observatories are island-based and located far from their administrative "mainland". Consequently, Governments have developed multisite approaches, in which monitoring is performed by a network of individuals distributed across several national research centers. At a transnational level, multinational networks are also progressively emerging. Piton de la Fournaise (La Réunion Island, France) is one such example. Piton de la Fournaise is one of the most active volcanoes of the World, and is located at the greatest distance from its "mainland" than any other vulnerable "overseas" site, the observatory being 9365 km from its governing body in Paris. Effusive risk is high, so that a well-coordinated and rapid response involving near-real time delivery of trusted, validated and operational product for hazard assessment is critical. Here we review how near-real time assessments of lava flow propagation were developed using rapid provision, and update, of key source terms through a dynamic and open integration of near-real time remote sensing, modeling and measurement capabilities on both the national and international level. The multi-national system evolved during the five effusive crises of 2014–2015, and is now mature for Piton de la Fournaise. This review allows us to identify strong and weak points in an extended observatory system, and demonstrates that enhanced multi-national integration can have fundamental implications in scientific hazard assessment and response during an on-going effusive crisis. When people think of an eruption at a European volcano, they prepare themselves for a damaging event on Vesuvius (e.g., Zuccaro et al. 2008) or a Laki-type event in Iceland (e.g., Schmidt 2015), maybe even an eruption of Etna (e.g., Chester et al. 2008) or Santorini (e.g., Dominey-Howes and Minos-Minopoulos 2004). However, by population number, by far the largest threat is from small island volcanoes beyond the European mainland and the Mediterranean. As we see from Table 1, at-least 27 active European volcanoes are on small islands. Of these islands, La Réunion Island (Fig. 1a) is probably the largest with an area of 2510 km2, and dimensions of 71 km (NW-SE) by 52 km (NE-SW). On the island no person is further than 57 km from the active volcanic center: Piton de la Fournaise (Fig. 1b). Although small in a territorial sense, on these islands 3.6 million people live within 30 km of an active eruptive center, 19 of which have erupted since 1800, with the impacted populations residing at an average distance of 2400 km from their "mainland" administrative center in continental Europe (Table 1). Of the 27 volcanoes listed in Table 1, 13 (with a total population of 1.1 million) are between 1000 and 2000 km from their mainland administrative centers, and six (accounting for 0.9 million people) are at distances greater than 5000 km. Of these six, three of the four furthest volcanoes from their mainland administrative centers are French (Table 1). 
To this list we can add the sub-marine centers of Mount MacDonald, Mehetia, Mouha Pihaa, Mont Rocard and Tehaitia all of which are in French Polynesia and which erupted in 1989, 1981, 1970, 1972 and 1983, respectively. The capital of French Polynesia, a French Overseas Collective, is Papeete, which is on the island of Tahiti. Papeete is 15,715 km from Paris. Table 1 Sub-areal island volcanoes under European governance with historic activity, populations >100 within 30 km of the active center, and/or eruptions since 1900, as listed by the Smithsonian Institution Global Volcanism Program data base (http://volcano.si.edu/search_volcano.cfm). Distance from each state capital (Paris, Athens, Rome, The Hague, Oslo, Lisbon and Madrid) was obtained using the world distance calculator of GlobalFeed.com. Last eruption is as of 31/12/2016 Locations of a La Réunion island in the Indian Ocean, b the main towns and roads on La Réunion, and c the OVPF permanent monitoring network at Piton de la Fournaise. All places mentioned in the text are located in panels (b) and (c) However, the same active volcanic islands tend to be something of an exotic notion to the mainland population, often being a popular sun-and-beach, tropical vegetation-and-temperature or exotic food-and-rum (or wine) holiday option, with the island being a "sun-mass tourist" destination or popular cruise-ship stop over (e.g., Bardolet and Sheldon 2008; Etcheverria 2014; Garau-Vadell et al. 2014; Silvestre et al. 2008). In some cases, recognizing the island as potentially active may even be deemed unwanted due to potential damage to the same tourism (Dominey-Howes and Minos-Minopoulos 2004). Montserrat is a well-known recent example. During the 1980s Montserrat was an exotic island in the Caribbean, not well-known for its volcanic activity. However, beginning in 1995 renewed activity covered a large part of the southern half of the island, including the principle town (Plymouth), in pyroclastic deposits; necessitating evacuation (Brown 2010). Vulcano (Italy) may be argued to have suffered a similar fate during the 1888–90 eruption. Having purchased and developed the northern part of the island for sulfur-and-alum mining, as well as grape cultivation, in 1870 for £ 8000, the Tyneside-based (UK) entrepreneur, James Stevenson, sold-up and left having seen his beautiful villa and prosperous enterprise destroyed by air fall and ballistics, Vulcano having become 'an awful place' (Stevenson 2009). Thus, for the local populations of such "exotic" locations, with well-established and tight-knit local communities, the hazard, risk, impact and loss, both tangible and intangible, due to a volcanic event is very real (e.g., Payet 2007), as was witnessed during the loss of Kalapana on Kilauea (Hawaii) to lava flow inundation during the 1990s (Weisel and Stapleton 1992). Worse, on-site observatories on active European volcanic islands are either: (i) non-existent, (ii) offshore and/or (iii) lacking in numbers and resource. Thus, during a crisis, the local staff (if there are any) may become spread so thin in collecting and interpreting data, while maintaining equipment, reporting, forecasting and situation-advisory duties, that there is no time to put out calls for help to expand the monitoring and response network. 
Such a beleaguered staff will need all the help that they can get; but, help needs to involve the implementation of tested and trusted techniques that provide product that is useful and that can be merged seamlessly into their response, forecasting and reporting duties (RED SEED Working Group 2016). That is, in the terminology of the remote sensing community, tools need to have been validated against reference data or 'ground-truth' (Lillesand and Kiefer 1987), so that they have gone from experimental to operational (Rudd 1974), thereby being known, trusted, valid and useable. Such help needs to be tested before, not during, a crisis (RED SEED Working Group 2016). In this case, the best way-forward is an ensemble approach whereby external (to the observatory) partners with pertinent expertise are invited to contribute to the response effort. In such a case, all partners need to integrate fully, and openly, with the group so that each partner adapts their strengths, weaknesses and roles as situations and data dictate, with all partners being open to communication and data sharing across the entire group. We here review just such a response model by focusing on a multinational and multidisciplinary group active during the five recent eruptive crises of Piton de la Fournaise (La Réunion Island, France) that occurred between 2014 and 2015; Piton de la Fournaise being the furthest active European island volcano from its mainland administration center, Paris (France). Hazard and response setting In France, three main groups of volcanic overseas territories exist: French Polynesia (Polynesie Française) is a "Collectivité d'Outre Mer" (COM) or an "Overseas Collective". Here the seismic network is run by CEA ("Commissariat à l'énergie atomique et aux énergies Alternatives") and real time data are locally transmitted to the "Laboratoire de Géophysique in Papeete" on Tahiti. The "Terres Australes et Antarctiques Françaises" (TAAF) or the "French Southern and Antarctic Territories" includes those French overseas islands in the Indian and Antarctic Oceans, apart from La Réunion and Mayotte. The "Institut Polaire Française Paul-Émile Victor" in collaboration with the "Institut de Physique du Globe de Strasbourg" (and the Geoscope Observatory) are in charge of monitoring the Antarctic part of TAAF, which are labelled TOM (Territoires d'outre mer). The Austral part of TAAF is not permanently monitored. Guadeloupe, Martinique, La Réunion and Mayotte are "Departements d'Outre Mer" (DOM). Active DOM volcanoes are monitored by "Institut de Physique du Globe de Paris" (IPGP) via a network of local volcano observatories. These are, respectively, Observatoire Volcanologique et Simologique de la Guadeloupe (OVSG), Observatoire Volcanologique et Sismologique de la Martinique (OVSM) and Observatoire Volcanologique Piton de la Fournaise (OVPF). The National Observation Service for Volcanology (SNOV) operated by the "Institut National des Sciences de l'Univers" (INSU) of the "Centre National de la Recherche Scientifique" (CNRS) is in charge of scientific duties, as well as collection and distribution of geological and geophysical data. Although Mayotte was built by volcanic activity (as was all of the Comoros Archipelago), it does not have a permanent seismic network. The most recent volcanic activity on Mayotte was 6.5 kyr BP (Zinke et al. 2001). 
As part of this monitoring system, OVPF was built in 1980 in La Plaine des Cafres (15 km away from Piton de la Fournaise) to monitor volcanic activity on Piton de la Fournaise and Piton des Neiges (Fig. 1b), as well as to track seismic activity on and around La Réunion island. OVPF was set-up in the aftermath of the eccentric 1977 eruption whose lavas inundated the village of Piton Sainte Rose (Fig. 1b). Led by IPGP, whose headquarters are in Paris, OVPF has (as of December 2016) just 12 permanent staff who are based in La Plaine des Cafres (Fig. 1b). Five of these staff are scientists charged with data monitoring (checking data, derived parameters, trends, etc.), five others are engineers charged with instrument and network maintenance and monitoring. All ten have reporting duties both during eruptive and non-eruptive periods (checking activity and data availability, situation updates, and reports to the head of the observatory who prepare official bulletins, etc.). Within the framework of SNOV, OVPF staff collaborate closely with IPGP staff in Paris and other French groups, or National Partners (NP), mainly at La Réunion University, at the Observatoire de Physique du Globe in Clermont-Ferrand (OPGC), and OPGC's academic companion department, Laboratoire Magmas et Volcans (LMV). The time difference between the OVPF and French sites is 2 or 3 h depending on the season, where Paris and Clermont Ferrand is UTC + 1 and La Plaine des Cafres is UTC + 4. Specific agreements between OVPF and other international agencies, or International partners (INP), such as the Hawaiian Volcano Observatory (USGS-HVO, Hawaii, USA), Istituto Nazionale di Geofisica e Vulcanologia (INGV – Pisa, Palermo and Catania, Italy) and Chalmers University of Technology (Gothenberg, Sweden), as well as informal arrangements with INPs such as Università di Torino (Turin, Italy), have also permitted data sharing, technology upgrade and knowledge transfer beyond the national framework. Here we note that, informal agreements based exclusively on mutual and collaborative efforts, as carried out here with Università di Torino, are developed during an eruptive crisis. However, as discussed later, continuity of service provision, data validation efforts, transparency and efficiency then benefit from development of more robust, formal agreements developed during non-eruptive periods. During and between crises the role of OVPF is to communicate with the national, regional and local responding agencies. Following the French national plan for response protocols during a crisis at Piton de la Fournaise, the call-down procedure is laid out in "Organisation de la Réponse de SEcurité Civile" (ORSEC), i.e., the "ORSEC-Piton de la Fournaise" plan. Within this plan OVPF communicates only with the prefecture via the "Etat Major de Zone et de Protection Civile de Ocean Indien" (EMZPCOI). The prefecture (decentralized administrative service of the French state) then communicates with other actors in the response chain. For each eruption, and for any change in volcanic activity, a Volcano Observatory Notice for Aviation (VONA) is also sent to the Volcanic Ash Advisory Center (VAAC) in Toulouse (France). OVPF reporting duties also include communication with the air quality office in Saint Denis (La Réunion). 
The period between 2014 and 2015 was particularly challenging because five eruptions occurred in short succession, with one during June 2014, and then four during 2015 (in February, May, July–August and August–October), the last of which ranked as the fifth largest eruption since records began in 1700 (Michon et al. 2015; Peltier et al. 2016). During these events round the clock service was maintained by OVPF. During non-eruptive periods, when the observatory is not staffed out of working hours, full 24 h service (during all days of the year) is maintained by instrumental monitoring and an alarm, which triggers in the case of a change in seismic activity, where the alert is sent to a scientist-on-duty. Duties at the observatory between eruptions thus continue on a 24/7 basis, and include daily checking of activity and proper functioning of all networks, plus creation of daily reports. OVPF monitoring network On site, OVPF-IPGP maintains four types of ground-based real-time monitoring networks, namely (in order of network size): (i) seismic, (ii) geodetic (deformation), (iii) geochemical (gases), and (iv) imagery, including permanently installed visible and infrared cameras (Fig. 1c). The OVPF team is also charged with performing detailed syn-eruptive sampling and mapping of eruptive products (Additional file 1). At the national level, OPGC and LMV are charged with satellite remote sensing (OI2: "Observatoire InSAR de l'Océan Indien", https://wwwobs.univ-bpclermont.fr/SO/televolc/volinsar/; HotVolc: HotVolc Observing System, https://wwwobs.univ-bpclermont.fr/SO/televolc/hotvolc/), plus petrochemical and volcanological analysis of the eruptive products (Dynvolc: Dynamics of Volcanoes, http://wwwobs.univ-bpclermont.fr/SO/televolc/dynvolc/; GazVolc: Observation des gaz volcaniques, http://wwwobs.univ-bpclermont.fr/SO/televolc/gazvolc/). International collaboration with the INGV in Pisa and the Università di Torino also allows near-real time provision of potential lava flow paths and validated time-averaged lava discharge rate (TADR), respectively. These products have been coupled with operational lava flow modelling at LMV to allow assessment of potential lava flow run-out. Collaboration with Chalmers University (Gothenburg, Sweden) is fundamental for post-processing and validation of SO2 flux data acquired by the permanent NOVAC DOAS network. The Network for Observation of Volcanic and Atmospheric Change (NOVAC) is a system of automatic gas emission monitoring at active volcanoes using a worldwide array of permanently-installed differential optical absorption spectroscopy (DOAS) scanners which measure volcanic gas emissions by UV absorption spectroscopy (Galle et al. 2010). We focus, here, on how these disparate external groups integrated to provide timely and useful product during effusive crises involving emplacement of lava flow fields that threatened infrastructure (mostly the island belt road) between 2014 and 2015. A flowchart synthesizing the information chain, plus the dependence of actions and data between the observatory, is given in Fig. 2. Communication route into OVPF through, and between, national partners (NP) and International partners (INP) during the on-island effusive crises of 2014–2015. Ground truth flow out (via white arrows) of the observatory, model source terms are passed (via blue arrows) between the partners, and products are passed back (via red arrows) to the observatory. 
These are folded into one-voice communication onwards to civil protection (green line), and only on to the media through carefully controlled routes (orange and red lines). The national partners were OPGC (NP-1) for textural and geochemical products, and LMV (NP-2) for lava flow simulations; the International Partners were the Università di Torino (INP-1) for satellite-based TADR provision and INGV-Pisa (INP-2) for lava flow modelling, with there being open, two-way communication routes between all partners.

La Réunion: Volcanic hazard, risk and perception

The relatively late creation of a continuous monitoring system, where the observatory was established in 1979, together with the recent age of permanent human settlement on the island, where the first people arrived from Brittany (France) in the seventeenth century (Vaxelaire 2012a), have meant that the observatory and the population have had to deal with a growing awareness of the variability, in time and space, of the eruptive behavior of Piton de la Fournaise (see Morandi et al. 2016 for a review). Since the creation of the observatory, 66 eruptions (1981–2016) have occurred, with durations of between 0.3 and 196 days, emitting, on average, a bulk volume of 9 × 106 m3 (Peltier et al. 2009; Roult et al. 2012). With the exception of the 1986 and 1998 eruptions, all lava emissions have been confined inside the uninhabited 'Enclos Fouqué' caldera (Fig. 1b), and have involved vents opening inside the summit craters or on the flanks of the central cone, or further away to the east on the floor of the Enclos Fouqué caldera. Eight of these eruptions (March 1986, March 1998, June 2001, January 2002, November 2002, August–October 2004, February 2005, and April 2007) have cut the island belt road – the only link between the southern and northern parts of the island on the eastern flank (RN2, Fig. 1b). Large-volume eruptions have been documented, with that of April 2007 (Staudacher et al. 2009) being of quite short duration (less than 1 month). Effusive events can also be long-lasting (several years), where decade-long phases of continuous activity, punctuated by short-lived explosive events, have been observed in the geological record (Peltier et al. 2012; Michon et al. 2013). In 2007, the most voluminous eruption of historical times occurred at an altitude of 570 m above sea level, 400 m north of the southern wall of the Enclos Fouqué caldera. The eruption buried 1.5 km of the belt road under 60 m of lava, caused gas exposure problems in towns on the east and west coasts, and prompted the evacuation of the nearby village of Le Tremblet. Likewise, the eruptions of 1986 and 2002 also required evacuation of villages north or south of the Enclos Fouqué caldera. There are around 840,000 inhabitants on La Réunion island, and over 245,000 permanent residents on the volcano flanks. Tourism is a major industry, accounting (in 2014) for 7.8% of the Gross Domestic Product, 8500 jobs (3.2% of total employment), 40.7% of total exports and 3.1% of total investment (WTTC 2015), where the warm waters, exotic marine life, white-sand beaches, surfing and tropical coastline are big draws (e.g., The Lonely Planet 2015; Michelin Green Guide 2015). However, tourism has been affected by two water-related hazards, the "shark crisis", where there have been 42 attacks since 1990, and Chikungunya epidemics, both of which have been widely disseminated by global media (e.g., Santora 2015, Stewart 2015, Surfer Today 2016).
Chikungunya is a mosquito-borne virus characterized by arthralgia or arthritis, where an outbreak between March 2005 and April 2006 resulted in 255,000 cases (i.e., it affected 30% of the population), was responsible for 87% of the deaths on the island during the same period (Josseran et al. 2006), and was followed by a sharp downturn in visitor numbers (INSEE 2016). As a result, the tourism strategy has turned to the attractiveness of the terrestrial environment. This includes the draw of an active volcano (Gaudru 2010), which is classed as a World Heritage Site by UNESCO. This policy re-orientation has allowed La Réunion to maintain visitor numbers at a level of around 420,000 per year since 2007, this being the same as the pre-2006 level; numbers had dipped to 300,000 in 2006 following the Chikungunya epidemic (INSEE 2016). Although the tourist office and local media promote Piton de la Fournaise and its activity as a tourist attraction, portraying the volcano as dangerous or hazardous would have a negative effect on tourism. In terms of the resident population, a recent survey on volcanic risk perception revealed a relatively poor knowledge of the volcano and its activity, although the same people had a high level of trust in scientists to provide accurate and reliable information (Nave et al. 2016).

The tourist draw and hazard to tourists

An on-going eruption is a positive draw to the Parc national de La Réunion, which covers 105,000 ha (or 42% of La Réunion island), and to the access town (for Piton de la Fournaise) of La Plaine des Cafres and the visitors center (La Cité du Volcan) dedicated to the volcano and its activity. The first tourist activity at Piton de la Fournaise was in the form of scientific expeditions during the nineteenth century. These expeditions were usually composed of a foreign naturalist on a visit to the island, some local volcano experts, guides, porters, and governors or other senior administrative officials. At that time, a guide was quite rare and difficult to find on the island, and the porters, initially slaves, were often frightened at the idea of going to the "territory of the devil". In 1863, for example, Baron Carl Claus Von Der Decken (Kersten 1871; Kersten et al. 2016) spent 6 days exploring the volcano, departing from Saint Denis and pausing to find a guide at La Plaine des Palmistes. Such expeditions were increasingly facilitated by the opening of a railway, which was constructed along the coast from Saint Denis to Saint Pierre and Saint Benoît, in 1882 (Vaxelaire 2012b). In 1925 a first gîte was built on the site of the present one near Pas de Bellecombe (Fig. 1b), and in 1933 construction of Madame Brunel's Hotel at La Plaine des Cafres (near the current OVPF buildings – Fig. 1b) gave explorers a base camp. In 1957, the Office National des Forêts (National Forestry Office) initiated the construction of the "Route du Volcan", which was completed as far as Pas de Bellecombe in 1968. Germanaz (2005; 2013) estimated the number of visitors to Piton de la Fournaise between 1750 and 1965 as being less than a thousand per year. Between 2011 and 2016, the Office National des Forêts estimated that around 350,000 people per year used the Route du Volcan, with one-in-three visitors hiking down into the Enclos Fouqué caldera. In June 1972, a protocol was put in place whereby policemen prohibited access to Pas de Bellecombe during an eruption.
In November 2002, to limit unauthorized access to the Enclos Fouqué caldera during periods of closure, a gate was installed at Pas de Bellecombe. This gate is a physical means of limiting access to the volcano by blocking the narrow entrance to the only path down the 130–160 m high cliffs of the caldera wall (Fig. 3a). However, the position of lock-down taken by the local civil protection is contrary to certain political, economic and even public wishes to use the volcano as a tourist draw for La Réunion. Strategies to communicate the beauty of the volcano and its eruptions are widespread, ranging from numerous glossy brochures in hotel lobbies and adverts in the local newspapers published by tour operators, guides, adventure companies, and air tours, to press releases to international media and work plans for rangers accompanying hikers during eruptions. The publicity campaign intensifies during periods of eruptive crisis, and includes provision of space on shuttle buses operating between La Plaine des Cafres or Le Tampon and Pas de Bellecombe.

a The gate at the head of the trail down into the caldera at the Pas de Bellecombe entry point between and during effusive crises. b, c Congestion on the Route du Volcan at the Plaine des Sables around 06:40 (local time) on 1 August 2015: the second day of the July–August 2015 eruption. After these photos were taken, parking next to the road was banned and cars were allowed to ascend from La Plaine des Cafres in groups of 100, as space became available in designated parking areas. A mini-bus shuttle service was also added from bases in La Plaine des Cafres and Le Tampon. This caused severe traffic and parking congestion, but no doubt a short economic boom, for the town of La Plaine des Cafres.

On a normal day 2000–3000 people use the main visitor access point for the Dolomieu crater (Bello 2010), this being Pas de Bellecombe (Fig. 1b). This load increases enormously during activity (Fig. 3b, c). Given a traffic density of 270 cars per kilometer at peak flow (estimated from Fig. 3b on the basis of nose-to-tail traffic in both directions at peak flow), this gives 1160 cars parked on the final 4 km of the road to Pas de Bellecombe. If we add the capacity of the Pas de Bellecombe car park (1200 cars), then this amounts to around 2400 cars. Given an average number of four people per car, this is a visitor load on Pas de Bellecombe during the night of 31 July - 1 August 2015 of almost 10,000. We round up because this does not take into account emergency parking opened at La Plaine des Sables, as well as the car park at, and the 1.3 km road to, the Gite du Volcan (Fig. 1b), a restaurant/lodge 900 m north of Pas de Bellecombe into which around 100 stranded tourists broke in, so as to shelter for the same (cold) night. However, this increase in traffic overwhelms parking and access facilities, causing severe congestion in towns lying along access roads to the Enclos Fouqué caldera and Dolomieu crater, including the towns of Saint Philippe and Sainte Rose on the RN2 (Bello 2010). A high tourist load can also be damaging to the flora of the park itself (Bello 2010), and will (i) be associated with an increased number of accidents and illnesses among park visitors (Heggie and Heggie 2004; Heggie 2005), (ii) result in an increased need for emergency search and rescue operations (Heggie 2008; Heggie and Heggie 2008; 2009), and (iii) cause fatalities if tourists stray into dangerous environments (e.g. Heggie 2009) or unstable areas subject to collapse (e.g. Perkins 2006).
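The visitor-load estimate above is simple arithmetic; the short sketch below (in Python) reproduces it using the counts quoted in the text, with the final rounding mirroring the approximation made there.

```python
# Back-of-the-envelope estimate of the visitor load at Pas de Bellecombe
# on the night of 31 July - 1 August 2015, using the figures quoted in the text.

cars_on_road = 1160        # cars parked on the final 4 km of road (from the Fig. 3b traffic density)
car_park_capacity = 1200   # capacity of the Pas de Bellecombe car park
people_per_car = 4         # assumed average occupancy

total_cars = cars_on_road + car_park_capacity      # rounded to "around 2400 cars" in the text
visitor_load = total_cars * people_per_car         # people at Pas de Bellecombe

print(f"Total cars: {total_cars}")                 # 2360
print(f"Estimated visitor load: {visitor_load}")   # ~9400-9600, i.e. "almost 10,000"
```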
Following the 31 July - 2 August 2015 eruption, the local newspaper reported 41 hikers evacuated from closed zones, 14 victims of minor injuries and illnesses, and 11 cases of hypothermia at Pas de Bellecombe, plus four cases of illness, some minor injuries and cases of fatigue on the road between La Plaine des Cafres and Pas de Bellecombe (L.R. 2015). The eruption was just 50 h in duration.

Methodology: Near-real time tools and integration

Our focus here is on running and delivering, in as timely a fashion as possible, lava flow simulations during an effusive crisis to assess and update likely inundation zones. We here use the merged output from two lava flow models: FLOWGO and DOWNFLOW, the FLOWGO model already having been initialized and validated for channel-fed flow at Piton de la Fournaise (Harris et al. 2015). Merged application of FLOWGO-DOWNFLOW has also been tested for near-real time lava flow simulation using a feed of lava time-averaged discharge rate (TADR) derived from 1 km spatial resolution thermal data from satellite-based sensors such as MODIS (Wright et al. 2008). Any lava flow simulation model first requires initialization with vent location, which is provided by OVPF as part of their monitoring and response procedures. The key source term then becomes TADR, which can potentially be obtained in near-real time, and multiple times per day, from satellite-based sensors imaging in the thermal infrared. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) sensor on the Meteosat Second Generation (MSG) satellite series provides thermal infrared data for Piton de la Fournaise at a spatial resolution of 3 km and a nominal temporal resolution of 15 min (Gouhier et al. 2016). These data are potentially capable of providing TADR in a timely fashion. However, tests on GOES data have shown that, although variations in volcanic radiance can be trusted to provide the arrival time of magma at the surface with a precision of 7 ± 7.5 min (Harris et al. 1997a) and to track variations in effusive activity at 15 min time steps (e.g., Mouginis-Mark et al. 2000), TADR derived from 4-km pixel data were not reliable (Dawn Pirie, Hawaii Institute of Geophysics and Planetology, unpublished data, 1999). TADR were thus not included as part of the "hot spot" products delivered to recipient observatories by the Hawaii Institute of Geophysics and Planetology (HIGP, University of Hawaii, Honolulu, USA) because they were not deemed valid or trustworthy (Harris et al. 2001). This is a result of the large pixel size and mixed pixel problems, as well as pixel deformation, where pixels become increasingly large, ovoid, overlapping and rotated with scan angle (Harris 2013). Worse, at high scan angles, or with extreme Earth curvature (which is the case for SEVIRI observations of Piton de la Fournaise), unreliable spectral radiances have been recorded (Holben and Fraser 1984; Singh 1988; Coppola et al. 2010), so that spurious data at high scan angles tend to be filtered out of quantitative analyses (e.g., Tucker et al. 1984; Goward et al. 1991; Harris et al. 1997b). Following Frulla et al. (1995), radiances for volcanic hot spots are deemed unreliable at scan angles of greater than 50°, where SEVIRI views Piton de la Fournaise at an angle of 63.4°.
At such high scan angles, while over-estimates of spectral radiance, and hence TADR, will result from smearing of the anomaly due to extreme pixel overlap effects and point-spread-function problems (e.g., Markham 1985; Breaker 1990; Schowengerdt 2007), underestimates will result from atmospheric effects (Coppola et al. 2010) and topographic shadowing of all or part of the thermal anomaly (Dehn et al. 2002). Even local topographic features, such as cones, levees and skylights, have been shown to play a role in shadowing the anomaly at high scan angles (Mouginis-Mark et al. 1994), causing detection problems even at quite low scan angles for active lava surrounded by topographic highs (e.g., Wooster et al. 1998; Harris et al. 1999; Calder et al. 2004). We see this problem in Fig. 4. TADR derived from the 1-km MODIS data following the method of Coppola et al. (2013) are in agreement with those obtained from aerial photography and gas flux data. However, those obtained from 3-km SEVIRI data are consistently much lower, and show a large degree of scatter. Much of this scatter is due to the fact that the SEVIRI data have not been cleaned for cloud contamination. All the same, the trend apparent in the SEVIRI data does not match that of the MODIS-photogrammetry-gas data. This is likely due to changing shadowing effects (due to growth and decay of cone rims, levees, etc.) and the evolving form of the lava flow field in relation to its location in the detected pixels. This latter effect will continually modify the influence of the point-spread-function which, as noted above, will be exaggerated at high scan angles. We thus prefer to use 1-km spatial resolution data, acquired at low scan angles from polar orbits, for TADR derivation. Such data are nominally available four times per day, a frequency sufficient to describe an effusive event evolving over a time scale of hours (e.g., Wooster and Rothery 1997; Harris et al. 2000b; Wright 2016). Better still, TADR derived from such data have been validated for Piton de la Fournaise by Coppola et al. (2009; 2010), and appear valid from the ground-truth test completed in Fig. 4.

Comparison between SEVIRI- and MODIS-derived TADRs (in cubic meters per second) during the first 4 days of the February 2017 eruption at Piton de la Fournaise with ground truth (i.e., TADR derived from photogrammetry and gas flux). The eruption began at 19:40 local time (15:40 UT) on 31 January 2017.

Other source terms involve chemical and petrological data, to set and check the rheological models used, as well as physical volcanological measurements (at-vent temperature, crystallinity and vesicularity). These data can, in turn, also be used to track the effusive event (Coppola et al. 2017). Finally, data processed post-event can then be used for de-briefing purposes and supplementary validation checks. While we use lava unit area and length derived from observations, photogrammetric surveys and InSAR data (Bato et al. 2016) for checking model-derived lava flow run-outs, SO2 and thermal camera data are used to check the validity of TADR obtained from MODIS data. We here review the methodologies (as well as their application in near-real time, problems and delivery delays) that formed this chain from source term provision, through model execution, to output validation during the five effusive eruptions that comprised the 2014–2015 cycle of activity (Peltier et al. 2016; Coppola et al. 2017) at Piton de la Fournaise.
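The data-screening logic described above reduces, in essence, to discarding cloud-contaminated observations and those acquired above the Frulla et al. (1995) scan-angle threshold. The sketch below illustrates this; the record structure and function name are illustrative only and are not part of any operational MIROVA or HotVolc code.

```python
from dataclasses import dataclass

SCAN_ANGLE_LIMIT_DEG = 50.0   # Frulla et al. (1995) reliability threshold for volcanic hot spots


@dataclass
class Observation:
    sensor: str             # e.g. "MODIS" or "SEVIRI"
    scan_angle_deg: float   # satellite zenith / scan angle at the target
    cloud_contaminated: bool


def usable_for_tadr(obs: Observation) -> bool:
    """Keep only cloud-free observations acquired below the scan-angle threshold."""
    return (not obs.cloud_contaminated) and obs.scan_angle_deg <= SCAN_ANGLE_LIMIT_DEG


observations = [
    Observation("MODIS", 12.0, False),    # near-nadir polar-orbit pass: retained
    Observation("MODIS", 38.0, True),     # cloud contaminated: rejected
    Observation("SEVIRI", 63.4, False),   # SEVIRI view of Piton de la Fournaise: rejected
]

for obs in observations:
    print(obs.sensor, obs.scan_angle_deg, "->", "keep" if usable_for_tadr(obs) else "discard")
```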
OVPF response

At the onset of an eruption, dyke location plus propagation direction and velocity are tracked on monitors in the OVPF operations room, in real-time, using the live data stream from the permanent seismic and geodetic networks. Upon eruption, vent location, eruptive fissure length and geometry, plus the number and activity of eruptive sources, are assessed using the OVPF permanent camera network, whose images are also streamed live to the operations room, and field reconnaissance. Remote surveillance is followed by in-situ inspection and/or a civil protection helicopter-based overflight including one OVPF agent. In-situ GPS-location of effusive vents, sampling of eruptive products (solids and gases), and infrared and visible image surveys are typically performed during the first few hours of an eruptive event, when weather conditions are favorable. Additional file 1 provides an overview of the frequency of: (i) sampling of solid products (pyroclasts, lavas, sublimates), (ii) in situ analyses of gas composition and fluxes, and (iii) thermal and visible camera surveys during the 2014–2015 events. A representative set of solid samples is then sent during, and/or immediately after, each eruptive event to LMV and IPGP for textural, chemical and petrological analyses. Lava flow volume estimates are currently based on field mapping, photogrammetry and InSAR analysis (Peltier et al. 2016). Precise volume estimations are generally performed post-event, or late during a long-lasting event, due to (i) the cost of, and preparation time required for, satellite and/or aerial photography acquisitions, (ii) the need for Ground Control Point (GCP) measurement, and (iii) the complexities of data processing. However, due to new research funding from the Agence Nationale de la Recherche (ANR), a new satellite image-purchase program (Kalidéos 2 by CNES) and implementation of crowd sourcing techniques, volume estimation and lava flow mapping are becoming increasingly available during eruptive events and with low degrees of latency. Crowd sourcing involves use of high-definition drone-derived images available on YouTube, collaboration with a network of professional photographers and journalists, and networking with drone pilots who all provide OVPF with images for photogrammetry free of charge.

TADR derivation using MIROVA

MIROVA (Middle InfraRed Observation of Volcanic Activity) is an automated global hot spot detection system run at the Università di Torino (Coppola et al. 2016). The system is based on near-real time processing of MODerate resolution Imaging Spectroradiometer (MODIS) data to produce hot spot detection, location and tracking products (Fig. 5). MODIS is a multispectral radiometer carried aboard the Terra (EOS-AM) and Aqua (EOS-PM) polar orbiting satellites. MODIS acquires data of the entire Earth's surface in 36 wavebands and offers a temporal coverage of ∼4 images per day at a spatial resolution of 1 km in the infrared (IR) bands, specifically bands 21 and 22 (3.929–3.989 μm, low and high gain, respectively), 31 (10.78–11.28 μm) and 32 (11.77–12.27 μm). Using MODIS, MIROVA completes automatic detection and location of high-temperature thermal anomalies, and provides a quantification of the volcanic radiant power (VRP), within 1 to 4 h of each satellite overpass (Coppola et al. 2016). With each overpass, thermal maps (in .kmz format for use with Google Earth) and VRP time-series are updated on the MIROVA website (www.mirovaweb.it).
This provides the user with immediate access to the post-processed products, allowing visual inspection of the images so that data contaminated by clouds and volcanic plumes, or acquired at poor viewing geometries (i.e., high satellite zenith angles), can be discarded (Coppola et al. 2013).

a MIROVA-derived TADRs (circles) recorded during the May 2015 eruption. Uncertainty related to variable emplacement styles or underlying topography is taken into account by the upper and lower bounds of each TADR estimate (thin solid lines). b Example of a thermal image (Brightness Temperature at 3.9 μm: MODIS band 22) output by the MIROVA system and overlain (in transparency) on Google Earth to allow rapid geolocation of thermally anomalous pixels.

Satellite-based thermal data have been used operationally to estimate lava discharge rates during effusive eruptions since first application in 1997 (Harris et al. 1997a). This approach relies on the observed relationship between lava discharge rate, lava flow area and thermal flux (e.g. Pieri and Baloga 1986; Wright et al. 2001; Harris and Baloga 2009; Garel et al. 2012). For any given eruptive condition, this relationship allows VRP to be set as proportional to the time-averaged lava discharge rate (TADR) using the coefficient of proportionality crad = VRP/TADR (Coppola et al. 2013). Validation by Coppola et al. (2009; 2010; 2013) indicates that the eruptions of Piton de la Fournaise are characterized by a best-fit coefficient of between 1.4 × 108 and 2.9 × 108 J m−3. This range likely reflects variation in eruptive conditions, such as different emplacement styles (i.e., channel- versus tube-fed) or underlying topography (steep versus gentle slopes). Comparison with post-event lava flow volumes indicated that short-lived (<15 days), low-volume (< 3 × 106 m3) eruptions are best described by the upper bound of the coefficient range (Coppola et al. 2017), with the lower bound being more applicable to eruptions lasting more than 2 weeks and emitting more than 5 × 106 m3 of lava. However, in the absence of syn-event validation, upper, median and lower bounds on MIROVA-derived TADR are given to take into account this uncertainty (Fig. 5). During the opening phases of each eruption (i.e., during the first 48 h), as well as during periods of major changes in output, TADR time-series were updated at least four times per day and were delivered, via email, to OVPF for ingestion into the on-going hazard assessment and response. They were also sent to INGV-Pisa and LMV for initialization and updating of model-based lava flow run-out assessments. This service was maintained during the June 2014, May 2015, July–August 2015 and August–October 2015 eruptions.

Initialization and execution of DOWNFLOW and FLOWGO

DOWNFLOW is a stochastic model developed at INGV-Pisa that searches for the most likely array of down-hill paths that a lava flow will follow on a DEM of a given spatial resolution, vertical resolution and error (Favalli et al. 2005). During each eruption, DOWNFLOW was initialized upon reception of the new vent location using the 25-m resolution DEM of Piton de la Fournaise based on the 1997 topography. For each eruption DOWNFLOW was run twice. Each run involved 10,000 iterations, but with the random elevation change introduced at each iteration (Δh) first set at 0.8 m, and then at 2.5 m.
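Stepping back to the VRP-to-TADR conversion described above, the sketch below shows how a single radiant power value is turned into bounding TADR estimates using the published coefficient range for Piton de la Fournaise; the example VRP value is arbitrary and the function is an illustration, not the MIROVA implementation.

```python
# Sketch of the VRP -> TADR conversion described above (after Coppola et al. 2013),
# using the best-fit coefficient range reported for Piton de la Fournaise.
# The VRP value is arbitrary and chosen only for illustration.

CRAD_LOW = 1.4e8    # J m-3, lower bound of the coefficient crad = VRP/TADR
CRAD_HIGH = 2.9e8   # J m-3, upper bound


def tadr_bounds(vrp_watts):
    """Return (lower, mid, upper) TADR in m3/s for a given volcanic radiant power.

    TADR = VRP / crad, so the high coefficient gives the lower TADR bound
    and the low coefficient gives the upper bound.
    """
    lower = vrp_watts / CRAD_HIGH
    upper = vrp_watts / CRAD_LOW
    return lower, 0.5 * (lower + upper), upper


lo, mid, hi = tadr_bounds(2.0e9)   # e.g. a 2 GW radiant power anomaly
print(f"TADR bounds: {lo:.1f} - {hi:.1f} m3/s (mid {mid:.1f} m3/s)")
```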
Based on calibration against the 1998, 2000, 2004 and 2005 lava flows, Δh = 2.5 m gives the best fit for DOWNFLOW in regions proximal to the vent, but Δh = 0.8 m provides the best fit in the distal regions. Each run took less than 1 min to execute. Upon run completion, a text file containing the slope down the line of steepest descent and a map showing all flow paths projected onto the shaded relief of Enclos Fouqué were sent via email attachment to LMV and OVPF. The slope file integrated values over 10 m steps so as to be compatible with ingestion into FLOWGO, whose distance increment is 10 m (Harris and Rowland 2001). Results of the DOWNFLOW runs for the May 2015 eruption are given in Fig. 6. Lava flow paths forecast by DOWNFLOW for the May 2015 eruption for noise levels (Δh) of 0.8 m and 2.5 m on the shaded relief of the same DEM used to run DOWNFLOW. The slope taken down the steepest descent path was that sent to LMV for initialization of FLOWGO FLOWGO is a thermo-rheological model designed to assess the one-dimensional thermal, rheological and dynamic evolution of lava flowing down a channel (Harris and Rowland 2001). Although not intended to give flow length, the point at which lava reaches its freezing point in the model channel is usually close to the actual lava flow run-out distance, if flow is cooling-limited (Harris and Rowland 2001). FLOWGO was initialized for Piton de la Fournaise using geochemical and textural data from the 2010 eruption (Harris et al. 2016) as well as the temperature-dependent viscosity model derived for Piton de la Fournaise by Villeneuve et al. (2008). Initially, FLOWGO was run each time TADR was updated by MIROVA. However, it was found to be more efficient to simply run FLOWGO at a range of TADRs to provide a run-out look-up table, which was updated if TADR rose above, or fell below, the look-up table range. Look-up tables were sent in both graphical form (Fig. 7) and as a two-column text file (giving TADR and run-out) to OVPF by email. FLOWGO run, in terms of velocity of lava flowing in the master channel with distance from the vent, using the first TADR values received during the May 2015 eruption. Red-line gives simulation for the maximum-bound on TADR given by MIROVA (50 m3/s) and blue-line gives the minimum bound (15 m3/s). While the maximum-bound attains the coast (i.e., the edge of the DEM) 9.1 km from the source, the minimum bound reveals a potential run-out of 8.4 km at 15 m3/s. Model was delivered at 10:45 UT on 18 May 2015, based on TADR derived from the 10:20 UT MODIS overpass (with upper bound being based on the maximum recorded during the 19:00 UT overpass on 17 May 2015). The eruption had begun at 11:45 UT on 17 May 2015 Delivery delays Lava flow model product and TADR were delivered to OVPF with a delay of up to 24 h. For a lava flow front advancing at a few tens of meters per hour, and several kilometers distance from vulnerable infrastructure, then a 24 h delay may be reasonable. But for faster moving flows, closer to vulnerable sites, this may need to be reduced; and delivery delay can be reduced to an hour or so. The May 2015 eruption began at 12:45 LT (8:45 UT) on 17 May and two OVPF staff members arrived, by helicopter, at the new eruptive fissure around 3 h later at 15:30 LT (11:30 UT). 
At 18:30 LT (14:30 UT), the two observers decided to stay near the eruptive fissure to make IR camera acquisitions during the night, returning on foot and then by car to OVPF later in the evening, and thus sending the vent coordinate to LMV after closure of the mainland offices (after 21:00 LT; 17:00 UT). All the same, the vent coordinate was sent from LMV to INGV at 22:39 LT (18:39 UT). DOWNFLOW and FLOWGO runs were then executed during the following 4 h, with product being delivered to OVPF at 02:17 LT on 18 May (22:17 UT, 17 May), and thus being picked up by the OVPF director early the following morning. In this case, if OVPF could have returned the position of the main vent to LMV at the point of first observation, then the first simulation could have been delivered to OVPF in less than 5 h. Indeed, in some cases we were able to reduce the delay between announcement of eruption onset and delivery of vent location coordinates, to provision of DOWNFLOW maps and FLOWGO look-up tables, to about 1 h. However, it is currently difficult to guarantee a turn-around of less than 5 h for three reasons: (1) TADR used for initialization of FLOWGO needs to wait for the first cloud-free MODIS overpass; in the best case, this wait-time was just 23 min, but it was typically 3-to-4 h. (2) Due to the lack of GSM (Global System for Mobile Communications) coverage over a large part of the volcano, the ability to communicate between on-site observers and the observatory is extremely limited, and messages need – literally – to be carried back by hand, resulting in delivery delays for vent location if observers are out of range. (3) The management priorities at OVPF, where a very small team needs to deal with all scientific, media and civil defense reporting duties, mean that it may take some time to communicate vent coordinates, especially at the beginning of an eruption. For example, at the beginning of the July 2015 eruption, the director took calls from at least seven journalists during the first 2 h of activity, while also having to organize field crews and meet civil protection call-down duties. Remaining staff were spread thin keeping up with real-time geophysical and field-based surveillance duties and reporting. However, the resulting delay in product delivery of typically 3–4 h was acceptable for the cases tracked here, where lava flows had their sources high on the volcano flanks, at least 5 km from vulnerable infrastructure.

Real time lava flux estimation based on SO2 flux measurements

At the eruptive vent, lava effusion rate is proportional to the gas flux, provided that volatiles are dominantly released by the melt upon ascent, decompression and degassing, and assuming that external sources (or magmatic mineral phases) do not comprise a significant fraction of the gas emission. At Piton de la Fournaise, SO2 flux has been demonstrated to scale linearly with effusion rate during small-to-intermediate volume eruptions (Hibert et al. 2015; Coppola et al. 2017). However, large volume, intense eruptions potentially degas a large volume of magma with respect to the volume erupted (Gouhier and Coppola 2011; Di Muro et al. 2014). During the 2014–2015 eruptions, SO2 fluxes were quantified in real-time by OVPF's permanent DOAS network and through completion of walked traverses along the caldera rim. The three scanning DOAS instruments have been installed as permanent stations, these being Partage (north), Enclos (west), and Bert (south) (Galle et al. 2010).
These locations are all close to the rim of the caldera, and perform continuous scanning of the sky above the Dolomieu cone during daylight hours. Real-time integration of the plume cross section is performed using a set of standard and constant values for wind speed and plume height. Daily post-processing allows the spectral analysis to be refined by using the actual plume height and direction, obtained through triangulation of simultaneous scans from the three stations, and by taking into account wind speed data. During the 2014–2015 eruptions, wind speed data were provided by Meteo-France and were recorded at a station located at Bellecombe, i.e., between the DOAS stations at Partage and Enclos (see Fig. 1 for locations). Wind speed data are acquired hourly by an anemometer installed on a mast 10 m above the ground. Post-processing is carried out in collaboration with Chalmers University and allows SO2 flux to be correlated with daily rainfall data, acquired by the OVPF stations, to further constrain the environmental effects on the gas flux estimates. While the short-lived June 2014 and July–August 2015 eruptions occurred during good weather conditions, periods of rain interrupted the longer eruptions of May 2015 and August–October 2015. Bad weather conditions were dominant during the February 2015 eruption, making any real time assessment of gas emissions unreliable during most of the event. Precise assessment of gas fluxes is also challenging during very short-lived eruptions (lasting just a few hours), especially if a significant part of the eruption occurs at night, when the UV-reliant DOAS acquisition cannot be performed. DOAS sessions are acquired at a high sampling rate (one complete sky-scan every 13 min), but still only cover one third of a day, being limited to daylight hours (which amount to <8 h during the winter). TADR (bulk values) were derived from SO2 emissions using the procedure, validated for the January 2010 eruption, of Hibert et al. (2015), whereby TADR is directly proportional to the SO2 flux, and inversely proportional to the pre-eruptive sulfur content and the density of the degassed magma. This approach requires some assumptions regarding the pre-eruptive sulfur content of the magma, and the density and vesicularity of the emitted lava. The approach becomes challenging for chemically or physically zoned eruptions, when the time evolution of magma chemistry, volatile content and physical properties of the erupted products requires careful estimation of the influence of these parameters on TADR estimations. For example, chemical zoning potentially translates to highly variable initial sulfur contents and produces a high uncertainty on the TADR estimate. We estimate the relative error on SO2-derived TADR to be ±22.5%, where the main source of error is represented by the potentially high variability of lava vesicularity. For both the May and August–October 2015 eruptions, the best fit of estimated volumes to volumes measured by lava mapping (Peltier et al. 2016) was obtained using a low pre-eruptive sulfur content (600 ppm S), which is 55% of the content commonly assumed for undegassed magma stored in the shallow reservoir at Piton de la Fournaise (Di Muro et al. 2014; 2016). In spite of the uncertainties in the estimation of TADR from SO2 flux, the time evolution of TADR was obtained at high frequency using DOAS data and compared well with that provided by MIROVA (Fig. 8).
a Relationships between erupted magma volume (from MIROVA) and total SO2 emissions (from the NOVAC network) for four eruptions of Piton de la Fournaise (Coppola et al., submitted). Dashed lines are the theoretical relationships for initial sulfur contents in the melt of 10, 100 and 1000 ppm. Total SO2 emissions retrieved during short-lived eruptions (June 2014 and July 2015) were strongly underestimated because a significant part of the eruptions occurred at night. b Temporal evolution of erupted lava (bulk) volumes for the May 2015 and c the August–October 2015 eruptions. The cumulative volumes were derived from MODIS (red lines) and from SO2 flux, by assuming different sulfur contents in the magma (gray scale symbols). The yellow stars indicate the final lava flow volume obtained after each eruption from photogrammetry and InSAR analysis (Peltier et al. 2016). Note how the three methods converge when considering the lower estimates provided by MIROVA data and a low pre-eruptive S content (~600 ppm) for Piton de la Fournaise magmas. During each eruption, because of the uncertainty on both MIROVA- and SO2-derived cumulative volumes, such comparisons were carried out in real-time to derive the most likely curve on the basis of convergence, which was only achieved if a low pre-eruptive S content was adopted.

Texture, geochemistry and petrology

Since 1981, lava samples have been collected during or shortly after all eruptions at Piton de la Fournaise. Efforts are made to collect early-erupted lavas and tephra, and to quench molten lava in water. For geochemical measurements, samples need to be as un-contaminated as possible. During the events considered here, 110 samples were collected during or shortly after each eruption by the OVPF team. As listed in Additional file 1, they included lavas, scoria and lapilli, which were both water and air quenched. After preliminary inspection and characterization at OVPF, a representative subset of samples is mailed to LMV for textural analysis, and to LMV and IPGP for geochemical and petrological analysis. As soon as the samples arrive they are macroscopically and microscopically described, and then each sample is divided according to the needs of textural, petrological, and geochemical measurements. Time for delivery of analyses can be up to 20 days. This is mostly due to the delay imposed by sample shipping to the mainland, since sample preparation and lab time can be prioritised during an eruptive crisis, reducing delays due to booking of the preparation and measurement facilities by other projects. Samples for textural analysis are dried in an oven for 24 h and then used for grain size, componentry, connectivity, density, porosity and permeability measurements. Vesicle and crystal contents, as well as their size distributions, are also derived from all pyroclasts and lava samples (see Gurioli et al. (2015), Latutrie et al. (2017) and Colombier et al. (2017) for details regarding standard procedures, plus the meaning and application of the measurements). These measurements are performed to check variation in space (down a fissure or vent system) and in time, both within single eruptions and between different eruptions. Results are also used to check, and update if necessary, the validity of FLOWGO source terms, such as the chemistry-based rheological model, and the vesicularity, density and crystal content values used by FLOWGO (Harris et al. 2016), as well as to allow SO2 emission conversions.
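As an illustration of the SO2-to-TADR conversion used here (after the approach of Hibert et al. 2015), the sketch below converts an SO2 flux into a bulk lava output rate assuming essentially complete degassing of a given pre-eruptive sulfur content; the exact operational formulation may differ, and the density and vesicularity values are simply those quoted in the text.

```python
# Minimal sketch of an SO2-flux -> bulk TADR conversion of the type described above.
# Parameter values are illustrative only (taken from, or consistent with, the text).

M_SO2_OVER_S = 2.0   # mass ratio SO2/S (64/32)


def tadr_from_so2(so2_flux_t_per_day,
                  pre_eruptive_s_ppm=600.0,   # low pre-eruptive S content adopted in the text
                  dre_density=2.88e3,         # kg m-3, dense-rock-equivalent density
                  vesicularity=0.49):         # fraction, used to go from DRE to bulk volume
    """Return a bulk TADR (m3/s) from an SO2 flux (tons/day),
    assuming essentially complete degassing of the dissolved sulfur."""
    so2_kg_per_s = so2_flux_t_per_day * 1000.0 / 86400.0
    sulfur_degassed = pre_eruptive_s_ppm * 1e-6                          # mass fraction of S released
    magma_mass_flux = so2_kg_per_s / (M_SO2_OVER_S * sulfur_degassed)    # kg/s of degassed melt
    dre_volume_flux = magma_mass_flux / dre_density                      # m3/s, dense rock
    return dre_volume_flux / (1.0 - vesicularity)                        # m3/s, bulk lava


print(f"2700 t/day SO2 -> {tadr_from_so2(2700):.1f} m3/s bulk")
print(f" 390 t/day SO2 -> {tadr_from_so2(390):.1f} m3/s bulk")
```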
All textural measurements are performed at the LMV textural laboratory as part of the DYNVOLC "service d'observation" (SO), or observation service (wwwobs.univ-bpclermont.fr/SO/televolc/dynvolc/). For textural purposes, the first objective of sampling in active lava is to try to quench the sample so as to preserve the texture of the active flow and its chemistry (e.g., Cashman et al. 1999; Robert et al. 2014). The second objective is to be as representative as possible of the flow source conditions, which for modelling means sampling as close to the vent as possible to allow source term validation, and then – if there is the luxury – to sample farther down channel so as to provide ground-truth for the model in terms of cooling and crystallization rates (Harris et al. 2016). The third requirement is to be representative of the eruption itself. That is, to be sure that the at-vent thermal, chemical and textural conditions are not changing. To perform systematic observations, it is best to always sample at the flow front at the very beginning of, during, and at the end of the eruption. In this way we always sample the same population. However, flow fronts are not always accessible, and sampling an active channel is not trivial; so the reality of the situation is that we have to collect those samples that we can given difficult and challenging situations (e.g., www.youtube.com/embed/iwwV4hGVEcQ). Whole rock major and trace element concentrations are analysed by ICP-OES and ICP-MS, respectively. Major element compositions of minerals and glass are analysed by EPMA on a subset of quenched samples (naturally quenched in air or water quenched). For geochemical and petrological analysis, samples are cut into centimetre-sized chips, before being crushed into millimetre-sized chips using a set of thermally hardened steel jaws (which were not chemically doped). Finally the sample is powdered in a motorised agate mortar. To reduce cross-contamination, the first powder fraction is discarded, and the second and third powders are used for major and trace element/isotope analysis, respectively. Major elements are analysed by Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES, HORIBA Jobin Yvon Ultima C) following a lithium metaborate (LiBO2) fusion method, and trace elements are analysed using a Quadrupole Mass Spectrometer (ICP-MS, Agilent 7500) following acid dissolution (HF-HNO3) of the sample in Teflon vials. This method allows routine analysis of 47 elements (Li, Be, Sc, Ti, V, Cr, Co, Ni, Cu, Ga, Ge, As, Rb, Sr, Y, Zr, Nb, Cd, In, Sn, Sb, Cs, Ba, Rare Earth Elements, Hf, Ta, W, Tl, Pb, Bi, Th and U), but does not dissolve resistant minerals such as olivine-hosted Cr spinel. High-temperature (220 °C) dissolution of samples with ammonium bifluoride (NH4HF2) (Zhang et al. 2012) is currently being tested to overcome this issue. Routinely measured magma source tracers include Sr, Nd and Pb long-lived radiogenic isotopes. Strontium and Nd are purified using Eichrom specific resins (Sr.Spec and Tru.Spec) and their isotopic compositions are measured by thermal ionisation mass spectrometry (TIMS, Thermo Triton). Lead is separated using Biorad AG-1X8 anionic resin and its isotopic composition is measured using a Neptune Plus multi-collector ICP-MS. A detailed description of trace and isotope analytical methods is given in Vlastelic et al. (2009).
Contamination problems

An issue encountered at Piton de la Fournaise is sample contamination (for some trace elements and Pb isotope compositions) by the tool used to collect and quench the samples. Since 1983, a zinc-coated steel (i.e., galvanized) pipe has been used (Fig. 9a), potentially contaminating samples with siderophile/chalcophile trace elements of geochemical interest, such as Pb and Zn. In addition to direct contamination (Fig. 9b), which is not so problematic in the sense that the pipe mold is not used for chemical analysis, there is evidence of contamination of lava samples that have not been in direct contact with the pipe, where we have found 100–300 μm metal nuggets embedded in the melt (Fig. 9c, d). This indicates that contamination occurred while the lava was still molten. The metal chips include blobs of native iron with an oxidized shell (Fig. 9c) and flakes of Zn oxide (Fig. 9d). Thin coatings of Zn (Fig. 9e) also occur at the surface of iron spherules, suggesting Zn addition from a vapour phase, and iron-oxide coatings occur at the surface, or within vesicles, of some samples (Fig. 9f). However, the origin of the latter deposits remains uncertain as similar deposits occur in naturally quenched scoria (Vlastélic et al. 2016). Bulk trace element concentrations and Pb isotopic compositions of the pipe and the stainless steel bucket used to quench samples are given in Table 2.

Scanning Electron Microscopy (SEM) imaging of the contamination phases found in quenched lava samples. a Image of the sampling pipe showing the pipe interior and Zn coating. b Iron oxide flakes on the mold of the sampling tool (sample 0107–22, July 2001). c Nugget of iron half embedded in the quenched melt. A polished section (inset) shows a grain core of metallic Fe and a thin shell of hematite (sample 030827–1, August 2003). d Flake of Zn oxide half embedded in the quenched melt (sample 070405–1, April 2007). e Iron spherule with a thin Zn oxide coating (sample 030827–1, August 2003). f Silicate spherule with a coating of weakly oxidized Fe (FeO to Fe metal) (sample 77–16, November 1977).

Table 2 Trace element and Pb isotopic composition of the tools used to collect and quench molten lava since 1998.

Elements with the highest enrichment (E) in "pipe" relative to La Réunion basalts are Sb (E = 48), As (E = 38), Zn (E = 21), Mo (E = 17), W (E = 8), Sn (E = 4), Pb and Mn (E = 2). Elements enriched in "bucket" are Mo (E = 1258), Ni (E = 967), Cr (E = 830), W (E = 575), Sb (E = 154), As (E = 86), Sn (E = 74), Co (E = 38), Cu (E = 12) and Mn (E = 10). Magnetic fractions separated from recent (2001–2007) quenched samples have elevated Zn-Pb concentrations (up to 13% for Zn and 450 ppm for Pb). These values exceed those measured in the bulk pipe (0.23% for Zn, and 3.6 ppm for Pb) (Fig. 10). This rules out bulk assimilation of pipe material and suggests either preferential input of the galvanized coating (made essentially of Zn) or deposition of a vapor phase enriched in Zn and Pb. The Pb isotopic signatures of the magnetic fractions separated from quenched lava samples, as well as those of the pipe and the bucket, are given in 207Pb/204Pb versus 206Pb/204Pb isotope space in Fig. 11a. The compositions of the magnetic fractions plot along well-defined mixing lines between lavas and three distinct contaminants. It is clear that lavas quenched between 2003 and 2007 were contaminated by the pipe. The 2001 contaminant had higher 207Pb/204Pb and Zn-Pb concentrations compared with the bulk pipe.
We expect this contaminant to be the Zn coating of the pipe, although we have not measured its Pb isotopic composition. The change of contaminant between 2001 and 2003, despite the use of the same tool, is consistent with progressive abrasion of the galvanized coating. The 1977 contaminant is even higher in terms of 207Pb/204Pb (Fig. 11a). The occurrence of small Fe-Cr-Ni shavings (68 wt.% Fe, 18 wt.% Cr and 9 wt.% Ni) points to stainless steel as being the cause. We hypothesize that contamination arose from the use of a K-type (chromel/alumel) thermocouple (Boivin and Bachèlery 2009), whose mold was found in some quenched samples. This source of contamination does not, however, apply to the 1977, 1979 and 1981 samples because the first K-type thermocouple was not used on Piton de la Fournaise until December 1983. To date, contamination by the sampling tool results in spikes in Zn concentration and Pb isotopes that are superimposed on the otherwise smooth temporal trend (Fig. 11b). Solutions that are currently envisioned to reduce or suppress contamination include the use of high-temperature-resistant ceramics, or a tool made of natural basalt from Piton de la Fournaise.

Zn-Pb concentration plot. Compositions of the magnetic fractions separated from quenched samples are here compared with the bulk compositions of the pipe, the bucket and unquenched lavas. The expected composition of the Zn coating of the pipe (not measured) is indicated (80 wt.% Zn, 0.4 wt.% Pb).

Lead isotope plots. a 207Pb/204Pb versus 206Pb/204Pb plot showing the composition of the magnetic fractions separated from quenched lava samples, and the bulk compositions of the pipe, the bucket and unquenched lavas. b Temporal evolution of 206Pb/204Pb in Piton de la Fournaise lavas. Unradiogenic Pb spikes in October 1977, July 2001, August 2003 and April 2007 result from the contamination of quenched samples.

In total, 127 TADR values were derived and delivered to OVPF by the MIROVA system during the 2014–2015 crises, cloud cover meaning that reporting varied between zero and four TADRs per day, with an average of two per day during the August–October 2015 eruption. During the five 2014–2015 eruptions, the results of 31 sample analyses, plus all DOWNFLOW and FLOWGO model runs, were delivered directly to OVPF. All measurements and products were cross-checked, and merged with data fed back into the communication loop by the monitoring system operated by OVPF. This allowed source term update, uncertainty control and validation, as well as fully constrained event tracking.

MIROVA-derived TADR and cumulative volume

For the duration, erupted volume, peak TADR and mean output rate (MOR), there was a general tendency of increase between each of the five eruptions (Table 3). That is, the first eruption had the smallest intensity and magnitude, and the last eruption had the largest, peaking at a maximum TADR of 59 m3/s. During the whole sequence of eruptions the MOR increased almost linearly,

$$ \mathrm{MOR} = 1.95\,(\text{eruption \#}) - 0.80, \qquad R^2 = 0.731\ \text{(fit is for the range mid-point)} $$

Table 3 Statistics for the MIROVA-derived eruption parameters for, and total SO2 mass emitted during, each eruption.

The first four events had TADR and cumulative volume trends (e.g., Fig. 5) that displayed the classic rapid waxing and waning forms that characterize the eruption of a pressurized source, as defined by Wadge (1981).
However, the stable, generally flat trend (after a short initial peak) of the final and long-lasting eruption is more typical of that witnessed during an eruption that taps an unpressurized source (Harris et al. 2000b). The final eruption did, though, undergo an increase from 7.2 ± 1.4 m3/s between 25 August and 12 October 2015 to 15.9 ± 4.1 m3/s between 13 and 17 October (Fig. 12). The two TADR spikes, or reactivations of the eruption, on 23 and 30 October record two short events with peaks at 32 and 20 m3/s, respectively, at the end of the eruption. Both of these events, plus the peak that ended the main phase of effusion (which was centered on 16 October), ended with abrupt cessations of lava effusion. The peaks were separated by exactly 7 days and each lasted for 2 days.

MIROVA-derived TADRs (circles) and related uncertainty (thin solid lines) recorded during the August–October 2015 eruption.

Viewing of the flow field on 23–24 October was complicated by a wildfire that was burning on the caldera wall and in the same pixel(s) as the cooling flow field. The fire was ignited by lava contact at the base of the caldera wall around 13:00 on 23 October and quickly spread up the wall, requiring rapid evacuation of footpaths and viewing points (Le Journal de L'Ile de la Réunion, 24 October 2015, p. 4). Landsat ETM+ images acquired after the event revealed a 0.2 km2 (1000 × 200 m) burn scar. However, given an estimate of 100–300 MW for the fire radiative power, the contribution to the total radiative power (2000–6000 MW) was not significant, and was of the order of the uncertainty due to the contribution from cooling lava flows emplaced during the main phase of activity (400–500 MW).

Textural characterization and geochemical evolution

Quenched lava samples erupted during June 2014 to August–October 2015 were characterized by a mean porosity of 51% with a standard deviation of 15% (Fig. 13a), where all calculations are based on a DRE (dense rock equivalent) density of 2.88 × 103 kg m−3. Two extreme points (with porosities of 16% and 86%) were measured on the first day of the August–October 2015 eruption and then on 15 September 2015, respectively. Otherwise, the most degassed lava sample was obtained from the June 2014 eruption (Fig. 13a). The lava porosities were comparable to the values obtained for 450 coarse lapilli, which ranged from 36 to 86%. For crystallinity, we analyzed a lava sample with a porosity of 50% from the July 2015 eruption (Fig. 13b). This was characterized by a crystallinity of 20%, which mainly comprised mesocrystals of plagioclase (up to 3 mm in diameter), clinopyroxene (up to 2 mm) and scarce olivine (+ spinel inclusions) in a glassy matrix with microcrystals of the same paragenesis (Fig. 13b, c). Measured ranges were consistent with the parameters (vesicularity: 49%; melt density: 2.80 × 103 kg m−3) selected for conversion of SO2 fluxes into lava output rates.
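The porosity bookkeeping used above follows directly from the bulk and DRE densities; the sketch below makes the relation explicit, with the helper names being illustrative and the densities taken from the values quoted in the text.

```python
# Sketch of the porosity / bulk-density relation used above: porosity is taken as
# 1 - (bulk density / DRE density). Values are those quoted in the text; names are illustrative.

DRE_DENSITY = 2.88e3   # kg m-3, dense rock equivalent used for the 2014-2015 samples


def porosity_from_bulk_density(bulk_density, dre_density=DRE_DENSITY):
    """Fraction of the sample volume occupied by vesicles."""
    return 1.0 - bulk_density / dre_density


def bulk_density_from_porosity(porosity, dre_density=DRE_DENSITY):
    """Inverse relation: bulk density implied by a given porosity."""
    return dre_density * (1.0 - porosity)


# Mean porosity of 51% reported for the 2014-2015 quenched lavas:
print(f"Bulk density at 51% porosity: {bulk_density_from_porosity(0.51):.0f} kg m-3")
# Vesicularity of 49% adopted for the SO2 -> lava output rate conversion:
print(f"Bulk density at 49% porosity: {bulk_density_from_porosity(0.49):.0f} kg m-3")
```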
a Porosity versus time for quenched lava fragments collected during the 2014 and 2015 eruptive events at Piton de La Fournaise; b BSE (Back-Scattered Electron Imaging mode) image of a quenched sample from the July 2015 lava flow, in which: V = vesicles; C = mesocrystals of plagioclase and clinopyroxene; G = glass plus microcrystals of plagioclase, clinopyroxene and olivine; c zoom of the area identified in (b) by the red rectangle, where P = plagioclase and C = clinopyroxene.

July 2015 lava samples were aphyric basalts that mainly contained clinopyroxene and plagioclase microphenocrysts (< 500 μm), with rare olivine microphenocrysts, set in a glassy or fine-grained matrix. Lavas emitted at the beginning of the August–October 2015 eruption had the same modal composition as those of July 2015, with microphenocrysts mainly of clinopyroxene and plagioclase set in a glassy-to-microlitic groundmass. After 15 September (the date on which the porosity trend turned around from its minimum value, Fig. 13a), a change in magma composition became evident as olivine mesocrysts became more frequent, and plagioclase microphenocrysts disappeared between 15 and 27 September (an exception being the lavas of 9 October). Thus, from the end of September until the beginning of October, lavas were aphyric basalts with clinopyroxene and olivine microphenocrysts and mesocrysts. From 9 October onwards, clinopyroxene was no longer observed as microphenocrysts, and only olivine microphenocrysts and mesocrysts (with Cr-spinels in inclusion) were observed. From mid-October to the end of the eruption, lavas were olivine basalts that contained 5–10% of olivine crystals (>500 μm, up to 6 mm in size) set in a matrix containing microlites of clinopyroxene, olivine and plagioclase (+ glass). Lavas erupted between June 2014 and May 2015 underwent a decreasing trend in MgO (6.6–6.1 wt%), Cr (87–58 ppm) and CaO/Al2O3 (from 0.78 to 0.73) (Fig. 14). The lavas erupted on 17 and 18 May 2015 were amongst the most differentiated of the historical period, resembling those produced in March 1998 after 5.5 years of quiescence (Vlastélic et al. 2005). This was consistent with the low pre-eruptive S content of these lavas deduced on the basis of SO2 fluxes. However, a change in behaviour occurred during the May 2015 eruption, when the MgO, Cr and CaO/Al2O3 temporal trends reversed between 18 and 24 May. The new trend was, at first, subtle but became more evident during the subsequent eruption of July–August 2015 and, especially, August–October 2015 (Fig. 14): the long-lived August–October eruption underwent a compositional evolution of MgO from 6.6 to 10.3 wt%. Inspection of Ni-Cr systematics suggested that cumulative olivine occurred in the lavas erupted between 16 and 26 October, which had MgO contents in excess of 9 wt%. The last sample analysed that contained no evidence for cumulative olivine was erupted on 9 October, and had 8.0 wt% MgO, 122 ppm Ni and 302 ppm Cr, indicating the occurrence of a relatively primitive melt.

Whole rock major and trace element compositions plotted versus time for (a) MgO, (b) CaO/Al2O3 and (c) Cr. A log scale is used for Cr to emphasize the trend reversal in May 2015.
Dashed lines indicate: (1) the compositional trend reversal (between 19 and 23 May 2015), which is ascribed to the delivery of new, less differentiated magma, and (2) the arrival of cumulative olivine during the August–October 2015 eruption.

These changes in chemistry had little effect on the modelled lava flow run-outs, with TADR being the main determinant controlling FLOWGO-derived run-out. Likewise, textural changes were within the error of the source parameters set using the 2010 eruption conditions (Harris et al. 2015). However, following the time evolution of bulk magma composition was critical for the interpretation of the time evolution of TADR, which increased during the second half of the August–October eruption, concomitant with the emission of more magnesian basalts. What the evolution of the geochemical and textural parameters did show was that the system was evolving towards an unloading scenario that would result in a terminating effusive "paroxysm" which, in hindsight, signalled the end of this particular cycle (Coppola et al. 2017). FLOWGO was validated through comparison of simulated flow lengths with actual flow lengths. The best data for comparison were achieved during the August–October 2015 eruption, during which cooling-limited flow regimes became established, as opposed to the volume-limited cases of the shorter-duration eruptions, when the eruption ended while flow fronts were still extending. On 2 September, field observations revealed that flow lengths were around 1 km, increasing to 2 km by 4 September. When run at the MIROVA-derived TADR for this period of around 3.5–4.5 m3/s, we obtained a FLOWGO run-out of around 1.25–2 km (Fig. 15). During the eruption, this approach was thus used in a circular fashion: if the TADRs input into the model produced run-outs that agreed with ground truth, then we had confidence in both (i) the TADRs used to initialize the model run, and (ii) the model itself. At the very least, the combination of TADR and flow model was giving reliable run-out estimates, such that we had confidence in the next run-out estimate should TADR increase.

FLOWGO run, in terms of velocity of lava flowing in the master channel with distance from the vent, using the first TADR values (in m3/s) received from MIROVA during 2–4 September 2015, along with field-measured flow lengths for the same period. FLOWGO run-outs, marked by the point where velocity reaches zero (i.e., the lava control volume has stopped), fall within the range of field measurements.

After the eruption, FLOWGO run-outs were checked against flow lengths derived from InSAR mapping (Table 4). Based on satellite overpasses, maps of the lava flow were produced for eight dates during, and after the end of, the eruption. Maps were derived from interferometric coherence maps following the approach developed by Zebker et al. (1996), Dietterich et al. (2012) and Bato et al. (2016). For each date, the lengths of all main lava flow branches active between each overpass were estimated using a stochastic maximum slope path approach, so as to find the flow center-line between the source and the flow front and to extract its distance (e.g., Favalli et al. 2005).
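The stochastic maximum-slope-path idea used here, which is also the essence of DOWNFLOW, can be illustrated with a short sketch: elevations are randomly perturbed by up to ±Δh at each step, the path always moves to the lowest (perturbed) neighbour, and many such noisy descents are accumulated. The DEM below is synthetic and the implementation is a simplified illustration, not the DOWNFLOW code.

```python
import random


def noisy_descent(dem, start, dh, rng, max_steps=500):
    """One stochastic steepest-descent path on a 2-D DEM (list of lists of elevations)."""
    rows, cols = len(dem), len(dem[0])
    r, c = start
    path = [(r, c)]
    for _ in range(max_steps):
        best, best_z = None, dem[r][c]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    # Perturb the neighbour's elevation by a random amount in [-dh, +dh]
                    z = dem[rr][cc] + rng.uniform(-dh, dh)
                    if z < best_z:
                        best, best_z = (rr, cc), z
        if best is None:          # local minimum reached: stop
            break
        r, c = best
        path.append((r, c))
    return path


def flow_path_envelope(dem, start, dh, n_iter=10000, seed=0):
    """Count how often each DEM cell is visited over many noisy descents."""
    rng = random.Random(seed)
    visits = {}
    for _ in range(n_iter):
        for cell in noisy_descent(dem, start, dh, rng):
            visits[cell] = visits.get(cell, 0) + 1
    return visits


# Tiny synthetic DEM sloping down to the right, just to exercise the sketch.
dem = [[100 - 5 * c + (r - 1) ** 2 for c in range(20)] for r in range(3)]
envelope = flow_path_envelope(dem, start=(1, 0), dh=0.8, n_iter=200)
print(f"{len(envelope)} cells touched by 200 noisy descents")
```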
The InSAR analysis revealed that, between 29 September and 13 October 2015, lava flows were cooling-limited where, under a relatively stable TADR, units were extending to 3.6–3.9 km before stalling, so that the following flow was emplaced next to the preceding unit. This built a broad, branching flow field with a low aspect (length/width) ratio, typical of long-lived eruptions at stable TADRs that feed sequential cooling-limited units (Kilburn and Lopes 1988). At this time, FLOWGO run-outs were in good agreement with the mapped lengths, being in the range 3–4 km depending on TADR (Table 4). This comparison reveals that differences between FLOWGO-simulated and InSAR-mapped run-outs for this event were 0.1–0.8 km, or 3–20%. During the short-lived TADR spikes of 17 and 24 October 2015, FLOWGO run-outs were much longer than measured flow lengths (Table 4). This was a result of these events being volume-limited, so that supply was cut before flow could attain its maximum potential distance (Guest et al. 1987), with FLOWGO simulating the cooling-limited length a flow can attain if supply is maintained for a time sufficient for the flow to attain its maximum potential length.

Table 4 MIROVA-derived time-averaged discharge rate (TADR) for dates on which InSAR data are available, with the FLOWGO lava flow run-out that each TADR gives and the InSAR-derived lava flow length for the same day. Δ gives the difference between the InSAR-derived flow length and the FLOWGO run-out

Measurements to support validation of MIROVA-derived TADR were also made during the July–August 2015 eruption. Observations were made of the main active vent and its outlet channel between 10:00 and 15:00 on 1 August 2015. In addition, thermal video was taken at the head of the master channel where it exited the eruptive fissure for 5 min at 11:40 on 1 August, and a water-quenched sample was collected from the same location at 13:15. The channel was 2 m wide and contained a 2 m deep flow with its surface 1 m below the levee rim. Velocities obtained from the thermal video were 0.05–0.1 m/s, for an effusion rate of 0.2–0.4 m3/s. This agrees with MIROVA-derived TADRs of 0.8 ± 0.4 m3/s obtained from the two evening MODIS overpasses at 19:25 and 20:55 on 1 August, and with the value of 0.18 ± 0.6 m3/s obtained on 2 August at 06:20, 5 h before the eruption ended. Because our observations were made during a period of waning activity, where flow levels and velocities in the master channel underwent a noticeable decline after 11:30 on 1 August, we used Jeffreys (1925) to estimate flow velocities under peak-flow conditions. Viscosity was calculated on the basis of the thermal-camera-derived flow temperature (1114 °C), plus sample crystal (19–20%) and/or vesicle (50–58%) content, using Villeneuve et al. (2008), Roscoe (1952), Manga and Loewenberg (2001), Pal (2003) and Llewellin and Manga (2005). Results were in the range 370–700 Pa s. Using this viscosity range in Jeffreys (1925), with the sample density of 1510 kg/m3 and an underlying slope of 5°, yields a peak-flow velocity of 0.9–1.7 m/s, for an effusion rate of 5.35 ± 1.65 m3/s. This matches the MIROVA-derived TADR of 6.25 ± 2.25 m3/s for 10:00 on 1 August. Aerial photographs taken by journalists during the opening day of the eruption (H. Douris, Le Journal de L'Ile de la Réunion, 02/08/15, p. 7) reveal the master channel to have been brim full (i.e., flow depth = 3 m) at that point. This higher flow level yields velocities of 2.1–3.9 m/s, which convert to effusion rates of 12 ± 3.7 m3/s.
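The peak-flow estimates above follow directly from the Jeffreys (1925) relation for laminar channelized flow, v = ρ g d² sin(α) / (n η). The sketch below reproduces the reported order of magnitude using only values quoted in the text; the geometry factor n is an assumption (n = 8, often used when channel width and depth are comparable, recovers the 0.9–1.7 m/s range, whereas n = 3 is the broad sheet-flow limit).

```python
import math

def jeffreys_velocity(density, depth, slope_deg, viscosity, n=8.0):
    """Mean flow velocity (m/s) from the Jeffreys (1925) equation,
    v = rho * g * d^2 * sin(slope) / (n * mu).
    n ~ 3 for broad sheet flow, n ~ 8 when width ~ depth (assumption)."""
    g = 9.81
    return density * g * depth ** 2 * math.sin(math.radians(slope_deg)) / (n * viscosity)

density = 1510.0   # bulk density of the quenched sample, kg/m^3
depth = 2.0        # observed flow depth in the master channel, m
width = 2.0        # channel width, m
slope = 5.0        # underlying slope, degrees

for mu in (370.0, 700.0):              # Pa s, viscosity range quoted in the text
    v = jeffreys_velocity(density, depth, slope, mu)
    q = v * width * depth              # effusion rate, m^3/s
    print(f"mu = {mu:5.0f} Pa s -> v = {v:.2f} m/s, Q = {q:.1f} m^3/s")
```

Running this with the 2 m flow depth gives velocities of roughly 0.9–1.7 m/s and effusion rates of about 3.7–7.0 m3/s, consistent with the 5.35 ± 1.65 m3/s quoted above; repeating the calculation with the 3 m (brim-full) depth scales the velocities by (3/2)², recovering the 2.1–3.9 m/s range.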
The same photographs indicated that two channel systems of similar dimensions were active during 31 July, so the total effusion rate could have been as high as 24 ± 8 m3/s during the opening hours of the eruption. These values again compare well with those derived from MIROVA, which gave 22 ± 8 m3/s from MODIS images acquired at 18:40 and 21:50 on 31 July.

The response: an example from the May 2015 eruption

The May 2015 eruption began at 13:45 (local time) on 17 May at three en-echelon fissures. The shortest (30 m long) and most western fissure was located at 2285 m asl. The second fissure was located 200 m farther east, between 2250 and 2100 m asl, and was 500 m long. The third, and most eastern, fissure was located a farther 1100 m downslope. Located between 2060 and 1980 m asl, it was 360 m long. It is the lava flow that spread from this third fissure that is the subject of this case study. At 17:45, a GPS coordinate was acquired during a helicopter overflight, allowing us to place the flow front at 1.6 km from the main vent, having advanced down to the 1700 m asl elevation and meaning that the flow front had advanced at 6.7 m per minute over the first 4 h of the eruption. By 18 May it was clear that the belt road was in the path of the flow, with the flow having advanced 3.1 km in the first 18 h and now being 4.9 km short of the road (at 150 m asl) and 5.9 km short of the coast. A request thus went out from the OVPF director asking for a risk assessment to be run regarding the exposure of the road to lava inundation. This request was prompted by the fact that the lava flow was, at that time, moving into a zone known as "Grandes Pentes" (big slopes, Fig. 1b). Here slopes are between 30 and 45%. In response, FLOWGO was run at all possible TADRs so as to provide a look-up table for assessment of risk to the road (Fig. 16; see the sketch at the end of this section). These runs made it clear that the road was in possible danger at the TADRs of that date (10.1 and 21.5 m3/s, at 09:40 on 19 May), especially if the upper bound applied. There was also a threat to an OPGC permanent monitoring station, station GPSG, where GNSS and a seismometer are operational (Fig. 1). This station was on the predicted path of the flow, at a distance of 4.2 km from the vent and 1.1 km from the lava flow front position of 18 May at 08:00. It was therefore at risk at TADRs greater than 14 m3/s. On the basis of the precautionary principle, OVPF thus recovered the equipment by a helicopter provided by the Gendarmerie. By the end of the eruption, this branch of the lava flow had stopped less than 150 m from the station, at which time the station was re-installed.

FLOWGO run, in terms of velocity of lava flowing in the master channel with distance from the vent, using TADR values in the range 10–26 m3/s down the May 2015 LoSD. This look-up graph shows that the road will likely be attained by channel-fed flow fed at TADRs greater than 22 m3/s. Any flow fed at a TADR exceeding 24 m3/s will likely reach the coast

On 18 May around 08:00, the flows entered a zone of vegetation that extended between 1450 and 1150 m asl, covering an area of 42,500 m2 at a distance of between 5 and 4.3 km west of the road. At this location, the flow front appeared to slow somewhat (despite the mean slope increasing to 40%). This was probably due to the effect of the vegetation. In addition, average SO2 fluxes quickly decreased from 2700 tons/day on 18 May to 390 tons/day on 19 May, decreasing more progressively thereafter until 23 May.
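The look-up use of the FLOWGO sweep (Fig. 16) amounts to interpolating a pre-computed run-out versus TADR table and comparing the result with the distances to the assets at risk. The sketch below shows that logic; the run-out values in the table are illustrative placeholders, while the road and coast distances are simply the 3.1 km advance plus the 4.9 and 5.9 km remaining distances quoted above.

```python
import numpy as np

# Pre-computed FLOWGO run-outs (km) for a sweep of TADRs (m^3/s) down the
# May 2015 LoSD.  The run-out values below are illustrative placeholders;
# in practice they come from the off-line FLOWGO sweep shown in Fig. 16.
tadr_table = np.array([10., 12., 14., 16., 18., 20., 22., 24., 26.])
runout_table = np.array([3.0, 3.8, 4.6, 5.5, 6.3, 7.2, 8.0, 9.0, 9.8])

DIST_ROAD = 8.0   # km from vent to belt road along the LoSD (3.1 + 4.9, from the text)
DIST_COAST = 9.0  # km from vent to the coast along the LoSD (3.1 + 5.9)

def assess(tadr):
    """Interpolate the look-up table and flag which assets are threatened."""
    runout = float(np.interp(tadr, tadr_table, runout_table))
    return {
        "TADR (m3/s)": tadr,
        "run-out (km)": round(runout, 1),
        "road threatened": runout >= DIST_ROAD,
        "coast threatened": runout >= DIST_COAST,
    }

# TADR bounds received at 09:40 on 19 May 2015
for tadr in (10.1, 21.5):
    print(assess(tadr))
```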
After 23 May, SO2 fluxes were <80 tons/day until the end of the eruption. By 21 May, TADRs had also dropped to 2.5 m3/s (Fig. 5), so that predicted run-outs reduced to 2 km, 6 km above the road. The eruption ended at 20:50 on 31 May, with the longest flows reaching 4.05 km and having extended to within 3.9 km of the road.

The response model

We have reviewed a response model for a crisis at an island volcano that is distant from its administrative center. The model involves synergy between multiple distant nodes so as to create an extended, virtual observatory. At the onset of each eruption the system was triggered by an email from the OVPF director, declaring the date and time of eruption. This was distributed to the email distribution list linking the five institutes involved in this exercise (IPGP, OPGC, LMV, Università di Torino, INGV, Chalmers). Shortly thereafter, the coordinates of the vent location were emailed to all partners and DOWNFLOW was launched at INGV-Pisa. The output line of steepest descent was then handed on to LMV for initialization of FLOWGO. FLOWGO was then run, and output delivered to OVPF, as soon as the first MIROVA-derived TADR arrived (again via email from the Università di Torino to the email distribution list). Initially, FLOWGO was run using the chemical model validated for the 2010 eruption, and the magma composition, crystal and vesicle properties of the same eruption. These source terms were then updated upon receipt of the first chemical and textural analyses. In the case reviewed here, the response model was thus based on coordinated and timely input from six different institutions, three of which were part of the national network charged with formal response (OVPF-IPGP, OPGC and LMV), and three of which were European partners (INGV, Università di Torino, Chalmers University). The flow of source terms, ground truth, product and communication through the system that was developed is given in Fig. 2. Key source terms were vent location, TADR, chemistry, texture, gas fluxes, temperature and LoSD (Table 5), and these were provided by OVPF, Università di Torino, OPGC and Chalmers University. Modeling was then executed by INGV-Pisa and LMV and fed back into the loop. Key to the smooth operation of this system was that information was passed seamlessly between each node, and in the correct order, so as to ensure that products (with known uncertainty) were provided to the observatory director. This information was delivered in a manner and format that was (i) immediately useable, and (ii) trusted. The director was then charged with one-voice communication to the next level in the response chain. The key was that the product had been user-tested, validated and ground-truthed before the crisis, so that useable information (rather than raw data) was provided. Further, the system had sufficient flexibility and communication openness that problems arising during any crisis were clearly communicated, succinctly discussed and then solved in real-time, so that the work flow was modified and evolved as any particular situation evolved.

Table 5 Source terms, ground truth and products, with source, frequency of provision and delivery delay, developed as part of the effusive crisis response network of Fig. 13
On the basis of working together during these five effusive crises, we can identify five crucial components to ensure smooth information flow: (1) a need for TADR validation; (2) exclusion of fires from satellite signals to isolate the volcanic component; (3) an understanding of lava–vegetation interactions; (4) timely provision and validation of model source terms; and (5) a clear statement of, and if possible a reduction in, uncertainty (including cleaning of data sets of unreliable or untrustworthy data points). One unexpected uncertainty which we encountered was due to sample contamination by the sampling device. Chemical and textural samples critical for source term checking need to be uncontaminated, so we need to find a way to sample without (i) introducing chemical artifacts due to the sampling device, (ii) changing the vesicle structure due to shearing on withdrawal of the sampling device from the fluid, or (iii) causing vesiculation during quenching and boiling in water. A realistic sampling protocol also needs to be defined, with no redundancy. That is, if a small, over-worked field crew charged with monitoring an eruption several kilometers from the nearest vehicle access is to be efficient, then the minimum number of most useful measurements needs to be made in a limited amount of time at the most viable (accessible and safe) sites. That is: a realistic and viable sampling protocol needs to be put in place before the crisis.

TADR validation

The best approach to TADR validation is an ensemble-based one that looks for consistency across approaches. In our case the test was: if the satellite-based and the ground-based (SO2- and/or IR-derived) TADR are in agreement, and if, when fed into an appropriately initialized and tested flow model, the simulation runs out to field-observed flow lengths, then the TADR can be trusted (a minimal sketch of this check is given at the end of this section). This is an important need because TADR is intimately related to flow length and area (e.g., Walker 1973; Malin 1980; Pinkerton and Wilson 1994; Calvari and Pinkerton 1998; Murray and Stevens 2000; Harris and Rowland 2009), and hence defines the potential for a lava flow to enter vulnerable areas and cause damage. Collection of field-based TADRs is viable, and can be obtained on the basis of thermal emission or flow dynamics from IR cameras. The thermal approach requires a flow-wide thermal mosaic, which requires a platform, such as an ultra-light aircraft, helicopter or drone, which may be beyond capacity in terms of funds, weather or technology. The ground-based approach requires set-up of a camera within sight of the master channel to acquire flow dimensions and velocity. Although removing the requirement of a platform, it still means that viewing conditions and geometries have to be opportune, and that a field crew can reach the observation site. At this point, satellite-based methods (validated by occasional field spot checks) appear best suited for rapid response, although the satellite-based measurement has to consistently cross-check with ground truth if it is to be trusted. Inter-validation between satellite- and SO2-flux-derived TADR is valid only when the pre-eruptive sulfur content is constant. If the SO2-derived TADR does not agree with the satellite-based values, but the latter provide model-based flow lengths consistent with observed lengths, then we may be able to infer the sulfur output of the erupted lavas.
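As referred to above, the ensemble test can be written as a pair of simple checks: agreement between independent TADR estimates, and agreement between modelled and observed run-out. The sketch below encodes that logic; the tolerances and the SO2-derived value are illustrative assumptions.

```python
def overlaps(a, b):
    """True if two (value, uncertainty) estimates overlap within their stated errors."""
    (va, sa), (vb, sb) = a, b
    return abs(va - vb) <= (sa + sb)

def tadr_trusted(sat_tadr, so2_tadr, modelled_runout_km, observed_runout_km,
                 runout_tol=0.5):
    """Ensemble validation: the satellite TADR is accepted if it agrees with
    the ground-based (SO2- or IR-derived) estimate AND the flow model fed
    with it reproduces the observed flow length within `runout_tol` km."""
    independent_ok = overlaps(sat_tadr, so2_tadr)
    runout_ok = abs(modelled_runout_km - observed_runout_km) <= runout_tol
    return independent_ok and runout_ok

# illustrative check using the 2 September 2015 numbers quoted in the text
print(tadr_trusted(sat_tadr=(4.0, 0.5),      # MIROVA estimate, m^3/s
                   so2_tadr=(3.8, 0.8),      # hypothetical SO2-derived value
                   modelled_runout_km=1.25,
                   observed_runout_km=1.0))
```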
Problems with fires

If, as is likely on a heavily vegetated, tropical island volcano, lava ingress into vegetation ignites a fire, then the fire has to be removed from the heat budget if the TADR is not to be over-estimated using the thermal approach. This is an old problem, but solutions are possible using a combination of spectral and field observations, as was the case here. Indeed, separating the intensity, location and spreading direction of a fire from that of the lava is an extremely useful exercise, as the fire poses a hazard in its own right. A priori knowledge of the pre-burn fuel load (kg m−2) will help to estimate the "fire radiative power" (e.g., Van Wagner 1967; Viskanta and Mengüç 1987; Mell et al. 2009), once the fire-affected area is estimated. A map of the potential biomass energy released per unit mass of fully burnt, dry fuel may thus be useful to correct the radiant power measured by satellite if lava ignites a fire.

Vegetation interaction

Trees may, or may not, affect lava spreading in terms of both a thermal and a mechanical effect. Thermally, a heavily vegetated zone may cause flows to excessively cool and crystallize due to the need to dry and then ignite trees (Van Wagner 1967). Open tree molds may then serve as skylights that allow heat to radiatively escape from the flow interior, as is the case for lava tubes (Witter and Harris 2007). The solidification of lava around trees, as well as the trees themselves, then causes mechanical obstructions. Both effects may serve to slow flow advance, but the problem is totally unconstrained.

Provision and validation of model source terms

Any TADR-derivation model or lava flow simulation model requires input of source terms and validation of output. Crucially this requires data for: chemistry (for the flow viscosity model); eruption temperature (for the flow and TADR model); SO2 gas flux (for TADR validation); flow crystallinity and vesicularity (for the flow and TADR model); plus vent location and an up-to-date DEM, with a horizontal and vertical resolution of less than one meter, for flow direction runs. We find the use of InSAR data to be extremely promising in this regard. The response model described here integrates external partners who need to enter the communication network seamlessly, and provide product that is: (i) trusted and validated, (ii) in a format that is immediately useable, and (iii) useful for monitoring and execution of assessment and reporting duties. Within this network, the operation of any methodology, and sources of uncertainty, need to be well known, and spurious (or unnecessary) information removed. Transparency, efficiency and full documentation are thus key. Raw data will not be used, nor will product which is difficult to interpret or whose source is unknown. What are needed are answers, where the observatory will have defined the questions; and it will be up to the partners to iterate on their answers until product is seamlessly integrated into the workflow. A good state-of-the-art example of a similar, but internally-developed, model is that of the HOTSAT system at INGV's Osservatorio Etneo (Ganci et al. 2016). HOTSAT is designed to allow near-real-time assessment and simulation of effusive crises at Mount Etna (Sicily, Italy). The system uses satellite (SEVIRI and MODIS) data which are fed into an in-house physics-based lava flow propagation model (MAGFLOW), and is based on detection algorithms (Ganci et al. 2012a, 2012b), TADR conversions (Ganci et al.
2012a, 2012b), and lava flow simulations (Del Negro et al. 2008; Hérault et al. 2009) that have been developed, tested and validated in-house (Vicari et al. 2009), before being launched operationally to allow improved crisis response (Vicari et al. 2011a). The workflow is almost identical to that applied here, where (i) hot pixels are located using satellite data, (ii) the flow model is initialized with a DEM and appropriate chemical and physical volcanological parameters, (iii) the model is executed with vent coordinates and satellite-derived TADR, allowing (iv) flow coverage assessments to be delivered for observatory-based hazard assessment purposes (Vicari et al. 2011b). In the case of HOTSAT, the workflow is completed by an observatory-based remote sensing and modeling group which has five members: the same number as the total staff at OVPF charged with monitoring. In our case, the group currently working on the same task (TADR derivation and model execution) does actually number four, but that grouping comes from four institutions in two countries. Periods of major unrest or high-magnitude (explosive) eruptive events have, in the past, prompted assembly of multidisciplinary response teams to support small groups charged with tracking the event. A recent example is the interaction of the UK's Meteorological Office and British Geological Survey with the Icelandic Meteorological Office and the Institute of Earth Sciences at the University of Iceland during the 2010 eruption of Eyjafjallajökull (Donovan and Oppenheimer 2012). Such groupings have also been developed, for example, during the eruption of Nevado del Ruiz in 1985 (Hall 1990), at Nyiragongo during and following the 2002 eruption (Allard et al. 2002; Tedesco et al. 2007), and for Merapi in 2010 (Jousset et al. 2012). Another example is the response to eruptive events provided by the USGS Volcano Disaster Assistance Program, which has collaborated with volcano observatories in 12 countries in connection with at least 30 eruptive crises since 1985 (https://volcanoes.usgs.gov/vdap/). In these cases, large international collaborations were configured to assess volcanic activity and its impacts during and following a single event. The case presented in our study differs from these examples in that (i) the collaboration was coordinated from the local volcano observatory, instead of by an external, international (e.g., USGS) or transnational (e.g., UN, VAAC) organization, (ii) our response model is permanent (that is, we activate the model each time there is an eruption), and (iii) the response targets frequent, but low-magnitude, eruptive events. The cases tested here have, instead, been typical, low-magnitude effusive events at Piton de la Fournaise with limited (in a geographical sense) impact, with lava flow activity being confined to the Enclos Fouqué caldera. In this case the main causes for concern are (i) burial of the island belt road (RN2), (ii) destruction of observatory instrumentation, (iii) evacuation of the Enclos Fouqué caldera, (iv) security of the footpaths, (v) injury to (and evacuation of) tourists, (vi) fatalities among tourists entering the closed zone, and (vii) forest fires ignited by the active lava. For an eruption in inhabited areas outside of the Enclos Fouqué caldera, including densely populated areas such as around Le Tampon (Fig.
1a), the impact and response model would have to be scaled up, other external participants called, and the component models adjusted to suit, and tested and validated on, the new case. We are currently preparing for such an eventuality through an initiative entitled ANR-LAVA (Lava Advance in Vulnerable Areas). This initiative, funded by the French ANR (Agence National de Recherche), supports the group to develop and test the response model and its component parts, including the simulation model which will be based on that of Bernabeu et al. (2016), for effusive events that enter heavily vegetated and/or populated areas. In such a sensitive case there is even less room for error or mis-communication of uncertainty. In France, volcano observatories are dedicated to observations and measurements, plus recording, archiving, communicating and distributing data. During an effusive crisis the observatory director needs to provide local civil protection, and therefore the local municipality, with factual elements that are often quantitative – but always based on trusted measurements. For example, the director needs to support statements such as: "as of 09:45 this-morning discharge rate and flow length was increasing, and the flow front was 5 km above the road". To answer legitimate questions regarding risk, the director will give responses only based on statistics that have been validated and published with appropriate error bars along with providing possible scenarios based on knowledge of past activity at the volcano. A small staff thus needs all the measurement and model based support that it can obtain, as well as base-line data, in order to support such communications. That support needs to be trusted, timely and, above all, validated. Allard P, Baxter P, Halbwachs M, Komorowski J-C. Final report of the French-British scientific team: submitted to the Ministry for Foreign Affairs, Paris, France, foreign office, London, United Kingdom and respective embassies in Democratic Republic of Congo and Republic of Rwanda; 2002. p. 24. Bardolet E, Sheldon PJ. Tourism in archipelagos: Hawai'i and the Balearics. Ann Tour Res. 2008;35(4):900–23. Bato M, Froger J, Harris A, Villeneuve N. Monitoring an effusive eruption at Piton de la Fournaise using radar and thermal infrared remote sensing data: insights into the October 2010 eruption and its lava flows. In: Harris AJL, De Groeve T, Garel F, Carn SA, editors. Detecting, Modelling and responding to effusive eruptions, vol. 426. London: Geological Society, London, Special Publications; 2016. p. 533–52. Bello A. Valorisation des eruptions du Piton de la Fournaise, Ile de la Réunion. Report to the Parc National de La Réunion (Secteur Est); 2010. p. 25. Bernabeu N, Saramito C, Smutek C. Modelling lava flow advance using a shallow-depth approximation for three-dimensional cooling of viscoplastic flows. In: Harris AJL, De Groeve T, Garel F, Carn SA, editors. Detecting, Modelling and responding to effusive eruptions, vol. 426. London: Geological Society, London, Special Publications; 2016. p. 409–23. Boivin P, Bachèlery P. Petrology of 1977 to 1998 eruptions of Piton de la Fournaise, la Réunion Island. J Volcanol Geotherm Res. 2009;184:109–5. Breaker LC. Estimating and removing sensor-induced correlation from advanced very high resolution radiometer satellite data. J Geophys Res. 1990;95(C6):9601–711. Brown L. Birth of a mountain: Montserrat's volcano – an eyewitness account. Blackwater: Sargeant Press; 2010. p. 278. 
Calder E, Harris A, Peña P, Pilger E, Flynn L, Fuentealba G, et al. Combined thermal and seismic analysis of the Villarrica volcano lava lake, Chile. Rev Geol Chile. 2004;31(2):259–72. Calvari S, Pinkerton H. Formation of lava tubes and extensive flow field during the 1991–1993 eruption of Mount Etna. J Geophys Res. 1998;B103:27291–301. Cashman KV, Thornber C, Kauahikaua. Cooling and crystallization of lava in open channels, and the transition of Pāhoehoe lavat o 'A'ā. Bull Volcanol. 1999;61:306–23. Chester DK, Duncan AM, Dibben CJL. The importance of religion in shaping volcanic risk perception in Italy, with special reference to Vesuvius and Etna. J Volcanol Geotherm Res. 2008;172:216–28. Colombier M, Gurioli L, Druitt TH, Shea T, Boivin P, Miallier D, et al. Textural evolution ofmagma during the 9.4-ka trachytic explosive eruption at Kilian volcano, Chaîne des Puys, France. Bull Volcanol. 2017;79:17. doi:10.1007/s00445-017-1099-7. Coppola D, Piscopo D, Staudacher T, Cigolini C. Lava discharge rate and effusive pattern at Piton de la Fournaise from MODIS data. J Volcanol Geotherm Res. 2009;184(1–2):174–92. doi:10.1016/j.jvolgeores.2008.11.031. Coppola D, James MR, Staudacher T, Cigolini C. A comparison of field- and satellite-derived thermal flux at Piton de la Fournaise: implications for the calculation of lava discharge rate. Bull Volcanol. 2010;72(3):341–56. doi:10.1007/s00445-009-0320-8. Coppola D, Laiolo M, Piscopo D, Cigolini C. Rheological control on the radiant density of active lava flows and domes. J Volcanol Geotherm Res. 2013;249:39–48. doi:10.1016/j.jvolgeores.2012.09.005. Coppola D, Laiolo M, Cigolini C, Delle Donne D, Ripepe M. Enhanced volcanic hot-spot detection using MODIS IR data: results from the MIROVA system. In: Harris AJL, De Groeve T, Garel F, Carn SA, editors. Detecting, Modelling and responding to effusive eruptions, vol. 426. London: Geological Society, London, Special Publications; 2016. p. 181–205. First published online 14 May 2015. http://doi.org/10.1144/SP426.5. Coppola D, Di Muro A, Peltier A, Villeneuve N, Ferrazzini V, Favalli M, et al. Shallow system rejuvenation and magma discharge trends at Piton de la Fournaise volcano (la Réunion Island). Earth Planet Sci Lett. 2017;463:13–24. Dehn J, Dean KG, Engle K, Izbekov P. Thermal precursors in satellite images of the 1999 eruption of Shishaldin volcano. Bull Volcanol. 2002;64:525–45. Del Negro C, Fortuna L, Herault A, Vicari A. Simulations of the 2004 lava flow at Etna volcano by the MAGFLOWcellular automata model. Bull Volcanol. 2008;70:805–12. http://doi.org/10.1007/s00445-007-0168-8 Di Muro A, Métrich N, Vergani D, Rosi M, Armienti P, Fougeroux T, et al. The shallow plumbing system of Piton de la Fournaise volcano (la Réunion Island, Indian Ocean) revealed by the major 2007 caldera forming eruption. J Petrol. 2014;55:1287–315. Di Muro A, Métrich N, Allard P, Aiuppa A, Burton M, Galle B, et al. Magma degassing at Piton de la Fournaise volcano. In: Bachelery P, Lenat J-F, Di Muro A, Michon L, editors. Active volcanoes of the Southwest Indian Ocean. Berlin: Springer; 2016. p. 203–22. Dietterich HR, Poland MP, Schmidt DA, Cashman KV, Sherrod DR, Espinosa AT. Tracking lava flow emplacement on the east rift zone of Kīlauea, Hawai'i, with synthetic aperture radar coherence, Geochem. Geophys Geosyst. 2012;13:Q05001. doi:10.1029/2011GC004016. Dominey-Howes D, Minos-Minopoulos D. Perceptions of hazard and risk on Santorini. J Volcanol Geotherm Res. 2004;137:285–310. Donovan A, Oppenheimer C. 
Governing the lithosphere: Insights from Eyjafjallajökull concerning the role of scientists in supporting decision-making on active volcanoes. J Geophys Res. 2012;117(B03214). doi:10.1029/2011JB009080. Etcheverria O. Du vignoble à la destination oenotouristique. L'exemple de l'Ile de Santorin. Cultur – Revista de Cultura e Turismo. 2014;8(3):188–210. Favalli M, Pareschi MT, Neri A, Isola I. Forecasting lava flow paths by a stochastic approach. Geophys Res Lett. 2005;32(L03305). doi: 10.1029/2004GL021718. Frulla LA, Milovich JA, Gagliardini DA. Illumination and observation geometry for NOAA-AVHRR images. Int J Remote Sens. 1995;16(12):2233–53. Galle B, Johansson M, Rivera C, Zhang Y, Kihlman M, Kern C, Lehmann T, Platt U, Arellano S, Hidalgo S. Network for Observation of Volcanic and Atmospheric Change (NOVAC)—A global network for volcanic gas monitoring: Network layout and instrument description. J Geophys Res. 2010;115(D05304). doi:10.1029/2009JD011823. Ganci G, Vicari A, Cappello A, Del Negro C. An emergent strategy for volcano hazard assessment: from thermal satellite monitoring to lava flow modeling, remote Sens. Environment. 2012a;119:197–207. doi:10.1016/j.rse.2011.12.021. Ganci G, Harris AJL, Del Negro C, Guehenneux Y, Cappello A, Labazuy P, et al. A year of lava fountaining at Etna: volumes from SEVIRI. Geophys Res Lett. 2012b;39:L06305. doi:10.1029/2012GL051026. Ganci G, Bilotta G, Capello A, Herault A, Del Negro C. HOTSAT: a multiplatform system for the thermal monitoring of volcanic activity using satellite data. In: Harris AJL, De Groeve T, Garel F, Carn SA, editors. Detecting, Modelling and responding to effusive eruptions, vol. 426. London: Geological Society, London, Special Publications; 2016. p. 207–21. Garau-Vadell JB, Díaz-Armas R, Gutierrez-Taño D. Residents' perceptions of tourism impacts in island destinations: a comparative analysis. Int J Tour Res. 2014;16:578–85. Garel F, Kaminski E, Tait S, Limare A. An experimental study of the surface thermal signature of hot subaerial isoviscous gravity currents: implications for thermal monitoring of lava flows and domes. J Geophys Res. 2012;117(B02205). http://dx.doi.org/10.1029/2011JB008698. Gaudru H. Case study 1: Reunion Island, France – Piton de la Fournaise volcano. In: Erfurt-Cooper P, Cooper M, editors. Volcano and geothermal tourism. London: earthscan; 2010. p. 54–5. Germanaz C. Du pont des navires au bord des cratères: regards croisés sur le Piton de la Fournaise (1653–1964). Itinéraires iconographiques et essai d'iconologie du volcan actif de La Réunion. Paris: Université Paris-Sorbonne, thèse de doctorat; 2005. Germanaz C. Le haut lieu touristique comme objet spatial linéaire : le somin Volcan (île de Réunion) Fabrication, banalisation et patrimonialisation. Cahiers de géographie du Québec. 2013;V57(N162):379–405. doi:10.7202/1026525ar. Gouhier M, Coppola D. Satellite-based evidence for a large hydrothermal system at Piton de la Fournaise volcano. Geophys Res Lett. 2011;38(2):L02302. doi:10.1029/2010GL046183. Gouhier M, Guéhenneux Y, Labazuy P, Cacault P, Decriem J, Rivet S. HOTVOLC: a web-based monitoring system for volcanic hot spots. In: Harris AJL, De Groeve T, Garel F, Carn SA, editors. Detecting, modelling and responding to effusive eruptions, vol. 426, The Geological Society of London. London: Geological Society, London, Special Publications; 2016. p. 223–42. doi:10.1144/SP426.31. Goward SN, Markham B, Dye DG, Dulaney W, Yang J. 
Normalized difference vegetation index measurements from the advanced very high resolution radiometer. Remote Sens Environ. 1991;35:257–77. Gurioli L, Andronico D, Bachèlery P, Balcone-Boissard H, Battaglia J, Boudon G, et al. MeMoVolc consensual document: a review of cross-disciplinary approaches to characterizing small explosive magmatic eruptions. Bull Volcanol. 2015;77:49. doi:10.1007/s00445-015-0935-x. Hall M. Chronology of the principal scientific and governmental actions leading up to the November 13, 1985 eruption of Nevado del Ruiz, Colombia. J Volcanol Geotherm Res. 1990;42(1):101–15. doi:10.1016/0377-0273(90)90072-N. Harris AJL, Baloga S. Lava discharge rates from satellite-measured heat flux. Geophys Res Lett. 2009;36(L19302). doi:10.1029/2009GL039717. Harris AJL, Rowland SK. FLOWGO: a kinematic thermo-rheological model for lava flowing in a channel. Bull Volcanol. 2001;63:20–44. doi:10.1007/s004450000120. Harris AJL, Rowland SK. Effusion rate controls on lava flow length and the role of heat loss: a review. In: Thordarson T, Self S, Larsen G, Rowland S K & Hoskuldsson A, editors. Studies in Volcanology: The Legacy of George Walker. Special Publications of IAVCEI 2; 2009. p. 33–51. Harris AJL, Keszthelyi L, Flynn LP, Mouginis-Mark PJ, Thornber C, Kauahikaua J, et al. Chronology of the episode 54 eruption at Kilauea volcano, Hawaii, from GOES-9 satellite data. Geophys Res Lett. 1997a;24(24):3281–4. Harris AJL, Blake S, Rothery DA, Stevens NF. A chronology of the 1991 to 1993 Etna eruption using AVHRR data: implications for real time thermal volcano monitoring. J Geophys Res. 1997b;102(B4):7985–8003. Harris AJL, Wright R, Flynn LP. Remote monitoring of Mount Erebus Volcano, Antarctica, using polar orbiters: progress and prospects. Int J Remote Sens. 1999;20(15&16):3051–71. Harris AJL, Murray JB, Aries SE, Davies MA, Flynn LP, Wooster MJ, et al. Effusion rate trends at Etna and Krafla and their implications for eruptive mechanisms. J Volcanol Geotherm Res. 2000;102(3–4):237–69. Harris AJL, Pilger E, Flynn LP, Garbeil H, Mouginis-Mark PJ, Kauahikaua J, et al. Automated, high temporal resolution, thermal analysis of Kilauea volcano, Hawaii, using GOES-9 satellite data. Int J Remote Sens. 2001;22(6):945–67. Harris A, Rhéty M, Gurioli L, Villeneuve N, Paris R. Simulating the thermorheological evolution of channel-contained lava: FLOWGO and its implementation in EXCEL. In: Harris AJL, De Groeve T, Garel F, Carn SA, editors. Detecting, modelling and responding to effusive eruptions, vol. 426. London: Geological Society, London Special Publication; 2016. p. 313–36. doi:10.1144/SP426.9. Heggie TW. Reported fatal and non-fatal incidents involving tourists in Hawaii volcanoes National Park, 1992–2002. Travel med Infect Dis. 2005;3:123–31. Heggie TW. Search and Rescue in Alaska's National Parks. Travel med Infect Dis. 2008;6:355–61. Heggie TW. Death by volcanic laze. Wilderness Environ Med. 2009;20(1):101–3. Heggie TW, Heggie TM. Viewing lava safely: an epidemiology of hiker injury and illness in Hawaii volcanoes national park. Wilderness Environ Med. 2004;15:77–81. Heggie TW, Heggie TM. Search and rescue trends and the emergency medical service workload in Utah's national parks. Wilderness Environ Med. 2008;19:164–71. Heggie TW, Heggie TM. Search and rescue trends associated with recreational travel in US national parks. J Transl Med. 2009;16(1):23–7. Hérault A, Vicari A, Ciraudo A, Del Negro C. 
Forecasting lava flow hazards during the 2006 Etna eruption: using the MAGFLOW cellular automata model. Comput Geosci. 2009;35:1050–60. http://doi.org/10.1016/j.cageo.2007.10.008 Hibert C, Mangeney A, Polacci M, Di Muro A, Vergniolle S, Ferrazzini V, Taisne B, Burton M, Dewez T, Grandjean G, Dupont A, Staudacher T, Brenguier F, Shapiro N.M, Kowalski P, Boissier P, Catherine P, Lauret F (2015) Multidisciplinary monitoring of the January 2010 eruption of Piton de la Fournaise volcano, la Réunion island: J Geophys Res 120: 3026. doi:10.1002/2014JB011769. Holben B, Fraser RS. Red and near-infrared sensor response to off-nadir viewing. Int J Remote Sens. 1984;5(1):145–60. INSEE. Réunion: Fréquentation touristique 2015 – La frequentation touristique repart à la hausse. La Réunion, St. Denis), No. 16 (Mai 2016): INSEE Analyses; 2016. p. 4. Jeffreys H. The flow of water in an inclined channel of rectangular section. Philos Mag. 1925;49:793–807. Josseran L, Paquet C, Zehgnoun A, Caillere N, Le Tertre A, Solet J-L, et al. Chikungunya disease outrbreak, Reunion Island. Emerg Infect Dis. 2006;12(12):1994–5. doi:10.3201/eid1212.060710. Jousset S, Pallister J, Boichu M, Buongiorno M, Budisantoso A, Costa F, et al. The 2010 explosive eruption of Java's Merapi volcano—a '100-year' event. J Volcanol Geotherm Res. 2012;241(242):121–35. doi:10.1016/j.jvolgeores.2012.06.018. Kersten O. Baron Carl Claus von der Decken's Reisen in Ost-Afrika in den jahren 1859 bis 1865 (Vol. 2). Ed Leipzig/Heidelberg: Winterliche Verlagshandlung; 1871. Kersten O, Tolède M, Fois-Kaschel G (2016). Les Voyages en Afrique orientale du baron Carl Claus von der Decken. : La Réunion (28 mai - 7 août 1863). Les éditions de Villèle; Cercle des Muséophiles de Villèle, 108 p. DOI: 978-2-905861-28-3. <hal-01367548>. Kilburn CRJ, Lopes RMC. The growth of aa lava fields on Mount Etna, Sicily. J Geophys Res. 1988;93:14759–72. L. R. Une trentaine d'interventions pour le SDIS et le PGHM. Le Journal de la Ile de La Réunion: Monday 3 August 2015. 2015;21 364:7. Latutrie B, Harris A, Médard E, Gurioli L. Eruption and emplacement dynamics of a thick trachytic lava flow of the Sancy volcano (France). Bull Volcanol. 2017;79:4. doi:10.1007/s00445-016-1084-6. Lillesand TM, Kiefer RW. Remote sensing and image interpretation. New York: Wiley; 1987. p. 24–7. Llewellin EW, Manga M. Bubble suspension rheology and implications for conduit flow. J Volcanol Geotherm Res. 2005;143:205–17. doi:10.1016/j.jvolgeores.2004.09.018. Malin MC. Lengths of Hawaiian lava flows. Geology. 1980;8:306–8. Manga M, Loewenberg M. Viscosity of magmas containing highly deformable bubbles. J Volcanol Geotherm Res. 2001;105:19–24. doi:10.1016/S0377-0273(00)00239-0. Markham BL. The Landsat sensors' spatial response. IEEE Trans Geosci Remote Sens. 1985;GE-23:864–75. Mell W, Maranghides A, McDermott R, Manzello SL. Numerical simulation and experiments of burning douglas fir trees. Combustion and Flame. 2009;156:2023–41. Michelin Green Guide. La Réunion. Boulonge Billancourt: Le Guide Verte, Michelin Propriétaires-éditeurs; 2015. p. 258. Michon L, Di Muro A, Villeneuve N, Saint-Marc C, Fadda P, Manta F. Explosive activity of the summit cone of Piton de la Fournaise volcano (la Réunion island): a historical and geological review. J Volcanol Geotherm res. 2013;263:117–33. Mouginis-Mark PJ, Garbeil H, Flament P. Effects of viewing geometry on AVHRR observations of volcanic thermal anomalies. Remote Sens Environ. 1994;48:51–60. Mouginis-Mark PJ, Snell H, Ellisor R. 
GOES satellite and field observations of the 1998 eruption of Volcan Cerro Azul, Galápagos. Bull Volcanol. 2000;62:188–98. Murray JB, Stevens N. New formulae for estimating lava flow volumes at Mt. Etna volcano, Sicily. Bull Volcanol. 2000;61:515–26. Nave R, Ricci T, Pacilli MG. Perception of risk for volcanic hazard in Indian Ocean: la Réunion Island case study. In: Bachelery P, Lenat J-F, Di Muro A, Michon L, editors. Active volcanoes of the Southwest Indian Ocean. Berlin: Springer; 2016. p. 315–26. Pal R. Rheological behavior of bubble-bearing magmas, Earth Planet. Sci Lett. 2003;207:165–79. doi:10.1016/S0012-821X(02)01104-4. Payet G. Les Réunionnais et leur Volcan. Antenne Reunionnaise de l'Institute de Victimologie (St Denis, Réunion); 2007. p. 147. Peltier A, Bachèlery P, Staudacher T. Magma transport and storage at Piton de la Fournaise (la Réunion) between 1972 and 2007: a review of geophysical and geochemical data. J Volcanol Geotherm Res. 2009;184:93–108. Peltier A, Massin F, Bachèlery P, Finizola A. Internal structures and building of basaltic shield volcanoes : the example of the Piton de la Fournaise terminal cone (la Réunion). Bull Volcanol. 2012;74:1881–97. Peltier A, Beauducel F, Villeneuve N, Ferrazzini V, Di Muro A, Aiuppa A, et al. Deep fluid transfer evidenced by surface deformation during the 2014-2015 unrest at Piton de la Fournaise volcano. J Volcanol Geotherm Res. 2016;321:140–8. doi:10.1016/j.jvolgeores.2016.04.031. Perkins MC. Surviving Paradise. USA: Quidnunc Press; 2006. p. 338. Pieri DC, Baloga SM. Eruption rate, area, and length relationships for some Hawaiian lava flows. J Volcanol Geotherm Res. 1986;30:29–45. Pinkerton H, Wilson L. Factors effecting the lengths of channel-fed lava flows. Bull Volcanol. 1994;56:108–20. RED SEED Working Group. Conclusion: recommendations and findings of the RED SEED working group. In: AJL H, De Groeve T, Garel F, Carn SA, editors. Detecting Modelling and responding the effusive eruptions, vol. 426. London: Geological Society London Special Publications; 2016. p. 567–648. Robert B, Harris A, Gurioli L, Médard E, Sehlke A, Whittington A. Textural and rheological evolution of basalt flowing down a lava channel. Bull Volcanol. 2014;76:824. doi:10.1007/s00445-014-0824-8. Roscoe R. The viscosity of suspensions of rigid spheres. Br J Appl Phys. 1952;3:267–9. doi:10.1088/0508-3443/3/8/306. Rudd DR. Remote Sensing: A better view. Belmont: Duxbury Press; 1974. p. 12–3. Santora M. Réunion, once a surfer's paradise, finds only sharks in its waters. NY: The New York Times (New York Edition), 12 August 2015; 2015. p. A5. Schmidt A. Volcanic gas and aerosol hazards from a future Laki-type eruption in Iceland. In: Papale P, editor. Volcanic hazards, risks and disasters, hazards and disasters series. Amsterdam: Elsevier; 2015. p. 377–97. Schowengerdt RA. Remote sensing: models and methods for image processing. Burlington: Academic Press; 2007. p. 515. Silvestre AL, Santos CM, Ramalho C. Satisfaction and behavioural intentions of cruise passengers visiting the Azores. Tour Econ. 2008;14(1):169–84. Singh SM. Simulation of solar zenith angle effect on global vegetation index (GVI) data. Int J Remote Sens. 1988;9(2):237–48. Staudacher T, Ferrazzini V, Peltier A, Kowalski P, Boissier P, Catherine P, et al. The April 2007 eruption and the Dolomieu crater collapse, two major events at Piton de la Fournaise (la Réunion Island, Indian Ocean). J Volcanol Geotherm Res. 2009;184:126–37. doi:10.1016/j.jvolgeores.2008.11.005. Stevenson H. 
Jobs for the boys: the story of a family in Britain's imperial heyday. Ipswich: Dove Books; 2009. p. 377. Stewart R. Reunion shark attacks scare surfers and tourists from beaches. The Daily Telegraph; 2015. www.telegraph.co.uk/travel/news/. Downloaded 20/01/17. Surfer Today. Is there a solution for the shark attack drama in Reunion Island? SurferToday; 2016. www.surfertoday.com/environment/12880. Downloaded 20/01/17. Tedesco D, Badiali L, Boschi E, Papale P, Tassi F, Vaselli O, et al. Cooperation on Congo volcanic and environmental risks. Eos, Transactions American Geophysical Union. 2007;88(16):2324–9250. http://dx.doi.org/10.1029/2007EO160001 Tucker CJ, Gatlin JA, Schneider SR. Monitoring vegetation in the Nile delta with NOAA-6 and NOAA-7 AVHRR imagery. Photogramm eng Remote Sens. 1984;50(1):53–61. Van Wagner CE. Calculations on forest fire spread by flame radiation. Ottowa: Forestry Branch Departmental Publication 1185, Queen's Printer and Controller of Stationary; 1967. p. 14. Vaxelaire D. L'histoire de La Réunion 1. Des origines à 1848. Saint-Denis: Édtions Orphie; 2012. p. 350. Vaxelaire D. L'histoire de La Réunion 2. De 1848 à 2012. Saint-Denis: Édtions Orphie; 2012. p. 703. Vicari A, Ciraudo A, Del Negro C, Herault A, Fortuna L. Lava flow simulations using discharge rates from thermal infrared satellite imagery during the 2006 Etna eruption. Nat Hazards. 2009;50:539–50. http://doi.org/10.1007/s11069-008-9306-7 Vicari A, Ganci G, Behncke B, Cappello A, Neri M, Del Negro C. Near-real-time forecasting of lava flow hazards during the 12–13 January 2011 Etna eruption. Geophys Res Lett. 2011;38:L13317. http://doi.org/10.1029/2011GL047545 Vicari A, Bilotta G, et al. LAV@HAZARD: web-Gis interface for volcanic hazard assessment. Ann Geophys. 2011;54:662–70. http://doi.org/10.4401/ag-5347 Villeneuve N, Neuville DR, Boivin P, Bachèlery P, Richet P. Magma crystallization and viscosity: a study of molten basalts from the Piton de la Fournaise volcano (la Réunion island). Chem Geol. 2008;256:242–51. Viskanta R, Mengüç MP. Radiation heat transfer in combustion systems. Prog Energy Combust Sci. 1987;13:97–160. Vlastélic I, Staudacher T, Semet M. Rapid change of lava composition from 1998 to 2002 at Piton de la Fournaise (Réunion) inferred from Pb isotopes and trace elements: evidence for variable crustal contamination. J Petrol. 2005;46:79–107. Vlastélic I, Deniel C, Bosq C, Télouk P, Boivin P, Bachèlery P, et al. Pb isotope geochemistry of Piton de la Fournaise historical lavas. J Volcanol Geotherm Res. 2009;184:63–78. Vlastélic I, Gannoun A, Di Muro A, Gurioli L, Bachèlery P, Henot JM. Origin and fate of sulfide liquids in hotspot volcanism (la Réunion): Pb isotope constraints from residual Fe-cu oxides. Geochim Cosmochim Acta. 2016;194:179–92. Wadge G. The variation of magma discharge during basaltic eruptions. J Volcanol Geotherm Res. 1981;11:139–68. Walker GPL. Lengths of lava flows. Philos Trans R Soc Lond. 1973;274:107–18. Weisel D, Stapleton F. Aloha O Kalapana. Honolulu: Bishop Museum Press; 1992. p. 153. Witter JB, Harris AJL. Field measurements of heat loss from skylights and lava tube systems. J Geophys Res. 2007;112 (B01203). doi:10.1029/2005JB003800. Wooster MJ, Rothery DA. Time-series analysis of effusive volcanic activity using the ERS along track scanning radiometer: the 1995 eruption of Fernandina volcano, Galápagos Islands. Remote Sens Environ. 1997;62:109–17. Wooster MJ, Rothery DA, Kaneko T. 
Geometric considerations for the remote monitoring of volcanoes: studies of lava domes using ATSR and the implications for MODIS. Int J Remote Sens. 1998;19(13):2585–91. Wright R. MODVOLC: 14 years of autonomous observations of effusive volcanism from space. In: Harris AJL, De Groeve T, Garel F, Carn SA, editors. Detecting, Modelling and responding to effusive eruptions, vol. 426. London: Geological Society, London, Special Publications; 2016. p. 23–53. Wright R, Blake S, Harris AJL, Rothery DA. A simple explanation for the space-based calculation of lava eruption rates. Earth Planet Sci Lett. 2001;192:223–33. doi:10.1016/S0012-821X(01)00443-5. Wright R, Garbeil H, Harris AJL. Using infrared satellite data to drive a thermo-rheological/stochastic lava flow emplacement model: A method for near-real-time volcanic hazard assessment. Geophys Res Lett. 2008;35(L19307): doi:10.1029/2008GL035228. WTTC. Travel & Tourism Economic Impact 2015 Reunion. London: Annual Report of the World Travel and Tourism Council; 2015. p. 24. Zebker HA, Rosen P, Hensley S, Mouginis-Mark PJ. Analysis of active lava flows on Kilauea volcano, Hawaii, using SIR-C radar correlation measurements. Geology. 1996;24:495. Zhang W, Hu Z, Liu Y, Chen H, Gao S, Gasching RM. Total rock dissolution using ammonium bifluoride (NH4HF2) in screw-top Teflon vials: a new development in open-vessel digestion. Anal Chem. 2012;84:10686–93. Zinke J, Reijmer JJG, Thomassin BA. Seismic architecture and sediment distribution within the Holocene barrier reef-lagoon complex of Mayotte (Comoro archipelago, SW Indian Ocean). Palaeogeogr Palaeoclimatol Palaeoecol. 2001;175:343–68. Zuccaro G, Cacace F, Spence RJS, Baxter PJ. Impact of explosive eruption scenarios at Vesuvius. J Volcanol Geotherm Res. 2008;178:416–53. We thank two anonymous reviews for their input, and the patience of the Applied Volcanology Editors Office for allowing us two extensions as we were distracted by two eruptive crisis during the submission (31 January – 5 February 2017) and correction (18-19 May 2017) of this manuscript, respectively. We thank the STRAP project funded by the Agence Nationale de la Recherche (ANR-14-CE03-0004-04) and the OMNCG OSU program from la Reunion University for supporting DOAS data acquisition and processing. This work was funded by the Agence National de la Recherche (ANR: www.agence-nationale-recherche.fr) through project ANR-LAVA (ANR Program: DS0902 2016; Project: ANR-16 CE39-0009, PI: A. Harris, Link: www.agence-nationale-recherche.fr/?Project=ANR-16-CE39-0009). This is ANR-LAVA contribution no. 1. All authors read and approved the final manuscript. The authors declare that they have no competing interests Université Clermont Auvergne, CNRS, IRD, OPGC, Laboratoire Magmas et Volcans, F-63000, Clermont-Ferrand, France A. J. L. Harris , P. Bachèlery , J.-L. Froger , L. Gurioli , S. Moune & I. Vlastélic Observatoire de Physique du Globe Clermont Ferrand (OPGC), Campus Universitaire des Cézeaux, 4 Avenue Blaise Pascal, TSA 60026 – CS 60026, 63178, Aubière CEDEX, France Observatoire Volcanologique du Piton de la Fournaise (OVPF), Institut de Physique du Globe de Paris, Sorbonne Paris Cité, Univ. Paris Diderot, CNRS, F-97418, La Plaine des Cafres, La Réunion, France N. Villeneuve , A. Di Muro , V. Ferrazzini & A. Peltier Dipartimento di Scienze della Terra, Università degli Studi di Torino, Via Valperga Caluso 35, 10125, Torino, Italy D. Coppola Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via della Faggiola, 32, 56126, Pisa, Italy M. 
Favalli Department of Space, Earth and Environment, Chalmers University of Technology, SE-412 96, Gothenburg, Sweden B. Galle & S. Arellano Correspondence to A. J. L. Harris. Sample data base with delivery and analysis check-list for all samples collected to track activity at Piton de la Fournaise between 2014 and 2015. Table S1. Sample archive, Part 1: Sample notes and descriptions. Samples are listed in order of eruption, then by order of collection. Eruptions are labelled by date of first sample collection in each case. Table S2. Sample archive, Part 2: Sample collection details. Samples are listed in order of eruption, then by order of collection. Eruptions are labelled by date of first sample collection in each case. Table S3. Sample archive, Part 3: Analysis completed. Samples are listed in order of eruption, then by order of collection. Eruptions are labelled by date of first sample collection in each case. G3 = Morphologi G3. (DOCX 125 kb) Harris, A.J.L., Villeneuve, N., Di Muro, A. et al. Effusive crises at Piton de la Fournaise 2014–2015: a review of a multi-national response model. J Appl. Volcanol. 6, 11 (2017) doi:10.1186/s13617-017-0062-9 Accepted: 29 May 2017 Effusive crisis Volcano observatory Time averaged discharge rates Lava flow model Inundation forecasts
Towards a pancreatic surgery simulator based on model order reduction

Andrés Mena1,3, David Bel1, Icíar Alfaro1, David González1, Elías Cueto ORCID: orcid.org/0000-0003-1017-43811 & Francisco Chinesta2

In this work a pancreatic surgery simulator is developed that provides the user with haptic feedback. The simulator is based on the use of model order reduction techniques, particularly Proper Generalized Decomposition methods. The simulator developed here presents some notable advancements with respect to existing works in the literature, such as the consideration of non-linear hyperelasticity for the constitutive modeling of soft tissues, an accurate description of contact between organs, and momentum- and energy-conserving time integration schemes. Pancreas, liver, gall bladder and duodenum are modeled in the simulator, thus providing a very realistic and immersive perception to the user.

It is now well known and scientifically demonstrated that the use of surgery simulators provides the practitioner with a fast method to develop the necessary skills [1]. And this is despite the well-known limitations that surgical simulators still have nowadays [2, 3]. This is due to the complexity of the problem and the need for a feedback response at some 500 Hz to 1 kHz. Indeed, the problem is highly non-linear, due to both the constitutive modeling of soft living tissues, frequently considered as hyperelastic, and the non-linear phenomena taking place in the operating room: contact, friction, cutting, etc. All these difficulties make the development of surgery simulators a delicate task that has faced important obstacles in the last decades. It has not been until very recently that truly non-linear constitutive models have been developed [4–6]. In general, they are based on the use of explicit finite element simulations, which allow for a fast resolution, element by element, of the equations of motion. However, these explicit algorithms are not unconditionally stable, and often lack appropriate energy conservation. Recently, model order reduction techniques [7–9] have opened a different way of looking at real-time simulation. These techniques, which essentially develop models with a minimal number of degrees of freedom, seem to be very well suited for the purpose of developing a real-time simulator [5, 10–12]. However, they are not free of limitations. In particular, projection-based (a posteriori) model order reduction techniques very often lack efficiency in non-linear problems, where the complete system of equations needs to be rebuilt in order to perform consistent linearization, thus losing all the expected gain. Methods such as the empirical interpolation method [13] or the coupling with Asymptotic Numerical Methods [14] aim at solving these deficiencies. Proper Generalized Decomposition (PGD) techniques [15, 16], however, operate in a slightly different way. They cast the problem in a parametric way, by considering every possible parameter of the problem (the position of application of the load, material parameters, etc.) as a new dimension in the phase space, and then solving the resulting high-dimensional problem off-line, once and for all. A sort of response surface is then obtained that has been coined a computational vademecum [17], in opposition to traditional response surface methodologies, which need a well-developed campaign of computer experiments.
This work is thus aimed at developing a prototype of a real-time simulator for pancreatic surgery (very few examples exist, see [18]), able to provide an immersive response for surgery training and planning. This simulator should be able to run on standard laptops, without any supercomputing facility, thus being able to be used in the operating room (OR). In this framework, the PGD-based surgery simulator developed here is composed of the necessary number of organ vademecums (depending on the particular type of surgery considered). These organ vademecums, which provide the system with each organ's response to a load applied at any point of its surface, are then assembled together by considering their relative contact, finally giving a very realistic haptic sensation and immersive feeling.

Pancreatic cancer is one of the most severe illnesses, ranking fourth in fatality among all types of cancer. Every year some 233,000 new cases are diagnosed worldwide. The characterization of pancreatic cancer is extremely complex, with different types such as pancreatic cystic neoplasms and pancreatic neuroendocrine tumors among them, and often the preoperative procedures do not offer a conclusive diagnosis. Therefore, it is of utmost interest to have a system able to provide the surgeon with an augmented/virtual reality-based experience that could eventually help to make a diagnosis. The complexity of the diagnosis is only one of the possible sources of difficulty. Under the name pancreatectomy (the surgical removal of all or part of the pancreas) different surgical procedures are encompassed. In pancreaticoduodenectomy, for instance, the distal segment of the stomach, the first and second portions of the duodenum, the head of the pancreas, the bile duct, and the gallbladder are removed. This gives an idea of the difficulty of simulating such a surgery, see Fig. 1, where only the liver, duodenum, gall bladder and pancreas have been represented.

Anatomy of the considered organs for the simulation of pancreatectomy. Source [19]

In the sequel we develop models for the aforementioned organs, considered as the most representative of the type of surgery at hand. Previously, and for the sake of completeness, we recall in "A review of PGD methods applied to real-time surgery" the basics of the PGD method applied to real-time surgery. In "Performance" we analyze the performance of the resulting prototype. Finally, in "Conclusion" we draw some conclusions.

A review of PGD methods applied to real-time surgery

As mentioned before, the main novelty in PGD-based real-time simulators consists in developing a sort of a priori response surface, what we called a computational vademecum in [17]. Therefore, without any campaign of computer experiments, typical of response surface methodologies, PGD methods are able to provide, in an off-line phase, the expected response of the system in the form of a high-dimensional response surface or meta-model. This response surface can then be evaluated on-line at extremely high feedback rates. One example of such a vademecum, for the simplest case, say quasi-static equilibrium for any possible loading point (within a previously selected region), would provide us with an expression of the type

$$\begin{aligned} {\varvec{u}} = {\varvec{u}}({\varvec{x}}, {\varvec{s}}), \end{aligned}$$

i.e., a generalized solution of the displacement field of a solid undergoing a load at any possible point of its boundary, \(\varvec{s}\).
Such a high-dimensional response is found under the PGD rationale as a fight sum of separate functions, i.e., $$\begin{aligned} u^n_j({\varvec{x}},{\varvec{s}})=\sum _{k=1}^n X_j^k({\varvec{x}})\cdot Y_j^k({\varvec{s}}), \end{aligned}$$ where \(u_j\) refers to the j-th component of the displacement vector, \(j=1,2,3\) and functions \(\varvec{X}^k\) and \(\varvec{Y}^k\) represent the separated functions used to approximate the unknown field. To determine these functions, PGD methods proceed by first computing an admissible variation of \(\varvec{u}\), by substituting them in the weak form of the problem, and subsequently linearizing it. This is usually accomplished by employing a greedy algorithm in which one sum is computed at a time, and within each sum, each function is determined by a fixed-point, alternate directions algorithm. The interested reader can consult more details of this approach in [2]. Should we need to consider (non-linear) dynamics, for instance, the PGD approach looks always for a sort of surface response, in this case in the form of an energy and momentum conserving integrator that provides the response of the system within a time increment \(\triangle t\) in the form $$\begin{aligned} {\varvec{u}}^{n+1}({\varvec{x}}, t+\triangle t,{ \varvec{u}}^t, {\varvec{v}}^t ) = {\varvec{u}}^n +{ \varvec{R}}({\varvec{x}})\circ {\varvec{S}}({\varvec{u}}^t)\circ {\varvec{T}}( {\varvec{v}}^t) \circ {\varvec{d}}(t), \end{aligned}$$ where the sought displacement field is now function (as obviously corresponds to an initial and boundary-value problem) of the initial conditions, seen as the converged displacement \(\varvec{u}\) and velocity \(\varvec{v}\) fields of the previous time increment. This approach has rendered excellent stability properties for linear and non-linear hyperelastodynamics [20]. Contact phenomena also play a crucial role in the simulation of surgery. In [21] a method based PGD was developed. In order to fully exploit the characteristics of PGD methods, one of the two candidate solids was embedded within a structured mesh in whose nodes the distance to the boundary field (i.e., a level set) was stored. The method then proceeds by checking that no boundary marker of the second solid crosses the zero-level set. Otherwise, a penalty force is applied at that point in order to prevent interpenetration. This very simple algorithm is able to run at haptic feedback rates without any problem. Architecture of the simulator The PGD technique allows to exploit all the off-line effort of pre-computation and only post-process the result at extremely high feedback rates. Therefore, unlike previous examples of surgery simulators such as [22], for instance, there is no need to establish multiple threads in the simulation, nor establishing different feedback requirements for the different tasks in the simulator. In our approach, there is one single thread, see Fig. 2, and all the different procedures (contact detection among the virtual tool—assumed rigid for simplicity—and the organ(s), among the different organs themselves, displacement and strain field computation) run under the same constrain, that imposed by the haptic peripheral, a Geomagic Touch device [23] running the OpenHaptcis Toolkit in which all the system is developed. Only rendering is accomplished under weaker requirements, usually in the order of some 30 Hz. In the rendering process the computation of nodal normal vectors at the deformed configuration of the solid is mandatory for an appropriate visualization. 
Even these normals could be pre-computed and stored in memory in the spirit of Eq. 1. However, our prototype showed that they can be computed in runtime without interfering with the main loop of haptic response. All these tests were made on a HP ProBook 6470b laptop (Intel Core i7, with 8 Gb DDR3 PC3-12800 SDRAM), see Fig. 3. Appearance of the developed simulator Models for the different organs considered in the prototype In this section we detail the performed work in the modeling of each organ, particularly each constitutive law employed for that purpose. The liver is the biggest gland in the human body. It is connected to the diaphragm by the coronary ligament so it seems reasonable to assume it to be constrained at the posterior face by the rest of the organs, while the anterior face is accessible to the surgeon. The inferior vena cava travels along the posterior surface, and the liver is frequently assumed clamped a that location. The literature on the mechanical properties of the liver parenchyma is not very detailed. In [24] a Mooney–Rivlin and an Ogden models are compared to experimental results on deformations applied to a liver. No clear conclusion is obtained, however, given that no in vivo measurements could be performed. In view of that, we have assumed a simplified Kirchhoff-Saint Venant model, with Young's modulus of 1.60 kPa, and a Poisson coefficient of 0.48, thus nearly incompressible [25]. This constitutes just a simplification that should be validated with the help of experienced surgeons, but remains valid as long as more complex models can be developed within the PGD techniques exposed before without any difficulty [26]. Finite element mesh for the parenchyma of the liver First four spatial modes \({\varvec{X}}^k({\varvec{x}})\), \(k=1, \ldots , 4\) (respectively, (a)–(d)) for the liver model The finite element mesh of the liver, see Fig. 4 consists of 4349 nodes and 21,788 linear tetrahedral elements. The PGD modes obtained in the off-line procedure described before are shown in Fig. 5. The gallbladder is a pressurized vesicle attached to the liver and also connected to the duodenum. It contains the bile generated by the hepatocytes [27] provides the most comprehensive constitutive modeling of gallbladder walls. Gallbladder is composed by four different layers: adventitia, muscle, mucosa and epithelium. Of course, the muscle layer is responsible of expelling the bile right after consumption of foods or drinks. By employing elastography experimental measurements [27], developed a model very closely resembling that of Gasser-Holzapfel-Ogden for arteries, previously employed in some of our previous works [10]. However, it is also stated that a linear elastic model that takes into account the heavy changes in gallbladder wall thickness produces almost the same results. Also a neo-Hookean model [28] has been found to accurately capture most of the deformation patterns of elastin. Therefore, we have adopted again, for simplicity and without loss in generality, a Kirchhoff-Saint Venant model, despite its well-known limitations. Finite element mesh for the gall bladder The gallbladder is subjected, in normal conditions, to an internal pressure that has been estimated in [27] on 466.6 Pa. We have assumed a wall thickness on the order of 2.5 mm, and a Young's modulus on the order of 1.15 kPa, with \(\nu \,=\,\)0.48, and thus nearly incompressible. The finite element model, see Fig. 6, is composed by 4183 nodes and 17,540 linear tetrahedral elements Fig. 7. 
First four spatial modes \({\varvec{X}}^k({\varvec{x}})\), \(k=1, \ldots , 4\) (respectively, (a)–(d)) for the gallbladder model In principle the gallbladder has not been considered as attached to the liver, but to the duodenum only, since during the surgery procedure it needs to be detached from it, by appropriately simulating the scratching process done by the surgeon, more properly related to continuum damage mechanics than to cutting itself. This is currently one of our lines of research. Pancreas and duodenum In our model, pancreas and duodenum have been modeled as being attached to each other, see Fig. 8, since indeed they are, on one side, and most likely will be removed together, without detaching one from the other. Very few papers deal with the constitutive modeling of pancreatic tissue, despite a few simulators exist, see for instance [29] for an example of a web-based navigation system. In [30] elastography is employed to determine the shear stiffness of the tissue, giving some 1.20 kPa at 40 Hz. In our simulations, an almost incompressible character (i.e., \(\nu \,=\,\)0.48) is assumed. Finite element mesh for the pancreas and duodenum First four spatial modes \({\varvec{X}}^k({\varvec{x}})\), \(k=1, \ldots , 4\) (respectively, (a)–(d)) for the pancreas model The two free opposite sides of the duodenum are considered clamped. In particular, the proximal one is indeed attached to the stomach, whose distal part is usually removed during this type of surgery. Therefore, in a more advanced version of the simulator, a model of the stomach should also be considered for completeness. In this proof of concept prototype, however, this simplification does not imply a loss of validity of the proposed methodology. In Fig. 9 the first four modes of the PGD approximation to the pancreatic vademecum are depicted. Putting it all together: simulating contact One of the most salient advantages of the procedure described before relies precisely in its modularity. Once a vademecum is computed for each organ, the resulting simulator integrates them all by simulating the contact between them, and their boundary conditions such as attachments to ligaments, tendons, blood vessels, etc. The strong feedback requirements given by the haptic peripheral in terms of number of simulations per second and stability of the transmitted force in the haptic device, prevents us from using state-of-the-art FE frictional contact algorithms. Instead, we follow a simplified voxmap pointshell strategy [20] in which one of the solids in the model is considered the master and equipped with a distance field, see Fig. 10. This distance field is stored in memory at nodal positions in a lattice that surrounds each organ. In this case, the master is the gallbladder, since it occupies a central position among the simulated organs. The rest of the organs are marked with a pointshell, in our case composed by the nodes of the boundary (although a different set may be employed depending on the required precision). Distance field computed around the gallbladder model One of the advantages of the vademecum approach is the possibility of computing a high-dimensional distance field $$\begin{aligned} d=d(\varvec{x}, \varvec{s}) \end{aligned}$$ for every possible load position \(\varvec{s}\), and to store it in memory to avoid the computation of distances in runtime. Since the deformed configuration of the solids is know beforehand, see Eq. 
(2) for every possible load position, orientation and modulus (which in this case could be a contact reaction), we can compute the distance field for each of these deformed configurations and store them in memory in the form of a sequence of separated vectors, as in Eq. (2). Once the collision between the pointshell(s) and the zero-level iso-surface of the distance field vademecum is detected, a penalization force \(\varvec{F}=-k_c\cdot d\cdot \varvec{n}\) is applied to both solids, so as to prevent penetration. Here, \(\varvec{n}\) represents the normal to the surface in contact and \(k_c\) a penalty parameter. Although this simple contact procedure is far from the state of the art in usual engineering practice, it must be kept in mind that the haptic feedback imposes an important bottleneck to the procedure: to avoid unphysical jumps in the response, producing a sequence of fast contact detections and loss of contact. This produces a very unphysical sensation that should be avoided. The proposed algorithm produces no unphysical sensation and no artificial jump in the haptic response could be felt. In fact, many existing simulators employ different threads for the simulation of contact and deformation, with considerably weaker requirements on the contact side see, for instance [22], among others. This is absolutely not necessary here, and a single thread is used in our architecture. Our tests on the performance of the contact detection rendered feedback times ranging from 0.0007148 seconds as a lower bound, for the case in which no contact was detected and 0.0012811 seconds for situations in which more than 400 nodes were detected in contact with the zero-distance level set of the master surface. Both values are in good agreement with the feedback requirements imposed by the haptic device, while no artificial, unphysical jump was detected in the transmitted reaction forces, giving a smooth sensation of contact. In Fig. 11 a test is made for the contact algorithm. In it, a time instant is shown in which a force applied in the gallbladder makes it to get into contact with the liver, whose displacement field is shown. On the contrary, duodenum and pancreas are at this time instant free of contact and therefore have been represented in wireframe to highlight it. Application of a force (indicated by an arrow) that makes liver and gallbladder to get into contact Once the models have been developed and the PGD modes computed off-line, a realistic texturing is applied to the meshes so as to provide the user with an immersive sensation, see Fig. 12. Appearance of the simulator once the different organs have been textured The number of modes necessary for a prescribed degree of accuracy in the feedback force can be determined either via heuristic approaches, with the help of experienced surgeons, or by employing error estimators on certain quantities of interest [31]. In general, the number of terms depends on the size of the models and the desired level of accuracy, but in our experience rarely exceeds some 50 modes. Keep in mind that the usual scientific computing levels of accuracy are rarely attained nor needed in this type of applications, where the variance of properties between patients is on the order of the mean. In Fig. 13, for instance, a plot is shown of how the proposed PGD technique converges respect the reference solution given by a full FEM model of the pancreas. Note that the comparison between reduced models and FE reference ones is made on the basis of the same mesh size. 
Of course a more detailed FE mesh would imply a bigger memory usage for the reduced order model, but note that the CPU cost, and therefore the feedback speed, do not depend on the number of nodes of the mesh, but on the number of modes employed in the PGD approaximation, i.e., on n in Eq. (2), which remains roughly constant. The applied load was divided into three load steps, following the technique presented in [26], where an explicit algorithm was developed. This is why the error seems to reach two intermediate plateaus. The evolution of the error in terms of the number of modes is perhaps better seen if we plot the error versus the number of modes employed per load step (i.e., the number of modes applied at the same time at load increment one, two and three). This is depicted in Fig. 14. In all these off-line computations, errors below 10 % were judged sufficient. Nonetheless, lower errors can be easily attained by adding more modes. It is well known that, at the limit, the full FE accuracy will be attained once the sufficient number of modes is considered (i.e., equivalent to the number of degrees of freedom of the full FE model). Convergence of the PGD approximation of a the pancreas vademecum towards the reference FEM solution for different number of modes Convergence of the PGD approximation of a the pancreas vademecum towards the reference FEM solution for different number of modes. Here, the number of modes represented at the abscisa is employed in all the three load increments Under these circumstances, the just developed simulator provided feedback responses in the range of 600 Hz to 1 kHz. As mentioned before, no parallelization was needed and the prototype ran on a HP ProBook 6470b laptop (Intel Core i7, with 8 Gb DDR3 PC3-12800 SDRAM) without any appreciable jump in the force feedback, that remained very smooth throughout the simulation. The developed method is so powerful that it can even be implemented on a html web page running javascript, see Fig. 15. In this case, without the requirements imposed by the haptic peripheral, javascript is able to provide the results with a feedback rate of more than 25 frames per second. This opens unprecedented opportunities for augmented learning strategies, for instance. Implementation on a web page running javascript Simulation of surgical cutting in the context of PGD deserves some very specific comments. In [32] a method based on the combination of PGD and X-FEM technique was developed that provides very realistic sensations for the cutting procedure. The integration of this or other method in the framework of our simulator is currently one of our main efforts of research. In this work a pancreatic surgery simulator has been developed by resorting to the concept of computational vademecum. A computational vademecum is a sort of computational response surface technique obtained without the need of a campaign of computer experiments. Instead, in an off-line phase, this response surface is obtained in the form of a finite sum of separable functions, typical of PGD methods. The high-dimensional response thus obtained is exploited under severe real-time constraints in the on-line phase of the method. The method is able to generate quasi-static as well as dynamic approaches to the problem, including contact and surgical cutting. Other phenomena, such as scratching, are currently being studied and will hopefully be published elsewhere. 
In any case, PGD methods provide a very appealing way of developing surgical simulators able to run is very simple platforms (typically, in a standard laptop), and even on smartphones or tablets [33]. Notably, it enables the possibility of developing surgical simulators including state of the art (usually, hyperelastic) constitutive laws and momentum and energy conserving, unconditionally stable, dynamical integrators. The lack of suitable data for an accurate modeling of some soft living tissues (arteries are very well characterized, however, other tissues such as duodenum or pancreas very often lack of appropriate models other than linear elastic) remains, however a true limitation for the development of such a simulator. It should be compensated, for instance, by resorting to experienced surgeons that help engineers to increase the realism of the simulations. Gallagher AG, Ritter EM, Champion H, Higgins G, Fried MP, Moses G, Smith CD, Satava RM. Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann Surg. 2005;241(2):364–72. doi:10.1097/01.sla.0000151982.85062.80. Cueto E, Chinesta F. Real time simulation for computational surgery: a review. Advan Model Simul Eng Sci. 2014;1(1):11. doi:10.1186/2213-7467-1-11. Meier U, Lopez O, Monserrat C, Juan MC, Alcaniz M. Real-time deformable models for surgery simulation: a survey. Comp Methods Programs Biomed. 2005;77(3):183–97. Taylor ZA, Cheng M, Ourselin S. High-speed nonlinear finite element analysis for surgical simulation using graphics processing units. IEEE Trans Med Imag. 2008;27(5):650–63. doi:10.1109/TMI.2007.913112. Taylor ZA, Ourselin S, Crozier S. A reduced order finite element algorithm for surgical simulation. In: Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, pp 239–242 (2010). doi:10.1109/IEMBS.2010.5627720 Miller K, Joldes G, Lance D, Wittek A. Total lagrangian explicit dynamics finite element algorithm for computing soft tissue deformation. Commun Num Method Eng. 2007;23(2):121–34. doi:10.1002/cnm.887. Karhunen K. Uber lineare methoden in der wahrscheinlichkeitsrechnung. Annales Academiae scientiarum Fennicae. Series A. 1. Mathematica-physica; 1947. p. 1–79. Loève MM. Probability Theory. The University Series in Higher Mathematics, 3rd ed. Van Nostrand, Princeton, NJ. 1963. Ryckelynck D, Chinesta F, Cueto E, Ammar A. On the a priori model reduction: overview and recent developments. Archiv Comput Method Eng. 2006;12(1):91–128. Niroomandi S, Alfaro I, Cueto E, Chinesta F. Real-time deformable models of non-linear tissues by model reduction techniques. Comp Methods Programs Biomed. 2008;91(3):223–31. doi:10.1016/j.cmpb.2008.04.008. Radermacher A, Reese S. Proper orthogonal decomposition-based model reduction for nonlinear biomechanical analysis. Int J Mat Eng Innov. 2013;4(4):149–65. doi:10.1504/IJMATEI.2013.054393. Taylor ZA, Crozier S, Ourselin S. A reduced order explicit dynamic finite element algorithm for surgical simulation. IEEE Trans Med Imag. 2011;30(9):1713–21. doi:10.1109/TMI.2011.2143723. Barrault M, Maday Y, Nguyen N, Patera A. An empirical interpolation method: application to efficient reduced-basis discretization of partial differential equations. Comptes Rendus Mathematique. 2004;339(9):667–72. doi:10.1016/j.crma.2004.08.00. Niroomandi S, Alfaro I, Cueto E, Chinesta F. Model order reduction for hyperelastic materials. Int J Num Methods Eng. 2010;81(9):1180–206. doi:10.1002/nme.2733. 
Chinesta F, Ammar A, Cueto E. Recent advances in the use of the Proper Generalized Decomposition for solving multidimensional models. Archiv Comp Methods Eng. 2010;17(4):327–50. Chinesta F, Ladeveze P, Cueto E. A short review on model order reduction based on proper generalized decomposition. Archiv Comput Methods Eng. 2011;18:395–404. Chinesta F, Leygue A, Bordeu F, Aguado JV, Cueto E, Gonzalez D, Alfaro I, Ammar A, Huerta A. PGD-based computational Vademecum for efficient design, optimization and control. Archiv Comput Method Eng. 2013;20(1):31–59. doi:10.1007/s11831-013-9080-x. Kim Y, Kim L, Lee D, Shin S, Cho H, Roy F, Park S. Deformable mesh simulation for virtual laparoscopic cholecystectomy training. Vis Comput. 2015;31(4):485–95. doi:10.1007/s00371-014-0944-3. The Database Center for Life Science Japan: BodyParts3D. Licensed under CC Attribution-Share Alike 2.1. 2015 Gonzalez D, Cueto E, Chinesta F. Real-time direct integration of reduced solid dynamics equations. Int J Num Methods Eng. 2014;99(9):633–53. Gonzalez D, Alfaro I, Quesada C, Cueto E, Chinesta F. Computational vademecums for the real-time simulation of haptic collision between nonlinear solids. Comp Methods Appl Mech Eng. 2015;283:210–23. doi:10.1016/j.cma.2014.09.029. Jeřábková L, Kuhlen T. Stable cutting of deformable objects in virtual environments using xfem. IEEE Comput Graph Appl. 2009;29(2):61–71. doi:10.1109/MCG.2009.32. Geomagic: OpenHaptics Toolkit. 3D systems—Geomagic solutions, 430 Davis Drive, Suite 300 Morrisville, NC 27560 USA. 2013. Martinez-Martinez F, Ruperez MJ, Martin-Guerrero JD, Monserrat C, Lago MA, Pareja E, Brugger S, Lopez-Andujar R. Estimation of the elastic parameters of human liver biomechanical models by means of medical images and evolutionary computation. Comp Methods Programs Biomed. 2013;111(3):537–49. doi:10.1016/j.cmpb.2013.05.005. Delingette H, Ayache N. Soft tissue modeling for surgery simulation. In: Ayache N, editors. Computational Models for the Human Body. Handbook of Numerical Analysis (Ph. Ciarlet, Ed.), Elsevier. 2004. p. 453–50. Niroomandi S, González D, Alfaro I, Bordeu F, Leygue A, Cueto E, Chinesta F. Real-time simulation of biological soft tissues: a PGD approach. Int J Num Methods Biomed Eng. 2013;29(5):586–600. doi:10.1002/cnm.2544. Li WG, Hill NA, Ogden RW, Smythe A, Majeed AW, Bird N, Luo XY. Anisotropic behaviour of human gallbladder walls. J Mech Behav Biomed Mat. 2013;20:363–75. doi:10.1016/j.jmbbm.2013.02.015. Niroomandi S, Gonzalez D, Alfaro I, Cueto E, Chinesta F. Model order reduction in hyperelasticity: a proper generalized decomposition approach. Int J Num Methods Eng. 2013;96(3):129–49. doi:10.1002/nme.4531. Demirel D, Yu A, Halic T, Kockara S. Web based camera navigation for virtual pancreatic cancer surgery: Whipple surgery simulator (vpanss). In: IEEE Innovations in Technology Conference (InnoTek), 2014. p. 1–8. Shi Y, Glaser KJ, Venkatesh SK, Ben-Abraham EI, Ehman RL. Feasibility of using 3d mr elastography to determine pancreatic stiffness in healthy volunteers. J Mag Res Imaging. 2015;41(2):369–75. doi:10.1002/jmri.24572. Alfaro I, Gonzalez D, Zlotnik S, Diez P, Cueto E, Chinesta F. An error estimator for real-time simulators based on model order reduction. Advan Model Simul Eng Sci. 2015. Quesada C, Gonzalez D, Alfaro I, Cueto E, Chinesta F. Computational vademecums for real-time simulation of surgical cutting in haptic environments. Comput Mech. 2015. Quesada C, González D, Alfaro I, Cueto E, Huerta A, Chinesta F. 
Real-time simulation techniques for augmented learning in science and engineering. Visual Comp. 2015;1–15. doi:10.1007/s00371-015-1134-7. AM segmented all organ's anatomy. DB analysed the models by employing a code developed by DG and IA, who implemented it in Matlab. EC and FC developed the method, verified the results and wrote the manuscript. All authors read and approved the final manuscript. This work has been supported by the Spanish Ministry of Economy and Competitiveness through Grants number CICYT DPI2014-51844-C2-1-R and by the Regional Government of Aragon and the European Social Fund. This support is gratefully acknowledged. CIBER-BBN is an initiative funded by the VI National R\(+\)D\(+\)i Plan 2008-2011, Iniciativa Ingenio 2010, Consolider Program, CIBER Actions and financed by the Instituto de Salud Carlos III with assistance from the European Regional Development Fund. The support from Dr. J. A. Fatás, from the Royo Villlanova Hospital Surgery dept., Zaragoza, and also from E. Estopiñán and M. A. Varona, from the Aragon Institute of Engineering Research of the University of Zaragoza, who carefully developed the realistic rendering, is gratefully acknowledged. Aragon Institute of Engineering Research, Universidad de Zaragoza, María de Luna, s.n., 50018, Zaragoza, Spain Andrés Mena, David Bel, Icíar Alfaro, David González & Elías Cueto GeM, Ecole Centrale de Nantes, 1 rue de la Noe, 44321, Nantes, France CIBER-BBN-Centro de Investigacion Biomedica en Red en Bioingenieria y Biomateriales y Nanomedicina, María de Luna, s.n., 50018, Zaragoza, Spain Andrés Mena David Bel Icíar Alfaro Elías Cueto Correspondence to Elías Cueto. Andrés Mena, David Bel, Icíar Alfaro, David González, Elías Cueto and Francisco Chinesta contributed equally to this work. Mena, A., Bel, D., Alfaro, I. et al. Towards a pancreatic surgery simulator based on model order reduction. Adv. Model. and Simul. in Eng. Sci. 2, 31 (2015). https://doi.org/10.1186/s40323-015-0049-1 Accepted: 16 October 2015 Model order reduction Proper generalized decomposition Computational mechanics and medicine Verification and validation for and with reduced order modeling
CommonCrawl
R. A. Bernstein, W. L. Freedman, and B. F. Madore, The First Detections of the Extragalactic Background Light at 3000, 5500, and 8000 ???. III. Cosmological Implications, The Astrophysical Journal, vol.571, issue.1, p.107, 2002. DOI : 10.1086/339424 T. M. Brown, R. A. Kimble, and H. C. Ferguson, Measurements of the Diffuse Ultraviolet Background and the Terrestrial Airglow with the Space Telescope Imaging Spectrograph, The Astronomical Journal, vol.120, issue.2, p.1153, 2000. L. Cambresy, W. T. Reach, C. A. Beichman, and T. H. Jarrett, The Cosmic Infrared Background at 1.25 and 2.2 Microns Using DIRBE and 2MASS: A Contribution Not Due to Galaxies?, The Astrophysical Journal, vol.555, issue.2, p.563, 2001. K. I. Caputi, H. Dole, and G. Lagache, MIPS 24 ??m Galaxies, The Astrophysical Journal, vol.637, issue.2, p.727, 2006. URL : https://hal.archives-ouvertes.fr/hal-00288465 R. Chary and D. Elbaz, Interpreting the Cosmic Infrared Background: Constraints on the Evolution of the Dust???enshrouded Star Formation Rate, The Astrophysical Journal, vol.556, issue.2, p.562, 2001. R. Chary, S. Casertano, and M. E. Dickinson, Observations of ELAIS???N1, The Astrophysical Journal Supplement Series, vol.154, issue.1, p.80, 2004. H. Dole, G. Lagache, and J. L. Puget, The Astrophysical Journal, vol.585, issue.2, p.617, 2003. H. Dole, L. Floc-'h, E. Perez-gonzalez, and P. G. , Deep Surveys, The Astrophysical Journal Supplement Series, vol.154, issue.1, p.87, 2004. H. Dole, G. H. Rieke, and G. Lagache, and Beyond, The Astrophysical Journal Supplement Series, vol.154, issue.1, p.93, 2004. H. C. Dole, S. B. Gry, J. Peschke, and . Matagne, Exploiting the ISO Data Archive, p.307, 2003. E. Dwek and F. Krennrich, ApJ, pp.618-657, 2005. J. Edelstein, S. Bowyer, and M. Lampton, Ultraviolet Spectrometer Limits to the Extreme???Ultraviolet and Far???Ultraviolet Diffuse Astronomical Flux, The Astrophysical Journal, vol.539, issue.1, p.187, 2000. URL : http://iopscience.iop.org/article/10.1086/309192/pdf E. Egami, H. Dole, and J. S. Huang, Observations of the SCUBA/VLA Sources in the Lockman Hole: Star Formation History of Infrared???Luminous Galaxies, The Astrophysical Journal Supplement Series, vol.154, issue.1, p.130, 2004. D. Elbaz and C. J. Cesarsky, A Fossil Record of Galaxy Encounters, Science, vol.300, issue.5617, p.270, 2003. DOI : 10.1126/science.1081525 URL : http://arxiv.org/pdf/astro-ph/0304492v1.pdf D. Elbaz, C. J. Cesarsky, and D. Fadda, ISO science legacy -a compact review of ISO major achievements, A&A Space Science Reviews ISO Special Issue, vol.351, p.37, 1999. G. G. Fazio, M. L. Ashby, and P. Barmby, The Astrophysical Journal Supplement Series, vol.154, issue.1, p.39, 2004. D. P. Finkbeiner, M. Davis, and D. J. Schlegel, Detection of a Far???Infrared Excess with DIRBE at 60 and 100 Microns, The Astrophysical Journal, vol.544, issue.1, p.81, 2000. H. Flores, F. Hammer, and F. X. Désert, A&A, vol.343, p.389, 1999. A. Franceschini, S. Berta, and D. Rigopoulou, m sources in the Hubble Deep Field South: First hints at the properties of the sources of the IR background, Astronomy & Astrophysics, vol.119, issue.2, p.501, 2003. DOI : 10.1051/aas:1996267 D. T. Frayer, D. Fadda, and L. Yan, 70 and 160 ??m Observations of the Extragalactic First Look Survey, The Astronomical Journal, vol.131, issue.1, p.250, 2006. J. P. Gardner, T. M. Brown, H. C. Ferguson, R. Genzel, and C. J. Cesarsky, , p.761, 2000. R. Gispert, G. Lagache, and J. L. Puget, A&A, vol.360, issue.1, 2000. K. D. Gordon, G. H. Rieke, and C. W. 
Engelbracht, Publications of the Astronomical Society of the Pacific, vol.117, issue.831, p.503, 2005. K. D. Gordon, J. Bailin, and C. W. Engelbracht, ApJ, 2005. V. Gorjian, E. L. Wright, and R. R. Chary, Tentative Detection of the Cosmic Infrared Background at 2.2 and 3.5 Microns Using Ground???based and Space???based Observations, The Astrophysical Journal, vol.536, issue.2, p.550, 2000. G. Hasinger, B. Altieri, and M. Arnaud, XMM-Newton observation of the Lockman Hole, Astronomy & Astrophysics, vol.346, issue.1, pp.45-249, 2001. DOI : 10.1051/0004-6361:20000067 M. G. Hauser, R. G. Arendt, and T. Kelsall, Diffuse Infrared Background Experiment Search for the Cosmic Infrared Background. I. Limits and Detections, The Astrophysical Journal, vol.508, issue.1, p.25, 1998. J. R. Houck, B. T. Soifer, and D. Weedman, Phys. Rep, vol.409, p.361, 2005. T. Kelsall, J. L. Weiland, and B. A. Franz, Diffuse Infrared Background Experiment Search for the Cosmic Infrared Background. II. Model of the Interplanetary Dust Cloud, The Astrophysical Journal, vol.508, issue.1, p.44, 1998. J. E. Krist, J. V. Hanisch, and . Brissenden, Tiny Tim: an HST PSF simulator, in Astronomical Data Analysis Software and Systems II, p.536, 1993. G. Lagache, A. Abergel, and F. Boulanger, A&A, vol.344, p.322, 1999. G. Lagache, L. M. Haffner, R. J. Reynolds, and S. L. Tufte, A&A, vol.354, p.247, 2000. G. Lagache and H. Dole, FIRBACK. II. Data reduction and calibration of the 170 $\mathsf{\mu}$m ISO deep cosmological survey, Astronomy & Astrophysics, vol.426, issue.2, p.702, 2001. DOI : 10.1017/S0074180900226156 G. Lagache, H. Dole, J. L. Puget, G. Lagache, H. Dole et al., Modelling infrared galaxy evolution using a phenomenological approach, Monthly Notices of the Royal Astronomical Society, vol.356, issue.3, pp.555-112, 2003. L. Floc-'h, E. Pérez-gonzález, and P. G. Rieke, ApJS, vol.154, p.170, 2004. L. Floc-'h, E. Papovich, C. Dole, and H. , ApJ ApJ, vol.632, issue.626, pp.169-200, 2004. K. Mattila, Has the Optical Extragalactic Background Light Been Detected?, The Astrophysical Journal, vol.591, issue.1, p.119, 2003. M. A. Miville-deschênes, G. Lagache, and J. L. Puget, m???with IRAS, Astronomy & Astrophysics, vol.393, issue.3, p.749, 2002. L. A. Montier and M. Giard, A&A, pp.439-474, 2005. R. F. Mushotzky, L. L. Cowie, A. J. Barger, and K. A. Arnaud, Resolving the extragalactic hard X-ray background, Nature, vol.253, issue.6777, p.459, 2000. C. Papovich, H. Dole, and E. Egami, Surveys, The Astrophysical Journal Supplement Series, vol.154, issue.1, p.70, 2004. R. B. Partridge and P. J. Peebles, Are Young Galaxies Visible? II. The Integrated Background, The Astrophysical Journal, vol.148, p.377, 1967. P. G. Pérez-gonzález, G. H. Rieke, and E. Egami, ??? 3, The Astrophysical Journal, vol.630, issue.1, p.82, 2005. J. R. Primack, J. S. Bullock, R. S. Somerville, and D. Macminn, Probing galaxy formation with TeV gamma ray absorption, Astroparticle Physics, vol.11, issue.1-2, p.93, 1999. DOI : 10.1016/S0927-6505(99)00031-6 J. L. Puget and A. Leger, A New Component of the Interstellar Matter: Small Grains and Large Aromatic Molecules, Annual Review of Astronomy and Astrophysics, vol.27, issue.1, p.161, 1989. DOI : 10.1146/annurev.aa.27.090189.001113 J. L. Puget, A. Abergel, and J. P. Bernard, A&A J. L. A&A, vol.308, issue.371, pp.5-771, 1996. G. H. Rieke, E. T. Young, and C. W. Engelbracht, (MIPS), The Astrophysical Journal Supplement Series, vol.154, issue.1, p.25, 2004. D. B. Sanders and I. F. 
Mirabel, LUMINOUS INFRARED GALAXIES, Annual Review of Astronomy and Astrophysics, vol.34, issue.1, p.749, 1996. DOI : 10.1146/annurev.astro.34.1.749 R. S. Savage and S. Oliver, , 2005. M. Schroedter, Upper Limits on the Extragalactic Background Light from the Very High Energy Gamma???Ray Spectra of Blazars, The Astrophysical Journal, vol.628, issue.2, p.617, 2005. D. Scott, Cosmic Flows Workshop, ASP Conf, p.403, 2000. I. Smail, R. J. Ivison, A. W. Blain, and J. P. Kneib, , pp.331-495, 2002. B. T. Soifer and G. Neugebauer, The properties of infrared galaxies in the local universe, The Astronomical Journal, vol.101, p.354, 1991. F. W. Stecker and O. C. De-jager, ApJ, pp.476-712, 1997. A. W. Strong, I. V. Moskalenko, and O. Reimer, ApJ, pp.613-956, 2004. R. Thompson, Star Formation History and Other Properties of the Northern Hubble Deep Field, The Astrophysical Journal, vol.596, issue.2, p.748, 2003. M. W. Werner, T. L. Roellig, and F. J. Low, ApJS, vol.154, issue.1, 2004. E. L. Wright, DIRBE minus 2MASS: Confirming the Cosmic Infrared Background at 2.2 Microns, The Astrophysical Journal, vol.553, issue.2, p.538, 2001. E. L. Wright, COBE observations of the cosmic infrared background, New Astronomy Reviews, vol.48, issue.5-6, p.465, 2004. DOI : 10.1016/j.newar.2003.12.054 C. Xu, C. J. Lonsdale, and D. L. Shupe, Models for Multiband Infrared Surveys, The Astrophysical Journal, vol.562, issue.1, p.179, 2001.
CommonCrawl
Category: Geometric topology Differential topology, algebraic topology, Manifolds,… Feynman's lectures on Quantum mechanics 11/10/2018 11/10/2018 hpdungLeave a comment http://www.feynmanlectures.caltech.edu/III_01.html 1Quantum Behavior Note: This chapter is almost exactly the same as Chapter 37 of Volume I. 1–1Atomic mechanics "Quantum mechanics" is the description of the behavior of matter and light in all its details and, in particular, of the happenings on an atomic scale. Things on a very small scale behave like nothing that you have any direct experience about. They do not behave like waves, they do not behave like particles, they do not behave like clouds, or billiard balls, or weights on springs, or like anything that you have ever seen. Newton thought that light was made up of particles, but then it was discovered that it behaves like a wave. Later, however (in the beginning of the twentieth century), it was found that light did indeed sometimes behave like a particle. Historically, the electron, for example, was thought to behave like a particle, and then it was found that in many respects it behaved like a wave. So it really behaves like neither. Now we have given up. We say: "It is like neither." There is one lucky break, however—electrons behave just like light. The quantum behavior of atomic objects (electrons, protons, neutrons, photons, and so on) is the same for all, they are all "particle waves," or whatever you want to call them. So what we learn about the properties of electrons (which we shall use for our examples) will apply also to all "particles," including photons of light. The gradual accumulation of information about atomic and small-scale behavior during the first quarter of the 20th century, which gave some indications about how small things do behave, produced an increasing confusion which was finally resolved in 1926 and 1927 by Schrödinger, Heisenberg, and Born. They finally obtained a consistent description of the behavior of matter on a small scale. We take up the main features of that description in this chapter. Because atomic behavior is so unlike ordinary experience, it is very difficult to get used to, and it appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to, and it is perfectly reasonable that they should not, because all of direct, human experience and of human intuition applies to large objects. We know how large objects will act, but things on a small scale just do not act that way. So we have to learn about them in a sort of abstract or imaginative fashion and not by connection with our direct experience. In this chapter we shall tackle immediately the basic element of the mysterious behavior in its most strange form. We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery. We cannot make the mystery go away by "explaining" how it works. We will just tell you how it works. In telling you how it works we will have told you about the basic peculiarities of all quantum mechanics. 1–2An experiment with bullets Fig. 1–1.Interference experiment with bullets. 
To try to understand the quantum behavior of electrons, we shall compare and contrast their behavior, in a particular experimental setup, with the more familiar behavior of particles like bullets, and with the behavior of waves like water waves. We consider first the behavior of bullets in the experimental setup shown diagrammatically in Fig. 1–1. We have a machine gun that shoots a stream of bullets. It is not a very good gun, in that it sprays the bullets (randomly) over a fairly large angular spread, as indicated in the figure. In front of the gun we have a wall (made of armor plate) that has in it two holes just about big enough to let a bullet through. Beyond the wall is a backstop (say a thick wall of wood) which will "absorb" the bullets when they hit it. In front of the wall we have an object which we shall call a "detector" of bullets. It might be a box containing sand. Any bullet that enters the detector will be stopped and accumulated. When we wish, we can empty the box and count the number of bullets that have been caught. The detector can be moved back and forth (in what we will call the xx-direction). With this apparatus, we can find out experimentally the answer to the question: "What is the probability that a bullet which passes through the holes in the wall will arrive at the backstop at the distance xx from the center?" First, you should realize that we should talk about probability, because we cannot say definitely where any particular bullet will go. A bullet which happens to hit one of the holes may bounce off the edges of the hole, and may end up anywhere at all. By "probability" we mean the chance that the bullet will arrive at the detector, which we can measure by counting the number which arrive at the detector in a certain time and then taking the ratio of this number to the total number that hit the backstop during that time. Or, if we assume that the gun always shoots at the same rate during the measurements, the probability we want is just proportional to the number that reach the detector in some standard time interval. For our present purposes we would like to imagine a somewhat idealized experiment in which the bullets are not real bullets, but are indestructible bullets—they cannot break in half. In our experiment we find that bullets always arrive in lumps, and when we find something in the detector, it is always one whole bullet. If the rate at which the machine gun fires is made very low, we find that at any given moment either nothing arrives, or one and only one—exactly one—bullet arrives at the backstop. Also, the size of the lump certainly does not depend on the rate of firing of the gun. We shall say: "Bullets always arrive in identical lumps." What we measure with our detector is the probability of arrival of a lump. And we measure the probability as a function of xx. The result of such measurements with this apparatus (we have not yet done the experiment, so we are really imagining the result) are plotted in the graph drawn in part (c) of Fig. 1–1. In the graph we plot the probability to the right and xx vertically, so that the xx-scale fits the diagram of the apparatus. We call the probability P12P12 because the bullets may have come either through hole 11 or through hole 22. You will not be surprised that P12P12 is large near the middle of the graph but gets small if xx is very large. You may wonder, however, why P12P12 has its maximum value at x=0x=0. 
We can understand this fact if we do our experiment again after covering up hole 22, and once more while covering up hole 11. When hole 22 is covered, bullets can pass only through hole 11, and we get the curve marked P1P1 in part (b) of the figure. As you would expect, the maximum of P1P1 occurs at the value of xx which is on a straight line with the gun and hole 11. When hole 11 is closed, we get the symmetric curve P2P2 drawn in the figure. P2P2 is the probability distribution for bullets that pass through hole 22. Comparing parts (b) and (c) of Fig. 1–1, we find the important result that P12=P1+P2.(1.1)(1.1)P12=P1+P2. The probabilities just add together. The effect with both holes open is the sum of the effects with each hole open alone. We shall call this result an observation of "no interference," for a reason that you will see later. So much for bullets. They come in lumps, and their probability of arrival shows no interference. 1–3An experiment with waves Fig. 1–2.Interference experiment with water waves. Now we wish to consider an experiment with water waves. The apparatus is shown diagrammatically in Fig. 1–2. We have a shallow trough of water. A small object labeled the "wave source" is jiggled up and down by a motor and makes circular waves. To the right of the source we have again a wall with two holes, and beyond that is a second wall, which, to keep things simple, is an "absorber," so that there is no reflection of the waves that arrive there. This can be done by building a gradual sand "beach." In front of the beach we place a detector which can be moved back and forth in the xx-direction, as before. The detector is now a device which measures the "intensity" of the wave motion. You can imagine a gadget which measures the height of the wave motion, but whose scale is calibrated in proportion to the square of the actual height, so that the reading is proportional to the intensity of the wave. Our detector reads, then, in proportion to the energy being carried by the wave—or rather, the rate at which energy is carried to the detector. With our wave apparatus, the first thing to notice is that the intensity can have any size. If the source just moves a very small amount, then there is just a little bit of wave motion at the detector. When there is more motion at the source, there is more intensity at the detector. The intensity of the wave can have any value at all. We would not say that there was any "lumpiness" in the wave intensity. Now let us measure the wave intensity for various values of xx (keeping the wave source operating always in the same way). We get the interesting-looking curve marked I12I12 in part (c) of the figure. We have already worked out how such patterns can come about when we studied the interference of electric waves in Volume I. In this case we would observe that the original wave is diffracted at the holes, and new circular waves spread out from each hole. If we cover one hole at a time and measure the intensity distribution at the absorber we find the rather simple intensity curves shown in part (b) of the figure. I1I1 is the intensity of the wave from hole 11 (which we find by measuring when hole 22 is blocked off) and I2I2 is the intensity of the wave from hole 22 (seen when hole 11 is blocked). The intensity I12I12 observed when both holes are open is certainly not the sum of I1I1 and I2I2. We say that there is "interference" of the two waves. 
At some places (where the curve I12I12 has its maxima) the waves are "in phase" and the wave peaks add together to give a large amplitude and, therefore, a large intensity. We say that the two waves are "interfering constructively" at such places. There will be such constructive interference wherever the distance from the detector to one hole is a whole number of wavelengths larger (or shorter) than the distance from the detector to the other hole. At those places where the two waves arrive at the detector with a phase difference of ππ (where they are "out of phase") the resulting wave motion at the detector will be the difference of the two amplitudes. The waves "interfere destructively," and we get a low value for the wave intensity. We expect such low values wherever the distance between hole 11 and the detector is different from the distance between hole 22 and the detector by an odd number of half-wavelengths. The low values of I12I12 in Fig. 1–2correspond to the places where the two waves interfere destructively. You will remember that the quantitative relationship between I1I1, I2I2, and I12I12 can be expressed in the following way: The instantaneous height of the water wave at the detector for the wave from hole 11 can be written as (the real part of) h1eiωth1eiωt, where the "amplitude" h1h1 is, in general, a complex number. The intensity is proportional to the mean squared height or, when we use the complex numbers, to the absolute value squared |h1|2|h1|2. Similarly, for hole 22 the height is h2eiωth2eiωt and the intensity is proportional to |h2|2|h2|2. When both holes are open, the wave heights add to give the height (h1+h2)eiωt(h1+h2)eiωt and the intensity |h1+h2|2|h1+h2|2. Omitting the constant of proportionality for our present purposes, the proper relations for interfering waves are I1=|h1|2,I2=|h2|2,I12=|h1+h2|2.(1.2)(1.2)I1=|h1|2,I2=|h2|2,I12=|h1+h2|2. You will notice that the result is quite different from that obtained with bullets (Eq. 1.1). If we expand |h1+h2|2|h1+h2|2 we see that |h1+h2|2=|h1|2+|h2|2+2|h1||h2|cosδ,(1.3)(1.3)|h1+h2|2=|h1|2+|h2|2+2|h1||h2|cos⁡δ, where δδ is the phase difference between h1h1 and h2h2. In terms of the intensities, we could write I12=I1+I2+2I1I2−−−−√cosδ.(1.4)(1.4)I12=I1+I2+2I1I2cos⁡δ. The last term in (1.4) is the "interference term." So much for water waves. The intensity can have any value, and it shows interference. 1–4An experiment with electrons Fig. 1–3.Interference experiment with electrons. Now we imagine a similar experiment with electrons. It is shown diagrammatically in Fig. 1–3. We make an electron gun which consists of a tungsten wire heated by an electric current and surrounded by a metal box with a hole in it. If the wire is at a negative voltage with respect to the box, electrons emitted by the wire will be accelerated toward the walls and some will pass through the hole. All the electrons which come out of the gun will have (nearly) the same energy. In front of the gun is again a wall (just a thin metal plate) with two holes in it. Beyond the wall is another plate which will serve as a "backstop." In front of the backstop we place a movable detector. The detector might be a geiger counter or, perhaps better, an electron multiplier, which is connected to a loudspeaker. We should say right away that you should not try to set up this experiment (as you could have done with the two we have already described). This experiment has never been done in just this way. 
The trouble is that the apparatus would have to be made on an impossibly small scale to show the effects we are interested in. We are doing a "thought experiment," which we have chosen because it is easy to think about. We know the results that would be obtained because there are many experiments that have been done, in which the scale and the proportions have been chosen to show the effects we shall describe. The first thing we notice with our electron experiment is that we hear sharp "clicks" from the detector (that is, from the loudspeaker). And all "clicks" are the same. There are no "half-clicks." We would also notice that the "clicks" come very erratically. Something like: click ….. click-click … click …….. click …. click-click …… click …, etc., just as you have, no doubt, heard a geiger counter operating. If we count the clicks which arrive in a sufficiently long time—say for many minutes—and then count again for another equal period, we find that the two numbers are very nearly the same. So we can speak of the average rate at which the clicks are heard (so-and-so-many clicks per minute on the average). As we move the detector around, the rate at which the clicks appear is faster or slower, but the size (loudness) of each click is always the same. If we lower the temperature of the wire in the gun, the rate of clicking slows down, but still each click sounds the same. We would notice also that if we put two separate detectors at the backstop, one or the other would click, but never both at once. (Except that once in a while, if there were two clicks very close together in time, our ear might not sense the separation.) We conclude, therefore, that whatever arrives at the backstop arrives in "lumps." All the "lumps" are the same size: only whole "lumps" arrive, and they arrive one at a time at the backstop. We shall say: "Electrons always arrive in identical lumps." Just as for our experiment with bullets, we can now proceed to find experimentally the answer to the question: "What is the relative probability that an electron 'lump' will arrive at the backstop at various distances xx from the center?" As before, we obtain the relative probability by observing the rate of clicks, holding the operation of the gun constant. The probability that lumps will arrive at a particular xx is proportional to the average rate of clicks at that xx. The result of our experiment is the interesting curve marked P12P12 in part (c) of Fig. 1–3. Yes! That is the way electrons go. 1–5The interference of electron waves Now let us try to analyze the curve of Fig. 1–3 to see whether we can understand the behavior of the electrons. The first thing we would say is that since they come in lumps, each lump, which we may as well call an electron, has come either through hole 11 or through hole 22. Let us write this in the form of a "Proposition": Proposition A: Each electron either goes through hole 11 or it goes through hole 22. Assuming Proposition A, all electrons that arrive at the backstop can be divided into two classes: (1) those that come through hole 11, and (2) those that come through hole 22. So our observed curve must be the sum of the effects of the electrons which come through hole 11 and the electrons which come through hole 22. Let us check this idea by experiment. First, we will make a measurement for those electrons that come through hole 11. We block off hole 22 and make our counts of the clicks from the detector. From the clicking rate, we get P1P1. 
The result of the measurement is shown by the curve marked P1P1 in part (b) of Fig. 1–3. The result seems quite reasonable. In a similar way, we measure P2P2, the probability distribution for the electrons that come through hole 22. The result of this measurement is also drawn in the figure. The result P12P12 obtained with both holes open is clearly not the sum of P1P1 and P2P2, the probabilities for each hole alone. In analogy with our water-wave experiment, we say: "There is interference." For electrons:P12≠P1+P2.(1.5)(1.5)For electrons:P12≠P1+P2. How can such an interference come about? Perhaps we should say: "Well, that means, presumably, that it is not true that the lumps go either through hole 11 or hole 22, because if they did, the probabilities should add. Perhaps they go in a more complicated way. They split in half and …" But no! They cannot, they always arrive in lumps … "Well, perhaps some of them go through 11, and then they go around through 22, and then around a few more times, or by some other complicated path … then by closing hole 22, we changed the chance that an electron that started out through hole 11 would finally get to the backstop …" But notice! There are some points at which very few electrons arrive when both holes are open, but which receive many electrons if we close one hole, so closing one hole increased the number from the other. Notice, however, that at the center of the pattern, P12P12 is more than twice as large as P1+P2P1+P2. It is as though closing one hole decreased the number of electrons which come through the other hole. It seems hard to explain both effects by proposing that the electrons travel in complicated paths. It is all quite mysterious. And the more you look at it the more mysterious it seems. Many ideas have been concocted to try to explain the curve for P12P12 in terms of individual electrons going around in complicated ways through the holes. None of them has succeeded. None of them can get the right curve for P12P12 in terms of P1P1 and P2P2. Yet, surprisingly enough, the mathematics for relating P1P1 and P2P2 to P12P12 is extremely simple. For P12P12 is just like the curve I12I12 of Fig. 1–2, and that was simple. What is going on at the backstop can be described by two complex numbers that we can call ϕ1ϕ1 and ϕ2ϕ2 (they are functions of xx, of course). The absolute square of ϕ1ϕ1 gives the effect with only hole 11 open. That is, P1=|ϕ1|2P1=|ϕ1|2. The effect with only hole 22 open is given by ϕ2ϕ2 in the same way. That is, P2=|ϕ2|2P2=|ϕ2|2. And the combined effect of the two holes is justP12=|ϕ1+ϕ2|2P12=|ϕ1+ϕ2|2. The mathematics is the same as that we had for the water waves! (It is hard to see how one could get such a simple result from a complicated game of electrons going back and forth through the plate on some strange trajectory.) We conclude the following: The electrons arrive in lumps, like particles, and the probability of arrival of these lumps is distributed like the distribution of intensity of a wave. It is in this sense that an electron behaves "sometimes like a particle and sometimes like a wave." Incidentally, when we were dealing with classical waves we defined the intensity as the mean over time of the square of the wave amplitude, and we used complex numbers as a mathematical trick to simplify the analysis. But in quantum mechanics it turns out that the amplitudes must be represented by complex numbers. The real parts alone will not do. 
That is a technical point, for the moment, because the formulas look just the same. Since the probability of arrival through both holes is given so simply, although it is not equal to (P1+P2)(P1+P2), that is really all there is to say. But there are a large number of subtleties involved in the fact that nature does work this way. We would like to illustrate some of these subtleties for you now. First, since the number that arrives at a particular point is not equal to the number that arrives through 11 plus the number that arrives through 22, as we would have concluded from Proposition A, undoubtedly we should conclude that Proposition A is false. It is not true that the electrons go either through hole 11 or hole 22. But that conclusion can be tested by another experiment. 1–6Watching the electrons Fig. 1–4.A different electron experiment. We shall now try the following experiment. To our electron apparatus we add a very strong light source, placed behind the wall and between the two holes, as shown in Fig. 1–4. We know that electric charges scatter light. So when an electron passes, however it does pass, on its way to the detector, it will scatter some light to our eye, and we can see where the electron goes. If, for instance, an electron were to take the path via hole 22 that is sketched in Fig. 1–4, we should see a flash of light coming from the vicinity of the place marked AA in the figure. If an electron passes through hole 11, we would expect to see a flash from the vicinity of the upper hole. If it should happen that we get light from both places at the same time, because the electron divides in half … Let us just do the experiment! Here is what we see: every time that we hear a "click" from our electron detector (at the backstop), we also see a flash of light either near hole 11 or near hole 22, but neverboth at once! And we observe the same result no matter where we put the detector. From this observation we conclude that when we look at the electrons we find that the electrons go either through one hole or the other. Experimentally, Proposition A is necessarily true. What, then, is wrong with our argument against Proposition A? Why isn't P12P12 just equal to P1+P2P1+P2? Back to experiment! Let us keep track of the electrons and find out what they are doing. For each position (xx-location) of the detector we will count the electrons that arrive and also keep track of which hole they went through, by watching for the flashes. We can keep track of things this way: whenever we hear a "click" we will put a count in Column 11 if we see the flash near hole 11, and if we see the flash near hole 22, we will record a count in Column 22. Every electron which arrives is recorded in one of two classes: those which come through 11 and those which come through 22. From the number recorded in Column 11 we get the probability P′1P1′ that an electron will arrive at the detector via hole 11; and from the number recorded in Column 22 we get P′2P2′, the probability that an electron will arrive at the detector via hole 22. If we now repeat such a measurement for many values of xx, we get the curves for P′1P1′ and P′2P2′ shown in part (b) of Fig. 1–4. Well, that is not too surprising! We get for P′1P1′ something quite similar to what we got before for P1P1 by blocking off hole 22; and P′2P2′ is similar to what we got by blocking hole 11. So there is not any complicated business like going through both holes. When we watch them, the electrons come through just as we would expect them to come through. 
Whether the holes are closed or open, those which we see come through hole 1 are distributed in the same way whether hole 2 is open or closed. But wait! What do we have now for the total probability, the probability that an electron will arrive at the detector by any route? We already have that information. We just pretend that we never looked at the light flashes, and we lump together the detector clicks which we have separated into the two columns. We must just add the numbers. For the probability that an electron will arrive at the backstop by passing through either hole, we do find P′12 = P′1 + P′2. That is, although we succeeded in watching which hole our electrons come through, we no longer get the old interference curve P12, but a new one, P′12, showing no interference! If we turn out the light, P12 is restored. We must conclude that when we look at the electrons the distribution of them on the screen is different than when we do not look. Perhaps it is turning on our light source that disturbs things? It must be that the electrons are very delicate, and the light, when it scatters off the electrons, gives them a jolt that changes their motion. We know that the electric field of the light acting on a charge will exert a force on it. So perhaps we should expect the motion to be changed. Anyway, the light exerts a big influence on the electrons. By trying to "watch" the electrons we have changed their motions. That is, the jolt given to the electron when the photon is scattered by it is such as to change the electron's motion enough so that if it might have gone to where P12 was at a maximum it will instead land where P12 was a minimum; that is why we no longer see the wavy interference effects. You may be thinking: "Don't use such a bright source! Turn the brightness down! The light waves will then be weaker and will not disturb the electrons so much. Surely, by making the light dimmer and dimmer, eventually the wave will be weak enough that it will have a negligible effect." O.K. Let's try it. The first thing we observe is that the flashes of light scattered from the electrons as they pass by do not get weaker. It is always the same-sized flash. The only thing that happens as the light is made dimmer is that sometimes we hear a "click" from the detector but see no flash at all. The electron has gone by without being "seen." What we are observing is that light also acts like electrons: we knew that it was "wavy," but now we find that it is also "lumpy." It always arrives—or is scattered—in lumps that we call "photons." As we turn down the intensity of the light source we do not change the size of the photons, only the rate at which they are emitted. That explains why, when our source is dim, some electrons get by without being seen. There did not happen to be a photon around at the time the electron went through. This is all a little discouraging. If it is true that whenever we "see" the electron we see the same-sized flash, then those electrons we see are always the disturbed ones. Let us try the experiment with a dim light anyway. Now whenever we hear a click in the detector we will keep a count in three columns: in Column (1) those electrons seen by hole 1, in Column (2) those electrons seen by hole 2, and in Column (3) those electrons not seen at all.
When we work up our data (computing the probabilities) we find these results: Those "seen by hole 1" have a distribution like P′1; those "seen by hole 2" have a distribution like P′2 (so that those "seen by either hole 1 or 2" have a distribution like P′12); and those "not seen at all" have a "wavy" distribution just like P12 of Fig. 1–3! If the electrons are not seen, we have interference! That is understandable. When we do not see the electron, no photon disturbs it, and when we do see it, a photon has disturbed it. There is always the same amount of disturbance because the light photons all produce the same-sized effects and the effect of the photons being scattered is enough to smear out any interference effect. Is there not some way we can see the electrons without disturbing them? We learned in an earlier chapter that the momentum carried by a "photon" is inversely proportional to its wavelength (p = h/λ). Certainly the jolt given to the electron when the photon is scattered toward our eye depends on the momentum that photon carries. Aha! If we want to disturb the electrons only slightly we should not have lowered the intensity of the light, we should have lowered its frequency (the same as increasing its wavelength). Let us use light of a redder color. We could even use infrared light, or radio waves (like radar), and "see" where the electron went with the help of some equipment that can "see" light of these longer wavelengths. If we use "gentler" light perhaps we can avoid disturbing the electrons so much. Let us try the experiment with longer waves. We shall keep repeating our experiment, each time with light of a longer wavelength. At first, nothing seems to change. The results are the same. Then a terrible thing happens. You remember that when we discussed the microscope we pointed out that, due to the wave nature of the light, there is a limitation on how close two spots can be and still be seen as two separate spots. This distance is of the order of the wavelength of light. So now, when we make the wavelength longer than the distance between our holes, we see a big fuzzy flash when the light is scattered by the electrons. We can no longer tell which hole the electron went through! We just know it went somewhere! And it is just with light of this color that we find that the jolts given to the electron are small enough so that P′12 begins to look like P12—that we begin to get some interference effect. And it is only for wavelengths much longer than the separation of the two holes (when we have no chance at all of telling where the electron went) that the disturbance due to the light gets sufficiently small that we again get the curve P12 shown in Fig. 1–3. In our experiment we find that it is impossible to arrange the light in such a way that one can tell which hole the electron went through, and at the same time not disturb the pattern. It was suggested by Heisenberg that the then new laws of nature could only be consistent if there were some basic limitation on our experimental capabilities not previously recognized. He proposed, as a general principle, his uncertainty principle, which we can state in terms of our experiment as follows: "It is impossible to design an apparatus to determine which hole the electron passes through, that will not at the same time disturb the electrons enough to destroy the interference pattern."
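To put rough numbers on the trade-off just described, the sketch below (illustrative values only, not from the lecture) evaluates the photon momentum p = h/λ for wavelengths shorter than, equal to, and longer than an assumed hole separation; only the wavelengths short enough to resolve the holes deliver the large momentum kicks:

```python
# Rough numbers for the trade-off between resolving power and momentum kick.
# The hole separation d is an assumed value; only p = h/lambda comes from the text.
h = 6.626e-34          # Planck constant, J*s
d = 1.0e-6             # assumed separation between the two holes, m

for lam in (0.1 * d, 1.0 * d, 10.0 * d):
    p_kick = h / lam                   # photon momentum delivered on scattering (order of magnitude)
    resolves = lam <= d                # can this wavelength still tell the two holes apart?
    print(f"lambda = {lam:.1e} m:  kick ~ {p_kick:.1e} kg m/s,  resolves the holes: {resolves}")
# Making the light "gentler" (longer wavelength) reduces the kick, but exactly then
# the flash becomes too fuzzy to say which hole the electron used.
```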
If an apparatus is capable of determining which hole the electron goes through, it cannot be so delicate that it does not disturb the pattern in an essential way. No one has ever found (or even thought of) a way around the uncertainty principle. So we must assume that it describes a basic characteristic of nature. The complete theory of quantum mechanics which we now use to describe atoms and, in fact, all matter, depends on the correctness of the uncertainty principle. Since quantum mechanics is such a successful theory, our belief in the uncertainty principle is reinforced. But if a way to "beat" the uncertainty principle were ever discovered, quantum mechanics would give inconsistent results and would have to be discarded as a valid theory of nature. "Well," you say, "what about Proposition A? Is it true, or is it not true, that the electron either goes through hole 1 or it goes through hole 2?" The only answer that can be given is that we have found from experiment that there is a certain special way that we have to think in order that we do not get into inconsistencies. What we must say (to avoid making wrong predictions) is the following. If one looks at the holes or, more accurately, if one has a piece of apparatus which is capable of determining whether the electrons go through hole 1 or hole 2, then one can say that it goes either through hole 1 or hole 2. But, when one does not try to tell which way the electron goes, when there is nothing in the experiment to disturb the electrons, then one may not say that an electron goes either through hole 1 or hole 2. If one does say that, and starts to make any deductions from the statement, he will make errors in the analysis. This is the logical tightrope on which we must walk if we wish to describe nature successfully. If the motion of all matter—as well as electrons—must be described in terms of waves, what about the bullets in our first experiment? Why didn't we see an interference pattern there? It turns out that for the bullets the wavelengths were so tiny that the interference patterns became very fine. So fine, in fact, that with any detector of finite size one could not distinguish the separate maxima and minima. What we saw was only a kind of average, which is the classical curve. In Fig. 1–5 we have tried to indicate schematically what happens with large-scale objects. Part (a) of the figure shows the probability distribution one might predict for bullets, using quantum mechanics. The rapid wiggles are supposed to represent the interference pattern one gets for waves of very short wavelength. Any physical detector, however, straddles several wiggles of the probability curve, so that the measurements show the smooth curve drawn in part (b) of the figure.

Fig. 1–5. Interference pattern with bullets: (a) actual (schematic), (b) observed.

1–7 First principles of quantum mechanics

We will now write a summary of the main conclusions of our experiments. We will, however, put the results in a form which makes them true for a general class of such experiments. We can write our summary more simply if we first define an "ideal experiment" as one in which there are no uncertain external influences, i.e., no jiggling or other things going on that we cannot take into account. We would be quite precise if we said: "An ideal experiment is one in which all of the initial and final conditions of the experiment are completely specified." What we will call "an event" is, in general, just a specific set of initial and final conditions.
(For example: "an electron leaves the gun, arrives at the detector, and nothing else happens.") Now for our summary.

(1) The probability of an event in an ideal experiment is given by the square of the absolute value of a complex number ϕ which is called the probability amplitude:
P = probability, ϕ = probability amplitude, P = |ϕ|². (1.6)

(2) When an event can occur in several alternative ways, the probability amplitude for the event is the sum of the probability amplitudes for each way considered separately. There is interference:
ϕ = ϕ1 + ϕ2, P = |ϕ1 + ϕ2|². (1.7)

(3) If an experiment is performed which is capable of determining whether one or another alternative is actually taken, the probability of the event is the sum of the probabilities for each alternative. The interference is lost:
P = P1 + P2. (1.8)

One might still like to ask: "How does it work? What is the machinery behind the law?" No one has found any machinery behind the law. No one can "explain" any more than we have just "explained." No one will give you any deeper representation of the situation. We have no ideas about a more basic mechanism from which these results can be deduced. We would like to emphasize a very important difference between classical and quantum mechanics. We have been talking about the probability that an electron will arrive in a given circumstance. We have implied that in our experimental arrangement (or even in the best possible one) it would be impossible to predict exactly what would happen. We can only predict the odds! This would mean, if it were true, that physics has given up on the problem of trying to predict exactly what will happen in a definite circumstance. Yes! Physics has given up. We do not know how to predict what would happen in a given circumstance, and we believe now that it is impossible—that the only thing that can be predicted is the probability of different events. It must be recognized that this is a retrenchment in our earlier ideal of understanding nature. It may be a backward step, but no one has seen a way to avoid it. We make now a few remarks on a suggestion that has sometimes been made to try to avoid the description we have given: "Perhaps the electron has some kind of internal works—some inner variables—that we do not yet know about. Perhaps that is why we cannot predict what will happen. If we could look more closely at the electron, we would be able to tell where it would end up." So far as we know, that is impossible. We would still be in difficulty. Suppose we were to assume that inside the electron there is some kind of machinery that determines where it is going to end up. That machine must also determine which hole it is going to go through on its way. But we must not forget that what is inside the electron should not be dependent on what we do, and in particular upon whether we open or close one of the holes. So if an electron, before it starts, has already made up its mind (a) which hole it is going to use, and (b) where it is going to land, we should find P1 for those electrons that have chosen hole 1, P2 for those that have chosen hole 2, and necessarily the sum P1 + P2 for those that arrive through the two holes. There seems to be no way around this. But we have verified experimentally that that is not the case. And no one has figured a way out of this puzzle. So at the present time we must limit ourselves to computing probabilities.
We say "at the present time," but we suspect very strongly that it is something that will be with us forever—that it is impossible to beat that puzzle—that this is the way nature really is.

1–8 The uncertainty principle

This is the way Heisenberg stated the uncertainty principle originally: If you make the measurement on any object, and you can determine the x-component of its momentum with an uncertainty Δp, you cannot, at the same time, know its x-position more accurately than Δx ≥ ℏ/(2Δp), where ℏ is a definite fixed number given by nature. It is called the "reduced Planck constant," and is approximately 1.05×10⁻³⁴ joule-seconds. The uncertainties in the position and momentum of a particle at any instant must have their product greater than or equal to half the reduced Planck constant. This is a special case of the uncertainty principle that was stated above more generally. The more general statement was that one cannot design equipment in any way to determine which of two alternatives is taken, without, at the same time, destroying the pattern of interference. Let us show for one particular case that the kind of relation given by Heisenberg must be true in order to keep from getting into trouble. We imagine a modification of the experiment of Fig. 1–3, in which the wall with the holes consists of a plate mounted on rollers so that it can move freely up and down (in the x-direction), as shown in Fig. 1–6. By watching the motion of the plate carefully we can try to tell which hole an electron goes through. Imagine what happens when the detector is placed at x = 0. We would expect that an electron which passes through hole 1 must be deflected downward by the plate to reach the detector. Since the vertical component of the electron momentum is changed, the plate must recoil with an equal momentum in the opposite direction. The plate will get an upward kick. If the electron goes through the lower hole, the plate should feel a downward kick. It is clear that for every position of the detector, the momentum received by the plate will have a different value for a traversal via hole 1 than for a traversal via hole 2. So! Without disturbing the electrons at all, but just by watching the plate, we can tell which path the electron used.

Fig. 1–6. An experiment in which the recoil of the wall is measured.

Now in order to do this it is necessary to know what the momentum of the screen is, before the electron goes through. So when we measure the momentum after the electron goes by, we can figure out how much the plate's momentum has changed. But remember, according to the uncertainty principle we cannot at the same time know the position of the plate with an arbitrary accuracy. But if we do not know exactly where the plate is, we cannot say precisely where the two holes are. They will be in a different place for every electron that goes through. This means that the center of our interference pattern will have a different location for each electron. The wiggles of the interference pattern will be smeared out. We shall show quantitatively in the next chapter that if we determine the momentum of the plate sufficiently accurately to determine from the recoil measurement which hole was used, then the uncertainty in the x-position of the plate will, according to the uncertainty principle, be enough to shift the pattern observed at the detector up and down in the x-direction about the distance from a maximum to its nearest minimum.
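A short numerical illustration of the relation Δx ≥ ℏ/(2Δp); the momentum uncertainties used below are arbitrary example values, not numbers from the text:

```python
# Example values for Delta_x >= hbar / (2 * Delta_p); the Delta_p values are arbitrary.
hbar = 1.05e-34        # reduced Planck constant, J*s

for dp in (1e-24, 1e-27, 1e-30):       # assumed momentum uncertainties, kg*m/s
    dx_min = hbar / (2.0 * dp)         # smallest position uncertainty allowed by the relation, m
    print(f"Delta_p = {dp:.0e} kg m/s  ->  Delta_x >= {dx_min:.1e} m")
# The more precisely the plate's recoil momentum is known (small Delta_p), the less
# precisely its position, and hence the position of the holes, can be known.
```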
Such a random shift is just enough to smear out the pattern so that no interference is observed. The uncertainty principle "protects" quantum mechanics. Heisenberg recognized that if it were possible to measure the momentum and the position simultaneously with a greater accuracy, the quantum mechanics would collapse. So he proposed that it must be impossible. Then people sat down and tried to figure out ways of doing it, and nobody could figure out a way to measure the position and the momentum of anything—a screen, an electron, a billiard ball, anything—with any greater accuracy. Quantum mechanics maintains its perilous but still correct existence.

Some illustrations of dynamical systems
Blog of Gabriel Peyré: https://twitter.com/gabrielpeyre
For example, gradient flows: https://twitter.com/gabrielpeyre/status/1007865434320850944 "The gradient field defines the steepest descent direction. The gradient flow dynamic defines a segmentation of the space into attraction basins of the local minimizers." — Gabriel Peyré (@gabrielpeyre), 16/6/2018.

The Gaussian curvatures of spheres
1. An example of the Gaussian curvature
Example 1. Compute the Gaussian curvature of a sphere. Parametrizing the sphere, we compute the coefficients of the first fundamental form and the coefficients of the second fundamental form. These imply that, by this computation, the Gaussian curvature of a sphere of radius r is K = 1/r².

MATH-F-420: Differential geometry of Verbitsky
Misha Verbitsky. MATH-F-420: Differential geometry. Monday 16:00-18:00, P.OF.2058. Announcement for this course.
Slides:
Lecture 1. Manifolds (September 21).
Lecture 2. Partition of unity (October 5).
Lecture 3. Derivations and vector fields (October 12).
Lecture 4. Derivations and sheaves (October 19).
Lecture 5. Locally trivial fibrations (October 26).
Lecture 6. Vector bundles as locally trivial fibrations (November 9).
Lecture 7. Operations on vector bundles (November 16).
Lecture 8. Grassmann algebra (November 30).
Lecture 9. De Rham differential (December 7).
Lecture 10. Poincare lemma (December 14).
Topics: Remedial topology; Manifolds and sheaves; Derivations; Sheaves and germs; Vector bundles and sheaves; Smooth fibrations; Vector bundles; Grassmann algebra; De Rham differential.
Miscellanea: test problems, exam, etc. Test assignment 1 (28.09.2015). Exam problems.
Source: http://verbit.ru/ULB/GEOM-2015/

On the differential of a mapping 2
In the Euclidean case there is a linear map which is the "linear approximation" of the mapping. In the manifold case there is a similar linear map, but now it acts between tangent spaces: for a smooth map between smooth manifolds, the induced linear map between the tangent space at a point and the tangent space at its image is called the pushforward. Definition 1. A differentiable mapping is called a (differentiable) trivial fibration if there exists a differentiable manifold, called the fibre, and a diffeomorphism making the corresponding diagram commutative.

On the differential of a mapping
From Warner's book. A smooth curve on a manifold is a mapping of an interval into the manifold. The tangent vector of the curve at a point acts on any function on the manifold by differentiating that function along the curve; this is the directional derivative.

An example of a regular surface
Exercise. Given the function …, show that … is not a regular value of the function, and yet the corresponding level set is still a regular surface. Solution. The matrix is singular if and only if …, so at the point … the matrix above is singular, and hence that point is a singular point. Since …, the value is a critical value, i.e., not a regular value.
But … is the Oxy plane, which is a smooth surface, and hence it is regular.

What is the derivative of cos(x² + 1)?
Calculating the probability of tournament results of a skill based game?
Found some contradiction in wikipedia about topological space
prove that $S$ is a $C_1$ curve given by
Decomposition in modular representation theory
Does system of cutting arcs generate $H_1(M,\partial M)$?
Cross Product of 2 random unit vectors in $\mathbb{R}^3$
Let P be the transition matrix of a Markov chain, and there exists an integer $r \geq 0$ such that every entry of $P^r$ is positive
Convex hull of path in $\mathbb R^2$ is the set of convex combination of 2 points of the path
On convergence of functions and sequences
Is strong convergence of measures equivalent to convergence in measure of the Radon Nikodym derivatives?
Composition ordering on $X^X$
On components of centralisers in unipotent groups
Weyl group actions on standard subgroups
What happens to eigenvalues when edges are removed?
An exercise from Loday and Vallette about Koszul morphism
Approximations to $\pi$
Extension of an addition functor
A professor who has spent nearly 30 years bringing Vietnamese into a leading US university
Factors Controlling Seasonal Phytoplankton Dynamics in the Delaware River Estuary: an Idealized Model Study
Yoeri M. Dijkstra (ORCID: orcid.org/0000-0003-0682-0969), Robert J. Chant & John R. Reinfelder
Estuaries and Coasts, volume 42, pages 1839–1857 (2019)

Phytoplankton biomass in estuaries is controlled by complex biological and chemical processes that control growth and mortality, and physical processes that control transport and dilution. The effects of these processes on phytoplankton blooms were systematically analyzed, focusing on identifying the dominant controlling factors out of river-induced flushing, tidal dispersion, nutrient limitation, and light limitation. To capture the physical processes related to flow and sediment dynamics, we used the idealized width-averaged iFlow model. The model was extended with a nutrient-phytoplankton module to capture the essential biological-chemical processes. The model was applied to the Delaware River Estuary for the productive months of March to November. Model results were compared with field observations. It was found that phytoplankton blooms cannot form in the lower bay due to tidal dispersion, as water from the estuary and coastal ocean mix in early spring, and due to local effects of nitrogen limitation in summer. In the middle to upper bay, sediment-induced deterioration of the light climate limits the growth but allows for blooms in the mid bay, while no blooms can form in the turbidity maximum zone in the upper estuary. Further upstream in the tidal river, the effects of river-induced flushing dominate in early spring and prevent bloom formation. In the summer and fall, lower river discharges and higher growth rates at higher temperatures allow blooms to form and persist. Analysis of the connectivity between mid bay and tidal river blooms showed that coastal ocean phytoplankton may contribute to mid bay blooms, but do not penetrate beyond the turbidity maximum zone.

Phytoplankton biomass is considered one of the main indicators of the health of an estuarine ecosystem, because of its key role in the food web and the oxygen cycle. However, understanding the dynamics of phytoplankton in estuaries is particularly complex due to the interplay of biological-chemical processes, including growth, respiration, nutrient uptake and remineralization, and grazing, and physical transport processes driven by tides, river runoff, and vertical mixing (e.g., Cloern et al. 2014), which have characteristic timescales that vary from hours to seasons. In order to better understand estuarine phytoplankton dynamics, predictive models with varying degrees of complexity, aggregation of processes, and timescales have been developed. Many such models make a distinction between biological-chemical processes, which act locally and determine the net local growth rate of phytoplankton, and transport-related processes, which act non-locally and connect various parts of the estuary (e.g., Lucas et al. 1999a, 2009; Liu and De Swart 2015; Qin and Shen 2017). One of the foundational models of estuarine primary production is based on local biological processes applied to oceanic environments and led to the critical depth theory (Sverdrup 1953). This theory states that phytoplankton blooms can occur when the depth of the surface mixed layer is shallower than a critical depth, which is related to the light penetration depth (or euphotic depth).
In estuaries, the euphotic depth is often controlled by the amount of suspended sediment (Wofsy 1983; Peterson and Festa 1984). Using local biological-chemical models together with observations of sediment concentrations, critical depth theory has been successfully used to qualitatively describe phytoplankton dynamics in some estuaries (e.g., Colijn 1982; Cloern 1987, and references therein). However, bloom dynamics may not obey the critical depth theory. In oceanic and coastal ecosystems, this is, for example, observed when phytoplankton growth rates that exceed those of zooplankton initiate spring blooms in the absence of thermal stratification, such that the spring bloom occurs very early in the season (February in the North Atlantic) (Behrenfeld 2010; George et al. 2015). In estuaries, critical depth theory may be violated due to salinity stratification and turbulence, which control the exchange between the euphotic and aphotic layers due to mixing and sinking (Lucas et al. 1998), or due to non-local effects (Lucas et al. 1999b), most notably river flushing (e.g., Filardo and Dunstan 1985; Zakardjian et al. 2000; Liu and De Swart 2015). If one conversely assumes that the phytoplankton biomass is fully controlled by flow and dilution, residence time is a useful predictor of estuarine primary production (e.g., Howarth et al. 2000). High residence times are typically associated with higher biomass, as the phytoplankton has time to grow. However, Lucas et al. (2009) explained that the converse may be true if local losses exceed growth, or that there may be no apparent relation if local losses and growth are balanced. Clearly, a full understanding of phytoplankton dynamics in estuaries requires a combined insight into local and non-local processes. While both classes of processes are generally built into complex numerical simulation models, the high complexity of such models and the variability on many timescales make it difficult to distinguish between the effects of local and non-local processes and evaluate their relative importance. In this study, we developed and analyzed a method to assess and compare the relative importance of local and non-local processes to the control of phytoplankton dynamics in an estuary in terms of equivalent growth rates. The method is formulated generally and can be used to compare local and non-local processes in various modelling frameworks. The goal of this modelling study is to provide insight into the main processes that govern phytoplankton blooms on the scale of the entire estuary and on a seasonal timescale. As the focus is on understanding of large-scale dynamics, we used an idealized width-averaged model that extends the iFlow model (Dijkstra et al. 2017a), resolving the dynamics of the transport of water, sediment, phytoplankton, and nutrients. The strengths of the model are the ability to make a further decomposition of the local and non-local processes into individual biological and physical processes and the model's computational speed, taking only seconds to compute long-term equilibrium solutions. The model is applied to the Delaware River Estuary, and results are calibrated against an extensive set of observations. The current understanding of phytoplankton dynamics in the Delaware River Estuary is based on observations and conceptual local models (Pennock 1985; Pennock and Sharp 1994), where the effect of non-local flow-related processes has not been considered.
Therefore, in this study, we focus on the relative importance of local and non-local processes during various seasons. A brief introduction to the study area and the model and analysis methods developed in this study are described in "Model and Site Description". Month-to-month model results covering the entire year are presented in comparison to observations in "Year-Round Results for Default Settings" and analyzed in the context of local and non-local processes in "Balances and Limiting Factors" to "Synthesis". The model results and assumptions are discussed in the context of other literature on the Delaware Estuary and in the context of their general implications in "Discussion". Finally, the conclusions are summarized in "Summary".

Model and Site Description
Study Area: Delaware River Estuary
The Delaware River Estuary is located on the east coast of the USA (Fig. 1). Tides in the Delaware propagate from the mouth at Cape May and Cape Henlopen to Trenton, 215 km upstream, beyond which the tidal influence disappears. Several tributaries flow into the Delaware Estuary, of which the Schuylkill River (km 149) is the most significant. The Delaware River is well monitored, with long-term data on tidal elevation available from eight NOAA tide gauge stations, information on the river discharge available from USGS, and data on suspended sediment and biochemical quantities available from several sources. Biochemical data at the surface have been gathered in several cruises by researchers from the University of Delaware in the 1980s and 1990s (see Sharp et al. 2009, for an overview). McSweeney et al. (2016b) present data from several cruises in 2010–2011, where the distribution of sediment, oxygen, chlorophyll-a, and nitrate over the depth has been measured along the length of the channel. The most extensive long-term data set is collected by the Delaware River Basin Commission (DRBC). The data set consists of measurements taken approximately every month since 1967 at the surface at 22 stations along the estuary and tidal river and includes observations of suspended sediment concentration, temperature, salinity, chlorophyll-a, and several nutrients. In our study, we use the DRBC observations from 2000 to 2016, which are well documented and available online.

Fig. 1. Map of the Delaware River Estuary (east coast of the USA). The zone indication is adapted from Sharp et al. (2009).

iFlow Model and Application to the Delaware
The nutrient-phytoplankton model used in this study extends the iFlow model (Dijkstra et al. 2017a). This is a process-based, width-averaged idealized model for tidal hydrodynamics and sediment dynamics in estuaries and tidal rivers. The geometry of the estuary is represented in the model by a smooth width and depth profile capturing the estuary-scale features. A perturbation method is used to obtain an approximate solution to the non-linear equations for water motion and sediment dynamics. The approximations resulting from the perturbation method lead to short computation times, allowing for sensitivity studies investigating the sensitivity of model results to uncertain or variable model parameters. Moreover, the model allows for an explicit decomposition of the water motion, sediment concentration, and sediment transport into different components resulting from various physical processes, thus allowing for a mechanistic interpretation of the results.
Finally, the model immediately solves for dynamic equilibrium conditions, i.e., the condition that develops after a long time of constant forcing, and therefore quickly gives insight into long-term trends. Hence, numerical time-stepping routines are not needed. The model uses entirely analytical solutions in the vertical direction. Solutions in the horizontal direction are numerical on an equidistant grid with 250 computational cells. In this section, we give a short overview of the physics included in the model and the application to the Delaware Estuary. For a detailed description of iFlow, we refer to Dijkstra et al. (2017a).

Geometry and Water Motion
The geometry and water motion are adopted from the work by Wei et al. (2016). Consistent with their approach, we approximate the estuary-scale geometry by a constant width-averaged depth of 8 m and an exponentially converging width B, according to \(B = B_{0}e^{-x/L_{b}}\), where x is the along-channel axis, B0 the width at the seaward boundary of 39 km, and Lb a convergence length of 42 km. The water motion is described by approximations of the width-averaged momentum and continuity equations. The model resolves the M2, M4, and tide-averaged water level and width-averaged flow velocity in dynamic equilibrium. The model is forced by prescribed M2 and M4 water level amplitudes at the estuary mouth of 0.72 m and 0.14 m, respectively, and a phase difference between the M2 and M4 water level of −152°. This forcing equals the year-averaged conditions measured at the NOAA tide gauge at Cape May in 2016. The water level inside the estuary is calibrated against seven other NOAA tide gauges by adjusting the bed roughness coefficient and eddy viscosity in the model. Fresh water enters the estuary at the landward boundary at Trenton and at the confluence with the Schuylkill River. The discharge is represented by the monthly average of the discharge measured by USGS at Trenton between 2000 and 2016. Observations by USGS of the Schuylkill River discharge show that this is 25% of the discharge at Trenton on average, and we therefore represent the discharge of the Schuylkill River as 25% of the discharge at Trenton. The two discharges combined vary between 300 m³/s on average in July and 727 m³/s in March (see Table 1 for an overview per month).

Table 1. Monthly varying model parameters

Sediment Dynamics
The sediment model includes settling, resuspension, and along-channel advection and diffusion of a single sediment fraction. Sediment is represented as fine non-cohesive sediment with a prescribed settling velocity of 0.5 mm/s, based on model calibration and fitting in the range of observed settling velocities (Cook et al. 2007). The resuspension rate for sediment was chosen sufficiently high so that the muddy bottom pool is formed and depleted over the course of a tide and is not growing on a subtidal timescale (i.e., availability-limited conditions; for such conditions, the exact value of the resuspension parameter is irrelevant to the model; Brouwer et al. 2018). The model returns approximations to the M2, M4, and tide-averaged signals of the sediment concentration and sediment transport in dynamic equilibrium. The sediment concentration at the seaward boundary is prescribed at a depth-averaged subtidal value of 6 mg/l, based on DRBC observations. Fluvial sources of sediment are imposed at Trenton and at the confluence with the Schuylkill River.
The sediment source at Trenton, Qs,Trenton (in kg/s), is related to the river discharge at Trenton according to Buxton et al. (1999):
$$ Q_{s,\text{Trenton}} = 2.5\cdot10^{-5}\, Q_{\text{Trenton}}^{2.09}. \qquad (1) $$
No such rating curve is available for the Schuylkill River, but Delaware Estuary Regional Sediment Management Plan Workgroup (2013) shows that the long-term sediment discharge of the Schuylkill River between 1950 and 2009 is 70% of that of the Delaware River. Hence, we use the same rating curve as for Trenton, but with a coefficient of 1.8⋅10⁻⁵ instead of 2.5⋅10⁻⁵.

Salinity in the model is assumed to be well mixed in the vertical and constant over the tidal cycle. This simplified representation of the salinity profile is sufficient to capture the subtidal density gradient induced by the salinity gradient and hence the flow by gravitational circulation. Effects related to periodic stratification or strong stratification are not captured by the model. Dijkstra et al. (2017b) and Burchard and Hetland (2010) estimated that periodic stratification leads to an amplification of the gravitational circulation of approximately a factor of 2 for estuaries like the Delaware. Hence, to parameterize these effects, the gravitational circulation was increased by a factor of 2. The expression for the salinity distribution, s, along the distance of the estuary, x, reads:
$$ s(x) = \frac{s_{\text{sea}}}{2}\left( 1 - \tanh\left( \frac{x-x_{c} Q_{\text{Trenton}}^{-1/7}}{x_{L}}\right)\right), \qquad (2) $$
where model parameters ssea = 31 psu, xL = 42 km, and xc = 100 km follow from calibration to surface salinity data of DRBC collected between 2000 and 2006. This salinity distribution is related to the river discharge at Trenton using a power law \(\sim Q_{\text{Trenton}}^{-1/7}\) (Monismith et al. 2002; Aristizábal and Chant 2013).

Nutrient-Phytoplankton Model
In order to describe the phytoplankton dynamics, a width-averaged nutrient-phytoplankton module was added to iFlow. This nutrient-phytoplankton model includes a biological-chemical component and an advection-diffusion component, which are briefly introduced in this section. For an extensive and mathematical description, we refer to the Supplemental Material to this article. The model uses two classes of nutrients: dissolved inorganic nitrogen and phosphorus (DIN and DIP). The DIN fraction represents nitrate, nitrite, and ammonium, while the DIP fraction represents phosphate. The model uses one class of phytoplankton that represents the entire phytoplankton community found in the estuary. The response of this phytoplankton class to environmental conditions is controlled by representative aggregate parameters. Like the model for water motion and sediment concentration, the nutrient-phytoplankton model describes an equilibrium state, i.e., the state that is attained after a sufficiently long time of constant forcing, thus representing long-term trends instead of transient behavior. Throughout this study, we express phytoplankton in terms of its Chl.-a content (in μg Chl.-a), DIN in mol N, and DIP in mol P. To convert between these units, we assumed constant conversion rates based on the Redfield ratio, i.e., C:N:P equals 106:16:1 (in mol), and a constant C:Chl.-a ratio of 50 g C/g Chl.-a. This ratio is a reasonable estimate for average conditions given the range of values reported for estuaries (e.g., Cloern et al. 1995). As a result, the Chl.-a:N ratio equals 1.6 μg Chl.-a/μmol N, which is within the range of 0.9–1.8 reported by Sharp et al. (2009) for the Delaware.
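The stated Chl.-a:N ratio follows directly from the Redfield ratio and the assumed C:Chl.-a ratio; the short sketch below only restates that arithmetic and introduces no new model parameters:

```python
# Check of the Chl.-a:N conversion quoted above (Redfield C:N = 106:16, C:Chl.-a = 50 g/g).
MOLAR_MASS_C = 12.011            # g C per mol C
C_TO_N = 106.0 / 16.0            # mol C per mol N (Redfield ratio)
C_TO_CHL = 50.0                  # g C per g Chl.-a

ug_C_per_umol_N = C_TO_N * MOLAR_MASS_C          # about 79.6 ug C per umol N
ug_chl_per_umol_N = ug_C_per_umol_N / C_TO_CHL   # about 1.6 ug Chl.-a per umol N
print(f"{ug_chl_per_umol_N:.2f} ug Chl.-a per umol N")   # prints 1.59
```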
Biological-Chemical Component
The biological-chemical component of the model is sketched in Fig. 2. Growth of phytoplankton is modelled using a growth rate μ that depends on temperature, light, and nutrient availability. All sinks in the phytoplankton biomass, including mortality, grazing, respiration, and sinking, are parameterized by a loss rate m. In the remainder of this study, we refer to this loss rate as the mortality rate, as is conventional, even though m includes more than only mortality. This mortality parameter is treated as a calibration parameter. The organic nutrients originating from these phytoplankton sinks are represented by suspended and bottom pools, where they are remineralized to inorganic forms that can be taken up. Since the model describes an equilibrium state, all fluxes in the model are in balance. Hence, uptake (1. in Fig. 2) equals the phytoplankton sinks (2. and 3.) as well as the remineralization (5. and 6.). It is additionally assumed that transport of the organic bottom pool may be ignored. An important consequence of these assumptions is that the amount of suspended and bottom organic nutrients, the sediment nutrient flux (i.e., 5. in the figure), and the time required for remineralization are irrelevant and do not have to be resolved by the model. Hence, we only explicitly resolve DIN, DIP, and phytoplankton. It is known that time lags related to remineralization of the bottom nutrient pool may be important on the monthly timescale, so that the equilibrium assumption is only an approximation. This is discussed further in "Discussion".

Fig. 2. Schematic representation of the biological-chemical model component with phytoplankton, nutrients, and pools of organic nutrients. All sinks to the phytoplankton biomass are parameterized by a mortality rate. As the model computes equilibrium conditions, the uptake flux (1.) equals the phytoplankton sinks (2. and 3.) and nutrient remineralization (5. and 6.), so that the pools of organic nutrients do not have to be resolved.

The growth rate μ depends on the temperature, light availability, and nutrient availability according to:
$$ \mu = \mu_{\max}(T) \left\langle\min\left( \frac{N^{(1)}}{N^{(1)} + H^{(1)}_{N}}, \frac{N^{(2)}}{N^{(2)} + H^{(2)}_{N}}, \frac{E(z, t; P, c)}{\sqrt{E(z, t; P, c)^{2} + H^{2}_{E}}}\right)\right\rangle. \qquad (3) $$
In this equation, \(\mu_{\max}(T)\) is the temperature-dependent maximum growth rate, N(1) is the DIN concentration, N(2) is the DIP concentration, E is the photosynthetically active radiation (PAR), and \(H_{N}^{(1)}\), \(H_{N}^{(2)}\), and HE are saturation parameters. The brackets 〈⋅〉 denote averaging over the tide and day. Hence, the minimum function in the equation is evaluated at each instance of time, taking the most limiting out of the instantaneous DIN, DIP, and PAR, and is then averaged over time. The maximum growth rate \(\mu_{\max}\) is described following Eppley (1972), using average monthly water temperatures T (Table 1):
$$ \mu_{\max}(T) = 0.59\cdot1.066^{T} \text{ (1/d)}. \qquad (4) $$
The nitrogen and phosphorus limitations are described by Michaelis-Menten formulations with saturation coefficients \(H_{N}^{(1)} = 3\) μmol N/l (Banas et al. 2009; Eppley et al. 1969; MacIsaac and Dugdale 1969) and \(H_{N}^{(2)} = 0.2\) μmol P/l (using the Redfield ratio). The light limitation is described by a different saturating function (e.g., Smith 1936).
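A minimal sketch of the growth-rate formulation in Eqs. 3 and 4, written out for instantaneous (not tidally or daily averaged) conditions; the saturation parameters follow the text, while the temperature, nutrient, and light inputs in the example calls are assumed values chosen for illustration:

```python
import numpy as np

# Instantaneous version of Eqs. 3 and 4 (without the tidal and daily averaging).
H_N1 = 3.0        # DIN half-saturation, umol N/l
H_N2 = 0.2        # DIP half-saturation, umol P/l
H_E = 110.0       # light saturation parameter, umol photons/(m^2 s)

def growth_rate(T, N1, N2, E):
    """Instantaneous growth rate (1/day) set by the most limiting resource."""
    mu_max = 0.59 * 1.066 ** T                    # Eppley (1972) temperature dependence
    M_N1 = N1 / (N1 + H_N1)                       # Michaelis-Menten limitation by DIN
    M_N2 = N2 / (N2 + H_N2)                       # Michaelis-Menten limitation by DIP
    M_E = E / np.sqrt(E ** 2 + H_E ** 2)          # saturating light limitation
    return mu_max * min(M_N1, M_N2, M_E)

# Example conditions (assumed): a warm, well-lit summer case and a cold, dim spring case.
print(f"summer-like: {growth_rate(T=25.0, N1=100.0, N2=5.0, E=300.0):.2f} 1/day")
print(f"spring-like: {growth_rate(T=5.0, N1=100.0, N2=5.0, E=30.0):.2f} 1/day")
```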
The light-limitation function has a saturation parameter \(H_E = 110\) μmol photons/(m² s), based on community-averaged in-situ incubations in the Delaware by Harding et al. (1986) (see also Supplemental Material). The PAR depends on the vertical coordinate z, time t, and the phytoplankton and sediment concentrations P and c according to:
$$ E(z,t; P, c) = E_{00}(t)\, d_{E}(t)\, \alpha(z, t; P, c). \qquad (5) $$
Here, E00(t) is the seasonal variation of the maximum daily light availability and is determined using PAR measurements from 2016–2017 by the National Ecological Observatory Network (NEON) at the Smithsonian Environmental Research Centre (SERC), MD, approximately 100 km from the Delaware Estuary (Table 1). The function dE(t) accounts for daily variations in PAR and seasonal variations in day length, also determined using NEON data. The function α describes light attenuation according to the Lambert-Beer law:
$$ \alpha(z, t; P, c) = \exp\left( k_{\text{bg}}z - k_{c}\int_{z}^{0} c(z^{\prime}, t)\, dz^{\prime} - k_{p}\int_{z}^{0} P(z^{\prime},t)\, dz^{\prime} \right). \qquad (6) $$
This includes a background attenuation coefficient kbg = 0.095 1/m (Pennock 1985), sediment-induced light attenuation with coefficient kc = 50 m²/kg, and shading by phytoplankton (i.e., self-shading) with coefficient kp = 18 m²/mol N (Pennock 1985; Banas et al. 2009). Literature values for kc for the Delaware River were originally derived for light attenuation models driven by measured surface sediment concentrations. We corrected the value of kc using modelled vertical sediment profiles, such that the depth-averaged light attenuation is the same as when using the surface concentration with a surface attenuation coefficient of 60 m²/kg (Cloern 1987; Sharp et al. 2009).

Advection-Diffusion Component
The model includes advection of nutrients and phytoplankton with the tidal and subtidal flow as well as diffusion with a prescribed diffusion coefficient Kh of 100 m²/s (Wei et al. 2016). In the vertical direction, an eddy diffusivity was used to describe vertical mixing of nutrients and phytoplankton. Phytoplankton was additionally assigned a prescribed settling velocity wp equal to 1 m/day (Sarthou et al. 2005). At the bed, it was assumed that any live phytoplankton that settles is immediately resuspended. Phytoplankton losses by settling and burial or benthic grazing were parameterized by the mortality rate. In order to force the model, depth-averaged, time-averaged nutrient and phytoplankton concentrations were prescribed at the seaward boundary. The nitrogen concentration at this boundary was set to 0, as nitrogen concentrations are negligible at the ocean compared with the concentrations inside the estuary. The phosphorus concentration at the boundary is set to 1 μmol P/l, informed by measurements. For phytoplankton, we imposed a small phytoplankton concentration, Psea = 0.1 μg/l Chl.-a. This is so small compared with typical bloom concentrations of 30–40 μg/l that it can be interpreted as a minimal background condition required for the model to develop phytoplankton growth. At the upstream boundary, fluxes of DIN \(Q_{N}^{(1)}\) (in mol N/s) and DIP \(Q_{N}^{(2)}\) (in mol P/s) into the estuary were based on the measured influx at Trenton, according to Buxton et al. (1999):
$$ Q_{N}^{(1)} = 0.15\, Q_{\text{Trenton}}^{0.86}, \qquad (7) $$
$$ Q_{N}^{(2)} = 0.005\, Q_{\text{Trenton}}^{0.89}. \qquad (8) $$
Based on annual average measured Chl.-a concentrations and the average river discharge at Trenton, the upstream input of phytoplankton, QP, was fixed at a value of 1300 μg Chl.-a/s. Additional sources of nutrients were added to the model to obtain a reasonable representation of the measurements as input to the phytoplankton model. We chose to impose sources of DIN and DIP at the confluence of the Delaware and Schuylkill Rivers in Philadelphia, representing nutrients flowing into the estuary from the Schuylkill River and effluents from the city of Philadelphia (Lebo and Sharp 1993). The DIN source is represented as a constant point source of 30 mol N/s, irrespective of the discharge. The DIP source is represented as a constant point source of 3 mol P/s. Lebo and Sharp (1992) describe that a part of the phosphorus source is in the form of particulate material that is remineralized into DIP at a different location in the system. For simplicity, and since this source is not quantified, we did not take such a particulate phosphorus source into account.

Method for Analyzing Local and Non-local Processes
In order to distinguish between local and non-local mechanisms, we consider the cross-sectionally integrated, tide-averaged phytoplankton dynamics equation, which reads (Qin and Shen 2017):
$$ \int_{A} \langle P \rangle_{t}\, dz + \underbrace{\left\langle \int_{A} uP-K_{h}P_{x}\, dz \right\rangle_{x}}_{\text{non-local processes}} = \underbrace{\left\langle \int_{A} (\mu - m)P\, dz\right\rangle}_{\text{local processes}}, \qquad (9) $$
where A is the cross-sectional area, u is the along-channel flow velocity, Kh the horizontal eddy diffusivity, x and z denote the along-channel and vertical coordinates, and t denotes time. Angular brackets 〈⋅〉 are used for time averaging. The first term in the equation denotes the variation of P on a long timescale and equals 0 as we considered equilibrium conditions. The remainder of the left-hand side denotes the non-local terms related to advection and diffusion. The terms on the right-hand side denote the local terms describing the biological-chemical component of the model. The equation thus describes a balance between the local and non-local terms when considered in equilibrium. The local and non-local processes scale with the phytoplankton concentration, so that these terms are large in phytoplankton blooms and much smaller outside of these blooms. Moreover, they scale with the cross-sectional area, which can vary strongly along the estuary. For interpretation purposes, it is therefore more practical to convert the local and non-local processes into an equivalent growth rate. This is done by dividing the equation by the depth-averaged, tidally averaged phytoplankton concentration \(\langle\bar{P}\rangle\) and the cross-sectional area. This yields the following equivalent growth rates (with units 1/day):
$$ G_{\text{non-local}} = \frac{\left\langle \int_{A} uP-K_{h}P_{x}\, dz \right\rangle_{x}}{A\langle\bar{P}\rangle}, \qquad (10) $$
$$ G_{\text{local}} = \frac{\left\langle \int_{A} (\mu - m)P\, dz + S_{P}\right\rangle}{A\langle\bar{P}\rangle}. \qquad (11) $$
This decomposition is the same as that used by Qin and Shen (2017). For further analysis, the local growth rate μ is separated into several effects. First, it is noted that the growth is equal to 0 at night due to light limitation. As this is a rather trivial limitation, light limitation at night is not accounted for in the decomposition.
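As an aside, the diagnostics in Eqs. 10 and 11 can be evaluated on any gridded model output; the toy sketch below illustrates one way to do so. The grid, the cross-section, the flow and mixing values, and the Gaussian "bloom" are all assumptions made for illustration and do not reproduce actual iFlow output:

```python
import numpy as np

# Toy evaluation of the equivalent growth rates of Eqs. 10 and 11 on a 1-D grid.
x = np.linspace(0.0, 215e3, 250)                    # along-channel grid, m
dx = x[1] - x[0]

A = np.full_like(x, 8.0 * 2000.0)                   # cross-sectional area, m^2 (toy value)
P = 1.0 + 10.0 * np.exp(-((x - 40e3) / 15e3) ** 2)  # depth-mean phytoplankton, ug Chl.-a/l (toy bloom)
u = np.full_like(x, -0.01)                          # residual (seaward) velocity, m/s
Kh = 100.0                                          # horizontal diffusivity, m^2/s
mu, m = 0.5, 0.3                                    # local growth and loss rates, 1/day

flux = A * (u * P - Kh * np.gradient(P, dx))        # advective plus diffusive transport, cf. Eq. 9
G_nonlocal = -np.gradient(flux, dx) / (A * P) * 86400.0   # transport contribution to growth, 1/day
G_local = mu - m                                    # local contribution, 1/day

print(f"local rate everywhere: {G_local:+.2f} 1/day")
print(f"non-local rate at the bloom centre: {G_nonlocal[np.argmax(P)]:+.2f} 1/day")
# In an equilibrium solution these two contributions would sum to zero at every x.
```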
Excluding the night-time light limitation is achieved by averaging the equation for the growth rate (Eq. 3) over daytime and nighttime, rewriting it as:
$$ \mu = \mu_{\max}(T)\, \frac{\tau_{\text{day}}}{\tau}\left\langle\min\left( M_{N^{(1)}}, M_{N^{(2)}}, M_{E}\right)\right\rangle_{\text{day}}. \qquad (12) $$
The light limitation at night is now accounted for in the factor \(\tau_{\text{day}}/\tau\) (i.e., time between sunrise and sunset divided by total day length), and 〈⋅〉day indicates averaging over daytime conditions. The limitation to the growth rate is decomposed by evaluating each of the growth-limiting terms \(M_{N^{(1)}}\), \(M_{N^{(2)}}\), and \(M_{E}\), following the example of, e.g., Cerco and Cole (1994). These terms were defined as (cf. Eq. 3):
$$ M_{N^{(1)}} = \frac{N^{(1)}}{N^{(1)}+H^{(1)}_{N}}, \quad M_{N^{(2)}} = \frac{N^{(2)}}{N^{(2)}+H^{(2)}_{N}}, \quad M_{E} = \frac{E(z, t; P, c)}{\sqrt{E(z, t; P, c)^{2}+H^{2}_{E}}}. \qquad (13) $$
Up to leading order, the functions for N(1) and N(2) are constant over a tidal or daily cycle (see Supplemental Material). The function for light limitation varies over the tidal and daily cycles due to the day-night cycle and the tidally varying sediment concentration. The term ME is furthermore separated into contributions by the daily variation of light and different light attenuation factors: background attenuation, sediment shading, and self-shading. Due to the strong non-linearity of the light limitation function, there is no unique way of making such a separation. The method used here is presented in the Supplemental Material. As a result of the perturbation method used in iFlow, the non-local terms in Eq. 10 can also be separated further into contributions by the tide, river discharge, diffusion, and several non-linear effects, as explained by Dijkstra et al. (2017a).

Our model results consist of sediment, nutrient, and phytoplankton distributions in equilibrium per month, characterized by monthly averaged light, temperature, and discharge conditions. The model is calibrated separately for each month by adjusting the mortality parameter m, so as to minimize the least-square error between the median of the Chl.-a observations collected by the DRBC (2000–2016) and the model results. Results of this calibration are presented in Table 1. Below, results for the sediment, DIN, DIP, and phytoplankton concentrations in March to November are compared with DRBC observations from 2000 to 2016. It has to be noted that the calibration data set is the same as the data used for comparison of the results. This is acceptable for our purposes as we focus on a qualitative comparison of patterns and on the underlying balance of local and non-local processes. For the qualitative comparison of patterns, we focus on the relative importance of different estuary-scale phytoplankton blooms and on month-to-month variations, which do not follow trivially from the calibration procedure. This is discussed in "Year-Round Results for Default Settings". The underlying balance between local and non-local processes is discussed in "Balances and Limiting Factors". The model is additionally used to draw some conclusions about the connectivity of blooms in the Delaware Estuary in "Sources of Phytoplankton". The results are presented in synthesis in "Synthesis".

Year-Round Results for Default Settings
Model results of the surface concentrations per month are shown in Fig. 3.
The month-to-month variations result from a difference in river discharge, which in turn affects the salinity field and the sediment discharge. The model results are compared with surface sediment data collected by the DRBC between 2000 and 2016. The model produces a clear ETM between km 90 and 120 with a magnitude of 25 to 40 mg/l. The ETM location is very well captured by the model, and the concentrations reflect the overall seasonality of the ETM, although they tend to underestimate the median of the observed concentrations in the ETM. The approximate magnitude of the median concentrations up- and downstream of the ETM is captured by the model as well. Only in March and April are the upstream concentrations high compared to the DRBC measurements. However, these high concentrations do correspond to the values measured at Trenton by USGS. It thus seems there is a discrepancy between the data observed near Trenton by DRBC and USGS, with USGS observing higher concentrations than DRBC during high discharge periods.

Fig. 3. Surface sediment concentrations along the Delaware according to the model (red line) and DRBC observations between 2000 and 2016 (dots: measurements, solid black line: median, dashed black lines: 25 and 75 percentiles). The model results vary month-to-month due to differences in the river discharge.

DIN concentrations are high in the estuary upstream from the lower bay and throughout the year, with median concentrations up to 150 μmol N/l occurring between approximately km 80 and 160 (see Fig. 4, left panel). As the month-to-month variation is relatively small, we present the observations of an entire year in one figure. This is plotted together with a band of model solutions representing the spread in model results from month to month. The overall patterns are captured by the model, showing a fairly constant DIN concentration from the head of the estuary to the confluence with the Schuylkill River, then a rapid increase in concentration and a gradual decrease toward the mouth of the estuary. As the concentrations are much higher than the saturation coefficient \(H_{N}^{(1)}\) in most of the estuary, small differences between the model results and observations will have little effect on the phytoplankton concentration.

Fig. 4. Surface DIN concentration (in μmol N/l) (left) and DIP concentration (in μmol P/l) (right) along the Delaware. The red-shaded area shows the variation of the model results for the months March–November, with the red line marking the year average. The dots represent DRBC observations between 2000 and 2016. The median and 25 and 75 percentiles of the measurements are plotted in the black solid and dashed lines, respectively.

DIP concentrations are also high throughout the estuary, with median concentrations up to 10 μmol P/l but with a significant spread in the observations (see Fig. 4, right panel). On the whole, the model captures the qualitative trend of relatively higher DIP concentrations in the ETM zone (km 70–115) and relatively lower concentrations up- and downstream of this through the entire year. As the observations and model both show DIP concentrations much larger than the saturation coefficient \(H_{N}^{(2)}\), the scatter in measurements and differences with the model results are of little influence on the phytoplankton concentration. The exception to this is found in the lower bay (< km 25). Here, the model forces the DIP concentration to a small value, whereas observations show DIP concentrations anywhere between 0 and 5 μmol P/l.
The model value at the mouth typically represents a lower estimate of the DIP concentration, although lower values have been observed.

Measurements of phytoplankton concentrations (Fig. 5) show two predominant bloom locations. The first is in the mid bay around km 25–50 and is present during the entire span of the data from March to November. The second bloom is located in the tidal river between km 120 and 180. It appears in April or May and disappears in August or September. The relative importance of the two maxima changes over the year. In March, the mid bay bloom is at its maximum, while the tidal river bloom is absent. Toward the summer, the mid bay bloom becomes less pronounced, while the tidal river bloom develops. After August, the chlorophyll-a concentration in both blooms becomes smaller and has nearly disappeared by October. Few measurements from November are available, so the magnitude of the blooms in this month is unknown.

Fig. 5: Surface phytoplankton concentration expressed as chlorophyll-a concentration (in μg/l) along the Delaware according to the model (red line) and DRBC observations between 2000 and 2016 (dots: measurements, solid black line: median, dashed black lines: 25 and 75 percentiles).

The modelled chl.-a concentration (red lines in Fig. 5) also shows two bloom locations, which qualitatively capture the observed locations and seasonality. The first bloom occurs in the lower-mid bay around km 20–30 from March to November and resembles the observed mid bay bloom. The second occurs in the tidal river around km 160–170 from May to September. This bloom is narrower than the observed bloom. From April to June, the magnitude of the tidal river bloom is underestimated by the model, while the magnitude is approximated well from July to October. The model shows a very strong minimum in the phytoplankton concentration between km 80 and 100 in all months except for March. This minimum is much more pronounced than in the measurements, which show a minimum around the same location but still with chlorophyll-a concentrations typically around 5 μg/l. Focusing on the relative magnitude of the two blooms in the model, the upstream bloom grows relative to the mid bay bloom between March and July. While the same seasonal behavior appears in the observations, it is exaggerated and delayed in the model result. From August to October, both modeled blooms show similar maximum concentrations decreasing with time, similar to the observations.

Balances and Limiting Factors

Balance of Equivalent Growth Rates

The physical and biological-chemical processes underlying the results are analyzed by expressing them in terms of equivalent growth rates (cf. Eqs. 10–11). As the simulation represents steady-state conditions, the sum of all contributions to the equivalent growth rate equals 0. It is therefore not the absolute magnitude of the contributions that matters, but their relative magnitude. This relative importance of different processes gives information about the main factors that control blooms and about the sensitivity of the results to model parameters. A useful way of viewing the balance between non-local and local processes is in terms of the residence timescale versus the growth-mortality timescale (Lucas et al. 2009). The residence timescale is a complex function of the non-local flow-related processes, as the flow may either lead to flushing of phytoplankton or accumulation at different locations and times.
The growth-mortality timescale depends on all the factors that affect growth and mortality and is a complex non-linear function of nutrient and light availability. The equivalent growth rates investigated here provide an insight into the relative importance of these timescales. If non-local processes are of a magnitude similar to, or large compared with, the local processes, the residence time can be important compared to the growth-mortality timescale. If in this case the local growth rate > loss rate, bloom concentrations can be restricted by flushing or reinforced by an inflow of phytoplankton from elsewhere. If conversely loss rate > growth rate, the phytoplankton may still persist because of the throughflow of phytoplankton from elsewhere. If local processes dominate over non-local processes, an equilibrium state develops primarily because of a balance between local growth and losses. This balance is controlled by the phytoplankton concentration. If growth rate > loss rate, an increase of the phytoplankton concentration leads to a depletion of nutrients and to self-shading, so that the growth rate can balance the loss rate. Additionally, if the increase of the phytoplankton concentration is local, a large along-channel gradient of the phytoplankton concentration is created. This leads to an increasing non-local transport of phytoplankton biomass out of the bloom zone. If conversely loss rate > growth rate, a decrease of the phytoplankton concentration leads to a decrease in nutrient consumption and self-shading until a balance is achieved or until all phytoplankton have disappeared.

The balance of equivalent growth rates is illustrated for March and July (Fig. 6). The main balance in March is qualitatively representative of early spring conditions with low temperature and a high discharge, while July is representative of summer and fall, with moderate to high temperatures and a low discharge. The figure shows the local growth and mortality (green and orange) and several non-local processes. The term "Diffusion" (blue) represents effects of parameterized horizontal diffusivity, the "River" (red) term represents flushing by the river discharge, "Tidal return flow" (purple) is the combined effect of Stokes drift and the resulting return flow, and "Tide" (brown) is the net effect of dispersion by the M2 tide. Positive values in the figure denote contributions that increase the growth rate, while negative values denote contributions that reduce it. The figures show some spikes in the equivalent growth rates near km 150, due to the local point source representing the discharge of the Schuylkill River. We will not further consider these peaks in our analysis.

Fig. 6: Most important terms in the decomposition of the phytoplankton balance into equivalent growth rates for March and July. The growth and mortality terms are local terms; the other terms are non-local terms. The peaks around km 150 are artifacts of the localized input of water from the Schuylkill River and should not be considered.

The March equilibrium phytoplankton concentration is mainly established by a balance of positive local growth (green line) versus mortality (orange line) and river flushing (red line). The latter two are of the same order of magnitude in much of the upper bay and tidal river (> 70 km). This means that the residence and growth-mortality timescales are similar. The result of this balance is an equilibrium concentration that does not allow for bloom formation in the ETM zone and tidal river, even though local growth > mortality.
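The bookkeeping behind these equivalent growth rates can be illustrated with a deliberately minimal 1-D steady-state balance of advection, diffusion, and net local growth. This is not the iFlow decomposition (which also separates tides, the Stokes return flow, and non-linear effects); it only shows that in equilibrium the contributions sum to zero and that a local loss can be balanced by throughflow from upstream. All parameter values are invented for the sketch.

```python
import numpy as np

# Schematic 1-D steady-state phytoplankton balance:  u dP/dx = K d2P/dx2 + (mu - m) P
# x = 0 is the upstream (river) boundary, x = L the sea; every value is illustrative.
L, n = 100e3, 201
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
u = 0.02                           # residual seaward velocity (m/s), assumed
K = 100.0                          # horizontal diffusivity (m2/s), assumed
mu_net = (0.5 - 0.6) / 86400.0     # net local growth mu - m (1/s): losses exceed growth

# Solve the steady balance as a linear system with Dirichlet boundaries
A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = 20.0, 5.0            # upstream and marine concentrations (mg Chl/m3)
for i in range(1, n - 1):
    A[i, i - 1] = -u / (2 * dx) - K / dx**2
    A[i, i]     = 2 * K / dx**2 - mu_net
    A[i, i + 1] =  u / (2 * dx) - K / dx**2
P = np.linalg.solve(A, b)

# Equivalent growth rates (1/day): each term of the balance divided by P
Px, Pxx = np.gradient(P, dx), np.gradient(np.gradient(P, dx), dx)
i = n // 2                          # a mid-estuary location
g_local = mu_net * 86400
g_river = -u * Px[i] / P[i] * 86400
g_diff  = K * Pxx[i] / P[i] * 86400
print(f"local {g_local:+.3f}, river {g_river:+.3f}, diffusion {g_diff:+.3f} /day, "
      f"sum {g_local + g_river + g_diff:+.3f}")
```

With these numbers the local term is negative, as in the ETM zone, and it is balanced mainly by the river (advection) term delivering phytoplankton from upstream.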
In the mid bay, local processes are dominant as the riverine influence decreases, thus allowing for the formation of a bloom. In the lower bay (< km 25), the non-local processes are again more in balance with the local processes. This is mainly expressed in the tidal and diffusive terms, which act to reduce the phytoplankton concentration by mixing water from the lower bay with phytoplankton-poor water from the coastal ocean. The river contribution opposes this by delivering water rich in phytoplankton from the mid bay bloom to the lower bay.

In July, the phytoplankton concentration results from a balance that is dominated by local processes along the entire estuary. The dominance of local processes is caused by a combination of a small river discharge and a high growth rate due to a high temperature. Hence, the residence time is large relative to the growth-mortality time. As a result, phytoplankton blooms are found wherever local growth rate > mortality rate, i.e., in the tidal river and mid bay. Consequently, phytoplankton also almost completely vanishes wherever local growth rate < mortality rate, i.e., in the ETM zone. Nevertheless, it would be a misconception to conclude that non-local terms can be omitted from the model. If non-local processes were switched off, phytoplankton concentrations of 100 to 200 μmol/l would be attained, which are not realistic. The blooms have set up along-channel gradients in phytoplankton concentration, which lead to some non-local transport. As local growth and mortality almost equilibrate, this small non-local transport closes the balance.

Limiting Factors to the Local Growth Rate

We further study the mechanisms underlying the local processes using the decomposition of the depth-averaged net growth rate (see Method for Analyzing Local and Non-local Processes). This is illustrated in Fig. 7 for March (left panels) and July (right panels). The numbers in the figure should be interpreted as the fraction of reduction of the growth rate \(\mu_{\max}\) during daytime (dawn to dusk): 1 means no reduction, 0 means reduced to zero growth. These reduction factors are related to the limitation by DIN, DIP, light, and mortality. The reduction factor caused by the mortality rate is computed relative to the product of the maximum growth rate and day length, \(\mu_{\max}\frac{\tau_{\text{day}}}{\tau}\). The light limitation varies during the day due to tidal variations of water depth, sediment concentration, and the daily variation of solar irradiance. This is visualized in the top panels by the red-shaded area showing the light limitation that occurs between the 1 and 99 percentiles of the time. The lower panel shows the same results, but with a further decomposition of the light limitation for time-averaged conditions into effects of background shading, sediment, self-shading, and the daily cycle (i.e., variation of irradiance between sunrise and sunset; light limitation at night is not included).

Fig. 7: Decomposition of the depth-averaged, time-averaged net local growth rate for March and July. The vertical axis represents the fraction of reduction of the maximum growth rate during daytime: 1 means no reduction, 0 means reduced to zero growth. The top and bottom panels show the same results with different additions. The top panels add the variation of the light limitation over time (1 and 99 percentiles of time).
The bottom panels add the time-averaged composition of the light limitation.

In March, the growth rate is dominantly limited by light availability in most of the estuary. In the top panel, even the upper edge of the red band is more limiting than any of the other factors for x > km 15, indicating that light is limiting at every instant in time. Hence, light limitation still dominates at midday around slack tide, when sediment concentrations are relatively low. The bottom panel shows that the light limitation is mostly caused by the daily light availability (i.e., variation of irradiance between sunrise and sunset) and sediment shading. The effect of sediment shading alone already exceeds the effects of mortality and nutrient limitation upstream from the lower bay. Furthermore, self-shading is nearly as important as sediment shading in the mid and lower bay. The phytoplankton growth becomes dominantly nitrogen limited only in the most downstream 10 km in the lower bay.

In July, the effect of light limitation still dominates the growth rate in most of the estuary and during the entire day, even though the light limitation is smaller in an absolute sense than in March. The light limitation is smaller than in March mainly because the daily light availability is larger (i.e., larger maximum irradiance). Sediment shading is of similar importance in the ETM zone as in March, but less so in the tidal river due to lower sediment concentrations (cf. Fig. 3). The mortality rate has become a more important limitation to the net growth rate compared with that in March, indicating that processes including grazing and respiration, not explicitly resolved by the model, have become relatively more important. Nitrogen limitation remains the dominant limitation in the most downstream part of the estuary.

The resulting net growth rates, μ − m, and their structure in the vertical direction at three locations along the estuary are plotted in Fig. 8. The net growth rates are positive near the surface, as the light limitation by sediment and self-shading vanishes near the surface. As the light limitation increases with depth, the net growth rate decreases to a value below 0. The depth of zero net growth (horizontal dotted lines) is around 2 m in the ETM zone and 3–4 m outside the ETM zone, which implies that net growth can only occur in less than half of the water column in the entire estuary. Nevertheless, the net growth averaged over the water column (closed circles in the figure) is positive in most locations. Even in the ETM zone in March, the net growth rate is just positive. In July, the net depth-averaged growth in the ETM zone is negative, leading to decay of the phytoplankton concentration. Figure 8 also shows the vertical profiles of the Chl.-a concentration. The concentration is well mixed over the water column, with a slightly smaller concentration near the surface. This is a consequence of the relatively large vertical mixing, combined with a small settling velocity. As the growth rate is much smaller than the vertical mixing rate, vertical variations in the growth rate were estimated to have little effect and were not taken into account in the computation of the vertical Chl.-a profile. Hence, the larger growth rate near the surface than near the bed is not reflected in the vertical phytoplankton distribution.
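A short sketch of this vertical structure is given below, assuming Beer-Lambert light attenuation with background, sediment, and self-shading contributions and the same Smith-type light limitation as above. Every number is illustrative rather than taken from the calibrated model; with these particular values the zero crossing sits near 2 m and the column-averaged net growth is negative, comparable to the ETM-zone behavior in July.

```python
import numpy as np

# Net growth rate mu(z) - m with E(z) = E0 * exp(-(k_bg + k_c*c + k_p*P) * z)
# and M_E = E / sqrt(E^2 + H_E^2). All parameter values are illustrative.
E0, H_E = 300.0, 30.0            # surface irradiance and half-saturation (W/m2)
k_bg, k_c, k_p = 0.4, 0.04, 0.02 # background, sediment, self-shading attenuation
c, P = 20.0, 10.0                # sediment (mg/l) and phytoplankton (mg Chl/m3)
mu_max, m = 1.2, 0.6             # maximum growth and mortality (1/day)
depth = 6.0                      # water depth (m), assumed

z = np.linspace(0.0, depth, 601)                 # depth below the surface (m)
k_total = k_bg + k_c * c + k_p * P               # total attenuation (1/m)
E = E0 * np.exp(-k_total * z)
net = mu_max * E / np.sqrt(E**2 + H_E**2) - m    # net growth rate (1/day)

z_zero = z[np.argmax(net < 0)]                   # first depth with net loss
print(f"attenuation {k_total:.2f} 1/m, zero-growth depth {z_zero:.2f} m, "
      f"column-averaged net growth {net.mean():+.2f} 1/day")
```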
Fig. 8: Vertical profiles of the net growth rate μ − m (top) and the phytoplankton concentration relative to the depth-averaged phytoplankton concentration \(P(z)/\bar{P}\) (bottom) for March and July at three locations along the estuary: in the mid bay bloom (km 30), in the ETM zone (km 100), and in the tidal river bloom (km 170). In the top row, the horizontal dotted lines indicate the zero crossings, i.e., the transition from net growth to net decay. The dots indicate the value of the depth-averaged net growth rate. Note that the plotted vertical profiles of the phytoplankton concentration are identical for March and July.

Sources of Phytoplankton

The phytoplankton model includes two sources: at the seaward and landward boundaries. A better understanding of the pathways of the phytoplankton through the estuary is obtained by further investigating these sources separately. This is done in the model by setting the phytoplankton concentration at one of the boundaries equal to 0. Figure 9 shows the phytoplankton concentration in March and July when only the source at the seaward boundary is taken into account (i.e., upstream phytoplankton concentration equal to 0). The model now only reproduces the mid bay bloom. Similar results are found for all months, and this is insensitive to the magnitude of the seaward phytoplankton concentration (not shown). This result is found because there is either a net local loss in the ETM zone or a small net local growth combined with a small residence time of the marine phytoplankton. This indicates that the marine phytoplankton hardly penetrates into the freshwater tidal river, regardless of its species-specific characteristics (salt tolerance, growth rates, nutrient uptake, etc.) and regardless of whether it would be outcompeted by more specialized freshwater phytoplankton.

Fig. 9: Surface phytoplankton concentration expressed as chlorophyll-a concentration (in μg/l) for March and July as in Fig. 5, but only using a downstream source of phytoplankton.

When conversely only using the source at the landward boundary (i.e., downstream phytoplankton concentration equal to 0), the same result is obtained as in the default case (i.e., Fig. 5). The phytoplankton from upstream provides the initial source for growth of the tidal river bloom, then flows downstream and largely dies in the ETM zone. However, a sufficient population makes it through the ETM to also contribute to the mid bay bloom. These results only hold when assuming that the freshwater phytoplankton from upstream is sufficiently salt tolerant and is not outcompeted by more specialized species. These assumptions should at least be called questionable. Therefore, the main conclusion is that it is important to account for species-specific characteristics when investigating the spreading of freshwater phytoplankton species in the estuary, while marine phytoplankton are prevented from spreading into the freshwater zone by the ETM and river flow. Insufficient data on species composition are available to the authors to verify these conclusions.

Synthesis

Combining the results from Figs. 6 and 7, we propose a two-step analysis for gaining insight into phytoplankton distribution. As a first step, the equivalent growth rates are used to determine whether local or non-local terms are important. If local terms are important, a bloom may develop if light and nutrient conditions are favorable. In that case, it is interesting to investigate the limiting factors to the local growth rate (cf. Fig. 7) as a second step.
The two steps for early spring (March) and summer (July) are summarized in Table 2, where the second step is printed in italics if it is not considered to be very important to the end result. Focusing on the second step of the analysis, it is concluded that sediment shading is the most important factor throughout the estuary. Self-shading is also important at the locations of the blooms and, together with non-local dispersion processes, acts to restrict the maximum phytoplankton concentration that occurs. Limitation by nitrogen occurs in the lower bay. However, in spring, this limitation is dominated by the effects of non-local tidal dispersion that mix water from the estuary and coastal ocean. Hence, nitrogen limitation is actually only relevant in summer and fall, when the temperature is high and hence local growth is large compared to the effects of tidal dispersion.

Table 2: Summary of the bloom conditions in spring (March) and summer (July) in the Delaware River Estuary and the main governing balances.

Another representation of the bloom dynamics is sketched conceptually in Fig. 10. This figure summarizes the main limitations mentioned in Table 2 and additionally sketches the pathway of phytoplankton through the estuary.

Fig. 10: Schematic representation of long-term bloom conditions in the Delaware River Estuary. Two blooms are characteristic: in the mid bay (40–60 km) and in the tidal river (> 115 km). The ETM is typically located around 100 km. At I, fluvial phytoplankton enters the tidal river. If the temperatures are sufficiently high and the river discharge is sufficiently low, this phytoplankton resides in the tidal river long enough to grow and bloom. Much of the phytoplankton that flushes downstream through the ETM (II) dies due to sediment-induced light limitation, unless the river discharge is high and the temperature is low. Whether the fluvial phytoplankton can survive downstream in the mid bay depends on its salt tolerance and other characteristics that are not explicitly modelled in this study. In the mid bay, tolerant phytoplankton from the tidal river may grow again (III). Their growth location coincides with the growth location of the marine phytoplankton that is brought into the estuary with the tides (IV). The dynamics in the lower bay is dominated by tidal dispersion mixing water from the estuary and the coastal ocean in early spring, and by nitrogen limitation in summer.

Discussion in Context of Delaware Bay

Nutrient Limitation in Delaware Bay

Pennock and Sharp (1994) used observations of nitrogen, phosphorus, and carbon ratios, enrichment experiments, and a simple light limitation model to assess the most limiting factor to the local growth rate in Delaware Bay. They concluded that nutrient limitation can sometimes dominate over light limitation. In the mid bay, they indicate that phosphorus may be limiting in early to late spring and nitrogen might be limiting in summer. Yoshiyama and Sharp (2006) also analyzed observations and found that light limitation is usually dominant in the mid bay, except during the spring bloom. In our results, we used lower estimates of the nitrogen and phosphorus levels by assuming little to no nutrients in the ocean, yet our results indicate that sediment-induced light limitation is much more important than nutrient limitation in the mid bay. Moreover, light limitation is found to remain dominant in the middle of the day, when sediment concentrations are lowest.
Hence, the results of Pennock and Sharp (1994) do not agree with those found here, and the nutrient limitation during the spring bloom found by Yoshiyama and Sharp (2006) is not identified. As one possible explanation for this disagreement, Yoshiyama and Sharp (2006) discuss that the enrichment experiments may underestimate light limitation due to sediment settling during the experiment. Furthermore, our study focuses on monthly averages over many years, which may not be representative of the rapidly varying conditions during individual spring blooms.

In the lower bay, Pennock and Sharp (1994) indicate that nitrogen is limiting in early spring, nitrogen and phosphorus are both limiting in late spring, and possibly nitrogen is limiting in summer. Our results support the potential occurrence of nitrogen limitation in the lower bay. Phosphorus is not limiting on average, but observations show significant variation in phosphorus concentrations, which could result in phosphorus being limiting during part of the time. A further source of uncertainty is the ratio between nitrogen and phosphorus taken up by phytoplankton (Pennock and Sharp 1994), which we assumed to be constant. In summer, we found nitrogen limitation to be the main limitation. In early spring, we found that, while nitrogen is limiting, tidal dispersion and low temperature are the dominant reasons for not finding a phytoplankton bloom in the lower bay.

Stratification and Three-Dimensionality

Over the seasons and within a spring-neap cycle, the mid bay and ETM zone in the Delaware switch between well-mixed and partially stratified states (Aristizábal and Chant 2014). Stratification in estuaries is generally associated with high growth, as it restricts sediment to the lower part of the water column and prevents phytoplankton from mixing down to the bed (Cloern 1987). For the Delaware, Pennock (1985) and Sharp et al. (1986) hypothesize that the variation in bloom magnitude in the mid bay is due to varying salinity stratification, but the importance of stratification to the phytoplankton dynamics in the Delaware has not been proven. It is, moreover, not evident that stratification should be important: stratification occurs mainly in the narrow channel, while most of the surface area of the estuary consists of shallow and well-mixed flanks. Hence, the focus on the channel is probably not justified and likely overemphasizes the role of stratification. Our results, not accounting for salinity stratification, show that the average variability of the mid bay phytoplankton bloom from high values in spring to lower values in summer can be explained just by accounting for the differences in the temperature-related local growth and mortality.

Shallow flanks and lateral exchange of water, sediment, and phytoplankton are likely to be important in Delaware Bay. Pennock (1985) shows observations of distinct lateral patterns for chlorophyll-a in summer, and McSweeney et al. (2016a) show that lateral processes are important for the sediment transport. Additionally, the lateral dynamics has been shown to be important to understanding the phytoplankton dynamics in other estuaries (e.g., Lucas et al. 1999b). Hence, the three-dimensional dynamics is worth further investigation.
Shallow areas on the flanks of the estuary are likely less light limited and can therefore serve as areas of positive growth, whereas the channels may suffer from a net loss of phytoplankton if stratification is weak. Hence, a model for this three-dimensional behavior at least needs to capture the large-scale lateral circulation, the lateral distribution of sediment, and the formation of stratification in the channel. An idealized three-dimensional model like iFlow has already been developed for sediment and salinity by Kumar et al. (2017) and Wei et al. (2018). Like iFlow, the scaling and solution method in this model is developed for well-mixed estuaries and would require substantial changes to account for stronger stratification.

General Model Implications

Model Parameterizations and the Mortality Coefficient

In our model, we have chosen a simple representation of the biological-chemical processes by including only phytoplankton and two nutrients. The phytoplankton mortality is represented by a simple linear formulation mP, which accounts for several processes including respiration, pelagic grazing, and benthic grazing. Studies that explicitly include one or more of the processes parameterized by the mortality rate use model formulations with a number of highly uncertain parameters and functional forms (see, e.g., Franks 2002; Gentleman et al. 2003; Brush and Nixon 2017). Moreover, it has been demonstrated that a model may display one or multiple equilibrium solutions or autonomous periodic solutions depending on the choice of grazing formulation (Steele and Henderson 1992; Edwards and Brindley 1996). Therefore, these processes can only be resolved reliably when these parameters and formulations can be constrained by data. This is a problem for grazing, where the only study known to the authors on zooplankton numbers and species along the entire Delaware Estuary is by Cronin et al. (1962).

A downside of the linear mortality formulation with a spatially constant m is that spatial gradients in mortality, e.g., due to spatial gradients in grazers, are not captured. As it is expected that grazers are more abundant where phytoplankton concentrations are highest, we could have overestimated the mortality in the phytoplankton-poor areas, such as the ETM zone. This effect is more quantitative than qualitative and does not affect the qualitative conclusions of this study.

In the current model, the mortality coefficient has been calibrated for each month to give the best fit to the median of the observed phytoplankton biomass. The resulting mortality coefficients are plotted against the water temperature in Fig. 11. The mortality coefficient shows what appears to be an exponential dependence on temperature, approximated by:
$$ m=0.057 \cdot 1.10^{T} \text{ (1/d)}. $$
Even though this result simply follows from a calibration, there is a strong dependency on the temperature, and no such clear seasonality seems to exist with other input variables, such as the discharge or the light intensity. Also, assuming grazing is important, the trend shown in Eq. 13 seems to be supported by formulations used in the size-structured NPZ model of Taniguchi et al. (2014) and Cloern (2017). They use \(\mu \sim 1.049^{T}\) and \(m\sim 1.095^{T}\), resulting in a ratio \(\mu /m \sim 0.96^{T}\). We find the same ratio when combining Eqs. 4 and 13, i.e., both formulations yield the same ratio of the growth and mortality rate. This observation suggests that a relationship like Eq.
13 between m and T might be more universally applicable and sets a fixed ratio between the maximum growth and mortality. However, this observation needs further investigation.

Fig. 11: Base growth coefficient \(\mu_{\max}\) (red) and calibrated mortality coefficients m (blue) versus the temperature. The blue dotted line shows an indicative trend of the calibrated values of m with the temperature.

The mortality found in this study is of the same order of magnitude as grazing rates measured by Sun et al. (2007) for Delaware Bay (only microzooplankton) and White and Roman (1992) for Chesapeake Bay. Sun et al. (2007) found an average grazing rate of 0.46 day\(^{-1}\) based on measurements at one location in the lower Delaware Bay at the end of April. Our mortality numbers for April or May are somewhat lower than this measured grazing rate, especially considering that grazing is only one factor that contributes to mortality. White and Roman (1992), on the other hand, found depth-averaged grazing rates varying between 0.01 and 0.1 day\(^{-1}\) based on measurements at one location during various seasons, which are lower than our mortality rates. Given the variation in measured grazing rates, our calibrated mortality rates at least seem to be within a realistic range.

The Equilibrium Assumption

The model assumes dynamic equilibrium conditions. When compared with observations, this assumption implies that transient behavior is negligible. Using the example of the Upper James River, Qin and Shen (2017) show that transient behavior becomes less important compared to local and non-local processes when averaging over longer timescales, starting from a spring-neap timescale. At the monthly scale investigated in this study, Qin and Shen (2017) indicate that transient behavior constitutes less than one-third of the effects of non-local and local processes. This means that transient behavior is small, but not completely negligible, at this scale. Hence, the equilibrium assumption gives a good first estimate of the processes, but cannot give a fully detailed description. Nevertheless, the equilibrium assumption provides a large advantage for model interpretation, as time lags in the remineralization of lost phytoplankton to nutrients become irrelevant (cf. Fig. 2), thus greatly simplifying the model.

General Applicability of the Model

The model developed in this study is generally applicable to tide-dominated, well-mixed estuaries that largely consist of a single main channel. For example, in studies by Brouwer et al. (2018) and Dijkstra et al. (2019), it has already been shown that observed hydrodynamics and sediment dynamics could be qualitatively reproduced using iFlow in the Scheldt and Ems Rivers, and their models could be extended to include phytoplankton. As the model contains few parameters and is computationally inexpensive, it is fast to set up, calibrate, and use as a first assessment tool. As illustrated here using the example of the Delaware River Estuary, the model quickly gives insight into the importance of different aspects including light limitation, nutrient limitation, mortality, and flow throughout the estuary. This basic understanding of phytoplankton dynamics is essential for the development and interpretation of more complex models, as it indicates which parts of the more complex model are most important and should therefore receive more attention.
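Returning to the calibrated mortality relation (Eq. 13) and the temperature dependence of the maximum growth rate quoted in the footnote to Eq. 4, the small script below evaluates both and their ratio at a few temperatures; it merely reproduces arithmetic already stated in the text, with the temperatures chosen arbitrarily.

```python
import numpy as np

# Quick check of the temperature scalings discussed above. The maximum growth
# rate follows the Eppley-type formulation in the paper's footnote
# (0.85 * 1.066**T doublings/day, converted to 1/day with ln 2), and the
# mortality follows the calibrated fit of Eq. 13, m = 0.057 * 1.10**T.
for T in (5.0, 15.0, 25.0):                 # water temperature (deg C), arbitrary
    mu_max = 0.85 * 1.066**T * np.log(2.0)  # 1/day
    m = 0.057 * 1.10**T                     # 1/day
    print(f"T = {T:4.1f} C: mu_max = {mu_max:.2f}/d, m = {m:.2f}/d, "
          f"mu_max/m = {mu_max/m:.2f}")
# The ratio decreases by a factor (1.066/1.10)**T ~ 0.97**T, close to the
# ~0.96**T scaling quoted above for the Taniguchi et al. (2014) formulation.
```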
As it has been remarked and demonstrated that models with different degrees of complexity can lead to similar skill in reproducing observations (Franks 2002; Friedrichs et al. 2007), such a systematic approach of using simple models to motivate where and why increasing complexity is necessary provides a promising strategy for a better choice of model complexity and improved model interpretation.

Conclusions

We applied a method of explicitly distinguishing between local biological-chemical processes and non-local flow-related processes governing phytoplankton dynamics in well-mixed estuaries. This method was combined with a newly developed nutrient-phytoplankton module for iFlow, which allows for a further decomposition of the local and non-local processes into specific limiting factors such as nutrients, light, tidal dispersion, and river flushing. The model was used to study phytoplankton dynamics in the Delaware River Estuary as a function of the flow, temperature, nutrient availability, irradiance, and light attenuation due to suspended sediments and self-shading. Average monthly conditions for March through November were simulated and compared with observations collected by the Delaware River Basin Commission (DRBC) between 2000 and 2016.

Model results show that, in early spring, the lower bay (< km 25) is dominated by non-local processes due to tidal dispersion. This leads to mixing of the phytoplankton in the estuary with phytoplankton-poor coastal ocean water, preventing bloom formation. Whereas nitrogen is potentially limiting to the local growth rate, this is of relatively little importance compared to the effect of tidal dispersion. In summer, however, the local growth exceeds the effects of tidal dispersion due to higher temperatures, and nitrogen limitation is the main factor limiting phytoplankton growth. In the mid bay (km 25–70), local processes are dominant and allow for the formation of a bloom during the entire period from March to November. The growth rate is limited by sediment shading and self-shading. The local growth rate is also dominant in the ETM zone (km 70–115) during the entire year, but the sediment-induced light limitation is so strong that mortality almost equals or even exceeds growth, hence preventing the formation of blooms. Finally, in the tidal river (> km 115), phytoplankton dynamics vary with the seasons. In early spring, non-local processes dominate due to a low temperature (i.e., low growth rate) and high river discharge (i.e., short residence time). As a result, blooms are absent. In late spring to early fall, the temperature, and hence the growth rate, is higher and the river discharge is lower, leading to a more dominant role of the local growth rate and to bloom formation. The main factors controlling the growth rate are again the sediment concentration and self-shading.

To study the connectivity between the mid bay and tidal river blooms, it was investigated whether sources of phytoplankton from the sea and from the tidal river could contribute to both blooms. The marine source of phytoplankton only appears in the mid bay bloom and cannot penetrate beyond the ETM zone due to the ETM and river flow, regardless of its species-specific characteristics. The tidal river source of phytoplankton is not constrained by the flow and ETM zone and can contribute to both the mid bay and tidal river blooms.
However, whether it does contribute to both blooms depends on its species-specific characteristics and could not be verified due to lack of data on species composition. Based on calibration of monthly values for the mortality rate in the model, an apparent power-relation between the mortality and temperature emerged. Similar relations between mortality and temperature have been obtained using other models and studying other estuaries. Whether this emergent property applies more universally is unclear but is worth further investigation. The iFlow model used for this study is available from version 2.8 on GitHub (doi:10.5281/zenodo.822394) under LGPL license. Here, you can also find example input files to reproduce some of the simulations in this study. When using iFlow, you are kindly requested to refer to Dijkstra et al. (2017a). http://www.state.nj.us/drbc/quality/datum/boat-run.html The original formulation uses 0.85 ⋅ 1.066T and is in doublings per day. A formulation in units 1/day is obtained by multiplying by \(\ln (2)\). www.neonscience.org Aristizábal, M.F., and R.J. Chant. 2013. A numerical study of salt fluxes in Delaware Bay Estuary. Journal of Physical Oceanography 43: 1572–1588. https://doi.org/10.1175/jpo-d-12-0124.1. Aristizábal, M.F., and R.J. Chant. 2014. Mechanisms driving stratification in Delaware Bay estuary. Ocean Dynamics 64: 1615–1629. https://doi.org/10.1007/s10236-014-0770-1. Banas, N.S., E.J. Lessard, R.M. Kudela, P. MacCready, T.D. Peterson, B.M. Hickey, and E. Frame. 2009. Planktonic growth and grazing in the Columbia River plume region: A biophysical model study. Journal of Geophysical Research: Oceans 114(C00B06). https://doi.org/10.1029/2008jc004993. Behrenfeld, M.J. 2010. Abandoning Sverdrup's critical depth hypothesis on phytoplankton blooms. Ecology 91(4): 977–989. https://doi.org/10.1890/09-1207.1. Brouwer, R.L., G.P. Schramkowski, Y.M. Dijkstra, and H.M. Schuttelaars. 2018. Time evolution of estuarine turbidity maxima in well-mixed, tidally dominated estuaries: The role of availability and erosion limited conditions. Journal of Physical Oceanography. https://doi.org/10.1175/jpo-d-17-0183.1. Brush, M.J., and S.W. Nixon. 2017. Modeling coastal hypoxia: Numerical simulations of patterns, controls and effects of dissolved oxygen dynamics, Springer, chap A Reduced Complexity, Hybrid Empirical-Mechanistic Model of Eutrophication and Hypoxia in Shallow Marine Ecosystems, pp 61–93. Burchard, H., and R.D. Hetland. 2010. Quantifying the contributions of tidal straining and gravitational circulation to residual circulation in periodically stratified tidal estuaries. Journal of Physical Oceanography 40: 1243–1262. https://doi.org/10.1175/2010jpo4270.1. Buxton, D.E., K. Hunchak-Kariouk, and R.E. Hickman. 1999. Relations of surface-water quality to streamflow in the Wallkill and Upper Delaware River basins, New Jersey and vicinity, water years 1976-93. Tech. rep., USGS. Cerco, C.F., and T.M. Cole. 1994. Three-dimensional eutrophication model of Chesapeake Bay. Tech. rep., US Army Corps of Engineers, Waterways Experiment Station, Vicksburg. Cloern, J.E. 1987. Turbidity as a control on phytoplankton biomass and productivity in estuaries. Continental Shelf Research 7: 1367–1381. https://doi.org/10.1016/0278-4343(87)90042-2. Cloern, J.E. 2017. Why large cells dominate estuarine phytoplankton. Limnology and Oceanography 63(S1): S392–S409. https://doi.org/10.1002/lno.10749. Cloern, J.E., C. Grenz, and L. Vidergar-Lucas. 1995. 
An empirical model of the phytoplankton chlorophyll:carbon ratio-the conversion factor between productivity and growth rate. Limnology and Oceanography 40: 1313–1321. https://doi.org/10.4319/lo.1995.40.7.1313. Cloern, J.E., S.Q. Foster, and A.E. Kleckner. 2014. Phytoplankton primary production in the world's estuarine-coastal ecosystems. Biogeosciences 11(9): 2477–2501. https://doi.org/10.5194/bg-11-2477-2014. Colijn, F. 1982. Light absorption in the waters of the ems-dollard estuary and its consequences for the growth of phytoplankton and microphytobenthos. Netherlands Journal of Sea Research 15: 196–216. https://doi.org/10.1016/0077-7579(82)90004-7. Cook, T.L., C.K. Sommerfield, and K.C. Wong. 2007. Observations of tidal and springtime sediment transport in the upper Delaware Estuary. Estuarine, Coastal and Shelf Science 72: 235–246. https://doi.org/10.1016/j.ecss.2006.10.014. Cronin, L.E., J.C. Daiber, and E.M. Hulbert. 1962. Quantitative seasonal aspects of zooplankton in the Delaware River Estuary. Chesapeake Science 3: 63–93. https://doi.org/10.2307/1351221. Delaware Estuary Regional Sediment Management Plan Workgroup. 2013. Delaware Estuary Regional Sediment Management Plan, Sediment Quantity and Dynamics White Paper. Tech. rep., https://www.nj.gov/drbc/library/documents/RSMPaug2013final-report.pdf. Dijkstra, Y.M., R.L. Brouwer, H.M. Schuttelaars, and G.P. Schramkowski. 2017a. The iFlow Modelling Framework v2.4. A modular idealized process-based model for flow and transport in estuaries. Geoscientific Model Development 10: 2691–2713. https://doi.org/10.5194/gmd-10-2691-2017. Dijkstra, Y.M., H.M. Schuttelaars, and H. Burchard. 2017b. Generation of exchange flows in estuaries by tidal and gravitational eddy viscosity - shear covariance (ESCO). Journal of Geophysical Research: Oceans 122: 4217–4237. https://doi.org/10.1002/2016jc012379. Dijkstra, Y.M., H.M. Schuttelaars, G.P. Schramkowski, and R.L. Brouwer. 2019. Modeling the transition to high sediment concentrations as a response to channel deepening in the Ems River Estuary. Journal of Geophysical Research: Oceans 124: 1–17. https://doi.org/10.1029/2018JC014367. Edwards, A.M., and J. Brindley. 1996. Oscillatory behaviour in a three-component plankton population model. Dynamics and Stability of Systems 11: 347–370. https://doi.org/10.1080/02681119608806231. Eppley, R.W. 1972. Temperature and phytoplankton growth in the sea. Fishery Bulletin 70: 1063–1085. Eppley, R.W., J.N. Rogers, and J.J. McCarthy. 1969. Half-saturation constants for uptake of nitrate and ammonium by marine phytoplankton. Limnology and Oceanography 14: 912–920. https://doi.org/10.4319/lo.1969.14.6.0912. Filardo, M.J., and W.M. Dunstan. 1985. Hydrodynamic control of phytoplankton in low salinity waters of the James River Estuary, Virginia, U.S.A. Estuarine, Coastal and Shelf Science 21: 653–667. https://doi.org/10.1016/0272-7714(85)90064-2. Franks, P.J.S. 2002. NPZ models of plankton dynamics: Their construction, coupling to physics, and application. Journal of Oceanography 58(2): 379–387. https://doi.org/10.1023/a:1015874028196. Friedrichs, M.A.M., J.A. Dusenberry, L.A. Anderson, R.A. Armstrong, F. Chai, J.R. Christian, S.C. Doney, J. Dunne, M. Fujii, R. Hood, D.J. McGillicuddy, J.K. Moore, M. Schartau, Y.H. Spitz, and J.D. Wiggert. 2007. Assessment of skill and portability in regional marine biogeochemical models: Role of multiple planktonic groups. Journal of Geophysical Research 112(C8). https://doi.org/10.1029/2006jc003852. Gentleman, W., A. Leising, B. 
Frost, S. Strom, and J. Murray. 2003. Functional responses for zooplankton feeding on multiple resources: A review of assumptions and biological dynamics. Deep Sea Research Part II: Topical Studies in Oceanography 50(22–26): 2847–2875. https://doi.org/10.1016/j.dsr2.2003.07.001. George, J.A., D.J. Lonsdale, L.R. Merlo, and C.J. Gobler. 2015. The interactive roles of temperature, nutrients, and zooplankton grazing in controlling the winter-spring phytoplankton bloom in a temperate, coastal ecosystem, Long Island Sound. Limnology and Oceanography 60(1): 110–126. https://doi.org/10.1002/lno.10020. Harding, L.W., B.W. Meeson, and T.R. Fischer. 1986. Phytoplankton production in two east coast estuaries: Photosynthesis-light functions and patterns of carbon assimilation in Chesapeake and Delaware Bays. Estuarine, Coastal and Shelf Science 23: 773–806. https://doi.org/10.1016/0272-7714(86)90074-0. Howarth, R.W., D.P. Swaney, T.J. Butler, and R. Marino. 2000. Rapid communication: Climatic control on eutrophication of the Hudson River Estuary. Ecosystems 3: 210–215. https://doi.org/10.1007/s100210000020. Kumar, M., H.M. Schuttelaars, and P.C. Roos. 2017. Three-dimensional semi-idealized model for estuarine turbidity maxima in tidally dominated estuaries. Ocean Modelling 113: 1–21. https://doi.org/10.1016/j.ocemod.2017.03.005. Lebo, M.E., and J.H. Sharp. 1992. Modeling phosphorus cycling in a well-mixed coastal plain estuary. Estuarine, Coastal and Shelf Science 35: 235–252. https://doi.org/10.1016/s0272-7714(05)80046-0. Lebo, M.E., and J.H. Sharp. 1993. Distribution of phosphorus along the Delaware, an urbanized coastal plain estuary. Estuaries 16: 290–301. https://doi.org/10.2307/1352502. Liu, B., and H.E. De Swart. 2015. Impact of river discharge on phytoplankton bloom dynamics in eutrophic estuaries: A model study. Journal of Marine Systems 152: 64–74. https://doi.org/10.1016/j.jmarsys.2015.07.007. Lucas, L.V., J.E. Cloern, J.R. Koseff, S.G. Monismith, and J.K. Thompson. 1998. Does the Sverdrup critical depth model explain bloom dynamics in estuaries Journal of Marine Research 56: 375–415. https://doi.org/10.1357/002224098321822357. Lucas, L.V., J.R. Koseff, J.E. Cloern, S.G. Monismith, and J.K. Thompson. 1999a. Processes governing phytoplankton blooms in estuaries. I: The local production-loss balance. Marine Ecology Progress Series 187: 1–15. https://doi.org/10.3354/meps187001. Lucas, L.V., J.R. Koseff, S.G. Monismith, J.E. Cloern, and J.K. Thompson. 1999b. Processes governing phytoplankton blooms in estuaries. II: The role of horizontal transport. Marine Ecology Progress Series 187: 17–30. https://doi.org/10.3354/meps187017. Lucas, L.V., J.K. Thompson, and L.R. Brown. 2009. Why are diverse relationships observed between phytoplankton biomass and transport time? Limnology and Oceanography 54(1): 381–390. https://doi.org/10.4319/lo.2009.54.1.0381. MacIsaac, J.J., and R.C. Dugdale. 1969. The kinetics of nitrate and ammonium uptake by natural populations of marine phytoplankton. Deep Sea Research 16: 45–57. https://doi.org/10.1016/0011-7471(69)90049-7. McSweeney, J.M., R.J. Chant, and C.K. Sommerfield. 2016a. Lateral variability of sediment transport in the Delaware Estuary. Journal of Geophysical Research: Oceans 121: 725–744. https://doi.org/10.1002/2015jc010974. McSweeney, J.M., R.J. Chant, J.L. Wilkin, and C.K. Sommerfield. 2016b. Suspended-sediment impacts on light-limited productivity in the Delaware Estuary. Estuaries and Coasts. https://doi.org/10.1007/s12237-016-0200-3. 
Monismith, S.G., W. Kimmerer, J.R. Burau, and M.T. Stacey. 2002. Structure and flow-induced and variability of the subtidal and salinity field and in and Northern San and Francisco Bay. Journal of Physical Oceanography 32: 3003–3019. https://doi.org/10.1175/1520-0485(2002)032%3C3003:SAFIVO%3E2.0.CO;2. Pennock, J.R. 1985. Chlorophyll distributions in the delaware estuary: Regulation by light-limitation. Estuarine, Coastal and Shelf Science 21: 711–725. https://doi.org/10.1016/0272-7714(85)90068-x. Pennock, J.R., and J.H. Sharp. 1994. Temporal alternation between light- and nutrient limitation of phytoplankton production in a coastal plain estuary. Marine Ecology Progress Series 111: 275–288. https://doi.org/10.3354/meps111275. Peterson, D.H., and J.F. Festa. 1984. Numerical simulation of phytoplankton productivity in partially mixed estuaries. Estuarine, Coastal and Shelf Science 19(5): 563–589. https://doi.org/10.1016/0272-7714(84)90016-7. Qin, Q., and J. Shen. 2017. The contribution of local and transport processes to phytoplankton biomass variability over different timescales in the Upper James River, Virginia. Estuarine, Coastal and Shelf Science 196: 123–133. https://doi.org/10.1016/j.ecss.2017.06.037. Sarthou, G., K.R. Timmermans, S. Blain, and P. Treguer. 2005. Growth physiology and fate of diatoms in the ocean: A review. Journal of Sea Research 53: 25–42. https://doi.org/10.1016/j.seares.2004.01.007. Sharp, J.H., L.A. Cifuentes, R.B. Coffin, J.R. Pennock, and K.C. Wong. 1986. The influence of river variability on the circulation, chemistry, and microbiology of the Delaware Estuary. Estuaries 9: 261. https://doi.org/10.2307/1352098. Sharp, J.H., K. Yoshiyama, A.E. Parker, M.C. Schwartz, S.E. Curless, A.Y. Beauregard, J.E. Ossolinski, and A.R. Davis. 2009. A biogeochemical view of estuarine eutrophication: Seasonal and spatial trends and correlations in the Delaware Estuary. Estuaries and Coasts 32: 1023–1043. https://doi.org/10.1007/s12237-009-9210-8. Smith, E.L. 1936. Photosynthesis in relation to light and carbon dioxide. Proceedings of the National Academy of Sciences of the United States of America 22: 504–511. https://doi.org/10.1073/pnas.22.8.504. Steele, J.H., and E.W. Henderson. 1992. The role of predation in plankton models. Journal of Plankton Research 14: 157–172. https://doi.org/10.1093/plankt/14.1.157. Sun, J., Y. Feng, Y. Zhang, and D.A. Hutchins. 2007. Fast microzooplankton grazing on fast-growing, low-biomass phytoplankton: A case study in spring in Chesapeake Bay, Delaware Inland Bays and Delaware Bay. Hydrobiologia 589(1): 127–139. https://doi.org/10.1007/s10750-007-0730-6. Sverdrup, H.U. 1953. On conditions for the vernal blooming of phytoplankton. ICES Journal of Marine Science 18(3): 287–295. https://doi.org/10.1093/icesjms/18.3.287. Taniguchi, D.A.A., P.J.S. Franks, and F.J. Poulin. 2014. Planktonic biomass size spectra: An emergent property of size-dependent physiological rates, food web dynamics, and nutrient regimes. Marine Ecology Progress Series 514: 13–33. https://doi.org/10.3354/meps10968. Wei, X., G.P. Schramkowski, and H.M. Schuttelaars. 2016. Salt dynamics in well-mixed estuaries: Importance of advection by tides. Journal of Physical Oceanography 46: 1457–1475. https://doi.org/10.1175/jpo-d-15-0045.1. Wei, X., M. Kumar, and H.M. Schuttelaars. 2018. Three-dimensional sediment dynamics in well-mixed estuaries: Importance of the internally generated overtide, spatial settling lag, and gravitational circulation. 
Journal of Geophysical Research: Oceans 123: 1062–1090. https://doi.org/10.1002/2017jc012857. White, J.R., and M.R. Roman. 1992. Seasonal study of grazing by metazoan zooplankton in the mesohaline Chesapeake Bay. Marine Ecology Progress Series 86: 251–261. https://doi.org/10.3354/meps086251. Wofsy, S.C. 1983. A simple model to predict extinction coefficients and phytoplankton biomass in eutrophic waters. Limnology and Oceanography 28 (6): 1144–1155. https://doi.org/10.4319/lo.1983.28.6.1144. Yoshiyama, K., and J.H. Sharp. 2006. Phytoplankton response to nutrient enrichment in an urbanized estuary: Apparent inhibition of primary production by overeutrophication. Limnology and Oceanography 51: 424–434. https://doi.org/10.4319/lo.2006.51.1_part_2.0424. Zakardjian, B.A., Y. Gratton, and A.F. Vézina. 2000. Late spring phytoplankton bloom in the Lower St. Lawrence Estuary: The flushing hypothesis revisited. Marine Ecology Progress Series 192: 31–48. https://doi.org/10.3354/meps192031.

This work has been developed during a visit by Yoeri Dijkstra to the Department of Marine and Coastal Sciences (DMCS), Rutgers University, NJ. The authors thank Eli Hunter (DMCS), John Yagecic (DRBC), and Namsoo Suk (DRBC) for help with the observations, Henk Schuttelaars (Delft University of Technology, Netherlands) for inspiration, support, and critical review of this work, and Mark Brush and two anonymous reviewers for excellent reviews that led to substantial improvement of this paper.

Delft Institute of Applied Mathematics, Delft University of Technology, Delft, Netherlands
Yoeri M. Dijkstra
Department of Marine and Coastal Sciences, Rutgers University, New Brunswick, NJ, USA
Robert J. Chant
Department of Environmental Sciences, Rutgers University, New Brunswick, NJ, USA
John R. Reinfelder
Correspondence to Yoeri M. Dijkstra. Communicated by Mark J. Brush
Below is the link to the electronic supplementary material.
Dijkstra, Y.M., Chant, R.J. & Reinfelder, J.R. Factors Controlling Seasonal Phytoplankton Dynamics in the Delaware River Estuary: an Idealized Model Study. Estuaries and Coasts 42, 1839–1857 (2019). https://doi.org/10.1007/s12237-019-00612-3
Revised: 04 July 2019
Light limitation Nutrient limitation
Superheterodyne receiver

Common type of radio receiver that shifts the received signal to an easily-processed intermediate frequency

A 5-tube superheterodyne receiver made in Japan circa 1955
Superheterodyne transistor radio circuit circa 1975

A superheterodyne receiver, often shortened to superhet, is a type of radio receiver that uses frequency mixing to convert a received signal to a fixed intermediate frequency (IF) which can be more conveniently processed than the original carrier frequency. It was long believed to have been invented by US engineer Edwin Armstrong, but after some controversy the earliest patent for the invention is now credited to French radio engineer and radio manufacturer Lucien Lévy.[1] Virtually all modern radio receivers use the superheterodyne principle; only those software-defined radios using direct sampling do not.

Early Morse code radio broadcasts were produced using an alternator connected to a spark gap. The output signal was at a carrier frequency defined by the physical construction of the gap, modulated by the alternating current signal from the alternator. Since the output of the alternator was generally in the audible range, this produces an audible amplitude modulated (AM) signal. Simple radio detectors filtered out the high-frequency carrier, leaving the modulation, which was passed on to the user's headphones as an audible signal of dots and dashes. In 1904, Ernst Alexanderson introduced the Alexanderson alternator, a device that directly produced radio frequency output with higher power and much higher efficiency than the older spark gap systems. In contrast to the spark gap, however, the output from the alternator was a pure carrier wave at a selected frequency. When detected on existing receivers, the dots and dashes would normally be inaudible, or "supersonic". Due to the filtering effects of the receiver, these signals generally produced a click or thump, which were audible but made determining dot or dash difficult. In 1905, Canadian inventor Reginald Fessenden came up with the idea of using two Alexanderson alternators operating at closely spaced frequencies to broadcast two signals, instead of one. The receiver would then receive both signals, and as part of the detection process, only the beat frequency would exit the receiver. By selecting two carriers close enough that the beat frequency was audible, the resulting Morse code could once again be easily heard even in simple receivers. For instance, if the two alternators operated at frequencies 3 kHz apart, the output in the headphones would be dots or dashes of 3 kHz tone, making them easily audible. Fessenden coined the term "heterodyne", meaning "generated by a difference" (in frequency), to describe this system.
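The arithmetic of Fessenden's beat can be reproduced numerically: multiplying two closely spaced carriers (a simple stand-in for a nonlinear detector) yields components at their sum and difference frequencies, and only the difference falls in the audio band. The carrier frequencies below are arbitrary illustrative values, not the ones Fessenden used.

```python
import numpy as np

# Illustrative model of heterodyning: mixing two carriers 3 kHz apart produces
# sum and difference frequencies; the audible "beat" is the difference.
fs = 1_000_000                      # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)      # 50 ms of signal
f1, f2 = 100_000.0, 103_000.0       # two carriers 3 kHz apart (assumed values)
mixed = np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
audible = freqs < 20_000            # keep only the audio band
beat = freqs[audible][np.argmax(spectrum[audible])]
print(f"strongest audio-band component: {beat:.0f} Hz")   # ~3000 Hz
```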
The word is derived from the Greek roots hetero- "different", and -dyne "power". Morse code was widely used in the early days of radio because it was both easy to produce and easy to receive. In contrast to voice broadcasts, the output of the amplifier didn't have to closely match the modulation of the original signal. As a result, any number of simple amplification systems could be used. One method used an interesting side-effect of early triode amplifier tubes. If both the plate (anode) and grid were connected to resonant circuits tuned to the same frequency and the stage gain was much higher than unity, stray capacitive coupling between the grid and the plate would cause the amplifier to go into oscillation. In 1913, Edwin Howard Armstrong described a receiver system that used this effect to produce audible Morse code output using a single triode. The output of the amplifier taken at the anode was connected back to the input through a "tickler", causing feedback that drove input signals well beyond unity. This caused the output to oscillate at a chosen frequency with great amplification. When the original signal cut off at the end of the dot or dash, the oscillation decayed and the sound disappeared after a short delay. Armstrong referred to this concept as a regenerative receiver, and it immediately became one of the most widely used systems of its era. Many radio systems of the 1920s were based on the regenerative principle, and it continued to be used in specialized roles into the 1940s, for instance in the IFF Mark II. There was one role where the regenerative system was not suitable, even for Morse code sources, and that was the task of radio direction finding, or RDF. The regenerative system was highly non-linear, amplifying any signal above a certain threshold by a huge amount, sometimes so large it caused it to turn into a transmitter (which was the entire concept behind IFF). In RDF, the strength of the signal is used to determine the location of the transmitter, so one requires linear amplification to allow the strength of the original signal, often very weak, to be accurately measured. To address this need, RDF systems of the era used triodes operating below unity. To get a usable signal from such a system, tens or even hundreds of triodes had to be used, connected together anode-to-grid. These amplifiers drew enormous amounts of power and required a team of maintenance engineers to keep them running. Nevertheless, the strategic value of direction finding on weak signals was so high that the British Admiralty felt the high cost was justified. Superheterodyne One of the prototype superheterodyne receivers built at Armstrong's Signal Corps laboratory in Paris during World War I. It is constructed in two sections, the mixer and local oscillator (left) and three IF amplification stages and a detector stage (right). The intermediate frequency was 75 kHz. Although a number of researchers discovered the superheterodyne concept, filing patents only months apart (see below), Armstrong is often credited with the concept. He came across it while considering better ways to produce RDF receivers. He had concluded that moving to higher "short wave" frequencies would make RDF more useful and was looking for practical means to build a linear amplifier for these signals. At the time, short wave was anything above about 500 kHz, beyond any existing amplifier's capabilities. 
It had been noticed that when a regenerative receiver went into oscillation, other nearby receivers would start picking up other stations as well. Armstrong (and others) eventually deduced that this was caused by a "supersonic heterodyne" between the station's carrier frequency and the regenerative receiver's oscillation frequency. When the first receiver began to oscillate at high output, its signal would flow back out through the antenna and be received on any nearby receiver. On that receiver, the two signals mixed just as they did in the original heterodyne concept, producing an output that is the difference in frequency between the two signals.

For instance, consider a receiver that was tuned to a station at 300 kHz. If a second receiver is set up nearby and set to 400 kHz with high gain, it will begin to give off a 400 kHz signal that will be received by the first unit. In that receiver, the two signals will mix to produce four outputs: one at the original 300 kHz, another at the received 400 kHz, and two more, the difference at 100 kHz and the sum at 700 kHz. This is the same effect that Fessenden had proposed, but in his system the two frequencies were deliberately chosen so that the beat frequency was audible. In this case, all of the frequencies are well beyond the audible range, and thus "supersonic", giving rise to the name superheterodyne.

Armstrong realized that this effect was a potential solution to the "short wave" amplification problem, because the "difference" output still retained its original modulation, but on a lower carrier frequency. In the example above, one can amplify the 100 kHz beat signal and retrieve the original information from it; the receiver does not have to tune in the higher 300 kHz original carrier. By selecting an appropriate set of frequencies, even very high-frequency signals could be "reduced" to a frequency that could be amplified by existing systems. For instance, to receive a signal at 1500 kHz, far beyond the range of efficient amplification at the time, one could set up an oscillator at, for example, 1560 kHz. Armstrong referred to this as the "local oscillator" or LO. As its signal was being fed into a second receiver in the same device, it did not have to be powerful, generating only enough signal to be roughly similar in strength to that of the received station.[a] When the signal from the LO mixes with the station's, one of the outputs will be the heterodyne difference frequency, in this case 60 kHz. He termed this resulting difference the "intermediate frequency", often abbreviated to "IF".

In December 1919, Major E. H. Armstrong gave publicity to an indirect method of obtaining short-wave amplification, called the super-heterodyne. The idea is to reduce the incoming frequency, which may be, for example, 1,500,000 cycles (200 meters), to some suitable super-audible frequency that can be amplified efficiently, then passing this current through an intermediate frequency amplifier, and finally rectifying and carrying on to one or two stages of audio frequency amplification.[2]

The "trick" to the superheterodyne is that by changing the LO frequency you can tune in different stations. For instance, to receive a signal at 1300 kHz, one could tune the LO to 1360 kHz, resulting in the same 60 kHz IF. This means the amplifier section can be tuned to operate at a single frequency, the design IF, which is much easier to do efficiently.
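To make the mixing arithmetic concrete, the following short Python sketch (not part of the original article; the function name and the use of kHz units are choices made here) lists the main products of an ideal mixer and shows that retuning the LO keeps the IF fixed, reproducing the numbers in the examples above.

# Minimal sketch of ideal mixer products, reproducing the article's examples.
def mixer_products(f_rf_khz, f_lo_khz):
    """Return the main output frequencies (kHz) of an ideal mixer."""
    return {
        "original RF": f_rf_khz,
        "local oscillator": f_lo_khz,
        "difference (IF)": abs(f_rf_khz - f_lo_khz),
        "sum": f_rf_khz + f_lo_khz,
    }

print(mixer_products(300, 400))    # difference 100 kHz, sum 700 kHz
print(mixer_products(1500, 1560))  # difference 60 kHz -> the IF
print(mixer_products(1300, 1360))  # retuned LO, same 60 kHz IF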
The first commercial superheterodyne receiver,[3] the RCA Radiola AR-812, was brought out on March 4, 1924, priced at $286 (equivalent to $4,320 in 2020). It used 6 triodes: a mixer, a local oscillator, and two IF and two audio amplifier stages, with an IF of 45 kHz. It was a commercial success, with better performance than competing receivers.

Armstrong put his ideas into practice, and the technique was soon adopted by the military. It was less popular when commercial radio broadcasting began in the 1920s, mostly due to the need for an extra tube (for the oscillator), the generally higher cost of the receiver, and the level of skill required to operate it. For early domestic radios, tuned radio frequency (TRF) receivers were more popular because they were cheaper, easier for a non-technical owner to use, and less costly to operate. Armstrong eventually sold his superheterodyne patent to Westinghouse, which then sold it to the Radio Corporation of America (RCA), the latter monopolizing the market for superheterodyne receivers until 1930.[4]

Development

Because the original motivation for the superhet was the difficulty of using the triode amplifier at high frequencies, there was an advantage in using a lower intermediate frequency. During this era, many receivers used an IF of only 30 kHz.[5] These low IFs, often using IF transformers based on the self-resonance of iron-core transformers, had poor image frequency rejection, but overcame the difficulty of using triodes at radio frequencies in a manner that competed favorably with the less robust neutrodyne TRF receiver. Higher IFs (455 kHz was a common standard) came into use in later years, after the invention of the tetrode and pentode as amplifying tubes, largely solving the problem of image rejection. Even later, however, low IFs (typically 60 kHz) were again used in the second (or third) IF stage of double- or triple-conversion communications receivers to take advantage of the selectivity more easily achieved at lower IFs, with image rejection accomplished in the earlier IF stage(s), which operated at a higher frequency.

In the 1920s, at these low frequencies, commercial IF filters looked very similar to 1920s audio interstage coupling transformers, had similar construction, and were wired up in an almost identical manner, so they were referred to as "IF transformers". By the mid-1930s, superheterodynes using much higher intermediate frequencies (typically around 440–470 kHz) used tuned transformers more similar to other RF applications. The name "IF transformer" was retained, however, now meaning "intermediate frequency". Modern receivers typically use a mixture of ceramic resonators or surface acoustic wave resonators and traditional tuned-inductor IF transformers.

The "All American Five" vacuum-tube superheterodyne AM broadcast receiver of the 1940s was cheap to manufacture because it required only five tubes.

By the 1930s, improvements in vacuum tube technology rapidly eroded the TRF receiver's cost advantages, and the explosion in the number of broadcasting stations created a demand for cheaper, higher-performance receivers. Vacuum tubes gained additional grids; before the more modern screen-grid tetrode, this development included a tetrode with two control grids, a tube that combined the mixer and oscillator functions and was first used in the so-called autodyne mixer. This was rapidly followed by the introduction of tubes specifically designed for superheterodyne operation, most notably the pentagrid converter.
By reducing the tube count (with each tube stage being the main factor affecting cost in this era), this further reduced the advantage of TRF and regenerative receiver designs. By the mid-1930s, commercial production of TRF receivers was largely replaced by superheterodyne receivers. By the 1940s, the vacuum-tube superheterodyne AM broadcast receiver was refined into a cheap-to-manufacture design called the "All American Five" because it used five vacuum tubes: usually a converter (mixer/local oscillator), an IF amplifier, a detector/audio amplifier, an audio power amplifier, and a rectifier. Since that time, the superheterodyne design has been used for almost all commercial radio and TV receivers.

Patent battles

French engineer Lucien Lévy filed a patent application for the superheterodyne principle in August 1917 with brevet n° 493660.[6] Armstrong also filed his patent in 1917.[7][8][9] Lévy filed his original disclosure about seven months before Armstrong's.[1] German inventor Walter H. Schottky also filed a patent in 1918.[6] At first the US recognized Armstrong as the inventor, and his US Patent 1,342,885 was issued on 8 June 1920.[1] After various changes and court hearings, Lévy was awarded US Patent No. 1,734,938, which included seven of the nine claims in Armstrong's application, while the two remaining claims were granted to Alexanderson of GE and Kendall of AT&T.[1]

Principle of operation

Block diagram of a typical superheterodyne receiver. Red parts are those that handle the incoming radio frequency (RF) signal; green are parts that operate at the intermediate frequency (IF), while blue parts operate at the modulation (audio) frequency. The dotted line indicates that the local oscillator and RF filter must be tuned in tandem.

How a superheterodyne radio works: the horizontal axes are frequency f; the blue graphs show the voltages of the radio signals at various points in the circuit, and the red graphs show the transfer functions of the filters in the circuit, where the thickness of the red bands shows the fraction of signal from the previous graph that passes through the filter at each frequency. The incoming radio signal from the antenna (top graph) consists of the desired radio signal S1 plus others at different frequencies. The RF filter (2nd graph) removes any signal such as S2 at the image frequency LO − IF, which would otherwise pass through the IF filter and interfere. The remaining composite signal is applied to the mixer along with a local oscillator signal (LO) (3rd graph). In the mixer the signal S1 combines with the LO frequency to create a heterodyne at the difference between these frequencies, the intermediate frequency (IF), at the mixer output (4th graph). This passes through the IF bandpass filter (5th graph) and is then amplified and demodulated (demodulation is not shown). The unwanted signals create heterodynes at other frequencies (4th graph), which are filtered out by the IF filter.

Circuit description

The block diagram described above is that of a typical single-conversion superheterodyne receiver. It has blocks that are common to superheterodyne receivers,[10] with only the RF amplifier being optional. The antenna collects the radio signal. The tuned RF stage with optional RF amplifier provides some initial selectivity; it is necessary to suppress the image frequency (see below), and may also serve to prevent strong out-of-passband signals from saturating the initial amplifier.
A local oscillator provides the mixing frequency; it is usually a variable frequency oscillator, which is used to tune the receiver to different stations. The frequency mixer does the actual heterodyning that gives the superheterodyne its name; it changes the incoming radio frequency signal to a higher or lower, fixed, intermediate frequency (IF). The IF band-pass filter and amplifier supply most of the gain and the narrowband filtering for the radio. The demodulator extracts the audio or other modulation from the IF radio frequency. The extracted signal is then amplified by the audio amplifier.

To receive a radio signal, a suitable antenna is required. The output of the antenna may be very small, often only a few microvolts. The signal from the antenna is tuned and may be amplified in a so-called radio frequency (RF) amplifier, although this stage is often omitted. One or more tuned circuits at this stage block frequencies that are far removed from the intended reception frequency. To tune the receiver to a particular station, the frequency of the local oscillator is controlled by the tuning knob (for instance). Tuning of the local oscillator and the RF stage may use a variable capacitor or a varicap diode.[11] The tuning of one (or more) tuned circuits in the RF stage must track the tuning of the local oscillator.

Local oscillator and mixer

The signal is then fed into a circuit where it is mixed with a sine wave from a variable frequency oscillator known as the local oscillator (LO). The mixer uses a non-linear component to produce both sum and difference beat frequency signals,[12] each one containing the modulation contained in the desired signal. The output of the mixer may include the original RF signal at fRF, the local oscillator signal at fLO, and the two new heterodyne frequencies fRF + fLO and fRF − fLO. The mixer may inadvertently produce additional frequencies such as third- and higher-order intermodulation products. Ideally, the IF bandpass filter removes all but the desired IF signal at fIF. The IF signal contains the original modulation (transmitted information) that the received radio signal had at fRF.

The frequency of the local oscillator fLO is set so the desired reception radio frequency fRF mixes to fIF. There are two choices for the local oscillator frequency because the dominant mixer products are at fRF ± fLO. If the local oscillator frequency is less than the desired reception frequency, it is called low-side injection (fIF = fRF − fLO); if the local oscillator is higher, then it is called high-side injection (fIF = fLO − fRF).

The mixer will process not only the desired input signal at fRF, but also all signals present at its inputs. There will be many mixer products (heterodynes). Most other signals produced by the mixer (such as those due to stations at nearby frequencies) can be filtered out in the IF tuned amplifier; that gives the superheterodyne receiver its superior performance. However, if fLO is set to fRF + fIF, then an incoming radio signal at fLO + fIF will also produce a heterodyne at fIF; the frequency fLO + fIF is called the image frequency and must be rejected by the tuned circuits in the RF stage. The image frequency is 2fIF higher (or lower) than the desired frequency fRF, so employing a higher IF fIF increases the receiver's image rejection without requiring additional selectivity in the RF stage. To suppress the unwanted image, the tuning of the RF stage and the LO may need to "track" each other.
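The injection-side relations above can be illustrated with a few lines of code. This is a sketch only, not from the article; the helper name lo_frequency and the example values are assumptions chosen to match the earlier tuning example.

# Sketch: choosing the LO for a given station and design IF.
def lo_frequency(f_rf, f_if, high_side=True):
    """Return the LO frequency for high-side or low-side injection."""
    return f_rf + f_if if high_side else f_rf - f_if

f_rf, f_if = 1300.0, 60.0  # kHz, as in the earlier tuning example
for side in (True, False):
    f_lo = lo_frequency(f_rf, f_if, high_side=side)
    # With an ideal mixer, the difference product lands exactly on the IF:
    assert abs(f_rf - f_lo) == f_if
    print("high-side" if side else "low-side", "LO =", f_lo, "kHz")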
In some cases, a narrow-band receiver can have a fixed-tuned RF amplifier. In that case, only the local oscillator frequency is changed. In most cases, a receiver's input band is wider than its IF center frequency. For example, a typical AM broadcast band receiver covers 510 kHz to 1655 kHz (an input band of roughly 1145 kHz) with a 455 kHz IF, while an FM broadcast band receiver covers the 88 MHz to 108 MHz band with a 10.7 MHz IF. In that situation, the RF amplifier must be tuned so the IF amplifier does not see two stations at the same time. If the AM broadcast band receiver's LO were set at 1200 kHz, it would see stations at both 745 kHz (1200 − 455 kHz) and 1655 kHz (1200 + 455 kHz). Consequently, the RF stage must be designed so that any stations that are twice the IF frequency away are significantly attenuated. The tracking can be done with a multi-section variable capacitor or with varactors driven by a common control voltage. An RF amplifier may have tuned circuits at both its input and its output, so three or more tuned circuits may be tracked. In practice, the RF and LO frequencies need to track closely but not perfectly.[13][14]

In the days of tube (valve) electronics, it was common for superheterodyne receivers to combine the functions of the local oscillator and the mixer in a single tube, leading to a savings in power, size, and especially cost. A single pentagrid converter tube would oscillate and also provide signal amplification as well as frequency mixing.[15]

IF amplifier

The stages of an intermediate frequency amplifier ("IF amplifier" or "IF strip") are tuned to a fixed frequency that does not change as the receiving frequency changes. The fixed frequency simplifies optimization of the IF amplifier.[10] The IF amplifier is selective around its center frequency fIF. The fixed center frequency allows the stages of the IF amplifier to be carefully tuned for best performance (this tuning is called "aligning" the IF amplifier). If the center frequency changed with the receiving frequency, then the IF stages would have had to track their tuning. That is not the case with the superheterodyne.

Normally, the IF center frequency fIF is chosen to be less than the range of desired reception frequencies fRF. That is because it is easier and less expensive to get high selectivity at a lower frequency using tuned circuits. The bandwidth of a tuned circuit with a certain Q is proportional to the frequency itself (and, what's more, a higher Q is achievable at lower frequencies), so fewer IF filter stages are required to achieve the same selectivity. Also, it is easier and less expensive to get high gain at lower frequencies.

However, in many modern receivers designed for reception over a wide frequency range (e.g. scanners and spectrum analyzers), a first IF higher than the reception frequency is employed in a double conversion configuration. For instance, the Rohde & Schwarz EK-070 VLF/HF receiver covers 10 kHz to 30 MHz.[14] It has a band-switched RF filter and mixes the input to a first IF of 81.4 MHz and a second IF of 1.4 MHz. The first LO frequency is 81.4 to 111.4 MHz, a reasonable range for an oscillator. But if the original RF range of the receiver were converted directly to the 1.4 MHz intermediate frequency, the LO frequency would need to cover 1.4–31.4 MHz, which cannot be accomplished using tuned circuits (a variable capacitor with a fixed inductor would need a capacitance range of 500:1). Image rejection is never an issue with such a high IF frequency.
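As a rough numeric check of this up-conversion argument (a sketch only, not taken from the receiver's specifications; the capacitance-ratio figure simply follows from the ideal LC relation f proportional to 1/sqrt(C)), the oscillator ranges for the two frequency plans can be compared as follows.

# Sketch: comparing LO ranges for direct conversion vs. a high first IF.
rf_low, rf_high = 0.01, 30.0      # receiver coverage in MHz (10 kHz to 30 MHz)

def lo_range(f_if):
    """LO range (MHz) for high-side injection into an IF at f_if."""
    return rf_low + f_if, rf_high + f_if

for f_if in (1.4, 81.4):
    lo_min, lo_max = lo_range(f_if)
    # For an LC oscillator, f ~ 1/sqrt(LC), so the capacitance ratio
    # needed to span the range is the square of the frequency ratio.
    cap_ratio = (lo_max / lo_min) ** 2
    print(f"IF {f_if} MHz: LO {lo_min:.2f}-{lo_max:.1f} MHz, "
          f"capacitance ratio ~{cap_ratio:.0f}:1")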
The first IF stage uses a crystal filter with a 12 kHz bandwidth. There is a second frequency conversion (making a triple-conversion receiver) that mixes the 81.4 MHz first IF with 80 MHz to create a 1.4 MHz second IF. Image rejection for the second IF is not an issue, as the first IF has a bandwidth of much less than 2.8 MHz.

To avoid interference to receivers, licensing authorities avoid assigning common IF frequencies to transmitting stations. Standard intermediate frequencies are 455 kHz for medium-wave AM radio, 10.7 MHz for broadcast FM receivers, 38.9 MHz (Europe) or 45 MHz (US) for television, and 70 MHz for satellite and terrestrial microwave equipment. To avoid the tooling costs associated with these components, most manufacturers tended to design their receivers around the fixed range of frequencies offered, which resulted in a worldwide de facto standardization of intermediate frequencies.

In early superhets, the IF stage was often a regenerative stage providing sensitivity and selectivity with fewer components. Such superhets were called super-gainers or regenerodynes.[16] This approach is also called a Q multiplier, involving a small modification to an existing receiver, especially for the purpose of increasing selectivity.

IF bandpass filter

The IF stage includes a filter and/or multiple tuned circuits to achieve the desired selectivity. This filtering must have a band pass equal to or less than the frequency spacing between adjacent broadcast channels. Ideally, a filter would have high attenuation to adjacent channels but maintain a flat response across the desired signal spectrum in order to retain the quality of the received signal. This may be obtained using one or more dual-tuned IF transformers, a quartz crystal filter, or a multipole ceramic crystal filter.[17] In the case of television receivers, no other technique was able to produce the precise bandpass characteristic needed for vestigial sideband reception, such as that used in the NTSC system first approved by the US in 1941. By the 1980s, multi-component capacitor-inductor filters had been replaced with precision electromechanical surface acoustic wave (SAW) filters. Fabricated by precision laser milling techniques, SAW filters are cheaper to produce, can be made to extremely close tolerances, and are very stable in operation.

Demodulator

The received signal is now processed by the demodulator stage, where the audio signal (or other baseband signal) is recovered and then further amplified. AM demodulation requires the simple rectification of the RF signal (so-called envelope detection) and a simple RC low-pass filter to remove remnants of the intermediate frequency.[18] FM signals may be detected using a discriminator, ratio detector, or phase-locked loop. Continuous wave and single sideband signals require a product detector using a so-called beat frequency oscillator, and there are other techniques used for different types of modulation.[19] The resulting audio signal (for instance) is then amplified and drives a loudspeaker.

When so-called high-side injection has been used, where the local oscillator is at a higher frequency than the received signal (as is common), the frequency spectrum of the original signal is reversed. This must be taken into account by the demodulator (and in the IF filtering) in the case of certain types of modulation such as single sideband.
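The envelope detection mentioned above can be sketched numerically. The snippet below is an illustration only, not circuitry from the article; the sample rate, carrier, modulation tone, and cutoff frequency are arbitrary assumptions. It rectifies a synthetic AM waveform at the IF and smooths it with a single-pole RC-style low-pass filter to recover the modulation.

import numpy as np

# Sketch of AM envelope detection: rectify, then RC-style low-pass filter.
fs = 1_000_000                      # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal
f_if, f_mod = 455e3, 1e3            # 455 kHz IF carrier, 1 kHz audio tone
am = (1 + 0.5 * np.sin(2 * np.pi * f_mod * t)) * np.cos(2 * np.pi * f_if * t)

rectified = np.maximum(am, 0.0)     # ideal half-wave rectifier

# One-pole low-pass (discrete RC filter) with a ~5 kHz cutoff.
rc = 1 / (2 * np.pi * 5e3)
alpha = (1 / fs) / (rc + 1 / fs)
envelope = np.empty_like(rectified)
acc = 0.0
for i, x in enumerate(rectified):
    acc += alpha * (x - acc)
    envelope[i] = acc

print("recovered modulation swing:", envelope.max() - envelope.min())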
Multiple conversion

Double conversion superheterodyne receiver block diagram.

To overcome obstacles such as image response, some receivers use multiple successive stages of frequency conversion and multiple IFs of different values. A receiver with two frequency conversions and IFs is called a dual conversion superheterodyne, and one with three IFs is called a triple conversion superheterodyne.

The main reason this is done is that with a single IF there is a tradeoff between low image response and selectivity. The separation between the received frequency and the image frequency is equal to twice the IF frequency, so the higher the IF, the easier it is to design an RF filter to remove the image frequency from the input and achieve low image response. However, the higher the IF, the more difficult it is to achieve high selectivity in the IF filter. At shortwave frequencies and above, the difficulty in obtaining sufficient selectivity in the tuning with the high IFs needed for low image response impacts performance. To solve this problem, two IF frequencies can be used: first converting the input frequency to a high IF to achieve low image response, and then converting this frequency to a low IF to achieve good selectivity in the second IF filter. To improve tuning further, a third IF can be used.

For example, for a receiver that can tune from 500 kHz to 30 MHz, three frequency converters might be used.[10] With a 455 kHz IF it is easy to get adequate front-end selectivity with broadcast band (under 1600 kHz) signals. For example, if the station being received is on 600 kHz, the local oscillator can be set to 1055 kHz, giving an IF of 455 kHz (1055 − 600 = 455). But a station on 1510 kHz could also produce an output at 455 kHz (1510 − 1055 = 455) and so cause image interference. However, because 600 kHz and 1510 kHz are so far apart, it is easy to design the front-end tuning to reject the 1510 kHz frequency.

At 30 MHz, things are different. The oscillator would be set to 30.455 MHz to produce a 455 kHz IF, but a station on 30.910 MHz would also produce a 455 kHz beat, so both stations would be heard at the same time. It is virtually impossible to design an RF tuned circuit that can adequately discriminate between 30 MHz and 30.91 MHz, so one approach is to "bulk downconvert" whole sections of the shortwave bands to a lower frequency, where adequate front-end tuning is easier to arrange. For example, the ranges 29 MHz to 30 MHz, 28 MHz to 29 MHz, etc. might be converted down to 2 MHz to 3 MHz, where they can be tuned more conveniently. This is often done by first converting each "block" up to a higher frequency (typically 40 MHz) and then using a second mixer to convert it down to the 2 MHz to 3 MHz range. The 2 MHz to 3 MHz "IF" is basically another self-contained superheterodyne receiver, most likely with a standard IF of 455 kHz.

Modern designs

Microprocessor technology allows replacing the superheterodyne receiver design with a software-defined radio architecture, in which the IF processing after the initial IF filter is implemented in software. This technique is already in use in certain designs, such as very low-cost FM radios incorporated into mobile phones, since the system already has the necessary microprocessor. Radio transmitters may also use a mixer stage to produce an output frequency, working more or less as the reverse of a superheterodyne receiver.

Advantages and disadvantages

Superheterodyne receivers have essentially replaced all previous receiver designs.
The development of modern semiconductor electronics negated the advantages of designs (such as the regenerative receiver) that used fewer vacuum tubes. The superheterodyne receiver offers superior sensitivity, frequency stability and selectivity. Compared with the tuned radio frequency receiver (TRF) design, superhets offer better stability because a tunable oscillator is more easily realized than a tunable amplifier. Operating at a lower frequency, IF filters can give narrower passbands at the same Q factor than an equivalent RF filter. A fixed IF also allows the use of a crystal filter[10] or similar technologies that cannot be tuned. Regenerative and super-regenerative receivers offered high sensitivity, but often suffered from stability problems, making them difficult to operate.

Although the advantages of the superhet design are overwhelming, there are a few drawbacks that need to be tackled in practice.

Image frequency (fIMAGE)

Graphs illustrating the problem of image response in a superheterodyne. The horizontal axes are frequency and the vertical axes are voltage. Without an adequate RF filter, any signal S2 (green) at the image frequency fIMAGE is also heterodyned to the IF frequency fIF along with the desired radio signal S1 (blue) at fRF, so they both pass through the IF filter (red). Thus S2 interferes with S1.

One major disadvantage of the superheterodyne receiver is the problem of image frequency. In heterodyne receivers, an image frequency is an undesired input frequency equal to the station frequency plus (or minus) twice the intermediate frequency. The image frequency results in two stations being received at the same time, thus producing interference. Reception at the image frequency can be combated by tuning (filtering) at the antenna and RF stage of the superheterodyne receiver.

$$ f_{\mathrm{IMAGE}} = \begin{cases} f + 2 f_{\mathrm{IF}}, & \text{if } f_{\mathrm{LO}} > f \text{ (high-side injection)} \\ f - 2 f_{\mathrm{IF}}, & \text{if } f_{\mathrm{LO}} < f \text{ (low-side injection)} \end{cases} $$

For example, an AM broadcast station at 580 kHz is tuned on a receiver with a 455 kHz IF. The local oscillator is tuned to 580 + 455 = 1035 kHz. But a signal at 580 + 455 + 455 = 1490 kHz is also 455 kHz away from the local oscillator, so both the desired signal and the image, when mixed with the local oscillator, will appear at the intermediate frequency. This image frequency is within the AM broadcast band. Practical receivers have a tuning stage before the converter to greatly reduce the amplitude of image frequency signals; additionally, broadcasting stations in the same area have their frequencies assigned to avoid such images.

The unwanted frequency is called the image of the wanted frequency because it is the "mirror image" of the desired frequency reflected about fLO. A receiver with inadequate filtering at its input will pick up signals at two different frequencies simultaneously: the desired frequency and the image frequency. Radio reception that happens to fall on the image frequency can interfere with reception of the desired signal, and noise (static) around the image frequency can decrease the receiver's signal-to-noise ratio (SNR) by up to 3 dB.
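The worked 580 kHz example can be checked with a few lines of code. This is a sketch; the function name is introduced here, not taken from the article, and it simply applies the piecewise image-frequency formula above.

# Sketch: image frequency for a given station and IF (high- or low-side LO).
def image_frequency(f_station, f_if, high_side=True):
    """fIMAGE = f + 2*fIF for high-side injection, f - 2*fIF for low-side."""
    return f_station + 2 * f_if if high_side else f_station - 2 * f_if

f_station, f_if = 580.0, 455.0              # kHz, the AM example above
f_lo = f_station + f_if                     # high-side LO at 1035 kHz
f_img = image_frequency(f_station, f_if)    # 1490 kHz
print("LO:", f_lo, "kHz  image:", f_img, "kHz")
print("image also lands on the IF:", abs(f_img - f_lo) == f_if)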
Early autodyne receivers typically used IFs of only 150 kHz or so. As a consequence, most autodyne receivers required greater front-end selectivity, often involving double-tuned coils, to avoid image interference. With the later development of tubes able to amplify well at higher frequencies, higher IF frequencies came into use, reducing the problem of image interference.

Typical consumer radio receivers have only a single tuned circuit in the RF stage. Sensitivity to the image frequency can be minimized only by (1) a filter that precedes the mixer or (2) a more complex mixer circuit[20] to suppress the image; the latter is rarely used. In most tunable receivers using a single IF frequency, the RF stage includes at least one tuned circuit in the RF front end whose tuning is performed in tandem with the local oscillator. In double (or triple) conversion receivers in which the first conversion uses a fixed local oscillator, this may instead be a fixed bandpass filter that accommodates the frequency range being mapped to the first IF frequency range.

Image rejection is an important factor in choosing the intermediate frequency of a receiver. The farther apart the bandpass frequency and the image frequency are, the more the bandpass filter will attenuate any interfering image signal. Since the frequency separation between the bandpass and the image frequency is 2fIF, a higher intermediate frequency improves image rejection. It may be possible to use a first IF high enough that a fixed-tuned RF stage can reject any image signals.

The ability of a receiver to reject interfering signals at the image frequency is measured by the image rejection ratio. This is the ratio (in decibels) of the output of the receiver from a signal at the received frequency to its output for an equal-strength signal at the image frequency.

Local oscillator radiation

Further information: Electromagnetic compatibility

It can be difficult to keep stray radiation from the local oscillator below the level that a nearby receiver can detect. If the receiver's local oscillator can reach the antenna, it will act as a low-power CW transmitter. Consequently, what is meant to be a receiver can itself create radio interference. In intelligence operations, local oscillator radiation gives a means to detect a covert receiver and its operating frequency. The method was used by MI5 during Operation RAFTER.[21] The same technique is also used in radar-detector detectors used by traffic police in jurisdictions where radar detectors are illegal.

Local oscillator radiation is most prominent in receivers in which the antenna signal is connected directly to the mixer (which itself receives the local oscillator signal) rather than in receivers in which an RF amplifier stage is used in between. Thus it is more of a problem with inexpensive receivers and with receivers at such high frequencies (especially microwave) that RF amplifying stages are difficult to implement.

Local oscillator sideband noise

Local oscillators typically generate a single-frequency signal that has negligible amplitude modulation but some random phase modulation, which spreads some of the signal's energy into sideband frequencies. That causes a corresponding widening of the receiver's frequency response, which would defeat the aim of making a very narrow bandwidth receiver, for example one intended to receive low-rate digital signals.
Care needs to be taken to minimize oscillator phase noise, usually by ensuring that the oscillator never enters a non-linear mode.

Terminology

First detector, second detector: The mixer tube or transistor is sometimes called the first detector, while the demodulator that extracts the modulation from the IF signal is called the second detector. In a dual-conversion superhet there are two mixers, so the demodulator is called the third detector.

RF front end: Refers to all the components of the receiver up to and including the mixer; that is, all the parts that process the signal at the original incoming radio frequency. In the block diagram above, the RF front-end components are colored red.

See also

H2X radar; Automatic gain control; Direct conversion receiver; VFO; Single sideband modulation (demodulation); Tuned radio frequency receiver; Reflex receiver; Optical heterodyne detection; Superheterodyne transmitter

Notes

[a] Although, in practice, LOs tend to be relatively strong signals.

References

[1] Klooster, John W. (2009). Icons of Invention: The Makers of the Modern World from Gutenberg to Gates. ABC-CLIO. p. 414. ISBN 978-0-313-34743-6. Retrieved 2017-10-22.
[2] Leutz, C. R. (December 1922). "Notes on a Super-Heterodyne". QST. Hartford, CT, USA: American Radio Relay League. VI (5): 11–14 [11].
[3] Malanowski, Gregory (2011). The Race for Wireless: How Radio Was Invented (or Discovered?). Authorhouse. p. 69. ISBN 978-1-46343750-3.
[4] Katz, Eugenii. "Edwin Howard Armstrong". History of Electrochemistry, Electricity, and Electronics. Eugenii Katz homepage, Hebrew Univ. of Jerusalem. Archived from the original on 2009-10-22. Retrieved 2008-05-10.
[5] Bussey, Gordon (1990). Wireless: The Crucial Decade – History of the British Wireless Industry 1924–34. IEE History of Technology Series 13. London, UK: Peter Peregrinus Ltd. / Institution of Electrical Engineers. p. 78. ISBN 0-86341-188-6.
[6] Koster, John (2016-12-03). "Radio Lucien Lévy". Vintage Radio Web. Retrieved 2017-10-22.
[7] Howarth, Richard J. (2017). Dictionary of Mathematical Geosciences: With Historical Notes. Springer. p. 12. ISBN 978-3-319-57315-1. Retrieved 2017-10-22.
[8] "The History of Amateur Radio". Luxorion. Retrieved 2011-01-19.
[9] Sarkar, Tapan K.; Mailloux, Robert J.; Oliner, Arthur A.; Salazar-Palma, Magdalena; Sengupta, Dipak L. (2006). History of Wireless. John Wiley and Sons. p. 110. ISBN 0-471-71814-9.
[10] Carr, Joseph J. (2002). "Chapter 3". RF Components and Circuits. Newnes. ISBN 978-0-7506-4844-8.
[11] Hagen, Jon B. (1996). Radio-Frequency Electronics: Circuits and Applications. Cambridge University Press. p. 58. ISBN 978-0-52155356-8. Retrieved 2011-01-17.
[12] The Art of Electronics (2006). Cambridge University Press. p. 886. ISBN 978-0-52137095-0. Retrieved 2011-01-17.
[13] Terman, Frederick Emmons (1943). Radio Engineers' Handbook. New York, USA: McGraw-Hill. pp. 649–652. (Describes a design procedure for tracking with a pad capacitor in the Chebyshev sense.)
[14] Rohde, Ulrich L.; Bucher, T. T. N. (1988). Communications Receivers: Principles & Design. New York, USA: McGraw-Hill. pp. 44–55, 155–164. ISBN 0-07-053570-1. (Discusses frequency tracking and image rejection, and includes an RF filter design that puts transmission zeros at both the local oscillator frequency and the unwanted image frequency.)
[15] Langford-Smith, Fritz, ed. (1941) [1940]. Radiotron Designer's Handbook (3rd ed., 4th impression). Sydney, Australia / Harrison, New Jersey, USA: Wireless Press for Amalgamated Wireless Valve Company Pty. Ltd. / RCA Manufacturing Company, Inc. p. 102. (Also published as Radio Designer's Handbook. London: Wireless World, 1940.)
[16] "A Three Tube Regenerodyne Receiver". Retrieved 2018-01-27.
[17] "Crystal filter types". QSL RF Circuit Design Ideas. Retrieved 2011-01-17.
[18] "Reception of Amplitude Modulated Signals – AM Demodulation" (PDF). BC Internet education. 2007-06-14. Retrieved 2011-01-17.
[19] "Chapter 5". Basic Radio Theory. TSCM Handbook. Retrieved 2011-01-17.
[20] Kasperkovitz, Wolfdietrich Georg (2007) [2002]. "Receiver with mirror frequency suppression". United States Patent 7,227,912.
[21] Wright, Peter (1987). Spycatcher: The Candid Autobiography of a Senior Intelligence Officer. Penguin Viking. ISBN 0-670-82055-5.

Further reading

Whitaker, Jerry (1996). The Electronics Handbook. CRC Press. p. 1172. ISBN 0-8493-8345-5.
US 706740, Fessenden, Reginald A., "Wireless Signaling", published September 28, 1901, issued August 12, 1902.
US 1050441, Fessenden, Reginald A., "Electric Signaling Apparatus", published July 27, 1905, issued January 14, 1913.
US 1050728, Fessenden, Reginald A., "Method of Signaling", published August 21, 1906, issued January 14, 1913.
Witts, Alfred T. (1936). The Superheterodyne Receiver (2nd ed.). London, UK: Sir Isaac Pitman & Sons.
Douglas, Alan (November 1990). "Who Invented the Superheterodyne?". Proceedings of the Radio Club of America. 64 (3): 123–142. (An article giving the history of the various inventors working on the superheterodyne method.)
Hogan, John L., Jr. (September 1915). "Developments of the Heterodyne Receiver". Proceedings of the IRE. 3 (3): 249–260. doi:10.1109/jrproc.1915.216679.
Champeix (March–April 1979). "Qui a Inventé le Superhétérodyne?". La Liaison des Transmissions (in French). 116.
Champeix (April–May 1979). "Qui a Inventé le Superhétérodyne?". La Liaison des Transmissions (in French). 117. (Argues that Paul Laüt published six months before Lévy; Étienne published the memo.)
Schottky, Walter H. (October 1926). "On the Origin of the Super-Heterodyne Method". Proceedings of the I.R.E. 14 (5): 695–698. doi:10.1109/JRPROC.1926.221074.
Morse, A. M. (1925-07-31). Electrician. (Describes English efforts.)
Armstrong v. Lévy, 29 F.(2d) 953, decided Dec. 3, 1928. http://www.leagle.com/decision/192898229F2d953_1614/ARMSTRONG%20v.%20LEVY

External links

http://ethw.org/Superheterodyne_Receiver
An in-depth introduction to superheterodyne receivers
Superheterodyne receivers from microwaves101.com
Multipage tutorial describing the superheterodyne receiver and its technology
HighestWeights :: Example 5

Example 5 -- The singular locus of a symplectic invariant

Consider the symplectic group $Sp(6,{\mathbb C})$ of type $C_3$. We denote by $V(\omega)$ the highest weight representation of $Sp(6,{\mathbb C})$ with highest weight $\omega$, and by $\omega_1,...,\omega_3$ the fundamental weights in the root system of type $C_3$. The action of $Sp(6,{\mathbb C})$ on $V(\omega_3)$, the third fundamental representation, has a unique invariant $\Delta$ of degree 4. If we regard $V(\omega_3)$ as a complex affine space, $\Delta$ describes a hypersurface. We will determine and decompose the minimal free resolution of the coordinate ring of the singular locus of this hypersurface. This singular locus is one of the four orbit closures for the action of $Sp(6,{\mathbb C})$ on $V(\omega_3)$ and has been studied, for example, in Galetto - Free resolutions of orbit closures for the representations associated to gradings on Lie algebras of type E6, F4 and G2. A concise description of this singular locus, together with a construction of the representation $V(\omega_3)$, was also given in Iliev, Ranestad - Geometry of the Lagrangian Grassmannian LG(3,6) with Applications to Brill–Noether Loci. We will follow the notation of this second source.

The standard representation $V=V(\omega_1)$ of $Sp(6,{\mathbb C})$ is a six dimensional complex vector space endowed with a symplectic form. Being a symplectic space, $V$ is self dual. Let $x_1,...,x_6$ be a basis for the coordinate functions on $V$. The symplectic form on $V$ can be written as $x_1\wedge x_4 + x_2\wedge x_5 + x_3\wedge x_6 \in \wedge^2 V^*$. The wedge product with this form induces a map $V^* \to \wedge^3 V^*$ whose cokernel is the representation $V(\omega_3)$. As such, the residue classes $x_{i,j,k}$ of the tensors $x_i\wedge x_j\wedge x_k$ span $V(\omega_3)$. Since $V(\omega_3)$ is self dual, we can take the $x_{i,j,k}$ to span the coordinate functions on $V(\omega_3)$. Finally, some of the $x_{i,j,k}$ can be omitted and the remaining ones will be variables in our polynomial ring $R$.

i1 : R=QQ[x_{1,2,3},x_{1,2,4},x_{1,2,5},x_{1,2,6},x_{1,3,4},x_{1,3,5},x_{1,4,5},x_{1,4,6},x_{1,5,6},x_{2,3,4},x_{2,4,5},x_{2,4,6},x_{3,4,5},x_{4,5,6}]

o1 = R

o1 : PolynomialRing

The invariant $\Delta$ can be written in terms of certain matrices of variables, as indicated in our source.

i2 : X=matrix{{x_{2,3,4},-x_{1,3,4},x_{1,2,4}},{-x_{1,3,4},-x_{1,3,5},x_{1,2,5}},{x_{1,2,4},x_{1,2,5},x_{1,2,6}}}

o2 = | x_{2, 3, 4}  -x_{1, 3, 4} x_{1, 2, 4} |
     | -x_{1, 3, 4} -x_{1, 3, 5} x_{1, 2, 5} |
     | x_{1, 2, 4}  x_{1, 2, 5}  x_{1, 2, 6} |

             3       3
o2 : Matrix R  <--- R

i3 : Y=matrix{{x_{1,5,6},-x_{1,4,6},x_{1,4,5}},{-x_{1,4,6},-x_{2,4,6},x_{2,4,5}},{x_{1,4,5},x_{2,4,5},x_{3,4,5}}}

i4 : Delta=(x_{1,2,3}*x_{4,5,6}-trace(X*Y))^2+4*x_{1,2,3}*det(Y)+4*x_{4,5,6}*det(X)-4*sum(3,i->sum(3,j->det(submatrix'(X,{i},{j}))*det(submatrix'(Y,{i},{j})))));

The equations of the singular locus of the hypersurface cut out by $\Delta$ are the partial derivatives of $\Delta$. Let us calculate the resolution of this ideal.

i5 : I=ideal jacobian ideal Delta;

o5 : Ideal of R

i6 : RI=res I; betti RI

            0  1  2  3  4
o7 = total: 1 14 21 14  6
         0: 1  .  .  .  .
         1: .  .  .  .  .
         2: . 14 21  .  .
         3: .  .  . 14  6

The root system of type $C_3$ is contained in $\RR^3$. It is easy to express the weight of each variable of the ring $R$ with respect to the coordinate basis of $\RR^3$. The weight of $x_{i,j,k}$ is the vector $v_i+v_j+v_k$, where $v_h$ is the weight of $x_h$ in the coordinate basis of $\RR^3$.
i8 : v_1={1,0,0}; v_2={0,1,0}; v_3={0,0,1}; v_4={-1,0,0}; v_5={0,-1,0}; v_6={0,0,-1};

i14 : ind = apply(gens R,g->(baseName g)#1)

o14 = {{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {1, 2, 6}, {1, 3, 4}, {1, 3, 5}, {1, 4, 5}, {1, 4, 6}, {1, 5, 6}, {2, 3, 4}, {2, 4, 5}, {2, 4, 6}, {3, 4, 5}, {4, 5, 6}}

o14 : List

i15 : W'=apply(ind,j->v_(j_0)+v_(j_1)+v_(j_2))

o15 = {{1, 1, 1}, {0, 1, 0}, {1, 0, 0}, {1, 1, -1}, {0, 0, 1}, {1, -1, 1}, {0, -1, 0}, {0, 0, -1}, {1, -1, -1}, {-1, 1, 1}, {-1, 0, 0}, {-1, 1, -1}, {-1, -1, 1}, {-1, -1, -1}}

Now we convert these weights into the basis of fundamental weights. To achieve this, we make each previous weight into a column vector and join all the column vectors into a matrix. Then we multiply on the left by the matrix $M$ expressing the change of basis from the coordinate basis of $\RR^3$ to the basis of simple roots of $C_3$ (as described in Humphreys - Introduction to Lie Algebras and Representation Theory, Ch. 12.1). Finally we multiply the resulting matrix on the left by $N$, the transpose of the Cartan matrix of $C_3$, which expresses the change of basis from the simple roots to the fundamental weights of $C_3$. The columns of the matrix thus obtained are the desired weights, so they can be attached to the ring $R$.

i16 : M=inverse promote(matrix{{1,0,0},{-1,1,0},{0,-1,2}},QQ)

o16 = | 1   0   0   |
      | 1   1   0   |
      | 1/2 1/2 1/2 |

              3       3
o16 : Matrix QQ  <--- QQ

i17 : D=dynkinType{{"C",3}}

o17 = DynkinType{{C, 3}}

o17 : DynkinType

i18 : N=transpose promote(cartanMatrix(rootSystem(D)),QQ)

o18 = | 2  -1 0  |
      | -1 2  -2 |
      | 0  -1 2  |

i19 : W=entries transpose lift(N*M*(transpose matrix W'),ZZ)

o19 = {{0, 0, 1}, {-1, 1, 0}, {1, 0, 0}, {0, 2, -1}, {0, -1, 1}, {2, -2, 1}, {1, -1, 0}, {0, 1, -1}, {2, 0, -1}, {-2, 0, 1}, {-1, 0, 0}, {-2, 2, -1}, {0, -2, 1}, {0, 0, -1}}

i20 : setWeights(R,D,W)

o20 = Tally{{0, 0, 1} => 1}

o20 : Tally

At this stage, we can issue the command to decompose the resolution.

i21 : highestWeightsDecomposition(RI)

o21 = HashTable{0 => HashTable{{0} => Tally{{0, 0, 0} => 1}}}
      1 => HashTable{{3} => Tally{{0, 0, 1} => 1}}

o21 : HashTable

We deduce that the resolution has the following structure: $$R \leftarrow V(\omega_3) \otimes R(-3) \leftarrow V(2\omega_1) \otimes R(-4) \leftarrow V(\omega_2) \otimes R(-6) \leftarrow V(\omega_1) \otimes R(-7) \leftarrow 0$$

Let us also decompose some graded components of the quotient $R/I$.

i22 : highestWeightsDecomposition(R/I,0,4)

o22 = HashTable{0 => Tally{{0, 0, 0} => 1}}
      1 => Tally{{0, 0, 1} => 1}
          {2, 0, 0} => 1
Introduction to Modulation Transfer Function

When optical designers attempt to compare the performance of optical systems, a commonly used measure is the modulation transfer function (MTF). MTF is used for components as simple as a spherical singlet lens and as complex as a multi-element telecentric imaging lens assembly. In order to understand the significance of MTF, consider some general principles and practical examples for defining MTF, including its components, importance, and characterization.

THE COMPONENTS OF MTF

To properly define the modulation transfer function, it is necessary to first define the two terms required to truly characterize image performance: resolution and contrast.

Resolution

Resolution is an imaging system's ability to distinguish object detail. It is often expressed in terms of line-pairs per millimeter (where a line-pair is a sequence of one black line and one white line). This measure of line-pairs per millimeter (lp/mm) is also known as frequency. The inverse of the frequency yields the spacing in millimeters between two resolved lines. Bar targets with a series of equally spaced, alternating white and black bars (i.e. a 1951 USAF target or a Ronchi ruling) are ideal for testing system performance. For a more detailed explanation of test targets, view Choosing the Correct Test Target. For all imaging optics, when imaging such a pattern, perfect line edges become blurred to a degree (Figure 1). High-resolution images are those which exhibit a large amount of detail as a result of minimal blurring. Conversely, low-resolution images lack fine detail.

Figure 1: Perfect Line Edges Before (Left) and After (Right) Passing through a Low Resolution Imaging Lens

A practical way of understanding line-pairs is to think of them as pixels on a camera sensor, where a single line-pair corresponds to two pixels (Figure 2). Two camera sensor pixels are needed for each line-pair of resolution: one pixel is dedicated to the red line and the other to the blank space between lines. Using this analogy, the image resolution of the camera can be specified as a line-pair spacing equal to twice its pixel size.

Figure 2: Imaging Scenarios Where (a) the Line-Pair is NOT Resolved and (b) the Line-Pair is Resolved

Correspondingly, object resolution is calculated using the camera resolution and the primary magnification (PMAG) of the imaging lens (Equations 1 – 2). It is important to note that these equations assume the imaging lens contributes no resolution loss.

(1)$$ \text{Object Resolution} \left[ \mu \text{m} \right] = \frac{\text{Camera Resolution} \left[ \mu \text{m} \right]}{\text{PMAG}} $$

(2)$$ \text{Object Resolution} \left[ \tfrac{ \text{lp} }{\text{mm}} \right] = \text{PMAG} \times \text{Camera Resolution} \left[ \tfrac{ \text{lp} }{\text{mm}} \right] $$

Contrast/Modulation

Consider normalizing the intensity of a bar target by assigning a maximum value to the white bars and zero value to the black bars. Plotting these values results in a square wave, from which the notion of contrast can be more easily seen (Figure 3).
Mathematically, contrast is calculated with Equation 3:

(3)$$ \text{% Contrast} = \left[ \frac{I_{\text{max}} - I_{\text{min}}}{I_{\text{max}} + I_{\text{min}}} \right] $$

Figure 3: Contrast Expressed as a Square Wave

When this same principle is applied to the imaging example in Figure 1, the intensity pattern before and after imaging can be seen (Figure 4). Contrast or modulation can then be defined as how faithfully the minimum and maximum intensity values are transferred from object plane to image plane.

To understand the relation between contrast and image quality, consider an imaging lens with the same resolution as the one in Figure 1 and Figure 4, but used to image an object with a greater line-pair frequency. Figure 5 illustrates that as the spatial frequency of the lines increases, the contrast of the image decreases. This effect is always present when working with imaging lenses of the same resolution. For the image to appear defined, black must be truly black and white truly white, with a minimal amount of grayscale between.

Figure 4: Contrast of a Bar Target and Its Image

Figure 5: Contrast Comparison at Object and Image Planes

In imaging applications, the imaging lens, camera sensor, and illumination play key roles in determining the resulting image contrast. The lens contrast is typically defined in terms of the percentage of the object contrast that is reproduced. The sensor's ability to reproduce contrast is usually specified in terms of decibels (dB) in analog cameras and bits in digital cameras.

UNDERSTANDING MTF

Now that the components of the modulation transfer function (MTF), resolution and contrast/modulation, are defined, consider MTF itself. The MTF of a lens, as the name implies, is a measurement of its ability to transfer contrast at a particular resolution from the object to the image. In other words, MTF is a way to incorporate resolution and contrast into a single specification. As line spacing decreases (i.e. the frequency increases) on the test target, it becomes increasingly difficult for the lens to efficiently transfer this decrease in contrast; as a result, MTF decreases (Figure 6).

Figure 6: MTF for an Aberration-Free Lens with a Rectangular Aperture

For an aberration-free image with a circular pupil, MTF is given by Equation 4, where MTF is a function of spatial frequency (ξ); the cut-off frequency (ξc), the highest spatial frequency the system can resolve, is given by Equation 6. Figure 6 plots the MTF of an aberration-free image with a rectangular pupil. As can be expected, the MTF decreases as the spatial frequency increases. It is important to note that these cases are idealized and that no actual system is completely aberration-free.

(4)$$ \text{MTF} \left( \xi \right) = \frac{2}{\pi} \left( \varphi - \cos{\varphi} \cdot \sin{\varphi} \right) $$

(5)$$ \varphi = \cos ^{-1} \left( \frac{\xi}{\xi_c} \right) $$

(6)$$ \xi_c = \frac{1}{\lambda \cdot \left( f/ \# \right)} $$

THE IMPORTANCE OF MTF

In traditional system integration (and less crucial applications), the system's performance is roughly estimated using the principle of the weakest link, which proposes that a system's resolution is solely limited by the component with the lowest resolution. Although this approach is very useful for quick estimations, it is actually flawed because every component within the system contributes error to the image, yielding poorer image quality than the weakest link alone.
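Before turning to system-level MTF, Equations 3 to 6 can be checked numerically with a short sketch. This is not Edmund Optics code; the wavelength, f/#, and intensity values are arbitrary assumptions used purely for illustration.

import math

# Sketch: percent contrast (Eq. 3) and aberration-free, circular-pupil MTF (Eqs. 4-6).
def contrast(i_max, i_min):
    return 100.0 * (i_max - i_min) / (i_max + i_min)

def diffraction_mtf(xi, wavelength_mm, f_number):
    """MTF at spatial frequency xi (lp/mm) for an ideal circular pupil."""
    xi_c = 1.0 / (wavelength_mm * f_number)   # cut-off frequency (Eq. 6)
    if xi >= xi_c:
        return 0.0
    phi = math.acos(xi / xi_c)                # Eq. 5
    return (2.0 / math.pi) * (phi - math.cos(phi) * math.sin(phi))  # Eq. 4

print(round(contrast(1.0, 0.2), 1), "% contrast")  # example intensity levels
wl = 0.000587                                      # 587 nm expressed in mm (assumed)
for xi in (10, 50, 100, 200):                      # lp/mm
    print(xi, "lp/mm ->", round(diffraction_mtf(xi, wl, 8.0), 3))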
Every component within a system has an associated modulation transfer function (MTF) and, as a result, contributes to the overall MTF of the system. This includes the imaging lens, camera sensor, image capture boards, and video cables, for instance. The resulting MTF of the system is the product of all the MTF curves of its components (Figure 7). For instance, a 25mm fixed focal length lens and a 25mm double gauss lens can be compared by evaluating the resulting system performance of both lenses with a Sony monochrome camera. By analyzing the system MTF curve, it is straightforward to determine which combination will yield sufficient performance. In some metrology applications, for example, a certain amount of contrast is required for accurate image edge detection. If the minimum contrast needs to be 35% and the image resolution required is 30 lp/mm, then the 25mm double gauss lens is the best choice.

MTF is one of the best tools available to quantify the overall imaging performance of a system in terms of resolution and contrast. As a result, knowing the MTF curves of each imaging lens and camera sensor within a system allows a designer to make the appropriate selection when optimizing for a particular resolution.

Figure 7: System MTF is the Product of the MTF of the Individual Components: Lens MTF x Camera MTF = System MTF

CHARACTERIZATION OF MTF

Determining Real-World MTF

A theoretical modulation transfer function (MTF) curve can be generated from the optical prescription of any lens. Although this can be helpful, it does not indicate the actual, real-world performance of the lens after accounting for manufacturing tolerances. Manufacturing tolerances always introduce some performance loss to the original optical design since factors such as geometry and coating deviate slightly from an ideal lens or lens system. For this reason, in our manufacturing sites, Edmund Optics® invests in optical test and measurement equipment for quantifying MTF. This MTF test and measurement equipment allows for characterization of the actual performance of both designed lenses and commercial lenses (whose optical prescription is not available to the public). As a result, precise integration - previously limited to lenses with known prescriptions - can now include commercial lenses.

Reading MTF Graphs/Data

A greater area under the MTF curve does not always indicate the optimal choice. A designer should decide based on the resolution of the application at hand. As previously discussed, an MTF graph plots the percentage of transferred contrast versus the frequency (cycles/mm) of the lines. A few things should be noted about the MTF curves offered by Edmund Optics®:

Each MTF curve is calculated for a single point in space. Typical field points include on-axis, 70% field, and full-field. 70% is a common reference point because it captures approximately 50% of the total imaging area.

Off-axis MTF data is calculated for both tangential and sagittal cases (denoted by T and S, respectively). Occasionally an average of the two is presented rather than the two individual curves.

MTF curves are dependent on several factors, such as system conjugates, wavebands, and f/#. An MTF curve is calculated at specified values of each; therefore, it is important to review these factors before determining whether a component will work for a certain application.

The spatial frequency is expressed in terms of cycles (or line-pairs) per millimeter.
The inverse of this frequency yields the spacing of a line-pair (a cycle of one black bar and one white bar) in millimeters.

The nominal MTF curve is generated using the standard prescription information available in optical design programs. This prescription information can also be found on our global website, in our print catalogs, and in our lens catalogs supplied to Zemax®. The nominal MTF represents the best-case scenario and does not take into account manufacturing tolerances.

Conceptually, MTF can be difficult to grasp. Perhaps the easiest way to understand this notion of transferring contrast from object to image plane is by examining a real-world example. Figures 8 – 12 compare MTF curves and images for two 25mm fixed focal length imaging lenses: #54-855 Finite Conjugate Micro-Video Lens and #59-871 Compact Fixed Focal Length Lens. Figure 8 shows the polychromatic diffraction MTF for these two lenses. Depending upon the testing conditions, both lenses can yield equivalent performance. In this particular example, both are trying to resolve group 2, elements 5 – 6 (indicated by the red boxes in Figure 10) and group 3, elements 5 – 6 (indicated by the blue boxes in Figure 10) on a 1951 USAF resolution target (Figure 9). In terms of actual object size, group 2, elements 5 – 6 represent 6.35 – 7.13 lp/mm (14.03 – 15.75μm) and group 3, elements 5 – 6 represent 12.70 – 14.25 lp/mm (7.02 – 7.87μm). For an easy way to calculate resolution given element and group numbers, use our 1951 USAF Resolution EO Tech Tool.

Under the same testing parameters, it is clear to see that #59-871 (with a better MTF curve) yields better imaging performance compared to #54-855 (Figures 11 – 12). In this real-world example with these particular 1951 USAF elements, a higher modulation value at higher spatial frequencies corresponds to a clearer image; however, this is not always the case. Some lenses are designed to very accurately resolve lower spatial frequencies and have a very low cut-off frequency (i.e. they cannot resolve higher spatial frequencies). Had the target been group -1, elements 5 – 6, the two lenses would have produced much more similar images given their modulation values at lower frequencies.

Figure 8: Comparison of Polychromatic Diffraction MTF for #54-855 Finite Conjugate Micro-Video Lens (Left) and #59-871 Compact Fixed Focal Length Lens (Right)

Figure 9: 1951 USAF Resolution Target

Figure 10: Comparison of #54-855 Finite Conjugate Micro-Video Lens (Left) and #59-871 Compact Fixed Focal Length Lens (Right) Resolving Group 2, Elements 5 – 6 (Red Boxes) and Group 3, Elements 5 – 6 (Blue Boxes) on a 1951 USAF Resolution Target

Figure 11: Comparison of #54-855 Finite Conjugate Micro-Video Lens (Left) and #59-871 Compact Fixed Focal Length Lens (Right) Resolving Group 2, Elements 5 – 6 on a 1951 USAF Resolution Target

Figure 12: Comparison of #54-855 Finite Conjugate Micro-Video Lens (Left) and #59-871 Compact Fixed Focal Length Lens (Right) Resolving Group 3, Elements 5 – 6 on a 1951 USAF Resolution Target

Modulation transfer function (MTF) is one of the most important parameters by which image quality is measured. Optical designers and engineers frequently refer to MTF data, especially in applications where success or failure is contingent on how accurately a particular object is imaged. To truly grasp MTF, it is necessary to first understand the ideas of resolution and contrast, as well as how an object's image is transferred from object to image plane.
While initially daunting, understanding and eventually interpreting MTF data is a very powerful tool for any optical designer. With knowledge and experience, MTF can make selecting the appropriate lens a far easier endeavor - despite the multitude of offerings.
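To make the system-MTF discussion above (Figure 7) concrete, the sketch below multiplies a lens MTF curve by a camera-sensor MTF curve on a common spatial-frequency grid to obtain the system MTF. Both component curves are made-up placeholders (a linear roll-off for the lens and a pixel-aperture sinc for an assumed 5 μm pitch sensor), not data for any particular product.

```python
import numpy as np

# Spatial frequency grid in lp/mm.
freq = np.linspace(0.0, 100.0, 201)

# Placeholder component curves (illustrative shapes only, not real measurements):
# a lens MTF that rolls off linearly toward a 100 lp/mm cutoff, and a sensor MTF
# modeled as the magnitude of a sinc function for a 5 um pixel pitch.
lens_mtf = np.clip(1.0 - freq / 100.0, 0.0, 1.0)
pixel_pitch_mm = 0.005
sensor_mtf = np.abs(np.sinc(freq * pixel_pitch_mm))

# The system MTF is the product of the component MTFs at each frequency.
system_mtf = lens_mtf * sensor_mtf

# Example query: contrast transferred by the whole system at 30 lp/mm.
print(f"System MTF @ 30 lp/mm: {np.interp(30.0, freq, system_mtf):.3f}")
```

In the edge-detection example earlier, one would read this product curve at 30 lp/mm and check that it stays above the required 35% contrast.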
An Adaptive ANOVA-Based Data-Driven Stochastic Method for Elliptic PDEs with Random Coefficient
Zhiwen Zhang, Xin Hu, Thomas Y. Hou, Guang Lin & Mike Yan

In this paper, we present an adaptive, analysis of variance (ANOVA)-based data-driven stochastic method (ANOVA-DSM) to study the stochastic partial differential equations (SPDEs) in the multi-query setting. Our new method integrates the advantages of both the adaptive ANOVA decomposition technique and the data-driven stochastic method. To handle high-dimensional stochastic problems, we investigate the use of adaptive ANOVA decomposition in the stochastic space as an effective dimension-reduction technique. To improve the slow convergence of the generalized polynomial chaos (gPC) method or stochastic collocation (SC) method, we adopt the data-driven stochastic method (DSM) for speed up. An essential ingredient of the DSM is to construct a set of stochastic basis under which the stochastic solutions enjoy a compact representation for a broad range of forcing functions and/or boundary conditions. Our ANOVA-DSM consists of offline and online stages. In the offline stage, the original high-dimensional stochastic problem is decomposed into a series of low-dimensional stochastic subproblems, according to the ANOVA decomposition technique. Then, for each subproblem, a data-driven stochastic basis is computed using the Karhunen-Loève expansion (KLE) and a two-level preconditioning optimization approach. Multiple trial functions are used to enrich the stochastic basis and improve the accuracy. In the online stage, we solve each stochastic subproblem for any given forcing function by projecting the stochastic solution into the data-driven stochastic basis constructed offline. In our ANOVA-DSM framework, solving the original high-dimensional stochastic problem is reduced to solving a series of ANOVA-decomposed stochastic subproblems using the DSM. An adaptive ANOVA strategy is also provided to further reduce the number of the stochastic subproblems and speed up our method. To demonstrate the accuracy and efficiency of our method, numerical examples are presented for one- and two-dimensional elliptic PDEs with random coefficients.

Parallelization of an Implicit Algorithm for Multi-Dimensional Particle-in-Cell Simulations
George M. Petrov & Jack Davis

The implicit 2D3V particle-in-cell (PIC) code developed to study the interaction of ultrashort pulse lasers with matter [G. M. Petrov and J. Davis, Computer Phys. Comm. 179, 868 (2008); Phys. Plasmas 18, 073102 (2011)] has been parallelized using MPI (Message Passing Interface). The parallelization strategy is optimized for a small number of computer cores, up to about 64. Details on the algorithm implementation are given with emphasis on code optimization by overlapping computations with communications. Performance evaluation for 1D domain decomposition has been made on a small Linux cluster with 64 computer cores for two typical regimes of PIC operation: "particle dominated", for which the bulk of the computation time is spent on pushing particles, and "field dominated", for which computing the fields is prevalent. For a small number of computer cores, less than 32, the MPI implementation offers a significant numerical speed-up. In the "particle dominated" regime it is close to the maximum theoretical one, while in the "field dominated" regime it is about 75-80% of the maximum speed-up.
For a number of cores exceeding 32, performance degradation takes place as a result of the adopted 1D domain decomposition. The code parallelization will allow future implementation of atomic physics and extension to three dimensions.

Parametrization of Mean Radiative Properties of Optically Thin Steady-State Plasmas and Applications
R. Rodriguez, G. Espinos, J. M. Gil, J. G. Rubiano, M. A. Mendoza, P. Martel & E. Minguez

Plasma radiative properties play a pivotal role both in nuclear fusion and astrophysics. They are essential to analyze and explain experiments or observations and also in radiative-hydrodynamics simulations. Their computation requires the generation of large atomic databases and the calculation, by solving a set of rate equations, of a huge number of atomic level populations in wide ranges of plasma conditions. These facts make, for example, in-line radiative-hydrodynamics simulations almost infeasible. This has led to the development of analytical expressions based on the parametrization of radiative properties. However, most of them are accurate only for coronal or local thermodynamic equilibrium. In this work we present a code for the parametrization of plasma radiative properties of mono-component plasmas, in terms of plasma density and temperature, such as the radiative power loss, the Planck and Rosseland mean opacities and the average ionization, which is valid for steady-state optically thin plasmas in wide ranges of plasma densities and temperatures. Furthermore, we also present some applications of this parametrization, such as the analysis of the optical depth and radiative character of plasmas, its use to perform diagnostics of the electron temperature, the determination of mean radiative properties for multi-component plasmas, and the analysis of radiative cooling instabilities in some kinds of experiments on high-energy-density laboratory astrophysics. Finally, to ease its use, the parametrization code has been integrated into a user interface, and brief comments about it are presented.

Extension and Comparative Study of AUSM-Family Schemes for Compressible Multiphase Flow Simulations
Keiichi Kitamura, Meng-Sing Liou & Chih-Hao Chang

Several recently developed AUSM-family numerical flux functions (SLAU, SLAU2, AUSM+-up2, and AUSMPW+) have been successfully extended to compute compressible multiphase flows, based on the stratified flow model concept, by following two previous works: one by M.-S. Liou, C.-H. Chang, L. Nguyen, and T. G. Theofanous [AIAA J. 46:2345-2356, 2008], in which AUSM+-up was used entirely, and the other by C.-H. Chang and M.-S. Liou [J. Comput. Phys. 225:840-873, 2007], in which the exact Riemann solver was combined with AUSM+-up at the phase interface. Through an extensive survey comparing the flux functions, the following are found: (1) AUSM+-up with dissipation parameters Kp and Ku equal to 0.5 or greater, AUSMPW+, SLAU2, AUSM+-up2, and SLAU can be used to solve benchmark problems, including a shock/water-droplet interaction; (2) SLAU shows oscillatory behaviors [though not as catastrophic as those of AUSM+ (a special case of AUSM+-up with Kp=Ku=0)] due to insufficient dissipation arising from its ideal-gas-based dissipation term; and (3) when combined with the exact Riemann solver, AUSM+-up (Kp=Ku=1), SLAU2, and AUSMPW+ are applicable to more challenging problems with high pressure ratios.
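The radiative-properties abstract above quotes Planck and Rosseland mean opacities among the parametrized quantities. As background only, the sketch below evaluates their standard frequency-averaged definitions by quadrature from a tabulated spectral opacity; the temperature and the opacity table used here are arbitrary placeholders, not output of the code described in that abstract.

```python
import numpy as np

# Physical constants (SI units).
h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23


def planck_B(nu, T):
    """Planck spectral radiance B_nu(T)."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))


def dplanck_dT(nu, T):
    """Temperature derivative dB_nu/dT of the Planck function."""
    x = h * nu / (k_B * T)
    return (2.0 * h**2 * nu**4 / (c**2 * k_B * T**2)) * np.exp(x) / np.expm1(x) ** 2


def planck_and_rosseland_means(nu, kappa_nu, T):
    """Standard definitions:
         kappa_P   = int(kappa_nu * B_nu dnu) / int(B_nu dnu)
         1/kappa_R = int((1/kappa_nu) * dB/dT dnu) / int(dB/dT dnu)
    The frequency grid is uniform, so the dnu factors cancel in the ratios."""
    B, dB = planck_B(nu, T), dplanck_dT(nu, T)
    kappa_planck = np.sum(kappa_nu * B) / np.sum(B)
    kappa_rosseland = np.sum(dB) / np.sum(dB / kappa_nu)
    return kappa_planck, kappa_rosseland


if __name__ == "__main__":
    T = 1.0e6                               # placeholder temperature [K]
    nu = np.linspace(1e14, 1e18, 200_000)   # frequency grid [Hz]
    kappa_nu = 1.0 + 1e3 * (1e16 / nu)**3   # placeholder spectral opacity (arbitrary units)
    print(planck_and_rosseland_means(nu, kappa_nu, T))
```

The Rosseland mean is a harmonic average dominated by the most transparent frequencies, which makes it the appropriate average in the optically thick diffusion limit, whereas the Planck mean is the natural average for optically thin emission, the regime targeted by the abstract above.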
Direct Numerical Simulation of Multiple Particles Sedimentation at an Intermediate Reynolds Number
Deming Nie, Jianzhong Lin & Mengjiao Zheng

In this work the previously developed Lattice Boltzmann-Direct Forcing/Fictitious Domain (LB-DF/FD) method is adopted to simulate the sedimentation of eight circular particles under gravity at an intermediate Reynolds number of about 248. The particle clustering and the resulting Drafting-Kissing-Tumbling (DKT) motion, which takes place for the first time, are explored. The effects of the initial particle-particle gap on the DKT motion are found to be significant. In addition, the trajectories of the particles are presented under different initial particle-particle gaps, which display three kinds of falling patterns in total, provided that no DKT motion takes place: a concave-down shape, an "M" shape, and an "in-line" shape. Furthermore, the lateral and vertical hydrodynamic forces on the particles are investigated. It is found that the Strouhal number is the same for all particles, about 0.157, when the initial particle-particle gap is relatively large. Finally, the wall effects on the falling patterns and particle expansions are examined.

Unsteady Flow Separation and High Performance of Airfoil with Local Flexible Structure at Low Reynolds Number
Peng-Fei Lei, Jia-Zhong Zhang, Wei Kang, Sheng Ren & Le Wang

The unsteady flow separation of an airfoil with a local flexible structure (LFS) is studied numerically in detail in Lagrangian frames, in order to investigate the nature of its high aerodynamic performance. For such an aeroelastic system, the characteristic-based split (CBS) scheme combined with an arbitrary Lagrangian-Eulerian (ALE) framework is first developed for the numerical analysis of the unsteady flow, and a Galerkin method is used to approximate the flexible structure. The local flexible skin of the airfoil, which can lead to self-induced oscillations, is considered as an unsteady perturbation to the flow. Then, the ensuing high aerodynamic performance and complex unsteady flow separation at low Reynolds number are studied by Lagrangian coherent structures (LCSs). The results show that the LFS has a significant influence on the unsteady flow separation, which is the key point for the lift enhancement. Specifically, the oscillations of the LFS can induce the generation of moving separation and vortices, which can enhance the kinetic energy transport from the main flow to the boundary layer. The results could give a deep understanding of the dynamics of unsteady flow separation and flow control for the flow over an airfoil.

A New Family of High Order Unstructured MOOD and ADER Finite Volume Schemes for Multidimensional Systems of Hyperbolic Conservation Laws
Raphaël Loubère, Michael Dumbser & Steven Diot

In this paper, we investigate the coupling of the Multi-dimensional Optimal Order Detection (MOOD) method and the Arbitrary high order DERivatives (ADER) approach in order to design a new high order accurate, robust and computationally efficient Finite Volume (FV) scheme dedicated to solving nonlinear systems of hyperbolic conservation laws on unstructured triangular and tetrahedral meshes in two and three space dimensions, respectively. The Multi-dimensional Optimal Order Detection (MOOD) method for 2D and 3D geometries has been introduced in a recent series of papers for mixed unstructured meshes.
It is an arbitrarily high-order accurate Finite Volume scheme in space, using polynomial reconstructions with a posteriori detection and polynomial degree decrementing processes to deal with shock waves and other discontinuities. In the following work, the time discretization is performed with an elegant and efficient one-step ADER procedure. Doing so, we retain the good properties of the MOOD scheme, that is to say, the optimal high order of accuracy is reached on smooth solutions, while spurious oscillations near singularities are prevented. The ADER technique not only reduces the cost of the overall scheme, as shown on a set of numerical tests in 2D and 3D, but also increases the stability of the overall scheme. A systematic comparison between classical unstructured ADER-WENO schemes and the new ADER-MOOD approach has been carried out for high-order schemes in space and time in terms of cost, robustness, accuracy and efficiency. The main finding of this paper is that the combination of ADER with MOOD generally outperforms that of ADER and WENO, either because at given accuracy MOOD is less expensive (in memory and/or CPU time), or because it is more accurate for a given grid resolution. A large suite of classical numerical test problems has been solved on unstructured meshes for three challenging multi-dimensional systems of conservation laws: the Euler equations of compressible gas dynamics, the classical equations of ideal magnetohydrodynamics (MHD) and finally the relativistic MHD equations (RMHD), which constitute a particularly challenging nonlinear system of hyperbolic partial differential equations. All tests are run on genuinely unstructured grids composed of simplex elements.

Exact Artificial Boundary Condition for the Poisson Equation in the Simulation of the 2D Schrödinger-Poisson System
Norbert J. Mauser & Yong Zhang

We study the computation of ground states and time dependent solutions of the Schrödinger-Poisson system (SPS) on a bounded domain in 2D (i.e. in two space dimensions). On a disc-shaped domain, we derive exact artificial boundary conditions for the Poisson potential based on a truncated Fourier series expansion in θ, and propose a second order finite difference scheme to solve the $r$-variable ODEs of the Fourier coefficients. The Poisson potential can be solved within $\mathcal{O}(MN\log N)$ arithmetic operations, where $M$ and $N$ are the number of grid points in the $r$-direction and the number of Fourier bases, respectively. Combined with the Poisson solver, a backward Euler and a semi-implicit/leap-frog method are proposed to compute the ground state and the dynamics, respectively. Numerical results are shown to confirm the accuracy and efficiency. We also make it clear that the backward Euler sine pseudospectral (BESP) method in [33] cannot be applied to 2D SPS simulations.

Finding Critical Nuclei in Phase Transformations by Shrinking Dimer Dynamics and Its Variants
Lei Zhang, Jingyan Zhang & Qiang Du

We investigate the critical nuclei morphology in phase transformations by combining two effective ingredients: the first is the phase field modeling of the relevant energetics, which has been a popular approach for phase transitions, and the second is shrinking dimer dynamics and its variants for computing saddle points and transition states. In particular, the newly formulated generalized shrinking dimer dynamics is proposed by adopting the Cahn-Hilliard dynamics for the generalized gradient system.
As illustrations, a couple of typical cases are considered, including a generic system modeling heterogeneous nucleation and a specific material system modeling precipitate nucleation in FeCr alloys. While the standard shrinking dimer dynamics can be applied directly to study the non-conserved case of generic heterogeneous nucleation, the generalized shrinking dimer dynamics is efficient for computing precipitate nucleation in FeCr alloys due to the conservation of concentration. Numerical simulations are provided to demonstrate both the complex morphology associated with nucleation events and the effectiveness of the generalized shrinking dimer dynamics based on phase field models.

A Two-Phase Flow Simulation of Discrete-Fractured Media Using Mimetic Finite Difference Method
Zhaoqin Huang, Xia Yan & Jun Yao

Various conceptual models exist for the numerical simulation of fluid flow in fractured porous media, such as the dual-porosity model and the equivalent continuum model. As a promising model, the discrete-fracture model has received more attention in the past decade. It can be used both as a stand-alone tool and for the evaluation of effective parameters for the continuum models. Various numerical methods have been applied to the discrete-fracture model, including control volume finite difference, Galerkin and mixed finite element methods. All these methods have inherent limitations in accuracy and applicability. In this work, we developed a new numerical scheme for the discrete-fracture model by using the mimetic finite difference method. The proposed numerical model is applicable to arbitrary unstructured grid cells with full-tensor permeabilities. The matrix-fracture and fracture-fracture fluxes are calculated based on powerful features of the mimetic finite difference method, while the upstream finite volume scheme is used for the approximation of the saturation equation. Several numerical tests in 2D and 3D are carried out to demonstrate the efficiency and robustness of the proposed numerical model.

Stability of Projection Methods for Incompressible Flows Using High Order Pressure-Velocity Pairs of Same Degree: Continuous and Discontinuous Galerkin Formulations
E. Ferrer, D. Moxey, R. H. J. Willden & S. J. Sherwin

This paper presents limits for the stability of projection type schemes when using high order pressure-velocity pairs of the same degree. Two high order $h/p$ variational methods encompassing continuous and discontinuous Galerkin formulations are used to explain previously observed lower limits on the time step for projection type schemes to be stable [18], when $h$- or $p$-refinement strategies are considered. In addition, the analysis included in this work shows that these stability limits depend not only on the time step but on the product of the latter and the kinematic viscosity, which is of particular importance in the study of high Reynolds number flows. We show that high order methods prove advantageous in stabilising the simulations when small time steps and low kinematic viscosities are used. Drawing upon this analysis, we demonstrate how the effects of this instability can be reduced in the discontinuous scheme by introducing a stabilisation term into the global system. Finally, we show that these lower limits are compatible with Courant-Friedrichs-Lewy (CFL) type restrictions, given that a sufficiently high polynomial order or a small enough mesh spacing is selected.
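The projection-scheme abstract above relates its stability limits to the time step, the kinematic viscosity and CFL-type restrictions. Purely as a point of reference (the actual limits derived in that work are scheme-specific), the sketch below evaluates the textbook explicit convective and diffusive time-step estimates for an h/p discretisation; the p-scalings and safety factors are common rules of thumb and the input values are placeholders.

```python
def explicit_time_step_estimates(h, u_max, nu, p_order, c_cfl=0.5, c_diff=0.25):
    """Textbook explicit time-step estimates for an h/p discretisation:
       convective: dt ~ C_cfl  * h   / (u_max * p**2)
       diffusive:  dt ~ C_diff * h^2 / (nu    * p**4)
    The p-scalings and safety factors are common rules of thumb, not the
    scheme-specific limits derived in the paper."""
    dt_convective = c_cfl * h / (u_max * p_order**2)
    dt_diffusive = c_diff * h**2 / (nu * p_order**4)
    return dt_convective, dt_diffusive, min(dt_convective, dt_diffusive)


if __name__ == "__main__":
    # Placeholder values: 1 cm elements, unit velocity, water-like viscosity, order 6.
    print(explicit_time_step_estimates(h=0.01, u_max=1.0, nu=1e-6, p_order=6))
```

For the placeholder numbers shown, the convective estimate is the smaller (binding) one; in low-viscosity, high Reynolds number regimes this is typically the case, consistent with the viscosity dependence discussed above.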
May 21, 2017, 2:00 PM → May 26, 2017, 6:00 PM Asia/Shanghai
Zhen An LIU (IHEP)

The Technology and Instrumentation in Particle Physics 2017 (TIPP2017) conference will be held in Beijing, the capital of China, from May 22-26. TIPP2017 will be the fourth in this series of international conferences on detectors and instrumentation, held under the auspices of the International Union of Pure and Applied Physics (IUPAP). The TIPP conference series, a science-driven cross-disciplinary conference, started in Tsukuba, Japan in 2009 (TIPP 2009), with the second conference held in Chicago in 2011 (TIPP 2011), and the third in Amsterdam in 2014 (TIPP2014). The conference aims to provide a stimulating atmosphere for scientists and engineers from around the world to discuss the latest developments in the field. The program focus is on all areas of detector development and instrumentation in particle physics, astro-particle physics and closely related fields, in particular:
Accelerator-based high energy physics
Non-accelerator particle physics and particle astrophysics
Experiments with synchrotron radiation and neutrons
Instrumentation and monitoring of particle and photon beams
Applications in photon science, biology, medicine, and engineering
It is increasingly important for the field to form industrial partnerships that may lead to transformational new technologies. This medium-sized conference brings together experts from the scientific and industrial communities to discuss current work and to plan for the future. The conference will, as in the past, include plenary invited talks and parallel tracks with contributions outlining state-of-the-art developments in different areas. The program will cover the following areas in parallel tracks focusing on the main themes of sensors, experiments, data processing, emerging technologies, and applications to other fields:
Experimental detector systems
Gaseous detectors
Semiconductor detectors
Particle identification
Photon detectors
Dark Matter Detectors
Neutrino Detectors
Astrophysics and space instrumentation
Front-end electronics and fast data transmission
Trigger and data acquisition systems
Machine Detector Interface and beam instrumentation
Backend readout structures and embedded systems
Medical Imaging, security and other applications
Let's meet in beautiful Beijing for a fruitful conference, and we wish you a nice stay.
Liu, Zhen-An
On behalf of the Organization Team
Mon, May 22
Registration, 1st floor, 8:00 AM → 9:00 AM
Opening, Convention hall No.2
Convener: Prof. Zhen-an LIU (IHEP, CAS)
IHEP status 20m
Speaker: Prof. Yifang WANG (IHEP)
9:20 AM → 10:50 AM Plenary 1, Convention hall No.2
Detector challenges for high-energy e+e- colliders 45m
Future high-energy e+e- colliders have the potential to perform high precision measurements, for example on the Higgs boson and the top quark. They will provide very accurate information that will complement LHC data, thereby offering significantly more insight into the open questions in particle physics. These scientific objectives put strong demands on the performance of the detectors under study for future e+e- colliders, comprising linear colliders (ILC, CLIC) as well as circular colliders (CEPC, FCC-ee). There is a long tradition of detector development for future linear colliders, which has focused on highly granular calorimetry, silicon-based vertex and tracking detectors or a TPC. The presentation will comprise an overview and current status of these detector technology developments.
The presentation will also assess the differences in experimental conditions between linear and circular colliders in the few-hundred GeV energy range, targeting Higgs and top physics, and the potential impact on the corresponding detector designs. Speaker: Eva Sicking (CERN) Direct cosmic-ray measurements & Dark Matter search 45m An impressive wealth of results has been released in the last decade by space borne experiments based on state-of-the-art particle physics detector technologies. Direct cosmic ray measurements are finally entering in a "precision era" highlighting new and unexpected phenomena, which challenge the current understanding of cosmic ray acceleration and propagation in galaxy while looking towards new exotic sources, as Dark Matter. We will review the experimental approach in the last generation of direct CR experiments and discuss the latest results on charged cosmic rays with the main focus on the measurements more sensitive to Dark Matter signals. Speaker: Prof. Bruna Bertucci (University of Perugia) Tea Break 30m Corridor on the second floor Corridor on the second floor 11:20 AM → 12:20 PM Convener: Prof. Christian BOHM (Stockholm U./PC) Detectors for electron ion colliders 30m Several plans for Electron-Ion Colliders (EIC) have been advanced around the world. In the US, the Nuclear Physics community, in its long range plan, has endorsed a US based EIC as its highest priority new construction after FRIB completion; R&D funds both for the accelerator and the detector are becoming available. I will discuss the considerations for an EIC detector and how they differ from other collider detectors and describe some of the existing detector concepts. Speaker: Dr Rikutaro Yoshida (JLAB) Future applications in medical imaging 30m Medical physics, particle physics, astrophysics, and other major branches of physics share a very broad technology common platform in their research and development of the respective instrumentation in these fields. Medical imaging often benefits greatly from advances made in particle physics, especially in the area of radiation detection technologies. For example, silicon photomultiplier (SiPM), developed first by and for high energy physics, has enabled revolutionary changes and created new potentials for novel medical imaging system designs unattainable previously. Fast electronics and data sciences advances in particle physics have also facilitated many quantum leaps in new medical imaging systems development, their innovative uses and breakthrough applications. Medical imaging also provides an ideal prototype platform for the very-large scale system planning, design, construction, testing and validation. These interactive and synergistic advances present unique opportunities for innovative developments of novel systems and future applications in medical imaging, such as modular, compact, application-specific, transformable and other innovative system designs for multi-modality, quantitative, and combined structural and molecular imaging applications, especially those in positron emission tomography (PET), single-photon emission computed tomography (SPECT), X-ray computed tomography (CT), digital X-ray, etc. Speaker: Prof. Chin-Tu Chen (Chicago Univ.) Lunch 1h 40m Banquet Hall on the second floor (Beijing North Star Continental Grand Hotel) Banquet Hall on the second floor Beijing North Star Continental Grand Hotel R1-Calorimeters(1) Room 305A Room 305A Conveners: Gianantonio Pezzullo (INFN-PI) , Prof. 
Nural Akchurin (Texas Tech University)
Design, status and perspectives for the Mu2e crystal calorimeter 18m
The Mu2e experiment at Fermilab searches for the charged-lepton flavor violating neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus. The dynamics of such a process is well modelled by a two-body decay, resulting in a mono-energetic electron with an energy slightly below the muon rest mass (104.967 MeV). If no events are observed in three years of running, Mu2e will set a limit on the ratio between the conversion rate and the capture rate Rμe of ≤ 6 × 10$^{−17}$ (@ 90% C.L.). This will improve the current limit by four orders of magnitude [1]. A very intense pulsed muon beam (∼ 10$^{10}$ μ/sec) is stopped on a target inside a very long solenoid where the detector is located. The Mu2e detector is composed of a tracker, an electromagnetic calorimeter and an external cosmic-ray veto surrounding the solenoid. The calorimeter plays an important role in providing excellent particle identification capabilities and a fast online trigger filter, while aiding the track reconstruction capabilities. It should maintain functionality in an environment where the n, p and photon backgrounds from muon capture processes and beam flash events deliver a dose of 120 Gy/year in the hottest area. It will also need to work in a 1 T axial magnetic field and a 10$^{-4}$ torr vacuum. The calorimeter requirements are to provide a large acceptance for 100 MeV electrons and reach:
• a time resolution better than 0.5 ns @ 100 MeV;
• an energy resolution O(10%) @ 100 MeV;
• a position resolution of 1 cm.
The calorimeter consists of two disks, each one made of 674 pure CsI crystals read out by two large area 2×3 arrays of UV-extended SiPMs of 6×6 mm$^2$. A dedicated beam test has been performed at the Beam Test Facility (BTF) in Frascati (Italy), where a small calorimeter prototype, based on a 3×3 matrix of undoped CsI crystals of 3×3×20 cm$^3$ coupled with large area UV-extended MPPCs from Hamamatsu, has been exposed to an electron beam in the energy range between 80 and 130 MeV. The analog signals have been acquired with a CAEN waveform digitizer at 250 MS/s. Time and energy resolution measurements have been performed using a low energy electron beam, in the range [80, 120] MeV, and cosmic rays. We present results of the beam test analyses for the timing and energy resolution. For normal incidence, a time resolution of ∼110 ps (250 ps) has been measured in the energy range around 100 MeV (20 MeV). The energy response has also been studied, achieving an energy resolution of the order of about 7% @ 100 MeV, as limited by energy leakage (due to the small calorimeter dimension) and by the beam energy spread. Reasonable data-MC agreement is observed. The dependence of the response and resolution on the impinging angle is also presented.
References: [1] Mu2e Collaboration, Mu2e Technical Design Report, http://arxiv.org/abs/1501.05241, 2015.
Speaker: Gianantonio Pezzullo (I)
Applications of Very Fast Inorganic Crystal Scintillators in Future HEP Experiments 18m
Future HEP experiments at the energy and intensity frontiers require fast inorganic crystal scintillators with excellent radiation hardness to face the challenges of unprecedented event rate and severe radiation environment.
This paper reports recent progress in application of fast inorganic scintillators for future HEP experiments, such as thin LYSO crystals for a shashlik sampling calorimeter proposed for the CMS upgrade at HL-LHC, undoped CsI crystals for the Mu2e experiment at Fermilab and a rare earth doped BaF2 crystals for Mu2e-II. Applications of very fast crystal scintillators for Gigahertz hard X-ray imaging for the proposed Marie project at LANL will also be discussed. Speaker: Dr Ren-Yuan Zhu (Caltech) Cerium-doped Fused-silica Fibers 18m We report on current research and development activities on cerium-doped fused-silica optical fibers intended for use in high-energy calorimetry, particle tracking, beam monitoring, dosimetry, and myriad other applications outside particle physics. We have partnered with the specialty fibers industry leader Polymicro Technologies and produced several scintillating and wavelength shifting fibers with an eye towards achieving exceptional radiation-hardness above and beyond what is available today. We present results from beam tests on light yield, pulse shape, attenuation length, and light propagation speeds. We also discuss the results from extensive gamma irradiation tests and the lessons learned. Speaker: Prof. Nural Akchurin (Texas Tech University) Liquid xenon detector with VUV-sensitive MPPCs for MEG II experiment 18m The MEG II experiment is an upgrade of the MEG experiment to search for the charged lepton flavor violating decay of muon, $\mu^+ \rightarrow e^+ \gamma$. The MEG II experiment is expected to reach a branching ratio sensitivity of $4 \times 10^{-14}$ , which is one order of magnitude better than the sensitivity of the current MEG experiment. The resolutions of the all detectors will be improved by a factor of 2, to cope with the increased beam rate in MEG II. The performance of the liquid xenon (LXe) γ-ray detector will be greatly improved with a highly granular scintillation readout realized by replacing 216 photomultiplier tubes (PMTs) on the γ-ray entrance face with 4092 Multi-Pixel Photon Counters (MPPCs). For this purpose, we have developed a new type of MPPC which is sensitive to the LXe scintillation light in vacuum ultraviolet (VUV) range, in collaboration with Hamamatsu Photonics K.K. The MPPC has been tested, and an excellent performance has been confirmed including high photon detection efficiency (> 15%) for LXe scintillation light. Based on the measured properties of the MPPC, an excellent performance of the LXe detector has been confirmed by Monte Carlo simulation. The construction and the commissioning of the detector is in progress. The performance of the VUV-sensitive MPPC will be reported, as well as the preliminary results during the detector commissioning. Speaker: Shinji Ogawa (T) Development of Radiation-Hard Scintillators and Wavelength Shifting Fibers 18m We have been performing research on the radiation-hard active media for calorimetry by exploring intrinsically radiation-hard materials and their mixtures. The first samples we probed were Polyethylene Naphthalate (PEN), Polyethylene Terephthalate (PET) and thin sheets of HEM. These materials have been reported to have promising performance under high radiation conditions. Recently, we developed a new scintillator material doping Peroxide-cured polysiloxane bases with the primary fluors p-terphenyl (pTP), p-quarterphenyl (pQP), or 2.5-Diphenyloxazole (PPO) and/or the secondary fluors 3-HF or bis-MSB. 
The scintillation yield of the pTP/bis-MSB sample was compared to a BGO crystal and was measured to be roughly 50% higher than that of the BGO crystal. Various scintillator tiles were exposed to the gammas from a 137Cs source at the University of Iowa Hospitals and Clinics up to 1 and 10 MRad. The results are within expectations and exhibit sufficiently high performance for implementations in the future/upgrade hadron/lepton collider detectors. We have also identified materials with proven radiation resistance, long Stokes shifts to enable long self-absorption lengths, and decay constants of ~10 ns or less for the development of radiation-hard wavelength shifting fibers. Here we report on the recent advancements in the development and testing of radiation-hard scintillators and wavelength shifting fibers and discuss possible future implementations.
Speaker: Burak Bilki (U)
R2-Neutrino Detectors(1) Room 305C
Conveners: Jingbo Wang (University of California, Davis), Michele Cascella (University College London)
Design of the Single Phase Liquid Argon TPC for ProtoDUNE 18m
The Deep Underground Neutrino Experiment (DUNE) will use a large liquid argon (LAr) detector to measure the CP violating phase, determine the neutrino mass hierarchy and perform precision tests of the three-flavor paradigm in long-baseline neutrino oscillations. It will also allow sensitive searches for proton decay and the detection and measurement of electron neutrinos from core collapse supernovae. In the DUNE far detector, four modules with a fiducial mass of 10 kton each are planned. Since each module represents a large leap from the current LArTPCs of 10$^2$-ton mass, DUNE is constructing kiloton-scale engineering prototypes at CERN to validate the design, fabrication, installation and operation of the full scale detector components. In addition to the engineering studies, charged particle beam tests will also be conducted in these prototypes to provide precision measurements of the detector response to different particle species and energies. ProtoDUNE-SP is the prototype of the single phase liquid argon TPC. It has an active volume of approximately 7.2 × 7 × 6 m$^3$, constructed with components intended for the larger far detector. Due to the large scale and underground siting of the far detector, great emphasis was placed on the detector cost, reliability and ease of installation. A modular TPC design is the key to achieving these goals. The DUNE-SP TPC is constructed from hundreds of pre-fabricated and tested TPC modules with unique features:
- The anode plane assemblies (APAs) can be tiled on 3 sides with virtually no dead space;
- The cathode plane assemblies (CPAs) use all resistive material to improve high voltage safety;
- The field cage modules are designed to be both mechanically and electrically modular.
Details of the design, fabrication and testing will be presented.
Speaker: Bo Yu (B)
Design and Construction of the Short-Baseline Near Detector (SBND) at Fermilab 18m
The Short-Baseline Near Detector (SBND) is one of the three detectors in Fermilab's short-baseline neutrino physics program, which is projected to start collecting data in 2019. SBND is to measure the un-oscillated beam flavor composition to enable precision searches for neutrino oscillations via both electron neutrino appearance and muon neutrino disappearance in the far detectors. The core component of the SBND detector is based on the Liquid Argon TPC (LArTPC) technology.
The design and construction of SBND also serves an important role in the on-going R&D efforts within neutrino physics to develop the LArTPC technology toward many-kiloton-scale detectors for next generation long-baseline neutrino oscillation experiments. In this talk, we will present SBND design and construction progress and challenges together with the project schedule.
Speaker: Dr Ting Miao (Fermilab)
Latest results from the NEMO-3 and SuperNEMO experiments 18m
Neutrinoless double-beta decay, if observed, would be proof that the neutrino is its own antiparticle, would be evidence for total lepton number violation, and could allow a measurement of the absolute neutrino mass. Tracking calorimeter experiments have particular strengths, including the ability to search for neutrinoless double-beta decay amongst several different isotopes hosted in source foils. Full event reconstruction provides powerful background rejection capability, and, in the event of a discovery, topological measurements are a powerful handle to determine the nature of the lepton number violating process. I will present the latest results from the NEMO-3 experiment together with the current status and future prospects for its successor: SuperNEMO.
Speaker: Michele Cascella (U)
PROSPECT - A Precision Reactor Oscillation and Spectrum Experiment 18m
PROSPECT, the Precision Reactor Oscillation and SPECTrum Experiment, is a multi-phased short baseline reactor antineutrino experiment that aims to precisely measure the antineutrino spectrum of a highly enriched uranium-235 (HEU) reactor and probe possible neutrino oscillations involving a ∆m$^2$ ∼ 1 eV$^2$ scale sterile neutrino. In PROSPECT phase-I, a 14 × 11 optically segmented Li-6 loaded liquid scintillator (LiLS) detector will be deployed at 7-12 m from the High Flux Isotope Reactor (HFIR) at Oak Ridge National Lab (ORNL). PROSPECT is able to measure the U-235 spectrum, to address the inconsistency between predictive spectral models and the latest experimental measurements of the reactor antineutrino spectrum, within 2 reactor cycles. The oscillation measurement will probe the sterile-neutrino best-fit region within 1 year of data taking. This talk will detail the design of PROSPECT's novel lithium-loaded liquid scintillator-based detector, the performance of existing PROSPECT prototypes, and the status of the production detector's construction.
Speakers: Littlejohn Bryce (Illinois Institute of Technology), Mr Xianyi Zhang (IIT)
Discussion time 18m
R3-Trigger and data acquisition systems(1) Room 305E
Conveners: Louis Helary (CERN), Ralf SPIWOKS (CERN)
Commissioning and integration testing of the DAQ system for the CMS GEM upgrade 18m
The CMS muon system will undergo a series of upgrades in the coming years to preserve and extend its muon detection capabilities during the High Luminosity LHC. The first of these will be the installation of triple-foil GEM detectors in the CMS forward region with the goal of maintaining trigger rates and preserving good muon reconstruction, even in the expected harsh environment. In 2017 the CMS GEM project is looking to achieve a major milestone with the installation of 5 super-chambers in CMS; this exercise will allow for the study of services installation and commissioning, and integration with the rest of the subsystems for the first time. An overview of the DAQ system will be given with emphasis on the usage during chamber quality control testing, commissioning in CMS, and integration with the central CMS system.
Speaker: Alfredo Castaneda (T) MiniDAQ1: a compact data acquisition system for GBT readout over 10G Ethernet 18m The LHCb experiment at CERN is undergoing a significant upgrade in anticipation of the increased luminosity that will be delivered by the LHC during Run 3 (starting in 2021). In order to allow efficient event selection in the new operating regime, the upgraded LHCb experiment will have to operate in continuous readout mode and deliver all 40MHz of particle collisions directly to the software trigger. In addition to a completely new readout system, the front-end electronics for most sub-detectors are also to be redesigned in order to meet the required performance. All front-end communication is based on a common ~5Gbps radiation-hard protocol developed at CERN, called GBT. MiniDAQ1 is a complete data-acquisition platform developed by the LHCb collaboration for reduced-scale tests of the new front-end electronics. The hardware includes 36 bidirectional optical links and a powerful FPGA in a small AMC form-factor. The FPGA implements data acquisition and synchronization, slow control and fast commands on all available GBT links, using a very flexible architecture allowing front-end designers to experiment with various configurations. The FPGA also implements a bidirectional 10G Ethernet network stack, in order to deliver the data produced by the front-ends to a computer network for final storage and analysis. An integrated single-board-computer runs the new control system that is also being developed for the upgrade, this allows MiniDAQ1 users to interactively configure and monitor the status of the entire readout chain, from the front-end up to the final output. MiniDAQ1 hardware is currently finalized and successfully used by several sub-detector groups within the collaboration, work is currently well underway on MiniDAQ2, which will feature a high-throughput readout protocol based on PCI Express in place of Ethernet. Firmware and software have already been designed so as to minimize the effort required to transition from MiniDAQ1 to its successor, which implements the final design that will be commissioned in 2019. Speaker: Paolo Durante (CERN) The Intelligent FPGA Data Acquisition Framework 18m The Intelligent FPGA Data Acquisition Framework (IFDAQ) is used for the development of the data acquisition systems. It provides a collection of IPcores needed to built entire data acquisition systems starting from a very simple stand-alone Time-to-Digital-Converter module to a large-scale DAQ including time distribution, slow control, data concentrators and event builders. The IPcore library consists of SERDES-based TDC with a resolution depending on the FPGA type (229 ps for the Artix7-1 speedgrade, 100 ps for the Virtex6-2 speedgrade), an ADC interface including data processing ([pedestal] determination, signal detection and time extraction using digital contstant fraction discrimination), a Unified Communication Framework (UCF), an event builder, and a slow control core. The UCF is an inter-FPGA communication protocol for high-speed serial interfaces. It provides up to 64 different communication channels via a single serial link. One channel is reserved for timing and trigger information, the other channels can be used for slow control interfaces and data transmission. All channels are bidirectional and share link bandwidth according to assigned priority. The timing channel distributes messages with fixed and deterministic latency in one direction. 
From this point of view the protocol implementation is asymmetrical. The framework supports point-to-point and star-like 1:n topologies. The star-like topology can be used for front-ends with low data rates and pure time-distribution systems. In this topology, the master broadcasts information according to assigned priority, the slaves communicate in a time-sharing manner to the master. The first applications of the IFDAQ is the upgrade of the drift detectors of the COMPASS experiment at CERN, the straw detectors of the dirft chambers of the NA64 experiment at CERN, and the read-out for the Belle II pixel detector at KEK in Japan. Speaker: Mr Dominic Gaisbauer (TU Muenchen) Integration of data acquisition systems of Belle II outer-detectors for cosmic ray test 18m The Belle II experiment is scheduled to start in 2018 and the development of data acquisition (DAQ) system as well as its detector is ongoing. The target luminosity of SuperKEKB, an asymmetric electron-positron collider, is 8x10^35 cm-2s-1, which is 40 times larger than its predecessor, KEKB, and the construction of the DAQ system is challenging. The Belle II detector consists of seven sub-detectors. Frontend electronics for each sub-detector digitizes signals and sends the data to back-end readout boards. To reduce the cost of the development and achieve easier operation, these readout boards are common for all sub-detectors except for the innermost pixel detector. After the readout boards, the data then go through PC farms, where event building and online data-reduction by software trigger are performed, and are stored in a storage system. Currently, most of Belle II outer sub-detectors, including outer tracking detectors, calorimeters and a barrel particle identification detector, have been already installed in the Belle II detector and the integration of DAQ systems for the sub-detectors is in progress. Since the Belle II DAQ system is designed so that we can quickly change operation modes with a standalone sub-detector or combined ones, the DAQ system for is first tested with each sub-detector and then integrated to the combined DAQ system. Towards the global cosmic-ray commissioning with magnetic field in June 2017, we started data-taking of cosmic ray events with central drift chamber and electromagnetic calorimeter separately and the combined data-taking for those two detectors was also tested. We present the status of integration of the DAQ system for the installed sub-detectors. Speaker: Satoru Yamada (KEK) R4-Photon detectors(1) Room 307 Conveners: Prof. Gerald Eigen (University of Bergen) , Prof. Harry van der Graaf (Nikhef) Development of planar microchannel plate photomultiplier with full range response and pixelated readout 18m Planar microchannel plate photomultipliers (MCP-PMTs) with bialkali photocathodes are able to achieve single photon detection with excellent time (picosecond) and spatial (millimeter) resolution. They have recently drawn great interests in experiments requiring time of flight (TOF) measurement and/or Cherenkov imaging. Current MCP-PMTs have a response range of 300 nm – 600 nm, limited by the window transmission and cathode materials. By replacing the glass window with fused silica, the detection range can be dramatically extended from 300 nm to 170 nm, providing much more efficient Cherenkov radiation detection. The Argonne MCP-PMT detector group has recently designed and fabricated 6 cm x 6 cm MCP-PMTs with fused silica window. 
Initial characterization indicates that the fused silica window photomultipier exhibits a transit-time spread of 57 psec at single photoelectron detection mode and of 27 psec at multi photoelectron mode (100 photoelectrons). The MCP-PMTs was also tested at Fermilab test beam facility for its particle detection performance and rate capability, showing high rate capability up to 75 kHz/cm2, higher than the requirement for future electron-ion collider (EIC) experiment. Currently, the group is exploring the new MCP-PMT with pixelated readout. With a pixelated readout, the new MCP-PMT will provide better position resolution for various applications in different experiments such as Belle II and EIC. The progress on pixelated readout MCP-PMT production and characterization will also be presented and discussed in the presentation. Speaker: Dr Xie Junqi (Argonne National Laboratory) Improvement of the MCP-PMT performance under a high count rate 18m We developed a square-shaped micro-channel-plate photomultiplier tube (MCP-PMT) for the TOP counter in the Belle II experiment in collaboration with Hamamatsu Photonics K.K. It has a time resolution about 30 ps for single photon detection, a large photocoverage of 23 mm square photocathode and a peak quantum efficiency greater than 28% at a wavelength around 360 nm. Those excellent time resolution and efficiency are essential for the TOP counter to reconstruct the Cherenkov image for particle identification. However a major concern of the MCP-PMT is deterioration of the photocathode, and thus drop of the quantum efficiency. That is caused by outgassing from the MCP, of which amount depends on the output charge from the MCP. The quantum efficiency of the initial prototype of the MCP-PMT dropped down to 80% of the beginning at an integrated output charge per photocathode area of only less than 0.1 C/cm$^2$. On the other hand, several C/cm$^2$ is expected for the MCP-PMTs on the TOP counter at 50 ab$^{-1}$ integrated luminosity even when the operation gain of the MCP-PMT is as low as $5\times10^5$. That is because the MCP-PMTs suffer from intensive photon hits of several MHz/PMT due to the beam background. Therefore extending the lifetime of the MCP-PMT was absolutely imperative. We took three steps of approaches to extend the lifetime: blocking the gas and ions from reaching the photocathode by a ceramic cap and an aluminum layer; adopting atomic layer deposition (ALD) technique to coat the MCP surface to prevent outgassing; applying some processes in production to reduce the residual gas on the MCP. By each step, we succeeded in extending the lifetime to about 1 C/cm$^2$, about 10 C/cm$^2$, and more than 13 C/cm$^2$, respectively. The detail of the measurement of the lifetime will also be shown in this presentation. There is another issue to use the MCP-PMT under such a high count rate: The time resolution of the MCP-PMT becomes worse above several MHz/PMT. The origin of the worse resolution is considered as a local distortion of the electric field in the MCP where the electrons are depleted. The fraction of the depleted region increases as the count rate because it takes $O$(10 ms) to recharge the fired micro channels of high resistance by the strip current. This presentation will cover test results of the time resolution under LED backgrounds as well as possible mitigation measures against the deterioration of the time resolution. 
Speaker: Kodai Matsuoka (KMI, Nagoya University) The VSiPMT project: characterization of the second generation of prototypes 18m VSiPMT (Vacuum Silicon PhotoMultiplier Tube) is an innovative photodetector that matches the excellent photon counting performances of SiPMs with the large sensitive surfaces of standard PMTs. In such device, the photoelectrons generated by a large surface photocathode are accelerated and driven by an electrostatic focusing system towards a small focal area covered by a SiPM. This solution is expected to offer several important advantages with respect to standard PMTs technology (improved photon counting, faster time response, higher stability and a decreased power consumption), while keeping comparable values of gain and quantum efficiency. The project stands on a huge preliminary phase, mainly aimed at investigating the performances of SiPMs as electron detectors. The promising results of this work provided the proof of feasibility of the device and encouraged Hamamatsu Photonics at realizing a first generation of VSiPMT prototypes, based on the combination of a circular GaAsP photocathode (3 mm diameter) and a custom SiPM without optical entrance window. The extensive characterization of these devices provided results going far beyond the most optimistic expectations: excellent SPE resolution, easy low-voltage-based stability, very good time performances, high gain and good PDE are among the most outstanding achievements, counter-balanced by some drawbacks like a still high dark noise and lack of linearity. The success of this phase have boosted a further design effort, which resulted in the realization of a second generation of a VSiPMT prototypes with a 1-inch photocathode surface. The outstanding performances of such device make it an attractive solution for a potentially limitless field of applications, ranging from fundamental physics research to medical applications. In this work, the characterization of the second generation of VSiPMT prototypes will be described in detail, with a special focus on the adopted technological solutions and on the guidelines for a further engineering phase aimed at the realization of a next version of prototypes with an even larger photocathode surface. Speaker: daniele vivolo (I) The cathode quantum efficiency(QE) Testing System 18m Photomultiplier tubes (PMTs), as a kind of light detector with high sensitivity and super fast time response, are widely used in physics experiment, industrial production, medical equipment and other fields. And With the increasingly common use of large area PMTs for nuclear and particle physics experiments, information on the uniformity of photocathode is important to accurate particle identification. Especially in the non-transfer cathode system during the cathode preparation, the antimony ball arrangement and cathode preparation technology always contribute non-uniformity of large area PMTs. A system studying the cathode performance of PMT has been built in our lab. These performance parameters such as quantum efficiency (QE) at specific wavelength, QE spectral response from 200 nm to1000 nm and cathode uniformity could be measured in this system. Because the size and shape of cathode vary with PMTs, three cathode uniformity scanning setups, one for 2-in PMTs ,another for 8-in PMTs, a third for 20-in PMTs were respectively built. Speaker: GAO Feng (IHEP) Tea Break 30m Floor R1-Dark Matter Detectors Room 305A Conveners: Prof. 
Tea Break 30m Floor
R1-Dark Matter Detectors Room 305A Conveners: Prof. Jin Li (IHEP/THU), Murat Guler (METU)
Development of a novel detector system for the keV sterile neutrino search with KATRIN 18m
Sterile neutrinos are a well-motivated extension of the Standard Model of Particle Physics. They are experimentally accessible via their mixing with the known active neutrinos. A sterile neutrino with a mass of $\mathcal{O}$(keV) is a promising dark matter candidate, possibly solving the too-big-to-fail and cusp-vs-core problems. In addition to astrophysical searches by X-ray telescopes, several laboratory measurements have been proposed. One is the TRISTAN project pursued in the framework of KATRIN. The KATRIN (KArlsruhe TRItium Neutrino) experiment investigates the energy endpoint of the tritium beta-decay to determine the effective mass of the electron anti-neutrino with a precision of 200 meV (90$\,$% C.L.) after an effective data-taking time of three years. The signature of a sterile neutrino would be a kink-like structure in the tritium beta-decay spectrum originating from the mixing with the active neutrino states. The TRISTAN project will proceed in two phases. Phase-0 will use the standard KATRIN setup, whereas Phase-I will use a greatly improved detector system which will reduce systematics and allow a high count rate ($\mathcal{O}$(Mcps)) on the detector, increasing the available statistics. This novel detector system will consist of $\mathcal{O}$(5000) silicon drift detectors (SDDs) with separate read-out and digitisation of each channel. To minimise the impact of electron backscattering on the spectrum, the unavoidable inactive entrance window has to be thinned to below 30 nm. First measurements with a down-scaled prototype will be shown. In addition, an overview of the two measurement phases and their respective experimental sensitivities will be given.
Speaker: Tobias Bode (M)
XENON1T: Searching for WIMPs at the Ton Scale 18m
The XENON collaboration seeks to measure WIMP-nucleon interactions using liquid xenon time projection chambers (TPCs). The experiment is a staged process, with the most recent iteration, XENON1T, currently in operation at the Gran Sasso National Laboratory in Italy. This TPC uses 3.5 tons of liquid xenon in total and will achieve an unprecedented sensitivity to the WIMP-nucleon cross section with 2.0 tons of liquid xenon in the target. XENON1T has been in operation collecting science data since Fall 2016. This talk will present the latest updates from the first dark matter exposure.
Speaker: Daniel Coderre (University of Bern)
Nuclear Emulsion Based Detector for Directional Dark Matter Search 18m
Direct dark matter searches are promising techniques to identify the nature of dark matter particles. A variety of experiments have been developed over the past decades, aiming at detecting Weakly Interacting Massive Particles (WIMPs) via their scattering in a detector medium. Exploiting directionality would give a proof of the galactic origin of dark matter, making it possible to provide a clear and unambiguous signal-to-background separation. In particular, directionality appears to be the only way to overcome the neutrino background that is expected to finally prevent standard techniques from further lowering cross-section limits. The directional detection of dark matter requires a very sensitive experiment combined with high-performing technology. The NEWSdm experiment, based on nuclear emulsions, is proposed to measure the direction of WIMP-induced nuclear recoils and is expected to produce a prototype in 2017.
We discuss the discovery potential of a directional experiment based on the use of a solid target made of newly developed nuclear emulsions and read-out systems reaching sub-micrometric resolution.
Speaker: Prof. Murat GULER (METU)
Dark matter search with superconducting detector 18m
WIMP dark matter searches in the GeV mass range are led by xenon detectors. Searches for light (sub-GeV and MeV) dark matter are planned or ongoing with lighter targets such as silicon nuclei. We are planning an experiment to extend the dark matter search to the keV mass region. To lower the energy threshold, electrons in a superconductor will be used as the target. The Cooper-pair energy gap is small enough to observe keV dark matter recoils. A superconducting detector, the LEKID (Lumped Element Kinetic Inductance Detector), will be used for readout. Detector design, setup and commissioning will be performed. In this talk, the experimental concept and status will be reported.
Speaker: Keishi Hosokawa (Tohoku university)
PandaX-4ton liquid xenon detector for rare physics search 18m
The PandaX collaboration has proposed a 4-ton liquid xenon detector to search for rare physics such as dark matter and neutrinos. The PandaX-4ton detector contains a TPC with a diameter of 1.2 m and a height of 1.2 m. The TPC size is approximately doubled in every dimension with respect to PandaX-II (500 kg of liquid xenon), presenting several technological challenges. With a 6 ton-year exposure, the sensitivity to the spin-independent WIMP-nucleon cross section is expected to reach 10^-47 cm^2.
Speaker: Ning Zhou (Shanghai Jiao Tong University)
R2-Experimental detector systems(1) Room 305C Conveners: Gary Varner (University of Hawaii) , Dr qiang wang (ihep)
Alignment of the CMS Tracker at LHC Run-II 18m
The inner tracking detector of the Compact Muon Solenoid (CMS) at the CERN Large Hadron Collider (LHC) is $2.6\text{ m}$ wide and $5.2\text{ m}$ long, and is made of $1440$ silicon pixel and $15\,148$ silicon strip modules in the inner and outer part, respectively. Its high granularity has provided an excellent hit resolution of the order of $10\,\mu\text{m}$ during LHC Run-I and II. In order to achieve such a precision despite the finite fabrication tolerances of the large structures and despite the changes of temperature and magnetic field, the tracking system needs to be aligned, i.e. a correction on the position, orientation and curvature needs to be computed for every single sensor. This challenging problem of $O(10^6)$ parameters can be solved using collision and cosmic-ray data by the MillePede II and HipPy algorithms, where the alignment parameters are determined by minimising the track-hit residuals of large samples of tracks. In this talk, we present the final alignment for 2016 data to illustrate the basic principles of those algorithms and to discuss some data-driven methods that are used to validate the performance of the alignment.
Speaker: Patrick Connor (U)
Operational Experience with Radioactive Source Calibration of the CMS Hadron Endcap Calorimeter Wedges with Phase I Upgrade Electronics 18m
The Phase I Upgrade of the CMS Hadron Endcap Calorimeters consists of new photodetectors (Silicon Photomultipliers in place of Hybrid Photo-Diodes) and front-end electronics (QIE11). The upgrade will allow the elimination of the high-amplitude noise and the drifting response of the Hybrid Photo-Diodes, at the same time enabling the mitigation of the radiation damage of the scintillators and the wavelength-shifting fibers thanks to the larger spectral acceptance of the Silicon Photomultipliers.
The upgrade will also allow the longitudinal segmentation of the readout to be increased, which is beneficial for pile-up mitigation and for recalibration of depth-dependent radiation damage. As a realistic operational exercise, the responses of the Hadron Endcap Calorimeter wedges are being calibrated with a 60Co radioactive source, both with the current and with the upgraded electronics. The exercise will provide a demonstration of the benefits of the upgrade. Here we describe the instrumentation details and the operational experience related to the sourcing exercise.
Thermal mockup studies of the Belle II vertex detector 18m
The Belle II experiment is currently under construction at the e+e- collider SuperKEKB in Japan. Its vertex detector (VXD), comprising a two-layer DEPFET pixel detector (PXD) surrounded by four layers of double-sided silicon strip detectors (SVD), is indispensable for the accurate determination of the decay point of B or D mesons as well as for the track reconstruction of low-momentum particles. In order to guarantee acceptable operating conditions for the VXD, the cooling system must be capable of removing a total heat load of about 1 kW from the very confined VXD volume. Evaporative two-phase CO2 cooling in combination with forced air flow has been chosen for the VXD cooling system. To verify and optimize the vertex detector cooling concept, a full-size VXD mockup has been constructed at DESY. In this talk, thermal and mechanical studies of the Belle II VXD mockup are presented.
Speaker: Dr Hua Ye (DESY)
Developments on a Microchannel CO2 cooling system for the LHCb VELO Upgrade 18m
The LHCb Vertex Detector (VELO) will be upgraded in 2018 to a lightweight pixel detector capable of 40 MHz readout and operation in very close proximity to the LHC beams. The thermal management of the system will be provided by evaporative CO2 circulating in microchannels embedded within thin silicon plates. This solution has been selected due to its excellent thermal efficiency, the absence of thermal expansion mismatch with silicon ASICs and sensors, the radiation hardness of CO2, and a very low contribution to the material budget. Although microchannel cooling is gaining considerable attention for applications related to microelectronics, it is still a novel technology for particle physics experiments, in particular when combined with evaporative CO2 cooling. The R&D effort for LHCb is focusing on the design and layout of the channels together with a fluidic connector and its attachment, which must withstand pressures up to 200 bar. This talk will describe the design and optimization of the cooling system for LHCb together with the latest prototyping results. Even distribution of the coolant is ensured by means of restrictions implemented before the entrance to the race-track layout of the main cooling channels. The coolant flow and pressure drop have been simulated together with the thermal performance of the device. The design of a suitable low-mass connector, together with the soldering technique to the cooling plate, will be described. Long-term reliability as well as resistance to extremes of pressure and temperature are of prime importance. The setup and operation of a cyclic stress test of the prototype cooling channel designs will be described. In parallel with the development of the micro-channel substrate, the VELO group is also working on the development of an alternative cooling substrate.
This design foresees a network of parallel stainless-steel capillaries embedded within an aluminium nitride cooling plate which forms the backbone of the module. A dedicated manifold supplies the CO2 via tiny orifices of 0.16 mm diameter which serve as an expansion point and control the resistance of the parallel channels. The design of the manifold and pipes and the thermal performance of full-scale prototypes will be described. The efficiency of CO2 cooling in extracting the heat from the module will be shown for both implementations, as well as the potential integration into the module construction.
Speaker: Kazuyoshi Akiba (IF-UFRJ)
R3-Backend readout structures and embedded systems Room 305E Conveners: Christian Bohm (Stockholm University) , christophe de La Taille (OMEGA)
Phase-I Trigger Readout Electronics Upgrade for the ATLAS Liquid-Argon Calorimeters 18m
The upgrade of the Large Hadron Collider (LHC) scheduled for the shutdown period of 2018-2019, referred to as the Phase-I upgrade, will increase the instantaneous luminosity to about three times the design value. Since the current ATLAS trigger system does not allow a sufficient increase of the trigger rate, an improvement of the trigger system is required. The Liquid Argon (LAr) Calorimeter read-out will therefore be modified to use digital trigger signals with a higher spatial granularity in order to improve the identification efficiencies of electrons, photons, taus, jets and missing energy, at high background rejection rates, at the Level-1 trigger. The new trigger signals will be arranged in 34000 so-called Super Cells, which achieve 5-10 times better granularity than the trigger towers currently used and allow an improved background rejection. The readout of the trigger signals will process the signal of the Super Cells at every LHC bunch-crossing at 12-bit precision and a frequency of 40 MHz. The data will be transmitted to the back-end using a custom serializer and optical converter and 5.12 Gb/s optical links. In order to verify the full functionality of the future Liquid Argon trigger system, a demonstrator set-up has been installed on the ATLAS detector and is operated in parallel to the regular ATLAS data taking during LHC Run-2. The noise level and the linearity of the energy measurement have been verified to be within the requirements. In addition, we have collected data from 13 TeV proton collisions during the LHC 2015 run, and have observed real pulses from the detector through the demonstrator system. The talk will give an overview of the Phase-I Upgrade of the ATLAS Liquid Argon Calorimeter readout and present the custom-developed hardware, including its role in real-time data processing and fast data transfer. This contribution will also report on the performance of the newly developed ASICs, including their radiation tolerance, and on the performance of the prototype boards in the demonstrator system, based on various measurements with the 13 TeV collision data. Results of the high-speed link test with the prototypes of the final electronics boards will also be reported.
Speaker: Camplani Alessandra (Università degli Studi e INFN Milano)
A Service-Oriented Platform for embedded monitoring systems in the Belle II experiment 18m
uSOP is a general-purpose single-board computer designed for deep embedded applications in the control and monitoring of detectors, sensors, and complex laboratory equipment.
In this paper, we present its deployment in the monitoring system framework of the ECL endcap calorimeter of the Belle II experiment, presently under construction at the KEK laboratory (Tsukuba, Japan). We discuss the main aspects of the hardware and software architectures, tailored to the needs of a detector designed around CsI scintillators.
Speaker: Dr Francesco Di Capua (Università Federico II di Napoli and INFN)
Integration of readout of the vertex detector in the Belle II DAQ system 18m
The data acquisition system is one of the biggest challenges in Belle II, a next-generation B-factory experiment, which is designed to collect data streams from the seven sub-detectors at a much higher trigger rate and with a larger data size, up to 30 kHz and 30 GB/s at the level-1 trigger, due to a luminosity 40 times higher than that of the Belle experiment. The Belle2Link, a common detector readout scheme using COPPER (common pipeline electronics readout) and HSLB (high-speed link board) boards, was developed to handle data from all sub-detectors except the PXD (pixel vertex detector) and to merge them into the HLT (high-level trigger) PC farm, while the DHH (Data Handling Hybrid), a dedicated readout system for the PXD with FPGA-based data processing electronics, has been newly developed to handle the huge event size from the DEPFET ultra-fine pixel sensors. A scheme to reduce the pixel event size by selection of RoIs (regions of interest) on the pixel surfaces has also been developed, based on online track reconstruction in the HLT farm using data from the SVD (silicon vertex detector) and the CDC (central drift chamber). The integration of the readouts of the SVD and PXD, the inner vertex detectors, is ongoing for the phase II detector commissioning run using the first beam collisions from the SuperKEKB accelerator, in parallel with the DAQ system operation for the outer sub-detectors in the phase I cosmic-ray run. The outer sub-detectors are fully installed in the phase II run, while the inner detectors are partially installed together with dedicated background sensors to measure the beam background and to confirm the radiation resistance before the phase III physics run with the full Belle II detector. In addition to the beam data taking, we will also operate several slow control systems: the configuration of the detector readout electronics, the high-voltage / low-voltage power supplies, environment monitors, and the cooperation with the SuperKEKB accelerator. Toward the phase II and III runs, we have been accumulating operational experience with the inner vertex detectors during three beam tests at the DESY electron test beam facility. There were a number of issues to be tested: the establishment of the data links, the readout performance of the SVD-COPPERs and the PXD-DHHs, the online tracking for the RoI extraction, the slow control including the detector power supplies, and the demonstration of data-taking shifts by non-experts. The final beam test will be carried out in Feb. 2017 to finalize the sensor / readout systems and confirm our achievements. In this presentation, we will report on the achievements in integrating the readouts of the vertex detectors into the Belle II DAQ system, based on the results of the final beam test, and then discuss prospects for the coming physics run in 2018.
Speaker: Tomoyuki Konno (High energy accelerator research organization (KEK))
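A quick back-of-the-envelope check of the level-1 figures quoted in the DAQ abstract above (30 kHz trigger rate, up to 30 GB/s) is sketched below in Python; only the two quoted numbers are used, and everything derived from them is illustrative.

    l1_rate_hz = 30e3          # level-1 trigger rate quoted in the abstract
    throughput_b_per_s = 30e9  # aggregate data rate quoted in the abstract (bytes/s)

    avg_event_size = throughput_b_per_s / l1_rate_hz
    print(f"average event size ~ {avg_event_size/1e6:.1f} MB")  # ~1 MB per event

    seconds_per_day = 86400
    print(f"raw level-1 output ~ {throughput_b_per_s*seconds_per_day/1e15:.1f} PB/day "
          "before HLT filtering and PXD RoI reduction")

Volumes of this size are the reason the PXD data are reduced online through RoI selection before being stored, as described in the abstract.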
An analog processor for real-time data filtering in large detectors 18m
A decision-making process requires evaluating the saliency of data on a time scale short enough for the decision to be useful in the ecosystem that generated the data. Experimental high-energy physics pioneered the smart, real-time management of big data produced by detectors on the ns scale, and has always been on the cutting edge in developing fast and complex electronic trigger systems that exploit the expected data model to perform the selection. Very-large-volume experiments searching for rare events, such as DUNE (Deep Underground Neutrino Experiment), may produce an extremely high data flow with a very limited possibility of setting up an effective trigger, in particular when searching for cosmological events that typically have a faint signature. Removing this bottleneck is a crucial challenge to extend the discovery potential of such experiments. We propose to overcome this limitation by introducing a novel technology, the WRM (Weighting Resistive Matrix), to perform a topological data-driven selection. The WRM technique was originally invented as a fast topological trigger for hadron collider experiments, and was recently implemented as a fast engine for demanding computer vision applications. By treating DUNE data as projected gray-scale images, we can exploit the WRM technology to provide a fast, data-driven, trigger-less selection allowing smart noise suppression on raw data in real time.
Speaker: Giulio Aielli (U)
The SLAC Instrumentation and Control Platform 18m
The SLAC Technology Innovation Directorate has developed a new electronics platform for the instrumentation and control of particle accelerators and experiments. This "Common Platform" system is based on the Advanced Telecommunications Computing Architecture (ATCA), and uses the ATCA shelf backplane for data, management, precision timing and machine interlocking. Local interfacing and data processing are provided by FPGAs on each ATCA card, each interfaced to ADCs, DACs, network and front-end electronics. This "Common Platform" will be used as the primary accelerator control and instrumentation system for future SLAC accelerators, including the LCLS-II X-ray FEL, as well as for many experiment sub-systems. It is also being developed for use with superconducting sensors for a CMB telescope and a variety of other projects.
Speaker: Josef Frisch (S)
R4-Semiconductor detectors(1) Room 307 Conveners: Carlos Garcia Argos (University of Freiburg) , Prof. Kazuhiko Hara (University of Tsukuba)
Construction of the Phase I upgrade of the CMS pixel detector 18m
The innermost layers of the CMS tracker are built out of pixel detectors arranged in three barrel layers (BPIX) and two forward disks in each endcap (FPIX). The original CMS detector was designed for the nominal instantaneous LHC luminosity of 1 x 10^34 cm^-2 s^-1. Under the conditions expected in the coming years, which will see an increase by a factor of two in the instantaneous luminosity, the CMS pixel detector would see a dynamic inefficiency caused by data losses due to buffer overflows. For this reason, the CMS Collaboration installed a replacement pixel detector during the recent extended end-of-year shutdown.
The Phase I upgrade of the CMS pixel detector will operate at full efficiency at an instantaneous luminosity of 2 x 10^34 cm^-2 s^-1 with increased detector acceptance and additional redundancy for the tracking, while at the same time reducing the material budget. These goals are achieved using a new readout chip and modified powering and readout schemes, one additional tracking layer both in the barrel and in the disks, and new detector supports, including a CO2-based evaporative cooling system, which contribute to the reduction of the material in the tracking volume. This contribution will review the design and technological choices of the Phase I detector, with a focus on the challenges and difficulties encountered, as well as the lessons learned for future upgrades.
Speaker: Benedikt Vormwald
Commissioning of the Phase I upgrade of the CMS pixel detector 18m
The Phase I upgrade of the CMS pixel detector is built out of four barrel layers (BPIX) and three forward disks in each endcap (FPIX). It comprises a total of 124M pixel channels in 1,856 modules and is designed to withstand instantaneous luminosities of up to 2 x 10^34 cm^-2 s^-1. Different parts of the detector have been assembled over the last year and later brought to CERN for installation inside the CMS tracker. At various stages during the assembly, tests have been performed to ensure that the readout and power electronics and the cooling system meet the design specifications. After tests of the individual components, system tests have been performed before the installation inside CMS. In addition to reviewing these tests, we also present results from the final commissioning of the detector in situ using the central CMS DAQ system, as well as results from cosmic-ray data, in preparation for data taking in pp collisions.
Speaker: Benedikt Vormwald (U)
Operational Experience with the ATLAS Pixel Detector 18m
Run-2 of the LHC is providing new challenges to track and vertex reconstruction imposed by the higher collision energy, pileup and luminosity that are being delivered. The ATLAS tracking performance relies critically on the Pixel Detector; therefore, in view of Run-2 of the LHC, the ATLAS experiment constructed the first 4-layer pixel detector in HEP, installing a new pixel layer, also called the Insertable B-Layer (IBL). The Pixel Detector was also refurbished with a new service quarter panel to recover about 3% of defective modules lost during Run-1, and an additional optical link per module was added to overcome, in some layers, the readout bandwidth limitation when the LHC exceeds the nominal peak luminosity by almost a factor of 3. The key features and challenges met during the IBL project will be presented, as well as its operational experience and the Pixel Detector performance at the LHC.
Speaker: Djama Fares (I)
TRACKING AND VERTEXING WITH THE ATLAS INNER DETECTOR IN THE LHC RUN2 AND BEYOND 18m
Run-2 of the LHC has provided new challenges to track and vertex reconstruction, with higher centre-of-mass energies and luminosity leading to increasingly high-multiplicity environments and boosted, highly-collimated physics objects. To cope with these challenges, ATLAS is equipped with the Inner Detector tracking system, built using different technologies: silicon planar sensors (pixel and micro-strip) and gaseous drift tubes, all embedded in a 2 T solenoidal magnetic field. In addition, the Insertable B-Layer (IBL) is a fourth pixel layer, which was inserted at the centre of ATLAS during the first long shutdown of the LHC.
An overview of the use of each of these subdetectors in track and vertex reconstruction, as well as of the algorithmic approaches taken to the specific tasks of pattern recognition and track fitting, is given. The performance of the Inner Detector tracking and vertexing will be summarised. This includes a factor of three reduction in the reconstruction time, optimisation for the expected conditions, novel techniques to enhance the performance in dense jet cores, time-dependent alignment of sub-detectors, and special reconstruction of charged particles produced at large distances from the interaction point. Moreover, data-driven methods to evaluate the vertex resolution, fake rates, track reconstruction inefficiencies in dense environments, and track parameter resolution and biases will be shown. Luminosity increases in 2017 and beyond will also provide challenges for the detector systems and offline reconstruction, and strategies for mitigating the effects of increasing occupancy will be discussed.
Speaker: Kyungeon CHOI (ATLAS)
Operation of the LHCb silicon tracking and vertexing systems in LHC Run-2 18m
The primary goal of the LHCb experiment at the LHC is to search for indirect evidence of new physics via measurements of CP violation and rare decays of beauty and charm hadrons. The LHCb detector is a single-arm forward spectrometer with precise silicon-strip detectors in the regions with the highest particle occupancies. Around the interaction region, the VErtex LOcator (VELO) has active sensing elements as close as 8 mm from the LHC beams. The Silicon Tracker (ST) consists of a large-area detector located upstream of a dipole magnet and three stations placed downstream of the magnet. Both detectors share the same front-end electronics, the Beetle chip. The detectors performed very well throughout LHC Run-1, but the new operating conditions for Run-2 pose new challenges. In particular, the bunch separation has been reduced to 25 ns, which is of the same order of magnitude as the shaping time of the front-end read-out amplifiers. Signal spill-over from adjacent bunch crossings has to be considered in the reconstruction of clusters and tracks. The centre-of-mass energy has also been increased, leading to much higher particle multiplicities and increased radiation damage to the silicon sensors. The non-uniform exposure of the LHCb sensors makes them an ideal laboratory to study radiation damage effects in silicon detectors. The VELO sensors are exposed to fluences of the order of $5\times10^{13}$ 1-MeV neq/cm$^2$ per fb$^{-1}$, while the ST sensors are exposed to more moderate fluences of the order of $10^{12}$ 1-MeV neq/cm$^2$ per fb$^{-1}$. Several different methods are used to monitor the radiation damage. In particular, regular high-voltage scans are taken, which allow a precise measurement of the charge collection efficiency (CCE) as a function of the voltage. This analysis is used to determine the operational voltages and allows any degradation in the detector performance to be monitored. The overall performance of the VELO and ST during Run-2 will be presented. The results of the latest high-voltage scans will be shown, and measurements of the effective depletion voltage will be compared with the expected values calculated using the Hamburg model. Several fits to the model will be shown that illustrate different annealing scenarios, related to maintenance activities of the cooling system that are envisaged in Run-2, and their impact on the operation of the detector during the remaining Run-2 data taking.
Speaker: Vincenzo Battista
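The effective depletion voltage tracked in the abstract above is tied to the effective doping concentration N_eff of the sensor bulk through the standard planar-diode relation; the Python sketch below evaluates it for illustrative numbers (the sensor thickness and N_eff values are placeholders, not LHCb measurements).

    # V_dep = q * |N_eff| * d^2 / (2 * eps0 * eps_Si)
    # Radiation damage changes N_eff, which is why the HV/CCE scans described
    # above are used to follow the effective depletion voltage with fluence.
    Q_E = 1.602e-19      # C
    EPS0 = 8.854e-12     # F/m
    EPS_SI = 11.9        # relative permittivity of silicon (approximate)

    def v_depletion(n_eff_cm3, thickness_um):
        n_eff_m3 = n_eff_cm3 * 1e6
        d_m = thickness_um * 1e-6
        return Q_E * n_eff_m3 * d_m**2 / (2 * EPS0 * EPS_SI)

    # Illustrative 300 um thick sensor, before and after heavy irradiation:
    for n_eff in (1e12, 5e12):   # cm^-3, placeholder values
        print(f"|N_eff| = {n_eff:.0e} cm^-3 -> V_dep ~ {v_depletion(n_eff, 300):.0f} V")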
Reception Lawn area
Conveners: Prof. Nural Akchurin (Texas Tech University) , Dr Ren-Yuan Zhu (Caltech)
Digital Electromagnetic Calorimetry with Extremely Fine Spatial Segmentation 18m
The CALICE Digital Hadron Calorimeter, the DHCAL, utilizes Resistive Plate Chambers, RPCs, as active media. The readout is provided by 1 cm x 1 cm pads with the front-end electronics directly coupled to the RPCs. The chambers, including the readout, are housed within a cassette structure with steel and copper front and back planes. The cassettes are interleaved with iron or tungsten absorber plates to induce hadronic and electromagnetic interactions. In special tests, the active layers of the DHCAL were exposed to low-energy particle beams without being interleaved with absorber plates. The thickness of each layer corresponded approximately to 0.29 radiation lengths or 0.034 nuclear interaction lengths. Here we report on the measurements performed with this device in the Fermilab test beam with positrons in the energy range of 1 to 10 GeV. The measurements provide unprecedented spatial detail of low-energy electromagnetic interactions, with a factor of approximately 5000 finer granularity compared to conventional electromagnetic calorimeters. The results are compared to simulations based on GEANT4 and a standalone program to emulate the detailed response of the active elements.
Precision Timing Detectors with Cadmium Telluride Sensors 18m
Precision timing detectors for high energy physics experiments with temporal resolutions of a few tens of ps are of pivotal importance to master the challenges posed by the highest-energy particle accelerators. Calorimetric timing measurements have been a focus of recent research, enabled by exploiting the temporal coherence of electromagnetic showers. Scintillating crystals with high light yield as well as silicon sensors are viable sensitive materials for sampling calorimeters. Silicon sensors have very high efficiency for charged particles. However, their sensitivity to photons, which carry a large fraction of the energy in an electromagnetic shower, is limited. To enhance the efficiency of detecting photons, materials with higher atomic numbers than silicon are preferable. In this paper we present test beam measurements with a Cadmium Telluride sensor as the active element of a secondary emission calorimeter, with a focus on the timing performance of the detector. A Schottky-type Cadmium Telluride sensor with an active area of 1 cm$^2$ and a thickness of 1 mm is used in an arrangement with tungsten and lead absorbers. Measurements are performed with electron beams in the energy range from 2 GeV to 200 GeV. A timing resolution of 20 ps is achieved under the best conditions.
Speaker: Adi Bornheim (Caltech)
Prototype tests for a highly granular scintillator-based hadron calorimeter 18m
Within the CALICE collaboration, several concepts for the hadronic calorimeter of a future linear collider detector are studied. After having demonstrated the capabilities of the measurement methods in "physics prototypes", the focus now lies on improving their implementation in "engineering prototypes" that are scalable to the full linear collider detector. The Analog Hadron Calorimeter (AHCAL) concept is a sampling calorimeter of tungsten or steel absorber plates and plastic scintillator tiles read out by silicon photomultipliers (SiPMs) as the active material.
The front-end chips are integrated into the active layers of the calorimeter and are designed for minimal power consumption (power pulsing). The versatile electronics allows the prototype to be equipped with different types of scintillator tiles and SiPMs. In recent beam tests, a prototype with ~3700 channels, equipped with several types of scintillator tiles and SiPMs, was exposed to electron, muon and hadron beams. The experience from these beam tests, as well as the availability of new-generation SiPMs with much reduced noise and better device-to-device uniformity, resulted in an improved detector design with surface-mount SiPMs allowing for easier mass assembly. The presentation will discuss the test-beam measurements with the AHCAL engineering prototype, the improved detector design and the ongoing construction of a large prototype for hadronic showers.
Speaker: Yong Liu (D)
The CMS High-Granularity Calorimeter (HGCAL) for Operation at the High-Luminosity LHC 18m
The High-Luminosity LHC (HL-LHC) will integrate 10 times more luminosity than the LHC, posing significant challenges for radiation tolerance and event pileup on detectors, especially for forward calorimetry, and foreshadowing the issues for future colliders. As part of its HL-LHC upgrade program, the CMS collaboration is designing a High Granularity Calorimeter to replace the existing endcap calorimeters. It features unprecedented transverse and longitudinal segmentation for both the electromagnetic (ECAL) and hadronic (HCAL) compartments. This will facilitate particle-flow calorimetry, where the fine structure of showers can be measured and used to enhance pileup rejection and particle identification, whilst still achieving good energy resolution. The ECAL and a large fraction of the HCAL will be based on hexagonal silicon sensors of 0.5 - 1 cm^2 cell size, with the remainder of the HCAL based on highly-segmented scintillators with SiPM readout. The intrinsic high-precision timing capabilities of the silicon sensors will add an extra dimension to event reconstruction, especially in terms of pileup rejection. An overview of the HGCAL project is presented, covering motivation, engineering design, readout and trigger concepts, and expected performance.
Speaker: Florian Pitters (C)
Electromagnetic calorimeter prototype for the SoLID project at Jefferson Lab 18m
SoLID (Solenoidal Large Intensity Device) is a new, general-purpose, large-acceptance spectrometer being planned for experimental Hall A at Jefferson Lab, Newport News, Virginia, USA. The shashlik-type sampling technique will be used for the SoLID electromagnetic calorimeter. This calorimeter is 20 radiation lengths long, with 194 layers of 1.5 mm-thick plastic scintillator alternating with 0.5 mm-thick lead plates. A few calorimeter prototype modules have been built at Shandong University. The light yield of these modules has been tested with cosmic rays. Preliminary beam tests were carried out as well. The construction process of these calorimeter prototype modules and the cosmic-ray and beam test results will be presented.
Speaker: Prof. Cunfeng Feng (Shandong university)
Conveners: Prof. Gerald Eigen (University of Bergen) , daniele vivolo (INFN-NA)
Slow liquid scintillator for scintillation and Cherenkov light separation 18m
Slow liquid scintillator (water-based or oil-based) is proposed as the detection material for a few future neutrino experiments. It can be used to distinguish between scintillation and Cherenkov light.
Neutrino detectors based on it will thus have directionality and particle identification for charged particles, so that a better sensitivity is expected for low-energy (MeV-scale) neutrino physics, solar physics, geoscience and supernova relic neutrino searches. Linear alkylbenzene (LAB) is the primary ingredient of these liquid scintillators. We studied all the relevant physical aspects of different combinations of LAB, 2,5-diphenyloxazole (PPO) and p-bis-(o-methylstyryl)-benzene (bis-MSB), including the light yield, time profile, emission spectrum, attenuation length of the scintillation emission and the visible light yield of the Cherenkov emission. We also measured the attenuation spectrum of some relevant neutrino detector materials, such as acrylic. Some formulations allow a good separation between Cherenkov and scintillation light, and a reasonably high light yield can also be achieved. The expected physics improvement with this type of liquid scintillator will also be discussed.
Speaker: Mr Ziyi Guo (Tsinghua University)
The R&D progress of the Jinping Neutrino Experiment 18m
The Jinping Neutrino Experiment will perform in-depth research on solar neutrinos, geo-neutrinos and supernova relic neutrinos. Many efforts have been devoted to the R&D for the experimental proposal. The assay and selection of low-radioactivity stainless steel (SST) was carried out; the U and Th concentration is less than 10^-8 g/g for the selected SST samples. A wide field-of-view, high-efficiency light concentrator has been developed. Previous designs of light concentrators were optimized to attain a wide field of view (90 degrees) and a high efficiency (above 98%). At the same time, a 1-ton prototype has been constructed and placed underground at the Jinping laboratory to 1) test the performance of several key detector components, such as acrylic, pure water and ultra-high-molecular-weight polyethylene rope, 2) understand the neutrino detection technology with liquid scintillator and slow liquid scintillator and 3) measure the in-situ Jinping underground background, such as fast neutrons. The design, construction and initial operation of the 1-ton prototype will be discussed. A simulation framework has also been developed to facilitate the experimental study of the 1-ton prototype and future detector design.
Speaker: Dr Lei Guo (Tsinghua University)
Light Detection with Large Area DUV Sensitive SiPMs in nEXO 18m
The Enriched Xenon Observatory (EXO) aims to search for 0νββ decays of Xe-136 using a liquid xenon TPC detector. nEXO is the second phase of EXO with a 5-ton liquid xenon TPC, requiring ~4 m^2 of photo-detectors which have to be very efficient at 175 nm and very radio-pure. SiPMs are ideally suited for this application; however, they have never been used over such a large area, and their detection efficiency in the DUV region is relatively low. In the past few years, many efforts have been made to develop the photo-detector system for nEXO. In this talk, we will report on the requirements for the photo-detectors in nEXO, the characterization of SiPMs manufactured by Fondazione Bruno Kessler (FBK), Hamamatsu Photonics and KETEK, the analog readout technology for large-area SiPMs, inter-connections, etc.
Speaker: Dr Guofu Cao (IHEP)
Double Calorimetry System in JUNO Experiment 18m
The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino-oscillation experiment with a 20-kiloton liquid scintillator detector of unprecedented 3% energy resolution (at 1 MeV), located 700 meters underground.
There are ~18,000 20-inch photomultiplier tubes (PMTs) in the central detector, with an optical coverage greater than 75%. Control of the systematics of the energy response is crucial to achieve the designed energy resolution as well as to reach 1% precision on the absolute energy scale. In such a large detector the detected number of photoelectrons in each PMT varies by two orders of magnitude over the reactor antineutrino energy range, which is a challenge for the single-channel charge measurement. JUNO has approved a new Small-PMT system, comprising up to 36,000 3-inch PMTs interleaved with the 20-inch PMTs. Each individual 3-inch PMT receives mostly single photoelectrons, which provides a unique way to calibrate the energy response of the 20-inch PMT system with a photon-counting technique. In addition, the Small-PMT system naturally extends the dynamic range of the energy measurement, benefiting high-energy signals such as cosmic muons and atmospheric neutrinos. We will present the physics concept of this double calorimetry and the design and implementation of the 3-inch PMTs and their readout electronics system.
Speaker: Dr Miao He (IHEP)
R3-Medical Imaging, security and other applications Room 305E Conveners: Igal Jaegle (University of Florida) , Dr qiang wang (ihep)
Multi-layer ionization chamber for quality assurance and stopping power measurements 18m
The Center for Proton Therapy (CPT) of the Paul Scherrer Institute has a long history of technical innovation and development in the field of proton therapy and related quality assurance (QA). The second proton pencil beam scanning gantry built at the CPT, Gantry2, is a state-of-the-art system. The unique integration of QA equipment and detectors within the control system of the gantry allows for fast and detailed measurements. Here we present our latest developments in detection systems, their performance for QA, and their research capabilities in comparison with their commercial equivalents. The QA equipment developed for proton range measurements at Gantry2 consists of a multi-layer ionization chamber (MLIC). We compare three unique MLIC systems, including a commercially available device, against a water-based range measurement device as reference. The range measurement with the MLIC has shown a deviation of less than 0.5 mm water-equivalent thickness for 115 energies between 70 MeV and 230 MeV on a daily basis since November 2013. The extensive integration of our detectors with the control system allows fast spot-based measurements. Including the energy change, we can achieve a proton range evaluation within 125 ms. Consequently, our device can measure the proton range alteration caused by material samples for hundreds of energies within less than a minute. This method provides a fast and direct way to measure the stopping power of compounds with a σ < 0.1 mm. The results show the strong points of the equipment developed in-house, such as its consistency, reliability and innovation possibilities.
Speaker: Francis Gagnon-Moisan (Paul Scherrer Institute)
XEMIS: liquid xenon Compton camera for 3γ imaging 18m
We report on an innovative liquid xenon Compton camera project, XEMIS (XEnon Medical Imaging System), for a new functional medical imaging technique based on the detection in coincidence of 3 $\gamma$-rays. The purpose of this 3$\gamma$ imaging modality is to obtain a 3D image using 100 times less activity than in current PET systems.
The combination of a liquid xenon time projection chamber (LXe-TPC) and a specific ($\beta^{+}$,$\gamma$) radionuclide emitter, $^{44}$Sc, is investigated in this concept. In order to provide an experimental demonstration of the use of a LXe Compton camera for 3$\gamma$ imaging, a succession of R&D programs, XEMIS1 and XEMIS2, has been developed using innovative technologies. The ultimate goal is the construction of a large camera, XEMIS3, for whole-human-body imaging. The first prototype, XEMIS1, has been successfully validated, showing very promising results for the energy, spatial and angular resolutions, with ultra-low-noise front-end electronics (below 100 electrons fluctuation) operating at the liquid xenon temperature of -101 $^{\circ}$C at 1.2 bar. A timing resolution of 44.3$\pm$3.0 ns for 511 keV photoelectric events has been estimated from the drift time distribution, equivalent to a spatial resolution along the z-axis of roughly 100 $\mu$m. The second phase, XEMIS2, dedicated to 3D imaging of small animals, is now under qualification. XEMIS2 is a monolithic liquid xenon cylindrical TPC that holds around 200 kg of liquid xenon, totally surrounding the small animal. The active volume of the detector is covered by 64 Hamamatsu PMTs and two segmented end anodes with a total of 20000 pixels, to detect simultaneously the UV scintillation photons and the ionization signals produced by the interaction of ionizing radiation. Characterization of the ionization signal using Monte Carlo simulation and data analysis has shown good performance for the energy measurement. In addition, in order to keep the liquid xenon at the desired temperature and pressure during normal operation, or to recover it as quickly as possible in an emergency, an innovative compact liquid xenon cryogenics subsystem (called ReStoX) has been successfully developed and validated. The XEMIS2 camera will be operational this year for preclinical research at the Center for Applied Multi-modality Imaging (CIMA) in the Nantes Hospital, and the detector performance has been evaluated through a dedicated simulation analysis.
Speaker: Yajing XING (SUBATECH, CNRS/IN2P3, Université de Nantes, Ecole des Mines de Nantes)
Feasibility study of track-based multiple scattering tomography 18m
Tomographic methods for the imaging and visualization of complex structures are widely used not only in medical applications but also for scientific use in numerous fields. The CT imaging technique, which is commonly used for imaging in industry and the medical sector, exploits the difference in attenuation length for photons in different materials. Complete absorption of the photon beam for materials of higher atomic number poses a limit on the technique. We propose a new imaging method based on the tracking of electrons in the GeV range traversing a sample under investigation. By measuring the distribution of the deflection angle at the sample, an estimate of the material budget is extracted for a given 2D cell in the sample. This allows for the 3D reconstruction of the material budget making use of an inverse Radon transform. For the validation of this method, the AllPix detector simulation framework including the Geant4 framework was used to simulate a realistic setup. This simulation includes the DATURA beam telescope for high-precision particle tracking and an electron beam in the range of several GeV as can be found at the DESY Test Beam Facility, for which first tests are planned. A structured aluminum phantom was used as the sample under study. The proposed imaging method represents a candidate for an alternative high-resolution tomographic technique. We will present a feasibility study of track-based multiple scattering tomography, including the simulation setup and reconstruction algorithms. It is shown that this method is able to resolve structures in the range of a few hundred micrometers for aluminum targets. The limits of this tomographic technique are discussed in terms of spatial resolution, cell-to-cell variance and discrimination power on the material budget.
Speaker: Paul Schuetze (Deutsches Elektronen-Synchrotron DESY)
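The material-budget observable used in the tomography study above is the width of the scattering-angle distribution, which the Highland parametrization relates to the traversed thickness in radiation lengths. A small Python sketch is given below; the beam momentum and aluminium thicknesses are illustrative choices within the ranges mentioned in the abstract, not values from the study.

    # Highland parametrization of the multiple-scattering angle width:
    #   theta_0 = (13.6 MeV / (beta*c*p)) * z * sqrt(x/X0) * (1 + 0.038*ln(x/X0))
    # Inverting a measured theta_0 cell by cell gives x/X0, the input to the
    # inverse Radon transform. For GeV electrons beta ~ 1 and z = 1.
    import math

    X0_ALUMINIUM_MM = 89.0   # radiation length of aluminium, ~8.9 cm

    def theta0_rad(p_gev, thickness_mm, x0_mm=X0_ALUMINIUM_MM):
        x_over_x0 = thickness_mm / x0_mm
        return (0.0136 / p_gev) * math.sqrt(x_over_x0) * (1 + 0.038 * math.log(x_over_x0))

    for t_mm in (1.0, 5.0, 10.0):   # illustrative aluminium thicknesses
        print(f"{t_mm:4.1f} mm Al, 3 GeV e- -> theta_0 ~ {theta0_rad(3.0, t_mm)*1e3:.2f} mrad")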
A fast monolithic pixel detector in a SiGe Bi-CMOS process 18m
The TT-PET collaboration is developing a new generation of fast, low-noise and low-power-consumption monolithic silicon detectors in SiGe Bi-CMOS technology. The target of this R&D is to produce a 100 µm thick monolithic detector with a time resolution better than 100 ps for minimum ionizing particles, 1 mm^2 readout pads and time digitization at the 20 ps level. This performance will be achieved with an overall power consumption of less than 20 mW/cm^2. A first application of this detector will be the development of a silicon-based TOF-PET scanner with 30 ps time resolution for 511 keV photons. The results of test-beam measurements using discrete-component electronics, as well as preliminary lab measurements on a monolithic chip realised with the IHP SG13S process, will be presented.
Speaker: Lorenzo Paolozzi (University of Geneva)
PETIROC2A: New measurement results on a fast ToF SiPM read-out chip 18m
Petiroc2A is a 32-channel ASIC designed jointly by the Omega laboratory and the Weeroc company. It is aimed at SiPM read-out for time-of-flight application prototyping. It is a complete read-out ASIC embedding a fast trigger line with a 10 GHz gain-bandwidth product and a precise energy measurement based on a shaper. Following the very-front-end chain, a time-to-amplitude converter allows the time of arrival of an event to be interpolated between two master clock ticks. A Wilkinson ADC performs the analogue-to-digital conversion of both the energy and the timing. Digital data are output through a serial link. Petiroc2A can also be used in different operating modes. The digital back-end can be disabled to allow a fully analogue mode; in that case, analogue data and channel triggers are output directly and can be exploited according to the user's needs. A photon-counting operating mode can also be set up; in photon-counting mode, the 32 triggers are available and photon counting up to 120 MHz has been measured. (Block diagram: http://www.weeroc.com/images/Products/blockscheme/petiroc2.png) The latest measurements on Petiroc2A will be presented in all three operating modes of the ASIC. In full digital mode, the linearity and intrinsic timing resolution will be presented. In analogue mode, the maximum event rate will be shown with the associated timing and energy resolution. In photon-counting mode, the maximum counting frequency will be shown. Beyond electrical measurements, nuclear measurements such as the CRT and energy resolution, from measurements performed at CERN and at several Petiroc2A user facilities, will be presented.
Speaker: Julien Fleury (W)
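The coarse-counter-plus-TAC timing scheme described in the Petiroc2A abstract can be summarized in a few lines; the Python sketch below shows how a timestamp is assembled from the two pieces, with a placeholder clock period and ADC range rather than the actual Petiroc2A specifications, and with the sign convention of the fine correction chosen arbitrarily.

    # Generic coarse + fine time reconstruction for a TAC-based TDC:
    # a counter counts whole clock periods, and the digitized TAC voltage
    # interpolates the arrival time inside the last period.
    CLOCK_PERIOD_NS = 25.0   # placeholder, e.g. a 40 MHz master clock
    ADC_FULL_SCALE = 1024    # placeholder 10-bit digitization of the TAC ramp

    def timestamp_ns(coarse_ticks, tac_adc_code):
        fine_fraction = tac_adc_code / ADC_FULL_SCALE   # 0..1 of one clock period
        return (coarse_ticks + fine_fraction) * CLOCK_PERIOD_NS

    print(f"t = {timestamp_ns(1000, 512):.3f} ns")               # example hit
    print(f"fine-time LSB ~ {CLOCK_PERIOD_NS / ADC_FULL_SCALE * 1e3:.1f} ps")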
Conveners: Hugo Delannoy (Interuniversity Institute for High Energies (ULB-VUB)) , Valerio Vagelli (INFN-PG)
Characterisation of the Hamamatsu silicon photomultiplier arrays for the LHCb Scintillating Fibre Tracker Upgrade 18m
In the context of the LHCb detector upgrade during the long shutdown of the LHC (2019/2020), the complete tracking system will be replaced to cope with the increased luminosity and the trigger-less readout scheme. A large-area (300 m^2) scintillating fibre tracker (SciFi) with more than 500k channels and a 250 µm readout pitch is under construction. The silicon photomultipliers used for the read-out provide high photon detection efficiency, low correlated noise (optical cross-talk and after-pulsing) and a short recovery time, and withstand a high neutron fluence. The Hamamatsu photo-detectors selected in November 2016 have been characterised before and after irradiation with neutrons and protons. In this talk we will focus on the study of the performance of these devices in the context of the LHCb SciFi application, in particular the single-photon detection capability after irradiation.
Speaker: Axel Kuonen
The Status of the R&D of the 20-inch MCP-PMT in China 18m
JUNO (the Jiangmen Underground Neutrino Observatory), to be built in Jiangmen, Guangdong province in south China, is a generic underground national laboratory for neutrino physics and other research fields. Its neutrino program requires a high-performance large detector, which needs approximately 16,000 photomultiplier tubes (PMTs) with a large sensitive area, high quantum efficiency, high gain and a large peak-to-valley ratio (P/V) for good single-photoelectron detection. Researchers at IHEP, Beijing conceived a new concept of MCP-PMT several years ago: small MCP (microchannel plate) units replace the bulky dynode chain of traditional large PMTs. In addition, a transmission photocathode on the front hemisphere and a reflection photocathode on the rear hemisphere are fabricated in the same glass bulb to form a nearly 4π effective photocathode, in order to enhance the efficiency of photoelectron conversion. A number of experienced researchers and engineers in research institutes and companies related to PMT fabrication in China jointly worked on the large-area MCP-PMT project. After three years of R&D, a number of 8-inch prototypes were produced and their performance was carefully tested at IHEP in 2013 using the MCP-PMT evaluation system built at IHEP. The 20-inch prototypes followed in 2014, and their performance improved considerably in 2015. The characteristics of the transmission photocathode (Trans. PC) were carefully studied by measuring the I-V curves and the quantum efficiency (QE) versus wavelength, and by mapping the QE for both the 8- and 20-inch photocathodes. Charge spectra of single photoelectrons, timing properties of the anode signals and the anode linearity were measured. Noise characteristics and after-pulse properties were studied at a gain of ~1.0×10^7. We are continuing simulation and experimental work to further improve our 8- and 20-inch MCP-PMT prototypes, in particular to improve the QE of the transmission photocathode and the photoelectron collection efficiency (CE) of the MCP unit. We believe that for the 20-inch prototypes a QE greater than 30% and a CE better than 90% are possible. With the large photocathode area, the QE and DE will be improved, but the TTS and dark noise will be worse.
So users need to find the balance between the above parameters for their different physics aims. Notably, the glass used for the 20-inch MCP-PMT has extra-low potassium and uranium content, resulting in an extra-low radiation background. The PMT purchase of JUNO: the bidding started on Oct. 23rd, 2015 and was completed on Nov. 17th, 2015. Balancing the PMT performance against the fiducial volume and converting all specifications (radioactivity, dark noise, TTS) into cost, JUNO ordered 15,000 pieces of the 20-inch MCP-PMT from NNVT.
Speaker: Dr Sen Qian (IHEP)
The 20-inch PMT system for the JUNO experiment 18m
The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose neutrino experiment currently in the civil construction stage. The primary goal is to determine the neutrino mass hierarchy and precisely measure the oscillation parameters by detecting reactor anti-neutrinos. The JUNO experiment will be equipped with around 20,000 PMTs with a large 20-inch photocathode, including 15,000 MCP-PMTs from a Chinese vendor and 5,000 dynode PMTs from Hamamatsu. To achieve the designed 3% energy resolution, the PMTs are required to have very high detection efficiency as well as a very compact layout in the central detector. The PMT system for JUNO includes PMT characterization, waterproof sealing, chain-implosion protection, Earth-magnetic-field shielding, and finally the installation of the PMTs in the detector. The characterization of the PMTs will use a test stand developed in a container for mass testing and a scanning station for sampling tests. Since the PMTs are required to work for 20 years in high-purity water at depths of up to 45 m, and the front-end electronics, including the base, the high voltage and the ADC chips, will be mounted on the PMT, it is very important to design a highly reliable waterproof seal. Moreover, since the PMTs will be arranged as closely as possible, with a spacing of only a few mm, to achieve a coverage larger than 75% in the central detector, their protection from chain implosion and their installation are very challenging. In this talk, all aspects of building the large PMT system for the JUNO experiment will be addressed, with a focus on the most challenging parts mentioned above.
Speaker: Dr Zhonghua Qin (IHEP)
Characterization of the large area photocathode PMTs for the JUNO Detector 18m
The primary physics goal of the Jiangmen Underground Neutrino Observatory (JUNO) is to resolve the neutrino mass hierarchy, taking advantage of the copious antineutrinos from two powerful nuclear power plants at distances of ~53 km in Guangdong Province, China. To meet this goal, JUNO has designed a 20 kt underground liquid scintillator (LS) detector that deploys 20k high-quantum-efficiency (HQE) photomultipliers (PMTs) to reach an energy resolution of 3%/$\sqrt{E/\mathrm{MeV}}$ and an energy scale uncertainty better than 1%. The required performance of such a massive LS detector is unprecedented, which places stringent requirements on the two types of PMTs used by JUNO, the Hamamatsu HQE PMT and the newly developed micro-channel plate (MCP) PMT. To select qualified PMTs and, more importantly, to supply the detector simulation with precise PMT performance data, the JUNO collaboration has developed two PMT performance evaluation systems: an industrial-container-based multi-PMT testing system and a PMT photocathode uniformity scanning station.
This talk will explain the requirements on the two types of JUNO PMTs in connection with the experiment's physics goals, the technical designs of the two PMT evaluation systems, and the strategy for carrying out the PMT evaluation.
Speaker: Prof. Wei Wang (Sun Yat-Sen University)
Tea Break 30m Corridor on the third floor
R1-Astrophysics and space instrumentation(1) Room 305A Conveners: Gary Varner (University of Hawaii) , Miroslav Gabriel (Max Planck Institute for Physics)
SiPM-based Camera for the Imaging Air Cherenkov Telescope of LHAASO 18m
In this paper, a SiPM-based camera technology is designed and developed for the Wide Field of View Cherenkov Telescope Array (WFCTA) of the Large High Altitude Air Shower Observatory (LHAASO). WFCTA consists of 18 Cherenkov telescopes. Each Cherenkov telescope consists of a 32×32 SiPM array which covers a field of view of 14°×16° with a pixel size of 0.5°. The main scientific goal of WFCTA is to measure the ultra-high-energy cosmic-ray composition and energy spectrum from 30 TeV to a couple of EeV. Because SiPMs do not age under strong light exposure, a SiPM-based camera can be operated in half-moon conditions and thus achieves a longer duty cycle than a PMT-based camera; e.g., the duty cycle of a SiPM-based camera is about 30%, while that of a PMT-based camera is about 10%. In addition to the absence of aging under strong light exposure, SiPMs have further advantages such as a single-photon counting response, high detection efficiency, high gain at low bias voltage and insensitivity to magnetic fields.
Speaker: Dr Shoushan Zhang (Institute of High Energy Physics)
A comprehensive analysis of polarised $\gamma$-ray beam data with a HARPO demonstrator 18m
HARPO is a design concept for a gaseous TPC aiming at a high-precision telescope and polarimeter for cosmic $\gamma$-rays, especially in the energy range from the pair-production threshold up to the order of 1 GeV, where current $\gamma$-ray telescopes show a sensitivity drop and no polarimetry exists due to multiple scattering. In order to investigate the feasibility, we built a HARPO demonstrator and performed a beam test campaign with a polarised $\gamma$-ray beam at the NewSUBARU accelerator in Japan in 2014. Our earlier studies showed promising results as a polarimeter even before performing analysis optimization. We are finalizing the polarimetry study by optimizing the analysis and also extending our study to the angular resolution as a measure of its telescope performance.
Speaker: Ryo Yonamine (CEA/Saclay)
TAIGA experiment – a new instrument for high energy gamma-ray astronomy and cosmic ray studies 18m
The gamma-ray observatory TAIGA (Tunka Advanced Instrument for cosmic ray physics and Gamma Astronomy) is being developed to study gamma rays and fluxes of charged cosmic rays in the energy range of 10^13 eV – 10^18 eV. The array will include a network of wide-angle (field of view (FOV) 0.6 sr) Cherenkov stations and up to 16 Imaging Atmospheric Cherenkov Telescopes (IACTs) with a FOV of ~10×10 degrees each, covering an area of 5 km², and muon detectors with a total area of 2000 m² distributed over an area of 1 km². The expected sensitivity of the observatory for the search for local sources of gamma rays in the energy range of 30-200 TeV is about 10^-13 erg/(cm² s). In the paper we give a detailed description of the photon detectors developed for the experiment. This paper also presents results from the operation of the first 28 wide-angle Cherenkov stations.
Speaker: Dr Bayarto Lubsandorzhiev (Institute for Nuclear Research of the Russian Academy of Sciences)
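The camera geometries quoted in the WFCTA and TAIGA abstracts above can be related with a few lines of solid-angle arithmetic; the Python sketch below simply re-derives the field of view from the quoted pixel counts and sizes (small-angle approximation, illustrative only).

    import math

    def fov_deg(n_pixels_per_side, pixel_deg):
        return n_pixels_per_side * pixel_deg

    def solid_angle_sr(fov_x_deg, fov_y_deg):
        # small-angle approximation for a rectangular field of view
        return math.radians(fov_x_deg) * math.radians(fov_y_deg)

    print(f"WFCTA camera: 32 pixels x 0.5 deg = {fov_deg(32, 0.5):.0f} deg per side")
    print(f"TAIGA IACT, ~10x10 deg -> ~{solid_angle_sr(10, 10):.3f} sr")
    print(f"TAIGA wide-angle station (0.6 sr) / IACT FOV ~ {0.6 / solid_angle_sr(10, 10):.0f}x")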
Speaker: Dr Bayarto Lubsandorzhiev (Institute for Nuclear Research of the Russian Academy of Sciences) Design and performances of the ED and the prototype array for LHAASO-KM2A 18m This paper describes the design optimization and performance of the Electromagnetic particle Detector (ED) used in the one-square-kilometre extensive air shower array (KM2A) of the LHAASO project. A 42-ED prototype array was set up at the Yangbajing cosmic ray observatory and has been in stable operation for two years. The performance of the prototype array is studied through hybrid observation of cosmic ray showers with the ARGO-YBJ experiment. The long-term stability of the ED and of the array is also presented. Speaker: Mr Jia Liu (IHEP) Calibration of the LHAASO-KM2A electromagnetic particle detectors 18m The Large High Altitude Air Shower Observatory (LHAASO) is a multipurpose project focusing on the study of high energy gamma-ray astronomy and cosmic ray physics. The one square kilometer array (KM2A) of the observatory will consist of more than 5000 electromagnetic particle detectors (EDs). The large number of detectors demands a robust, automatic self-calibration method. In this paper, the hardware- and software-level methods used to calibrate the output charge and the relative time offset of the EDs are described. These two independent calibration techniques have been applied in the KM2A prototype array to provide an estimate of the uncertainties. As a result of this work, we have achieved a precision which meets the requirements of the KM2A EDs. Speaker: Mr Hongkui Lv (Institute of High Energy Physics) Conveners: Murat Guler (METU) , Prof. Yinong LIU (Tsinghua University) The gas systems for the detectors at the LHC experiments: overview of the performances and upgrade strategy in view of the High Luminosity LHC phase. 18m Across the five experiments (ALICE, ATLAS, CMS, LHCb and TOTEM) taking data at the CERN Large Hadron Collider (LHC), 27 gas systems deliver the proper gas mixture to the corresponding detectors. Each gas system is made of different functional modules which are distributed on average over about 10 Universal Euroracks. If the 270 Euroracks used for the LHC gas systems were stacked one on top of the other, they would reach a height of about 500 metres, taller than the Eiffel Tower. The gas systems for the LHC experiments were built according to a common standard, allowing manpower and costs for maintenance and operation to be minimized. A typical gas system is made of several modules: mixer, pre-distribution, distribution, circulation pump, purifier, gas analysis, etc. Gas systems extend from the surface building, where the primary gas supply point is located, to the service balcony on the experiment, following a route a few hundred metres long. Even if all functional modules are basically equal between different gas systems, they can be configured to satisfy the specific needs of every gaseous particle detector. The statistics accumulated over the last years of LHC operation demonstrate how stable and reliable the gas systems for the LHC experiments are: on average a system was stopped for less than 1 hour per year, corresponding to an efficiency greater than 99.98%. Despite this excellent result, the activities are directed towards careful planning of maintenance and consolidation/upgrade work to maintain and, possibly, improve the performance in the years to come.
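As a quick cross-check of the quoted availability (a back-of-the-envelope estimate, not a figure from the contribution): a stop of less than one hour per year corresponds to a downtime fraction of $1\,\mathrm{h}/8766\,\mathrm{h} \approx 1.1\times10^{-4}$, i.e. an availability of about $99.99\,\%$, consistent with the quoted value of $>99.98\,\%$.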
Clear examples concern the gas systems' flow regulators, the need for an increase in the gas flow circulation in the detectors and the effort to reduce the consumption of expensive or greenhouse gases. After several years of operation, the performance of specific flow regulators is currently under investigation. An extensive calibration/verification campaign is ongoing. It will allow the flow range to be optimized for future operation and the performance to be understood in relation to the specific gas used. The increase in circulation flow is needed to safely operate the detectors at the higher LHC luminosity foreseen in the years to come and/or to integrate new detectors installed during present or imminent upgrades. In addition, the gas systems need reinforcement in order to maintain a level of redundancy ensuring a fast recovery in case of unexpected failures. Given the large detector volumes served by the gas systems and the use of relatively expensive or greenhouse gas components, for technical and economic reasons most of the detectors are operated in gas re-circulation mode. In addition, new systems, making use of different principles, have been developed for the recuperation of the gas mixture present inside the detectors. This operation is of particular relevance especially in preparation for the future LHC shutdowns. Some examples will be described in the present contribution. Speaker: Roberto Guida (C) Design of a high count rate photomultiplier base board for the sodium iodide detector on PGNAA 18m Prompt gamma neutron activation analysis (PGNAA) is a measurement technique for nondestructive elemental analysis. The method is used intensively for on-line and in situ analysis in various fields. It has a short measurement time, usually only about 120 s. In order to ensure the measurement accuracy and obtain better statistics, the measurement system requires a high count rate, which is an important indicator for PGNAA. Industrial field applications require large detector sizes to increase the detection efficiency. A sodium iodide (NaI) detector with a size of 6×7 inches is used. A resistor-divider structure is used in conventional PMT base boards. Under high count rate conditions, the drive current becomes insufficient and the output signal is distorted, which destroys the linear relationship between the PMT output signal amplitude and the incident particle energy. The upper limit of the energy spectrum is only 5 MeV at a 100k count rate. In this paper, a PMT base board with a current amplification design has been developed. A PNP transistor is used to amplify the drive current. It avoids AC coupling and achieves a small size. The test results show that the design satisfies the drive current demand at high count rates. The design increases the upper limit of the energy spectrum to 10 MeV at a 250k count rate, which improves the resolution of the elements. Speaker: Mr Baochen Wang (University of Science and Technology of China) A stand alone muon tracking detector based on the use of Silicon Photomultipliers 18m We present the characterization and performance of a muon tracking detector developed by New York University Abu Dhabi and the Gran Sasso National Laboratory (Italy). The tracker consists of 200 channels, organized in 10 separate levels. Each level is composed of two independent 40 cm × 40 cm planes, each one equipped with 10 plastic scintillator bars read out through Silicon Photomultipliers.
To increase the light collection, wavelength shifter fibers have been embedded in the scintillator bars. The instrument can be controlled and operated remotely, acting on the trigger level, the detection thresholds and the acquisition, making routine checks (noise spots, efficiency maps, event cluster length) possible, especially if deployed in locations with limited access. The detector and its data acquisition system have been designed and built with the aim of providing 3D particle reconstruction with 2 cm precision, allowing the particle direction to be determined. We will discuss its main applications: the possibility of precise measurements of the muon angular distribution, its possible use in Cultural Heritage studies, allowing for example the discovery of hidden chambers in pyramids, and its capability of performing building tomography. Speaker: Adriano Di Giovanni (N) The Barrel DIRC Detector for the PANDA Experiment at FAIR 18m The PANDA experiment at the international accelerator Facility for Antiproton and Ion Research in Europe (FAIR) near GSI, Darmstadt, Germany, will address fundamental questions of hadron physics. Excellent Particle Identification (PID) over a large range of solid angles and particle momenta will be essential to meet the objectives of the rich physics program. Charged PID for the barrel region of the PANDA target spectrometer will be provided by a DIRC (Detection of Internally Reflected Cherenkov light) detector. The PANDA Barrel DIRC will cover the polar angle range of 22-140 degrees and separate charged pions from kaons for momenta between 0.5 GeV/c and 3.5 GeV/c with a separation power of at least 3 standard deviations. The design is based on the successful BABAR DIRC and the SuperB FDIRC R&D with several important improvements to optimize the performance for PANDA, such as a focusing lens system, fast timing, a compact fused silica prism as expansion region, and lifetime-enhanced Microchannel-Plate PMTs for photon detection. We will discuss the baseline design of the PANDA Barrel DIRC, based on narrow bars made of synthetic fused silica and a complex multi-layer spherical lens system, and the potentially cost-saving design option using wide fused silica plates, and present the results of tests of a large system prototype with a mixed hadron beam at CERN. Speaker: Roman Dzhygadlo (GSI Helmholtzzentrum für Schwerionenforschung GmbH) The Phase-1 Upgrade of the ATLAS Level-1 Endcap Muon Trigger 18m The LHC is expected to increase its instantaneous luminosity to $3\times10^{34} \rm{cm^{-2}s^{-1}}$ after the 'Phase-1' upgrade, to take place from 2018-2020. In order to cope with the high luminosity, an upgrade of the ATLAS trigger system will be required. The first-level Endcap Muon system identifies muons with high transverse momentum by combining data from a fast muon trigger detector, the TGC, and some inner-station detectors. In the Phase-1 upgrade a new detector, called the New Small Wheel (NSW), will be installed in the inner station region. Finer track information from the NSW can be used as part of the muon trigger logic to enhance the performance significantly. In order to handle data from both the TGC and the NSW, new electronics have been developed, including the trigger processor board known as 'Sector Logic'. The Sector Logic board has a modern FPGA making use of Multi-Gigabit transceiver technology, which will be used to receive data from the NSW.
The readout system for trigger data has also been re-designed, with data transmission planned to be implemented with TCP/IP instead of a dedicated ASIC. This makes it possible to minimise the use of custom readout electronics and instead use commercial PCs and network switches to collect, format and send the data. This presentation will describe the aforementioned upgrades of the first-level Endcap Muon trigger system. Particular emphasis will be placed on the electronics and its firmware. The performance of the system and the trigger performance will also be discussed. Speaker: Akatsuka Shunichi (Kyoto University) The Phase-1 Upgrade of the ATLAS First Level Calorimeter Trigger 18m The ATLAS Level-1 calorimeter trigger is planning a series of upgrades in order to face the challenges posed by the upcoming increase of the LHC luminosity. The hardware built for the Phase-1 upgrade will be installed during the long shutdown of the LHC starting in 2019, with the aim of being fully commissioned before the restart in 2021. The upgrade will benefit from new front-end electronics for parts of the calorimeter which provide the trigger system with digital data with a tenfold increase in granularity. This makes possible the use of more complex algorithms than those currently used, while maintaining low trigger thresholds under much harsher collision conditions. Of principal significance among these harsher conditions will be the increased number of interactions per bunch crossing, known as pile-up. The Level-1 calorimeter system upgrade consists of an active and a passive system for digital data distribution and three different Feature EXtraction systems (FEXs) which run complex algorithms to identify electromagnetic energy deposits, taus, hadronic jets and large-area jets as well as total and missing transverse momentum. These algorithms feature isolation criteria and pile-up subtraction techniques as well as multiplicity determination for large-area jets. The algorithms are implemented in firmware on custom electronics boards with up to four high-speed processing FPGAs. The identified trigger objects are transmitted to the topological trigger system, which counts the objects with energies above configurable thresholds and performs various topological trigger algorithms combining the properties of different objects (a schematic illustration of this step is sketched below). The main characteristics of the electronic boards are a high input bandwidth of up to several TB/s per module, implemented through optical receivers, and a large number of tracks (up to several hundred) providing high-speed (up to 12.8 Gb/s) connections on the modules between the receivers and the FPGAs as well as between the FPGAs for data sharing. The PCB design uses modern materials, and signal routing is supported by modern design tools to ensure a high level of data integrity. The electrical power used is estimated to be up to 400 W per module, which necessitates careful design of the power distribution and heat dissipation system. Extensive simulation studies are carried out to understand and optimise the characteristics of the modules. Prototypes have been built and extensively tested to prepare for the final design steps and the production of the modules. The contribution will give an overview of the system and discuss the module design challenges. Extensive tests of the boards, including tests of the data transmission between modules, will be reported.
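To make the threshold-counting step concrete, here is a minimal, purely illustrative C++ sketch of counting trigger objects above configurable thresholds and forming a missing-transverse-energy estimate; all names, types and thresholds are invented for illustration and do not reflect the actual firmware implementation.

#include <cmath>
#include <cstddef>
#include <vector>

// One calorimeter trigger object as seen by the topological step (illustrative only).
struct TriggerObject { double et; double phi; };  // transverse energy [GeV], azimuth [rad]

// Count the objects whose transverse energy exceeds a configurable threshold.
std::size_t countAboveThreshold(const std::vector<TriggerObject>& objs, double thresholdGeV) {
    std::size_t n = 0;
    for (const auto& o : objs)
        if (o.et > thresholdGeV) ++n;
    return n;
}

// Missing transverse energy from the negative vector sum of all deposits.
double missingEt(const std::vector<TriggerObject>& objs) {
    double ex = 0.0, ey = 0.0;
    for (const auto& o : objs) {
        ex -= o.et * std::cos(o.phi);
        ey -= o.et * std::sin(o.phi);
    }
    return std::hypot(ex, ey);
}

In the real system these operations run in FPGA firmware within a fixed latency budget; the sketch only conveys the logic.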
Speaker: Victor ANDREI (Kirchhoff Institute for Physics, Heidelberg University) The CMS Level-1 Calorimeter Trigger Upgrade for LHC Run II 18m An upgrade of the CMS Level-1 calorimeter trigger has been completed and fully commissioned, and was used by CMS to collect data starting with the 2016 run. The new trigger has been designed to improve performance at high luminosity and a large number of simultaneous inelastic collisions per crossing (pile-up). For this purpose it uses a novel design, the Time Multiplexed (TM) design, which enables all the data from an event to be processed by a single trigger processor at full granularity over several bunch crossings. The TM design is a modular design based on the uTCA standard. The trigger processors are instrumented with Xilinx Virtex-7 690 FPGAs and 10 Gbps optical links. The TM architecture is flexible and the number of trigger processors can be expanded according to the physics needs of CMS. Intelligent, sophisticated and innovative algorithms are now the core of the first decision layer of CMS: the upgraded trigger system implements pattern recognition and MVA (Boosted Decision Tree) regression techniques in the trigger processors for momentum assignment, pile-up subtraction, and isolation requirements for electrons and tau leptons. The resolution of the jet pseudorapidity and azimuthal angle has dramatically improved, allowing the implementation of di-jet mass triggers. The performance of the TM design and latency measurements are presented, alongside algorithm performance measured using the 2016 data and a summary of the running experience from 2016. Speaker: Alessandro Thea (R) The ATLAS Muon-to-Central Trigger Processor Interface (MUCTPI) Upgrade 18m The Muon-to-Central Trigger Processor Interface (MUCTPI) is part of the Level-1 trigger system of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. We will describe an upgrade of the MUCTPI which will use optical inputs and provide full-precision region-of-interest information on muon candidates to the topological trigger processor of the Level-1 trigger system. The new MUCTPI will be implemented as an ATCA blade receiving 208 optical serial links from the ATLAS muon trigger detectors. Two high-end processing FPGAs will eliminate double counting of identical muon candidates in overlapping regions and send candidate information to the topological trigger. A third FPGA will combine the candidate information, send muon multiplicities to the Central Trigger Processor (CTP) and provide readout data to the ATLAS data acquisition system. A System-on-Chip (SoC) module will provide communication with the ATLAS run control system for control, configuration and monitoring of the new MUCTPI. Speaker: Spiwoks Ralf (Rutherford Appleton Laboratory) Conveners: K.K. Gan (The Ohio State University) , Prof. Kazuhiko Hara (University of Tsukuba) Belle-II Silicon Vertex Detector 18m The Belle II experiment at the SuperKEKB collider in Japan will operate at an unprecedented luminosity of $8\times10^{35}$ cm$^{-2}$ s$^{-1}$, about 40 times larger than that of its predecessor, Belle. Its vertex detector is composed of a two-layer DEPFET pixel detector (PXD) and a four-layer double-sided silicon microstrip detector (SVD). To achieve a precise decay-vertex position determination and excellent low-momentum tracking under harsh background conditions and a high trigger rate of 10 kHz, the SVD employs several innovative techniques.
In order to minimise the parasitic capacitance in the signal path, 1748 APV25 ASIC chips, which read out the signals from 224k strip channels, are directly mounted on the modules following the novel Origami concept. The analog signals from the APV25 are digitised by a flash ADC system and sent to the central DAQ as well as to an online tracking system based on SVD hits, which provides regions of interest to the PXD in order to reduce the latter's data size and achieve the required bandwidth and data storage space. Furthermore, the state-of-the-art dual-phase CO$_2$ cooling solution has been chosen for a combined thermal management of the PXD and SVD systems. In this talk, we present the key design principles, module construction and integration status of the Belle II SVD. Speaker: Bahinipati Seema (TIFR Mumbai) Silicon Tracker for the J-PARC muon g-2/EDM experiment 18m The J-PARC muon g-2/EDM experiment is a planned experiment to measure the anomalous magnetic moment (g-2) and the electric dipole moment (EDM) of muons. In contrast to the experiment at Fermilab, which uses the "magic momentum" (~3.09 GeV/c) of muons to eliminate the electric-field-dependent term in the spin precession frequency, our experiment uses an ultra-cold slow muon beam which requires no focusing electric field and is therefore free from that term. Since the beam is slow (~300 MeV/c), we can store the muons in a relatively small magnetic bottle, equipped with a positron tracker to observe the muon decays very precisely. Since the two experiments have different sources of systematic effects, we can complementarily probe the g-2 deviation from the Standard Model, which may lead to confirmation of the effects of new physics. The positron tracker consists of 48 vanes (96 sides) of detector layers. Each vane consists of 2 × 8 silicon strip sensors, incorporated with a flexible printed circuit (FPC) and embedded front-end electronics, together with their cooling system and support structure. The design of the sensor has been finalized and mass production is underway. We have also developed a dedicated front-end ASIC, called SliT128A, directly wire-bonded to the circuit. In this talk, the design and the status of the preparation of the tracker system are presented, including the characterization of the sensors, the operation of the SliT128A with the sensors wire-bonded, and the FPC development. Considerations on the backend electronics, the DAQ strategy and the reconstruction software are also presented. Speaker: Taikan Suehara (Kyushu University) Modules and Front-End Electronics Developments for the ATLAS ITk Strips Upgrade 18m The ATLAS experiment is currently preparing for an upgrade of the tracking system in the course of the High Luminosity LHC, scheduled for 2024. The existing Inner Detector will be replaced by an all-silicon Inner Tracker (ITk) with a pixel detector surrounded by a strip detector. The ITk strip detector consists of a four-layer barrel and a forward region composed of six discs on each side of the barrel. The basic unit of the detector is the silicon-strip module, consisting of a sensor and one or more hybrid circuits that hold the read-out electronics. The geometries of the barrel and end-cap modules take into account the regions that they have to cover. In the central region, the detectors are rectangular with straight strips, whereas in the forward region the modules require wedge-shaped sensors with varying strip length and pitch.
The current prototyping phase has resulted in the ITk Strip Detector Technical Design Report (TDR), which kicks off the pre-production readiness phase at the involved institutes. In this contribution we present the current status of the R&D on the ITk Strip Detector modules and read-out electronics. Speaker: Garcia-Argos Carlos (C) Staves and Petals: Multi-module local support structures of the ATLAS ITk Strips Upgrade 18m The ATLAS Inner Tracker (ITk) is an all-silicon tracker that will replace the existing inner detector at the Phase-II Upgrade of ATLAS. The outermost part of the tracker is the strips tracker, in which the sensor elements are silicon micro-strip sensors with strip lengths varying from 1.7 cm up to 10 cm. The current design, at the moment under internal review in the Strips part of the Technical Design Report (TDR), envisions a four-layer barrel and two six-disk endcap regions. The sensor and readout units ("modules") are directly glued onto multi-module, low-mass, high-thermal-performance carbon fiber structures, called "staves" for the barrel and "petals" for the endcap. They provide cooling, power, data and control lines to the modules with a minimal amount of external services. An extensive prototyping program was put in place over the last years to fully characterize these structures mechanically, thermally, and electrically. Thermo-mechanical stave and petal prototypes have recently been built and are currently under intensive study. This contribution will focus on describing the stave and petal structures and the prototyping work carried out so far. In addition, some details of the work carried out on the global supports which will hold the staves and petals in place will also be presented. Speaker: Carlos Garcia-Argos (University of Freiburg) The Silicon Micro-strip Upstream Tracker for the LHCb Upgrade 18m A comprehensive upgrade of the LHCb detector is foreseen for the long shutdown of the LHC in 2019/20 (LSII). The upgrade has two main goals: enabling the experiment to operate at an up to five times higher instantaneous luminosity and increasing trigger efficiencies by substituting the current hardware trigger with a software one. As part of the upgrade, the existing TT tracking station in front of the LHCb dipole magnet will be replaced by a new silicon micro-strip detector, the Upstream Tracker (UT). Similar to the TT, the UT will consist of four planar detection layers covering the full acceptance of the experiment. In total, the detector will use about 1000 silicon sensors and 5000 ASICs. Sensor R&D concentrates on three advanced features that are being considered: a quadrantile cut-out for the innermost sensors to optimize the detector coverage around the LHC beam pipe, an embedded pitch adapter implemented as a double metal layer, and strip-side contacts for connecting the bias voltage through the silicon bulk to the backplane. A new radiation-hard front-end readout chip for the UT is being developed in 130 nm TSMC technology. It incorporates 128 input channels with the complete DAQ chain integrated: preamplifier, shaper and a 6-bit ADC, pedestal and common-mode subtraction, and zero-suppression as well as data serialization (a schematic sketch of this digital chain is given below). Measurements on a full-featured prototype chip are well advanced and results from these tests will be shown. Detector modules host 4 or 8 ASICs and are mounted onto the front and back of 130 cm long staves that cover the full height of the detector acceptance.
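As an aside on the digital chain integrated in the UT front-end ASIC described above, the following minimal C++ sketch illustrates pedestal subtraction, common-mode subtraction and zero-suppression for one 128-channel event; the median-based common-mode estimate, the threshold and all names are assumptions made for illustration and are not taken from the actual chip design.

#include <algorithm>
#include <array>
#include <cstddef>
#include <utility>
#include <vector>

constexpr std::size_t kChannels = 128;

// Process one event: returns the zero-suppressed list of (channel, value) hits.
std::vector<std::pair<std::size_t, int>> processEvent(
        const std::array<int, kChannels>& adc,       // raw ADC samples (6-bit in the real chip)
        const std::array<int, kChannels>& pedestal,  // per-channel pedestals
        int zsThreshold)                             // zero-suppression cut
{
    // 1) Pedestal subtraction.
    std::array<int, kChannels> sig{};
    for (std::size_t i = 0; i < kChannels; ++i)
        sig[i] = adc[i] - pedestal[i];

    // 2) Common-mode subtraction: estimate the coherent shift of the event with the median.
    std::array<int, kChannels> sorted = sig;
    std::nth_element(sorted.begin(), sorted.begin() + kChannels / 2, sorted.end());
    const int commonMode = sorted[kChannels / 2];

    // 3) Zero-suppression: keep only channels above threshold after the corrections.
    std::vector<std::pair<std::size_t, int>> hits;
    for (std::size_t i = 0; i < kChannels; ++i) {
        const int v = sig[i] - commonMode;
        if (v > zsThreshold) hits.emplace_back(i, v);
    }
    return hits;  // in the real ASIC this list would then be serialized for readout
}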
The staves consist of light-weight foam embedded between two sheets of carbon fibre. The cooling of the silicon sensors and the front-end chips is done via embedded titanium cooling pipes, through which bi-phase CO2 is circulated as an innovative coolant. The progress of this system will also be discussed. Output signals and control signals, low-voltage power for the front-end chips and bias voltage for the silicon sensors are transported along the staves via kapton flex cables that are glued onto both sides of the stave. Each of these cables carries up to 120 high-speed differential pairs with a total of 38.4 Gbps, and 8 A of current to power up to 24 ASICs, while maintaining a minimal material budget and being easy to manufacture. The design solutions and results of prototype iterations will be presented. The detector design presents several practical challenges. These include a retractable detector frame, a light-weight detector box that will seal directly around the LHC beam pipe, and custom-made electronics for signal processing and detector control that are mounted against the detector frame, as well as practical implementation issues. These will be discussed too. Speaker: Carlos Abellan (U) LUNCH (Bento box) 1h 30m Corridor on the third floor Conveners: Adi Bornheim (Caltech) , Burak Bilki (U) High granularity digital Si-W electromagnetic calorimeter for forward direct photon measurements at LHC 18m It is widely expected that the growth of parton densities at low x predicted by linear QCD evolution will be tamed by non-linear effects, leading to gluon saturation. As a decisive probe of gluon saturation, the measurement of forward (3.5 < y < 5) direct photons in a new region of low x ($10^{−5}$ ∼ $10^{−6}$) in proton-nucleus collisions at the LHC is proposed. An extremely high-granularity electromagnetic calorimeter is proposed as a detector upgrade to the ALICE experiment. This Forward Calorimeter (FoCal) is required to discriminate direct photons from decay photons from neutral pions, which have a very small opening angle. To facilitate the design of the upgrade and to perform the generic R&D necessary for such a novel calorimeter, a compact digital Si/W sampling electromagnetic calorimeter prototype using Monolithic Active Pixel Sensors (MAPS) with a granularity of 30 × 30 μm and a depth of 28 $X_{0}$ has been built and tested with beams. The test beam results have shown good energy linearity and a very small Molière radius (~11 mm). We will discuss new results of the R&D with electromagnetic showers, in particular a position resolution of better than 30 μm. This precise position determination and the detailed knowledge of the electromagnetic shower shape obtained will provide the crucial capability for two-photon separation down to a few mm. The results also show a successful proof of principle of particle-counting calorimetry technology for future calorimeter development. Speaker: Hongkai Wang (Utrecht University) Construction and first beam-tests of silicon-tungsten prototype modules for the CMS High Granularity Calorimeter for HL-LHC 18m The High Granularity Calorimeter (HGCAL) is the technology choice of the CMS collaboration for the endcap calorimetry upgrade planned to cope with the harsh radiation and pileup environment at the High Luminosity LHC. The HGCAL is realized as a sampling calorimeter, including an electromagnetic compartment comprising 28 layers of silicon pad detectors with pad areas of 0.5 to 1.0 cm^2 interspersed with absorbers.
Prototype modules, based on hexagonal silicon pad sensors with 128 channels, have been constructed and tested in beams at FNAL and at CERN. The modules include many of the features required for this challenging detector, including a PCB glued directly to the sensor, using through-hole wire-bonding for signal readout, and ~5 mm spacing between layers, including the front-end electronics and all services. Tests in 2016 used an existing front-end chip, Skiroc2 (designed for the CALICE experiment for the ILC). We present results from first tests of these modules both in the laboratory and with beams of electrons, pions and protons, including noise performance, calibration with MIPs, electron energy resolution and precision-timing measurements. Speaker: Dr Francesco Romeo (IHEP of Beijing) A Si-PAD and Tungsten based electromagnetic calorimeter for the forward direct photon measurement at LHC 18m In central heavy-ion collisions at very high energy, such as at the LHC at CERN, one can create matter of high energy density and high temperature in which quarks and gluons can move freely beyond the boundaries of hadrons, called the Quark Gluon Plasma (QGP). One of the unanswered questions concerning the creation process of the QGP is the initial state of the nucleons. According to QCD, the gluon density in the small-x region ($x = 10^{-3}\sim 10^{-5}$) saturates, and such a state, referred to as the Color Glass Condensate (CGC), is considered to be an initial condition of heavy-ion collisions. Despite extensive experimental studies, there is no clear evidence of the creation of the CGC so far. By measuring direct photons in the forward direction, one can access the CGC picture more clearly than with hadrons, and obtain a clear picture of the initial conditions of heavy-ion collisions at high energies. In the ALICE experiment at the LHC, there is an upgrade plan to construct a Forward Calorimeter (FoCal). The FoCal-E is the electromagnetic calorimeter of FoCal for the direct photon measurement at the LHC at small x, covering $3.3<\eta<5.3$. FoCal-E consists of low granularity layers (LGL) and high granularity layers (HGL). An LGL module is composed of tungsten layers and silicon PAD (Photo Avalanche Diode) layers, each with 8×8 PADs ($1\times 1\;\mathrm{cm}^2$ per PAD); it measures the energy of electromagnetic showers. An HGL module is composed of MAPS (Monolithic Active Pixel Sensors, $30\times 30\;\mathrm{\mu m}^2$ per pixel) layers, which have the high position resolution needed to discriminate between decay photons and direct photons. In this presentation, we discuss the results on the LGL from the 2015/2016 test beam experiments at the CERN PS and SPS. The energy resolution, linearity, and shower profiles are shown and compared to simulation results. We also show the performance of the integrated system, i.e. the combined LGL and HGL detectors, as a straw-man design of the FoCal-E prototype, from the 2016 test beam data. Speaker: Yota Kawamura (U) Software compensation and particle flow 18m The Particle Flow approach to calorimetry requires highly granular calorimeters and sophisticated software in order to reconstruct and identify individual particles in complex event topologies. Within the CALICE collaboration, several concepts for highly granular calorimeters are studied. The Analog Hadron Calorimeter (AHCAL) concept is a sampling calorimeter of tungsten or steel absorber plates and plastic scintillator tiles read out by silicon photomultipliers (SiPMs) as active material.
The high calorimeter granularity also provides discrimination of the electromagnetic sub-showers within hadron showers. This discrimination can be utilised in an offline weighting scheme, the so-called software compensation technique, to improve the energy resolution for single particles. This presentation will discuss results obtained with the AHCAL physics prototype in several test beam campaigns with steel and tungsten absorbers. It will concentrate on software compensation, its implications for the detector design, as well as the use of software compensation techniques in the PandoraPFA particle flow algorithm. Speaker: Boruo Xu (D) Development of High Precision Polarimeter for the charged particle EDM Experiment 18m The JEDI (Jülich Electric Dipole moment Investigations) collaboration performs a set of experiments at the COSY storage ring in Jülich, within the R&D phase of the search for the Electric Dipole Moments (EDM) of charged particles. A measurement of the proton and deuteron EDMs is a sensitive probe of yet unknown CP violation. The method of the charged-particle EDM search will exploit stored polarized beams and observe a minuscule rotation of the polarization axis as a function of time due to the interaction of a finite EDM with large electric fields. The key challenge is the provision of a sensitive and efficient method to determine the tiny change of the beam polarization. The elastic scattering of the polarized beam particles off a target with the highest analyzing power will provide the polarimetry reaction. The current status of dedicated high-precision polarimeter concepts will be reviewed. To fulfill the specifications, a fast, dense, novel scintillating material with high energy and time resolution and high radiation hardness has been chosen. LYSO crystals are considered to be the best candidate for this type of detector system. We have also designed a new kind of LYSO module coupled to silicon PMs, a very compact modular system, which is currently under intensive testing. In this contribution, results from recent experiments with deuteron and proton (polarized and unpolarized) beams at five different energies up to 300 MeV will be presented. Speaker: Irakli Keshelashvili (Forschungszentrum Jülich Gmbh Central Institute of Engineering Electronics and Analytics ZEA-2 - Electronic Systems) R2-Gaseous detectors(1) Room 305C Conveners: Antonio Amoroso (University of Turin and INFN) , Bo Yu (Brookhaven National Lab) Innovative design and construction technique for the Cylindrical GEM detector for the BESIII experiment 18m Speaker: Antonio Amoroso (University of Turin and INFN) A Cylindrical GEM Inner Tracker for the BESIII experiment at IHEP 18m The Beijing Electron Spectrometer III (BESIII) is a multi-purpose detector that collects data from collisions in the Beijing Electron Positron Collider II (BEPCII), hosted at the Institute of High Energy Physics in Beijing. Since the start of its operation, BESIII has collected the world's largest samples of J/psi and psi(2S). Due to the unprecedented luminosity of BEPCII, the innermost part of the Multilayer Drift Chamber (MDC) is showing aging effects. A replacement based on the new technology of Cylindrical Gas Electron Multipliers (CGEM) has been proposed.
The CGEM-IT project will deploy several new features and innovations with respect to other state-of-the-art GEM detectors: the µTPC and analog readout, with time and charge measurements, will allow the 130 µm spatial resolution in a 1 T magnetic field requested by the BESIII collaboration to be reached; Rohacell, a PMI foam, will give solidity to the cathode and anode with very low impact on the material budget; and the jagged anode will reduce the interstrip capacitance. In this presentation, an update on the status of the project will be given, with a particular focus on the results obtained with planar and cylindrical prototypes using cosmic rays and test beam data. These results are beyond the state of the art for GEM technology in a magnetic field. The CGEM-IT project has been funded by the European Commission within the action H2020-RISE-MSCA-2014. Speaker: Mr Riccardo Farinelli (INFN Sezione di Ferrara) Upgrade of the CMS Muon Spectrometer in the forward region with the GEM technology 18m The Large Hadron Collider (LHC) will be upgraded in several phases that will allow its physics program to be significantly expanded. After the expected long shutdown in 2018 (LS2), the accelerator luminosity will be increased to 2-3 × 10^34 cm^-2 s^-1, exceeding the design value of 1 × 10^34 cm^-2 s^-1 and allowing the CMS experiment to collect approximately 100 fb^-1/year. A subsequent upgrade in 2022-23 will increase the luminosity up to 5 × 10^34 cm^-2 s^-1. The CMS muon system must be able to sustain a physics program after the LS2 shutdown that maintains sensitivity for electroweak-scale physics and for TeV-scale searches similar to what has been achieved up to now. To cope with the corresponding increase in background rates and trigger requirements, the installation of additional sets of muon detectors, referred to as GE1/1, GE2/1 and ME0, that use Gas Electron Multiplier (GEM) technology has been planned. While the installation of the GE1/1 chambers has already been approved and scheduled for 2019/20, the GE2/1 and ME0 projects are now in the final phase of review. We present an overview of the Muon Spectrometer upgrade using the GEM technology, the details of the ongoing GE1/1 chamber production with the first results of the quality assurance tests performed on such chambers, as well as the design and the technical solutions adopted for the foreseen GE2/1 and ME0 chambers. Speaker: Michele Bianco (C) Performance of Triple GEM Detector in X-Rays and Beta Particles Imaging 18m A triple gas electron multiplier (GEM) detector with an active area of 10×10 cm² was constructed and tested for x-ray imaging using a 256-channel 2D x-y strip readout. The study includes optimization of the GEM operating high voltage, the x-ray tube distance, the x-ray tube high voltage, the best x-ray filter, and the best Ar/CO2 gas mixture ratio. A 90Sr beta source was also used. The results of the study show a good capability of the GEM detector for 2D x-ray imaging and beta particle tracking. Speaker: Mr Mohammad AlAnazi (King Abdulaziz City for Science and Technology (KACST)) An improved self-stretching GEM assembly technique 18m We have improved the self-stretching GEM assembly technique that was initially developed at CERN for the CMS GEM upgrade project. With this improved technique, we can build GEM detectors at a scale of >1 m that still preserve very good gain uniformity. The technique results in high-quality stretching of GEM foils and good gas tightness in GEM detectors.
This report presents details of the improved self-stretching technique for large-size GEM assembly and some test results from large-size GEM prototypes built with this technique. Speaker: Prof. Jianbei Liu (University of Science and Technology of China) A multi-chip data acquisition system based on a heterogeneous system-on-chip platform. 18m The development of pixel detectors for future high-energy physics experiments requires flexible high-performance readout systems supporting a wide range of current and future device generations. The versatile readout system of the Control and Readout Inner tracking Board (CaRIBou) targets laboratory and high-rate test-beam measurements with a multitude of detector prototypes. Under the project umbrella, application-specific chipboards and a common interface card have been developed for a variety of pixel detector readout ASICs and active sensors. The boards are hosted by a commercial evaluation kit (ZC706). This talk focuses on the data acquisition system (DAQ) based on a heterogeneous Xilinx Zynq All Programmable System-on-Chip (AP SoC). The device integrates the software programmability of an ARM-based processor with the hardware programmability of an FPGA, enabling acceleration of the design, verification, test and commissioning processes. The CPU handles the slow control of the system, while the FPGA fabric performs data processing and data encapsulation in UDP datagrams, which are moved by a Direct Memory Access (DMA) device through the High Performance Advanced Extensible Interface (AXI) port directly to the shared Random Access Memory (RAM). Further, the data in RAM are accessed by the CPU for prompt analysis (data-quality monitoring, calibration, etc.) or are eventually transferred to a storage server over the Ethernet link using a standard Linux network stack and the DMA. Thanks to the fully capable dual-core processor running a Linux operating system, the DAQ board provides the unique user experience of a regular, fully functional remote terminal able to execute high-level code (such as Python scripts). Moreover, as the code runs locally on the CPU, integrated directly or indirectly (through the FPGA fabric) with the given ASIC, operations involving high input/output (I/O) activity (e.g. chip equalization) are not affected by network delays. The logic modules implemented in the FPGA fabric are available to the end user through the open-source Linux device drivers maintained by the Xilinx community. In order to facilitate the creation of an embedded Linux distribution, CaRIBou provides a layer for the Yocto build framework, which is supported by a large community of open-source and industrial developers. The talk presents the design of the SoC-based DAQ system and its building blocks, and shows examples of the achieved functionality and performance for the CLICpix2 readout ASIC and the C3PD active CMOS sensor. Speaker: Adrian Fiergolski (CERN) Upgrade of the ATLAS Monitored Drift Tube Electronics for the HL-LHC 18m To cope with the large amount of data and the high event rate expected from the planned High-Luminosity LHC (HL-LHC) upgrade, the ATLAS monitored drift tube (MDT) readout electronics will be replaced. In addition, the MDT detector will be used at the first-level trigger to improve the muon transverse momentum resolution and reduce the trigger rate. A new trigger and readout system has been proposed.
Prototypes for two front-end ASICs and a data transmission board have been designed and tested, detailed simulation of the trigger latency has been performed, and segment-finding and track-fitting algorithms have been developed. We will present the overall design of the trigger and readout system and show the latest results from various prototype studies. Speaker: Junjie Zhu (University of Michigan) Challenges and performance of frontier technology applied to an ATLAS Phase-I calorimeter trigger board dedicated to jet identification 18m The 'Phase-I' upgrade of the Large Hadron Collider (LHC), scheduled to be completed in 2021, will lead to an enhanced collision luminosity of 2.5×10^34 cm^-2 s^-1. To cope with the new and challenging accelerator conditions, all the CERN experiments have planned major detector upgrades to be installed during the associated experimental shutdown period. One of the physics goals of the ATLAS experiment is to maintain sensitivity to electroweak processes despite the increased number of interactions per LHC bunch crossing. To this end, the component of the first-level hardware trigger based on calorimeter data will be upgraded to exploit fine-granularity readout using a new system of Feature EXtractors (FEXs), each of which uses different physics objects for trigger selection. There will be three FEX systems in total, with this contribution focusing on the first prototype of the jet FEX (jFEX). This system identifies jets and large-area tau candidates while also calculating global variables such as transverse energy sums and missing transverse energy. The jFEX prototype is characterised by four large Xilinx Ultrascale Field Programmable Gate Arrays (FPGAs), XCVU190FLGA2577, so far the largest available on the market, capable of handling a data volume of more than 3 TB/s of input bandwidth. The choice of such large devices was driven by the requirement for large input bandwidth and processing power. This comes from the need to exploit high-granularity calorimeter information and also to run several jet identification algorithms within the latency budget of a few hundred nanoseconds (~350 ns). This presentation will report on the hardware design challenges and the solutions adopted to preserve signal integrity within a densely populated, high-signal-speed ATCA board. The parallel simulation activity that supported and validated the board design will also be presented. Particular emphasis will be given to the effects of the large FPGA power consumption on the boards. This was assessed via dedicated thermal simulation and cross-checked with a campaign of measurements. Preliminary results will also be presented from tests both at CERN and at Mainz, based on the first jFEX prototype from December 2016. Speaker: Christian Kahra RDMA optimizations on top of 100 Gbps Ethernet for the upgraded data acquisition system of LHCb 18m The LHCb experiment will be upgraded in 2018-2019 to change its operation to a triggerless full-software readout scheme from Run 3. This increases the load on the event building and filtering farm by a factor of 40. The farm will need to be able to handle the full 40 MHz rate of particle collisions. The network of the data acquisition system faces a target speed of 40 Tb/s, aggregated by 500 nodes. This requires links capable of delivering data at speeds of at least 100 Gbps per direction. Three solutions are being evaluated: Intel® Omni-Path Architecture, 100G Ethernet and EDR InfiniBand.
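(As a back-of-the-envelope check, not a figure quoted in the contribution: a total of $40\,\mathrm{Tb/s}$ aggregated by 500 nodes corresponds on average to $40\,\mathrm{Tb/s}/500 = 80\,\mathrm{Gb/s}$ per node and direction, which is why only link technologies in the 100 Gbps class leave adequate headroom.)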
Intel® OPA and EDR IB run via Remote Direct Memory Access (RDMA). Ethernet uses TCP/IP or UDP/IP by default, which involves a significant CPU load. However, there are solutions to implement RDMA-enabled data transfer via Ethernet as well. These technologies are called RoCE (RDMA over Converged Ethernet) and iWARP. We present first measurements with such technologies on 100 Gbps equipment with respect to the data acquisition use case. Speaker: Mr Balazs Voneki (CERN) Data transmission system for 2D-SND at CSNS 18m The China Spallation Neutron Source (CSNS) is the first high-performance pulsed neutron source in China, which will meet the increasing demands for fundamental research and technical applications both domestically and overseas. The scintillator neutron detector (2D-SND) is the detector of the General Purpose Powder Diffractometer (GPPD) at CSNS. It consists of 36 banks, each with 192 channels, and is planned to go into service in 2018. At present, the 2D-SND has been built and the related systems, including the electronics system, the data acquisition (DAQ) system, the data transmission system and the data analysis system, have essentially been constructed. The electronics system is used to acquire signals from the detector, amplify them, convert them to digital data, build events and finally send the events to the DAQ system. It consists of 36 modules corresponding to the 36 banks of the detector, and every module consists of 192 electronic channels corresponding to the 192 channels of the associated bank. Every electronic channel acquires and processes signals and sends events independently. The electronics system is based on SiTCP for sending events. The DAQ system is used to read events from the electronics system and save them in local files and on a Network File System (NFS). The user interface of the DAQ system is based on Qt, and its underlying program uses multithreading to read events from each electronic module and save them to separate files. The data analysis system is used to receive events from the data transmission system, reconstruct the events in the NeXus format, analyze the reconstructed events and display the results in the form of charts. The analysis and display functions of the data analysis system are implemented in Python. The event receiving and reconstruction functions are implemented in a C++ Dynamic Link Library (DLL), which is called from the Python program. The C++ DLL is multithreaded, with three threads: one receives events, the second performs the reconstruction, and the third sends the reconstructed events to the back-end analysis and display program. The data transmission system is used to get events from the DAQ system, select good events and send these good events to the data analysis system. The whole system is written in C, which, as a procedural language, offers an easy way to process data with flexibility and high efficiency. The data transmission program is multithreaded in C: the events of each electronic module are processed independently in a dedicated thread. This processing includes reading the module's file to get its events, selecting these events and sending the selected good events. Multithreading makes the multi-tasking and parallel processing of the system more efficient (a minimal sketch of this threading pattern is given below).
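The following is a minimal C++ sketch of the per-module threading pattern just described, together with the mutex-protected public buffer discussed next; the real system is written in C, and the event structure, function names and placeholder bodies here are invented purely for illustration.

#include <mutex>
#include <thread>
#include <vector>

struct Event { int moduleId; std::vector<char> payload; };  // illustrative event record

std::vector<Event> publicBuffer;  // shared buffer collecting the selected ("good") events
std::mutex bufferMutex;           // mutex lock guaranteeing each stored event is complete

bool isGoodEvent(const Event&) { return true; }                    // placeholder selection
std::vector<Event> readModuleFile(int /*moduleId*/) { return {}; } // placeholder file reader

// One worker thread per electronic module: read its file, select events, store them.
void processModule(int moduleId) {
    for (const Event& ev : readModuleFile(moduleId)) {
        if (!isGoodEvent(ev)) continue;
        std::lock_guard<std::mutex> lock(bufferMutex);  // mutually exclusive access
        publicBuffer.push_back(ev);
    }
}

int main() {
    std::vector<std::thread> workers;
    for (int m = 0; m < 36; ++m)  // one thread per module/bank, as in the text
        workers.emplace_back(processModule, m);
    for (auto& t : workers) t.join();
    // publicBuffer now holds the good events of all modules, ready to be sent
    // together to the data analysis system (e.g. via a DIM server).
}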
For the convenience of event reconstruction in the data analysis system, a public buffer is used in the data transmission system to collect the good events from each electronic module; it is accessed by every processing thread so that the events of all electronic modules are sent together to the data analysis system. The public buffer offers a simple environment for resource sharing and integration. A mutex lock implements mutually exclusive access to the shared resource. The use of the mutex lock together with the public buffer ensures that every event stored in the public buffer is complete and correct. The interface between the DAQ system and the data transmission system is based on NFS, which provides a shared environment for the two systems: the DAQ system saves the events destined for the data transmission system on NFS, and the data transmission system reads the events from NFS. The Distributed Information Management system (DIM), developed by the European Organization for Nuclear Research (CERN), is adopted as the interface between the data transmission system and the data analysis system. It provides a method to realize loose coupling while being very efficient in data transmission. More concretely, the mechanism of DIM is based on a client/server pattern: the DIM server, located in the data transmission system, is in charge of sending data, while the DIM client, located in the data analysis system, is in charge of receiving data. The DIM server can keep sending data whether the DIM client is active or dead, which realizes the loose coupling. The data transmission system, together with the 2D-SND and the other related systems, has been applied successfully in a neutron beam experiment. With its common framework, it can easily be expanded and improved to fit applications of other detectors at CSNS or elsewhere. In later development, the interface between the DAQ system and the data transmission system should be improved to achieve real-time data transmission. Speaker: Ms Dongxu Zhao (China Spallation Neutron Source) The LHCb Vertex Locator Upgrade 18m The Large Hadron Collider Beauty detector is a flavour physics detector, designed to detect decays of b- and c-hadrons for the study of CP violation and rare decays. At the end of Run II, many of the LHCb measurements will remain statistically dominated. In order to increase the trigger yield for purely hadronic channels, the hardware trigger will be removed and the detector will operate at 40 MHz. This, in combination with the five-fold increase in luminosity, necessitates radical changes to LHCb's electronics, with entire subdetector replacements required in some cases. The Vertex Locator (VELO) surrounding the interaction region is used to reconstruct the collision points (primary vertices) and the decay vertices of long-lived particles (secondary vertices). The upgraded VELO modules will each be equipped with 4 silicon hybrid pixel tiles, each read out by 3 VeloPix ASICs. The highest-occupancy ASICs will have pixel hit rates of 900 Mhit/s and produce an output data rate of over 15 Gbit/s, with a total rate of 1.6 Tbit/s anticipated for the whole detector. The VELO upgrade modules are composed of the detector assemblies and electronics hybrid circuits mounted onto a cooling substrate. The modules are located in vacuum, separated from the beam vacuum by a thin custom-made foil. The foil will be manufactured through a novel milling process and possibly thinned further by chemical etching. The front-end hybrid hosts the VeloPix ASICs and a GBTx ASIC for control and communication.
The hybrid is linked to the opto- and power board (OPB) by 60 cm electrical data tapes running at 5 Gb/s. The tapes must be vacuum compatible and radiation hard and are required to have enough flexibility to allow the VELO to retract during LHC beam injection. The OPB is situated immediately outside the VELO vacuum tank and performs the opto-electrical conversion of control signals going to the front-end and of serial data going off-detector. The board is designed around the Versatile Link components developed for high-luminosity LHC applications. From the OPB the detector data are sent through 300 m of optical fibre to LHCb's common readout board (PCIe40). The PCIe40 is an Altera Arria10-based PCI-Express control and readout card capable of 100 Gb/s data throughput. The PCIe40 firmware is designed as a series of common components with the option for user-specific data processing. The common components deal with accepting the input data from the detector over the GBT protocol, error checking, handling reset signals, and preparing the data for the computing farm. The VELO-specific code would, for example, perform clustering of hits and time reordering of the events scrambled during the readout. An additional challenge is the non-uniform nature of the radiation damage, which results in requiring a guard ring design with excellent high voltage control. In addition, the n-in-p design requires the guard ring to be on the chip side, making the high voltage reach the vicinity of the ground plane (about 30 $\mu$m apart). This requires a high-voltage-tolerant setup for irradiated assemblies, which can be achieved using a vacuum chamber. The performance of the prototype sensors has been investigated in a test beam in which a dedicated telescope system, read out by Timepix3 ASICs, was created. Several different tests of the sensor prototypes were performed before and after irradiation. A collection of preliminary results will be presented, as well as a comparison of the performance of the different sensor prototypes. The design of the complete VELO upgrade system will be presented with the latest results from the R&D. The LHCb upgrade detector will be the first detector to read out at the full LHC rate of 40 MHz. The VELO upgrade will utilise the latest detector technologies to read out at this rate while maintaining the necessary radiation-hard profile and minimising the detector material. 3D diamond detectors for tracking and dosimetry 18m Advances in the laser-assisted transformation of diamond into amorphous carbon have enabled the production of a new type of particle detector: 3D diamond. When compared to conventional planar technologies, previous work has proven that a 3D geometry improves the radiation tolerance of detectors fabricated in silicon. This work demonstrates the same principle in diamond, with the aim of producing an accurate particle detector tolerant to extreme radiation fields. We present the latest fabrication methods, including the use of a spatial light modulator to produce a 3D array of ~1 µm diameter low-resistivity electrodes, and discuss the fabrication of several devices in both single-crystal and polycrystalline CVD diamond. In order to optimise the 3D geometry, devices were fabricated with various cell geometries, and measurements were obtained from various beams, all of which shall be presented.
Outside the field of high-energy particle physics, a potential application of this technology is medical dosimetry, where the high resilience to radiation damage, operation at low bias voltage with a well-defined active volume, and high compatibility with human tissue make its use desirable. We shall present results obtained with 3D diamond detectors for dosimetry applications. Speaker: Dr Iain Haughton (The University of Manchester) Radiation Monitoring with Diamond Sensors for the Belle-II Vertex Detector 18m The Belle II detector is currently under construction at the SuperKEKB electron-positron high-luminosity collider, which will provide an instantaneous luminosity 40 times higher than that of KEKB. The Belle II VerteX Detector (VXD) will therefore operate in a very harsh environment. A radiation monitoring and beam abort system is needed to safely operate the VXD detector in these conditions. The Belle II radiation monitoring system will be based on 20 single-crystal diamond sensors placed in 20 key positions in the vicinity of the interaction region. In this contribution we describe the system design and present the procedures followed for the characterisation and calibration of the diamond sensors. We also discuss the performance of the prototype system during the first SuperKEKB commissioning phase in February-June 2016. Speaker: Chiara La Licata (for the BEAST II Collaboration) POSTER & Break 1h Corridor on the third floor R1-Particle identification(1) Room 305A Conveners: Miroslav Gabriel (Max Planck Institute for Physics) , Prof. Wang Yi (Tsinghua University) Assembly of a Silica Aerogel Radiator Module for the Belle II ARICH System 18m We have been developing the ARICH detector for identifying charged $\pi$ and $K$ mesons in a super-B factory experiment (Belle II) to be performed at the High Energy Accelerator Research Organization (KEK), Japan. The ARICH detector is a ring-imaging Cherenkov counter that uses silica aerogel as a radiator and hybrid avalanche photo-detectors as position-sensitive photo-sensors, installed at the endcap of the Belle II spectrometer. The particle identification performance of the ARICH detector is essentially determined by the Cherenkov angular resolution and the number of detected photoelectrons. At momenta below 4 GeV/$c$, to achieve high angular resolution, the refractive index of the aerogel must be approximately 1.05. A scheme for focusing the propagation paths of the emitted Cherenkov photons onto the photo-detectors is introduced by using multiple layers of aerogel tiles with different refractive indices. To increase the number of detected photoelectrons, the aerogel is required to be highly transparent. The support module holding the aerogel tiles has a cylindrical shape with a diameter of approximately 2.3 m. It is important to reduce the adjacent boundaries between the aerogel tiles, where particles cannot be clearly identified. Accordingly, larger, crack-free aerogel tiles are preferred. The tiles are trimmed with a water jet cutter for installation in the module, and to avoid optical degradation of the aerogel by moisture adsorption during long-term experiments, the aerogel should ultimately be highly hydrophobic.
By 2013, our group established a method for producing, with high yield, large-area aerogel tiles (18 cm $\times $ 18 cm $\times $ 2 cm; approximately tripled) that fulfilled the optical performance requirements (transmission length ~40 mm at 400-nm wavelength; almost doubled). This enabled us to divide the module into 124 segments to install the trimmed aerogel tiles. Two aerogel tiles with refractive indices of 1.045 and 1.055 were installed in each segment (total of 248 tiles), resulting in a radiator thickness of 4 cm. By 2014, 450 aerogel tiles were mass-produced and optically characterized. After water-jet machining, the optical parameters were re-investigated. Ultimately, the selected aerogel tiles were successfully installed in the module by the end of 2016. Speaker: Makoto Tabata (Chiba University) The Aerogel Ring Image Cherenkov counter for particle identification in the Belle II experiment 18m The Belle II spectrometer, an upgrade of the Belle detector, is under construction together with the SuperKEKB electron-positron accelerator at KEK in Japan to search for New Physics beyond the Standard Model using 50 times the $e^+e^-$ collision statistics of the Belle experiment. An aerogel ring imaging Cherenkov (ARICH) counter will be installed into the endcap region of the new spectrometer as a particle identification device to secure $4\sigma$ separation of charged kaons and pions up to a momentum of 3.5 GeV/$c$. We developed several techniques to maximize the pion-kaon separation performance in a 1.5 T magnetic field and the limited space available between the tracker and the calorimeter. Two layers of silica aerogel radiators with different refractive indices are used to focus the Cherenkov light. We have established a method to process the aerogel radiators with a flexible refractive index and high transparency. A Hybrid Avalanche Photo Detector (HAPD), which has 144 pixels with 5 mm pitch, was developed to detect the positions of incoming photons in the high magnetic field. A two-stage readout electronics for the HAPDs was introduced in order to process the signals, merge the data and reduce the number of cables going to the outside of the detector. A frontend board attached to the HAPD reads out signals to digitize photon hit patterns; a merger board collects digitized data from several of the frontend boards (up to 6) to send them to the Belle II global data acquisition system. In total, 248 segments of the silica aerogel radiators cover a plane of the endcap, while 420 HAPDs are located in another plane 20 cm behind it. Development and production of these detector components were finished in 2016 and the ARICH counter is under construction, scheduled to be installed into the Belle II detector in summer 2017. All the segments of the aerogel tiles are fully installed, while installation of the HAPDs and the readout electronics is ongoing in parallel with detector test operation using cosmic rays. Cherenkov ring images of cosmic rays were collected using the framework of the Belle II global data acquisition system to study detector response and readout performance. An LED light injection system to monitor the photo-detectors was also developed and installed. In addition to the construction, we have also developed the slow-control software of the ARICH detector and the readout system, including the power supplies for high and low voltages.
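The focusing effect of the two aerogel indices quoted above can be illustrated with simple geometry: photons from the upstream layer (lower index) travel further to the photodetector plane, so giving the downstream layer a slightly higher index makes the two rings overlap. The sketch below assumes emission at the centre of each 2 cm layer and uses the 20 cm expansion gap mentioned in the abstract; the exact geometry is an assumption made only for this illustration.

```python
import math

def ring_radius_cm(n, lever_arm_cm, beta=1.0):
    """Radius of the Cherenkov ring at the photodetector plane for a relativistic track."""
    theta = math.acos(1.0 / (n * beta))
    return lever_arm_cm * math.tan(theta)

gap_cm = 20.0  # aerogel exit to photodetector plane, as quoted in the abstract
r_up = ring_radius_cm(1.045, gap_cm + 3.0)    # upstream layer centre ~3 cm before the exit
r_down = ring_radius_cm(1.055, gap_cm + 1.0)  # downstream layer centre ~1 cm before the exit
print(f"upstream ring ~ {r_up:.1f} cm, downstream ring ~ {r_down:.1f} cm (nearly coincident)")
```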
In this presentation, we will give an overview of the ARICH counter and its construction, present results of the cosmic-ray test, and then show the expected performance after installation in the Belle II detector. Speaker: Tomoyuki Konno (KEK) High rate time of flight system for FAIR-CBM 18m The Compressed Baryonic Matter experiment (CBM) is one of the big experiments of the international Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. CBM aims to investigate rare probes such as charmed hadrons, multiple strange baryons, di-electrons and di-muons as messengers of the dense phase of strongly interacting matter with unprecedented accuracy. This is achieved by designing all components of the experiment for an interaction rate of 10 MHz for the largest reaction systems. Charged hadron identification in the system is realized via the Time-of-Flight (TOF) method. For this purpose the CBM-TOF collaboration designed a TOF wall composed of Multi-gap Resistive Plate Chambers (MRPC). Due to the high interaction rate, the key challenge is the development of high-rate MRPCs operating above 25 kHz/cm2, made possible by the development of low-resistivity glass of extremely good quality. Based on the low-resistivity glass, we designed several high-rate MRPCs of different structure and readout electronics. Several beam tests have been performed and excellent results were obtained. The TDR of TOF has been approved and the production of low-resistivity glass, MRPC modules and electronics proceeds smoothly. In this article we present the actual design of the TOF wall. The design of the high-rate MRPC, the thin-glass MRPC, the readout chain and beam-test results are also discussed in detail. Speaker: Prof. Yi Wang (Tsinghua University) Endcap Disc DIRC for PANDA at FAIR 18m The PANDA detector at the future FAIR facility at GSI is planned as a fixed-target experiment for proton-antiproton collisions at momenta between 1.5 and 15\,GeV/c. It will be used to address open questions in hadronic physics. In order to achieve excellent particle identification, two different DIRC detector concepts have been developed. This talk describes the Endcap Disc DIRC detector, which will cover the forward endcap region of the PANDA target spectrometer and provide a $4\sigma$ separation of pions and kaons up to a momentum of 4\,GeV/c for polar angles from $5^\circ$ to $22^\circ$. The main advantage of the actual design is the compact modular structure. It consists of a synthetic fused silica radiator disk, which is divided into 4 identical quadrants. The readout system consists of 108 focusing elements with attached MCP-PMTs, which collect, focus and register the Cherenkov photons produced by the particle traversing the radiator. This new detector concept requires the development of dedicated reconstruction and PID algorithms, which permit an efficient analysis of the measured time-correlated photon patterns. The performance of a possible online reconstruction system is under investigation, with a design for a single Virtex 4 FPGA card calculating the Cherenkov angle from the measured hit pattern and related tracking information for each event at a rate of up to 20\,MHz. Time- and event-based Monte-Carlo simulations within the PandaRoot framework have been used to analyse and evaluate the PID performance for high momentum particles.
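For the time-of-flight identification discussed in the CBM contribution above, the quantity the MRPC timing resolution has to resolve is simply the difference in flight time between the two mass hypotheses. The minimal sketch below uses an assumed flight path (not a CBM design value) purely to show the picosecond scale involved.

```python
import math

C_M_PER_NS = 0.299792458
M_PI, M_K = 0.1396, 0.4937   # GeV/c^2

def tof_ns(p_gev, mass, path_m):
    """Flight time of a particle of given momentum and mass over path_m metres."""
    beta = p_gev / math.hypot(p_gev, mass)
    return path_m / (beta * C_M_PER_NS)

path_m = 8.0   # assumed flight path, for illustration only
for p in (1.0, 2.0, 3.0):
    dt_ps = 1e3 * (tof_ns(p, M_K, path_m) - tof_ns(p, M_PI, path_m))
    print(f"p = {p} GeV/c: K-pi time difference ~ {dt_ps:.0f} ps over {path_m} m")
```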
In order to determine the future overall performance of PANDA under realistic conditions, the benchmark channel $p\bar p \rightarrow f_0\pi^0 \rightarrow K^+K^-$ with suitable background events has been studied by including all tracking information and likelihood values from the surrounding detectors. Results from various test beams during the last years were used to validate the PID performance for the desired momentum range. Speaker: Mustafa Schmidt (Justus Liebig University Giessen) Barrel time-of-flight detector for the PANDA experiment at FAIR 18m The PANDA experiment at the new FAIR facility at GSI will perform high precision experiments in the strange and charm quark sector using cooled beams of antiprotons at high luminosity, in the momentum range of 1.5 GeV/c to 15 GeV/c. For the identification of low momentum charged particles with extreme accuracy, the barrel time-of-flight (TOF) detector is one of the key components of PANDA. Its main requirement is to achieve a time resolution of σ < 100 ps as well as a large solid angle coverage at high collision rates. The final Barrel ToF consists of 16 independent segments, located azimuthally at 50 cm radial distance from the beam pipe. Every segment contains a sensitive area that is covered by 2x60 single Scintillator Tiles (SciTil). Each SciTil (90 x 30 x 5 mm³) is read out by 4 Silicon Photomultipliers (SiPMs) at both ends. In 2016, a beam test at CERN exposed the SciTil to a 6 GeV/c secondary beam, where a time resolution of σ < 60 ps was reached. In this talk we will present the further optimization of operational conditions and time resolution. Speaker: Nicolaus Kratochwil (Österreichische Akademie der Wissenschaften, Stefan Meyer Institut für Subatomare Physik, Wien, Austria) Commissioning and Initial Performance of the Belle II iTOP PID Subdetector 18m High precision flavor physics measurements are an essential complement to the direct searches for new physics at the LHC. Such measurements will be performed using the upgraded Belle II detector that will take data at the SuperKEKB accelerator. With an anticipated 40-fold increase in the integrated luminosity of KEKB, the detector systems must operate efficiently at much higher rates than the original Belle detector. A central element of the detector upgrade is the barrel particle identification system. Belle II has built and installed an imaging-Time-of-Propagation (iTOP) detector. The iTOP uses quartz as the Cherenkov radiator and the photons are transported down the quartz bars via total internal reflection, with a spherical mirror at the forward end to reflect forward-going photons to the backward end, where they are imaged onto an array of segmented Micro-Channel Plate Photo-Multiplier Tubes. The system is read out using giga-sample-per-second waveform-sampling Application-Specific Integrated Circuits that provide precise photon timing. The combined timing and spatial distribution of the photons for each event are used to determine the particle species. A summary of commissioning and current status will be provided. Speaker: Gary Varner (University of Hawaii) Conveners: Christian Bohm (Stockholm University) , Prof. Jin Li (IHEP/THU) SciFi - A large Scintillating Fibre Tracker for LHCb 18m The LHCb detector will be upgraded during the Long Shutdown 2 (LS2) of the LHC in order to cope with higher instantaneous luminosities and to read out the data at 40 MHz using a trigger-less read-out system.
The current LHCb main tracking system, composed of an inner and outer tracking detector, will not be able to cope with the increased particle multiplicities and will be replaced by a single homogeneous detector based on scintillating fibres. The new Scintillating Fibre (SciFi) Tracker covers a total detector area of 340 m2 and should provide a spatial resolution for charged particles better than 100 µm in the bending direction of the LHCb spectrometer. The detector will be built from individual modules (0.5 m × 4.8 m), each comprising 8 fibre mats with a length of 2.4 m as active detector material. The fibre mats consist of 6 layers of densely packed blue-emitting scintillating fibres with a diameter of 250 µm. The scintillation light is recorded with arrays of state-of-the-art multi-channel silicon photomultipliers (SiPMs). A custom ASIC will be used to digitize the SiPM signals. Subsequent digital electronics performs clustering and data compression before the data is sent via optical links to the DAQ system. To reduce the thermal noise of the SiPMs, in particular after exposure to a neutron fluence of up to 10$^{12}$ $n_{eq}$ /cm$^2$ expected for the lifetime of the detector, the SiPM arrays are mounted in so-called cold-boxes and cooled down to -40$^o$ C by 3D-printed titanium cold-bars. The production of fibre mats and modules is in full swing: fibre mats are being produced in four production centers and are being assembled at two sites. In parallel, the readout electronics is being finalized and its series production prepared. The detector installation is foreseen to start at the end of 2019. The talk will give an overview of the detector concept and will present the experience from the series production, complemented by the most recent test-beam and laboratory results. Speaker: Ulrich Uwer (H) The tracking system at LHCb in run-2: hardware alignment systems, online calibration, radiation tolerance and 4D tracking with timing 18m The LHCb experiment is designed to study B and D decays at the LHC, and as such is constructed as a forward spectrometer. The large particle density in the forward region poses extreme challenges to the subdetectors, in terms of hit occupancies and radiation tolerance. Two methods and their results will be presented that show no radiation damage of the gaseous straw tube detector after having received a dose of about 0.2 C/cm2 in the hottest area. The precision measurements at LHCb require accurate alignment of their elements. In run-2 of the LHC the full potential of the state-of-the-art alignment system "RASNIK" is being exploited. Relative movements down to 1 um are being monitored and will be shown for the first time at this conference. The high accuracy of the RASNIK data allows tracking of deformations connected with changing magnetic field configurations, operational interventions and environmental conditions. The RASNIK system also provides crucial input to the software alignment by constraining the so-called "weak modes", like movements in the longitudinal direction z. The Outer Tracker subdetector is a gaseous straw tube tracker that measures the drift time with a resolution of 2.4 ns. This accuracy represents an improvement of 20% with respect to the run-1 performance, thanks to a new drift-time calibration strategy, including real-time calibration during data taking, deployed in run-2. Interestingly, recent studies show that this superb timing resolution can be exploited to measure the time-of-flight of single particles with an accuracy of 0.55 ns.
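A rough back-of-the-envelope check of the proton/pion statement that follows: with the 0.55 ns per-track resolution quoted above, the flight-time difference between a proton and a pion over a tracker-scale path amounts to several resolution units at low momentum. The sketch below uses an assumed 9 m flight path purely for illustration; it is not an LHCb geometry value.

```python
import math

C_M_PER_NS = 0.299792458
M_PI, M_P = 0.1396, 0.9383   # GeV/c^2

def tof_ns(p_gev, mass, path_m):
    beta = p_gev / math.hypot(p_gev, mass)
    return path_m / (beta * C_M_PER_NS)

path_m, sigma_ns = 9.0, 0.55   # path length assumed; 0.55 ns resolution from the abstract
for p in (2.0, 3.0, 5.0):
    dt = tof_ns(p, M_P, path_m) - tof_ns(p, M_PI, path_m)
    print(f"p = {p} GeV/c: proton-pion dt ~ {dt:.2f} ns (~{dt / sigma_ns:.1f} sigma)")
```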
We will show that low momentum protons can be cleanly distinguished from pions. In addition, these pilot studies show the potential of distinguishing primary vertices which occur at different times, which will be crucial for LHCb when operating in the high-luminosity regime as is being proposed after the phase-II upgrade. The possibility of using the vertex timing to measure the longitudinal distribution of the unintentional beam population in the nominally empty slots is also being investigated; this requires a timing precision better than about 2.5 ns. Speaker: Dr Artur Ukleja (National Centre for Nuclear Research Warsaw) CMS Tracker performance in 2016 18m In 2016, the CERN Large Hadron Collider (LHC) reached a peak instantaneous luminosity of 1.5x10^34 cm^-2 s^-1, going above the original design and reaching up to 40 interactions per bunch crossing. Under those conditions the CMS tracker maintained a ~98% efficiency for the data-taking period of 2016. This talk will present the performance of the CMS tracker in 2016, for both the Pixel and Strip sub-detectors. Speaker: Hugo Delannoy (I) The CMS ECAL Upgrade for Precision Crystal Calorimetry at the HL-LHC 18m The electromagnetic calorimeter (ECAL) of the Compact Muon Solenoid Experiment (CMS) is operating at the Large Hadron Collider (LHC) in 2016 with proton-proton collisions at 13 TeV center-of-mass energy and at a bunch spacing of 25 ns. Challenging running conditions for CMS are expected after the High-Luminosity upgrade of the LHC (HL-LHC). We review the design and R&D studies for the CMS ECAL crystal calorimeter upgrade and present first test beam studies. Particular challenges at the HL-LHC are the harsh radiation environment, the increasing data rates and the extreme level of pile-up events, with up to 200 simultaneous proton-proton collisions. We present test beam results of hadron-irradiated PbWO crystals at fluences up to those expected at the HL-LHC. We also report on the R&D for the new readout and trigger electronics, which must be upgraded due to the increased trigger and latency requirements at the HL-LHC. Speaker: Patrizia Barria (University of Virginia, on behalf of the CMS Collaboration) Design of the new ATLAS Inner Tracker for the High Luminosity LHC era 18m In the high luminosity era of the Large Hadron Collider (HL-LHC), the instantaneous luminosity is expected to reach unprecedented values, resulting in about 200 proton-proton interactions in a typical bunch crossing. To cope with this high rate, the ATLAS Inner Detector is being completely redesigned, and will be replaced by an all-silicon system, the Inner Tracker (ITk). This new tracker will have both silicon pixel and silicon strip sub-systems. The components of the Inner Tracker will have to be resistant to the large radiation dose from the particles produced in HL-LHC collisions, and have low mass and sufficient sensor granularity to ensure a good tracking performance over the pseudorapidity range |η|<4. In this talk, the challenges and possible solutions to them will be discussed, i.e. the designs under consideration for the pixel and strip modules, and the mechanics of the local supports in the barrel and endcaps.
Speaker: Jike Wang R3-Front-end electronics and fast data transmission(1) Room 305E Conveners: Fukun Tang (The University of Chicago) , Johan Borg (Imperial College London) A readout ASIC for the LHCb Scintillating Fibre (SciFi) tracker 18m The LHCb detector will be upgraded during the Long Shutdown 2 (LS2) of the LHC in order to cope with higher instantaneous luminosities and to read out the data at 40 MHz using a trigger-less read-out system. The current LHCb main tracking system will be replaced by a single homogeneous detector based on scintillating fibres. The detector will be built from 2.5 m long plastic fibres with a diameter of 250 um. The scintillation light is recorded with arrays of state-of-the-art multi-channel silicon photomultipliers (SiPMs). Each SiPM sensor provides 128 channels grouped in two silicon dies packaged together. The electrical SiPM signals are collected and processed by the low Power ASIC for the sCIntillating FIbres traCker (PACIFIC). The 64-channel ASIC comprises, for every channel, analog processing, digitization, slow control and digital output at a rate of 40 MHz. The analog processing includes preamplifier, shaping and integration. The integrator is formed by an interleaved double gated integrator and a track-and-hold to avoid dead time (one integrator is in reset while the other collects the signal). The output of the integrator is digitized using 3 comparators (non-linear flash ADC). The three-bit output is then encoded into two bits and serialized to be transmitted to a readout FPGA used for clustering and data compression. Some auxiliary blocks are also needed to produce a fully functional device, including voltage references, current references, control DACs, power-on-reset (POR) circuitry and serializers. PACIFIC has been designed using deep sub-micron technologies; the current implementation uses the TSMC 130 nm process. PACIFICr3 was the first full-size prototype, providing real measurements of signals from 64 channels and including the analog processing, digital control and serialization. PACIFICr4 corrects some issues found in the r3 prototype and updates some parameters to improve signal collection from the detector. The talk will present the ASIC design concept and provide results from laboratory tests and test-beam measurements. These studies include the characterization of prototypes, measurements with electrical signal and light injection, and measurements with a radioactive source (Sr-90) using full fibre modules. The test-beam results are particularly important to understand the expected physics performance of the full chain from the fibre to the digital PACIFIC output. Speaker: Xiaoxue Han (H) CATIROC, a multichannel front-end ASIC to read out the Small PMTs (SPMT) system of the JUNO experiment 18m CATIROC (Charge And Time Integrated Read Out Chip) is a complete read-out chip designed to read arrays of 16 photomultipliers (PMTs). It finds a valuable application in the context of the JUNO experiment, the largest Liquid Scintillator Antineutrino Detector ever built, currently under construction in the south of China. A double calorimetry system will be used for the first time ever, combining about 18k 20" PMTs and around 36k small PMTs (3"). CATIROC will be used to read out the small-PMT system and provide charge measurements up to 400 photoelectrons (70 pC) on two 10-bit scales and timing information with an accuracy of 200 ps rms.
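As an aside on the PACIFIC digitization scheme described above (three comparators acting as a non-linear flash ADC, with the result packed into two bits), a minimal sketch of that kind of multi-threshold encoding is given below. The threshold values are placeholders for illustration, not the real chip settings.

```python
def pacific_like_code(charge_pe, thresholds=(0.5, 1.5, 2.5)):
    """Return a 2-bit code equal to the number of comparator thresholds exceeded (0..3).

    Thresholds are illustrative placeholders in units of photoelectrons."""
    fired = sum(charge_pe > t for t in thresholds)   # thermometer code of the 3 comparators
    return fired                                      # values 0..3 fit in two bits

for q in (0.2, 0.9, 1.8, 4.0):
    print(f"integrated charge {q:.1f} p.e. -> 2-bit code {pacific_like_code(q)}")
```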
It is composed of 16 independent channels that work in triggerless mode, auto-triggering on the single photo-electron (p.e.). It is a SoC (System on Chip) that processes analog signals up to digitization and sparsification, to reduce the cost and number of cables. The ASIC performance will be detailed in the paper. Speaker: Selma Conforti (OMEGA/IN2P3/CNRS) Development of Radiation-Hard ASICs for the ATLAS Phase-1 Liquid Argon Calorimeter Readout Electronics Upgrade 18m The new trigger readout electronics system for the ATLAS Liquid Argon (LAr) Calorimeter aims to improve the granularity of the calorimeter information for the L1 trigger, essential for the physics research goals after the phase-1 upgrade. The major R&D activities for the front-end readout electronics upgrade include designing a new radiation-hard ADC and a data multiplexing and serialization ASIC to send the data off-detector via optical links. The NevisADC is a radiation-hard four-channel 12-bit 40 MS/s pipeline ADC, which consists of four 1.5-bit Multiplying Digital-to-Analog Converters, with nominal 12-bit resolution, followed by an 8-bit Successive-Approximation-Register analog-to-digital converter to optimize performance and power consumption. The LArTDS ASIC multiplexes 16 channels of ADC data, then scrambles and serializes the data for transmission over two optical links, each with a data transfer rate of 4.8 Gbps. The custom-designed chips, fabricated in the GF 130 nm CMOS 8RF process, have been extensively evaluated and tested. The NevisADC achieves an ENOB of 11 at the 40 MS/s target sampling rate, with a latency of 112.5 ns, while consuming 45 mW/channel, and exhibits no performance degradation after irradiation. The LArTDS has passed the entire design function test with test-pattern data and real ADC data. A 48-hour long-term stability test shows that the bit error rate is below 1.2x10^-15 for both high-speed serial channels. The ASIC architectures and detailed performance results will be presented. Speaker: Dr Qiang Wang (Nevis Laboratories, Columbia University) TDC based on FPGA of Boron-coated MWPC for Thermal Neutron Detection 18m Li Yu, Ping Cao, WeiJia Sun, ManYu Zheng, Ying Zhang and Qi An Speaker: Dr Li Yu (University of Science and Technology of China) Radiation-Hard/High-Speed Optical Engine for HL-LHC 18m The LHC has recently been upgraded to operate at higher energy and luminosity. In addition, there are plans for further upgrades. These upgrades require the optical links of the experiments to transmit data at much higher speed in a more intense radiation environment. We have designed a new optical transceiver for transmitting data at 10 Gb/s. The device consists of a 4-channel ASIC driving a VCSEL (Vertical Cavity Surface Emitting Laser) array in an optical package. The ASIC is designed using only core transistors in a 65 nm CMOS process to enhance the radiation hardness. The ASIC contains an 8-bit DAC to control the bias and modulation currents of the individual channels in the VCSEL array. The DAC settings are stored in SEU (single event upset) tolerant registers. Several optical transceivers were irradiated with 24 GeV/c protons up to a dose of 74 Mrad to study the radiation hardness of the high-speed optical links. The irradiated devices have been extensively characterized. The performance of the devices is satisfactory after the irradiation. We will present a comparison of the performance of the devices before and after the irradiation. Speaker: K.K. Gan (T) Conveners: Prof.
Gerald Eigen (University of Bergen) , Valerio Vagelli (INFN-PG) The TORCH PMT, a close packing, long life MCP-PMT for Cherenkov applications with a novel high granularity multi-anode 18m Photek are in a development program with CERN and the Universities of Oxford and Bristol to produce a novel square PMT for the proposed TORCH detector, which is being developed within an ERC project, with potential application in a future upgrade of the LHCb experiment around 2023. The PMT development takes the known performance of Photek microchannel plate (MCP) based detectors, with a potential spatial resolution of < 0.1 mm and potential time resolution of < 40 ps rms, and aims for a balance of these performance objectives (that often work in opposition) to meet the technical PMT requirements of the proposed TORCH upgrade at LHCb. To achieve high resolution in both time and position and maintain a good level of parallelism in photon detection, a multi-anode approach has to be used. From a detector manufacturing perspective there are three main challenges in this PMT development: long lifetime, multi-anode output and close packing (requiring a square tube envelope). 1. Long Lifetime Previous work published by Photek and several other parties has now established atomic layer deposition (ALD) coating of the MCP as the most effective method of achieving a significant lifetime improvement in an MCP-PMT. We will present further evidence of a PMT capable of producing over 5 C / cm2 of anode charge without any detectable reduction in photocathode sensitivity. 2. Multi-Anode Output The technical requirements of the TORCH PMT include an effective spatial resolution of 128 × 8 pixels within a 53 mm x 53 mm working area. Such high granularity in one direction presents a difficult challenge in terms of manufacturing the segmented anode and also in keeping inter-anode cross talk to a minimum. We will present a novel anode design that combines the image charge technique with a patterned anode, and uses a charge sharing algorithm that produces an inter-pad position resolution beyond the granularity of the pads themselves: 0.225 mm FWHM (sigma ~ 0.1 mm) derived from pads on a 0.83 mm pitch. The anode signal is A.C. coupled and the structure is high-voltage tolerant, so the input window can be fixed at ground potential, which removes any issues with charging effects. We will describe the methods of coupling the detector to multiple NINO chips (a 32-channel time-over-threshold ASIC) using ACF (anisotropic conductive film), which minimises any parasitic input capacitance by allowing very close proximity between the NINO and the detector. We will build on previous software simulations, which combine the pulse-height variation from the detector with the NINO threshold levels to predict the position resolution, and show initial results using the NINO ASIC and the first multi-anode tube prototypes. 3. Close Packing (requiring a square tube envelope) The technical challenge for Photek is to produce a square tube envelope that has a fill factor of > 88 % working width over the total detector size (including housing) in one direction. We will present results from the first square PMT prototypes demonstrating the required fill factor ratio.
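The inter-pad resolution quoted above is obtained by sharing the image charge between neighbouring anode pads and interpolating. A minimal sketch of one common interpolation choice, a charge-weighted centroid, is shown below; the real TORCH algorithm may differ, and the pad charges used here are purely illustrative.

```python
def centroid_mm(pad_charges, pitch_mm=0.83):
    """Charge-weighted centroid across a row of pads, with pad i centred at i*pitch."""
    total = sum(pad_charges)
    return sum(i * pitch_mm * q for i, q in enumerate(pad_charges)) / total

# A photon cluster spread over three pads on a 0.83 mm pitch (illustrative charge values).
print(f"reconstructed position ~ {centroid_mm([0.2, 1.0, 0.6]):.2f} mm")
```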
Speaker: James Milnes (P) Recent Advances in LAPPD™ and Large Area Micro-channel Plates 18m Recent performance results are presented for Large Area Picosecond Photodetectors (LAPPD™s) that are being developed by Incom, Inc. The LAPPD is a micro-channel plate (MCP) based photodetector containing a bi-alkali photocathode with overall dimensions of 230 mm x 220 mm x 21 mm, an active area of up to 400 cm2, spatial resolution ~1 mm, and timing resolution of approximately 100 picoseconds for single photoelectrons and better for multiple photoelectrons. Performance will be discussed for LAPPDs that have been fabricated with gains of ~1x10^6 and quantum efficiency >20%. The low MCP contribution to background rates will also be discussed. LAPPDs are being developed for precision time-of-flight and particle identification measurements in accelerator-based experiments and large water Cherenkov and scintillation detectors. The key component of the LAPPD is the large area MCP manufactured by Incom. A "hollow-core" process is used to draw and fuse millions of micro-capillaries into blocks that are sliced and polished into glass capillary array (GCA) plates. The glass contains low levels of radioactive isotopes, resulting in lower dark noise. The GCAs are then converted into MCPs using an atomic layer deposition (ALD) process. ALD coating has been shown to extend the life of MCP photomultipliers (Conneelly et al. 2013; Lehmann et al., 2014; Matsuoka et al., 2017). Because the glass fabrication and coating operations are separate, they can be independently optimized to produce high-performing, large-area MCPs at low cost per area. The large format also enables dicing into smaller MCPs of desired shapes with matched resistance. Recent performance results will also be discussed for pairs of 203 mm x 203 mm MCPs with gains >1x10^7 and uniformity >80% across the full area. Speaker: Mr Christopher Craven (Incom, Inc.) Hamamatsu PMTs latest development status 18m Speaker: Haiyi Jin (Hamamatsu Photonics K.K.) Plenary 3 305 Convener: Junji Haba (K) Photon detection 30m Photon detection has been a cornerstone in particle physics, enabling many fundamental physics discoveries. Traditional photo-detectors are based on a mature, time-honored technology that has seen incremental improvements over time. Recent years, however, have seen a rapid increase in new developments, either bringing new techniques to bear on traditional methods or implementing transformational improvements in existing technologies. The continuing trend in adopting new techniques and methodologies in the development of photodetectors holds significant promise, possibly providing for a transformation in how photodetectors will be viewed and produced. This would have a huge impact on science and society. Some examples will be discussed. Speaker: Prof. Marcel Demarteau (Argonne National Laboratory) Neutrino physics and detectors 30m Recent developments of detectors for measurements of intrinsic neutrino properties and neutrino oscillations are reviewed, including low-background detectors for the search for neutrinoless double beta decay, and water Cherenkov, liquid Argon TPC and liquid scintillator detectors for neutrino oscillations. These technologies have made significant progress in the past few years and further steps are planned for the future. Speaker: Dr Liangjian Wen (IHEP) Gravitational wave detection 30m For the detection of gravitational waves, KAGRA, the Japanese 3 km cryogenic gravitational-wave telescope project, started in 2010.
KAGRA has two unique features to achieve detection and long-term stable observation: KAGRA is being constructed in an underground mine, and the test masses are cooled down to a cryogenic temperature of ~20 Kelvin. In this talk, we will present the current status of the KAGRA experiment. Speaker: Prof. Kazuhiro Hayama (U.Tokyo) Tea Break 30m Convener: Dr Marcel Demarteau (Fermilab) Low radiation techniques 30m Speaker: Prof. Grzegorz Zuzel (Institute of Physics, Jagiellonian University) Performance studies and requirements on the calorimeters for a FCC-hh experiment 30m The physics reach and feasibility of the Future Circular Collider (FCC) with center-of-mass energies up to 100 TeV and unprecedented luminosity is currently under investigation. The new energy regime opens the opportunity for the discovery of physics beyond the standard model. However, the discovery of e.g. postulated new heavy particles such as gauge bosons requires an efficient reconstruction of very high $p_{T}$ jets. The reconstruction of these boosted objects, with a large fraction of highly energetic hadrons, sets the requirements on the calorimetry: excellent energy resolution (especially a low constant term), containment of highly energetic hadron showers, and high transverse granularity to provide sufficient distinction of close-by particles. Additionally, the FCC detectors have to meet the challenge of a very high pile-up environment. We will present preliminary results of the ongoing performance studies and discuss the feasibility and potential of the technologies under test, while addressing the needs of the physics benchmarks of the FCC-hh experiment. Speaker: Coralie Neubüser (CERN) Front-end electronics for next generation of imaging/timing calorimeters 30m The next generation of calorimeters at colliders will provide unprecedented measurements of particle showers in 5 dimensions (space, energy and time). The very fine granularity leads to millions of readout channels and sets high constraints on the readout electronics, which is embedded inside the detector. In addition to the usual low noise/high dynamic range/high accuracy requirements of calorimetry come requirements of very low power and high data-rate output. More recently, high timing accuracy (~20-50 ps) is being studied to mitigate the harsh pileup environment. The talk will present front-end architectures and recent results from the CALICE, ATLAS and CMS collaborations. Speaker: Dr Christophe de La TAILLE (OMEGA Ecole Polytechnique-CNRS/IN2P3) Parallel Room305E Room305E Conveners: Miroslav Gabriel (Max Planck Institute for Physics) , Prof. Yinong LIU (Tsinghua University) A Novel Gamma-ray Detector for Gravitational Wave Electromagnetic Counterpart Searches in Space 18m The Gravitational wave burst high energy Electromagnetic Counterpart All-sky Monitor experiment (GECAM), proposed by the Institute of High Energy Physics (IHEP), is an all-sky 4π γ-ray monitor consisting of two micro-satellites in space. A novel LaBr3 gamma-ray detector read out with a large-area Silicon Photomultiplier (SiPM) array has been developed for this special application, characterized by a single readout channel, compactness, low power, and X-ray sensitivity down to about 5 keV. This presentation will report the detector design and performance.
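The emphasis on a low constant term in the FCC-hh calorimetry contribution above follows directly from the usual resolution parametrization: the stochastic term shrinks with energy while the constant term does not, so it dominates for the multi-TeV objects of interest. The sketch below evaluates this with placeholder coefficients, not FCC design values.

```python
import math

def relative_resolution(e_gev, stochastic=0.10, constant=0.01):
    """sigma/E = stochastic/sqrt(E) (+) constant, added in quadrature (placeholder terms)."""
    return math.hypot(stochastic / math.sqrt(e_gev), constant)

for e in (10, 100, 1000, 10000):   # GeV
    print(f"E = {e:>5} GeV: sigma/E ~ {100 * relative_resolution(e):.2f} %")
```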
Speaker: Ms Pin Lv (IHEP) Spin-Off Application of Silica Aerogel in Space: Capturing Intact Cosmic Dust in Low-Earth Orbits and Beyond 18m Since the 1970s, silica aerogel has been widely used as a Cherenkov radiator in accelerator-based particle- and nuclear-physics experiments, as well as in cosmic ray experiments. For this major application, the adjustable refractive index and optical transparency of the aerogel are highly important. We have been in the process of developing high-quality aerogel tiles for use in a super-B factory experiment (Belle II) to be performed at the High Energy Accelerator Research Organization (KEK), Japan, and for various particle- and nuclear-physics experiments performed (or to be performed) at the Japan Proton Accelerator Research Complex (J-PARC) since the year 2004. Our recent production technology has enabled us to obtain a hydrophobic aerogel with a wide range of refractive indices (1.0026–1.26) and with an approximately doubled transmission length (at a 400-nm wavelength) in various refractive index regions. Silica aerogel is also useful as a cosmic dust capture medium. Low-density aerogels can capture almost-intact micron-size dust grains with hypervelocities on the order of several kilometers per second in space, which was first recognized in the 1980s. For this interesting application, the high porosity (i.e., low bulk density below 0.1 g/cm$^3$; refractive index $n$ < 1.026) and optical transparency of the aerogel are vitally important. The latter characteristic enables one to easily find, under an optical microscope, the cavity produced in the aerogel by the hypervelocity impact of a dust particle. Aerogel-based cosmic dust collectors were used in several missions aboard spacecraft such as the Space Shuttles and the International Space Station (ISS) in low-Earth orbits. The Stardust spacecraft, which was a deep-space mission by the U.S. National Aeronautics and Space Administration (NASA), successfully retrieved comet and interstellar dust back to Earth in 2006. In support of present-day endeavors, we have developed a next-generation ultralow-density (0.01 g/cm$^3$; $n$ = 1.003) aerogel for the Tanpopo mission, which is an astrobiological experiment in operation now aboard the ISS. In this paper, a spin-off application of aerogel as a dust-capture medium in space is described. We provide an overview of the physics behind hypervelocity capture of dust via aerogels and chronicle their history of use as a dust collector. In addition, recent developments regarding the high-performance aerogel used in the Tanpopo mission are discussed. Research and development on a Scintillating Fiber Tracker with SiPM array readout for Application in Space 18m Scintillating fibers can be complementary to silicon micro-strip detectors for particle trackers in space, or offer an interesting alternative. Less fragile, more flexible, and with no need of wire bonds, they can be used for the development of high-resolution charged-particle tracking detectors. Prototypes consisting of ribbons of 40 cm long, 250 $\mu$m diameter fibers, read out with Hamamatsu MPPC arrays and VATA ASICs, have been tested. Proton beam test results, the status of the space qualification process, as well as preliminary tests with the new IDEAS SIPHRA chip will be presented.
Speaker: Chiara Perrina (University of Geneva) MoBiKID - Kinetic Inductance Detectors for upcoming B-mode satellite missions 18m Our comprehension of the dawn of the universe has grown enormously in recent years, pointing to the existence of cosmic inflation. The primordial B-mode polarization of the Cosmic Microwave Background (CMB) represents a unique probe to confirm this hypothesis. The detection of such small perturbations of the CMB is a challenge that will be faced in the near future by a new dedicated satellite mission. MoBiKID is a new project, funded by INFN, to develop an array of Kinetic Inductance Detectors able to match the requirements of a next-generation experiment. The detectors will feature a Noise Equivalent Power better than 5 aW/Hz^0.5 and will be designed to minimize the background induced by cosmic rays, which could be the main limit to the sensitivity. I will present the current status of detector development and the next planned steps to reach the goal of this project. Speaker: Angelo Cruciani (INFN - Sezione di Roma) Resistive Micromegas for the Muon Spectrometer Upgrade of the ATLAS Experiment 18m Large size multilayer resistive Micromegas detectors will be employed for the Muon Spectrometer upgrade of the ATLAS experiment at CERN. The current innermost stations of the muon endcap system, the 10 m diameter Small Wheel, will be upgraded in the 2019-2020 long shutdown of the LHC, to retain the good precision tracking and trigger capabilities in the high background environment expected with the upcoming luminosity increase of the LHC. Along with the small-strip Thin Gap Chambers (sTGC), the "New Small Wheel" will be equipped with eight layers of Micromegas (MM) detectors arranged in multilayers of two quadruplets, for a total of about 1200 m2 of detection planes. All quadruplets have trapezoidal shapes with surface areas between 2 and 3 m2. The Micromegas system will provide both trigger and tracking capabilities. In order to achieve a 15% transverse momentum resolution for 1 TeV muons, a challenging mechanical precision is required in the construction of each plane of the assembled modules, with an alignment of the readout elements (strips with ~450 um pitch) at the level of 30 μm along the precision coordinate and 80 μm perpendicular to the plane. Each Micromegas plane must achieve a spatial resolution better than 100 μm, independent of the track incidence angle, and operate in an inhomogeneous magnetic field (B < 0.3 T), with a rate capability up to ~15 kHz/cm2. In May 2017, full-size prototypes (modules-0) of all four types will be completed and will be subjected to a thorough validation phase. The module-0 construction procedures will be reviewed, along with the quality-control results during construction and the final validation tests obtained with X-rays, cosmic tracks and high-energy particle beams at CERN. Speaker: Andreas Dudder (Johannes-Gutenberg-Universitaet Mainz) Small-pads Resistive Micromegas for Operation at Very High Rates 18m Resistive Micromegas detectors have already proved to be suitable for precision tracking in dense particle-rate environments up to a few kHz/cm$^2$. In order to achieve an even higher rate capability, with low occupancy up to a few MHz/cm$^2$, finely segmented strips could be replaced by pads of a few mm$^2$. We present here a solution based on small anode pads, overlaid by an insulating layer with a pattern of resistive pads on top.
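The rate argument for replacing strips by few-mm2 pads can be made concrete with a simple occupancy estimate: the probability that a given readout element is hit in one integration window scales with its area. The sketch below uses an assumed integration time and element sizes chosen only for illustration.

```python
def occupancy(rate_per_cm2_hz, area_cm2, window_s=25e-9):
    """Mean number of hits per readout element per integration window (assumed 25 ns)."""
    return rate_per_cm2_hz * area_cm2 * window_s

flux = 5e6  # particles per cm^2 per second, an illustrative "few MHz/cm^2" case
print(f"strip-like element (~1 cm^2 exposed): occupancy ~ {occupancy(flux, 1.0):.3f}")
print(f"small pad (0.8 mm x 2.8 mm):          occupancy ~ {occupancy(flux, 0.0224):.5f}")
```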
The readout and resistive pads are connected by intermediate resistors embedded in the insulating layer. A first prototype has been constructed at CERN, composed of a 48x16 matrix of rectangular read-out pads of 0.8 mm x 2.8 mm (pitch of 1 and 3 mm in the two coordinates), for a total of 768 channels read out by 6 APV-25 chips. Characterization and performance studies of the detector have been carried out by means of radioactive sources, X-rays, cosmic rays and test beam data. The results will be presented. Speaker: Alviggi Mariagrazia (Universita e INFN, Napoli) Readout and Precision Calibration of square meter sized Micromegas Detectors using the Munich Cosmic Ray Facility 18m Currently, m$^2$-sized Micromegas detectors with a spatial resolution better than 100 $\mu$m are of great interest for many experiments and applications. The combination of large size and excellent spatial resolution requires highly sophisticated construction methods in order to fulfil tight mechanical tolerances. We present a method to survey full-sized Micromegas detectors for potential detector deformations or deviations of the internal micro-pattern structure from design values by comparison with precision reference tracking of cosmic muons. The LMU Cosmic Ray Facility consists of two 8 m$^2$ ATLAS MDT (monitored drift tube) chambers for precision muon reference tracking, as well as two segmented trigger hodoscopes providing 10 cm position information along the wires of the MDTs with sub-ns time resolution. The angular acceptance for cosmic muons is $\pm$ 30 degrees and its mechanical layout allows the installation of one or multiple Micromegas detectors in between the MDT reference chambers. Track segments reconstructed in all systems can be compared, allowing a full scan for efficiency homogeneity, pulse height, single-plane angular resolution and spatial resolution, also as a function of multiple scattering. In addition to results on the performance of resistive-strip Micromegas detectors of size up to 2 m$^2$, we report on the synchronized electronic readout system, based on standard MDT electronics, together with custom electronics and firmware based on the SRS Scalable Readout System. Speaker: Andre Zibell (U) Particle tracking in the Negative Ion Gas SF6 with a Micromegas 18m Recent work demonstrated gas gain in the negative ion gas SF$_6$ using GEMs and Thick GEMs. SF$_6$ is a favorable gas for directional dark matter detection with a TPC because it provides low diffusion (at the thermal limit), strong sensitivity to spin-dependent WIMP dark matter (through the high fluorine content), and full-volume fiducialization (thanks to multiple negative ion species). In this work, we present results from a prototype detector showing successful operation of a Micromegas with strip readout in SF$_6$ gas, and discuss the prospects for directional dark matter detectors using this readout technology. Speaker: Prof. James Battat (Wellesley College) A new method for Micromegas fabrication 18m We have developed a new method for fabricating Micromegas detectors based on a thermal bonding technique. A high gain (>10000) and a good energy resolution of 16% (FWHM, 5.9 keV X-rays) can be obtained for Micromegas detectors built with this method. In order to reduce the sparking rate of the detectors, we have also studied resistive anodes made by germanium plating and carbon-paste screen-printing techniques.
Combining the thermal bonding technique with the resistive electrode technique, we have built a 2D position-sensitive Micromegas detector with four-corner readout and a back-to-back double avalanche structure with good performance. This demonstrates the wide range of applications of the new method. This report will describe the new Micromegas fabrication method in various aspects, including its advantages over conventional Micromegas fabrication methods. Results from the prototyping for the development of the new method will also be presented. Speaker: Dr Jianbei Liu (University of Science and Technology of China) Conveners: Hugo Delannoy (Interuniversity Institute for High Energies (ULB-VUB)) , Jennifer Thomas (IHEP) Development of a SiPM camera demonstrator for the Cherenkov Telescope Array observatory telescopes 18m The Cherenkov Telescope Array (CTA) Consortium is developing the new generation of ground observatories for the detection of very-high-energy gamma-rays. The Italian Institute of Nuclear Physics (INFN) is contributing to the R&D of a possible solution for the Cherenkov photon cameras based on Silicon Photomultiplier (SiPM) detectors sensitive to near-ultraviolet light, produced by Fondazione Bruno Kessler (FBK). The concept, mechanics and readout electronics for SiPM modules which could equip a possible upgrade of the focal-plane camera of the pSCT telescope, the prototype of a CTA medium-size telescope with Schwarzschild-Couder optics, are currently being developed. This contribution reviews the development, the assembly and the performance of 4x4 SiPM modules intended to equip the pSCT camera upgrade. Speaker: Valerio Vagelli (INFN Perugia, Università degli Studi di Perugia) Study on Recovery Time of Silicon Photomultiplier with Epitaxial Quenching Resistors 18m The silicon photomultiplier (SiPM), which consists of multiple pixels of avalanche photodiodes working in Geiger mode (G-APD), is a promising semiconductor device for low-level light detection owing to its excellent performance, such as high response speed, low operating voltage, insensitivity to magnetic fields and small volume. The SiPM with epitaxial quenching resistors (EQR SiPM), which uses an epitaxial silicon layer below the p-n junction as the quenching resistor, has been developed by the Novel Device Lab (NDL) at Beijing Normal University. The EQR SiPM resolves the conflict between wide dynamic range and large photon detection efficiency (PDE) that exists in most commercial SiPMs because of their poly-silicon quenching resistors on the surface. In some high-energy physics and medical-imaging applications, e.g. the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) and computed tomography (CT), strict demands are placed on the recovery time of the SiPM: the device must recover within a short time after detecting a photon and be ready for the next photon as soon as possible. When upgrading the LHC for higher luminosities, the bunch-spacing interval is planned to be decreased to 12 ns, so the detection dead time is required to be shorter than this interval. In CT systems, a SiPM with a shorter recovery time is welcome because the scan time can be reduced. The recovery time (or dead time) is defined as the time needed to recharge a pixel after a breakdown has been quenched, i.e. the finite time taken to quench the avalanche and then reset the diode voltage to its initial bias value. Measuring the recovery time is important for studying the internal mechanism of the SiPM and for designing detectors.
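The recovery-time definition just given is often modelled as a simple RC recharge of the pixel, which is also what the double-light-pulse measurement described next probes: the amplitude of the second pulse approaches its full value exponentially with the pulse separation. A minimal sketch of that model is below; the time constant is a placeholder, not an NDL measurement.

```python
import math

def relative_amplitude(dt_ns, tau_ns):
    """Toy RC-recharge model: second-pulse amplitude relative to a fully recovered pixel."""
    return 1.0 - math.exp(-dt_ns / tau_ns)

tau_ns = 15.0   # placeholder recovery time constant, for illustration only
for dt in (5, 15, 30, 60):
    print(f"pulse separation {dt:>2} ns: A2/A1 ~ {relative_amplitude(dt, tau_ns):.2f}")
```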
The EQR SiPM studied in this work, produced by NDL, has a P-on-N structure and a pixel size of 10 μm. The recovery time is mainly investigated with the double light pulse method, which employs two consecutive laser pulses with defined relative time differences varying from several nanoseconds to hundreds of nanoseconds, records the change in collected charge as a function of the pulse separation, and then fits the recovery curve to determine the recovery time. By illuminating the whole sensor, the overall recovery time of all pixels was measured; by partially illuminating the detector using a bare optical fiber with a diameter of tens of micrometers, the partial recovery time of the fired pixels was obtained. The devices were all tested at the optimized over-bias voltage and at room temperature. The results show that the recovery time depends strongly on the active area of the device and on the number of fired pixels: the larger the active area or the more pixels fired, the longer the recovery time. For the EQR SiPM with an active area of 1.4 mm2, the overall recovery time was characterized as 15 ns. For the EQR SiPM with an active area of 3 mm2, the overall recovery time was 30 ns, and the partial recovery time was 6 ns when the number of fired pixels was controlled at about 2000. Although the SiPM has a small pixel size and a small RC time constant, the pixels cannot all be fired synchronously when biased, which leads to pulse spreading and thus broadens the recovery time. In addition, the added capacitance of the pixels, the associated circuit and the distance of the fired pixel from the extraction electrode all affect the characterization of the recovery time. Speaker: Ms Jiali Jiang (Novel Device Laboratory, Beijing Normal University) Gain stabilization and afterpulsing studies of SiPMs 18m The gain of SiPMs increases with bias voltage and decreases with temperature. To operate SiPMs at stable gain, the bias voltage can be adjusted to compensate for temperature changes. We have tested this concept with 30 SiPMs from three manufacturers (Hamamatsu, KETEK, CPTA) in a climate chamber at CERN, varying the temperature from 1°C to 50°C. We built an adaptive power supply that used a linear temperature dependence of the bias voltage readjustment. With one selected bias voltage readjustment, we stabilized four SiPMs simultaneously. We fulfilled our goal of limiting the deviation from gain stability in the 20°C-30°C temperature range to less than ±0.5% for most of the tested SiPMs. We further studied afterpulsing for sensors with trenches. Speaker: Dr Eigen Gerald (University of Bergen) New Study for SiPMs Performance in High Electric Field Environment 18m In the search for the nature of the neutrino, neutrinoless double beta decay (0νββ) plays a significant role in understanding its properties. By measuring the 0νββ decay rate with the desired sensitivity, it is hoped to verify the nature of the neutrino (Majorana or Dirac particle), probe lepton number violation and help determine the values of the absolute neutrino masses. The Enriched Xenon Observatory (EXO), with its two phases, the current EXO-200 and the future multi-tonne upgrade nEXO, is aiming to search for the 0νββ decay of 136Xe. A key parameter that defines the detection sensitivity/capability of the detector is its energy resolution. nEXO aims to reach < 1% energy resolution at the Q-value of the decay. Efficient detection of LXe scintillation photons is critical to achieve this desired value.
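The bias-voltage readjustment used in the SiPM gain-stabilization study above rests on the fact that the gain is proportional to the overvoltage while the breakdown voltage rises roughly linearly with temperature, so keeping the overvoltage constant requires a linear bias correction. The sketch below uses placeholder operating values and temperature coefficient, not numbers measured for the tested devices.

```python
def compensated_bias(temp_c, v_op_20c=56.0, dvbd_dt=0.025):
    """Readjust the bias linearly with temperature to keep the overvoltage constant.

    v_op_20c (V) and dvbd_dt (V/degC) are illustrative placeholders."""
    return v_op_20c + dvbd_dt * (temp_c - 20.0)

for t in (1, 20, 30, 50):
    print(f"T = {t:>2} degC: bias set to {compensated_bias(t):.2f} V")
```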
The current nEXO concept has an array of silicon photomultipliers (SiPMs) located behind the field-shaping rings for this purpose. Although, during the past decade, substantial development in the area of SiPMs has offered what appears to be a superior alternative to conventional methods for our detector, SiPMs are still a relatively new technology; hence, not all of their features have been examined under extreme working environments. Although it is known that SiPMs are stable against changes in magnetic field, little is known about their behavior under high electric fields. In the current design of the nEXO field cage, the SiPMs will be exposed to different electric field values along the drift axis. In this work we present a new study of SiPM performance under exposure to high electric fields. Speaker: Dr Tamer Tolba (Institute of High Energy Physics - Chinese Academy of Science) 3Dimensionally integrated Digital SiPM 18m Analog silicon photo-multipliers (SiPMs) are now a mature technology in particle physics, being widely used for the detection of scintillation and Cherenkov light. Digital SiPMs remain an emerging technology, driven in part by the goal of achieving 10 ps coincidence timing resolution for Positron Emission Tomography, which translates roughly to requiring photo-detectors with 10 ps single photon timing resolution (SPTR). Our group is developing a photo-detector solution based on 3-dimensional integration capable of achieving 10 ps SPTR with high efficiency, while remaining cost effective. Our 3-Dimensionally integrated digital SiPM (3DdSiPM) solution is expected to be ideally suited for many particle physics experiments requiring timing resolution better than 100 ps. Our solution is also fully digital (photons in, bits out), hence eliminating the need for separate front-end electronics. The power dissipation of 3DdSiPMs is expected to be significantly lower than that of analog SiPM front-end electronics for the same performance, which is a very attractive feature for the detection of scintillation light in liquid Xenon and liquid Argon, where liquid boil-off is a serious concern. Our group is pursuing in particular a solution for the nEXO experiment requiring the detection of 175 nm light over 5 m$^2$. We will describe the technology in detail, showing prototype performance and discussing applications in particle and astro-particle physics. Speaker: Fabrice Retiere (TRIUMF) POSTER + Tea Break 1h Corridor on the third floor Conveners: Burak Bilki (U) , Prof. Nural Akchurin (Texas Tech University) Development of ATLAS Liquid Argon Calorimeter Readout Electronics for the HL-LHC 18m The LHC high-luminosity upgrade in 2024-2026 requires the associated detectors to operate at luminosities about 5-7 times larger than assumed in their original design. The pile-up is expected to increase to up to 200 events per proton bunch-crossing. To be able to retain interesting physics events even at rather low transverse energy scales, increased trigger rates are foreseen for the ATLAS detector. At the hardware selection stage acceptance rates of 1 MHz are planned, combined with longer latencies of up to 60 micro-seconds in order to read out the necessary data from all detector channels. Under these conditions, the current readout of the ATLAS Liquid Argon (LAr) Calorimeters does not provide sufficient buffering and bandwidth capabilities.
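The buffering statement above can be quantified directly from the numbers quoted for the upgraded trigger: at the 40 MHz bunch-crossing rate, a 60 microsecond latency corresponds to a few thousand samples that must be held per channel before the 1 MHz accept decision arrives. A minimal sketch of that arithmetic:

```python
bunch_crossing_rate_hz = 40e6   # LHC bunch-crossing rate
latency_s = 60e-6               # maximum hardware-trigger latency quoted in the abstract
accept_rate_hz = 1e6            # hardware-trigger accept rate quoted in the abstract

buffer_depth = round(bunch_crossing_rate_hz * latency_s)
print(f"samples to buffer per channel: {buffer_depth}")                              # 2400
print(f"fraction of crossings read out: {accept_rate_hz / bunch_crossing_rate_hz:.1%}")
```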
Furthermore, the expected total radiation doses are beyond the qualification range of the current front-end electronics. For these reasons a replacement of the LAr front-end and back-end readout system is foreseen for all 182,500 readout channels, with the exception of the cold pre-amplifier and summing devices of the hadronic LAr Calorimeter. The new low-power electronics must be able to capture the triangular detector pulses of about 400-600 nanoseconds length with signal currents up to 10 mA and a dynamic range of 16 bits. Different technologies to meet these requirements are under evaluation: a preamplifier in 130 nm CMOS technology with two gain stages can cover the desired dynamic range while meeting the required noise levels and non-linearity values. Alternatively, developments of the pre-amplifier, shaper and ADCs are being performed in 65 nm CMOS technology. Due to the lower voltage range, 2-gain and 4-gain designs of the analog part are studied with programmable peaking time to optimize the noise level in the presence of signal pile-up. Radiation-hard 14-bit ADCs operating at 40 or 80 MHz are also being studied. Results from performance simulations of the calorimeter readout system for the different options, and results from design studies and first tests of the components, will be presented. Speaker: Hils Maximilian (Technische Universitaet Dresden) Upgrade of the ATLAS Tile Calorimeter for the High luminosity LHC 18m The Tile Calorimeter (TileCal) is the hadronic calorimeter of ATLAS covering the central region of the ATLAS experiment. TileCal will undergo a major replacement of its on- and off-detector electronics in 2024 for the high luminosity programme of the LHC. The calorimeter signals will be digitized and sent directly to the off-detector electronics, where the signals are reconstructed and shipped to the first level of trigger at a rate of 40 MHz. This will provide a better precision of the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms. Three different options are presently being investigated for the front-end electronics upgrade. Extensive test beam studies are being employed to determine which option will be selected. The off-detector electronics is based on the Advanced Telecommunications Computing Architecture (ATCA) standard and is equipped with high-performance optical connectors. The system is designed to operate in a high radiation environment and presents a high level of redundancy. Field Programmable Gate Arrays (FPGAs) are extensively used for the logic functions of the off- and on-detector electronics. One hybrid demonstrator prototype module with the new calorimeter module electronics, but still compatible with the present system, is planned to be inserted in ATLAS in one of the next winter shutdowns. This contribution presents the components of the Tile Calorimeter upgrade for the high luminosity LHC, the production and performance of the prototype read-out electronics, the results of the test-beam measurements at CERN and the plans for the next years. Speaker: Fukun Tang Energy Resolution and Timing Performance Studies of a W-CeF3 Sampling Calorimeter prototype with a Wavelength-Shifting Fiber Readout 18m An electromagnetic sampling calorimeter prototype has been developed to satisfy the requirements for running at the CERN Large Hadron Collider after the planned High-Luminosity upgrade (HL-LHC).
An innovative design, with wavelength-shifting (WLS) fibers running along the chamfers of each calorimeter cell, minimizes the mechanical complexity. The resistance to radiation has been optimised by minimizing the light path, by adopting Cerium Fluoride crystals as active medium and by aiming at Cerium-doped quartz for the WLS fibers, as its luminescence excitation wavelength matches well the CeF3 emission. At the Beam Test Facility in Frascati, Italy, electrons with an energy of up to 491 MeV have allowed us to obtain first performance results on a prototype channel of 24 mm x 24 mm transverse cross-section, using Kuraray WLS fibers. At the SPS-H4 beam line at CERN, electrons with energies of up to 150 GeV have then been used for an in-depth study of the energy resolution and of the impact-point dependence of the response, and agreement is found with detailed GEANT4 simulations. A further beam test, where Cerium-doped quartz fibers have been adopted for wavelength-shifting, gives an energy resolution matching expectations. First tests of the timing performance, an aspect which is crucial for pileup mitigation at the HL-LHC, yield a resolution better than 100 ps using SiPMs, when the fast Cherenkov component from the fibers is exploited. A matrix of 5 x 3 channels has been built and exposed to high-energy electrons from the CERN SPS to study the impact-angle dependence of energy resolution and response up to ~15 deg. Its granularity and sampling fraction have been optimised for pileup rejection. Transverse dimensions of 17 mm x 17 mm, 12 samplings of 6 mm Tungsten and 6 mm CeF3 for a total of 25 radiation lengths, and a readout using Avalanche Photodiodes have been adopted.
Speaker: Dr Francesca Nessi-Tedaldi (ETH Zurich, Switzerland)
A High-Granularity Timing Detector for the Phase-II upgrade of the ATLAS Calorimeter system 18m
The expected increase of the particle flux at the high luminosity phase of the LHC (HL-LHC), with instantaneous luminosities up to L ≃ 7.5 $\times 10^{34} cm^{−2} s^{−1}$, will have a severe impact on pile-up. The pile-up is expected to increase on average to 200 interactions per bunch crossing. The reconstruction and trigger performance for electrons, photons as well as jets and transverse missing energy will be severely degraded in the end-cap and forward region, where the liquid Argon based electromagnetic calorimeter has coarser granularity compared to the central region. A High Granularity Timing Detector (HGTD) is proposed in front of the liquid Argon end-cap calorimeters for pile-up mitigation at Level-0 (L0) trigger level and in the offline reconstruction. This device should cover the pseudo-rapidity range from 2.4 to about 4.2. Four layers of Silicon sensors, possibly interleaved with Tungsten, are foreseen to provide precision timing information for charged and neutral particles with a time resolution of the order of 30 pico-seconds per readout cell in order to assign the energy deposits in the calorimeter to different proton-proton collision vertices. Each readout cell has a transverse size of only a few mm, leading to a highly granular detector with several hundred thousand readout cells. Using the information provided by the detector, the contribution from pile-up jets can be reduced significantly while preserving high efficiency for hard-scatter jets.
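As a minimal sketch of how a ~30 ps per-cell time resolution is used to attribute deposits to vertices (an editorial illustration; the vertex time resolution and example numbers below are assumptions, not values from the contribution):

import math

def time_compatibility(t_deposit_ps, t_vertex_ps, sigma_cell_ps=30.0, sigma_vertex_ps=15.0):
    """Significance (in sigma) of the time difference between a calorimeter
    deposit and a candidate primary vertex. Resolutions are illustrative
    assumptions, not numbers from the abstract."""
    sigma = math.hypot(sigma_cell_ps, sigma_vertex_ps)
    return abs(t_deposit_ps - t_vertex_ps) / sigma

# A deposit 120 ps away from a candidate vertex time is ~3.6 sigma away,
# so it would likely be attributed to a different (pile-up) collision.
print(round(time_compatibility(120.0, 0.0), 1))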
The expected improvements in performance are particularly relevant for physics processes with forward jets, like vector-boson fusion and vector-boson scattering processes, and for physics signatures with large missing transverse energy. Silicon sensor technologies under investigation are Low Gain Avalanche Detectors (LGAD), pin diodes, and HV-CMOS sensors. In this presentation, starting from the physics motivations and expected performance of the High Granularity Timing Detector at the HL-LHC, the proposed detector layout and front-end readout, the laboratory and beam-test characterization of sensors, and the results of radiation tests will be discussed.
Speaker: Lenzi Bruno (CERN)
Precision Timing Calorimetry with the upgraded CMS Crystal ECAL 18m
Particle detectors with a timing resolution of order 10 ps can improve event reconstruction at high luminosity hadron colliders tremendously. The upgrade of the Compact Muon Solenoid (CMS) crystal electromagnetic calorimeter (ECAL), which will operate at the High Luminosity Large Hadron Collider (HL-LHC), will achieve a timing resolution of around 30 ps for high energy photons and electrons. In this talk we will discuss the benefits of precision timing for the ECAL event reconstruction at the HL-LHC. Simulation and test beam studies carried out for the timing upgrade of the CMS ECAL will be presented and the prospects for a full implementation of this option will be discussed.
Speaker: Adi Bornheim (On behalf of the CMS Collaboration)
Conveners: Igal Jaegle (University of Florida) , Prof. Jin Li (IHEP/THU)
A vertex and tracking detector system for CLIC 18m
The physics aims of the proposed future CLIC high-energy linear e+e- collider pose challenging demands on the performance of the detector system. In particular the vertex and tracking detectors have to combine precision measurements with robustness against the expected high rates of beam-induced backgrounds. The requirements include ultra-low mass, facilitated by power pulsing and air cooling in the vertex-detector region, small cell sizes and precision hit timing at the few-ns level. A detector concept meeting these requirements has been developed and an integrated R&D program addressing the challenges is progressing in the areas of ultra-thin sensors and readout ASICs, interconnect technology, mechanical integration and cooling. We present the proposed vertex and tracking detector system, its performance obtained from full-detector simulations, and give an overview of the ongoing technology R&D, including results from recent beam tests of prototypes.
Speaker: Nurnberg Andreas (CERN, Geneva, Switzerland)
Radiative Decay Counter for active background identification in MEG II experiment 18m
The MEG experiment searched for the lepton flavor violating process, $\mu^{+} \rightarrow e^{+}\gamma$, and the published result gave a new upper limit on the branching ratio of $\mathscr{B} < 4.2 \times 10^{-13}$. The upgraded experiment (MEG II) will start soon, aiming at an order of magnitude better branching-ratio sensitivity of $O(10^{-14})$ by using the world's most intense muon beam of up to $\sim10^{8} \mu^{+}/$s and upgraded detectors with considerably improved performance. One key issue for the upgrade is to suppress the background rate, which increases significantly with the higher muon beam rate. A new detector, the Radiative Decay Counter (RDC), will be introduced to identify background photons from radiative muon decay (RMD).
The RDC detects the low-momentum positrons associated with RMD on the beam axis, downstream of the muon stopping target. We developed a detector consisting of fast plastic scintillators and LYSO crystals with SiPM readout. Tests of the detector with the muon beam successfully demonstrated its background-identification capability. Further improvement of the sensitivity is possible by detecting the positrons from RMD upstream of the target. We designed this detector based on a thin layer of plastic scintillating fibers to minimize the influence on the muon beam. A series of feasibility studies was performed towards the installation. We concluded that the influence on the muon beam transport is expected to be small. Moreover, the detection efficiency for the positrons was evaluated, taking into account pileup of beam muons and the light yield of the scintillating fibers. By installing both RDC detectors, the sensitivity is expected to improve by 22-28$\%$.
Speaker: Ryoto Iwai (ICEPP, The University of Tokyo)
Belle II iTOP optics: design, construction, and performance 18m
The imaging-Time-of-Propagation (iTOP) counter is a new type of ring-imaging Cherenkov counter developed for particle identification at the Belle II experiment. It consists of 16 modules arranged azimuthally around the beam line. Each module consists of one mirror, one prism and two quartz bar radiators. Here we describe the design, acceptance tests, alignment, gluing and assembly of the optical components. All iTOP modules were successfully assembled and installed in the Belle II detector by the middle of 2016. After installation, laser and cosmic ray data have been taken to test the performance of the modules. First results from these tests will be presented.
Speaker: Dr Boqun Wang (University of Cincinnati)
Spherical Measuring Device of Secondary Electron Emission Coefficient Based on Pulsed Electron Beam 18m
In order to improve the performance of microchannel plates, a material having a high secondary electron emission coefficient (SEEC) is required, and the SEEC of this material needs to be accurately measured. For this purpose, a SEEC measuring device with a spherical collector structure was designed. The device consists of a vacuum system, a baking system, a test system, an electronic readout system, and a magnetic shield system. Measurement of the SEEC over a wide incident energy range (100 eV ~ 10 keV) and a large range of incident angles (0° ~ 85°) is realized by using a pulsed electron beam as the incident source. The energy distribution of the secondary electrons is measured by a multi-layer grid structure. The SEEC of a metallic material was measured with this device, demonstrating that the device is stable and performs well.
Speaker: Mr Kaile Wen (Institute of High Energy Physics, Chinese Academy of Sciences)
Conveners: Benedikt Vormwald (University of Hamburg, Institute of Experimental Physics) , K.K. Gan (The Ohio State University)
The CMS Tracker Phase II Upgrade for the HL-LHC era 18m
The LHC will reach its third long shutdown period (LS3) around 2024. During this period the machine will be upgraded to the High Luminosity LHC (HL-LHC), increasing its instantaneous luminosity to 5x10^34 cm^-2 s^-1. As a result, an integrated luminosity of about 3000 fb^-1 will be reached after 10 years of running. The drastic increase in luminosity demands an upgrade of the CMS experiment, the so-called Phase II Upgrade.
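A quick sanity check of the two luminosity figures quoted above (an editor's sketch; the effective running time per year is an assumed round number):

# Instantaneous -> integrated luminosity (illustrative consistency check).
inst_lumi_cm2_s = 5e34            # HL-LHC instantaneous luminosity quoted above
seconds_per_year = 1.0e7          # assumed effective physics running time per year
years = 10

# 1 fb^-1 = 1e39 cm^-2
integrated_fb = inst_lumi_cm2_s * seconds_per_year * years / 1e39
print(f"~{integrated_fb:.0f} fb^-1 over {years} years at peak luminosity")   # ~5000 fb^-1
# Folding in luminosity decay during fills and machine efficiency, this is
# consistent with the ~3000 fb^-1 quoted above.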
The current tracking detector of the CMS experiment will not be able to operate efficiently after LS3, mainly due to accumulated radiation damage in the silicon sensors. To ensure efficient operation after LS3 and to profit from the high luminosity conditions, the CMS Tracker will be completely renewed. One of the key aspects of the upgrade is the newly designed silicon sensor modules, the so-called pT-modules, able to provide information to the L1 trigger. Two types of modules, called PS- and 2S-modules, are foreseen. The PS-modules, which will be installed in the inner regions of the CMS Tracker, consist of a silicon strip sensor and a macro-pixel sensor which are stacked with a small separation. For the outer regions, 2S-modules will be installed, which consist of two stacked silicon strip sensors with parallel strip orientation. By correlating the hit positions on each sensor, information on the particle's transverse momentum can be gained, since the particle tracks are bent by the strong 3.8 T magnetic field of CMS. Using this functionality, particles above a given momentum threshold can be identified and the information is sent to the L1 trigger. Instead of the currently used analogue readout chips, binary chips capable of correlating the hit positions of the two stacked silicon sensors will be used. The sensors for the pT-modules will have to withstand fluences of up to 1.5x10^15 neq cm^-2, a factor of 10 larger than the requirement for the present Tracker. P-type base material has been chosen, as it has proved to be more radiation hard and to withstand this fluence.
Speaker: Axel König (Institute of High Energy Physics)
The Phase-2 ATLAS ITk Pixel Upgrade 18m
The entire tracking system of the ATLAS experiment will be replaced during the LHC Phase II shutdown (foreseen to take place around 2025) by an all-silicon detector called the "ITk" (Inner Tracker). The innermost portion of the ITk will consist of a pixel detector with stave-like support structures in the most central region and ring-shaped supports in the endcap regions; there may also be novel inclined support structures in the barrel-endcap overlap regions. The new detector could have as much as 14 m$^2$ of sensitive silicon. Support structures will be based on low-mass, highly stable and highly thermally conductive carbon-based materials cooled by evaporative carbon dioxide. The ITk will be instrumented with new sensors and readout electronics to provide improved tracking performance compared to the current detector. All the module components must be sufficiently performant and robust to cope with the expected high particle multiplicity and severe radiation background of the High-Luminosity LHC. Readout will be based on the new front-end ASIC being developed by the RD53 Collaboration. Ideally the readout chips will be thinned to as little as 100 μm to save material; this presents a challenge for sensor-chip interconnection, and options are being evaluated in collaboration with industrial partners to develop reliable processing techniques. Servicing the detector reliably without introducing excessive amounts of material and dead space is another significant challenge. Data cables must be capable of handling up to 5 Gb/s and must be electrical in nature, with optical conversion at larger radii where the radiation background is less intense. Serial powering has been chosen as the baseline for the ITk pixel system as it minimises service cable mass; extensive testing has been carried out to prove its feasibility.
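A minimal sketch of why serial powering reduces service-cable mass (generic, assumed numbers, not taken from the abstract): the cable of a serial chain carries the current of one module rather than the sum over all modules, so the ohmic cable loss drops roughly by the square of the chain length for a fixed cable cross-section.

# Parallel vs. serial powering of a chain of pixel modules (illustrative numbers).
n_modules = 10          # modules per power chain (assumed)
i_module = 2.0          # supply current per module in A (assumed)
r_cable = 0.2           # round-trip cable resistance in ohms (assumed)

i_parallel = n_modules * i_module      # all module currents share one cable
i_serial = i_module                    # the same current flows through every module

loss_parallel = i_parallel**2 * r_cable
loss_serial = i_serial**2 * r_cable
print(f"cable loss: parallel {loss_parallel:.0f} W, serial {loss_serial:.1f} W")
# 80 W vs 0.8 W: the serial scheme trades cable mass and loss for a higher chain voltage.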
Attention must also be paid to grounding and shielding in the detector to mitigate cross-talk and common-mode noise. Most of the baseline technological decisions will be taken this year in view of the ITk Pixel TDR to be completed by the end of 2017.
Speaker: Dr Mathieu Benoit (University of Geneva)
EXPECTED PERFORMANCE OF THE ATLAS INNER TRACKER AT THE HIGH-LUMINOSITY LHC 18m
The large data samples at the High-Luminosity LHC will enable precise measurements of the Higgs boson and other Standard Model particles, as well as searches for new phenomena such as supersymmetry and extra dimensions. To cope with the experimental challenges presented by the HL-LHC, such as large radiation doses and high pileup, the current Inner Detector will be replaced with a new all-silicon Inner Tracker for the Phase II upgrade of the ATLAS detector. The current tracking performance of two candidate Inner Tracker layouts with an increased tracking acceptance (compared to the current Inner Detector) of |η|<4.0, employing either an 'Extended' or 'Inclined' Pixel barrel, is evaluated. New pattern recognition approaches facilitated by the detector designs are discussed, and ongoing work in optimising the track reconstruction for the new layouts and experimental conditions is outlined. Finally, future approaches that may improve the physics and/or technical performance of the ATLAS track reconstruction for the HL-LHC are considered.
Speaker: Jason Dhia MANSOUR (ATLAS)
Pixel Detector Developments for Tracker Upgrades of the High Luminosity LHC 18m
The talk will report on the INFN ATLAS-CMS joint research activity in collaboration with FBK, which is aiming at the development of new pixel detectors for the LHC Phase-2 upgrades. The talk will cover the main aspects of the research program, starting from the sensor design and fabrication technology, with an outlook on the future steps using both Silicon On Insulator (SOI) and Direct Wafer Bonded (DWB) wafers. The R&D covers both planar and 3D (columnar-technology) pixel devices. All sensors are thin n-in-p type devices, as this is the mainstream option foreseen for the HL-LHC pixel upgrades. Results from device characterization measurements will be shown. Hybrid modules, with 100 µm and 130 µm active thickness, connected to the PSI46dig readout chip, have been tested in beam test experiments. The most recent results from test beams will be presented.
Speaker: Marco Meschini (I)
Radiation hardness of small-pitch 3D pixel sensors up to HL-LHC fluences 18m
3D silicon detectors, with cylindrical electrodes that penetrate the sensor bulk perpendicularly to the surface, represent a radiation-hard sensor technology. Due to the reduced electrode distance, trapping at radiation-induced defects is reduced, and the operational voltage and power dissipation after heavy irradiation are significantly lower than for planar devices. During the last few years, the 3D technology has matured and 3D pixel detectors are already used in high-energy physics particle detectors where superior radiation hardness is key: in the ATLAS Insertable B-Layer (IBL) and the ATLAS Forward Proton (AFP) detector. For the High-Luminosity upgrade of the Large Hadron Collider (HL-LHC), the radiation-hardness requirements are even more demanding, with expected fluences up to 1-2$\times10^{16}\,n_{eq}$/cm$^2$ for the innermost pixel layer of the ATLAS and CMS experiments at the end of their lifetime, after an integrated luminosity of 3,000 fb$^{-1}$.
Moreover, to face the foreseen large particle multiplicities, smaller pixel sizes of 50$\times$50 or 25$\times$100 $\mu$m$^{2}$ are planned. In the context of this work, a new generation of 3D pixel sensors with small pixel sizes of 50x50 and 25x100 µm² and reduced electrode distances is developed for the HL-LHC upgrade of the ATLAS pixel detector, and their radiation hardness is tested up to the expected high fluences. Since a readout chip with the desired pixel size is still under development by the RD53 collaboration, first prototype small-pitch pixel sensors were designed to be matched to the existing ATLAS IBL FE-I4 readout chip for testing. Irradiation campaigns with such pixel devices have been carried out at KIT with a uniform irradiation of 23 MeV protons to a fluence of 5$\times10^{15}\,n_{eq}$/cm$^2$, as well as at CERN-PS with a non-uniform irradiation of 23 GeV protons to a peak fluence of 1.4$\times10^{16}\,n_{eq}$/cm$^2$. The hit efficiency has been measured in several beam tests at the CERN-SPS in 2016. The benchmark efficiency of 97% has been reached at remarkably low bias voltages of 40 V at 5$\times10^{15}\,n_{eq}$/cm$^2$ or 100 V at 1.4$\times10^{16}\,n_{eq}$/cm$^2$. Thanks to the low operation voltage, the power dissipation can be kept at low levels of 1.5 mW/cm² at 5$\times10^{15}\,n_{eq}$/cm$^2$ and 13 mW/cm² at 1.4$\times10^{16}\,n_{eq}$/cm$^2$ at -25$^{\circ}$C. The performance of these devices is significantly better than for the previous generation of 3D detectors or the current generation of planar silicon pixel detectors, demonstrating the excellent radiation hardness of the new 3D technology.
Speaker: Joern Lange (IFAE)
TORCH: a large-area detector for high resolution time-of-flight measurement 18m
The TORCH concept is based on the detection of Cherenkov light produced in a quartz radiator plate. It is an evolution of the DIRC technique, extending the performance by the use of precise measurements of the emission angles and arrival times of detected photons. This allows dispersion in the quartz to be corrected for, and the time of photon emission to be determined with a target precision of $\rm 70~ps$ per photon. Combining the information from the 30 or so detected photons from each charged particle that traverses the plate, an exceptional time-of-flight resolution of order $\rm 15~ps$ should be possible. The TORCH technique is a candidate for application in a future upgrade of the LHCb experiment, for low-momentum charged particle identification. Over a flight distance of $\rm 10~m$ it would provide clean pion-kaon separation up to $\rm 10~GeV$, in the busy environment of collisions at the LHC. Fast timing will also be crucial at higher luminosity for pile-up rejection. A 5-year R&D program has been pursued with industry to develop suitable photon detectors with the required fast timing performance, fine spatial granularity (0.8 mm-wide pixels), long lifetime ($\rm 5~C/cm^2$ integrated charge at the anode) and large active area (80% for a linear array). This is being achieved using $\rm 6 \times 6~cm^2$ micro-channel plate PMTs, and final prototype tubes are expected to be delivered early in 2017. Earlier prototype tubes have demonstrated most of the required features individually, using fast read-out electronics that has been developed based on NINO+HPTDC chips. A small-scale prototype of the optical arrangement has been tested in beam at CERN over the last year, and demonstrated close to nominal performance.
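The TORCH numbers quoted above can be cross-checked with the standard time-of-flight relation (an editor's sketch; particle masses are PDG values, everything else comes from the abstract): combining ~30 photons at 70 ps each gives roughly 13 ps per track, while the pion-kaon flight-time difference at 10 GeV/c over 10 m is a few tens of picoseconds.

import math

C_M_PER_S = 299792458.0
M_PI, M_K = 0.13957, 0.49368     # masses in GeV (PDG values)

def tof_ps(mass_gev, p_gev, length_m):
    """Time of flight in picoseconds for a particle of given mass and momentum."""
    e = math.hypot(p_gev, mass_gev)
    beta = p_gev / e
    return length_m / (beta * C_M_PER_S) * 1e12

delta_t = tof_ps(M_K, 10.0, 10.0) - tof_ps(M_PI, 10.0, 10.0)
per_track = 70.0 / math.sqrt(30)       # 70 ps per photon, ~30 photons per track
print(f"pi/K TOF difference: {delta_t:.0f} ps, per-track resolution: {per_track:.0f} ps")
# ~37 ps separation against ~13 ps resolution, i.e. roughly 3 sigma at 10 GeV/c.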
Components for a large-scale prototype, which will be read out using 10 MCP-PMTs and includes a highly-polished synthetic quartz radiator plate of dimensions $\rm 125 \times 66 \times 1~cm^3$, are currently being manufactured for delivery on the same timescale. The status of the project will be reviewed, including the latest results from test beam analysis and the progress towards the final prototype.
Speaker: Roger Forty (CERN)
The RICH of the NA62 experiment at CERN 18m
The NA62 experiment at CERN has been constructed to measure the ultra-rare charged kaon decay into a charged pion and two neutrinos with a 10% uncertainty. The main background comes from the charged kaon decay into a muon and a neutrino, which is suppressed by kinematic tools using a magnetic spectrometer and by the different stopping power of muons and pions in the calorimeters. A RICH detector is needed to further suppress the μ+ contamination in the π+ sample by a factor of at least 100 between 15 and 35 GeV/c momentum, to measure the pion crossing time with a resolution of about 100 ps and to produce the trigger for a charged track. The detector consists of a 17 m long tank (vessel), filled with Neon gas at atmospheric pressure. Cherenkov light is reflected by a mosaic of 20 spherical mirrors with 17 m focal length, placed at the downstream end, and collected by 1952 photomultipliers (PMTs) placed at the upstream end. The RICH detector installation was completed in the summer of 2014 and the detector was used for the first time during the pilot run at the end of 2014. The RICH was then operated during the NA62 Commissioning Run in 2015 and has been used in the 2016 Physics Run. It must be noted that in 2014 and 2015 the RICH mirror alignment was not optimal, and the need for better pion-muon separation performance was the main reason for the detector maintenance carried out in the 2015-2016 winter shutdown. In this presentation the construction of the detector will be described and the performance reached during the 2014-2015 data-taking will be discussed. Some preliminary results of the 2016 data-taking will also be shown.
Speaker: Prof. Andrea Bizzeti (Università di Modena (Italy))
Machine Learning Techniques for Triggering and Event Classification in Collider Experiments 18m
Machine learning techniques have already started to find their place in the offline analysis of data obtained with collider detectors. The implementation is usually in the form of supervised learning, where the machine learning algorithms are trained for certain classification or regression tasks and then utilized on the actual data. With recent developments in hardware capable, to some extent, of unsupervised and reinforcement learning, and with the increased variety of complex software algorithms, online triggering and event classification could also become possible. In lepton collider experiments, which basically record every single event, such techniques can be used for online event classification. In hadron collider experiments, on the other hand, such systems can be utilized to trigger on unexpected events in addition to triggering on targeted categories of events. Here, we will discuss possible implementations of machine learning techniques for future collider experiments and demonstrate the implementation of powerful software tools.
CUPID-0: a cryogenic calorimeter with particle identification for double beta decay search. 18m
With their excellent energy resolution, efficiency, and intrinsic radio-purity, cryogenic calorimeters are primed for the search for neutrino-less double beta decay (0nDBD). The sensitivity of these devices could be further increased by discriminating the dominant alpha background from the expected beta-like signal. The CUPID-0 collaboration aims at demonstrating that the measurement of the scintillation light produced by the absorber crystals allows for particle identification and, thus, for a complete rejection of the alpha background. The CUPID-0 detector, assembled in 2016 and now in commissioning, consists of 26 Zn$^{82}$Se scintillating calorimeters containing about 2x10$^{25}$ 0nDBD emitters. In this contribution we present the preliminary results obtained with the detector and the perspectives for a next-generation project.
Speaker: Laura Cardani (I)
The CUORE bolometric detector for neutrinoless double beta decay searches 18m
The Cryogenic Underground Observatory for Rare Events (CUORE) is the first bolometric experiment reaching the 1-ton scale. The detector consists of an array of 988 TeO2 crystals arranged in a cylindrical compact structure of 19 towers. The construction of the experiment and, in particular, the installation of all towers in the cryostat was completed in August 2016: the experiment is now in the pre-operation phase and data taking is commencing. In this talk, we will discuss the technical challenges of the construction and pre-operation phases, the design choices and measured performance of its electronic instrumentation, and the first results from the full detector runs.
Speaker: Lorenzo Cassina (University of Milano Bicocca)
Conveners: Bo Yu (Brookhaven National Lab) , Dr Jianbei Liu (University of Science and Technology of China)
Upgrade of the ATLAS Thin Gap Chambers Electronics for HL-LHC 18m
The High-Luminosity LHC (HL-LHC) is planned to start operation in 2026 with an instantaneous luminosity of 7.5 x 10^34 cm-2s-1. To cope with an event rate higher than that of the LHC, the trigger and readout electronics of the ATLAS Thin Gap Chambers will need to be replaced. An advanced first-level trigger with fast tracking will be implemented, with the transfer of all hit data from the frontend to the backend boards. Studies with the data taken by ATLAS indicate that the advanced trigger could reduce the event rate by 30% for a single muon trigger with a transverse momentum threshold of 15 GeV while maintaining similar efficiency. A first prototype of the frontend board has been developed with the full functionality required for the HL-LHC, including the transfer of data from 256 channels with a 16 Gbps bandwidth and control of the discriminator threshold. The data transfer has been demonstrated with a charged particle beam at the CERN SPS beam facility. The control of the discriminator threshold has also been demonstrated, and excellent linearity between the set and measured values was obtained. We will present the overall design of the new trigger and readout electronics as well as the demonstration of the frontend board prototype.
Speaker: Tomomi Kawaguchi (Nagoya University)
Small-Strip Thin Gap Chambers for the Muon Spectrometer Upgrade of the ATLAS Experiment 18m
The instantaneous luminosity of the Large Hadron Collider at CERN will be increased by up to a factor of five to seven with respect to the design value through an extensive upgrade program over the coming decade.
Such an increase will allow for precise measurements of Higgs boson properties and extend the search for new physics phenomena beyond the Standard Model. The largest phase-1 upgrade project for the ATLAS Muon System is the replacement of the present first station in the forward regions with the so-called New Small Wheels (NSWs) during the long LHC shutdown in 2019/20. Along with Micromegas, the NSWs will be equipped with eight layers of small-strip thin gap chambers arranged in multilayers of two quadruplets, for a total active surface area of more than 2500 m$^2$. All quadruplets have trapezoidal shapes with surface areas up to 2 m$^2$. To retain the good precision tracking and trigger capabilities in the high-background environment of the high luminosity LHC, each sTGC plane must achieve a spatial resolution better than 100 μm to allow the Level-1 trigger track segments to be reconstructed with an angular resolution of approximately 1 mrad. The basic sTGC structure consists of a grid of gold-plated tungsten wires sandwiched between two resistive cathode planes at a small distance from the wire plane. The precision cathode plane has strips with a 3.2 mm pitch for precision readout, and the cathode plane on the other side has pads for triggering. The position of each strip must be known with an accuracy of 40 µm along the precision coordinate and 80 µm along the beam. On such large-area detectors, mechanical precision is a key point and must therefore be controlled and monitored throughout construction and integration. The pads are used to produce a 3-out-of-4 coincidence to identify muon tracks in an sTGC quadruplet. A full-size sTGC quadruplet has been constructed and equipped with the first prototype of dedicated front-end electronics. The performance of the full-size sTGC quadruplet has been evaluated at the Fermilab (May 2014) and CERN (October 2014) test beam facilities, measuring spatial resolution and trigger efficiency. We will describe the technological novelties, production challenges, performance and test results of the sTGC detectors. The status of the project and the plan for its completion will also be discussed.
Speaker: Chengguang Zhu (Shandong University)
Simulation and investigation of the gaseous detector module for CEPC TPC 18m
Compared with the International Linear Collider (ILC), the beam structure of the future Circular Electron Positron Collider (CEPC) is very different, and the 'power-pulsing' mode cannot be used. In this paper, simulation and estimation results for the Time Projection Chamber (TPC), one tracker detector option for CEPC, are given. The optimized operating gas (Ar:CF4:C2H6 = 92:7:1), with fast drift velocity, low diffusion and low attachment, was simulated using Garfield/Garfield++, and the performance of the selected gas was compared with that of the T2K working gas (Ar:CF4:iC4H10 = 95:3:2). The track distortion caused by space charge in the drift volume during Z-pole running at CEPC was calculated, and the resulting position deviation is less than 10 μm at the inner radius of the TPC. To meet the critical physics requirements of tracking at CEPC, a new-concept gaseous detector module, one option for the tracker detector, has been developed and experimentally measured. The performance of the concept detector module has been obtained: the energy resolution is better than 20% for 5.9 keV X-rays, and the results indicate that continuous ion-backflow suppression to a ratio better than 0.1% can be reached at a gain of about 5000.
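A short numeric illustration of the ion-backflow figure quoted above (an editor's sketch using only the quoted gain and backflow ratio): the quantity that matters for space-charge distortions is the number of ions drifting back into the drift volume per primary ionisation electron.

# Ions fed back into the drift volume per primary ionisation electron (illustrative).
gain = 5000          # gas gain quoted above
ibf_ratio = 0.001    # ion backflow ratio better than 0.1%, as quoted above

ions_back_per_primary = gain * ibf_ratio
print(f"back-drifting ions per primary electron: {ions_back_per_primary:.0f}")
# ~5 ions per primary electron; together with the primary ion itself this sets the
# space charge behind the <10 um distortion estimate quoted above.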
The preliminary results are consistent with simulation and satisfy the ion-suppression requirements of the TPC detector module.
Speaker: Dr Huirong Qi (Institute of High Energy Physics, CAS)
The Belle II / SuperKEKB commissioning Time Projection Chambers - characterization, simulation, and results 18m
Ten Time Projection Chambers (TPCs) with 10 cm drift distance, filled with He:CO2 (70:30) at slightly above one atmosphere and equipped with a double GEM and a high-resolution pixel readout, were built by the University of Hawaii to measure fast neutrons produced by the SuperKEKB beam-induced background during the first and second commissioning phases. We characterized the TPCs with two different sources (Fe55 and Po210) and will discuss a TPC simulation validated by these calibration measurements. Finally, we will present the experimental results of the two TPCs installed during the first commissioning phase and the expected results for the 8 TPCs that will be installed during the second commissioning phase.
Speaker: Igal Jaegle (BEAST II Collaboration - University of Florida)
PandaX-III: Searching for Neutrinoless Double Beta Decay with High Pressure Xe-136 Gas Time Projection Chambers 18m
The PandaX-III (Particle And Astrophysical Xenon Experiment III) experiment will search for Neutrinoless Double Beta Decay (NLDBD) of Xe-136 at the China Jin Ping underground Laboratory (CJPL). In the first phase of the experiment, a high pressure gas Time Projection Chamber (TPC) will contain 200 kg of 90% Xe-136 enriched gas operated at 10 bar. Fine-pitch micro-pattern gas detectors (Microbulk Micromegas) will be used at both ends of the TPC for the charge readout, with a cathode in the middle. Charge signals can be used to reconstruct tracks of NLDBD events and provide good energy and spatial resolution. In this talk, I will give an overview of recent progress of PandaX-III, including data taking with a prototype TPC at Shanghai.
Speaker: Prof. Ke Han (Shanghai Jiao Tong University)
Conveners: Jinlong Zhang (A) , Ralf SPIWOKS (CERN)
Electronics, trigger and data acquisition systems for the INO ICAL experiment 18m
The India-based Neutrino Observatory (INO) has proposed the construction of a 50 kton magnetised Iron Calorimeter (ICAL) in an underground laboratory located in South India. The main aims of this now-funded project are to precisely study the atmospheric neutrino oscillation parameters and to determine the ordering of the neutrino masses. The detector will deploy about 28,800 glass Resistive Plate Chambers (RPCs) of approximately 2 m x 2 m in area. About 3.6 million detector channels are required to be instrumented. The analog front-end consists mainly of 4-channel preamplifier and 8-channel leading-edge discriminator ASICs. The digital front-end is implemented using a high-end FPGA, a TDC ASIC and a network controller chip. The multi-level trigger system generates the global trigger signal based solely on event topology information. A dedicated sub-system handles the distribution of global clock and trigger signals as well as the measurement of time offsets due to disparate signal path lengths. The data, acquired on receipt of the trigger signal by the digital front-end sub-system, are dispatched to the backend data concentrator hosts via a multi-tier network. Finally, the event data are compiled by the event builder, which also performs various data-quality checks besides archiving the data.
We will present the design of the electronics, trigger and data acquisition systems of this ambitious and indigenous experiment as well as the current status of their deployment.
Speaker: Dr Satyanarayana Bheesette (Tata Institute of Fundamental Research)
The ATLAS Level-1 Trigger System with 13TeV nominal LHC collisions 18m
The Level-1 (L1) Trigger system of the ATLAS experiment at CERN's Large Hadron Collider (LHC) plays a key role in the ATLAS detector data-taking. It is a hardware system that selects in real time events containing physics-motivated signatures. Selection is based purely on calorimeter energy depositions and hits in the muon chambers consistent with muon candidates. The L1 Trigger system has been upgraded to cope with the more challenging run-II LHC beam conditions, including increased centre-of-mass energy, increased instantaneous luminosity and higher levels of pileup. This talk summarises the improvements, commissioning and performance of the L1 ATLAS Trigger for the LHC run-II data period. The acceptance of muon triggers has been improved by increasing the hermeticity of the muon spectrometer. New strategies to obtain better muon trigger signal purity were designed for certain geometrically difficult transition regions by using the ATLAS hadronic calorimeter. Algorithms to reduce noise spikes in muon trigger rates were also deployed. The L1 Calorimeter Trigger underwent several major upgrades. At the pre-processing stage, more than 1700 FPGA-based daughter boards were installed, replacing the previous ASIC-based modules. The new modules enable significantly improved pile-up control, such as dynamic bunch-by-bunch pedestal correction, as well as an enhanced signal-to-noise ratio through the use of digital autocorrelation Finite Impulse Response filters. Furthermore, the digitisation speed was doubled to 80 MHz, which allows for improved treatment of saturated signals and refined input timing. The firmware of the subsequent object-finding hardware components was modified to add extra selectivity, such as energy-dependent electromagnetic isolation criteria in the cluster processor. In addition, the transmission bandwidths were enlarged and new merger modules were introduced that provide flexibility for the integration of a brand-new system: the ATLAS L1 Topological Trigger. The ATLAS L1 Topological Trigger uses physically motivated kinematic quantities of triggered candidates to reject undesired background processes, extending the reach of the ATLAS physics program. The Central Trigger Processor, the heart of the ATLAS L1 Trigger system, was also upgraded. Its hardware, firmware and software architectures were redesigned. It now allows twice as many trigger channels, much more flexible handling of detector dead-times and the possibility of concurrent independent triggering of up to 3 different sub-detector combinations, and it provides the interface to the new topological trigger system.
Speaker: Louis Helary (CERN)
Modelling Resource Utilization of a Large Data Acquisition System 18m
The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration.
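For orientation only (an editor's sketch; the event size and buffering interval below are assumptions, not numbers from the abstract), the 5 TB/s Phase-II input rate is consistent with a ~1 MHz hardware accept rate and events of a few megabytes:

# Rough consistency check of the Phase-II data-acquisition input rate (illustrative).
accept_rate_hz = 1e6          # assumed hardware-trigger accept rate
event_size_mb = 5.0           # assumed average event size in MB (placeholder)

input_rate_tb_s = accept_rate_hz * event_size_mb * 1e6 / 1e12
buffer_for_10_min_pb = input_rate_tb_s * 600 / 1000
print(f"input rate ~{input_rate_tb_s:.0f} TB/s")                    # ~5 TB/s
print(f"storage to buffer 10 minutes ~{buffer_for_10_min_pb:.0f} PB")  # ~3 PB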
Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current and upgraded ATLAS data-acquisition system architectures. We discuss the modelling techniques in use and their implementation. We will show that our model reproduces the current data acquisition system's operational behaviour and present the plans and initial results for the Phase-II system model evolution.
Speaker: Santos Alejandro (CERN, the University of Heidelberg, Germany)
A general high-performance xTCA-compliant and FPGA-based Data Processing Unit for trigger and data acquisition applications 18m
This talk will present a new version of a high-performance, xTCA-compliant, FPGA-based Data Processing Unit for trigger and data acquisition applications such as PANDA, the PXD/Belle II upgrade and the CMS trigger. The unit consists of 4 Advanced Mezzanine Cards (AMCs, called xFP cards), 1 AMC carrier ATCA board (ACAB) and 1 Rear Transition I/O Board (RTM). The ACAB board features one Xilinx UltraScale XCKU060 FPGA, 16 GBytes of DDR4 memory, a 5-port Gigabit Ethernet switch and one 10G Ethernet port for data processing, buffering and switching. The Gigabit Ethernet switch routes the Ethernet ports of the four xFP cards and the ACAB board to the ATCA backplane fabric port. Each xFP board features one Xilinx Virtex-5 FX70T FPGA and 4 GBytes of DDR2 memory for data processing. The connection between the ACAB board and the four xFP boards is via RocketIO ports and additional LVDS I/O pairs. Eight optical links on four xFP4 cards (two 6 Gbps optical I/Os each) provide an input bandwidth of 48 Gbps, and 16 optical links on four xFP3.1 cards (four 4 Gbps optical I/Os each) provide a maximum input bandwidth of 64 Gbps. Optical links can come either from the front panel of the AMC cards or from the RTM card. A single ATCA shelf can host up to 14 boards interconnected via a full-mesh backplane. Each board can directly connect to any of the other 13 boards point-to-point via a 10G RocketIO link. A prototype unit will be shown and function tests will be reported and discussed. Key words: xTCA, AMC, ACAB, RTM, RocketIO, DDR4
Speaker: Mr Jingzhou ZHAO (高能所)
FELIX: the new detector readout system for the ATLAS experiment 18m
From the Phase-I upgrade onward, the Front-End Link eXchange (FELIX) system will be the interface between the data handling system and the detector front-end and trigger electronics at the ATLAS experiment. FELIX will function as a router between custom serial links and a commodity switch network which will use standard technologies to communicate with data collecting and processing components. The FELIX system is being developed using commercial-off-the-shelf server PC technology in combination with an FPGA-based PCIe Gen3 I/O card interfacing to GigaBit Transceiver links and with Timing, Trigger and Control connectivity provided by an FMC-based mezzanine card.
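A back-of-the-envelope look at the I/O balance of such a PCIe Gen3 readout card (an editor's sketch: the number of links and the lane count are assumptions; the PCIe Gen3 and GBT line-rate figures are standard values, not taken from the abstract):

# Illustrative I/O budget of an FPGA-based PCIe Gen3 readout card (assumed configuration).
gbt_line_rate_gbps = 4.8        # GigaBit Transceiver link line rate (standard figure)
gbt_payload_gbps = 3.2          # usable GBT payload after encoding/FEC overhead
n_links = 24                    # optical links served by one card (assumed)

pcie_lanes = 16                 # PCIe Gen3 x16 assumed
pcie_gbps = pcie_lanes * 8.0 * (128.0 / 130.0)   # 8 GT/s per lane, 128b/130b encoding

input_gbps = n_links * gbt_payload_gbps
print(f"aggregate GBT payload: {input_gbps:.0f} Gb/s, PCIe Gen3 x16: {pcie_gbps:.0f} Gb/s")
# ~77 Gb/s of detector input against ~126 Gb/s of host bandwidth in this configuration.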
Dedicated firmware for the Xilinx FPGAs (Virtex 7 and Kintex UltraScale) installed on the I/O card, alongside an interrupt-driven Linux kernel driver and user-space software, will provide the required functionality. On the network side, the FELIX unit connects to both Ethernet-based networks and InfiniBand. The system architecture of FELIX will be described and the results of the development program currently in progress will be presented.
Speaker: Jinlong Zhang
Conveners: Benedikt Vormwald (University of Hamburg, Institute of Experimental Physics) , Prof. Qun OUYANG (IHEP)
CMOS pixel development for the ATLAS experiment at the HL-LHC 18m
To cope with the rate and radiation environment expected at the HL-LHC, new approaches based on CMOS pixel detectors are being developed, providing charge collection in a depleted layer. They are based on: HV-enabling technologies that allow the use of high depletion voltages (HV-MAPS); high-resistivity wafers for large depletion depths (HR-MAPS); radiation-hard processes with multiple nested wells that allow CMOS electronics to be embedded with sufficient shielding in the sensor substrate; and backside processing and thinning for material minimization and backside voltage application. Since 2014, members of more than 20 groups in the ATLAS experiment have been actively pursuing CMOS pixel R&D in an ATLAS Demonstrator program covering sensor design and characterization. The goal of this program is to demonstrate that depleted CMOS pixels, with monolithic or hybrid designs, are suited for high-rate, fast-timing and high-radiation operation at the LHC. For this, a number of technologies have been explored and characterized. In this presentation the challenges for the use of CMOS pixel detectors at the HL-LHC are discussed, such as fast read-out and low-power-consumption designs as well as fine pitch and large pixel matrices. Different designs of CMOS prototypes are presented with emphasis on performance and radiation hardness results, and perspectives for application in the upgrade of the ATLAS tracker will be discussed.
Speaker: Ristic Branislav (C)
Integrated CMOS sensor technologies for the CLIC tracker 18m
The tracking detector at the proposed high-energy CLIC electron-positron collider will be based on small-pitch silicon pixel or strip sensors arranged in a multi-layer barrel and end-cap geometry with a total surface of about 90 m$^2$. The requirements include single-point position resolutions of a few microns and time stamping with an accuracy of approximately 10 ns, combined with a low material budget of less than 2% of a radiation length per layer, including cables, cooling and supports. Mainly fully integrated CMOS sensors are under consideration. One of the candidate technologies is based on a 180 nm CMOS process with a high-resistivity substrate. Test beam measurements and TCAD simulations were performed for demonstrator chips consisting of an array of analog pixel matrices with different pixel pitch and a variety of collection-electrode geometries and process options. The analog signals of each matrix are read out by external sampling ADCs, allowing for a precise characterisation of the signal response. In this contribution we present the sensor design and show results from recent test-beam campaigns, as well as comparisons with TCAD simulations. The results show good spatial and timing resolution, in line with the requirements for the CLIC tracker.
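The few-micron single-point resolution targets quoted above can be put in context with the textbook binary-readout limit of pitch/sqrt(12) (a generic formula used here as an editor's illustration, not a result from these prototypes); charge sharing and analog interpolation are what push the resolution below this limit.

import math

def binary_resolution_um(pitch_um):
    """Binary (hit/no-hit) readout resolution for a given pixel or strip pitch."""
    return pitch_um / math.sqrt(12.0)

for pitch in (25.0, 50.0):   # example pitches in micrometres (assumed values)
    print(f"pitch {pitch:>4.0f} um -> binary resolution {binary_resolution_um(pitch):.1f} um")
# 25 um -> 7.2 um, 50 um -> 14.4 um; analog charge sharing is needed to reach a few microns.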
Speaker: Magdalena Munker (CERN, Geneva, Switzerland)
Analysis and simulation of HV-CMOS assemblies for the CLIC vertex detector 18m
The requirements of precision physics and the environment expected at the proposed future high-energy linear e+e- collider CLIC result in challenging constraints for the vertex detector. In order to reach the performance goals, various sensor technologies are under consideration. Prototypes of an active pixel sensor with 25 μm pitch (CCPDv3) have been fabricated in a commercial 180 nm High-Voltage CMOS technology. The sensors are capacitively coupled to CLICpix readout ASICs with matching footprint, implemented in a commercial 65 nm CMOS process. Tests of the assemblies were carried out at the CERN SPS using 120 GeV pions over an angular range of 0°-80°. The measurements have shown an excellent tracking performance with an efficiency of >99% and a resolution of 5-7 μm over the angular range. These were then compared to simulations carried out using TCAD, showing good agreement for the current-voltage, breakdown and charge collection properties. The simulations have also been used to optimise features for future sensor designs.
Speaker: Mr Matthew Buckland (University of Liverpool)
Capacitively Coupled Pixel Detectors: From design simulations to test beam 18m
An overview of the characterisation of H35 HV-CMOS active pixel sensors for the ATLAS ITk is given. Capacitively Coupled Pixel Detectors (CCPDs) are only possible thanks to HV-CMOS sensors, where the high voltage (necessary to deplete the sensor) can be applied to CMOS circuits, allowing the sensor to be capacitively coupled to a readout ASIC and avoiding expensive bump bonds. Extensive work has been done on the characterisation of this new sensor technology. TCAD simulations of the HV-CMOS pixel designs and TCT measurements on real devices are shown. In addition, automated wafer-probing measurements and the flip-chipping of the sensors to the readout chips will be presented, and the FPGA-based readout system that was developed is introduced. Laboratory measurements, such as DAC scans and test-pulse calibration, will be shown. To conclude, test beam measurements done at the CERN SPS and at Fermilab, using the UniGE FE-I4 Telescope, with non-irradiated and irradiated samples of the AMS-H18 CCPDv4 and the H35 full-size demonstrator, will be shown and discussed.
Speaker: Mateus Vicente (U)
Enhanced lateral drift sensors: concept and development 18m
Future experiments in particle physics require few-micrometre position resolution in their tracking detectors. Silicon is today's material of choice for high-precision detectors and offers a high degree of engineering flexibility. Instead of scaling down pitch sizes, which comes at the high price of an increased number of channels, our new sensor concept seeks to improve the position resolution by increasing the lateral size of the charge distribution already during the drift in the sensor material. To this end, it is necessary to carefully engineer the electric field in the bulk of this so-called enhanced lateral drift (ELAD) sensor. This is achieved by implants with different values of doping concentration deep inside the bulk, which allow for modification of the drift paths of the charge carriers in the sensor. In order to find an optimal sensor design, detailed simulation studies have been conducted using SYNOPSYS TCAD. The parameters that need to be defined are the geometry of the implants, their doping concentration and their position inside the sensor.
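A minimal sketch of why enlarging the lateral charge spread improves the position measurement (a generic two-strip illustration, not the ELAD simulation itself; the pitch and charge values are assumed): once the charge is shared between neighbouring channels, a charge-weighted centroid interpolates between strip centres instead of snapping to the nearest strip.

def centroid_position_um(q_left, q_right, pitch_um=50.0):
    """Charge-weighted position between two neighbouring strips.

    Strip centres are at 0 and pitch_um; generic illustration with an assumed pitch."""
    return pitch_um * q_right / (q_left + q_right)

# Without charge sharing the hit snaps to a strip centre (0 or 50 um);
# with sharing, intermediate positions become measurable.
print(centroid_position_um(7.0, 3.0))   # 15.0 um
print(centroid_position_um(5.0, 5.0))   # 25.0 um, i.e. midway between the strips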
Process simulations are used to provide the production-determined shapes of the implants in order to allow for realistic modelling. Results of a geometry optimisation are shown, realising optimal charge sharing and hence position resolution. A position resolution of a few micrometres was achieved by using deep implants, without relying on Lorentz drift or a tilted incidence angle. Additionally, a description of the multi-layer production process is presented, which represents a new production technique allowing for deep bulk engineering.
Speaker: Ms Anastasiia Velyka (DESY Hamburg)
R1-Interface and beam instrumentation Room 305A
Electron Test Beams at SLAC 18m
We present the status of and future plans for the various electron test beam lines at SLAC. The presentation will focus on ESTB, the End Station (A) Test Beam, which, after rebuilds during 2017, will continue to deliver 2 to 16 GeV primary electrons (10^9 per pulse), or single electrons (1-100 per pulse), at a 5 Hz rate. These beams have been used by around 500 users in 38 experiments over the past 4 years for detector R&D (RHIC, ATLAS, g-2, etc.) and accelerator physics experiments. In addition, SLAC is currently operating ASTA, a 5 MeV electron beam line with ultra-short pulses, and the NLCTA beam lines, which provide electron beams between 60 and 300 MeV. FACET-II, which will provide 10 GeV, very high current and very short pulsed electron and positron beams, is in its planning stage and is expected to deliver beams in 2019.
Speaker: Carsten Hast (S)
Scattering studies with the DATURA beam telescope 18m
High-precision particle tracking devices allow for two-dimensional analyses of the material budget distribution of particle detectors and their periphery. These tracking devices, called beam telescopes, enable a precise measurement of the tracks of charged particles with an angular resolution of the order of a few tens of microradians and a position resolution of a few micrometres. The material budget is reconstructed from the variance of the angular distribution of the scattered particles. Similarly, a new tomographic technique exploiting the deflection of electrons with an energy of a few GeV in a sample requires precise reference measurements of the scattering angle distribution of targets of known thicknesses. At the DESY test-beam facilities, the DATURA beam telescope, a high-precision tracker using pixel sensors, was used to record GeV electrons traversing aluminium targets with precisely known thicknesses between 13 um and 10^4 um. A track reconstruction was performed, enabling the measurement of the scattering angle at the target due to multiple scattering therein. For that purpose, the General Broken Lines method was used, incorporating a new unbiased target-material estimator. In response to the increased interest in material budget measurements, we present the reconstruction of electron tracks and detail the analysis and accuracy of the angular deflection measurements. The width and the shape of the recorded distributions are compared to theoretical estimates and Geant4 simulations. Additionally, calibration techniques required as input for precise tomographic reconstructions are discussed.
Speaker: Dr Hendrik Jansen (DESY)
CLAWS - A Plastic Scintillator / SiPM based Detector measuring Backgrounds during the Commissioning of SuperKEKB 18m
The SuperKEKB collider at KEK, which started its commissioning in February 2016, is designed to achieve unprecedented luminosities, a factor of 40 higher than the record-breaking luminosity of the KEKB machine.
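The material-budget reconstruction described in the DATURA abstract above relies on the standard Highland parametrisation of the multiple-scattering angle; a short sketch follows (an editor's illustration: the beam momentum is an example value within the DESY range, the aluminium radiation length is a standard table value):

import math

def highland_theta0_mrad(p_gev, x_over_x0, beta=1.0, charge=1):
    """Highland formula for the RMS projected multiple-scattering angle, in mrad."""
    if x_over_x0 <= 0:
        return 0.0
    theta0 = (13.6e-3 / (beta * p_gev)) * charge * math.sqrt(x_over_x0) \
             * (1.0 + 0.038 * math.log(x_over_x0))
    return theta0 * 1e3

X0_AL_UM = 88970.0   # radiation length of aluminium, ~8.9 cm (standard table value)
for thickness_um in (13.0, 1000.0, 10000.0):
    theta0 = highland_theta0_mrad(p_gev=4.0, x_over_x0=thickness_um / X0_AL_UM)
    print(f"{thickness_um:>7.0f} um Al at 4 GeV: theta0 ~ {theta0:.3f} mrad")
# The material budget x/X0 is extracted by inverting this relation on the measured
# width of the scattering-angle distribution.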
For the operation of the Belle II detector, in particular of its pixel vertex detector, a precise understanding of the background conditions at the interaction point is crucial. To study backgrounds, a dedicated detector setup consisting of different subsystems has been installed for the first commissioning phase of the accelerator. Among those systems is CLAWS, consisting of 8 scintillator tiles with directly coupled SiPMs, read out by computer-controlled oscilloscopes with very deep buffers. CLAWS focuses on the background connected to the continuous injection of the accelerator, by monitoring the background levels of individual particle bunches in the machine with sub-nanosecond resolution continuously over ms time-frames. We will present the technology of the CLAWS detectors, the overall installation and the detector performance, including the calibration, time resolution and observed effects from the moderate radiation dose received during operation. We will also discuss selected results on the time structure of the injection background during the first phase of SuperKEKB, and present the plans for an upgraded system to be installed as part of the Belle II inner detector for the second commissioning phase scheduled for spring 2018.
Speaker: Windel Hendrik (Max-Planck-Institute for Physics)
CMS Central Beam Pipe Instrumentation with Fiber Bragg Grating Sensors: Two Years of Data Taking 18m
We present the recent results of the monitoring of the central beam pipe of the Compact Muon Solenoid experiment (CMS) at the European Organization for Nuclear Research (CERN). The measurements are carried out by means of an innovative fiber optic monitoring system based on Fibre Bragg Grating (FBG) sensor technology. The CMS central beam pipe is part of the Large Hadron Collider (LHC) and is the place where the high energy LHC collisions take place. It is made of a beryllium tube section, 3 m long with a central diameter of 45 mm and a wall thickness of 0.8 mm, sealed at the two extremities by two conical aluminium sections, each 1.5 m long. Being spectrally encoded, the FBG sensors are insensitive to electromagnetic interference, intensity modulation of the optical carrier and broadband-radiation-induced losses. Hence, a fiber optic monitoring system based on FBG sensors represents an ideal solution for a reliable and accurate sensing system to be used 24/7 in the harsh environment of the CMS experimental facility. Our monitoring system consists of four polyimide-coated SMF28 fibers (200 μm total diameter: core, cladding and coating) placed longitudinally at the four cardinal positions of the beam pipe cross-section. On each fiber, 16 FBGs have been manufactured: 7 are rigidly glued to the pipe to measure the local strain, and the remaining 9 are left unglued but in contact with the pipe in order to work as local thermometers and as temperature compensators for the adjacent strain sensors. The mechanical complexity of the structure will be described, and the first temperature and strain measurement data recorded during LHC operation will be discussed. The data recorded have proven the overall sensitivity and reliability of this innovative monitoring system. The designed system allows the monitoring of any deformation induced on the CMS central beam pipe during detector motion in the maintenance periods and of magnetic-field-induced deformation during operation phases.
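A minimal sketch of the temperature-compensation scheme described above (an editor's illustration: the strain and temperature sensitivities are typical textbook values for silica FBGs around 1550 nm, not calibration constants of the CMS system): the unglued grating sees only temperature, so its wavelength shift can be subtracted from that of the adjacent glued grating before converting to strain.

# Temperature-compensated strain readout from a glued/unglued FBG pair (illustrative).
STRAIN_SENS_PM_PER_UE = 1.2   # ~1.2 pm per microstrain at 1550 nm (typical value)
TEMP_SENS_PM_PER_C = 10.0     # ~10 pm per degree C (typical value)

def temperature_change_c(d_lambda_unglued_pm):
    """Temperature change seen by the unglued (strain-free) grating."""
    return d_lambda_unglued_pm / TEMP_SENS_PM_PER_C

def strain_microstrain(d_lambda_glued_pm, d_lambda_unglued_pm):
    """Strain at the glued FBG, using the adjacent unglued FBG as thermometer."""
    d_lambda_strain = d_lambda_glued_pm - d_lambda_unglued_pm
    return d_lambda_strain / STRAIN_SENS_PM_PER_UE

# Example: glued grating shifts by +62 pm, unglued by +50 pm
print(f"dT ~ {temperature_change_c(50.0):.0f} C, strain ~ {strain_microstrain(62.0, 50.0):.0f} ue")
# -> ~5 C of warming plus ~10 microstrain of mechanical deformation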
Moreover, the temperature FBG sensors represent a unique solution for monitoring the beam pipe thermal behaviour during the various operational and maintenance phases. This innovative solution will be a milestone for beam pipe monitoring in high energy physics.
Speaker: Dr Francesco Fienga (University of Napoli Federico II)
Conveners: Mr Kejun ZHU (高能所) , Dr Liangjian Wen (高能所)
The JUNO VETO detector system 18m
The Jiangmen Underground Neutrino Observatory (JUNO) is a 20 kton liquid scintillator detector whose primary physics goal is the determination of the neutrino mass hierarchy. The detector will be built in a 700 m deep underground laboratory. A multi-veto system will be built for cosmic muon detection and background reduction. The volume outside the central detector is filled with water and equipped with ~2000 20-inch MCP-PMTs to form a water Cherenkov detector for muon tagging. Both the water Cherenkov detector walls and the central detector external surface are coated with Tyvek reflector to increase the light collection efficiency. A Top Tracker (TT) detector will be built by re-using the Target Tracker of the OPERA experiment. The TT consists of 62 walls, each 6.8 m x 6.8 m, made of plastic scintillator strips equipped with WLS fibers, and allows x-y readout for precise muon tracking. The three layers of the TT, together with the appropriate trigger electronics, will help to understand and reduce the cosmogenic background contribution, such as that induced by the isotopes 9Li and 8He. The TT will cover half of the top area, with the three layers spaced by one meter. The muon detection efficiency of the water Cherenkov detector is >95%. With this veto system, the cosmic-muon-induced fast neutron background can be reduced to the level of ~0.1/day.
Speaker: Mr Haoqi Lu (IHEP)
The ANNIE experiment: measuring neutron yield from neutrino-nucleus interactions 18m
The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is a Water Cherenkov (WCh) based neutrino experiment downstream of the Booster Neutrino Beam at Fermilab, designed to study the abundance of final-state neutrons from neutrino-nucleus interactions. The measurement is enabled by two new techniques with wide relevance for neutrino physics: (1) the first application of Large-Area Picosecond Photodetectors (LAPPDs) to localize primary neutrino interaction vertices within a small fiducial volume through precision timing measurements, and (2) the use of gadolinium-doped water to count the number of final-state neutrons through the measurement of the gammas emitted by neutron captures. Phase I of ANNIE is currently being performed on the Booster Neutrino Beam (BNB) at Fermilab, aiming to measure the neutron background to neutrino interactions. A small movable volume of gadolinium-loaded liquid scintillator is used to measure the rate of neutron events as a function of position inside the water tank. Phase II of ANNIE is designed to fully demonstrate the realization of the ANNIE detector. During this stage, additional PMTs and functional LAPPDs will cover the entire water tank, enabling detailed reconstruction of the event kinematics. This presentation will give an overview of the experiment, the techniques to be used, the reconstruction algorithms and the current project progress.
Speaker: Jingbo Wang (U)
The KM3NeT Digital Optical Module 18m
KM3NeT is a European deep-sea multidisciplinary research infrastructure in the Mediterranean Sea.
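The water Cherenkov detectors discussed above (the JUNO veto and ANNIE, and sea-water Cherenkov detection in KM3NeT) all rest on the same emission relation, cos(theta) = 1/(n*beta); as a generic reminder (an editor's sketch using the textbook refractive index of water and the PDG muon mass, not values from the abstracts):

import math

N_WATER = 1.33          # refractive index of water (textbook value)
M_MU_GEV = 0.10566      # muon mass in GeV (PDG value)

# Cherenkov angle for an ultra-relativistic particle (beta ~ 1)
theta_deg = math.degrees(math.acos(1.0 / N_WATER))

# Kinematic threshold: beta > 1/n  ->  p > m / sqrt(n^2 - 1)
p_thr = M_MU_GEV / math.sqrt(N_WATER**2 - 1)
e_thr = math.hypot(p_thr, M_MU_GEV)

print(f"Cherenkov angle in water: {theta_deg:.1f} deg")                       # ~41 deg
print(f"muon threshold: p > {p_thr*1e3:.0f} MeV/c, E > {e_thr*1e3:.0f} MeV")  # ~120 MeV/c, ~160 MeV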
It will host a km3-scale neutrino telescope and dedicated instrumentation for long-term and continuous measurements for Earth and Sea sciences. The KM3NeT neutrino telescope is a 3-dimensional array of Digital Optical Modules (DOMs), suspended in the sea by means of vertical string structures, called Detection Units, supported by two Dyneema ropes, anchored to the seabed and kept taut with a system of buoys. The Digital Optical Module represents the active part of the neutrino telescope and therefore the real heart of KM3NeT. It consists in a pressure-resistant borosilicate glass spherical vessel housing 31 photomultiplier tubes and the associated front-end and readout electronics. The aim is to provide nanosecond precision on the arrival time of single Cherenkov photons and directional information with a high sensitive surface (1260 cm2) and an almost isotropic field of view. Temperature and humidity sensors are used to monitor the environmental conditions, while a system of compasses and calibration components provide precision about the position and orientation of the photo-sensors up to a few centimetres and few degrees, respectively. In this contribution the design and the performances of the KM3NeT Digital Optical Modules are discussed, with a particular focus on enabling technologies and integration procedure. Conveners: Johan Borg (Imperial College London) , christophe de La Taille (OMEGA) Electronics and triggering challenges for the CMS High Granularity Calorimeter for HL-LHC 18m The High Granularity Calorimeter (HGCAL), presently being designed by the CMS collaboration to replace the CMS endcap calorimeters for the High Luminosity phase of LHC, will feature six million channels distributed over 52 longitudinal layers. The requirements for the front-end electronics are extremely challenging, including high dynamic range (0-10 pC), low noise (~2000e- to be able to calibrate on single minimum ionising particles throughout the detector lifetime) and low power consumption (~10mW/channel), as well as the need to select and transmit trigger information with a high granularity. Exploiting the intrinsic precision-timing capabilities of silicon sensors also requires careful design of the front-end electronics as well as the whole system, particularly clock distribution. The harsh radiation environment and requirement to keep the whole detector as dense as possible will require novel solutions to the on-detector electronics layout. Processing all the data from the HGCAL imposes equally large challenges on the off-detector electronics, both for the hardware and incorporated algorithms. We present an overview of the complete electronics architecture, as well as the performance of prototype components and algorithms. Speaker: Johan Borg (I) Progress of PandaX-III readout electronics 18m The PandaX-III (Particle And Astrophysical Xenon Experiment III) experiment, with the scientific objective of searching for neutrinoless double beta decay, is going to be carried out at the China Jin Ping underground Laboratory (CJPL). In the first phase of the experiment, a Time Projection Chamber (TPC) with 200 kg Xenon gas at the pressure of 10 bar is to be constructed. A total of 82 Micromegas modules using the Microbulk technique will be installed for the two endcaps of the TPC. For each Micromegas module, there are 64 X, 64 Y readout strips and one mesh, which results in 10496 strip signals and 82 mesh signals for one TPC. 
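As a quick cross-check of the channel counts quoted in the PandaX-III abstract above, the short calculation below reproduces them; the assumption of four 64-channel AGET chips per FEC is ours for illustration and is not stated in the abstract.

```python
# Channel bookkeeping for the PandaX-III Phase-I TPC readout (numbers from the abstract).
modules = 82
strips_per_module = 64 + 64           # 64 X + 64 Y strips per Micromegas module
strip_channels = modules * strips_per_module
mesh_channels = modules               # one mesh signal per module
print(strip_channels, mesh_channels)  # 10496 82

# Illustrative only: assuming four 64-channel AGET ASICs per FEC (not stated above),
# 42 FECs would offer 42 * 4 * 64 = 10752 channels, enough for the 10496 strips.
print(42 * 4 * 64 >= strip_channels)  # True
```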
In order to accomplish the readout task for the 10496 strip signals and 82 mesh signals of the Phase-I TPC, an electronics system following a modular and multi-level design concept is proposed. At the bottom level, there are 42 FECs (Front-end Cards) based on the 64-channel AGET ASIC, and 2 MRC (Mesh Readout Card) modules. At the higher level, there are the back-end electronics, including two S-TDCMs (Slave Trigger and Data Concentration Modules) and one MTCM (Master Trigger and Clock Module), which collect the event data and perform the trigger function. Currently the first version of the FEC module and MRC module, as well as a DAQ (Data Acquisition) Board which plays the role of a prototype of the back-end electronics, have been successfully developed. A joint test with a prototype TPC was performed in Shanghai and preliminary results consistent with expectations were obtained. The details of the electronics design and the progress will be described in this paper. Speaker: Dr Changqing Feng (University of Science & Technology of China) Development of Trigger and Readout Electronics for the ATLAS New Small Wheel Detector Upgrade 18m The present small wheel muon detector at ATLAS will be replaced with a New Small Wheel (NSW) detector to handle the increase in data rates and the harsh radiation environment expected at the LHC. Resistive Micromegas and small-strip Thin Gap Chambers will be used to provide both trigger and tracking primitives. Muon segments found at the NSW will be combined with the segments found at the Big Wheel to determine the muon transverse momentum at the first-level trigger. A new trigger and readout system is being developed for the NSW detector. The new system has about 2.4 million trigger and readout channels and about 8,000 front-end boards. The large number of input channels, the short time available to prepare and transmit data, the harsh radiation environment, and the low power consumption requirement all impose great challenges on the design. We will discuss the overall electronics design and studies with various ASIC and board prototypes. Speaker: Daniel Antrim (University of California, Irvine) First Prototype of the Muon Frontend Control Electronics for the LHCb Upgrade: Hardware Realization and Test 18m The muon detector plays a key role in the trigger of the LHCb experiment at CERN. The upgrade of its electronics is required in order to be compliant with the new 40 MHz readout system, designed to cope with future LHC runs at between five and ten times the initial design luminosity. The Service Board System upgrade is aimed at replacing the system in charge of monitoring and tuning the 120'000 readout channels of the muon chambers. The aim is to provide a more reliable, flexible and fast means of control, migrating from the current distributed local control to a centralized architecture based on a custom high-speed serial link and a remote software controller. In this paper we present in detail the new Service Board System hardware prototypes, from the initial architectural description to the board connections, highlighting the main functionalities of the designed devices with preliminary test results.
Speaker: Dr Paolo Fresch (INFN Sezione di Roma) Development of Superconducting Tunnel Junction Photon Detector with Cryogenic Preamplifier for COBAND experiment 18m We present the status of the development of a Superconducting Tunnel Junction (STJ) detector with a cryogenic preamplifier as a far-infrared single-photon detector for the COsmic BAckground Neutrino Decay search (COBAND) experiment. The photon energy spectrum from the radiative decay of the cosmic background neutrino is expected to have a sharp cutoff at the high-energy end, in a far-infrared region ranging from 15 meV to 30 meV. The detector is required to measure an individual photon energy with a sufficient energy resolution, better than 2%, for identifying the cutoff structure, and to be designed for a rocket or satellite experiment. We develop a diffraction grating and an array of Nb/Al-STJ pixels, where each pixel can detect a single far-infrared photon delivered by the grating according to its wavelength. The amplifier is required to have an equivalent noise level of 10 electrons for the Nb/Al-STJ. To achieve a high signal-to-noise ratio of the STJ, we use a preamplifier made with the Silicon-on-Insulator (SOI) technique that can be operated at a low temperature of around 0.3 K. We have developed the Nb/Al-STJ with the SOI cryogenic preamplifier and have tested the detector performance around 0.3 K. The present status of this STJ detector development is reported in detail. Speaker: Prof. Shinhong Kim (University of Tsukuba) Cryogenic light detectors for background suppression: the CALDER project 18m Background rejection plays a key role for experiments searching for rare events, like neutrino-less double beta decay (0$\nu$DBD) and dark matter interactions. Among the several detection technologies that were proposed to study these processes, cryogenic calorimeters (bolometers) stand out for their excellent energy resolution, the ease in achieving large source mass, and their intrinsic radio-purity. Moreover, bolometers can be coupled to a light detector that measures the scintillation or Cherenkov light emitted by interactions in the calorimeter, enabling the identification of the interacting particle (alpha, nuclear recoil or electron) by exploiting the different light emission. This feature allows possible signals to be disentangled from the background produced by all the other interactions that would otherwise dominate the region of interest, preventing the achievement of a high sensitivity. Next-generation bolometric experiments, such as CUPID, demand very competitive cryogenic light detectors. The technology for light detection must ensure an RMS noise resolution lower than 20 eV, a wide active surface (several cm$^2$) and a high intrinsic radio-purity. Furthermore, the detectors have to be multiplexable, in order to reduce the number of electronics channels for the read-out, as well as the heat load for the cryogenic apparatus. Finally they must be characterized by a robust and reproducible behavior, as next generation detectors will need hundreds of devices. None of the existing light detectors satisfies all these requests. In this contribution I will present the CALDER project, a recently proposed technology for light detection which aims to realize a device with all the described features. CALDER will take advantage of the superb energy resolution and natural multiplexed read-out provided by Kinetic Inductance Detectors (KIDs).
These sensors, which have been successfully applied in astrophysics searches, are limited only by their small active surface of a few mm$^2$. For this reason, we are exploiting the phonon-mediated approach: the KIDs are deposited on an insulating substrate featuring a surface of several cm$^2$. Photons emitted by the bolometer interact in the substrate and produce phonons, which can travel until they are absorbed by a KID. The first phase of the project was devoted to the optimization of the KID design, and to the understanding/suppression of the noise sources. For this phase we chose a well-known material for KID applications, aluminum, which according to our detector model allows a noise resolution of about 80 eV RMS to be reached. In the second phase we are investigating more sensitive materials (like Ti, Ti-Al, TiN) which will allow the target sensitivity to be reached. In this contribution I will present the results obtained at the end of the first project phase in terms of efficiency and energy resolution, and I will present the encouraging results obtained at the beginning of the second project phase. Speaker: Dr Nicola Casali (INFN-Roma1) The Mu2e Calorimeter Photosensors 18m The Mu2e experiment at FNAL aims to measure the charged-lepton flavor violating neutrinoless conversion of a negative muon into an electron. The conversion results in a monochromatic electron with an energy slightly below the muon rest mass (104.97 MeV). The calorimeter should confirm that the candidates reconstructed by the extremely precise tracker system are indeed conversion electrons while performing a powerful µ/e particle identification. The baseline version of the calorimeter is composed of two disks filled with 1348 pure CsI crystals of 20 cm length. Each crystal is read out by two large-area custom SiPMs. We translate the calorimeter requirements into a series of technical specifications for the photosensors, which are summarized in the following list: (i) a good photon detection efficiency (PDE) of above 20% for wavelengths around 310 nm, to well match the light emitted by the un-doped CsI crystals; (ii) a large light collection area that in combination with (i) provides a light yield of above 20 p.e./MeV; (iii) a fast rise time; (iv) a narrow signal width to improve pileup rejection; (v) a high gain; and (vi) the capability of surviving in the presence of a 1 Tesla magnetic field, operating in vacuum and in the harsh Mu2e radiation environment. Our solution to all of this is an array of large-area UV-extended Silicon Photomultipliers (SiPMs) connected in a series configuration. Speaker: ivano sarra (I) APPLICATION OF MOBILE TECHNOLOGY TO PHOTOMULTIPLIER TUBE READOUT FOR PARTICLE PHYSICS EXPERIMENTS 18m J. Thomas, A. Loving, J. Kelley, C. Wendt - University of Wisconsin, Madison Water Cherenkov neutrino physics experiments typically utilize thousands of large-area Photomultiplier Tubes (PMTs) distributed around water volumes of size 10^5 m^3. The precision of the physics results depends on the overall enclosed volume, and so larger detectors of order 10^6 m^3 are desirable but presently hindered by the typical large costs involved. As part of the novel CHIPS neutrino detector, which drastically reduces several potentially dominant construction costs, we are developing intelligent, low-cost data acquisition modules that will be installed directly with each PMT.
The new system takes advantage of recent rapid development of ARM chips used in Raspberry Pi[12], BeagleBone[13] (BB) and other single-board computers, because they are very small, inexpensive and consume very little power. More generally, ARM chips are found in almost every mobile phone, are practically bug-free, hugely adaptable and versatile, and can be applied (rather than developed from scratch) to work at the very front end of a particle physics experiment. With a 1GHz clock, cable Ethernet and a micro USB power supply, the single board computers provide a complete suite of functionality. The White Rabbit[15] (WR) system developed at CERN and GSI for a timing distribution network with sub-nanosecond accuracy over Ethernet delivers encoded PPS timing signals. These three technology developments together provide an innovative and very inexpensive electronics platform for neutrino physics. Looking at the detector construction, the PMTs, and the electronics, we expect the final costs to be dominated by the PMTs themselves. Speaker: Prof. jennifer thomas (ucl) Conveners: Gianantonio Pezzullo (INFN-PI) , Igal Jaegle (University of Florida) Commissioning of the CMS Hadron Forward Calorimeters Phase I Upgrade 18m The final phase of the CMS Hadron Forward Calorimeters Phase I upgrade is being performed during the Extended Year End Technical Stop of 2016 – 2017. In the framework of the upgrade, the PMT boxes are being reworked to implement two channel readout in order to exploit the benefits of the multi-anode PMTs in background tagging and signal recovery. The front-end electronics is also being upgraded to QIE10-based electronics which will implement larger dynamic range and a 6-bit TDC to eliminate the background to have an effect on the trigger. Following this major upgrade, the Hadron Forward Calorimeters will be commissioned for operation readiness in 2017. Here we describe the details and the components of the upgrade, and discuss the operational experience and results obtained during the upgrade and commissioning. Calibration and Performance of the ATLAS Tile Calorimeter during the run 2 of the LHC 18m The Tile Calorimeter (TileCal) is a hadronic calorimeter covering the central region of the ATLAS experiment at the LHC. It is a non-compensating sampling calorimeter comprised of steel and scintillating plastic tiles which are read-out by photomultiplier tubes (PMTs). The TileCal is regularly monitored and calibrated by several different calibration systems: a Cs radioactive source that illuminates the scintillating tiles directly, a laser light system to directly test the PMT response, and a charge injection system (CIS) for the front-end electronics. These calibrations systems, in conjunction with data collected during proton-proton collisions, provide extensive monitoring of the instrument and a means for equalizing the calorimeter response at each stage of the signal propagation. The performance of the calorimeter and its calibration has been established with cosmic ray muons and the large sample of the proton-proton collisions to study the energy response at the electromagnetic scale, probe of the hadronic response and verify the calorimeter time resolution. This contribution presents a description of the different TileCal calibration systems with the latest results on their performance and the results on the calorimeter operation and performance during the LHC Run 2. 
Speaker: Dr Oleg Solovyanov (V) Performance of the CMS electromagnetic calorimeter during the LHC Run II 18m Many physics analyses using the Compact Muon Solenoid (CMS) detector at the LHC require accurate, high resolution electron and photon energy measurements. Particularly important are decays of the Higgs boson resulting in electromagnetic particles in the final state, as well as searches for very high mass resonances decaying to energetic photons or electrons. Following the excellent performance achieved in Run I at center-of-mass energies of 7 and 8 TeV, the CMS electromagnetic calorimeter (ECAL) is operating at the LHC with proton-proton collisions at 13 TeV center-of-mass energy. The instantaneous luminosity delivered by the LHC during Run II has achieved unprecedented values, using 25 ns bunch spacing. The average number of concurrent proton-proton collisions per bunch-crossing (pileup) has reached up to 40 interactions in 2016 and may increase further in 2017. These high pileup levels necessitate a retuning of the ECAL readout and trigger thresholds and reconstruction algorithms, to maintain the best possible performance in these more challenging conditions. The energy response of the detector must be precisely calibrated and monitored to achieve and maintain the excellent performance obtained in Run I in terms of energy scale and resolution. A dedicated calibration of each detector channel is performed with physics events exploiting electrons from W and Z boson decays, photons from pi0/eta decays and from the azimuthally symmetric energy distribution of minimum bias events. Speaker: Chia-Ming Kuo (National Central University (on behalf of the CMS Collaboration)) The Development and Performance of a 3D Imaging Calorimeter of DAMPE 18m The Dark Matter Particle Explorer (DAMPE) satellite has been operating in space for more than one year, and considerable science data have already been obtained. The BGO Electromagnetic Calorimeter (BGO ECAL) of DAMPE is a total absorption calorimeter that allows for a precise three-dimensional imaging of the shower shape. It provides a good energy resolution (<1% at 200 GeV) and high electron/hadron discrimination (>10^5). An engineering-qualified model was built and tested using cosmic rays and high-energy beams with energies ranging from 1 GeV to 250 GeV. The status of the BGO calorimeter in space will also be presented. Speaker: Yunlong Zhang (University of Science and Technology of China) The Semi-Digital Hadronic Calorimeter for Future Leptonic Collider experiments 18m The successful running of the technological prototype of the Semi-Digital Hadronic CALorimeter (SDHCAL) proposed to equip the future ILD detector of the ILC has provided excellent results in terms of energy linearity and resolution and also tracking capabilities. The stability of the prototype with time has also been successfully tested. To completely validate the SDHCAL option for ILD, new R&D activities have started. The aim of these activities is to demonstrate the ability to build large detectors (>2 m^2). The construction of efficient detectors of such a size necessitates additional efforts to ensure the homogeneity and the efficiency of these large detectors. An important aspect of the new activities is to use a new version of the HARDROC ASIC. The new version has several advantages with respect to the one used in the SDHCAL prototype, such as zero suppression and the I2C protocol. Another development is the DAQ electronic board. A new one is proposed.
In addition to a reduced size to cope with the ILD requirements, new features are being implemented. A TCP/IP protocol is adopted in the new card to ensure the coherency of the data transmission. The TTC protocol is also to be used to distribute the clock to the different ASIC on the electronic board. The new DAQ board is being conceived to have the capability to address up to 432 ASICs of 64 channels each. Designs for both the DAQ board and the electronic boards are being finalized and the first boards will be produced soon while 600 of the new HARDROC were produced and tested. A new cassette, to host the active layer while being as before a part of the absorber, is being also conceived. The challenge is to maintain a good rigidity to ensure the perfect contact between the electronic board and the GRPC and also to facilitate the dissipation of the ASIC heating. Finally, the mechanical structure of the new prototype will use a new welding technique to reduce the dead zones and provide less deformed structure. Few attempts using the electron beam welding technique to build small setup have been realized at CERN. Speaker: imad laktineh (IPNL) Conveners: Prof. Jin Li (IHEP/THU) , Prof. Wang Yi (Tsinghua University) The Belle II / SuperKEKB Commissioning Detector - Results from the First Commissioning Phase 18m The SuperKEKB energy-asymmetric e+e- collider has now started commissioning and is working towards its design luminosity of 8x10^35cm-2s-1. In spring 2016, SuperKEKB circulated beams in both rings during the first phase of commissioning, with the Belle II detector at the roll-out position. A dedicated array of sensors collectively called BEAST II was installed around the SuperKEKB interaction point to monitor and study beam background conditions. These measurements determine particle loss rates contributing to the beam life time, expected dose rates and thus possible effects on the survival time of the inner detectors, and both beam and physics background-induced particle rates, which impact detector operation and physics analysis. We will discuss the BEAST II setup, consisting of a total of seven different detector systems, each specialized for the measurement of different aspects of the beam background. We will present results on beam background for different accelerator conditions and studies of the injection background originating from the continuous "top up" injection of SuperKEKB. An outlook for the second phase of the commissioning, where data will be taken with the Belle II detector with a modified inner detector system specialized for background measurements, partially derived from the first phase of BEAST II, will also be given. Speaker: Dr Gabriel Miroslav (Max-Planck-Institute for Physics) Integration and characterization of the vertex detector in SuperKEKB commissioning Phase 2 18m As an upgrade of asymmetric e+e- collider KEKB, SuperKEKB aims to increase the peaking luminosity by a factor of 40 to 8*10^{35}cm^{-2}s^{-1}. The SuperKEKB commissioning is achieved in 3 phases. The Phase 1 was successfully finished in June.2016. Now the commissioning is working towards the Phase 2 targeting to reach the luminosity of 1*10^{34}cm^{-2}s^{-1}. In Phase 2, the beam induced background versus luminosity and beam current will be investigated, to ensure a radiation safe operation environment for the Belle II vertex detector during the Physics data taking in Phase 3. The final focusing magnets will be installed and partial Belle II detector will be rolled in. 
Close to the beam pipe, 2 pixel and 4 double-sided strip detector layers will be installed, together with the dedicated radiation monitors FANGS, CLAWS and PLUME, which aim at investigating the backgrounds near the interaction point. The Phase 2 vertex detector integration was rehearsed and a combined beam test was carried out at DESY. In this talk, the status of the vertex detector and the beam test results are presented. The SHiP experiment at CERN 18m SHiP is a new general-purpose fixed-target facility, whose Technical Proposal has recently been reviewed by the CERN SPS Committee and by the CERN Research Board. The two boards recommended that the experiment proceed further to a Comprehensive Design phase in the context of the new CERN working group "Physics Beyond Colliders", aiming at presenting a CERN strategy for the European Strategy meeting of 2019. In its initial phase, the 400 GeV proton beam extracted from the SPS will be dumped on a heavy target with the aim of integrating 2×10^20 pot in 5 years. A dedicated detector, based on a 30 m long and 5x10 m wide vacuum tank followed by a spectrometer and particle identification detectors, will allow probing a variety of models with light long-lived exotic particles and masses below O(10) GeV/c2. Another dedicated detector, based on the OPERA emulsion brick technology, will allow the study of neutrino cross-sections and angular distributions, including tau neutrino deep inelastic scattering cross-sections. The talk will focus on the detector design and on results from the beam tests that are being carried out. Speaker: Murat Ali Guler Conveners: christophe de La Taille (OMEGA) , Dr qiang wang (ihep) The Global Control Unit for the JUNO front-end electronics 18m At the core of the Jiangmen Underground Neutrino Observatory (JUNO) front-end and readout electronics is the Global Control Unit (GCU), a custom, low-power hardware platform with glue logic on board, able to perform several different tasks ranging from selective readout and transmission to remote peripheral control. The hardware inaccessibility after installation, the timing resolution and synchronization among channels, the trigger generation and data buffering, the supernova event data storage, and the data readout bandwidth requirements are all key factors that are reflected in the GCU architecture. The main logic of the GCU is in an FPGA that interfaces with a custom-made ASIC that continuously digitizes the signal from the photomultiplier tube (PMT). The proposed paper gives a detailed overview of the main GCU functionalities and then focuses on the prototype validation and the first traces read out from a PMT. Speaker: Dr Davide Pedretti (University of Padova - INFN Laboratori Nazionali di Legnaro) Design of a Data Acquisition Module Based on PXI for Waveform Digitization 18m Waveform digitization is more and more popular for readout electronics in particle and nuclear physics experiments. A data acquisition module for waveform digitization is investigated in this paper. The module is designed on a 3U PXI (PCI eXtensions for Instrumentation) shelf, which can manage the measurement of two channels of waveform digitization for detector signals. It is equipped with a two-channel ADC (Analog to Digital Converter) with 12-bit resolution and a sampling rate of up to 1.8 Gsamples per second, and an FPGA (Field Programmable Gate Array) for control and data buffering. Meanwhile, a CPLD is employed to implement the PXI interface communication via the PXI Bus.
The electronics performance of this system was tested. The bandwidth of the system is more than 450MHz. The ENOB (Effective Number Of Bits) is up to 9.31 bits for an input signal from 5 MHz to 150 MHz and the ENOB is still above 8.17 bits for an input up to 400 MHz. The results show that the module can be successfully used in the particle and nuclear physics experiment. Speaker: Dr Zhe Cao (State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China) A compact size, 64-channel, 80 MSPS, 14-bit dynamic range ADC module for the PANDA Electromagnetic Calorimeter 18m A compact size, 64-channel, 80 MSPS, 14-bit dynamic range ADC modules for the scintillating electromagnetic calorimeter of PANDA were developed and used for testing in various detector readout set-ups [1]. To minimize cabling bulk, the modules are planned to be placed inside of the PANDA detector volume, where they will be exposed to magnetic field of 2T and a non-negligible radiation flux. The module performs signal filtration, extracts important signal parameters and allows for resolving and parametrizing overlapping pulses. A dual FPGA structure and a hardwired arbitration circuit allows for resolving potentially catastrophic situations caused by radiation-induced (SEU) configuration damages. The FPGAs are prepared for self-detection and recovery from SEU. Processed data are pushed to the optical link running at 2 Gbit/s. The ADC module is compliant with a "Synchronization Of Data Acquisition" (SODA) System, which allows for obtaining defined latencies with a reference time accuracy of 50 ps [2]. The paper describes construction details and test environments. The results of performance test, including dynamic range, linearity, magnetic field and preliminary radiation sustainability are also presented. Speaker: Dr Pawel Marciniewski (Uppsala University) Development of CaRIBOu: a modular readout system for pixel sensor R&D 18m The ATLAS experiment is planning to build and install a new all-silicon Inner Tracker (ITk) for the High-Luminosity LHC (HL-LHC) upgrade. Extensive R&D on pixel sensors based on High-Voltage CMOS (HV-CMOS) process is ongoing. Given the potential advantages of this technology compared with the traditional planar pixel sensors, several prototypes with different pixel type have been designed and fabricated in the 180nm and 350nm HV-CMOS processes provided by Austria Microsystems (ams). CaRIBOu (Control and Readout Itk BOard) is a modular readout system developed to test silicon-based pixel sensors. It currently includes several different front-end chip boards with compatible interface for pixel sensor mounting, a CaR (Control and Readout) board to provide power, bias, configurational signals and calibration pulse for sensors under test, a Xilinx ZC706 development board for data and command routing, and a host computer for data storage and command distribution. A software program has been developed in Python to control the CaRIBOu system and implement the tuning algorithm for different pixel sensors. CaRIBOu has been used in various testbeam at CERN and Fermilab for the HV-CMOS sensors fabricated in the ams HV-CMOS 180nm and 350nm processes since the end of 2015. We successfully integrated the ATLAS FELIX (Front-End LInk eXchange) DAQ system into CaRIBOu by using a FELIX PCIe card for the testbeam data readout, slow control and clock distribution through two GBT optical links instead of the standard Gigabit Ethernet interface of CaRIBOu. 
The testbeam results have demonstrated that the CaRIBOu readout system is very versatile for the test of different pixel sensors, and works very well with the FELIX DAQ system. Further development is ongoing to adapt it to different pixel sensors (e.g. MIMOSA and CLICpix), to implement multi-channel readout, and to make it available to various lab test stands. Speaker: Mr Hongbin Liu (University of Science and Technology of China) Study of Radiation-induced Soft-errors in FPGAs for Applications at High-luminosity e+e- Colliders 18m Static RAM-based Field Programmable Gate Arrays (SRAM-based FPGAs) [1, 2] are widely adopted in Trigger and Data Acquisition (TDAQ) systems of High-Energy Physics (HEP) experiments for implementing fast logic due to their re-configurability, large real-time processing capabilities and embedded high-speed serial IOs. However, these devices are sensitive to radiation effects such as single event upsets (SEUs) or multiple bit upsets (MBUs) in the configuration memory, which may alter the functionality of the implemented circuit. Therefore, they are normally employed only in off-detector regions, where no radiation is present. Special families of SRAM-based FPGAs (e.g. the Xilinx Virtex-5QV) have been designed for applications in radiation environments, but their excessive cost (few 10k USD), with respect to their standard counterpart ($\sim$ 500 USD), usually forbids their usage in many applications, including HEP. Therefore, there is a strong interest in finding solutions for enabling the usage of standard SRAM-based FPGAs also on-detector. Methods based on modular redundancy and periodic refresh of the configuration, i.e. configuration scrubbing, are used in order to mitigate single event effects, which become more significant as the technological scaling proceeds towards smaller feature sizes. In fact, latest devices also include dedicated circuitry implementing error correcting codes for mitigating configuration errors. The expected bit configuration upset rate is valuable information for choosing which protection strategy, or which mixture of strategies, to adopt. Typically, test campaigns are carried out at dedicated irradiation facilities by means of heavy ions, proton and neutron beams [3,4,5] and they permit to determine the particle to bit error cross section. However, a reliable prediction of the upset rate, and of radiation effects in general, requires the knowledge of the cross section as function of the particle species and their spectra and it depends on a detailed knowledge of the radiation fluxes. Often such information is not available with sufficient precision, and when possible an in situ (or in flight for space applications) measurement of the upset rate is highly recommended. For instance, experiments at the Large Hadron Collider have been monitoring SEUs in readout control FPGAs [6], experiments in space have been launched in order to measure single event effects rates and compare them to predictions based on cross sections [7]. Furthermore, over the last decade, FPGA vendors have been carrying out experiments aimed at measuring SEUs induced by atmospheric neutrons in their devices [8]. In February 2016 the SuperKEKB [9] $e^+e^-$ high-luminosity ($8\cdot10^{35} cm^{-2} s^-1$) collider of the KEK laboratory (Tsukuba, Japan) has been commissioned and it has been operated until June 2016 completing the so-called Phase-1. 
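The soft-error abstract above discusses mitigating and monitoring configuration upsets through modular redundancy and configuration scrubbing, i.e. periodically reading back the FPGA configuration and comparing it against a reference. The sketch below is a generic illustration of that bookkeeping only, not the authors' code; read_configuration_frames() is a hypothetical placeholder for whatever JTAG readback mechanism a given setup provides, and classifying multiplicity per frame is used here as a simple proxy for MBU adjacency.

```python
# Generic sketch of configuration-readback SEU counting (illustrative only).
# read_configuration_frames() is a hypothetical stand-in for a JTAG readback call.

def count_upsets(golden_frames, readback_frames):
    """Compare readback configuration frames against a golden copy and classify
    flipped bits into single-bit (SBU) and multi-bit (MBU) upsets, using the
    per-frame flip multiplicity as a simple proxy for adjacency."""
    sbu, mbu = 0, 0
    for golden, read in zip(golden_frames, readback_frames):
        flipped = bin(golden ^ read).count("1")  # number of bit flips in this frame
        if flipped == 1:
            sbu += 1
        elif flipped > 1:
            mbu += 1
    return sbu, mbu

# Usage idea: read back periodically and accumulate the counters alongside the
# logged power-rail currents, as described in the abstract.
# sbu, mbu = count_upsets(golden, read_configuration_frames())
```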
In this work, we present direct measurements of radiation-induced soft-errors in a SRAM-based FPGA device installed at a distance of $\sim$ 1 m from the SuperKEKB beam pipe. We designed a dedicated test board hosting a Xilinx Kintex-7 FPGA. In order to distinguish between FPGA failures from those of other devices, our board hosts only passive components other than the device under test. Power and configuration are fed to the board over dedicated cabling from a remote control room. A single board computer manages configuration and read back via a JTAG connection. During the SuperKEKB operation, we continuously read back the FPGA configuration memory in order to spot single and multiple bit upsets (SBUs and MBUs) and we logged power consumption at the different power rails of the device. Since the operation current of the SuperKEKB collider spanned a range between 50 and 500 mA for both the electron and positron rings, the experimental scenario allowed us to perform measurements in different radiation conditions. We discuss the measured FPGA configuration error rate for both SBUs and MBUs and the power consumption variation in the view of applications in Belle2, but also taking into account other experiments operating in similar radiation conditions. Our study will continue in 2018 during the Phase-2 operation of the SuperKEKB collider, when the ring currents will increase and the final focusing magnets will be installed for providing $e^+e^-$ collisions. The background radiation is expected to rise as well as related effects in FPGAs. This work is part of the ROAL SIR project funded by the Italian Ministry of Research (MIUR). References [1] Xilinx Inc., "Virtex UltraScale FPGAs Data Sheet: DC and AC Switching Characteristics," DS893 (v1.7.1) April 4, 2016 [2] Altera Corp., "Stratix 10 Device Overview," S10-OVERVIEW, 2015.12.04 [3] D. M. Hiemstra and V. Kirischian, "Single Event Upset Characterization of the Kintex-7 Field Programmable Gate Array Using Proton Irradiation," 2014 IEEE Radiation Effects Data Workshop (REDW), Paris, 2014, pp. 1-4. doi: 10.1109/REDW.2014.7004593 [4] M.J. Wirthlin, H. Takai and A. Harding, "Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter ," in Proc. of Topical Workshop on Electronics for Particle Physics 2013, Perugia, Italy [5] T. Higuchi, M. Nakao and E. Nakano, "Radiation tolerance of readout electronics for Belle II," in Proc. of Topical Workshop on Electronics for Particle Physics 2011, Vienna, Austria [6] K. Røed, J. Alme, D. Fehlker, C. Lippmann and A. Rehman, "First measurement of single event upsets in the readout control FPGA of the ALICE TPC detector," in Proc. of Topical Workshop on Electronics for Particle Physics 2011, Vienna, Austria [7] A. Samaras, A. Varotsou, N. Chatry, E. Lorfevre, F. Bezerra and R. Ecoffet, "CARMEN1 and CARMEN2 Experiment: Comparison between In-Flight Measured SEE Rates and Predictions," 2015 15th European Conference on Radiation and Its Effects on Components and Systems (RADECS), Moscow, 2015, pp. 1-6. doi: 10.1109/RADECS.2015.7365590 [8] Xilinx Inc., "Continuing Experiments of Atmospheric Neutron Effects on Deep Submicron Integrated Circuits," WP286 (v2.0) March 22, 2016 [9] I. Adachi, "Status of Belle II and SuperKEKB," Journal of Instrumentation, Volume 9, July 2014 Speaker: Dr Giordano Raffaele (University of Naples and INFN) Conveners: K.K. 
Gan (The Ohio State University) , Yasuo Arai (High Energy Accelerator Research Organization (KEK)) An SOI pixel sensor with in-pixel binary counters 18m Results from an SOI pixel sensor with in-pixel binary counters are reported. It is well known that transitions of the output pattern within each counter induce a considerably large spurious signal on the nearby charge-collection electrodes, which interferes with the detection of real signals. Among the various remedies investigated, the Double-SOI process proved to be an effective cure, thanks to advancements in the semiconductor industry. The design concept of CPIXTEG3b, in particular the usage of the shielding layer enabled by Double-SOI, is covered in this talk. S-curve measurements reveal an ENC of around 60 e- and a threshold distribution sigma of less than 20 e-. The pixel array has demonstrated an excellent feature of zero noise at a low threshold of around 800 e-. The depletion of the sensor and the inefficiency at the square pixel boundaries have been studied using a synchrotron X-ray beam. The depletion depth reaches 130 um under -100 V bias. Charge sharing at the edge of two adjacent pixels can be corrected by properly setting the threshold, while at the corners where 4 pixels adjoin, specific comparison logic is needed to cope with it. The success of CPIXTEG3b opens a promising prospect for applications such as photon counting for synchrotron light sources and charged-particle tracking for future e+e- colliders. Speaker: Dr Yunpeng LU (Institute of High Energy Physics, CAS) Fine-Pixel Detector FPIX Realizing Sub-micron Spatial Resolution Developed Based on FD-SOI Technology 18m Monolithic pixel devices are attractive in various aspects for particle detector applications. One of the notable features is that the pixel size can be reduced without constraints from the metal bumps which limit the pixel size of hybrid pixel devices, typically to 50 um. We are developing monolithic pixel devices utilizing the Lapis 0.20 um FD-SOI (Fully-Depleted Silicon-on-Insulator) technology. FPIX, a fine-pixel detector, has been designed to demonstrate the capability of SOI monolithic pixels in view of the excellent spatial resolution achievable. Consisting of eight on-pixel FETs, FPIX realizes a pixel size of 8x8 um in a 128x128 matrix (1x1 mm active area) in a chip size of 3 mm square. The signals are extracted in a rolling-shutter mode and digitized by external ADCs. There are eight parallel readout lines; therefore each ADC handles signals from 16 columns of 128 rows. The 12-bit digitization requires 200 ns, corresponding to a frame readout time of 0.5 ms. FPIXs have been fabricated on various SOI handle wafer types: single SOIs in Cz and FZ, p- and n-types, and also double SOI (p-type Cz). This variety is an outstanding feature, allowing us to select the sensor type and resistivity best suited for the application. Among them, double SOI has been developed for, among other reasons, radiation tolerance. The second active layer is used to compensate for the threshold shifts caused by holes trapped in the BOX (buried oxide) layer due to radiation. We have evaluated the tracking performance of a system consisting of four single SOI FPIX devices of FZ p-type (25 kOhm cm, 500 um thickness) in a 120 GeV hadron beam at Fermilab. A double SOI FPIX (1 kOhm cm, 300 um thickness) irradiated to 100 kGy has also been tested. We calculated the residual distribution of the hit position with respect to the track reconstructed from the other three devices.
The residual distribution is well fitted by a Gaussian function with a standard deviation of 0.87 um. Taking into account the uncertainty in the track position, we are confident that sub-micron spatial resolution has been achieved by FPIX. We have also observed signals from the 100 kGy irradiated FPIX corresponding to MIP particles. We report the details of the FPIX design and the performance evaluation. Speaker: Kazuhiko Hara (University of Tsukuba) A monolithic pixel sensor with fine space-time resolution based on Silicon-on-Insulator technology for the ILC vertex detector 18m Silicon-on-insulator (SOI) wafer technology can be used to achieve a monolithic pixel detector, in which both a semiconductor pixel sensor and readout electronics are integrated in the same wafer. We are developing an SOI pixel sensor, SOFIST (SOI sensor for Fine measurement of Space and Time), optimized for the vertex detector system of the International Linear Collider (ILC) experiment. This sensor has a pixel size of 20$\times$20 um$^2$ with fine position resolution for identifying the decay vertices of short-lifetime particles. The pixel circuit stores both the signal charge and the timing information of the incident particles. The sensor can separate hit events by recording timing information during bunch-train collisions of the ILC beam. Each pixel has multiple stages of analog memories and time-stamp circuits for accumulating multiple hit events. SOFIST Ver.1, the first prototype sensor chip, was fabricated using the 0.2 $\mu$m SOI process of LAPIS Semiconductor. The prototype chip consists of 50$\times$50 pixels and column-ADC circuits in a chip size of 3x3 mm$^2$. We have designed the pixel circuit for the charge signal readout with a pre-amplifier circuit and 2 analog memories. We measured the sensor position resolution with a 120 GeV proton beam at the Fermilab Test Beam Facility in January 2017. We observed a position resolution of 3 $\mu$m, which meets the requirement for a pixel sensor of the ILC vertex detector. In 2016, we submitted SOFIST Ver.2, which measures the hit timing information. We are designing SOFIST Ver.3, storing both the signal charge and the timing information within a pixel area of 20$\times$20 $\mu$m$^2$. We adopt 3D stacking technology, which implements an additional circuit layer on the SOI sensor chip. The additional layers are connected electrically by advanced micro-bump technology, which can place bumps with a pitch of 5 $\mu$m. In this presentation, we report the status of the development and the evaluation of the SOFIST prototype sensors. Speaker: Shun Ono (KEK) Secondary electron yield of nano-thick aluminum oxide and its application on MCP detector 18m The secondary electron properties of nano-thick aluminum oxide have been studied. The improvement of MCP assembly performance through coating with aluminum oxide is investigated. The gain, the charge resolution and the peak-to-valley ratio of the MCP detector are improved. Two possible solutions are proposed to improve the maximum yield with reduced optimal energy of secondary electron emission materials. Speaker: Dr Yan Baojun (IHEP) Signal to noise ratio of Low Gain Avalanche Detector 18m Low gain avalanche detectors (LGADs) have attracted considerable attention as a new concept in silicon radiation detectors. These devices are based on reach-through avalanche photodiodes and provide a moderate gain (gain~10).
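The FPIX abstract above quotes a 0.87 μm Gaussian residual width and argues that, once the track uncertainty is accounted for, the intrinsic resolution is sub-micron. The snippet below only illustrates that standard unfolding step (quadrature subtraction of the telescope pointing error); the 0.4 μm track error used in the example is an assumed value for illustration, not a number given in the abstract.

```python
import math

def intrinsic_resolution(sigma_residual_um, sigma_track_um):
    """Unfold the telescope pointing error from the measured residual width
    (quadrature subtraction), a standard step when quoting intrinsic resolution."""
    return math.sqrt(sigma_residual_um**2 - sigma_track_um**2)

# 0.87 um residual width is quoted in the abstract; the 0.4 um track error is an
# assumed example value for illustration only.
print(intrinsic_resolution(0.87, 0.4))  # ~0.77 um, i.e. still sub-micron
```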
Compared with general avalanche photodiodes (APDs), LGADs show a remarkable improvement in the signal-to-noise ratio (SNR), which makes them more suitable for detecting high-energy charged particles. Moreover, LGADs have good time resolution, so they can be used as sensors for tracking. In this paper, the SNR of the LGAD is shown by theoretical methods to be better than that of APD and PIN detectors. Our LGADs were also characterized. The punch-through voltage is 45 V and the breakdown voltage is 65 V. The gain is almost independent of the bias voltage and temperature below the breakdown voltage. The SNR reaches a maximum of 320 at a bias voltage of 64 V, where the gain is 4. Speaker: Prof. Guangqing Yan (Beijing Normal University) Discussion Meeting Room305A Conveners: Hugo Delannoy (Interuniversity Institute for High Energies (ULB-VUB)) , Dr Jianbei Liu (University of Science and Technology of China) Performance of Resistive Plate Chamber operated with new environmental friendly gas mixtures 18m Resistive Plate Chamber (RPC) detectors are widely used thanks to their excellent time resolution and low production cost. At the CERN LHC experiments, the large RPC systems are operated in avalanche mode thanks to a Freon-based gas mixture containing C2H2F4, SF6 and iC4H10. Unfortunately C2H2F4 and SF6 are considered greenhouse gases with a high impact on the environment. Furthermore, C2H2F4 is also subject to European regulations aiming at a gradual phase-out from production in the near future, which could induce price instability and uncertainty in product availability. The search for new environmentally friendly gas mixtures is therefore advisable for reducing GHG emissions and costs, as well as for optimizing RPC performance and possible aging issues. Several hydrofluorocarbons (HFCs) and hydrofluoroolefins (HFOs) with a global warming potential (GWP) lower than that of C2H2F4 have been studied as possible replacements. More than 60 new environmentally friendly gas mixtures based on these gases, with the addition of inert components, have been tested on single-gap RPCs by measuring the detector performance in terms of efficiency, streamer probability, induced charge, cluster size and time resolution. Evaluations of the quenching and electronegative capacities of the selected eco-friendly gas candidates have been deduced by comparison of the RPC performance. Particular attention has been paid to the possibility of maintaining the current LHC RPC operating conditions (i.e. the currently used applied voltage and front-end electronics) in order to be able to use new gas mixtures for RPC systems even when the common infrastructure (i.e. high voltage and detector components) cannot be replaced. A complete replacement of C2H2F4 with HFOs does not give satisfactory results using the current LHC detector front-end electronics and high-voltage system. However, reasonable avalanche operation is achievable with some of the low-GWP HFCs tested. It has been observed that methane (C1) and ethane (C2) molecular structures allow direct operation at applied high voltages similar to the ones currently used at the LHC experiments. On the contrary, propane or propene structures (C3 without or with double bonds) require the addition of Argon or Helium. Unfortunately, mixtures with Argon and Helium show a large fraction of streamers, well above the tolerable limit for safe and long-term operation at the LHC experiments.
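To make the environmental motivation in the RPC abstract above concrete, the short calculation below estimates the GWP of a gas mixture as the concentration-weighted sum of its components. The 95.2/4.5/0.3 composition is the commonly quoted standard LHC RPC mixture and the GWP figures are approximate literature (IPCC AR4-era) values; both are illustrative assumptions rather than numbers taken from the abstract.

```python
# Illustrative estimate of the effective GWP of an RPC gas mixture as a
# volume-fraction-weighted sum. Composition and GWP values are approximate,
# commonly quoted figures, not numbers from the abstract above.
mixture = {          # (volume fraction, approximate 100-year GWP)
    "C2H2F4 (R-134a)": (0.952, 1430),
    "iC4H10":          (0.045, 4),
    "SF6":             (0.003, 22800),
}

effective_gwp = sum(frac * gwp for frac, gwp in mixture.values())
print(round(effective_gwp))  # ~1430, dominated by C2H2F4 plus the small SF6 admixture
```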
Encouraging results have been obtained with a partial (50%) substitution of the C2H2F4 with HFO and the addition of a small quantity of Helium, or by using HFC-based gas mixtures. The present RPC gas mixture is therefore not easily replaced by another 3-component gas mixture, but encouraging results have been obtained with 4-6 component gas mixtures. If there are no constraints on the RPC design and infrastructure, operation in avalanche mode with an HFO-based gas mixture can be obtained at a higher electric field and with dedicated electronics. Speaker: Beatrice Mandelli (CERN) High tracking performance in 3D with gaseous pixel detectors based on the Timepix3 chip. 18m Our group is developing gaseous pixel detectors by using Micromegas-based amplification structures on top of CMOS pixel readout chips of the Medipix family. By means of wafer post-processing techniques we add a spark-protection layer and a grid to create the amplification region above the chip. An inserted gas layer and cathode plane above the grid create a complete gaseous detector able to reconstruct 3D track segments thanks to the TDC-per-pixel topology, which enables the recording of the drift time. By fitting the track segments, we obtain the resolutions for the position and angle. Using a small-scale prototype of the Timepix3 chip, we have demonstrated high tracking performance in [1]. However, the resolution along the drift direction is dominated by timewalk. The existing Timepix3 chip, thanks to the simultaneous measurement of the time-of-arrival (ToA) and of the charge via time-over-threshold (ToT), allows corrections of the remaining timewalk effects, further improving the resolution. We have developed a gaseous pixel detector based on the Timepix3 chip. The detector was operated at the SPS at CERN in order to measure the tracking performance. I will report on the timewalk correction obtained with real data from a particle beam. The results obtained make this detector the most precise gaseous detector to date for measuring the creation position of individual ionisation electrons. **References** [1] S. Tsigaridas, et al., Precision tracking with a single gaseous pixel detector, Nucl. Instr. and Meth. A 795 (2015) 309-317. Speaker: Stergios Tsigaridas (Nikhef) GridPix detector with Timepix3 ASIC 18m GridPix detectors combine the advantages of a high-granularity readout based on a pixel ASIC with a Micromegas gas amplification stage. By producing the Micromegas with photolithographic post-processing techniques directly on the ASIC, a very good alignment of grid holes with readout pixels can be reached. Thus, the charge avalanche started by a single primary electron can be collected and digitized by a single pixel, giving excellent spatial resolution. Also, the energy resolution improves because of the primary electron counting instead of charge summation. After demonstrating the potential of the GridPix detector in several environments, a new ASIC, Timepix3, has been designed and produced. It overcomes its predecessors' limitations. Most notably, it allows for multi-hit readout and for simultaneous charge and time measurement in each pixel. While preparing for the new generation of GridPix detectors, the design and the production techniques of the grid were also revised and improved. A first detector was built with the new Timepix3-based GridPix. It was tested with different kinds of ionization sources, among which are radioactive sources and a laser setup.
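The Timepix3 tracking abstract above notes that timewalk dominates the drift-direction resolution and that the simultaneous ToA/ToT measurement allows it to be corrected. A common way to do this, shown schematically below, is to fit an empirical ToT-dependent delay on calibration data and subtract it hit by hit; the functional form and parameter values here are generic assumptions for illustration, not the calibration actually used in that work.

```python
# Schematic per-hit timewalk correction using Timepix3-style ToA/ToT information.
# The 1/ToT functional form and the parameters below are illustrative assumptions,
# not the calibration of the detector described in the abstract.

def timewalk_correction_ns(tot_ns, a=80.0, t0=15.0):
    """Empirical extra delay (ns) accumulated by small pulses: large for hits with
    ToT just above threshold, vanishing for large ToT."""
    return a / (tot_ns - t0) if tot_ns > t0 else 0.0

def corrected_time_ns(toa_ns, tot_ns):
    """Subtract the ToT-dependent delay from the measured time of arrival."""
    return toa_ns - timewalk_correction_ns(tot_ns)

# Small pulses get a larger correction than big ones:
print(corrected_time_ns(100.0, 25.0))   # 100 - 8.0 = 92.0 ns
print(corrected_time_ns(100.0, 415.0))  # 100 - 0.2 = 99.8 ns
```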
These first measurements underline the improvements of the system and will be presented in the conference. As a possible application a design for a TPC endplate covered with GridPixes for an ILC experiment will be discussed. Speaker: Dr Peter Kluit (Nikhef) Progress in Room-Temperature and Cryogenic Resistive THGEM-based detectors 18m Future experiments in Particle and Astro-particle Physics pose a growing demand for cost-effective large-area imaging detectors, capable of operating stably over a broad range of conditions. Promising candidates are detectors based on the Thick Gas Electron Multiplier (THGEM) principle. Among them are: the cascaded-THGEM, WELL, Resistive-WELL (RWELL) and the recently introduced Resistive-Plate WELL (RPWELL) detector. It is a single-sided THGEM electrode coupled to a segmented readout anode through a sheet of large bulk resistivity. Laboratory and accelerator studies, performed in Ne- and Ar-based gas mixtures at room temperature, have demonstrated the large dynamic range (from one to several thousand electrons) of this few-millimeter thick single-element multiplier, high achievable gains, sub-mm localization resolution and discharge-free operation with high detection efficiency over a broad particle-flux range. Results from new detector prototypes, 500x500 mm2 in size, will be presented. Originally, the main potential applications focus on particle tracking and sampling elements in digital hadron calorimetry. We will present and discuss new detector concepts for two other potential applications. A large dynamic-range RPWELL-based UV-photon detector, comprising a multiplier coated with a reflective CsI photocathode – with potential applications for RICH and a cryogenic RPWELL-based detector for UV-photon and charge recording in dual-phase noble-gas TPCs – with potential applications in dark-matter searches and neutrino physics. The characteristics of these detectors will be presented. Speaker: Shikma Bressler (Weizmann Institute of Science) Common software for controlling and monitoring the upgraded CMS Level-1 trigger 18m The Large Hadron Collider restarted in 2015 with a higher centre-of-mass energy of 13 TeV. The instantaneous luminosity is expected to increase significantly in the coming years. An upgraded Level-1 trigger system was deployed in the CMS experiment in order to maintain the same efficiencies for searches and precision measurements as those achieved in 2012. This system must be controlled and monitored coherently through software, with high operational efficiency. The legacy system was composed of a large number of custom data processor boards; correspondingly, only a small fraction of the software was common between the different subsystems. The upgraded system is composed of a set of general purpose boards, that follow the MicroTCA specification, and transmit data over optical links, resulting in a more homogeneous system. The associated software is based on generic components corresponding to the firmware blocks that are shared across different cards, regardless of the role that the card plays in the system. A common database schema is also used to describe the hardware composition and configuration data. Whilst providing a generic description of the upgrade hardware, this software framework must also allow each subsystem to specify different configuration sequences and monitoring data depending on its role. 
We present here, the design of the control software for the upgrade Level-1 Trigger, and experience from using this software to commission the upgraded system. Speaker: Giuseppe Codispoti (U) Automated load balancing in the ATLAS high-performance storage software 18m The ATLAS experiment collects proton-proton collision events delivered by the LHC accelerator at CERN. The ATLAS Trigger and Data Acquisition (TDAQ) system selects, transports and eventually records event data from the detector at several gigabytes per second. The data are recorded on transient storage before being delivered to permanent storage. The transient storage consists of high-performance direct-attached storage servers accounting for about 500 hard drives. The transient storage operates dedicated software in the form of a distributed multi-threaded application. The workload includes both CPU-demanding and IO-oriented tasks. This paper presents the original application threading model for this particular workload, discussing the load-sharing strategy among the available CPU cores. The limitations of this strategy were reached in 2016 due to changes in the trigger configuration involving a new data distribution pattern. We then describe a novel data-driven load-sharing strategy, designed to automatically adapt to evolving operational conditions, as driven by the detector configuration or the physics research goals. The improved efficiency and adaptability of the solution were measured with dedicated studies on both test and production systems. This paper reports on the results of those tests which demonstrate the capability of operating in a large variety of conditions with minimal user intervention. Speaker: Le Goff Fabrice (Rutherford Appleton Laboratory) Transparents An FPGA-Based Hough Transform Track Finder for the L1 Trigger of the CMS Experiment at the High Luminosity LHC 18m A new tracking system is under development for operation in the CMS experiment at the High Luminosity LHC. It includes an outer tracker which will construct stubs, built by correlating clusters in two closely spaced sensor layers for the rejection of hits from low transverse momentum tracks, and transmit them off-detector at 40 MHz. If tracker data is to contribute to keeping the Level-1 trigger rate at around 750 kHz under increased luminosity, a crucial component of the upgrade will be the ability to identify tracks with transverse momentum above 3 GeV/c by building tracks out of stubs. A concept for an FPGA-based track finder using a fully time-multiplexed architecture is presented, where track candidates are identified using a projective binning algorithm based on the Hough Transform, and then refined with a Kalman Filter, fully implemented in FPGA. A hardware system based on the MP7 MicroTCA processing card has been assembled, which demonstrates a realistic slice of the track finder in order to help gauge the performance and requirements for a full system. This talk outlines the system architecture and algorithms employed, highlighting some of the performance and latency results from the hardware demonstrator, and discusses the prospects and performance of the final system. Speaker: Tom James (I) Recent Update on Trigger and Data Acquisition System of PandaX-II Experiment 18m PandaX-II is direct dark matter search experiment, operating a half-ton scale dual-phase xenon Time Projection Chamber, located at China Jinping Underground Laboratory. 
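The CMS track-finder abstract above describes identifying track candidates with a projective binning (Hough transform) in which each stub votes for the set of track parameters compatible with it. The snippet below is a minimal, generic r-phi Hough accumulator to illustrate that idea only; the binning, parameter ranges and the small-angle bend relation phi ≈ phi0 − 0.57·(q/pT)·r (B = 3.8 T, r in metres, pT in GeV) are textbook approximations, not the actual firmware algorithm.

```python
import math
import numpy as np

# Minimal r-phi Hough accumulator for stubs, illustrating projective binning.
# Constants and binning are generic choices, not the CMS firmware implementation.
K = 0.57  # 0.3 * B / 2 with B = 3.8 T; phi(r) ~ phi0 - K * (q/pT) * r  [r in m, pT in GeV]

N_QPT, N_PHI0 = 32, 64
qpt_bins = np.linspace(-1/3, 1/3, N_QPT)          # |q/pT| < 1/3 GeV^-1  <=>  pT > 3 GeV
phi0_edges = np.linspace(-math.pi, math.pi, N_PHI0 + 1)

def fill_hough(stubs):
    """stubs: list of (r [m], phi [rad]). Each stub votes once per q/pT bin for the
    phi0 consistent with it; peaks in the accumulator are track candidates."""
    acc = np.zeros((N_QPT, N_PHI0), dtype=int)
    for r, phi in stubs:
        for i, qpt in enumerate(qpt_bins):
            phi0 = phi + K * qpt * r                   # invert the bend relation
            j = np.searchsorted(phi0_edges, phi0) - 1  # locate the phi0 bin
            if 0 <= j < N_PHI0:
                acc[i, j] += 1
    return acc

# Toy check: six stubs from one pT = 5 GeV, phi0 = 0.3 track produce a clear peak.
stubs = [(r, 0.3 - K * (1 / 5.0) * r) for r in (0.25, 0.35, 0.5, 0.7, 0.9, 1.1)]
acc = fill_hough(stubs)
print(np.unravel_index(acc.argmax(), acc.shape), acc.max())
```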
Signals from the detector are recorded by 158 photomultipliers, which are then digitized and recorded by commercial flash ADC waveform digitizers. In this paper we present the PandaX-II trigger and data acquisition system, focusing on a recent upgrade to an FPGA-based trigger system and multithreaded readout. Speaker: Qinyu Wu (SJTU) Acceleration of a particle identification algorithm used for the LHCb Upgrade with the new Intel(r) Xeon(r)/FPGA 18m The LHCb experiment at the LHC will upgrade its detector by 2018/2019 to a 'triggerless' readout scheme, where all the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 Tbit/s, which also has to be processed to select the interesting proton-proton collisions for later storage. Designing the architecture of a computing farm that can process this amount of data as efficiently as possible is a challenging task, and several compute accelerator technologies are being considered for use inside the new event filter farm. In the high-performance computing sector, more and more FPGA compute accelerators are used to improve the compute performance and reduce the power consumption (e.g. in the Microsoft Catapult project and the Bing search engine). For the LHCb upgrade, too, the use of an experimental FPGA-accelerated computing platform in the event building or in the event filter farm (trigger) is being considered and therefore tested. This platform from Intel(r) hosts a general Xeon(r) CPU and a high-performance Arria 10 FPGA inside a multi-chip package, linked via a high-speed and low-latency link. On the FPGA an accelerator is implemented. The system used is a two-socket platform from Intel(r) with both sockets hosting an Intel(r) Xeon(r)/FPGA. The FPGA has cache-coherent memory access to the main memory of the server and can collaborate with the CPU. A compute-intensive algorithm to reconstruct Cherenkov angles for the LHCb RICH particle identification was successfully ported to the Intel(r) Xeon(r)/FPGA platform and accelerated. For this, a Verilog kernel and an OpenCL kernel were used and compared in performance and development time. Other PCIe FPGA accelerators using the same FPGA were also tested for performance. One important measurement is the performance per joule, which will be compared to modern GPUs. The results show that the Intel(r) Xeon(r)/FPGA platforms, which are built in general for high-performance computing, are also very interesting for the High Energy Physics community. Speaker: Christian Faerber (CERN) Conveners: Prof. Gerald Eigen (University of Bergen), Jennifer Thomas (Institute of High Energy Physics) Results from Pilot Run for MEG II Positron Timing Counter 18m The MEG II experiment at the Paul Scherrer Institut in Switzerland will search for the lepton-flavour-violating muon decay, $\mu^+\to e^+\gamma$, with a sensitivity ($4\times10^{-14}$) improving the existing limit by an order of magnitude. The positron Timing Counter (pTC) is the subdetector dedicated to the measurement of the positron emission time. It is designed on the basis of a new approach to improve the positron ($e^+$) timing resolution by a factor of two compared to MEG. The pTC is composed of 512 ultra-fast plastic scintillator counters with SiPM readout. The mean hit multiplicity for signal $e^+$ is evaluated to be $\sim 9$, and a high timing resolution of $\sim 35$ ps is expected by averaging the signal times of multiple hit counters.
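As a rough numerical check of the multiple-counter averaging just described, note that the resolution improves roughly as $1/\sqrt{N}$ with the number of hit counters $N$; the single-counter resolution used below is an assumed representative value, not a number quoted in the talk:
$$\sigma_{\text{multi}} \approx \frac{\sigma_{\text{single}}}{\sqrt{N}} \approx \frac{100\,\text{ps}}{\sqrt{9}} \approx 33\,\text{ps},$$
which is consistent with the quoted $\sim 35$ ps for a mean hit multiplicity of $\sim 9$.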
To achieve the target resolution, an internal time calibration with a precision of 10 ps or better is required. We have developed two new calibration methods that meet this requirement: a track-based calibration and a laser-based calibration. In 2016, we finished construction of the pTC and installed the first one-fourth of it in the MEG II experimental area to evaluate its performance with a $\mu^+$ beam in a pilot run. We took $e^+$ data from the dominant $\mu$ decay (Michel decay, $\mu^+\to e^+\nu_e\overline{\nu}_{\mu}$) and applied both time calibration methods. The time offsets of each counter, calculated independently with the two calibration methods, were consistent and stable within 6 ps during the run. The systematic uncertainty between these methods was 39 ps, which is suppressed as $1/\sqrt{N}$ when using multiple hits ($N$: number of hit counters). An overall timing resolution of 38 ps, weighted with the distribution of the number of hit counters for signal $e^+$, was achieved in this pilot run. The prospects towards the MEG II physics run are also discussed. Speaker: Mitsutaka Nakao (The University of Tokyo) Determining the photon yield for the LHCb RICH Upgrade photodetection system 18m For the upgrade of the LHCb RICH detectors in 2020, the existing photon detection system will be replaced. This study investigates the photon yield of the Multi-Anode PMTs (MaPMTs), which have been proposed as the new photon detectors, together with the associated readout electronics. Data collected during the LHCb RICH Upgrade testbeam experiment in autumn 2016 is used. Four MaPMTs were exposed simultaneously to Cherenkov light generated in a solid radiator by a 180 GeV charged pion beam. The collected data was combined and matched with tracking information from the LHCb VELO track telescope, which was also present in the same particle beam. The tracking information allows for track selection by the number of concurrently arriving charged particles and by track direction. A simulation of the testbeam setup was created using the Geant4 toolkit. Results obtained from reconstruction of the Monte Carlo events are compared with those from the data taken during the testbeam. Comparing the number of detected photoelectrons for each incident charged particle in real data and simulation makes it possible to determine the detection efficiency of the MaPMTs. Speaker: Mr Michele Piero Blago (CERN) Application of the SOPHIAS Detector to Synchrotron Radiation X-ray Experiments 18m Structural analysis of functional materials is one of the applications of synchrotron radiation science that has attracted much recent interest. With a low-emittance synchrotron ring, higher performance is required of the X-ray area detectors used in experiments. SOPHIAS, a charge-integrating detector designed for XFEL experiments, was developed by RIKEN based on Silicon-On-Insulator technology. The SOPHIAS detector has a 2157 x 891 pixel array consisting of 30 micrometer square pixels. SOPHIAS is a powerful tool for X-ray structural analysis because of its high definition and high dynamic range. The application of SOPHIAS to synchrotron radiation experiments was started at the Photon Factory, KEK (KEK/PF). Focusing on small-angle X-ray scattering (SAXS) from block copolymers and X-ray diffraction from ferroelectrics, synchrotron radiation X-ray experiments were conducted with SOPHIAS at KEK/PF.
In the measurement of the SAXS for a poly(epsilon-caprolactone)-polybutadiene diblock copolymer, the SAXS pattern has a complicated peak structure originating from the Frank-Kasper sigma phase, so the fine pixels of SOPHIAS were very important for resolving the peaks. We will report the results of the experiments using SOPHIAS. Speaker: Dr Ryo Hashimoto (KEK) Development of the Photon-Detector System for DUNE 18m This presentation will concentrate on the development of the Photon-Detector (PD) System for DUNE. DUNE (the Deep Underground Neutrino Experiment) will observe long-baseline neutrino oscillations to determine the neutrino mass ordering, to determine if CP symmetry is violated in the lepton sector, and to precisely measure the parameters governing neutrino oscillation to test the three-neutrino paradigm. The DUNE physics program will also include precise measurements of neutrino interactions, observation of atmospheric neutrinos, searches for nucleon decay, and sensitivity to supernova burst neutrinos. DUNE is planned to consist of the near detector systems and four liquid argon TPC (LArTPC) far detector modules, each with a fiducial mass of about 10 kton. The single-phase DUNE far detector module design will be tested with ProtoDUNE-SP, the single-phase DUNE far detector prototype that is under construction and will be operated at the CERN Neutrino Platform starting in 2018. ProtoDUNE-SP is a crucial part of the DUNE effort towards the construction of the first DUNE 10-kton fiducial mass far detector module (17 kton total LAr mass), and is a significant experiment in its own right. With a total LAr mass of 0.77 kton, it represents the largest monolithic single-phase LArTPC detector built to date. The detector elements, consisting of the time projection chamber (TPC), the cold electronics (CE), and the photon detection system (PDS), are housed in a cryostat that contains the LAr target material. The construction and operation of ProtoDUNE-SP will serve to validate the DUNE single-phase detector design, while the charged-particle beam test will enable calibration measurements necessary for precise calorimetry and for optimization of the event reconstruction algorithms. Its scientific results will help to quantify and reduce systematic uncertainties for the DUNE far detector. The Photon-Detector (PD) System for DUNE will be integrated into the APAs (anode plane assemblies). For ProtoDUNE-SP, each PD module will consist of a bar-shaped light guide and a wavelength-shifting layer (surface coating or mounted radiator plate). The wavelength-shifting layer converts incoming VUV (128 nm) scintillation photons to longer-wavelength photons in the visible blue range. A fraction of the converted photons are emitted into the bar, where they are detected by silicon photomultipliers (SiPMs). Each APA frame is designed with ten bars into which PDs are inserted after the TPC wires have been mounted. The SiPM sign
CommonCrawl
Program NVMW 2021 Registered users: Check your email for Zoom and GatherTown links to attend. Chair: Steven Swanson Metall: A Persistent Memory Allocator for Accelerating Data Analytics Roger Pearce & Keita Iwabuchi Keynote I Chair: Eitan Yaakobi Accelerating Deep Neural Networks with Analog Memory Devices GeoffreyBurr (IBM Research - Almaden); Speaker:Geoffrey W. Burr, IBM Research - Almaden AbstractDeep Neural Networks (DNNs) are very large artificial neural networks trained using very large datasets, typically using the supervised learning technique known as backpropagation. Currently, CPUs and GPUs are used for these computations. Over the next few years, we can expect special-purpose hardware accelerators based on conventional digital-design techniques to optimize the GPU framework for these DNN computations. Even after the improved computational performance and efficiency that is expected from these special-purpose digital accelerators, there would still be an opportunity for even higher performance and even better energy-efficiency for inference and training of DNNs, by using neuromorphic computation based on analog memory devices. In this presentation, I discuss the origin of this opportunity as well as the challenges inherent in delivering on it, including materials and devices for analog volatile and non-volatile memory, circuit and architecture choices and challenges, and the current status and prospects. Speaker bioGeoffrey W. Burr received his Ph.D. in Electrical Engineering from the California Institute of Technology in 1996. Since that time, Dr. Burr has worked at IBM Research--Almaden in San Jose, California, where he is currently a Distinguished Research Staff Member. He has worked in a number of diverse areas, including holographic data storage, photon echoes, computational electromagnetics, nanophotonics, computational lithography, phase-change memory, storage class memory, and novel access devices based on Mixed-Ionic-Electronic-Conduction (MIEC) materials. Dr. Burr's current research interests involve AI/ML acceleration using non-volatile memory. Geoff is an IEEE Fellow (2020), and is also a member of MRS, SPIE, OSA, Tau Beta Pi, Eta Kappa Nu, and the Institute of Physics (IOP). Break / Poster Session | GatherTown Session 1: Memorable Paper Award Finalists I (ECC and Devices) Chair: Paul Siegel Cooperative Data Protection for Topology-Aware Decentralized Storage Networks SiyiYang (University of California, Los Angeles);AhmedHareedy (Duke University);RobertCalderbank (Duke University);LaraDolecek (University of California, Los Angeles); Speaker:Siyi Yang, UCLA AbstractWhile codes with hierarchical locality have been intensively studied in the context of centralized cloud storage due to their effectiveness in reducing the average reading time, those in the context of decentralized storage networks (DSNs) have not yet been discussed. In this paper, we propose a joint coding scheme where each node receives extra protection through the cooperation with nodes in its neighborhood in a heterogeneous DSN with any given topology. Our proposed construction not only supports desirable properties such as scalability and flexibility, which are critical in dynamic networks, but also adapts to arbitrary topologies, a property that is essential in DSNs but has been overlooked in existing works. Speaker bioSiyi Yang is a Ph.D. candidate with the Electrical and Computer Engineering department at the University of California, Los Angeles (UCLA). She received her B.S. 
degree in Electrical Engineering from the Tsinghua University, in 2016 and the M.S. degree in Electrical and Computer Engineering from the University of California, Los Angeles (UCLA) in 2018. Her research interests include design of error-correction codes for non-volatile memory and distributed storage. Power Spectra of Finite-Length Constrained Codes with Level-Based Signaling JessicaCenters (Duke University);XinyuTan (Duke University);AhmedHareedy (Duke University);RobertCalderbank (Duke University); Speaker:Jessica Centers and Xinyu Tan, Electrical and Computer Engineering Department, Duke University AbstractIn various practical systems, certain data patterns are prone to errors if written or transmitted. Constrained codes are used to eliminate error-prone patterns, and they can also achieve other goals. Recently, we introduced efficient binary symmetric lexicographically-ordered constrained (LOCO) codes and asymmetric LOCO (A-LOCO) codes to increase density in magnetic recording systems and lifetime in Flash systems by eliminating the relevant detrimental patterns. Due to their applications, LOCO and A-LOCO codes are associated with level-based signaling. In this paper, we first modify a framework from the literature in order to introduce a method to derive the power spectrum of a sequence of constrained data associated with level-based signaling. We then provide a generalized method for developing the one-step state transition matrix (OSTM) for finite-length codes constrained by the separation of transitions. Via their OSTMs, we devise closed form solutions for the spectra of finite-length LOCO and A-LOCO codes. Speaker bioJessica Centers is a 3rd year Ph.D. student in the Electrical and Computer Engineering Department at Duke University. Her research focusses primarily on developing signal processing techniques to utilize recently affordable sensors such as millimeter-wave radars in non-traditional applications. Xinyu Tan is junior undergraduate student pursuing a degree in Mathematics and Computer Science at Duke University. She has recently been pursuing an interest in quantum computing. Jessica and Xinyu found an interest in the analysis and development of novel constrained codes used to improve the performance in data storage and other computer systems through the Coding Theory course offered at Duke University. Through this course, Jessica and Xinyu were able to explore their interests in constrained codes more in-depth, which resulted in the paper being presented on and summarized at this conference. Optimal Reconstruction Codes for Deletion Channel JohanChrisnata (Nanyang Technological University);Han MaoKiah (Nanyang Technological University);EitanYaakobi (Technion - Israel Institute of Technology); Speaker:Johan Chrisnata, Nanyang Technological University AbstractThe sequence reconstruction problem, introduced by Levenshtein in 2001, considers a communication scenario where the sender transmits a codeword from some codebook and the receiver obtains multiple noisy reads of the codeword. Motivated by modern storage devices, we introduced a variant of the problem where the number of noisy reads $N$ is fixed (Kiah \etal{ }2020). Of significance, for the single-deletion channel, using $\log_2\log_2 n +O(1)$ redundant bits, we designed a reconstruction code of length $n$ that reconstructs codewords from two distinct noisy reads. 
In this work, we show that $\log_2\log_2 n -O(1)$ redundant bits are necessary for such reconstruction codes, thereby, demonstrating the optimality of our previous construction. Furthermore, we show that these reconstruction codes can be used in $t$-deletion channels (with $t\ge 2$) to uniquely reconstruct codewords from $n^{t-1}+O\left(n^{t-2}\right)$ distinct noisy reads. Speaker bioJohan Chrisnata received his Bachelor degree in mathematics from Nanyang Technological University (NTU), Singapore in 2015. From August 2015 until August 2018, he was a research officer in NTU. Currently he is pursuing a joint Ph.D. degree in mathematics from School of Physical and Mathematical Sciences at Nanyang Technological University, Singapore and Computer Science from Department of Computer Science at Technion University, Israel. His research interest includes enumerative combinatorics and coding theory. Partial MDS Codes with Regeneration LukasHolzbaur (Technical University of Munich);SvenPuchinger (Technical University of Denmark (DTU));EitanYaakobi (Technion - Israel Institute of Technology);AntoniaWachter-Zeh (Technical University of Munich); Speaker:Lukas Holzbaur, Technical University of Munich AbstractPartial MDS (PMDS) and sector-disk (SD) codes are classes of erasure correcting codes that combine locality with strong erasure correction capabilities. We construct PMDS and SD codes where each local code is a bandwidth-optimal regenerating MDS code. In the event of a node failure, these codes reduce both, the number of servers that have to be contacted as well as the amount of network traffic required for the repair process. The constructions require significantly smaller field size than the only other construction known in literature. Further, we present a PMDS code construction that allows for efficient repair for patterns of node failures that exceed the local erasure correction capability of the code and thereby invoke repair across different local groups. Speaker bioLukas Holzbaur received his B.Sc. and M.Sc. degrees in electrical engineering from the Technical University of Munich (TUM), Germany, in 2014 and 2017, respectively. Since 2017 he is working towards a Ph.D. at the Institute for Communications Engineering, TUM, Germany, in the group of Prof.~Wachter-Zeh. His research interests are coding theory and its applications, in particular to distributed data storage and privacy. Non-Uniform Windowed Decoding For Multi-Dimensional Spatially-Coupled LDPC Codes LevTauz (University of California, Los Angeles);LaraDolecek (University of California, Los Angeles);HomaEsfahanizadeh (Massachusetts Institute of Technology); Speaker:Lev Tauz, Electrical and Computer Engineering, University of California, Los Angeles AbstractIn this work, we propose a non-uniform windowed decoder for multi-dimensional spatially-coupled LDPC (MD-SC-LDPC) codes over the binary erasure channel. An MD-SC-LDPC code is constructed by connecting together several SC-LDPC codes into one larger code that provides major benefits over a variety of channel models. We propose and analyze a novel non-uniform decoder that allows for greater flexibility between latency and code reliability. Our theoretical derivations and empirical results show that our non-uniform decoder greatly improves upon the standard windowed decoder in terms of design flexibility, latency, and complexity. Speaker bioLev Tauz received their B.S. degree (Hons.) 
in electrical engineering and computer science from the University of California, Berkeley, in 2016 and their M.S. degree in electrical engineering and computer engineering from the University of California, Los Angeles (UCLA) in 2020. He is currently pursuing a Ph.D. degree with the Electrical and Computer Engineering Department in UCLA. He currently works at the Laboratory for Robust Information Systems (LORIS), and he is focused on coding techniques for distributed storage and computation. His research interests include distributed systems, error-correcting codes, machine learning, and graph theory. He was a recipient of the Best Preliminary Exam in Signals and Systems Award in the Electrical and Computer Engineering Department, UCLA, in 2019. Session 2: Memorable Paper Award Finalists II (Systems and Architecture) Chair: Samira Khan Characterizing and Modeling Non-Volatile Memory Systems ZixuanWang (University of California San Diego);XiaoLiu (University of California, San Diego);JianYang (University of California, San Diego);TheodoreMichailidis (University of California, San Diego);StevenSwanson (University of California, San Diego);JishenZhao (University of California, San Diego); Speaker:Zixuan Wang, University of California, San Diego AbstractScalable server-grade non-volatile RAM (NVRAM) DIMMs became commercially available with the release of Intel's Optane DIMM. Recent studies on Optane DIMM systems unveil discrepant performance characteristics, compared to what many researchers assumed before the product release. Most of these studies focus on system software design and performance analysis. To thoroughly analyze the source of this discrepancy and facilitate real-NVRAM-aware architecture design, we propose a framework that characterizes and models Optane DIMM's microarchitecture. Our framework consists of a Low-level profilEr for Non-volatile memory Systems (LENS) and a Validated cycle-Accurate NVRAM Simulator (VANS). LENS allows us to comprehensively analyze the performance attributes and reverse engineer NVRAM microarchitectures. Based on LENS characterization, we develop VANS, which models the sophisticated microarchitecture design of Optane DIMM, and is validated by comparing with the detailed performance characteristics of Optane-DIMM-attached Intel servers. VANS adopts a modular design that can be easily modified to extend to other NVRAM architecture designs; it can also be attached to full-system simulators, such as gem5. Speaker bioI am Zixuan Wang, a 3rd year Ph.D. student at University of California San Diego. I'm working with Prof. Jishen Zhao and Prof. Steven Swanson. My research interest is mainly on memory systems. Assise: Performance and Availability via Client-local NVM in a Distributed File System ThomasAnderson (University of Washington);MarcoCanini (KAUST);JongyulKim (KAIST);DejanKostić (KTH Royal Institute of Technology);YoungjinKwon (KAIST);SimonPeter (University of Texas at Austin);WaleedReda (KTH Royal Institute of Technology and Université catholique de Louvain);Henry N.Schuh (University of Washington);EmmettWitchel (UT Austin); Speaker:Waleed Reda, KTH Royal Institute of Technology and Université catholique de Louvain AbstractThe adoption of low-latency non-volatile memory (NVM) at scale upends the existing client-server model for distributed file systems. Instead, by leveraging client-local NVM storage, we can provide applications with much higher IO performance, sub-second application failover, and strong consistency. 
To that end, we built the Assise distributed file system, which uses client-local NVM as a linearizable and crash-recoverable cache between applications. Assise maximizes locality for all file IO by carrying out IO on process-local and client-local NVM whenever possible. By maintaining consistency at IO operation granularity, rather than at fixed block sizes, Assise minimizes coherence overheads and prevents block amplification. In doing so, Assise provides orders of magnitude lower tail latency, higher scalability, and higher availability than the state-of-the-art. Speaker bioWaleed is a final year PhD student at the Université catholique de Louvain (UCL) and the Royal Institute of Technology (KTH). His work focuses on accelerating distributed storage systems by rearchitecting them to maximize the benefits of state-of-the-art networking and storage technologies. More recently, he has been working on speeding up distributed file systems by exploiting client-local NVM and carefully balancing their use of network and storage resources. Clobber-NVM: Log Less, Re-execute More YiXu (UC San Diego);JosephIzraelevitz (University of Colorado, Boulder);StevenSwanson (UC San Diego); Speaker:Yi Xu, University of California, San Diego AbstractNon-volatile memory allows direct access to persistent storage via a load/store interface. However, because the cache is volatile, cached updates to persistent state will be dropped after a power loss. Failure-atomicity NVM libraries provide the means to apply sets of writes to persistent state atomically. Unfortunately, most of these libraries impose significant overhead. This work proposes Clobber-NVM, a failure-atomicity library that ensures data consistency by reexecution. Clobber-NVM's novel logging strategy, clobber logging, records only those transaction inputs that are overwritten during transaction execution. Then, after a failure, it recovers to a consistent state by restoring overwritten inputs and reexecuting any interrupted transactions. Clobber-NVM utilizes a clobber logging compiler pass for identifying the minimal set of writes that need to be logged. Based on our experiments, classical undo logging logs up to 42.6X more bytes than Clobber-NVM, and requires 2.4X to 4.7X more expensive ordering instructions (e.g., clflush and sfence). Less logging leads to better performance: Relative to prior art, Clobber-NVM provides up to 2.5X performance improvement over Mnemosyne and 2.6X over Intel's PMDK. Speaker bioYi is a third-year PhD student at UC San Diego, advised by Prof. Steven Swanson. She is interested in memory and storage systems. Her current research focuses on compiler and library supports for persistent memory programming. CoSpec: Compiler Directed Speculative Intermittent Computation JongoukChoi (Purdue University);QingruiLiu (Annapurna Labs);ChangheeJung (Purdue University); Speaker:Jongouk Choi, Purdue University AbstractEnergy harvesting systems have emerged as an alternative to battery-operated embedded devices. Due to the intermittent nature of energy harvesting, researchers equip the systems with nonvolatile memory (NVM) and crash consistency mechanisms. However, prior works require non-trivial hardware modifications, e.g., a voltage monitor, nonvolatile flip-flops/scratchpad, dependence tracking modules, etc., thereby causing significant area/power/manufacturing costs. 
For low-cost yet performant intermittent computation, this paper presents CoSpec, a new architecture/compiler co-design scheme that works for commodity in-order processors used in energy-harvesting systems. To achieve crash consistency without requiring unconven- tional architectural support, CoSpec leverages speculation assuming that power failure is not going to occur and thus holds all committed stores in a store buffer (SB)—as if they were speculative—in case of mispeculation. CoSpec compiler first partitions a given program into a series of recoverable code regions with the SB size in mind, so that no region overflows the SB. When the program control reaches the end of each region, the speculation turns out to be successful, thus releasing all the buffered stores of the region to NVM. If power failure occurs during the execution of a region, all its speculative stores disappear in the volatile SB, i.e., they never affect program states in NVM. Consequently, the interrupted region can be restarted with consistent program states in the wake of power failure. To hide the latency of the SB release—i.e., essentially NVM writes—at each region boundary, CoSpec overlaps the NVM writes of the current region with the speculative execution of the next region. Such instruction level parallelism gives an illusion of out- of-order execution on top of the in-order processor, achieving a speedup of more than 1.2X when there is no power outage. Our experiments on a set of real energy harvesting traces with frequent outages demonstrate that CoSpec outperforms the state-of-the-art scheme by 1.8∼3X on average. Speaker bioJongouk Choi is a Ph.D student at Purdue University. His current research interests are in computer architecture, compiler, systems, and hardware security. He obtained his MS and BS in CS from Kentucky State University. He held various research positions at LG Electronics, NASA EpSCoR, and ARM research. For more information, please see his webpage at https://www.cs.purdue.edu/homes/choi658/. CrossFS: A Cross-layered Direct-Access File System YujieRen (Rutgers University);ChangwooMin (Virginia Tech);SudarsunKannan (Rutgers University); Speaker:Yujie Ren, Rutgers Unversity AbstractWe design CrossFS, a cross-layered direct-access file system disaggregated across user-level, firmware, and kernel layers for scaling I/O performance and improving concurrency. CrossFS is designed to exploit host- and device-level compute capabilities. CrossFS introduces a file descriptor-based concurrency control that maps each file descriptor to one hardware-level I/O queue for concurrency with or without data sharing across threads and processes. This design allows CrossFS's firmware component to process disjoint access across file descriptors concurrently. CrossFS delegates concurrency control to powerful host-CPUs, which convert the file descriptor synchronization problem into an I/O queue request ordering problem. CrossFS exploits byte-addressable nonvolatile memory for I/O queue persistence to guarantee crash consistency in the cross-layered design and designs a lightweight firmware-level journaling mechanism. Finally, CrossFS designs a firmware-level I/O scheduler for efficient dispatch of file descriptor requests. Evaluation of emulated CrossFS on storage-class memory shows up to 4.87x concurrent access gains for bench- marks and 2.32x gains for real-world applications over the state-of-the-art kernel, user-level, and firmware file systems. Speaker bioYujie Ren is a 4th-year Ph.D. 
candidate in computer science department at Rutgers University. His research interests are in file systems, memory management and computational storage. His research projects focus on reducing IO software overheads by disaggregating file system components across software/hardware layers and utilize storage compute resources. Networking Session Keynote II Chair: Jishen Zhao Twizzler: Rethinking the Operating System Stack for Byte-Addressable NVM EthanMiller (University of California, Santa Cruz); Speaker:Ethan Miller, University of California, Santa Cruz AbstractByte-addressable non-volatile memory (NVM) promises applications the ability to persist small units of data, enabling new programming paradigms and system designs. However, such gains will require significant help from the operating system: it needs to "get out of the way" while still providing strong guarantees for security and resource management. This talk will describe our approach to designing an operating system and programming environment that leverages the advantages of NVM to provide a single-level store for application data. Under this approach, NVM can be accessed, transparently, by any thread at any time, with pointers retaining their meanings across multiple invocations. Equally important, the operating system is minimally involved in program operation, limiting itself to managing virtualized devices, scheduling threads, and managing page tables to enforce user-managed access controls at page-level granularity. Structuring the system in this way provides both a simpler programming model and, in many cases, higher performance, allowing NVM-based systems to fully leverage the new ability to persist data with a single write while providing a stronger, more flexible security model than traditional operating systems. Speaker bioEthan L. Miller is a Professor in the Computer Science and Engineering Department at the University of California, Santa Cruz. He is a Fellow of the IEEE and an ACM Distinguished Scientist, and his publications have received multiple Best Paper awards. Prof. Miller received an Sc.B. from Brown University in 1987 and a Ph.D. from UC Berkeley in 1995, and has been on the UC Santa Cruz faculty since 2000. He has co-authored over 160 papers in a range of topics in file and storage systems, operating systems, parallel and distributed systems, information retrieval, and computer security; his research has received over 15,000 citations. He was a member of the team that developed Ceph, a scalable high-performance distributed file system for scientific computing that is now being adopted by several high-end computing organizations. His current research projects, which are funded by the National Science Foundation and industry support for the CRSS and SSRC, include system support for byte-addressable non-volatile memory, archival storage systems, reliable and secure storage systems, and issues in ultra-scale storage systems. Prof. Miller has worked with Pure Storage since its founding in 2009, helping to design and refine its storage architecture, resulting in over 120 awarded patents. He has also worked with other companies, including Samsung, Veritas, and Seagate, to help move research results into commercial use. Additional information is available at https://www.crss.ucsc.edu/person/elm.html. 
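To make the single-level-store idea from the keynote abstract above more concrete, here is a small, purely illustrative Python sketch of keeping data in a memory-mapped persistent region and storing references as offsets, so that they remain meaningful across program invocations. This is not the Twizzler API; the file name, record layout and helper names are all invented for the example, and a real NVM-backed system would add proper flushing, fencing and crash-consistency logic.

```python
# Illustrative sketch only (not the Twizzler API): data in a memory-mapped
# "persistent" region, with references stored as offsets rather than raw pointers.
import mmap
import os
import struct

REGION_PATH = "example.pmem"     # stand-in for an NVM-backed region
REGION_SIZE = 4096

def open_region(path=REGION_PATH, size=REGION_SIZE):
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(b"\x00" * size)
    f = open(path, "r+b")
    return mmap.mmap(f.fileno(), size)

def write_node(region, offset, value, next_offset):
    # A tiny persistent record: an 8-byte value plus an 8-byte offset acting as a pointer.
    struct.pack_into("<qq", region, offset, value, next_offset)

def read_node(region, offset):
    return struct.unpack_from("<qq", region, offset)

region = open_region()
write_node(region, 64, value=42, next_offset=128)   # the node at offset 64 "points to" offset 128
write_node(region, 128, value=7, next_offset=0)
region.flush()                                       # real NVM code would use cache-line flushes and fences
print(read_node(region, 64))                         # (42, 128), also in a later run of the program
```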
Session 3A: ECC Chair: Homa Esfahanizadeh (MIT) Flexible Partial MDS Codes WeiqiLi (University of California, Irvine);TaitingLu (University of California, Irvine);ZhiyingWang (University of California, Irvine);HamidJafarkhani (University of California, Irvine); Speaker:Weiqi Li, Center for Pervasive Communications and Computing (CPCC), University of California, Irvine, USA AbstractThe partial MDS (PMDS) code was introduced by Blaum et al. for RAID systems. Given the redundancy level and the number of symbols in each node, PMDS codes can tolerate a mixed type of failures consisting of entire node failures and partial errors (symbols failures). Aiming at reducing the expected accessing latency, this paper presents flexible PMDS codes that can recover the information from a flexible number of nodes according to the number of available nodes, while the total number of symbols required remains the same. We analyze the reliability and latency of our flexible PMDS codes. Speaker bioWeiqi Li received his B.S. and M.S. degree from Xian Jiaotong University, China in 2013 and 2016, respectively. Currently, he is pursuing a Ph.D in the Department of Electrical Engineering and Computer Science, University of California, Irvine. His research interests includes coding theory, signal processing and communication network. Codes for Cost-Efficient DNA Synthesis AndreasLenz (Technical University of Munich);YiLiu (Center for Memory and Recording Research, UCSD);CyrusRashtchian (UCSD);PaulSiegel (UCSD);AndrewTan (UCSD);AntoniaWachter-Zeh (Technical University of Munich);EitanYaakobi (Technion - Israel Institute of Technology); Speaker:Andreas Lenz, Technical University of Munich AbstractAs a step toward more efficient DNA data storage systems, we study the design of codes that minimize the time and number of required materials needed to synthesize the DNA strands. We consider a popular synthesis process that builds many strands in parallel in a step-by-step fashion using a fixed supersequence $S$. The machine iterates through $S$ one nucleotide at a time, and in each cycle, it adds the next nucleotide to a subset of the strands. We show that by introducing redundancy to the synthesized strands, we can significantly decrease the number of synthesis cycles required to produce the strands. We derive the maximum amount of information per synthesis cycle assuming $S$ is an arbitrary periodic sequence. To prove our results, we exhibit new connections to cost-constrained codes. Speaker bioAndreas Lenz received the B.Sc. and M.Sc. degrees (both with high distinction) in electrical engineering and information technology from Technische Universität München (TUM), Germany in 2013 and 2016, respectively. During his studies, his research interests included parameter estimation, communications, and circuit theory. Since 2016 he is working as a doctoral candidate at the Coding for Communications and Data Storage (COD) group at TUM, where he is involved in research about coding theory for insertion and deletion errors and modern data storage systems. Systematic Single-Deletion Multiple-Substitution Correcting Codes WentuSong (Singapore University of Technology and Design);NikitaPolyanskii (Technical University of Munich, Germany, and Skolkovo Institute of Science and Technology);KuiCai (Singapore University of Technology and Design);XuanHe (Singapore University of Technology and Design); Speaker:Wentu Song, Singapore University of Technology and Design, Singapore AbstractRecent work by Smagloy et al. 
(ISIT 2020) shows that the redundancy of a single-deletion s-substitution correcting code is asymptotically at least (s+1)log (n)+o(log(n)), where n is the length of the codes. They also provide a construction of single-deletion and single-substitution codes with redundancy 6log(n)+8. In this paper, we propose a family of systematic single-deletion s-substitution correcting codes of length n with asymptotical redundancy at most (3s+4)log(n)+o(log(n)) and polynomial encoding/decoding complexity, where s>=2 is a constant. Specifically, the encoding and decoding complexity of the proposed codes are O(n^{s+3}) and O(n^{s+2}), respectively. Speaker bioWentu Song received the BS and MS degrees in Mathematics from Jilin University, China in 1998 and 2006, respectively, and the Ph.D. degree in Mathematics from Peking University in 2012, China. He is currently a Post Doc research fellow in the Advanced Coding and Signal Processing Lab, Singapore University of Technology and Design, where he is working on coding theory, network distributed storage, and coding for DNA based data storage. Single Indel/Edit Correcting Codes: Linear-Time Encoders and Order-Optimality KuiCai (Singapore University of Technology and Design);Yeow MengChee (National University of Singapore);RyanGabrys (Spawar Systems Center);Han MaoKiah (Nanyang Technological University);TUAN THANHNGUYEN (Singapore University of Technology and Design); Speaker:Tuan Thanh Nguyen , Singapore University of Technology and Design AbstractAn indel refers to a single insertion or deletion, while an edit refers to a single insertion, deletion or substitution. In this work, we investigate quaternary codes that correct a single indel or single edit and provide linear-time algorithms that encode binary messages into these codes of length n. Particularly, we provide two linear-time encoders: one corrects a single edit with ⌈log n⌉ + O(log log n) redundancy bits, while the other corrects a single indel with ⌈log n⌉ + 2 redundant bits. These two encoders are order-optimal. The former encoder is the first known order- optimal encoder that corrects a single edit, while the latter encoder (that corrects a single indel) reduces the redundancy of the best known encoder of Tenengolts (1984) by at least four bits. Speaker bioTuan Thanh Nguyen received the B.Sc. degree and the Ph.D. degree in mathematics from Nanyang Technological University, Singapore, in 2014 and 2018, respectively. He is currently a Research Fellow at Singapore University of Technology and Design (SUTD), with the Advanced Coding and Signal Processing (ACSP) Lab of SUTD. He was a research fellow in the School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, from Aug 2018 to Sep 2019. His research interest lies in the interplay between combinatorics and computer science/engineering, particularly including combinatorics and coding theory. His research project concentrates on error correction codes and constrained codes for communication systems and data storage systems, especially codes for DNA-based data storage. Coding and Bounds for Partially Defect Memory Cells HaiderAl Kim (Technical University of Munich, TUM);SvenPuchinger (Technical University of Denmark (DTU));AntoniaWachter-Zeh (Technical University of Munich, TUM); Speaker:Haider Al Kim, PhD Candidate at Technical University of Munich / Department of Electrical and Computer Engineering / Coding and Cryptography (COD) Group AbstractThis paper considers coding for \emph{partially stuck} memory cells. 
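As background to the syndrome-style constructions in the single-indel/edit entry above, here is a minimal sketch of the classical binary Varshamov-Tenengolts (VT) check for correcting a single deletion. It is only an illustration of the general idea; the construction presented in the talk is quaternary, also handles edits, and achieves order-optimal redundancy, none of which this toy decoder reflects.

```python
# Minimal sketch of a Varshamov-Tenengolts (VT) style single-deletion decoder,
# binary case only; purely illustrative background, not the talk's construction.
def vt_syndrome(x):
    """VT syndrome: sum of i * x_i (1-indexed) modulo n + 1."""
    n = len(x)
    return sum((i + 1) * b for i, b in enumerate(x)) % (n + 1)

def correct_single_deletion(y, n, a):
    """Recover the unique length-n word with syndrome a from which y was obtained
    by one deletion (brute force over re-insertion positions, for clarity)."""
    candidates = set()
    for pos in range(n):
        for bit in (0, 1):
            x = y[:pos] + [bit] + y[pos:]
            if vt_syndrome(x) == a:
                candidates.add(tuple(x))
    assert len(candidates) == 1      # words sharing a VT syndrome form a single-deletion code
    return list(candidates.pop())

x = [1, 0, 1, 1, 0, 1]
a = vt_syndrome(x)                   # the syndrome plays the role of the code's redundancy
y = x[:2] + x[3:]                    # the channel deletes one symbol
print(correct_single_deletion(y, len(x), a) == x)   # True
```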
Such memory cells can only store partial information as some of their levels cannot be used due to, e.g., wear out. First, we present a code construction for masking such partially stuck cells while additionally correcting errors. Second, we derive a sphere-packing and a Gilbert-Varshamov bound for codes that can mask a certain number of partially stuck cells and correct errors additionally. A numerical comparison between the new bounds and our constructions of PSMCs for any $u\leq n$ shows that our construction matches the Gilbert--Varshamov-like bound for several code parameters. Speaker bioCurrently, Haider Al Kim is a PhD candidate at TUM, Germany focusing on error correction codes for telecommunication networks and storage. From 2008-2018, he was a lecturer, researcher, and assistant chief network engineer at the University of Kufa (UOK) in both faculty of engineering - Electronic and Communication Department ECE and ITRDC. He finished his Master's degree in Telecommunication Networks from University of Technology Sydney (UTS), Sydney, Australia in 2014 under the supervision A. Prof. Kumbesan Sandrasegaran. He got a B.Sc. in Information and Communication Engineering from Alkhwarizmi Engineering College were ranked 2 among 27 students at the University of Baghdad, Baghdad, Iraq in 2008. Working and research areas are Wireless Telecommunication, Mobile Network, Network Management, Network Design and Implementation, and Data Analysis and Monitoring while more recently he is working on Coding for Memory with Partial Defects and Algebraic Coding. Session 3B: Storage and Operating Systems Chair: Sudarsun Kannan Manycore-Based Scalable SSD Architecture Towards One and More Million IOPS JieZhang (KAIST);MiryeongKwon (KAIST);MichaelSwift (University of Wisconsin-Madison);MyoungsooJung (KAIST); Speaker:Jie Zhang, Korea Advanced Institute of Science and Technology (KAIST) AbstractNVMe is designed to unshackle flash from a traditional storage bus by allowing hosts to employ many threads to achieve higher bandwidth. While NVMe enables users to fully exploit all levels of parallelism offered by modern SSDs, current firmware designs are not scalable and have difficulty in handling a large number of I/O requests in parallel due to its limited computation power and many hardware contentions. We propose DeepFlash, a novel manycore-based storage platform that can process more than a million I/O requests in a second (1MIOPS) while hiding long latencies imposed by its internal flash media. Inspired by a parallel data analysis system, we design the firmware based on many-to-many threading model that can be scaled horizontally. The proposed DeepFlash can extract the maximum performance of the underlying flash memory complex by concurrently executing multiple firmware components across many cores within the device. To show its extreme parallel scalability, we implement DeepFlash on a many-core prototype processor that employs dozens of lightweight cores, analyze new challenges from parallel I/O processing and address the challenges by applying concurrency-aware optimizations. Our comprehensive evaluation reveals that DeepFlash can serve around 4.5 GB/s, while minimizing the CPU demand on microbenchmarks and real server workloads. Speaker bioDr. Jie Zhang is a postdoctoral researcher at KAIST. He is engaged in the research and design of computer architecture and systems including storage systems, non-volatile memory, and specialized processors. 
His research addresses the requirements for high-performance storage systems in the era of big data and artificial intelligence from the perspective of computer architecture. He is dedicated to breaking through the bottlenecks of data migration and the limitations of memory walls in the Von Neumann architecture. The Storage Hierarchy is Not a Hierarchy: Optimizing Caching on Modern Storage Devices with Orthus KanWu (University of Wisconsin-Madison);ZhihanGuo (University of Wisconsin—Madison);GuanzhouHu (University of Wisconsin-Madison);KaiweiTu (University of Wisconsin-Madison);RamnatthanAlagappan (VMware Research Group);RathijitSen (Microsoft);KwanghyunPark (Microsoft);AndreaArpaci-Dusseau (University of Wisconsin-Madison);RemziArpaci-Dusseau (University of Wisconsin–Madison); Speaker:Kan Wu, University of Wisconsin-Madison AbstractWe introduce non-hierarchical caching (NHC), a novel approach to caching in modern storage hierarchies. NHC improves performance as compared to classic caching by redirecting excess load to devices lower in the hierarchy when it is advantageous to do so. NHC dynamically adjusts allocation and access decisions, thus maximizing performance (e.g., high throughput, low 99%-ile latency). We implement NHC in Orthus-CAS (a block-layer caching kernel module) and Orthus-KV (a user-level caching layer for a key-value store). We show the efficacy of NHC via a thorough empirical study: Orthus-KV and Orthus-CAS offer significantly better performance (by up to 2×) than classic caching on various modern hierarchies, under a range of realistic workloads. Speaker bioKan Wu is a PhD candidate in computer science from the University of Wisconsin-Madison. He works with Professor Andrea Arpaci-Dusseau and Remzi Arpaci-Dusseau. His research interests include storage systems, databases and distributed systems, with a primary focus on emerging storage technologies such as persistent memory. He received his Bachelor of Science (B.S.) from the University of Science and Technology of China. Architecting Throughput Processors with New Flash JieZhang (KAIST);MyoungsooJung (KAIST); AbstractWe propose ZnG, a new GPU-SSD integrated architecture, which can maximize the memory capacity in the GPU and address the performance penalty imposed by SSD. Specifically, ZnG replaces all GPU internal DRAM with an ultra-low-latency SSD to maximize the GPU memory capacity. ZnG further removes the performance bottleneck of SSD by replacing the flash channels with a high-throughput flash network and integrating the SSD firmware in the GPU MMU to reap the benefits of hardware acceleration. Although the NAND flash array within the SSD can deliver high accumulated bandwidth, only a small fraction of its bandwidth can be utilized by the memory requests, due to the mismatch of access granularity. To address this, ZnG employs a large L2 cache and flash registers to buffer the memory requests. Our evaluation results indicate that ZnG can achieve 7.5x higher performance than prior work. Explaining SSD failures using Anomaly Detection ChandranilChakraborttii (University of California, Santa Cruz);HeinerLitz (University of California, Santa Cruz); Speaker:Chandranil Chakraborttii, University of California Santa Cruz AbstractNAND flash-based solid-state drives (SSDs) represent an important storage tier in data centers holding most of today's warm and hot data. 
Even with the advanced fault tolerance techniques and low failure rates, large hyperscale data centers utilizing 100,000's of SSDs suffer from multiple device failures daily. Data center operators are interested in predicting SSD device failures for two main reasons. First, even with RAID [2] and replication [5] techniques in place, device failures induce transient recovery and repair overheads, affecting the cost and tail latency of storage systems. Second, predicting near-term failure trends helps to inform the device acquisition process, thus avoiding capacity bottlenecks. Hence, it is important to predict both the short-term individual device failures as well as near-term failure trends. Prior studies on predicting storage device failures [1, 6, 7, 9] suffer from the following main challenges. First, as they utilize black-box machine learning (ML) techniques, they are unaware of the underlying failure reasons rendering it difficult to determine the failure types that these models can predict. Second, the models in prior work struggle with dynamic environments that suffer from previously unseen failures that have not been included in the training set. These two challenges are especially relevant for the SSD failure detection problem which suffers from high class-imbalance. In particular, the number of healthy drive observations is generally orders of magnitude larger than the number of failed drive observations, thus posing a problem for training most traditional supervised ML models. To address these challenges, we propose to utilize 1-class ML models that are trained only on the majority class. By ignoring the minority class for training, our 1-class models avoid overfitting to an incomplete set of failure types, thereby improving the overall prediction performance by up to 9.5% in terms of ROC AUC score. Furthermore, we introduce a new learning technique for SSD failure detection, 1-class autoencoder, which enables interpretability of the trained models while providing high prediction accuracy. In particular, 1-class autoencoders provide insights into what features and their combinations are most relevant to flagging a particular type of device failure. This enables categorization of failed drives based on their failure type, thus informing about specific procedures (e.g., repair, swap, etc.) that need to be applied to resolve the failure. For analysis and evaluation of our proposed techniques, we leverage a cloud-scale dataset from Google that has already been used in prior work [1, 8]. This dataset contains 40 million observations from over 30,000 drives over a period of six years. For each observation, the dataset contains 21 different SSD telemetry parameters including SMART (Self-Monitoring, Analysis, and Reporting Technology) parameters, the amount of read and written data, error codes, as well as the information about blocks that became nonoperational over time. Around 30% of the drives that failed during the data collection process were replaced while the rest were removed, and hence no longer appeared in the dataset. As a result, we obtained approximately 300 observations for each healthy drive (40 million observations in total) and 4 to 140 observations for each failed drive (15000 total observations). We treated each data point as an independent observation and normalized all the non-categorical data values to be between 0 and 1. One of our primary goals was to select the most distinguishing features that are highly correlated to the failures for training. 
We used three different feature selection methods, Filter, Embedded, and Wrapper [4] techniques, for selecting the most important features contributing to failures for our dataset. The resulting set of top features selected were correctable error count, cumulative bad block count, cumulative bad block count, cumulative p/e cycle, erase count, final read error, read count, factory bad block count, write count, and status read-only. The dataset containing only the top selected features is then used for training the different ML models. In a datacenter, we envision our SSD failure prediction technique to be implemented as shown in Figure 1. The telemetry traces are collected periodically from all SSDs in the datacenter and sent to the preprocessing pipeline transforming all input data into numeric values while filtering out incomplete and noisy values. Following data preprocessing, feature selection is performed to extract the most important features from the data set. The preprocessed data is then either utilized for training or inference. For inference, device anomalies are reported and classified according to our 1-class autoencoder approach. SSDs can then be manually analyzed by a technician or replaced directly. As an alternative, a scrubber can be leveraged to validate the model predictions by performing a low-level analysis of the SSD. To evaluate the five ML techniques, we first label all 40 million observations in the dataset to separate between healthy and failed drive observations. We then perform a 90% - 10% split of the dataset into a training set and an evaluation set respectively. For training the 1-class models we remove all failed drive observations from the training set, however, the evaluation set is kept identical for our proposed 1-class techniques and the three baselines. We use ROC AUC score as a metric for comparing the performance of our approaches with chosen baselines, which is inline with prior work [1] and use 10-fold cross-validation for evaluating all approaches. The 1-class autoencoder model utilizes 4 hidden layers comprising of 50, 25, 25, and 50 neurons, respectively. The neurons utilize a 𝑡𝑎𝑛ℎ activation function, 𝐴𝑑𝑎𝑚 optimizer, and the model is trained for 100 epochs. We use early stopping with a patience value of 5 ensuring that the training of the model stops when the loss does not decrease after 5 consecutive epochs. Increasing the number of hidden layers beyond 4 increases the training time significantly without providing performance benefits. Figure 2 illustrates the comparative performance of different ML techniques for predicting SSD failures one day ahead. Among the baselines, Random Forest performs best, providing a ROC AUC score of 0.85. Both our 1-class models outperform the best baseline. In particular, 1-class isolation forest achieves a ROC AUC score of 0.91, representing a 7% improvement over the best baseline while 1-class AutoEncoder, outperforms Random Forest by 9.5%. This work introduces 1-class autoencoders for interpreting failures. In particular, our technique exposes the reasons determined by our model to flag a particular device failure. This is achieved by utilizing the reconstruction error generated by the model while reproducing the output using the trained representation of a healthy drive. The failed drives do not conform to the representation, hence, generate an output that differs significantly from the actual input producing a large reconstruction error. . 
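For concreteness, here is a schematic Keras sketch of the 1-class autoencoder configuration described above (four hidden layers of 50/25/25/50 tanh units, Adam optimizer, up to 100 epochs, early stopping with patience 5, training on healthy-drive observations only). The input dimension, variable names and placeholder data are assumptions for illustration, not the authors' code.

```python
# Schematic sketch of the 1-class autoencoder described above; the feature count,
# placeholder data and variable names are assumptions, not the authors' code.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 10                       # e.g. the selected telemetry/SMART features
inputs = keras.Input(shape=(n_features,))
h = layers.Dense(50, activation="tanh")(inputs)
h = layers.Dense(25, activation="tanh")(h)
h = layers.Dense(25, activation="tanh")(h)
h = layers.Dense(50, activation="tanh")(h)
outputs = layers.Dense(n_features, activation="linear")(h)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Train only on healthy-drive observations (the 1-class setting), values scaled to [0, 1].
X_healthy = np.random.rand(1000, n_features)           # placeholder for real telemetry data
autoencoder.fit(
    X_healthy, X_healthy,
    epochs=100,
    validation_split=0.1,
    callbacks=[keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)],
    verbose=0,
)

# At inference time, a large reconstruction error flags an anomalous (likely failing) drive,
# and the per-feature error indicates which features drove that decision.
X_test = np.random.rand(5, n_features)                  # placeholder observations
per_feature_error = (autoencoder.predict(X_test, verbose=0) - X_test) ** 2
anomaly_score = per_feature_error.mean(axis=1)
```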
We study the reconstruction error per feature to generate the failure reasons. A feature that contributes more than the average per-feature error to the reconstruction error is defined as a significant reason. The results show that many failed drives exhibit a higher than normal number of correctable_errors and cumulative_bad_block events; however, these features were selected as a reason for failure in only 35% and 30% of the cases, respectively. Hence, our analysis shows that there exist particularly relevant features that indicate device failures in many cases; however, only the combination of several features enables accurate failure prediction. To conclude, this paper provides a comprehensive analysis of machine learning techniques to predict SSD failures in the cloud. We observe that prior works on SSD failure prediction suffer from the inability to predict previously unseen failure types, motivating us to explore 1-class machine learning models such as the 1-class isolation forest and the 1-class autoencoder. We show that our approaches outperform prior work by 9.5% in ROC AUC score by improving the prediction accuracy for failed drives. Finally, we show that 1-class autoencoders enable interpretability of model predictions by exposing the reasons determined by the model for predicting a failure. A more comprehensive evaluation of our approach is discussed in [3], where we show the adaptability of 1-class models to dynamic environments with new types of failures emerging over time, as well as the impact of predicting further ahead in time. Speaker bio: Chandranil Chakraborttii is a doctoral candidate in the department of computer science and engineering at the University of California, Santa Cruz. His main research interests are in machine learning, data science, and storage systems with a focus on data centers. His research involves the use of machine learning techniques for improving the performance of flash-based storage systems. This improvement covers two major directions: reliability, and the response time of flash-based storage devices. Before starting his Ph.D. program, Chandranil spent three years as a software engineer in the software industry and has also taught for the Stanford Summer Institutes for four years. Deterministic I/O and Resource Isolation for OS-level Virtualization In Server Computing Miryeong Kwon (KAIST); Donghyun Gouk (KAIST); Changrim Lee (KAIST); ByoungGeun Kim (Samsung); Jooyoung Hwang (Samsung); Myoungsoo Jung (KAIST); Speaker: Miryeong Kwon, KAIST Abstract: We propose DC-Store, a storage framework that offers deterministic I/O performance for a multi-container execution environment. DC-Store's hardware-level design implements multiple NVM sets on a shared storage pool, each providing a deterministic SSD access time by removing internal resource conflicts. In parallel, the software support of DC-Store is aware of the NVM sets and enlightens the Linux kernel to isolate noisy-neighbor containers, which perform page frame reclaiming, from their peers. We prototype both hardware and software counterparts of DC-Store and evaluate them in a real system. The evaluation results demonstrate that containerized data-intensive applications on DC-Store exhibit 31% shorter execution time, on average, compared to those on a baseline system. Speaker bio: Miryeong Kwon is a Ph.D. candidate at the Korea Advanced Institute of Science and Technology (KAIST). She is advised by Myoungsoo Jung, who leads research in computer architecture, non-volatile memory, and operating systems.
Her main research interests are OS-level virtualization environments and non-volatile memory and storage device management in such systems. Session 4A: Devices and ECC Chair: Ahmed Hareedy (Duke) Tunable Fine Learning Rate controlled by pulse width modulation in Charge Trap Flash (CTF) for Synaptic Application Shalini Shrivastava (Indian Institute of Technology Bombay); Udayan Ganguly (Indian Institute of Technology Bombay); Speaker: Shalini Shrivastava, Indian Institute of Technology Bombay, Mumbai, India Abstract: Brain-inspired neuromorphic computation is in high demand for next-generation computational systems due to its high performance, low power, and high energy efficiency. Flash memory, a highly mature technology today, was the first electronic synaptic device and has remained a promising one since 1989. A linear, gradual, and symmetric learning rate is a basic requirement for a high-performance synaptic device. In this paper, we demonstrate a fine-controlled learning rate in Charge Trap Flash (CTF) by pulse width modulation of the input gate pulse. We further study the effect of cycle-to-cycle (C2C) and device-to-device (D2D) variability, and the limits of charge fluctuation with scaling, on the learning rate. We also compare CTF as a synapse with other state-of-the-art devices. The learning rate with CTF can be tuned from 0.2% to 100%, which is remarkable for a single device. Further, the C2C variability does not affect the conductance; it is limited by D2D variability only for learning levels > 8000. We also show that the CTF synapse has a lower sensitivity to charge fluctuation even with scaled devices. The tunable learning rate and lower sensitivity to variability and charge fluctuation make the CTF synapse significant compared to the state of the art. The tunable learning rate of CTF is very promising and of great interest for brain-inspired computing systems. Speaker bio: Shalini Shrivastava received her M.Tech in Electrical Engineering from IIT Bombay in 2013. She is a Ph.D. student at the Department of Electrical Engineering, IIT Bombay. Her research interests include both experimental and theoretical semiconductor device physics. She is currently doing research on energy-efficient electronic devices for neuromorphic computing at IITB with Prof. Udayan Ganguly. Ferroelectric, Analog Resistive Switching in BEOL Compatible TiN/HfZrO4/TiOx Junctions Laura Bégon-Lours (IBM Research GmbH); Mattia Halter (IBM Research GmbH, ETH Zurich); Youri Popoff (IBM Research GmbH, ETH Zurich); Bert Jan Offrein (IBM Research GmbH); Speaker: Laura Bégon-Lours, IBM Research Abstract: Thanks to their compatibility with CMOS technologies, hafnium-based ferroelectric devices are receiving increasing interest for the fabrication of neuromorphic hardware. In this work, an analog resistive memory device is fabricated with a process developed for Back-End-Of-Line integration. A 4.5 nm thick HfZrO4 (HZO) layer is crystallized into the ferroelectric phase, a thickness thin enough to allow electrical conduction through the layer. A TiOx interlayer is used to create an asymmetric junction, as required for transferring a polarization state change into a modification of the conductivity. Memristive functionality is obtained in the pristine state as well as after ferroelectric wake-up, involving redistribution of oxygen vacancies in the ferroelectric layer. The resistive switching is shown to originate directly from the ferroelectric properties of the HZO layer.
Speaker bioLaura is a post-doctoral researcher at IBM Research in Zurich. After studying Physics in ESPCI (Paris) she joined Unite Mixte de Physique CNRS-Thales for her PhD research on ferroelectric field-effects in high-Tc (YBCO cuprates) superconductors. To develop her skills in materials sciences and epitaxial growth of complex oxides, she joined the MESA+ institute for two years, where she demonstrated epitaxial growth of HfZrO4 on a GaN template. She is now a Marie-Curie fellow at IBM Research where she develops ferroelectric devices for synaptic weight in artificial neural networks accelerators. HD-RRAM: Improving Write Operations on MLC 3D Cross-point Resistive Memory ChengningWang (Huazhong University of Science and Technology);DanFeng (Huazhong University of Science and Technology);WeiTong (Huazhong University of Science and Technology);YuHua (Huazhong University of Science and Technology);JingningLiu (Huazhong University of Science and Technology);BingWu (Huazhong University of Science and Technology);WeiZhao (Huazhong University of Science and Technology);LinghaoSong (Duke University);YangZhang (Huazhong University of Science and Technology);JieXu (Huazhong University of Science and Technology);XueliangWei (Huazhong University of Science and Technology);YiranChen (Duke University); Speaker:Chengning Wang, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, China. AbstractMultilevel cell (MLC), cross-point array structure, and three-dimensional (3D) array integration are three technologies to scale up the density of resistive memory. However, composing the three technologies together strengthens the interactions between array-level and cell-level nonidealities (IR drop, sneak current, and cycle-to-cycle variation) in resistive memory arrays and significantly degrades the array write performance. We propose a nonidealities-tolerant high-density resistive memory (HD-RRAM) system based on multilayered MLC 3D cross-point arrays that can weaken the interactions between nonidealities and mitigate their degradation effects on the write performance. HD-RRAM is equipped with a double-transistor array architecture with multiside asymmetric bias, proportional-control state tuning, and MLC parallel writing techniques. The evaluation shows that HD-RRAM system can reduce the access latency by 27.5% and energy consumption by 37.2% over an aggressive baseline. Speaker bioChengning Wang is a fifth-year Ph.D. student with Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, China. His research interests include modeling, design, analysis, and co-optimization of high-density memristive nanodevices, arrays, and analog parallel computation-in-memory for novel applications. He was invited and served as a reviewer for several science citation index journals and conferences, including IEEE ISCAS, IEEE TVLSI, IEEE TC, and Front. Inform. Technol. Electron. Eng. Reconstruction Algorithms for DNA-Storage Systems OmerSabary (University of California, San Diego);AlexanderYucovich (Technion);GuyShapira (Technion);EitanYaakobi (Technion); Speaker:Omer Sabary, University of California, San Diego AbstractIn the \emph{trace reconstruction problem} a length-$n$ string $x$ yields a collection of noisy copies, called \emph{traces}, $y_1, \ldots, y_t$ where each $y_i$ is independently obtained from $x$ by passing through a \emph{deletion channel}, which deletes every symbol with some fixed probability. 
The main goal under this paradigm is to determine the minimum number of i.i.d. traces required to reconstruct $x$ with high probability. The trace reconstruction problem can be extended to the model where each trace is a result of $x$ passing through a \emph{deletion-insertion-substitution channel}, which also introduces insertions and substitutions. Motivated by the storage channel of DNA, this work focuses on another variation of the trace reconstruction problem, which is referred to as the \emph{DNA reconstruction problem}. A \emph{DNA reconstruction algorithm} is a mapping $R: (\Sigma_q^*)^t \to \Sigma_q^*$ which receives $t$ traces $y_1, \ldots, y_t$ as an input and produces $\widehat{x}$, an estimation of $x$. The goal in the DNA reconstruction problem is to minimize the edit distance $d_e(x,\widehat{x})$ between the original string and the algorithm's estimation. For the deletion channel case, the problem is referred to as the \emph{deletion DNA reconstruction problem} and the goal is to minimize the Levenshtein distance $d_L(x,\widehat{x})$. In this work, we present several new algorithms for these reconstruction problems. Our algorithms look globally at the entire sequence of the traces and use dynamic programming algorithms, which are used for the \emph{shortest common supersequence} and the \emph{longest common subsequence} problems, in order to decode the original sequence. Our algorithms do not impose any limitations on the input or the number of traces; moreover, they perform well even for error probabilities as high as $0.27$. The algorithms have been tested on simulated data as well as on data from previous DNA experiments and are shown to outperform all previous algorithms. Speaker bio: Omer Sabary is a PhD student at the Center for Memory and Recording Research, Electrical and Computer Engineering Department at UC San Diego. His advisor is Prof. Paul H. Siegel and his research interests include coding and algorithms for DNA storage systems. He recently received his M.Sc. from the Technion, where his advisor was Prof. Eitan Yaakobi. Session 4B: Accelerating Applications I Chair: Baris Kasikci Digital-based Processing In-Memory for Acceleration of Unsupervised Learning Mohsen Imani (UC Irvine); Saransh Gupta (UC San Diego); Yeseong Kim (UC San Diego); Tajana Rosing (UC San Diego); Speaker: Mohsen Imani, University of California Irvine Abstract: Today's applications generate a large amount of data that needs to be processed by learning algorithms. In practice, the majority of the data are not associated with any labels. Unsupervised learning methods, i.e., clustering, are the most commonly used algorithms for data analysis. However, running clustering algorithms on traditional cores results in high energy consumption and slow processing speed due to a large amount of data movement between memory and processing units. In this paper, we propose DUAL, a Digital-based Unsupervised learning AcceLeration, which supports a wide range of popular algorithms on conventional crossbar memory. Instead of working with the original data, DUAL maps all data points into high-dimensional space, replacing complex clustering operations with memory-friendly operations. We accordingly design a PIM-based architecture that supports all essential operations in a highly parallel and scalable way. DUAL supports a wide range of essential operations and enables in-place computations, allowing data points to remain in memory.
We have evaluated DUAL on several popular clustering algorithms for a wide range of large-scale datasets. Our evaluation shows that DUAL provides a comparable quality to existing clustering algorithms while using a binary representation and a simplified distance metric. DUAL also provides 58.8× speedup and 251.2× energy efficiency improvement as compared to the state-of-the-art solution running on GPU Speaker bioMohsen Imani is a tenure track assistant professor in the Department of Computer Science at the University of California, Irvine. He is also a director of Bio-Inspired Architecture and Systems Laboratory (BIASLab) in Donald Bren School of Information and Computer Sciences (ICS). Dr. Imani received his Ph.D. degree from the Department of Computer Science and Engineering at UC San Diego in 2020. The PI has a stellar record of publication with over 100 papers in top IEEE/ACM conferences and journals with over 2,000 citation counts and h-index of 25. Dr. Imani's contribution has led to a new direction on brain-inspired hyperdimensional computing that enables ultra-efficient and real-time learning and cognitive support. His research was the main initiative in opening up multiple industrial and governmental research programs in Semiconductor Research Corporation (SRC) and DARPA. Dr. Imani's research has been recognized with several awards, including the Bernard and Sophia Gordon Engineering Leadership Award, the Outstanding Researcher Award, and the Powell Fellowship Award. He also received the Best Doctorate Research from UC San Diego in 2018, and several best paper nomination awards at multiple top conferences. High Precision In-Memory Computing for Deep Neural Network Acceleration MohsenImani (University of California Irvine);SaranshGupta (University of California San Diego);YeseongKim (University of California San Diego);MinxuanZhou (University of California San Diego);TajanaRosing (University of California San Diego); Speaker:, AbstractProcessing In-Memory (PIM) has shown a great potential to accelerate inference tasks of Convolutional Neural Network (CNN). However, existing PIM architectures do not support high precision computation, e.g., in floating point precision, which is essential for training accurate CNN models. In addition, most of the existing PIM approaches require analog/mixed-signal circuits, which do not scale, exploiting insufficiently reliable multi-bit Non-Volatile Memory (NVM). In this paper, we propose FloatPIM, a fully-digital scalable PIM architecture that accelerates CNN in both training and testing phases. FloatPIM natively supports floating-point representation, thus enabling accurate CNN training. FloatPIM also enables fast communication between neighboring memory blocks to reduce internal data movement of the PIM architecture. We evaluate the efficiency of FloatPIM on ImageNet dataset using popular large-scale neural networks. DRAM-less Accelerator for Energy Efficient Data Processing JieZhang (KAIST);GyuyoungPark (KAIST);DavidDonofrio (Lawrence Berkeley National Laboratory);JohnShalf (Lawrence Berkeley National Laboratory);MyoungsooJung (KAIST); AbstractGeneral purpose hardware accelerators have become major data processing resources in many computing domains. However, the processing capability of hardware accelerations is often limited by costly software interventions and memory copies to support compulsory data movement between different processors and solid-state drives (SSDs). 
This in turn also wastes a significant amount of energy in modern accelerated systems. In this work, we propose, DRAM-less, a hardware automation approach that precisely integrates many state-of-the-art phase change memory (PRAM) modules into its data processing network to dramatically reduce unnecessary data copies with a minimum of software modifications. We implement a new memory controller that plugs a real 3x nm multi-partition PRAM to 28nm technology FPGA logic cells and interoperate its design into a real PCIe accelerator emulation platform. The evaluation results reveal that our DRAM-less achieves, on average, 47% better performance than advanced acceleration approaches that use a peer-to-peer DMA. Cross-Layer Design Space Exploration of NVM-based Caches for Deep Learning AhmetInci (Carnegie Mellon University);Mehmet MericIsgenc (Carnegie Mellon University);DianaMarculescu (The University of Texas at Austin); Speaker:Ahmet Inci, Carnegie Mellon University AbstractNon-volatile memory (NVM) technologies such as spin-transfer torque magnetic random access memory (STT-MRAM) and spin-orbit torque magnetic random access memory (SOT-MRAM) have significant advantages compared to conventional SRAM due to their non-volatility, higher cell density, and scalability features. While previous work has investigated several architectural implications of NVM for generic applications, in this work we present DeepNVM++, a framework to characterize, model, and analyze NVM-based caches in GPU architectures for deep learning (DL) applications by combining technology-specific circuit-level models and the actual memory behavior of various DL workloads. We present both iso-capacity and iso-area performance and energy analysis for systems whose last-level caches rely on conventional SRAM and emerging STT-MRAM and SOT-MRAM technologies. In the iso-capacity case, STT-MRAM and SOT-MRAM provide up to 3.8x and 4.7x energy-delay product (EDP) reduction and 2.4x and 2.8x area reduction compared to conventional SRAM, respectively. Under iso-area assumptions, STT-MRAM and SOT-MRAM provide up to 2x and 2.3x EDP reduction and accommodate 2.3x and 3.3x cache capacity when compared to SRAM, respectively. We also perform a scalability analysis and show that STT-MRAM and SOT-MRAM achieve orders of magnitude EDP reduction when compared to SRAM for large cache capacities. Our comprehensive cross-layer framework is demonstrated on STT-/SOT-MRAM technologies and can be used for the characterization, modeling, and analysis of any NVM technology for last-level caches in GPUs for DL applications. Speaker bioAhmet Inci received his B.Sc. degree in Electronics Engineering at Sabanci University, Istanbul, Turkey in 2017. He is currently a Ph.D. candidate at Carnegie Mellon University, co-advised by Prof. Diana Marculescu and Prof. Gauri Joshi. His research interests include systems for ML, computer architecture, hardware-efficient deep learning, and HW/ML model co-design. Sentinel: Efficient Tensor Migration and Allocation on Persistent Memory-based Heterogeneous Memory Systems for Deep Learning JieRen (University of California, Merced);JiaolinLuo (University of California, Merced);KaiWu (University of California, Merced);MinjiaZhang (Microsoft Research);HyeranJeon (University of California, Merced);DongLi (University of California, Merced); Speaker:Jie Ren, University of California, Merced AbstractMemory capacity is a major bottleneck for training deep neural networks (DNN). 
Heterogeneous memory (HM) combining fast and slow memories provides a promising direction to increase memory capacity. However, HM imposes challenges on tensor migration and allocation for high performance DNN training. Prior work heavily relies on DNN domain knowledge, unnecessarily causes tensor migration due to page-level false sharing, and wastes fast memory space. We present Sentinel, a software runtime system that automatically optimizes tensor management on HM. Sentinel uses dynamic profiling, and coordinates operating system (OS) and runtime-level profiling to bridge the semantic gap between OS and applications, which enables tensor-level profiling. This profiling enables co-allocating tensors with similar lifetime and memory access frequency into the same pages. Such fine-grained profiling and tensor collocation avoids unnecessary data movement, improves tensor movement efficiency, and enables larger batch training because of saving in fast memory space. Sentinel reduces fast memory consumption by 80% while retaining comparable performance to fast memory-only system; Sentinel consistently outperforms a state-of-the-art solution on CPU by 37% and two state-of-the-art solutions on GPU by 2x and 21% respectively in training throughput. On two billion-sized datasets BIGANN and DEEP1B, HM-ANN outperforms state-of-the-art compression-based solutions such as L&C and IMI+OPQ in recall-vs-latency by a large margin, obtaining 46% higher recall under the same search latency. We also extend existing graph-based methods such as HNSW and NSG with two strong baseline implementations on HM. At billion-point scale, HM-ANN is 2X and 5.8X faster than our HNSW and NSG baselines respectively to reach the same accuracy. Speaker bioJie Ren is a PhD student in University of California, Merced. Her research focuses on high performance computing, especially on memory management on persistent memory-based heterogeneous memory. Session 5A: Hybrid Memory Chair: Dong Li (UC Merced) Dancing in the Dark: Profiling for Tiered Memory JinyoungChoi (University of California, Riverside);SergeyBlagodurov (Advanced Micro Devices (AMD));Hung-WeiTseng (University of California, Riverside); Speaker:Jinyoung Choi, University of California, Riverside AbstractWith the DDR standard facing density challenges and the emergence of the non-volatile memory technologies such as Cross-Point, phase change, and fast FLASH media, compute and memory vendors are contending with a paradigm shift in the datacenter space. The decades-long status quo of designing servers with DRAM technology as an exclusive memory solution is likely coming to an end. Future systems will increasingly employ tiered memory architectures (TMAs) in which multiple memory technologies work together to satisfy applications' ever- growing demands for more memory, less latency, and greater bandwidth. Exactly how to expose each memory type to software is an open question. Recent systems have focused on hardware caching to leverage faster DRAM memory while exposing slower non-volatile memory to OS-addressable space. The hardware approach that deals with the non-uniformity of TMA, however, requires complex changes to the processor and cannot use fast memory to increase the system's overall memory capacity. Mapping an entire TMA as OS-visible memory alleviates the challenges of the hardware approach but pushes the burden of managing data placement in the TMA to the software layers. 
The software, however, does not see the memory accesses by default; in order to make informed memory-scheduling decisions, software must rely on hardware methods to gain visibility into the load/store address stream. The OS then uses this information to place data in the most suitable memory location. In the original paper, we evaluate different methods of memory-access collection and propose a hybrid tiered-memory approach that offers comprehensive visibility into TMA. Speaker bio: Jinyoung Choi is a third-year Ph.D. student in the Department of Computer Science and Engineering at the University of California, Riverside. He is a current member of the Extreme Storage & Computer Architecture Lab (ESCAL) under the guidance of Dr. Tseng. His main research interests include tiered-memory architectures, emerging memory technologies, heterogeneous architectures, and emerging interconnects. He collaborated with AMD Research as a co-op student during the summers of 2019 and 2020. Before pursuing his Ph.D. program, he worked at Telechip, Inc., Korea for 4 years as an embedded SW engineer, mainly developing Linux kernel drivers. Investigating Hardware Caches for Terabyte-scale NVDIMMs Julian T. Angeles (University of California, Davis); Mark Hildebrand (University of California, Davis); Venkatesh Akella (University of California, Davis); Jason Lowe-Power (University of California, Davis); Speaker: Julian T. Angeles, PhD, Department of Computer Science, UC Davis Abstract: Non-volatile memory (NVRAM) based on phase-change memory (such as the Optane DC Persistent Memory Module) is making its way into Intel servers to address the needs of emerging applications that have a huge memory footprint. These systems have both DRAM and NVRAM on the same memory channel, with the smaller-capacity DRAM serving as a cache to the larger-capacity NVRAM in the so-called 2LM mode. In this work, we perform a preliminary study on the performance of applications known for having diverse workload characteristics and irregular memory access patterns, using DRAM caches on real hardware. To accomplish this, we evaluate a variety of graph processing algorithms on large real-world graph inputs using Galois, a high-performance shared-memory graph analytics framework. We identify a few key characteristics of these large-scale, bandwidth-bound applications that DRAM caches don't account for, which prevents them from taking full advantage of PMM read and write bandwidth. We argue that software-based techniques are necessary for orchestrating the data movement to take full advantage of these new heterogeneous memory systems. Speaker bio: Julian is a second-year PhD student in the Computer Science Department at UC Davis. His current primary research interest lies in software-hardware co-designs for heterogeneous architectures. He received his bachelor's degree from Chico State. He is advised by Professor Jason Lowe-Power. HMMU: A Hardware-based Hybrid Memory Management Unit Fei Wen (Texas A&M University); Mian Qin (Texas A&M University); Paul Gratz (Texas A&M University); Narasimha Reddy (Texas A&M University); Speaker: Fei Wen, Texas A&M University Abstract: Current mobile applications have rapidly growing memory footprints, posing a great challenge for memory system design. Insufficient DRAM main memory will incur frequent data swaps between memory and storage, a process that hurts performance, consumes energy, and deteriorates the write endurance of typical flash storage devices. Alternately, a larger DRAM has higher leakage power and drains the battery faster.
Further, DRAM scaling trends make further growth of DRAM in the mobile space prohibitive due to cost. Emerging non-volatile memory (NVM) has the potential to alleviate these issues due to its higher capacity per cost than DRAM and minimal static power. Recently, a wide spectrum of NVM technologies, including phase-change memories (PCM), memristor, and 3D XPoint have emerged. Despite the mentioned advantages, NVM has longer access latency compared to DRAM and NVM writes can incur higher latencies and wear costs. Therefore, integration of these new memory technologies in the memory hierarchy requires a fundamental rearchitecting of traditional system designs. In this work, we propose a hardware-accelerated memory manager (HMMU) that addresses in a flat address space, with a small partition of the DRAM reserved for sub-page block level management. We design a set of data placement and data migration policies within this memory manager, such that we may exploit the advantages of each memory technology. By augmenting the system with this HMMU, we reduce the overall memory latency while also reducing writes to the NVM. Experimental results show that our design achieves a 39% reduction in energy consumption with only a 12% performance degradation versus an all-DRAM baseline that is likely untenable in the future. Speaker bioFei Wen received his Ph.D. degree in Computer Engineering from Texas A&M University in 2020. He conducted research on interconnect network design and modelling for exascale systems as research associate at the HP Labs. His current research interests include computer architecture, memory systems, and FPGA accelerator. He has expertise across the hardware/software stack, in RTL design, FPGA development, kernel programming, and architecture performance modeling. Unbounded Hardware Transactional Memory for a Hybrid DRAM/NVM Memory System JungiJeong (Purdue University);JaewanHong (KAIST);SeungryoulMaeng (KAIST);ChangheeJung (Purdue University);YoungjinKwon (KAIST); Speaker:Jungi Jeong, Purdue University AbstractPersistent memory programming requires failure atomicity. To achieve this in an efficient manner, recent proposals use hardware-based logging for atomic-durable updates and hardware transactional memory (HTM) for isolation. Although the unbounded HTMs are promising for both performance and programmability reasons, none of the previous studies satisfies the practical requirements. They either require unrealistic hardware overheads or do not allow transactions to exceed on-chip cache boundaries. Furthermore, it has never been possible to use both DRAM and NVM in HTM, though it is becoming a popular persistency model. To this end, this study proposes UHTM, unbounded hardware transactional memory for DRAM and NVM hybrid memory systems. UHTM combines the cache coherence protocol and address-signatures to detect conflicts in the entire memory space. This approach improves concurrency by significantly reducing the false-positive rates of previous studies. More importantly, UHTM allows both DRAM and NVM data to interact with each other in transactions without compromising the consistency guarantee. This is rendered possible by UHTM's hybrid version management that provides an undo-based log for DRAM and a redo-based log for NVM. The experimental results show that UHTM outperforms the state-of-the-art durable HTM, which is LLC-bounded, by 56% on average and up to 818%. 
Speaker bioPostdoctoral Research Associate at Purdue Sparta: High-Performance, Element-Wise Sparse Tensor Contraction on Persistent Memory-based Heterogeneous Memory JiawenLiu (University of California, Merced);JieRen (University of California, Merced);RobertoGioiosa (Pacific Northwest National Laboratory);DongLi (University of California, Merced);JiajiaLi (Pacific Northwest National Laboratory); Speaker:Jiawen Liu, University of California, Merced AbstractSparse tensor contractions appear commonly in many applications. Efficiently computing a two sparse tensor product is challenging: It not only inherits the challenges from common sparse matrix-matrix multiplication (SpGEMM), i.e., indirect memory access and unknown output size before computation, but also raises new challenges because of high dimensionality of tensors, expensive multi-dimensional index search, and massive intermediate and output data. To address the above challenges, we introduce three optimization techniques by using multi-dimensional, efficient hash table representation for the accumulator and larger input tensor, and all-stage parallelization. Evaluating with 15 datasets, we show that Sparta brings 28 − 576 x speedup over traditional sparse tensor contraction with SPA. With our proposed algorithm- and memory heterogeneity-aware data management, Sparta brings extra performance improvement on the heterogeneous memory with DRAM and Intel Optane DC Persistent Memory Module (PMM) over a state-of-the-art software-based data management solution, a hardware-based data management solution, and PMM-only by 30.7% (up to 98.5%), 10.7% (up to 28.3%) and 17% (up to 65.1%) respectively. Speaker bioJiawen Liu is a fourth-year PhD candidate at University of California, Merced, supervised by Prof. Dong Li. His research interests lie in the intersection of systems, machine learning, and high performance computing. Session 5B: Crash Consistent Recovery Chair: Hung-Wei Tseng Cross-Failure Bug Detection in Persistent Memory Programs SihangLiu (University of Virginia);KorakitSeemakhupt (University of Virginia);YizhouWei (University of Virginia);ThomasWenisch (University of Michigan);AasheeshKolli (Pennsylvania State University);SamiraKhan (University of Virginia); Speaker:Sihang Liu, University of Virginia AbstractPersistent memory (PM) technologies, such as Intel's Optane memory, deliver high performance, byte-addressability, and persistence, allowing programs to directly manipulate persistent data in memory without any OS intermediaries. An important requirement of these programs is that persistent data must remain consistent across a failure, which we refer to as the crash consistency guarantee. However, maintaining crash consistency is not trivial. We identify that a consistent recovery critically depends not only on the execution before the failure, but also on the recovery and resumption after failure. We refer to these stages as the pre- and post-failure execution stages. In order to holistically detect crash consistency bugs, we categorize the underlying causes behind inconsistent recovery due to incorrect interactions between the pre- and post-failure execution. First, a program is not crash-consistent if the post-failure stage reads from locations that are not guaranteed to be persisted in all possible access interleavings during the pre-failure stage — a type of programming error that leads to a race that we refer to as a cross-failure race. 
Second, a program is not crash-consistent if the post-failure stage reads persistent data that has been left semantically inconsistent during the pre-failure stage, such as a stale log or uncommitted data. We refer to this type of bug as a cross-failure semantic bug. Together, they form the cross-failure bugs in PM programs. In this work, we provide XFDetector, a tool that detects cross-failure bugs by automatically injecting failures into the pre-failure execution and checking for cross-failure races and semantic bugs in the post-failure continuation. XFDetector has detected four new bugs in three pieces of PM software: one of PMDK's examples, a PM-optimized Redis database, and a PMDK library function. Speaker bio: Sihang Liu is a 5th-year Ph.D. student at the University of Virginia, advised by Professor Samira Khan. Before pursuing the doctoral degree, he obtained Bachelor's degrees from both the University of Michigan and Shanghai Jiaotong University. Sihang Liu's primary research interest lies in the software and hardware co-design of persistent memory systems. On the hardware side, his research aims to optimize the performance and guarantee crash consistency for practical persistent memory systems that are integrated with both storage and memory support. On the software side, he works on testing the crash consistency guarantees of PM-based programs. His works have provided several open-source tools and detected real-world bugs in well-known persistent memory software systems. He has published these works at top conferences in Computer Architecture, including ISCA, ASPLOS, and HPCA. He has also served as a reviewer for ToS, TCAD, ASPLOS AE, OSDI AE, and EuroSys AE. Towards Bug-free Persistent Memory Applications Ian Neal (University of Michigan); Andrew Quinn (University of Michigan); Baris Kasikci (University of Michigan); Speaker: Ian Neal, University of Michigan Abstract: Persistent Memory (PM) aims to revolutionize the storage-memory hierarchy, but programming these systems is error-prone. Our work investigates how to help developers write better, bug-free PM applications by automatically debugging them. We first perform a study of bugs in persistent memory applications to identify the opportunities and pain-points of debugging these systems. Then, we discuss our work on AGAMOTTO, a generic and extensible system for automatically detecting PM bugs. Unlike existing tools that rely on extensive test cases or annotations, AGAMOTTO automatically detects bugs in PM systems by extending symbolic execution to model persistent memory. AGAMOTTO has so far identified 84 new bugs in 5 different PM applications and frameworks while incurring no false positives. We then discuss HIPPOCRATES, a system that automatically fixes bugs in PM systems. HIPPOCRATES "does no harm": its fixes are guaranteed to fix a PM bug without introducing new bugs. We show that HIPPOCRATES produces fixes that are functionally equivalent to developer fixes and that HIPPOCRATES fixes have performance that rivals manually-developed code. Speaker bio: Ian Neal is a PhD candidate at the University of Michigan, advised by Baris Kasikci. His current research focus is the development of efficient and reliable systems using emerging persistent main memory technologies. He is also interested in developing verifiably secure hardware systems and tools that allow for easier development of secure systems.
Corundum: Statically-Enforced Persistent Memory Safety Morteza Hoseinzadeh (UC San Diego); Steven Swanson (UC San Diego); Speaker: Morteza Hoseinzadeh, University of California, San Diego Abstract: Fast, byte-addressable, persistent main memories (PM) make it possible to build complex data structures that can survive system failures. Programming for PM is challenging, not least because it combines well-known programming challenges like locking, memory management, and pointer safety with novel PM-specific bug types. It also requires logging updates to PM to facilitate recovery after a crash. A misstep in any of these areas can corrupt data, leak resources, or prevent successful recovery after a crash. Existing PM libraries in a variety of languages (C, C++, Python, Java) simplify some of these areas, but they still require the programmer to learn (and flawlessly apply) complex rules to ensure correctness. Opportunities for data-destroying bugs abound. This paper presents Corundum, a Rust-based library with an idiomatic PM programming interface, which leverages Rust's type system to statically avoid the most common PM programming bugs. Corundum lets programmers develop persistent data structures using familiar Rust constructs and have confidence that they are free of many types of bugs. We have implemented Corundum and found its performance to be as good as or better than Intel's widely-used PMDK library. Speaker bio: Morteza Hoseinzadeh is a Ph.D. candidate in the NVSL lab in the CSE Department at UC San Diego. He has been working on building software toolchains that aim to ease persistent memory programming with confidence. Fast, Flexible and Comprehensive Bug Detection for Persistent Memory Programs Bang Di (Hunan University); Jiawen Liu (University of California, Merced); Hao Chen (Hunan University); Dong Li (University of California, Merced); Speaker: Bang Di, Hunan University Abstract: Debugging PM programs faces a fundamental tradeoff between performance overhead and bug coverage (comprehensiveness). Large performance overhead or limited bug coverage makes debugging infeasible or ineffective for PM programs. In this paper, we propose PMDebugger, a debugger to detect crash consistency bugs. Unlike prior work, PMDebugger is fast, flexible, and comprehensive for bug detection. The design of PMDebugger is driven by a characterization of how three fundamental operations (store, cache writeback, and fence) typically happen in PM programs. PMDebugger uses a hierarchical design composed of PM debugging-specific data structures, operations, and bug-detection algorithms (rules). We generalize nine rules to detect crash-consistency bugs for various PM persistency models. Compared with a state-of-the-art detector (XFDetector) and an industry-quality detector (Pmemcheck), PMDebugger leads to 49.3x and 3.4x speedup on average. Compared with another state-of-the-art detector (PMTest) optimized for high performance, PMDebugger achieves comparable performance without heavily relying on the programmer's annotations, and detects 38 more bugs than PMTest on ten applications. PMDebugger also identifies more bugs than XFDetector, Pmemcheck, and PMTest. PMDebugger detects 19 new bugs in a real application (memcached) and two new bugs from Intel PMDK. Speaker bio: I am a 4th-year Ph.D. student in the College of Computer Science and Electronic Engineering at Hunan University. My research focuses on computer architecture and operating systems, specifically on debugging for persistent memory (PM) and GPUs.
Tracking in Order to Recover - Detectable Recovery of Lock-Free Data Structures HagitAttiya (Technion);OhadBen-Baruch (Ben-Gurion University);PanagiotaFatourou (FORTH ICS and University of Crete, Greece);DannyHendler (Ben-Gurion University);EleftheriosKosmas (University of Crete, Greece); Speaker:Ohad Ben-Baruch, Ben-Gurion University AbstractThis paper presents the \emph{tracking approach} for deriving \emph{detectably recoverable} (and thus also \emph{durable}) implementations of many widely-used concurrent data structures. Such data structures, satisfying \emph{detectable recovery}, are appealing for emerging systems featuring byte-addressable \emph{non-volatile main memory} (\emph{NVRAM}), whose persistence allows to efficiently resurrect failed processes after crashes.Their implementation is important because they are building blocks for the construction of simple, well-structured, sound and error-resistant multiprocessor systems. For instance, in many big-data applications, shared in-memory tree-based data indices are created for fast data retrieval and useful data analytics. Speaker bioOhad Ben-Baruch completed his PhD in computer science at Ben-Gurion University under the supervision of Prof. Danny Hendler and Prof. Hagit Attiya. His research focuses on shared memory and concurrent computation. More specifically, on complexity bounds for concurrent objects in the crash-stop and crash-recovery system models. In his PhD dissertation the notion of Nesting-Safe Recoverable Linearizability (NRL) was proposed, a novel model and correctness condition for the crash-recovery shared memory model which allows for nesting of recoverable objects, together with lower and upper bounds for objects implementations satisfying the condition. Ohad is currently working at BeyondMinds as a researcher and algorithms development. Building Fast Recoverable Persistent Data Structures HaosenWen (University of Rochester);WentaoCai (University of Rochester);MingzheDu (University of Rochester);LouisJenkins (University of Rochester);BenjaminValpey (University of Rochester);Michael L.Scott (University of Rochester); Speaker:Haosen Wen, University of Rochester AbstractThe recent emergence of fast, dense, nonvolatile main memory suggests that certain long-lived data might remain in its natural pointer-rich format across program runs and hardware reboots. Operations on such data must be instrumented with explicit write-back and fence instructions to ensure consistency in the wake of a crash. Techniques to minimize the cost of this instrumentation are an active topic of research. We present what we believe to be the first general-purpose approach to building \emph{buffered durably linearizable} persistent data structures, and a system, Montage, to support that approach. Montage is built on top of the Ralloc nonblocking persistent allocator. It employs a slow-ticking \emph{epoch clock}, and ensures that no operation appears to span an epoch boundary. It also arranges to persist only that data minimally required to reconstruct the structure after a crash. If a crash occurs in epoch $e$, all work performed in epochs $e$ and $e-1$ is lost, but work from prior epochs is preserved. Speaker bioHaosen Wen is a senior Ph.D. candidate at University of Rochester. His research interests include storage models and applications for non-volatile byte-addressable memories, nonblocking data structures and their memory management, and in-memory database systems. 
Session 6A: Hardware for Crash Consistency Chair: Changhee Jung (Purdue) ArchTM: Architecture-Aware, High Performance Transaction for Persistent Memory Kai Wu (University of California, Merced); Jie Ren (University of California, Merced); Ivy Peng (Lawrence Livermore National Laboratory); Dong Li (University of California, Merced); Speaker: Kai Wu, University of California, Merced Abstract: Failure-atomic transactions are a critical mechanism for accessing and manipulating data on persistent memory (PM) with crash consistency. We identify that small random writes in metadata modifications and locality-oblivious memory allocation in traditional PM transaction systems mismatch the PM architecture. We present ArchTM, a PM transaction system based on two design principles: avoiding small writes and encouraging sequential writes. ArchTM is a variant of a copy-on-write (CoW) system designed to reduce write traffic to PM. Unlike conventional CoW schemes, ArchTM reduces metadata modifications through a scalable lookup table on DRAM. ArchTM introduces an annotation mechanism to ensure crash consistency and a locality-aware data path in memory allocation to increase coalescable writes inside PM devices. We evaluate ArchTM against four state-of-the-art transaction systems (one in PMDK, Romulus, DudeTM, and one from Oracle). ArchTM outperforms the competitor systems by 58x, 5x, 3x and 7x on average, using micro-benchmarks and real-world workloads on real PM. Speaker bio: Kai is a Ph.D. candidate at the University of California, Merced. Before coming to UC Merced, he earned his master's degree in Computer Science and Engineering from Michigan State University in 2016. His research areas are computer systems, heterogeneous computing, and high-performance computing (HPC). He designs high-performance and large-scale computer systems with hardware heterogeneity. His recent work focuses on designing system support for persistent memory-based big memory platforms. PMEM-Spec: Persistent Memory Speculation (Strict Persistency Can Trump Relaxed Persistency) Jungi Jeong (Purdue University); Changhee Jung (Purdue University); Abstract: Persistency models define the persist-order that controls the order in which stores update persistent memory (PM). As with memory consistency, relaxed persistency models provide better performance than strict ones by relaxing the ordering constraints. To support such relaxed persistency models, previous studies resort to APIs for annotating the persist-order in programs and to hardware implementations for enforcing the programmer-specified order. However, this approach to relaxed persistency support imposes costly burdens on both architects and programmers. In particular, the goal of this study is to demonstrate that the strict persistency model can outperform the relaxed models with far less hardware complexity and programming difficulty. To achieve that, this paper presents PMEM-Spec, which speculatively allows any PM accesses without stalling or buffering, detecting ordering violations (i.e., misspeculation) for PM loads and stores. PMEM-Spec treats misspeculation as a power failure and thus leverages failure-atomic transactions to recover from misspeculation by purposely aborting and restarting them. Since ordering violations rarely occur, PMEM-Spec can accelerate persistent memory accesses without significant misspeculation penalty.
Experimental results show that PMEM-Spec outperforms the epoch-based persistency models with Intel x86 ISAs and the state-of-the-art hardware support by 27.2% and 10.6%, respectively. Efficient Hardware-Assisted Out-of-Place Update for Non-Volatile Memory Miao Cai (Nanjing University); Chance C. Coats (University of Illinois at Urbana-Champaign); Jeonghyun Woo (University of Illinois at Urbana-Champaign); Jian Huang (University of Illinois at Urbana-Champaign); Speaker: Jian Huang, University of Illinois at Urbana-Champaign Abstract: Byte-addressable non-volatile memory (NVM) is a promising technology that provides near-DRAM performance with scalable memory capacity. However, it requires atomic data durability to ensure memory persistency. Therefore, many techniques, including logging and shadow paging, have been proposed. However, most of them either introduce extra write traffic to NVM or suffer from significant performance overhead on the critical path of program execution, or even both. In this paper, we propose a transparent and efficient hardware-assisted out-of-place update (Hoop) mechanism that supports atomic data durability without incurring many extra writes or much performance overhead. The key idea is to write the updated data to a new place in NVM, while retaining the old data until the updated data becomes durable. To support this, we develop a lightweight indirection layer in the memory controller to enable efficient address translation and adaptive garbage collection for NVM. We evaluate Hoop with a variety of popular data structures and data-intensive applications, including key-value stores and databases. Our evaluation shows that Hoop achieves low critical-path latency with small write amplification, which is close to that of a native system without persistence support. Compared with state-of-the-art crash-consistency techniques, it improves application performance by up to 1.7X, while reducing the write amplification by up to 2.1X. Hoop also demonstrates scalable data recovery capability on multi-core systems. Speaker bio: Jian Huang is an Assistant Professor in the ECE department at the University of Illinois at Urbana-Champaign. He received his Ph.D. in Computer Science at Georgia Tech in 2017. His research interests include computer systems, systems architecture, systems security, distributed systems, and especially the intersections of them. He enjoys building systems. His research contributions have been published at top-tier architecture, systems, and security conferences. His work received the Best Paper Award at USENIX ATC and an IEEE Micro Top Picks selection. He also received the NetApp Faculty Fellowship Award and the Google Faculty Research Award. TSOPER: Efficient Coherence-Based Strict Persistency Per Ekemark (Uppsala University, Sweden); Yuan Yao (Uppsala University, Sweden); Alberto Ros (Universidad de Murcia, Spain); Konstantinos Sagonas (Uppsala University, Sweden and National Technical Univ. of Athens, Greece); Stefanos Kaxiras (Uppsala University, Sweden); Speaker: Per Ekemark, Uppsala University, Sweden Abstract: We propose a novel approach for hardware-based strict TSO persistency, called TSOPER. We allow a TSO persistency model to freely coalesce values in the caches by forming atomic groups of cachelines to be persisted. A group persist is initiated for an atomic group if any of its newly written values are exposed to the outside world.
A key difference with prior work is that our architecture is based on the concept of a TSO persist buffer, that sits in parallel to the shared LLC, and persists atomic groups directly from private caches to NVM, bypassing the coherence serialization of the LLC. To impose dependencies among atomic groups that are persisted from the private caches to the TSO persist buffer, we introduce a sharing-list coherence protocol that naturally captures the order of coherence operations in its sharing lists, and thus can reconstruct the dependencies among different atomic groups entirely at the private cache level without involving the shared LLC. The combination of the sharing-list coherence and the TSO persist buffer allows persist operations and writes to non-volatile memory to happen in the background and trail the coherence operations. Coherence runs ahead at full speed; persistency follows belatedly. Our evaluation shows that TSOPER provides the same level of reordering as a program-driven relaxed model, hence, approximately the same level of performance, albeit without needing the programmer or compiler to be concerned about false sharing, data-race-free semantics, etc., and guaranteeing all software that can run on top of TSO, automatically persists in TSO. Speaker bioCurrently working toward efficient and productive use of persistent memory in both single and distributed systems. Previously explored compiler optimisations involving software prefetching and data-race-freedom. I prefer my bows with either strings or arrows. Characterizing non-volatile memory transactional systems PradeepFernando (Georgia Tech);IrinaCalciu (VMware);JayneelGandhi (VMware);AasheeshKolli (Penn State);AdaGavrilovska (Georgia Tech); Speaker:Pradeep Fernando, Georgia Institute of Technology AbstractEmerging non-volatile memory (NVM) technologies promise memory speed byte-addressable persistent storage with a load/store interface. However, programming applications to directly manipulate NVM data is complex and error-prone. Applications generally employ libraries that hide the low-level details of the hardware and provide a transactional programming model to achieve crash-consistency. Furthermore, applications continue to expect correctness during concurrent executions, achieved through the use of synchronization. To achieve this, applications seek well-known ACID guarantees. However, realizing this presents designers of transactional systems with a range of choices in how to combine several low-level techniques, given target hardware features and workload characteristics. This presentation will discuss the tradeoffs associated with these choices and present detailed experimental analysis performed across a range of single- and multi-threaded workloads using a simulation environment and real PMEM hardware. Session 6B: Accelerating Applications II Building Scalable Dynamic Hash Tables on Persistent Memory BaotongLu (The Chinese University of Hong Kong);XiangpengHao (Simon Fraser University);TianzhengWang (Simon Fraser University);EricLo (The Chinese University of Hong Kong); Speaker:Baotong Lu, The Chinese University of Hong Kong AbstractByte-addressable persistent memory (PM) brings hash tables the potential of low latency, cheap persistence, and instant recovery. The recent advent of Intel Optane DC Persistent Memory Modules(DCPMM) further accelerates this trend. Many new hash table designs have been proposed, but most of them were based on emulation and perform sub-optimally on real PM. 
They were also piece-wise and partial solutions that side-step many important properties, in particular good scalability, high load factor, and instant recovery. We present Dash, a holistic approach to building dynamic and scalable hash tables on real PM hardware with all the aforementioned properties. Based on Dash, we adapted two popular dynamic hashing schemes (extendible hashing and linear hashing). On a 24-core machine with Intel Optane DCPMM, we show that compared to state-of-the-art, Dash-enabled hash tables can achieve up to∼3.9×higher performance with up to over 90% load factor and an instant recovery time of 57ms regardless of data size. Speaker bioBaotong Lu is a Ph.D. candidate at the Department of Computer Science and Engineering, The Chinese University of Hong Kong (advisor: Prof. Eric Lo). He is also a visiting Ph.D. student at the Data Science Research Group, Simon Fraser University (host advisor: Prof. Tianzheng Wang). His research interest lies in the data management system, specifically next-generation database system on persistent memory and multicore processor. He is a recipient of the 2021 ACM SIGMOD Research Highlight Award. Performance Prediction of Graph Analytics on Persistent Memory DiegoBraga (UFBA);DanielMosse (PITT);ViniciusPetrucci (University of Pittsburgh & UFBA); Speaker:Diego Moura, Federal University of Bahia AbstractConsidering a system with heterogeneous memory (DRAM and PMEM, in this case Intel Optane), the problem we address is to decide which application will be allocated on each type of resource. We built a model that estimates the impact of running the application on Intel Optane using performance counters from previous runs on DRAM. From this model we present an offline application placement for the context of heterogeneous memories. Our results show that judicious allocation can yield average reduction of 22% and 120% in makespan and degradation metrics respectively. Speaker bioPhD student Diego Braga started by doing research with heterogeneous processors using arm big.little boards. In 2020, after receiving a grant to access an Intel server with Optane technology, he "migrated" his research to heterogeneous memory. Currently, he works with allocation of memory for the entire application as well as investigating which characteristics have a larger impact on performance at an object-level. Disaggregating Persistent Memory and Controlling Them Remotely: An Exploration of Passive Disaggregated Key-Value Stores Shin-YehTsai (Facebook);YizhouShan (University of California, San Diego);YiyingZhang (University of California, San Diego); Speaker:Yizhou Shan, UCSD AbstractMany datacenters and clouds manage storage systems separately from computing services for better manageability and resource utilization. These existing disaggregated storage systems use hard disks or SSDs as storage media. Recently, the technology of persistent memory (PM) has matured and seen initial adoption in several datacenters. Disaggregating PM could enjoy the same benefits of traditional disaggregated storage systems, but it requires new designs because of its memory-like performance and byte addressability. In this paper, we explore the design of disaggregating PM and managing them remotely from compute servers, a model we call passive disaggregated persistent memory, or pDPM. Compared to the alternative of managing PM at storage servers, pDPM significantly lowers monetary and energy costs and avoids scalability bottlenecks at storage servers. 
We built three key-value store systems using the pDPM model. The first one lets all compute nodes directly access and manage storage nodes. The second uses a central coordinator to orchestrate the communication between compute and storage nodes. These two systems have various performance and scalability limitations. To solve these problems, we built Clover, a pDPM system that separates the location, communication mechanism, and management strategy of the data plane and the metadata/control plane. Speaker bioYizhou Shan is PhD student at UCSD, advised by Prof. Yiying Zhang. His research focus on distributed system, persistent memory, and operating systems. Making Volatile Index Structures Persistent using TIPS R. MadhavaKrishnan (Virginia Tech);Wook-HeeKim (Virginia Tech);Hee WonLee (Consultant);MinsungJang (Perspecta Labs);Sumit KumarMonga (Virginia Tech);AjitMathew (Virginia Tech);ChangwooMin (Virginia Tech); Speaker:R. Madhava Krishnan, Virginia Tech AbstractWe propose TIPS – a generic framework that systematically converts volatile indexes to their persistent counterpart. Any volatile index can be plugged-in into the TIPS framework and become persistent with only minimal source code changes. TIPS neither places restrictions on the concurrency model nor requires in-depth knowledge of the volatile index. TIPS supports a strong consistency guarantee, i.e., durable linearizability, and internally handles the persistent memory leaks across crashes. TIPS relies on novel DRAM-NVMM tiering to achieve high-performance and good scalability. It uses a hybrid logging technique called the UNO logging to minimize the crash consistency overhead. We converted seven different indexes and a real-world key-value store application using TIPS and evaluated them using YCSB workloads. TIPS-enabled indexes outperform the state-of-the-art persistent indexes significantly in addition to offering many other benefits that existing persistent indexes do not provide. Speaker bioMadhava Krishnan is a third-year Ph.D. student at Virginia Tech working under Dr. Changwoo Min. His research interests include Operating systems, Storage systems, and Concurrency & Scalability. He currently works on developing system software for emerging persistent memory particularly focusing on persistent transactions and index structures. HM-ANN: Efficient Billion-Point Nearest Neighbor Search on Persistent Memory-based Heterogeneous Memory JieRen (University of California, Merced);MinjiaZhang (Microsoft Research);DongLi (University of California, Merced); AbstractThe state-of-the-art approximate nearest neighbor search (ANNS) algorithms face a fundamental tradeoff between query latency and accuracy, because of small main memory capacity: To store indices in main memory for fast query response, They have to limit the number of data points or store compressed vectors, which hurts search accuracy. The emergence of heterogeneous memory (HM) brings opportunities to largely increase memory capacity and break the above tradeoff: Using HM, billions of data points can be placed in main memory on a single machine without using any data compression. However, HM consists of both fast (but small) memory and slow (but large) memory, and using HM inappropriately slows down query time significantly. In this work, we present a novel graph-based similarity search algorithm called HM-ANN, which takes both memory and data heterogeneity into consideration and enables billion-scale similarity search on a single node without using compression. 
Award Finalists NVMW 2020

PMTest: A Fast and Flexible Testing Framework for Persistent Memory Programs
Sihang Liu (University of Virginia); Yizhou Wei (University of Virginia); Jishen Zhao (UC San Diego); Aasheesh Kolli (Penn State University & VMware Research); Samira Khan (University of Virginia)
Abstract: Recent non-volatile memory technologies such as 3D XPoint and NVDIMMs have enabled persistent memory (PM) systems that can manipulate persistent data directly in memory. This advancement of memory technology has spurred the development of a new set of crash-consistent software (CCS) for PM - applications that can recover persistent data from memory in a consistent state in the event of a crash (e.g., power failure). CCS developed for persistent memory ranges from kernel modules to user-space libraries and custom applications. However, ensuring crash consistency in CCS is difficult and error-prone. Programmers typically employ low-level hardware primitives or transactional libraries to enforce ordering and durability guarantees that are required for ensuring crash consistency. Unfortunately, hardware can reorder instructions at runtime, making it difficult for the programmers to test whether the implementation enforces the correct ordering and durability guarantees. We believe that there is an urgent need for developing a testing framework that helps programmers identify crash consistency bugs in their CCS. We find that prior testing tools lack generality, i.e., they work only for one specific CCS or memory persistency model and/or introduce significant performance overhead. To overcome these drawbacks, we propose PMTest, a crash consistency testing framework that is both flexible and fast. PMTest provides flexibility by providing two basic assertion-like software checkers to test two fundamental characteristics of all CCS: the ordering and durability guarantees. These checkers can also serve as the building blocks of other application-specific, high-level checkers. PMTest enables fast testing by deducing the persist order without exhausting all possible orders. In the evaluation with eight programs, PMTest not only identified 45 synthetic crash consistency bugs, but also detected 3 new bugs in a file system (PMFS) and in applications developed using a transactional library (PMDK), while on average being 7.1× faster than the state-of-the-art tool.
Speaker bio: Sihang Liu is a 4th-year Ph.D. student at the University of Virginia, advised by Professor Samira Khan. Before pursuing the doctoral degree, he obtained Bachelor's degrees from both the University of Michigan and Shanghai Jiaotong University. Sihang Liu's primary research interest lies in the software and hardware co-design of persistent memory systems. On the hardware side, his research aims to optimize the performance and guarantee crash consistency for practical persistent memory systems that are integrated with both storage and memory support. On the software side, he works on testing the crash consistency guarantees of PM-based programs. His works have provided several open-source tools and detected real-world bugs in well-known persistent memory software systems. He has published these works at top conferences in Computer Architecture, including ISCA, ASPLOS, and HPCA. He has also served as a reviewer for ToS and an artifact reviewer for ASPLOS.
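To give a feel for the two checker primitives the PMTest abstract above describes (an ordering check and a durability check), here is a hypothetical, greatly simplified Python model that inspects a recorded write/flush/fence trace. The function names and the trace format are illustrative inventions for this sketch, not PMTest's actual API or algorithm.

# Toy model of "assertion-like" persistency checkers over an event trace.
WRITE, FLUSH, FENCE = "write", "flush", "fence"

def persist_points(trace):
    """Return {address: index of the fence after which its last write is durable}."""
    durable, pending, written = {}, {}, {}
    for i, (op, addr) in enumerate(trace):
        if op == WRITE:
            written[addr] = i
            durable.pop(addr, None)      # a new write invalidates old durability
        elif op == FLUSH and addr in written:
            pending[addr] = written.pop(addr)
        elif op == FENCE:
            for a in list(pending):      # everything flushed before the fence
                durable[a] = i           # becomes durable once the fence completes
                del pending[a]
    return durable

def is_persisted(trace, addr):
    """Durability checker: was the last write to addr flushed and fenced?"""
    return addr in persist_points(trace)

def is_ordered_before(trace, addr_a, addr_b):
    """Ordering checker: is addr_a guaranteed durable no later than addr_b?"""
    d = persist_points(trace)
    return addr_a in d and addr_b in d and d[addr_a] <= d[addr_b]

# Example: persist the data, fence, then persist the commit flag.
trace = [(WRITE, "data"), (FLUSH, "data"), (FENCE, None),
         (WRITE, "valid"), (FLUSH, "valid"), (FENCE, None)]
assert is_persisted(trace, "valid")
assert is_ordered_before(trace, "data", "valid")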
Semi-Asymmetric Parallel Graph Algorithms for NVRAMs
Laxman Dhulipala (Carnegie Mellon University); Charles McGuffey (Carnegie Mellon University); Hongbo Kang (Carnegie Mellon University); Yan Gu (UC Riverside); Guy E. Blelloch (Carnegie Mellon University); Phillip B. Gibbons (Carnegie Mellon University); Julian Shun (Massachusetts Institute of Technology)
Speaker: Laxman Dhulipala, Carnegie Mellon University
Abstract: Emerging non-volatile main memory (NVRAM) technologies provide novel features for large-scale graph analytics, combining byte-addressability, low idle power, and improved memory density. Systems are likely to have an order of magnitude more NVRAM than traditional memory (DRAM), allowing large graph problems to be solved efficiently at a modest cost on a single machine. However, a significant challenge in achieving high performance is in accounting for the fact that NVRAM writes can be significantly more expensive than NVRAM reads. In this paper, we propose an approach to parallel graph analytics in which the graph is stored as a read-only data structure (in NVRAM), and the amount of mutable memory is kept proportional to the number of vertices. Similar to the popular semi-external and semi-streaming models for graph analytics, the approach assumes that the vertices of the graph fit in a fast read-write memory (DRAM), but the edges do not. In NVRAM systems, our approach eliminates writes to the NVRAM, among other benefits. We present a model, the Parallel Semi-Asymmetric Model (PSAM), to analyze algorithms in this setting, and run experiments on a 48-core NVRAM system to validate the effectiveness of these algorithms. To this end, we study over a dozen graph problems. We develop parallel algorithms for each that are efficient, often work-optimal, in the model. Experimentally, we run all of the algorithms on the largest publicly-available graph and show that our PSAM algorithms outperform the fastest prior algorithms designed for DRAM or NVRAM. We also show that our algorithms running on NVRAM nearly match the fastest prior algorithms running solely in DRAM, by effectively hiding the costs of repeatedly accessing NVRAM versus DRAM.
Speaker bio: Laxman is a final-year Ph.D. student at CMU where he is very fortunate to be advised by Guy Blelloch. He has worked broadly on designing and implementing provably-efficient parallel graph algorithms in different parallel models of computation.

Error-Correcting WOM Codes for Worst-Case and Random Errors
Amit Solomon (Technion); Yuval Cassuto (Technion)
Speaker: Amit Solomon, Technion - Israel Institute of Technology
Abstract: We construct error-correcting WOM (write-once memory) codes that guarantee correction of any specified number of errors in q-level memories. The constructions use suitably designed short q-ary WOM codes and concatenate them with outer error-correcting codes over different alphabets, using suitably designed mappings. In addition to constructions for guaranteed error correction, we develop an error-correcting WOM scheme for random errors using the concept of multi-level coding.
Speaker bio: Amit Solomon is a Ph.D. candidate at the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, working with Prof. Muriel Médard at the Network Coding and Reliable Communication group in the Research Laboratory of Electronics. His research interests are coding theory, information theory, and communication, among others. He received the B.Sc. (cum laude) and M.Sc. degrees
in Electrical Engineering from the Technion-Israel Institute of Technology, in 2015 and 2018, respectively. He has received the Irwin Mark Jacobs and Joan Klein Jacobs Presidential Fellowship.

Efficient Architectures for Generalized Integrated Interleaved Decoder
Xinmiao Zhang (The Ohio State University); Zhenshan Xie (The Ohio State University)
Speaker: Zhenshan Xie, The Ohio State University
Abstract: Generalized integrated interleaved (GII) codes nest short sub-codewords to generate parities shared by the sub-codewords. They allow hyper-speed decoding with excellent correction capability, and are essential to next-generation data storage. On the other hand, the hardware implementation of GII decoders faces many challenges, including low achievable clock frequency and large silicon area. This abstract presents novel algorithmic reformulations and architectural transformations to address each bottleneck. For an example GII code that has the same rate and length as eight un-nested (255, 223) Reed-Solomon (RS) codes, our GII decoder only has 30% area overhead compared to the RS decoder while achieving 7 orders of magnitude lower frame error rate. Its critical path only consists of 7 XOR gates, and it can easily achieve more than 40 GByte/s throughput.
Speaker bio: Zhenshan Xie received the B.S. degree in information engineering from East China University of Science and Technology, Shanghai, China, in 2014, and the M.S. degree in communications and information systems from the University of Chinese Academy of Sciences, Beijing, China, in 2017. He is currently working toward the Ph.D. degree in the Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH.

SOLQC: Synthetic Oligo Library Quality Control Tool
Omer Sabary (Technion – Israel Institute of Technology); Yoav Orlev (Interdisciplinary Center Herzliya); Roy Shafir (Technion – Israel Institute of Technology, Interdisciplinary Center Herzliya); Leon Anavy (Technion – Israel Institute of Technology); Eitan Yaakobi (Technion – Israel Institute of Technology); Zohar Yakhini (Technion – Israel Institute of Technology, Interdisciplinary Center Herzliya)
Abstract: DNA-based storage has attracted significant attention due to recent demonstrations of the viability of storing information in macromolecules using synthetic oligo libraries. As DNA storage experiments, as well as other experiments with synthetic oligo libraries, grow in number and complexity, analysis tools can facilitate quality control and help in assessment and inference. We present a novel analysis tool, called SOLQC, which enables fast and comprehensive analysis of synthetic oligo libraries, based on next generation sequencing (NGS) analysis performed by the user. SOLQC provides statistical information such as the distribution of variant representation, different error rates, and their dependence on sequence or library properties. SOLQC produces graphical descriptions of the analysis results. The results are reported in a flexible report format. We demonstrate SOLQC by analyzing literature libraries. We also discuss the potential benefits and relevance of the different components of the analysis.
Speaker bio: Omer Sabary received his B.Sc. in Computer Science from the Technion in 2018. He is currently an M.Sc. student in the department of computer science at the Technion under the supervision of Associate Professor Eitan Yaakobi. His dissertation spans reconstruction algorithms for DNA storage systems and error characterization of the DNA storage channel.
LIST OF POSTERS

Thread-specific Database Buffer Management in Multi-core NVM Storage Environments
Tsuyoshi Ozawa (Institute of Industrial Science, The University of Tokyo); Yuto Hayamizu (Institute of Industrial Science, The University of Tokyo); Kazuo Goda (Institute of Industrial Science, The University of Tokyo); Masaru Kitsuregawa (Institute of Industrial Science, The University of Tokyo)
Speaker: Tsuyoshi Ozawa, The University of Tokyo
Abstract: Database buffer management is a cornerstone of modern database management systems (DBMS). So far, a *shared buffer* strategy has been widely employed to improve cache efficiency and reduce the IO workload. However, it involves a significant processing overhead induced by inter-thread synchronization, thus failing to exploit the potential bandwidth that recent non-volatile memory (NVM) storage devices offer. This paper proposes to employ a *separated buffer* strategy. With this strategy, the database buffer manager is able to achieve significantly higher throughput, even though it may produce an extra amount of IO workload. In recent multi-core NVM storage environments, the separated buffer performs faster in query processing than the shared buffer. This paper presents our experimental study with the TPC-H dataset on two different NVM machines, demonstrating that the separated buffer achieves up to 1.47 million IOPS and finally performs up to 637% faster in query processing than the shared buffer.
Speaker bio: Tsuyoshi Ozawa is a researcher at the University of Tokyo working on topics in database management systems and storage systems. He is a member of ACM, USENIX, IPSJ (Information Processing Society of Japan), and DBSJ (The Database Society of Japan).

Leveraging Intel Optane for HPC workflows
Ranjan Sarpangala Venkatesh (Georgia Institute of Technology); Tony Mason (University of British Columbia); Pradeep Fernando (Georgia Institute of Technology); Greg Eisenhauer (Georgia Institute of Technology); Ada Gavrilovska (Georgia Institute of Technology)
Speaker: Ranjan Sarpangala Venkatesh, Georgia Institute of Technology
Abstract: High Performance Computing (HPC) workloads are processing ever-increasing data volumes, which gives rise to data movement challenges. In-situ execution of HPC workflows coupling simulation and analytics applications is a common mechanism for reducing cross-node traffic. Further, data movement costs can be reduced by using large capacity persistent memory, such as Intel's Optane PMEM. Recent work has described best practices for optimizing the use of Optane by tuning based on workload characteristics. However, optimizing one component of an HPC workload does not guarantee optimal performance in the end-to-end workflow. Instead, we propose and evaluate new strategies for optimizing the use of Intel Optane for such HPC workflows.
Speaker bio: Ranjan Sarpangala Venkatesh is a Ph.D. student in the School of Computer Science at the Georgia Institute of Technology, advised by Prof. Ada Gavrilovska. His current research focus is on system support for leveraging persistent memory to make HPC workflows faster. His previous research made Docker container snapshot/restore 10X faster for large memory applications. Prior to joining Georgia Tech, he worked on storage device drivers at Hewlett Packard Enterprise. He earned an M.S. in Computer Science from the University of California, Santa Cruz.
Durability Through NVM Checkpointing
David Aksun (EPFL); James Larus (EPFL)
Speaker: David Aksun, EPFL
Abstract: Non-Volatile Memory (NVM) is an emerging type of memory that offers fast, byte-addressable persistent storage. One promising use for persistent memory is constructing robust, high-performance internet and cloud services, which often maintain very large, in-memory databases and need to quickly recover from faults or failures. The research community has focused on storing these large data structures in NVM, in essence using it as durable RAM. The focus of this paper is to take advantage of the existing DRAM to provide better runtime performance. Heavily read or written data should reside in DRAM, where it can be accessed at a fraction of the cost, and only modified values should be persisted in NVM. This paper presents CpNvm, a runtime system that uses periodic checkpoints to maintain a recoverable copy of a program's data, with overhead low enough for widespread use. To use CpNvm, a program's developer must insert a library call at the first write to a location in a persistent data structure and make another call when the structure is in a consistent state. Our primary goal is high performance, even at the cost of relaxing the crash-recovery model. CpNvm offers durability for large data structures at low overhead cost. Running on Intel Optane NVM, we achieve overheads of 0-15% on the YCSB benchmarks running with minimally-modified Masstree and overheads of 6.5% or less for Memcached.
Speaker bio: David Aksun is a Ph.D. candidate at EPFL in Lausanne. His research focuses on programming language challenges and performance optimization issues for byte-addressable non-volatile memory. His current research investigates the potential optimizations that can be used for building durable data structures. He has a B.S. (summa cum laude) in Computer Engineering from Istanbul Technical University.

PMIdioBench: A Benchmark Suite for Understanding the Idiosyncrasies of Persistent Memory
Shashank Gugnani (The Ohio State University); Arjun Kashyap (The Ohio State University); Xiaoyi Lu (The Ohio State University)
Abstract: High capacity persistent memory (PMEM) is finally commercially available in the form of Intel's Optane DC Persistent Memory Module (DCPMM). Early evaluations of DCPMM show that its behavior is more nuanced and idiosyncratic than previously thought. Several assumptions made about its performance that guided the design of PMEM-enabled systems have been shown to be incorrect. Unfortunately, several peculiar performance characteristics of DCPMM are related to the memory technology (3D-XPoint) used and its internal architecture. It is expected that other technologies (such as STT-RAM, ReRAM, NVDIMM), with highly variable characteristics, will be commercially shipped as PMEM in the near future. Current evaluation studies fail to understand and categorize the idiosyncratic behavior of PMEM, i.e., how the peculiarities of DCPMM relate to other classes of PMEM. Clearly, there is a need for a study which can guide the design of systems and is agnostic to PMEM technology and internal architecture. In this work, we first list and categorize the idiosyncratic behavior of PMEM by performing targeted experiments with our proposed PMIdioBench benchmark suite on a real DCPMM platform. Next, we conduct detailed studies to guide the design of storage systems, considering generic PMEM characteristics.
The first study guides data placement on NUMA systems with PMEM while the second study guides the design of lock-free data structures, for both eADR- and ADR-enabled PMEM systems. Our results are often counter-intuitive and highlight the challenges of system design with PMEM.

SuperMem: Enabling Application-transparent Secure Persistent Memory with Low Overheads
Pengfei Zuo (Huazhong University of Science and Technology & Univ. of California Santa Barbara); Yu Hua (Huazhong University of Science and Technology); Yuan Xie (Univ. of California Santa Barbara)
Speaker: Pengfei Zuo, Huazhong University of Science and Technology & University of California Santa Barbara
Abstract: Non-volatile memory (NVM) is vulnerable to physical-access-based attacks due to its non-volatility. To ensure data security in NVM, counter mode encryption is often used, considering its high security level and low decryption latency. However, counter mode encryption incurs a new persistence problem for crash consistency guarantees due to the requirement of atomically persisting both the data and its counter. To address this problem, existing work requires a large battery backup or complex modifications on both hardware and software layers due to employing a write-back counter cache. The large battery backup is expensive, and software-layer modifications limit the portability of applications from un-encrypted NVM to encrypted NVM. Our paper proposes SuperMem, an application-transparent secure persistent memory that leverages a write-through counter cache to guarantee the atomicity of data and counter writes without the need for a large battery backup and software-layer modifications. To reduce the performance overhead of a baseline write-through counter cache, SuperMem leverages a locality-aware counter write coalescing scheme to reduce the number of write requests by exploiting the spatial locality of counter storage and data writes. Moreover, SuperMem leverages a cross-bank counter storage scheme to efficiently distribute data and counter writes to different banks, thus speeding up writes by exploiting bank parallelism. Experimental results demonstrate that SuperMem improves performance by about 2× compared with an encrypted NVM with a baseline write-through counter cache, and achieves performance comparable to an ideal secure NVM that exhibits the optimal performance of an encrypted NVM.
Speaker bio: Pengfei Zuo is a research scientist at Huawei Inc. He received his Ph.D. degree in Computer Science from Huazhong University of Science and Technology (HUST) in 2019, and was a visiting Ph.D. student at the University of California, Santa Barbara (UCSB) during 2018-2019. He received his B.E. degree in Computer Science from HUST in 2014. He has published 30+ refereed papers in major conferences and journals such as OSDI, MICRO, ASPLOS, USENIX ATC, SoCC, and DAC in the areas of computer systems and architecture, with a focus on non-volatile memory systems, storage systems and techniques, and security. He served as a PC member at ICDCS'21, ICPADS'20, CloudCom'20, and EuroSys'19 (shadow).
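As background for the atomicity requirement the SuperMem abstract above mentions, the sketch below illustrates the basic counter-mode idea in Python. It is a toy only: SHA-256 stands in for the AES-CTR pad, and none of SuperMem's write-through cache, coalescing, or cross-bank schemes are modeled. The point it demonstrates is that decrypting with a stale counter yields garbage, which is why the data line and its counter must persist atomically across a crash.

import hashlib

def pad(line_addr, counter, length):
    # Stand-in for an AES-CTR keystream block derived from (address, counter).
    seed = f"{line_addr}:{counter}".encode()
    return hashlib.sha256(seed).digest()[:length]

def encrypt(plaintext, line_addr, counter):
    p = pad(line_addr, counter, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, p))

decrypt = encrypt                        # XOR with the same pad is its own inverse

line_addr, ctr = 0x1000, 41
data = b"persistent line!"
stored = encrypt(data, line_addr, ctr + 1)   # write encrypted with the bumped counter

assert decrypt(stored, line_addr, ctr + 1) == data   # counter persisted too: recoverable
assert decrypt(stored, line_addr, ctr) != data       # stale counter after a crash: garbage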
Coding for Resistive Random-Access Memory Channels
Guanghui Song (Singapore University of Technology and Design); Kui Cai (Singapore University of Technology and Design); Xingwei Zhong (Singapore University of Technology and Design); Jiang Yu (Singapore University of Technology and Design); Jun Cheng (Doshisha University)
Speaker: Guanghui Song, Singapore University of Technology and Design
Abstract: We propose channel coding techniques to mitigate both the sneak-path interference and the channel noise for resistive random-access memory (ReRAM) channels. The main challenge is that the sneak-path interference within one memory array is data-dependent. We propose an across-array coding scheme, which assigns a codeword to multiple independent memory arrays. Since the coded bits from different arrays experience independent channels, a "diversity" gain can be obtained during decoding, and when the codeword is adequately distributed, the code performs as it would over an independent and identically distributed (i.i.d.) channel without data dependency. We also present a real-time channel estimation scheme and a data shaping technique to improve the decoding performance.
Speaker bio: Guanghui Song received his Ph.D. degree in the Department of Intelligent Information Engineering and Sciences, Doshisha University, Kyoto, Japan, in 2012. He worked as a researcher at Doshisha University and the University of Western Ontario, London, Canada. Currently, he is a postdoctoral researcher at the Singapore University of Technology and Design. His research interests are in the areas of channel coding theory, multi-user coding, and coding for data storage systems.

Lifted Reed-Solomon Codes with Application to Batch Codes
Lukas Holzbaur (Technical University of Munich); Rina Polyanskaya (Institute for Information Transmission Problems); Nikita Polyanskii (Technical University of Munich); Ilya Vorobyev (Skolkovo Institute of Science and Technology)
Abstract: Guo, Kopparty and Sudan have initiated the study of error-correcting codes derived by lifting of affine-invariant codes. Lifted Reed-Solomon (RS) codes are defined as the evaluation of polynomials in a vector space over a field by requiring their restriction to every line in the space to be a codeword of the RS code. In this paper, we investigate lifted RS codes and discuss their application to batch codes, a notion introduced in the context of private information retrieval and load-balancing in distributed storage systems. First, we improve the estimate of the code rate of lifted RS codes for lifting parameter m ≥ 4 and large field size. Second, a new explicit construction of batch codes utilizing lifted RS codes is proposed. For some parameter regimes, our codes have a better trade-off between parameters than previously known batch codes.

A Back-End, CMOS Compatible Ferroelectric Field Effect Transistor for Synaptic Weights
Mattia Halter (IBM Research GmbH, ETH Zurich); Laura Bégon-Lours (IBM Research GmbH); Valeria Bragaglia (IBM Research GmbH); Marilyne Sousa (IBM Research GmbH); Bert Jan Offrein (IBM Research GmbH); Stefan Abel (formerly IBM Research GmbH, currently at Lumiphase); Mathieu Luisier (ETH Zurich); Jean Fompeyrine (formerly IBM Research GmbH, currently at Lumiphase)
Speaker: Mattia Halter, IBM Research GmbH - Zurich Research Laboratory, CH-8803 Rüschlikon, Switzerland; Integrated Systems Laboratory, ETH Zurich, CH-8092 Zurich, Switzerland
Abstract: Neuromorphic computing architectures enable the dense co-location of memory and processing elements within a single circuit.
Their building blocks are non-volatile synaptic elements such as memristors. Key memristor properties include a suitable non-volatile resistance range, continuous linear resistance modulation and symmetric switching. In this work, we demonstrate voltage-controlled, symmetric and analog potentiation and depression of a ferroelectric Hf0.57Zr0.43O2 (HZO) field effect transistor (FeFET) with good linearity. Our FeFET operates with a low writing energy (fJ) and fast programming time (40 ns). Retention measurements have been done over 4-bit depth with low noise (1%) in the tungsten oxide (WOx) read-out channel. By adjusting the channel thickness from 15 nm to 8 nm, the on/off ratio of the FeFET can be engineered from 1% to 200% with an on-resistance ideally >100 kΩ, depending on the channel geometry. The device concept is compatible with a back end of line (BEOL) integration into CMOS processes. It therefore has great potential for the fabrication of high-density, large-scale integrated arrays of artificial analog synapses.
Speaker bio: Mattia Halter is a PhD student at ETH Zurich doing his thesis full-time at the IBM Research Laboratory Zurich. His work focuses specifically on the design and fabrication of ferroelectric memristors for neuromorphic applications, starting with the development of novel materials and finishing by implementing and characterising crossbar arrays in the back end of line. This means he is spending most of his time in the cleanroom. He obtained his background in electrical engineering and nanotechnology from ETH Zurich. When he is not doing research, you will probably find him at home doing nothing, waiting until the pandemic is over. Why he likes California: He spent an exchange year close to Big Sur in 2008-2009 as a high school student. He fondly remembers the great host family, classmates and fine Mexican barbecues.

Separation and Equivalence results for the Crash-stop and Crash-recovery Shared Memory Models
Ohad Ben-Baruch (BGU); Srivatsan Ravi (University of Southern California)
Speaker: Ohad Ben Baruch, Ben Gurion University
Abstract: Linearizability, the traditional correctness condition for concurrent objects, is considered insufficient for the non-volatile shared memory model where processes recover following a crash. For this crash-recovery shared memory model, strict-linearizability is considered appropriate since, unlike linearizability, it ensures operations that crash take effect prior to the crash or not at all. This work formalizes and answers the question of whether an implementation of a data type derived for the crash-stop shared memory model is also strict-linearizable in the crash-recovery model. We present a rigorous study to prove how helping mechanisms, typically employed by non-blocking implementations, are the algorithmic abstraction that delineates linearizability from strict-linearizability. Our first contribution formalizes the crash-recovery model and how explicit process crashes and recovery introduce further dimensionalities over the standard crash-stop shared memory model. We make the following technical contributions: (i) we prove that strict-linearizability is independent of any known help definition; (ii) we present a natural definition of help-freedom to prove that any obstruction-free, linearizable and help-free implementation of a total object type is also strict-linearizable; (iii) we prove that for a large class of object types, a non-blocking strict-linearizable implementation cannot have helping.
Viewed holistically, this work provides the first precise characterization of the intricacies in applying a concurrent implementation designed for the crash-stop (resp. crash-recovery) model to the crash-recovery (resp. crash-stop) model.
Speaker bio: Ohad Ben Baruch completed his PhD in computer science at Ben-Gurion University under the supervision of Prof. Danny Hendler and Prof. Hagit Attiya. His research focuses on shared memory and concurrent computation, more specifically on complexity bounds for concurrent objects in the crash-stop and crash-recovery system models. In his PhD dissertation the notion of Nesting-Safe Recoverable Linearizability (NRL) was proposed, a novel model and correctness condition for the crash-recovery shared memory model which allows for nesting of recoverable objects, together with lower and upper bounds for object implementations satisfying the condition. Ohad is currently working at BeyondMinds as a researcher and algorithm developer.

Toward Faster and More Efficient Training on CPUs Using STT-RAM-based Last Level Cache
Alexander Hankin (Tufts University); Maziar Amiraski (Tufts University); Karthik Sangaiah (Drexel University); Mark Hempstead (Tufts University)
Speaker: Alexander Hankin, Tufts University
Abstract: Artificial intelligence (AI), especially neural network-based AI, has become ubiquitous in modern day computing. However, the training phase required for these networks demands significant computational resources and is the primary bottleneck as the community scales its AI capabilities. While GPUs and AI accelerators have begun to be used to address this problem, many of the industry's AI models are still trained on CPUs and are limited in large part by the memory system. Breakthroughs in NVM research over the past couple of decades have unlocked the potential for replacing on-chip SRAM with an NVM-based alternative. Research into Spin-Torque Transfer RAM (STT-RAM) over the past decade has explored the impact of trading off volatility for improved write latency as part of the trend to bring STT-RAM on-chip. In particular, STT-RAM is an especially attractive replacement for SRAM in the last-level cache due to its density, low leakage, and, most notably, endurance.
Speaker bio: Alexander Hankin is a fourth-year PhD candidate in the ECE department at Tufts University under the advisement of Prof. Mark Hempstead. His research interests are centered around building architecture-level modeling and simulation tools in the areas of emerging non-volatile memories and thermal hotspots.

GBTL+Metall – Adding Persistence to GraphBLAS
Kaushik Velusamy (University of Maryland, Baltimore County); Scott McMillan (Carnegie Mellon University); Keita Iwabuchi (Lawrence Livermore National Laboratory); Roger Pearce (Lawrence Livermore National Laboratory)
Speaker: Kaushik Velusamy, University of Maryland, Baltimore County
Abstract: It is well known that software-hardware co-design is required for attaining high-performance implementations. System software libraries help us in achieving this goal. The Metall persistent memory allocator is one such library. Metall enables large-scale data analytics by leveraging emerging memory technologies. Metall is a persistent memory allocator designed to provide developers with rich C++ interfaces to allocate custom C++ data structures in persistent memory, not just from block storage and byte-addressable persistent memories (NVMe, Optane) but also in DRAM TempFS.
Having a large capacity of persistent memory changes the way we solve problems and leads to algorithmic innovation. In this work, we present GraphBLAS as a real application use case to demonstrate the benefits of the Metall persistent memory allocator. We show an example of how storing and re-attaching graph containers using Metall eliminates the need for graph reconstruction, at a one-time cost of re-attaching to the Metall datastore.
Speaker bio: Kaushik Velusamy is a Ph.D. candidate at the University of Maryland, Baltimore County. His doctoral research focuses on optimizing large-scale graph analytics on high-performance computing systems.

POSEIDON: Safe, Fast and Scalable Persistent Memory Allocator
Wook-Hee Kim (Virginia Tech); Anthony Demeri (Virginia Tech); Madhava Krishnan Ramanathan (Virginia Tech); Jaeho Kim (Gyeongsang National University); Mohannad Ismail (Virginia Tech); Changwoo Min (Virginia Tech)
Speaker: Wook-Hee Kim, Virginia Tech
Abstract: A persistent memory allocator is an essential component of any Non-Volatile Main Memory (NVMM) application. A slow memory allocator can bottleneck the entire application stack, while an insecure memory allocator can render applications inconsistent upon program bugs or system failure. Unlike DRAM-based memory allocators, it is indispensable for an NVMM allocator to guarantee its heap metadata safety from both internal and external errors. An effective NVMM memory allocator should be 1) safe, 2) scalable, and 3) high performing. Unfortunately, none of the existing persistent memory allocators achieve all three requisites. For example, we found that even Intel's de-facto NVMM allocator, libpmemobj, is vulnerable to silent data corruption and persistent memory leaks resulting from a simple heap overflow. In this paper, we propose Poseidon, a safe, fast, and scalable persistent memory allocator. The premise of Poseidon revolves around providing a user application with per-CPU sub-heaps for scalability and high performance, while managing the heap metadata in a segregated fashion and efficiently protecting the metadata using a scalable hardware-based protection scheme, Intel's Memory Protection Keys (MPK). We evaluate Poseidon with a wide array of microbenchmarks and real-world benchmarks. In our evaluation, Poseidon outperforms the state-of-the-art allocators by a significant margin, showing improved scalability and performance, while also guaranteeing heap metadata protection.
Speaker bio: Wook-Hee Kim is a postdoctoral associate at Virginia Tech. His research interests fall into system software for emerging hardware technologies, including persistent memory and Remote Direct Memory Access (RDMA). He is actively working on these areas and has publications on system software for persistent memory. He obtained his B.S. and Ph.D. from Ulsan National Institute of Science and Technology (UNIST) in 2013 and 2019, respectively.

Ribbon: High-Performance Cache Line Flushing for Persistent Memory
Kai Wu (University of California, Merced); Ivy Peng (Lawrence Livermore National Laboratory); Jie Ren (University of California, Merced); Dong Li (University of California, Merced)
Abstract: Cache line flushing (CLF) is a fundamental building block for programming persistent memory (PM). CLF is prevalent in PM-aware workloads to ensure crash consistency. It also imposes high overhead. Extensive works have explored persistency semantics and CLF policies, but few have looked into the CLF mechanism.
This work aims to improve the performance of the CLF mechanism based on the performance characterization of well-established workloads on real PM hardware. We reveal that the performance of CLF is highly sensitive to the concurrency of CLF and the cache line status. We introduce Ribbon, a runtime system that improves the performance of the CLF mechanism through concurrency control and proactive CLF. Ribbon detects CLF bottlenecks caused by oversupplied or insufficient concurrency and adapts accordingly. Ribbon also proactively transforms dirty or nonresident cache lines into the clean resident status to reduce the latency of CLF. Furthermore, we investigate the cause of low dirtiness in flushed cache lines in in-memory database workloads. We provide cache line coalescing as an application-specific solution that achieves up to 33.3% (13.8% on average) improvement. Our evaluation of a variety of workloads in four configurations on PM shows that Ribbon achieves up to 49.8% improvement (14.8% on average) of the overall application performance.

Generative Modeling of NAND Flash Memory Voltage Level
Ziwei Liu (Center of Memory and Recording Research, UC San Diego); Yi Liu (Center of Memory and Recording Research, UC San Diego); Paul H. Siegel (Center of Memory and Recording Research, UC San Diego)
Speaker: Ziwei Liu, Center of Memory and Recording Research, UC San Diego
Abstract: Program and erase cycling (P/E cycling) data is used to characterize flash memory channels and support realistic performance simulation of error-correcting codes (ECCs). However, these applications require a massive amount of data, and collecting the data takes a lot of time. To generate a large amount of NAND flash memory read voltages using a relatively small amount of measured data, we propose a read voltage generator based on a Time-Dependent Generative Moments Matching Network (TD-GMMN). This model can generate authentic read voltage distributions over a range of possible P/E cycles for a specified program level based on known read voltage distributions. Experimental results based on data generated by a mathematical MLC NAND flash memory read voltage generator demonstrate the model's effectiveness.
Speaker bio: Ziwei Liu is an exchange student researcher at the Center of Memory and Recording Research. Her research interest is in the intersection of flash memory, information theory, and machine learning.
CommonCrawl
Change in Entropy of a Solid or Liquid (thread starter: Philip Koeck)
Philip Koeck: What about if we allow for a temperature and volume change in a solid or a liquid? Would the entropy change still depend only on the temperature change, or also on the volume change? For a solid I would think that the volume change doesn't matter, since it doesn't change the "amount of disorder", but for a liquid the volume change should matter.
Chestermiller (in reply to Philip Koeck): For a single-phase pure substance or a constant-composition mixture, the variation in entropy can be determined from $$dS=\frac{C_p}{T}dT+\left(\frac{\partial S}{\partial P}\right)_T dP.$$ It follows from the equation $$dG=-S\,dT+V\,dP$$ that the partial derivative of entropy with respect to pressure is given by $$\left(\frac{\partial S}{\partial P}\right)_T=-\left(\frac{\partial V}{\partial T}\right)_P.$$ For a liquid or solid, the equation of state is $$dV=V(\alpha\,dT-\beta\,dP),$$ where ##\alpha## is the volumetric coefficient of thermal expansion and ##\beta## is the bulk compressibility. So $$\left(\frac{\partial V}{\partial T}\right)_P=\alpha V,$$ and we have $$dS=\frac{C_p}{T}dT-\alpha V\,dP.$$ Because the specific volume and coefficient of thermal expansion of solids and liquids are very small, in virtually all practical situations the second term is negligible.
Philip Koeck (in reply to Chestermiller): I just quickly checked what that would give for an ideal gas (by replacing ##\alpha## and dP from the ideal gas law) and I get dS = n C_V dT/T + n R dV/V, just like it should be. Very nice! I'm wondering a bit about solids versus liquids. For liquids I can understand that entropy changes with volume, since a liquid can arrange itself in more different ways if it has more space. For a solid, however, I don't see that. In a perfect crystal every atom is in its spot no matter how big the distance between atoms is. How can one explain the volume dependence of entropy then?
Chestermiller: Sorry, I'm a continuum mechanics guy, so analyzing it in terms of atoms and molecules is not part of my expertise.
Lord Jestocost: The main reason is anharmonic effects, as the phonons have, for example, frequencies that depend on volume. [PDF] Vibrational Thermodynamics of Materials - Caltech
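For completeness, here is the quick ideal-gas check Philip Koeck refers to, written out (a short verification assuming the ideal-gas relations, with ##C_p## and ##C_V## the total heat capacities so that ##C_p - C_V = nR##):
$$\alpha=\frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P=\frac{1}{T},\qquad \alpha V\,dP=\frac{V}{T}\,dP=\frac{nR}{P}\,dP,$$
$$dS=\frac{C_p}{T}\,dT-\frac{nR}{P}\,dP,\qquad \frac{dP}{P}=\frac{dT}{T}-\frac{dV}{V}\ \ (\text{from } PV=nRT),$$
$$\Rightarrow\quad dS=\frac{C_p-nR}{T}\,dT+nR\,\frac{dV}{V}=\frac{nC_V}{T}\,dT+nR\,\frac{dV}{V}.$$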
CommonCrawl
arXiv:2110.08070v1 [gr-qc]
Title: Renormalized $\rho_{\rm vac}$ without $m^4$ terms
Authors: Cristian Moreno-Pulido, Joan Sola Peracaula
(Submitted on 15 Oct 2021)
Abstract: The cosmological constant term, $\Lambda$, in Einstein's equations has been for three decades a building block of the concordance or standard $\Lambda$CDM model of cosmology. Although the latter is not free of fundamental problems, it provides a good phenomenological description of the overall cosmological observations. However, an interesting improvement in such a phenomenological description, and also a change in the theoretical status of the $\Lambda$ term, occurs upon realizing that the vacuum energy density, $\rho_{\textrm{vac}}$, is actually a "running quantity" in quantum field theory in curved spacetime. Several works have shown that this option can compete with the $\Lambda$CDM with a rigid $\Lambda$ term. The so-called "running vacuum models" (RVM) are characterized indeed by a $\rho_{\textrm{vac}}$ which is evolving with time as a series of even powers of the Hubble parameter and its time derivatives. This form has been motivated by renormalization group arguments in previous works. Here we review a recent detailed computation by the authors of the renormalized energy-momentum tensor of a non-minimally coupled scalar field with the help of adiabatic regularization. The final result is noteworthy: $\rho_{\textrm{vac}}(H)$ takes the precise structure of the RVM, namely a constant term plus a dynamical component $\sim H^2$ (which may be detectable in the present universe), including also higher order effects $\mathcal{O}(H^4)$ which can be of interest during the early stages of the cosmological evolution. Besides, it is most remarkable that such renormalized form of $\rho_{\textrm{vac}}$ does not carry dangerous terms proportional to $m^4$, the quartic powers of the masses of the fields, which are a well-known source of exceedingly large contributions to the vacuum energy density and are directly responsible for fine tuning in the context of the CC problem.
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph)
From: Cristian Moreno Pulido
[v1] Fri, 15 Oct 2021 12:58:12 GMT
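Schematically, the running-vacuum structure described in this abstract is usually written in the RVM literature as (the coefficients $c_0$ and $\nu$ are the conventional RVM parametrization, not values quoted verbatim in the abstract):
$$\rho_{\textrm{vac}}(H)\;\simeq\;\frac{3}{8\pi G}\left(c_0+\nu\,H^{2}\right)+\mathcal{O}(H^{4}),\qquad |\nu|\ll 1,$$
where the $\mathcal{O}(H^{4})$ terms are only relevant in the early universe.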
CommonCrawl
A simplified hard output sphere decoder for large MIMO systems with the use of efficient search center and reduced domain neighborhood study
Youssef Nasser, Sebastien Aubert, Fabienne Nouvel, Karim Y. Kabalan & Hassan A. Artail
An Erratum to this article was published in EURASIP Journal on Wireless Communications and Networking 2015, 2015:251.
Multiple-input multiple-output (MIMO) with a spatial-multiplexing (SM) scheme is a topic of high interest for the next generation of wireless communications systems. At the receiver, neighborhood studies (NS) and lattice reduction (LR)-aided techniques are common solutions in the literature to approach the optimal and computationally complex maximum likelihood (ML) detection. However, the NS and LR solutions might not offer optimal performance for large dimensional systems, such as a large number of antennas and high-order constellations, when they are considered separately. In this paper, we propose a novel equivalent metric dealing with the association of these solutions by introducing a reduced domain neighborhood study. We show that the proposed metric presents a relevant complexity reduction while maintaining near-ML performance. Moreover, the corresponding computational complexity is shown to be independent of the constellation size, but it is quadratic in the number of transmit antennas. For instance, for a 4 × 4 MIMO system with 16-QAM modulation on each layer, the proposed solution is simultaneously near-ML, under both perfect and real channel estimation, and ten times less complex than the classical neighborhood-based K-best solution.
MIMO technology has attracted a lot of attention in the last decade since it can improve link reliability without sacrificing bandwidth efficiency or, conversely, it can improve the bandwidth efficiency without losing link reliability [1]. Recently, the concept of large MIMO systems, i.e., a high number of antennas, has also gained research interest, and it is widely seen as a part of next-generation wireless communication systems [2, 3]. However, the main drawback of MIMO technology is the increased complexity at the receiver side when a non-orthogonal (NO) MIMO scheme with a large number of antennas and/or a large constellation size is implemented [4, 5]. For the detection process, although the performance of the maximum likelihood (ML) detector is optimal, its computational complexity increases exponentially with the number of transmit antennas and with the constellation size. In the literature, different MIMO detection techniques have been proposed. Linear-like detection (LD) [6] and decision-feedback detection (DFD) [7] are the baseline detection algorithms. Among the linear techniques, we distinguish the conventional zero forcing (ZF) [8] and minimum mean square error (MMSE) [8] detectors. Although linear detection approaches are attractive in terms of their computational complexity, they might lead to a non-negligible degradation in terms of performance [9]. Some non-linear detectors have also been introduced. The sphere decoder (SD), one of the most famous MIMO detectors, is based on a tree search and is very popular due to its quasi-optimal performance [10]. However, this performance is reached at the detriment of an additional implementation complexity.
Indeed, the SD achieves quasi-ML performance while its average complexity is shown to be polynomial (roughly cubic) in the constellation size and in the number of transmit antennas over a certain range of signal-to-noise ratio (SNR), while the worst-case complexity is still exponential [11]. From a hardware implementation point of view, the SD algorithm presents two main drawbacks. Firstly, its complexity coefficients can become large when the problem dimension is high, i.e., at high spectral efficiency, with a high number of antennas, and with a high number of users in a multi-user MIMO (MU-MIMO) context. Secondly, the variance of its computation time can also be large, leading to undesirable, highly variable decoding delays. Despite classical optimizations such as the Schnorr-Euchner (SE) enumeration [12], the SD originally presented in [11] offers by nature a sequential tree search phase, which is an additional drawback for implementation. In order to deal with these two aspects, the authors in [13] have proposed a sub-optimal solution denoted as the K-best [13, 14], where K is the number of stored neighbors at a given layer. However, even with a fixed computational complexity and a parallel nature of implementation, some optimizations are required, especially for high-order constellations and a large number of antennas (due to the large K required in this case) [15–18]. Aiming at reducing the neighborhood size (namely K, over all layers), different solutions have been proposed. For instance, the sorted QR decomposition (SQRD)-based dynamic K-best, which leads to the well-known SQRD-based fixed throughput SD (FTSD), is proposed in [16]. Even with these efforts, the neighborhood size still induces a computationally expensive solution for achieving quasi-ML performance. An alternative trend was first presented in the literature by Wuebben et al. in [19]. It consists in adding a pre-processing step, namely the lattice reduction (LR), aiming at applying a classical detection through a better-conditioned channel matrix [19–21]. This solution has been shown to offer the full reception diversity at the expense of an SNR offset in the system performance. This offset increases with large dimensional transmit antenna systems and high-order modulations. Recently, a promising, although complex, association of the K-best and LR solutions has been considered. It provides a convenient performance-complexity tradeoff. The general idea consists in reducing the SNR offset through a neighborhood study, which yields near-ML performance for a reasonable K. The concept was first introduced by the authors of [22]. Later on, their basic solution has been improved by modeling the sphere constraint in a reduced domain or by introducing an efficient symbol enumeration algorithm [23]. However, a major aspect of this combination has not been considered yet. In particular, any SD, including the K-best, may be advantageously applied by considering a better-conditioned channel matrix through the introduction of a Reduced Domain Neighborhood (RDN) study and a judicious search center. In [5], an improved LR technique dealing with the RDN has been proposed in the context of large MIMO systems. It is based on the decomposition of the spanned space of the channel matrix into small subspaces in order to improve the orthogonality of the quantization. In [24], the search center is found through an ant colony optimization and an initial search based on the output of the MMSE detector.
In this paper, we adopt the K-best solution with fixed complexity as the basic solution of the SD. We propose to reduce the neighborhood size through an efficient pre-processing step which allows the SD process to apply a neighborhood study in a modified constellation domain. Then, using the modified domain, we propose a novel ML equation with an efficient search center. We show that the proposed metric presents a large complexity reduction while maintaining near-ML performance. Moreover, the corresponding complexity is shown to be independent of the constellation size and polynomial in the number of transmit antennas. In particular, for a 4 × 4 MIMO system with 16-QAM modulation on each layer, the proposed association presents near-ML performance while it is ten times less complex than the classical K-best solution. We note that, because the complexity is fixed with such a detector, the proposed optimizations guarantee a performance gain for a given neighborhood size, or a reduction of the neighborhood size for a given bit error rate (BER) target. The contributions of this paper are summarized as follows:
- A promising association of the K-best and LR solutions is proposed.
- The SD neighborhood study is modified by applying a pre-processing step. This is accompanied by a new and efficient search center and MMSE detector.
- The equivalent expression of the lattice reduction-aided (LRA) MMSE-centered SD, which corresponds to an efficient LRA-MMSE-successive interference canceller (SIC) Babai point, is proposed to improve the performance or reduce the complexity of the detector.
- The (S)QRD is introduced in the formulas. It provides, to the best of the authors' knowledge, the best known pseudo-linear hard detector as a Babai point, for a large number of antennas as well as for high-order modulations. The proposed expression is robust by nature to any search center and constellation order and offers close-to-optimal performance with medium K values. This applies for both perfect and real channel estimation.
- The proposed solution offers a computational complexity that is independent of the constellation order. Therefore, it outperforms classical SD techniques for a reasonable complexity in the case of high-order constellations. We show, for example, that a number of neighbors K = 2 is sufficient for a 4 × 4 MIMO system with 16-QAM modulation on each layer, and that the loss with respect to the ML solution is less than 0.5 dB for 64 × 64 and 128 × 128 MIMO systems.
- The proposed solution offers a computational complexity that is quasi-constant for a large number of antennas, showing the evidence of its importance.
This paper is organized as follows. Section 2 presents the problem statement of the SD. In Section 3, the different existing solutions are described and analyzed. In Section 4, we propose our generalized solution based on LR with the use of an efficient search center and reduced domain neighborhood. In Section 5, the performance of the presented detectors is provided, compared, and discussed. In Section 6, we consider the computational complexity of the proposed solution in comparison with some reference detection techniques. Conclusions are drawn in Section 7.
Sphere decoder detector
Let us introduce an n_R × n_T MIMO system model with n_T transmit and n_R receive antennas.
Then, the received symbol vector can be written as
$$ \boldsymbol{y}=\boldsymbol{H}\boldsymbol{x}+\boldsymbol{n}, $$
where H represents the (n_R × n_T) complex channel matrix assumed to be perfectly known at the receiver, x is the transmit symbol vector of dimensions (n_T × 1) where each entry is independently withdrawn from a constellation set ξ, and n is the additive white Gaussian noise of dimensions (n_R × 1) and of variance σ^2/2 per dimension. The basic idea of the SD, to reach the optimal estimate \( {\widehat{x}}_{\mathrm{ML}} \) (given by the ML detector) while avoiding an exhaustive search, is to observe the lattice points that lie inside a sphere of radius d. The SD solution starts from the ML equation \( {\widehat{\boldsymbol{x}}}_{\mathrm{ML}}=\underset{\boldsymbol{x}\in {\xi}^{n_{\mathrm{T}}}}{\mathrm{argmin}}\parallel \boldsymbol{y}-\boldsymbol{H}\boldsymbol{x}{\parallel}^2 \) and reads
$$ {\widehat{x}}_{\mathrm{SD}}=\underset{\boldsymbol{x}\in {\xi}^{n_{\mathrm{T}}}}{\mathrm{argmin}}\parallel {\boldsymbol{Q}}^H\boldsymbol{y}-\boldsymbol{R}\boldsymbol{x}{\parallel}^2\le {d}^2 $$
where H = QR, with the classical QRD definitions. The classical SD formula in (2) is centered on the received signal y. From now on, this detection will be denoted as the naïve SD. In the case of a depth-first search algorithm [13], the first solution given by this algorithm is defined as the Babai point [25, 26]. In order to write it, the classical SD expression may be re-arranged, leading to an exact formulation through an efficient partial Euclidean distance (PED) expression and early pruned nodes [27]. In the literature, the SD principle leads to numerous implementation problems. In particular, it is a non-deterministic polynomial-time hard (NP-hard) problem [28]. This aspect has been partially solved through the introduction of an efficient solution that lies in a fixed neighborhood size algorithm (FNSA), commonly known as the K-best solution. However, this solution makes the detector sub-optimal since it leads to a performance loss compared to the ML detector. This is particularly true in the case of an inappropriate choice of K with respect to the MIMO channel condition number and in the case of an inappropriate choice of d in (2). Indeed, an inappropriate choice of d could lead to the ML solution being excluded from the search tree. On the other hand, although a neighborhood study remains the one and only solution that achieves near-ML performance, it may require a large neighborhood scan, which corresponds to a dramatic increase of the computational complexity. This increase in complexity becomes prohibitive for high-order modulations.
Lattice reduction
Through the aforementioned considerations and by using the lattice definition in [26], the system model given in (1) can be rewritten as
$$ \boldsymbol{y}=\tilde{\boldsymbol{H}}\boldsymbol{z}+\boldsymbol{n}, $$
where \( \tilde{\boldsymbol{H}}=\boldsymbol{H}\boldsymbol{T} \) and z = T^{-1} x. The n_T × n_T complex matrix T (with |det{T}| = 1) is unimodular, i.e., its entries belong to the set of complex integers, which reads ℤ_ℂ = ℤ + jℤ, with j^2 = -1. The key idea of any LR-aided (LRA) detection scheme is to understand that the finite set of transmitted symbols \( {\xi}^{n_T} \) can be interpreted as a de-normalized, shifted, then scaled version of an infinite subset of the complex integers \( {\mathbf{\mathbb{Z}}}_{\mathbb{C}}^{n_T} \), according to the relations offered in [29].
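To make the QRD-based tree metric of (2) and the fixed-complexity K-best (FNSA) idea concrete, the following is a minimal numpy sketch. It is an illustrative toy, not the authors' implementation: the sphere radius test of the naïve SD is omitted and the search simply keeps the K best partial metrics at each layer.

import numpy as np

def kbest_detect(y, H, constellation, K):
    """K-best estimate of x for y = Hx + n, searching layers n_T down to 1."""
    nT = H.shape[1]
    Q, R = np.linalg.qr(H)
    yt = Q.conj().T @ y
    # Each candidate: (partial symbol list in reverse layer order, accumulated metric).
    candidates = [([], 0.0)]
    for layer in range(nT - 1, -1, -1):
        expanded = []
        for partial, metric in candidates:
            x_below = np.array(partial[::-1])          # symbols already fixed (layers > layer)
            for s in constellation:
                interf = R[layer, layer + 1:] @ x_below if len(partial) else 0.0
                ped = abs(yt[layer] - R[layer, layer] * s - interf) ** 2
                expanded.append((partial + [s], metric + ped))
        candidates = sorted(expanded, key=lambda c: c[1])[:K]   # keep the K best nodes
    best, _ = candidates[0]
    return np.array(best[::-1])

# Example: 2x2 MIMO, QPSK on each layer, K = 2.
rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = qpsk[rng.integers(0, 4, 2)]
y = H @ x + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
print("detected:", np.round(kbest_detect(y, H, qpsk, K=2), 3))
print("sent:    ", np.round(x, 3))

With K equal to the full constellation size per layer the search degenerates into an exhaustive ML search; a small K trades optimality for the fixed, parallelizable complexity discussed above.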
To carry out this reduction, various reduction algorithms have been proposed [19, 30–32]. In the following, we focus on the well-known Lenstra-Lenstra-Lovász (LLL) algorithm due to considerations presented in [30, 33]. The LLL algorithm is a local approach [34] that transforms the channel matrix into an LLL-reduced basis that satisfies both the orthogonality and norm reduction conditions [31]. While it has been shown in [33] that the QRD outputs of the channel matrix are a possible starting point for the LLL, it has been subsequently introduced that the SQRD provides a better starting point [34]. In particular, it leads to a significant reduction of its computational complexity [35]. That is, the detection process in (3) is performed on z instead of x through the better-conditioned matrix \( \tilde{\boldsymbol{H}} \). Wuebben et al. [19] proposed a full description of some reference solutions, namely the LRA-ZF and LRA-ZF-SIC without noise power consideration, and the LRA-MMSE, LRA-MMSE extended, and LRA-MMSE-SIC. LRA detectors constitute efficient detectors in the sense of the high quality of their hard outputs. Indeed, they offer a low overall computational complexity while the ML diversity is reached within a constant offset. However, some important drawbacks exist. In particular, the aforementioned SNR offset is important in the case of high-order modulations and of a large number of antennas. This issue is expected to be bypassed through an additional neighborhood study.
Lattice reduction-aided sphere decoder
Contrary to the LRA-(O)SIC receivers, the application of the LR preprocessing step followed by any SD detector is not straightforward. The main problem lies in the consideration of the possible transmit symbol vectors in the reduced constellation since, unfortunately, the set of all possible transmit symbol vectors cannot be predetermined. The reason is that the solution does not only depend on the employed constellation but also on the T^{-1} matrix of (3). Hence, the number of children in the search tree and their values are not known in advance. A brute-force solution is then to determine the set of all possible transmit vectors in the reduced constellation, starting from the set of all possible transmit vectors in the original constellation and switching to the reduced domain thanks to the T^{-1} matrix.
Detection process in the original domain neighborhood
Zero forcing-centered sphere decoder with original domain neighborhood study detection process
In order to deal with the detection process, we first introduce the sphere center x_C search algorithm. It concerns any signal of the form ∥x_C − x∥^2 ≤ d^2, where x is a possible signal. Based on this search algorithm, different possible sphere centers could be introduced. Using a ZF detector, the received symbols given in (2) are then estimated through
$$ {\widehat{\boldsymbol{x}}}_{\mathrm{ZF}\hbox{-} \mathrm{SIC}}=\underset{\boldsymbol{x}\in {\xi}^{n_{\mathrm{T}}}}{\mathrm{argmin}}\parallel \boldsymbol{R}{\boldsymbol{e}}_{\mathrm{ZF}}{\parallel}^2 $$
where e_ZF = x_ZF − x and x_ZF = (H^H H)^{-1} H^H y. Equation (4) clearly shows that the naïve SD is unconstrained ZF-centered. It implicitly corresponds to a ZF-SIC solution with an Original Domain Neighborhood (ODN) study at each layer, where the layers correspond to the spatially multiplexed data streams. It can be noticed that, in the case of a large ODN study, the ML performance is achieved since the computed metrics are exactly the ML metrics.
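As an illustration of the ZF-centered formulation in (4), here is a short numpy sketch (an assumption-laden toy: hard slicing to the nearest constellation point and no layer ordering) of the unconstrained ZF center and of the ZF-SIC Babai point obtained by quantized back-substitution on R, i.e., the first leaf a ZF-centered SD would visit.

import numpy as np

def slicer(v, constellation):
    # Quantize a scalar estimate to the nearest constellation point.
    return constellation[np.argmin(np.abs(constellation - v))]

def zf_center(y, H):
    # Unconstrained ZF estimate x_ZF = (H^H H)^{-1} H^H y.
    return np.linalg.solve(H.conj().T @ H, H.conj().T @ y)

def zf_sic_babai(y, H, constellation):
    nT = H.shape[1]
    Q, R = np.linalg.qr(H)
    yt = Q.conj().T @ y
    x_hat = np.zeros(nT, dtype=complex)
    for i in range(nT - 1, -1, -1):                  # detect from the last layer upwards
        resid = yt[i] - R[i, i + 1:] @ x_hat[i + 1:]
        x_hat[i] = slicer(resid / R[i, i], constellation)
    return x_hat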
However, this occurs at the detriment of a large neighborhood study and subsequently a large computational complexity. Minimum mean square error-centered sphere decoder with original domain neighborhood study detection process: equivalent formula In this section, we introduce the minimum mean square error successive interference cancellation (MMSE-SIC), a closer-to-ML Babai point than the ZF-SIC. For the sake of clearness with definitions, we firstly give a general definition of the equivalence between two ML metrics. Definition Two ML equations are equivalent if the lattice point argument outputs of the minimum distance are the same, even in the case of different metrics. Two ML equations are equivalent iff: $$ \underset{\boldsymbol{x}\in {\xi}^{n_{\mathrm{T}}}}{\mathrm{argmin}}\left\{\parallel \boldsymbol{y}-\boldsymbol{H}\boldsymbol{x}{\parallel}^2\right\} = \underset{\boldsymbol{x}\in {\xi}^{n_{\mathrm{T}}}}{\mathrm{argmin}}\left\{\parallel \boldsymbol{y}-\boldsymbol{H}\boldsymbol{x}{\parallel}^2+c\right\} $$ where c is a constant. Using (5), Cui et al. [36] proposed a general equivalent minimization problem given by $$ {\widehat{\boldsymbol{x}}}_{\mathrm{ML}}=\underset{\boldsymbol{x}\in {\xi}^{n_{\mathrm{T}}}}{\mathrm{argmin}}\left\{\parallel \boldsymbol{y}-\boldsymbol{H}\boldsymbol{x}{\parallel}^2+\alpha {\boldsymbol{x}}^H\boldsymbol{x}\right\} $$ where the signals x have to be of constant modulus, i.e., x H x is a constant. This assumption is respected in the case of quadrature phase-shift keying (QPSK) modulations, but it is not directly applicable to 16-QAM and 64-QAM modulations. However, this assumption is not limiting in practice since a QAM constellation can be considered as a linear sum of QPSK points [36]. In Appendix 1, we discuss the constant modulus constraint on the signal x. The authors of [37] proposed to apply this solution to the FNSA detection technique of the unconstrained MMSE center, leading to a MMSE-SIC procedure with an ODN study at each layer [37]. In this case, the equivalent ML equation reads $$ {\widehat{\boldsymbol{x}}}_{\mathrm{MMSE}\hbox{-} \mathrm{SIC}}=\underset{\boldsymbol{x}\in {\xi}^{n_{\mathrm{T}}}}{\mathrm{argmin}}{\left({\boldsymbol{x}}_{\mathrm{MMSE}}-\boldsymbol{x}\right)}^H\left({\boldsymbol{H}}^H\boldsymbol{H}+{\sigma}^2\boldsymbol{I}\right)\left({\boldsymbol{x}}_{\mathrm{MMSE}}-\boldsymbol{x}\right) $$ Through the use of the Cholesky factorization (CF) of H H H + σ 2 I = U H U in the MMSE case (H H H = U H U in the ZF case), the ML expression equivalently rewrites, using the proof in Appendix 2, as $$ {\widehat{\boldsymbol{x}}}_{\mathrm{SIC}}=\underset{\boldsymbol{x}\in {\xi}^{n_{\mathrm{T}}}}{\mathrm{argmin}}\left\{\parallel \boldsymbol{U}\left(\tilde{\boldsymbol{x}}-\boldsymbol{x}\right){\parallel}^2\right\} $$ where U is upper triangular with real diagonal elements and \( \tilde{x} \) is any (ZF or MMSE) unconstrained linear estimate. Proposed detection process in the reduced domain neighborhood Due to the implementation drawbacks, the optimal SD has been proposed to be replaced by a sub-optimal FNSA. Hassibi et al. have discussed and shown in [11] that the detector performance is impacted by the noise power and the channel condition number. Hence, the presence of a well-conditioned channel could highly reduce the neighborhood. This means that realizing a LR step followed by a neighborhood study is a very interesting solution in a good-conditioned channel matrix. 
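The following is a hedged sketch of the equivalent MMSE-SIC (Babai point) of Eqs. (7)-(8): factor $H^HH+\sigma^2 I = U^HU$ by Cholesky and slice layer by layer. Function and variable names are illustrative assumptions, and no layer ordering (OSIC) or neighborhood study is included.

```python
import numpy as np

def slicer(v, constellation):
    """Hard decision Q_xi{.}: nearest point of the finite constellation."""
    return constellation[np.argmin(np.abs(constellation - v))]

def mmse_sic(H, y, sigma, constellation):
    n_t = H.shape[1]
    G = H.conj().T @ H + sigma**2 * np.eye(n_t)
    U = np.linalg.cholesky(G).conj().T            # G = U^H U, U upper triangular, real diagonal
    x_mmse = np.linalg.solve(G, H.conj().T @ y)   # unconstrained MMSE estimate
    x_hat = np.zeros(n_t, dtype=complex)
    for k in range(n_t - 1, -1, -1):              # detect the last layer first
        # minimize |U[k,k](x~_k - x_k) + sum_{j>k} U[k,j](x~_j - x^_j)| over x_k
        center = x_mmse[k] + U[k, k + 1:] @ (x_mmse[k + 1:] - x_hat[k + 1:]) / U[k, k]
        x_hat[k] = slicer(center, constellation)
    return x_hat

# toy usage with a random 4x4 Rayleigh channel and QPSK
rng = np.random.default_rng(4)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = rng.choice(qpsk, size=4)
y = H @ x + 0.1 * (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
print(np.allclose(mmse_sic(H, y, 0.1, qpsk), x))   # typically True at this noise level
```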
Accordingly, our proposed combined solution will be detailed in the next subsections. All existing solutions rely on the utilization of the efficient CF pre-processing step. However, these solutions are only functional in the case of a factorized formulation form. Although it is the case in our context, most of the advanced studies have been provided with the applicable QRD. In particular, the advantageous SIC performance optimizations such as ordering according to the corresponding decreasing SNR (from n T to 1) in the ZF-SQRD case and SINR in the MMSE-SQRD case have been proposed in [33]. Moreover, a complexity reduction of the LLL-based LR algorithm has been proposed by the same authors in [33]. In our work, we propose to modify the classical detectors by introducing the QRD instead of the CF, and subsequently of the SQRD, in the (LRA-)MMSE-(O)SIC cases. The MMSE criterion is introduced through the consideration of an extended system model [27], by introducing the (n R + n T) ‐ by ‐ n T matrix H ext and the (n R + n T) vector y ext such as $$ {\boldsymbol{H}}_{\mathrm{ext}}=\left[\begin{array}{c}\hfill \boldsymbol{H}\hfill \\ {}\hfill \sigma \boldsymbol{I}\hfill \end{array}\right]\;\mathrm{and}\;{\boldsymbol{y}}_{\mathrm{ext}}=\left[\begin{array}{c}\hfill \boldsymbol{y}\hfill \\ {}\hfill 0\hfill \end{array}\right]. $$ In this way, the pre-processing step is similar to the ZF-SQRD and the detection procedure equals that of LRA-ZF-SIC. The SQRD interest lies in the ordering of the detection symbols as a function of their S(I)NR, and consequently, it limits the error propagation in SIC procedures. Indeed, it has been shown by Wübben et al. [19] that the optimum order offers a performance improvement even if the ML diversity is not reached. On the other hand, it was shown that once the ML diversity is achieved through a LRA technique, the performance may be significantly improved with this solution [19]. Thus, The LRA-MMSE-OSIC corresponds, to the best of the authors' knowledge, to the best pseudo-linear detector proposed in the literature, in particular in the case of 4 × 4 MIMO systems with QPSK modulations on each layer [19]. For higher order constellations or larger number of antennas, it may be shown that our proposed solution offers convenient hard-decision performance with a highly reduced complexity. In order to deal with these statements, we introduce the reduced domain neighborhood by using the following notations: \( {Q}_{\xi^{n_{\mathrm{T}}}}\left\{.\right\} \) is the quantification operator in the original domain constellation, \( {Q}_{{\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}}}\left\{.\right\} \) is the quantification operator in the reduced domain constellation, a is the power normalization and scaling coefficient (i.e., \( 2/\sqrt{2},\ 2/\sqrt{10},\ \mathrm{and}\ 2/\sqrt{42} \) for QPSK, 16-QAM, and 64-QAM constellations, respectively) \( \boldsymbol{d}=\frac{1}{2}{\boldsymbol{T}}^{-1}{\left[\begin{array}{ccc}\hfill 1+j\hfill & \hfill \dots \hfill & \hfill 1+j\hfill \end{array}\right]}^T \) is a complex displacement vector. The classical LRA-FNSA is implicitly unconstrained LRA-ZF-centered, which leads to a LRA-ZF-SIC procedure with a RDN study at each layer. 
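To illustrate the pre-processing just described, the sketch below applies a sorted QR decomposition in the spirit of [19, 33] to the extended model of Eq. (9). The greedy rule (weakest remaining column first, so that it is detected last) is the usual SQRD heuristic; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def sqrd(A):
    """Sorted QRD: A[:, perm] = Q @ R, columns picked so that low-S(I)NR layers
    come first (detected last in the SIC) and reliable layers are detected first."""
    m, n = A.shape
    Q = A.astype(complex).copy()
    R = np.zeros((n, n), dtype=complex)
    perm = np.arange(n)
    for i in range(n):
        k = i + np.argmin(np.linalg.norm(Q[:, i:], axis=0))   # weakest remaining column
        Q[:, [i, k]], R[:, [i, k]], perm[[i, k]] = Q[:, [k, i]], R[:, [k, i]], perm[[k, i]]
        R[i, i] = np.linalg.norm(Q[:, i])
        Q[:, i] /= R[i, i]
        for j in range(i + 1, n):                              # modified Gram-Schmidt step
            R[i, j] = Q[:, i].conj() @ Q[:, j]
            Q[:, j] -= R[i, j] * Q[:, i]
    return Q, R, perm

rng = np.random.default_rng(5)
n_t = n_r = 4
sigma = 0.3
H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
H_ext = np.vstack([H, sigma * np.eye(n_t)])        # Eq. (9): MMSE criterion via the extension
Q_ext, R_ext, perm = sqrd(H_ext)
print(np.allclose(Q_ext @ R_ext, H_ext[:, perm]))  # True
print(perm, np.abs(np.diag(R_ext)))                # |R[i,i]| tends to grow with i
```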
The exact formula has not been clearly provided but is implicitly used by any LRA-FNSA [21] and may even be considered as an incremental extension of (4): $$ {\widehat{\boldsymbol{z}}}_{\mathrm{LRA}\hbox{-} \mathrm{Z}\mathrm{F}\hbox{-} \mathrm{SIC}}=\underset{\boldsymbol{z}\in {\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}}}{\mathrm{argmin}}\parallel \tilde{\boldsymbol{R}}{\boldsymbol{e}}_{\mathrm{LRA}\hbox{-} \mathrm{Z}\mathrm{F}}{\parallel}^2 $$ where \( \tilde{\boldsymbol{R}} \) is the LLL-based LR algorithm output, e LRA ‐ ZF = z LRA ‐ ZF − z, and \( {\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}} \) is the n T-dimensional infinite set of complex integers. Lattice reduction-aided minimum mean square error-centered sphere decoder with reduced domain neighborhood study detection process To the best of the author's knowledge, no convincing formula has been proposed until now. Even if Jalden et al. [38] proposed a LRA-MMSE-centered solution, the introduced metrics are not equivalent to the ML expression. The solution of [38] is given by $$ {\widehat{\boldsymbol{z}}}_{\alpha, \kern0.5em \mathrm{ML}}=\underset{\boldsymbol{z}\in {\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}}}{\mathrm{argmin}}\parallel {\tilde{\boldsymbol{R}}}^{-1}\ {\boldsymbol{R}}^{-1\dagger }\ {\boldsymbol{H}}^{\dagger}\boldsymbol{y}-\boldsymbol{z}{\parallel}^2=\underset{\boldsymbol{z}\in {\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}}}{\mathrm{argmin}}\parallel {\boldsymbol{z}}_{\mathrm{LRA}\hbox{-} \mathrm{MMSE}}-\boldsymbol{z}{\parallel}^2 $$ The corresponding detector is a sub-optimal solution that consists in a RDN study around the unconstrained LRA-MMSE solution, obtained through QR decomposition. This solution's output is the constrained LRA-MMSE detection plus a list of solutions in the neighborhood. The latter is generated according to a non-equivalent metric, which would be subsequently re-ordered according to the exact ML metric. However, the list is not generated according to the correct distance minimization criterion and would not lead to a near-ML solution. Consequently, the proposed detector does not offer an acceptable uncoded BER performance in the sense that it would not lead to a near-ML solution. In particular, the ML performance is not reached in the case of a large neighborhood study. An efficient solution is derived from (11) and consists in an unconstrained LRA-MMSE center which leads to a LRA-MMSE-SIC procedure with a RDN study at each layer. The equivalent ML equation reads $$ {\widehat{\boldsymbol{z}}}_{\mathrm{LRA}\hbox{-} \mathrm{SIC}}=\underset{\boldsymbol{z}\in {\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}}}{\mathrm{argmin}}\parallel \tilde{\boldsymbol{U}}\left(\tilde{\boldsymbol{z}}-\boldsymbol{z}\right){\parallel}^2, $$ where \( {\tilde{\boldsymbol{H}}}^H\tilde{\boldsymbol{H}}+{\sigma}^2{\boldsymbol{T}}^H\boldsymbol{T}={\tilde{\boldsymbol{U}}}^H\tilde{\boldsymbol{U}} \) in the MMSE case (\( {\tilde{\boldsymbol{H}}}^H\tilde{\boldsymbol{H}}={\tilde{\boldsymbol{U}}}^H\tilde{\boldsymbol{U}} \) in the ZF case) and by noting that Ũ is upper triangular with real diagonal elements and \( \tilde{\boldsymbol{z}} \) is any LRA (ZF or MMSE) unconstrained linear estimate. The proof of this detector formula is given in Appendix 3. The formula introduced in (12) offers an equivalent metric to the MMSE one introduced in (11), which has been shown to be near-ML performance. 
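A quick numerical check of the equivalence claimed for Eq. (12) is given below: over constant-modulus symbols, $\parallel y-\tilde{H}z\parallel^2$ and $\parallel \tilde{U}(\tilde{z}-z)\parallel^2$ differ only by a constant, so they rank candidates identically. For the check any unimodular $T$ works (a hand-picked one is used here to keep the example self-contained); in practice $T$ comes from the LLL step. Sizes and seed are arbitrary.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 2
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
sigma = 0.4

H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
T = np.array([[1, 1 + 1j], [0, 1]], dtype=complex)      # hand-picked unimodular matrix
T_inv = np.linalg.inv(T)
H_t = H @ T                                             # reduced-basis channel H~ = H T

x_true = rng.choice(qpsk, size=n)
y = H @ x_true + sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

G_t = H_t.conj().T @ H_t + sigma**2 * T.conj().T @ T    # G~ = H~^H H~ + sigma^2 T^H T
U_t = np.linalg.cholesky(G_t).conj().T                  # G~ = U~^H U~
z_mmse = np.linalg.solve(G_t, H_t.conj().T @ y)         # unconstrained LRA-MMSE estimate

metrics_direct, metrics_equiv = [], []
for cand in itertools.product(qpsk, repeat=n):          # constant-modulus candidates
    x = np.array(cand)
    z = T_inv @ x                                       # candidate in the reduced domain
    metrics_direct.append(np.linalg.norm(y - H_t @ z) ** 2)
    metrics_equiv.append(np.linalg.norm(U_t @ (z_mmse - z)) ** 2)

d = np.array(metrics_direct) - np.array(metrics_equiv)
print(np.ptp(d))                                        # ~0: the offset is constant
print(np.argmin(metrics_direct) == np.argmin(metrics_equiv))   # True: same decision
```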
The difference, and in particular the interest in the LRA case in (12), relies on the neighborhood study nature. In the case of a RDN study, the equivalent channel matrix \( \tilde{\boldsymbol{H}} \) is considered and is remembered to be only roughly, and not exactly, orthogonal. Consequently, the detection, layer by layer, of the symbol vector x does not exactly correspond to its joint detection since the mutual influence of the transformed z signal is still present. This discussion not only exhibits the interest of SD-like techniques to still improve such a detector performance but also puts a big challenge to achieve the ML performance. The general principle of RDN LRA-MMSE-OSIC-centered solution key points is depicted as a block diagram in Fig. 1. The detailed block diagram description of the proposed solution is addressed in Fig. 2. Block diagram of any LRA procedure Block diagram of any RDN LRA-SIC FNSA procedure In Fig. 2, the mapping of any estimate (or list of estimates) from the reduced domain ẑ to the original domain \( \tilde{\boldsymbol{x}} \) is processed through the T matrix multiplication (see Equation (3)). The additional quantification step aims at removing duplicate symbol vector outputs in the case of a list of solutions. For the sake of simplicity, let us consider any LRA-SIC procedure with no neighborhood study. The search center is updated at each layer as follows. By considering the k-th layer and with the knowledge of the \( {\widehat{\boldsymbol{z}}}_{k+1:{n}_{\mathrm{T}}} \) estimates at previous layers, the ẑ k unconstrained Babai point can be provided. Then, it has to be de-normalized and shifted to make it belong to \( {\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}} \). After quantization, and de-shifting and normalization, the ẑ k estimate at the k-th layer is obtained such as the next (k − 1)-th layer can be considered, until the whole symbol vector is detected. As previously introduced, the neighborhood generation is a problematic step due to the infiniteness and non-regular natures of the constellations in the reduced domain. This point is transparent with classical detectors such as LD and DFD, thanks to the straightforward quantification step in the reduced domain [39]. However, the issue of infinite lattices, addressed through a sphere constraint, appears when working with the classical considerations. It presents a performance loss or a NP-hard complexity solution. Hence, our proposed solution relies on a SE enumeration. Starting from the LRA-SIC principle, a neighborhood is considered at each layer and leads to the RDN LRA-SIC FNSA principle. 
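A hedged sketch of the plain LRA-ZF-SIC (Babai point) just described is given below: SIC in the reduced domain with the de-normalize / shift / quantize / de-shift / re-normalize step at each layer. The parameter `a` is the scaling coefficient of the paper (2/√2 for QPSK, 2/√10 for 16-QAM, 2/√42 for 64-QAM) and `T` the unimodular matrix of the LR step; the function name and the exact arrangement of the shift/scale operations are my reconstruction of the step described above, not the authors' code.

```python
import numpy as np

def cround(v):
    """Q_{Z_C}{.}: componentwise rounding to the Gaussian integers."""
    return np.round(v.real) + 1j * np.round(v.imag)

def lra_zf_sic(y, H, T, a):
    n_t = H.shape[1]
    T_inv = np.linalg.inv(T)
    d = 0.5 * T_inv @ ((1 + 1j) * np.ones(n_t))    # displacement vector of the paper
    H_tilde = H @ T
    Q, R = np.linalg.qr(H_tilde)
    z_tilde = np.linalg.solve(R, Q.conj().T @ y)   # unconstrained LRA-ZF estimate
    z_hat = np.zeros(n_t, dtype=complex)
    for k in range(n_t - 1, -1, -1):
        # Babai/SIC center for layer k, given the layers already detected
        c = z_tilde[k] + R[k, k + 1:] @ (z_tilde[k + 1:] - z_hat[k + 1:]) / R[k, k]
        z_hat[k] = a * (cround(c / a - d[k]) + d[k])   # constrain to the shifted, scaled Z_C
    return T @ z_hat                               # map back to the original domain

# e.g. x_hat = lra_zf_sic(y, H, T, a=2/np.sqrt(2)) with T from the LLL sketch; since the
# result may fall outside the finite constellation, a final quantization Q_xi{.} to the
# legitimate constellation points is still applied, as described in the text.
```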
In particular and due to the implementation constraints, the RDN generation is processed for bounded number of N possibilities and in a SE fashion, namely with ordered PEDs according to an increasing distance from \( {\tilde{\boldsymbol{z}}}_k \) at each layer as follows: $$ {\boldsymbol{z}}_k={Q}_{{\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}}}\left\{{\tilde{\boldsymbol{z}}}_k\right\},\ {Q}_{{\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}}}\left\{{\tilde{\boldsymbol{z}}}_k\right\}+1,\ {Q}_{{\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}}}\left\{{\tilde{\boldsymbol{z}}}_k\right\}+j,\ {Q}_{{\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}}}\left\{{\tilde{\boldsymbol{z}}}_k\right\},\ {Q}_{{\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}}}\left\{{\tilde{\boldsymbol{z}}}_k\right\}-1,\ {Q}_{{\mathrm{\mathbb{Z}}}_{\mathrm{\mathbb{C}}}^{n_{\mathrm{T}}}}\left\{{\tilde{\boldsymbol{z}}}_k\right\}-j, \dots $$ The SE strategy aims at finding the correct decision early, leading to a safe early termination criterion, which is not considered here for the sake of readability in performance comparison. Also, all the corresponding PEDs are computed and then ordered. The K-best solutions, namely with the lowest PED, in the reduced domain are stored (C ẑ ) similarly to their corresponding cumulative Euclidean distances (CED) (D tot). The whole procedure is depicted in Fig. 2. By adding the pre-processing steps, i.e., the SQRD-based then LLL-based LR blocks, and the computation of a close-to-ML unconstrained estimate (although linear) such as LRA-MMSE extended, a complete description of the detection may be obtained. Figure 3 shows the detailed block diagram of the complete proposed solution. The SQRD block offers an efficient layer re-ordering [19] that lies on the noise power. The latter is taken into account in the rest of the detector through the T matrix. Block diagram of the RDN LRA-MMSE-OSIC FNSA As a final step of the detector and in the case of a RDN-based SD, the list of possible symbols output has to be re-ordered according to the ML metrics in the original domain and duplicate solutions are removed. It is due to the presence of noise that makes some candidates to be mapped on non-legitimate constellation points in the reduced constellation, leading to non-acceptable points in the original constellation. The symbol vector associated to the minimal metric becomes the hard decision output of the detector and offers a near-ML solution. The proposed algorithm is described in detail in Appendix 4. The reader may refer to this appendix for more details. In this section, we present and compare the system performance of the different techniques previously presented, and we compare them with our proposed solution. For clearness target, we summarize the detection metrics for each solution in Table 1. Table 1 ODN naïve (O)SIC FNSA, ODN ZF-(O)SIC FNSA, ODN MMSE-(O)SIC FNSA, RDN LRA-ZF-(O)SIC FNSA, RDN LRA-MMSE-(O)SIC FNSA, and ML formulas We should note that the RDN LRA-MMSE-OSIC FNSA, to which this paper relates, is particularly efficient in the case of rank-deficient MIMO systems, i.e., spatially correlated antenna systems, for high-order modulation which are considered points of the LTE-A norm and for large number of antennas as in the future generation of cellular systems (beyond 4G networks). 
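The snippet below is an illustrative version of the SE-ordered reduced-domain neighborhood generation written above: it returns the N Gaussian integers closest to the layer center, sorted by increasing distance. The helper name and the way the square of offsets is scanned are assumptions made for the sketch.

```python
import numpy as np

def rdn_candidates(center, n_cand):
    base = np.round(center.real) + 1j * np.round(center.imag)   # Q_{Z_C}{center}
    span = int(np.ceil(np.sqrt(n_cand))) + 1                    # square large enough to
    offsets = [dx + 1j * dy                                     # contain the closest points
               for dx in range(-span, span + 1) for dy in range(-span, span + 1)]
    cands = base + np.array(offsets)
    return cands[np.argsort(np.abs(cands - center))][:n_cand]   # SE order: increasing distance

print(rdn_candidates(0.3 - 0.8j, 5))
# At each layer, the FNSA step then computes the N partial Euclidean distances, keeps the
# min{K, N} smallest ones together with their cumulative distances, and proceeds to the
# next layer, as sketched in Fig. 2.
```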
Moreover, since the equivalent channel matrix in the LRA case is only roughly orthogonal, the mutual influence of the transformed z is small but still present. Hence, a neighborhood study in the original constellation domain improves the performance compared to a SIC. However, contrarily to classical solutions that are not LRA, the necessary size for achieving the optimal performance is smaller. Figure 4 depicts the BER for the aforementioned techniques. Some notable points have to be highlighted from this figure. Contrary to the RDN LRA-ZF/MMSE-(O)SIC FNSA, the ODN ZF/MMSE-SIC FNSA does not reach the ML diversity for a reasonable neighborhood size, even if there is a decrease of the SNR offset in the MMSE-SIC case. However, a BER offset can be observed in the low SNR range, due to error propagation. Consequently, there exists a switching point from low to high SNR between LRA detectors and others. This aspect is removed through the use of better techniques. In particular, the SQRD in the RDN LRA-MMSE-OSIC FNSA presented in this work offers ML diversity, and the BER offset in low SNR has been highly reduced compared to the RDN LRA-MMSE-SIC FNSA and is now close-to-ML. Uncoded BER of the ODN ZF-SIC-centered FNSA (curve 1), of the ODN MMSE-SIC-centered FNSA (curve 2), of the RDN LRA-ZF-SIC-centered FNSA (curve 3), of the RDN LRA-MMSE-SIC-centered FNSA (curve 4), of the RDN LRA-MMSE-OSIC-centered FNSA (curve 5), and of the ML (curve 6), for K = {1, 2, 4, 16} (top left, top right, bottom left, and bottom right, respectively), 4 × 4 complex Rayleigh channel, QPSK modulation on each layer It may also be noticed in Fig. 4 that the RDN LRA-ZF-SIC-centered FNSA does not reach the ML performance, contrarily to other techniques. It is due to the chosen neighborhood size in the reduced constellation value (N = 5) that is not sufficient for this detector but that is sufficient for the proposed LRA-MMSE-(O)SIC Babai points. With a larger N value, the RDN LRA-ZF-SIC-centered FNSA achieves the ML performance, similarly to other presented detectors. Similarly to Fig. 4, some notable points have to be highlighted from Fig. 5. There still exists a switching point from low to high SNR regime between LRA detectors and others. This aspect is removed through the use of better techniques. In particular, the SQRD in the RDN LRA-MMSE-OSIC FNSA offers ML diversity and the BER offset in low SNR has been importantly reduced compared to the RDN LRA-MMSE-SIC FNSA, leading now to a close-to-ML solution. We can observe from both Figs. 4 and 5 that even though when ZF-SIC and equivalent MMSE-SIC are not LRA, they achieve the ML performance at the detriment of a very large neighborhood study size; it is of the order of the number of symbols contained in the employed constellation. By comparing the impact on LRA detector performance of QPSK and 16-QAM modulations, two fundamental points must be discussed. Firstly, there implicitly exists a constraint from the QPSK constellation construction that eliminates nearby lattice points that do not belong to \( {\xi}^{n_{\mathrm{T}}} \), due to the quantization operation \( {Q}_{\xi^{n_{\mathrm{T}}}}\left\{.\right\} \). This aspect annihilates a large part of the LR-aid benefit and cannot be corrected despite the increase of the neighborhood study size since many lattice points considered in the RDN would be associated with the same constellation point after quantization in the original constellation. 
In the case of larger constellation orders, the LRA solution is more effective, as depicted in Fig. 5. Uncoded BER of the ODN ZF-SIC-centered FNSA (curve 1), of the ODN MMSE-SIC-centered FNSA (curve 2), of the RDN LRA-ZF-SIC-centered FNSA (curve 3), of the RDN LRA-MMSE-SIC-centered FNSA (curve 4), of the RDN LRA-MMSE-OSIC-centered FNSA (curve 5), and of the ML (curve 6), for K = {1, 2, 4, 16} (top left, top right, bottom left, and bottom right, respectively), 4 × 4 complex Rayleigh channel, 16-QAM modulation on each layer Secondly, we recall that the constant-modulus constellation assumption has, in theory, to be fulfilled, which is not the case in Fig. 5 with 16-QAM modulation on each layer. However, this constraint is almost respected in mean value, as shown in Appendix 1 (Fig. 12). In Fig. 6, the performance of the R(O)DN (LRA)-MMSE-(O)SIC FNSA detectors is depicted with and without this assumption being enforced, but only for a neighborhood scan of 1 and 2 neighbors, for the sake of consistency between the QPSK and 16-QAM performance. Uncoded BER of the strictly equivalent ODN MMSE-SIC-centered FNSA, of the strictly equivalent RDN LRA-MMSE-SIC-centered FNSA, of the strictly equivalent RDN LRA-MMSE-OSIC-centered FNSA, compared to the case where the assumption is respected in mean, and of the ML, for K = {2, 4}, 4 × 4 complex Rayleigh channel, 16-QAM modulation on each layer. Some curves are coincident As depicted in Fig. 6 with 16-QAM modulation, the performance is impacted by the fact that the strict equivalence assumption does not hold, i.e., the term $\boldsymbol{x}^H\boldsymbol{x}$ (or $\boldsymbol{z}^H\boldsymbol{z}$) is not exactly constant but only constant on average. As shown in this figure, the resulting performance loss is negligible. Moreover, it is insignificant compared to the advantage that the LRA brings with high-order constellations, an advantage which would be annihilated by the use of a QPSK constellation. The proposed solution is particularly efficient for a large number of antennas and for high-order constellations. This is not the case for the LRA-MMSE-OSIC, which has been shown to exhibit a worse BER performance than the ML detection in 4 × 4 MIMO systems with 16-QAM modulation on each layer [40], whereas it performs close to ML in 4 × 4 MIMO systems with QPSK modulation on each layer [41]. For the sake of completeness, Fig. 7 shows the same results as Fig. 5 with 64-QAM modulation. Again, this figure shows that the proposed detection algorithm stands out with high-order constellations. Uncoded BER of the ODN ZF-SIC-centered FNSA (curve 1), of the ODN MMSE-SIC-centered FNSA (curve 2), of the RDN LRA-ZF-SIC-centered FNSA (curve 3), of the RDN LRA-MMSE-SIC-centered FNSA (curve 4), of the RDN LRA-MMSE-OSIC-centered FNSA (curve 5), and of the ML (curve 6), for K = 4, 4 × 4 complex Rayleigh channel, 64-QAM modulation on each layer Figure 8 shows the comparison between the proposed RDN LRA-MMSE-OSIC-centered FNSA and the ML detection for a large number of antennas, with $n_{\mathrm{R}}=n_{\mathrm{T}}=N$ for N = 64 and N = 128, and K = 2. First, there is no doubt that increasing the number of antennas increases the performance gain. Secondly, the proposed solution shows a comparable performance with respect to the ML decoder. At a BER = 10−4, the SNR loss is less than 0.4 dB for 16-QAM and less than 0.5 dB for 64-QAM, while the complexity of the proposed RDN LRA-MMSE-OSIC-centered FNSA solution is far lower than that of the ML decoder. This will be discussed in the next section.
BER comparison between the proposed RDN LRA-MMSE-OSIC and ML detector, for n R = n T = N, 16-QAM (continuous line), 64-QAM (dash) Finally, even though it is not the target of the paper, we have also drawn the simulation results of the proposed solution with realistic (imperfect) channel estimation. Figure 9 shows the simulation results when the channel estimation error variance Δ is equal to 0.001 and 0.005, assuming that the channel coefficients power is normalized by the number of antennas. This figure shows that the proposed LRA-MMSE solution still provides quasi-ML detection even with imperfect channel estimation. Uncoded BER with imperfect channel estimation, of the ODN MMSE-SIC-centered FNSA (FNSA curve), of the proposed RDN LRA-MMSE-OSIC-centered FNSA (proposed), and of the ML, with perfect channel estimation Δ = 0 and estimated channels, Δ = 0.001 (left) and Δ = 0.005 (right), for K = 4, 4 × 4 complex Rayleigh channel, QPSK modulation on each layer Complexity evaluation Based on the assumptions presented in Table 1, the computational complexities introduced in Table 2 can be demonstrated. The RDN study is processed in an infinite lattice, so no boundary control is required; a finite set of displacements is generated in a SE fashion in the simulations, and its size has been fixed to N = 5, a value chosen through simulations. Although an SE technique is used, the proposed solution does not consider any complexity reduction such as early termination. Table 2 Computational complexity equivalences As shown in Table 3, the computational complexities of the RDN LRA-ZF/MMSE-(O)SIC FNSA detectors do not depend on the constellation order log2{M}. This can be checked in the numerical applications in Table 4, and it is the key advantage of the paper over classical techniques for high-order modulations such as 16- or 64-QAM. The SNR losses compared to ML are given in Table 4; they have been measured at an uncoded BER of 10−4 for the ML decoder. For all the configurations given in Table 4, the numerical application of the corresponding computational complexity is given in Table 5 for a RDN size N = 5. Table 3 ODN ZF-(O)SIC FNSA, ODN MMSE-(O)SIC FNSA, RDN LRA-ZF-(O)SIC FNSA, RDN LRA-MMSE-(O)SIC FNSA, and ML formulas Table 4 SNR loss at BER = 10−4, ODN ZF-SIC FNSA, ODN MMSE-SIC FNSA, RDN LRA-ZF-SIC FNSA, RDN LRA-MMSE-SIC FNSA, and RDN LRA-MMSE-OSIC FNSA compared to ML Table 5 ODN ZF-SIC, ODN MMSE-SIC, RDN LRA-ZF-SIC, RDN LRA-MMSE-SIC, RDN LRA-MMSE-OSIC, and ML computational complexities in MUL Even if the proposed solution is two times more complex in the QPSK case, it offers near-ML performance and in particular an SNR gain of 0.3 dB at a BER of 10−4. The interesting point concerns higher-order modulations: starting from the 16-QAM modulation, the proposed solution is roughly ten times less complex than the classical one, for the same performance. The same conclusion holds for a 64-QAM modulation, in which case the complexity gain grows further, to roughly a hundred times. Similarly, the numerical application of the 16-QAM extension complexity is given in Table 6.
As an example, in the case of 16-QAM modulations, the computational complexities read \( 8MK{n}_{\mathrm{T}}^2+4MK{n}_{\mathrm{T}}-4MK+3M \) for the ODN equivalent MMSE-(O)SIC and \( 8N \min \left\{K,N\right\}{n}_{\mathrm{T}}^2+60 \min \left\{K,N\right\}{n}_{\mathrm{T}}+4N\ \min \left\{K,N\right\}{n}_{\mathrm{T}}-4N \min \left\{K,N\right\}+24 \min \left\{K,N\right\}{n}_{\mathrm{T}}^2+8 \min \left\{K,N\right\}{n}_{\mathrm{R}}{n}_{\mathrm{T}}+2 \min \left\{K,N\right\}{n}_{\mathrm{R}}+16{n}_{\mathrm{T}}^2-32 \min \left\{K,N\right\}+2N \) for the RDN equivalent LRA-MMSE-(O)SIC, and with M = 4 since a QPSK modulation is considered in this case. As depicted in Table 6, the computational complexity of the 16-QAM extension with respect to the constant modulus criterion is more important compared to the straightforward but not strictly correct solution. Since no significant gain is provided, we consequently claim it does not offer high advantages. Table 6 ODN MMSE-SIC, RDN LRA-MMSE-SIC, and RDN LRA-MMSE-OSIC computational complexities in MUL Figure 10 shows the "measured" complexity of all solutions explored in this work versus the constellation size, expressed in terms of the exponent (in base 10) of the computational capacity in MUL, for n R = n T = 8 and K = 2. This figure shows, as explained earlier, that the proposed solution is independent of the constellation size. This is very crucial in the future large MIMO systems exploiting large dimensions. Figure 11 is in line with the previous conclusion. It provides the computation complexity of the different MIMO detection solutions, expressed as a function of the number of antennas. This figure shows that the proposed solution is almost ten times less complex than the classical K-best solutions. Moreover, it presents almost equal complexity for n T ≥ 32 yielding another important characteristic for large MIMO decoding. The exponent in base 10 of the computational complexity, n R = n T = 8, K = 2 The exponent in base 10 of the computational complexity, as a function of the number of antennas, 16-QAM Finally, to give some concrete example, Table 6 compares between ODN and RDN cases. It shows that the proposed solution offers an advantage over existing solutions when applied to any OFDM standard supporting MIMO spatial-multiplexing mode, e.g., IEEE 802.16, IEEE 802.11, 3GPP LTE, and 3GPP LTE-A. It may be advantageously considered in the case of a large number of antennas and consequently in the case of the 3GPP LTE-A standard. The main advantages reside in the following points: ▪The equivalent expression of the LRA-MMSE-centered SD, which corresponds to an efficient LRA-MMSE-OSIC Babai point, improves the performance or reduces the complexity of the detector. ▪The proposed (S)QRD formulation with reduced domain neighborhood induces the use of the best known hard detector as a Babai point, for both large number of antennas and high-order modulations. ▪The proposed expression is robust by nature to any search center and constellation order and offers close-to-optimal performance for large K. Likewise, the proposed solution offers a computational complexity that is independent of the constellation order which consequently offers a solution that outperforms classical SD techniques for a reasonable computational complexity in the case of high-order constellations. For instance, the neighborhood study size K has been reduced to K = 2 for a 16-QAM modulation compared to classical SD techniques. 
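As a quick numerical illustration of the constellation-independence claim, the snippet below evaluates the two MUL-count expressions quoted above, treating M as the constellation size in the first one. The parameter values are arbitrary examples, and the transcription of the expressions is taken verbatim from the text.

```python
# The ODN expression grows with the constellation size M; the RDN one does not depend on M.
def mul_odn_mmse_sic(M, K, n_t):
    return 8 * M * K * n_t**2 + 4 * M * K * n_t - 4 * M * K + 3 * M

def mul_rdn_lra_mmse_sic(N, K, n_t, n_r):
    k = min(K, N)
    return (8 * N * k * n_t**2 + 60 * k * n_t + 4 * N * k * n_t - 4 * N * k
            + 24 * k * n_t**2 + 8 * k * n_r * n_t + 2 * k * n_r
            + 16 * n_t**2 - 32 * k + 2 * N)

for M in (4, 16, 64):   # QPSK, 16-QAM, 64-QAM
    print(M, mul_odn_mmse_sic(M, K=2, n_t=8),
          mul_rdn_lra_mmse_sic(N=5, K=2, n_t=8, n_r=8))
```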
In this paper, the LRA-MMSE-centered SD has been proposed with a K-best neighborhood generation. A detailed and hardware implementation-oriented computational complexity estimation has been provided and combined with performance results. It has been shown that the proposed detection technique outperforms the existing solutions. In particular, the corresponding implementation complexity has been shown to be independent of the constellation size and polynomial in the number of antennas while reaching the ML performance with both real and perfect channel estimation. It implies a ten times lower computational complexity compared to the classical K-best, even for a large MIMO system, with 16-QAM modulation on each layer. It is worth mentioning that, with respect to our previous work in [1], this paper presents a detailed technical description of the proposed methodology, a detailed complexity analysis, and more results. This particularly includes a step by step implementation of the proposed algorithm in Appendix 4. S Aubert, Y Nasser, F Nouvel, Lattice reduction-aided minimum mean square error k-best detection for MIMO systems, in Proc. of the International Conference Computing, Networking and Communications (ICNC), 2012, pp. 1066–1070 F Rusek, D Persson, BK Lau, EG Larsson, TL Marzetta, O Edfors, F Tufvesson, Scaling up MIMO: opportunities and challenges with very large arrays. IEEE Signal Processing Magazine 30(1), 40–46 (2013) EG Larsson, F Tufvesson, O Edfors, TL Marzetta, Massive MIMO for next generation wireless systems. IEEE Commun. Mag. 52(2), 186–195 (2014) Y Kong, Q Zhou, X Ma, Lattice reduction aided transceiver design for multiuser MIMO downlink transmissions, in Proc. of the IEEE Military Communications Conference (MILCOM), 2014, pp. 556–562 KA Singhal, T Datta, A Chockalingam, Lattice reduction aided detection in large-MIMO systems, in Proc. of the IEEE 14th Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2013, pp. 594–598 E Zimmermann, G Fettweis, Linear MIMO receivers vs. tree search detection: a performance comparison overview, in Proc. of the IEEE Personal Indoor and Mobile Radio Communications (PIMRC), 2006, pp. 1–7 N Prasad, MK Varanasi, Analysis of decision feedback detection for MIMO Rayleigh-fading channels and the optimization of power and rate allocations. IEEE Transactions on Information Theory 50(6), 1009–1025 (2004) R Xu, FCM Lau, Performance analysis for MIMO systems using zero forcing detector over fading channels. IEE Proceedings on Communications 153(1), 74–80 (2006) Y Nasser, J-F Hélard, M Crussière. System Level Evaluation of Innovative Coded MIMO-OFDM Systems for Broadcasting Digital TV; in EURASIP International Journal of Digital Multimedia Broadcasting. 2008(359206), 12 (2008) E Viterbo, J Boutros, A universal lattice code decoder for fading channels. IEEE Trans. on Information Theory 45, 1639–1642 (1997) B Hassibi, H Vikalo, On the expected complexity of sphere decoding, in Proc. of the Asimolar Conference on Signal, Systems and Computers, 2001, pp. 1051–1055 C Schnorr, M Euchner, Lattice basis reduction: improved practical algorithms and solving subset sum problems. Mathematical Programming 66, 181–199 (1994) Z Guo, P Nilsson, Algorithm and implementation of the K-best sphere decoding for MIMO detection. IEEE Journal on Selected Areas in Communications 24(3), 491–503 (2006) LG Barbero, JS Thompson, A fixed-complexity MIMO detector based on the complex sphere decoder. 
IEEE 7th Workshop on Signal Processing Advances in Wireless Communications, 2006. SPAWC '06. pp. 1, 5, 2–5 (2006) M Mohaisen, KyungHi Chang, On improving the efficiency of the fixed-complexity sphere decoder. 2009 IEEE 70th Vehicular Technology Conference Fall (VTC 2009-Fall), 20–23 Sept 2009, pp. 1, 5 Y Ding, Y Wang, JF Diouris, Z Yao, Robust fixed-complexity sphere decoders for rank-deficient MIMO systems. IEEE Trans. Wireless Commun 12(9), 4297–4305 (2013) J Fink, S Roger, A Gonzalez, V Almenar, VM Garciay, Complexity assessment of sphere decoding methods for MIMO detection. 2009 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), 14–17 Dec 2009, pp. 9, 14 C Hess, M Wenk, A Burg, P Luethi, C Studer, N Felber, W Fichtner, Reduced-complexity MIMO detector with close-to ML error rate performance, in Proc. of the GLSVLSI, 2007, pp. 200–203 D Wuebben, R Bohnke, V Kuhn, K-D Kammeyer, MMSE-based lattice-reduction for near-ML detection of MIMO systems, in Proc. of the ITG Workshop on Smart Antennas, 2004, pp. 106–113 S Roger, A Gonzales, V Almenar, AM Vidal, Lattice-reduction-aided K-best MIMO detector based on the channel matrix condition number. 2010 4th International Symposium on Communications, Control and Signal Processing (ISCCSP), March 2010, pp. 1–4 C-F Liao, Y-H Huang, Cost reduction algorithm for 8x8 lattice reduction-aided K-best MIMO detector, in Proc. of the IEEE International Conference of Signal Processing, Communication and Computing, 2012, pp. 186–190 X-F Qi, K Holt, A lattice-reduction-aided soft demapper for high-rate coded MIMO-OFDM systems. IEEE Signal Processing Letters 14(5), 305–308 (2007) M Shabany, PG Gulak, The application of lattice-reduction to the K-best algorithm for near-optimal MIMO detection. IEEE International Symposium on Circuits and Systems, 2008. ISCAS 2008. 18–21 May 2008, pp. 316–319 JC Marinello, T Abrao, Lattice reduction aided detector for dense MIMO via ant colony optimization, in Proc. of the IEEE Wireless Communications and Networking Conference (WCNC), 2013, pp. 2839–2844. Shanghai LG Barbero, JS Thompson, A fixed-complexity MIMO detector based on the complex sphere decoder, in Proc. of the Workshop on Signal Processing Advances for Wireless Communications, 2006, pp. 1–5 E Agrell, T Eriksson, E Vardy, K Zeger, Closest point search in lattices. IEEE Transactions on Information Theory 48(8), 2201–2214 (2002) K-W Wong, C-Y Tsui, S-K Cheng, W-H Mow, A VLSI architecture of a K-best lattice decoding algorithm for MIMO channels, in Proc. of the IEEE International symposium on Circuits and Systems, vol. 3, 2002, pp. 273–276 E Viterbo, J Boutros, A universal lattice code decoder for fading channels. IEEE Transactions on Information Theory 45(5), 1639–1642 (1999) S Aubert, M Mohaisen, From linear equalization to lattice-reduction-aided sphere-detector as an answer to the MIMO detection problematic in spatial multiplexing systems. Vehicular Technologies, 978-953-7619-X-X, INTECH, (2011) BA Lamacchia, Basis reduction algorithms and subset sum problems. Technical report, MSc Thesis, Massachusetts Institute of Technology, 1991 AK Lenstra, HW Lenstra, L Lovász, Factoring polynomials with rational coefficients. Mathematische Annalen 261(4), 515–534 (1982) M Seysen, Simultaneous reduction of a lattice basis and its reciprocal basis. Combinatorica 13(3), 363–376 (1993) D Wübben, R Böhnke, V Kühm, K-D Kammeyer, Near-maximum-likelihood detection of MIMO systems using MMSE-based lattice-reduction, in Proc. 
of the IEEE International Conference on Communications, vol. 2, 2004, pp. 798–802 S Roger, A Gonzalez, V Almenar, AM Vidal, On decreasing the complexity of lattice-reduction-aided K-best MIMO detectors, in Proc. of the European Signal Processing Conference, 2009, pp. 2411–2415 B Gestner, W Zhang, X Ma, DV Anderson, VLSI implementation of a lattice reduction algorithm for low-complexity equalization, in Proc. of the IEEE International Conference on Circuits and Systems for Communications, 2008, pp. 643–647 T Cui, C Tellambura, An efficient generalized sphere decoder for rank-deficient MIMO systems. IEEE Communications Letters 9(5), 423–425 (2005) L Wang, L Xu, S Chen, L Hanzo, MMSE soft-interference-cancellation aided iterative center-shifting K-best sphere detection for MIMO channels, in the Proc. of the IEEE International Conference on Communications, 2008, pp. 3819–3823 J Jalden, B Ottersten, On the complexity of sphere decoding in digital communications. IEEE Transactions on Signal Processing 53(4), 1474–1484 (2005) X Wang, Z He, K Niu, W Wu, X Zhang, An improved detection based on lattice reduction in MIMO systems, in Proc. of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2006, pp. 1–5 C Studer, A Burg, H Bolcskei, Soft-output sphere decoding: algorithms and VLSI implementation. IEEE Journal on Selected Areas in Communications 26(2), 290–300 (2008) W Zhang, M Xiaoli, Approaching optimal performance by lattice-reduction aided soft detectors. 41st Annual Conference on Information Sciences and Systems, 2007. CISS '07. 14–16 March 2007, pp. 818–822 M Pohst, On the computation of lattice vectors of minimal length, successive minima and reduced basis with applications. ACM SIGSAM Bull. 15, 37–44 (1981) This paper was partially presented in [1]. ECE Department, American University of Beirut, Bliss Street, Beirut, Lebanon Youssef Nasser, Karim Y. Kabalan & Hassan A. Artail ST-Ericsson, 635 route des Lucioles, Sophia-Antipolis, France Sebastien Aubert Université Européenne de Bretagne, INSA, IETR, UMR 6164, 35708, Rennes, France & Fabienne Nouvel Correspondence to Youssef Nasser. An erratum to this article is available at http://dx.doi.org/10.1186/s13638-015-0478-z. Appendix 1: the constant modulus constraint in (6) The authors of [42] discussed the constant modulus constraint of x in (6) when n T is large. It has been shown that the constant modulus signal assumption becomes the time average of the n T x i entries. Figure 12 presents the probability density functions (PDF) of x H x for different number of transmit antennas and different modulations. This figure shows that, due to the weak law of large numbers, the term is Gaussian centered to a mean value that is constant in time. Consequently, the assumption may still be considered as fulfilled as n T increases.
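A small Monte Carlo sketch in the spirit of Appendix 1 / Fig. 12 is given below (sample sizes and seed are arbitrary): for unit-average-energy M-QAM, the per-vector power $x^Hx/n_T$ concentrates around 1 as $n_T$ grows, so the constant-modulus assumption of Eq. (6) holds on average for 16-QAM and 64-QAM.

```python
import numpy as np

rng = np.random.default_rng(3)

def qam(M):
    m = int(np.sqrt(M))
    pam = np.arange(-(m - 1), m, 2)                      # ..., -3, -1, 1, 3, ...
    pts = (pam[:, None] + 1j * pam[None, :]).ravel()
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))      # unit average energy

for M in (4, 16, 64):
    for n_t in (2, 4, 8, 64):
        x = rng.choice(qam(M), size=(20000, n_t))
        avg_power = np.sum(np.abs(x) ** 2, axis=1) / n_t  # time average over the n_T entries
        print(f"M={M:2d}, n_T={n_t:2d}: mean={avg_power.mean():.3f}, std={avg_power.std():.3f}")
```

For QPSK the spread is exactly zero (constant modulus); for 16-QAM and 64-QAM the standard deviation of the average power shrinks as $n_T$ increases, which is the concentration invoked in the appendix.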
It is worth mentioning that, in order to make (6) strictly equivalent to the ML metric, any M-QAM constellation may be represented as a weighted sum of QPSK constellations: PDF of the transmit signal power for all the possible symbols vectors, 2 × 2, 4 × 4, and 8 × 8 complex Rayleigh channel, QPSK, 16-QAM, and 64-QAM modulations on each layer $$ {\boldsymbol{x}}^{\left(M\hbox{-} \mathrm{Q}\mathrm{A}\mathrm{M}\right)}={\displaystyle {\sum}_{i=0}^{\log_2\left\{M\right\}-1}{2}^i\left(\frac{\sqrt{2}}{2}\right){\boldsymbol{x}}_i^{\left(\mathrm{QPSK}\right)}} $$ Where x (M ‐ QAM) is an n T symbols vector whose entries all belong to a M-QAM constellation and \( {\boldsymbol{x}}_i^{\left(\mathrm{QPSK}\right)} \) is an n T symbol vector whose all entries belong to a QPSK constellation. Appendix 2: proof of Equation (8) Let us introduce any term \( c\;\mathrm{s}.\mathrm{t}.\parallel \boldsymbol{y}-\boldsymbol{H}\boldsymbol{x}{\parallel}^2+c=\parallel \boldsymbol{U}\left(\tilde{\boldsymbol{x}} - \boldsymbol{x}\right){\parallel}^2 \), where \( \tilde{\boldsymbol{x}} \) is any (ZF or MMSE) unconstrained linear estimate: $$ \begin{array}{c}c=\parallel \boldsymbol{U}\left(\tilde{\boldsymbol{x}}-\boldsymbol{x}\right){\parallel}^2-\parallel \boldsymbol{y}-\boldsymbol{H}\boldsymbol{x}{\parallel}^2\\ {}={\left(\tilde{\boldsymbol{x}}-\boldsymbol{x}\right)}^H{\boldsymbol{U}}^H\boldsymbol{U}\left(\tilde{\boldsymbol{x}}-\boldsymbol{x}\right)-{\left(\boldsymbol{y}-\boldsymbol{H}\boldsymbol{x}\right)}^H\left(\boldsymbol{y}-\boldsymbol{H}\boldsymbol{x}\right)\\ {}={\tilde{\boldsymbol{x}}}^H\boldsymbol{G}\tilde{\boldsymbol{x}} - {\tilde{\boldsymbol{x}}}^H\boldsymbol{G}\boldsymbol{x}-\boldsymbol{x}\boldsymbol{G}\tilde{\boldsymbol{x}}+{\boldsymbol{x}}^H\boldsymbol{G}\boldsymbol{x}-{\boldsymbol{y}}^H\boldsymbol{y}+{\boldsymbol{y}}^H\boldsymbol{H}\boldsymbol{x}+{\boldsymbol{x}}^H{\boldsymbol{H}}^H\boldsymbol{y}-{\boldsymbol{x}}^H{\boldsymbol{H}}^H\boldsymbol{H}\boldsymbol{x},\ \mathrm{with}\ \boldsymbol{G}={\boldsymbol{U}}^H\boldsymbol{U},\\ {}={\boldsymbol{y}}^H\boldsymbol{H}{\boldsymbol{G}}^{-1}\boldsymbol{G}{\boldsymbol{G}}^{-1}{\boldsymbol{H}}^H\boldsymbol{y}-{\boldsymbol{y}}^H\boldsymbol{H}{\boldsymbol{G}}^{-1}\boldsymbol{G}\boldsymbol{x}-\boldsymbol{x}\boldsymbol{G}{\boldsymbol{G}}^{-1}{\boldsymbol{H}}^H\boldsymbol{y}+{\boldsymbol{x}}^H\boldsymbol{G}\boldsymbol{x}-{\boldsymbol{y}}^H\boldsymbol{y}+{\boldsymbol{y}}^H\boldsymbol{H}\boldsymbol{x}+{\boldsymbol{x}}^H{\boldsymbol{H}}^H\boldsymbol{y}-{\boldsymbol{x}}^H{\boldsymbol{H}}^H\boldsymbol{H}\boldsymbol{x},\end{array} $$ by introducing \( \tilde{\boldsymbol{x}} = {\boldsymbol{G}}^{-1}{\boldsymbol{H}}^H\boldsymbol{y} \) and \( {\tilde{\boldsymbol{x}}}^H={\boldsymbol{y}}^H\boldsymbol{H}{\boldsymbol{G}}^{-1} \) and where G = U H U = H H H in the ZF case and G = H H H + σ 2 I in the MMSE case. $$ \begin{array}{c}c={\boldsymbol{y}}^H\boldsymbol{H}{\boldsymbol{G}}^{-1}{\boldsymbol{H}}^H\boldsymbol{y}+{\boldsymbol{x}}^H\boldsymbol{G}\boldsymbol{x}-{\boldsymbol{y}}^H\boldsymbol{y}-{\boldsymbol{x}}^H{\boldsymbol{H}}^H\boldsymbol{H}\boldsymbol{x}\\ {}={\boldsymbol{y}}^H\boldsymbol{H}{\boldsymbol{G}}^{-1}{\boldsymbol{H}}^H\boldsymbol{y}+{\boldsymbol{x}}^H\left(\boldsymbol{G}-{\boldsymbol{H}}^H\boldsymbol{H}\right)\boldsymbol{x}-{\boldsymbol{y}}^H\boldsymbol{y}\end{array} $$ In the ZF case, HG − 1 H H = HH − 1(H H)− 1 H H = I and G − H H H = 0, consequently c = 0 which is a constant term. 
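The snippet below is a quick check of the QPSK-decomposition idea used in this appendix, written here with log4(M) QPSK terms (one per pair of bits); the scaling/normalization convention may differ from the exact form quoted above, so this is only an illustration of the construction, not a transcription of the paper's formula.

```python
import itertools
import numpy as np

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])          # unnormalized QPSK

def qam_from_qpsk(levels):
    """sum_i 2**i * q_i over q_i in QPSK reproduces the odd-integer M-QAM grid, M = 4**levels."""
    pts = {sum(2**i * q for i, q in enumerate(combo))
           for combo in itertools.product(qpsk, repeat=levels)}
    return np.array(sorted(pts, key=lambda c: (c.real, c.imag)))

grid_16 = qam_from_qpsk(2)                                    # 16 points: {+-1, +-3} x {+-1, +-3}
reference = np.array(sorted((a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)),
                            key=lambda c: (c.real, c.imag)))
print(np.allclose(grid_16, reference))                        # True
```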
In the MMSE case, c = y H[H(H H H + σ 2 I)− 1 H H − I]y + σ 2 x H x which is a constant term in x iff the signal x entries are of constant modulus. Appendix 3: proof of Equation (12) The proof of Equation (12) is very similar to the proof of (8); however, in this appendix, we work on the LRA-based detector. Let us introduce any term \( c^{\prime}\;\mathrm{s}.\mathrm{t}.\parallel \boldsymbol{y}-\tilde{\boldsymbol{H}}\boldsymbol{z}{\parallel}^2+c^{\prime }=\parallel \tilde{\boldsymbol{U}}\left(\tilde{\boldsymbol{z}} - \boldsymbol{z}\right){\parallel}^2 \), where \( \tilde{\boldsymbol{z}} \) is any LRA (ZF or MMSE) unconstrained linear estimate: $$ \begin{array}{c}c^{\prime }=\parallel \tilde{\boldsymbol{U}}\left(\tilde{\boldsymbol{z}} - \boldsymbol{z}\right){\parallel}^2-\parallel \boldsymbol{y}-\tilde{\boldsymbol{H}}\;\boldsymbol{z}{\parallel}^2\\ {}={\left(\tilde{\boldsymbol{z}}-\boldsymbol{z}\right)}^H{\tilde{\boldsymbol{U}}}^H\tilde{\boldsymbol{U}}\left(\tilde{\boldsymbol{z}} - \boldsymbol{z}\right)-{\left(\boldsymbol{y}-\tilde{\boldsymbol{H}}\;\boldsymbol{z}\right)}^H\left(\boldsymbol{y}-\tilde{\boldsymbol{H}}\;\boldsymbol{z}\right)\\ {}={\tilde{\boldsymbol{z}}}^H\tilde{\boldsymbol{G}}\tilde{\boldsymbol{z}} - {\tilde{\boldsymbol{z}}}^H\tilde{\boldsymbol{G}}\boldsymbol{z}-{\boldsymbol{z}}^H\tilde{\boldsymbol{G}}\tilde{\boldsymbol{z}}+{\boldsymbol{z}}^H\tilde{\boldsymbol{G}}\boldsymbol{z}-{\boldsymbol{y}}^H\boldsymbol{y}+{\boldsymbol{y}}^H\tilde{\boldsymbol{H}}\boldsymbol{z}+{\boldsymbol{z}}^H{\tilde{\boldsymbol{H}}}^H\boldsymbol{y}-{\boldsymbol{z}}^H{\tilde{\boldsymbol{H}}}^H\tilde{\boldsymbol{H}}\boldsymbol{z},\ \mathrm{with}\ {\tilde{\boldsymbol{U}}}^H\tilde{\boldsymbol{U}}=\tilde{\boldsymbol{G}},\\ {}={\boldsymbol{y}}^H\tilde{\boldsymbol{H}}{\tilde{\boldsymbol{G}}}^{-1}\tilde{\boldsymbol{G}}{\tilde{\boldsymbol{G}}}^{-1}{\tilde{\boldsymbol{H}}}^H\boldsymbol{y}-{\boldsymbol{y}}^H\tilde{\boldsymbol{H}}{\tilde{\boldsymbol{G}}}^{-1}\tilde{\boldsymbol{G}}\boldsymbol{z}-\boldsymbol{z}\tilde{\boldsymbol{G}}{\tilde{\boldsymbol{G}}}^{-1}{\tilde{\boldsymbol{H}}}^H\boldsymbol{y}+{\boldsymbol{z}}^H\tilde{\boldsymbol{G}}\boldsymbol{z}-{\boldsymbol{y}}^H\boldsymbol{y}+{\boldsymbol{y}}^H\tilde{\boldsymbol{H}}\boldsymbol{z}+{\boldsymbol{z}}^H{\tilde{\boldsymbol{H}}}^H\boldsymbol{y}-{\boldsymbol{z}}^H{\tilde{\boldsymbol{H}}}^H\tilde{\boldsymbol{H}}\boldsymbol{z}\end{array} $$ by introducing \( \tilde{\boldsymbol{z}}={\tilde{\boldsymbol{G}}}^{-1}{\tilde{\boldsymbol{H}}}^H\boldsymbol{y} \) and \( {\tilde{\boldsymbol{z}}}^H={\boldsymbol{y}}^H\tilde{\boldsymbol{H}}{\tilde{\boldsymbol{G}}}^{-1} \), where \( \tilde{\boldsymbol{G}}={\tilde{\boldsymbol{H}}}^H\tilde{\boldsymbol{H}} \) in the LRA-ZF case and \( \tilde{\boldsymbol{G}}={\tilde{H}}^H\tilde{H}+{\sigma}^2{\boldsymbol{T}}^H\boldsymbol{T} \) in the LRA-MMSE case. 
$$ \begin{array}{c}c^{\prime }={\boldsymbol{y}}^H\tilde{\boldsymbol{H}}{\tilde{\boldsymbol{G}}}^{-1}{\tilde{\boldsymbol{H}}}^H\boldsymbol{y}+{\boldsymbol{z}}^H\tilde{\boldsymbol{G}}\boldsymbol{z}-{\boldsymbol{y}}^H\boldsymbol{y}-{\boldsymbol{z}}^H{\tilde{\boldsymbol{H}}}^H\tilde{\boldsymbol{H}}\boldsymbol{z}\\ {}={\boldsymbol{y}}^H\tilde{\boldsymbol{H}}{\tilde{\boldsymbol{G}}}^{-1}{\tilde{\boldsymbol{H}}}^H\boldsymbol{y}+{\boldsymbol{z}}^H\left(\tilde{\boldsymbol{G}}-{\tilde{\boldsymbol{H}}}^H\tilde{\boldsymbol{H}}\right)\boldsymbol{z}-{\boldsymbol{y}}^H\boldsymbol{y}\end{array} $$ In the ZF case, \( \tilde{\boldsymbol{H}}{\tilde{\boldsymbol{G}}}^{-1}{\tilde{\boldsymbol{H}}}^H=\tilde{\boldsymbol{H}}{\tilde{\boldsymbol{H}}}^{-1}{\left({\tilde{\boldsymbol{H}}}^H\right)}^{-1}{\tilde{\boldsymbol{H}}}^H=\boldsymbol{I} \) and \( \tilde{\boldsymbol{G}}-{\tilde{\boldsymbol{H}}}^H\tilde{\boldsymbol{H}}=0 \), consequently c′ = 0 is a constant term. In the MMSE case, \( c^{\prime }={\boldsymbol{y}}^H\left[\tilde{\boldsymbol{H}}{\left({\tilde{\boldsymbol{H}}}^H\tilde{\boldsymbol{H}}+{\sigma}^2{\boldsymbol{T}}^H\boldsymbol{T}\right)}^{-1}{\tilde{\boldsymbol{H}}}^H-\boldsymbol{I}\right]\boldsymbol{y}+{\sigma}^2{\boldsymbol{z}}^H{\boldsymbol{T}}^H\boldsymbol{T}\boldsymbol{z} \) which is a constant term in x iff the signal x entries are of constant modulus since σ 2 z H T H Tz = σ 2 x H x. Appendix 4: description of the proposed detection algorithm: RDN LRA-ZF-(O)SIC K-best Nasser, Y., Aubert, S., Nouvel, F. et al. A simplified hard output sphere decoder for large MIMO systems with the use of efficient search center and reduced domain neighborhood study. J Wireless Com Network 2015, 227 (2015) doi:10.1186/s13638-015-0442-y DOI: https://doi.org/10.1186/s13638-015-0442-y Minimum Mean Square Error MIMO System Zero Force Constellation Size
CommonCrawl
Otto Engine for the q-State Clock Model Michel A. Aguilera, Francisco J. Peña, Oscar A. Negrete, Patricio Vargas Subject: Physical Sciences, Condensed Matter Physics Keywords: q-state clock model; entropy; Berezinskii-Kosterlitz-Thouless transition; Otto engine; Mean- field approximation This present work explores the performance of a thermal-magnetic engine of Otto type, considering as a working substance an effective interacting spin model corresponding to the q− state clock model. We obtain all the thermodynamic quantities for the q = 2, 4, 6, 8 cases in a small lattice size (3×3 with free boundary conditions) by using the exact partition function calculated from the energies of all the accessible microstates of the system. The extension to bigger lattices was performed using the mean-field approximation. Our results indicate that the total work extraction of the cycle is highest for the q=4 case, while the performance for the Ising model (q=2) is the lowest of all cases studied. These results are strongly linked with the phase diagram of the working substance and the location of the cycle in the different magnetic phases present, where we find that the transition from a ferromagnetic to a paramagnetic phase extracts more work than one of the Berezinskii–Kosterlitz–Thouless to paramagnetic type. Additionally, as the size of the lattice increases, the extraction work is lower than smaller lattices for all values of q presented in this study. Short-range Berezinskii-Kosterlitz-Thouless Phase Characterization for the q-state Clock Model Oscar Andres Negrete, Patricio Vargas, Francisco Jose Peña, Gonzalo Saravia, Eugenio Emilio Vogel Subject: Physical Sciences, Acoustics Keywords: q-state clock model; Entropy; Berezinskii-Kosterlitz-Thouless transition; ergodicity Beyond the usual ferromagnetic and paramagnetic phases present in spin systems, the usual q-state clock model, presents an intermediate vortex state when the number of possible orientations q for the system is equal to 5 or larger. Such vortex states give rise to the Berezinskii-Kosterlitz-Thouless (BKT) phase present up to the XY model in the limit q→∞. Based on information theory, we present here an analysis of the classical order parameters plus new short-range parameters defined here. Thus, we show that even using the first nearest neighbors spin-spin correlations only, it is possible to distinguish the two transitions presented by this system for q greater than or equal to 5. Moreover, the appearance at relatively low temperature and disappearance of the BKT phase at a rather fix higher temperature is univocally determined by the short-range interactions recognized by the information content of classical and new parameters. Entropy and Mutability for the Q-States Clock Model in Small Systems Oscar A. Negrete, Patricio Vargas, Francisco J. Peña, Gonzalo Saravia, Eugenio E. Vogel Subject: Physical Sciences, Condensed Matter Physics Keywords: q-states clock model; Entropy; Berezinskii-Kosterlitz-Thouless transition In this paper, we revisit the q-states clock model for small systems. We present results for the thermodynamics of the q-states clock model from $q=2$ to $q=20$ for small square lattices $L \times L$, with L ranging from $L=3$ to $L=64$ with free-boundary conditions. Energy, specific heat, entropy and magnetization are measured. 
We found that the Berezinskii-Kosterlitz-Thouless (BKT)-like transition appears for $q>5$ regardless of lattice size, while the transition at $q=5$ is lost for $L<10$; for $q\leq 4$ the BKT transition is never present. We report the phase diagram in terms of $q$ showing the transition from the ferromagnetic (FM) to the paramagnetic (PM) phases at a critical temperature T$_1$ for small systems which turns into a transition from the FM to the BKT phase for larger systems, while a second phase transition between the BKT and the PM phases occurs at T$_2$. We also show that the magnetic phases are well characterized by the two dimensional (2D) distribution of the magnetization values. We make use of this opportunity to do an information theory analysis of the time series obtained from the Monte Carlo simulations. In particular, we calculate the phenomenological mutability and diversity functions. Diversity characterizes the phase transitions, but the phases are less detectable as $q$ increases. Free boundary conditions are used to better mimic the reality of small systems (far from any thermodynamic limit). The role of size is discussed. Preprint COMMUNICATION | doi:10.20944/preprints202203.0033.v1 High Repetition, TEM00 mode, Compact Sub-Nanosecond 532nm Laser Dong-Dong Meng, Tian-Qi Wang, Mi Zhou, Zhan-Duo Qiao, Xiao-Long Liu, Zhong-Wei Fan Subject: Physical Sciences, Optics Keywords: laser remote sensing; photon-counting lidar; microchip laser; passively Q-switching; compact solid-state lasers As a critical transmitter, the compact 532 nm lasers operating on high repetition and narrow pulse widths have been used widely for airborne or space-borne laser active remote sensing. We developed a free space pumped TEM00 mode sub-nanosecond 532 nm laser that occupied a volume of less than 125 mm × 50 mm × 40 mm (0.25 liters). The fundamental 1064 nm laser consists of a passively Q-switched composite crystal microchip laser and an off-axis, two-pass power amplifier. The pump sources were two single-emitter semiconductor laser diodes (LD) of 808 nm with a maximum continuous wave (CW) power of 10 W each. The average power of fundamental 1064 nm laser was 1.26 W with the laser operating at 16 kHz repetition rates, and 857ps pulse widths. Since the beam distortion would be severe in microchip lasers in terms of the increase in heat load, for obtaining a high beam quality of 532 nm, the beam distortion was compensated by adjusting the distribution of pumping beam in our experiment of fundamental amplification. Furthermore, better than 0.6 W average power, 770 ps, beam quality of M2 <1.2, and 16 kHz pulse output at 532 nm was obtained by a Type I LiB3O5 (LBO) crystal in the critical phase matching (CPM) regime for second harmonic generation (SHG). Estimating Density and Temperature Dependence of Juvenile Vital Rates Using a Hidden Markov Model Robert M. McElderry Subject: Biology, Ecology Keywords: Anaea aidea; caterpillar demography; multi-state mark-recapture; state-space model; stage-structured matrix Organisms in the wild have cryptic life stages that are sensitive to changing environmental conditions and can be difficult to survey. In this study, I used mark-recapture methods to repeatedly survey Anaea aidea (Nymphalidae) caterpillars in nature, then modeled caterpillar demography as a hidden Markov process to assess if temporal variability in temperature and density influence the survival and growth of A. aidea over time. 
Individual encounter histories result from the joint likelihood of being alive and observed in a particular stage, and I included hidden states by separating demography and observations into parallel and independent processes. I constructed a demographic matrix containing the probabilities of all possible fates for each stage, including hidden states, e.g., eggs and pupae. I observed both dead and live caterpillars with high probability. Peak caterpillar abundance attracted multiple predators, and survival of fifth instars declined as per capita predation rate increased through spring. A time lag between predator and prey abundance was likely the cause of improved fifth instar survival estimated at high density. Growth rates showed an increase with temperature, but the most likely model did not include temperature. This work illustrates how state-space models can include unobservable stages and hidden state processes to evaluate how environmental factors influence vital rates of cryptic life stages in the wild. Some Properties of $Q$-Hermite Fubini Numbers and Polynomials Waseem A. Khan Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: $q$-Hermite polynomials; $q$-Hermite-Fubini polynomials; $q$-Bernoulli polynomials; $q$-Euler polynomials; $q$-Genocchi polynomials; Stirling numbers of the second kind The main purpose of this paper is to introduce a new class of $q$-Hermite-Fubini numbers and polynomials by combining the $q$-Hermite polynomials and $q$-Fubini polynomials. By using generating functions for these numbers and polynomials, we derive some alternative summation formulas including powers of consecutive $q$-integers. Also, we establish some relationships for $q$-Hermite-Fubini polynomials associated with $q$-Bernoulli polynomials, $q$-Euler polynomials and $q$-Genocchi polynomials and $q$-Stirling numbers of the second kind. Online State of Charge and State of Health Estimation for Lithium-Ion Battery Based on a Data-Model Fusion Method Zhongbao Wei, Feng Leng, Zhongjie He, Wenyu Zhang, Kaiyuan Li Subject: Engineering, Energy & Fuel Technology Keywords: state of charge; state of health; model identification; estimation; lithium-ion battery The accurate monitoring of state of charge (SOC) and state of health (SOH) is critical for the reliable management of lithium-ion battery (LIB) systems. In this paper, the online model identification is scrutinized to achieve high modeling accuracy and robustness, and a model-based joint estimator is further proposed to estimate the SOC and SOH of LIB concurrently. Specifically, an adaptive forgetting recursive least squares (AF-RLS) method is exploited to optimize the estimation alertness and numerical stability, so as to achieve accurate online adaption of model parameters. Leveraging the online adapted battery model, a joint estimator is proposed by combining an open-circuit voltage (OCV) observer with a low-order state observer to co-estimate the SOC and capacity of LIB. Simulation and experimental studies are performed to evaluate the performance of the proposed data-model fusion method. Results suggest that the proposed method can effectively track the variation of model parameters by using the onboard measured current and voltage data. The SOC and capacity can be further estimated in real time with fast convergence, high accuracy and high stability. 
A Note on $(P,Q)$-Analogue Type of Fubini Numbers and Polynomials Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: $(p,q)$-calculus; $(p,q)$-Bernoulli polynomials; $(p,q)$-Euler polynomials; $(p,q)$-Genocchi polynomials; $(p,q)$-Fubini numbers and polynomials; $(p,q)$-Stirling numbers of the second kind In this paper, we introduce a new class of $(p,q)$-analogue type Fubini numbers and polynomials and investigate some properties of these polynomials. We establish summation formulas for these polynomials by series summation techniques. Furthermore, we consider some relationships for $(p,q)$-Fubini polynomials associated with $(p,q)$-Bernoulli polynomials, $(p,q)$-Euler polynomials and $(p,q)$-Genocchi polynomials and $(p,q)$-Stirling numbers of the second kind. Notes on $q$-Hermite Based Unified Apostol Type Polynomials Waseem Khan Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: $q$-Hermite type polynomials; $q$-unified Apostol type polynomials; $q$-Hermite based unified Apostol type polynomials. In this article, a new class of $q$-Hermite based unified Apostol type polynomials is introduced by means of generating function and series representation. Several important formulas and recurrence relations for these polynomials are derived via different generating methods. We also introduce the $q$-analogue of Stirling numbers of the second kind of order $\nu$, by which we construct a relation involving the aforementioned polynomials. On the Degenerate $(h,q)$-Changhee Numbers and Polynomials Yunjae Kim, Jin-Woo Park Subject: Mathematics & Computer Science, Applied Mathematics Keywords: (h,q)-Euler polynomials; degenerate (h,q)-Changhee polynomials; fermionic p-adic q-integral on Z_p In this paper, we investigate a new $q$-analogue of the higher-order degenerate Changhee polynomials and numbers and derive a Witt-type formula for the $q$-analogue of degenerate Changhee polynomials of order $r$. We can derive some new interesting identities related to the degenerate $(h,q)$-Changhee polynomials and numbers. Jongkyum Kwon, Yunjae Kim, Byung Moon Kim, Jin-Woo Park Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: (h,q)-Euler polynomials; degenerate (h,q)-Changhee polynomials; fermionic p-adic q-integral on Z_p. On p-Adic Integral Representation of q-Bernoulli Numbers Arising from Two Variable q-Bernstein Polynomials C.S. Ryoo, T. Kim, D.S. Kim, Y. Yao Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: q-Bernoulli numbers; q-Bernoulli polynomials; Bernstein polynomials; q-Bernstein polynomials; p-adic integral on Zp In this paper, we study the p-adic integral representation on Zp of q-Bernoulli numbers arising from two variable q-Bernstein polynomials and investigate some properties of the q-Bernoulli numbers. In addition, we give some new identities of q-Bernoulli numbers. Applications of q-Umbral Calculus to Modified Apostol Type q-Bernoulli Polynomials Mehmet Acikgoz, Resul Ates, Ugur Duran, Serkan Araci Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: q-umbral calculus; Apostol-Bernoulli polynomials; modified Apostol type q-Bernoulli polynomials; q-Appell polynomials; generating functions This article aims to identify the generating function of modified Apostol type q-Bernoulli polynomials. With the aid of this generating function, some properties of modified Apostol type q-Bernoulli polynomials are given. It is shown that the aforementioned polynomials are q-Appell.
Hence, we make use of these polynomials to give applications in q-umbral calculus. From these applications, we derive some theorems that express modified Apostol type q-Bernoulli polynomials as linear combinations of some known polynomials stated in the paper. A Study on Some New Results Arising from (p,q)-Calculus Ugur Duran, Mehmet Acikgoz, Serkan Araci Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: q-calculus; (p,q)-calculus; exponential functions; trigonometric functions; hyperbolic functions This paper includes some new investigations and results for post-quantum calculus, denoted by (p,q)-calculus. A chain rule for the (p,q)-derivative is developed. Also, a new (p,q)-analogue of the exponential function is introduced, and some of its properties, including the addition property for (p,q)-exponential functions, are investigated. Several useful results involving (p,q)-binomial coefficients and the (p,q)-antiderivative are discovered. In the final part of this paper, (p,q)-analogues of some elementary functions, including trigonometric and hyperbolic functions, are considered and some properties and relations among them are analyzed extensively. Energy-Momentum Tensor and Parameters in Cosmological Model Ying-Qiu Gu Keywords: cosmological model; energy-momentum tensor; equation of state; cosmic curvature; cosmological constant; negative pressure; dynamic analysis In cosmology, the cosmic curvature $K$ and the cosmological constant $\Lambda$ are the two most important parameters, whose values have a strong influence on the behavior of the universe. By analyzing the energy-momentum tensor and equations of state of ideal gas, scalar, spinor and vector potential in detail, we find that the total mass density of all matter is always positive, and the initial total pressure is negative. Under these conditions, by qualitatively analyzing the global behavior of the dynamical equation of the cosmological model, we get the following results: (i) $K=1$, namely, the global spatial structure of the universe should be a 3-dimensional sphere $S^3$. (ii) $0\le\Lambda < 10^{-24}\,{\rm ly}^{-2}$, that is, the cosmological constant should be zero or an infinitesimal. (iii) $a(t)>0$, that is, the initial singularity of the universe is unreachable, and the evolution of the universe should be cyclic in time. This means that the initial Big Bang is impossible. Since the matter components considered are quite complete and the proof is very elementary and strict, these logical conclusions should be quite reliable. Obviously, these conclusions will be very helpful to correct some popular misconceptions and bring great convenience to further research on other problems in cosmology, such as the properties of dark matter and dark energy. Zeros and Value Sharing Results for q-Shift Difference and Differential Polynomials Rajshree Dhar Subject: Mathematics & Computer Science, Analysis Keywords: entire and meromorphic functions; q-shift; q-difference polynomial; shared values; Nevanlinna theory In this paper, we consider the zero distributions of q-shift monomials and difference polynomials of meromorphic functions with zero order, which extends the classical Hayman results on the zeros of differential polynomials to q-shift difference polynomials. We also investigate the problem of q-shift difference polynomials that share a common value.
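For the q- and (p,q)-calculus entries above, the following standard operators (quoted here only as a reminder of the common definitions, not taken from the papers themselves) underlie the constructions: the q-number and q-derivative, their (p,q)-generalizations, and the q-shift used in the difference polynomials.
\[
[n]_q=\frac{1-q^{\,n}}{1-q},\qquad (D_q f)(z)=\frac{f(qz)-f(z)}{(q-1)\,z}\ \ (z\neq 0),
\]
\[
[n]_{p,q}=\frac{p^{\,n}-q^{\,n}}{p-q},\qquad (D_{p,q} f)(x)=\frac{f(px)-f(qx)}{(p-q)\,x}\ \ (x\neq 0),
\]
with the q-shift acting as $f(z)\mapsto f(qz)$; the ordinary derivative is recovered in the limit $q\to 1$ (and $p\to 1$ in the $(p,q)$ case).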
Inequalities for Hypo-q-Norms on a Cartesian Product of Inner Product Spaces Silvestru Dragomir Subject: Mathematics & Computer Science, Analysis Keywords: hypo-q-norms; Cartesian product In this paper we introduce the hypo-q-norms on a Cartesian product of inner product spaces. A representation of these norms in terms of inner products, the equivalence with the q-norms on a Cartesian product and some reverse inequalities obtained via the scalar Shisha-Mond, Birnacki et al., Grüss type inequalities, Boas-Bellman and Bombieri type inequalities are also given. Attitude Control of Highly Maneuverable Aircraft Using an Improved Q-learning Mohsen Zahmatkesh, Seyyed Ali Emami, Afshin Banazadeh, Paolo Castaldi Subject: Engineering, Control & Systems Engineering Keywords: Reinforcement Learning; Q-learning; Fuzzy Q-learning; Attitude Control; Truss-braced Wing; Flight Control Attitude control of a novel regional truss-braced wing aircraft with low stability characteristics is addressed in this paper using Reinforcement Learning (RL). In recent years, RL has been increasingly employed in challenging applications, particularly autonomous flight control. However, a significant predicament confronting discrete RL algorithms is the dimension limitation of the state-action table and difficulties in defining the elements of the RL environment. To address these issues, in this paper, a detailed mathematical model of the mentioned aircraft is first developed to shape an RL environment. Subsequently, Q-learning, the most prevalent discrete RL algorithm, will be implemented in both the Markov Decision Process (MDP) and Partially Observable Markov Decision Process (POMDP) frameworks to control the longitudinal mode of the air vehicle. In order to eliminate residual fluctuations that are a consequence of discrete action selection, and simultaneously track variable pitch angles, a Fuzzy Action Assignment (FAA) method is proposed to generate continuous control commands using the trained Q-table. Accordingly, it will be proved that by defining an accurate reward function, along with observing all crucial states (which is equivalent to satisfying the Markov Property), the performance of the introduced control system surpasses a well-tuned Proportional–Integral–Derivative (PID) controller. q-Sumudu Transforms of Product of Generalized Basic Hypergeometric Function and Application V K Vyas, Ali A. Al-Jarrah, S.D. Purohit Subject: Mathematics & Computer Science, Analysis Keywords: q-analogue of Sumudu transforms; q-analogue of hypergeometric functions; general class of q-polynomials; Fox's H-function; basic analogue of I-function The prime objective of this article is to determine q-Sumudu transforms of a product of a unified family of q-polynomials with the basic (or q-) analogue of Fox's H-function and the q-analogue of I-functions. Specialized cases of the leading outcome are further evaluated as the q-Sumudu transform of a general class of q-polynomials and the q-Sumudu transforms of the basic analogues of Fox's H-function and I-functions. A New Model for Charged Anisotropic Matter with Modified Chaplygin Equation of State Manuel Malaver, Hamed Daei Kasmaei Subject: Physical Sciences, Acoustics Keywords: Einstein-Maxwell field equations; Chaplygin equation of state; metric potential; radial pressure; measure of anisotropy In this paper, we find a new model for a compact star with charged anisotropic matter distribution, considering an extended version of the Chaplygin equation of state.
We specify a particular form of the metric potential Z(x) that allows us to solve the Einstein-Maxwell field equations. The obtained model satisfies all physical properties expected in a realistic star, such that the expressions for the radial pressure, energy density, metric coefficients, measure of anisotropy and the mass are well defined and regular in the interior of the star. The solution obtained in this work can have multiple applications in astrophysics and cosmology. On Shape Parameter $\alpha$ Based Approximation Properties and $q$-Statistical Convergence of Baskakov-Gamma Operators Ming-Yu Chen, Md. Nasiruzzaman, Mohammad Ayman Mursaleen, Nadeem Rao Subject: Mathematics & Computer Science, General Mathematics Keywords: Baskakov operators; gamma operators; rate of convergence; Lipschitz maximal space; q-density; q-statistical convergence. In this paper, we construct a novel family of summation-integral type hybrid operators in terms of the shape parameter $\alpha\in[0,1]$. Basic estimates, the rate of convergence, and the order of approximation are also studied using the Korovkin theorem and the modulus of smoothness. We investigate the local approximation findings for these sequences of positive linear operators utilising Peetre's K-functional, the Lipschitz class, and the second-order modulus of smoothness. The Molten Globule, and Two-State vs. Non-Two-State Folding of Globular Proteins Kunihiro Kuwajima Subject: Life Sciences, Biophysics Keywords: protein folding; molten globule state; two-state proteins; non-two-state proteins From experimental studies of protein folding, it is now clear that there are two types of folding behavior, i.e., two-state folding and non-two-state folding, and understanding the relationships between these apparently different folding behaviors is essential for fully elucidating the molecular mechanisms of protein folding. This article describes how the presence of the two types of folding behavior has been confirmed experimentally, and discusses the relationships between the two-state and the non-two-state folding reactions, on the basis of available data on the correlations of the folding rate constant with various structure-based properties, which are determined primarily by the backbone topology of proteins. Finally, a two-stage hierarchical model is proposed as a general mechanism of protein folding. In this model, protein folding occurs in a hierarchical manner, reflecting the hierarchy of the native three-dimensional structure, as embodied in the case of non-two-state folding with an accumulation of the molten globule state as a folding intermediate. The two-state folding is thus merely a simplified version of the hierarchical folding caused either by an alteration in the rate-limiting step of folding or by destabilization of the intermediate. Approximating Ground States by Neural Network Quantum States Ying Yang, Chengyang Zhang, Huaixin Cao Subject: Physical Sciences, Mathematical Physics Keywords: approximation; ground state; neural network quantum state The many-body problem in quantum physics originates from the difficulty of describing the non-trivial correlations encoded in the exponential complexity of the many-body wave function.
Motivated by Giuseppe Carleo's work "Solving the quantum many-body problem with artificial neural networks" [Science, 2017, 355: 602], we focus on finding the neural network quantum state (NNQS) approximation of the unknown ground state of a given Hamiltonian $H$ in terms of the best relative error and explore the influences of sums, tensor products, and local unitaries of Hamiltonians on the best relative error. Besides, we illustrate our method with some examples. An Adaptive Early Fault Detection Model of Induced Draft Fans Based on Multivariate State Estimation Technique Ruijun Guo, Guobin Zhang, Qian Zhang, Lei Zhou, Haicun Yu, Meng Li, You Lv Subject: Engineering, Automotive Engineering Keywords: fault detection; induced draft fan; multivariate state estimation technique (MSET); model update; power plant The induced draft (ID) fan is an important piece of auxiliary equipment in thermal power plants. It is of great significance to monitor the operation of the ID fan for safe and efficient production. In this paper, an adaptive warning model is proposed to detect early faults of ID fans. First, a non-parametric monitoring model is constructed to describe the normal operation states with the multivariate state estimation technique (MSET). Then, an early warning approach is presented to identify abnormal behaviors based on the results of the MSET model. As the performance of the MSET model is heavily influenced by the normal operation data in the historic memory matrix, an adaptive strategy is proposed by using the samples with a high data quality index (DQI) to manage the memory matrix and update the model. The proposed method is applied to a 300 MW coal-fired power plant for early fault detection, and it is compared with the model without an update. Results show that the proposed method can detect the fault earlier and more accurately. PID Control as a Process of Active Inference with Linear Generative Model Manuel Baltieri, Christopher L. Buckley Subject: Mathematics & Computer Science, Probability And Statistics Keywords: approximate Bayesian inference; active inference; PID control; generalised state-space models; sensorimotor loops; information theory; control theory In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. In particular, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to offer a unified understanding of life and cognition within a general mathematical framework derived from information and control theory, and statistical mechanics. However, we argue that if the active inference proposal is to be taken as a general process theory for biological systems, it is necessary to understand how it relates to existing control theoretical approaches routinely used to study and explain biological systems. For example, recently, PID control has been shown to be implemented in simple molecular systems and is becoming a popular mechanistic explanation of behaviours such as chemotaxis in bacteria and amoebae, and robust adaptation in biochemical networks. In this work, we will show how PID controllers can fit a more general theory of life and cognition under the principle of (variational) free energy minimisation when using approximate linear generative models of the world.
This more general interpretation also provides a new perspective on traditional problems of PID controllers, such as parameter tuning, as well as the need to balance the performance and robustness of a controller. Specifically, we then show how these problems can be understood in terms of the optimisation of the precisions (inverse variances) modulating different prediction errors in the free energy functional. Multifarious Results for q-Hermite Based Frobenius Type Eulerian Polynomials Waseem Khan, Idrees Ahmad Khan, Mehmet Acikgoz, Ugur Duran Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Hermite polynomials; Frobenius type Eulerian polynomials; Hermite based Frobenius type Eulerian polynomials; q-numbers; q-polynomials. In this paper, a new class of q-Hermite based Frobenius type Eulerian polynomials is introduced by means of generating function and series representation. Several fundamental formulas and recurrence relations for these polynomials are derived via different generating methods. Furthermore, diverse correlations including the q-Apostol-Bernoulli polynomials, the q-Apostol-Euler polynomials, the q-Apostol-Genocchi polynomials and the q-Stirling numbers of the second kind are also established by means of their generating functions. Smart Core and Surface Temperature Estimation Techniques for Health-conscious Lithium-ion Battery Management Systems: A Model-to-Model Comparison Sumukh Surya, Akash Samanta, Sheldon Williamson Subject: Engineering, Electrical & Electronic Engineering Keywords: Electric Vehicles; Stationary Battery Energy Storage System; Battery Automated System; Online State Estimation; Thermal Modeling; First-order model; Second-order Model; Kalman Filtering Estimation of core and surface temperature is one of the crucial functionalities of the lithium-ion Battery Management System (BMS) towards providing effective thermal management, fault detection and operational safety. While it is impractical to measure core temperature using physical sensors, implementing a complex estimation strategy in an on-board, low-cost BMS is challenging due to the high computational cost and the cost of implementation. Typically, a temperature estimation scheme consists of a heat generation model and a heat transfer model. Several researchers have already proposed a range of thermal models having different levels of accuracy and complexity. Broadly, there are first-order and second-order heat capacitor-resistor-based thermal models of lithium-ion batteries (LIBs) for core and surface temperature estimation. This paper deals with a detailed comparative study between these two models using extensive laboratory test data and a simulation study to assess their suitability for online prediction and onboard BMS use. The aim is to provide guidance on whether it is worth investing in the development of a second-order model instead of a first-order model with respect to prediction accuracy, considering the modelling complexity, the experiments required and the computational cost. Both thermal models, along with the parameter estimation scheme, are modelled and simulated in the MATLAB/Simulink environment. The models are validated using laboratory test data of a cylindrical 18650 LIB cell. Further, a Kalman Filter with appropriate process and measurement noise levels is used to estimate the core temperature in terms of the measured surface and ambient temperatures. Results from the first-order and second-order models are analyzed for comparison purposes.
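The entry above pairs a lumped core/surface thermal model with a Kalman filter driven by the measured surface and ambient temperatures. A minimal sketch of that idea is given below; the two-node model structure is a common choice in the literature, and all numerical parameter values are illustrative placeholders rather than the values identified in the paper.

```python
import numpy as np

# Two-node (core/surface) lumped thermal model, Euler-discretised.
# x = [T_core, T_surface]; inputs u = [heat generation, T_ambient].
# All parameter values are illustrative placeholders.
dt, Cc, Cs, Rc, Ru, Re = 1.0, 60.0, 30.0, 2.0, 5.0, 0.02

A = np.array([[1 - dt/(Rc*Cc), dt/(Rc*Cc)],
              [dt/(Rc*Cs), 1 - dt/(Rc*Cs) - dt/(Ru*Cs)]])
B = np.array([[dt/Cc, 0.0],
              [0.0, dt/(Ru*Cs)]])
H = np.array([[0.0, 1.0]])          # only the surface temperature is measured
Q = np.diag([1e-3, 1e-3])           # process noise covariance
R = np.array([[0.05]])              # measurement noise covariance

def kf_step(x, P, z, u):
    """One predict/update cycle of a linear Kalman filter."""
    x = A @ x + B @ u               # predict state
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)         # correct with surface measurement
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
x_true = np.array([25.0, 25.0])
x_est, P = np.array([25.0, 25.0]), np.eye(2)
for k in range(600):
    u = np.array([(10.0**2) * Re, 25.0])            # I^2*Re heating, 25 degC ambient
    x_true = A @ x_true + B @ u
    z = np.array([x_true[1] + rng.normal(0, 0.2)])  # noisy surface sensor reading
    x_est, P = kf_step(x_est, P, z, u)
print("estimated core temperature:", x_est[0])
```

A second-order model would add further RC nodes to A and B; the filter structure itself stays the same, which is the trade-off the paper examines.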
A Mathematical Model of the Transition from the Normal Hematopoiesis to the Chronic and Accelerated Acute Stages in Myeloid Leukemia Lorand Gabriel Parajdi, Radu Precup, Eduard Alexandru Bonci, Ciprian Tomuleasa Subject: Mathematics & Computer Science, Applied Mathematics Keywords: mathematical modeling; dynamic system; steady state; stability; hematopoiesis; chronic myeloid leukemia; stem cells A mathematical model given by a two-dimensional differential system is introduced in order to understand the transition process from normal hematopoiesis to the chronic and accelerated acute stages in chronic myeloid leukemia. A previous model of Dingli and Michor is refined by introducing a new parameter in order to differentiate the bone marrow microenvironment sensitivities of normal and mutant stem cells. In the light of the new parameter, the system now has three distinct equilibria corresponding to the normal hematopoietic state, to the chronic state, and to the accelerated acute phase of the disease. A characterization of the three hematopoietic states is obtained based on the stability analysis. Numerical simulations are included to illustrate the theoretical results. Quantum Fisher Information of W and GHZ State Superposition under Decoherence Volkan Erol Subject: Physical Sciences, General & Theoretical Physics Keywords: quantum Fisher information; W state; GHZ state; decoherence We study the changes in quantum Fisher information (QFI) values for a quantum system consisting of a superposition of W and GHZ states. In a recent work [6], the QFI values of this system were studied. In this work, we extend this problem to the changes of QFI values in some noisy channels. We show the change in QFI depending on noise parameters. We report interesting results for different types of decoherence channels. Modeling an Inverted Pendulum via Differential Equations and Reinforcement Learning Techniques Siddharth Sharma Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Reinforcement learning; Cartpole; Q Learning; Mathematical Modeling The prevalence of differential equations as a mathematical technique has refined the fields of control theory and constrained optimization due to the newfound ability to accurately model chaotic, unbalanced systems. However, in recent research, systems are increasingly more nonlinear and difficult to model using differential equations alone. Thus, a newer technique is to use policy iteration and Reinforcement Learning, techniques that center around an action and reward sequence for a controller. Reinforcement Learning (RL) can be applied to control theory problems since a system can robustly apply RL in a dynamic environment such as the cartpole system (an inverted pendulum). This solution successfully avoids the use of PID or other dynamics optimization systems, in favor of a more robust, reward-based control mechanism. This paper applies RL and Q-Learning to the classic cartpole problem, while also discussing the mathematical background and differential equations which are used to model the aforementioned system. The Exact Evaluation of Some New Lattice Sums John Zucker Subject: Physical Sciences, Mathematical Physics Keywords: Jacobian q-series; closed form lattice sums New q-series in the spirit of Jacobi have been found in a publication first published in 1884, written in Russian and translated into English in 1928. This work was found by chance and appears to be almost totally unknown.
From these entirely new q-series, fresh lattice sums have been discovered and are presented here. Construction of Maximally Multiqubit Entangled State by Using CNOT Gates Xinwei Zha, Jun-ling Che Subject: Physical Sciences, General & Theoretical Physics Keywords: Maximally multiqubit entangled state; Bell-pair state; CNOT gates We propose a novel protocol to build a maximally entangled state based on controlled-NOT (CNOT) gates. In particular, we give detailed steps to construct maximally entangled states for 4-, 5-, and 6-qubit systems. The advantage of our method is its simple algebraic structure, which can be realized with current experimental technology. The Class of (p,q)-spherical Distributions Wolf-Dieter Richter Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Gauss-exponential distribution; Gauss-Laplace distribution; stochastic vector representation; geometric measure representation; (p,q)-generalized polar coordinates; (p,q)-arc length; dynamic intersection proportion function; (p,q)-generalized Box-Muller simulation method; (p,q)-spherical uniform distribution; dynamic geometric disintegration For evaluating probabilities of arbitrary random events with respect to a given multivariate probability distribution, specific techniques are of great interest. An important two-dimensional high-risk limit law is the Gauss-exponential distribution, whose probabilities can be dealt with based upon the Gauss-Laplace law. The latter is considered here as an element of the newly introduced family of (p,q)-spherical distributions. Based upon a suitably defined non-Euclidean arc-length measure on (p,q)-circles, we prove geometric and stochastic representations of these distributions and of correspondingly distributed random vectors, respectively. These representations allow dealing with the new probability measures similarly to elliptically contoured distributions and more general homogeneous star-shaped ones. This is demonstrated by means of a generalization of the Box-Muller simulation method. En passant, we prove an extension of the sector and circle number functions. A Review of Hybrid Automata Models Mohammad Reza Besharati, Mohammad Izadi Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Hybrid Automata; Formal Modeling; Discrete State; Continuous State; Formal Methods In this paper, Hybrid Automata, a formal model for hybrid systems, is introduced. A summary of its theory is presented, some of its special and important classes are listed, and some properties that can be studied and checked for it are mentioned. Finally, the purposes of use, the most widely used areas, and the tools that provide H.A. support are addressed. Quantum Fisher Information of Decohered W and GHZ Superposition States with Arbitrary Relative Phase Subject: Physical Sciences, General & Theoretical Physics Keywords: Quantum Fisher Information; arbitrary phase; W State; GHZ State; decoherence Quantum Fisher Information (QFI) is a very useful concept for analyzing situations that require phase sensitivity. It has become a popular topic, especially in the quantum metrology domain. In this work, we study the changes in QFI values for a quantum system with an arbitrary relative phase, consisting of a superposition of N-qubit W and GHZ states. In a recent work [7], the QFI values of this system for N qubits were studied. In this work, we extend this problem to the changes of QFI values in some noisy channels for the studied system.
We show the changes in QFI depending on noise parameters. We report interesting results for different types of decoherence channels. We show the general case results for this problem. Design and Simulation of Adaptive PID Controller Based on Fuzzy Q-Learning Algorithm for a BLDC Motor Reza Rouhi Ardeshiri, Nabi Nabiyev, Shahab S. Band, Amir Mosavi Subject: Engineering, Automotive Engineering Keywords: Q-learning; Fuzzy logic; Adaptive controller; BLDC motor Reinforcement learning (RL) is an extensively applied control method for the purpose of designing intelligent control systems that achieve high accuracy as well as better performance. In the present article, the PID controller is considered as the main control strategy for brushless DC (BLDC) motor speed control. For better performance, the fuzzy Q-learning (FQL) method, as a reinforcement learning approach, is proposed to adjust the PID coefficients. A comparison with the adaptive PID (APID) controller is also performed to demonstrate the superiority of the proposed method, and the findings show a reduction of the error and the elimination of the overshoot in controlling the motor speed. MATLAB/SIMULINK has been used for modeling, simulation, and control design of the BLDC motor. New Fixed Point Results via (q,y)R-Weak Contractions with an Application Mohammad Imded, Based Ali, Waleed M. Alfaqih, Salvatore Sessa Subject: Mathematics & Computer Science, General Mathematics Keywords: fixed point; q-contraction; binary relation; integral equation In this paper, inspired by Jleli and Samet [Journal of Inequalities and Applications 38 (2014) 1–8], we introduce two new classes of auxiliary functions and utilize the same to define (q,y)R-weak contractions. Utilizing (q,y)R-weak contractions, we prove some fixed point theorems in the setting of relational metric spaces. We employ some examples to substantiate the utility of our newly proved results. Finally, we apply one of our newly proved results to ensure the existence and uniqueness of the solution of a Volterra-type integral equation. Acoustic Emissions in Compression of Building Materials: Q-Statistics Enables the Anticipation of the Breakdown Point A. Greco, Constantino Tsallis, Andrea Rapisarda, A. Pluchino, G. Fichera, L. Contrafatto Subject: Engineering, Civil Engineering Keywords: acoustic emissions; fracture process; failure prediction; q-statistics In this paper we present experimental results concerning Acoustic Emission (AE) recorded during cyclic compression tests on two different kinds of brittle building materials, namely concrete and basalt. The AE inter-event times were investigated through a non-extensive statistical mechanics analysis, which shows that their decumulative probability distributions follow q-exponential laws. The entropic index $q$ and the relaxation parameter $\beta_q = 1/T_q$, obtained by fitting the experimental data, exhibit systematic changes during the various stages of the failure process, namely $(q, T_q)$ linearly align. The $T_q = 0$ point corresponds to the macroscopic breakdown of the material. The slope, including its sign, of the linear alignment appears to depend on the chemical and mechanical properties of the sample. These results provide insight into the warning signs of the incipient failure of building materials and could therefore be used in monitoring the health of existing structures such as buildings and bridges.
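As a reminder for the q-statistics entry above (standard Tsallis notation, not quoted from the paper itself), the fitted decumulative distribution of AE inter-event times is the q-exponential,
\[
e_q(x)=\bigl[\,1+(1-q)\,x\,\bigr]_{+}^{\frac{1}{1-q}},\qquad e_1(x)=e^{x},
\]
\[
P(\tau>t)=e_q\!\left(-\beta_q\,t\right),\qquad \beta_q=1/T_q ,
\]
which reduces to an ordinary exponential law for $q\to 1$ and develops a power-law tail for $q>1$; the fitted pair $(q, T_q)$ is what is tracked toward the breakdown point.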
Emergence of Shear Bands in Confined Granular Systems: Singularity of the $q$-Statistics Léo Viallon-Galiner, Gaël Combe, Vincent Richefeu, Allbens Atman Picardi Faria Subject: Physical Sciences, Condensed Matter Physics Keywords: granular materials; displacement fluctuations; $q$-gaussian; strain localization The probability distribution function (pdf) of grain displacements during the shear of a granular medium displays an unusual dependence on the shear increment upscaling, as recently evinced [Phys. Rev. Lett. 115, 238301 (2015)]. Basically, the pdf of grain displacements has clear nonextensive ($q$-Gaussian) features at small scales but approaches Gaussian characteristics at large shear window scales -- the granulence effect. Here, we extend this analysis by studying a larger system (more grains considered in the experimental setup) which exhibits a severe shear band fault during the macroscopic straining. We calculate the pdf of grain displacements and the dependency of the $q$-statistics on the shear increment. This analysis has shown a singular behavior of $q$ at large scales, displaying a non-monotonic dependence on the shear increment. By means of an independent image analysis, we demonstrate that this singular non-monotonicity could be associated with the emergence of a shear band within the confined system. We show that the exact point where the $q$-value inverts its tendency coincides with the emergence of a giant percolation cluster along the system, caused by the shear band. We believe that this original approach, using statistical mechanics tools to identify shear bands, can be a very useful piece in solving the complex puzzle of the rheology of dense granular systems. Modified Degenerate Carlitz's q-Bernoulli Polynomials and Numbers with Weight (α, β) Ugur Duran, Mehmet Acikgoz Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Carlitz's q-Bernoulli polynomials; Stirling numbers of the first kind; Stirling numbers of the second kind; p-adic q-integral The main goal of the present paper is to construct some families of Carlitz's q-Bernoulli polynomials and numbers. We firstly introduce the modified Carlitz's q-Bernoulli polynomials and numbers with weight (α, β) and investigate some of their explicit properties and identities arising from the bosonic q-Volkenborn integral on ℤp. We then define the modified degenerate Carlitz's q-Bernoulli polynomials and numbers with weight (α, β) and obtain some recurrence relations and other identities. Moreover, we derive some correlations with the modified Carlitz's q-Bernoulli polynomials with weight (α, β), the modified degenerate Carlitz's q-Bernoulli polynomials with weight (α, β), and the Stirling numbers of the first and second kind. How Should I Teach From This Month Onwards? A State-Space Model That Helps Drive Whole Classes to Achieve End-Of-Year National Standardized Test Learning Targets Obed Ulloa, Roberto Araya Subject: Social Sciences, Education Studies Keywords: Digital Systems; Educational Systems; State-space Models; Optimal Control; Long-term learning prediction; Learning Analytics Every month, teachers face the dilemma of which exercises their students should practice and what the consequences are for long-term learning. Since teachers prefer to pose their own exercises, this generates a large number of questions, each one attempted by a small number of students. Thus, we could not use models based on big data, such as deep learning.
Instead, we developed a simple-to-understand state-space model that predicts end-of-year national test scores. We used 2,386 online fourth-grade mathematics questions designed by teachers, each attempted by some of the 500 students in 24 low-socioeconomic schools. We found that the state-space model predictions improved month by month and that in most months they outperformed linear regression models. Moreover, the state-space estimator provides, for each month, a direct mechanism to simulate different practice strategies and compute their impact on the end-of-year standardized national test. We built iso-impact curves based on two critical variables: the number of questions solved correctly on the first attempt and the total number of exercises attempted. This allows the teacher to visualize the trade-off between asking students to do exercises more carefully and asking them to do more exercises. To the best of our knowledge, this model is the first of its kind in education. It is a novel tool that supports teachers in driving whole classes to achieve long-term learning targets. Superionic Solid Electrolyte Li7La3Zr2O12 Synthesis and Thermodynamics for Application in All-Solid-State Lithium-Ion Batteries Anatoliy Popovich, Pavel Novikov, Qingsheng Wang, Daniil Aleksandrov Subject: Materials Science, General Materials Science Keywords: lithium-ion battery; solid-state electrolyte; lithium-ion thermodynamics; solid-state synthesis A solid-state reaction was used for Li7La3Zr2O12 material synthesis from Li2CO3, La2O3 and ZrO2 powders. Phase investigation of Li7La3Zr2O12 by XRD, SEM and EDS methods was carried out. The molar heat capacity of Li7La3Zr2O12 at constant pressure in the temperature range 298-800 K can be calculated as Cp,m = 518.135 + 0.599 × T − 8.339 × T−2, where T is the absolute temperature. The thermodynamic characteristics of Li7La3Zr2O12 were determined as follows: entropy S0298 = 362.3 J mol−1 K−1, molar enthalpy of dissolution ΔdHLLZO = −1471.73 ± 29.39 kJ mol−1, standard enthalpy of formation from the elements ΔfH0 = −9327.65 ± 7.9 kJ mol−1, and standard Gibbs free energy of formation ΔfG0298 = −9435.6 kJ mol−1. Submicron Sized Nb Doped Lithium Garnet for High Ionic Conductivity Solid Electrolyte and Performance of All Solid-State Lithium Battery Yan Ji, Cankai Zhou, Feng Lin, Bingjing Li, Feifan Yang, Huali Zhu, Junfei Duan, Zhaoyong Chen Subject: Materials Science, General Materials Science Keywords: Solid State Electrolyte; Submicron Powders; Garnet; Lithium Ion Conductivity; Solid-State Batteries The garnet Li7La3Zr2O12 (LLZO) has been widely investigated because of its high conductivity, wide electrochemical window and chemical stability to lithium metal. However, the usual preparation process of LLZO requires long high-temperature sintering and a large amount of mother powder to counteract lithium evaporation. Submicron Li6.6La3Zr1.6Nb0.4O12 (LLZNO) powders are prepared by a conventional solid-state reaction method and an attrition milling process; they are stable cubic phase and have high sintering activity. Using these powders, Li-stoichiometric LLZNO ceramics, whose Li content is difficult to control under high sintering temperatures and long sintering times, are obtained by sintering at a relatively lower temperature or for a shorter time.
The particle size distribution, phase structure, microstructure, elemental distribution, total ionic conductivity, relative density and activation energy of the submicron LLZNO powders and LLZNO ceramics are tested and analyzed by a laser diffraction particle size analyzer, XRD, SEM, EIS and the Archimedes method. The total ionic conductivity of the sample sintered at 1200 °C for 30 min is 5.09 × 10−4 S·cm−1, with an activation energy of 0.311 eV and a relative density of 87.3%; for the sample sintered at 1150 °C for 60 min, the total ionic conductivity is 3.49 × 10−4 S·cm−1, the activation energy is 0.316 eV, and the relative density is 90.4%. At the same time, all-solid-state batteries are assembled with LiMn2O4 as the positive electrode and submicron LLZNO powders as the solid-state electrolyte. After 50 cycles, the discharge specific capacity is 105.5 mAh/g and the coulombic efficiency is above 95%. A Novel High-Q Dual Mass MEMS Tuning Fork Gyroscope Based On 3D Wafer-Level Packaging Pengfei Xu, Yurong He, Zhenyu Wei, Lu Jia, Guowei Han, Chaowei Si, Jin Ning, Fuhua Yang Subject: Physical Sciences, Applied Physics Keywords: Tuning fork gyroscope; MEMS; 3D packaging; high Q-factors Tuning fork gyroscopes (TFGs) are promising for potential high-precision applications. This work proposes and experimentally demonstrates a novel high-Q dual-mass tuning fork microelectromechanical system (MEMS) gyroscope utilizing three-dimensional (3D) packaging techniques. In addition to two symmetrically-decoupled proof masses (PM) with synchronization structures, a symmetrically-decoupled lever structure is designed to force the antiparallel, antiphase drive-mode motion and basically eliminate the low-frequency spurious modes. The thermoelastic damping (TED) and anchor loss are greatly reduced by the linearly-coupled, momentum- and torque-balanced antiphase sense mode. Besides, a novel 3D packaging technique is used to realize high Q-factors. A composite substrate encapsulation cap, fabricated by through-silicon-via (TSV) and glass-in-silicon (GIS) reflow processes, is anodically bonded to the sensing structures at wafer scale. A self-developed control circuit is adopted to realize loop control and characterize the gyroscope performance. It is shown that a high-reliability electrical connection together with a high-air-impermeability package can be achieved with this 3D packaging technique. Furthermore, the Q-factors of the drive and sense modes reach up to 51947 and 49249, respectively. This TFG realizes a wide measurement range of ±1800°/s and a high resolution of 0.1°/s with a scale-factor nonlinearity of 720 ppm after automatic mode-matching. Besides, the long-term zero-rate output (ZRO) drift can be effectively suppressed by temperature compensation, yielding a small angle random walk (ARW) of 0.923°/√h and a low bias instability (BI) of 9.270°/h. Increasing the Quality Factor (Q) of 1D Photonic Crystal Cavity with an End Loop-Mirror Mohamad Hazwan Haron, Ahmad Rifqi Md Zain, Burhanuddin Yeop Majlis Subject: Physical Sciences, Acoustics Keywords: Photonic crystal cavity; High Q-factor; loss reduction; SOI Increasing the quality factor (Q) of optical resonator devices has been a research focus for various applications. A higher Q-factor means that light is confined for a longer time, which produces a sharper peak and higher transmission. In this paper, we introduce a novel technique to further increase the Q-factor of a one-dimensional photonic crystal (1D PhC) cavity device by using an end loop-mirror (ELM).
The technique utilizes and recycles the light transmission from the conventional 1D PhC cavity design. The design has been proven to work using 2.5D FDTD simulations with the Lumerical FDTD and MODE software packages. By using the ELM technique, the Q-factor of a 1D PhC design has been shown to increase by up to 79.53% from the initial Q value without the ELM. This novel design technique can be combined with any high-Q-factor and very-high-Q-factor designs to further increase the Q-factor of photonic crystal cavity devices or any other suitable optical resonator devices. The experimental results show that the device can be measured by adding a Y-branch component to the one-port structure and that the high-Q result can be obtained. Transstadial Transmission from Nymph to Adult of Coxiella burnetii by Naturally Infected Hyalomma lusitanicum Julia González, Marta G. González, Félix Valcárcel, María Sánchez, Raquel Martín-Hernández, A. Sonia Olmeda Subject: Life Sciences, Biochemistry Keywords: Q fever; tick; meso-Mediterranean; transstadial transmission; artificial feeding Coxiella burnetii (Derrick) Philip, the causative agent of Q fever, is mainly transmitted by aerosols, but ticks can also be a source of infection. Transstadial and transovarial transmission of C. burnetii by Hyalomma lusitanicum (Koch) has been suggested. There is a close relationship between this tick species, wild animals and C. burnetii, but transmission in a natural environment has not been demonstrated. In this study, we collected 80 engorged nymphs of H. lusitanicum from red deer and wild rabbits. They molted to adults under laboratory conditions, and we fed them artificially through silicone membranes after a preconditioning period. C. burnetii DNA was tested in tick, blood and feces samples using real-time PCR. The pathogen was found in 36.25% of fed adults, demonstrating that transstadial transmission from nymph to adult occurs in nature. The presence of DNA in 60% of blood samples confirms that adults transmit the bacteria during feeding. Further studies are needed on co-feeding and other possible transmission routes to define the role of this tick species in the cycle of C. burnetii. Molecular Detection of Rickettsia spp. and Coxiella burnetii in Cattle, Water Buffaloes, and Rhipicephalus (Boophilus) microplus Ticks in Luzon Island of the Philippines Remil Galay, Melbourne Talactac, Bea Ambita, Dawn Chu, Lali dela Costa, Cinnamon Salangsang, Darwin Caracas, Florante Generoso, Jonathan Babelonia, Joeneil Vergano, Lena Berana, Kristina Sandalo, Billy Divina, Cherry Alvarez, Emman Mago, Masako Andoh, Tetsuya Tanaka Subject: Medicine & Pharmacology, Veterinary Medicine Keywords: Coxiella burnetii; Rickettsia; Q fever; rickettsiosis; tick-borne pathogens Rickettsia and Coxiella burnetii are zoonotic tick-borne pathogens that can cause febrile illnesses with or without other symptoms in humans but may cause subclinical infections in animals. There are only a few reports on the occurrence of these pathogens in cattle and water buffaloes in Southeast Asia, including the Philippines. In this study, molecular detection of Rickettsia spp. and C. burnetii in the blood and Rhipicephalus (Boophilus) microplus ticks of cattle and water buffaloes from five provinces in Luzon Island of the Philippines was done.
A total of 620 blood samples from cattle and water buffaloes and 206 tick samples were collected and subjected to DNA extraction. After successful amplification of control genes, nested PCR was performed to detect gltA of Rickettsia and com1 of C. burnetii. No samples were positive for Rickettsia, while 10 blood samples (7 from cattle, 3 from water buffaloes; 1.6%) and 5 tick samples (1.8%) were C. burnetii-positive. Sequence analysis of the positive amplicons showed 99-100% similarity to reported C. burnetii isolates. This molecular evidence on the occurrence of C. burnetii in Philippine ruminants and cattle ticks, and its zoonotic nature, should prompt further investigation and surveillance to facilitate its effective control. Impact of Intellectual Capital on Firm Value: The Moderating Role of Managerial Ownership Aftab Ahmed, Muhammad Kashif Khurshid, Muhammad Usman Yousaf Subject: Social Sciences, Finance Keywords: intellectual capital; firm value; managerial ownership; Tobin's Q; VAIC The rapidly changing dynamics of globalization and increasing market competition are causing companies all around the world to confront several new challenges and opportunities. To be competitive and successful, apart from the relative importance of physical resources, companies must adopt modern strategies and policies regarding market flexibility and development. The purpose of this study is to empirically investigate the relationship between intellectual capital and firm value. Furthermore, the moderating role of managerial ownership has been evaluated with the help of regression analysis. The sample included panel data taken from non-financial firms listed on the Pakistan Stock Exchange (PSX) covering the period 2010-2015. A sample of 79 firms out of 384 has been selected with the help of a systematic sampling technique. The VAIC (Value Added Intellectual Coefficient) model has been used for the calculation of intellectual capital. Tobin's Q has been taken as a measure of firm value. Managerial ownership has been tested as a moderator. Based on the data analysis, it is concluded that the relationship between intellectual capital and firm value is positively significant. It is also concluded that managerial ownership negatively moderates the relationship between intellectual capital and firm value. q-RASAR Modeling of Cytotoxicity of TiO2-based Multi-component Nanomaterials Arkaprava Banerjee, Supratik Kar, Agnieszka Gajewicz-Skretna, Kunal Roy Subject: Chemistry, Other Keywords: QSAR; q-RASAR; random forest; machine learning; TiO2-based nanoparticles Read-Across Structure-Activity Relationship (RASAR) is an emerging cheminformatic approach that combines the usefulness of a QSAR model and similarity-based Read-Across predictions. In this work, we have generated a simple, interpretable, and transferable quantitative RASAR (q-RASAR) model which can efficiently predict the cytotoxicity of TiO2-based multi-component nanomaterials. The data set involves 29 TiO2-based nanomaterials which contain specific amounts of noble metal precursors in the form of Ag, Au, Pd, and Pt. The data set was rationally divided into training and test sets, and the Read-Across-based predictions for the test set were generated using the tool Read-Across-v4.1 available from https://sites.google.com/jadavpuruniversity.in/dtc-lab-software/home. The hyperparameters were optimized based on the training set data, and using this optimized setting, the Read-Across-based predictions for the test set were obtained.
The optimized hyperparameters and the similarity approach which yields the best predictions were used to calculate the similarity- and error-based RASAR descriptors using the tool RASAR-Desc-Calc-v2.0 available from https://sites.google.com/jadavpuruniversity.in/dtc-lab-software/home. These RASAR descriptors were then clubbed with the physicochemical descriptors and subjected to feature selection using the tool Best Subset Selection v2.1 available from https://dtclab.webs.com/software-tools. The final set of selected descriptors was used to develop multiple linear regression-based q-RASAR models, which were validated using stringent criteria as per the OECD guidelines. Finally, a random forest model was also developed with the selected descriptors. The final machine learning model can efficiently predict the cytotoxicity of TiO2-based multi-component nanomaterials, surpassing previously reported models in prediction quality. β-Ga2O3 Used as a Saturable Absorber to Realize Passively Q-Switched Laser Output Baizhong Li, Qiudi Chen, Peixiong Zhang, Ruifeng Tian, Lu Zhang, Qinglin Sai, Bin Wang, Mingyan Pan, Youchen Liu, Changtai Xia, Zhenqiang Chen, Hongji Qi Subject: Materials Science, Other Keywords: β-Ga2O3 crystal; optical floating zone; saturable absorber; Q-switch β-Ga2O3 crystals have attracted great attention in the fields of photonics and photoelectronics because of their ultra-wide band gap and high thermal conductivity. Here, a pure β-Ga2O3 crystal was successfully grown by the optical floating zone (OFZ) method and used as a saturable absorber to realize a passively Q-switched all-solid-state 1 μm laser for the first time. By placing the as-grown β-Ga2O3 crystal into the resonator of an Nd:GYAP solid-state laser, Q-switched pulses at a center wavelength of 1080.4 nm are generated with an output coupling of 10%. The maximum output power is 191.5 mW, the shortest pulse width is 606.54 ns, and the maximum repetition frequency is 344.06 kHz. The maximum pulse energy and peak power are 0.567 μJ and 0.93 W, respectively. Our experimental results show that the β-Ga2O3 crystal has great potential for the development of all-solid-state 1 μm pulsed lasers. She Thinks in English, but She Wants in Mandarin: Differences in Singaporean Bilingual English-Mandarin Maternal Mental-state-talk Michelle Cheng, Peipei Setoh, Marc H. Bornstein, Gianluca Esposito Subject: Behavioral Sciences, Developmental Psychology Keywords: bilingualism; mental-state-talk; socialization Chinese-speaking parents are argued to use less cognitive mental-state-talk than their English-speaking counterparts due to their goals in socializing their children to follow an interdependence script. To extend this research, we investigated bilingual Mandarin-English Singaporean mothers who associate different functions with each language as prescribed by their government: English for school and Mandarin for in-group contexts. English and Mandarin maternal mental-state-talk from bilingual Mandarin-English mothers with their toddlers was examined. Mothers produced more cognitive terms in English than in Mandarin and more desire terms in Mandarin than in English. We show that mental-state-talk differs between bilingual parents' languages, suggesting that mothers adjust their mental-state-talk to reflect each language's function.
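As an aside to the q-RASAR entry above (which clubs read-across similarity and error descriptors with physicochemical descriptors before MLR and random forest modelling), a purely illustrative sketch of such a pipeline is given below. The data, the nearest-neighbour similarity features and their names are hypothetical stand-ins for the DTC-Lab tools cited in the abstract; only the scikit-learn estimators are actual library calls.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

def rasar_descriptors(X_train, y_train, X_query, k=5):
    """Toy similarity/error descriptors in the spirit of read-across:
    for each query compound, use its k nearest training neighbours to build
    a similarity-weighted prediction, the spread of neighbour activities,
    and a crude mean similarity."""
    feats = []
    for x in X_query:
        d = np.linalg.norm(X_train - x, axis=1)
        order = np.argsort(d)
        order = order[d[order] > 1e-12][:k]              # drop an exact self-match
        w = 1.0 / (d[order] + 1e-6)
        ra_pred = np.average(y_train[order], weights=w)  # read-across prediction
        sd_act = y_train[order].std()                    # neighbour activity spread
        mean_sim = (1.0 / (1.0 + d[order])).mean()       # mean similarity
        feats.append([ra_pred, sd_act, mean_sim])
    return np.array(feats)

# Hypothetical data: rows = nanomaterials, columns = physicochemical descriptors
rng = np.random.default_rng(7)
X_tr, X_te = rng.normal(size=(22, 6)), rng.normal(size=(7, 6))
y_tr = rng.normal(size=22)

# "Club" the RASAR descriptors with the original descriptors, then model
Xr_tr = np.hstack([X_tr, rasar_descriptors(X_tr, y_tr, X_tr)])
Xr_te = np.hstack([X_te, rasar_descriptors(X_tr, y_tr, X_te)])

mlr = LinearRegression().fit(Xr_tr, y_tr)   # interpretable MLR q-RASAR-style model
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xr_tr, y_tr)
print(mlr.predict(Xr_te), rf.predict(Xr_te))
```

In the actual workflow the descriptor subset would first be reduced by best-subset selection and the models validated against OECD criteria, steps omitted here for brevity.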
The Development of Russian Church Architecture in the 1990s-2017: The State and Prospects Ershov Bogdan Anatolievich, Ashmarov Igor Anatol'evich, Danilchenko Sergey Leonidovich Subject: Arts & Humanities, History Keywords: church; art; priest; state; society The article examines church architecture in modern Russia. The historical processes of the development of church architecture are analyzed and systematized not only from the point of view of formal stylistics but also of global significance. For this purpose, for the first time, a wide range of sources containing information on the sacred component of church art and on the monuments of temple architecture was studied. At the same time, many fragments of sources were translated into English for the first time. The article uses historical and retrospective research methods that allowed us to study the theoretical legacy of the modern period in the history of Russia and, at the same time, to generalize the place of Russian church architecture in the general context of European architectural development. Category Algebras and States on Categories Hayato Saigo Subject: Mathematics & Computer Science, General Mathematics Keywords: Category; Algebra; State; Category Algebra; State on Category; Noncommutative Probability; Quantum Probability; GNS representation The purpose of this paper is to build a new bridge between category theory and a generalized probability theory known as noncommutative probability or quantum probability, which originated as a mathematical framework for quantum theory, in terms of states as linear functionals defined on category algebras. We clarify that category algebras can be considered as generalized matrix algebras and that the notion of a state on a category, as a linear functional defined on a category algebra, turns out to be a conceptual generalization of probability measures on sets as discrete categories. Moreover, by establishing a generalization of the famous GNS (Gelfand-Naimark-Segal) construction, we obtain a representation of category algebras of †-categories on certain generalized Hilbert spaces which we call semi-Hilbert modules over rigs. Many-Electron QED with Redefined Vacuum Approach Romain N. Soguel, Andrey V. Volotka, Dmitry A. Glazov, Stephan Fritzsche Subject: Physical Sciences, Acoustics Keywords: Bound-state QED; Lamb shift; relativistic atomic theory; vacuum redefinition; ground state redefinition; gauge invariance The redefined vacuum approach, which is frequently employed in many-body perturbation theory, has proved to be a powerful tool for formula derivation. Here, we elaborate this approach within bound-state QED perturbation theory. In addition to the general formulation, we consider the particular example of a single-particle (electron or vacancy) excitation with respect to the redefined vacuum. Starting with simple one-electron QED diagrams, we deduce first- and second-order many-electron contributions: screened self-energy, screened vacuum polarization, one-photon exchange, and two-photon exchange. The redefined vacuum approach provides a straightforward and streamlined derivation and facilitates its application to any electronic configuration. Moreover, based on the gauge invariance of the one-electron diagrams, we can identify various gauge-invariant subsets within the derived many-electron QED contributions. Stochastic Behavior of a Two-Unit Parallel System with Dissimilar Units and Optional Vacations under Poisson Shocks Mohamed S. El-Sherbeny, Zienab M.
Hussien Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Mean time to failure; Poisson shock; Steady-state availability; Steady-state frequency; Supplementary variable technique. This article examines the impact of some system parameters on an industrial system composed of two dissimilar parallel units with one repairman. The active unit may fail due to intrinsic factors such as aging or deterioration, or due to external phenomena such as Poisson shocks that occur at various time periods. Whenever the value of a shock is larger than the specified threshold of the active unit, the active unit will fail. The article assumes that the repairman has the right to make one of two decisions at the beginning of system operation: either take a vacation if the two units are working normally, or stay in the system to monitor it until the first system failure. If a failure occurs in either unit during the absence of the repairman, the failed unit will have to wait until the repairman is called back to work. The value of every shock is assumed to be i.i.d. with some known distribution. The repairman's vacation length, the repair time, and the recall time follow arbitrary distributions. Various reliability measures have been calculated by the supplementary variable technique and the Markov vector process theory. Finally, numerical computations and a graphical analysis are given for a particular case to validate the derived indices. Application of Emerging-State Actor Theory: Analysis of Intervention & Containment Policies Using the ISIS Case Timothy Clancy Subject: Social Sciences, Other Keywords: ISIS; ISIL; DAESH; insurgency; conflict; security; non-state actor; emerging-state actor; intervention; policy analysis This paper builds upon a theory of emerging-state actors and seeks to apply that theory in analyzing intervention & containment policies to use against emerging-state actors, using ISIS as the case study. Two baseline scenarios are used for evaluation – one replicating the historical foreign intervention against ISIS and a counter-factual where no foreign intervention occurred. Eleven contemporary military policies were tested against these baselines in isolation, in combination, at different timing windows, and under both hypothetical "best case" conditions and operational constraints. Insights from these tests include the influence of ethnographic envelopes and timing windows. Finally, a policy based on emerging-state actor theory is tested, performing substantially better across primary measures than the other policies or the historical baseline. This is compared against a falsified policy designed to disprove that emerging-state actor theory contributed to the benefits. This paper's contributions are to provide a practical application of system dynamics simulations and systems thinking to current problems, to generate insights into the dynamics of emerging-state actors and intervention strategies, and to demonstrate utility for future application of the underlying simulation in other scenarios involving non-state actor irregular conflict, including terrorism, insurgents, or emerging-state actors.
Quantitative Methods for the Use of ICG in Colorectal Surgery - An Updated Literature Review Sinziana Ionescu Subject: Medicine & Pharmacology, Other Keywords: colorectal; fluorescence; ICG; ICG-NIR; colorectal surgery; intraoperative staining; q-ICG This review looks at the use of indocyanine green (ICG) in colorectal surgery from a quantitative point of view. The main benefits of the ICG technique in colorectal surgery can be summarized as follows: a) in the realization of intraoperative fluorescence angiography as an adjuvant in the process of anastomosis, b) in the fluorescence-guided detection of lymph node metastases in colorectal cancer and also the sentinel lymph node technique, which was proven better than formal methods in some studies, c) in marking with positive fluorescence a liver nodule as small as "just" 200 tumor cells, d) in offering assistance in the diagnosis of a fistula, e) in the possibility of being used for tumor tattooing as well, and f) in providing help in maintaining a clean surgical field and preventing wound infection in abdominoperineal resection. Apart from the qualitative intraoperative use of ICG, the method can be employed in association with quantitative methods, such as maximum intensity, relative maximum intensity, and various parameters of the inflow (time-to-peak, slope, and t1/2max), this latter category being more significantly associated with anastomotic leakage. Reinforcement Learning for Electric Vehicle Charging using Dueling Neural Networks Gargya Gokhale, Bert Claessens, Chris Develder Subject: Engineering, Electrical & Electronic Engineering Keywords: Electric Vehicles; batch reinforcement learning; dueling neural networks; fitted Q-iteration We consider the problem of coordinating the charging of an entire fleet of electric vehicles (EVs), using a model-free approach, i.e. purely data-driven reinforcement learning (RL). The objective of the RL-based control is to optimize charging actions while fulfilling all EV charging constraints (e.g. timely completion of the charging). In particular, we focus on batch-mode learning and adopt fitted Q-iteration (FQI). A core component in FQI is approximating the Q-function using a regression technique, from which the policy is derived. Recently, a dueling neural network architecture was proposed and shown to lead to better policy evaluation in the presence of many similar-valued actions, as applied in a computer game context. The main research contributions of the current paper are that (i) we develop a dueling neural network approach for the setting of joint coordination of an entire EV fleet, and (ii) we evaluate its performance and compare it to an all-knowing benchmark and an FQI approach using the EXTRA trees regression technique, a popular approach currently discussed in EV-related works. We present a case study where RL agents are trained with an epsilon-greedy approach for different objectives: (a) cost minimization, and (b) maximization of self-consumption of local renewable energy sources. Our results indicate that RL agents achieve significant cost reductions (70--80%) compared to a business-as-usual scenario without smart charging. Comparing the dueling neural network regression to EXTRA trees indicates that for our case study's EV fleet parameters and training scenario, the EXTRA trees-based agents achieve higher performance in terms of both lower costs (or higher self-consumption) and stronger robustness, i.e. less variation among trained agents.
This suggests that adopting dueling neural networks in this EV setting is not particularly beneficial, in contrast to the Atari game context where this idea originated. ADCK2 Haploinsufficiency Reduces Mitochondrial Lipid Oxidation and Causes Myopathy Associated with CoQ Deficiency Luis Vázquez-Fonseca, Jochen Schäfer, Ignacio Navas-Enamorado, Carlos Santos-Ocaña, Juan D. Hernández-Camacho, Ignacio Guerra, María V. Cascajo, Ana Sánchez-Cuesta, Zoltan Horvath, Emilio Siendones, Cristina Jou, Mercedes Casado, Purificación Gutierrez-Rios, Gloria Brea-Calvo, Guillermo López-Lluch, Daniel J.M. Fernández-Ayala, Ana B. Cortés-Rodríguez, Juan C. Rodríguez-Aguilera, Cristiane Matté, Antonia Ribes, Sandra Y. Prieto-Soler, Eduardo Dominguez-del-Toro, Andrea di Francesco, Miguel A. Aon, Michel Bernier, Leonardo Salviati, Rafael Artuch, Rafael de Cabo, Sandra Jackson, Plácido Navas Subject: Medicine & Pharmacology, Other Keywords: coenzyme Q deficiency; mitochondrial disease; respiratory chain; fatty acids; myopathy; ADCK2 Fatty acids and glucose are the main bioenergetic substrates in mammals that are alternatively used during the transition between fasting and feeding. Impairment of mitochondrial fatty acid oxidation causes mitochondrial myopathy leading to decreased physical performance. Here, we report that haploinsufficiency of ADCK2, a member of the aarF domain-containing mitochondrial protein kinase family, in humans is associated with liver dysfunction and severe mitochondrial myopathy with lipid droplets in skeletal muscle. In order to better understand the etiology of this rare disorder, we generated a heterozygous Adck2 knockout mouse model to perform in vivo and cellular studies using integrated analysis of physiological and omics data (transcriptomics-metabolomics). The data show that Adck2+/- mice exhibit impaired fatty acid oxidation, liver dysfunction, and mitochondrial myopathy in skeletal muscle resulting in lower physical performance. A significant decrease in CoQ biosynthesis was observed, and supplementation with CoQ partially rescued the phenotype in both the human subject and the mouse model. These results indicate that ADCK2 is involved in organismal fatty acid metabolism and in CoQ biosynthesis in skeletal muscle. We propose that patients with isolated myopathies and myopathies involving lipid accumulation be tested for a possible ADCK2 defect, as they are likely to be responsive to CoQ supplementation. Fueling Global Energy Finance: The Emergence of China in Global Energy Investment Sucharita Gopal, Joshua Pitts, Zhongshu Li, Kevin Gallagher, William Kring Subject: Social Sciences, Other Keywords: FDI, M&A, energy, supply chain, inbound investment, outbound investment, BRI. Global financial investments in energy production and consumption are significant since all aspects of a country's economic activity and development require energy resources. In this paper, we assess the investment trends in the global energy sector before, during and after the financial crisis of 2008 using two data sources: (1) the Dealogic database, providing cross-border mergers and acquisitions (M&As), and (2) the fDi Intelligence fDi Markets database, providing greenfield (GF) foreign direct investments (FDIs). We highlight the changing role of China and compare its M&A and GF FDI activities to those of the United States, Germany, the UK, Japan and others during this period.
We analyze the investments along each segment of the energy supply chain of these countries to highlight the geographical origin and destination, sectoral distribution, and cross-border M&A and GF FDI activities. Our paper shows that while energy accounts for nearly 25% of all GF FDI, it only accounts for 4.82% of total M&A FDI activity in the period 1996-2016. China's outbound FDI in the energy sector started its ascent around the time of the global recession and has accelerated in the post-recession phase. In the energy sector, the development of China's outbound cross-border M&As is similar to that of the USA or the UK, being located mostly in developed Western countries, while their outbound GF investments are spread across many countries around the world. Also, China's outbound energy M&As are concentrated in certain segments (extraction and electricity generation), while their GF investments cover all segments of the energy supply chain. A Note on the (p, q)-Hermite Polynomials Ugur Duran, Mehmet Acikgoz, Ayhan Esi, Serkan Araci Subject: Keywords: (p, q)-calculus; hermite polynomials; bernstein polynomials; generating function; hyperbolic functions In this paper, we introduce a new generalization of the Hermite polynomials via a (p, q)-exponential generating function and investigate several properties and relations of the mentioned polynomials, including a derivative property, an explicit formula, a recurrence relation, and an integral representation. We also define a (p, q)-analogue of the Bernstein polynomials and obtain some of their formulas. We then provide some (p, q)-hyperbolic representations of the (p, q)-Bernstein polynomials. In addition, we obtain a correlation between (p, q)-Hermite polynomials and (p, q)-Bernstein polynomials. Active Power Dispatch Optimization for a Grid-Connected Microgrid with Uncertain Multi-Type Loads Kai Lv, Hao Tang, Yijin Li, Xin Li Subject: Engineering, Electrical & Electronic Engineering Keywords: multi-type loads; active power dispatch optimization; simulated-annealing Q-learning An active power dispatch method for a microgrid (MG) with multi-type loads, renewable energy sources (RESs) and distributed energy storage devices (DESDs) is the focus of this paper. The MG operates in a grid-connected mode, and distributed power sources contribute to serving the load demands. The outputs of multiple DESDs are controlled to optimize the active power dispatch. Our goal with the optimization is to reduce the economic cost under a time-of-use (TOU) price, and to adjust the excessively high or low load rate of distributed transformers (DTs) caused by the peak-valley demand and load uncertainties. To simulate a practical environment, the stochastic characteristics of multi-type loads are formulated. The transition matrix of the system state is provided. Then, a finite-horizon Markov decision process (FHMDP) model is established to describe the dispatch optimization problem. A learning-based technique is adopted to search for the optimal joint control policy of multiple DESDs. Finally, simulation experiments are performed to validate the effectiveness of the proposed method, and a fuzzification analysis of the method is presented. Organizer Operator as a Space Generator Yehuda Roth Subject: Physical Sciences, General & Theoretical Physics Keywords: interpretation; state construction; entropy reduction; observer The validity of our universe as a three-dimensional space (3+1 in relativity) is considered a fundamental fact in physics.
In this study, we show that our observed world is thus the output of a prior fundamental operator referred to as an organizer. The organizer is an expansion of projecting operators. It is shown that identically weighted projecting operators, which are associated with identical particles, generate subspaces of entangled states, whereas groups of unequally weighted coefficients are responsible for finite-size subspaces that are associated with non-identical particles. Considering 3D-subspaces as evidence of the coefficients' arrangement in our universe, we implement our formalism to describe the relevant vectors (location, momentum, and force) within each 3D subspace. By implementing the Heisenberg relation, we derive both the classical and quantum expressions for the laws of motion. What is Life? The Observer Prescriptive Subject: Physical Sciences, Other Keywords: Interpretation; State construction; Entropy reduction; Observer Quantum mechanics introduces the concept of an observer who selects a measuring device and reads the outputs. This measurement process is irreversible. Lately, scholars working on quantum collapse phenomena have presented a quantum-like formalism describing the measurement results as an interpretation of the measured object. Note that an observer must read the interpretation results after the interpretation process. Therefore, we propose that the definition of the concept of life should be expanded based on the following concept: a living system decreases entropy, measured results are interpreted, and an internal observer reads the commentary. Novel State-Space Realization Generalized from Turbine Blade Modeling Andrew van Paridon, Timothy Sands Subject: Engineering, Mechanical Engineering Keywords: transfer function; state-space; realization; conversion Mathematical models across the applied sciences often utilize a standard methodological representation called a state variable formulation, more commonly referred to as state space form. Recent research in unmanned underwater vehicle motor turbine blade thermal modeling for fatigue-life is generalized here, permitting the proposed novel state space form to be applied to electrodynamics, motion mechanics, and many other disciplines. Proposed here is a very compact form inherently representing time variance, with a convenient presentation of dynamic variables applicable to all proper transfer functions, where all the distinct, real poles, zeros and gain of the transfer function appear as explicit components in the state space. The resulting manifestation simplifies utilization of state space methods broadly across the applied sciences. Relationships between Diffusion Tensor Imaging and Resting State Functional Connectivity in Patients with Schizophrenia and Healthy Controls: A Preliminary Study Matthew J. Hoptman, Umit Tural, Kelvin O. Lim, Daniel C. Javitt, Lauren E. Oberlin Subject: Medicine & Pharmacology, Psychiatry & Mental Health Studies Keywords: DTI; resting state; schizophrenia; FATCAT; tractography Schizophrenia is widely seen as a disorder of dysconnectivity. Neuroimaging studies have examined both structural and functional connectivity in the disorder, but these modalities have rarely been integrated directly. We scanned 29 patients with schizophrenia and 25 healthy control subjects and acquired resting state fMRI and diffusion tensor imaging. The Functional and Tractographic Connectivity Analysis Toolbox (FATCAT) was used to estimate functional and structural connectivity of the default mode network.
Correlations between modalities were investigated, and multimodal connectivity scores (MCS) were created using principal components analysis. Nine of 28 possible region pairs showed consistent (>80%) tracts across participants. Correlations between modalities were found among those with schizophrenia for the prefrontal cortex, posterior cingulate, and lateral temporal lobes with frontal and parietal regions, consistent with frontotemporoparietal network involvement in the disorder. In patients, MCS values correlated with several aspects of the Positive and Negative Syndrome Scale, positively with those involving inwardly directed psychopathology, and negatively with those involving external psychopathology. In this preliminary sample, we found FATCAT to be a useful toolbox to directly integrate and examine connectivity between imaging modalities. A consideration of conjoint structural and functional connectivity can provide important information about the network mechanisms of schizophrenia. The Quasi Steady State Cosmology in a Radiation Dominated Phase Raj Bali Subject: Physical Sciences, Astronomy & Astrophysics Keywords: Quasi Steady State Cosmology, Radiation Phase Analytical solutions for the radiation-dominated phase of Quasi Steady State Cosmology in Friedmann-Robertson-Walker models are obtained. We find that the matter density is positive in all the cases (k = 0, -1, 1). The nature of the Hubble parameter (H) in [0,2] is discussed. The deceleration parameter (q) is marginally less than zero, indicating an accelerating universe. The scale factor (S) is shown graphically as a function of time. The model represents an oscillating universe between the above-mentioned limits. Because of the bounce in QSSC, the maximum density phase is still matter dominated. The models are singularity-free. We also find that the models have an event horizon, i.e. no observers beyond the proper distance rH can communicate with each other in FRW models for the radiation-dominated phase in the framework of QSSC. The FRW models are special classes of Bianchi type I, V, IX space-times with zero, negative and positive curvatures respectively. Initially, i.e. at  = 0, the model represents a steady state model. We have tried to show how a good fit to the observations can be obtained in the framework of QSSC during the radiation-dominated phase. A simple Spectral Observer Lizeth Torres, Javier Jiménez-Cabas, José Francisco Gómez-Aguilar, Pablo Pérez-Alcazar Subject: Engineering, Control & Systems Engineering Keywords: Signal Processing; Fourier Series; State Observer; Short Time Fourier Transform; Time-Frequency Analysis The principal aim of a spectral observer is twofold: the reconstruction of a time signal via state estimation and the decomposition of such a signal into the frequencies that make it up. A spectral observer can be catalogued as an online algorithm for time-frequency analysis because it is a method that can compute the Fourier transform (FT) of a signal on the fly, without having the entire signal available from the start. In this regard, this paper presents a novel spectral observer with an adjustable constant gain for reconstructing a given signal by means of the recursive identification of the coefficients of a Fourier series. The reconstruction or estimation of a signal in the context of this work means finding the coefficients of a linear combination of sines and cosines that fits the signal such that it can be reproduced.
The design procedure of the spectral observer is presented along with the following applications: (1) the reconstruction of a simple periodic signal, (2) the approximation of both a square and a triangular signal, (3) edge detection in signals using the Fourier coefficients, (4) the fitting of historical Bitcoin market data from 2014-12-01 to 2018-01-08, and (5) the estimation of an input force acting upon a Duffing oscillator. To round out this paper, we present a detailed discussion of the results of the applications as well as a comparative analysis of the proposed spectral observer vis-à-vis the Short Time Fourier Transform (STFT), which is a well-known method for time-frequency analysis. Asymptotics and Confluence for a Singular Nonlinear Q-Difference-Differential Cauchy Problem Stephane Malek Subject: Mathematics & Computer Science, Analysis Keywords: asymptotic expansion; confluence; formal power series; partial differential equation; q-difference equation We examine a family of nonlinear q-difference-differential Cauchy problems obtained as a coupling of linear Cauchy problems containing dilation q-difference operators, recently investigated by the author, and quasi-linear Kowalevski-type problems that involve contraction q-difference operators. We build up local holomorphic solutions to these problems. Two aspects of these solutions are explored. One facet deals with asymptotic expansions in the complex time variable, for which a mixed-type Gevrey and q-Gevrey structure is exhibited. The other feature concerns the problem of confluence of these solutions as q tends to 1. A Review on Rhenium Disulfide: Synthesis Approaches, Optical Properties, and Applications in Pulsed Lasers Mahmoud Muhanad Fadhel, Norazida Ali, Haroon Rashid, Nurfarhana Mohamad Sapiee, Abdulwahhab Essa Hamzah, Mohd Saiful Dzulkefly Zan, Norazreen Abd Aziz, Norhana Arsad Subject: Engineering, Electrical & Electronic Engineering Keywords: saturable absorbers; Rhenium disulfide; pulsed lasers; mode-locking; Q-switching; 2D TMD Rhenium disulfide (ReS2) has evolved as a novel 2D transition-metal dichalcogenide (TMD) material with promising applications in optoelectronics and photonics because of its distinctive anisotropic attributes. In this review, we focus on formulating saturable absorbers (SAs) based on ReS2 to produce Q-switched and mode-locked pulsed lasers at diverse operating wavelengths such as 1 μm, 1.5 μm, 2 μm, and 3 μm. We outline ReS2 synthesis techniques and integration platforms concerning solid-state and fiber-type lasers. We discuss the laser performance based on the SAs' attributes. Lastly, we draw conclusions and provide an outlook, recommending additional improvements for SA devices so as to advance the domain of ultrafast photonic technology. Asymptotics and Confluence for Some Linear q-Difference-Differential Cauchy Problem Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: asymptotic expansion; confluence; formal power series; partial differential equation; q-difference equation A linear Cauchy problem with polynomial coefficients which combines q-difference operators for q>1 and differential operators of irregular type is examined. A finite set of sectorial holomorphic solutions w.r.t. the complex time is constructed by means of classical Laplace transforms. These functions share a common asymptotic expansion in the time variable, which turns out to carry a double-layer structure that couples q-Gevrey and Gevrey bounds.
In the last part of the work, the problem of confluence of these solutions as q tends to 1 is investigated. A New Generalization of Fibonacci and Lucas Type Sedenions Can Kızılateş, Selihan Kırlak Subject: Mathematics & Computer Science, General Mathematics Keywords: Sedenion algebra; Horadam number; q-integer; Binet-Like formula; exponential generating function In this paper, by using the q-integer, we introduce a new generalization of Fibonacci and Lucas sedenions called q-Fibonacci and q-Lucas sedenions. We present some fundamental properties of these types of sedenions, such as Binet formulas, exponential generating functions, summation formulas, Catalan's identity, Cassini's identity and d'Ocagne's identity. Comparison and Contrast of Islamic Water Management Principles with International Water Law Principles: A Case Study of Helmand River Basin Najibullah Loodin, Aaron Wolf Subject: Earth Sciences, Environmental Sciences Keywords: Islamic Water Management Principles (IWMP); International Water Law Principles (IWLP); Helmand River Basin; Upstream State; Downstream State Considering the negative impacts of climate change along with the rapid increase in population in Islamic-dominated states, e.g., the Middle East, water tension among upstream and downstream states is increasing. Despite the importance of water management in Islamic culture, the role of religion has been under-valued and under-emphasized by scholars. This paper seeks to compare and contrast Islamic water management principles (IWMP) with international water law principles (IWLP). The findings from this analysis show not only that IWMP are in conformity with IWLP, but that in many cases IWMP can be more effective. For instance, where international water accords between riparian states of a shared river basin are poorly developed and lack enforcement mechanisms under IWLP, upstream states can abuse their geographical locations, depriving downstream states. In contrast, IWMPs stress the equitable and reasonable use of water resources among upstream and downstream users of a shared watercourse. Moreover, although IWLPs emphasize the conservation and preservation of ecosystems and the environment at the basin level, the basin states, especially those upstream, can cause significant harm to the ecosystems. On the other hand, Islam, as a religion of peace, has placed much emphasis on the preservation of nature. For example, the verse, ".... And waste not by excess, for Allah loves not the wasters" [Quran, 7:31], illustrates the importance of the sustainable use of water and the environment. It is argued that if Islamic Water Management Principles are incorporated into the management instruments of Islamic states, the issue of equitable and sustainable use of water among Muslim-dominated riparian states (e.g., Iran, Afghanistan, etc.) will be solved. Theory of an Emerging-State Actor: the ISIS Case Subject: Social Sciences, Other Keywords: ISIS, ISIL, DAESH, insurgency, conflict, security, non-state actor, emerging-state actor, combat simulator, geospatial, national security. This paper seeks to explain the rapid growth of the Islamic State of Iraq & Syria (ISIS) and to approach the question of "what is" the Islamic State. The paper offers several contributions. First is the proposal of a dynamic hypothesis that ISIS is an emerging-state actor and differs notably from traditional non-state actors and insurgencies. The theory consists of both a causal loop diagram and key propositions.
A detailed system dynamics simulation (E-SAM) was constructed to test the theory. The propositions of emerging-state actor theory are constructed as synthetic experiments within the simulation and confirm evidence of emerging-state actor behavior. E-SAM's novelty is its combination of combat simulation with endogenous geospatial feedback, ethnographic behavior in choosing sides in conflict, and detailed internal simulation of key actor mechanisms such as financing, recruiting and governance. E-SAM can be loaded with scenarios to simulate non-state actors in different geospatial domains: ISIS in Libya, Boko Haram in Nigeria, the Taliban in Afghanistan, and even expatriated ISIS fighters returning to pursue new conflicts, such as in Indonesia. A Vacuum of Quantum Gravity That is Ether. Sergey L. Cherkas, Vladimir L. Kalashnikov Subject: Physical Sciences, Astronomy & Astrophysics Keywords: quantum gravity; vacuum state; vacuum energy; eicheon The fact that quantum gravity does not admit a covariant vacuum state has far-reaching consequences for all of physics. It points out that space could not be empty, and we return to the notion of an ether. Such a concept requires a preferred reference frame for, e.g., universe expansion and black holes. Here, we intend to discuss vacuum and quantum gravity from three essential viewpoints: universe expansion, the existence of black holes, and quantum decoherence. The Awareness of Millennial Generation during Covid-19 Pandemic towards State Defense Character Pingky Wicahyanti, Nur Rahim, Moses Pandin Subject: Keywords: state defense, millennial, covid-19 pandemic, character Covid-19 has become a non-natural national disaster that affects the national resilience of the Indonesian state. In maintaining national resilience, the implementation of state defense is needed. This research aims to describe the awareness of millennials regarding defending the state during the current Covid-19 pandemic in Indonesia, using a quantitative approach with descriptive analysis methods and survey techniques. The results of this study indicate that most millennials are aware of the obligation to defend the state amid the current Covid-19 pandemic, namely by implementing health protocols such as preparing cleaning equipment, practicing social distancing, and working from home. However, there are still millennials who cannot comply with them for various reasons. Based on the results of this study, it is concluded that most millennials are aware of the obligation to defend the state during a pandemic by complying with health protocols as a form of state defense character. A Graph-Theoretic Approach to Understanding Emergent Behavior in Physical Systems Alyssa Adams Subject: Physical Sciences, Acoustics Keywords: Emergence; Ising Model; Information; Computation; State Space The exact dynamics of emergence remains one of the most prominent outstanding questions for the field of complexity science. I first discuss several perspectives on emergence in various contexts, then offer a different perspective on understanding emergence in a graph-theoretic representation. From the discussion, an observer's choice of state space seems to affect whether that observer can detect emergent behavior. To test these ideas, I analyze the dynamics of all possible spatial state spaces near the critical temperature in an Ising model. As a result, state space topologies that appear more deterministic flip more bits than topologies that appear more random, which is contrary to our intuitions about randomness.
In addition, the size of different state spaces constrains a system's ability to explore various states within the same time frame. These results are important to understanding emergent phenomena in biological systems, which are layered with various state spaces and observational perspectives. Generation and Application of Nested Entanglement in Matryoshka Quantum Resource-States Mrittunjoy Guha Majumdar Subject: Keywords: Quantum Computation; Multipartite Entanglement; Quantum State Sharing Multipartite entanglement is a resource for application in disparate protocols of computing, communication and cryptography. Nested entanglement provides resource-states for quantum information processing. In this paper, Matryoshka quantum resource-states, which contain nested entanglement patterns, have been studied. A novel scheme for the generation of such quantum states has been proposed using an anisotropic XY spin-spin interaction-based model. The application of the Matryoshka GHZ-Bell states for n-qubit teleportation is reviewed and an extension to more general Matryoshka ExhS-Bell states is posited. An example of Matryoshka ExhS-Bell states is given in the form of the genuinely entangled seven-qubit Xin-Wei Zha state. The generation, characterisation and application of this seven-qubit resource state in theoretical schemes for quantum teleportation of arbitrary one-, two- and three-qubit states, bidirectional teleportation of arbitrary two-qubit states, and probabilistic circular controlled teleportation are presented. Linear, Bidirectional and Circular Controlled Quantum Teleportation and Quantum State Sharing using a Seven Qubit Genuinely Entangled State Subject: Keywords: Quantum Computation, Multipartite Entanglement, Quantum State Sharing Multipartite entanglement is a resource for application in disparate protocols of computing, communication and cryptography. In this paper, the generation, characterisation and application of a genuinely entangled seven-qubit resource state are studied. Theoretical schemes for quantum teleportation of arbitrary one-, two- and three-qubit states, bidirectional teleportation of arbitrary two-qubit states, and probabilistic circular controlled teleportation, as well as three schemes for undertaking tripartite quantum state sharing, are presented. Equation of State of Four- and Five-Dimensional Hard-Hypersphere Mixtures Mariano López de Haro, Andrés Santos, Santos B. Yuste Subject: Physical Sciences, Condensed Matter Physics Keywords: equation of state; hard hyperspheres; fluid mixtures New proposals for the equation of state of four- and five-dimensional hard-hypersphere mixtures in terms of the equation of state of the corresponding monocomponent hard-hypersphere fluid are introduced. Such proposals (which are constructed so as to yield the exact third virial coefficient) extend, on the one hand, recent similar formulations for hard-disk and (three-dimensional) hard-sphere mixtures and, on the other hand, two of our previous proposals also linking the mixture equation of state and that of the monocomponent fluid but unable to reproduce the exact third virial coefficient. The old and new proposals are tested by comparison with published molecular dynamics and Monte Carlo simulation results, and their relative merit is evaluated.
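To spell out what "yielding the exact third virial coefficient" means in the hard-hypersphere abstract above, recall the standard virial (density) expansion of the compressibility factor; this is a generic textbook relation, not the specific functional form proposed in that paper:

\[
Z_{\text{mix}} \equiv \frac{p}{\rho k_B T} = 1 + B_2\,\rho + B_3\,\rho^{2} + \mathcal{O}(\rho^{3}),
\]

where $\rho$ is the total number density and the composition-dependent coefficients $B_2$ and $B_3$ are taken as exact inputs for the mixture. A proposed $Z_{\text{mix}}$ "yields the exact third virial coefficient" when its expansion in powers of $\rho$ reproduces this exact $B_3$.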
Materialist Premises in Hobbes and Kropotkin for Antipodean Conclusions: The State of War and the Mutual Aid Francesco Scotognella Subject: Arts & Humanities, Philosophy Keywords: Materialist philosophy; State of nature; Hobbes; Kropotkin A methodological similarity between Thomas Hobbes and Pëtr Kropotkin is the intention to make a theoretical foundation accessible to everyone, in the sense that both are willing to give all people a clear description of reality and a consequent political view. To do so, they use a scientific method, deductive (starting from empirical observations) in the case of Hobbes, inductive-deductive in the case of Kropotkin. Kropotkin underlines the educational value of the scientific method. In this work we want to highlight that, although they both start their argumentations from a materialist ontology, Hobbes and Kropotkin conjecture two completely different states of nature. Hobbes describes the state of nature through the two famous metaphors homo homini lupus (citing Plautus) and bellum omnium contra omnes, while Kropotkin introduced the theory of mutual aid. Both the theory of a state of war by Hobbes and the theory of mutual aid by Kropotkin have been revolutionary. Hobbes was influenced by the scientific revolution initiated by Francis Bacon, one of his mentors, and Galileo Galilei, together with a criticism of the ancient Greek philosophers, in particular Aristotle. Kropotkin was influenced by the ground-breaking writings of Charles Darwin together with a very fruitful Russian scientific environment. We want to stress here that the disenchanted view of human nature in Hobbes, a state of war due to the fact that everyone has rights over everything, helps him legitimize sovereignty, while the positive view of human nature in Kropotkin, a spontaneous mutual aid among people in a community, helps him legitimize anarchy. Therefore, the fascinating scientific methods of the two materialists Hobbes and Kropotkin for structuring a solid political theory cannot neglect their different views on human nature, which are due to their historical contexts. Beyond Enzyme Production: Solid State Fermentation (SSF) as an Alternative to Produce Antioxidant Polysaccharides Janet Alejandra Gutierrez-Uribe, Ramón Verduzco-Oliva Subject: Life Sciences, Biotechnology Keywords: solid state fermentation; phenolic compounds; enzymes; polysaccharides Solid state fermentation (SSF) is considered more sustainable than traditional fermentation because it uses low amounts of water and transforms agro-industrial residues into value-added products. Enzymes, biofuels, nanoparticles and bioactive compounds can be obtained from SSF. The key factor in SSF processes is the choice of microorganisms and their substrates. Many fungal species can be used, and they are mainly chosen due to their lower requirements for water, O2 and light. Residues rich in soluble and insoluble fiber are utilized by lignocellulolytic fungi because they have the enzymes that break down the hard fiber structure (ligninases, cellulases or hemicellulases). During the hydrolysis of lignin, some phenolic compounds are released, but fungi also synthesize compounds such as mycophenolic acid, dicerandrol C, phenylacetates, anthraquinones, benzofurans and alkenyl phenols that have beneficial health effects such as antitumoral, antimicrobial, antioxidant and antiviral activities. Another important group of compounds synthesized by fungi during fermentation are polysaccharides, which also have important health-promoting properties.
Fungal biofermentation has also proved to be a process that can release high contents of phenolics and increase the bioactivity of these compounds. The Fourth State of Matter Roumen Tsekov Subject: Physical Sciences, General & Theoretical Physics Keywords: state of matter, quantum entanglement, Bohmian mechanics It is shown that quantum entanglement is the only force able to maintain the fourth state of matter, possessing a fixed shape at an arbitrary volume. Accordingly, a new relativistic Schrödinger equation is derived and transformed further into relativistic Bohmian mechanics via the Madelung transformation. Three dissipative models are proposed as extensions of the quantum relativistic Hamilton-Jacobi equation. The corresponding dispersion relations are obtained. ADN_Modelica: An Open Source Sequence-Phase Coupled Frame Program for Active Distribution Network Steady State Analysis Yuntao Ju, Fuchao Ge, Yi Lin, Jing Wang Subject: Engineering, Electrical & Electronic Engineering Keywords: steady state analysis; Modelica; active distribution network Open source software such as OpenDSS has been of great help to distribution network researchers and educators. With the high penetration of distributed renewable energy resources into distribution networks, traditional distribution steady-state analysis software such as OpenDSS faces difficulty in handling distributed generators. Three-phase distributed generators are often modeled in the sequence frame, while unbalanced distribution networks are usually modeled in the phase frame. Therefore, a load flow in a sequence-phase coupled frame is proposed to handle models described in both frames. Voltage-controlled DGs, which are difficult to cope with in OpenDSS, are handled in the proposed program. The steady-state analysis platform is programmed in the open source Modelica language, and the main aim of this paper is to introduce an open source platform for active distribution network steady-state analysis, including load flow and short-circuit analysis, which can be easily adopted and improved by other educators and researchers. Stability Control of Double Inverted Pendulum on a Cart Using Full State Feedback with H infinity and H 2 Controllers Mustefa Jibril, Messay Tadese, Reta Degefa Subject: Engineering, Control & Systems Engineering Keywords: Double inverted pendulum on a cart, Full state feedback H infinity controller, Full state feedback H 2 controller In this paper, full state feedback controllers for a double inverted pendulum on a cart (DIPC) are designed and compared. Modeling is based on Euler-Lagrange equations derived by specifying a Lagrangian, the difference between the kinetic and potential energy of the DIPC system. Full state feedback control with H infinity and H 2 is addressed. Two approaches are tested: the open-loop impulse response, and the double inverted pendulum on a cart with full state feedback H infinity and H 2 controllers. Simulations reveal superior performance of the double inverted pendulum on a cart with the full state feedback H infinity controller. Phytochemical Constituents, Immunological Enhancement, and Anti-Porcine Parvovirus Activities of Propolis Flavonoid Isolated from Propolis Xia Ma, Zhenhuan Guo, Zhiqiang Zhang, Xianghui Li, Yizhou Lv, Zhiqiang Shen, Li Zhao, Yonglu Liu Subject: Biology, Anatomy & Morphology Keywords: Propolis Flavonoid; UPLC-Q/TOF-MS/MS; immunological enhancement; Ferulic acid; Anti-PPV Propolis is widely used in health preservation and disease healing; it contains many ingredients.
Previous studies have revealed that propolis has a wide range of efficacy, including antiviral, immune-enhancing and anti-inflammatory activities, but its antiviral components and underlying mechanism of action remain unknown. In this study, we investigated the chemical composition, anti-PPV activity and immunological enhancement of Propolis Flavonoid (PF). The chemical composition of PF was determined by UPLC-Q/TOF-MS/MS analysis. The presence of 26 major components was identified and characterized in negative ionization mode. To evaluate the effects of PF used as an adjuvant on the immune response to porcine parvovirus (PPV), thirty Landrace-Yorkshire hybrid sows were randomly assigned to 3 groups, and the sows in the adjuvant groups were intramuscularly injected with PPV vaccine containing 2.0 mL of PF adjuvant (PA) or oil-emulsion adjuvant (OA), respectively. After that, serum hemagglutination inhibition antibody titers, IgM and IgG subclasses, peripheral lymphocyte proliferation activity, and concentrations of cytokines were measured. Results indicated an enhancing effect of PA on IgM, IL-2, IL-4, IFN-γ and the IgG subclass responses. These findings suggested that PA could significantly enhance the immune responses. Furthermore, we screened the chemical components for anti-PPV efficacy; ferulic acid showed an excellent anti-PPV effect. FQ-AGO: Fuzzy Logic Q-learning Based Asymmetric Link Aware and Geographic Opportunistic Routing Scheme for MANETs Ali Alshehri, Abdel-Hameed Badawy, Hong Huang Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Fuzzy logic; Q-learning; routing protocol; mobile ad hoc network (MANETs); opportunistic network The proliferation of mobile and IoT devices, coupled with the advances in the wireless communication capabilities of these devices, has urged the need for novel communication paradigms for such heterogeneous hybrid networks. Researchers have proposed opportunistic routing as a means to leverage the potential offered by such heterogeneous networks. While several proposals for opportunistic routing protocols exist, only a few have explored fuzzy logic to evaluate wireless link status in the network in order to construct stable and faster paths towards the destinations. We propose FQ-AGO, a novel Fuzzy Logic Q-learning Based Asymmetric Link Aware and Geographic Opportunistic Routing scheme that leverages the presence of long-range transmission links to assign forwarding candidates towards a given destination. The proposed routing scheme utilizes fuzzy logic to evaluate whether a wireless link is useful or not by capturing multiple network metrics: the available bandwidth, link quality, node transmission power, and distance progress. Based on the fuzzy logic evaluation, the proposed routing scheme employs a Q-learning algorithm to select the best candidate set toward the destination. We implement FQ-AGO in the NS-3 simulator and compare the performance of the proposed routing scheme with three other relevant protocols: AODV, DSDV, and GOR. For precise analysis, we consider various network metrics to compare the performance of the routing protocols. Our simulation results validate our analysis and demonstrate remarkable performance improvements in terms of total network throughput, packet delivery ratio, and end-to-end delay.
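For readers unfamiliar with the learning component named in the FQ-AGO abstract above, here is a minimal Mathematica sketch of the standard tabular Q-learning update that such schemes build on. The state and action labels, the learning rate, the discount factor, and the reward are illustrative placeholders, not the protocol's actual link metrics or implementation.

alpha = 0.1; gamma = 0.9;   (* illustrative learning rate and discount factor *)
Q = <||>;                   (* association mapping {state, action} -> estimated value *)
getQ[s_, a_] := Lookup[Q, Key[{s, a}], 0.];
updateQ[s_, a_, r_, sNext_, actions_List] := Module[{best},
  best = Max[getQ[sNext, #] & /@ actions];                       (* best estimate at the next state *)
  Q[{s, a}] = getQ[s, a] + alpha (r + gamma best - getQ[s, a])];  (* standard temporal-difference update *)
updateQ["nodeA", "forwardToB", 1.0, "nodeB", {"forwardToC", "forwardToD"}];
Q

In FQ-AGO itself the candidate forwarders and rewards come from the fuzzy-logic link evaluation rather than from toy labels like these; the sketch only shows the shape of the value update.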
Method of Accounting for Higher Harmonics in the Calculation of the Electronic Structure of the States of a Linear Molecule Excited by High-Intensity X-Rays Anton Kasprzhitskii, Georgy Lazorenko, Victor Yavna Subject: Physical Sciences, Atomic & Molecular Physics Keywords: one-center method, molecular orbital, higher harmonics, excited state, ionized state, linear molecule, hydrogen fluoride, lithium monofluoride, boron monofluoride The modern development of high-intensity and high-resolution X-ray technology allows detailed studies of the multiphoton absorption and scattering of X-ray photons by deep and subvalent shells of molecular systems over a wide energy range. The interpretation of experimental data requires the improvement of computational methods for obtaining excited and ionized electron states of molecular systems with one or several vacancies. The specificity of solving these problems requires the use of molecular orbitals obtained in a one-center representation. Slow convergence of one-center expansions is a significant disadvantage of this approach; it affects the accuracy of the calculation of spectroscopic quantities. We offer a method of including higher harmonics in the one-center expansion of a molecular orbital with the use of wave functions of electrons of the deep shells of a ligand (an off-center atom of a molecule). The method allows one to correctly take into account the electron density of a linear molecule near the ligand when describing vacancies created in the molecular core that lead to radial relaxation of the electron density. An analysis of the parameters of the one-center expansion of the ligand functions depending on the ligand's charge is performed. The criteria for the inclusion of higher harmonics of the one-center decomposition of the ligand functions in the molecular orbital are determined. The efficiency of the method is demonstrated for the diatomic molecules HF, LiF, and BF by estimating the energy characteristics of their ground and ionized states. Transferable, Transparent and Flexible Pseudocapacitors from Ternary V2O5/PEDOT/Graphene Electrodes with High Durability in Organic Electrolyte Sanju Gupta, Brendan Evans Subject: Materials Science, Nanotechnology Keywords: Solid-state supercapacitors, flexibility, transferability, energy storage, SECM Transparent conductive electrodes (TCEs) are of enormous significance to the emergence of flexible and wearable electronics and the continued growth of modern devices. Versatile and tunable TCEs, featuring not only high optical transmittance but also electrochemical energy-storage capability, remain a significant challenge. Here we develop capacitive active films composed of a graphene-conjugated V2O5@poly(3,4-ethylene dioxythiophene) ternary composite (V2O5@PEDOT/rGO) on silver-nanowire-coated substrates as solid-state super/pseudocapacitors. The constructed electrodes exhibit improved electrolyte ion interaction with the effective graphene layer, achieving a high areal capacitance of 0.6-1.2 mF.cm−2 with 0.5 M LiCl electrolyte at optical transparency >60%, with record durability. As demonstrated, the kinetic blocking of the PEDOT layer and the anchoring capability of graphene toward amphoteric soluble vanadium ions from the layered V2O5 nanoribbons/nanobelts contribute synergistically to the unusual electrochemical stability, as also shown using scanning electrochemical microscopy (SECM), which provides information on electroactive sites and ion transport rates.
As-fabricated symmetric solid-state supercapacitors delivered a broad potential window (>1.4 V) under two different electrolyte environments (aqueous LiCl and LiCl/PVA gel) and demonstrated higher power and energy density (0.27 μWh.cm−2), outperforming previously reported devices at <0.1 μWh.cm−2. The electrochemical properties are also discussed in terms of ion solvation in the polymer gel electrolyte. Prevalence of Multidrug Resistance Mycobacterium Tuberculosis (MDR-TB) Using GeneXpert Assay in Northern Sudan, How Serious is the Situation? Sufian Khalid Noor, Mohamed Osman Elamin, Ziryab Imad Mahmoud, Mohammed Salah, Taqwa Anwar, Ahmed A. Osman, Hatim Abdullah Natto Subject: Medicine & Pharmacology, General Medical Research Keywords: GeneXpert; MDR-TB; Prevalence; River Nile State; Sudan Background: The World Health Organization (WHO) estimates that there were 558,000 new cases with resistance to rifampicin, of which 82% had multidrug-resistant tuberculosis (MDR-TB). Objectives: We aimed to identify the prevalence of MDR-TB in River Nile State, Sudan, and the risk factors contributing to its occurrence. Methods: This was a descriptive cross-sectional hospital-based study involving 200 specimens taken from patients suspected of having MDR-TB and tested using an automated GeneXpert assay. Results: The GeneXpert assay showed the presence of Mycobacterium tuberculosis in 81 specimens (40.5%), and out of the 81 positive test results, 13 (16%) had MDR-TB. Additionally, 7 MDR-TB cases had been previously treated, representing about 53% of MDR patients; the remaining 6 MDR-TB patients were new cases, representing 47% of MDR-TB patients. Moreover, there were 4 MDR-TB patients who had a history of contact with MDR-TB patients. Conclusion: The prevalence of MDR-TB in River Nile State, Sudan was 16%, which is greater than the WHO estimate for Sudan (10.1%). The results revealed that the main risk factor for developing MDR-TB was a history of contact with MDR-TB, so adherence to treatment and social awareness about the spread of MDR-TB are crucial preventive measures. Control of Hydraulic Pulse System Based on the PLC and State Machine Programming Juraj PANCIK, Pavel MAXERA Subject: Engineering, Automotive Engineering Keywords: PLC programming, hydraulic pulse system, state machine programming This paper describes the control electronics for an industrial pneumatic-hydraulic system based on a low-cost PLC. The developed system is a hydraulic pulse system, and it generates a series of high-pressure hydraulic pulses (max. 200 bar). We describe the requirements, the overall concept of the embedded control system, the user interface, security features and network connectivity. In the description of the software solution we describe the implementation of hierarchically ordered program threads (a multithreaded program) and the main control state machine. In conclusion, we describe the calibration method of the system and the calibration curves, and we present the schematic diagram and a photo of a functional prototype of the system. Symmetric Logarithmic Derivative of Fermionic Gaussian States Angelo Carollo, Bernardo Spagnolo, Davide Valenti Subject: Physical Sciences, Condensed Matter Physics Keywords: quantum metrology; fermionic gaussian state; quantum geometric information In this article we derive a closed-form expression for the symmetric logarithmic derivative of Fermionic Gaussian states. This provides a direct way of computing the quantum Fisher information for Fermionic Gaussian states.
Applications range from quantum metrology with thermal states to non-equilibrium steady states of Fermionic many-body systems. Public Expenditure Management in Indonesia: Islamic Economic Review on State Budget 2017 Aan Jaelani Subject: Social Sciences, Economics Keywords: state budget; fiscal policy; public expenditure; Islamic economic This paper discusses the management of public expenditures in Indonesia in the State Budget 2017. The data were collected from fiscal policy documents, especially those concerning government spending plans in 2017, and then reviewed through policy analysis, the theory of public expenditures, and the theory of public goods, and compared with the theory of public expenditure in Islamic economics. Public expenditure management in Indonesia has implemented a distribution system that divides public expenditure into central government expenditures, transfers to the regions, and the village fund. In terms of fiscal policy, public expenditure is prioritized to support the achievement of sustainable economic growth, job creation, poverty reduction, and the reduction of gaps in the welfare of the whole community. In Islamic economics, public expenditure is used to meet the needs of the community based on the principles of general interest derived from the shari'a. Public expenditure by Indonesia's government serves as an effective tool to redirect economic resources and increase the income of society as a whole, and it is focused on realizing the people's welfare. COVID-19 and Health Information Seeking Behavior: Digital Health Literacy Survey Amongst University Students in Pakistan Rubeena Zakar, Sarosh Iqbal, Muhammad Zakria Zakar, Florian Fischer Subject: Social Sciences, Accounting Keywords: eHealth Literacy; digital health literacy; sense of coherence; COVID-19; COVID-HL-Q; Pakistan. Amid the COVID-19 pandemic, digital health literacy (DHL) has become a significant public health concern. This research aims to assess information-seeking behavior, as well as the ability to find relevant information and deal with DHL, among university students in Pakistan. An online-based cross-sectional survey, using a web-based interviewing technique, was conducted to collect data on DHL. Simple bivariate and multivariate linear regression was performed to assess the association of key characteristics with DHL. The results show a high DHL related to COVID-19 in 54.3% of students. Most of the Pakistani students demonstrated ~50% DHL in all dimensions, except for reliability. Multivariate findings showed that gender, sense of coherence and importance of information were significantly associated with DHL. However, a negative association was observed with students' satisfaction with information. This led to the conclusion that critical operational and navigation skills are essential to achieve COVID-19 DHL and cope with stress, particularly to promote both personal and community health. Focused interventions and strategies should be designed to enhance DHL amongst university students to combat the pandemic. Ultra-Low-Loss Silicon Waveguides for Heterogeneously Integrated Silicon/III-V Photonics Minh Tran, Duanni Huang, Tin Komljenovic, Jonathan Peters, Aditya Malik, John Bowers Subject: Physical Sciences, Optics Keywords: ultra-low-loss waveguide; silicon photonics; heterogeneous integration; narrow linewidth lasers; high Q resonators.
Integrated ultra-low-loss waveguides are highly desired for integrated photonics to enable applications that require long delay lines, high-Q resonators, narrow filters, etc. Here we present an ultra-low-loss silicon waveguide on a 500 nm thick SOI platform. Meter-scale delay lines, million-Q resonators and grating filters with tens of picometers of bandwidth are experimentally demonstrated. We design a low-loss, low-reflection taper to seamlessly integrate the ultra-low-loss waveguide with a standard heterogeneous Si/III-V integrated photonics platform, allowing the realization of high-performance photonic devices such as ultra-low-noise lasers and optical gyroscopes. $(q,\sigma,\tau)$-Differential Graded Algebras Viktor Abramov, Olga Liivapuu, Abdenacer Makhlouf Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: q-differential graded algebra; (σ,τ)-differential graded algebra; generalized Clifford algebra; pre-cosimplicial complex Online: 31 October 2018 (10:03:24 CET) We propose a notion of $(q,\sigma,\tau)$-differential graded algebra, which generalizes the notions of $(\sigma,\tau)$-differential graded algebra and $q$-differential graded algebra. We construct two examples of $(q,\sigma,\tau)$-differential graded algebra, where the first one is constructed by means of a generalized Clifford algebra with two generators (the reduced quantum plane), in which we use a $(\sigma,\tau)$-twisted graded $q$-commutator. In order to construct the second example, we introduce a notion of $(\sigma,\tau)$-pre-cosimplicial algebra. Risk Factors for Neonatal Hypothermia at Arba Minch General Hospital, Ethiopia Tilahun Ferede Asena, Tegenu Tento, Meseret Alemayehu, Asmare Wube Subject: Life Sciences, Other Keywords: Mean sojourn time; Multi-state Markov model; Transition rate Background: The first few minutes after birth are the most dangerous for the survival of an infant. Babies in neonatal intensive care units are either underheated or overheated, and hypothermic infants remain hypothermic or develop a fever. As a result, special attention must be paid to monitoring and maintaining the time of recovery from hypothermia states. Despite numerous studies, only a few have examined the transition from neonatal hypothermia and the associated risk factors in depth. Method: A retrospective observational study was conducted to follow axillary temperatures taken at the time of neonatal intensive care unit admission, which were then recorded every 30 minutes until the newborn's temperature stabilized. All hypothermic neonates admitted to the neonatal intensive care unit between January 2018 and December 2020 were included in the study. Temperature data were available at birth and within the first three hours of admission for 391 eligible hypothermic neonates. The effect of factors on the transition rates between different states of hypothermia was estimated using a multi-state Markov model. Result: The likelihood of progressing from mild to severe hypothermia was 5%, while the likelihood of progressing to normal was 34%. The average time spent in a severe hypothermia state was 48, 35, and 24 minutes for three different levels of birth weight, and 53, 41, and 31 minutes for low, moderate, and normal Apgar scores, respectively. Furthermore, the mean sojourn time in a severe hypothermia state was 48, 39, and 31 minutes for high, normal, and low pulse rates, respectively.
Conclusion: For hypothermic survivors within the first three hours of life, very low birth weight, a low Apgar score, and a high pulse rate had the strongest association with hypothermia and took the longest time to improve/recover. As a result, there is an urgent need to train staff at all levels in managing the time of recovery from neonatal hypothermia. Different Intermolecular Interactions Drive Nonpathogenic Liquid-Liquid Phase Separation and Potentially Pathogenic Fibril Formation by TDP-43 Yu-teng Zeng, Lu-lu Bi, Xiao-feng Zhuo, Lingyun Yang, Bo Sun, Jun-xia Lu Subject: Life Sciences, Biophysics Keywords: TDP-43; Liquid-liquid phase separation; Solution-state NMR Liquid-liquid phase separation (LLPS) of proteins has been found ubiquitously in eukaryotic cells and is critical in controlling many biological processes by forming a temporary condensed phase with different biomolecular components. TDP-43 is recruited to stress granules in cells and is the main component of TDP-43 granules and proteinaceous amyloid inclusions in patients with amyotrophic lateral sclerosis (ALS). The TDP-43 low-complexity domain (LCD) is able to demix in solution, forming protein condensed droplets. The molecular interactions regulating its LLPS were investigated at the protein fusion equilibrium stage, where the droplets stopped growing. We found that the molecules in the droplet were still liquid-like but with an enhanced intermolecular helix-helix interaction in the LCD. The protein starts to aggregate after a lag time of about 200 minutes and aggregates more slowly than under conditions in which the protein does not phase separate or the molecules have a reduced intermolecular helical interaction. A structural transition intermediate towards protein aggregation was also discovered, involving a decrease of the intermolecular helix-helix interaction and a reduction in the helicity. Therefore, LLPS and the intermolecular helical interaction could help maintain the stability of the TDP-43 LCD. Time-Like Proton Form Factors in Initial State Radiation Process Dexu Lin, Alaa Dbeyssi, Frank Maas Subject: Physical Sciences, Nuclear & High Energy Physics Keywords: proton; electromagnetic form factors; initial state radiation; time-like The measurements of the proton electromagnetic form factors in the time-like region using the initial state radiation technique are reviewed. Recent experimental studies have shown that initial state radiation processes at high-luminosity electron-positron colliders can be effectively used to probe the electromagnetic structure of hadrons. The BABAR experiment at the B-factory PEP-II in Stanford and the BESIII experiment at the $\tau$-charm factory BEPC-II in Beijing have measured the time-like form factors of the proton using the initial state radiation process $e^{+}e^{-}\to p\bar{p}\gamma$. The two kinematical regions where the photon is emitted from the initial state at small and large polar angles have been investigated. In the first case the photon is in the region not covered by the detector acceptance and is not detected. The Born cross section and the proton effective form factor have been measured over a wide and continuous range of the momentum transfer squared $q^2$ from threshold up to ~42 (GeV/c)$^2$. The ratio of the electric and magnetic form factors of the proton has also been determined. In this report, the theoretical aspects and the experimental studies of the initial state radiation process $e^{+}e^{-}\to p\bar{p}\gamma$ are described.
The measurements of the Born cross section and the proton form factors obtained in these analyses near the threshold region and in the relatively large $q^2$ region are examined. The experimental results are compared to the predictions from theory and models. Their impact on our understanding of the nucleon structure is discussed. Global Stability of Delayed Ecosystem via Impulsive Differential Inequality and Minimax Principle Ruofeng Rao Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Minimax principle; linear approximation theory; ecosystem; steady state solution This paper applies the Minimax principle and an impulsive differential inequality to derive the existence of multiple stationary solutions and the global stability of a positive stationary solution for a delayed feedback Gilpin-Ayala competition model with impulsive disturbance. The conclusion obtained in this paper is less conservative than those in the known literature, because the impulsive disturbance is not limited to impulsive control.
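As a brief aside on the neonatal hypothermia abstract listed earlier, the quantities it reports (transition probabilities and mean sojourn times) all derive from the transition intensity matrix of a multi-state Markov model. The Python sketch below only illustrates that relationship; the intensity values in Q are invented for illustration and are not the study's fitted estimates.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state continuous-time Markov model, for illustration only:
# states 0 = severe, 1 = mild, 2 = normal temperature (rates per minute).
Q = np.array([
    [-0.030,  0.020,  0.010],   # leaving "severe"
    [ 0.005, -0.025,  0.020],   # leaving "mild"
    [ 0.000,  0.000,  0.000],   # "normal" treated as absorbing here
])

# Mean sojourn time in a non-absorbing state i is -1 / Q[i, i].
print(f"mean sojourn (severe): {-1.0 / Q[0, 0]:.1f} min")
print(f"mean sojourn (mild):   {-1.0 / Q[1, 1]:.1f} min")

# Transition probabilities over a 30-minute observation window: P(t) = exp(Q t).
P30 = expm(Q * 30.0)
print("P(30 min), e.g. mild -> normal:", round(P30[1, 2], 3))
```

In practice, covariates such as birth weight, Apgar score, or pulse rate are typically modelled as multiplicative effects on the off-diagonal intensities, which is how group-specific sojourn times like those quoted in the abstract arise.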
CommonCrawl
How to obtain a Symplectic 4×4 matrix? I have a problem in obtaining a $2n \times 2n$ symplectic matrix $T$, with $n=2$. I couldn't find a direct command in Mathematica to achieve this. Transpose[T].HH.T = {{v1,0,0,0},{0,v2,0,0},{0,0,v1,0},{0,0,0,v2}} Transpose[T].JJ.T = JJ JJ={{0, 0, 1, 0}, {0, 0, 0, 1}, {-1, 0, 0, 0}, {0, -1, 0, 0}}; T={{T11,T12,T13,T14},{T21,T22,T23,T24},{T31,T32,T33,T34},{T41,T42,T43,T44}}; HH={{HH11,HH12,HH13,HH14},{HH21,HH22,HH23,HH24},{HH31,HH32,HH33,HH34},{HH41,HH42,HH43,HH44}}; matrix linear-algebra (asked by Pipe) Have you looked at this article? library.wolfram.com/infocenter/MathSource/4779 – bill s Jul 14 '13 at 15:30 I have already downloaded the code, but I don't know how to handle it. The code is generalized and very complex, so I cannot use it to obtain a symplectic matrix. – Pipe Jul 14 '13 at 17:03 @bill s, can you help me with how to use this package to obtain a symplectic matrix with the condition? – Pipe Jul 17 '13 at 11:45 That's a pretty general question. How about if you look at the code and try and figure it out, then pose a question on this site when you get stuck? – bill s Jul 17 '13 at 12:28 Thank you Bill; the problem is that after running the package there are many error messages, and there are mistakes in the original code. – Pipe Jul 17 '13 at 14:59 $HH$ appears linearly in $T^{\mathsf{T}}.HH.T=V$, and can be computed for a given symplectic matrix $T$ as $H=\left(T^{\mathsf{T}}\right)^{-1}.V.T^{-1}$. solsH = Inverse[ Transpose[T]].{{v1, 0, 0, 0}, {0, v2, 0, 0}, {0, 0, v1, 0}, {0, 0, 0, v2}}.Inverse[T] // Simplify; The symplectic matrix is not unique, so I am just going to get one that does not blow up $HH$. It turns out that in the expression for $solsH$ all the denominators are the same, so I don't have to manipulate that expression any more. Flatten@Map[Denominator, solsH, {2}]; Differences[%] (* {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} *) The expression for the denominator that should not be zero. den = Denominator[solsH[[1, 1]]]; Now I find an instance of $T$ such that $T^{\mathsf{T}}.JJ.T=JJ$ and $den\neq 0$. solsT = FindInstance[ Join[Thread[Flatten[Transpose[T].JJ.T - JJ] == 0], {den != 0}], Flatten[T]]; A set of solutions for $T$ and $HH$. T /. solsT[[1]] // MatrixForm $\left( \begin{array}{cccc} 1 & -2 & \frac{11}{2} & 13 \\ -1 & -\frac{1}{2} & 6 & 9 \\ 2 & -2 & 2 & 8 \\ -2 & 1 & 2 & 0 \\ \end{array} \right)$ solsH /. solsT[[1]] // MatrixForm $\left( \begin{array}{cccc} 8 \text{v1}+68 \text{v2} & -2 \text{v2} & -13 \text{v1}-108 \text{v2} & -10 \text{v1}-73 \text{v2} \\ -2 \text{v2} & 8 \text{v1}+\text{v2} & 2 \text{v2}-9 \text{v1} & \frac{\text{v2}}{2}-14 \text{v1} \\ -13 \text{v1}-108 \text{v2} & 2 \text{v2}-9 \text{v1} & \frac{125 \text{v1}}{4}+173 \text{v2} & 32 \text{v1}+118 \text{v2} \\ -10 \text{v1}-73 \text{v2} & \frac{\text{v2}}{2}-14 \text{v1} & 32 \text{v1}+118 \text{v2} & 37 \text{v1}+\frac{325 \text{v2}}{4} \\ \end{array} \right)$ The two conditions are indeed satisfied. Transpose[T].solsH.T /. solsT[[1]] // Simplify (* {{v1, 0, 0, 0}, {0, v2, 0, 0}, {0, 0, v1, 0}, {0, 0, 0, v2}} *) Transpose[T].JJ.T /. solsT[[1]] (* {{0, 0, 1, 0}, {0, 0, 0, 1}, {-1, 0, 0, 0}, {0, -1, 0, 0}} *) – Suba Thomas So helpful, a full answer with additional explanation of the code. Thank you very much Suba, congrats. – Pipe Jul 14 '13 at 23:11 I didn't mention that the matrix HH already exists. 
So I need to obtain a unique matrix T as a function of the matrix HH? To change solsT? T = Inverse[Transpose[T].H].V – Pipe Jul 15 '13 at 0:22 @Pipe, if $HH$ is known, then $T^{\mathsf{T}}.\text{HH}.T=V$ gives 16 equations and there are 16 variables in $T$. So we will not be able to impose the condition that $T$ be symplectic. – Suba Thomas Jul 15 '13 at 3:59 Just a second: T is not unique, so if it is not unique, can I find one in agreement with HH? Please take a look at the beginning of the post; for the special numerical case of a given HH, can I obtain T? – Pipe Jul 15 '13 at 11:54 The symplectic matrix is not unique. That is, $T^{\mathsf{T}}.JJ.T=JJ$ has many solutions for $T$. But if we start from $T^{\mathsf{T}}.HH.T=V$ it appears that it will dictate the solution for $T$, and there seems to be no way to specify that it be symplectic. I will take another shot at your modified question later, assuming that you or someone else has not already figured it out. – Suba Thomas Jul 15 '13 at 16:15
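As a complement to the thread above: one standard way to generate a symplectic matrix numerically, outside Mathematica, is to exponentiate J.S for a symmetric S, since exp(J S) always satisfies TᵀJT = J. The Python/NumPy sketch below is only an illustration of that construction and of how the two conditions discussed above can be checked; it is not part of the original answer.

```python
import numpy as np
from scipy.linalg import expm

n = 2
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])   # same JJ as in the question

rng = np.random.default_rng(0)
A = rng.standard_normal((2 * n, 2 * n))
S = (A + A.T) / 2            # arbitrary symmetric matrix
T = expm(J @ S)              # exp(J S) is symplectic whenever S is symmetric

# Check the symplectic condition T^T J T = J.
print(np.allclose(T.T @ J @ T, J))           # True, up to round-off

# Given this T, the HH matching a target diagonal V is (T^T)^{-1} V T^{-1}.
v1, v2 = 1.0, 2.0
V = np.diag([v1, v2, v1, v2])
HH = np.linalg.inv(T.T) @ V @ np.linalg.inv(T)
print(np.allclose(T.T @ HH @ T, V))          # True
```

Going the other way (starting from a given HH and asking for a symplectic T that diagonalizes it) is the harder normal-form problem raised in the comments; for a positive-definite HH, Williamson's theorem guarantees that such a T exists.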
CommonCrawl
How to calculate the diffraction image for a given lattice? I have seen a lot of diffraction patterns such as this, taken from Wikipedia. I know how these images are measured, but I do not know how you can calculate (predict) a diffraction pattern for a specific lattice. The structure factor for a monatomic system is given as $$ S(\mathbf{q}) = \frac{1}{N}\sum\limits_{j=1}^{N}\sum\limits_{k=1}^{N}\mathrm{e}^{-i\mathbf{q}\cdot(\mathbf{r}_j - \mathbf{r}_k)}, $$ where $\mathbf{q}$ is the scattering vector and $\mathbf{r}_j$ the position of atom (or lattice point) $j$. The scattering vector $\mathbf{q}$ is given as $\mathbf{q} = \mathbf{k}_2 - \mathbf{k}_1$, where $\mathbf{k}_1$ is the incoming and $\mathbf{k}_2$ is the scattered beam. The amplitude $\lvert\mathbf{q}\rvert = \frac{4\pi}{\lambda}\sin\theta$ depends on the angle $2\theta$ between the incoming and scattered beams. For an isotropic system such as an amorphous solid, a polycrystal or in powder diffraction, one typically averages over all possible directions of $\mathbf{q}$. The so-calculated static structure factor is the Fourier transform of the radial distribution function. However, if I want to calculate the 2d diffraction pattern, I can't average over all possible directions of $\mathbf{q}$. Which value of $\mathbf{q}$ should be used? Does it matter? I read that $S(\mathbf{q})$ is the Fourier transform of the lattice (the reciprocal lattice). But the Fourier transform of a 3d lattice is three dimensional. How do I obtain the 2d diffraction pattern? Related: This question on how to calculate the 1d diffraction pattern. condensed-matter solid-state-physics scattering diffraction x-rays Julian Helfferich For powder diffraction, the reciprocal lattice of the crystal becomes a series of concentric shells. The image is a probe of this space (its square anyway), and so the type of image you get of these circles will depend on the geometry of your experimental setup. – CDCM Aug 14 '17 at 9:23 Yes, and by integrating over different directions of $\mathbf{q}$, you obtain a 1-d representation. But what happens in the case of a single crystal, where you expect to get individual reflection peaks instead of circles/shells? – Julian Helfferich Aug 14 '17 at 9:32 There doesn't need to be integration. As an experimenter, you change the value of $\textbf{q}$, and then record the intensity of the reciprocal lattice at that value of $\textbf{q}$. In the single crystal nothing changes in terms of the method; just as you say, you get points now. If you want to understand how the images form, understand how $\textbf{q}$ varies in your apparatus as you turn your machine. If this is still unanswered later I'll write things up in full when I have time. – CDCM Aug 14 '17 at 9:44 For context: I am not an experimenter. I have obtained the crystalline structure in a computer simulation and would like to know how the structure would appear in a diffraction measurement. I can calculate the static structure factor but struggle with the 2d diffraction image. – Julian Helfferich Aug 14 '17 at 11:05 Let's first elaborate on your premises, which are not general enough in practice. A crystal is the repeat by translation of a so-called unit cell. 
In the most general case, this unit cell is a parallelepiped defined by three non-collinear vectors $\renewcommand{\vec}[1]{\mathbf{#1}}(\vec{a},\vec{b},\vec{c})$ and the translations to consider are $\vec{T}_{mnp}=m\vec{a}+n\vec{b}+p\vec{c}$ for integers $m,n,p$. We can then denote $F(\vec{q})$ the complex amplitude of the wave diffracted by one unit cell: if we consider only elastic scattering, this is the Fourier transform of the electron density inside the unit cell. Then the diffraction by the entire crystal, made of $2M+1$, $2N+1$, and $2P+1$ unit cells along $\vec{a}, \vec{b}, \vec{c}$ respectively, reads $$S(\vec{q})=\sum_{m=-M}^M\sum_{n=-N}^N\sum_{p=-P}^P F(\vec{q})\exp\left(i2\pi\vec{q}\cdot \vec{T}_{mnp}\right).$$ Then introducing $$\begin{aligned} h&=\vec{a}\cdot\vec{q}\\ k&=\vec{b}\cdot\vec{q}\\ l&=\vec{c}\cdot\vec{q}\\ \end{aligned}$$ we get $$S(\vec{q}) = F(\vec{q}) \underbrace{\frac{\sin\left(\pi h (2M+1)\right)}{\sin\pi h}}_{D_M(h)} \underbrace{\frac{\sin\left(\pi k (2N+1)\right)}{\sin\pi k}}_{D_N(k)} \underbrace{\frac{\sin\left(\pi l (2P+1)\right)}{\sin\pi l}}_{D_P(l)} $$ The main characteristic of the function $D_K(r)$ for a large value of $K$ is that it has very strong and sharp peaks for integer values of $r$. As a result, the diffracted wave exhibits sharp peaks for integers $h,k,l$. If we then introduce the basis $(\vec{a}^*, \vec{b}^*, \vec{c}^*)$ dual to $(\vec{a},\vec{b},\vec{c})$, $$\vec{a}^* = \frac{\vec{b}\times\vec{c}}{V},$$ and circular permutations of $a$, $b$ and $c$, where $V=\det(\vec{a},\vec{b},\vec{c})$ is the volume of the unit cell, then $$\vec{q}=h\vec{a}^*+k\vec{b}^*+l\vec{c}^*=\vec{q}_{hkl},$$ and therefore the sharp peaks lie on a lattice whose unit cell is defined by $(\vec{a}^*, \vec{b}^*, \vec{c}^*)$, the so-called reciprocal lattice. Now that the proper framework has been laid out, I can move on to answer your question. Consider a plane passing through the reciprocal lattice. Any $\vec{q}_{hkl}$ close enough to the plane will result in some diffracted intensity around the projection of $\vec{q}_{hkl}$ onto the plane. In a real experiment, the plane would be the surface of a CCD for example. That detector surface would be moved of course, but the reciprocal lattice would be moved too, i.e. $(\vec{a}^*, \vec{b}^*, \vec{c}^*)$ would be moved, because the crystal would be rotated, and the incident X-ray beam would also see its direction changed. Thus the position of that plane I was discussing becomes a rather complex function of the relative position of the detector, the crystal and the source, but we don't really need to go into that complexity unless you want to model an actual experimental setup. For a general simulation, it suffices to simply move that plane. The most "beautiful" diffraction patterns would of course be obtained by choosing a plane passing through a subset of $\vec{q}_{hkl}$'s. For example, the plane containing all $\vec{q}_{hk0}$ for any integer $h,k$.
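To make the recipe above concrete for a simulated structure, here is a minimal Python sketch (not part of the original answer) that evaluates the question's $S(\mathbf{q})$ directly from particle positions on a grid of scattering vectors lying in a chosen plane, here the $q_z=0$ ("hk0") plane of a small simple-cubic crystal; any other plane through reciprocal space can be probed by changing the two in-plane directions.

```python
import numpy as np

# Small simple-cubic crystal (lattice constant a = 1) standing in for
# positions coming from a simulation.
a = 1.0
n = 6
grid = np.arange(n) * a
positions = np.array([[x, y, z] for x in grid for y in grid for z in grid])

# Grid of scattering vectors in the plane q_z = 0 (the "hk0" plane).
qmax, npts = 4 * 2 * np.pi / a, 101
q1 = np.linspace(-qmax, qmax, npts)
qx, qy = np.meshgrid(q1, q1)
q_vecs = np.stack([qx.ravel(), qy.ravel(), np.zeros(qx.size)], axis=1)

# S(q) = |sum_j exp(-i q.r_j)|^2 / N, evaluated by a direct sum.
phases = np.exp(-1j * (q_vecs @ positions.T))          # shape (n_q, n_atoms)
S = (np.abs(phases.sum(axis=1)) ** 2 / len(positions)).reshape(qx.shape)

# S now holds the 2D diffraction image; for this lattice the Bragg peaks
# sit at q = (2*pi/a) * (h, k, 0).
print(S.max(), S[npts // 2, npts // 2])                # central (000) peak dominates
```

Sharp peaks appear at the reciprocal-lattice points intersected by the chosen plane, exactly as the Dirichlet-kernel factors $D_M$, $D_N$, $D_P$ predict; replacing the synthetic positions with coordinates from the simulation gives the corresponding single-crystal pattern.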
CommonCrawl
Indoor bacterial load and its correlation to physical indoor air quality parameters in public primary schools Zewudu Andualem1, Zemichael Gizaw1, Laekemariam Bogale1 & Henok Dagne1 Multidisciplinary Respiratory Medicine volume 14, Article number: 2 (2019) Poor indoor air quality is a great problem in schools due to a high number of students per classroom, insufficient outside air supply, and poor construction and maintenance of school buildings. Bacteria in the indoor air environment pose a serious health problem. Determination of the bacterial load in the indoor environment is necessary to estimate the health hazard and to create standards for indoor air quality control. This is especially important in densely populated facilities such as schools. An institution-based cross-sectional study was conducted among 51 randomly selected classrooms of eight public primary schools from March 29–April 26, 2018. To determine the bacterial load, the passive air sampling (settle plate) method was used by exposing Petri dishes of blood agar media for an hour. The Pearson correlation matrix was employed to assess the correlation between bacterial load and physical parameters. The grand total mean bacterial load was 2826.35 CFU/m3 in the morning and 4514.63 CFU/m3 in the afternoon. The lowest and highest mean bacterial loads were recorded at school 3 (450.67 CFU/m3) and school 5 (7740.57 CFU/m3) in the morning and afternoon, respectively. In the morning, relative humidity (r = − 0.7034), PM2.5 (r = 0.5723) and PM10 (r = 0.6856), and in the afternoon temperature (r = 0.3838) and relative humidity (r = − 0.4014), were correlated with indoor bacterial load. Staphylococcus aureus, Coagulase-negative Staphylococcus species and Bacillus species were among the isolated bacteria. A high bacterial load was found in public primary schools in Gondar city as compared to different indoor air biological standards. Temperature, relative humidity and particulate matter concentration (PM2.5 and PM10) were associated with the indoor bacterial load. Staphylococcus aureus, Coagulase-negative Staphylococcus species and Bacillus species were among the isolated bacterial species. Attention should be given to controlling those physical factors which favour the growth and multiplication of bacteria in the indoor environment of classrooms to safeguard the health of students and teachers in school. Clean air is a basic requirement of life [1]. Most people spend 80–95% of their time in indoor environments, breathing on average 10–14 m3 of air per day [2,3,4,5]. Millions of children and adults spend 24–30% of their time in a day in school buildings, and they need safe, healthy environments to thrive, learn, and succeed [6, 7]. Indoor air quality has been the object of several studies due to an increasing concern within the scientific community about the effects of indoor air quality upon health, especially as people spend more time indoors than outdoors [8,9,10]. The quality of air inside homes, offices, schools or other private and public buildings is an essential determinant of healthy life and people's well-being [1]. Indoor air pollution is a major problem in people's daily lives. Efficient corrective methods are urgently needed to combat the problem of indoor air quality: bacteria, pollen grains, smoke, humidity, chemical substances, and gases released by anthropogenic activity, which have adverse health effects in humans [11]. 
Several studies underscore the significant risks of global warming to human health due to increasing levels of air pollution. The last decades have seen a rise in the concentrations of pollens and pollutants in the air. This rise parallels the increase in the number of people presenting with allergic symptoms (e.g., allergic rhinitis, conjunctivitis, and asthma) [12]. Globally, 3.8 million deaths were attributed to indoor air pollution in 2016. More than 90% of air pollution-related deaths occur in low- and middle-income countries, mainly in Asia and Africa, followed by low- and middle-income countries of the Eastern Mediterranean region, Europe, and the Americas [13]. Bioaerosols contribute about 5–34% of indoor air pollution [5, 6, 14, 15]. Indoor air quality problems in schools may be even more serious than in other categories of buildings, due to higher occupant density, poor sanitation of classrooms, and insufficient outside air supply, aggravated by frequent poor construction and maintenance of school buildings [16]. Poor indoor air quality can also affect scholarly performance and attendance, since children are more vulnerable than adults to health risks from exposure to environmental hazards [16,17,18]. Therefore, the purpose of this research was to assess the bacterial quality of indoor air in public primary schools to increase awareness and provide references for a better understanding of bacterial indoor air quality problems in public primary schools. Study design and study area An institution-based cross-sectional study was conducted to assess the indoor bacterial load and its relation to physical indoor air quality parameters of public primary schools in Gondar city, Northwest Ethiopia. Gondar city is located in the northern part of Ethiopia, in the Amhara national regional state, North Gondar zone, at a distance of 727 km from Addis Ababa and 173.09 km from the regional capital Bahir Dar, at 12°45′ north latitude and 37°45′ east longitude. In Gondar city there are twenty public primary schools covering grades 1–8, with a total of 27,766 students enrolled in 266 classrooms [19]. Sample size and sampling procedures The sample size was determined based on environmental sampling and sample size determination methods [20]. Manly's formula was used to determine the sample size [21] using the following equation. $$ n=\frac{4\sigma^2}{\delta^2} $$ Where n = number of samples, σ = standard deviation, δ = acceptable error [δ is half of the width of a 95% confidence interval on the mean ($\overline{X}\pm\delta$)]. From a total of twenty public primary schools, 40% of the schools were selected through simple random sampling, and 20% of the classrooms at those schools were selected as study units through simple random sampling. The mean and standard deviation (σ) of the eight randomly selected public primary schools were 13.3 and 5.37, respectively, with an acceptable error (δ) of 1.5. $$ n=\left[\frac{4\ast {(5.37)}^2}{(1.5)^2}\right]=51 $$ A total of fifty-one classrooms were selected from eight public primary schools of Gondar city by a simple random sampling technique. Air sampling procedures Air samples were taken from 51 randomly selected classrooms from eight public primary schools in Gondar city. Bacterial measurements were made by a passive air sampling method, i.e., the settle plate method. Standard Petri dishes with a 9 cm diameter (63.585 cm2 area) containing culture media were exposed. 
Bacterial contamination determination was based on the count of the microbial fallout onto Petri dishes left open to the air, according to the 1/1/1 scheme (for 1 h, 1 m above the floor, at least 1 m away from walls or any obstacle) [14]. Bacteria were collected on blood agar media to which an antifungal agent (griseofulvin) had been added to inhibit the growth of fungi. To determine the bacterial load with respect to environmental variation, sampling was done in the morning (at 6:30 am, before students entered the classroom) and in the afternoon (5:00 pm, after students left the classroom). After exposure, the sample was taken to the laboratory (Department of Biology at the University of Gondar) and incubated at 37 °C for 24 to 48 h. Colony forming units (CFU) were enumerated and the microbial concentration in CFU/m3 was determined using the following equation [22]. $$ N=\frac{a\times 10000}{b\times t\times 0.2} $$ Where N = microbial CFU/m3 of indoor air; a = number of colonies per Petri dish; b = dish surface area (cm2); t = exposure time. Individual bacterial isolates were identified using standard methods (including colonial morphology, microscopy, and biochemical tests) [23, 24]. In parallel with bacterial sample collection, data on physical parameters such as CO2 concentration, particulate matter concentration (PM2.5 and PM10), indoor temperature, and relative humidity were measured with an Aireveda monitor. To minimize dilution of air contaminants, openings like doors and windows were closed [7, 25, 26]. In addition, the movement of people during sampling was restricted to avoid air disturbance and newly emitted microorganisms. Statistical analyses were carried out using STATA/SE 14.0. To assess the correlation of bacterial concentration with environmental factors such as carbon dioxide concentration, particulate matter concentration (PM2.5 and PM10), temperature and relative humidity, Pearson correlation was employed. One-way analysis of variance (ANOVA) was carried out to assess the mean difference of the bacterial load among public primary schools. Bacterial load The concentrations of bacterial aerosols in the indoor environments of public primary schools in Gondar city were estimated with the use of the settle plate method; the lowest and highest bacterial loads were recorded in the morning in school 1 (208 CFU/m3) and in the afternoon in school 5 (23,504 CFU/m3) (Table 1). Table 1 Statistical summary of bacterial load, in public primary schools of Gondar city, Northwest Ethiopia, 2018. (n = 51) The grand total mean bacterial load was 2826.35 and 4514.63 CFU/m3 in the morning and afternoon, respectively, while the overall mean bacterial load was 3670.49 CFU/m3 (Table 1). The ANOVA test result is presented to show the mean bacterial load difference among the different public primary schools. The test showed that there was a significant mean bacterial load difference among public primary schools at p < 0.001 (Table 2). Table 2 ANOVA test result on mean bacterial load difference among public primary schools of Gondar city, Northwest Ethiopia 2018 Physical parameters of indoor air environments During physical parameter measurement, it was found that none of the examined classrooms had an HVAC (heating, ventilation, and air conditioning) system. 
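As a small aside on the settle-plate conversion given above, the equation is straightforward to apply; the sketch below is not from the paper and assumes that the exposure time t is expressed in minutes, the usual convention for this Omeliansky-type formula and consistent with the 1-hour exposures used here.

```python
def cfu_per_m3(colonies: float, dish_area_cm2: float = 63.585,
               exposure_min: float = 60.0) -> float:
    """Settle-plate (Omeliansky-type) conversion: N = a*10000 / (b * t * 0.2).

    colonies      -- colonies counted on the plate (a)
    dish_area_cm2 -- dish surface area in cm^2 (b); 63.585 cm^2 for a 9 cm dish
    exposure_min  -- exposure time in minutes (t); assumed unit, see note above
    """
    return colonies * 10000.0 / (dish_area_cm2 * exposure_min * 0.2)

# Example: 16 colonies after a 1-hour exposure on a 9 cm plate.
print(round(cfu_per_m3(16), 1))   # ~209.7 CFU/m3, the same order as the
                                  # lowest value reported above (208 CFU/m3)
```

Under this assumption, each counted colony on a 9 cm dish exposed for one hour corresponds to roughly 13 CFU/m3, which is effectively the resolution limit of the passive method used here.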
Carbon dioxide concentration, indoor temperature, relative humidity, and particulate matter concentration (PM2.5 and PM10) during the sampling time ranged from 401 to 550 ppm, 12 to 24 °C, 14 to 64%, 7 to 173 μg/m3, and 21 to 277 μg/m3, respectively (Table 3). Table 3 Statistical summary of physical indoor air quality parameters in public primary schools of Gondar city, Northwest Ethiopia, 2018 (n = 51) Isolated bacterial species Three bacterial species were isolated: Bacillus species, Staphylococcus aureus and Coagulase-negative Staphylococcus (CoNS) species. Bacillus species were found in all public primary schools (Table 4). Table 4 Type of microorganism isolated from each public primary school in Gondar city, Northwest Ethiopia, 2018 (n = 51) According to the European sanitary standards for non-industrial premises, the degree of air pollution by bacterial populations across the various classrooms of the eight public primary schools ranges largely from high to very high (Table 5). Table 5 Assessments of bacterial indoor air quality in the selected eight public primary schools in Gondar city, according to the sanitary standards for non-industrial premises (n = 51) Of all the physical indoor air quality parameters, relative humidity, particulate matter concentration and temperature correlated with the indoor bacterial load of public primary schools; relative humidity had a strong negative correlation with indoor bacterial load (Table 6). Table 6 Pearson correlation coefficients between indoor bacterial concentration & physical indoor air quality parameters in public primary schools of Gondar city, Northwest Ethiopia, 2018. (n = 51) The bacterial load of indoor air environments of public primary schools in Gondar city was found to be in the range between 208 and 23,504 CFU/m3, with a mean bacterial load of 3670.49 CFU/m3. This finding was higher than the findings of other studies, one conducted in Poland [27] and another in Malaysia [28]. There are no generally accepted threshold limit values concerning concentrations of indoor airborne bacteria, and the obtained results could be compared only with the values recommended by various authors or institutions. The work conducted by a WHO expert group on the assessment of health risks of biological agents in indoor environments suggested that the total microbial concentration should not exceed 1000 CFU/m3 [29], whereas other scholars considered that 750 CFU/m3 should be the limit for bacteria [30]. Airborne microbial concentrations ranging from 4500 to 10,000 CFU/m3 have also been suggested as the upper limit for ubiquitous bacterial aerosols [31]. According to the sanitary standards of the European Commission for non-industrial premises, the permissible limit of bacterial load was ≤ 500 CFU/m3 [32]. The variation of bacterial load in indoor environments might be due to environmental factors such as the ventilation system of the classroom, temperature, humidity, and particulate matter concentration. The isolated bacterial species of the present study partly agree with the work by Hussin N. et al. [28]. Likewise, they are in harmony with a study conducted in India [33] and partly agree with the work by Naruka K. et al. [34]. In this study, the temperature of the indoor environment had a positive correlation with total airborne bacteria in the afternoon (r = 0.3838), while there was no correlation with the morning airborne bacterial concentration. 
During the study, the temperature ranged from 12 to 16 °C and 15 to 26 °C in the morning and afternoon, respectively. This was consistent with the results reported by Brągoszewska et al. [35], but inconsistent with the results reported by Naruka et al. [34], where the temperature was negatively correlated, and Hayleeyesus S. et al. [14], where there was no correlation between temperature and indoor bacterial load. The bacterial load would be expected to correlate significantly with indoor temperature, i.e., the concentration of aerosols will increase as the temperature increases [35], but the variation might be due to the fact that other environmental factors increase the concentration of bacteria in classrooms, and the number of students may result in a great diversity of high bacterial loads [36]. In this study, relative humidity was moderately and strongly negatively correlated with total airborne bacteria in the afternoon (r = − 0.4014) and in the morning (r = − 0.7034), respectively. The RH in public primary schools ranged from 21 to 62% and 14 to 57% in the morning and afternoon, respectively. The negative correlation between relative humidity and indoor airborne bacterial load was not consistent with what is expected, since a strict correlation between bacterial load and relative humidity was already reported by Brągoszewska et al. [35] and Huang H. et al. [37]. The possible explanation might be that as relative humidity decreases, the bacterial load decreases because the viability of aerosols is inhibited when relative humidity is too low, since a dry environment decreases the metabolism and physiological activities of microorganisms [35]. The correlation of PM2.5 with total airborne bacteria was strongly positive in the morning (r = 0.5723), while there was no correlation with the afternoon airborne bacterial concentration. The positive correlation of this finding is in agreement with a study conducted in Poland [38], but in another study conducted in Poland [35], PM2.5 was negatively correlated. In this study, PM10 had a strong positive correlation with airborne bacterial load (r = 0.6856), but there was no correlation with the afternoon airborne bacterial load. The positive correlation is supported by a study conducted in Poland [35]. A possible explanation is that PM10 increases the bacterial load because bioaerosols attach to coarse solid particles [39], whereas the afternoon particulate matter concentration was not correlated with the indoor bacterial load because other environmental factors have a more significant correlation with bacterial load than PM10, and the concentration of PM10 in the afternoon is lower than in the morning. A high bacterial load was found in public primary school classrooms in Gondar city as compared with different indoor air biological standards. Temperature, humidity, and particulate matter concentration (PM2.5 and PM10) were associated with the indoor bacterial load. Staphylococcus aureus, Coagulase-negative Staphylococcus, and Bacillus were among the isolated bacterial species. Attention should be given to controlling those physical factors which favour the growth and multiplication of bacteria in the indoor environment of classrooms to safeguard the health of students and teachers in school. °C: Degree centigrade; am: Ante meridiem; CFU: Colony forming units; cm2: Square centimeter; m3: Cubic meter; pm: Post meridiem; ppm: Parts per million; RH: Relative humidity; WHO: World Health Organization. 
μg: Microgram World Health Organization, WHO guidelines for Indoor Air quality: selected pollutants. 2010. Fekadu HS, Melaku MA. Microbiological quality of indoor air in university libraries. Asian Pacific journal of tropical biomedicine. 2014;4:S312–7. Awad Abdel Hameed A, Farag SA. An indoor bio-contaminants air quality. Int J Environ Health Res. 1999;9(4):313–9. Peter SG, Yakubu SE. Comparative analysis of airborne microbial concentrations in the indoor environment of two selected clinical laboratories. IOSR J Pharm Biol Sci (IOSR-JPBS). 2013;8:4. Uzoechi AU, et al. Microbiological Evaluation of Indoor Air Quality of State University Library. Asian J Applied Sciences. 2017;05(03):525-30. Samson E, Ihongbe JC, Okeleke Ochei, Hi Effedua, Phillips Adeyemi O. Microbiological assessment of indoor air quality of some selected private primary schools in Ilishan-Remo, Ogun state. Nigeria. 2017;3:2454–9142. Mohan K, Madhan N, Ramprasad S, Maruthi YA. Microbiological air quality of indoors in primary and secondary schools of Visakhapatnam, India. Int J Curr Microbiol App Sci. 2014;3(8):880–7. Cahna N, Martinho M, Almeida-Silva M, Freitas MC. Indoor air quality in primary schools. Int J Environment and Pollution. 2012;50:396–410. Cahna N, Freitas MC, Almeida SM, Almeida-Silva M. Indoor school enviroment: Easy and low cost to assess inorganic pollutants. J Radioanal Nucl Chem. 2010;286(2):495–500. Freitas MC, Canha N, Martinho M, Almeida-Silva M, Almeida SM, Pegas P, et al. Chapter 20:'Indoor air quality in primary school', in Advanced Topics in Environmental Health and Air Pollution Case Studies, Anca Maria Moldoveanu, Editor. 2011. InTech Press: Croatia. 361-84. Kalpana S. Indoor air pollution. Bhartiya Krishi Anusandhan Patrika. 2016. 4. Patella V, Florio G, Magliacane D, Giuliano A, Crivellaro MA, Di Bartolomeo D, etal. Urban air pollution and climate change:"the Decalogue: allergy safe tree" for allergic and respiratory diseases care. Clin Mol Allergy. 2018;16(1):20. World Health Organization. World health organization. 2018 [cited 2018 July, 02]; available from: http://www.who.int/news-room/detail/02-05-2018-9-out-of-10-people-worldwide-breathe-polluted-air-but-more-countries-are-taking-action. Fekadu HS, Amanuel E, Aklilu DF. Quantitative assessment of bio-aerosols contamination in indoor air of university dormitory rooms. Int J Health Sci. 2015;9(3):249. Zemichael G, Mulat G, Chalachew Y. High bacterial load of indoor air in hospital wards: the case of University of Gondar teaching hospital, Northwest Ethiopia. Multidiscip Respir Med. 2016;11(1):24. Nascimento Pegas P, Alves C, Guennadievna Evtyugina M, Nunes T. Indoor air quality in elementary schools of Lisbon in Spring. Environ Geochem Health . 2010;33:455–68. Daisey Joan M, Angell William J, Apte Michael G. Indoor air quality, ventilation and health symptoms in schools: an analysis of existing information. Indoor Air. 2003;13(1):53–64. Godoi RHM, et al. Indoor air quality assessment of elementary schools in Curitiba, Brazil. Water Air Soil Pollut. 2009;9(3–4):171–7. Gondar city education office, Number of students enrolled in Gondar city public primary schools from grade 1–8 2018. Chunlong Z. Fundamentals of environmental sampling and analysis. Hoboken: Wiley; 2007. Manly Bryan FJ. Statistics for environmental science and management. New York: Chapman and Hall/CRC; 2008. Dumała Sławomira M, Dudzińska Marzenna R. Microbiological indoor air quality in polish schools. Annual Set The Environment Protection (Rocznik Ochrona Środowiska). 
2013;15:231–44. Cheesbrough M. Biochemical tests to identify bacteria. In: Cheesbrough M, editor. District laboratory practice in topical countries, part 2, vol. 180. Cape Town: Cambridge University Press; 2006;63–70. Rajesh B, Lal IR. Essentials of medical microbiology. New Delhi: Jaypee Brothers, Medical Publishers Pvt. Limited; 2008. Manisha J, Srivastava RK. Identification of indoor airborne microorganisms in residential rural houses of Uttarakhand, India. Int J Curr Microbiol Appl Sci. 2013;2:146–52. Bartlett KH, Kennedy SM, Brauer M, van Netten C, Dill B. Evaluation and determinants of airborne bacterial concentrations in school classrooms. J Occup Environ Hyg. 2004;1(10):639–47. Ewa Brągoszewska E, Mainka A, Pastuszka JS, Lizończyk K. Assessment of bacterial aerosol in a preschool, primary school and high school in Poland. Atmosphere. 2018;9(3):87. Mat HNH, Sann LM, Shamsudin MN, Hashim Z. Characterization of bacteria and fungi bioaerosol in the indoor air of selected primary schools in Malaysia. Indoor Built Environ. 2011;20(6):607–17. Heseltine Elisabeth and Rosen Jerome. WHO guidelines for indoor air quality: dampness and mould. Copenhagen: WHO Regional Office Europe; 2009. Rao Carol Y, Burge Harriet A, Chang John CS. Review of quantitative standards and guidelines for fungi in indoor air. J Air Waste Manage Assoc. 1996;46(9):899–908. Hameed AA, Habeeballah T. Air microbial contamination at the holy mosque, Makkah, Saudi Arabia. Curr World Environ. 2013;8:179–87. Wanner H, Verhoeff A, Colombi A, Flannigan B, Gravesen S, Mouilleseaux A, et al. Indoor air quality and its impact on man: report no. 12: biological particles in indoor Environments. Brussels-Luxembourg: ECSC-EEC-EAEC; 1993. Kumari NK. Shravanthi Ch M, Byragi RT. Identification and assessment of airborne bacteria in selected school environments in Visakhapatnam. India Ind J Sci Res and tech. 2015;3(6):21–5. Kavita N, Jyoti G. Microbial air contamination in a school. Int J Curr Microbiol App Sci. 2012;2(12):404–10. Bragoszewska E, Mainka A, Pastuszka JS. Concentration and size distribution of culturable bacteria in ambient air during spring and winter in Gliwice: a typical urban area. Atmosphere. 2017;8(12):239. Viegas C, Faria T, Pacifico C, Guimarães dos Santos M. Microbiota and particulate matter assessment in Portuguese optical shops providing contact lens services. Healthcare. 2017;5(2):24. Hsiao-Lin H, Lee M-K, Hao-Wun S. Assessment of indoor bioaerosols in public spaces by real-time measured airborne particles. Aerosol Air Qual Res. 2017;17(9):2276–88. Bragoszewska E, Pastuszka JS. Influence of meteorological factors on the level and characteristics of culturable bacteria in the air in Gliwice, upper Silesia (Poland). Aerobiologia. 2018;34(2):241–55. Min G, Yan X, Qiu T, Han M, Wang X. Variation of correlations between factors and culturable airborne bacteria and fungi. Atmos Environ. 2016;128:10–9. Firstly, we would like to give our heartfelt thanks to the Almighty of God for giving us the wisdom, knowledge and all the support we needed to do this study. We would like to express our appreciation to the University of Gondar College of Medicine and Health Sciences and public primary school directors and teachers for their support. No funding source. Data will be made available upon request to the primary author. 
Department of Environmental and Occupational Health and Safety, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia Zewudu Andualem, Zemichael Gizaw, Laekemariam Bogale & Henok Dagne All the authors actively participated during the conception of the research issue, the development of the research proposal, data collection, analysis and interpretation, and wrote various parts of the research report. ZA designed the protocol, analyzed the data, supervised the overall research process, and prepared the manuscript. ZG, LB and HD advised and commented on the overall work. All the authors read and approved the final manuscript. Correspondence to Zewudu Andualem. Ethical clearance was obtained from the Institutional Review Board of the University of Gondar (ref. number: IPH/295/2017). Then, an official letter from the University of Gondar Research and Community Service Vice President and a supportive letter from the Institute of Public Health, College of Medicine and Health Sciences, were written to the respective responsible bodies. Confidentiality of the data was maintained. No identifiers except coding were included in the data collection tools. This manuscript does not contain any individual person's data. Andualem, Z., Gizaw, Z., Bogale, L. et al. Indoor bacterial load and its correlation to physical indoor air quality parameters in public primary schools. Multidiscip Respir Med 14, 2 (2019). https://doi.org/10.1186/s40248-018-0167-y Settle plate method
CommonCrawl
A view of programming scalable data analysis: from clouds to exascale Domenico Talia ORCID: orcid.org/0000-0001-5392-94791 Journal of Cloud Computing volume 8, Article number: 4 (2019) Cite this article Scalability is a key feature for big data analysis and machine learning frameworks and for applications that need to analyze very large and real-time data available from data repositories, social media, sensor networks, smartphones, and the Web. Scalable big data analysis today can be achieved by parallel implementations that are able to exploit the computing and storage facilities of high performance computing (HPC) systems and clouds, whereas in the near future Exascale systems will be used to implement extreme-scale data analysis. Here is discussed how clouds currently support the development of scalable data mining solutions and are outlined and examined the main challenges to be addressed and solved for implementing innovative data analysis applications on Exascale systems. Solving problems in science and engineering was the first motivation for inventing computers. After a long time since then, computer science is still the main area in which innovative solutions and technologies are being developed and applied. Also due to the extraordinary advancement of computer technology, nowadays data are generated as never before. In fact, the amount of structured and unstructured digital datasets is going to increase beyond any estimate. Databases, file systems, data streams, social media and data repositories are increasingly pervasive and decentralized. As the data scale increases, we must address new challenges and attack ever-larger problems. New discoveries will be achieved and more accurate investigations can be carried out due to the increasingly widespread availability of large amounts of data. Scientific sectors that fail to make full use of the huge amounts of digital data available today risk losing out on the significant opportunities that big data can offer. To benefit from the big data availability, specialists and researchers need advanced data analysis tools and applications running on scalable architectures allowing for the extraction of useful knowledge from such huge data sources. High performance computing (HPC) systems and cloud computing systems today are capable platforms for addressing both the computational and data storage needs of big data mining and parallel knowledge discovery applications. These computing architectures are needed to run data analysis because complex data mining tasks involve data- and compute-intensive algorithms that require large, reliable and effective storage facilities together with high performance processors to get results in appropriate times. Now that data sources became very big and pervasive, reliable and effective programming tools and applications for data analysis are needed to extract value and find useful insights in them. New ways to correctly and proficiently compose different distributed models and paradigms are required and interaction between hardware resources and programming levels must be addressed. Users, professionals and scientists working in the area of big data need advanced data analysis programming models and tools coupled with scalable architectures to support the extraction of useful information from such massive repositories. 
The scalability of a parallel computing system is a measure of its capacity to reduce program execution time in proportion to the number of its processing elements (the Appendix introduces and discusses scalability in parallel systems in detail). According to this definition, scalable data analysis refers to the ability of a hardware/software parallel system to exploit increasing computing resources effectively in the analysis of (very) large datasets. Today complex analysis of real-world massive data sources requires using high-performance computing systems such as massively parallel machines or clouds. However, in the next years, as parallel technologies advance, Exascale computing systems will be exploited for implementing scalable big data analysis in all the areas of science and engineering [23]. To reach this goal, new design and programming challenges must be addressed and solved. The focus of the paper is on discussing current cloud-based design and programming solutions for data analysis and suggesting new programming requirements and approaches to be conceived for meeting big data analysis challenges on future Exascale platforms. Current cloud computing platforms and parallel computing systems represent two different technological solutions for addressing the computational and data storage needs of big data mining and parallel knowledge discovery applications. Indeed, parallel machines offer high-end processors with the main goal of supporting HPC applications, whereas cloud systems implement a computing model in which dynamically scalable virtualized resources are provided to users and developers as a service over the Internet. In fact, clouds do not mainly target HPC applications; they implement scalable computing and storage delivery platforms that can be adapted to the needs of different classes of people and organizations by exploiting the Service Oriented Architecture (SOA) approach. Clouds offer large facilities to many users who would otherwise be unable to own their own parallel/distributed computing systems to run applications and services. In particular, big data analysis applications requiring access to and manipulation of very large datasets with complex mining algorithms will significantly benefit from the use of cloud platforms. Although not many cloud-based data analysis frameworks are available today for end users, within a few years they will become common [29]. Some current solutions are based on open source systems, such as Apache Hadoop and Mahout, Spark and SciDB, while others are proprietary solutions provided by companies such as Google, Microsoft, EMC, Amazon, BigML, Splunk Hunk, and InsightsOne. As more such platforms emerge, researchers and professionals will port increasingly powerful data mining programming tools and frameworks to the cloud to exploit complex and flexible software models such as the distributed workflow paradigm. The growing utilization of the service-oriented computing model could accelerate this trend. From the definition of the big data term, which refers to datasets so large and complex that traditional hardware and software data processing solutions are inadequate to manage and analyze them, we can infer that conventional computer systems are not powerful enough to process and mine big data [28] and are not able to scale with the size of the problems to be solved. As mentioned before, to cope with the limits of sequential machines, advanced systems like HPC, clouds and even more scalable architectures are used today to analyze big data. 
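To make the notion of scalability defined at the start of this section slightly more concrete, the standard measures are speedup and efficiency; the short summary below is textbook material (an Amdahl-style bound), not taken from the paper's Appendix.

```latex
% Speedup and efficiency on p processing elements
S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}.

% If a fraction f of the work is inherently serial (Amdahl's law):
T(p) = T(1)\left(f + \frac{1-f}{p}\right)
\;\Longrightarrow\;
S(p) = \frac{1}{f + (1-f)/p} \;\longrightarrow\; \frac{1}{f} \quad (p \to \infty).

% Example: f = 0.05 gives S(1000) \approx 19.6, so even a 5% serial fraction
% caps useful parallelism long before Exascale-level processor counts.
```

This is precisely why the Exascale discussion that follows insists on minimizing serial bottlenecks, communication overhead and I/O stalls: with the very large processor counts involved, even small non-parallel fractions dominate the achievable speedup.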
Starting from this scenario, Exascale computing systems will represent the next computing step [1, 34]. The term Exascale systems refers to high performance computing systems capable of at least one exaFLOPS, so their implementation represents a very significant research and technology challenge. Their design and development are currently under investigation, with the goal of building by 2020 high-performance computers composed of a very large number of multi-core processors expected to deliver a performance of 10^18 operations per second. Cloud computing systems used today are able to store very large amounts of data; however, they do not provide the high performance expected from massively parallel Exascale systems. This is the main motivation for developing Exascale systems. Exascale technology will represent the most advanced model of supercomputers. They have been conceived for single-site supercomputing centers, not for distributed infrastructures such as multi-clouds or fog computing systems, which are aimed at decentralized computing and pervasive data management and which could be interconnected with Exascale systems used as a backbone for very large scale data analysis. The development of Exascale systems urgently requires addressing and solving issues and challenges at both the hardware and software levels. Indeed, it requires designing and implementing novel software tools and runtime systems able to manage a very high degree of parallelism, reliability and data locality in extreme scale computers [14]. New programming constructs and runtime mechanisms able to adapt to the most appropriate parallelism degree and communication decomposition for making data analysis tasks scalable and reliable are needed. Their dependence on parallelism grain size and data analysis task decomposition must be deeply studied. This is needed because parallelism exploitation depends on several features like parallel operations, communication overhead, input data size, I/O speed, problem size, and hardware configuration. Moreover, reliability and reproducibility are two additional key challenges to be addressed. Indeed, at the programming level, constructs for handling and recovering from communication, data access, and computing failures must be designed. At the same time, reproducibility in scalable data analysis requires rich information useful to assure similar results in environments that may change dynamically. All these factors must be taken into account in designing data analysis applications and tools that will be scalable on Exascale systems. Moreover, reliable and effective methods for storing, accessing and communicating data, intelligent techniques for massive data analysis, and software architectures enabling the scalable extraction of knowledge from data are needed [28]. To reach this goal, models and technologies enabling cloud computing systems and HPC architectures must be extended, adapted or completely changed to be reliable and scalable on the very large number of processors/cores that compose extreme scale platforms, and to support the implementation of clever data analysis algorithms that ought to be scalable and dynamic in resource usage. Exascale computing infrastructures will play the role of an extraordinary platform for addressing both the computational and data storage needs of big data analysis applications. However, as mentioned before, to have a complete scenario, efforts must be made to implement big data analytics algorithms, architectures, programming tools and applications on Exascale systems [24]. 
Pursuing this objective, within a few years scalable data access and analysis systems will become the most used platforms for big data analytics on large-scale clouds. In a longer perspective, new Exascale computing infrastructures will appear as the platforms for big data analytics in the next decades, and data mining algorithms, tools and applications will be ported to such platforms for implementing extreme data discovery solutions. In this paper we first discuss cloud-based scalable data mining and machine learning solutions, then we examine the main research issues that must be addressed for implementing massively parallel data mining applications on Exascale computing systems. Data-related issues are discussed together with communication, multi-processing, and programming issues. Section II introduces issues and systems for scalable data analysis on clouds and Section III discusses design and programming issues for big data analysis in Exascale systems. Section IV concludes the paper, also outlining some open design challenges. Data analysis on clouds Clouds implement elastic services, scalable performance and scalable data storage used by a large and ever-increasing number of users and applications [2, 12]. In fact, clouds enlarged the arena of distributed computing systems by providing advanced Internet services that complement and complete the functionalities of distributed computing provided by the Web, Grid systems and peer-to-peer networks. In particular, most cloud computing applications use big data repositories stored within the cloud itself, so in those cases large datasets are analyzed with low latency to effectively extract data analysis models. Big data is a new and over-used term that refers to massive, heterogeneous, and often unstructured digital content that is difficult to process using traditional data management tools and techniques. The term includes the complexity and variety of data and data types, real-time data collection and processing needs, and the value that can be obtained by smart analytics. However, we should recognize that data are not necessarily important per se, but they become very important if we are able to extract value from them; that is, if we can exploit them to make discoveries. The extraction of useful knowledge from big digital datasets requires smart and scalable analytics algorithms, services, programming tools, and applications. All these solutions, needed to find insights in big data, will contribute to making big data really useful for people. The growing use of service-oriented computing is accelerating the use of cloud-based systems for scalable big data analysis. Developers and researchers are adopting the three main cloud models, software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS), to implement big data analytics solutions in the cloud [27, 31]. According to a specialization of these three models, data analysis tasks and applications can be offered as services at the infrastructure, platform or software level and made available at any time and from everywhere. A methodology for implementing them defines a new model stack to deliver data analysis solutions that is a specialization of the XaaS (Everything as a Service) stack and is called Data Analysis as a Service (DAaaS). It adapts and specifies the three general service models (SaaS, PaaS and IaaS), for supporting the structured development of Big Data analysis systems, tools and applications according to a service-oriented approach. 
The DAaaS methodology is then based on the three basic models for delivering data analysis services at different levels, as described here (see also Fig. 1): Data analysis infrastructure as a service (DAIaaS). This model provides a set of hardware/software virtualized resources that developers can assemble and use as an integrated infrastructure where they can store large datasets, run data mining applications and/or implement data analytics systems from scratch; Data analysis platform as a service (DAPaaS). This model defines a supporting software platform that developers can use for programming and running their data analytics applications or extending existing ones without worrying about the underlying infrastructure or specific distributed architecture issues; and Data analysis software as a service (DASaaS). This is a higher-level model that offers to end users data mining algorithms, data analysis suites or ready-to-use knowledge discovery applications as Internet services that can be accessed and used directly through a Web browser. According to this approach, every data analysis software component is provided as a service, so that end users do not have to worry about implementation and execution details. The three models of the DAaaS software methodology. The DAaaS software methodology is based on three basic models for delivering data analysis services at different levels (application, platform, and infrastructure). The DAaaS methodology defines a new model stack to deliver data analysis solutions that is a specialization of the XaaS (Everything as a Service) stack and is called Data Analysis as a Service (DAaaS). It adapts and specifies the three general service models (SaaS, PaaS and IaaS), for supporting the structured development of Big Data analysis systems, tools and applications according to a service-oriented approach. Cloud-based data analysis tools Using the DASaaS methodology we designed a cloud-based system, the Data Mining Cloud Framework (DMCF) [17], which supports three main classes of data analysis and knowledge discovery applications: Single-task applications, in which a single data mining task such as classification, clustering, or association rules discovery is performed on a given dataset; Parameter-sweeping applications, in which a dataset is analyzed by multiple instances of the same data mining algorithm with different parameters; and Workflow-based applications, in which knowledge discovery applications are specified as graphs linking together data sources, data mining tools, and data mining models. DMCF includes a large variety of processing patterns to express knowledge discovery workflows as graphs whose nodes denote resources (datasets, data analysis tools, mining models) and whose edges denote dependencies among resources. A Web-based user interface allows users to compose their applications and submit them for execution to the Cloud platform, following the data analysis software as a service approach. Visual workflows can be programmed in DMCF through a language called VL4Cloud (Visual Language for Cloud), whereas script-based workflows can be programmed with JS4Cloud (JavaScript for Cloud), a JavaScript-based language for data analysis programming. Figure 2 shows a sample data mining workflow composed of several sequential and parallel steps. It is just an example for presenting the main features of the VL4Cloud programming interface [17]. The example workflow analyses a dataset by using n instances of a classification algorithm, which work on n portions of the training set and generate the same number of knowledge models. By using the n generated models and the test set, n classifiers produce in parallel n classified datasets (n classifications). In the final step of the workflow, a voter generates the final classification by assigning a class to each data item, choosing the class predicted by the majority of the models. 
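For readers who want a concrete feel for the pattern just described, the sketch below reproduces the same logic (n classifiers trained on n partitions of the training set, followed by a majority-vote step) in plain Python with scikit-learn. It is only an illustration of the workflow's semantics, not DMCF or JS4Cloud code; in DMCF each bracketed step would run as a separate parallel task on cloud virtual machines.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy dataset standing in for the workflow's input dataset.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n = 4  # number of parallel classifier instances in the workflow
models = []
for part_X, part_y in zip(np.array_split(X_train, n), np.array_split(y_train, n)):
    # Each training task sees only its own partition of the training set.
    models.append(DecisionTreeClassifier(random_state=0).fit(part_X, part_y))

# Each classifier task labels the whole test set with its own model.
predictions = np.stack([m.predict(X_test) for m in models])   # shape (n, n_test)

# Voter task: majority vote over the n predicted labels for each item.
final = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, predictions)
print(f"majority-vote accuracy: {(final == y_test).mean():.3f}")
```

The point of the pattern in the cloud setting is that the n training and classification tasks are mutually independent, so they can be scheduled on separate virtual machines; the voter is the only synchronization point of the workflow.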
The example workflow analyses a dataset by using n instances of a classification algorithm, which work on n portions of the training set and generate the same number of knowledge models. By using the n generated models and the test set, n classifiers produce in parallel n classified datasets (n classifications). In the final step of the workflow, a voter generates the final classification by assigning a class to each data item, by choosing the class predicted by the majority of the models. A parallel classification workflow designed by the VL4Cloud programming interface. The figure shows a workflow designed by the VL4Cloud programming interface during its execution. The workflow implements a parallel classification application. Tasks/services included in square bracket are executed in parallel. The results produced by classifiers are selected by a voter task that produces the final classification Although DMCF has been mainly designed to coordinate coarse grain data and task parallelism in big data analysis applications by exploiting the workflow paradigm, the DMCF script-based programming interface (JS4Cloud) allows also for parallelizing fine-grain operations in data mining algorithms as it permits to program in a JavaScript style any data mining algorithm, such as classification, clustering and others. This can be done because loops and data parallel methods are run in parallel on the virtual machines of a Cloud [16, 26]. Like DMCF, other innovative cloud-based systems designed for programming data analysis applications are: Apache Spark, Sphere, Swift, Mahout, and CloudFlows. Most of them are open source. Apache Spark is an open-source framework developed at UC Berkeley for in-memory data analysis and machine learning [34]. Spark has been designed to run both batch processing and dynamic applications like streaming, interactive queries, and graph analysis. Spark provides developers with a programming interface centered on a data structure called the resilient distributed dataset (RDD), that is a read-only multi-set of data items distributed over a cluster of machines, that is maintained in a fault-tolerant way. Differently from other systems and from Hadoop, Spark stores data in memory and queries it repeatedly so as to obtain better performance. This feature can be useful for a future implementation of Spark on Exascale systems. Swift is a workflow-based framework for implementing functional data-driven task parallelism in data-intensive applications. The Swift language provides a functional programming paradigm where workflows are designed as a set of calls with associated command-line arguments and input and output files. Swift uses an implicit data-driven task parallelism [32]. In fact, it looks like a sequential language, but being a dataflow language, all variables are futures, thus execution is based on data availability. Parallelism can be also exploited through the use of the foreach statement. Swift/T is a new implementation of the Swift language for high-performance computing. In this implementation, a Swift program is translated into an MPI program that uses the Turbine and ADLB runtime libraries for scalable dataflow processing over MPI. Recently a porting of Swift/T on very large cloud systems for the execution of very many tasks has been investigated. DMCF, differently from the other frameworks discussed here, it is the only system that offers both a visual and a script-based programming interface. 
Visual programming is a very convenient design approach for high-level users, such as domain-expert analysts with a limited understanding of programming. On the other hand, script-based workflows are a useful paradigm for expert programmers, who can code complex applications rapidly, in a more concise way and with greater flexibility. Finally, the workflow-based model exploited in DMCF and Swift makes these frameworks more general than Spark, which offers a rather restricted set of programming patterns (e.g., map, filter and reduce), thus limiting the variety of data analysis applications that can be implemented with it. These and other related systems are currently used for the development of big data analysis applications on HPC and cloud platforms. However, additional research work in this field must be done and the development of new models, solutions and tools is needed [13, 24]. Just to mention a few, active and promising research topics are listed here, ordered by importance: Programming models for big data analytics. New abstract programming models and constructs hiding the system complexity are needed for big data analytics tools. The MapReduce model and workflow models are often used on HPC and clouds, but more research effort is needed to develop other scalable, adaptive, general-purpose higher-level models and tools. Research in this area is even more important for Exascale systems; in the next section we discuss some of these topics in Exascale computing. Reliability in scalable data analysis. As the number of processing elements increases, the reliability of systems and applications decreases; therefore, mechanisms for detecting and handling hardware and software faults are needed. Although it has been proved in [7] that no reliable communication protocol can tolerate crashes of the processors on which the protocol runs, the same paper indicates some ways in which systems can cope with this impossibility result. Among them, at the programming level it is necessary to design constructs for handling communication, data access, and computing failures and for recovering from them. Programming models, languages and APIs must provide general and data-oriented mechanisms for failure detection and isolation, preventing an entire application from failing and assuring its completion. Reliability is even more important in the Exascale domain, where the number of processing elements is massive and fault occurrence increases, making detection and recovery vital. Application reproducibility. Reproducibility is another open research issue for designers of complex applications running on parallel systems. Reproducibility in scalable data analysis must deal, for example, with data communication, parallel data manipulation and dynamic computing environments. Reproducibility demands that current data analysis frameworks (like those based on Map-Reduce and on workflows) and future ones, especially those implemented on Exascale systems, provide additional information and knowledge on how data are managed, on algorithm characteristics and on the configuration of software and execution environments. Data and tool integration and openness. Code coordination and data integration are main issues in large-scale applications that use data and computing resources. Standard formats, data exchange models and common APIs are needed to support interoperability and ease cooperation among design teams using different data formats and tools.
Interoperability of big data analytics frameworks. The service-oriented paradigm allows large-scale distributed applications to run on heterogeneous cloud platforms along with software components developed using different programming languages or tools. Cloud service paradigms must be designed to allow worldwide integration of multiple data analytics frameworks. Exascale and big data analysis As we discussed in the previous sections, data analysis has gained a primary role because of the very large availability of datasets and the continuous advancement of methods and algorithms for finding knowledge in them. Data analysis solutions advance by exploiting the power of data mining and machine learning techniques and are changing several scientific and industrial areas. For example, the amount of data that social media generate daily is impressive and continuous. Some hundreds of terabytes of data, including several hundreds of millions of photos, are uploaded daily to Facebook and Twitter. It is therefore central to design scalable solutions for processing and analyzing such massive datasets. As a general forecast, IDC experts estimate that the data generated worldwide will reach about 45 zettabytes by 2020 [6]. This impressive amount of digital data calls for scalable, high-performance data analysis solutions. However, today only about one-quarter of the available digital data would be a candidate for analysis, and about 5% of that is actually analyzed. By 2020, the useful percentage could grow to about 35%, thanks in part to data mining technologies. Extreme data sources and scientific computing Scalability and performance requirements are challenging conventional data stores, file systems and database management systems. The architectures of such systems have reached their limits in handling very large processing tasks involving petabytes of data because they have not been built to scale beyond a given threshold. This situation calls for new architectures and analytics platform solutions that can process big data to extract complex predictive and descriptive models [30]. Exascale systems, both from the hardware and the software side, can play a key role in supporting solutions to these problems [23]. An IBM study reports that we are generating around 2.5 exabytes of data per day (see footnote 1). Because of this continuous and explosive growth of data, many applications require the use of scalable data analysis platforms. A well-known example is the ATLAS detector at the Large Hadron Collider at CERN in Geneva. The ATLAS infrastructure has a capacity of 200 PB of disk and 300,000 cores, with more than 100 computing centers connected via 10 Gbps links. The data collection rate is very high and only a portion of the data produced by the collider is stored. Several teams of scientists run complex applications to analyze subsets of those huge volumes of data. This analysis would be impossible without a high-performance infrastructure that supports data storage, communication and processing. Computational astronomers are also collecting and producing larger and larger datasets each year that cannot be stored and processed without scalable infrastructures. Another significant case is the Energy Sciences Network (ESnet), the US Department of Energy's high-performance network managed by Berkeley Lab, which in late 2012 rolled out a 100 gigabits-per-second national network to accommodate the growing scale of scientific data. If we move from science to society, social data and e-health are good examples to discuss.
Social networks, such as Facebook and Twitter, have become very popular and are receiving increasing attention from the research community because, through the huge amount of user-generated data, they provide valuable information concerning human behavior, habits, and travels. When the volume of data to be analyzed is of the order of terabytes or petabytes (billions of tweets or posts), scalable storage and computing solutions must be used, but no clear solutions exist today for the analysis of Exascale datasets. The same occurs in the e-health domain, where huge amounts of patient data are available and can be used for improving therapies, for forecasting and tracking of health data, and for the management of hospitals and health centers. Very complex data analysis in this area will need novel hardware/software solutions; however, Exascale computing remains promising in other scientific fields where scalable storage systems and databases are not used or required. Examples of scientific disciplines where future Exascale computing will be extensively used are quantum chromodynamics, materials simulation, molecular dynamics, materials design, earthquake simulations, subsurface geophysics, climate forecasting, nuclear energy, and combustion. All those applications require the use of sophisticated models and algorithms to solve complex equation systems that will benefit from the exploitation of Exascale systems. Programming model features for exascale data analysis Implementing scalable data analysis applications on Exascale computing systems is a very complex job that requires high-level, fine-grain parallel models, appropriate programming constructs, and skills in parallel and distributed programming. In particular, mechanisms and expertise are needed for expressing task dependencies and inter-task parallelism, for designing synchronization and load balancing mechanisms, for handling failures, and for properly managing distributed memory and concurrent communication among a very large number of tasks. Moreover, when the target computing infrastructures are heterogeneous and require different libraries and tools to program applications on them, the programming issues become even more complex. To cope with some of these issues in data-intensive applications, different scalable programming models have been proposed [5]. Scalable programming models may be categorized by (i) their level of abstraction, i.e., whether they expose high-level or low-level programming mechanisms, and (ii) how they allow programmers to develop applications, i.e., using visual or script-based formalisms. Using high-level scalable models, a programmer defines only the high-level logic of an application while hiding the low-level details that are not essential for application design, including infrastructure-dependent execution details. The programmer is assisted in application definition, and application performance depends on the compiler that analyzes the application code and optimizes its execution on the underlying infrastructure. On the other hand, low-level scalable models allow programmers to interact directly with the computing and storage elements composing the underlying infrastructure and thus to define the application's parallelism directly. Data analysis applications implemented with some frameworks can be programmed through a visual interface, which is a convenient design approach for high-level users, for instance domain-expert analysts with a limited understanding of programming.
In addition, a visual representation of workflows or components intrinsically captures parallelism at the task level, without the need to make parallelism explicit through control structures [14]. Visual data analysis is typically implemented by providing workflow-based languages or component-based paradigms (Fig. 3). Dataflow-based approaches, which share the same application structure as workflows, are also used. However, in dataflow models the grain of parallelism and the size of data items are generally smaller than in workflows. In general, visual programming tools are not very flexible because they often implement a limited set of visual patterns and provide restricted ways of configuring them. To address this issue, some visual languages give users the possibility of customizing the behavior of patterns by adding code that specifies the operations executed by a specific pattern when an event occurs. Main visual and script-based programming models used today for data analysis programming. On the other hand, code-based (or script-based) formalisms allow users to program complex applications more rapidly, in a more concise way, and with higher flexibility [16]. Script-based applications can be designed in different ways (see Fig. 3): With a complete language or a language extension that allows parallelism to be expressed in applications, according to a general-purpose or a domain-specific approach. This approach requires the design and implementation of a new parallel programming language, or of a complete set of data types and parallel constructs to be fully inserted into an existing language. With annotations in the application code that allow the compiler to identify which instructions will be executed in parallel. According to this approach, parallel statements are separated from sequential constructs and are clearly identified in the program code because they are denoted by special symbols or keywords. Using a library in the application code that adds parallelism to the data analysis application. Currently this is the most used approach since it is orthogonal to the host language; MPI and MapReduce are two well-known examples (a minimal sketch of this library-based approach is shown below). Given the variety of data analysis applications and classes of users (from skilled programmers to end users) that can be envisioned for future Exascale systems, there is a need for scalable programming models with different levels of abstraction (high-level and low-level) and different design formalisms (visual and script-based), according to the classification outlined above. As we discussed, data-intensive applications are software programs that have a significant need to process large volumes of data [9]. Such applications devote most of their processing time to running I/O operations and to exchanging and moving data among the processing elements of a parallel computing infrastructure. Parallel processing in data analysis applications typically involves accessing, pre-processing, partitioning, distributing, aggregating, querying, mining, and visualizing data that can be processed independently. The main challenges for programming data analysis applications on Exascale computing systems come from potential scalability, network latency and reliability, reproducibility of data analysis, and resilience of the mechanisms and operations offered to developers for accessing, exchanging and managing data.
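As a concrete illustration of the library-based approach mentioned above, the following minimal sketch uses mpi4py (a Python binding of MPI) to partition a dataset among processes, compute a local partial result on each process, and combine the partial results with a reduction; the dataset and the operation are invented placeholders, not taken from any of the cited systems.

# Minimal sketch of the library-based approach (MPI via mpi4py).
# Run with, e.g.: mpiexec -n 4 python partial_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# The root process creates and partitions the dataset (here, plain numbers).
if rank == 0:
    data = list(range(1_000_000))
    chunks = [data[i::size] for i in range(size)]
else:
    chunks = None

# Each process receives its own partition and computes a local result.
local_chunk = comm.scatter(chunks, root=0)
local_sum = sum(local_chunk)

# Partial results are combined with a reduction on the root process.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("global sum:", total)

The same partition/compute/reduce structure underlies MapReduce-style libraries, which is why both approaches can be added to a host language without changing the language itself.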
Indeed, processing very large data volumes requires operations and new algorithms able to scale in loading, storing, and processing massive amounts of data, which generally must be partitioned into very small data grains on which thousands to millions of simple parallel operations perform the analysis. Exascale programming systems Exascale systems force new requirements on programming systems to target platforms with hundreds of homogeneous and heterogeneous cores. Evolutionary models have recently been proposed for Exascale programming that extend or adapt traditional parallel programming models like MPI (e.g., EPiGRAM [15], which uses a library-based approach, and Open MPI for Exascale in the ECP initiative), OpenMP (e.g., OmpSs [8], which exploits an annotation-based approach, and the SOLLVE project), and MapReduce (e.g., Pig Latin [22], which implements a domain-specific complete language). These new frameworks limit the communication overhead in message passing paradigms or limit the synchronization control if a shared-memory model is used [11]. As Exascale systems are likely to be based on large distributed memory hardware, MPI is one of the most natural programming systems. MPI is currently used on about one million cores; it is therefore reasonable to expect MPI to be one of the programming paradigms used on Exascale systems. The same holds for MapReduce-based libraries, which today run on very large HPC and cloud systems. Both these paradigms are largely used for implementing Big Data analysis applications. As expected, general MPI all-to-all communication does not scale well in Exascale environments; to solve this issue, new MPI releases introduced neighbor collectives to support sparse "all-to-some" communication patterns that limit data exchange to limited regions of processors [11]. Ensuring the reliability of Exascale systems requires a holistic approach including several hardware and software technologies for both predicting crashes and keeping systems stable despite failures. In the runtimes of parallel APIs like MPI and of MapReduce-based libraries like Hadoop, if incorrect behavior in the case of processor failure is to be avoided, a reliable communication layer must be provided on top of the lower, unreliable layer by implementing a correct protocol that works safely with every implementation of the unreliable layer, bearing in mind that no such protocol can tolerate crashes of the processors on which it runs. Concerning MapReduce frameworks, reference [18] reports on an adaptive MapReduce framework, called P2P-MapReduce, which has been developed to manage node churn, master node failures, and job recovery in a decentralized way, so as to provide a more reliable MapReduce middleware that can be effectively exploited in dynamic large-scale infrastructures. On the other hand, new complete languages such as X10 [29], ECL [33], UPC [21], Legion [3], and Chapel [4] have been defined by adopting a data-centric approach. Furthermore, new APIs based on a revolutionary approach, such as GA [20] and SHMEM [19], have been implemented according to a library-based model. These novel parallel paradigms are devised to address the requirements of data processing using massive parallelism. In particular, languages such as X10, UPC, and Chapel, and the GA library, are based on a partitioned global address space (PGAS) memory model that is suited to implementing data-intensive Exascale applications because it uses private data structures and limits the amount of shared data among parallel threads.
Together with different approaches, such as Pig Latin and ECL, those programming models, languages and APIs must be further investigated, designed and adapted to provide data-centric scalable programming models useful to support the reliable and effective implementation of Exascale data analysis applications composed of up to millions of computing units that process small data elements and exchange them with a very limited set of processing elements. PGAS-based models, data-flow and data-driven paradigms, and local-data approaches today represent promising solutions that could be used for Exascale data analysis programming. The APGAS model is, for example, implemented in the X10 language, where it is based on the notions of places and asynchrony. A place is an abstraction of shared, mutable data and worker threads operating on the data. A single APGAS computation can consist of hundreds or potentially tens of thousands of places. Asynchrony is implemented by a single block-structured control construct, async. Given a statement ST, the construct async ST executes ST in a separate thread of control. Memory locations in one place can contain references to locations at other places. To compute upon data at another place, the at(p) ST statement must be used. It allows the task to change its place of execution to p, executes ST at p and returns, leaving behind tasks that may have been spawned during the execution of ST. Another interesting language based on the PGAS model is Chapel [4]. Its locality mechanisms can be effectively used for scalable data analysis where light data mining (sub-)tasks are run on local processing elements and partial results must be exchanged. Chapel data locality provides control over where data values are stored and where tasks execute, so that developers can ensure that parallel data analysis computations execute near the variables they access, or vice versa, in order to minimize communication and synchronization costs. For example, Chapel programmers can specify how domains and arrays are distributed amongst the system nodes. Another appealing feature of Chapel is the expression of synchronization in a data-centric style. By associating synchronization constructs with data (variables), locality is enforced and data-driven parallelism can be easily expressed also at very large scale. In Chapel, locales and domains are abstractions for referring to machine resources and for mapping tasks and data to them. Locales are language abstractions for naming a portion of a target architecture (e.g., a GPU, a single core or a multicore node) that has processing and storage capabilities. A locale specifies where (on which processing node) to execute tasks/statements/operations. For example, in a system composed of 4 locales declared as const Locs: [4] locale; we can execute the method Filter(D) on the first locale with on Locs[0] do Filter(D); and execute the K-means algorithm on the 4 locales with forall lc in Locs do on lc do Kmeans(); Whereas locales are used to map tasks to machine nodes, domain maps are used for mapping data to a target architecture.
Here is a simple example of the declaration of a rectangular domain: const D: domain(2) = {1..n, 1..n}; Domains can also be mapped to locales. Similar concepts (logical regions and mapping interfaces) are used in the Legion programming model [3, 5]. Exascale programming is a strongly evolving research field and it is not possible to discuss in detail all the programming models, languages and libraries that are contributing features and mechanisms useful for exascale data analysis application programming. However, the next section introduces, discusses and classifies current programming systems for Exascale computing according to the most used programming and data management models. Exascale programming systems comparison As mentioned, several parallel programming models, languages and libraries are under development to provide high-level programming interfaces and tools for implementing high-performance applications on future Exascale computers. Here we introduce the most significant proposals and discuss their main features. Table 1 lists and classifies the considered systems and summarizes some pros and cons of the different classes. Table 1 Exascale programming systems classification Since Exascale systems will be composed of millions of processing nodes, distributed memory paradigms, and message passing systems in particular, are candidate tools to be used as programming systems for this class of systems. In this area, MPI is currently the most used and studied system. Different adaptations of this well-known model are under development, such as, for example, Open MPI for Exascale. Other systems based on distributed memory programming are Pig Latin, Charm++, Legion, PaRSEC, Bulk Synchronous Parallel (BSP), AllScale API, and Enterprise Control Language (ECL). Just considering Pig Latin, we can notice that some of its parallel operators, such as FILTER, which selects a set of tuples from a relation based on a condition, and SPLIT, which partitions a relation into two or more relations, can be very useful in many highly parallel big data analysis applications. On the other side are shared-memory models, where the major system is OpenMP, which offers a simple parallel programming model although it does not provide mechanisms to explicitly map and control data distribution, and it includes non-scalable synchronization operations that make its implementation on massively parallel systems very challenging. Other programming systems in this area are Threading Building Blocks (TBB), OmpSs, and Cilk++. The OpenMP synchronization model, based on locks, atomic operations and sequential sections that limit parallelism exploitation on Exascale systems, is being modified and integrated in recent OpenMP implementations with new techniques and routines that increase asynchronous operation and parallelism exploitation. A similar approach is used in Cilk++, which supports parallel loops and hyperobjects, a new construct designed to solve data race problems created by parallel accesses to global variables. In fact, a hyperobject allows multiple tasks to share state without race conditions and without using explicit locks. As a tradeoff between distributed and shared memory organizations, the Partitioned Global Address Space (PGAS) model has been designed to implement a global memory address space that is logically partitioned, with portions of it local to single processes.
The main goal of the PGAS model is to limit data exchange and isolate failures in very large-scale systems. Languages and libraries based on PGAS are Unified Parallel C (UPC), Chapel, X10, Global Arrays (GA), Co-Array Fortran (CAF), DASH, and SHMEM. PGAS appears to be suited for implementing data-intensive exascale applications because it uses private data structures and limits the amount of shared data among parallel threads. Its memory-partitioning model facilitates failure detection and resilience. Another programming mechanism useful for decentralized data analysis is related to data synchronization. In the SHMEM library it is implemented through the shmem_barrier operation that performs a barrier operation on a subset of processing elements, then enables them to go further by sharing synchronized data. Starting from those three main programming approaches, hybrid systems have been proposed and developed to better map application tasks and data onto hardware architectures of Exascale systems. In hybrid systems that combine distributed and shared memory, message-passing routines are used for data communication and inter-node processing whereas shared-memory operations are used for exploiting intranode parallelism. A major example in this area is given by the different MPI + OpenMP systems recently implemented. Hybrid systems have been also designed by combining message passing models, like MPI, with PGAS models for restricting data communication overhead and improving MPI efficiency in execution time and memory consumption. The PGAS-based MPI implementation EMPI4Re, developed in the EPiGRAM project, is an example of this class of hybrid systems. Associated to the programming model issues, a set of challenges concern the design of runtime systems that in exascale computing systems must be tightly integrated with the programming tools level. The main challenges for runtime systems obviously include parallelism exploitation, limited data communication, data dependence management, data-aware task scheduling, processor heterogeneity, and energy efficiency. However, together with those main issues, other aspects are addressed in runtime systems like storage/memory hierarchies, storage and processor heterogeneity, performance adaptability, resource allocation, performance analysis, and performance portability. In addressing those issues the currently used approaches aim at providing simplified abstractions and machine models that allow algorithm developers and application programmers to generate code that can run and scale on a wide range of exascale computing systems. This is a complex task that can be achieved by exploiting techniques that allow the runtime system to cooperate with the compiler, the libraries and the operating system to find integrated solutions and make smarter use of hardware resources by efficient ways to map the application code to the exascale hardware. Finally, due to the specific features of exascale hardware, runtime systems need to find methods and techniques that allow bringing the computing system closer to the application requirements. Research work in this area is carried out in projects like XPRESS, StarPU, Corvette DEGAS, libWater [10], Traleika-Glacier, OmpSs [8], SnuCL, D-TEC, SLEEC, PIPER, and X-TUNE that are proposing innovative solutions for large-scale parallel computing systems that can be used in exascale machines. 
For instance, a system that aims at integrating the runtime with the language level is OmpSs, which provides mechanisms for data dependence management (based on DAG analysis, as in libWater) and for mapping tasks to computing nodes and handling processor heterogeneity (the target construct). Another issue to be taken into account in the interaction between the programming level and the runtime is performance and scalability monitoring. In the StarPU project, for example, performance feedback is provided through task profiling and trace analysis. In very large-scale high-performance machines and in Exascale systems, the runtime systems are more complex than in traditional parallel computers. In fact, performance and scalability issues must be addressed at the inter-node runtime level and they must be appropriately integrated with intra-node runtime mechanisms [25]. All these issues relate to system and application scalability. In fact, vertical scaling of systems with multicore parallelism within a single node must also be addressed. Scalability is still an open issue in Exascale systems, also because speed-up requirements for system software and runtimes are much higher than in traditional HPC systems, and different portions of code in applications or runtimes can generate performance bottlenecks. Concerning application resiliency, the runtime of Exascale systems must include mechanisms for restarting tasks and accessing data in case of software or hardware faults without requiring developer involvement. Traditional approaches for providing reliability in HPC include checkpointing and restart (see for instance MPI_Checkpoint), reliable data storage (through file and in-memory replication or double buffering), and message logging for minimizing the checkpointing overhead. In fact, whereas the global checkpointing/restart technique is the one most used to limit system/application faults, in the Exascale scenario new mechanisms with low overhead and high scalability must be designed. These mechanisms should limit task and data duplication through smart approaches for selective replication. For example, silent data corruption (SDC) is recognized to be a critical problem in Exascale computing. However, although replication is useful, its inherent inefficiency must be limited. Research work is being carried out in this area to define techniques that limit replication costs while offering protection from SDC. For application/task checkpointing, instead of checkpointing the entire address space of the application, as occurs in OpenMP and MPI, the minimal task state that needs to be checkpointed for fault recovery must be identified, thus limiting data size and recovery overhead. Requirements of exascale runtime for data analysis One of the most important aspects to ponder in applications that run on Exascale systems and analyze big datasets is the tradeoff between sharing data among processing elements and computing locally, to reduce communication and energy costs while keeping performance and fault-tolerance levels. A scalable programming model founded on basic operations for data-intensive/data-driven applications must include mechanisms and operations for: Parallel data access, which allows data access bandwidth to be increased by partitioning data into multiple chunks, according to different methods, and accessing several data elements in parallel to meet high-throughput requirements. Fault resiliency, which is a major issue as machines expand in size and complexity.
On Exascale systems with a huge number of processes, non-local communication must be prepared for a potential failure of one of the communicating sides; runtimes must feature failure handling mechanisms for recovering from node and communication faults. Data-driven local communication, which is useful to limit the data exchange overhead in massively parallel systems composed of many cores; in this case data availability among neighbor nodes dictates the operations taken by those nodes. Data processing on limited groups of cores, which allows data analysis operations involving limited sets of cores and large amounts of data to be concentrated on localities of Exascale machines, facilitating a type of data affinity that co-locates related data and computation. Near-data synchronization, to limit the overhead generated by synchronization mechanisms and protocols that involve several far-away cores in keeping data up-to-date. In-memory querying and analytics, needed to reduce query response times and the execution time of analytics operations by caching large volumes of data in the computing nodes' RAM and issuing queries and other operations in parallel on the main memory of the computing nodes. Group-level data aggregation, which in parallel systems is useful for efficient summarization, graph traversal and matrix operations, and is therefore of great importance in programming models for data analysis on massively parallel systems. Locality-based data selection and classification, for limiting the latency of basic data analysis operations running in parallel on large-scale machines, so that the subsets of data needed together in a given phase are locally available (in a subset of nearby cores). A reliable and high-level programming model and its associated runtime must be able to manage and provide implementation solutions for those operations, together with the reliable exploitation of a very large amount of parallelism. Real-world big data analysis applications cannot be practically solved on sequential machines. If we refer to real-world applications, every large-scale data mining and machine learning software system under development today in the areas of social data analysis and bioinformatics will certainly benefit from the availability of Exascale computing systems and from the use of Exascale programming environments that offer massive and adaptive-grain parallelism, data locality, and local communication and synchronization mechanisms, together with the other features discussed in the previous sections that are needed for reducing execution time and making the solution of new problems and challenges feasible. For example, in bioinformatics applications parallel data partitioning is a key feature for running statistical analysis or machine learning algorithms on high-performance computing systems. After that, clever and complex data mining algorithms must be run on each single core/node of an Exascale machine on subsets of data to produce data models in parallel. When partial models are produced, they can be checked locally and must be merged among nearby processors to obtain, for example, a general model of gene expression correlations or of drug-gene interactions (a small sketch of this partial-model merging pattern is given below). Therefore, for those applications, data locality, highly parallel correlation algorithms, and limited communication structures are very important to reduce execution time from several days to a few minutes.
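As a minimal illustration of the partial-model merging pattern just described (purely an illustrative sketch with invented data, not the actual bioinformatics pipelines mentioned above), the following Python code builds a small summary (count, mean and sum of squared deviations) on each data partition and then merges the summaries into a global model, so that only compact partial models, never raw data, would have to be exchanged between processors.

# Illustrative sketch: per-partition summaries that can be merged without exchanging raw data.
from dataclasses import dataclass

@dataclass
class PartialStats:
    n: int = 0         # number of items seen
    mean: float = 0.0  # running mean
    m2: float = 0.0    # sum of squared deviations from the mean

def summarize(partition):
    """Build a partial model (summary) from one local data partition."""
    s = PartialStats()
    for x in partition:
        s.n += 1
        delta = x - s.mean
        s.mean += delta / s.n
        s.m2 += delta * (x - s.mean)
    return s

def merge(a, b):
    """Merge two partial models; only summaries are exchanged, never raw data."""
    if a.n == 0:
        return b
    n = a.n + b.n
    delta = b.mean - a.mean
    mean = a.mean + delta * b.n / n
    m2 = a.m2 + b.m2 + delta * delta * a.n * b.n / n
    return PartialStats(n, mean, m2)

# Partitions that would live on different cores/nodes in a real run.
partitions = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0, 7.0, 8.0, 9.0]]
partials = [summarize(p) for p in partitions]   # computed in parallel in practice

global_stats = PartialStats()
for p in partials:                              # tree or neighbor merging in practice
    global_stats = merge(global_stats, p)

print(global_stats.n, global_stats.mean, global_stats.m2 / (global_stats.n - 1))

The same mergeable-summary idea extends to richer models (e.g., co-occurrence or correlation matrices), which is what makes limited, local communication sufficient for this class of applications.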
Moreover, fault tolerance software mechanisms are also useful in long-running bioinformatics applications to avoid restarting them from the beginning when a software/hardware failure occurs. Moving to social media applications, the huge volumes of user-generated data in social media platforms, such as Facebook, Twitter and Instagram, are nowadays very precious sources from which to extract insights concerning human dynamics and behaviors. In fact, social media analysis is a fast-growing research area that will benefit from the use of Exascale computing systems. For example, social media users moving through a sequence of places in a city or a region may create a huge amount of geo-referenced data that includes extensive knowledge about human dynamics and mobility behaviors. A methodology for discovering the behavior and mobility patterns of users from social media posts and tweets includes a set of steps such as collection and pre-processing of geotagged items, organization of the input dataset, data analysis and trajectory mining algorithm execution, and results visualization. In all those data analysis steps, the use of scalable programming techniques and tools is vital to obtain practical results in feasible time when massive datasets are analyzed. The Exascale programming features and requirements discussed here and in the previous sections will be very useful in social data analysis for executing parallel tasks such as concurrent data acquisition (where data items are collected by parallel queries to different data sources), parallel data filtering and data partitioning through the exploitation of local and in-memory algorithms, and classification, clustering and association mining algorithms, which are very computing intensive and need a large number of processing elements working asynchronously to produce learning models from billions of posts containing text, photos and videos. The management and processing of the terabytes of data involved in those applications cannot be done efficiently without solving issues like data locality, near-data processing, large asynchronous execution and the other issues addressed in Exascale computing systems. Together with an accurate modeling of basic operations and of the programming languages/APIs that include them, supporting correct and effective data-intensive applications on Exascale systems will also require a significant programming effort from developers when they need to implement complex algorithms and data-driven applications such as those used, for example, in big data analysis and distributed data mining. Parallel and distributed data mining strategies, like collective learning, meta-learning, and ensemble learning, must be devised using fine-grain parallel approaches adapted to Exascale computers. Programmers must be able to design and implement scalable algorithms by using the operations sketched above, specifically adapted to those new systems. To reach this goal, a coordinated effort between the operation/language designers and the application developers would be very fruitful. In Exascale systems, the cost of accessing, moving, and processing data across a parallel system is enormous [24, 30]. This requires mechanisms, techniques and operations for efficient data access, placement and querying. In addition, scalable operations must be designed in such a way as to avoid global synchronization, centralized control and global communication.
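As a toy illustration of the parallel filtering and partitioned counting mentioned in the social media example above (an illustrative sketch only: the records, bounding box and place names are invented, and a real pipeline would run on a distributed framework rather than local threads), the following Python code filters geotagged posts by a region of interest and counts visits per place on separate partitions before merging the per-partition counters.

# Illustrative sketch: filter geotagged posts and count visits per place, per partition.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Invented toy records; a real application would read billions of posts.
posts = [
    {"user": "u1", "place": "museum", "lat": 39.30, "lon": 16.25},
    {"user": "u2", "place": "park",   "lat": 39.31, "lon": 16.26},
    {"user": "u1", "place": "museum", "lat": 39.30, "lon": 16.25},
    {"user": "u3", "place": "cafe",   "lat": 40.85, "lon": 14.27},
]

BBOX = (39.0, 40.0, 16.0, 17.0)  # lat_min, lat_max, lon_min, lon_max (assumed region)

def count_partition(partition):
    """Filter one partition by the bounding box and count visits per place."""
    lat_min, lat_max, lon_min, lon_max = BBOX
    kept = (p for p in partition
            if lat_min <= p["lat"] <= lat_max and lon_min <= p["lon"] <= lon_max)
    return Counter(p["place"] for p in kept)

# Partition the input and process the partitions concurrently (task parallelism).
n_parts = 2
partitions = [posts[i::n_parts] for i in range(n_parts)]
with ThreadPoolExecutor(max_workers=n_parts) as pool:
    partial_counts = list(pool.map(count_partition, partitions))

# Merge the per-partition counters into the global result.
total = Counter()
for c in partial_counts:
    total.update(c)
print(total.most_common())

Because each partition is filtered and counted locally and only small counters are merged, the pattern respects the data locality and limited-communication requirements discussed above.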
Many data scientists want to be abstracted away from these tricky, lower level, aspects of HPC until at least they have their code working and then potentially to tweak communication and distribution choices in a high level manner in order to further tune their code. Interoperability and integration with the MapReduce model and MPI must be investigated with the main goal of achieving scalability on large-scale data processing. Different data-driven abstractions can be combined for providing a programming model and an API that allow the reliable and productive programming of very large-scale heterogeneous and distributed memory systems. In order to simplify the development of applications in heterogeneous distributed memory environments, large-scale data-parallelism can be exploited on top of the abstraction of n-dimensional arrays subdivided in partitions, so that different array partitions are placed on different cores/nodes that will process in parallel the array partitions. This approach can allow the computing nodes to process in parallel data partitions at each core/node using a set of statements/library calls that hide the complexity of the underlying process. Data dependency in this scenario limits scalability, so it should be avoided or limited to a local scale. Abstract data types provided by libraries, so that they can be easily integrated in existing applications, should support this abstraction. As we mentioned above, another issue is the gap between users with HPC needs and experts with the skills to make the most of these technologies. An appropriate directive-based approach can be to design, implement and evaluate a compiler framework that allows generic translations from high-level languages to Exascale heterogeneous platforms. A programming model should be designed at a level that is higher than that of standards, such as OpenCL, including also checkpointing and fault resiliency. Efforts must be carried out to show the feasibility of transparent checkpointing of Exascale programs and quantitatively evaluate the runtime overhead. Approaches like CheCL show that it is possible to enable transparent checkpoint and restart, also in high-performance and dependable GPU computing including support for process migration among different processors such as a CPU and a GPU. The model should enable the rapid development with reduced effort for different heterogeneous platforms. These heterogeneous platforms need to include low energy architectures and mobile devices. The new model should allow a preliminary evaluation of results on the target architectures. Concluding remarks and future work Cloud-based solutions for big data analysis tools and systems are in an advanced phase both on the research and the commercial sides. On the other hand, new Exascale hardware/software solutions must be studied and designed to allow the mining of very large-scale datasets on those new platforms. Exascale systems raise new requirements on application developers and programming systems to target architectures composed of a very large number of homogeneous and heterogeneous cores. General issues like energy consumption, multitasking, scheduling, reproducibility, and resiliency must be addressed together with other data-oriented issues like data distribution and mapping, data access, data communication and synchronization. 
Programming constructs and runtime systems will play a crucial role in enabling future data analysis programming models, runtime models and hardware platforms to address these challenges, and in supporting the scalable implementation of real big data analysis applications. In particular, here we summarize a set of open design challenges that are critical for designing Exascale programming systems and for their scalable implementation. The following design choices, among others, must be taken into account: Application reliability: Data analysis programming models must include constructs and/or mechanisms for handling task and data access failures and for recovering from them. As data analysis platforms grow ever larger, fully reliable operation cannot be implicitly assumed and this assumption becomes less credible; therefore, explicit solutions must be proposed. Reproducibility requirements: Big data analysis running on massively parallel systems demands reproducibility. New data analysis programming frameworks must collect and generate metadata and provenance information about algorithm characteristics, software configuration and execution environment to support application reproducibility on large-scale computing platforms. Communication mechanisms: Novel approaches must be devised for coping with network unreliability [7] and network latency, for example by expressing asynchronous data communication and locality-based data exchange/sharing. Communication patterns: A correct paradigm design should include communication patterns that allow application-dependent features and data access models, limit data movement, and reduce the burden on Exascale runtimes and interconnects. Data handling and sharing patterns: Data locality mechanisms/constructs, such as near-data computing, must be designed and evaluated on big data applications in which subsets of data are stored in nearby processors, while avoiding that locality is imposed when data must be moved. Other challenges concern data affinity control, data querying (NoSQL approaches), and global data distribution and sharing patterns. Data-parallel constructs: Useful models like data-driven/data-centric constructs, dataflow parallel operations, independent data parallelism, and SPMD patterns must be deeply considered and studied. Grain of parallelism: options from very fine-grain to process-grain parallelism must be analyzed, also in combination with the different degrees of parallelism that Exascale hardware supports. Perhaps different grain sizes should be considered in a single model to address hardware needs and heterogeneity. Finally, since big data mining algorithms often require the exchange of raw data or, better, of mining parameters and partial models, achieving scalability and reliability on thousands of processing elements requires that metadata-based information, limited-communication programming mechanisms, and partition-based data structures with associated parallel operations be proposed and implemented.
https://www.ibm.com/annualreport/2013/bin/assets/2013_ibm_annual.pdf APGAS: Asynchronous partitioned global address space BSP: Bulk Synchronous Parallel CAF: Co-Array Fortran DAaaS: Data Analysis as a Service DAIaaS: Data analysis infrastructure as a service DAPaaS: Data analysis platform as a service DASaaS: Data analysis software as a service DMCF: Data Mining Cloud Framework Enterprise Control Language ESnet: Energy Sciences Network GA: Global Array HPC: IaaS: Infrastructure as a service JS4Cloud: JavaScript for Cloud PaaS: PGAS: Partitioned global address space RDD: resilient distributed dataset SaaS: SOA: Service Oriented Computing TBB: Threading Building Blocks VL4Cloud: Visual Language for Cloud XaaS: Amarasinghe S et al (2009) Exascale software study: software challenges in extreme-scale systems. In: Defense Advanced Research Projects Agency. Arlington, VA, USA Armbrust M et al (2010) A view of cloud computing. Commun ACM 53(4 (April 2010)):50–58 Bauer M et al (2012) Legion: Expressing locality and independence with logical regions. In: Proc. International Conference on Supercomputing. IEEE CS Press Bradford L, Chamberlain CD, Zima HP (2007) Parallel programmability and the chapel language. International Journal of High Performance Computing Applications 21(3):291–312 Diaz J, Munoz-Caro C, Nino A (2012) A survey of parallel programming models and tools in the multi and many-core era. IEEE Trans Parallel Distributed Systems 23(8):1369–1386 http://www.emc.com/leadership/digital-universe/2014iview/executive-summary.htm Fekete A, Lynch N, Mansour Y, Spinelli J (1993) The impossibility of implementing reliable communication in the face of crashes. Journal of ACM 40(5):1087–1107 Fernandez A et al (2014) Task-based programming with OmpSs and its application. In: Proc. euro-par 2014: parallel processing workshops, pp 602–613 Gorton I, Greenfield P, Szalay AS, Williams R (2008) Data-intensive computing in the 21st century. Computer 41(4):30–32 Grasso I, Pellegrini S, Cosenza B, Fahringer T (2013) LibWater: heterogeneous distributed computing made easy. Procs of the 27th international ACM conference on International conference on supercomputing (ICS '13), New York, USA, pp 161–172 Gropp W, Snir M (2013) Programming for exascale computers. Computing in Science & Eng 15(6):27–35 Gu Y., Grossman R. L., Sector and Sphere: the design and implementation of a high-performance data cloud. Philosophical transactions Series A, Mathematical, physical, and engineering sciences 367 1897, 2429–2445, 2009 Lucas et al (2014) Top ten Exascale research challenges. In: Office of Science, U.S. Department of Energy, Washington, D.C Maheshwari K, Montagnat J (2010) Scientific workflow development using both visual and script-based representation. Proc. of the Sixth World Congress on Services SERVICES '10, Washington, DC, USA, pp 328–335 Markidis S et al (2016) The EPiGRAM project: preparing parallel programming models for exascale. In: Proc. ISC high performance 2016 international workshops, pp 56–58 Marozzo F, Talia D, Trunfio P (2015) JS4Cloud: script-based workflow programming for scalable data analysis on cloud platforms. Concurrency and Computation: Practice and Experience 27(17):5214–5237 Marozzo F, Talia D, Trunfio P (2013) A cloud framework for big data analytics workflows on azure. In: Catlett C, Gentzsch W, Grandinetti L, Joubert G, Vazquez-Poletti J (eds) Cloud Computing and Big Data. 
IOS press, advances in Parallel Computing, pp 182–191 Marozzo F, Talia D, Trunfio P (2012) P2P-MapReduce: parallel data processing in dynamic cloud environments. J Comput Syst Sci 78(5):1382–1402 Meswani MR et al (2012) Tools for benchmarking, tracing, and simulating SHMEM applications. In: Proceedings Cray user group conference (CUG) Nieplocha J (2006) Advances, applications and performance of the global arrays shared memory programming toolkit. International Journal of High Performance Computing Applications 20(2):203–231 Nishtala R et al (2011) Tuning collective communication for partitioned global address space programming models. Parallel Comput 37(9):576–591 Olston C et al (2008) Pig Latin: a not-so-foreign language for data processing. In: Proceedings SIGMOD '08. Vancouver, Canada, pp 1099–1110 Pectu, D. et al., On processing extreme data, Scalable Computing: Practice and Experience, 16, 4, 467–489, 2015 Reed DA, Dongarra J (2015) Exascale computing and big data. Commun ACM 58(7):56–68 Sarkar V, Budimlic Z, Kulkarni M (eds) (2016) Runtime systems report - 2104 runtime systems summit, U.S. Dept of Energy, USA Talia D (2013) Clouds for scalable big data analytics. Computer 46(5):98–101 Talia D, Trunfio P, Marozzo F (2015) Data Analysis in the Cloud - Models, Techniques and Applications. Elsevier, Amsterdam, Netherlands Talia D (2015) Making knowledge discovery services scalable on clouds for big data mining. In: Proc. 2nd IEEE International Conference on Spatial Data Mining and Geographical Knowledge Services (ICSDM), IEEE computer society press, pp 1–4 Tardieu O et al (2014) X10 and APGAS at petascale. In: Proc. of the ACM SIGPLAN symposium on principles and practice of parallel programming (PPoPP'14) U.S (2013) Department of Energy, Synergistic Challenges in Data-Intensive Science and Exascale Computing. In: Report of the advanced scientific computing advisory committee subcommittee Hwang K (2017) Cloud computing for machine learning and cognitive applications, MIT Press Wozniak JM, Wilde M, Foster IT Language features for scalable distributed-memory dataflow computing. In: . Proceedings of the workshop on data-flow execution models for extreme-scale computing at PACT, Edmonton, Canada, p 2014 Yoo A, Kaplan Y (2009) Evaluating use of data flow systems for large graph analysis. In: Proceedings of the 2nd workshop on many-task computing on grids and supercomputers (MTAGS) Zaharia M et al (2014) Apache spark: a unified engine for big data processing. Commun ACM 59(11):56–65 Amdahl GM (1967) Validity of single-processor approach to achieving large-scale computing capability. Proc. of AFIPS Conference, Reston, VA, pp 483–485 Bailey D (1991) Twelve ways to fool the masses when giving performance results on parallel computers, RNR technical report, RNR-90-020, NASA Ames Research Center Grama, A. et al., Introduction to Parallel Computing, Addison Wesley, 2003 Gustafson JL (1988) Reevaluating Amdahl's Law. Commun ACM 31(5):532–533 Shi JY et al (2012) Program scalability analysis for HPC cloud: applying Amdahl's law to NAS benchmarks. In: Proc. High Performance Computing, Networking, Storage and Analysis (SCC), IEEE, CS, pp 1215–1225 Schroeder B, Gibson G (2006) A large-scale study of failures in high-performance computing systems. Proc. 
of the International Conference on Dependable Systems and Networks (DSN2006), IEEE CS, Philadelphia, PA, pp 25–28 This work has been partially funded by the ASPIDE Project, supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 801091. Data sharing not applicable to this article as no datasets were generated or analyzed during the current study. DIMES, Università della Calabria, Rende, Italy Domenico Talia DT carried out all the work presented in the paper. The author read and approved the final manuscript. Correspondence to Domenico Talia. The author declares that he/she has no competing interests. Scalability in parallel systems Parallel computing systems aim at usefully employing all of their processing elements during application execution. Indeed, only an ideal parallel system can do that fully, because real systems have sequential portions that cannot be parallelized (as Amdahl's law suggests [35]) and several sources of overhead, such as sequential operations, communication, synchronization, I/O and memory access, network speed, I/O system speed, hardware and software failures, problem size and program input. All these issues related to the ability of parallel systems to fully exploit their resources are referred to as system or program scalability [36]. The scalability of a parallel computing system is a measure of its capacity to reduce program execution time in proportion to the number of its processing elements. According to this definition, scalable computing refers to the ability of a hardware/software parallel system to exploit increasing computing resources effectively in the execution of a software application [37]. Despite the difficulties that can be faced in the parallel implementation of an application, a framework or a programming system, a scalable parallel computation can always be made cost-optimal if the number of processing elements, the size of memory, the network bandwidth and the size of the problem are chosen appropriately. For evaluating and measuring the scalability of a parallel program, some metrics have been defined and are widely used: parallel runtime T(p), speedup S(p) and efficiency E(p). Parallel runtime is the total processing time of the program using p processors (with p > 1). Speedup is the ratio between the total processing time of the program on 1 processor and the total processing time on p processors: S(p) = T(1)/T(p). Efficiency is the ratio between the speedup and the total number of processors used: E(p) = S(p)/p. Application scalability is influenced by the available hardware and software resources, by their performance and reliability, and by the sources of overhead discussed before. In particular, the scalability of data analysis applications is tightly related to the exploitation of parallelism in data-driven operations and to the overhead generated by data management mechanisms and techniques. Moreover, application scalability also depends on the programmer's ability to design algorithms that reduce sequential time and exploit parallel operations. Finally, the instruction designers and the runtime implementers also contribute to the exploitation of scalability [38]. All these arguments mean that, to realize exascale computing in practice, many issues and aspects must be taken into account, considering all the layers of the hardware/software stack involved in the execution of Exascale programs. In addressing parallel system scalability, system dependability must also be tackled.
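Before turning to dependability, the scalability metrics just defined can be illustrated with a small calculation; the runtimes and the serial fraction below are invented for the example, and the Amdahl bound is the standard formulation of the law cited above [35].

# Illustrative calculation of speedup, efficiency and the Amdahl upper bound.
def speedup(t1, tp):
    return t1 / tp                      # S(p) = T(1) / T(p)

def efficiency(t1, tp, p):
    return speedup(t1, tp) / p          # E(p) = S(p) / p

def amdahl_speedup(serial_fraction, p):
    # Amdahl's law: S(p) = 1 / (s + (1 - s) / p), with s the serial fraction.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

t1 = 1000.0                                   # T(1) in seconds (invented)
runtimes = {8: 140.0, 64: 28.0, 512: 14.0}    # measured T(p) (invented)
serial_fraction = 0.01                        # assume 1% of the work is inherently sequential

for p, tp in runtimes.items():
    print(f"p={p:4d}  S(p)={speedup(t1, tp):6.1f}  "
          f"E(p)={efficiency(t1, tp, p):5.2f}  "
          f"Amdahl bound={amdahl_speedup(serial_fraction, p):6.1f}")

Even with only 1% sequential work, the bound caps the achievable speedup far below p for large p, which is one reason why efficiency drops as processing elements are added.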
As the number of processors and network interconnections increases, and as tasks, threads and message exchanges increase, the rate of failures and faults increases too [39]. As discussed in reference [40], the design of scalable parallel systems requires assuring system dependability. Therefore, understanding failure characteristics is a key issue for coupling high performance and reliability in massively parallel systems at Exascale size. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Talia, D. A view of programming scalable data analysis: from clouds to exascale. J Cloud Comp 8, 4 (2019). https://doi.org/10.1186/s13677-019-0127-x Received: 23 October 2018 Keywords: Big data analysis, Exascale computing
The association of low serum salivary and pancreatic amylases with the increased use of lipids as an energy source in non-obese healthy women Kei Nakajima (ORCID: orcid.org/0000-0002-1788-3896)1,2,3, Ryoko Higuchi1, Taizo Iwane1 & Ayaka Iida1 It is unknown whether low serum levels of salivary and pancreatic amylases are associated with a high combustion of carbohydrates or of lipids for energy. Elevated blood ketones and a low respiratory quotient (RQ) can reflect the preferential combustion of lipids relative to carbohydrates. Therefore, using the data from our previous study, we investigated whether low levels of serum amylases were associated with a high serum ketone level and a low RQ in 60 healthy, non-obese young women aged 20–39 years. Serum ketones [3-hydroxybutyric acid (3-HBA) and acetoacetic acid (AA)] were inversely correlated with RQ, but not with body mass index (BMI) or glycated haemoglobin (HbA1c) levels. Logistic regression analysis showed that high levels of serum ketones (3-HBA ≥ 24 μmol/L and AA ≥ 17 μmol/L) and a low RQ (< 0.766) were significantly associated with low serum salivary (< 60 U/L) and pancreatic (< 29 U/L) amylase levels, respectively. These associations were not altered by further adjustment for age, BMI, HbA1c, and estimated glomerular filtration rate. These results confirm the high combustion of lipids for energy in individuals with low serum amylase levels, suggesting a close relationship between circulating amylases and internal energy production. We previously showed an association between low serum amylase and positive ketonuria assessed by dipstick urinalysis in a general population of Japanese adults who underwent a health-screening check-up. These results suggested a lower availability of carbohydrates for energy production in individuals with low serum amylase levels [1]. However, the determination of ketonuria by dipstick urinalysis may be inaccurate because it yields only dichotomized results (negative or positive) [2, 3]. By contrast, serum ketones, such as 3-hydroxybutyric acid (3-HBA) and acetoacetic acid (AA), can more accurately reflect the combustion of lipids caused by increased β-oxidation of fatty acids in the liver [4,5,6]. The respiratory quotient (RQ), which is the ratio of CO2 produced to O2 consumed, also reflects the combustion of macronutrients while food is being metabolized [7, 8]. In our recent study, conducted to elucidate the association between the salivary amylase gene and glucose metabolism in 60 healthy young Japanese women aged 20–39 years [9], we found no significant correlation of serum salivary and pancreatic amylases with serum ketones. However, these correlations were tested only with the variables treated as continuous. Additionally, these correlations were not evaluated while controlling for confounding factors such as age, body mass index (BMI) and glycated haemoglobin (HbA1c). Although many clinical studies have shown that low serum amylase is significantly associated with diabetes and obesity [10, 11], the underlying mechanism and potential treatments for this condition (low serum amylase with diabetes), including diet therapy, remain unknown. Therefore, to confirm and advance our previous findings [1, 9], we aimed to investigate the association of low serum salivary and pancreatic amylases with high serum ketones and a low RQ by conducting a sub-analysis of the data from our previous study, which consisted of healthy, non-obese young women [9].
We reanalysed the data from our previous study [9] that consisted of 60 healthy Japanese women aged 20–39 years who were non-smokers, had a normal BMI (< 25.0 kg/m2), and had no history of metabolic disorders including diabetes and dyslipidemia. Anthropometric and laboratory measurements were obtained in the morning following an overnight fast. Biochemical parameters, including glycated haemoglobin (HbA1c), ketones (3-HBA and AA), and amylase (salivary and pancreatic), were measured using standard methods by SRL, Inc., a Japanese clinical laboratory test company. HbA1c (Japan Diabetes Society) was converted to HbA1c (National Glycohemoglobin Standardization Program) [12]. RQs at rest were measured for five minutes using an AR-1 portable gas monitor (ARCO SYSTEM Inc, Japan). The estimated glomerular filtration rate (eGFR) was also considered as a confounding factor because the kidneys play a major role in eliminating circulating amylase [13, 14]. eGFR was calculated for Japanese female subjects using the following equation [15]: $$\text{eGFR}\ (\text{ml}/\text{min}/1.73\,\text{m}^2) = 194 \times \text{serum creatinine}\ (\text{mg}/\text{dl})^{-1.094} \times \text{age}^{-0.287} \times 0.739$$ Because the distributions of serum ketones and RQs were expected to be skewed, their correlations were tested with Spearman's rank correlation. Since the median levels of 3-HBA, AA, serum salivary amylase, and serum pancreatic amylase were 24 µmol/L, 17 µmol/L, 60 U/L, and 29 U/L, respectively, high levels of serum 3-HBA and AA and low levels of serum salivary and pancreatic amylases were defined as ≥ 24 µmol/L, ≥ 17 µmol/L, < 60 U/L, and < 29 U/L, respectively, in this study. The proportion of serum salivary amylase in total (salivary + pancreatic) serum amylase was also considered. A low proportion of serum salivary amylase was defined as < 66%, the mean proportion of serum salivary amylase in this sample. The RQ is usually divided into tertiles in terms of the combustion of the three macronutrients (0.9–1.0 for carbohydrates, 0.8–0.9 for proteins or a mixture, and 0.7–0.8 for lipids) [8]. The lowest tertile of the RQ, equivalent to lipid combustion, was 0.766 in this study. Therefore, to investigate the combustion of lipids, a low RQ was defined as < 0.766. Logistic regression analysis was used to test the association of low serum salivary and pancreatic amylases with high serum ketones and a low RQ, considering confounding factors (age, BMI, HbA1c, and eGFR). A low proportion of serum salivary amylase was also tested instead of low salivary amylase. Statistical analysis was performed using SAS Enterprise Guide (SAS-EG 7.1) in SAS version 9.4 (SAS Institute, Cary, NC, USA). A p value < 0.05 was considered to indicate statistical significance. The characteristics of the participants were reported in our previous study [9], which indicated that the mean of each parameter was within the normal range. Figure 1 shows the distributions of serum ketones, which were highly skewed to the lowest level, whereas the distribution of the RQ was mildly skewed to a low level. The distributions of serum amylases are shown in Additional file 1: Fig S1. An almost normal distribution was observed for the concentration of serum salivary amylase, whereas the distribution of serum pancreatic amylase was unclear. Fig. 1 Distributions of 3-hydroxybutyric acid, acetoacetic acid, and RQ.
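As a concrete illustration of two preprocessing steps described above, the Python sketch below (the study itself used SAS; the function names and example values are hypothetical) implements the revised Japanese eGFR equation for women and the median-based dichotomisation applied to the ketones, amylases, and RQ.

import numpy as np
import pandas as pd

def egfr_japanese_female(creatinine_mg_dl, age_years):
    # Revised Japanese equation with the female correction factor 0.739 [15].
    return 194.0 * np.asarray(creatinine_mg_dl) ** -1.094 * np.asarray(age_years) ** -0.287 * 0.739

def median_split(series: pd.Series) -> pd.Series:
    # 1 if the value is at or above the sample median, 0 otherwise.
    return (series >= series.median()).astype(int)

print(egfr_japanese_female(0.7, 30))  # ~ 80 ml/min/1.73 m^2 for an illustrative subject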
a 3-Hydroxybutyric acid; b acetoacetic acid; c RQ Although data is not shown, serum ketones were significantly correlated with RQ (r = − 0.45, p = 0.0005 for 3-HBA and r = − 0.36, p = 0.006 for AA) but not BMI (r = − 0.06, p = 0.63 for 3-HBA and r = − 0.10, p = 0.45 for AA) or HbA1c (r = − 0.10, p = 0.46 for 3-HBA and r = − 0.07, p = 0.60 for AA). In addition, no significant correlation was observed between RQ and BMI (r = − 0.09, p = 0.50) or between RQ and HbA1c (r = − 0.12, p = 0.35). Logistic regression analysis showed that low serum salivary amylase (< 60 U/L) was significantly associated with high levels of serum 3-HBA (≥ 24 μmol/L) and AA (≥ 17 μmol/L), but not with a low RQ (< 0.766) (Table 1), which were not altered by the further adjustment for age, BMI, HbA1c, and eGFR (Model 3). Table 2 shows the associations of low serum pancreatic amylase (< 29 U/L) with high serum ketones and a low RQ. Low serum pancreatic amylase was significantly associated with a low RQ, but not with high levels of serum 3-HBA or AA (Table 2). These associations were not altered by further adjustment for confounders (Model 3). No significant association was observed between low proportion of serum salivary amylase (< 66%) and high levels of serum 3-HBA, AA, and low RQ, regardless of the adjustment for confounders (data not shown). Table 1 Odds ratios of low salivary amylase for high serum ketones and low RQ Table 2 Odds ratios of low pancreas amylase for high serum ketones and low RQ In the current study, we found significant associations between low serum salivary amylase and high serum ketones, and between low serum pancreatic amylase and low RQs in healthy subjects. These associations were independent of relevant confounders including age, BMI, HbA1c, and eGFR. However, the proportion of salivary to total amylase was unlikely to relate with high serum ketones and low RQ, although the proportion of salivary amylase was correlated with the level of blood glucose at early time point after starch loading in our previous study [9]. Because subjects in this study are healthy non-obese young female non-smokers, current results indicate fundamental relationship among serum salivary and pancreatic amylase and metabolic indices such as blood ketones and RQ. As both high serum ketones and low RQs reflect the high combustion of lipids compared with carbohydrates [4,5,6], our current findings suggest that individuals with low serum salivary and pancreatic amylases may obtain their energy predominantly by lipid combustion (fatty acid oxidation), which is consistent with our previous study that showed an association between low total serum amylase and ketonuria in a heterogeneous population with a broad range of age (25–79 years) [1]. Notably, in our previous study [1], no significant inverse correlation between serum ketones and serum salivary and pancreatic amylases, which were assessed as continuous variables, was observed. However, significant associations between low serum salivary amylase and high serum ketones were observed in this study. This discrepancy may depend on the difference in statistical methods between nonparametric correlation tests in the previous study and logistic regression analysis in the current study, likely because serum ketones were highly skewed almost to an undetectable level (Fig. 1a, b). 
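A hedged sketch of the fully adjusted analysis ("Model 3") in Python with statsmodels is shown below; the original analysis was run in SAS, and the data frame here is a synthetic stand-in with made-up values, so the printed odds ratios are meaningless beyond illustrating the workflow.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
# Synthetic stand-in data with the study's variable layout (all values are made up).
df = pd.DataFrame({
    "low_salivary_amylase": rng.integers(0, 2, n),  # 1 = < 60 U/L
    "high_3hba": rng.integers(0, 2, n),             # 1 = >= 24 umol/L
    "age": rng.uniform(20, 39, n),
    "bmi": rng.uniform(17, 25, n),
    "hba1c": rng.uniform(4.8, 5.8, n),
    "egfr": rng.uniform(80, 120, n),
})
model = smf.logit("low_salivary_amylase ~ high_3hba + age + bmi + hba1c + egfr", data=df).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios ("Model 3")
print(np.exp(model.conf_int()))  # 95% confidence intervals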
In contrast, the previous study showed a positive correlation between serum pancreatic amylase and RQ, which is consistent with the observed association between low pancreatic amylase and low RQs in this study. The degree of skewness of RQ values is mild compared with those of serum ketones (Fig. 1c), which may contribute to the similar RQ results of this study and the previous study. It has been shown that reduced rates of fat oxidation, namely, high RQ, may contribute to the predisposition to obesity or weight gain [16,17,18]. However, the predisposition to fat accumulation is associated with high tissue sensitivity to insulin [19, 20]. Schutz showed in his study [16] that high RQ, low fat oxidation, and high insulin sensitivity (predictors) were observed in the dynamic phase, whereas low RQ, high fat oxidation, and insulin resistance (outcomes) were observed in the static phase (compensated state). In line with this, we have considered potential underlying mechanism for the current findings (Additional file 2: Figure S2), although this study is a cross-sectional study in nature. Baseline individual levels of serum amylases, which are genetically determined in most cases, are likely to be influenced by other factors including obesity and insulin resistance, eventually resulting in alternation of energy metabolism. The subjects with low serum salivary or pancreatic amylases observed in this study may be at the static phase or feedback phase because low RQ and high serum ketones (high fat oxidation) were associated with low serum pancreatic and salivary amylase, respectively. Although high levels of ketones often reflect a deficiency in insulin secretion in diabetic patients [2,3,4], no subjects in this study had diabetes or impaired glucose metabolism. By contrast, the ketogenic diet, which involves carbohydrate restriction, frequently causes elevated serum ketones, even in healthy individuals [3, 4]. Because current subjects were young women aged 20-39 years, a proportion of the subjects may have conducted such a diet during this study, which may have contributed to the high levels of serum ketones, especially in the fasted state in the morning. This issue deserves further study. Although current study indicates that fatty acids may be predominantly used as an energy source in individuals with low serum amylase, it is unclear if a diet rich in lipids is suitable for individuals with low serum amylase. Taken together, our studies suggest that a close relationship may exist between serum amylase and the specific type of macronutrient combustion used for energy. While increases in serum amylase occur because of leakage from salivary glands and the pancreas, the clinical relevance of this remains unknown. Circulating amylases might just be a marker of leaks or damage, albeit several investigators have suggested a feedback system between serum amylase and insulin action [10, 11, 21]. Insulin resistance may downregulate the production of amylase [21], possibly for the purpose of reducing absorption of glucose digested from starch. On the contrary, high levels of serum amylase can reduce the secretion of insulin in the pancreas [21]. However, it is unknown whether this plausible feedback system is also applicable in the salivary gland. 
In conclusion, the current results obtained from the reanalysis of our previous study confirm the high combustion of lipids for energy in individuals with low serum amylases, suggesting a close relationship between circulating amylases and internal energy production. First, although low serum amylases may be caused by insulin resistance and reduced insulin secretion [10, 21], insulin levels were not measured in this study, and therefore, precise underlying mechanism remains to be elucidated. Second, this study was conducted on a relatively small sample of 60 Japanese women, which can influence the reliability of our results. However, the enrolled subjects in this study were considered as homogeneous in nature because they were all young healthy non-obese non-smokers without metabolic abnormality. Therefore, current results may reflect a fundamental physiological relationship between serum amylases and metabolic indices, regardless of small sample size. On the contrary, the present results may not be applicable to other populations, such as those in western countries, who may have lower blood amylases [22] than the current subjects. A confidentiality agreement with participants prevents us from sharing the data. RQ: Respiratory quotient 3-HBA: 3-hydroxybutyric acid AA: Acetoacetic acid HbA1c : Glycated haemoglobin eGFR: Estimated glomerular filtration rate Nakajima K, Oda E. Ketonuria may be associated with low serum amylase independent of body weight and glucose metabolism. Arch Physiol Biochem. 2017;123(5):293–6. MacGillivray MH, Voorhess ML, Putnam TI, Li PK, Schaefer PA, Bruck E. Hormone and metabolic profiles in children and adolescents with type I diabetes mellitus. Diabetes Care. 1982;5(Suppl 1):38–47. Comstock JP, Garber AJ. Ketonuria clinical methods: the history, physical, and laboratory examinations. 3rd ed. Butterworths: London; 1990 (Chapter 140). Akram M. A focused review of the role of ketone bodies in health and disease. J Med Food. 2013;16:965–7. Cotter DG, Schugar RC, Crawford PA. Ketone body metabolism and cardiovascular disease. Am J Physiol Heart Circ Physiol. 2013;304:H1060–76. Puchalska P, Crawford PA. Multi-dimensional roles of ketone bodies in fuel metabolism, signaling, and therapeutics. Cell Metab. 2017;25:262–84. Westerterp KR. Food quotient, respiratory quotient, and energy balance. Am J Clin Nutr. 1993;57(5 Suppl):759S–64S. Patel H, Bhardwaj A. Physiology, respiratory quotient. StatPearls. Treasure Island (FL): StatPearls Publishing; 2020–2018 Oct 27. Higuchi R, Iwane T, Iida A, Nakajima K. Copy number variation of the salivary amylase gene and glucose metabolism in healthy young Japanese women. J Clin Med Res. 2020;12:184–9. Nakajima K. Low serum amylase and obesity, diabetes and metabolic syndrome: a novel interpretation. World J Diabetes. 2016;7:112–21. Ko J, Cho J, Petrov MS. Low serum amylase, lipase, and trypsin as biomarkers of metabolic disorders: a systematic review and meta-analysis. Diabetes Res Clin Pract. 2020;159:107974. Kashiwagi A, Kasuga M, Araki E, et al. International clinical harmonization of glycated hemoglobin in Japan: from Japan Diabetes Society to National Glycohemoglobin Standardization Program values. J Diabetes Investig. 2012;3:39–40. Junge W, Mályusz M, Ehrens HJ. The role of the kidney in the elimination of pancreatic lipase and amylase from blood. J Clin Chem Clin Biochem. 1985;23:387–92. Collen MJ, Ansher AF, Chapman AB, Mackow RC, Lewis JH. Serum amylase in patients with renal insufficiency and renal failure. Am J Gastroenterol. 
1990;85:1377–80. Matsuo S, Imai E, Horio M, et al. Revised equations for estimated GFR from serum creatinine in Japan. Am J Kidney Dis. 2009;53:982–92. Schutz Y. Abnormalities of fuel utilization as predisposing to the development of obesity in humans. Obes Res. 1995;3(Suppl 2):173S–8S. Ellis AC, Hyatt TC, Hunter GR, Gower BA. Respiratory quotient predicts fat mass gain in premenopausal women. Obesity (Silver Spring). 2010;18(12):2255–9. Shook RP, Hand GA, Paluch AE, et al. High respiratory quotient is associated with increases in body weight and fat mass in young adults. Eur J Clin Nutr. 2016;70(10):1197–202. Travers SH, Jeffers BW, Eckel RH. Insulin resistance during puberty and future fat accumulation. J Clin Endocrinol Metab. 2002;87:3814–8. Maffeis C, Moghetti P, Grezzani A, et al. Insulin resistance and the persistence of obesity from childhood into adulthood. J Clin Endocrinol Metab. 2002;87:71–6. Pierzynowski SG, Gregory PC, Filip R, Woliński J, Pierzynowska KG. Glucose homeostasis dependency on acini-islet-acinar (AIA) axis communication: a new possible pathophysiological hypothesis regarding diabetes mellitus. Nutr Diabetes. 2018;8:55. Viljakainen H, Andersson-Assarsson JC, Armenio M, et al. Low copy number of the AMY1 locus is associated with early-onset female obesity in Finland. PLoS ONE. 2015;10:e0131883. We thank Melissa Crawford, PhD, from Edanz Group (https://en-author-services.edanzgroup.com/) for editing a draft of this manuscript. This study was supported by grants from the Rice Stable Supply Support Organization (Public Interest Incorporated Association), Tokyo, Japan. School of Nutrition and Dietetics, Faculty of Health and Social Services, Kanagawa University of Human Services, 1-10-1 Heisei-cho, Yokosuka, Kanagawa, 238-8522, Japan Kei Nakajima, Ryoko Higuchi, Taizo Iwane & Ayaka Iida Department of Endocrinology and Diabetes, Saitama Medical Center, Saitama Medical University, 1981 Kamoda, Kawagoe, Saitama, 350-8550, Japan Kei Nakajima Graduate School of Health Innovation, Kanagawa University of Human Services, Research Gate Building Tonomachi 2-A, 3-25-10 Tonomachi, Kawasaki, Kanagawa, 210-0821, Japan Ryoko Higuchi Taizo Iwane Ayaka Iida KN contributed to the overall study design. KN and RH contributed to the interpretation of the initial analysis and discussion of the literature. RH, TI, and AI measured parameters and collected the serum data, and others, including RQ and KN, prepared the first draft of the manuscript. All authors read and approved the manuscript. Correspondence to Kei Nakajima. This study was approved by the Ethics Committee of Kanagawa University of Human Services (ID number 71-31). Written informed consent was obtained from each participant. Additional file 1: Figure S1. Distributions of serum amylases. A, serum salivary amylase; B, serum pancreatic amylase; C, serum total amylase. Potential underlying mechanism between serum amylases and metabolic indices. *Baseline individual levels of serum amylases, which are genetically determined in most cases. Nakajima, K., Higuchi, R., Iwane, T. et al. The association of low serum salivary and pancreatic amylases with the increased use of lipids as an energy source in non-obese healthy women. BMC Res Notes 13, 237 (2020). https://doi.org/10.1186/s13104-020-05078-2 Salivary Pancreatic
Journal of The American Society for Mass Spectrometry, January 2019, Volume 30, Issue 1, pp 67–76. POPPeT: a New Method to Predict the Protection Factor of Backbone Amide Hydrogens. Jürgen Claesen, Argyris Politis. Focus: Honoring Carol V. Robinson's Election to the National Academy of Sciences: Research Article. Hydrogen exchange (HX) has become an important tool to monitor protein structure and dynamics. The interpretation of HX data with respect to protein structure requires understanding of the factors that influence exchange. Simulated protein structures can be validated by comparing experimental deuteration profiles with the profiles derived from the modeled protein structure. To do this, we propose here a new method, POPPeT, for protection factor prediction based on protein motions that enable HX. By comparing POPPeT with two existing methods, the phenomenological approximation and COREX, we show enhanced predictability measured at both the protection factor and the deuteration level. This method can subsequently be used by modeling strategies for protein structure prediction. Keywords: HDX-MS; protein structure; protection factor. Hydrogen exchange (HX) monitors the exchange of backbone amide hydrogens, providing information about protein structure and dynamics. To interpret the shift in mass and the changes in the isotope distribution with respect to structural properties of a protein, a better understanding of the hydrogen exchange mechanism is needed. Based on the pioneering work of Linderstrøm-Lang [1], a two-state kinetic model that describes HX was proposed [2, 3]: $$ \mathrm{N-H}_{\mathrm{closed}}\underset{k_{\mathrm{close}}}{\overset{k_{\mathrm{open}}}{\rightleftharpoons}}\;\mathrm{N-H}_{\mathrm{open}}\;\overset{k_{\mathrm{int}}}{\to}\;\mathrm{N-D}_{\mathrm{open}}\underset{k_{\mathrm{open}}}{\overset{k_{\mathrm{close}}}{\rightleftharpoons}}\mathrm{N-D}_{\mathrm{closed}} $$ The exchange rate, kex, is a function of the intrinsic exchange rate of an unstructured protein, kint, and of the opening and closing rate constants, kopen and kclose, or the equilibrium constant, Kopen = kopen/kclose: $$ k_{ex}=\frac{k_{open}\times k_{int}}{k_{close}+k_{open}+k_{int}}=\frac{K_{open}\times k_{int}}{1+K_{open}+k_{int}/k_{close}} $$ The equilibrium constant, Kopen, can be considered to be the inverse of the protection factor (PF): $$ \mathrm{PF}\approx 1/K_{\mathrm{open}}=k_{\mathrm{int}}/k_{\mathrm{ex}} $$ In the case of the EX2 kinetic exchange regime, the exchange reaction is much slower than the refolding. As a consequence, the unfolding has to happen several times before exchange can take place. The exchange rate is then: $$ k_{ex}=\frac{k_{open}}{k_{close}}\times k_{int}=K_{open}\times k_{int} $$ For EX1 kinetics, the exchange takes place during one unfolding/refolding event. As a result, the overall exchange rate is equal to kopen. Inferring features linked to protein structure from HX data requires an understanding of the factors that influence the HX mechanism. A number of mechanistic models have been proposed throughout the years.
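The behaviour of these rate expressions can be checked numerically. The short Python sketch below (rate constants are illustrative, not taken from any measurement) evaluates the steady-state exchange rate and confirms the EX2 and EX1 limiting forms.

def hx_rate(k_open: float, k_close: float, k_int: float) -> float:
    # k_ex = k_open * k_int / (k_close + k_open + k_int)
    return k_open * k_int / (k_close + k_open + k_int)

k_open, k_close = 0.01, 10.0      # illustrative opening/closing rate constants (s^-1)
K_open = k_open / k_close

# EX2 limit (k_int << k_close): k_ex approaches K_open * k_int
print(hx_rate(k_open, k_close, k_int=0.1), K_open * 0.1)
# EX1 limit (k_int >> k_close): k_ex approaches k_open
print(hx_rate(k_open, k_close, k_int=1e4), k_open)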
Linderstrøm-Lang put forward the idea that slowly exchanging H atoms are taking part in hydrogen bonding. These bonds should be, temporarily, broken in order to allow exchange. Solvent-accessibility [4, 5] and solvent-penetration models [6, 7, 8, 9] describe an alternative procedure. These models state, respectively, that hydrogens located at the surface exchange at rates close to the intrinsic rates, while amide H atoms located in the (hydrophobic) core of the protein exchange slowly. Exchange of the latter requires penetration of the deuterium source in the protein. Other factors that complement the solvent accessibility and solvent penetration models such as acidity and polarizability have also been suggested [10, 11]. Nowadays, it is commonly accepted that the occurrence of H bonds is the main determinant of hydrogen exchange [12, 13]. It has been shown that structure-related features such as packing density, or burial, have a limited effect on the HX rates [13, 14, 15]. Other factors such as hydrogen bond length and electrostatics have little or no influence on the exchange rates [13, 14, 15]. Next to the mechanistic models that try to give insight in the HX mechanism, various predictive models that associate protection or exchange rates with protein structure features have been introduced [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. Even though these methods use different structural and dynamical determinants, and apply different strategies, all models report almost identical accuracy. For each method, small differences between the predicted and measured protection factors are reported. Here, we illustrate what effect these differences have on the deuterium uptake. We also propose a new algorithm for protection factor prediction, namely, protection factor prediction based on protein motions (POPPeT). We demonstrate accuracy and applicability of POPPeT by comparing it with two existing methods, the phenomenological approximation and COREX, on two proteins, Staphylococcal nuclease A, and equine cytochrome c. Determining the Protection Factor The PF quantifies the degree of reduction in the exchange rate of a backbone amide hydrogen, compared to the intrinsic exchange rate, due to the protein structure. As such, the PF is a function of the protein structure-related features that impede HX. Methods which focus on the prediction of PFs [19, 20, 21, 22, 23, 24, 25, 26, 27, 28] can be clustered in two groups: the first group directly associates the protection factor with structure-related features, while the second group indirectly models the protection factor due to its link with the difference in free energy between folded and unfolded states, ΔGex, i: $$ \Delta {G}_{\mathrm{ex},i}=-\mathrm{RTln}\ {K}_{\mathrm{open},i}=\mathrm{RTln}\ \mathrm{P}{\mathrm{F}}_i $$ We discuss here two commonly applied methods to estimate the protection factor, i.e., the phenomenological approximation [19] and COREX [18]. These methods belong, respectively, to the first and the second group of PF estimation methods. 
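The free-energy relation above makes the two viewpoints interchangeable: a protection factor can be converted to an opening free energy and back. A minimal Python helper (the temperature default and the example PF are illustrative) is:

import math

R = 8.314  # gas constant, J mol^-1 K^-1

def delta_g_ex(pf: float, temperature_k: float = 298.15) -> float:
    # Delta G_ex = RT ln(PF), returned in kJ/mol.
    return R * temperature_k * math.log(pf) / 1000.0

def pf_from_delta_g(dg_kj_mol: float, temperature_k: float = 298.15) -> float:
    return math.exp(dg_kj_mol * 1000.0 / (R * temperature_k))

print(delta_g_ex(1e5))          # ~28.5 kJ/mol for PF = 10^5
print(pf_from_delta_g(28.5))    # back to roughly 10^5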
Phenomenological Approximation According to Vendruscolo and colleagues [21, 22], the protection factor of an amide hydrogen of residue i is a function of the number of hydrogen bonds, \( {N}_i^H \), and burial, i.e., the number of non-hydrogen atoms within a 6.5 Å distance of the amide nitrogen, \( {N}_i^C \): $$ \ln\ {\mathrm{PF}}_i={\beta}_H\times {N}_i^H+{\beta}_C\times {N}_i^C $$ The coefficients βH and βC are estimated, when considering backbone atoms, and when considering backbone and side-chain atoms, based on a set of native state simulations for, respectively six, and seven proteins. The reported values are βC = 0.35 and βH = 2.0 when considering all atoms [22], and βC = 1.0 and βH = 5.0 when considering backbone atoms only [21]. COREX [18] estimates the protection factor of a residue i based on the work of Hilser and Freire [16] and Hilser [29]. It generates an ensemble of partially unfolded microstates. The probability of a microstate s is calculated as follows: $$ \Pr\ \left(\mathrm{state}\ s\right)=\frac{\exp\ \left(-\varDelta {G}_s/ RT\right)}{\sum_{i=0}^N\exp\ \left(-\varDelta {G}_s/ RT\right)} $$ with ΔGs, the Gibbs free energy, a function of the accessible surface area, and the conformational entropy. The protection factor of residue i is then defined as the ratio of the sum of the probabilities of the microstates in which residue i is folded and not exposed to the solvent to the sum of the probabilities of the microstates in which residue i is unfolded and solvent accessible: $$ \mathrm{P}{\mathrm{F}}_i=\frac{\sum_{s=1}^{N_i^{\mathrm{folded}}}\Pr \left(\mathrm{state}\ s\right)-\Pr (i)}{\sum_{s=1}^{N_i^{\mathrm{unfolded}}}\Pr \left(\mathrm{state}\ s\right)-\Pr (i)} $$ where Pr (i) is the sum of the probabilities of the microstates where residue i is solvent accessible in its native state, or becomes solvent accessible due to partially unfolding of other residues. Linking the Protection Factor with Deuterium Content The outcomes of protein structure modeling techniques can be validated with HDX-MS data. In these cases, the PFs are calculated based on the proposed protein structures. These PFs are used to calculate the expected deuterium content of a protein at residue level: $$ {D}_i^{\mathrm{time}}=1-\exp\ \left(-{k}_{ex,i}\times \mathrm{time}\right)=1-\exp\ \left\{\left(-{k}_{\mathit{\operatorname{int}},i}/{\mathrm{PF}}_i\right)\times \mathrm{time}\right\} $$ Summing \( {D}_i^{\mathrm{time}} \) over n contiguous residues gives the expected deuterium content at peptide or protein level. As the exchange of the first residue cannot be recorded due to very fast back exchange, the following equation is used to calculate the theoretical deuteration level for peptides or proteins: $$ {D}^{\mathrm{time}}=\sum \limits_{i=2}^n\left({D}_i^{\mathrm{time}}\right)=\sum \limits_{i=2}^n1-\exp\ \left\{\left(-{k}_{\mathit{\operatorname{int}},i}/{\mathrm{PF}}_i\right)\times \mathrm{time}\right\} $$ By comparing the theoretical peptide deuteration levels with the measured levels, the proposed protein structures with closely matching deuteration profiles are validated and/or selected. Obviously, the accuracy of the predicted protection factors has an effect on this process. However, it remains unclear to which extent the accuracy of the predicted PFs influences the theoretical peptide deuteration levels. 
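To make the phenomenological approximation and the deuteration formulas concrete, the Python sketch below uses hypothetical helper names; the hydrogen-bond and heavy-atom contact counts would normally be extracted from a structure and are hard-coded here. It predicts a residue PF with the reported all-atom coefficients (βH = 2.0, βC = 0.35) and converts a set of residue PFs and intrinsic rates into a peptide-level deuteration value, skipping the first residue.

import math

BETA_H, BETA_C = 2.0, 0.35  # reported all-atom coefficients of the phenomenological approximation

def ln_pf_phenomenological(n_hbonds: int, n_heavy_contacts: int) -> float:
    # ln PF_i = beta_H * N_i^H + beta_C * N_i^C
    return BETA_H * n_hbonds + BETA_C * n_heavy_contacts

def peptide_deuteration(k_int: list, pf: list, t_seconds: float) -> float:
    # Sum of residue-level uptake, skipping the first residue (fast back exchange).
    return sum(1.0 - math.exp(-k / p * t_seconds) for k, p in zip(k_int[1:], pf[1:]))

pf = [math.exp(ln_pf_phenomenological(1, 40))] * 5      # five identical residues, for illustration
print(peptide_deuteration(k_int=[10.0] * 5, pf=pf, t_seconds=3600))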
Therefore, in addition to the traditional methods to determine the PF accuracy, i.e., looking at the difference between the measured and predicted protection factors and the Pearson correlation coefficient [30], we also looked at the difference between the theoretical and measured deuteration levels. Note that additional experimental factors, such as back exchange, can influence the measured deuterium levels and should be accounted for when comparing theoretical deuteration levels with measured levels. We used Staphylococcal nuclease A (SNase) [31] to calculate the deuterium content of 12 peptides (Table S1) at 16 different time points, ranging from 30 s to 16 days. The intrinsic rate of each residue was calculated with the formulas proposed by Bai et al. [32]. We used the experimentally determined protection factors, as well as the protection factors predicted by the phenomenological approximation, and by COREX as reported by Skinner et al. [14] (Table S3). POPPeT, an Alternative Approach to Determine the Protection Factor According to Skinner et al. [14], the prediction of HX has to be based on the protein motions that generate exchange competent amide hydrogens, i.e., local fluctuations and (global) unfolding reactions. It is possible to determine experimentally if an amide hydrogen becomes exchange competent due to sizeable unfolding or by local fluctuations [33, 34, 35, 36, 37, 38, 39]. We introduce a new approach to predict the protection factors of a protein, POPPeT. It is based on information about the protein motions that generate exchange competent backbone amide hydrogens. In the Supplementary Material, we show that there is a statistically significant association between HX-enabling protein motions and logPFs (see Table S4). The information about the HX-enabling protein motions is complemented with a set of structural features including secondary structure elements and hydrogen bonding information (Table 1). It also takes into account other factors such as the number of non-hydrogen atoms in its vicinity (burial). These complementary variables are added to clarify part of the variability present in the logPFs that cannot be explained by the considered protein motions. Considered structural features. The secondary structure elements are split into three different groups. Hydrogen bonding and protein motions that enable HX are divided in four categories Protein motions Hbond with H2O Hbond with main-chain O β-sheet UD + EX1 Hbond with side-chain O Information on the secondary structure elements is divided in three categories, i.e., "no," "helix," or "β-sheet" (Table 1). The category "helix" contains all H atoms that are located on residues that form an α-helix or a 3/10-helix. Hydrogens from the category "β-sheet" are part of amino acids that form a β-strand or β-bridge. The other backbone hydrogens are grouped in the "no" category. The information about the secondary structure elements is taken from the RCSB protein databank [40]. The exchangeable hydrogens are also grouped into four categories related to hydrogen bonding, i.e., "no," "Hbond with H2O," "Hbond with main-chain oxygen," and "Hbond with side-chain oxygen" (Table 1). The hydrogen bonding status is calculated with Chimera [41]. Similar to hydrogen bonding, the protein motions that enable HX are also divided into four categories, i.e., local ("L"), unfolding ("UD"), unfolding and EX1 ("UD + EX1"), and EX1 ("EX1"), as reported by [14, 15]. 
The category "L" groups the amide hydrogens that get exchange competent through local fluctuations. The other amide hydrogens become exchangeable due to unfolding. We divided them into three subcategories, based on the experimental procedure used to detect unfolding: the addition of denaturant ("UD"), increasing the pH level ("EX1"), and the combination of both ("UD + EX1"). The amide hydrogens that are part of the last two categories exchange with an EX1 mechanism at elevated pH. In order to predict the logPF based on the selected structural features, and other factors such as burial, we have fitted a log-linear model of the following form: $$ {\displaystyle \begin{array}{l}\mathrm{logPF}={\beta}_0+{\beta}_1\times \mathrm{UD}+{\beta}_2\times \mathrm{EX}1\\ {}\kern2.24em +{\beta}_3\times \left(\mathrm{UD}+\mathrm{EX}1\right)+{\beta}_4\times \mathrm{helix}+{\beta}_5\times \beta -\mathrm{sheet}\\ {}\kern2.24em +{\beta}_6\times \mathrm{burial}+{\beta}_7\times \mathrm{Hbond}\ \mathrm{with}\ {\mathrm{H}}_2\mathrm{O}\\ {}\kern2.24em +{\beta}_8\times \mathrm{Hbond}\ \mathrm{with}\ \mathrm{main}-\mathrm{chain}\ \mathrm{O}\\ {}\kern2.24em +{\beta}_9\times \mathrm{Hbond}\ \mathrm{with}\ \mathrm{side}-\mathrm{chain}\ \mathrm{O}+\varepsilon \end{array}} $$ where ε is the residual error, and \( \varepsilon \sim \mathcal{N}\left(0,{\sigma}^2\right) \). In the resulting model, only statistically significant parameters are retained (p value < 0.05). This log-linear model directly associates the logPF with structure-related features. As a result, it belongs to the same group of methods as the phenomenological approximation. It differs from the phenomenological approximation as it contains additional information such as information on exchange-enabling motions, and secondary structure features. Accuracy of the Predicted Protection Factors Existing methods for protection-factor-prediction reported relatively small differences between the predicted and the measured logPFs of exchangeable hydrogens and/or high correlation between them. However, a small difference on the logPF scale does not necessarily imply a small difference on the PF scale. For example, a difference of 2 between the measured and observed logPFs is a much larger difference at the PF scale, i.e., 102. As a consequence, small differences in the logPF scale can have severe effects on the deuteration level, which is a function of the PF (see Eqs. (8) and (9)). To our knowledge, the effect of these differences on the deuteration level has not been studied. We assessed the accuracy of the PFs of amide hydrogens of SNase estimated with the phenomenological approximation and COREX [14, 15]. We found that for the phenomenological approximation and COREX, only a limited number of protection factors are accurately estimated (Fig. 1). In particular, COREX systematically underestimates the measured logPF, as previously reported in [28]. A potential reason for this systematic underestimation could be the number of hydrogens that are exposed in a microstate, e.g., a smaller number of unfolded residues may increase the PF [16, 29]. Differences between the predicted and measured logPFs of SNase plotted against the residue positions The correlation between the predicted and measured protection factors is moderate, i.e., 0.52 and 0.71 for the phenomenological approximation and COREX, respectively (Fig. 2). 
Based on the observed differences and the moderate correlation, one can expect that there will be discrepancies between the deuterium uptake profiles of SNase peptides, calculated with the measured PFs and the predicted PFs of COREX and the phenomenological approximation. The correlation between the predicted and measured protein factors of SNase. The diagonal line indicates a perfect correlation, i.e., ρ = 1.00 To quantify the magnitude of these differences in terms of deuteration, we calculated the deuterium content of 12 peptides using the predicted and measured logPFs. The deuteration level of these peptides are calculated, for 16 time points, based on the predicted protection factors of COREX (blue line), the phenomenological approximation (red line), and the measured protection factors (black line) (Fig. S1). For COREX, the deuteration level of each peptide, except for peptide 10, is consistently higher than the level calculated with the measured PFs. This is in line with the findings of Fig. 1. The differences between the deuteration levels based on the protection factors estimated with the phenomenological approximation and the measured PFs are generally smaller than with COREX, but remain substantial (Fig. S2). For instance, for peptide 7 (Fig. 3), these differences range between − 0.13 (after 60 s of exposure to D) and 4.49 (after 4 days of exchange). In case of COREX, the differences in deuteration vary between 1.50 and 8.12 (see also Table S3). Similar differences can also be seen for the other peptides (Fig. S1). Based on these outcomes, it is clear that when selecting modeled protein structures based on the accordance between the experimental and calculated deuteration profile, one should keep the accuracy of the predicted protection factors in mind. Deuteration profile of SNase peptide 7. The black line is the deuteration level calculated with the measured PFs (∘), the red line with PFs of the phenomenological approximation (□), and the blue line with the PFs of COREX (⋄) For 43 amide hydrogens of SNase, information about their HX-enabling protein motions is available [14]. We randomly split these exchangeable hydrogens in a training set of 30 H atoms, and a test set of 13 H atoms (Table S5). The training set is used to train the model, i.e., to determine the significant parameters and their effect. The resulting model, based on Eq. (10), has the following form: $$ {\displaystyle \begin{array}{l}\mathrm{logPF}={\beta}_0+{\beta}_1\times \mathrm{UD}+{\beta}_2\times \mathrm{EX}1\\ {}\kern2.24em +{\beta}_3\times \left(\mathrm{UD}+\mathrm{EX}1\right)+{\beta}_4\times \mathrm{helix}+{\beta}_5\times \beta -\mathrm{sheet}\\ {}\kern2.24em +{\beta}_6\times \mathrm{burial}\end{array}} $$ This model differs from the phenomenological approximation as it contains information about the HX-enabling protein motions, taken from [14, 15], and the secondary structure elements next to burial. Hydrogen bonding has no significant effect, probably due to the fact that 27 of the 30 hydrogens form a hydrogen bond with an oxygen from the main chain. If a hydrogen is part of a helix or a β-sheet, its logPF increases, in comparison to a hydrogen atom located in an unstructured part of the protein, with, respectively, 0.60 or 0.36 (Table 2). The PF gets substantially bigger when a hydrogen atom needs to undergo unfolding in order to become exchangeable. For instance, when the unfolding motion is experimentally determined at elevated denaturant levels (UD), the logPF raises with 1.63 (Table 2). 
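The comparison metrics used here are simple to reproduce; the Python sketch below (illustrative numbers only) computes the Pearson correlation together with the mean and maximum absolute logPF differences for a pair of measured and predicted vectors.

import numpy as np

def accuracy_summary(log_pf_measured, log_pf_predicted):
    measured = np.asarray(log_pf_measured)
    predicted = np.asarray(log_pf_predicted)
    diff = np.abs(predicted - measured)
    return {
        "pearson_r": float(np.corrcoef(measured, predicted)[0, 1]),
        "mean_abs_diff": float(diff.mean()),
        "max_abs_diff": float(diff.max()),
    }

print(accuracy_summary([2.1, 4.5, 6.3, 3.8, 5.0], [2.4, 4.1, 5.9, 4.6, 5.2]))  # illustrative values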
Burial also has a significant effect on the logPF. For an exchangeable H atom bound to an amide nitrogen which has 30 non-hydrogen atoms within a distance of 6.5 Å, the logPF increases with 30 × 0.04371≈ 1.31 (Table 2). It is worth noting that for the 30 hydrogen atoms in the training set, the average value for burial equals 59.10. Parameter estimates for POPPeT β 0 2.03e-10 The defined categories for the secondary structure elements can be further divided into subcategories. For instance, the category "helix" can be split into "α-helix" and "3/10-helix." However, we found that the adjusted coefficient of determination [42], \( {R}_a^2 \), of the model with three categories for the secondary structure elements (Table 2) is neglectable smaller (0.04) than the \( {R}_a^2 \) value of model (11). Accuracy of POPPeT We found that POPPeT accurately predicts the logPFs of the exchangeable SNase hydrogen atoms from the training set (Figs. 4 and 5, in gray). As the training set has been used to estimate the coefficients of the model (11), one should not adhere too much importance to the high accuracy of the training set. For the test set hydrogens of SNase (Table S5), the accuracy and the correlation between the predicted and measured logPFs is high (ρ = 0.94) (Figs. 4 and 5, in red). The maximum difference between the measured and predicted logPF is 1.36, which is one third of the absolute maximum differences of the phenomenological approximation (4.28) and COREX (4.36). The average difference between the measured and predicted logPFs is, in case of POPPeT, approximately 4 to 5 times smaller than the averaged differences of the PFs estimated with the phenomenological approximation and with COREX, i.e., 0.41, 2.11, and 1.51, respectively. Difference between the predicted and measured logPFs of SNase for the phenomenological approximation (left), POPPeT (center), and COREX (right). The red points are the logPFs from the test set; the gray points are from the training set Correlation between the predicted and measured logPFs of SNase. The red points are the logPFs from the test set; the gray points are from the training set Next, we tested the effect of the observed differences between the predicted and measured logPF at the deuteration level. Out of the test set of 13 amino acid SNase residues, we generated two peptides, with amino acid residues from 60 to 64, and from 101 to 107. Amino acids 63 and 104 are not part of the test set. For these two residues, we assumed that the PF of their backbone amide hydrogens was equal to 1. For the first peptide, the deuterium level calculated with the predicted PFs of POPPeT is lower than the deuterium content based on the measured PFs (Fig. 6, top). The maximum difference between these two deuteration levels is 0.96. The maximum difference between the deuteration profile of COREX and the profile based on the measured PFs is much higher, i.e., 2.30. A similar difference could be found for the phenomenological approximation (2.11). For the second peptide (Fig. 6, bottom), there is almost no difference between the deuteration profile calculated with POPPeT and with the measured protection factors. The maximum difference is 0.04. In contrast to this, the maximum difference between the deuterium content calculated with the predicted PFs of the phenomenological approximation and the deuterium content based on the measured PFs equals 4.74. 
The difference in deuteration between the values derived from COREX and the measured PFs lies in between these two extremes, i.e., a maximum difference of 1.35 is found at the 8-day exposure time point. Deuteration profiles of two SNase peptides. The black line is the deuteration level calculated with the measured protection factors (○), the red line with the phenomenological approximation (□), the blue line with COREX (◊), and the green line with POPPeT (△) We also tested the performance of POPPeT on equine cytochrome c (pdb: 1OCD). We calculated for 34 exchangeable hydrogens of this protein the logPFs with POPPeT and with the phenomenological approximation. For this protein, we have not compared POPPeT with COREX, as results of COREX for oxidized cytochrome c are not publicly available. For POPPeT, we needed information about the exchange-enabling protein motions. However, this information is not readily available. Based on the dependency of ΔGex on the concentration of added denaturant, as described by [34], we categorized the amide H atoms into three out of the four considered categories of protein motions (Table S6). Hydrogens with no or little change in taGex with increasing levels of denaturant are considered to exchange through local fluctuations. As a consequence, we classified them as "L." Amide H atoms requiring partial unfolding to enable hydrogen exchange show a non-linear dependency between ΔGex and the denaturant concentration. These hydrogens are grouped into the "UD" category. Lastly, hydrogens with a strong linear dependency between ΔGex and the denaturant levels have been categorized as "UD + EX1" as they require global protein unfolding in order to become exchange competent. Similarly to SNase, the accuracy and the correlation between the predicted and measured logPFs is higher for POPPeT than is the case for the phenomenological approximation (Figs. 7 and 8). The correlation between the measured and predicted PFs is moderately high for POPPeT (ρ = 0.75), and low for the phenomenological equation (ρ = 0.40). The maximum absolute difference between the predicted and measured PFs is 2.88 for POPPeT and 3.53 for the phenomenological approximation. The average difference between the predicted and measured PFs is approximately two times smaller for POPPeT than for the phenomenological approximation, i.e., 0.68 and 1.31, respectively. Difference between the predicted and measured logPFs of oxidized cytochrome c for the phenomenological approximation (left) and POPPeT (right) Correlation between the predicted and measured logPFs of oxidized cytochrome c. The correlation between the predicted and measured logPFs is moderately high for POPPeT (right; 0.75), while it is low for the phenomenological approximation (left; 0.40) We also tested the effect of the observed differences in logPFs at the deuteration level with two peptides (Fig. 9), from residue 7 to 15, and from residue 64 to 70. We calculated, as before, for each peptide, the deuterium level based on the measured and predicted logPFs at 16 different time points ranging from 30 s to 16 days. For the first peptide, with amino acid residue 7 to 15, the deuteration profiles calculated with the measured PFs and the predicted PFs of POPPeT closely match until time point 10, i.e., 8 h. From this point, the deuterium content based on the output of POPPeT is at most 1.00 Da lower than the deuteration level (Fig. 9, top). 
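The cytochrome c amide hydrogens are classified qualitatively from the denaturant dependence of ΔGex; one possible way to operationalise that rule is sketched below in Python. This is our own hypothetical implementation, not the authors' procedure: the quadratic fit, the cutoffs, and the example data are arbitrary choices made only to illustrate the three-way classification.

import numpy as np

def classify_motion(denaturant_m: np.ndarray, delta_g_ex_kj: np.ndarray,
                    slope_cutoff: float = 0.5, curvature_cutoff: float = 0.2) -> str:
    # Fit a quadratic so that the linear term approximates the denaturant dependence
    # (m-value) and the quadratic term flags a non-linear (partial-unfolding) trend.
    c2, c1, _ = np.polyfit(denaturant_m, delta_g_ex_kj, 2)
    if abs(c1) < slope_cutoff and abs(c2) < curvature_cutoff:
        return "L"          # little or no denaturant dependence -> local fluctuations
    if abs(c2) >= curvature_cutoff:
        return "UD"         # non-linear dependence -> partial unfolding
    return "UD+EX1"         # strong linear dependence -> global unfolding

conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                               # denaturant, M
print(classify_motion(conc, np.array([30.0, 29.9, 30.1, 29.8, 30.0])))   # "L"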
There is little or no resemblance between the deuteration profiles based on the measured PFs and the PFs predicted with the phenomenological approximation. The differences in deuteration range from − 1.27 to 1.90. For the second peptide (Fig. 9, bottom), with residues ranging from 64 to 70, no deuteration profile based on the predicted PFs matches closely with the profile calculated from the measured PFs. For POPPeT and for the phenomenological equation, the PFs are overestimated, resulting in a slower predicted deuterium uptake. The maximum difference for POPPeT is 1.08 and for the phenomenological approximation is 1.22. Deuteration profiles of two oxidized cytochrome c peptides. The black line is the deuteration level calculated with the measured protection factors (○), the red line with the phenomenological approximation (□), and the green line with POPPeT (△) In this paper, we presented a new method to estimate the PF of backbone amide hydrogen atoms. POPPeT predicts the PFs based on protein motions with higher accuracy than the existing phenomenological approximation and COREX. Given the small training set used to develop POPPeT, we would like to point out that POPPeT should not be considered as a full-fledged method for protection factor prediction, but rather as a precursor of a number of approaches that use information about HX-enabling protein motions. The statistically very significant association between the logPF and the protein motions that enable HX indicates that whenever this information is available, it should be included in any PF prediction strategy. Using a larger training set to develop POPPeT or similar methods will most likely lead to different parameter estimates, but the overall trends identified here, i.e., hydrogen atoms which become exchange competent through local fluctuations, have a significantly lower PF than hydrogens which require local or global unfolding and will remain statistically significant and thus important for PF prediction. Additionally, we hope that POPPeT and its results will be an encouragement to the structural biology community to (routinely) perform experiments that assess HX-enabling protein motions, and/or will lead to data-driven, computational methods to predict HX-enabling protein motions. Additionally, we showed that small differences and/or high correlation between the predicted and measured PFs do not necessarily imply small differences in the deuteration level of peptides. A common approach to assess the outcomes of computational methods that predict protein structure is comparing the measured deuteration levels with the predicted deuteration content. When selecting the best matching protein structure, one should keep in mind that the observed differences in deuteration content are not only the result of differences between the true and the predicted protein structure, but that the predicted protection factors also contribute to the differences in deuteration content. Overall, we expect that POPPeT or other protection factor predictors which incorporate protein motion information will be applicable to computational methods trying to predict the three-dimensional structure of proteins using restraints derived from HDX-MS. 13361_2018_2068_MOESM1_ESM.pdf (146 kb) ESM 1 (PDF 146 kb) Linderstrøm-Lang, K.U.: Deuterium exchange and protein structure. In: Symposium on protein structures, pp. 23–24. (1958)Google Scholar Hvidt, A.: A discussion of the pH dependence of the hydrogen-deuterium exchange of proteins. C. R. Trav. Lab Carlsberg. 
34, 299–317 (1964)PubMedGoogle Scholar Hvidt, A., Nielsen, S.O.: Hydrogen exchange in proteins. Adv. Protein. Chem. 21, 287386 (1966)Google Scholar Woodward, C.K., Simon, I., Tuchsen, E.: Hydrogen exchange and the dynamic structure of proteins. Mol. Cell. Biochem. 48, 135–160 (1982)CrossRefGoogle Scholar Truhlar, S.M.E., Croy, C.H., Torpey, J.W., Koeppe, J.R., Komives, E.A.: Solvent accessibility of protein surfaces by amide H/H2 exchange MALDI-TOF mass spectrometry. J. Am. Soc. Mass Spectrom. 17, 1490–1497 (2006)CrossRefGoogle Scholar Rosenberg, A., Engberg, J.: Studies of hydrogen exchange in proteins. II. The reversible thermal unfolding of chymotrypsinogen A as studied by exchange kinetics. J. Biol. Chem. 244, 6153–6159 (1969)PubMedGoogle Scholar Woodward, C.K.: Dynamic solvent accessibility in the soybean trypsin inhibitor-trypsin complex. J. Mol. Biol. 11, 509–515 (1977)CrossRefGoogle Scholar Hilton, B.D., Woodward, C.K.: Nuclear magnetic resonance measurement of hydrogen exchange kinetics of single protons in basic pancreatic trypsin inhibitor. Biochemistry. 17, 3325–3332 (1978)CrossRefGoogle Scholar Hilton, B.D., Woodward, C.K.: On the mechanism of isotope exchange kinetics of single protons in bovine pancreatic trypsin inhibitor. Biochemistry. 18, 5834–5841 (1979)CrossRefGoogle Scholar Anderson, J.S., Hernandez, G., Lemaster, D.M.: A billion-fold range in acidity for the solvent-exposed amides of Pyrococcus furiosus Rubredoxin. Biochemistry. 47, 6178–6188 (2008)CrossRefGoogle Scholar Hernandez, G., Anderson, J.S., Lemaster, D.M.: Polarization and polarizability assessed by protein amide activity. Biochemistry. 48, 6482–6494 (2009)CrossRefGoogle Scholar Englander, S.W., Kallenbach, N.R.: Hydrogen exchange and structural dynamics of proteins and nucleic-acids. Q. Rev. Biophys. 16, 521–655 (1983)CrossRefGoogle Scholar Milne, J.S., Roder, M.H., Wand, A.J., Englander, S.W.: Determinants of protein hydrogen exchange studied in equine cytochrome c. Protein Sci. 7, 739–745 (1998)CrossRefGoogle Scholar Skinner, J.J., Lim, W.K., Bédard, S., Black, B.E., Englander, S.W.: Protein hydrogen exchange: testing current models. Protein Sci. 21, 987–995 (2012)CrossRefGoogle Scholar Skinner, J.J., Lim, W.K., Bédard, S., Black, B.E., Englander, S.W.: Protein dynamics viewed by hydrogen exchange. Protein Sci. 21, 996–1005 (2012)CrossRefGoogle Scholar Hilser, V.J., Freire, E.: Structure-based calculation of the equilibrium folding pathway of proteins: correlation with hydrogen-exchange protection factors. J. Mol. Biol. 262, 756–772 (1996)CrossRefGoogle Scholar Wooll, J.O., Wrabl, J.O., Hilser, V.J.: Ensemble modulation as an origin of denaturant-independent hydrogen exchange in proteins. J. Mol. Biol. 301, 247–256 (2000)CrossRefGoogle Scholar Vertrees, J., Barritt, P., Whitten, S., Hilser, V.J.: COREX/BEST server: a web browser-based program that calculates regional stability variations within protein structures. Bioinformatics. 21, 3318–3319 (2005)CrossRefGoogle Scholar Best, R.B., Vendruscolo, M.: Structural interpretation of hydrogen exchange protection factors in proteins: characterization of the native state fluctuations of CI2. Structure. 14, 97–106 (2006)CrossRefGoogle Scholar Kieseritzky, G., Morra, G., Knapp, E.W.: Stability and fluctuations of amide hydrogen bonds in a bacterial cytochrome c: a molecular dynamics study. J. Biol. Inorg. Chem. 
11, 26–40 (2006)CrossRefGoogle Scholar Dovidchenko, N.V., Galzitskaya, O.V.: Prediction of residue status to be protected or not protected from hydrogen exchange using amino acid sequence only. Open Biochem. J. 2, 77–80 (2008)CrossRefGoogle Scholar Dovidchenko, N.V., Lobanov, M.Y., Garbuzynskiy, S.O., Galzitskaya, O.V.: Prediction of amino acid residues protected from hydrogen-deuterium exchange in a protein chain. Biochem. Mosc. 74, 888–897 (2009)CrossRefGoogle Scholar Craig, P.O., Lätzer, J., Weinkam, P., Hoffman, R.M., Ferreiro, D.U., Komives, E.A., Wolynes, P.G.: Prediction of native-state hydrogen exchange from perfectly funneled energy landscapes. J. Am. Chem. Soc. 15, 17463–17472 (2011)CrossRefGoogle Scholar Ma, B.Y., Nussinov, R.: Polymorphic triple beta-sheet structures contribute to amide hydrogen/deuterium (H/D) exchange protection in the Alzheimer amyloid beta 42 peptide. J. Biol. Chem. 286, 34244–34253 (2011)CrossRefGoogle Scholar Liu, T., Pantazatos, D., Li, S., Hamuro, Y., Hilser, V.J., Woods Jr., V.L.: Quantitative assessment of protein structural models by comparison of H/D exchange MS data with exchange behavior accurately predicted by DXCOREX. J. Am. Soc. Mass Spectrom. 23, 43–56 (2012)CrossRefGoogle Scholar Petruk, A.A., Defelipe, L.A., Limardo, R.G.R., Bucci, H., Marti, M.A., Turjanski, A.G.: Molecular dynamics simulations provide atomistic insight into hydrogen exchange mass spectrometry experiments. J. Chem. Theory Comput. 9, 658–669 (2013)CrossRefGoogle Scholar Sljoka, A., Wilson, D.: Probing protein ensemble rigidity and hydrogen-exchange. Phys. Biol. 10, 056013 (2013)CrossRefGoogle Scholar Park, I., Venable, J.D., Steckler, C., Cellitti, S.E., Lesley, S.A., Spraggon, G., Brock, A.: Estimation of hydrogen-exchange protection factors from MD simulation based on amide hydrogen bonding analysis. J. Chem. Inf. Model. 55, 1914–1925 (2015)CrossRefGoogle Scholar Hilser, V.J.: Modeling the native state ensemble. Methods Mol. Biol. 168, 93–116 (2001)PubMedGoogle Scholar Pearson, K.: Notes on regression and inheritance in the case of two parents. Proc. R. Soc. Lond. 58, 240242 (1895)Google Scholar Truckses, D.M., Somoza, J.R., Prehoda, K.E., Miller, S.C., Markley, J.L.: Coupling between trans/cis proline isomerization and protein stability in staphylococcal nuclease. Protein Sci. 5, 1907–1916 (1996)CrossRefGoogle Scholar Bai, Y., Milne, J.S., Mayne, L., Englander, S.W.: Primary structure effects on peptide group hydrogen exchange. Proteins Struct. Funct. Genet. 17, 75–86 (1993)CrossRefGoogle Scholar Pace, C.N.: Determination and analysis of urea and guanidine hydrochloride denaturation curves. Methods Enzymol. 131, 266–280 (1986)CrossRefGoogle Scholar Bai, Y., Sosnick, T.R., Mayne, L., Englander, S.W.: Protein folding intermediates: native-state hydrogen exchange. Science. 269, 192–197 (1995)CrossRefGoogle Scholar Bai, Y., Englander, J.J., Mayne, L., Milne, J.S., Englander, S.W.: Thermodynamic parameters form hydrogen exchange measurements. Methods Enzymol. 259, 344–356 (1995)CrossRefGoogle Scholar Fuentes, E.J., Wand, A.J.: Local stability and dynamics of apocytochrome b562 examined by the dependence of hydrogen exchange on hydrostatic pressure. Biochemistry. 37, 9877–9883 (1998)CrossRefGoogle Scholar Milne, J.S., Xu, Y., Mayne, L.C., Englander, S.W.: Experimental study of the protein folding landscape: unfolding reactions in cytochrome c. J. Mol. Biol. 
290, 811822 (1999)CrossRefGoogle Scholar Hernancez, G., Jenney, F.E., Adams, M.W.W., LeMaster, D.M.: Millisecond time scale on conformational flexibility in a hyperthermophile protein at ambient temperature. Proc. Natl. Acad. Sci. U. S. A. 97, 3166–3170 (2000)CrossRefGoogle Scholar Hoang, L., Bédard, S., Krishna, M.M.G., Lin, Y., Englander, S.W.: Cytochrome c folding pathway: kinetic native-state hydrogen exchange. Proc. Natl. Acad. Sci. U. S. A. 99, 12173–12178 (2002)CrossRefGoogle Scholar Berman, H.M., Westbrook, J., Feng, Z., Gilliland, G., Bhat, T.N., Weissig, H., Shindyalov, I.N., Bourne, P.E.: The protein data bank. Nucleic Acids Res. 28, 235–242 (2000)CrossRefGoogle Scholar Pettersen, E.F., Goddard, T.D., Huang, C.C., Couch, G.S., Greenblatt, D.M., Meng, E.C., Ferrin, T.E.: UCSF Chimera–a visualization system for exploratory research and analysis. J. Comput. Chem. 25, 1605–1612 (2004)CrossRefGoogle Scholar Kutner, M.H., Nachtsheim, C.J., Neter, J., Li, W.: Applied linear statistical models. McGraw-Hill, New York (2005)Google Scholar 1.I-BioStatHasselt UniversityHasseltBelgium 2.Department of ChemistryKing's College LondonLondonUK Claesen, J. & Politis, A. J. Am. Soc. Mass Spectrom. (2019) 30: 67. https://doi.org/10.1007/s13361-018-2068-x Revised 13 August 2018 Accepted 28 August 2018 DOI https://doi.org/10.1007/s13361-018-2068-x the American Society for Mass Spectrometry
December 2020 , 7:2 | Cite as The importance of grain and cut-off size in shaping tree beta diversity along an elevational gradient in the northwest of Colombia Johanna Andrea Martínez-Villa Sebastián González-Caro Álvaro Duque Species turnover (β-diversity) along elevational gradients is one of the most important concepts in plant ecology. However, there is a lack of consensus about the main driving mechanisms of tree β-diversity at local scales in very diverse ecosystems (e.g., Andean mountains), as well as how the sampling effect can alter β-diversity estimations. Recently, it has been hypothesized that patterns of change in β-diversity at local scales along elevational gradients are driven by sampling effects stemming from differences in the size of the species pool rather than by underlying community assembly mechanisms. Thus, we aim to evaluate the relative extent to which sampling effects, such as species pool size, grain size, and tree size cut-off, determine species sorting, and thus, the variability of β-diversity at local scales along elevational gradients in the northwest of Colombia. Using 15 1-ha permanent plots spread out along a 3000 m elevational gradient, we used standardized β-deviation to assess the extent to which either sampling effects or the community assembly mechanisms determine the changes in species composition at local scales. Standardized β-deviation was measured as the difference between the observed and null β-diversity divided by the standard deviation of the null β-diversity. We found that the magnitude of change in local β-deviation along the elevational gradient was significant and dependent on the employed spatial grain size and tree size cut-off. However, β-deviation increased with elevation in all sampling designs, which suggests that underlying community assembly mechanisms play a key role in shaping local β-diversity along the elevational gradient. Our findings suggest that grain size enlargement and the inclusion of trees with small diameters will improve our ability to quantify the extent to which the community assembly mechanisms shape patterns of β-diversity along elevational gradients. Overall, we emphasize the scale-dependent nature of the assessment of β-diversity. Likewise, we call for the need of a new generation of enlarged forest inventory plots along gradients of elevation in tropical forests that include small individuals to improve our understanding about the likely response of diversity and function to global change. Andean forests Null models Species pool Species sorting Sampling effect BDdev Deviation in Beta diversity BDexp Beta diversity expected by the null model BDobs Beta diversity observed Spatial turnover in community composition (β-diversity) along elevational gradients has been one of the most striking and studied patterns in ecology (Whittaker 1960; Lomolino 2001; Rahbek 2005). In tropical mountain systems, β-diversity is expected to decrease with elevation (Tello et al. 2015) due to the influence of different community assembly mechanisms that could vary along the elevational gradient (Laiolo et al. 2018). Overall, different assembly mechanisms, such as dispersal limitation (Condit et al. 2002), species sorting (Qian and Ricklefs 2007), habitat specialization (Janzen 1967; Jankowski et al. 2009), and priority effects (Chase 2010; Fukami 2015), have been thought to explain the spatial turnover in the composition of plant communities. 
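A simplified numerical sketch of the β-deviation idea is given below in Python. It is not the procedure used in the study (which follows the individual-based null models of Kraft et al. and Tello et al. and a different β-diversity metric); here β-diversity is simply Whittaker's γ divided by mean α, and the null model reshuffles individuals among subplots while preserving subplot sizes and the pooled species abundances.

import numpy as np

def whittaker_beta(comm: np.ndarray) -> float:
    # comm: subplots x species abundance matrix; beta = gamma richness / mean alpha richness.
    gamma = (comm.sum(axis=0) > 0).sum()
    alpha = (comm > 0).sum(axis=1).mean()
    return gamma / alpha

def beta_deviation(comm: np.ndarray, n_null: int = 999, seed: int = 0) -> float:
    # Standardised deviation: (observed beta - mean null beta) / SD of null beta.
    rng = np.random.default_rng(seed)
    n_sub, n_sp = comm.shape
    pool = np.repeat(np.arange(n_sp), comm.sum(axis=0).astype(int))  # every individual, labelled by species
    sizes = comm.sum(axis=1).astype(int)                             # individuals per subplot
    null_betas = np.empty(n_null)
    for i in range(n_null):
        shuffled = rng.permutation(pool)
        null = np.zeros((n_sub, n_sp))
        start = 0
        for s, size in enumerate(sizes):
            ids, counts = np.unique(shuffled[start:start + size], return_counts=True)
            null[s, ids] = counts
            start += size
        null_betas[i] = whittaker_beta(null)
    return (whittaker_beta(comm) - null_betas.mean()) / null_betas.std()

comm = np.array([[5, 0, 2, 0], [0, 4, 1, 3], [2, 2, 0, 0]])  # toy 3-subplot, 4-species matrix
print(beta_deviation(comm, n_null=199))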
However, sampling effects associated with the size of the species pools and the regional abundance distributions have recently been proposed as the main cause of the observed decrease in β-diversity along elevational gradients (Kraft et al. 2011). In other words, the observed variation in β-diversity along steep elevational gradients may be primarily driven by differences in the size of the species pools and the number of individuals per species generated by biogeographical or regional processes (Ricklefs 1987) rather than by the underlying mechanisms of community assembly described above. Disentangling the relative importance that species pool size (Kraft et al. 2011) or community assembly mechanisms have on determining β-diversity at different scales along elevation gradients in the tropics is paramount for developing robust forest conservation plans capable of maintaining diversity (Lomolino 2001; Rahbek 2005). The spatial scale at which vegetation studies are developed is a key factor that can strongly influence β-diversity gradients (Stier et al. 2016). The concept of scale involves two factors: i) extent, the geographical area where comparisons are made; and ii) grain size, the unit of measurement at which data are collected or aggregated for analysis (Whittaker et al. 2001). In a fixed extent, a variation in grain size implies a variation in the sampled relative species abundances and, subsequently, in the spatial patterns of aggregation (Crawley and Harral 2002). Directly related to β-diversity, when the spatial grain size of local communities increases, species present in the regional species pool will be better represented, generally leading to a decline in β-diversity (Barton et al. 2013). Along elevational gradients, 0.1-ha plots with grain sizes of 0.01 ha have been widely used to assess and detect fine-grained environmental variation effects on determining β-diversity at a local scale (Kraft et al. 2011; Mori et al. 2013; Tello et al. 2015). However, in species-rich communities, smaller grain sizes may lead to the undersampling of individuals, an issue that can artificially enhance β-diversity (Condit et al. 2005). Comparative studies of β-diversity at contrasting grain sizes along elevational gradients are needed to help disentangle the extent to which either sampling effects or community assembly mechanisms shape β-diversity patterns. Along elevational gradients, another largely unexplored issue pertains to the likely effect that different diameter at breast height (DBH) cut-off sizes can have on β-diversity assessments (Mori et al. 2013). Overall, reducing the minimum size, or DBH, of the sampled individuals increases the community size, potentially increasing floristic diversity measurements as well (Stier et al. 2016). In tropical mountains, the most popular DBH cut-off sizes utilized to assess changes in β-diversity along elevational gradients range from DBH ≥2.5 cm (Kraft et al. 2011; Myers et al. 2013; Tello et al. 2015) to DBH ≥10 cm (Girardin et al. 2014). However, none of these studies have evaluated the likely comparative effect that tree cut-off size variation can have on shaping β-diversity. Keeping the grain size constant while decreasing the DBH cut-off will cause a change in species relative abundance, and this difference in abundance may lead to changes in the extent to which underlying ecological mechanisms can explain the overall pattern of diversity (Powell et al.
2011; Chase and Knight 2013). In other words, sampling not only has a potential effect on the diversity patterns, but also on our ability to identify the underlying community assembly mechanisms that drive these observed patterns. For example, in tropical lowlands, several studies have proposed that enhancing community size by including smaller individuals (e.g. shrubs and juveniles) may lead to a higher influence of deterministic processes, such as soil fertility, on defining species sorting (Duque et al. 2002; Comita et al. 2007). Understanding the effect of different tree cut-off sizes in determining the magnitude of β-deviation at a local scale along elevational gradients will help to distinguish sampling constructs from true ecological signals. This is essential in helping researchers to identify the underlying drivers of species distribution and forest function in the tropical Andean mountains. In order to identify the likely influence of local community assembly mechanisms on shaping β-diversity along elevational gradients, we first need to determine whether β-diversity deviates from null (stochastic) processes (Kraft et al. 2011). Null models help to disentangle ecological assembly mechanisms by quantifying random processes in the ecological community and making comparisons among regions with different species pool sizes possible (Chase and Myers 2011). A positive standardized difference between the observed β-diversity and the expected β-diversity obtained from a null model divided by the standard deviation of the null model (defined here as β-deviation), indicates a higher β-diversity than expected by chance due to the influence of local processes that cause an aggregated non-random spatial pattern of species distribution (Mori et al. 2013; Tello et al. 2015). However, a positive and systematic increase of β-deviation along the elevational gradient, after removing sampling effects and differences in the size of species pools among sites, is not by itself enough to identify the underlying community assembly mechanism (e.g. species sorting or dispersal limitation) responsible for an aggregated non-random pattern along the whole elevational gradient (i.e. Tello et al. 2015). Mirroring the magnitude of the operating species assembly mechanisms found along the latitudinal gradient (Myers et al. 2013), we might expect the relative importance of biological processes, such as dispersal limitation, to decrease with elevation; an opposing effect to species sorting, which can be positively correlated with elevation. In this study, we employed a nested sampling design using a series of 15 1-ha plots scattered in wet forests located in northwestern Colombia, where the Andean mountain ranges end, to examine the role that species pool size, grain size and tree cut-off size played in determining β-diversity along elevational gradients. For this study, we had three main hypotheses: i) under the assumption that local variation in species composition primarily depends on the size of the species pool, we do not expect any significant relationships between β-deviation and elevation to occur after controlling for the species pool (Kraft et al. 2011). In contrast, if ecological mechanisms (e.g. species sorting) determine a non-random spatial species distribution, the variation in β-deviation may show a systematic change with elevation as a result of the harsh conditions imposed by highlands (after Tello et al. 2015).
ii) The increase of grain size within a fixed extent increases the floristic similarity among samples (hereafter grain size hypothesis), and thus, decreases β-diversity. We expect the magnitude of the relationship between elevation and β-deviation (the slope of the line) to decrease with the increase of grain size at a local scale along the elevational gradient. iii) The reduction of the selected tree cut-off size will increase the local community size and will reduce the compositional differences between samples. We also would expect a reduction in the β-deviation of each plot along the elevational gradient. The study area was located in the northwest region of Colombia between 5°50′ and 8°61′ North and 74°61′ and 77°33′ West. This region encompasses a highly variable elevational gradient in terms of its topography, climate, and soils. The study was conducted using data collected from 15 permanent 1-ha (100 m × 100 m) forest inventory plots which were established between 2006 and 2010. The permanent plots cover a large geographic area of approximately 64,000 km2, mostly within the province of Antioquia (Fig. 1), and span an elevational gradient of 50 to 2950 m asl. The average distance between plots was 160.5 km (range: 26.1–419.5 km). The Andean region in Colombia contains only approximately 34% of its original natural cover, primarily due to historical deforestation (Duque et al. 2014; Cabrera et al. 2019). Thus, at least in some of the surveyed locations, we expected to find some previous human disturbances, specifically in the El Bagre, Carepa and Necoclí plots (Fig. 1), which were located in small forest fragments (≈ 50 ha). These plots may have experienced human disturbance and elevated tree mortality along forest edges (Duque et al. 2015). Location of the 15 1-ha plots in Antioquia on a regional map (inset) to show its location within Colombia. The elevation range of the plots is presented in grayscale: white for plots located between 0 and 1000 m asl; gray for plots located between 1000 and 2000 m asl, and black for those located between 2000 and 3000 m asl. Plot censuses In each 1-ha plot, all shrubs, trees, palms, and tree ferns with a diameter at breast height (DBH) ≥ 10 cm (hereafter "large trees") were mapped, tagged, and measured. Additionally, all of the plants with a DBH ≥ 1 cm (hereafter "all trees") were also mapped, tagged and measured in a 40 m × 40 m subplot (1600 m2) located near the center of each plot (Additional file 1: Figure S1). Voucher specimens were collected for each potentially unique species in each plot. We collected vouchers in all cases where there was any doubt as to whether an individual plant was the same species as another individual that was already collected within the same plot. Taxonomic identifications were made by comparing the specimens with herbarium material and with the help of specialists for some plant groups. Vouchers are kept at the University of Antioquia's Herbarium (HUA). The plants that could not be identified to the species level were classified into morphospecies based on differences in the morphology of their vegetative characters. Approximately 3.5% of individuals were excluded from the analysis due to low-quality vouchers resulting from a lack of clear botanical characters, earlier stages of development, or incorrect enumeration. In total, we identified 26,222 individuals, 112 families, 428 genera and 1707 morphospecies.
Sampling effects DBH cut-off and species pool size effect We divided the dataset into three DBH cut-off sizes: i) large trees: represented by all individuals with a DBH ≥ 10 cm tallied in the entire 100 m × 100 m plots (1-ha); ii) small trees: represented by all individuals with a 1 cm ≤ DBH < 10 cm, which were measured only in the 40 m × 40 m subplot inserted within the 1-ha plot (Additional file 1: Figure S1); iii) all trees: represented by all individuals with a DBH ≥ 1 cm tallied in the 40 m × 40 m subplot (0.16-ha) described above. In order to assess the effect of species pool size for each one of the tree DBH cut-off sizes employed to generate our three sampling communities (large, small and all trees), we used the species richness corresponding to each data set. For large trees, we used the species richness from each 1-ha plot but only including trees with a DBH ≥ 10 cm. For the small and all trees categories, we used their respective species richness from each 0.16-ha plot (40 m × 40 m) (see Table 1). Description and location of the 15 1-ha permanent plots in the northwest of Colombia. Latitude (North) and Longitude (West) are presented in geographical coordinates (degrees). N: total number of individuals. S: species richness. The columns of 0.16-ha contain the information about N (number of individuals) and S (species number) by different DBH cut-off size in the 40 m × 40 m subplot inside the plot. The column of 1-ha has information about N and S for the large trees in the whole plot. [Table 1 column groups: 0.16 ha (DBH ≥ 1 cm); 0.16 ha (1 cm ≤ DBH < 10 cm); 1 ha (DBH ≥ 10 cm); the individual plot rows are not recoverable from the source text.] Grain size effect The grain size hypothesis was assessed by employing three different grain sizes. For large trees, we used 10 m × 10 m (0.01-ha), 20 m × 20 m (0.04-ha) and 50 m × 50 m (0.25-ha). The grain sizes used to analyze the influence of the spatial scale for small and all trees were 5 m × 5 m (0.0025-ha), 10 m × 10 m (0.01-ha) and 20 m × 20 m (0.04-ha). The differences in the spatial grain size between large versus small and all trees are due to the fact that individuals with a DBH ≥ 1 cm were only measured in the 40 m × 40 m subplot. Environmental features The elevation of each plot was calculated using a GPS. Each elevation point corresponds to the 0,0 point located in the lower-left corner of each plot along the gradient (Additional file 1: Figure S1). Samples of the soil A horizon (mineral soil after removing the organic layer) from five points in each 20 m × 20 m quadrat were collected (N = 25 composite samples per 1-ha plot). At each point, a 500 g soil sample was taken from a depth of 10–30 cm; the five samples from each quadrat were then combined, and a 500 g composite sample was taken and air-dried after removing macroscopic organic matter. pH, Ca, Mg and K concentrations were analyzed at the Biogeochemical Analysis Laboratory at the National University of Colombia in Medellín. Exchangeable Ca, Mg, and K were extracted with 1 mol·L−1 ammonium acetate and analyzed using atomic absorption. Soil pH was measured in water as one part soil to two parts water. Other soil nutrients, such as N and P, were not measured due to logistical constraints of sampling at this spatial resolution and scale. We used geostatistical methods to obtain spatial predictions of soil variables at spatial scales smaller than 20 m × 20 m (5 m × 5 m and 10 m × 10 m). We first computed empirical variograms to test the likely spatial structure of each soil variable (pH, Ca, Mg, and K) within the 1-ha plot.
The variograms for the four variables did not show any significant spatial trend. Therefore, we used a bilinear interpolation method based on resampled soil data to obtain values of soil variables at different grain sizes in each plot. This method employs the distance-weighted average of the nearest pixel values to estimate the values of unmeasured points (Hijmans 2016). We calculated soil variables at the 50 m × 50 m grain size using the mean of the soil variables at the 20 m × 20 m scale. Spatial analyses were conducted using the geoR (Ribeiro and Diggle 2001) and raster (Hijmans 2016) packages. Estimations of β-diversity We calculated the observed β-diversity (BDobs) based on abundance data (Legendre and Gallagher 2001; De Cáceres et al. 2012). Taking into account all living trees by species in each one of the plots, for every grain size, we built a matrix (X = [xij]) with dimension n × p (quadrat × species), where X is the community matrix of each plot and xij contains the number of individuals of species j in the quadrat (grain) i (De Cáceres et al. 2012). For each matrix X = [xij], β-diversity was estimated in two steps. First, we transformed the abundances of each species by grain size using the Hellinger transformation. This transformation consists of standardizing the abundance of each species by rows, that is, dividing the abundance of each species by the total abundance of the site (in this case, of each grain) in each plot, and then taking the square root of these values (Legendre and Gallagher 2001). Thus, the data set expresses species abundance as the square-root-transformed proportional abundance in each grain by site (Jones et al. 2008). The Hellinger transformation is given by: $$ {\mathbf{Y}}_{ij}=\sqrt{\frac{x_{ij}}{\sum_{k=1}^p{x}_{ik}}} $$ where Yij is the transformed matrix, xij is the value of species j in site i, k is the species index and p is the number of species in a given grain with row and column indices i and j (Tan et al. 2017). The Hellinger transformation standardizes species abundance and reduces the weight of the most abundant species in the analysis. The use of the Hellinger transformation makes community compositional data containing many zeros ("double zero") suitable for analysis by linear methods (Legendre and Gallagher 2001; Legendre 2007). Second, we estimated BDobs as the variance of Y (De Cáceres et al. 2012), which is calculated as follows: $$ {\mathrm{BD}}_{\mathrm{obs}}=\mathrm{Var}(Y)=\frac{\mathrm{SS}(Y)}{\left(n-1\right)} $$ where SS(Y) is the sum of squares and n is the number of quadrats. BDobs is 0 when all quadrats have exactly the same composition and 1 when they do not share any species. Null model We used a null model to quantify the extent to which the variation in the size of species pool (different species number due to the DBH cut-off size) and scale (different grain size) account for variation in β-diversity (Kraft et al. 2011). The species pool for large, small and all trees was defined as the observed number of species in either the 1-ha or the 0.16-ha plots (after Kraft et al. 2011). The null model randomizes the location of trees among grains within the plot, creating communities that vary in relation to the location of individuals, but fixing the community size (number of individuals), and thus, the observed relative species abundance of each species pool (Tello et al. 2015).
This null model removes the local ecological mechanisms that create non-random patterns, such as aggregation and intraspecific co-occurrence (De Cáceres et al. 2012). The Hellinger transformation is then applied to the randomized matrix and expected β-diversity (BDexp) is calculated using the formula presented above. This process is repeated 1000 times per plot, for each grain size, and for each predefined DBH cut-off size. The BDexp is calculated as the mean of 1000 iterations of the null model. β-deviation (BDdev) was defined as the standardized effect size (SES) calculated using the difference between BDobs and BDexp divided by the standard deviation of the frequency distribution of the null model (SDexp). $$ {\mathrm{BD}}_{\mathrm{dev}}=\frac{{\mathrm{BD}}_{\mathrm{obs}}-\mathrm{mean}\ \left({\mathrm{BD}}_{\mathrm{exp}}\right)}{{\mathrm{SD}}_{\mathrm{exp}}} $$ Positive values of the slope of the variation in BDdev along elevational gradients indicate a significant effect of community assembly mechanisms on determining the rate of change in species composition at local scales (Chase and Myers 2011; Tello et al. 2015). In contrast, slope values that do not differ significantly from zero indicate that the variation in BDdev along elevational gradients is primarily due to sampling effects that come along with the variation in the size of the species pool (Kraft et al. 2011). We used linear mixed regression models (LMM; Zuur et al. 2009) to identify the main determinants of change in BDobs, BDexp, and BDdev along the elevational gradient. Variables included in the LMM as fixed effects were: grain size, size of the species pool, elevation (m asl) and soil heterogeneity. Soil heterogeneity was assessed for each grain size using the interpolated values from the 20 m × 20 m subplots described above. To represent soil heterogeneity at a local scale, we used the variance of the subplot scores on the first axis of a principal component analysis (PCA). PCA was applied to pH, Ca, Mg, and K concentrations. PCA analyses were performed for each grain size and DBH cut-off size (Additional file 1: Methods). Soil heterogeneity was modeled as a continuous variable. Finally, plot identity (or plot name) was included as a random effect to control for particular conditions of each site (Zuur et al. 2009). The interaction term between grain size and elevation was included to directly assess the combined effect of these variables on shaping the β-diversity (BDobs, BDexp, and BDdev). In LMMs, the marginal explained variation (R2 marginal) is associated with the fixed effects, while the conditional explained variation (R2 conditional) is associated with both the fixed and random effects. Because individuals with DBH ≥ 1 cm and with 1 cm ≤ DBH < 10 cm were not sampled at the 50 m × 50 m scale, we were unable to include the three tree size categories in the same model. Therefore, separate models were used for large trees, small and all trees. The best model for each DBH cut-off size was chosen using backward stepwise model selection based on the Akaike information criterion (AIC) (Crawley 2007). In order to assess the likely spatial autocorrelation in our models, we extracted the residuals for each model (BDobs, BDexp, and BDdev, for large, small and all trees), separating them by grain size, and assigning the respective spatial coordinate to each one. Then, we estimated a semi-variogram based on 100 draws to define an envelope for the significance of the observed spatial structure of the residuals. This analysis was performed with the geoR package (Ribeiro and Diggle 2001), and all analyses were performed in R 3.3.0 (Core Team 2016).
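To make the quantities defined above concrete, the short sketch below (written for this summary; it is not the authors' R code, and the toy community matrix, function names, and the simple individual-shuffling implementation of the null model are illustrative assumptions) computes the Hellinger transformation, BDobs = Var(Y), a null distribution obtained by relocating individuals at random among quadrats while keeping the plot-level abundance of each species fixed, and the standardized β-deviation.

```python
# Minimal illustrative sketch (not the authors' R code): Hellinger transformation,
# observed beta-diversity BD_obs = Var(Y), an individual-shuffling null model, and
# the standardized beta-deviation (SES). The toy matrix is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def hellinger(X):
    """Row-wise Hellinger transformation of an abundance matrix (quadrats x species)."""
    return np.sqrt(X / X.sum(axis=1, keepdims=True))

def bd_total(X):
    """BD = Var(Y) = SS(Y) / (n - 1), with Y the Hellinger-transformed matrix."""
    Y = hellinger(X)
    Yc = Y - Y.mean(axis=0)                    # column-centred
    return (Yc ** 2).sum() / (X.shape[0] - 1)  # SS(Y) / (n - 1)

def null_bd(X, n_iter=1000):
    """Null distribution: relocate every individual to a random quadrat,
    keeping the plot-level abundance of each species (the species pool) fixed."""
    n_quadrats, n_species = X.shape
    bd = np.empty(n_iter)
    for it in range(n_iter):
        Xr = np.zeros_like(X)
        for sp in range(n_species):
            quadrat_of_each_individual = rng.integers(0, n_quadrats, int(X[:, sp].sum()))
            Xr[:, sp] = np.bincount(quadrat_of_each_individual, minlength=n_quadrats)
        bd[it] = bd_total(Xr)
    return bd

# toy community matrix: 4 quadrats (grains) x 5 species, abundances >= 1
X = rng.poisson(3, size=(4, 5)).astype(float) + 1
bd_obs = bd_total(X)
null = null_bd(X)
bd_dev = (bd_obs - null.mean()) / null.std(ddof=1)  # standardized effect size
print(round(bd_obs, 3), round(bd_dev, 2))
```

A positive bd_dev in this sketch corresponds to more spatial turnover among quadrats than expected from random placement of individuals, which is the signal the study interprets as evidence of local community assembly mechanisms.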
Elevation and species pool As we expected, BDobs and BDexp decreased with elevation independent of the grain size and DBH cut-off size (Fig. 2). In contrast, BDdev increased with elevation, also in all grain sizes, regardless of the DBH cut-off size (Fig. 2). After controlling for the regional species pool effect, BDdev still showed an increase with elevation. Overall, the standardized local BDdev increased from lowlands to highlands, which suggests a differential effect of the underlying species assembly mechanisms in accordance with elevation. Observed (BDobs), expected (BDexp), and standardized (BDdev) patterns of variation of β-diversity along the elevational gradient. β-deviation (BDdev) is taken as (BDobs – BDexp)/SDexp. Upper panel (A, B, C): large trees (DBH ≥ 10 cm). Middle panel (D, E, F): small trees (1 cm ≤ DBH < 10 cm) and Lower panel (G, H, I): all trees (DBH ≥ 1 cm). Large trees are taken into account in an area of 1-ha. Small and all trees are taken into account in the 0.16-ha subplot. Both BDobs and BDexp decreased with grain size independent of the tree DBH cut-off size (Fig. 2). The slopes of the BDdev–elevation relationship among grain sizes were significantly different for large trees, but small and all trees did not show any significant difference among grains (Additional file 1: Figure S2). Determinants of local scale changes in tree β-diversity along the elevational gradient According to the LMMs, BDobs was significantly associated with grain size, the size of the species pool and elevation for the three size classes employed (large trees, small trees, all trees). The interaction between grain size and elevation was only significant for large trees. BDexp was significantly associated with grain size and elevation for the three DBH cut-off sizes employed, while the size of the species pool was significant for large and all trees but only marginally significant for small trees. BDdev was significantly associated with grain size for all three DBH cut-off sizes. The interaction between grain size and elevation was significant for large and small trees, but not for all trees. Finally, the marginal explained variation (R2 marginal) of the models was almost always the same as that explained by the conditional variation (R2 conditional) for observed and expected β-diversity and for BDdev in large trees. However, the marginal and conditional explained variation for BDdev for small and all trees differed, which indicates greater relative importance of the random effect for the last two tree sizes (Table 2). Model residuals showed no evidence of spatial autocorrelation (Additional file 1: Figs. S3–S5). Results from the best-fit linear mixed models for large (> 10 cm DBH), small (1 cm ≤ DBH < 10 cm) and all trees (DBH > 1 cm). BDobs: observed β-diversity. BDexp: expected β-diversity. BDdev: β-deviation (BDobs – BDexp)/SDexp. Conditional R2 takes into account both fixed and random effects to measure the goodness of fit and prediction power, while marginal R2 only has the fixed-effects part.
NS: p > 0.05; *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001. [Table 2 column headings: dependent variable, marginal R2, conditional R2; the fitted model terms and coefficients are not recoverable from the source text.] In this study, we assessed three hypotheses regarding the influence of sampling effects (size of species pool, grain size, and tree cut-off size) on the variation of local β-diversity along elevational gradients in the northern region of the Andean mountains of Colombia. Overall, we found that observed and expected β-diversity decreased with elevation, but that the standardized β-deviation followed an increasing trend with elevation after controlling for the effect of species pool size. The systematic increase in the β-deviation with elevation was independent of the grain size employed, indicating that alternative underlying community assembly mechanisms had a significant role in shaping tree β-diversity along this elevational gradient. Our finding contradicts the claim of sampling effects due to the species pool size as the key determinant of changes in β-diversity (sensu Kraft et al. 2011). Therefore, our results emphasize the importance that different community assembly mechanisms have on shaping the observed decrease in local β-diversity along elevational gradients in tropical forests (Mori et al. 2013; Tello et al. 2015), rejecting our first hypothesis. Following some studies on tree β-diversity along latitudinal gradients (De Cáceres et al. 2012; Sreekar et al. 2018), our second hypothesis predicted and confirmed a decrease in both the observed and expected tree β-diversity with the increase in grain size along an elevational gradient. Regarding the β-deviation, our findings were dependent on the DBH cut-off tree size as predicted by the third hypothesis, similar to other studies along elevational gradients (Mori et al. 2013). Mori et al. (2013) claimed that the overall β-diversity decreases in response to the DBH cut-off size, contrary to β-deviation. Therefore, for large trees (DBH ≥ 10 cm), we accept the hypothesis that changes in grain size have a significant effect on the assessment of the standardized β-deviation, and conclude that the larger the grain size, the lower the observed β-diversity, but the higher the β-deviation. In other words, especially for large trees, and along elevational gradients, the probability of detecting the influence of community assembly mechanisms increases at larger grain sizes (Fig. 2). A likely explanation for this pattern could be that large trees are those that survived self-thinning, and their spatial distribution at smaller spatial scales (e.g. 0.04-ha) is more random than at larger scales, which indicates that the degree of aggregation does not vary much at such small grain sizes. When assessing the β-deviation for the small- and all-tree size classes (DBH ≥ 1 cm), the interaction between grain size and elevation included in the LMMs was significant for small trees but not for all trees. This contrasting result, stemming from similarly nested datasets (see Table 1), hampers our capacity to draw conclusions about the effect of grain size on the local β-deviation for the small and all trees along the elevational gradient. In fact, when using an independent Analysis of Covariance (ANCOVA) to evaluate the grain size – elevation interaction term, only large trees were significant (Additional file 1: Table S1; Figure S2).
The low number of samples (four quadrats) used to assess tree β-diversity at the largest grain size may be a reason for the high variance observed when we included individuals with DBH ≥ 1 cm. In the Andean mountains, the lack of sampling schemes of plots ≥1-ha that include individuals with DBHs ≥1 cm, such as those available for tropical lowlands (i.e. Anderson-Teixeira et al. 2015), prevents us from drawing conclusions about the expected trend of the β-deviation at larger grain sizes along the elevational gradient in tropical forests. Tree community assembly mechanisms along the elevational gradient The increase of β-deviation in relation to elevation indicates that in colder regions, the extent to which species assembly mechanisms operate is higher compared to warmer areas. One important conclusion to note is that low temperatures may impose constraints on plant establishment and functioning, and play a key role in determining species distribution (Kitayama and Aiba 2002; Girardin et al. 2014). For example, changes in species composition could be associated with changes in species richness along elevational gradients in very diverse understory families, such as Rubiaceae (r = − 0.58, p = 0.02). Soil variation has been shown to be a key community assembly mechanism which shapes species sorting at local scales in some tropical forests (Russo et al. 2005; John et al. 2007). However, in this study, we did not find soil variation to be significantly associated with the local β-deviation along the elevational gradient. This result did not support the idea of an increase in plant habitat association of juveniles and shrubs (Duque et al. 2002; Comita et al. 2007; Fortunel et al. 2016). Nonetheless, our soil variation index focuses primarily on base cation content, hindering our ability to understand the likely influence of other very important soil nutrients, such as P and N, which, in tropical lowland forests (Condit et al. 2013), have been identified as key elements for tree species distribution. Furthermore, soil sampling was only carried out at the 20 m × 20 m scale, which might have obscured processes operating at smaller spatial scales. Additional studies testing the likely influence of topographic and edaphic variables, not considered here, will shed new insights on the still unanswered question about the extent to which environmental filtering locally shapes species sorting, and thus, the gradient of β-diversity at local scales along elevational gradients in tropical forests. The lack of significance of soil variation on shaping species sorting implies that other community assembly mechanisms, rather than environmental filtering, are likely driving the observed change in β-diversity at a local scale with elevation. Mirroring the latitudinal gradient (Myers et al. 2013), a systematic decrease in the importance of dispersal limitation (sensu Hubbell 2001) with elevation seems the first likely alternative assembly mechanism to explain the increase in β-deviation observed in this study. Another possible explanation for the positive deviations of β-diversity is the hypothesized increase of density dependence with the size of the species pool (Lamanna et al. 2017), which suggests that the stronger the conspecific and heterospecific negative density dependence, the higher the diversity, but the weaker the influence of environmental filtering and niche partitioning.
A decrease of species competition but an increase of species facilitation in highlands, due to the adverse conditions imposed by low temperatures on ecosystem functioning and the survival capacity of plants (Coyle et al. 2014), could also promote the increase of β-deviation with elevation observed in our study. One likely factor not assessed here that could have influenced the pattern of variation in local β-diversity is the expected biotic homogenization caused by forest disturbance (Karp et al. 2012; Solar et al. 2015). The high fragmentation and historical degradation of the tropical Andes (Armenteras et al. 2013) could have caused some of our sites to display a lower local β-diversity than under undisturbed conditions. In mountainous ecosystems, we expect the steep terrain at the highest mountain peaks to limit site access and act as a shield against human disturbances (Spracklen and Righelato 2014), thus generating a higher biotic homogenization in lowlands than in highlands. Indeed, the plots located in the smallest forest fragments (Carepa, Necoclí and El Bagre; see methods) were all located in lowlands. However, the systematic decline in the observed β-diversity (BDobs) does not support the hypothesis of biotic homogenization as a major cause of the observed pattern. For example, we did not find statistical differences (unpaired t-test) when comparing the β-deviation between the three sites located in the smallest forest fragments, which we assumed were exposed to higher disturbances, and the rest of the plots located in lowlands (< 1000 m asl). This result was a generalized outcome for any grain size for both large trees (50 m × 50 m: p = 0.79; 20 m × 20 m: p = 0.82; 10 m × 10 m: p = 0.42) and small trees (20 m × 20 m: p = 0.92; 10 m × 10 m: p = 0.78; 5 m × 5 m: p = 0.64). Methodological remarks First, for large trees, the LMMs selected species pool size (species richness) as a significant variable to explain the variation of the β-deviation with elevation (Table 2). This finding indicates that the applied null model did not, in some cases, entirely and effectively remove the influence of the size of the species pool. Understanding the effect that changes in the shape of the species abundance distribution models have on determining the β-diversity along elevational gradients is still under debate (e.g. Qian et al. 2013). However, it could be seen as an alternative way to analyze the effect of changes in community size. Second, the absence of plots ≥1-ha that include small individuals in the Andean mountains prevents the use of sampling sizes along the elevational gradient which are large enough to properly assess the grain size and cut-off size hypotheses together in this complex ecosystem. Although our study is the first attempt in the Andean mountains to test the species pool hypothesis using plots ≥0.1 ha, our results were based on very few replicates of the largest grain sizes and need to be seen as preliminary evidence of an expected pattern rather than a conclusive view. To truly understand the pattern of β-diversity variation in mountainous tropical forests, it appears we need to transition towards a new generation of larger forest sampling schemes (e.g. Garzon-Lopez et al. 2014; Duque et al. 2017; Sreekar et al. 2018) that go beyond the valuable heritage left by A.L. Gentry. Such a big challenge should be a priority in the tropical Andes, where the availability of information is much scarcer than in their Amazon lowland counterparts (Feeley 2015).
We determined that the effects of grain size, species pool size and tree cut-off size are paramount for identifying the underlying processes that shape species assembly of tree communities. Our findings suggest that grain size enlargement and the inclusion of small size classes can help improve our ability to identify the extent to which the species assembly mechanisms shape the patterns of local β-diversity change along elevational gradients in tropical ecosystems. However, in future field campaigns that aim to assess local tree β-diversity along elevational gradients in tropical forest inventories, we need to evaluate the limitation of the relatively small plot size employed so far. Overall, our study emphasizes the scale-dependent nature of β-diversity assessments. It showcases the advantage of decreasing the tree cut-off size and increasing the plot size in forest inventories (De Cáceres et al. 2012; Barton et al. 2013; Sreekar et al. 2018) to improve our understanding about the likely response of tree diversity to global change in tropical mountain ecosystems. Supplementary information accompanies this paper at https://doi.org/10.1186/s40663-020-0214-y. We would like to thank Jonathan Myers for his reading of and comments on the manuscript. Miquel De Cáceres provided code in R to evaluate the null models. Elysa Cameron kindly revised the language of the manuscript. We are indebted to two anonymous reviewers and the handling editor for all their wonderful comments that certainly helped to improve the quality of the manuscript. AD designed the study. JA M-V and SG analyzed the data. JA M-V and AD wrote the paper. All authors jointly discussed and agreed to the final version. The project "Bosques Andinos" was funded by the Helvetas Swiss development organization and developed by the Medellín Botanical Garden "Joaquín Antonio Uribe". The subject has no ethical risk. 40663_2020_214_MOESM1_ESM.docx (615 kb) Additional file 1 Forest Ecosystems. Figure S1: Graphical representation of each one of the plots. Methods: Schematic description of the analytical procedure employed to extract the soils data set. Figure S2: Results of the post hoc ANCOVA analysis using the Tukey test, with comparisons among each slope in the linear mixed models. Figure S3: Mixed linear model validation for large trees using variograms of model residuals based on the Pearson method and the geographical coordinates of the plots. Figure S4: Mixed linear model validation for small trees using variograms of model residuals based on the Pearson method and the geographical coordinates of the plots. Figure S5: Mixed linear model validation for all trees using variograms of model residuals based on the Pearson method and the geographical coordinates of the plots. Table S1: Analysis of covariance (ANCOVA). Comparison of slopes between grain size and elevation for the β-deviation and for all of the DBH cut-off sizes.
Anderson-Teixeira KJ, Davies SJ, Bennett AC, Gonzalez-Akre EB, Muller-Landau HC, Wright SJ, Abu Salim K, Almeyda Zambrano AM, Alonso A, Baltzer JL, Basset Y, Bourg NA, Broadbent EN, Brockelman WY, Bunyavejchewin S, Burslem DFRP, Butt N, Cao M, Cardenas D, Chuyong GB, Clay K, Cordell S, Dattaraja HS, Deng XB, Detto M, Du XJ, Duque A, Erikson DL, Ewango CEN, Fischer GA, Fletcher C, Foster RB, Giardina CP, Gilbert GS, Gunatilleke N, Gunatilleke S, Hao ZQ, Hargrove WW, Hart TB, Hau BCH, He FL, Hoffman FM, Howe RW, Hubbell SP, Inman-Narahari FM, Jansen PA, Jiang MX, Johnson DJ, Kanzaki M, Kassim AR, Kenfack D, Kibet S, Kinnaird MF, Korte L, Kral K, Kumar J, Larson AJ, Li YD, Li XK, Liu SR, Lum SKY, Lutz JA, Ma KP, Maddalena DM, Makana JR, Malhi Y, Marthews T, Serudin RM, McMahon SM, McShea WJ, Memiaghe HR, Mi XC, Mizuno T, Morecroft M, Myers JA, Novotny V, de Oliveira AA, Ong PS, Orwig DA, Ostertag R, den Ouden J, Parker GG, Phillips RP, Sack L, Sainge MN, Sang WG, Sri-ngernyuang K, Sukumar R, Sun IF, Sungpalee W, Suresh HS, Tan S, Thomas SC, Thomas DW, Thompson J, Turner BL, Uriarte M, Valencia R, Vallejo MI, Vicentini A, Vrska T, Wang XH, Wang XG, Weiblen G, Wolf A, Xu H, Yap S, Zimmerman J (2015) CTFS-ForestGEO: a worldwide network monitoring forests in an era of global change. Glob Chang Biol 21:528–549. https://doi.org/10.1111/gcb.12712 CrossRefPubMedGoogle Scholar Armenteras D, Cabrera E, Rodrı N (2013) National and regional determinants of tropical deforestation in Colombia. Reg Environ Chang 13:1181–1193. https://doi.org/10.1007/s10113-013-0433-7 CrossRefGoogle Scholar Barton PS, Cunningham SA, Manning AD, Gibb H, Lindenmayer DB, Didham RK (2013) The spatial scaling of beta diversity. Glob Ecol Biogeogr 22:639–647. https://doi.org/10.1111/geb.12031 CrossRefGoogle Scholar Cabrera E, Galindo G, González J, Vergara L, Forero C, Cubillos A, Espejo J, Rubiano J, Corredor X, Hurtado L, Diana V, Duque A (2019) Colombian Forest monitoring system : assessing deforestation in an environmental complex country. In: Suratman MN, Latif ZA (eds) Deforestation around the world. IntechOpen, pp 1–18. https://doi.org/10.5772/intechopen.86143.Google Scholar Chase JM (2010) Stochastic community assembly causes higher biodiversity in more productive environments. Science 328:1388–1391. https://doi.org/10.1126/science.1187820 CrossRefPubMedGoogle Scholar Chase JM, Knight TM (2013) Scale-dependent effect sizes of ecological drivers on biodiversity: why standardised sampling is not enough. Ecol Lett 16:17–26. https://doi.org/10.1111/ele.12112 CrossRefPubMedGoogle Scholar Chase JM, Myers JA (2011) Disentangling the importance of ecological niches from stochastic processes across scales. Philos Trans R Soc B Biol Sci 366:2351–2363. https://doi.org/10.1098/rstb.2011.0063 CrossRefGoogle Scholar Comita LS, Condit R, Hubbell SP (2007) Developmental changes in habitat associations of tropical trees. J Ecol 95:482–492. https://doi.org/10.1111/j.1365-2745.2007.01229.x CrossRefGoogle Scholar Condit R, Engelbrecht BMJ, Pino D, Perez R, Turner BL (2013) Species distributions in response to individual soil nutrients and seasonal drought across a community of tropical trees. Proc Natl Acad Sci 110:5064–5068. https://doi.org/10.1073/pnas.1218042110 CrossRefPubMedGoogle Scholar Condit R, Pérez R, Lao S, Aguilar S, Somoza A (2005) Geographic ranges and b-diversity: discovering how many tree species there are where. 
In: Plant Diversity and Complexity Patterns: Local, Regional, and Global Dimensions, pp 57–71Google Scholar Condit R, Pitman N, Leigh EG, Chave J, Terborgh J, Foster RB, Nunez P, Aguilar S, Valencia R, Villa G, Muller-Landau HC, Losos E, Hubbell SP (2002) Beta-diversity in tropical forest trees. Science 295:666–669. https://doi.org/10.1126/science.1066854 CrossRefPubMedGoogle Scholar Core Team R (2016) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna https://www.R-project.org/. Accessed 20 Apr 2019Google Scholar Coyle JR, Halliday FW, Lopez BE, Palmquist KA, Wilfahrt PA, Hurlbert AH (2014) Using trait and phylogenetic diversity to evaluate the generality of the stress-dominance hypothesis in eastern north American tree communities. Ecography 37:814–826. https://doi.org/10.1111/ecog.00473 CrossRefGoogle Scholar Crawley MJ (2007) The R book. Wiley, ChichesterCrossRefGoogle Scholar Crawley MJ, Harral J (2002) Scale dependence in plant biodiversity. Science 291:864–868. https://doi.org/10.1126/science.291.5505.864 CrossRefGoogle Scholar De Cáceres M, Legendre P, Valencia R, Cao M, Chang LW, Chuyong G, Condit R, Hao ZQ, Hsieh CF, Hubbell S, Kenfack D, Ma KP, Mi XC, Noor MNS, Kassim AR, Ren HB, Su SH, Sun IF, Thomas D, Ye WH, He FL (2012) The variation of tree beta diversity across a global network of forest plots. Glob Ecol Biogeogr 21:1191–1202. https://doi.org/10.1111/j.1466-8238.2012.00770.x CrossRefGoogle Scholar Duque A, Feeley KJ, Cabrera E, Idarraga A (2014) The dangers of carbon-centric conservation for biodiversity : a case study in the Andes. Trop Conserv Sci 7:178–191CrossRefGoogle Scholar Duque A, Muller-Landau HC, Valencia R, Cardenas D, Davies S, de Oliveria A, Perez AJ, Romero-Saltos H, Vicentini A (2017) Insights into regional patterns of Amazonian forest structure, diversity, and dominance from three large terra-firme forest dynamics plots. Biodivers Conserv 26:669–686. https://doi.org/10.1007/s10531-016-1265-9 CrossRefGoogle Scholar Duque A, Sánchez M, Cavelier J, Duivenvoorden JF (2002) Different floristic patterns of woody understorey and canopy plants in Colombian Amazonia. J Trop Ecol 18:499–525. https://doi.org/10.1017/S0266467402002341 CrossRefGoogle Scholar Duque A, Stevenson PR, Feeley KJ (2015) Thermophilization of adult and juvenile tree communities in the northern tropical Andes. Proc Natl Acad Sci U S A 112:1–6. https://doi.org/10.1073/pnas.1506570112 CrossRefGoogle Scholar Feeley K (2015) Are we filling the data void? An assessment of the amount and extent of plant collection records and census data available for tropical South America. PLoS One 10:1–17. https://doi.org/10.1371/journal.pone.0125629 CrossRefGoogle Scholar Fortunel C, Timothy Paine CE, Fine PVA, Mesones I, Goret JY, Burban B, Cazal J, Baraloto C (2016) There's no place like home : seedling mortality contributes to the habitat specialisation of tree species across Amazonia. Ecol Lett 19:1256–1266. https://doi.org/10.1111/ele.12661 CrossRefPubMedGoogle Scholar Fukami T (2015) Historical contingency in community assembly: integrating niches, species pools, and priority effects. Annu Rev Ecol Evol Syst 46:1–23. https://doi.org/10.1146/annurev-ecolsys-110411-160340 CrossRefGoogle Scholar Garzon-Lopez CX, Jansen PA, Bohlman SA, Ordonez A, Olff H (2014) Effects of sampling scale on patterns of habitat association in tropical trees. J Veg Sci 25:349–362. 
https://doi.org/10.1111/jvs.12090 CrossRefGoogle Scholar Girardin CAJ, Farfan-Rios W, Garcia K, Feeley KJ, Jorgensen PM, Murakami AA, Perez LC, Seidel R, Paniagua N, Claros AFF, Maldonado C, Silman M, Salinas N, Reynel C, Neill DA, Serrano M, Caballero CJ, Cuadros MDL, Macia MJ, Killeen TJ, Malhi Y (2014) Spatial patterns of above-ground structure, biomass and composition in a network of six Andean elevation transects. Plant Ecol Divers 7:161–171. https://doi.org/10.1080/17550874.2013.820806 CrossRefGoogle Scholar Hijmans RJ (2016) Raster: geographic data analysis and modeling. R package version 2:5–8 https://CRAN.R-project.org/package=raster. Accessed 20 Apr 2019Google Scholar Hubbell SP (2001) A unified neutral theory of biodiversity and biogeography. Princeton University Press, Princeton, New Jersey, USAGoogle Scholar Jankowski JE, Ciecka AL, Meyer NY, Rabenold KN (2009) Beta diversity along environmental gradients: implications of habitat specialization in tropical montane landscapes. J Anim Ecol 78:315–327. https://doi.org/10.1111/j.1365-2656.2008.01487.x CrossRefPubMedGoogle Scholar Janzen DH (1967) Why mountain passes are higher in the tropics. Am Nat 101:233–249CrossRefGoogle Scholar John R, Dalling JW, Harms KE, Yavitt JB, Stallard RF, Mirabello M, Hubbell SP, Valencia R, Navarrete H, Vallejo M, Foster RB (2007) Soil nutrients influence spatial distributions of tropical tree species. Proc Natl Acad Sci U S A 104:864–869CrossRefGoogle Scholar Jones MM, Tuomisto H, Borcard D, Legendre P, Clark DB, Olivas PC (2008) Explaining variation in tropical plant community composition: influence of environmental and spatial data quality. Oecologia 155:593–604. https://doi.org/10.1007/s00442-007-0923-8 CrossRefPubMedGoogle Scholar Karp DS, Rominger AJ, Zook J, Ranganathan J, Ehrlich PR, Daily GC (2012) Intensive agriculture erodes b -diversity at large scales. Ecol Lett 15:963–970. https://doi.org/10.1111/j.1461-0248.2012.01815.x CrossRefPubMedGoogle Scholar Kitayama K, Aiba SI (2002) Ecosystem structure and productivity of tropical rain forests along altitudinal gradients with contrasting soil phosphorus pools on mount Kinabalu, Borneo. J Ecol 90:37–51. https://doi.org/10.1046/j.0022-0477.2001.00634.x CrossRefGoogle Scholar Kraft NJ, Comita LS, Chase JM, Sanders NJ, Swenson NG, Crist TO, Stegen JC, Vellend M, Boyle B, Anderson MJ, Cornell HV, Davies KF, Freestone AL, Inouye BD, Harrison SP, Myers JA (2011) Disentangling the drivers of beta diversity along latidunial and elevational gradients. Science (80- ) 333:1755–1758. https://doi.org/10.1007/s13398-014-0173-7.2 CrossRefGoogle Scholar Laiolo P, Pato J, Obeso JR (2018) Ecological and evolutionary drivers of the elevational gradient of diversity. Ecol Lett 21:1022–1032. https://doi.org/10.1111/ele.12967 CrossRefPubMedGoogle Scholar Lamanna JA, Mangan SA, Alonso A, Bourg NA, Brockelman WY, Bunyavejchewin S, Chang LW, Chiang JM, Chuyong GB, Clay K, Condit R, Cordell S, Davies SJ, Furniss TJ, Giardina CP, Gunatilleke IAUN, Gunatilleke CVS, He FL, Howe RW, Hubbell SP, Hsieh CF, Inman-Narahari FM, Janik D, Johnson DJ, Kenfack D, Korte L, Kral K, Larson AJ, Lutz JA, McMahon SM, McShea WJ, Memiaghe HR, Nathalang A, Novotny V, Ong PS, Orwig DA, Ostertag R, Parker GG, Phillips RP, Sack L, Sun IF, Tello JS, Thomas DW, Turner BL, Diaz DMV, Vrska T, Weiblen GD, Wolf A, Yap S, Myers JA (2017) Plant diversity increases with the strength of negative density dependence at the global scale. 
Science 356:1389–1392CrossRefGoogle Scholar Legendre P (2007) Studying beta diversity: ecological variation partitioning by multiple regression and canonical analysis. J Plant Ecol 1:3–8. https://doi.org/10.1093/jpe/rtm001 CrossRefGoogle Scholar Legendre P, Gallagher ED (2001) Ecologically meaningful transformations for ordination of species data. Oecologia 129:271–280. https://doi.org/10.1007/s004420100716 CrossRefPubMedGoogle Scholar Lomolino M (2001) Elevation gradients of species-density : historical and prospective views. Glob Ecol Biogeogr 10:3–13. https://doi.org/10.1046/j.1466-822x.2001.00229.x CrossRefGoogle Scholar Mori AS, Shiono T, Koide D, Kitagawa R, Ota AT, Mizumachi E (2013) Community assembly processes shape an altitudinal gradient of forest biodiversity. Glob Ecol Biogeogr 22:878–888. https://doi.org/10.1111/geb.12058 CrossRefGoogle Scholar Myers JA, Chase JM, Jiménez I, Jorgensen PM, Araujo-Murakami A, Paniagua-Zambrana N, Seidel R (2013) Beta-diversity in temperate and tropical forests reflects dissimilar mechanisms of community assembly. Ecol Lett 16:151–157. https://doi.org/10.1111/ele.12021 CrossRefPubMedGoogle Scholar Powell KI, Chase JM, Knight TM (2011) A synthesis of plant invasion effects on biodiversity across spatial scales. Am J Bot 98:539–548. https://doi.org/10.3732/ajb.1000402 CrossRefPubMedGoogle Scholar Qian H, Chen S, Mao L, Ouyang Z (2013) Drivers of β-diversity along latitudinal gradients revisited. Glob Ecol Biogeogr 22:659–670. https://doi.org/10.1111/geb.12020 CrossRefGoogle Scholar Qian H, Ricklefs RE (2007) A latitudinal gradient in large-scale beta diversity for vascular plants in North America. Ecol Lett 10:737–744. https://doi.org/10.1111/j.1461-0248.2007.01066.x CrossRefPubMedGoogle Scholar Rahbek C (2005) The role of spatial scale and the perception of large-scale species-richness patterns. Ecol Lett 8:224–239. https://doi.org/10.1111/j.1461-0248.2004.00701.x CrossRefGoogle Scholar Ribeiro PJ Jr, Diggle PJ (2001) geoR: a package for geostatistical analysis. R-NEWS 1:15–18Google Scholar Ricklefs RE (1987) Community diversity: relative roles of local and regional processes. Science 235:167–171CrossRefGoogle Scholar Russo SE, Davies SJ, King DA, Tan S (2005) Soil-related performance variation and distributions of tree species in a Bornean rain forest. J Ecol 93:879–889. https://doi.org/10.1111/j.1365-2745.2005.01030.x CrossRefGoogle Scholar Solar RRD, Barlow J, Ferreira J, Berenguer E, Lees AC, Thomson JR, Louzada J, Maues M, Moura NG, Oliveira VHF, Chaul JCM, Schoereder JH, Vieira ICG, Nally R, Gardner TA (2015) How pervasive is biotic homogenization in human-modified tropical forest landscapes ? Ecol Lett 18:1108–1118. https://doi.org/10.1111/ele.12494 CrossRefPubMedGoogle Scholar Spracklen DV, Righelato R (2014) Tropical montane forests are a larger than expected global carbon store. Biogeosciences 11:2741–2754. https://doi.org/10.5194/bg-11-2741-2014 CrossRefGoogle Scholar Sreekar R, Katabuchi M, Nakamura A, Corlett RT, Slik JWF, Fletcher C, He FL, Weiblen GD, Shen GC, Xu H, Sun IF, Cao K, Ma KP, Chang LW, Cao M, Jiang MX, Gunatilleke IAUN, Ong P, Yap S, Gunatilleke CVS, Novotny V, Brockelman WY, Xiang W, Mi XC, Li XK, Wang XH, Qiao XJ, Li YD, Tan S, Condit R, Harrison RD, Koh LP (2018) Spatial scale changes the relationship between beta diversity, species richness and latitude. R Soc Open Sci 5:1–10. 
https://doi.org/10.1098/rsos.181168 CrossRefGoogle Scholar Stier AC, Bolker BM, Osenberg CW (2016) Using rarefaction to isolate the effects of patch size and sampling effort on beta diversity. Ecosphere 7:1–15. https://doi.org/10.1002/ecs2.1612 CrossRefGoogle Scholar Tan LZ, Fan CY, Zhang CY, von Gadow K, Fan XH (2017) How beta diversity and the underlying causes vary with sampling scales in the Changbai mountain forests. Ecol Evol:1–8. https://doi.org/10.1002/ece3.3493 CrossRefGoogle Scholar Tello JS, Myers JA, Macía MJ, Fuentes AF, Cayola L, Arellano G, Loza MI, Torrez V, Cornejo M, Miranda TB, Jorgensen PM (2015) Elevational gradients in β-diversity reflect variation in the strength of local community assembly mechanisms across spatial scales. PLoS One 10:1–17. https://doi.org/10.1371/journal.pone.0121458 CrossRefGoogle Scholar Whittaker RH (1960) Vegetation of the Sisiyou Mountains, Oregon and California. Ecol Monogr 30:279–338CrossRefGoogle Scholar Whittaker RJ, Willis KJ, Field R (2001) Scale and species richness: towards a general, hierarchical theory of species diversity. J Biogeogr 28:453–470. https://doi.org/10.1046/j.1365-2699.2001.00563.x CrossRefGoogle Scholar Zuur AF, Ieno EN, Walker NJ, Saveliev AA, Smith GM (2009) Mixed effects models and extension in ecology with R. Springer, BerlinCrossRefGoogle Scholar © The Author(s). 2020 Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 1.Departamento de Ciencias ForestalesUniversidad Nacional de Colombia sede MedellínMedellínColombia 2.Département des Sciences BiologiquesUniversité du Québec à MontréalMontrealCanada Martínez-Villa, J.A., González-Caro, S. & Duque, Á. For. Ecosyst. (2020) 7: 2. https://doi.org/10.1186/s40663-020-0214-y Received 07 May 2019 Accepted 05 January 2020 DOI https://doi.org/10.1186/s40663-020-0214-y Publisher Name Springer Singapore
Faces, Vertices and Edges in a Triangular Pyramid
A triangular pyramid is a three-dimensional figure in which all the faces are triangles. These pyramids are characterized by having a triangular base and three lateral triangular faces. Triangular pyramids have 4 faces, 6 edges, and 4 vertices. Three faces of the pyramid meet at each vertex. If all the faces are equilateral triangles, the figure is called a tetrahedron. Here, we will learn more details about the faces, vertices, and edges of triangular pyramids. We will use diagrams to illustrate the concepts.
Faces of a triangular pyramid
The faces of the triangular pyramid are the flat surfaces that are bounded by the vertices and edges. The faces shape the three-dimensional figure. All pyramids are made up of a base and lateral triangular faces. In the case of triangular pyramids, we have a triangular face at the base and three lateral triangular faces. This means that we have four triangular faces in total. The lateral triangular faces meet at a single top point called the apex. The surface area of the pyramid is found by adding the areas of all the faces of the figure. We know that the area of any triangle is equal to one-half the length of its base times the height of the triangle. Also, if the base is an equilateral triangle, the bases and the heights of the lateral faces will be equal, so the formula for the surface area of the pyramid is $\frac{1}{2}ba+\frac{3}{2}bh$, where $b$ is the common base length of the triangles, $a$ is the height of the base triangle, and $h$ is the height of each lateral face. A short worked example of this formula is given at the end of this page.
Vertices of a triangular pyramid
The vertices of the triangular pyramid are the points where three edges meet. In general, vertices are defined as the points where two or more line segments meet. The vertices can also be considered as the points where three faces of the pyramid meet. In total, triangular pyramids have 4 vertices.
Edges of a triangular pyramid
The edges of a triangular pyramid are the line segments that join two vertices. The edges are located at the boundaries of the pyramid. The edges are also defined as the line segments where two triangular faces of the pyramid meet. In total, a triangular pyramid has 6 edges. In the diagram, we can see that each face has three edges and each edge is shared by two triangular faces.
Interested in learning more about triangular pyramids? Take a look at these pages: Volume of a Triangular Pyramid – Formulas and Examples; Surface Area of a Triangular Pyramid – Formulas and Examples; Volume of a Rectangular Pyramid – Formulas and Examples; Surface Area of a Rectangular Pyramid – Formulas and Examples; Faces, Vertices and Edges in a Rectangular Pyramid
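As promised above, here is a quick worked example of the surface-area formula. The numbers are chosen only for illustration and are not from the original page: take a pyramid whose equilateral base has side $b = 6$ cm, so that the height of the base triangle is $a = \frac{\sqrt{3}}{2}(6) \approx 5.2$ cm, and suppose each lateral face has height $h = 8$ cm. Then

$$ \frac{1}{2}ba+\frac{3}{2}bh=\frac{1}{2}(6)(5.2)+\frac{3}{2}(6)(8)\approx 15.6+72=87.6\ \text{cm}^2. $$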
Repeat induced abortion and associated factors among reproductive age women who seek abortion services in Debre Berhan town health institutions, Central Ethiopia, 2019 Geremew Kindie Behulu1, Endegena Abebe Fenta2 & Getie Lake Aynalem3 To assess the magnitude and associated factors of repeat induced abortion among women aged from 15 to 49 who seek abortion care services in the health institutions of Debre Berhan town, Central Ethiopia, 2019. This study shows that the prevalence of repeat induced abortion among 355 respondents was found to be 20.3%. Having more than one partner in the preceding 12 months (AOR = 7.3, 95% CI 3.21, 16.46), age at first sexual intercourse below 18 years (AOR = 6, 95% CI 2.54, 13.95) and perceiving the abortion procedure as not painful (AOR = 7.7, 95% CI 2.9, 20.6) were the variables positively associated with repeat induced abortion among women who sought abortion services. Repeat induced abortion is defined as reporting at least one previous induced abortion [1]. Abortion is a sensitive and controversial issue with religious, moral, cultural, and political dimensions. It is also a public health issue in many parts of the world. More than one-fourth of the world's people live in countries where the procedure is prohibited or allowed only to preserve the woman's life. However, irrespective of legal status, abortions still occur, and nearly half of them are performed by an unskilled practitioner or in less than sanitary conditions, or both [2]. The World Health Organization (WHO) estimates that worldwide 210 million women get pregnant each year and that nearly two-thirds of them, or roughly 130 million, deliver live babies. The remaining one-third of pregnancies end in miscarriage, stillbirth, or induced abortion [2]. Statistical reports show that around 13% of maternal deaths worldwide are attributable to unsafe abortion [3]. Induced abortion is frequently a consequence of inadequate contraception, and the reasons for not using contraception often originate from a lack of correct information [4]. Repeat abortion, or having more than one pregnancy termination, is bound in a vicious cycle with repeat unintended pregnancy [5]. Women who have had a recent abortion are more likely to discontinue contraceptive use during a 1-year follow-up period, and both current and previous abortion clients are more likely to have a repeat unintended pregnancy during that period [6]. The number of women seeking induced abortion, and particularly of those seeking a repeat induced abortion, is an essential indicator of the frequency with which women have unintended pregnancies, and it can point to gaps in contraceptive services and effective contraceptive use [7]. Despite the high incidence of repeat abortions and their consequences, research on them is scarce in low- and middle-income countries, particularly in Ethiopia. Abortion is currently legal in Ethiopia in cases of rape, incest, or fetal impairment. Also, a woman can legally terminate a pregnancy if her life or her child's life is at risk, or if continuing the pregnancy or giving birth threatens her life. A woman may also terminate a pregnancy if she is unable to bring up the child, owing to her status as a minor or to a physical or mental infirmity. These provisions have been in place since 2005, and contraceptive coverage reached 36% in 2016 [8]. The abortion rate among childbearing age women was about 23 per 1000 women aged 15–44 in 2008 [9].
The magnitude of repeat abortion in Ethiopia is not known; this study seeks to help fill that gap. The overall prevalence of unintended pregnancy in Ethiopia is about 42%, and among an estimated 3.27 million annual pregnancies, half a million end in abortion [10]. To the best of our knowledge, no studies have examined the magnitude and associated factors of repeat induced abortion in the Amhara region, so the aim of this study was to determine the prevalence and associated factors of repeat induced abortion among reproductive-age women at the health institutions of Debre Berhan town, Central Ethiopia. Study design and setting An institution-based cross-sectional study was conducted among reproductive-age women at the health institutions in Debre Berhan town. The town is located in the North Shewa Zone of the Amhara region, about 120 km from Addis Ababa, the capital city of Ethiopia. Based on the 2007 national census carried out by the Central Statistical Agency of Ethiopia (CSA), the town has 65,231 residents, of whom 31,668 are men and 33,563 are women. Most of the population practices Ethiopian Orthodox Christianity, with 94.12% reporting it as their religion, while 3.32% reported being Muslim and 2.15% Protestant [11]. There are two governmental health institutions and one private health institution in the town. Sample size and sampling procedure The sample size was calculated using the single population proportion formula with the following assumptions: p = 0.16 from a previous study [12], a 95% confidence level, and a 4% margin of error. Using the formula $n = \frac{(Z_{\alpha/2})^{2}\,p(1-p)}{w^{2}}$, where $Z_{\alpha/2} = 1.96$ and $w$ is the margin of error, $$n = \frac{(1.96)^{2} \times 0.16 \times 0.84}{(0.04)^{2}} = 322.7.$$ After adding a 10% non-response rate, the total required sample size was 355. A systematic random sampling technique was used to select participants. The case flow of each health institution during the preceding month was determined: Debre Berhan Hospital (50 cases per month), the Marie Stopes private clinic (115 cases per month), and Debre Berhan health center (45 cases per month). The total client population over the data collection period (from September 12, 2018 to February 12, 2019) was 1050. The sampling interval K was therefore 3, and the starting number, selected by lottery, was 2. Using proportional allocation according to case flow, the numbers of participants systematically selected were 85, 194, and 76 from Debre Berhan Hospital, the Marie Stopes private clinic, and Debre Berhan health center, respectively. Operational definition Induced abortion: intentional termination of pregnancy by any means or by any person, as opposed to spontaneous abortion (miscarriage) (WHO). Repeat induced abortion: having more than one induced pregnancy termination, as ascertained at the health care facility.
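As a quick check of the arithmetic above, the following is a minimal Python sketch of the sample size calculation, the sampling interval, and the proportional allocation by case flow. The function and variable names are illustrative and are not part of the original study; the numerical inputs (p = 0.16, 4% margin of error, 10% non-response, monthly case flows, 1050 expected clients) come from the text.

```python
# Minimal sketch of the single population proportion sample size formula
# n = (Z_{alpha/2})^2 * p * (1 - p) / w^2, plus the systematic sampling
# interval and proportional allocation described above.
import math

def single_proportion_sample_size(p: float, margin: float, z: float = 1.96) -> float:
    return (z ** 2) * p * (1 - p) / margin ** 2

n0 = single_proportion_sample_size(p=0.16, margin=0.04)   # ~322.7
n = math.ceil(n0 * 1.10)                                   # add 10% non-response -> 355

monthly_cases = {"Debre Berhan Hospital": 50,
                 "Marie Stopes clinic": 115,
                 "Debre Berhan health center": 45}
total_expected = 1050                                      # clients over the study period
k = round(total_expected / n)                              # sampling interval ~3

# proportional allocation by monthly case flow
total_monthly = sum(monthly_cases.values())
allocation = {site: round(n * cases / total_monthly)
              for site, cases in monthly_cases.items()}

print(round(n0, 1), n, k, allocation)  # 322.7, 355, 3, {85, 194, 76}
```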
Data collection instrument and process Data were collected through semi-structured, pretested, face-to-face interviews conducted at exit in a private room at the facility. The questionnaire was adapted from the literature. The tool was prepared in English, translated into the local language, Amharic, and then translated back into English to check consistency. Six female diploma-level midwives collected the data, and two female degree-holding midwives served as supervisors. Data quality control A semi-structured data collection tool was used, and its clarity was tested before final use. The pretest was conducted on 5% of the sample size in another health institution outside the study area. A one-day training was given to data collectors and supervisors on the objectives of the study, the data collection method, and the significance of the study. During data collection, each data collector was supervised; any difficulties were addressed and necessary corrections were provided. Data were coded and entered into a computer using Epi Info version 7.2.0.1, checked for completeness, and transferred to SPSS version 23 for analysis. Univariate analyses, including means and frequencies, were performed. Crudely associated variables were identified with a bivariate logistic regression model, and these variables were then fitted in a multivariable logistic regression. Associations between the dependent and explanatory variables were assessed using adjusted odds ratios (AOR) with 95% CIs, and a p-value ≤ 0.05 was considered statistically significant. Socio-demographic characteristics Of the 355 selected participants, 345 completed the questionnaire and ten refused to participate, giving a response rate of 97.18%. The median age of the study participants was 27 years, with an interquartile range of 6; the maximum age was 40 years and the minimum was 17 years. Three hundred eight (89.3%) were urban residents. One hundred eighty-five (53.6%) had a college diploma or above. The majority of the respondents, 295 (85.5%), were followers of Orthodox Christianity, and two hundred sixty-four (76.5%) were single (Table 1). Table 1 Socio-demographic characteristics of the participants who sought abortion services in health institutions of Debre Berhan town, from Sep. 12/2018 to Feb. 12/2019 (n = 345) Reproductive health characteristics Of the total participants, three hundred thirty-nine (98.3%) responded that their last pregnancy was not wanted, and seventy (20.3%) reported that they had had a repeat induced abortion. Eighty-five participants (24.6%) had their first sexual intercourse before the age of eighteen. The main reasons given for repeat induced abortion were economic problems (41%), being a student (27%), rape (16%), and separation from the husband (16%). Ninety-six respondents (27.8%) had more than one sexual partner. Two hundred ninety-one (84.3%) respondents had never used family planning, and three hundred nineteen (92.5%) planned to use family planning (Table 2). Table 2 Reproductive health characteristics of reproductive age women who sought abortion services in Debre Berhan town health institutions, from Sep. 12/2018 to Feb. 12/2019 (n = 345)
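The analysis described above, bivariate screening followed by a multivariable logistic regression whose exponentiated coefficients give adjusted odds ratios with 95% CIs, could be reproduced along the lines of the following Python sketch. The DataFrame and column names are hypothetical placeholders; the study itself used SPSS version 23, so this only illustrates the general method, not the authors' actual analysis code.

```python
# Minimal sketch: multivariable logistic regression reported as adjusted
# odds ratios (AOR) with 95% CIs. The data frame `df`, the outcome column,
# and the predictor columns are hypothetical, binary-coded placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def adjusted_odds_ratios(df: pd.DataFrame, outcome: str, predictors: list) -> pd.DataFrame:
    X = sm.add_constant(df[predictors].astype(float))          # add intercept
    model = sm.Logit(df[outcome].astype(float), X).fit(disp=False)
    ci = model.conf_int()                                       # columns 0 (low) and 1 (high)
    table = pd.DataFrame({
        "AOR": np.exp(model.params),                            # exponentiated coefficients
        "CI_low": np.exp(ci[0]),
        "CI_high": np.exp(ci[1]),
        "p_value": model.pvalues,
    })
    return table.drop(index="const", errors="ignore")           # intercept AOR is not meaningful

# Example usage with hypothetical variable names:
# result = adjusted_odds_ratios(df, outcome="repeat_abortion",
#                               predictors=["multiple_partners",
#                                           "sexual_debut_under_18",
#                                           "procedure_not_painful"])
```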
Associated factors of repeat induced abortion Variables crudely associated with repeat induced abortion were age, place of residence, marital status, income, number of sexual partners, age at first sexual intercourse, occupation, ever use of family planning, and perceiving the procedure as painful. The variables independently and positively associated in the adjusted analysis were having more than one sexual partner in the preceding 12 months, first sexual intercourse before eighteen years of age, and perceiving the previous abortion procedure as not painful (Table 3). Table 3 Bivariate and multivariable analysis of factors associated with repeat induced abortion This institution-based cross-sectional study assessed repeat induced abortion and its associated factors among reproductive-age women who sought abortion services in Debre Berhan town, North Shewa, Amhara region, Central Ethiopia, 2019. The study revealed that the prevalence of repeat induced abortion was 20.3% (95% CI 16.4, 24.3). This finding is in line with a study from Kenya, which reported 16% [12]. On the other hand, it is lower than a study from Italy, which reported 60.6% [1]. The difference could be explained by differences in the background of the study participants, variation in the study setting, cultural differences between countries, and higher family planning coverage in developed countries. The study revealed that women who had more than one sexual partner in the preceding 12 months were about seven times more likely to have a repeat induced abortion than those with a single sexual partner (AOR = 7.27, 95% CI 3.21, 16.46), which is consistent with studies from Northern Ethiopia [13] and Cambodia [14]. A possible explanation for this tendency is the increased probability of condom failure with the increased frequency of sexual intercourse. The Government should continue to encourage women to reduce their number of sexual partners, also as a means of reducing HIV and STI transmission. In addition, sexual debut before 18 years of age was a predictor (AOR = 5.96, 95% CI 2.54, 13.95), similar to the study from Northern Ethiopia [13]. This might be because women who begin sexual intercourse early are exposed to more sexual partners and may fail to use contraceptives. The Government should work to help adolescents delay sexual debut and to encourage family planning, including empowering communities, and especially women, to discuss sexuality freely with young girls at home. The other positive predictor was perceiving the abortion procedure as not painful: these women were about eight times more likely to have a repeat induced abortion (AOR = 7.72, 95% CI 2.90, 20.58), consistent with the study from Northern Ethiopia [13]. This might be because such women consider the procedure painless and may regard abortion as a family planning method. The study may be affected by the drawbacks of a cross-sectional design, and comparing its findings with community-based studies may be difficult since it is institution-based. AOR: adjusted odds ratio CSA: Central Statistical Agency of Ethiopia FMOH: Federal Ministry of Health MVA: manual vacuum aspiration Citernesi A, Dubini V, Uglietti A, Ricci E, Cipriani S, Parazzini F. Intimate partner violence and repeat induced abortion in Italy: a cross sectional study. Eur J Contracept Reprod Health Care. 2015;20(5):344–9. https://doi.org/10.3109/13625187.2014.992516. Bankole A, Hussain R, et al. Unintended pregnancy and induced abortion in Burkina Faso: causes and consequences.
Int J Gynaecol Obstet. 2015;14(1):1–8. WHO. Unsafe abortion: global and regional estimates of the incidence of unsafe abortion and associated mortality in 2008. 5th ed. WHO; 2014. p. 1–67. http://whqlibdoc.who.int/publications/2011/9789241501118_eng.pdf. Mosher WD, Jones J. Use of contraception in the United States: 1982–2008. Vital Health Stat 23. 2010. Curtis S, Evens E, Sambisa W. Contraceptive discontinuation and unintended pregnancy: an imperfect relationship. Int Perspect Sex Reprod Health. 2011;37(2):58–66. Johnson BR, Ndhlovu S, Farr SL, Chipato T. Reducing unplanned pregnancy and abortion in Zimbabwe through postabortion contraception. Popul Counc. 2014;33(2):195–202. Sedgh G, Singh S, Henshaw SK, Bankole A. Legal abortion worldwide in 2008: levels and recent trends. Int Perspect Sex Reprod Health. 2011;37(2):84–94. Federal Democratic Republic of Ethiopia. Demographic and Health Survey: Key Indicators Report. Chhabra S. Unwanted pregnancies, unwanted births, consequences and unmet needs. World J Obst Gynecol. 2012;3:118. Family Health Department. Technical and Procedural Guidelines for Safe Abortion Services in Ethiopia. 2006;(June):50. http://phe-ethiopia.org/resadmin/uploads/attachment-161-safe_abortion_guideline_English_printed_version.pdf. Wikipedia. Debre Berhan. 2019. Maina BW, Mutua MM, Sidze EM. Factors associated with repeat induced abortion in Kenya. BMC Public Health. 2015;15(1):1–8. https://doi.org/10.1186/s12889-015-2400-3. Alemayehu M, Yebyo H, Medhanyie AA, Bayray A, Fantahun M. Determinants of repeated abortion among women of reproductive age attending health facilities in Northern Ethiopia: a case–control study. BMC Public Health. 2017;17:1–8. Yi S, Tuot S, Chhoun P, Pal K, Tith K, Brody C. Factors associated with induced abortion among female entertainment workers: a cross-sectional study in Cambodia. BMJ Open. 2015;5:1–8. We would like to thank Debre Berhan Health Science College for approving the topic and providing ethical clearance for this study. We also extend our thanks to the study participants for their time and willingness to participate, and to the data collectors and supervisors for their commitment. Our appreciation also goes to the Debre Berhan health office for their cooperation and provision of supportive letters. The article was not funded. Department of Midwifery, Debre Berhan Health Sciences College, Debre Berhan, P.O. Box 37, Amhara region, Ethiopia Geremew Kindie Behulu Department of Basic Health Science, Debre-Berhan Health Science College, Debre Berhan, Ethiopia Endegena Abebe Fenta School of Midwifery, College of Medicine and Health Science, University of Gondar, Gondar, Ethiopia Getie Lake Aynalem GKB, EAF, and GLA contributed equally to proposal development, the data collection process, data management and analysis, and write-up. All authors read and approved the final manuscript. Correspondence to Geremew Kindie Behulu. Ethical clearance was obtained from Debre Berhan Health Science College. A formal letter was submitted to the Debre Berhan health bureau to obtain their cooperation; the support letter obtained from this office was submitted to Debre Berhan Health Science College to obtain the clearance. Written consent was obtained from the study participants prior to data collection. Moreover, all study participants were informed verbally about the purpose and benefit of the study, along with their right to refuse.
Furthermore, the study participants were assured of the confidentiality of the information they provided, which was collected without their names or any other identifiers referring to them. Consent for publication is not applicable because there are no individually detailed data, videos, or images. Behulu, G.K., Fenta, E.A. & Aynalem, G.L. Repeat induced abortion and associated factors among reproductive age women who seek abortion services in Debre Berhan town health institutions, Central Ethiopia, 2019. BMC Res Notes 12, 499 (2019). https://doi.org/10.1186/s13104-019-4542-3 Keywords: Repeated abortion; Debre Berhan