Transactions of the American Mathematical Society, ISSN 1088-6850 (online), ISSN 0002-9947 (print)
Well-posedness of the Dirichlet problem for the non-linear diffusion equation in non-smooth domains
by Ugur G. Abdulla
Trans. Amer. Math. Soc. 357 (2005), 247-265

We investigate the Dirichlet problem for the parabolic equation \[ u_t = \Delta u^m, \quad m > 0, \] in a non-smooth domain $\Omega \subset \mathbb {R}^{N+1}, N \geq 2$. In a recent paper [U.G. Abdulla, J. Math. Anal. Appl., 260, 2 (2001), 384-403] existence and boundary regularity results were established. In this paper we present uniqueness and comparison theorems and results on the continuous dependence of the solution on the initial-boundary data. In particular, we prove an $L_1$-contraction estimate in general non-smooth domains.

References
Ugur G. Abdulla, On the Dirichlet problem for the nonlinear diffusion equation in non-smooth domains, J. Math. Anal. Appl. 260 (2001), no. 2, 384–403. MR 1845560, DOI 10.1006/jmaa.2001.7458
U. G. Abdulla, First boundary value problem for the diffusion equation. I. Iterated logarithm test for the boundary regularity and solvability, SIAM J. Math. Anal. 34 (2003), no. 6, 1422–1434.
Ugur G. Abdulla, Reaction-diffusion in irregular domains, J. Differential Equations 164 (2000), no. 2, 321–354. MR 1765574, DOI 10.1006/jdeq.2000.3761
Ugur G. Abdulla, Reaction-diffusion in a closed domain formed by irregular curves, J. Math. Anal. Appl. 246 (2000), no. 2, 480–492. MR 1761943, DOI 10.1006/jmaa.2000.6800
Ugur G. Abdulla and John R. King, Interface development and local solutions to reaction-diffusion equations, SIAM J. Math. Anal. 32 (2000), no. 2, 235–260. MR 1781216, DOI 10.1137/S003614109732986X
Ugur G. Abdulla, Evolution of interfaces and explicit asymptotics at infinity for the fast diffusion equation with absorption, Nonlinear Anal. 50 (2002), no. 4, Ser. A: Theory Methods, 541–560. MR 1923528, DOI 10.1016/S0362-546X(01)00764-7
Hans Wilhelm Alt and Stephan Luckhaus, Quasilinear elliptic-parabolic differential equations, Math. Z. 183 (1983), no. 3, 311–341. MR 706391, DOI 10.1007/BF01176474
D. G. Aronson, The porous medium equation, Nonlinear diffusion problems (Montecatini Terme, 1985), Lecture Notes in Math., vol. 1224, Springer, Berlin, 1986, pp. 1–46. MR 877986, DOI 10.1007/BFb0072687
D. G. Aronson and L. A. Peletier, Large time behaviour of solutions of the porous medium equation in bounded domains, J. Differential Equations 39 (1981), no. 3, 378–412. MR 612594, DOI 10.1016/0022-0396(81)90065-6
Luis A. Caffarelli and Avner Friedman, Continuity of the density of a gas flow in a porous medium, Trans. Amer. Math. Soc. 252 (1979), 99–113. MR 534112, DOI 10.1090/S0002-9947-1979-0534112-2
Emmanuele DiBenedetto, Continuity of weak solutions to certain singular parabolic equations, Ann. Mat. Pura Appl. (4) 130 (1982), 131–176 (English, with Italian summary). MR 663969, DOI 10.1007/BF01761493
Emmanuele DiBenedetto, Continuity of weak solutions to a general porous medium equation, Indiana Univ. Math. J. 32 (1983), no. 1, 83–118. MR 684758, DOI 10.1512/iumj.1983.32.32008
Avner Friedman, Partial differential equations of parabolic type, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1964. MR 0181836
B. H. Gilding and L. A. Peletier, Continuity of solutions of the porous media equation, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 8 (1981), no. 4, 659–675. MR 656004
A. S. Kalashnikov, Some problems of the qualitative theory of second-order nonlinear degenerate parabolic equations, Uspekhi Mat. Nauk 42 (1987), no. 2(254), 135–176, 287 (Russian). MR 898624
O. A. Ladyzhenskaya, V. A. Solonnikov and N. N. Uralceva, Linear and Quasilinear Equations of Parabolic Type, American Mathematical Society, Providence, RI, 1968.
Gary M. Lieberman, Second order parabolic differential equations, World Scientific Publishing Co., Inc., River Edge, NJ, 1996. MR 1465184, DOI 10.1142/3302
Paul E. Sacks, Continuity of solutions of a singular parabolic equation, Nonlinear Anal. 7 (1983), no. 4, 387–409. MR 696738, DOI 10.1016/0362-546X(83)90092-5
Juan Luis Vázquez, An introduction to the mathematical theory of the porous medium equation, Shape optimization and free boundaries (Montreal, PQ, 1990), NATO Adv. Sci. Inst. Ser. C: Math. Phys. Sci., vol. 380, Kluwer Acad. Publ., Dordrecht, 1992, pp. 347–389. MR 1260981
William P. Ziemer, Interior and boundary continuity of weak solutions of degenerate parabolic equations, Trans. Amer. Math. Soc. 271 (1982), no. 2, 733–748. MR 654859, DOI 10.1090/S0002-9947-1982-0654859-7

MSC (2000): Primary 35K65, 35K55
Affiliation: Department of Mathematical Sciences, Florida Institute of Technology, 150 West University Boulevard, Melbourne, Florida 32901-6975
Email: [email protected]
Received by editor(s): July 31, 2000
Received by editor(s) in revised form: July 21, 2003
Published electronically: February 27, 2004
© Copyright 2004 American Mathematical Society
Journal: Trans. Amer. Math. Soc. 357 (2005), 247-265
DOI: https://doi.org/10.1090/S0002-9947-04-03464-6
MathSciNet review: 2098094
CommonCrawl
Math Seminar Series

Math Seminar: Weighted L²-estimates for ∂ and its applications
Dr. Song-Ying Li, University of California, Irvine
Friday, October 28th, 2022, 11:30 AM – BSB116. Also available on Zoom: https://tinyurl.com/9nrnveur
Abstract: In this talk, I will introduce Hörmander's weighted L² estimates for the Cauchy-Riemann operator and then present some applications, which include sharp pointwise and uniform estimates for the canonical solution of the Cauchy-Riemann equation ∂u = f on a classical bounded symmetric domain in Cⁿ and on product domains. The second application is my recent work on applying the weighted L² estimates to study the Corona problem in several complex variables.

Math Seminar: Comparison of Markov chains via weak Poincaré inequalities with application to pseudo-marginal MCMC
Dr. Andi Wang, University of Bristol
Abstract: I will discuss the use of a certain class of functional inequalities known as weak Poincaré inequalities to bound convergence of Markov chains to equilibrium. We show that this enables the straightforward and transparent derivation of subgeometric convergence bounds. We will apply these to study pseudo-marginal methods for intractable likelihoods, which are subgeometric in many practical settings. We are then able to provide new insights into the practical use of pseudo-marginal algorithms, such as analysing the effect of averaging in Approximate Bayesian Computation (ABC), and to study the case of lognormal weights relevant to Particle Marginal Metropolis–Hastings (PMMH) for state space models. Joint work with Christophe Andrieu, Anthony Lee and Sam Power.

Math Seminar: The Clifford Monopole equations (joint work with N. Santana, E. Lopez and A. Quintero-Velez)
Dr. Rafael Herrera-Guzman, Centro de Investigación en Matemáticas, Guanajuato, Mexico
Available on Zoom: https://tinyurl.com/9nrnveur
Abstract: The spin groups and the Clifford algebras have played a very important role in Differential Geometry and Physics. In the search for a unified spinorial approach to special Riemannian holonomy, we found a suitable notion of a twisted pure spinor, which generalizes that of a classical pure spinor developed by Cartan. Along the way, we realized that parallel twisted pure spinors, besides satisfying the corresponding twisted Dirac equation, satisfy a curvature identity analogous to the second Seiberg-Witten equation in 4 dimensions. The Dirac equation and the curvature equation constitute what we call the Clifford monopole equations. We will describe the setup of these equations on manifolds of arbitrary dimension, show that they have solutions on certain spaces, show how they restrict to the Seiberg-Witten equations, and sketch some aspects of the construction of the moduli space.

Math Seminar: The bootstrap for dynamical systems
Dr. Buddhima Kasun Fernando, Scuola Normale Superiore di Pisa, Italy
Friday, September 23rd, 2022
Abstract: Despite their deterministic nature, chaotic dynamical systems often exhibit seemingly random behavior. Consequently, a dynamical system is usually represented by a probabilistic model whose unknown parameters must be estimated using statistical methods. When measuring the uncertainty of such parameter estimation, the bootstrap in statistics stands out as a simple but powerful technique. In this talk, I will introduce the bootstrap for dynamical systems and discuss its consistency and its second-order efficiency using Edgeworth expansions.
The content of the talk is based on joint work with Nan Zou (Macquarie University) and will be accessible to anyone with a background in basic probability and statistics.

Math Seminar: Understanding SARS-CoV-2 transmission in the early phases of the COVID pandemic
Dr. Ilaria Dorigatti, Imperial College
Available on Zoom: https://tinyurl.com/9nrnveur
Abstract: On 21st February 2020, the first Italian COVID-19 death was detected in the municipality of Vo', a small town near Padua. At the time, the University of Padua conducted two sequential molecular swab surveys in the Vo' population (February & March 2020), which were then followed by three serological surveys (May & November 2020, and June 2021). In this talk, I will present the statistical and mathematical models developed in the early phases of the pandemic around the data collected in Vo', to understand the epidemiology of SARS-CoV-2 (Lavezzo et al, Nature, 2020), quantify heterogeneities in transmission and the effectiveness of interventions (Dorigatti et al, Nature Communications, 2021), as well as antibody dynamics and neutralization reactivity in the absence and presence of vaccination (Lavezzo et al, Genome Medicine, 2022). I will also present the results of a recent analysis investigating the effects of different testing policies on variant emergence, where we show that surveillance using molecular testing is necessary to detect and reduce the transmission of an antigen-test-escaping variant which was detected in Veneto in 2020 (Del Vecchio et al, Research Square).

Math Seminar: Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness
Dr. Murat Erdogdu, University of Toronto
Abstract: We study sampling from a target distribution $e^{-f}$ using the Langevin Monte Carlo (LMC) algorithm. For any potential function $f$ whose tails behave like $|x|^\alpha$ for $\alpha \in [1,2]$ and which has a $\beta$-Hölder continuous gradient, we derive the sufficient number of steps to reach the $\varepsilon$-neighborhood of a $d$-dimensional target distribution as a function of $\alpha$ and $\beta$. Our result is the first convergence guarantee for LMC under a functional inequality interpolating between the Poincaré and log-Sobolev settings (also covering the edge cases).

Math Seminar: Convergence properties of shallow neural networks: implications and applications in scientific computing
Dr. Grant Rotskoff, Stanford University
BSB117
Abstract: The surprising flexibility and undeniable empirical success of machine learning algorithms have inspired many theoretical explanations for the efficacy of neural networks. Here, I will briefly introduce one perspective that provides not only asymptotic guarantees of trainability and accuracy in high-dimensional learning problems but also provides some prescriptions and design principles for learning. Bolstered by the favorable scaling of these algorithms in high dimensional problems, I will turn to the problem of variational high-dimensional PDEs. From the perspective of an applied mathematician, these problems often appear hopeless; they are not only high-dimensional but also dominated by rare events. However, with neural networks in the toolkit, at least the dimensionality is somewhat less intimidating. I will describe an algorithm that combines stochastic gradient descent with importance sampling to optimize a function representation of the solution. Finally, I will provide numerical evidence of the power and limitations of this approach.

Math Seminar: The Manifold Joys of Sampling
Dr. Santosh Vempala, Frederick G. Storey Chair of Computing and Professor – College of Computing, Georgia Tech
BSB 132
Abstract: Sampling high-dimensional sets and distributions is a fundamental problem with many applications. The state of the art is that arbitrary logconcave densities can be sampled to arbitrarily small error in time polynomial in the dimension using simple Markov chains based on Euclidean geometry. In this talk, we describe algorithms that exploit varying local geometry and can be viewed as sampling Riemannian manifolds. This approach will let us derive more efficient algorithms for some cases of interest, as well as analyze affine-invariant versions of Euclidean algorithms, such as the Dikin walk, Hamiltonian Monte Carlo and Riemannian Langevin.

Math Seminar: Minimax Mixing Time of the Metropolis-Adjusted Langevin Algorithm for Log-concave Sampling
Dr. Yuansi Chen, Duke University, Department of Statistical Sciences
Abstract: We study the problem of using the Metropolis-adjusted Langevin algorithm (MALA) to sample from a log-smooth and strongly log-concave distribution in dimension d with condition number $\kappa$. We establish its optimal minimax mixing time under a warm start. First, we demonstrate that MALA with a warm start mixes in $O(d^{1/2} \kappa)$ iterations up to logarithmic factors; this improves upon the previous work on the dependency of either the condition number $\kappa$ or the dimension d. Our proof relies on comparing the leapfrog integrator with the continuous Hamiltonian dynamics, where we establish a new concentration bound for the acceptance rate. Second, we provide an explicit mixing time lower bound for reversible MCMC algorithms on general state spaces. We use this result to show that MALA requires at least $\Omega(d^{1/2} \kappa)$ steps in the worst case, matching our upper bound in terms of both the condition number and the dimension.

Math Seminar: Analysis of two-component Gibbs samplers using the theory of two projections
Dr. Qian Qin, University of Minnesota
Abstract: Gibbs samplers are a class of Markov chain Monte Carlo (MCMC) algorithms commonly used in statistics for sampling from intractable probability distributions. In this talk, I will demonstrate how Halmos's (1969) theory of two projections can be applied to study Gibbs samplers with two components. I will first give an introduction to MCMC algorithms, particularly Gibbs algorithms. Then, I will explain how problems regarding the asymptotic variance and convergence rate of a two-component Gibbs sampler can be translated into simple linear algebraic problems through Halmos's theory. In particular, a comparison is made between the deterministic-scan and random-scan versions of two-component Gibbs. It is found that in terms of asymptotic variance, the random-scan version is more robust than the deterministic-scan version, provided that the selection probability is appropriately chosen. On the other hand, the deterministic-scan version has a faster convergence rate. These results suggest that one may use the deterministic-scan version in the burn-in stage, and switch to the random-scan version in the estimation stage.

Math Seminar: Unbiased Multilevel Monte Carlo methods for intractable distributions: MLMC meets MCMC
Dr. Guanyang Wang, Rutgers University, Department of Statistics
Abstract: Constructing unbiased estimators from MCMC outputs has recently attracted much attention in the statistics and machine learning communities.
However, the existing unbiased MCMC framework only works when the quantity of interest is an expectation under a given probability distribution. In this work, we propose unbiased estimators for functions of expectations. Our idea is based on a combination of the unbiased MCMC and MLMC methods. We prove that our estimator has a finite variance and a finite computational complexity, and achieves ε-accuracy with O(1/ε²) computational cost under mild conditions. We also illustrate our estimator on several numerical examples. This is joint work with Tianze Wang.

Math Seminar: Quantitative convergence analysis of hypocoercive sampling dynamics
Dr. Lihan Wang, Carnegie Mellon University
Abstract: In this talk, we will discuss some advances in the quantitative analysis of convergence of hypocoercive sampling dynamics, including underdamped Langevin dynamics, randomized Hamiltonian Monte Carlo, the zigzag process and the bouncy particle sampler. The analysis is based on a variational framework for hypocoercivity which combines a Poincaré-type inequality in the time-augmented state space and an L² energy estimate. Joint works with Yu Cao (NYU) and Jianfeng Lu (Duke).
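Several of the talks above (Erdogdu, Chen, Vempala) analyze Langevin-type samplers for densities proportional to $e^{-f}$. As a rough illustration of the basic object being studied, and not of any result from the talks, here is a minimal sketch of the unadjusted Langevin algorithm; the Gaussian target, step size and chain length are placeholder choices.

```python
import numpy as np

def grad_f(x):
    # Potential f(x) = ||x||^2 / 2, i.e. a standard Gaussian target (placeholder choice).
    return x

def langevin_monte_carlo(grad_f, x0, step=0.05, n_steps=10_000, seed=None):
    """Unadjusted Langevin: x_{k+1} = x_k - step * grad f(x_k) + sqrt(2*step) * N(0, I)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Usage: sample a 2-dimensional standard Gaussian and check the empirical covariance.
chain = langevin_monte_carlo(grad_f, x0=np.zeros(2), step=0.05, n_steps=20_000, seed=0)
print(np.cov(chain[5_000:].T))  # should be close to the identity matrix
```

MALA, as in Chen's abstract, would add a Metropolis accept/reject step after each proposal to remove the discretization bias.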
CommonCrawl
Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2017: medical genomics
Genomic analyses based on pulmonary adenocarcinoma in situ reveal early lung cancer signature
Dan Li1,2, William Yang3, Yifan Zhang2, Jack Y Yang2, Renchu Guan1,2, Dong Xu1,4 & Mary Qu Yang2
Non-small cell lung cancer (NSCLC) represents about 80% of lung cancer cases. The early stages of NSCLC can be treated with complete resection with a good prognosis. However, most cases are detected at a late stage of the disease. The average survival rate of patients with invasive lung cancer is only about 4%. Adenocarcinoma in situ (AIS) is an intermediate subtype of lung adenocarcinoma that exhibits early-stage growth patterns but can develop into invasion. In this study, we used RNA-seq data from normal, AIS, and invasive lung cancer tissues to identify a gene module that represents the distinguishing characteristics of AIS; we refer to these as AIS-specific genes. Two differential expression analysis algorithms were employed to identify the AIS-specific genes. Then, the subset of AIS-specific genes that performed best for early lung cancer prediction was selected by random forest. Finally, the performance of early lung cancer prediction was assessed using random forest, support vector machine (SVM) and artificial neural networks (ANNs) on four independent early lung cancer datasets, including one tumor-educated blood platelets (TEPs) dataset. Based on the differential expression analysis, 107 AIS-specific genes consisting of 93 protein-coding genes and 14 long non-coding RNAs (lncRNAs) were identified. The significant functions associated with these genes include angiogenesis and ECM-receptor interaction, which are highly related to cancer development and contribute to smoking-independent lung cancers. Moreover, 12 of the AIS-specific lncRNAs are involved in lung cancer progression by potentially regulating the ECM-receptor interaction pathway. Feature selection by random forest identified 20 of the AIS-specific genes as early-stage lung cancer signatures using the dataset obtained from The Cancer Genome Atlas (TCGA) lung adenocarcinoma samples. Of the 20 signatures, two were lncRNAs, BLACAT1 and CTD-2527I21.15, which have been reported to be associated with bladder cancer, colorectal cancer and breast cancer. In blind classification on three independent tissue sample datasets, these signature genes consistently yielded about 98% accuracy for distinguishing early-stage lung cancer from normal cases. However, the prediction accuracy for the blood platelets samples was only 64.35% (sensitivity 78.1%, specificity 50.59%, and AUROC 0.747). The comparison of AIS with normal and invasive tumor samples revealed disease-specific genes and offered new insights into the mechanism underlying AIS progression into an invasive tumor. These genes can also serve as signatures for early diagnosis of lung cancer with high accuracy. The expression profiles of the gene signatures identified from cancer tissue samples yielded remarkable early cancer prediction for tissue samples but relatively lower accuracy for blood platelets samples. Lung cancer is one of the most common cancer types and the main cause of cancer-related deaths. About 14% of all new cancers are lung cancers, and about 154,050 deaths from lung cancer are estimated in the United States for 2018 by the American Cancer Society.
Non-small cell lung cancer accounts for about 80% of lung cancer cases and consists of various subtypes [1]. Generally, most of the deaths caused by lung cancer occur at late stages and are due to distant metastasis and invasion [2]. In contrast, the early stages or non-invasive subtypes of lung cancer can be cured [2]. Lung adenocarcinoma in situ is a subtype of NSCLC and shows non-invasive growth patterns. The 5-year survival rate of AIS is almost 100% with appropriate therapy [3]. However, AIS can develop into an invasive stage of lung cancer that has only an approximately 4% patient survival rate [1]. AIS is different from the other lung cancer histologies in that most AIS patients are non-smokers and women [4, 5]. Previous studies of AIS, for purposes of classification and diagnosis, have indicated differences in appearance from these and other types of lung cancer. Studies of AIS at the genetic level have not yet been widely performed; consequently, our understanding of the mechanism that causes AIS is limited. On the other hand, AIS cases could be misdiagnosed as pneumonia, since AIS sometimes has a varied appearance on CT [6] and generally 62% of AIS patients do not have symptoms [7]. Similarly, early-stage lung cancer is often asymptomatic. Previous studies have identified gene biomarkers involved in lung cancer progression and development [8], including several critical long non-coding RNAs [2, 9, 10]. More effective and robust molecular biomarkers for early lung cancer diagnosis remain to be uncovered. Recently, studies of AIS progression based on RNA sequencing techniques have been performed. Some protein-coding genes and lncRNAs related to AIS were identified [3], indicating the evolution of lung cancer from normal to invasive stages. However, large-scale studies and comparisons of these genes at different disease stages of cancer development have not been carried out. In this study, we first identified the genes that were specifically expressed in AIS tissue samples compared with both normal and invasive cancer cases. The differential expression analysis was performed using two computational methods, the widely used edgeR [11] and the newly developed Cross-Value Association Analysis (CVAA) [12]. The combined results of these two methods were used for downstream analysis. Only a small group of genes (107), including both protein-coding genes (93) and lncRNAs (14), was found that potentially dominate AIS and the invasive progression (Additional file 1: Figure S1). Smoking is considered one of the major risk factors for lung cancer, and about 75% of lung cancer cases are attributable to tobacco use. Lung cancer in never-smokers is even considered a different disease [5]. The AIS-specific genes were significantly enriched for lung cancer-related functional annotations such as angiogenesis [13, 14] and the ECM-receptor interaction, a known pathway that contributes to smoking-independent lung cancers [15,16,17]. We further identified 20 early lung cancer signature genes that can be used for distinguishing early lung cancer cases from normal ones. In particular, we performed an experiment using the random forest method on four independent datasets generated by RNA-seq or microarray techniques and achieved about 98% prediction accuracy for early-stage lung cancer in tissue samples but only 64.35% overall accuracy in the blood platelets dataset.
Our results suggested that AIS-specific genes could help us to better understand this uncommon lung cancer subtype. The AIS-specific genes may also play a critical role in lung cancer progression. Moreover, the expression profiles of the early lung cancer signature genes we identified showed the ability to provide accurate and robust early cancer prediction.
Comparison of gene expression in AIS and invasive lung cancer
To investigate the genes that dominate the intermediate type of AIS and underlie different phenotypes (normal, AIS and invasive cancer cases), we collected an RNA-seq library (GSE52248) consisting of normal, AIS and invasive cancer samples from six lung cancer patients [3]. The raw RNA-seq data were generated from formalin fixation and paraffin embedding (FFPE)-processed tissues. First, the RNA-seq data were processed and the gene expression profile was calculated using the gene annotation from Ensembl (Methods). Then, differential expression analysis via edgeR was performed on 16,501 expressed genes consisting of 15,106 protein-coding genes and 1395 lncRNAs. As a result, 1348 significant differentially expressed genes (DEGs) were found between normal and invasive lung cancer samples under the threshold |log2 fold change| > 1 & FDR < 0.05. Based on the same thresholds, 719 DEGs between normal and AIS cases as well as 98 DEGs between AIS and invasive cancer tissues were identified. The gene expression patterns in AIS and invasive cancer tissues demonstrated much more consistency (Additional file 1: Figure S1) despite the great differences between these two phenotypes. Our results indicated that only a small number of genes potentially dominated the evolution of lung cancer from AIS into invasive lung cancer.
Identification of AIS-specific genes
To comprehensively identify the gene set that was specifically expressed in AIS tissue, we applied two differential expression analysis methods, edgeR [11] and CVAA [12], based on the gene expression profiles of paired normal and AIS, and AIS and invasive cancer samples. edgeR is one of the most widely used differential expression (DE) analysis methods, while CVAA is a newly developed normalization-free and nonparametric DE analysis method. Unlike the commonly used DE analysis methods, CVAA neither normalizes nor assumes a distribution for the gene expressions. Instead, it reveals DEGs through gene expression comparison and ranking. The DEGs between normal and AIS samples that were, at the same time, differentially expressed in invasive cancer compared with AIS samples were used as candidates for AIS-specific genes (Methods). The union of the DEG sets identified by the two methods was collected. As a result, a total of 107 (22 upregulated and 85 downregulated) genes, including 93 protein-coding genes and 14 long non-coding RNAs, were identified as AIS-specific genes (Methods, Additional file 2: Table S1).
LncRNAs potentially regulate the ECM-receptor interaction pathway and are involved in lung cancer
We applied functional annotation via DAVID [18] to the 93 protein-coding genes and found a number of enriched functions (Additional file 3: Table S2), including angiogenesis and ECM-receptor interaction, which reflects the aggressiveness of the tumor and has an important role in metastasis [13, 14]. A previous study of lung cancer [17] indicated that non-smokers are also at risk of lung cancer.
Some well-known cancer-related pathways such as cell cycle and p53 were enriched for differentially expressed genes only in current smokers, whereas the ECM-receptor interaction pathway is over-represented in patients who never smoked and is considered to contribute to smoking-independent lung cancer [17]. Interestingly, it has been found that AIS is more common in women and non-smokers [3], and the disrupted ECM-receptor interaction pathway was also found in the AIS data in our study. Many ECM proteins are factors that promote the metastatic cascade, as they are significantly deregulated during the progression of cancer [16]. The ECM-receptor interaction pathway contains 87 protein-coding genes, and three of them (CD36, SPP1, TNR) are AIS-specific. We further employed GENIE3 (Gene Network Inference with Ensemble of trees) [19] to predict the regulatory relationships between the 14 AIS-specific lncRNAs and the 87 genes (Methods). As a result, 12 lncRNAs were found to potentially regulate genes in the ECM-receptor interaction pathway (Additional file 4: Figure S2), suggesting their roles in lung cancer progression. Moreover, the odds ratios of the regulations between the lncRNAs and the ECM-receptor interaction pathway indicated novel lncRNAs, such as FENDRR (OR = 1.53) and MEOX2-AIS (OR = 3.22), as regulators interacting with this pathway (Methods). Collectively, these results suggested that the AIS-specific genes play critical roles in the progression of AIS and the development of invasive lung cancer.
Identification of early lung cancer signatures
AIS is a pre-invasive lung adenocarcinoma lesion. Hence, the AIS-specific genes can potentially serve as gene signatures for early lung cancer detection. We employed random forest to select, from the 107 AIS-specific genes, the top genes that can effectively distinguish normal from early-stage cancer cases (Methods). Using the gene expression profiles of the normal (n = 59) and early-stage (stage I) lung adenocarcinoma cases (n = 286) from the TCGA project, random forest reported the importance of each gene by calculating the classification error rate. We found that one gene set composed of 20 genes yielded the lowest error rate (1.16%). Therefore, these 20 genes, including two lncRNAs (BLACAT1, CTD-2527I21.15), ranked by the random forest importance scores, were considered to be early lung cancer diagnosis signatures and were used for further validation and analysis (Additional file 5: Table S3). Of the 20 gene signatures, 13 were continually downregulated along the lung cancer progression from normal to AIS to invasive. In contrast, the expression levels of the other seven genes were significantly increased (Fig. 1), indicating their lung cancer-related functions. Interestingly, all 20 genes were discovered by CVAA, indicating the power of this new method and the necessity of comprehensive identification of DEGs.
Fig. 1 The gene expression patterns of the 20 early lung cancer signatures. A, seven genes including the two lncRNAs were upregulated along the lung cancer progression from normal to invasive. B, 13 genes were continually downregulated.
Early lung cancer signatures provide insights into early lung cancer diagnosis
A large portion of early-stage NSCLC can be cured [2]. Lung cancer deaths are mainly caused by the distant metastases that drive cancer into late stages [2]. Early diagnosis of lung cancer is critical for patient survival and treatment.
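As a side note, the signature-selection step described above can be sketched in a few lines. This is only an illustration using scikit-learn in place of the R rfcv function the study actually used; the expression matrix X (samples x candidate genes) and labels y are assumed to be loaded elsewhere, and the random data below merely stand in for them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: 345 samples x 107 candidate genes, binary labels (0 = normal, 1 = tumor).
rng = np.random.default_rng(0)
X = rng.normal(size=(345, 107))
y = rng.integers(0, 2, size=345)

# Rank candidate genes by random-forest importance.
rf = RandomForestClassifier(n_estimators=1000, random_state=115).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]

# Shrink the gene set stepwise and keep the size with the lowest cross-validated error rate.
errors = {}
for k in (107, 80, 60, 40, 20, 10, 5):  # step sizes are illustrative, not the rfcv schedule
    cols = order[:k]
    acc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=115),
                          X[:, cols], y, cv=5).mean()
    errors[k] = 1 - acc
best_k = min(errors, key=errors.get)
print(best_k, errors[best_k])
```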
The expression patterns of our 20 early lung cancer signatures were distinct between the normal and early-stage TCGA lung adenocarcinoma samples (Fig. 2), suggesting their potential capability for early lung cancer prediction. We next examined the effectiveness of these biomarkers by employing widely used machine learning classification algorithms.
Fig. 2 The expressions of the 20 early lung cancer signatures in TCGA lung adenocarcinoma normal (59, blue) and early (286, gold, stage I) cases.
We first applied random forest [20] for detecting the early lung cancer cases (Methods). The gene expression profiles of the TCGA lung adenocarcinoma dataset, consisting of 59 normal and 286 early samples reported as stage I, were downloaded. The expression patterns of the signature genes in this dataset are shown in Fig. 2. The average prediction accuracy of the random forest model was 98.86% (Table 1, Methods) based on the expression profiles of these signature genes.
Table 1 The early lung cancer prediction performances on four different datasets using random forest.
We then collected a second independent early lung cancer dataset, GSE68465 [21], which was generated using the microarray platform HG-U133A. The dataset consisted of 276 early (stage IA and IB) lung cancer and 19 normal samples. Two lncRNAs (BLACAT1, CTD-2527I21.15) and three protein-coding genes (SCUBE1, HS6ST2, RTKN2) of the signatures were not included in this dataset. We achieved 99.51% prediction accuracy, 99.95% sensitivity, and 92.83% specificity on average for this dataset (Table 1). The third dataset (GSE10072) [22] was also microarray platform-based and contained 58 lung cancer and 49 normal cases. The patients were grouped into never, former, and current smokers by their smoking behaviors. Using the expression profiles of the same genes as in the second dataset, we obtained 97.91% accuracy for lung cancer case prediction (sensitivity = 98.05%, specificity = 97.75%). Blood-based liquid biopsies provide promising non-invasive cancer detection, and blood-based biomarkers have been studied and identified [23]. Based on the age-matched tumor-educated blood platelets (TEPs) early lung cancer samples (GSE89843) [23], we assessed the effectiveness of our 20 gene signatures, identified from tissue samples, on these TEPs data (Methods). However, the prediction accuracy was relatively lower (64.35%) (Table 1), suggesting that these signatures might be tissue-specific. We further examined the prediction performances using different machine learning algorithms, including random forest, SVM [24], and ANNs [25], across the four datasets. To comprehensively measure the robustness of our signature genes, we calculated the average area under the ROC curve (AUROC) values of each model for each dataset (Fig. 3, Additional file 6: Figure S3). All the machine learning models succeeded in predicting the early lung cancer tissue samples, except the ANN-based model for GSE68465. GSE68465 contained an unbalanced sample size (19 normal vs. 276 tumor, Methods). In summary, the early lung cancer signature genes we identified showed robustness and high accuracy for distinguishing normal from early lung cancer cases.
Fig. 3 The performance assessments for early lung cancer prediction using random forest, SVM and artificial neural networks for four lung cancer datasets. The AUROC values were calculated based on 100 bootstrapping tests.
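The evaluation protocol behind Fig. 3 (repeated 2/3-1/3 splits, averaging AUROC over the repetitions, comparing classifiers) can be sketched as follows. Again this is a stand-in using scikit-learn rather than the R packages named in the Methods, with placeholder data in place of the signature-gene expression matrix X and labels y.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def average_auroc(model_factory, X, y, n_repeats=100, test_size=1/3, seed=115):
    """Average AUROC over repeated random 2/3 train, 1/3 test splits."""
    scores = []
    for i in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y, random_state=seed + i)
        model = model_factory().fit(X_tr, y_tr)
        prob = model.predict_proba(X_te)[:, 1]
        scores.append(roc_auc_score(y_te, prob))
    return float(np.mean(scores))

# Placeholder data: 345 samples x 20 signature genes, binary labels (0 = normal, 1 = tumor).
rng = np.random.default_rng(1)
X = rng.normal(size=(345, 20))
y = rng.integers(0, 2, size=345)

# 10 repeats here to keep the demo fast; the study averaged over 100.
print("random forest:", average_auroc(lambda: RandomForestClassifier(n_estimators=1000), X, y, n_repeats=10))
print("SVM:", average_auroc(lambda: SVC(probability=True), X, y, n_repeats=10))
```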
The early lung cancer signature genes were highly lung cancer related
We conducted a further literature search and found that the majority of the early lung cancer signature genes we identified have been reported to be highly associated with cancer progression, diagnosis, therapy, and patient overall survival. All 18 protein-coding genes were found to be directly involved in lung cancer development, as suggested by previous studies (Additional file 5: Table S3). For instance, the protein-coding genes CD36 [26] and TMPRSS4 [27] were already identified as potential therapeutic targets of lung cancer, while TMPRSS4 can induce cancer stem cell-like properties in lung cancer [28]. HMGB3 and FABP4 showed high diagnostic and prognostic value in human NSCLC [29, 30]. SPP1, AGER, and RTKN2 regulate lung cancer-related pathways such as the VEGF (vascular endothelial growth factor) signaling pathway and NF-kappaB [31, 32]. The loss of WNT7A is a major contributing factor to increased lung cancer tumorigenesis [33]. The expression level of FAM107A is decreased in patients with NSCLC [34], whereas high levels of expression of HS6ST2 are observed in lung cancer cell lines [35]. Associations between the two lncRNAs and NSCLC have not been reported yet. The lncRNA BLACAT1 (Bladder Cancer Associated Transcript 1) was up-regulated in bladder cancer. BLACAT1 also affects cell proliferation, indicates the prognosis of colorectal cancer and is significantly associated with poor overall survival [36]. Our results suggested a diagnostic value of BLACAT1 for NSCLC. The other lncRNA, CTD-2527I21.15, is a basal-like breast cancer marker. CTD-2527I21.15 is located adjacent to FXYD3 on chromosome 19 and potentially cis-regulates its expression in cancer [37]. Moreover, our results indicated a combinatory effect of these genes for early lung cancer diagnosis.
Data collection and processing
The raw RNA-seq data of the AIS cases (GSE52248) were downloaded. The low-quality reads were trimmed via Trimmomatic version 0.36 [38]. The human gene annotation from Ensembl was used. We applied STAR (v2.4) [39] followed by Cufflinks (v2.2.1) [40] to calculate the gene expressions. The four other independent lung cancer datasets were TCGA lung adenocarcinoma, GSE68465, and GSE10072, for which the gene expression profiles were available, and GSE89843, which was a blood platelets RNA-seq library. The TCGA lung adenocarcinoma dataset consisted of 596 samples. In this study, only the 59 normal samples and the 286 early lung cancer (stage I) samples were used for the analysis. The dataset GSE68465 was generated on the microarray platform HG-U133A and collected from 6 contributing treatment institutions. The patients were around 64 years old on average, and 42.3% of the patients had died within about 4 years of the clinical report. Here, only the gene expression profiles of 19 normal samples and 276 early (stage IA and IB) lung cancer samples were used for the prediction. GSE10072 was also microarray data, and the fresh-frozen lung cancer tissue samples were collected from patients grouped as never (20), former (26), and current (28) smokers by their smoking behaviors. An additional 49 normal samples were used as controls. All the samples were generated by the Environment and Genetics in Lung Cancer Etiology (EAGLE) study. The RNA-seq data of the blood platelets of 53 early locally advanced NSCLC patients were collected from the study GSE89843 [23]. The other 53 healthy age-matched (range from 48 to 86) samples in the same study were used as normal controls for the prediction.
The gene expressions (FPKM) were calculated using the raw RNA-seq reads.
Differentially expressed gene identification
The read counts of the genes were calculated by HTSeq-count (v0.6.1) [41]. Then, the R package edgeR was applied for differential expression analysis between the samples of various types. The threshold |log2 fold change| > 1 & FDR < 0.05 was used in our study for defining significantly differentially expressed genes. The R package of the CVAA (version 0.1.0) method was obtained from the author and applied under the default settings [12]. The genes were ranked by CVAA based on the significance of the differential expression. We selected the same number of top CVAA DEGs as top edgeR DEGs for further analysis. The individual sets of AIS-specific genes identified by edgeR and CVAA were combined. CVAA is a normalization-free and nonparametric method that identifies DEGs.
Regulation prediction by GENIE3
GEne Network Inference with Ensemble of trees (GENIE3) calculates the regulatory relationships between genes based on their expression patterns [19]. The gene expression profiles of the normal and early-stage TCGA lung adenocarcinoma samples were used. The 14 AIS-specific lncRNAs were considered as regulators, while all the protein-coding genes were used as potential target genes. All the regulations between lncRNAs and protein-coding genes were ranked by weight (Additional file 7: Figure S4), and only the regulations above the third quartile of all the weights were considered confident regulations. The odds ratios were calculated as:
$$ OR = \frac{P_I R_T / P_I R_N}{P_O R_T / P_O R_N} $$
where $P_I R_T$ represents the number of target genes of a given lncRNA that lie in (I) the ECM-receptor interaction pathway (P), whereas $P_I R_N$ represents the number of non-target genes in the pathway. $P_O R_T$ and $P_O R_N$ in the denominator stand for the numbers of target and non-target genes outside of (O) the pathway, respectively.
Machine learning models for predicting early lung cancer
Random forest allows for measuring the importance of the features, which are the genes in our study, for classification. The random forest cross-validation for feature selection (rfcv) function was applied to reveal the best gene set for cancer case prediction. We used the arguments: 5-fold cross-validation, log scale, and a 0.9 step, which means that 10% of the features were removed at each testing step. Then we compared the classification performances of three machine learning models: random forest, SVM, and ANNs. Random forest is an ensemble learning method that can be used for classification. The randomForest package [20] was used with 1000 trees and seed 115 for reproducibility. e1071 is a widely used R package for performing SVM [24]. The tune function was used to find the best cost and gamma parameters for the SVM. The package neuralnet was used for performing the ANNs [25]. Here, we used two hidden layers with 50 and 25 neurons, respectively. For each dataset, we randomly selected 2/3 of the samples as the training set and the other 1/3 as the testing set. Then, the average accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were calculated by running the experiment 100 times.
AIS cases represent a minority of lung cancer cases; however, they provide valuable information about early diagnosis and treatment of the disease.
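As an aside, the odds-ratio formula defined in the Methods above can be made concrete in a few lines. This is only a sketch: the pathway, target and background gene sets are assumed to be available as Python sets of gene identifiers, and the example sets below are made up.

```python
def pathway_odds_ratio(targets, pathway, background):
    """OR = (targets in pathway / non-targets in pathway) / (targets outside / non-targets outside)."""
    in_t = len(targets & pathway)                 # P_I R_T
    in_n = len(pathway - targets)                 # P_I R_N
    out_t = len(targets - pathway)                # P_O R_T
    out_n = len(background - pathway - targets)   # P_O R_N
    # Zero counts are not handled here; a real implementation would guard against division by zero.
    return (in_t / in_n) / (out_t / out_n)

# Hypothetical example: a 2000-gene background, an 87-gene pathway, and one lncRNA's confident targets.
background = {f"g{i}" for i in range(2000)}
pathway = {f"g{i}" for i in range(87)}
targets = {f"g{i}" for i in range(0, 2000, 7)} | {"g1", "g2", "g3"}
print(round(pathway_odds_ratio(targets, pathway, background), 2))
```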
With more attention and the availability of NGS data for AIS cases, we expect that more comprehensive analyses of lung cancer can be conducted. The identification of differentially expressed genes is critical in cancer studies. Several computational methods for differential expression analysis have been developed [11, 12, 40, 42]. Most of these methods are normalization-based and assume a distribution for the gene expression profile. On the other hand, the results of these differential expression analysis methods are often not consistent. Here, in addition to applying edgeR, we employed the newly developed CVAA, a normalization-free and nonparametric approach for differential expression analysis. Out of the 719 significant DEGs between normal and AIS cases identified by edgeR and CVAA, the overlap rate was about 50% on average (Additional file 8: Figure S5A). Moreover, less than 20% of the 98 DEGs between AIS and invasive lung cancer were common genes revealed by both methods (Additional file 8: Figure S5B). Thus, the union set of the AIS-specific genes identified by edgeR and CVAA can provide a more comprehensive and robust set of candidates involved in lung cancer progression. Interestingly, the 20 early lung cancer gene signatures, which are the most discriminative genes in classifying normal and early cancer cases, were all identified by CVAA. The second dataset (GSE68465) is unbalanced, containing 19 normal samples and 276 lung cancer samples. The prediction performance of the ANN model was poor compared with random forest and SVM for these data, suggesting that the performance of ANNs was impacted more by the unbalanced dataset. The performance of ANNs on unbalanced data might be improved by optimizing parameters. Tumors are highly heterogeneous and pose significant challenges in diagnosis and treatment. The gene expression profiles differ between two subtypes of the same tumor or between tissue and liquid sample types from the same patient. Our finding in this study indicated the limitation of biomarkers identified from lung cancer tissue samples for prediction on blood-based data. In this study, we identified the AIS-specific genes that potentially dominate the lung cancer progression from AIS into the invasive tumor. A further analysis of these specific genes in AIS revealed their essential functions and properties in diverse types of lung cancer tissues. We also identified several novel lncRNAs that are involved in lung cancer by interacting with lung cancer-related pathways. Twenty early lung cancer signature genes were identified. A cross assessment based on diverse machine learning models and independent datasets indicated that our signatures were robust for early lung cancer prediction. These signature genes were highly lung cancer-related, and the combined gene group showed the capability to improve early lung cancer diagnosis with high accuracy.
Abbreviations
AIS: Adenocarcinoma in situ
ANNs: Artificial neural networks
AUROC: Average area under an ROC curve
BLACAT1: Bladder Cancer Associated Transcript 1
CVAA: Cross-Value Association Analysis
DE: Differential expression
DEGs: Differentially expressed genes
FFPE: Formalin fixation and paraffin embedding
GENIE3: Gene Network Inference with Ensemble of trees
lncRNA: Long non-coding RNAs
NSCLC: Non-small cell lung cancer
SVM: Support vector machine
TCGA: The Cancer Genome Atlas
TEPs: Tumor-educated blood platelets
References
Travis WD, Brambilla E, Riely GJ. New pathologic classification of lung Cancer: relevance for clinical practice and clinical trials. J Clin Oncol. 2013;31:992–1001.
Ji P, Diederichs S, Wang W, Böing S, Metzger R, Schneider PM, Tidow N, Brandt B, Buerger H, Bulk E, Thomas M. MALAT-1, a novel noncoding RNA, and thymosin β4 predict metastasis and survival in early-stage non-small cell lung cancer. Oncogene. 2003;22(39):8031. Morton ML, Bai X, Merry CR, Linden PA, Khalil AM, Leidner RS, et al. Identification of mRNAs and lincRNAs associated with lung cancer progression using next-generation RNA sequencing from laser micro-dissected archival FFPE tissue specimens. Lung Cancer Amst Neth. 2014;85:31–9. Bracci PM, Sison J, Hansen H, Walsh KM, Quesenberry CP, Raz DJ, et al. Cigarette smoking associated with lung adenocarcinoma in situ in a large case-control study (SFBALCS). J Thorac Oncol. 2012;7:1352–60. Sun S, Schiller JH, Gazdar AF. Lung cancer in never smokers—a different disease. Nature Reviews Cancer. 2007;7(10):778. Patsios D, Roberts HC, Paul NS, Chung T, Herman SJ, Pereira A, et al. Pictorial review of the many faces of bronchioloalveolar cell carcinoma. Br J Radiol. 2007;80:1015–23. Thompson WH. Bronchioloalveolar Carcinoma Masquerading as Pneumonia. Respir Care. 2004;49:1349–53. Zhao Y, Lu H, Yan A, Yang Y, Meng Q, Sun L, Pang H, Li C, Dong X, Cai L. ABCC3 as a marker for multidrug resistance in non-small cell lung cancer. Sci Rep. 2013;3:3120. Clemson CM, Hutchinson JN, Sara SA, Ensminger AW, Fox AH, Chess A, et al. An architectural role for a nuclear non-coding RNA: NEAT1 RNA is essential for the structure of Paraspeckles. Mol Cell. 2009;33:717–26. Jen J, Tang YA, Lu YH, Lin CC, Lai WW, Wang YC. Oct4 transcriptionally regulates the expression of long non-coding RNAs NEAT1 and MALAT1 to promote lung cancer progression. Mol Cancer. 2017;16(1):104. Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010;26:139–40. Li Q-G, He Y-H, Wu H, Yang C-P, Pu S-Y, Fan S-Q, et al. A normalization-free and nonparametric method sharpens large-scale transcriptome analysis and reveals common gene alteration patterns in cancers. Theranostics. 2017;7:2888–99. Nishida N, Yano H, Nishida T, Kamura T, Kojiro M. Angiogenesis in Cancer. Vasc Health Risk Manag. 2006;2:213–9. Folkman J. Angiogenesis in cancer, vascular, rheumatoid and other disease. Nat Med. 1995;1(1):27. Zhou W, Yin M, Cui H, Wang N, Zhao L-L, Yuan L-Z, et al. Identification of potential therapeutic target genes and mechanisms in non-small-cell lung carcinoma in non-smoking women based on bioinformatics analysis. Eur Rev Med Pharmacol Sci. 2015;19:3375–84. Venning FA, Wullkopf L, Erler JT. Targeting ECM disrupts cancer progression. Front Oncol. 5:224. Hu Y, Chen G. Pathogenic mechanisms of lung adenocarcinoma in smokers and non-smokers determined by gene expression interrogation. Oncol Lett. 2015;10:1350–70. Huang DW, Sherman BT, Lempicki RA. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2008;4:44–57. Huynh-Thu VA, Irrthum A, Wehenkel L, Geurts P. Inferring Regulatory Networks from Expression Data Using Tree-Based Methods. Isalan M, editor. PLoS ONE. 2010;5:e12776. Liaw A, Wiener M. Classification and regression by randomForest. R news. 2002;2(3):18–22. Director's Challenge Consortium for the Molecular Classification of Lung Adenocarcinoma, Shedden K, JMG T, Enkemann SA, Tsao M-S, Yeatman TJ, et al. Gene expression–based survival prediction in lung adenocarcinoma: a multi-site, blinded validation study. Nat Med. 2008;14:822–7. 
Landi MT, Dracheva T, Rotunno M, Figueroa JD, Liu H, Dasgupta A, et al. Gene Expression Signature of Cigarette Smoking and Its Role in Lung Adenocarcinoma Development and Survival. Albertson D, editor. PLoS ONE. 2008;3:e1651. Best MG, Sol N, SGJG I 't V, Vancura A, Muller M, Niemeijer A-LN, et al. Swarm Intelligence-Enhanced Detection of Non-Small-Cell Lung Cancer Using Tumor-Educated Platelets. Cancer Cell. 2017;32:238–252.e9. Meyer D, Dimitriadou E, Hornik K, Weingessel A, Leisch F. e1071: misc functions of the department of statistics, probability theory group (formerly: E1071), TU Wien. R package version 1.6–7. Günther F, Fritsch S. neuralnet: Training of neural networks. R J. 2010;2(1):30–8. Pascual G, Avgustinova A, Mejetta S, Martín M, Castellanos A, Attolini CS-O, et al. Targeting metastasis-initiating cells through the fatty acid receptor CD36. Nature. 2017;541:41–5. de Aberasturi AL, Calvo A. TMPRSS4: an emerging potential therapeutic target in cancer. Br J Cancer. 2015;112:4–8. de Aberasturi AL, Redrado M, Villalba M, Larzabal L, Pajares MJ, Garcia J, et al. TMPRSS4 induces cancer stem cell-like properties in lung cancer cells and correlates with ALDH expression in NSCLC patients. Cancer Lett. 2016;370:165–76. Song N, Liu B, Wu J-L, Zhang R-F, Duan L, He W-S, et al. Prognostic value of HMGB3 expression in patients with non-small cell lung cancer. Tumour Biol. 2013;34:2599–603. Tang Z, Shen Q, Xie H, Zhou X, Li J, Feng J, et al. Elevated expression of FABP3 and FABP4 cooperatively correlates with poor prognosis in non-small cell lung cancer (NSCLC). Oncotarget. 2016;7:46253–62. Lin J, Marquardt G, Mullapudi N, Wang T, Han W, Shi M, et al. Lung Cancer transcriptomes refined with laser capture microdissection. Am J Pathol. 2014;184:2868–84. Psallidas I, Stathopoulos GT, Maniatis NA, Magkouta S, Moschos C, Karabela SP, et al. Secreted phosphoprotein-1 directly provokes vascular leakage to foster malignant pleural effusion. Oncogene. 2013;32:528–35. Bikkavilli RK, Avasarala S, Scoyk MV, Arcaroli J, Brzezinski C, Zhang W, et al. Wnt7a is a novel inducer of β-catenin-independent tumor-suppressive cellular senescence in lung cancer. Oncogene. 2015;34:5317–28. Pastuszak-Lewandoska D, Czarnecka KH, Migdalska-Sęk M, Nawrot E, Domańska D, Kiszałkiewicz J, et al. Decreased FAM107A expression in patients with non-small cell lung Cancer. Adv Exp Med Biol. 2015;852:39–48. HATABE S, KIMURA H, ARAO T, KATO H, HAYASHI H, NAGAI T, et al. Overexpression of heparan sulfate 6-O-sulfotransferase-2 in colorectal cancer. Mol Clin Oncol. 2013;1:845–50. Su J, Zhang E, Han L, Yin D, Liu Z, He X, et al. Long noncoding RNA BLACAT1 indicates a poor prognosis of colorectal cancer and affects cell proliferation by epigenetically silencing of p15. Cell Death Dis. 2017;8:e2665. Bradford JR, Cox A, Bernard P, Camp NJ. Consensus analysis of whole transcriptome profiles from two breast cancer patient cohorts reveals long non-coding RNAs associated with intrinsic subtype and the tumour microenvironment. PloS one. 2016;11(9):e0163238. Bolger AM, Lohse M, Usadel B. Trimmomatic: a flexible trimmer for Illumina sequence data. Bioinformatics. 2014;30:2114–20. Dobin A, Davis CA, Schlesinger F, Drenkow J, Zaleski C, Jha S, et al. STAR: ultrafast universal RNA-seq aligner. Bioinformatics. 2013;29:15–21. Trapnell C, Williams BA, Pertea G, Mortazavi A, Kwan G, van Baren MJ, et al. Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation. Nat Biotechnol. 
2010;28:511–5. Anders S, Pyl PT, Huber W. HTSeq—a Python framework to work with high-throughput sequencing data. Bioinformatics. 2015;31:166–9. Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014;15(12):550. This research was supported by United States National Institutes of Health (NIH) Academic Research Enhancement Award 1R15GM114739 and National Institute of General Medical Sciences (NIH/NIGMS) 5P20GM103429, United States Food and Drug Administration (FDA) HHSF223201510172C and HHSF223201610111C and Arkansas Science and Technology Authority (ASTA) Basic Science Research 15-B-23 and 15-B-38. However, the information contained herein represents the position of the author(s) and not necessarily that of the NIH and FDA. The publication cost of this article was funded by United States National Institutes of Health (NIH) Academic Research Enhancement Award 1R15GM114739. All the RNA-seq data used in this study were public available from the Gene Expression Omnibus (GSE10072, GSE68465, GSE89843) and TCGA Lung Adenocarcinoma. This article has been published as part of BMC Medical Genomics Volume 11 Supplement 5, 2018: Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2017: medical genomics. The full contents of the supplement are available online at https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-11-supplement-5. Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, College of Computer Science & Technology, Jilin University, Changchun, 130012, China Dan Li, Renchu Guan & Dong Xu MidSouth Bioinformatics Center and Joint Bioinformatics Ph.D. Program of University of Arkansas at Little Rock and Univ. of Arkansas Medical Sciences, 2801 S. Univ. Ave, Little Rock, AR, 72204, USA Dan Li, Yifan Zhang, Jack Y Yang, Renchu Guan & Mary Qu Yang Department of Computer Science, Carnegie Mellon University School of Computer Science, 5000 Forbes Ave, Pittsburgh, PA, 15213, USA Department of Electrical Engineering and Computer Science, Informatics Institute, and Christopher S. Bond Life Sciences Center, University of Missouri, Columbia, MO, 65211, USA Dong Xu Yifan Zhang Jack Y Yang Renchu Guan Mary Qu Yang MQY and DX conceived the project, DL and MQY designed the experiments. DL and WY conducted the experiments. DL performed the analysis. YZ, JY and RG participate in discussion. All authors have read and approved final manuscript. Correspondence to Mary Qu Yang. Competing interest Figure S1. Gene expression comparison between normal, AIS, and invasion lung cancer cases. (PDF 334 kb) Table S1. List of the 107 AIS-specific genes. (XLSX 14 kb) Table S2. The functional annotations of the 107 AIS-specific genes. (XLSX 12 kb) Figure S2. The AIS-specific lncRNAs that potentially regulate the target genes in ECM-receptor interaction pathway. (PDF 576 kb) Table S3. List of the 20 early lung cancer signature genes and their cancer related functions. (XLSX 11 kb) Figure S3. An example ROC curve of three machine learning algorithms on TCGA lung adenocarcinoma dataset. The AUROC values were calculated based on one of the 100 randomly selected training and testing datasets. (PDF 49 kb) Figure S4. The distribution of the regulatory weights calculated by GENIE3. (PDF 173 kb) Figure S5. Consistency comparison between the two differential expression analysis methods. (PDF 452 kb) Li, D., Yang, W., Zhang, Y. et al. 
Genomic analyses based on pulmonary adenocarcinoma in situ reveal early lung cancer signature. BMC Med Genomics 11, 106 (2018). https://doi.org/10.1186/s12920-018-0413-3
Keywords: lncRNAs
CommonCrawl
What is a basic physics general definition of a 'potential'?

From Wikipedia: In physics, a potential may refer to the scalar potential or to the vector potential. In either case, it is a field defined in space, from which many important physical properties may be derived. Leading examples are the gravitational potential and the electric potential, from which the motion of gravitating or electrically charged bodies may be obtained. Specific forces have associated potentials, including the Coulomb potential, the van der Waals potential, the Lennard-Jones potential and the Yukawa potential.

It seems there should be a more specific physics definition of a potential. For example, is it always attributed to a field? The voltage across a resistor is called the potential difference, and it does not seem to be a field. Does it have any relationship to potential energy? What attributes must something have to be called a potential? Do all forces have an associated potential? Are all potentials associated with a force? Maybe the second sentence from the wiki quote is as good as it gets. My question is: What is a basic physics general definition of a 'potential'? Is there a 'formal' definition?

Tags: potential, terminology, potential-energy, field-theory, definition

Comment: "Potential" means "energy per {something}". – DanielSank Apr 5 '18 at 18:50

Answer: Physicists use the word "potential" in different ways in different contexts, so there is no completely rigorous and general definition. There is a unifying idea, but unfortunately it's abstract enough that you probably won't understand all the jargon and concepts without some advanced physics training. The general idea is this: consider a physical degree of freedom $x$, whose space of possible values forms some manifold $M$. Then a "potential" for $x$ is a field $V(x)$ defined on $M$ such that some kind of first derivative $V'(x_0)$ tells you how $x$ gets "pushed" if it takes on the value $x_0 \in M$. When you first learn about potentials, it's almost always the case that the degree of freedom is the position ${\bf x}$ of a classical point particle, the manifold $M$ is physical space $\mathbb{R}^3$, the potential $V({\bf x})$ is a scalar potential energy field, and the "some kind of first derivative" is the negative gradient operator $-{\bf \nabla}$. In this case the potential $V(x)$ is simply a convenient way of encoding a position-dependent conservative force field ${\bf F}({\bf x}) = -{\bf \nabla}V({\bf x})$. But as you get to more advanced applications, any of these can be generalized. To give a few examples:
- Rather than a single particle's position ${\bf x}$, the physical degree of freedom can be a field $\varphi(x)$.
- Rather than physical space $\mathbb{R}^3$, the manifold $M$ can be a proper subset, e.g. for a particle confined to the surface of a sphere. Even more abstractly, it doesn't need to be any kind of physical space at all; in field theory, the manifold is the set of values that the field can take on, which (regardless of the number of spatial dimensions) could be $\mathbb{R}$ for a scalar field, $\mathbb{R}^n$ for a vector field, or an even more abstract "spinor space".
- Rather than a scalar field, the potential $V(x)$ could be a vector field (as in the case of magnetism).
- Rather than the (negative) gradient operator, the "some kind of first derivative" could be the (negative) ordinary derivative (as in scalar field theory), or the vector curl (as in magnetism).
Rather than a force, the "push" could be a force normalized by some suitable physical property of the degree of freedom, as in the electric field (force per unit charge) or gravitational field (force per unit mass, i.e. acceleration). More abstractly, it could be the generalized force that appears in the Euler-Lagrange equation in the Lagrangian formalism for particles, or the even more abstract one that appears in the Lagrangian formalism for fields. tparkertparker Maybe the use of the word "potential" in physics could be misleading, because I think it has the following origin: the "potential energy" is the part of the energy depending only on coordinates, and not on on momenta (or velocities, or derivatives); in this sense the electric scalar potential is almost a potential energy, unless to multiplicate it by the charge (for an elementary charge) the electric scalar potential is also a "primitive" of the electric field, and in this sense the vector potential is a potential: a primitive of the magnetic field; it also is in some sense a potential energy, but in a more complicated fashion I suppose that the answer you're searching is that a potential is a primitive of a field, where the field is a physical observable quantity function of space-time point. But the examples you cite are "first type" type examples: they are the energy part that depends on positions, and that generates "the forces" (so they are potential energies), while scalar and vector potentials are primitives of a field (the EM one). AnnibaleAnnibale Not the answer you're looking for? Browse other questions tagged potential terminology potential-energy field-theory definition or ask your own question. Could someone explain what is a potential? Why physically do things in general tend to move toward a lower potential value in a potential field? What is a good analogy for electric potential? Where does the Pauli Repulsive Force come from that counteracts the attraction between atoms and ions? Definition of quantum anharmonicity How should I think of a liquid in terms of interatomic potential and molecular speed? Electric potential in a wire What is the meaning of electric potential produced by multiple number of point charges Do force fields come from potential fields, or do potential come from forces? What does existence mean in physics? What does it mean for a potential not to have a Fourier transform? Is potential instantaneous, unlike fields which take time to alter depending on their mediators?
Formal/rigorous treatment of (im)predicativity/predicativism

There are several places on the web where one may find quite intuitively understandable accounts of (im)predicativity; here on MO I found two questions with very good detailed answers (Predicative definition and Impredicativity). Still I must confess I do not understand the concept well enough. All I've seen is a verbal explanation with a bunch of very clear examples. And being used to mathematics, I feel uncertain about it until I have some formally defined entity, preferably some mathematical model of its behavior.

For example, I don't know whether there is a definition of predicativity which is sufficiently formal so that, given a formula in any language whatsoever, one would be able to tell whether it is predicative or not. I don't even know whether it makes sense to speak about predicativity of a formula, since I've only seen discussions of (im)predicative definitions.

Seemingly predicativism must be closely related to constructivism, and again I could not find descriptions of the precise relationship between these two. One of the things confusing me here is that, e.g., in a programming language one might have a perfectly correct self-referential construction of a datatype, so this seemingly will produce an example of a constructive impredicative definition.

Also I have a vague feeling that predicativity must be somehow related to induction, in particular that any inductive definition must be predicative. Does this make sense, and if yes, is it correct? What about coinduction, is it related?

So to summarize, are there texts addressing these and similar questions from a purely mathematical viewpoint? In particular, texts with a systematic purely formal treatment of (im)predicativity? Ideal would be some mathematical (say, algebraic) structure which models the behaviour of predicative vs. impredicative whatevers. And let me add that although I've tagged this as reference request, I would also be grateful for on-the-spot explanations without any references.

reference-request lo.logic mathematical-philosophy type-theory constructive-mathematics

– მამუკა ჯიბლაძე

I am nowhere close to expert, but have you consulted papers by Solomon Feferman? Possibly the first two papers listed in the references here math.stanford.edu/~feferman/papers/ResponseToHellman.pdf would be relevant. As you might already know, there is not universal agreement on what predicative mathematics means; for example MO user Nik Weaver has argued vigorously against the Feferman-Schuette analysis. Hopefully he or another expert will show up here to address your question (which is excellent by the way). – Todd Trimble♦ May 10 '14 at 12:06

@Todd Thank you! I've seen Feferman's entry in the "Handbook of the Philosophy of Mathematics and Logic" but not the papers, will try to get them too. What's in the Handbook is an example of what I said - very well written, understandable, with lots of examples, but for a mathematician like me way too informal to make me feel confident... – მამუკა ჯიბლაძე May 10 '14 at 13:41

I had expected to see Per Martin-Lof mentioned here rather than Sol Feferman. Are different meanings of (im)predicativity being discussed here?
– Paul Taylor May 11 '14 at 8:41

@Paul Definitely, if there are different meanings I want to hear them all :) – მამუკა ჯიბლაძე May 11 '14 at 9:01

I'm reminded of the multilingual comments in this thread: mathoverflow.net/a/16957/2926 Google translate is fun because the results are sometimes pretty funny. When I applied it to Georges Elencwajg's comment in Russian, it began "Expensive colleges, ..." (pretty sure it was supposed to be "Dear colleagues, ..."). – Todd Trimble♦ May 11 '14 at 21:34

Solomon Feferman's papers provide formal systems for predicativity, most recently here. Other papers on his website and in his book In the Light of Logic have other expositions. These systems are predicative either by virtue of their ordinal analysis, or by virtue of being conservative over PA. The subtypes $\{x\in T:\phi\}$ in these systems are the focal points for predicative restrictions. Either you cannot form subtypes using $\phi$'s which quantify over types, or you can form those subtypes but don't have the axioms to prove much about them. For examples, see the development of analysis in the cited paper. It proves the least upper bound principle for sequences of real numbers, but not for sets. The resulting development may be your best source for intuitions about predicativity which are backed up by a formal system.

– Matt F.

Thank you! I decided to accept this one - your link certainly contains a rigorous description of a certain formal system and its distinguishing properties. I have to study it carefully before I can decide whether it is understandable for me :D – მამუკა ჯიბლაძე May 14 '14 at 6:50

I have a brief survey on predicativism here. But it may be more of the kind of "verbal" explanation that you've been unsatisfied with. Maybe proof theoretic ordinals could provide the kind of rigorous account that you want. Are you familiar with this subject? The Wikipedia article might be a good place to start. The rough idea is that, given a formal system $S$ that interprets some minimal amount of number theory, we look at the recursive well-orderings of the natural numbers that can be proven to be well-orderings in $S$. The supremum of the corresponding ordinals is a countable ordinal which provides a basic measure of the deductive strength of $S$. The relevance to your question is that, broadly speaking, systems with sufficiently small proof theoretic ordinal will be considered predicatively acceptable, while those whose proof theoretic ordinal is too large will not. As Todd alluded to in his comment, exactly where to draw the line, or whether there is an exact line to be drawn, has been controversial. However, there is no disputing that predicativism (in the historically primary sense that accepts countable constructions) sanctions Peano arithmetic, whose proof theoretic ordinal is $\epsilon_0$, and other systems in that neighborhood. Getting much beyond that point takes some work. I have argued here that predicatively acceptable constructions can get up to the small Veblen ordinal.

– Nik Weaver

Thank you for the informative answer! You are right, your exposition is very clear but I am looking for some more formal approach.
Let me also try to read some of the intriguing references in your links; I will then (hopefully :)) come back with further comments/questions. – მამუკა ჯიბლაძე May 11 '14 at 5:37

Disclaimer: I'm just somebody who also tried to understand what is meant by predicativism, and I have no other qualification for answering this question than having "browsed" some "writings" vaguely related to it. I also use the words "predicative" and "impredicative" myself sometimes. While checking some papers by Randall Holmes, Adrian Richard David Mathias and Thomas Forster again while writing this answer, I noticed that I should not have used the word predicative to describe the simple theory of types TST and stratified formulas. They only conform to the weaker position of Frank P. Ramsey and Rudolf Carnap, who accepted the ban on explicit circularity, but argued against the ban on circular quantification.

What are impredicative definitions?

In higher order logic, we want to turn predicates over "objects of rank $n$" into "objects of rank $n+1$". Henkin semantics has comprehension axioms ensuring the existence of objects corresponding to certain predicates. If a predicate over "objects of rank $n$" involves quantification over objects of rank $m>n$, then this predicate is defined impredicatively. By the predicative comprehension axiom scheme, one typically means the axiom scheme which ensures the existence of objects corresponding to all non-impredicative predicates. In the context of second order logic, the impredicative comprehension axiom scheme allows quantification over both first and second order variables.

It might make sense to distinguish between the philosophical position 'predicativism given the natural numbers', impredicative definitions (of "collections") and more general positions of 'predicativism' (related to ordinal analysis). Peter Smith argues convincingly that 'predicativism given the natural numbers' is a conceptually stable position that only amounts to accepting full first-order Peano Arithmetic and its (conservative) predicative extension $\mathsf{ACA}_0$. Peter Smith explicitly refers to Weyl's position from "Das Kontinuum" here. This position should not be confused with other uses of 'predicative reasoning (given the natural numbers)'. Nothing is said against these positions, but it becomes clear that one can accept Weyl's position without in any way diminishing the case for not accepting stronger systems like $\mathsf{ACA}$.

There are also typical impredicative principles, like limitation of size or the unrestricted axiom schema of replacement. Let's say that these principles drastically increase the proof-theoretic consistency strength of the corresponding set theories. Hence, from the point of view of ordinal analysis, the distance to the "predicatively acceptable" systems is drastically increased, and so it makes sense to consider these as impredicative principles.

– Thomas Klimpel

I hadn't seen this paper by Smith. It's really nice! He is sophisticated, clever, witty, and in my opinion, absolutely spot-on. I wish I could write like that. – Nik Weaver May 12 '14 at 3:21

However, I think you have slightly misinterpreted him (at least I hope so). I don't see anything in the paper about predicativism only amounting to accepting conservative extensions of PA. Rather, when he talks about "going beyond ACA${}_0$" I think he means "passing to full second order arithmetic and beyond".
– Nik Weaver May 12 '14 at 3:23

At any rate, there's nothing in the paper to suggest that a predicativist given the natural numbers couldn't go beyond PA to the extent of accepting PA + Con(PA), for example. The whole business of ordinal analysis is about this kind of relatively modest extension. – Nik Weaver May 12 '14 at 3:24

@NikWeaver, see the end of page 3. Smith's primary argument is that if you are a realist about the mathematics which is indispensable for science, then you are committed to ACA$_0$. Smith would probably argue that this does not commit you to Con(PA), since that is not needed for science. I don't like the whole argument about indispensability and commitment myself, but Smith endorses it as a meaningful argument, and Weyl and Feferman have done much the same. – Matt F. May 12 '14 at 22:28

E.g. Weyl at the very end of Das Kontinuum: "um exakte Wissenschaft solcher Gegenstandsgebiete zu ermöglichen, in denen Kontinua eine Rolle spielen" (to make possible an exact science of those subject areas in which continua play a role). archive.org/stream/daskontinuumkrit00weyluoft/… – Matt F. May 12 '14 at 22:40
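To make the contrast drawn in the answers above concrete, here is a schematic pair of comprehension instances in second-order arithmetic; these are standard textbook forms, not taken from any of the cited papers. Arithmetical (predicative) comprehension asserts, for $\varphi$ containing no set quantifiers,

$$\exists X\,\forall n\,\bigl(n\in X \leftrightarrow \varphi(n)\bigr),$$

so the set $X$ is carved out by a condition referring only to numbers and previously given sets. An impredicative ($\Pi^1_1$) instance such as

$$\exists X\,\forall n\,\bigl(n\in X \leftrightarrow \forall Y\,\psi(n,Y)\bigr)$$

defines $X$ by quantifying over all sets of naturals, $X$ itself among them. The classical informal example is the least upper bound: $\sup A$ is defined as the least member of $\{u\in\mathbb{R} : u \text{ is an upper bound of } A\}$, a definition that quantifies over all reals, including $\sup A$ itself.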
Comparison of the acute phase protein and antioxidant responses in dogs vaccinated against canine monocytic ehrlichiosis and naive-challenged dogs Nir Rudoler1, Shimon Harrus1, Silvia Martinez-Subiela2, Asta Tvarijonaviciute2, Michael van Straten1, Jose J Cerón2 & Gad Baneth1 Canine monocytic ehrlichiosis (CME) is a tick-borne disease with a global distribution, caused by Ehrlichia canis. The inflammatory response to E. canis infection includes changes in certain acute phase proteins (APP) and in biomarkers of the oxidative status. APP responses are considered part of the innate immune response to CME. The aim of this study was to evaluate the APP and oxidative marker responses in dogs vaccinated against CME with an attenuated vaccine and subsequently challenged with a wild E. canis strain. The study included 3 groups of 4 beagle dogs. Group 1 dogs were inoculated subcutaneously with an attenuated E. canis vaccine on day 0, and again on day 213. Group 2 initially served as controls for group 1 during the vaccination phase and then vaccinated once on day 213. Group 3 consisted of naïve dogs which constituted the control group for the challenge phase. All 12 dogs were infected intravenously with a wild strain of E. canis on day 428 of the study. APP levels were serially measured during two periods: days 0–38 post-vaccination (groups 1 and 2) and days 0–39 post-challenge (groups 1, 2, 3). Changes in C-reactive protein (CRP), serum amyloid A (SAA), haptoglobin, albumin, paraoxonase-1 (PON-1) and total antioxidant capacity (TAC) were of significantly smaller magnitude in vaccinated dogs and appeared later on a time scale compared to unvaccinated dogs challenged with a wild strain. Alterations in the level of APP during the vaccination phase of the study were of lower extent compared to those in the challenged unvaccinated dogs during the post-challenge phase. Positive APP levels correlated positively with the rickettsial load, body temperature and negatively with the thrombocyte counts (p < 0.05). Vaccination with an attenuated E. canis strain and challenge with a wild strain resulted in considerably reduced responses of positive and negative APP, and oxidative biomarker responses in vaccinated compared to unvaccinated dogs, reflecting a milder innate inflammatory response conferred by protection of the vaccine. Canine monocytic ehrlichiosis (CME) is an important canine disease of worldwide distribution. The etiological agent of CME is the obligate intracellular rickettsia Ehrlichia canis, a tick-borne bacterium that causes a multisystemic disease that typically induces severe bleeding tendencies due to thrombocytopenia and thrombocytopathy. The acute disease is characterized by high fever, depression, lethargy, anorexia, lymphadenomegaly, splenomegaly, and hemorrhages. Acutely infected dogs may recover or remain infected sub-clinically and eventually develop chronic CME. Dogs that develop the chronic form of the disease suffer from bone marrow suppression and decreased hematopoiesis with clinical signs similar to those in the acute phase with a greater severity [1,2]. The pathogenesis of CME which includes the induction of immune-mediated phenomena such as immune complex formation, anti-thrombocyte antibodies and bone marrow suppression remains to be elucidated [1-3]. One important aspect of the acute phase of CME is the alteration in the production of certain plasma proteins including the acute phase proteins (APP) that participate in the inflammatory response to E. canis infection [4,5]. 
The APP are considered to be non-specific innate immune components involved in the restoration of homeostasis and restraint of microbial growth before the host develops acquired immunity to an external challenge [6,7]. They consist of "positive" and "negative" proteins that show an increase or decrease in level, respectively, after an inflammatory stimulus. The positive APP include C-reactive protein (CRP), which plays important roles in protection against infection, clearance of damaged tissue, prevention of auto-immunization and regulation of the inflammatory response [8]. Other positive APP include haptoglobin (Hp), which binds free hemoglobin, inhibits its oxidative activity and antagonizes its pro-inflammatory activity [9], and serum amyloid A (SAA), whose roles include detoxification of endotoxins, inhibition of lymphocyte and endothelial cell proliferation, inhibition of platelet aggregation, and inhibition of T-lymphocyte adhesion to extracellular matrix proteins [10]. The negative APP include albumin, the most abundant constitutive plasma protein, which serves as a source of nutrients and a regulator of osmotic pressure [11]. Anti-oxidants grouped under the term Total Antioxidant Capacity (TAC) defend cells and tissues from harmful oxidative damage [12], whereas paraoxonase-1 (PON-1) is an important enzyme involved in lipid metabolism which is down-regulated during oxidative stress [13]. The main objective of this study was to evaluate the APP and oxidative responses in dogs vaccinated against CME with a live attenuated vaccine and subsequently challenged with a wild isolate of the bacterium, and compare them to the responses in non-vaccinated dogs challenged with infection. In addition, correlations between the APP and oxidative responses, and the hematological changes and blood bacterial loads, were evaluated. As the APP and oxidative responses have rarely been studied in response to vaccination in dogs, the association between these responses and the protection conferred by vaccination during challenge is of major interest.

Experimental infection of dogs with Ehrlichia canis

Twelve laboratory-bred, 12- to 24-month-old female beagle dogs were used in this study, as previously described in a publication on the efficacy of this vaccine [14]. The dogs were acclimatized for 4–5 weeks before the initiation of the study and divided into 3 groups of 4 dogs each (Table 1). Group 1 dogs were initially inoculated subcutaneously (SQ) with 4.8 × 10^9 attenuated E. canis strain 611A bacteria (the vaccine strain) on day 0, and again on day 213 with 9.6 × 10^9 of the E. canis vaccine strain bacteria SQ. Group 2 dogs were initially inoculated SQ with ~1.2 × 10^6 uninfected DH-82 cells (day 0) as control for group 1 and then with 9.6 × 10^9 vaccine strain bacteria on day 213. The third group (group 3) consisted of naïve dogs which constituted the control group for the challenge study, when both groups 1 and 2 were vaccinated. They joined the study on day 393, were acclimatized for 12 days, and were then subcutaneously inoculated with ~1.2 × 10^6 uninfected DH-82 cells on day 405. Twenty-three days later (day 428), all 12 dogs were intravenously inoculated with 6 ml of E. canis infected blood containing 6 × 10^7 E. canis wild strain bacteria, drawn from a dog with acute clinical disease [14]. Quantitation of the rickettsial load both in the cultures and in the blood was determined by quantitative real-time PCR (qPCR) as described below. The E.
canis infected blood was tested microscopically by stained blood smear examination and no other blood pathogens could be detected. It was also screened molecularly for Hepatozoon canis DNA using primers Hep-F and Hep-R [15], for Babesia spp. DNA using the Piro-A and Piro-B primers [16], and in addition with primers 107F and 299R targeted at the ompA gene fragment for spotted fever rickettsiae [17]. The E. canis infected blood was negative by all these PCR assays. Table 1 The different groups of dogs and their function during the two phases of the study Monitoring of the dogs included daily inspection, physical examination at least twice weekly and a weekly bodyweight recording. Five ml of blood were drawn from each dog in EDTA and serum tubes at least once weekly and complete blood count analysis was carried out using the ADVIA 120® Hematology system (Bayer, Germany). Serum was separated from whole blood and frozen at −80°C until used for APP and oxidative biomarkers measurements. The study was carried out according to the Hebrew University guidelines for animal experimentation and was approved by the Institutional Animal Care and Use Committee (approval numbers MD-09-11937-4 and MD-12869-4). Azithromycin (Azithromycin, 200 mg/5 ml, Teva, Israel) treatment was administered to all 4 dogs in group 3 in an effort to test for its efficacy in CME. It was initiated on day 15 post-challenge (day 443) when all four dogs presented fever, anorexia, lethargy and thrombocytopenia. The planned treatment protocol was 7 mg/kg, PO q 24 hrs for 5 days as a loading dose followed by the same dose q 72 hrs for additional 15 days [18]. The initial loading dose was administered for 4 days. However, it was discontinued due to severe clinical deterioration of 2 of the 4 dogs with no improvement of any treated dog. At this stage (day 447), since azithromycin was not found to be effective it was replaced by doxycycline (10 mg/kg, PO, once daily) which was administered for an additional 21 days [14]. No medical treatment was needed for the vaccinated dogs in groups 1 and 2. DNA was extracted using a commercial kit (Illustra blood genomicPrep mini spin kit, GE health care, UK), following the manufacturer's instructions. A quantitative estimation of the E. canis rickettsial load was performed by qPCR using the Rotor-Gene 6000 Real-Time PCR analyzer (Corbett life sciences, Australia) and the E. canis-16S plasmid as previously described [19]. Standard curve was designed using decimal dilutions of the E. canis-16S plasmid. The primers used to target the 16S rRNA gene were the E. canis 16S forward TCGCTATTAGATGAGCCTACGT and the E. canis 16S reverse GAGTCTGGACCGTATCTCAGT [14]. Acute phase protein measurement The concentration of CRP was determined with a solid sandwich immunoassay (Tridelta Phase range canine CRP kit; Tridelta development Ltd., Bray, Ireland). The final absorbance of the samples was measured in a microtitre plate at 450 nm using 630 nm as the reference (Powerwave XS, Biotek instruments, Carson City, NV, USA). Hp was determined using a hemoglobin-binding method (Tridelta phase, Tridelta Development Ltd., Bray Ireland) in a biochemistry autoanalyzer (Cobas Mira Plus, ABX Diagnostics, Montpellier, France). SAA concentration was measured using a sandwich immunoassay (Tridelta Phase range SAA assay, Tridelta development Ltd., Bray, Ireland). Final absorbance of the samples was measured by use of microtitre plate reader (Powerwave XS, Biotek instruments, Carson City, NV, USA). 
Albumin concentration was determined by the bromocresol green dye-binding method using a commercial kit [20]. TAC was measured using a colorimetric method developed by Erel [21] and previously used in dogs by Camkerten and others [22] and Tvarijonaviciute and others [23]. Serum PON-1 activity was determined by measuring arylesterase activity with p-nitrophenyl acetate as substrate, following a previously described method for use in dogs [23]. PON-1, TAC and albumin were measured in serum on an automated biochemistry analyzer (Olympus AU600 Automatic Chemistry Analyzer, Olympus Europe GmbH, Hamburg, Germany). All APP assays had been previously validated in the reference laboratory and all samples were analyzed in the same analytical run to avoid the high between-run imprecision previously reported for CRP and SAA [24]. The analysis focused on two short phases of the experiment: the post-vaccination and challenge phases. The APP levels and the E. canis rickettsial loads analyzed in this study were measured and compared to hematologic findings from dogs in groups 1 and 2 during 38 days post-vaccination (post-vaccinal phase), when group 1 was vaccinated and group 2 served as its control, and during 39 days post-challenge (challenge phase), when vaccinated groups 1 and 2 were compared to unvaccinated group 3 and to each other (Table 1). The post-vaccinal period corresponds to days 0 to 38 of the study, and the challenge phase from days 428 to 467 of the study. All analyses were done using SAS 9.3 software. p values <0.05 were considered statistically significant.

APP and oxidative marker levels

A marginal model was used for this analysis. Treatment group and sample day (i.e. time effect) entered the model as fixed effects, while repeated measurements within dogs were dealt with by adding a complex error term which included a correlation matrix to account for the repeated measurements of the same dog. The correlation matrix used was auto-regressive (AR(1)). The model we used was:

$$ Y = \mathrm{Time}\ (6\ \text{index variables}) + \mathrm{TRT}\ (2\text{–}3\ \text{index variables}) + \mathrm{Time} \times \mathrm{TRT} + e, $$

where Y is the analyzed APP, TRT is the treatment group, and e is the complex error term representing the within-dog correlation of blood sample results and the residual error. The results reported are least squares means and are referred to in the model results section simply as means.

Correlation between APP and oxidative markers and clinical and hematological parameters

As the number of observations was small, correlation between APP and oxidative markers and clinical and hematological parameters was estimated using the Spearman correlation coefficient (r). The coefficient varies in absolute value between 0, indicating no statistical dependence between two variables, and 1, indicating total statistical dependence between the two variables, with the sign indicating the direction of the association.

Clinical and hematological findings

No clinical signs were observed in the post-vaccinal phase. A reduction in the number of thrombocytes was the only hematological parameter that was altered during this phase. No clinical signs were observed among the vaccinated dogs. Following the challenge with the wild virulent E. canis strain, 3 dogs from the vaccinated groups developed transient mild to moderate fever. In addition, thrombocytopenia was detected among all vaccinated dogs, and platelet counts returned to the normal reference range without therapeutic intervention.
In contrast, the control dogs developed severe clinical signs including lethargy, anorexia and persistent fever. Furthermore, 2 of the control dogs developed life threatening clinical disease with severe hypothermia, lethargy and anorexia. The clinical signs and thrombocytopenia experienced by the control group were reversed only after initiation of doxycycline treatment. Serum levels of APP The concentrations of all APP during the two study phases are shown in Additional file 1: Table S1. Table 2 presents the peak APP and oxidative marker concentrations and the ratios between peaks and baseline levels within the groups. Table 2 Changes (least square means values) in APP and oxidative markers and change rates detected during post-vaccination and post-challenge periods Model results Post-vaccination On day 24 post-vaccination, CRP levels in group 1 were higher by 33.87 μg/mL than those in group 2 (p = 0.008). In groups 1 and 2, CRP levels were 37.62 (p = 0.005) and 57.03 (p = 0.005) μg/mL greater on days 12 and 24, respectively, when compared to pre-vaccination levels. Post-challenge Following challenge with the wild virulent strain, CRP levels on day 14 post-challenge in group 3 were 184.13 μg/mL greater than in group 1 (p = 0.003) (Figure 1). In all groups, CRP values were on average 107.32 μg/mL greater on day 22 post-challenge when compared to pre-challenge values (p = 0.002). No significant differences were detected between group 1 and group 2 during this phase. CRP kinetics in vaccinated and unvaccinated dog groups during the post-challenge phase. Least square means sera level of C-reactive protein (CRP) in two groups of vaccinated dogs (group 1 and 2) and a group of control unvaccinated dogs (group 3) following challenge with a wild strain of E. canis. *Indicates significant difference between group 3 and group 1 (p = 0.003). No significant differences were found between groups 1 and 2 during the post-vaccination phase. Both groups had significantly higher mean Hp levels on day 14 and 22 post-vaccination compared to day 0 pre-vaccination (1.67 g/L, p = 0.050, and 2.83 g/L, p = 0.006, respectively). On day 14, Hp levels in group 3 were 6.12 g/L greater than in group 1 (p = 0.007). On day 22 post-challenge, mean Hp levels were 3.28 g/L greater in all groups, when compared to pre-challenge levels (p = 0.040). No differences between groups 1 and 2 were found. Levels of SAA were 17.61 μg/mL greater in group 1 compared to group 2 on day 24 post-vaccination (p = 0.003). In both groups, mean SAA values were 18.77 μg/mL greater on day 24 post-vaccination compared to pre-vaccination values (p < 0.001). On day 14, SAA values in group 3 were 144.91 μg/mL greater than in group 1 (p < 0.001) (Figure 2). In addition, mean SAA levels on day 22 in all groups were 59.34 μg/mL greater than pre-challenge levels (p < 0.001). No differences between groups 1 and 2 were found. SAA kinetics in vaccinated and unvaccinated dog groups during the post-challenge phase. Least square means sera level of serum amyloid A (SAA) in vaccinated dogs (group 1 and 2) and control unvaccinated dogs (group 3) following challenge with a wild strain of E. canis. *Indicates significant difference between group 3 and group 1 (p < 0.001). Mean albumin levels in group 2 were 1.42 g/dL greater on day 6 post-vaccination, when compared to day 0 (p = 0.01). Significantly lower mean levels of albumin were detected in groups 1 and 2 on day 6 post-vaccination. The mean reduction was by 1.14 g/dL (p = 0.01). 
A similar trend was observed on day 29 post-vaccination in which a mean reduction of 0.74 g/dL was detected (p = 0.05) compared to the levels prior to vaccination. On day 14, mean albumin levels in group 3 were 1.3 g/dL lower than in group 1 (p < 0.001). Similarly, mean albumin levels were 1.3 g/dL lower (p = 0.002) and 0.72 g/dL lower (p = 0.02) on day 22 and 29 post-challenge, respectively (Figure 3). No differences between group 1 and group 2 were found. Albumin kinetics in vaccinated and unvaccinated dog groups during the post-challenge phase. Least square means sera level of albumin in vaccinated dogs (group 1 and 2) and control unvaccinated dogs (group 3) following challenge with a wild strain of E. canis. *Indicates significant difference between group 3 and group 1 (p ≤ 0.02). Both groups 1 and 2 had significantly lower mean levels of TAC during the post-vaccination period. Compared to day 0, values on days 6, 13, 24, 31 and 38 were 0.15 mmol/L (p = 0.008), 0.24 mmol/L (p = 0.004), 0.2 mmol/L (p = 0.004), 0.27 mmol/L (p = 0.004) and 0.25 mmol/L (p = 0.009) lower, respectively. During most of the post-challenge phase, TAC levels in all groups were lower than those in the pre-challenge phase. On days 8, 14, 22 and 29 post-challenge, values were 0.14 mmol/L (p = 0.01), 0.18 mmol/L (p = 0.002), 0.16 mmol/L, (p = 0.005) and 0.14 mmol/L (p = 0.01) lower, respectively. No differences were detected between groups 1 and 2 during this period. PON-1 PON-1 levels in group 2 were 0.79 IU/mL greater on day 6 post-vaccination (p = 0.02) and 0.8 IU/mL greater on day 31 post-vaccination (p = 0.03), compared to those in group 1. On day 31 post-vaccination, PON-1 values were 0.73 IU/mL lower than those found in the pre-vaccination period (p = 0.004). Following challenge, all groups showed a decrease in the mean level of PON-1 from day 14 until day 39 post-challenge (day 14, 0.87 IU/mL, p = 0.01; day 22, 1.51 IU/mL, p < 0.001; day 29, 1.19 IU/mL, p = 0.001; day 39, 0.97 IU/mL, p =0.005). No significant differences in PON-1 levels were recorded between all groups throughout the challenge phase. Correlations between APP levels, clinical and hematological parameters Correlation matrices for each phase are provided in Additional file 2: Table S2, Additional file 3: Table S3 and Additional file 4: Table S4 Post-vaccination phase The rickettsial load was found to be positively correlated with the Hp levels (r = 0.66, p = 0.005), CRP levels (r = 0.76, p < 0.001), and SAA levels (r = 0.61, p = 0.001). SAA levels positively correlated with the dog's body temperatures (r = 0.52, p = 0.010). Negatively correlated variables in the post-vaccination phase included rickettsial load with albumin (r = −0.4, p = 0.054) and TAC levels (r = −0.4, p = 0.035), and CRP levels with number of thrombocytes (r = −0.65, p = 0.007). In group 2 dogs, which served as controls for group 1 during the post-vaccination phase, albumin and PON-1 levels (r = 0.57, p = 0.008), correlated positively. Post-challenge phase Since the results differed between the pre-treatment period i.e. the time period from the day of challenge to the beginning of treatment with doxycycline (day 0 to 19 post challenge) and the period thereafter (treatment period, day 19 to 39 post-challenge), these periods were analyzed separately. 
In the pre-treatment period, rickettsial load correlated positively with CRP and SAA levels (r = 0.87, p = 0.005; r = 0.74, p = 0.013, respectively), and albumin levels correlated negatively with body temperatures (r = −0.63, p = 0.034). In the treatment period, the rickettsial load was correlated positively with the CRP level (r = 0.88, p = 0.003) and body temperature (r = 0.62, p = 0.041). Negatively correlated variables during the post-treatment initiation period included the rickettsial load and thrombocyte numbers (r = −0.94, p < 0.001), and CRP levels and number of thrombocytes (r = −0.84, p = 0.003). Variables found to be positively correlated in the pre-treatment period included rickettsial load and temperature (r = 0.66, p = 0.018), CRP and SAA levels (r = 0.89, p = 0.001), albumin and PON-1 levels (r = 0.59, p = 0.043), and PON-1 and TAC levels (r = 0.97, p < 0.001). Negatively correlated variables during the pre-treatment period included the rickettsial load and albumin levels (r = −0.71, p = 0.009). In the treatment period, rickettsial load correlated positively with both CRP and SAA levels (r = 0.72, p = 0.007; r = 0.77, p = 0.003, respectively), and also with body temperature (r = 0.61, p = 0.031). CRP levels correlated positively with body temperature (r = 0.68, p = 0.013). Negatively correlated variables during the treatment period included the rickettsial load with PON-1 levels (r = −0.71, p = 0.008), thrombocyte numbers (r = −0.77, p = 0.003), and TAC levels (r = −0.74, p = 0.005); CRP levels and thrombocyte numbers (r = −0.73, p = 0.006), SAA and thrombocyte numbers (r = −0.67, p = 0.003), and PON-1 levels with body temperature (r = −0.79, p = 0.002). Variables found to be positively correlated in the pre-treatment period included: rickettsial load and CRP (r = 0.81, p = 0.004) and SAA levels (r = 0.82, p = 0.003); albumin level and thrombocyte numbers (r = 0.87, p = 0.002), and body temperature and CRP level (r = 0.57, p = 0.049). Negatively correlated variables during the pre-treatment period included the rickettsial load and thrombocyte numbers (r = −0.92, p = 0.008), rickettsial load and albumin level (r = −0.66, p = 0.035), CRP levels and thrombocyte numbers (r = −0.93, p = 0.002), SAA levels and thrombocyte numbers (r = −0.94, p = 0.001), and PON-1 levels and body temperature (r = −0.67, p = 0.021). During the treatment period, positive correlations were found between rickettsial load and CRP levels (r = 0.79, p = 0.010) as well as SAA level (r = 0.61, p = 0.056). In addition, Hp levels were correlated positively with the thrombocyte numbers (r = 0.7, p = 0.016). Variables found to be negatively correlated during the treatment period included rickettsial load and body temperature (r = −0.76, p = 0.005), and SAA and body temperature (r = −0.88, p = 0.007). Studies on the evaluation of innate inflammatory responses following vaccination trials are uncommon. Typically, parameters relating to the acquired immune responses such as antibody formation and lymphocyte proliferation are studied. In this study, the production of the APP, representing markers of innate inflammatory response, were shown to be considerably elevated or decreased, depending on their role as positive or negative APP, in challenged unvaccinated dogs, compared to challenged vaccinated dogs. Furthermore, the peak levels of various APP responses were not only smaller in vaccinated dogs but also appeared later on a time scale compared to challenged unvaccinated dogs. 
Changes in the concentrations of APP were noted also during the vaccination phase of the study as expected following vaccination with a live bacterial strain (group 1) and also following inoculation of uninfected DH-82 cells (group 2), however, these were of lower magnitude compared to most alterations in the challenged unvaccinated dogs. This provides important information on the dynamics of responses to vaccination with the attenuated E. canis strain, and the protection that it confers to vaccinated dogs in the perspective of the innate immune system responses. This study complements a previous study which focused on clinical, hematologic, serologic and bacterial load parameters and found up to 92 fold higher rickettsial load in unvaccinated challenged dogs compared to vaccinated dogs post-challenge, associated with severe disease versus no disease in the vaccinated dogs [14]. Due to the inoculation of dogs with blood from a naturally-infected dog presenting clinical E. canis infection, and although this blood was negative by PCR for Babesia spp. and H. canis and by blood smear microscopy for other parasites, the possibility that other pathogens were transmitted during inoculation cannot be ruled out. However, the inoculated vaccinated dogs did not develop disease with additional pathogens. CRP which is the most studied APP in dogs and frequently found indicative as an inflammatory marker in dogs, was markedly elevated in unvaccinated dogs (group 3) post-challenge. It reached the peak level recorded in the study, which was 50 times higher than the pre-challenge level, 14 days post-challenge compared to considerably lower peak levels in the vaccinated groups, reached more than a week later, on day 22. Although CRP levels increased post-vaccination in the vaccinated group, they only reached much lower levels. A similar pattern with earlier and higher increases in APP levels in the control group post-challenge and with generally lower levels of APP during vaccination was also found for SAA, whose level was 94 times higher than pre-challenge on day 14 post-challenge and for Hp. An inverse response was found for the negative APP Albumin which decreased after challenge in the control group and with a considerably lower magnitude in the vaccinated groups. These findings are in agreement with the presentation of the clinical disease that became severe in the challenged control group and was unapparent in the vaccinated groups. The strong positive correlation between the rickettsial load and positive APP such as CRP and SAA suggests that bacterial loads affected the production of these APP. Lower levels of these APP were associated with the low rickettsial load found in vaccinated dogs whereas considerably higher levels were demonstrated in challenged unvaccinated dogs with a high rickettsial load. Hence, CRP and SAA levels may serve as good indicators for the magnitude of infection with E. canis during the acute phase of the disease. Conversely, decreasing albumin levels were positively correlated with the decreasing thrombocyte counts during the challenge phase in the unvaccinated dogs. In addition, the albumin level was negatively correlated with the rickettsial load in the challenged unvaccinated dogs. These findings demonstrate that the negative acute phase protein, albumin, may serve as a diagnostic indicator for more severe disease whose level parallels changes in thrombocyte counts and E. canis bacterial loads in infected dogs. 
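The correlations discussed above were, per the Methods, pairwise Spearman tests between variables measured on the same sampling days. A minimal sketch of that step is shown below; the arrays are placeholder values chosen for illustration only, not data from this study.

```python
# Spearman correlations between serially measured variables (illustrative values only).
from scipy.stats import spearmanr

rickettsial_load = [0, 120, 3400, 9800, 2600, 400]   # e.g. 16S rRNA copies per reaction
crp = [5, 12, 160, 210, 90, 30]                      # CRP, ug/mL
platelets = [310, 280, 60, 45, 110, 240]             # platelets, 10^3 cells/uL

r, p = spearmanr(rickettsial_load, crp)
print(f"load vs CRP:       r = {r:.2f}, p = {p:.3f}")

r, p = spearmanr(rickettsial_load, platelets)
print(f"load vs platelets: r = {r:.2f}, p = {p:.3f}")
```

With real study data, a positive coefficient for load vs CRP and a negative one for load vs platelets would correspond to the pattern reported in the Results.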
Most positive APP correlated positively with other positive APP and negatively with negative APP and oxidative markers. These correlations were stronger in the unvaccinated challenged control dogs in comparison to the vaccinated dogs, as the magnitude of APP responses was generally considerably higher in the unvaccinated dogs that were not protected by the vaccine. No major differences were noted between the APP responses in groups 1 and 2 during the challenge phase. This was also in agreement with the clinical, hematologic and rickettsial load outcomes [14]. Therefore, from the standpoint of evaluating the response to vaccination as seen by APP production, there was no advantage to vaccinating twice versus vaccinating the dogs once prior to challenge with a wild strain of E. canis. A study on APP in naturally infected dogs [3], which compared dogs with non-myelosuppressed ehrlichiosis to dogs with myelosuppressive disease, presumed to have chronic infection, and control beagles, also found elevations in CRP, SAA and Hp and decreased albumin levels. These findings are essentially similar to our findings from the experimental vaccination study with regard to the APP which rise in E. canis infection. Interestingly, a higher proportion of dogs with CRP, SAA and Hp increases was found among naturally infected myelosuppressed dogs in comparison to non-myelosuppressed dogs, indicating that, probably, as CME becomes chronic and the clinical presentation worsens, production of these APP is further increased. In contrast, the albumin decrease in the naturally infected dogs did not differ between myelosuppressed and non-myelosuppressed dogs [3]. Other studies in dogs experimentally infected with CME have documented increases in CRP, Hp, acid glycoprotein, ceruloplasmin and transferrin, and decreases in albumin [4,5,25]. In a different canine vaccination and challenge study, canine parvovirus infection of vaccinated and unvaccinated control puppies resulted in clinical disease with virus shedding in the feces in the control dogs and an increase in the levels of SAA and α-1 acid glycoprotein (α-1 AG), whereas vaccinated dogs did not develop clinical disease and had lower levels of these APP [26]. These results are comparable to the results in our study on E. canis vaccination and challenge, although we evaluated a wider range of positive and negative APP. Different panels of APP have been studied in natural and experimental infection with various canine vector-borne diseases other than CME, including leishmaniosis caused by Leishmania infantum [27-31] and babesiosis with different species of Babesia [32-34]. These studies have generally been found helpful in evaluating the magnitude and severity of the disease as well as improvement in dog condition following effective treatment. PON-1 and TAC are inflammatory and metabolic indicators that have seen comparatively little use in veterinary medicine to date [23]. Our study is the first to evaluate these oxidative stress markers in E. canis infection. Both markers decreased during the vaccination and challenge phases; however, they could not distinguish clearly between control dogs with severe disease and clinically normal, vaccinated dogs. Further evaluation of the usefulness of PON-1 and TAC in E. canis infection, including the chronic disease stage, may be warranted. Vaccination with an attenuated E.
canis strain in an experimental CME infection resulted in considerably muted positive and negative APP responses compared to those found in challenged unvaccinated dogs, reflecting a milder innate inflammatory response conferred by protection of the vaccine. These milder responses correlated well with the absence of clinical disease and diminished rickettsial load found in the vaccinated dogs post-challenge. Harrus S, Waner T. Diagnosis of canine monocytotropic ehrlichiosis (Ehrlichia canis): an overview. Vet J. 2011;187:292–6. Harrus S, Waner T, Neer TM. Ehrlichia canis infection. In: Greene CE, editor. Infectious diseases of the dog and the cat. 4th ed. St. Louis: Elsevier; 2011. p. 227–38. Mylonakis ME, Ceron JJ, Leontides L, Siarkou VI, Tvarijonaviciute A, Koutinas AF, et al. Serum acute phase proteins as clinical phase indicators and outcome predictors in naturally occurring canine monocytic ehrlichiosis. J Vet Intern Med. 2011;25:811–7. Rikihisa Y, Yamamoto S, Kwak I, Igbal Z, Kociba G, Mott J, et al. C-reactive protein and alpha-1 acid glycoprotein levels in dogs infected with Ehrlichia canis. J Clin Microbiol. 1994;32:912–7. Munhoz TD, Faria JL, Vargas-Hernandez G, Fagliari JJ, Santana AE, Machado RZ, et al. Experimental Ehrlichia canis infection changes acute-phase proteins. Rev Bras Parasitol Vet. 2012;21:206–12. Gabay C, Kushner I. Acute-phase proteins and other systemic responses to inflammation. N Engl J Med. 1999;340:448–54. Murata H, Shimada N, Yoshioka M. Current research on acute phase proteins in veterinary diagnosis: an overview. Vet J. 2004;168:28–40. Mold C, Rodriguez W, Rodic-Polic B. Du-Clos TW: C-reactive protein mediates protection from lipopolysaccharide through interactions with Fc gamma R. J Immunol. 2002;169:7019–25. Levy AP, Asleh R, Blum S, Levy NS, Miller-Lotan R, Kalet-Litman S, et al. Haptoglobin: basic and clinical aspects. Antioxid Redox Signal. 2010;12:293–304. Urieli-Shoval S, Linke RP, Matzner Y. Expression and function of serum amyloid A, a major acute-phase protein, in normal and disease states. Curr Opin Hematol. 2000;7:64–9. Cray C, Zaias J, Altman AH. Acute phase response in animals a review. Comp Med. 2009;59:517–26. Ferrari CK. Effects of xenobiotics on total antioxidant capacity. Interdiscip Toxicol. 2012;5:117–22. Ng CJ, Shih DM, Hama SY, Villa N, Navab M, Reddy ST. The paraoxonase gene family and atherosclerosis. Free Radic Biol Med. 2005;38:153–63. Rudoler N, Baneth G, Eyal O, van Straten M, Harrus S. Evaluation of attenuated strain of Ehrlichia canis as a vaccine for canine monocytic ehrlichiosis. Vaccine. 2012;31:226–33. Inokuma H, Okuda M, Ohno K, Shimoda K, Onishi T. Analysis of 18S rRNA gene sequence of Hepatozoon detected in two Japanese dogs. Vet Parasitol. 2002;106:265–71. Olmeda AS, Armstrong PM, Rosenthal BM, Valladares B, del Castillo A, des Armas F, et al. A subtropical case of human babesiosis. Acta Trop. 1997;67:229–34. Kidd L, Maggi R, Diniz PP, Hegarty B, Tucker M, Breitschwerdt E. Evaluation of conventional and real-time PCR assays for detection and differentiation of spotted fever group rickettsia in dog blood. Vet Microbiol. 2008;129:294–303. Papich MG, Riviere JE. Chemotherapy and microbial diseases. In: Adams HR, editor. Veterinary pharmacology and therapeutics. 8th ed. Ames: Iowa State University Press; 2001. p. 868–97. Peleg O, Baneth G, Eyal O, Inbar J, Harrus S. Multiplex real-time qPCR for the detection of Ehrlichia canis and Babesia canis vogeli. Vet Parasitol. 2010;173:292–9. 
Tecles F, Caldin M, Zanella A, Membiela F, Tvarijonaviciute A, Subiela SM, et al. Serum acute phase protein concentrations in female dogs with mammary tumors. J Vet Diagn Invest. 2009;21:214–9. Erel O. A novel automated direct measurement method for total antioxidant capacity using a new generation, more stable ABTS radical cation. Clin Biochem. 2004;37:277–85. Camkerten I, Sahin T, Borazan G, Gokcen A, Erel O, Das A. Evaluation of blood oxidant/antioxidant balance in dogs with sarcoptic mange. Vet Parasitol. 2009;161:106–9. Tvarijonaviciute A, Tecles F, Caldin M, Tasca S, Ceron J. Validation of spectrophotometric assays for serum paraoxonase type-1 measurment in dogs. Am J Vet Res. 2012;73:34–41. Martinez-Subiela S, Ceron JJ. Effects of hemolysis, lipemia, hyperbilirubinemia, and anticoagulants in canine C-reactive protein, serum amyloidal, and ceruloplasmin assays. Can Vet J. 2005;46:625–9. Shimada T, Ishida Y, Shimizu M, Nomura M, Kawato K, Iguchi K, et al. Monitoring C-reactive protein in beagle dogs experimentally inoculated with Ehrlichia canis. Vet Res Commun. 2002;26:171–7. Yule TD, Roth MB, Dreier K, Johnson AF, Palmer-Densmore M, Simmons K, et al. Canine parvovirus vaccine elicits protection from the inflammatory and clinical consequences of the disease. Vaccine. 1997;15:720–9. Martinez-Subiela S, Tecles F, Eckersall PD, Ceron JJ. Serum concentrations of acute phase proteins in dogs with leishmaniasis. Vet Rec. 2002;150:241–4. Martinez-Subiela S, Bernal LJ, Ceron JJ. Serum concentrations of acute –phase proteins in dogs with leishmaniosis during short term treatment. Am J Vet Res. 2003;64:1021–6. Sasanelli M, Paradies P, de Capariis D, Greco B, De Palo P, Palmisano D, et al. Acute –phase proteins in dogs naturally infected with Leishmania infantum after long term therapy with allopurinol. Vet Res Commun. 2007;31 Suppl 1:335–8. Martinez-Subiela S, Strauss-Ayali D, Ceron JJ, Baneth G. Acute phase protein response in experimental canine leishmaniasis. Vet Parasitol. 2011;180:197–202. Martinez-Subiela S, Gracia-Martinez JD, Tvarijonaviciute A, Tecles F, Caldin M, Bernal LJ, et al. Urinary C reactive protein levels in dogs with leishmaniasis at different stages of renal damage. Res Vet Sci. 2013;95:924–9. Matijatko V, Mrljak V, Kis I, Kucer N, Forsek J, Zivicnjak T, et al. Evidence of an acute phase response in dogs naturally infected with Babesia canis. Vet Parasitol. 2007;144:242–50. Koster LS, Van Scchoor M, Goddard A, Thompson PN, Matjila PT, Kjelgaard-Hansen M. C-reactive protein in canine babesiosis caused by Baesia rossi and its association with outcome. J S Afr Vet Assoc. 2009;80:87–91. Baric Rafaj R, Kules J, Selanec J, Vrkic N, Zovko Z, Zupancic M, et al. Markers of coagulation activation, endothelial stimulation, and inflammation in dogs with babesiosis. J Vet Intern Med. 2013;27:1172–8. The authors thank Bayer Health Care - Animal Health Division for kindly supporting the publication of this manuscript in the framework of the 10th CVBD World Forum symposium. Koret School of Veterinary Medicine, Hebrew University, P.O. Box 12, Rehovot, 76100, Israel Nir Rudoler, Shimon Harrus, Michael van Straten & Gad Baneth Interdisciplinary Laboratory of Clinical Pathology, Interlab-UMU, Campus of Excellence Mare Nostrum, University of Murcia, 30100, Espinardo, Murcia, Spain Silvia Martinez-Subiela, Asta Tvarijonaviciute & Jose J Cerón Nir Rudoler Shimon Harrus Silvia Martinez-Subiela Asta Tvarijonaviciute Michael van Straten Jose J Cerón Gad Baneth Correspondence to Gad Baneth. 
NR, SH and GB planned the study, vaccinated and inoculated the dogs, performed PCR, participated in analyzing the data and in writing the manuscript. SMB, AT and JJC performed the APP analysis and revised the manuscript. MVS performed the statistical analysis and revised the manuscript. All authors read and approved the final version of the manuscript. APP and antioxidant values during the post-vaccination and post-challenge periods. Correlations among APP, antioxidant analytes and clinical parameters during the post-vaccinal phase. Correlations among APP, antioxidant analytes and clinical parameters during the pre-treatment phase following challenge. Correlations among APP, antioxidant analytes and clinical parameters during the treatment phase following challenge. Rudoler, N., Harrus, S., Martinez-Subiela, S. et al. Comparison of the acute phase protein and antioxidant responses in dogs vaccinated against canine monocytic ehrlichiosis and naive-challenged dogs. Parasites Vectors 8, 175 (2015). https://doi.org/10.1186/s13071-015-0798-1 Ehrlichia canis 611A strain 10th Symposium on Canine Vector-Borne Diseases
BMC Veterinary Research
Prediction of marbofloxacin dosage for the pig pneumonia pathogens Actinobacillus pleuropneumoniae and Pasteurella multocida by pharmacokinetic/pharmacodynamic modelling
Lucy Dorey (ORCID: orcid.org/0000-0003-3596-7919)1, Ludovic Pelligand1 & Peter Lees1
BMC Veterinary Research volume 13, Article number: 209 (2017)
Bacterial pneumonia in pigs occurs widely and requires antimicrobial therapy. It is commonly caused by the pathogens Actinobacillus pleuropneumoniae and Pasteurella multocida. Marbofloxacin is an antimicrobial drug of the fluoroquinolone class, licensed for use against these organisms in the pig. In recent years there have been major developments in dosage schedule design, based on integration and modelling of pharmacokinetic (PK) and pharmacodynamic (PD) data, with the objective of optimising efficacy and minimising the emergence of resistance. From in vitro time-kill curves in pig serum, PK/PD breakpoint area under the curve (AUC24h)/minimum inhibitory concentration (MIC) values were determined and used in conjunction with published PK, serum protein binding data and MIC distributions to predict dosages based on Monte Carlo simulation (MCS). For three levels of growth inhibition, bacteriostasis and 3 and 4 log10 reductions in bacterial count, mean AUC24h/MIC values were 20.9, 45.2 and 71.7 h, respectively, for P. multocida and 32.4, 48.7 and 55.5 h for A. pleuropneumoniae. Based on these breakpoint values, doses for each pathogen were predicted for several clinical scenarios: (1) bacteriostatic and bactericidal levels of kill; (2) 50 and 90% target attainment rates (TAR); and (3) single dosing and daily dosing at steady state. MCS for 90% TAR predicted single doses to achieve bacteriostatic and bactericidal actions over 48 h of 0.44 and 0.95 mg/kg (P. multocida) and 0.28 and 0.66 mg/kg (A. pleuropneumoniae). For daily doses at steady state, and 90% TAR bacteriostatic and bactericidal actions, dosages of 0.28 and 0.59 mg/kg (P. multocida) and 0.22 and 0.39 mg/kg (A. pleuropneumoniae) were required for pigs aged 12 weeks. Doses were also predicted for pigs aged 16 and 27 weeks. PK/PD modelling with MCS approaches to dose determination demonstrates the possibility of tailoring clinical dose rates to a range of bacterial kill end-points.
Marbofloxacin is a synthetic third-generation fluoroquinolone, developed for sole veterinary use. It has high bioavailability when administered to pigs by intramuscular injection [1]. It accumulates in the cytosol of macrophages, leucocytes, neutrophils, epithelial lining fluid and plasma [2, 3]. Marbofloxacin, as a lipid-soluble organic acid with low to moderate plasma protein binding, achieves good tissue penetration and a high volume of distribution [4]. Concentrations in the lung, liver and kidney exceed those in plasma. However, concentrations in the biophase, the pulmonary epithelial lining fluid in pigs with pneumonia, at steady state will be determined by the free drug concentration in plasma. This has been shown to be 50.6% of the total concentration in pigs [4]. Lees and Aliabadi [4] reported that marbofloxacin exerts prolonged post-antibiotic effects (PAE) and sub-minimum inhibitory concentration (sub-MIC) effects. It has a broad spectrum of antibacterial activity, is bactericidal and exerts a concentration-dependent killing action [5]. The antimicrobial spectrum includes Brucella spp. and Mycoplasma spp.
Marbofloxacin is licensed for treatment of pneumonia caused by the pig pneumonia pathogens, Actinobacillus pleuropneumoniae, Pasteurella multocida and Streptococcus suis [6]. Bacterial pneumonia in pigs is also caused by other organisms, including Bordetella bronchiseptica and Mycoplasma hyopneumoniae. Over the last two decades, there have been major advances in designing dosage schedules of antimicrobial drugs, based on integration and modelling of pharmacodynamic (PD) and pharmacokinetic (PK) data. These approaches have provided novel strategies for predicting drug dosages that optimise efficacy and minimise opportunities for the emergence and subsequent spread of resistance [6–15]. Optimising dosage may involve reducing doses which may be too high as well as increasing doses when they are too low. Many authors have proposed that, for fluoroquinolones as a group, the integrated PK/PD indices correlating with successful therapeutic outcome are maximum plasma concentration (Cmax)/MIC and area under plasma concentration-time curve (AUC24h)/MIC ratios [5, 7, 8, 11, 16, 17]. The PD parameter most commonly used in establishing the potency of antimicrobial drugs is MIC, the lowest concentration, based on two-fold dilutions, that inhibits visible bacterial growth after 24 h incubation under standard conditions (Clinical and Laboratory Standards Institute (CLSI) M31-A3) [18]. Most laboratories use the internationally recommended CLSI guidelines to ensure standardisation [13]. However, these guidelines have limitations regarding accuracy, because they involve the use of doubling dilutions, giving potential for up to 100% error on single isolate estimates. For many purposes, this is acceptable, and indeed is necessary, when large numbers of isolates are to be examined. However, it may not always be appropriate when MIC is correlated with PK data as a basis for PK/PD breakpoint determination. Therefore, based on methods previously described [19–21], five overlapping sets of two-fold dilutions were used in this study to reduce inaccuracy from up to 100% to no more than 20%. The mutant prevention concentration (MPC) was used as an indicator of potency in addition to MIC; it is defined as the lowest drug concentration required for preventing the growth of the least susceptible cells present in high density bacterial populations. A second consideration is that CLSI and European Committee on Antimicrobial Susceptibility Testing (EUCAST) methods of MIC determination require the use of broths formulated to optimize the growth of each species of bacteria. Therefore, there is almost universal use of internationally recognised guidelines, methods and standards for MIC determination. However, when the objective of potency determination is prediction of dosage for clinical efficacy, based on PK/PD breakpoints, conditions should be representative of in vivo pathological circumstances. Zeitlinger et al. [22] commented that "bacteria with appropriate and well-defined growth in the selected medium should be employed" and "in order to be able to extrapolate data from various models to in vivo situations, models should always attempt to mimic physiological conditions as closely as possible". Whilst serum is not identical to extravascular infection site fluids, it is likely to be closer in chemical composition to the biophase than broths, and indeed in immunological constituents also [22, 23]. A comparison of broth MICs with potency determined in biological fluids is therefore relevant to PK/PD breakpoint estimation. 
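To make the overlapping-dilution idea concrete, here is a minimal sketch (not taken from the paper) of how five staggered two-fold series combine into a finer concentration grid; the starting concentration and the 2^(1/5) stagger between series are illustrative assumptions, not the study's actual dilution scheme.

```python
# Five overlapping two-fold dilution series, each offset from the last,
# give a combined concentration grid much finer than a single doubling series.
base = 0.03                      # lowest concentration of the first series, ug/mL (assumed)
n_series, steps = 5, 6

series = [
    [base * (2 ** (k / n_series)) * (2 ** i) for i in range(steps)]
    for k in range(n_series)
]

grid = sorted(c for s in series for c in s)
# Consecutive points now differ by a factor of 2**(1/5) ~ 1.15 instead of 2,
# so a measured MIC overestimates the true value by far less than 100%.
print([round(c, 3) for c in grid[:8]])
```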
For some drugs and pathogens, calculation of a scaling factor to bridge between broth and serum MICs may be warranted [7, 15, 23, 24]. In this investigation MICs and time-kill data were generated for marbofloxacin in pig serum. They were thus determined with reasonable accuracy and in a biological matrix. With the objectives of: extending the therapeutic life of older antimicrobial drugs; ensuring their prudent use and; minimising the emergence of resistance, there have been many proposals to re-evaluate dose schedules that were set, in many instances, prior to the application of PK/PD breakpoints [8–11, 13–15, 19]. For example, Mouton et al. [9] have described the EUCAST approach to dosage re-evaluation. In summary, these authors have proposed that a sound approach to setting dose schedules is to link PK parameters and variables with appropriate indices of potency, applying the general equation for systemically acting drugs (Fig. 1) [19, 25, 26]. Formula for calculation of the daily antimicrobial drug dose based on PK and PD variables The aims of this study were: (1) to integrate published PK variables for the pig with MIC data for pig pathogens obtained in our laboratory in order to generate values of the three PK/PD parameters, Cmax/MIC, Cav/MIC and T > MIC, for A. pleuropneumoniae and P. multocida; (2) to model data from time-kill studies of A. pleuropneumoniae and P. multocida, using multiples of MIC over the range 0.25–8.0 MIC, in order to generate PK/PD breakpoint values of AUC24h/MIC for three levels of bacterial kill, bacteriostasis, bactericidal and 4log10 reduction in inoculum count; (3) to use published PK data and PK/PD breakpoints, with serum protein binding data and literature MIC distributions, in Monte Carlo simulations (MCS) to predict dose schedules required for: (a) bacteriostatic and bactericidal levels of kill; (b) for 50 and 90% Target Attainment rate (TAR); and (c) for single dosing and daily dosing at steady state. Twenty isolates each of A. pleuropneumoniae and P. multocida were obtained from EU cases of pig pneumonia. These were screened for ability to grow logarithmically in broths and pig serum. Of those exhibiting logarithmic growth in both matrices, MICs in broth were determined by microdilution using two-fold dilutions. From the sensitive isolates, six of each species were selected and MICs re-determined, using artificial broths (Cation Adjusted Mueller Hinton broth for P. multocida and Columbia broth for A. pleuropneumoniae) in accordance with CLSI guidelines, except that five sets of overlapping 2-fold serial dilutions of marbofloxacin were prepared, as described by Dorey et al. [27]. In addition, the guidelines were adapted to additionally use pig serum in place of broth to enable comparison of the two matrices. PK/PD breakpoint determination For the six isolates each of A. pleuropneumoniae and P. multocida, eight marbofloxacin concentrations, as multiples of MIC, ranging from 0.25 to 8xMIC, were used in time-kill studies over 24 h incubation periods. Determinations were made separately in serum and broth (CB for A. pleuropneumoniae and CAMHB for P. multocida), as previously described [27, 28]. Each test was repeated in triplicate for the six isolates of each species in both growth matrices. The time-kill curves established bacteriostatic, bactericidal and 4log10 reductions in count at 24 h; based on the sigmoidal Emax equation (Fig. 
2) the data were modelled to provide AUC24h/MIC breakpoint values for these three levels of growth inhibition: E = 0, bacteriostatic, that is 0log10 reduction in CFU/mL after 24 h incubation; E = −3, bactericidal, 3log10 reduction in CFU/mL; and E = −4, 4log10 reduction in bacterial count. The sigmoidal Emax equation used to model time-kill data by non-linear regression [27] Dose prediction Deterministic approach For dose prediction using the deterministic approach, mean PK values were obtained from Schneider et al. [1] and PK/PD breakpoint values were used from the present study, together with MIC90 values for P. multocida and A. pleuropneumoniae for marbofloxacin obtained from de Jong et al. [29] (Fig. 3). Marbofloxacin MIC distributions of P. multocida (n = 230) and A. pleuropneumoniae (n = 219) indicating frequency of MICs. MIC data were generated using CLSI guidelines by de Jong et al. [29]. All isolates were from European countries. As the PK/PD index that best predicts efficacy for marbofloxacin is the ratio AUC24h/MIC, the equation used to calculate dose was: $$ \mathrm{Dose}_{(\text{per day})}=\frac{Cl\times \frac{AUC_{24\mathrm{h}}}{MIC_{e}}\times MIC_{\text{distribution}}}{f_{u}\times F} $$ where Cl = body clearance per h, AUC(24h)/MICe (in h) = ratio of experimentally determined area under the serum concentration-time curve over 24 h to the MIC (MICe) of the experimental isolates – six per species – (i.e. the PK/PD breakpoint), MICdistribution = distribution of MICs from epidemiological literature [29], fu (from 0 to 1) = fraction of drug not bound to serum protein and F = bioavailability (from 0 to 1). Thus, calculation of dose depends on: (1) assessment of both PK (Cl, F, fu) and PD (MIC) properties; and (2) determination of an appropriate breakpoint value of the AUC(24h)/MIC ratio. This equation is appropriate for determining daily dosage once steady state has been reached. However, the calculated dose when the initial drug concentration in serum is zero is very likely to be higher than the dose determined for maintaining the steady state concentration. Loading doses were calculated for three time periods, 0–24, 0–48 and 0–72 h [7]; the formulae for calculation of dosage for a 48 h period are presented in Fig. 4. Formulae for calculation of the loading dose for 48 h duration of action, where eq. a can be expressed as eq. b and simplified as eq. c. K10 = elimination rate constant; τ = dosing interval in h; Cl48 = body clearance over 48 h; KPD breakpoint = AUC24h/MIC divided by 24; MICDistribution = MICs determined from epidemiological surveys; F = bioavailability (from 0 to 1); fu = fraction of drug not bound to serum protein. All dosages were computed using Monte Carlo simulation in Oracle Crystal Ball (Oracle Corporation, Redwood Shores, CA, USA), for TAR of 50 and 90%. The probability distributions for the dosage estimation were run for 50,000 simulated trials. Data input comprised: 1) marbofloxacin whole body clearance scaled by bioavailability; clearance data were obtained from the equation Clearance = Dose/AUC, for pigs of three ages (27, 16 and 12 weeks); 2) drug binding to serum protein [27]; 3) AUC24h/MIC breakpoints derived from time-kill curves; and 4) MIC field distribution data [29] (Fig. 3). For the MIC field distributions, values were corrected using the serum:broth MIC ratio, as the reported MIC literature values were determined in broth. 
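As a concrete illustration of the dose equation above, the following sketch plugs in the serum bactericidal breakpoint, MIC90 and free fraction quoted in this paper; the clearance and bioavailability figures are placeholders, not the values reported by Schneider et al. [1].

```python
# Deterministic daily dose at steady state:
# Dose = Cl * (AUC24h/MIC breakpoint) * MIC / (fu * F)

def daily_dose(cl_per_h, auc_mic_breakpoint_h, mic90, fu, F):
    """Daily dose (mg/kg) for a chosen AUC24h/MIC target."""
    return (cl_per_h * auc_mic_breakpoint_h * mic90) / (fu * F)

dose = daily_dose(
    cl_per_h=0.1,               # placeholder body clearance, L/kg/h
    auc_mic_breakpoint_h=45.2,  # bactericidal breakpoint for P. multocida in serum (h)
    mic90=0.06,                 # MIC90 for P. multocida, ug/mL [29]
    fu=0.506,                   # free (unbound) fraction in pig serum
    F=1.0,                      # placeholder bioavailability
)
print(f"Predicted daily dose: {dose:.2f} mg/kg")
```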
Minimum inhibitory concentrations Geometric mean MIC serum:broth ratios for P. multocida and A. pleuropneumoniae (six isolates of each species) were 1.12:1 and 0.79:1, respectively. For MPC, the ratios were 1.32:1 and 0.99:1, respectively. These did not differ significantly from unity. However, as free drug concentration in pig serum was previously shown to be 50.6% (Dorey et al. 2016b) [28] and as protein bound drug is microbiologically inactive, the corrected ratios, fraction unbound (fu) serum:broth MIC were 0.50:1 and 0.40:1, and 0.67:1 and 0.50:1 for MPC, indicating small (in microbiological terms) but significantly greater potency of marbofloxacin in serum than in broth. PK/PD breakpoint values For each isolate of each organism, growth inhibition curves were generated in the time-kill studies in broth and pig serum. Examples are presented in Fig. 5. Example plots of AUC24h/MIC (h) versus change in bacterial count (log10 CFU/mL) for P. multocida in (a) broth and (b) serum. The points represent observed values and the curves are lines of best fit From the time-kill curves, PK/PD breakpoints were derived and are presented in Tables 1 (P. multocida) and 2 (A. pleuropneumoniae). For three levels of inhibition of growth, bacteriostasis and 3 and 4log10 reductions in bacterial count, mean AUC24h/MIC values were 20.9, 45.2 and 71.7 h, respectively, for P. multocida for serum. Corresponding broth values were 26.5, 41.8 and 48.9 h. For A. pleuropneumoniae in serum, breakpoints were 32.4, 48.7 and 55.5 h. and for broth values were 24.8, 42.0 and 54.0 h. Differences between broth and serum, at each level of kill, were relatively small, despite differences between broth MICs and serum MICs corrected for drug binding to serum protein. This is not unexpected, as each inhibition curve was derived using MIC multiples for each fluid. Table 1 P. multocida PK/PD modelling of in vitro marbofloxacin time-kill curves (mean, standard deviation, n = 6) As a measure of inter-isolate variability, coefficients of variation were determined. These were small to moderate in magnitude (Tables 1 and 2). Table 2 A. pleuropneumoniae PK/PD modelling of in vitro marbofloxacin time-kill curves (mean, standard deviation, n = 6) Dividing the AUC24h/MIC ratios by 24 yields concentrations, as MIC multiples, producing bacteriostatic, bactericidal and 4log10 reductions in count; these are the KPD values. Numerical values were, respectively, 1.10, 1.74 and 2.04 for P. multocida in broth and 0.87, 1.88 and 2.99 for this organism in serum. Corresponding values for A. pleuropneumoniae were 1.03, 1.75 and 2.25 (broth) and 1.35, 2.03 and 2.31 (serum). Dose determination at steady state (deterministic approach) Clearance and bioavailability data for healthy pigs were obtained from the literature [1]. Table 3 indicates predicted doses required, at steady state, to achieve three levels of kill. Bactericidal levels of kill were obtained with doses 0.41 mg/kg/day for P. multocida and 0.26 mg/kg/day for A. pleuropneumoniae. Table 3 Predicted once daily doses calculated by deterministic approach Dose determination by Monte Carlo simulation Monte Carlo simulations were conducted using: (1) the distribution of clearance around the standard deviation, assuming normal distribution (Schneider et al. 2014) [1]; (2) the distribution of MICs for wild type organisms [29]; (3) free drug fraction in serum; (4) breakpoint values of AUC24h/MIC from time-kill studies. 
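The following is a minimal sketch of a Monte Carlo dose simulation over the four inputs just listed, using NumPy rather than the Oracle Crystal Ball software named in the Methods; the clearance statistics and MIC frequencies are invented for illustration, while the breakpoint and free fraction are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 50_000

# Hypothetical inputs -- the study itself used clearance from Schneider et al. [1],
# the de Jong et al. [29] MIC distributions and the serum breakpoints of Tables 1-2.
cl_mean, cl_sd = 0.10, 0.02                         # body clearance, L/kg/h (placeholder)
mic_values = np.array([0.015, 0.03, 0.06, 0.12])    # ug/mL, illustrative bins
mic_freqs  = np.array([0.10, 0.55, 0.30, 0.05])     # illustrative frequencies
breakpoint_h = 45.2                                 # bactericidal AUC24h/MIC for P. multocida (serum)
fu, F = 0.506, 1.0                                  # free fraction; placeholder bioavailability

cl  = rng.normal(cl_mean, cl_sd, n_trials).clip(min=1e-6)   # sampled clearances
mic = rng.choice(mic_values, size=n_trials, p=mic_freqs)    # sampled MICs

doses = cl * breakpoint_h * mic / (fu * F)          # mg/kg/day for each simulated trial

print("TAR 50% dose:", np.percentile(doses, 50))    # dose covering 50% of the population
print("TAR 90% dose:", np.percentile(doses, 90))    # dose covering 90% of the population
```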
Predicted doses at steady state for 50% and 90% TAR and three levels of kill are presented in Table 4. Doses for bactericidal action for P. multocida were 0.43, 0.45 and 0.59 mg/kg, respectively, for pigs aged 27, 16 and 12 weeks. Corresponding values for A. pleuropneumoniae were somewhat lower, 0.29, 0.30 and 0.39 mg/kg. The small numerical differences with age of pigs reflect small differences in their PK profiles. Differences in predicted dose for the two bacterial species reflect the differing distributions of MIC of wild type organisms. Table 4 Predicted once daily doses of marbofloxacin at steady state in pigs of three ages (weeks): 27 (A), 16 (B) and 12 (C)a Table 5 indicates calculated doses for single doses of marbofloxacin administered intramuscularly with three durations of effect, three levels of kill and TARs of 50 and 90%. For a bacteriostatic action of 24 h duration and 50% TAR, predicted doses were 0.12 and 0.03 mg/kg for P. multocida and A. pleuropneumoniae, respectively. For a TAR of 90%, a bactericidal level of kill and an action over 72 h, the predicted doses were 1.31 and 0.92 mg/kg, respectively, for P. multocida and A. pleuropneumoniae. Even for a 4log10 reduction in count and 90% TAR, predicted doses were relatively low, 2.08 mg/kg (P. multocida) and 1.14 mg/kg (A. pleuropneumoniae). Table 5 Predicted doses of marbofloxacin for three durations of activity (24, 48 and 72 h)a Pharmacokinetics, pharmacodynamics and PK/PD integration Integration of in vitro generated potency estimates with in vivo PK data has been used extensively to generate three indices to predict clinical outcome, namely the ratios Cmax/MIC and AUC24h/MIC, and T > MIC, the time for which concentration exceeds MIC. Integration of pharmacokinetic and pharmacodynamic data for MPC is presented in Additional file 1. All MPC ratios were much lower than the MIC ratios. From previous marbofloxacin studies, Cmax/MIC and AUC24h/MIC ratios provided good correlation with bacteriological cure in human patients [30, 31]. For fluoroquinolones used in veterinary medicine, a Cmax/MIC of 8–10 and an AUC24h/MIC greater than 100–125 h have been proposed [13]. However, other studies have suggested that a ratio of AUC24h/MIC of 35–50 is effective for Gram-positive bacteria [32]. Many authors have proposed achieving numerical values of AUC24h/MIC of 125:1 or 250:1 for Gram-negative organisms, corresponding to average concentrations over the dosing interval of 5.2 to 10.4, respectively, as a multiple of MIC. For marbofloxacin, de Jong et al. [29] reported identical MIC50 values for both A. pleuropneumoniae and P. multocida of European pig origin of 0.03 μg/mL and identical MIC90 values of 0.06 μg/mL. Schneider et al. [1] reported on marbofloxacin PK in healthy pigs, aged 12, 16 and 27 weeks, administered intramuscularly at three dose rates of 4, 8 and 16 mg/kg. PK/PD integration of data from these studies is presented in Additional file 1 to this paper. Briefly, for both bacterial species and pigs aged 27 weeks, Cmax/MIC90 ratios were 56, 105 and 258, respectively, for marbofloxacin doses of 4, 8 and 16 mg/kg. Even average concentrations over the 92 h period after dosing provided Caverage/MIC90 ratios of 9.6, 19.7 and 39.1 for these dose rates. Therefore, a preliminary prediction of likely successful clinical outcome for doses of marbofloxacin in the range 4–16 mg/kg can be made for pig pneumonia caused by the pathogens, A. pleuropneumoniae and P. multocida. 
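For orientation, the surrogate indices discussed in this section reduce to simple ratios once Cmax and AUC24h are known. In the sketch below the MIC90 is the published value [29], while the Cmax and AUC figures are placeholders rather than the values of Schneider et al. [1].

```python
# PK/PD integration: two of the standard fluoroquinolone efficacy indices.
mic90 = 0.06          # ug/mL, MIC90 for both pathogens [29]
cmax  = 3.4           # ug/mL, hypothetical peak serum concentration after one dose
auc24 = 25.0          # ug*h/mL, hypothetical area under the curve over 24 h

print("Cmax/MIC90   :", round(cmax / mic90, 1))    # concentration-dependent kill index
print("AUC24h/MIC90 :", round(auc24 / mic90, 1))   # exposure index used throughout this study
```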
PK/PD modelling and breakpoint determination PK/PD integration is not a precise tool; it should be regarded as an initial step in predicting efficacy in clinical use. It is especially useful when correlated with outcome in clinical trials. However, the next essential step is to define PK/PD breakpoints for each antimicrobial drug acting against representative isolates of each pathogenic species. PK/PD modelling describes the whole sweep of the concentration-effect relationship. Therefore, any pre-determined level of activity, ranging from bacteriostasis to virtual eradication, indicated by the breakpoint AUC24h/MIC index, can be determined. Applying PK/PD breakpoints for indices such as AUC24h/MIC, derived from PK/PD modelling, with MCS provides an approach to dose prediction, which takes account of animal species-based PK, wild-type MIC distributions, protein binding and breakpoints for specific bacterial species. From such time-kill studies, numerical values of PK/PD breakpoints have been determined by PK/PD modelling by previous workers [7–10, 12, 13, 21, 25, 33]. In this study, breakpoint values for each level of growth inhibition, 0log10, 3log10 and 4log10 reductions in count, were broadly similar for the two growth matrices. This is not unexpected because, although MICs in broth and serum differed, the breakpoint values are based on MIC multiples. AUC24h/MIC ratios were similar for broth and serum for each level of kill, being based on MIC values separately for each matrix. Moreover, inter-isolate variability in PK/PD breakpoint values was small to moderate. Dosage prediction PK/PD breakpoints were used with wild type MIC distributions of susceptible pathogens and literature PK data, to conduct MCS to predict doses providing a range of pre-determined levels of kill. The deterministic approach provided an estimate of once daily doses at steady state. It is based on MIC90 for each pathogen and average values for other variables. It provides an initial indication of likely effective dosage, but does not take account of either the variability or the incidence of each input variable. In this study, predicted daily doses were less than 0.5 mg/kg for a bactericidal kill against both pathogens. Nevertheless, the deterministic approach comprises an initial indication, prior to estimation of population doses for each selected TAR. The latter is a dose encompassing a given percentile of the target population, for example, 50 or 90% and for three pre-determined levels of bacterial kill and for both a single dose and a daily dose at steady state. Monte Carlo simulations predict doses which allow for incidence within MIC distributions and encompass best, worst and all intermediate values for distributions of Cl/F and breakpoint AUC24h/MIC ratios. Furthermore, basing potency estimates on serum as a growth matrix, as in this study, has greater relevance to disease conditions than MICs determined in broths. Nevertheless, it is recognised that serum, although preferred to broth for MIC determination, is similar but not identical in composition to the biophase at infection sites, for example pulmonary epithelial lining fluid. As discussed by Martinez et al. [8], it is the exposure achieved after the first dose that is most relevant in determining therapeutic outcome. In this study, low marbofloxacin doses were predicted for a greater than bactericidal action, with 90% TAR in both species; for 72 h and a 4log10 reduction in count, the predicted doses were 2.08 and 1.14 mg/kg, respectively, for P. 
multocida and A. pleuropneumoniae. Both dosages are less than the dosages of 4, 8 and 16 mg/kg studied by Schneider et al. [1]. To achieve a bactericidal action (3log10 reduction in count) for P. multocida for 90% TAR, once daily doses at steady state were even lower, 0.43 mg/kg for P. multocida and for A. pleuropneumoniae 0.29 mg/kg for pigs aged 27 weeks. These predicted doses are lower than those of 2.5 mg/kg and 8 mg/kg investigated by Ding et al. [3] and Ferran et al. [34] as well as the 2 mg/kg recommended dose for several licensed marbofloxacin products. Ferran et al. [34] suggested that even lower doses of marbofloxacin could potentially eradicate low counts (10^5 CFU/mL) in the lung, while having a minimal impact on the microbiota of the large intestine. On the other hand, Vallé et al. [35] validated for the bovine pneumonia pathogens, P. multocida and Mannheimia haemolytica, the concept of a single high dose of marbofloxacin compared to a daily dose of 2 mg/kg for 3–5 days. A bactericidal effect against bovine P. multocida was achieved within one hour, when marbofloxacin was administered at five times the recommended daily dose (10 mg/kg). The present study illustrates the principles of using MCS to predict dosages of marbofloxacin for the treatment of pneumonia in the young pig. The proposed dosage regimen is for A. pleuropneumoniae and P. multocida-induced pneumonias only. For other organisms, independent PK/PD studies will be required. However, in future studies it will also be important to extend the present findings for A. pleuropneumoniae and P. multocida. Whilst the inter-isolate variability in PK/PD breakpoint values for bacteriostatic and bactericidal levels of kill was small in the present study, estimates were based on only six isolates for each species. Moreover, the time-kill studies generating the PK/PD breakpoints used fixed drug concentrations (eight multiples of MIC) for a fixed time period. In clinical use, on the other hand, plasma drug concentrations first increase and then decrease after intramuscular dosing, exposing organisms to a continuously variable concentration. A third consideration is the relatively small number of isolates in the report of de Jong et al. [29]. In future studies, these concerns could be addressed by increasing numbers of isolates in field distribution studies and in PK/PD breakpoint estimation studies. Moreover, exposure of organisms to varying drug concentrations could be addressed by the use, for example, of hollow fibre methods to simulate in vivo patterns of change in concentration with time. Furthermore, marbofloxacin PK data were used as mean and standard deviation values from the literature. In future studies, it will be useful to incorporate individual animal PK data in the MCSs, and, in particular, it will be helpful to use population PK data obtained in clinically ill pigs. A further limitation is that the methodology in this study did not consider the contribution to pathogen elimination by the body's natural defence mechanisms, which are known to be important in immunocompetent clinical subjects. In addition, potentially beneficial properties of antimicrobial drugs, such as immunomodulatory and anti-inflammatory actions, are of importance for some drug classes. Finally, dose prediction studies, as reported in this manuscript, should always be correlated with clinical and bacteriological outcomes in animal disease models and clinical trials [7, 11–13]. 
Predicted doses for marbofloxacin for treatment of respiratory tract infections in the pig, caused by P. multocida or A. pleuropneumoniae, were determined by generating PK/PD breakpoints for several levels of kill, based on modelling PK and PD data. These breakpoint values were used with published MIC distribution data [29] and PK data [1] to determine dosages for three levels of kill, and for once daily doses at steady state and for single doses with both 50% and 90% target Attainment Rates. The findings illustrate the value and principle of using Monte Carlo simulation for determination of optimal doses for a range of outcomes. CAMHB: Cation-adjusted Mueller Hinton Broth EUCAST: Committee on Antimicrobial Susceptibility Testing MCS: MHB: Mueller Hinton Broth MIC: PD: Pharmacodynamics PK: TAR: Target Attainment Rate Schneider M, Paulin A, Dron F, Woehrle F. Pharmacokinetic of marbofloxacin in pigs after intravenous and intramuscular administration of a single dose of 8 mg/kg: dose proportionality, influence of the age of the animals and urinary elimination. J Vet Pharmacol and Ther. 2014;37(6):p523–30. Boothe HW, Jones SA, Wilkie WS, Boeckh A, Stenstrom KK, Boothe DM. Evaluation of the concentration of marbofloxacin in alveolar macrophages and pulmonary epithelial lining fluid after administration in dogs. Am J Vet Res. 2005;66(10):1770–4. Ding H, Li Y, Chen Z, Rizwan-ul-Haq M, Zeng Z. Plasma and tissue cage fluid pharmacokinetics of marbofloxacin after intravenous, intramuscular, and oral single-dose application in pigs. J Vet Pharmacol Ther. 2010;33(5):507–10. Lees P, Aliabadi FS. Rational dosing of antimicrobial drugs: animals versus humans. Int J Antimicrob Agents. 2002;19(4):269–84. Wang YC, Chan JP, Yeh KS, Chang CC, Hsuan SL, et al. Molecular characterization of enrofloxacin resistant Actinobacillus pleuropneumoniae isolates. Vet micro. 2010;142(3):309–12. Vilalta C, Giboin H, Schneider M, El Garch F, Fraile L. Pharmacokinetic/pharmacodynamic evaluation of marbofloxacin in the treatment of Haemophilus parasuis and Actinobacillus pleuropneumoniae infections in nursery and fattener pigs using Monte Carlo simulations. J Vet Pharmacol Ther. 2014;37(6):542–9. Lees P, Pelligand L, Illambas J, Potter T, Lacroix M, Rycroft A, et al. Pharmacokinetic/pharmacodynamic integration and modelling of amoxicillin for the calf pathogens Mannheimia haemolytica and Pasteurella multocida. J Vet Pharmacol Ther. 2015;38(5):457–70. Martinez MN, Papich MG, Drusano GL. Dosing regimen matters: the importance of early intervention and rapid attainment of the pharmacokinetic/pharmacodynamic target. Antimicrob Agents Chemother. 2012;56(6):2795–805. Mouton JW, Brown DF, Apfalter P, Cantón R, Giske CG, Ivanova M, et al. The role of pharmacokinetics/pharmacodynamics in setting clinical MIC breakpoints: the EUCAST approach. Clin Microbiol Infect. 2012;18(3):E37–45. Mouton JW, Ambrose PG, Canton R, Drusano GL, Harbarth S, MacGowan A, et al. Conserving antibiotics for the future: new ways to use old and new drugs from a pharmacokinetic and pharmacodynamic perspective. Drug Resist Update. 2011;14(2):107–17. Nielsen EI, Cars O, Friberg LE. Pharmacokinetic/pharmacodynamic (PK/PD) indices of antibiotics predicted by a semimechanistic PKPD model: a step toward model-based dose optimization. Antimicrob Agents Chemother. 2011;55(10):4619–30. Nielsen EI, Friberg LE. Pharmacokinetic-pharmacodynamic modeling of antibacterial drugs. Pharmacol Rev. 2013;65(3):1053–90. Papich MG. 
Pharmacokinetic–pharmacodynamic (PK–PD) modeling and the rational selection of dosage regimes for the prudent use of antimicrobial drugs. Vet micro. 2014;171(3):480–6. Rey JF, Laffont CM, Croubels S, De Backer P, Zemirline C, Bousquet E, et al. Use of Monte Carlo simulation to determine pharmacodynamic cutoffs of amoxicillin to establish a breakpoint for antimicrobial susceptibility testing in pigs. Am J Vet Res. 2014;75(2):124–31. Toutain PL, Potter T, Pelligand L, Lacroix M, Illambas J, Lees P. Standard PK/PD concepts can be applied to determine a dosage regimen for a macrolide: the case of tulathromycin in the calf. Journal of veterinary pharmacology and therapeutics. 2017;40(1):16–27. Rybak MJ. Pharmacodynamics: relation to antimicrobial resistance. Am J infec control. 2006;34(5):S38–45. Levison ME, Levison JH. Pharmacokinetics and pharmacodynamics of antibacterial agents. Infect Dis Clin N Am. 2009;23(4):791–815. CLSI (2013) Performance Standards for Antimicrobial Disk and Dilution Susceptibility Tests for Bacteria Isolated from Animals: Approved Standard - Fourth Edition. CLSI document VET01-A4 (formerly M31-A3, 2008) Supplementary information VET01-S, 2015. ISBN 1–56238–877-0 [print]; ISBN 1–56238–878-9 [electronic]. Clinical and Laboratory Standards Institute, 950 West Valley Road, Suite 2500, Wayne, Pennsylvania 19087 USA, 2013. Aliabadi FS, Lees P. Pharmacokinetics and pharmacodynamics of danofloxacin in serum and tissue fluids of goats following intravenous and intramuscular administration. Am J Vet Res. 2001 Dec 1;62(12):1979–89. Sidhu PK, Landoni MF, AliAbadi FS, Lees P. Pharmacokinetic and pharmacodynamic modelling of marbofloxacin administered alone and in combination with tolfenamic acid in goats. Vet J. 2010;184(2):219–29. Sidhu P, Rassouli A, Illambas J, Potter T, Pelligand L, Rycroft A, et al. Pharmacokinetic–pharmacodynamic integration and modelling of florfenicol in calves. J Vet Pharmacol Ther. 2014;37(3):231–42. Zeitlinger MA, Derendorf H, Mouton JW, Cars O, Craig WA, et al. Protein Binding: Do We Ever Learn? Antimicrob Agents Chemother. 2011;55(7):3067–74. Brentnall C, Cheng Z, McKellar QA, Lees P. Pharmacodynamics of oxytetracycline administered alone and in combination with carprofen in calves. Vet Rec. 2012;171(11):273–7. Dorey L, Hobson S, Lees P. What is the true in vitro potency of oxytetracycline for the pig pneumonia pathogens A. pleuropneumoniae and P. multocida? J Vet Pharmacol Ther. 2016; Doi: 10.1111/jvp.12386 Aliabadi FS, Lees P. Pharmacokinetics and pharmacokinetic/pharmacodynamic integration of marbofloxacin in calf serum, exudate and transudate. J Vet Pharmacol Ther. 2002;25(3):161–74. Lees P, Pelligand L, Ferran A, Bousquest-Melou A, Toutain P. Application of pharmacological principles to dosage design of antimicrobial drugs. Pharmacol mat. 2015;8:22. Dorey L, Hobson S, Lees P. Potency of marbofloxacin for pig pneumonia pathogens Actinobacillus pleuropneumoniae and Pasteurella multocida: comparison of growth media. Res Vet Sci. 2016. Doi: 10.1016/j.rvsc.2016.11.001. Dorey L, Hobson S, Lees P. Part 2: Factors influencing the potency of marbofloxacin for pig pneumonia pathogens Actinobacillus pleuropneumoniae and Pasteurella multocida. Res Vet Sci 2016; Doi: 10.1016/j.rvsc.2016.11.001. de Jong A, Thomas V, Simjee S, Moyaert H, El Garch F, Maher K, et al. Antimicrobial susceptibility monitoring of respiratory tract pathogens isolated from diseased cattle and pigs across Europe: the VetPath study. Vet Microbiol. 2014;172(1):202–15. 
Drusano G, Labro MT, Cars O, Mendes P, Shah P, Sörgel F, et al. Pharmacokinetics and pharmacodynamics of fluoroquinolones. Clin Microbiol Infect. 1998;4(s2):2S27–41. Lode H, Borner K, Koeppe P. Pharmacodynamics of fluoroquinolones. Clin Infect Dis. 1998;27(1):33–9. Drusano GL. Pharmacokinetics and pharmacodynamics of antimicrobials. Clin Infect Dis. 2007;45(Supplement 1):S89–95. Toutain PL, Lees P. Integration and modelling of pharmacokinetic and pharmacodynamic data to optimize dosage regimens in veterinary medicine. J Vet Pharmacol Ther. 2004;27(6):467–77. Ferran AA, Bibbal D, Pellet T, Laurentie M, Gicquel-Bruneau M, Sanders P, et al. Pharmacokinetic/pharmacodynamic assessment of the effects of parenteral administration of a fluoroquinolone on the intestinal microbiota: comparison of bactericidal activity at the gut versus the systemic level in a pig model. Int J Antimicrob Agents. 2013;42(5):429–35. Vallé M, Schneider M, Galland D, Giboin H, Woehrle F. Pharmacokinetic and pharmacodynamic testing of marbofloxacin administered as a single injection for the treatment of bovine respiratory disease. J Vet Pharmacol Ther. 2012;35(6):519–28. Lucy Dorey was a BBSRC CASE AWARD Scholar. A Pridmore, Don Whitley Scientific, and A Rycroft, Royal Veterinary College, supplied bacterial isolates. This project has been funded by BBSRC and Norbrook Laboratories Ltd.; grant number BB/101649X1. The funding body had no participation in the design of the study, collection, analysis, and interpretation of data, and in writing the manuscript. The data sets supporting the results of this article are included within the article and its supplementary file (Additional file 1). Comparative Biological Sciences, Royal Veterinary College, London University, London, UK Lucy Dorey, Ludovic Pelligand & Peter Lees Lucy Dorey Ludovic Pelligand Peter Lees LD and PL were responsible for the study design and co-ordination, data analysis and were involved in writing the manuscript. LD conducted the experiments. LP was involved in data analysis and in writing the manuscript. All authors approved the final manuscript. Correspondence to Lucy Dorey. Additional file 1: PK/PD integration data. (DOCX 7 kb) Dorey, L., Pelligand, L. & Lees, P. Prediction of marbofloxacin dosage for the pig pneumonia pathogens Actinobacillus pleuropneumoniae and Pasteurella multocida by pharmacokinetic/pharmacodynamic modelling. BMC Vet Res 13, 209 (2017). https://doi.org/10.1186/s12917-017-1128-y Marbofloxacin Pharmacokinetic/Pharmacodynamic A. Pleuropneumoniae P. Multocida Time-kill curves Submission enquiries: [email protected]
CommonCrawl
Tag Archives: Sudoku Infinite Sudoku and the Sudoku game Posted on April 16, 2018 by Joel David Hamkins Consider what I call the Sudoku game, recently introduced in the MathOverflow question Who wins two-player Sudoku? posted by Christopher King (user PyRulez). Two players take turns placing numbers on a Sudoku board, obeying the rule that they must never explicitly violate the Sudoku condition: the numbers on any row, column or sub-board square must never repeat. The first player who cannot continue legal play loses. Who wins the game? What is the winning strategy? The game is not about building a global Sudoku solution, since a move can be legal in this game even when it is not part of any global Sudoku solution, provided only that it doesn't yet explicitly violate the Sudoku condition. Rather, the Sudoku game is about trying to trap your opponent in a maximal such position, a position which does not yet explicitly violate the Sudoku condition but which cannot be further extended. In my answer to the question on MathOverflow, I followed an idea suggested to me by my daughter Hypatia, namely that on even-sized boards $n^2\times n^2$ where $n$ is even, then the second player can win with a mirroring strategy: simply copy the opponent's moves in reflected mirror image through the center of the board. In this way, the second player ensures that the position on the board is always symmetric after her play, and so if the previous move was safe, then her move also will be safe by symmetry. This is therefore a winning strategy for the second player, since any violation of the Sudoku condition will arise on the opponent's play. This argument works on even-sized boards precisely because the reflection of every row, column and sub-board square is a totally different row, column and sub-board square, and so any new violation of the Sudoku conditions would reflect to a violation that was already there. The mirror strategy definitely does not work on the odd-sized boards, including the main $9\times 9$ case, since if the opponent plays on the central row, copying directly would immediately introduce a Sudoku violation. After posting that answer, Orson Peters (user orlp) pointed out that one can modify it to form a winning strategy for the first player on odd-sized boards, including the main $9\times 9$ case. In this case, let the first player begin by playing $5$ in the center square, and then afterwards copy the opponent's moves, but with the ten's complement at the reflected location. So if the opponent plays $x$, then the first player plays $10-x$ at the reflected location. In this way, the first player can ensure that the board is ten's complement symmetric after her moves. The point is that again this is sufficient to know that she will never introduce a violation, since if her $10-x$ appears twice in some row, column or sub-board square, then $x$ must have already appeared twice in the reflected row, column or sub-board square before that move. This idea is fully general for odd-sized Sudoku boards $n^2\times n^2$, where $n$ is odd. If $n=2k-1$, then the first player starts with $k$ in the very center and afterward plays the $2k$-complement of her opponent's move at the reflected location. On even-sized Sudoku boards, the second player wins the Sudoku game by the mirror copying strategy. On odd-sized Sudoku boards, the first players wins the Sudoku game by the complement-mirror copying strategy. 
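As a rough sketch (not from the post itself), the two copying strategies can be written down as follows for an $n^2\times n^2$ board with cells indexed from $0$; the function names are mine.

```python
def mirror_move(move, n):
    """Even n: the second player copies the opponent's value at the cell
    reflected through the centre of the n^2 x n^2 board."""
    (r, c), v = move
    size = n * n
    return ((size - 1 - r, size - 1 - c), v)

def complement_mirror_move(move, n):
    """Odd n: after opening with k = (n*n + 1) // 2 in the central square,
    the first player answers a play of x with (n*n + 1) - x at the reflected cell."""
    (r, c), v = move
    size = n * n
    return ((size - 1 - r, size - 1 - c), (size + 1) - v)

# On the standard 9 x 9 board (n = 3), if the opponent plays 2 at (0, 4),
# the reply is the ten's complement 10 - 2 = 8 at the reflected cell (8, 4).
print(complement_mirror_move(((0, 4), 2), 3))   # ((8, 4), 8)
```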
Note that on the even boards, the second player could also play complement mirror copying just as successfully. What I really want to tell you about, however, is the infinite Sudoku game (following a suggestion of Sam Hopkins). Suppose that we try to play the Sudoku game on a board whose subboard squares are $\mathbb{Z}\times\mathbb{Z}$, so that the full board is a $\mathbb{Z}\times\mathbb{Z}$ array of those squares, making $\mathbb{Z}^2\times\mathbb{Z}^2$ altogether. (Or perhaps you might prefer the board $\mathbb{N}^2\times\mathbb{N}^2$?) One thing to notice is that on an infinite board, it is no longer possible to get trapped at a finite stage of play, since every finite position can be extended simply by playing a totally new label from the set of labels; such a move would never lead to a new violation of the explicit Sudoku condition. For this reason, I should like to introduce the Sudoku Solver-Spoiler game variation as follows. There are two players: the Sudoku Solver and the Sudoku Spoiler. The Solver is trying to build a global Sudoku solution on the board, while the Spoiler is trying to prevent this. Both players must obey the Sudoku condition that labels are never to be explicitly repeated in any row, column or sub-board square. On an infinite board, the game proceeds transfinitely, until the board is filled or there are no legal moves. The Solver wins a play of the game, if she successfully builds a global Sudoku solution, which means not only that every location has a label and there are no repetitions in any row, column or sub-board square, but also that every label in fact appears in every row, column and sub-board square. That is, to count as a solution, the labels on any row, column and sub-board square must be a bijection with the set of labels. (On infinite boards, this is a stronger requirement than merely insisting on no repetitions.) The Solver-Spoiler game makes sense in complete generality on any set $S$, whether finite or infinite. The sub-boards are $S^2=S\times S$, and one has an $S\times S$ array of them, so $S^2\times S^2$ for the whole board. Every row and column has the same size as the sub-board square $S^2$, and the set of labels should also have this size. Upon reflection, one realizes that what matters about $S$ is just its cardinality, and we really have for every cardinal $\kappa$ the $\kappa$-Sudoku Solver-Spoiler game, whose board is $\kappa^2\times\kappa^2$, a $\kappa\times\kappa$ array of $\kappa\times\kappa$ sub-boards. In particular, the game $\mathbb{Z}^2\times\mathbb{Z}^2$ is actually isomorphic to the game $\mathbb{N}^2\times\mathbb{N}^2$, despite what might feel initially like a very different board geometry. What I claim is that the Solver has a winning strategy in the all the infinite Sudoku Solver-Spoiler games, in a very general and robust manner. Theorem. For every infinite cardinal $\kappa$, the Solver has a winning strategy to win the $\kappa$-Sudoku Solver-Spoiler game. The strategy will win in $\kappa$ many moves, producing a full Sudoku solution. The Solver can win whether she goes first or second, starting from any legal position of size less than $\kappa$. The Solver can win even when the Spoiler is allowed to play finitely many labels at once on each turn, or fewer than $\kappa$ many moves (if $\kappa$ is regular), even if the Solver is only allowed one move each turn. 
In the countably infinite Sudoku game, the Solver can win even if the Spoiler is allowed to make infinitely many moves at once, provided only that the resulting position can in principle be extended to a full solution. Proof. Consider first the countably infinite Sudoku game, and assume the initial position is finite and that the Spoiler will make finitely many moves on each turn. Consider what it means for the Solver to win at the limit. It means, first of all, that there are no explicit repetitions in any row, column or sub-board. This requirement will be ensured since it is part of the rules for legal play not to violate it. Next, the Solver wants to ensure that every square has a label on it and that every label appears at least once in every row, every column and every sub-board. If we think of these as individual specific requirements, we have countably many requirements in all, and I claim that we can arrange that the Solver will simply satisfy the $n^{th}$ requirement on her $n^{th}$ play. Given any finite position, she can always find something to place in any given square, using a totally new label if need be. Given any finite position, any row and any particular label $k$, she can always find a place on that row to place the label, which has no conflict with any column or sub-board, since there are infinitely many to choose from and only finitely many conflicts. Similarly with columns and sub-boards. So each of the requirements can always be fulfilled one-at-a-time, and so in $\omega$ many moves she can produce a full solution. The argument works equally well no matter who goes first or if the Spoiler makes arbitrary finite play, or indeed even infinite play, provided that the play is part of some global solution (perhaps a different one each time), since on each move the Solver can simply meet the requirement by using that solution at that stage. An essentially similar argument works when $\kappa$ is uncountable, although now the play will proceed for $\kappa$ many steps. Assuming $\kappa^2=\kappa$, a consequence of the axiom of choice, there are $\kappa$ many requirements to meet, and the Solver can meet requirement $\alpha$ on the $\alpha^{th}$ move. If $\kappa$ is regular, we can again allow the Spoiler to make arbitrary size-less-than-$\kappa$ moves, so that at any stage of play before $\kappa$ the position will still be size less than $\kappa$. (If $\kappa$ is singular, one can allow the Spoiler to make finitely many moves at once or indeed even some uniform bounded size $\delta<\kappa$ many moves at once.) $\Box$ I find it interesting to draw out the following aspect of the argument: Observation. Every finite labeling of an infinite Sudoku board that does not yet explicitly violate the Sudoku condition can be extended to a global solution. Similarly, any size less than $\kappa$ labeling that does not yet explicitly violate the Sudoku condition can be extended to a global solution of the $\kappa$-Sudoku board for any infinite cardinal $\kappa$. What about asymmetric boards? It has come to my attention that people sometimes look at asymmetric Sudoku boards, whose sub-boards are not square, such as in the six-by-six Sudoku case. In general, one could take Sudoku boards to consist of a $\lambda\times\kappa$ array of sub-boards of size $\kappa\times\lambda$, where $\kappa$ and $\lambda$ are cardinals, not necessarily the same size and not necessarily both infinite or both finite. How does this affect the arguments I've given? 
In the finite $(n\times m)\times (m\times n)$ case, if one of the numbers is even, then it seems to me that the reflection through the origin strategy works for the second player just as before. And if both are odd, then the first player can again play in the center square and use the mirror-complement strategy to trap the opponent. So that analysis will work fine. In the case $(\kappa\times\lambda)\times(\lambda\times\kappa)$ where $\lambda\leq\kappa$ and $\kappa=\lambda\kappa$ is infinite, then the proof of the theorem seems to break, since if $\lambda<\kappa$, then with only $\lambda$ many moves, say putting a common symbol in each of the $\lambda$ many rectangles across a row, we can rule out that symbol in a fixed row. So this is a configuration of size less than $\kappa$ that cannot be extended to a full solution. For this reason, it seems likely to me that the Spoiler can win the Sudoko Solver-Spoiler game in the infinite asymmetric case. Finally, let's consider the Sudoku Solver-Spoiler game in the purely finite case, which actually is a very natural game, perhaps more natural than what I called the Sudoku game above. It seems to me that the Spoiler should be able to win the Solver-Spoiler game on any nontrivial finite board. But I don't yet have an argument proving this. I asked a question on MathOverflow: The Sudoku game: Solver-Spoiler variation. Posted in Exposition, Math for Kids | Tagged games, infinite games, kids, Sudoku | 11 Replies
CommonCrawl
Function and Function Notation By the end of this lesson, you will be able to: Determine whether a relation represents a function. Find the value of a function. Determine whether a function is one-to-one. Use the vertical line test to identify functions. Graph the functions listed in the library of functions. A jetliner changes altitude as its distance from the starting point of a flight increases. The weight of a growing child increases with time. In each case, one quantity depends on another. There is a relationship between the two quantities that we can describe, analyze, and use to make predictions. In this section, we will analyze such relationships. Determining Whether a Relation Represents a Function A relation is a set of ordered pairs. The set of the first components of each ordered pair is called the domain and the set of the second components of each ordered pair is called the range. Consider the following set of ordered pairs. The first numbers in each pair are the first five natural numbers. The second number in each pair is twice that of the first. [latex]\left\{\left(1,2\right),\left(2,4\right),\left(3,6\right),\left(4,8\right),\left(5,10\right)\right\}[/latex] The domain is [latex]\left\{1,2,3,4,5\right\}[/latex]. The range is [latex]\left\{2,4,6,8,10\right\}[/latex]. Note that each value in the domain is also known as an input value, or independent variable, and is often labeled with the lowercase letter [latex]x[/latex]. Each value in the range is also known as an output value, or dependent variable, and is often labeled lowercase letter [latex]y[/latex]. A function [latex]f[/latex] is a relation that assigns a single value in the range to each value in the domain. In other words, no x-values are repeated. For our example that relates the first five natural numbers to numbers double their values, this relation is a function because each element in the domain, [latex]\left\{1,2,3,4,5\right\}[/latex], is paired with exactly one element in the range, [latex]\left\{2,4,6,8,10\right\}[/latex]. Now let's consider the set of ordered pairs that relates the terms "even" and "odd" to the first five natural numbers. It would appear as [latex]\left\{\left(\text{odd},1\right),\left(\text{even},2\right),\left(\text{odd},3\right),\left(\text{even},4\right),\left(\text{odd},5\right)\right\}[/latex] Notice that each element in the domain, [latex]\left\{\text{even,}\text{odd}\right\}[/latex] is not paired with exactly one element in the range, [latex]\left\{1,2,3,4,5\right\}[/latex]. For example, the term "odd" corresponds to three values from the domain, [latex]\left\{1,3,5\right\}[/latex] and the term "even" corresponds to two values from the range, [latex]\left\{2,4\right\}[/latex]. This violates the definition of a function, so this relation is not a function. Figure 1 compares relations that are functions and not functions. Figure 1. (a) This relationship is a function because each input is associated with a single output. Note that input [latex]q[/latex] and [latex]r[/latex] both give output [latex]n[/latex]. (b) This relationship is also a function. In this case, each input is associated with a single output. (c) This relationship is not a function because input [latex]q[/latex] is associated with two different outputs. A General Note: Function A function is a relation in which each possible input value leads to exactly one output value. We say "the output is a function of the input." The input values make up the domain, and the output values make up the range. 
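The defining test can be phrased as a short check on a set of ordered pairs; the sketch below (not part of the original lesson) applies it to the two relations discussed above.

```python
# A relation, given as a set of ordered pairs, is a function exactly when
# no input value is paired with two different output values.

def is_function(relation):
    outputs = {}
    for x, y in relation:
        if x in outputs and outputs[x] != y:
            return False          # the same input leads to two different outputs
        outputs[x] = y
    return True

doubles = {(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)}
parity  = {("odd", 1), ("even", 2), ("odd", 3), ("even", 4), ("odd", 5)}

print(is_function(doubles))   # True  -- each input has exactly one output
print(is_function(parity))    # False -- "odd" is paired with 1, 3 and 5
```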
How To: Given a relationship between two quantities, determine whether the relationship is a function. Identify the input values. Identify the output values. If each input value leads to only one output value, classify the relationship as a function. If any input value leads to two or more outputs, do not classify the relationship as a function. Example 1: Determining If Menu Price Lists Are Functions The coffee shop menu, shown in Figure 2 consists of items and their prices. Is price a function of the item? Is the item a function of the price? Let's begin by considering the input as the items on the menu. The output values are then the prices. See Figure 2. Each item on the menu has only one price, so the price is a function of the item. Two items on the menu have the same price. If we consider the prices to be the input values and the items to be the output, then the same input value could have more than one output associated with it. See Figure 3. Therefore, the item is a not a function of price. Example 2: Determining If Class Grade Rules Are Functions In a particular math class, the overall percent grade corresponds to a grade point average. Is grade point average a function of the percent grade? Is the percent grade a function of the grade point average? The table below shows a possible rule for assigning grade points. Percent Grade 0–56 57–61 62–66 67–71 72–77 78–86 87–91 92–100 Grade Point Average For any percent grade earned, there is an associated grade point average, so the grade point average is a function of the percent grade. In other words, if we input the percent grade, the output is a specific grade point average. In the grading system given, there is a range of percent grades that correspond to the same grade point average. For example, students who receive a grade point average of 3.0 could have a variety of percent grades ranging from 78 all the way to 86. Thus, percent grade is not a function of grade point average. Try It 1 The table below lists the five greatest baseball players of all time in order of rank. Babe Ruth 1 Willie Mays 2 Ty Cobb 3 Walter Johnson 4 Hank Aaron 5 a) Is the rank a function of the player name? b) Is the player name a function of the rank? Using Function Notation Once we determine that a relationship is a function, we need to display and define the functional relationships so that we can understand and use them, and sometimes also so that we can program them into computers. There are various ways of representing functions. A standard function notation is one representation that facilitates working with functions. To represent "height is a function of age," we start by identifying the descriptive variables [latex]h[/latex] for height and [latex]a[/latex] for age. The letters [latex]f,g[/latex], and [latex]h[/latex] are often used to represent functions just as we use [latex]x,y[/latex], and [latex]z[/latex] to represent numbers and [latex]A,B[/latex], and [latex]C[/latex] to represent sets. [latex]\begin{cases}h\text{ is }f\text{ of }a\hfill & \hfill & \hfill & \hfill & \text{We name the function }f;\text{ height is a function of age}.\hfill \\ h=f\left(a\right)\hfill & \hfill & \hfill & \hfill & \text{We use parentheses to indicate the function input}\text{. 
}\hfill \\ f\left(a\right)\hfill & \hfill & \hfill & \hfill & \text{We name the function }f;\text{ the expression is read as ``}f\text{ of }a\text{.''}\hfill \end{cases}[/latex] Remember, we can use any letter to name the function; the notation [latex]h\left(a\right)[/latex] shows us that [latex]h[/latex] depends on [latex]a[/latex]. The value [latex]a[/latex] must be put into the function [latex]h[/latex] to get a result. The parentheses indicate that age is input into the function; they do not indicate multiplication. We can also give an algebraic expression as the input to a function. For example [latex]f\left(a+b\right)[/latex] means "first add a and b, and the result is the input for the function f." The operations must be performed in this order to obtain the correct result. A General Note: Function Notation The notation [latex]y=f\left(x\right)[/latex] defines a function named [latex]f[/latex]. This is read as [latex]``y[/latex] is a function of [latex]x.''[/latex] The letter [latex]x[/latex] represents the input value, or independent variable. The letter [latex]y\text{,\hspace{0.17em}}[/latex] or [latex]f\left(x\right)[/latex], represents the output value, or dependent variable. Example 3: Using Function Notation for Days in a Month Use function notation to represent a function whose input is the name of a month and output is the number of days in that month. The number of days in a month is a function of the name of the month, so if we name the function [latex]f[/latex], we write [latex]\text{days}=f\left(\text{month}\right)[/latex] or [latex]d=f\left(m\right)[/latex]. The name of the month is the input to a "rule" that associates a specific number (the output) with each input. For example, [latex]f\left(\text{March}\right)=31[/latex], because March has 31 days. The notation [latex]d=f\left(m\right)[/latex] reminds us that the number of days, [latex]d[/latex] (the output), is dependent on the name of the month, [latex]m[/latex] (the input). Analysis of the Solution Note that the inputs to a function do not have to be numbers; function inputs can be names of people, labels of geometric objects, or any other element that determines some kind of output. However, most of the functions we will work with in this book will have numbers as inputs and outputs. Example 4: Interpreting Function Notation A function [latex]N=f\left(y\right)[/latex] gives the number of police officers, [latex]N[/latex], in a town in year [latex]y[/latex]. What does [latex]f\left(2005\right)=300[/latex] represent? When we read [latex]f\left(2005\right)=300[/latex], we see that the input year is 2005. The value for the output, the number of police officers [latex]\left(N\right)[/latex], is 300. Remember, [latex]N=f\left(y\right)[/latex]. The statement [latex]f\left(2005\right)=300[/latex] tells us that in the year 2005 there were 300 police officers in the town. Instead of a notation such as [latex]y=f\left(x\right)[/latex], could we use the same symbol for the output as for the function, such as [latex]y=y\left(x\right)[/latex], meaning "y is a function of x?" Yes, this is often done, especially in applied subjects that use higher math, such as physics and engineering. However, in exploring math itself we like to maintain a distinction between a function such as [latex]f[/latex], which is a rule or procedure, and the output [latex]y[/latex] we get by applying [latex]f[/latex] to a particular input [latex]x[/latex]. 
This is why we usually use notation such as [latex]y=f\left(x\right),P=W\left(d\right)[/latex], and so on. Representing Functions Using Tables A common method of representing functions is in the form of a table. The table rows or columns display the corresponding input and output values. In some cases, these values represent all we know about the relationship; other times, the table provides a few select examples from a more complete relationship. The table below lists the input number of each month (January = 1, February = 2, and so on) and the output value of the number of days in that month. This information represents all we know about the months and days for a given year (that is not a leap year). Note that, in this table, we define a days-in-a-month function [latex]f[/latex] where [latex]D=f\left(m\right)[/latex] identifies months by an integer rather than by name. Month number, [latex]m[/latex] (input) 1 2 3 4 5 6 7 8 9 10 11 12 Days in month, [latex]D[/latex] (output) 31 28 31 30 31 30 31 31 30 31 30 31 The table below defines a function [latex]Q=g\left(n\right)[/latex]. Remember, this notation tells us that [latex]g[/latex] is the name of the function that takes the input [latex]n[/latex] and gives the output [latex]Q\text{\hspace{0.17em}.}[/latex] [latex]n[/latex] 1 2 3 4 5 [latex]Q[/latex] 8 6 7 6 8 The table below displays the age of children in years and their corresponding heights. This table displays just some of the data available for the heights and ages of children. We can see right away that this table does not represent a function because the same input value, 5 years, has two different output values, 40 in. and 42 in. Age in years, [latex]\text{ }a\text{ }[/latex] (input) 5 5 6 7 8 9 10 Height in inches, [latex]\text{ }h\text{ }[/latex] (output) 40 42 44 47 50 52 54 How To: Given a table of input and output values, determine whether the table represents a function. Identify the input and output values. Check to see if each input value is paired with only one output value. If so, the table represents a function. Example 5: Identifying Tables that Represent Functions Which table, a), b), or c), represents a function (if any)? Table A Table B Table C a) and b) define functions. In both, each input value corresponds to exactly one output value. c) does not define a function because the input value of 5 corresponds to two different output values. When a table represents a function, corresponding input and output values can also be specified using function notation. The function represented by a) can be represented by writing [latex]f\left(2\right)=1,f\left(5\right)=3,\text{and }f\left(8\right)=6[/latex] Similarly, the statements [latex]g\left(-3\right)=5,g\left(0\right)=1,\text{and }g\left(4\right)=5[/latex] represent the function in b). c) cannot be expressed in a similar way because it does not represent a function. When we know an input value and want to determine the corresponding output value for a function, we evaluate the function. Evaluating will always produce one result because each input value of a function corresponds to exactly one output value. When we know an output value and want to determine the input values that would produce that output value, we set the output equal to the function's formula and solve for the input. Solving can produce more than one solution because different input values can produce the same output value. 
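A table such as the days-in-a-month table is just a finite list of input–output pairs, so it can be mirrored directly as a lookup; this small sketch (added here for illustration) also records why the age/height table fails the test.

```python
# The days-in-a-month function D = f(m) from the table above,
# represented as a lookup table (a non-leap year, as in the text).
days_in_month = {1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30,
                 7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}

def f(m):
    return days_in_month[m]

print(f(3))   # 31 -- March has 31 days

# The children's age/height table is NOT a function: the input 5 appears twice
# with two different outputs (40 in. and 42 in.), so one input gives two outputs.
```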
Evaluation of Functions in Algebraic Forms
When we have a function in formula form, it is usually a simple matter to evaluate the function. For example, the function [latex]f\left(x\right)=5 - 3{x}^{2}[/latex] can be evaluated by squaring the input value, multiplying by 3, and then subtracting the product from 5.

How To: Given the formula for a function, evaluate.
Replace the input variable in the formula with the value provided. Calculate the result.

Example 6: Evaluating Functions
Given the function [latex]h\left(p\right)={p}^{2}+2p[/latex], evaluate [latex]h\left(4\right)[/latex]. To evaluate [latex]h\left(4\right)[/latex], we substitute the value 4 for the input variable [latex]p[/latex] in the given function.
[latex]\begin{aligned}h\left(p\right)&={p}^{2}+2p\\ h\left(4\right)&={\left(4\right)}^{2}+2\left(4\right)\\ &=16+8\\ &=24\end{aligned}[/latex]
Therefore, for an input of 4, we have an output of 24.

Example 7: Evaluating Functions at Specific Values
Evaluate [latex]f\left(x\right)={x}^{2}+3x - 4[/latex] at [latex]2[/latex], [latex]a[/latex], [latex]a+h[/latex], and [latex]\frac{f\left(a+h\right)-f\left(a\right)}{h}[/latex]. Replace the [latex]x[/latex] in the function with each specified value. Because the first input value is a number, 2, we can use algebra to simplify.
[latex]\begin{aligned}f\left(2\right)&={2}^{2}+3\left(2\right)-4\\ &=4+6-4\\ &=6\end{aligned}[/latex]
In the next case, the input value is a letter so we cannot simplify the answer any further: [latex]f\left(a\right)={a}^{2}+3a - 4[/latex]. With an input value of [latex]a+h[/latex], we must use the distributive property.
[latex]\begin{aligned}f\left(a+h\right)&={\left(a+h\right)}^{2}+3\left(a+h\right)-4\\ &={a}^{2}+2ah+{h}^{2}+3a+3h-4\end{aligned}[/latex]
In the last case, we apply the input values to the function more than once, and then perform algebraic operations on the result. We already found that [latex]f\left(a+h\right)={a}^{2}+2ah+{h}^{2}+3a+3h-4[/latex] and we know that [latex]f\left(a\right)={a}^{2}+3a-4[/latex]. Now we combine the results and simplify.
[latex]\begin{aligned}\frac{f\left(a+h\right)-f\left(a\right)}{h}&=\frac{\left({a}^{2}+2ah+{h}^{2}+3a+3h-4\right)-\left({a}^{2}+3a-4\right)}{h}\\ &=\frac{2ah+{h}^{2}+3h}{h}\\ &=\frac{h\left(2a+h+3\right)}{h}&&\text{Factor out }h.\\ &=2a+h+3&&\text{Simplify}.\end{aligned}[/latex]
Given the function [latex]g\left(m\right)=\sqrt{m - 4}[/latex], evaluate [latex]g\left(5\right)[/latex].
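To double-check the algebra in Example 7, here is a short sketch that assumes the sympy library is available (that choice, and the variable names, are mine, not the lesson's); it evaluates the function symbolically and cancels the difference quotient.

from sympy import symbols, expand, cancel

a, h = symbols('a h')
f = lambda t: t**2 + 3*t - 4   # the function from Example 7

print(f(2))                              # 6
print(expand(f(a + h)))                  # a**2 + 2*a*h + 3*a + h**2 + 3*h - 4
print(cancel((f(a + h) - f(a)) / h))     # 2*a + h + 3, matching the hand calculation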
Example 8: Solving Functions
Given the function [latex]h\left(p\right)={p}^{2}+2p[/latex], solve [latex]h\left(p\right)=3[/latex].
[latex]\begin{aligned}h\left(p\right)&=3\\ {p}^{2}+2p&=3&&\text{Substitute the original function }h\left(p\right)={p}^{2}+2p.\\ {p}^{2}+2p-3&=0&&\text{Subtract 3 from each side}.\\ \left(p+3\right)\left(p-1\right)&=0&&\text{Factor}.\end{aligned}[/latex]
If [latex]\left(p+3\right)\left(p - 1\right)=0[/latex], either [latex]\left(p+3\right)=0[/latex] or [latex]\left(p - 1\right)=0[/latex] (or both of them equal 0). We will set each factor equal to 0 and solve for [latex]p[/latex] in each case.
[latex]\begin{aligned}\left(p+3\right)&=0, & p&=-3\\ \left(p-1\right)&=0, & p&=1\end{aligned}[/latex]
This gives us two solutions. The output [latex]h\left(p\right)=3[/latex] when the input is either [latex]p=1[/latex] or [latex]p=-3[/latex]. We can also verify by graphing as in Figure 5. The graph verifies that [latex]h\left(1\right)=h\left(-3\right)=3[/latex] and [latex]h\left(4\right)=24[/latex]. Given the function [latex]g\left(m\right)=\sqrt{m - 4}[/latex], solve [latex]g\left(m\right)=2[/latex].

Evaluating Functions Expressed in Formulas
Some functions are defined by mathematical rules or procedures expressed in equation form. If it is possible to express the function output with a formula involving the input quantity, then we can define a function in algebraic form. For example, the equation [latex]2n+6p=12[/latex] expresses a functional relationship between [latex]n[/latex] and [latex]p[/latex]. We can rewrite it to decide if [latex]p[/latex] is a function of [latex]n[/latex].

How To: Given a function in equation form, write its algebraic formula.
Solve the equation to isolate the output variable on one side of the equal sign, with the other side as an expression that involves only the input variable. Use all the usual algebraic methods for solving equations, such as adding or subtracting the same quantity to or from both sides, or multiplying or dividing both sides of the equation by the same quantity.

Example 9: Finding an Equation of a Function
Express the relationship [latex]2n+6p=12[/latex] as a function [latex]p=f\left(n\right)[/latex], if possible. To express the relationship in this form, we need to be able to write the relationship where [latex]p[/latex] is a function of [latex]n[/latex], which means writing it as p = expression involving n.
[latex]\begin{aligned}2n+6p&=12\\ 6p&=12-2n&&\text{Subtract }2n\text{ from both sides}.\\ p&=\frac{12-2n}{6}&&\text{Divide both sides by 6}.\\ p&=\frac{12}{6}-\frac{2n}{6}\\ p&=2-\frac{1}{3}n&&\text{Simplify}.\end{aligned}[/latex]
Therefore, [latex]p[/latex] as a function of [latex]n[/latex] is written as [latex]p=f\left(n\right)=2-\frac{1}{3}n[/latex]. It is important to note that not every relationship expressed by an equation can also be expressed as a function with a formula.

Example 10: Expressing the Equation of a Circle as a Function
Does the equation [latex]{x}^{2}+{y}^{2}=1[/latex] represent a function with [latex]x[/latex] as input and [latex]y[/latex] as output? If so, express the relationship as a function [latex]y=f\left(x\right)[/latex]. First we subtract [latex]{x}^{2}[/latex] from both sides: [latex]{y}^{2}=1-{x}^{2}[/latex]. We now try to solve for [latex]y[/latex] in this equation.
[latex]y=\pm \sqrt{1-{x}^{2}}=+\sqrt{1-{x}^{2}}\text{ and }-\sqrt{1-{x}^{2}}[/latex]
We get two outputs corresponding to the same input, so this relationship cannot be represented as a single function [latex]y=f\left(x\right)[/latex]. If [latex]x - 8{y}^{3}=0[/latex], express [latex]y[/latex] as a function of [latex]x[/latex].
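As a cross-check on Examples 8 through 10, the sketch below again assumes sympy (an assumption of mine; any computer algebra system would do). It solves h(p) = 3 for p, rearranges 2n + 6p = 12 into p = f(n), and shows that the circle equation gives two outputs for a single input.

from sympy import symbols, Eq, solve

p, n, x, y = symbols('p n x y')

print(solve(Eq(p**2 + 2*p, 3), p))    # [-3, 1], the two solutions of Example 8
print(solve(Eq(2*n + 6*p, 12), p))    # [2 - n/3], the formula found in Example 9
print(solve(Eq(x**2 + y**2, 1), y))   # two branches, +/- sqrt(1 - x**2): not a single function of x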
Are there relationships expressed by an equation that do represent a function but which still cannot be represented by an algebraic formula? Yes, this can happen. For example, given the equation [latex]x=y+{2}^{y}[/latex], if we want to express [latex]y[/latex] as a function of [latex]x[/latex], there is no simple algebraic formula involving only [latex]x[/latex] that equals [latex]y[/latex]. However, each [latex]x[/latex] does determine a unique value for [latex]y[/latex], and there are mathematical procedures by which [latex]y[/latex] can be found to any desired accuracy. In this case, we say that the equation gives an implicit (implied) rule for [latex]y[/latex] as a function of [latex]x[/latex], even though the formula cannot be written explicitly.

Evaluating a Function Given in Tabular Form
As we saw above, we can represent functions in tables. Conversely, we can use information in tables to write functions, and we can evaluate functions using the tables. For example, how well do our pets recall the fond memories we share with them? There is an urban legend that a goldfish has a memory of 3 seconds, but this is just a myth. Goldfish can remember up to 3 months, while the beta fish has a memory of up to 5 months. And while a puppy's memory span is no longer than 30 seconds, the adult dog can remember for 5 minutes. This is meager compared to a cat, whose memory span lasts for 16 hours. The function that relates the type of pet to the duration of its memory span is more easily visualized with the use of a table. See the table below.

Pet (input): Puppy, Adult dog, Goldfish, Beta fish
Memory span in hours (output): 0.008, 0.083, 2160, 3600

At times, evaluating a function in table form may be more useful than using equations. Here let us call the function [latex]P[/latex]. The domain of the function is the type of pet and the range is a real number representing the number of hours the pet's memory span lasts. We can evaluate the function [latex]P[/latex] at the input value of "goldfish." We would write [latex]P\left(\text{goldfish}\right)=2160[/latex]. Notice that, to evaluate the function in table form, we identify the input value and the corresponding output value from the pertinent row of the table. The tabular form for function [latex]P[/latex] seems ideally suited to this function, more so than writing it in paragraph or function form.

How To: Given a function represented by a table, identify specific output and input values.
Find the given input in the row (or column) of input values. Identify the corresponding output value paired with that input value. Find the given output values in the row (or column) of output values, noting every time that output value appears. Identify the input value(s) corresponding to the given output value.

Example 11: Evaluating and Solving a Tabular Function
Using the table below, evaluate [latex]g\left(3\right)[/latex] and solve [latex]g\left(n\right)=6[/latex].

[latex]n[/latex]: 1, 2, 3, 4, 5
[latex]g\left(n\right)[/latex]: 8, 6, 7, 6, 8

Evaluating [latex]g\left(3\right)[/latex] means determining the output value of the function [latex]g[/latex] for the input value of [latex]n=3[/latex]. The table output value corresponding to [latex]n=3[/latex] is 7, so [latex]g\left(3\right)=7[/latex]. Solving [latex]g\left(n\right)=6[/latex] means identifying the input values, [latex]n[/latex], that produce an output value of 6. The table shows two solutions: [latex]n=2[/latex] and [latex]n=4[/latex]. When we input 2 into the function [latex]g[/latex], our output is 6. When we input 4 into the function [latex]g[/latex], our output is also 6. Using the table in Example 11, evaluate [latex]g\left(1\right)[/latex].
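A table such as the one in Example 11 can be treated as a small lookup structure. The sketch below is illustrative only (not part of the lesson): it evaluates g(3) and finds every input that produces an output of 6.

g = {1: 8, 2: 6, 3: 7, 4: 6, 5: 8}             # the table Q = g(n)
print(g[3])                                    # evaluate: g(3) = 7
print([n for n, q in g.items() if q == 6])     # solve g(n) = 6: [2, 4]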
Finding Function Values from a Graph
Evaluating a function using a graph also requires finding the corresponding output value for a given input value, only in this case, we find the output value by looking at the graph. Solving a function equation using a graph requires finding all instances of the given output value on the graph and observing the corresponding input value(s).

Example 12: Reading Function Values from a Graph
Given the graph in Figure 6, evaluate [latex]f\left(2\right)[/latex] and solve [latex]f\left(x\right)=4[/latex]. To evaluate [latex]f\left(2\right)[/latex], locate the point on the curve where [latex]x=2[/latex], then read the y-coordinate of that point. The point has coordinates [latex]\left(2,1\right)[/latex], so [latex]f\left(2\right)=1[/latex]. See Figure 7. To solve [latex]f\left(x\right)=4[/latex], we find the output value [latex]4[/latex] on the vertical axis. Moving horizontally along the line [latex]y=4[/latex], we locate two points of the curve with output value [latex]4[/latex]: [latex]\left(-1,4\right)[/latex] and [latex]\left(3,4\right)[/latex]. These points represent the two solutions to [latex]f\left(x\right)=4[/latex]: [latex]x=-1[/latex] or [latex]x=3[/latex]. This means [latex]f\left(-1\right)=4[/latex] and [latex]f\left(3\right)=4[/latex]; that is, when the input is [latex]-1[/latex] or [latex]3[/latex], the output is [latex]4[/latex]. See Figure 8. Using Figure 7, solve [latex]f\left(x\right)=1[/latex].

Determining Whether a Function is One-to-One
Some functions have a given output value that corresponds to two or more input values. For example, in the following stock chart the stock price was $1000 on five different dates, meaning that there were five different input values that all resulted in the same output value of $1000. However, some functions have only one input value for each output value, as well as having only one output for each input. We call these functions one-to-one functions. As an example, consider a school that uses only letter grades and decimal equivalents, as listed in the table below. (The grading table is not reproduced here; for example, the letter grade D corresponds to a grade point average of 1.0.) This grading system represents a one-to-one function, because each letter input yields one particular grade point average output and each grade point average corresponds to one input letter.

To visualize this concept, let's look again at the two simple functions sketched in (a) and (b) of Figure 10. The function in part (a) shows a relationship that is not a one-to-one function because inputs [latex]q[/latex] and [latex]r[/latex] both give output [latex]n[/latex]. The function in part (b) shows a relationship that is a one-to-one function because each input is associated with a single output.

A General Note: One-to-One Function
A one-to-one function is a function in which each output value corresponds to exactly one input value.

Example 13: Determining Whether a Relationship Is a One-to-One Function
Is the area of a circle a function of its radius? If yes, is the function one-to-one? A circle of radius [latex]r[/latex] has a unique area measure given by [latex]A=\pi {r}^{2}[/latex], so for any input, [latex]r[/latex], there is only one output, [latex]A[/latex]. The area is a function of radius [latex]r[/latex]. If the function is one-to-one, the output value, the area, must correspond to a unique input value, the radius. Any area measure [latex]A[/latex] is given by the formula [latex]A=\pi {r}^{2}[/latex]. Because areas and radii are positive numbers, there is exactly one solution: [latex]r=\sqrt{\frac{A}{\pi }}[/latex].
So the area of a circle is a one-to-one function of the circle's radius. Is a balance a function of the bank account number? Is a bank account number a function of the balance? Is a balance a one-to-one function of the bank account number? Using the Vertical Line Test As we have seen in some examples above, we can represent a function using a graph. Graphs display a great many input-output pairs in a small space. The visual information they provide often makes relationships easier to understand. By convention, graphs are typically constructed with the input values along the horizontal axis and the output values along the vertical axis. The most common graphs name the input value [latex]x[/latex] and the output value [latex]y[/latex], and we say [latex]y[/latex] is a function of [latex]x[/latex], or [latex]y=f\left(x\right)[/latex] when the function is named [latex]f[/latex]. The graph of the function is the set of all points [latex]\left(x,y\right)[/latex] in the plane that satisfies the equation [latex]y=f\left(x\right)[/latex]. If the function is defined for only a few input values, then the graph of the function is only a few points, where the x-coordinate of each point is an input value and the y-coordinate of each point is the corresponding output value. For example, the black dots on the graph in Figure 11 tell us that [latex]f\left(0\right)=2[/latex] and [latex]f\left(6\right)=1[/latex]. However, the set of all points [latex]\left(x,y\right)[/latex] satisfying [latex]y=f\left(x\right)[/latex] is a curve. The curve shown includes [latex]\left(0,2\right)[/latex] and [latex]\left(6,1\right)[/latex] because the curve passes through those points. The vertical line test can be used to determine whether a graph represents a function. If we can draw any vertical line that intersects a graph more than once, then the graph does not define a function because a function has only one output value for each input value. How To: Given a graph, use the vertical line test to determine if the graph represents a function. Inspect the graph to see if any vertical line drawn would intersect the curve more than once. If there is any such line, determine that the graph does not represent a function. Example 14: Applying the Vertical Line Test Which of the graphs represent(s) a function [latex]y=f\left(x\right)?[/latex] If any vertical line intersects a graph more than once, the relation represented by the graph is not a function. Notice that any vertical line would pass through only one point of the two graphs shown in parts (a) and (b) of Figure 13. From this we can conclude that these two graphs represent functions. The third graph does not represent a function because, at most x-values, a vertical line would intersect the graph at more than one point. Does the graph in Figure 15 represent a function? Using the Horizontal Line Test Once we have determined that a graph defines a function, an easy way to determine if it is a one-to-one function is to use the horizontal line test. Draw horizontal lines through the graph. If any horizontal line intersects the graph more than once, then the graph does not represent a one-to-one function. How To: Given a graph of a function, use the horizontal line test to determine if the graph represents a one-to-one function. Inspect the graph to see if any horizontal line drawn would intersect the curve more than once. If there is any such line, determine that the function is not one-to-one. 
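For a graph given only as a finite set of plotted points, both tests reduce to counting: a vertical line at a given x hits every point with that x-value, and a horizontal line at a given y hits every point with that y-value. The following sketch is my own illustration (the point set is made up, reusing the two points named for Figure 11), not something from the lesson.

def passes_vertical_line_test(points):
    """Function test: no x-value may appear with two different y-values."""
    xs = {}
    for x, y in points:
        if x in xs and xs[x] != y:
            return False
        xs[x] = y
    return True

def passes_horizontal_line_test(points):
    """One-to-one test: no y-value may appear with two different x-values."""
    ys = {}
    for x, y in points:
        if y in ys and ys[y] != x:
            return False
        ys[y] = x
    return True

pts = [(0, 2), (6, 1), (2, 1)]
print(passes_vertical_line_test(pts))    # True: each input has one output, so it is a function
print(passes_horizontal_line_test(pts))  # False: the output 1 appears twice, so it is not one-to-one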
Example 15: Applying the Horizontal Line Test
Consider the functions (a) and (b) shown in the graphs in Figure 16. Is either of the functions one-to-one? The function in (a) is not one-to-one. The horizontal line shown in Figure 17 intersects the graph of the function at two points (and we can even find horizontal lines that intersect it at three points). The function in (b) is one-to-one. Any horizontal line will intersect a diagonal line at most once.

Identifying Basic Toolkit Functions
In this text, we will be exploring functions—the shapes of their graphs, their unique characteristics, their algebraic formulas, and how to solve problems with them. When learning to read, we start with the alphabet. When learning to do arithmetic, we start with numbers. When working with functions, it is similarly helpful to have a base set of building-block elements. We call these our "toolkit functions," which form a set of basic named functions for which we know the graph, formula, and special properties. Some of these functions are programmed to individual buttons on many calculators. For these definitions we will use [latex]x[/latex] as the input variable and [latex]y=f\left(x\right)[/latex] as the output variable. We will see these toolkit functions, combinations of toolkit functions, their graphs, and their transformations frequently throughout this book. It will be very helpful if we can recognize these toolkit functions and their features quickly by name, formula, graph, and basic table properties. The graphs and sample table values are included with each function shown below.

Toolkit Functions
Constant: [latex]f\left(x\right)=c[/latex], where [latex]c[/latex] is a constant
Identity: [latex]f\left(x\right)=x[/latex]
Absolute value: [latex]f\left(x\right)=|x|[/latex]
Quadratic: [latex]f\left(x\right)={x}^{2}[/latex]
Cubic: [latex]f\left(x\right)={x}^{3}[/latex]
Reciprocal: [latex]f\left(x\right)=\frac{1}{x}[/latex]
Reciprocal squared: [latex]f\left(x\right)=\frac{1}{{x}^{2}}[/latex]
Square root: [latex]f\left(x\right)=\sqrt{x}[/latex]
Cube root: [latex]f\left(x\right)=\sqrt[3]{x}[/latex]

Key Equations
Constant function: [latex]f\left(x\right)=c[/latex], where [latex]c[/latex] is a constant
Identity function: [latex]f\left(x\right)=x[/latex]
Absolute value function: [latex]f\left(x\right)=|x|[/latex]
Quadratic function: [latex]f\left(x\right)={x}^{2}[/latex]
Cubic function: [latex]f\left(x\right)={x}^{3}[/latex]
Reciprocal function: [latex]f\left(x\right)=\frac{1}{x}[/latex]
Reciprocal squared function: [latex]f\left(x\right)=\frac{1}{{x}^{2}}[/latex]
Square root function: [latex]f\left(x\right)=\sqrt{x}[/latex]
Cube root function: [latex]f\left(x\right)=\sqrt[3]{x}[/latex]
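The toolkit functions can also be collected in one place for experimentation. The sketch below is an informal illustration of my own (the dictionary keys and sample inputs are arbitrary choices), tabulating each toolkit function at a few positive inputs so that the reciprocal and root functions are all defined.

toolkit = {
    "constant (c = 2)":   lambda x: 2,
    "identity":           lambda x: x,
    "absolute value":     lambda x: abs(x),
    "quadratic":          lambda x: x**2,
    "cubic":              lambda x: x**3,
    "reciprocal":         lambda x: 1 / x,
    "reciprocal squared": lambda x: 1 / x**2,
    "square root":        lambda x: x**0.5,
    "cube root":          lambda x: x**(1/3),
}

for name, f in toolkit.items():
    print(name, [f(x) for x in (1, 2, 4)])   # basic table values for each function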
Key Concepts
A relation is a set of ordered pairs. A function is a specific type of relation in which each domain value, or input, leads to exactly one range value, or output. Function notation is a shorthand method for relating the input to the output in the form [latex]y=f\left(x\right)[/latex]. In tabular form, a function can be represented by rows or columns that relate to input and output values. To evaluate a function, we determine an output value for a corresponding input value. Algebraic forms of a function can be evaluated by replacing the input variable with a given value. To solve for a specific function value, we determine the input values that yield the specific output value. An algebraic form of a function can be written from an equation. Input and output values of a function can be identified from a table. Relating input values to output values on a graph is another way to evaluate a function. A function is one-to-one if each output value corresponds to only one input value. A graph represents a function if any vertical line drawn on the graph intersects the graph at no more than one point. The graph of a one-to-one function passes the horizontal line test.

Glossary
dependent variable: an output variable
domain: the set of all possible input values for a relation
function: a relation in which each input value yields a unique output value
horizontal line test: a method of testing whether a function is one-to-one by determining whether any horizontal line intersects the graph more than once
independent variable: an input variable
input: each object or value in a domain that relates to another object or value by a relationship known as a function
one-to-one function: a function for which each value of the output is associated with a unique input value
output: each object or value in the range that is produced when an input value is entered into a function
range: the set of output values that result from the input values in a relation
relation: a set of ordered pairs
vertical line test: a method of testing whether a graph represents a function by determining whether a vertical line intersects the graph no more than once

Function Notation Application. Authored by: James Sousa. Located at: https://www.youtube.com/watch?v=nAF_GZFwU1g. License: CC BY: Attribution
All rights reserved content: Determine if a Relation is a Function. Authored by: James Sousa. Located at: https://youtu.be/zT69oxcMhPw. License: All Rights Reserved. License Terms: Standard YouTube License
How is Doppler radar used in rain prediction? (tags: rainfall, rain; asked by Communisty)

Do you have further context of what time of prediction they are talking? Doppler radar is great for predicting the next hour or so... as you can see what's aiming your way. But if they're saying Doppler helps predict hours down the road... it's not really all that useful. Strikes me as just a silly movie phrase, especially with the Doppler and Super Doppler phrase... just meant to sound silly and not very knowledgeable (Super Doppler is just a title some tv stations use for their Doppler!) – JeopardyTempest Aug 8 '17 at 8:07

Also towards the suggestion it's intended to be unknowledgeable or humorous: many forecasters, including the National Weather Service, don't vocalize forecasts of rain chances below 20% (you'll find the term "silent 10" in many forecast discussions by Googling). Plus typically forecasts are in 10% increments. So you generally won't see 5% basically anywhere except computer forecasts, at least in the US. – JeopardyTempest Aug 8 '17 at 8:10

Does anyone else feel like the edits have lost the heart of the question? I wasn't sure what the user meant, but it seems there's a good chance we lost the core of it? – JeopardyTempest Sep 4 '17 at 15:29

The easiest answer is that Safe Haven is a movie, and the writer made a factual error (which is not uncommon). I have not watched that movie, so I am unsure what type of situation they were dealing with. Doppler radars do not predict, but they observe. What is possible, however, is extrapolation and the creation of inferences. For example, tornadoes are not directly predicted by Doppler radar, but by making inferences from the data, a meteorologist may detect a tornado. In a similar sense, if the radar-derived 'storm total precipitation' has estimated an average 0.25 inches of rain from a squall line for the past 200 miles, one may extrapolate, or estimate, that the squall line will produce ~0.25 inches of rain. Could the squall line alter its path or intensity and drastically change the amount? Sure, but without additional information, including information not derived from a radar, that would be a difficult task. Edit: OK, I think I have a better idea of what the question was and how it can be answered. Doppler radar can measure wind speed relative to the radar site, so it can sense how fast a storm is approaching. Assuming the storm does not change its speed, and given its history, you can infer when it will arrive at the radar site. For example, if a storm is 50 miles away, and it moves at 25 miles per hour, it will arrive in $50\text{ mi}\div 25\frac{\text{mi}}{\text{hr}}=2\text{ hr}$, provided the storm does not change. – BarocliniCplusplus

First off, as others have pointed out, "Doppler" refers to the ability to determine the velocity of stuff (rain, snow, etc.) towards/away from the radar. Radar data are often assimilated to produce better numerical weather forecasts (i.e. using computer weather models). Typically the "main" variable of interest for this is the Doppler radial velocity (the velocity at which the rain/cloud/snow is moving towards/away from the radar) and NOT the radar reflectivity (which is related to the size/concentration of the rain/cloud/snow particles). Usually what you'll see on a TV weather station's radar display is something related to the reflectivity.
The reasons why numerical weather forecast data assimilation mostly uses radial velocity are perhaps too complicated to get into here, but in short, the radial velocity provides information to the numerical weather model that's less complicated for the model to "use" than the reflectivity. Doppler radial velocity from a radar is an essential tool for determining if a thunderstorm is rotating, and whether it might produce a tornado or other damaging winds. Google terms like "tornado vortex signature" or "tornado velocity couplet" to see what this looks like. Generally for tornadoes the radar will show air moving towards the radar in close proximity to air moving away from the radar, indicating rotation. Doppler radars are also used for more sophisticated research into cloud physics. For example, by pointing a radar vertically and observing the velocity at which rain/snow/hail falls, insight can be gained into the composition of these particles and the processes that grow them. – ssorg

First, rain (oversimplified): rain occurs when water vapor in the air cools such that the air becomes saturated and the water vapor condenses as liquid water. It's colder up high, so hot, vapor-laden air rises, cools and releases rain. Lots of air and water moving about. Doppler is the change in frequency of a wave as it bounces off a moving target. Radar is a wave that bounces off an object and is detected back at the source. When radar bounces off water-laden air moving upward (detected by the Doppler effect) and cooling, it's an indication that something rain-like is occurring. Differing movements are indicative of differing parts of a storm. Not long ago, one or two planes full of people would crash at airports when microbursts of cool air would descend vertically and drive them into the ground. Since Doppler radar, we can see these coming and avoid them.

Doppler radar does not see air moving upward or downward. It only sees the radial component of the velocity of water particles moving toward or away from the radar antenna. The component of velocity orthogonal to the radial line is undetectable by radar. This is the zero isodop problem. – David Hammen Aug 9 '17 at 20:33

My understanding is that such flow was turbulent and provided enough data for tracking. But I'm not hands on and will defer to a better knowledge. – TomO Aug 10 '17 at 15:47

@DavidHammen but rising or falling air will have a component orthogonal to the radial lines from the radar! I mean, unless the radar is pointing directly up or down? ;-) – Semidiurnal Simon Jul 9 at 16:51

@SemidiurnalSimon - Doppler detects range rate, the component of velocity along the line from the radar site to the target. The targets in this case, raindrops falling from clouds, are typically removed from the radar site by tens of kilometers. This means the vertical component of the falling rain is more or less undetectable, as is the horizontal component of velocity that is orthogonal to the line from the radar site to the clouds. Combining results from multiple Doppler radars gives a nice 2D view of the rain, but the vertical component remains more or less undetectable. – David Hammen Jul 9 at 23:03

@DavidHammen sorry, I got my language confused. But I also see your point that the radial component of far-away vertical movement is very small. – Semidiurnal Simon Jul 9 at 23:31
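To put a number on the point made in the comments, here is a small back-of-the-envelope sketch of my own (not from any of the answers): for a target at elevation angle ε, only the component of motion along the beam is measured, so a purely vertical velocity w contributes roughly w·sin(ε) to the observed radial velocity.

import math

def radial_component(vertical_speed_ms, elevation_deg):
    """Line-of-sight (radial) contribution of a purely vertical motion."""
    return vertical_speed_ms * math.sin(math.radians(elevation_deg))

# A 5 m/s updraft seen at a typical low scan elevation of 0.5 degrees:
print(radial_component(5.0, 0.5))    # ~0.04 m/s, essentially invisible to the radar
# The same updraft with the antenna pointed straight up:
print(radial_component(5.0, 90.0))   # 5.0 m/s, fully visible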
Some atmospheric models (namely the US NCEP HRRR) utilize radar as a source of initial conditions. I don't know the details of exactly how the radar data are assimilated. But Doppler data do provide the model a three-dimensional rendition of moisture and wind patterns on a scale that no other observing instrument is able to resolve, with our current technology. – jakewx

Doppler radar is not used any differently to predict weather than older forms of radar were, than satellite imagery is, or than the physical observations that were once called or telegraphed ahead of an advancing system. The radar detects the presence of clouds with a water load that are likely to produce precipitation, or may already be doing so. That is compared to the direction the potential precipitation is moving and to historical records of when similar conditions have occurred in the past: how likely is it to be repeated for a given location? An observation is made: hey, there is a cloud bank forming there that seems to hold a lot of water. It is moving in this direction at this speed, and the barometric pressure is doing this with a temperature like that. The last 10 times we had conditions that matched this, it rained 7 times, so let's say there is a 70% chance it is going to rain. Doppler radar is, in general terms, a more sensitive way of making these observations than older radar was, which in turn was more sensitive and accurate than having physical observers a few hundred miles away calling and saying, hey, it's raining here, you will probably get rain in a few hours. – dlb

Doppler radar is used quite differently to predict weather than older forms of radar. There's a reason the US National Weather Service upgraded their radar systems nationwide to NEXRAD in the 1990s. Doppler detects wind velocity (more specifically, range rate). Older radar systems did not. This added dimension significantly improved short term forecasts of severe weather. – David Hammen Aug 9 '17 at 20:20

@DavidHammen I did not address how Doppler works. The OP asked how it is used to predict rain, not for an engineering explanation of how it functions, or trivia like how, until the software AI was improved, it could not tell the difference between a swarm of insects and a storm. I know about the vector differentials and side-slip analysis which are analysed to signal potential wind shear and vortex formation, none of which is important to his question but is very important as to why millions of dollars were spent to upgrade to newer technology. – dlb Aug 9 '17 at 21:31

The base answer is it sees conditions that indicate precipitation is possible. Those are compared to past similar conditions and a determination of the likelihood of a repeat is calculated and relayed. It is a vast improvement over looking out a window and saying "Looks like rain". – dlb Aug 9 '17 at 21:33
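The "extrapolate from observed motion" idea that runs through the answers above is simple enough to sketch. This is purely illustrative (my own toy function; real nowcasting systems track radar echoes in far more sophisticated ways):

def hours_until_arrival(distance_miles, speed_mph):
    """Naive nowcast: assume the storm keeps its current speed and heading."""
    if speed_mph <= 0:
        raise ValueError("storm is not approaching")
    return distance_miles / speed_mph

print(hours_until_arrival(50, 25))   # 2.0 hours, matching the worked example in the first answer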
Is square pyramidal and octahedral the same?

The geometry that chemists call octahedral is sometimes called square bipyramidal by mineralogists; why? Are they one and the same? A square bipyramid has six regions of high electron density around the central atom and no lone pairs, which is exactly the octahedral arrangement, so the two names describe the same geometry; chemists simply don't usually call it square bipyramidal.

In inorganic chemistry, an octahedral complex is classified by its molecular geometry: six atoms, groups of atoms, or electron pairs are symmetrically arranged around one central atom, defining the vertices of an octahedron. The prefix octa-, which means eight, comes from the fact that the shape has eight symmetrical faces. A simple comprehension of geometry is required to imagine these molecules in 3D, along with basic background knowledge of bonding pairs and lone pairs. Because electron pairs all carry the same charge, they repel one another and need to be as far apart as possible, so the atoms and electron pairs arrange themselves in the most stable form available, limiting bond pair to bond pair, bond pair to lone pair, and lone pair to lone pair interactions. This allows us to distinguish and classify the octahedral species by the following shapes: octahedral, square pyramidal, and square planar. Here are some examples of the three-dimensional structure in simple compounds.

Octahedral (6 bond pairs and 0 lone pairs): the central atom is symmetrically surrounded by six other atoms or groups. All the atoms are spread 90 degrees apart from each other and 180 degrees from the atom directly across and opposite. A molecule with no lone pairs on the central atom keeps this distinct shape; SF6 is the classic example. (Sample octahedral image adapted from Wikipedia, keyword "octahedral geometry.")

Square pyramidal (5 bond pairs and 1 lone pair): basically an octahedral arrangement with one less bond. One pair of electrons has taken the place of one of the atoms, and because these electrons are present the molecule takes on a distinctly new look: the lone pair repels the five remaining bonds, so the bond angles are all slightly lower than 90 degrees. The molecule is still considered part of the octahedral species because it still has six electron groups around the central atom. Square pyramidal is the molecular shape that results when there are five bonds and one lone pair on the central atom; in molecular geometry it describes compounds with the formula ML5, where L is a ligand, and if the ligand atoms were connected, the resulting shape would be that of a pyramid with a square base. Bromine pentafluoride (BrF5) has the geometry of a square pyramid, with fluorine atoms occupying five vertices, one of which is above the plane of the other four. The shape is polar since it is asymmetrical. (Sample square pyramidal image adapted from Wikipedia, keyword "square pyramidal geometry.")

Square planar (4 bond pairs and 2 lone pairs): here two pairs of electrons have taken the place of two atoms, and they sit on opposite sides of the central atom so that lone pair to lone pair interaction is kept to a minimum. The remaining four atoms connected to the central atom give the molecule a square planar shape; if you try visualizing it, it almost resembles a three-dimensional "X" with two pairs of lone electrons. In square planar molecular geometry, a central atom is surrounded by constituent atoms that form the corners of a square on the same plane. Square planar coordination is rare except for d8 metal ions, and the geometry is prevalent for transition-metal complexes with d8 configuration. (Sample square planar image adapted from Wikipedia, keyword "square planar geometry.")

Give one example of a molecule that falls into each category: SF6 is octahedral, BrF5 is square pyramidal, and [AuCl4]- is square planar. What conditions must be met, and what causes the three different octahedral species to arrange the way they do? The atoms and electron pairs settle into the most stable structure possible, limiting the repulsive interactions between bonding pairs and lone pairs; two lone pairs avoid standing 90 degrees apart when positions 180 degrees apart are available, which is why the square planar lone pairs sit opposite one another. The replacement of the first bonding group by a lone pair can occur in any position and always produces a square pyramidal molecular geometry; the second bonding group replaced is always opposite the first, producing the square planar molecular geometry. As long as these conditions can be met, the structure can not only exist but remain stable.

Related student questions quoted on this page: Explain why PCl5 is trigonal bipyramidal whereas IF5 is square pyramidal. Why can't SF6 form a square pyramidal shape if its hybridisation (sp3d2) is the same as that of BrF5, and why can't BrF5 form an octahedral shape even though its hybridisation (sp3d2) is the same as that of SF6? (In both cases the difference is the lone pair: SF6 has six bonding pairs and no lone pair, while BrF5 and IF5 have five bonding pairs plus one lone pair, so the same six-domain electron arrangement gives different molecular shapes.) An AB4 molecule has one lone pair of electrons on the A atom, in addition to the four B atoms; what is the electron-domain geometry around the A atom? (With five electron domains the electron-domain geometry is trigonal bipyramidal, and the molecular shape is the see-saw, which is basically the trigonal bipyramidal shape with one bond removed; one answer quotes its three bond angles as 102 degrees, 86.5 degrees and 187 degrees.) Is there a trans effect on octahedral or square pyramidal complexes? We've always learned about the trans effect from square planar complexes, but do the same effects work on any two transoid ligands in other arrangements? To work out a shape from scratch, start from the Lewis structure: the [BrF4]- ion, for example, has a total of 36 valence electrons, 7 from bromine, 7 from each of the four fluorine atoms, and one extra electron to give the ion the -1 charge, which leaves four bonding pairs and two lone pairs on bromine and therefore a square planar ion.

Crystal-field and molecular-orbital notes gathered on the same page: in an octahedral system the amount of d-orbital splitting is arbitrarily assigned to 10Dq (oh). Notice that the energy splitting in the tetrahedral arrangement is the opposite of the splitting in octahedral arrangements. The splitting diagram for square planar complexes is more complex than for octahedral and tetrahedral complexes and gives the relative energies of each orbital; in general, the size of the splitting in a square planar complex, DSP, is 1.3 times greater than Do for complexes with the same metal and ligands, and the distortion results in square planar complexes with lower energies than the comparable octahedral complex. Crystal field stabilization energy can be calculated for linear, trigonal planar, square planar, tetrahedral, trigonal bipyramidal, square pyramidal and octahedral fields. Crystal field theory nevertheless fails to explain many physical properties of transition-metal complexes; in molecular orbital theory for octahedral, tetrahedral or square planar complexes, the number of molecular orbitals formed is the same as the number of atomic orbitals combined. The reduction potential of octahedral complexes is subtly different from that of square pyramidal ones, and the observed difference of the oxidation potentials can be used to discriminate octahedral from square planar vanadyl complexes owing to the same equatorial environment.

Research snippets also quoted on the page: the Ni(II) ion has an octahedral coordination in complexes I and II, a square pyramidal structure in complex III and a square planar structure in complex IV; in one complex the pyrazine-2,3-dicarboxylato ligand acts as a bridging ligand via the ring nitrogen atoms and the carboxyl oxygen atoms while the 1-vinylimidazole ligand coordinates to …; another material has Zn(II) ions with four (tetrahedral), five (square pyramidal) and six (octahedral) coordination numbers on the same polymeric chain; and an EPR study describes an exchange-coupled, hydrogen-bridged one-dimensional Cu(II) complex containing both octahedral and square pyramidal geometries in the same unit …. For five-coordination more generally, the two most symmetrical configurations are the square pyramid (C4v symmetry) and the trigonal bipyramid (D3h symmetry); they can be interconverted by means of simple angular distortions, such as increasing the angle B1-M-B2 until A1, B1, B2 and A2 are coplanar, and vice versa, and these idealized structures are rarely met with in practice (Figure 1).

The word octahedral also turns up outside chemistry. Since an octahedron has a circumradius divided by edge length less than one, the triangular pyramids can be made with regular faces (as regular tetrahedra) by computing the appropriate height. In 4-dimensional geometry, the octahedral pyramid is bounded by one octahedron on the base and 8 triangular pyramid cells which meet at the apex. In number theory, an octahedral number is a figurate number that represents the number of spheres in an octahedron formed from close-packed spheres; the nth octahedral number O_n can be obtained by the formula O_n = n(2n^2 + 1)/3. Square pyramidal numbers are related to tetrahedral numbers in a different way: the sum of two consecutive square pyramidal numbers is an octahedral number, and augmenting a pyramid whose base edge has n balls by adding to one of its triangular faces a tetrahedron whose base edge has n - 1 balls produces a triangular prism.

Reference: Housecroft, Catherine E., and Alan G. Sharpe (2002). Inorganic Chemistry. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0.
Reducing the impact of location errors for target tracking in wireless sensor networks

Éfren L. Souza, Eduardo F. Nakamura, Horácio A. B. F. de Oliveira & Carlos M. S. Figueiredo

Journal of the Brazilian Computer Society volume 19, pages 89–104 (2013)

In wireless sensor networks (WSNs), target tracking algorithms usually depend on geographical information provided by localization algorithms. However, errors introduced by such algorithms affect the performance of tasks that rely on that information. A major source of errors in localization algorithms is the distance estimation procedure, which is often based on received signal strength indicator measurements. In this work, we use a Kalman Filter to improve the distance estimation within localization algorithms and thus reduce distance estimation errors, ultimately improving the target tracking accuracy. As a proof-of-concept, we chose the recursive position estimation and directed position estimation as the localization algorithms, while Kalman and Particle filters are used for tracking a moving target. We provide a deep performance assessment of these combined algorithms (localization and tracking) for WSNs. Our results show that by filtering multiple distance estimates in the localization algorithms we can improve the tracking accuracy, but the associated communication cost must not be neglected.

A wireless sensor network (WSN) [1] is a special type of ad-hoc network composed of resource-constrained devices, called sensor nodes. These sensors are able to perceive the environment, and to collect, process and disseminate environmental data. Tracking the location of a moving entity (event) represents an important class of applications for WSNs. For instance, animal tracking enables the long-term assessment of species, improving our knowledge about biodiversity and supporting the preservation and conservation of wildlife [7, 21, 28]. Target tracking is particularly dependent on location information, and current localization algorithms [2, 24] cannot perfectly estimate every node location [25, 30]. Various approaches have been proposed for target tracking in WSNs, considering diverse metrics like accuracy, scalability, and density [6, 18, 31, 36, 38]. However, there is little research assessing the impact localization algorithms have on target tracking performance. Current approaches either assume that every sensor node knows its position perfectly [20], or simulate localization errors by adding a random noise variable to the correct node position [12, 19].

In this work, we assess the performance of target tracking algorithms when position information is based on actual localization algorithms. Then, we demonstrate how an information fusion technique [20] can be used to mitigate errors of localization algorithms, improving the target tracking accuracy. To do that, multiple distance estimates are fused by a Kalman filter during the localization process. Such an evaluation is a step towards understanding the relationship between localization and target tracking algorithms, and towards the design of integrated solutions that exploit features and requirements shared by these tasks. As a proof-of-concept, we evaluate two localization algorithms on two tracking algorithms. The first localization algorithm is the recursive position estimation (RPE) algorithm [2]—a pioneer iterative solution—while the second is the directed position estimation (DPE) algorithm [24]—a solution that evolved from the original RPE.
The tracking algorithms we evaluate are the Kalman filter (KF) [15] and particle filter (PF) [3]. These filters can be considered as canonical solutions for the target tracking problem. The remainder of the work is organized as follows. In Sect. 2, we present the related work and background knowledge required for localization and target tracking problems. In Sect. 3, we present a simple information-fusion approach for reducing localization/tracking errors. Section 4 presents our experimental methodology and quantitative evaluation. Finally, in Sect. 5, we present our conclusions and future work. Background and related work In this section, we describe the state-of-the-art regarding localization and tracking algorithms, putting emphasis on the algorithms evaluated in this work. A localization system in sensor networks basically consists of determining the physical location of the sensor nodes [25]. These systems are usually divided into three phases: distance estimation, position computation and localization algorithm [5]. In current localization solutions, a limited number of nodes, called beacon or anchor nodes, are aware of their positions. Then, distributed algorithms share beacon information, so that the remainder of the nodes can estimate their position. The ad hoc positioning system (APS) [22] works as an extension of both the distance vector routing and GPS positioning in order to provide a localization system in which a limited fraction of nodes have self-location capability (e.g., GPS-equipped nodes). An approach that uses mobile beacon to provide the node location in sensor networks is proposed by Sichitiu and Ramadurai [27]. In this algorithm, one or more beacon nodes move through the sensor field broadcasting their positions to all nodes within in the beacon range. When a node receives three or more positions it computes its own position. Tatham and Kunz [30] show that the position of the beacon nodes can impact the localization error, furthermore they propose a set of guidelines to improve the positions of the nodes using the smallest number of beacon nodes possible. The recursive position estimation (RPE) [2] iteratively computes the node location information without the need for strategic beacon placement. The directed position estimation (DPE) [24] is a similar algorithm that uses the direction of the recursion to improve the localization accuracy. Both the RPE and DPE propagate position errors throughout the network. However, in the DPE this error is reduced by selecting the best reference neighbors. These two algorithms are evaluated in this work, so they are treated in more detail in the next subsections. Recursive position estimation The RPE [2] is a positioning system that requires at least 5 % of the nodes to be beacon nodes, randomly distributed in the sensor field. However, depending on the network density and on the beacons arrangement, we need a larger number of beacons to start the recursion. In this algorithm, every free node needs the minimum of three references to estimate its position. Estimated positions are broadcasted to help other nodes estimate their positions recursively. The number of estimated positions increases iteratively as new estimated nodes assist others estimating their positions. The RPE algorithm can be divided into four phases (see Fig. 1). In the first phase, beacon nodes broadcast their position so they can be used as reference nodes. 
In the second phase, a node estimates its distance to the reference nodes by using, for example, the received signal strength indicator (RSSI) [5]. In the third phase, the node computes its position by using multilateration [5], and becomes a settled node. In the final phase, the node becomes a reference, and broadcasts its estimated position to assist its neighbors. Example and phases of the recursive position estimation (RPE) The directed position estimation By using settled nodes as reference nodes, location errors are propagated. The reason is that the distance estimation process introduce errors in the estimated positions. As a consequence, the most distant nodes of the beacons are likely to have larger errors than the closer ones. In Fig. 1, the location error for node 5 is probably greater than the location error for node 7. The algorithm attempt to mitigate propagated errors by ignoring the worst references. The references quality is given by the residual value defined as $$\begin{aligned} residual(x, y) = \sum _{i=1}^R \left( \sqrt{ (x_i - x)^2 + (y_i - y)^2 } - d_i \right)^2\nonumber \\ \end{aligned}$$ where \(R\) is the number of references, \((x, y)\) is the estimated position, \((x_i, y_i)\) is the \(i\)th reference position and \(d_i\) is its measured range. The RPE is an algorithm that uses multiple hops to determine the nodes position. Hence, the network topology does not have to follow a special organization, making it suitable for outdoor scenarios. Directed position estimation The DPE [24] algorithm is similar to the RPE algorithm. The main idea of the DPE is to start the recursion at a single location, and make it follow a known direction. Then, a node can estimate its position by using only two reference neighbors and the recursion direction. This controlled recursion leads to smaller errors, compared to RPE. To ensure that the recursion starts at a single point, the algorithm uses a fixed beacon structure. The recursion direction and the beacon structure are depicted in Fig. 2a. This structure has, generally, four beacons that know their distance from the recursion origin and the angle between each pair of beacons. Then, to start the recursion, these beacons inform their positions to their neighbors. When a node receives the position from two reference neighbors (see Fig. 2b), a pair of possible points results from the system: one is the correct position and the other is the incorrect. Because the direction of the recursion is known, the node can choose between the two possible solutions: the most distant point from the recursion origin is the correct choice. The algorithm is divided into four phases. In the first phase, beacon nodes start the recursion from a single location. In the second phase, a node chooses two reference points: the pair of nodes with the largest distance between them, and closest to the recursion origin. In the third phase, the node estimates its position. This position is estimated by intersecting the two circles and choosing the most distant point from the recursion origin. In the last phase, the node becomes a reference by sending its information to its neighbors. The recursion direction can occasionally become wrong. To a correct estimation it is necessary to avoid two possible situations: (a) when the unknown node is closer to the recursion origin than one of the two reference nodes; and (b) when both reference nodes are aligned with the recursion origin. 
These two scenarios can be detected by comparing the distances from the possible solutions to the recursion origin with the distances from the reference nodes to the recursion origin. The DPE also propagates localization errors, due to distance estimation errors. However, the propagated errors are considerably smaller. Oliveira et al. [24] compare the performance of the DPE with the RPE in several aspects. Their results show that the DPE outperforms the RPE in many cases. The DPE works with sparse networks, needs fewer beacons, and has smaller errors.

Target tracking

Target tracking algorithms aim at estimating the current and future (next) location of a target. These algorithms are exposed to different sources of noise, introduced by the measurement process and also by errors in the locations of the nodes that are used to estimate the target coordinates. Therefore, information fusion [20] is commonly used for filtering such noise sources. Two popular algorithms for this problem are the Kalman and Particle filters.

Several tracking solutions are based on Kalman filters (KF). The reason is that Kalman filters have been used in algorithms for source localization and tracking, especially in robotics [20]. Li et al. [16] propose a source localization algorithm for a system equipped with asynchronous sensors, and evaluate the performance of the extended Kalman filter (EKF) [35] and the unscented Kalman filter (UKF) [14] for source tracking in non-linear systems. Olfati-Saber [23] proposes distributed Kalman filtering (DKF), in which a centralized KF is decomposed into micro-KFs, so that the distributed approach has a performance equivalent to the centralized KF. Particle filters are popular for modeling non-linear systems subject to non-Gaussian noise. Vercauteren et al. [32] propose a collaborative Particle Filter for jointly tracking several targets and classifying them according to their motion pattern. Arulampalam et al. [3] assess the use of Particle filters and the EKF for tracking applications. Considering sensor networks, Rosencrantz et al. [26] developed a Particle Filter for distributed information fusion applied to decentralized tracking. Jiang and Ravindran [13] propose a completely distributed Particle Filter for target tracking in sensor networks, in which the communication cost to maintain the particles on different nodes and propagate them along the target trajectory is reduced. Souza et al. [29] assess the performance of target tracking algorithms when position information is provided by localization algorithms. The authors combine the KF and PF with the RPE and DPE. In this work, we combine the same algorithms, but we use data fusion of multiple distance estimates during the localization process to improve the target-tracking accuracy. There are also other distributed approaches for target tracking that are based on cluster [6, 33, 38] and tree [17, 31, 37] organizations for in-network data processing.

Kalman filter

The Kalman filter is a popular fusion method used to fuse low-level redundant data [20]. If a linear model can describe the system and the error can be modeled as Gaussian noise, then the Kalman Filter recursively retrieves statistically optimal estimates. As depicted in Fig. 3a, at each discrete-time increment the method applies a linear operator to the current state to generate the new state. The filter considers measurement noise and, optionally, information about the controls on the system. Then, another linear operator, also subject to noise, generates the observed outputs from the true state.
The Kalman filter estimates the state \(\mathbf{x}\) of a discrete-time \(k\) controlled process that is ruled by the state-space model
$$\begin{aligned} \mathbf{x}_{k+1} = \mathbf{A} \mathbf{x}_k + \mathbf{B} \mathbf{u}_k + \mathbf{w}_k \end{aligned}$$
with measurements \(\mathbf{y}\) represented by
$$\begin{aligned} \mathbf{y}_k = \mathbf{C} \mathbf{x}_k + \mathbf{v}_k, \end{aligned}$$
in which \(\mathbf{A}\) is the state transition matrix, \(\mathbf{B}\) is the input control matrix that is applied to the control vector \(\mathbf{u}\), and \(\mathbf{C}\) is the measurement matrix; \(\mathbf{w}\) represents the process noise and \(\mathbf{v}\) the measurement noise, where these noise sources are represented by random zero-mean Gaussian variables with covariance matrices \(\mathbf{Q}\) and \(\mathbf{R}\), respectively. Based on the measurement \(\mathbf{y}\) and the knowledge of the system parameters, the estimate of \(\mathbf{x}\), represented by \(\hat{\mathbf{x}}\), is given by
$$\begin{aligned} \hat{\mathbf{x}}_{k+1} = (\mathbf{A} \hat{\mathbf{x}}_k + \mathbf{B} \mathbf{u}_k) + \mathbf{K}_k (\mathbf{y}_k - \mathbf{C} \hat{\mathbf{x}}_k), \end{aligned}$$
in which \(\mathbf{K}\) is the Kalman gain determined by
$$\begin{aligned} \mathbf{K}_k = \mathbf{P}_k \mathbf{C}^T (\mathbf{C} \mathbf{P}_k \mathbf{C}^T + \mathbf{R})^{-1}, \end{aligned}$$
while \(\mathbf{P}\) is the prediction covariance matrix that can be determined by
$$\begin{aligned} \mathbf{P}_{k+1} = \mathbf{A} (\mathbf{I} - \mathbf{K}_k \mathbf{C}) \mathbf{P}_k \mathbf{A}^T + \mathbf{Q}. \end{aligned}$$
The Kalman filter has two phases (see Fig. 3b): time-update (predict) and measurement-update (correct). The time-update is responsible for obtaining the a priori estimates for the next time step and consists of Eqs. (2) and (3). The measurement-update is responsible for incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate and consists of Eqs. (4), (5), and (6) [20]. These phases form a cycle that is maintained while the filter is fed with measurements.

Kalman filter representation and phases

Since many problems cannot be represented by linear models, algorithms based on the original Kalman Filter formulation have emerged to allow these problems to be treated. The major variations of the Kalman filter for non-linear problems are the extended Kalman filter (EKF) [10] and the unscented Kalman filter (UKF) [14]. The EKF is the most popular alternative for non-linear problems. It linearizes the process model using a Taylor series expansion and, as a consequence, is a sub-optimal estimator. The UKF performs estimation on non-linear systems without the need to linearize them, because it uses the principle that a set of discrete sampling points can be used to parameterize the mean and covariance. The quality of UKF estimates is close to that of the standard KF for linear systems. Finally, the Kalman filter model allows the elaboration of an algorithm to estimate the optimal state vector values. Thus, it is possible to generate a sequence of state values in each time unit, predicting future states from the current state, and allowing the creation of systems with real-time updates.
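To make the time-update/measurement-update cycle concrete, the sketch below implements Eqs. (4)–(6) for a one-dimensional constant-velocity model (state \([p, v]^T\), scalar position measurement, no control input). It is written in Java, the language used for the simulations described in Sect. 4, but it is only an illustrative sketch: the diagonal process noise, the measurement noise variance, and the initial covariance are assumptions chosen for the example, not values taken from the experiments.

/**
 * Minimal 1-D constant-velocity Kalman filter illustrating Eqs. (4)-(6).
 * State x = [position, velocity], A = [[1, T], [0, 1]], C = [1, 0], no control input.
 * Q, R and the initial covariance are illustrative assumptions.
 */
public class ConstantVelocityKF {
    private final double t;                                  // sampling interval
    private final double q;                                  // process noise variance (diagonal Q assumed)
    private final double r;                                  // measurement noise variance
    private final double[] x = {0.0, 0.0};                   // state estimate [p, v]
    private final double[][] p = {{10.0, 0.0}, {0.0, 10.0}}; // prediction covariance (initial guess)

    public ConstantVelocityKF(double t, double q, double r) {
        this.t = t;
        this.q = q;
        this.r = r;
    }

    /** Processes one position measurement y and returns the updated position estimate. */
    public double step(double y) {
        // Kalman gain, Eq. (5): K = P C^T (C P C^T + R)^{-1}; with C = [1, 0] this is scalar arithmetic.
        double s = p[0][0] + r;
        double k0 = p[0][0] / s;
        double k1 = p[1][0] / s;

        // State update, Eq. (4): x <- A x + K (y - C x).
        double innovation = y - x[0];
        double newPos = x[0] + t * x[1] + k0 * innovation;
        double newVel = x[1] + k1 * innovation;
        x[0] = newPos;
        x[1] = newVel;

        // Covariance update, Eq. (6): P <- A (I - K C) P A^T + Q, written out for 2 x 2 matrices.
        double m00 = (1 - k0) * p[0][0], m01 = (1 - k0) * p[0][1];            // M = (I - K C) P
        double m10 = p[1][0] - k1 * p[0][0], m11 = p[1][1] - k1 * p[0][1];
        double n00 = m00 + t * m10, n01 = m01 + t * m11;                      // N = A M
        double n10 = m10, n11 = m11;
        p[0][0] = n00 + t * n01 + q;                                          // P = N A^T + Q
        p[0][1] = n01;
        p[1][0] = n10 + t * n11;
        p[1][1] = n11 + q;
        return x[0];
    }
}

The same structure extends directly to the two-dimensional, four-state model used in the experiments of Sect. 4; only the sizes of the matrices change.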
Unlike linear/Gaussian problems, the calculation of the posterior distribution of non-linear/non-Gaussian problems is extremely complex. To overcome this difficulty, the Particle Filter adopts an approach called importance sampling. The goal is to estimate the posterior probability density, representing it as a set of particles. This method attempts to build the posterior probability density function (PDF) based on a large number of random samples, called particles. These particles are propagated over time, sequentially combining sampling and resampling steps. At each time step, the resampling is used to discard some particles, increasing the relevance of regions with high posterior probability. Each particle has an associated weight that indicates the particle quality. Then, the estimate is the result of the weighted sum of all particles. The resampling step is the solution adopted to avoid the degeneration problem, in which the particles have negligible weights after several iterations. The particles of greater weight are selected and serve as the basis for the creation of the new particle set. Furthermore, the particles with smaller weights disappear and do not originate descendants. Like the Kalman filter, the Particle filter algorithm has two phases: prediction and correction. In the prediction phase, each particle is modified according to the existing model, including the addition of random noise in order to simulate the effect of noise. Then, in the correction phase, the weight of each particle is reevaluated based on the latest sensory information available, so that particles with small weights are eliminated (resampling process).

Proposed approach

In the evaluation presented later in this work, we show that errors introduced by the localization algorithms are not successfully filtered by the tracking algorithm (Kalman and Particle filters), because the node position errors are not perceived as noise by the filters. An alternative to reduce the tracking error is reducing the localization errors. By reducing the localization error, we make tracking algorithms closer to their ideal operating conditions. Thus, we use a Kalman Filter to reduce distance estimation errors and, consequently, improve localization and tracking accuracy. In this approach, during the localization process, several distance estimates are performed, that is, each reference node reports its position \(k\) times to its neighbors. Nodes receiving these packets create a Kalman Filter instance for each reference. Then, all distance estimates are refined by the corresponding Kalman filters. Thus, the filter obtains a more accurate distance estimate, improving the localization result (Fig. 4). Then, this improved estimate is used by the target tracking algorithm.

Fusion of \(k\) distance estimates to improve the target tracking performance. For this task, the unknown node creates a unique Kalman filter instance for each reference

In this task, the Kalman filter goal is to obtain a constant (distance) estimate. The linear system of the filter is very simple and can be configured as
$$\begin{aligned} {\left\{ \begin{array}{ll} x_{k+1} = d_{k+1} = d_k + w_k \\ y_k = d_k + v_k \end{array}\right.} \end{aligned}$$
in which \(x\) and \(d\) represent the state (the distance, in this case) at discrete time \(k\); \(y\) is a measurement value; \(w\) and \(v\) represent the process and measurement noise, respectively. Filtering the distance estimates during localization is a simple process that ensures good results.
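The sketch below shows how this constant-distance filter can be realized in Java (the simulations in Sect. 4 were implemented in Java); the noise variances and the initial uncertainty used here are illustrative assumptions, since they depend on the ranging hardware.

/**
 * Scalar Kalman filter that fuses k noisy range samples into one refined
 * distance estimate, following the constant model of Eq. (7).
 */
public class DistanceKalmanFilter {
    private double d;        // current distance estimate (the state)
    private double p;        // estimate variance
    private final double q;  // process noise variance (allows slow drift of the true distance)
    private final double r;  // measurement noise variance

    public DistanceKalmanFilter(double initialGuess, double q, double r) {
        this.d = initialGuess;
        this.p = 1.0;        // initial uncertainty (assumption)
        this.q = q;
        this.r = r;
    }

    /** Incorporates one new range sample y and returns the refined distance. */
    public double update(double y) {
        p = p + q;                   // time update for the constant model d_{k+1} = d_k + w_k
        double k = p / (p + r);      // Kalman gain
        d = d + k * (y - d);         // measurement update with sample y_k = d_k + v_k
        p = (1.0 - k) * p;
        return d;
    }
}

In the localization step, an unknown node keeps one such filter per reference neighbor and feeds it the \(k\) repeated range samples before computing its own position, which is the fusion process illustrated in Fig. 4.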
The distance estimates errors are the largest contributors to overall error of the localization system, and only a small fraction of this error is generated by position computation and localization algorithm [4, 24]. Some algorithms try to isolate the distance estimate errors by selecting the best references based on a residual value [2], however this technique is not very efficient, because all references can cause distance estimate errors. Since the node position is calculated and used by any application, such as target tracking, it is difficult to determine the error size and direction, so the best option is to work on the error source. In this section, we evaluate the performance of the KF and PF using the information position provided by the RPE and DPE localization algorithms. We apply the proposed approach in the localization process, where several distance estimates are used to verify its performance. The evaluation methodology is divided into five phases, as shown in Fig. 5. First, there is a newly deployed sensor network with some beacon nodes, where most of the nodes do not know their position (unknown nodes), so this network must be prepared to track the target. In the second phase, a localization algorithm is applied (RPE or DPE), and during this step, several distance estimates can be used to reduce the localization errors and improve the tracking accuracy, following the proposed approach (see Sect. 3). In the third phase, nodes know their position, so when three or more nodes detect the target, they compute its position with multilateration. In the fourth phase, the nodes send the target position to the sink node. In the final phase, the sink node predicts the next target position and reduces the measurement noise, performing the tracking algorithm (Kalman filter or Particle filter). While there are measurements, the target tracking continues (back to phase three). Methodology phases: (1) a newly deployed sensor network with beacon and unknown nodes; (2) a localization algorithm is applied, moreover several distance estimates are used to improve the tracking accuracy; (3) three or more nodes detect the target and compute its position; (4) the target position is sent to the sink node; (5) the sink node performs the tracking algorithm for predict the future target position and reduce the measurement noise, then back to phase three The experiments were performed by simulation (implemented in Java), where the sensor field is composed of \(n\) sensor nodes, with a communication range of \(r_c\), that are distributed in a two-dimensional squared sensor field \(Q = [0, s] \times [0, s]\). As a proof-of-concept, we consider symmetric communication links, i.e., for any two nodes \(u\) and \(v\), \(u\) reaches \(v\) if and only if \(v\) reaches \(u\). Thus, we represent the network by the Euclidean graph \(G = (V, E)\) with the following properties: \(V = \{v_1, v_2, \ldots , v_n\}\) is the set of sensor nodes; \(\langle i,j \rangle \in E\) iff \(v_i\) reaches \(v_j\), i.e. the distance between \(v_i\) and \(v_j\) is less than \(r_c\). To detect the target, we use the binary detection model [11, 34]. In this model, for a given event \(e\) (target presence), every sensor \(v\), whose distance \(d\) between it and the target is smaller than a detection radius \(r_d\), assuredly detects the event. 
Then, the probability of a sensor node detecting an event is defined as
$$\begin{aligned} P(v,e) = {\left\{ \begin{array}{ll} 1, & \text{if } d \le r_d \\ 0, & \text{otherwise.} \end{array}\right.} \end{aligned}$$
The default network configuration is composed of \(n = 150\) sensor nodes randomly distributed on a \(Q = [0,70] \times [0,70]\) m\(^2\) sensor field. The communication and detection ranges are \(r_c = r_d = 15\) m for every node. This configuration defines a network density of 0.03 nodes/m\(^2\), which is sufficient for the majority of nodes to have their locations estimated by both the RPE and DPE algorithms. Oliveira et al. [24] use this same configuration in their experiments to evaluate the localization systems. Therefore, we adopted this configuration to estimate the node positions and track the target. Node locations are estimated by the RPE or DPE. In the RPE algorithm, 5 % of the nodes are beacons, while the DPE always uses four beacons. To simulate the inaccuracies of the distance estimations, usually obtained by RSSI, time of arrival (TOA) and time difference of arrival (TDoA) [8, 9], each range sample is disturbed by a zero-mean Gaussian variable with standard deviation equal to 5 % of the distance. This assumption is reasonable and leads to non-Gaussian errors in the localization algorithms [25]. During the localization algorithm, we vary the number of distance estimates used by the Kalman Filter as 1, 10, 20, 50, 100, and 200 measurements for each reference. The target tracking is performed with Kalman or Particle filters. The Kalman filter has its linear system equations represented by
$$\begin{aligned} {\left\{ \begin{array}{l} x_{k+1} = \begin{bmatrix} px_{k+1} \\ py_{k+1} \\ vx_{k+1} \\ vy_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & 0 & T & 0 \\ 0 & 1 & 0 & T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} px_k \\ py_k \\ vx_k \\ vy_k \end{bmatrix} + w_k \\ y_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} px_k \\ py_k \\ vx_k \\ vy_k \end{bmatrix} + v_k \end{array}\right.} \end{aligned}$$
in which \(x\) represents the state at discrete time \(k\), composed of the position (\(px\), \(py\)) and velocity (\(vx\), \(vy\)); \(y\) is a measurement value; \(w\) and \(v\) represent the process and measurement noise, respectively. The Particle filter uses 1,000 particles. This value was set based on previous empirical tests that showed that more than 1,000 particles do not improve tracking significantly. The Particle filter used in the experiments is represented by Algorithm 1. For illustration purposes, the Particle Filter algorithm presented considers only one dimension, in which \(x\) is the position, \(v\) is the velocity and \(w\) is the weight of each of the \(N\) particles at discrete time \(k\); \(y\) is the input measurement value. First, the algorithm randomly distributes the particles (line 2). The particle propagation and the calculation of their importance consider the distance from each particle to the measurement position (lines 4–10). The normalization process (line 12) prepares the particle weights for the resampling process (lines 14–21). Finally, the prediction of the position is calculated (line 23).
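Algorithm 1 itself is not reproduced here, but the Java sketch below follows the same sequence of steps for the one-dimensional case: random initialization, propagation, weighting by the distance between each particle and the measurement, normalization, resampling, and prediction. The Gaussian likelihood, the multinomial resampling scheme, and the noise magnitudes are assumptions made for this sketch, since they are implementation details not fixed by the description above.

import java.util.Random;

/**
 * One-dimensional particle filter following the steps described for Algorithm 1.
 * The likelihood model, resampling scheme and noise magnitudes are illustrative assumptions.
 */
public class ParticleFilter1D {
    private final int n;             // number of particles (the experiments use 1,000)
    private final double dt;         // interval between measurements
    private final double sigma;      // assumed measurement noise standard deviation
    private final double[] x, v, w;  // particle positions, velocities and weights
    private final Random rng = new Random();

    public ParticleFilter1D(int n, double dt, double sigma, double fieldSize) {
        this.n = n;
        this.dt = dt;
        this.sigma = sigma;
        x = new double[n];
        v = new double[n];
        w = new double[n];
        for (int i = 0; i < n; i++) {                 // random initial distribution of the particles
            x[i] = rng.nextDouble() * fieldSize;
            v[i] = rng.nextGaussian() * 0.1;
            w[i] = 1.0 / n;
        }
    }

    /** Processes one position measurement y and returns the predicted target position. */
    public double step(double y) {
        // Propagation with process noise, then importance weight based on the
        // distance between the particle and the measured position.
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            x[i] += v[i] * dt + rng.nextGaussian() * 0.05;
            double d = y - x[i];
            w[i] = Math.exp(-(d * d) / (2.0 * sigma * sigma));
            sum += w[i];
        }
        for (int i = 0; i < n; i++) {
            w[i] /= sum;                              // normalization of the weights
        }

        // Multinomial resampling: heavier particles spawn more descendants,
        // particles with negligible weight disappear.
        double[] nx = new double[n];
        double[] nv = new double[n];
        for (int i = 0; i < n; i++) {
            double u = rng.nextDouble();
            int sel = 0;
            double c = w[0];
            while (u > c && sel < n - 1) {
                sel++;
                c += w[sel];
            }
            nx[i] = x[sel];
            nv[i] = v[sel];
        }
        System.arraycopy(nx, 0, x, 0, n);
        System.arraycopy(nv, 0, v, 0, n);
        for (int i = 0; i < n; i++) {
            w[i] = 1.0 / n;
        }

        // Prediction: mean of the resampled particle positions.
        double estimate = 0.0;
        for (int i = 0; i < n; i++) {
            estimate += x[i];
        }
        return estimate / n;
    }
}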
For the sake of simplicity, we consider an uniform movement, so that the movement is modeled by a linear system, suitable for both Kalman and Particle filters. The target trajectory is composed of 1,000 points to consider a significative sample. The distance between the points of the trajectory is 0.1 m (uniform motion) to keep the target within the area monitored. The interval between each measurement is \(T=1\)s. The maneuvers of the target are determined by an angle randomly generated within \(-25^\circ \) and \(25^\circ \) every 25 steps. In Figs. 6–12, each point is plotted as an average of 100 random topologies to ensure a lower variance in the results. The error bars represent the confidence interval of 99 %. Simulation results Target tracking behavior To illustrate the behavior of the tracking algorithms, we show some snapshots in this section. In these snapshots, the RPE algorithm could not find the location of two nodes (from 150 nodes), and the average error of the node locations is 3.49 m. Adopting the same instance, the DPE algorithm managed to estimate the location of every node, and the average location error is 2.56 m. These two scenarios are compared with the ideal setting, in which the localization system is perfect. For all cases, the performance of Kalman and Particle filters are presented. The results are summarized in Table 1. Performance of Kalman and Particle filters using the RPE and DPE localization algorithms The Fig. 6a–c shows a target moving through the sensor field (red line). Orange points represent measurements and blue points are the results of Kalman Filter tracking algorithm. Figure 6d–f shows the error calculated from real target and Kalman Filter estimation for each measured point. Figure 6g–l represents the same case using the Particle Filter tracking algorithm instead. These figures illustrates the influence of the localization errors caused by each algorithm. In general, the greater the localization error, the greater the tracking error, independent of the tracking algorithm. The influence of localization errors is clearly visible in the region around point (20, 15) in Fig. 6c, in which localization errors lead to a wrong track estimation. Tracking with Kalman Filter has better results when node's location information are ideal, or when they are estimated by the DPE. However, when the node locations are estimated by the RPE, the Particle Filter presents the best results. The reason is that the Particle Filter is less affected by measurement errors, this fact becomes clear in the following sections. As a general conclusion, Fig. 6 shows that both filters successfully reduce the errors resulting from the estimation of the target location, but errors resulting from the localization algorithms are not significantly filtered. Costs and benefits of multiple distance estimates. More distance estimates during the localization process can reduce the localization and tracking errors. However, it is necessary to send additional packets, i.e., more resources will be consumed to get this benefit. Therefore, in this section we evaluate the costs and benefits of using several distance estimates. This analysis is important, since it helps define how much you should spend for a given performance in the target tracking. Costs and benefits of multiple distance estimates Both the RPE and DPE have the communication complexity of \(O(n)\), where \(n\) is the number of nodes. The Fig. 7a shows the number of packets sent when the number of distance estimates increases. 
Using \(k\) distance estimates causes each beacon and settled node to broadcast its position \(k\) times, increasing the communication complexity to \(O(kn)\). This figure also shows that the RPE sends fewer packets than the DPE. This occurs because the network density used in the experiment leads the RPE, in some topologies, to estimate fewer node positions than the DPE, so these nodes do not broadcast their location information, reducing the number of packets sent. Figure 7b shows the improvement in tracking accuracy when the number of distance estimates increases. Using 10 distance estimates already provides a significant improvement when compared to the results obtained with only 1 estimate. In the target tracking using the RPE, the Particle Filter has better results, because this algorithm reduces a small fraction of the non-Gaussian noise introduced by the localization algorithm. In target tracking with the DPE, the Kalman Filter becomes more accurate with more than 10 distance estimates, because the localization error is quite low, so that the Kalman Filter starts to operate under ideal conditions. It is not feasible to use a very high number of distance estimates, because the benefit achieved becomes lower in comparison with the required cost. Using 10 distance estimates is enough to achieve improvements of 50 % with a reasonable cost. Up to 50 distance estimates can significantly reduce the tracking error, but more estimates (100 and 200) lead to little improvement at a high cost.

Impact of distance estimation inaccuracy

Distances estimated by sensor nodes are not perfect. Depending on the monitored environment, the associated errors can be greater, which affects the tracking performance. In general, these errors can be modeled by a zero-mean Gaussian variable, in which the standard deviation is a percentage of the actual distance [4]. Thus, to evaluate different situations, we vary the standard deviation from 0 to 15 % of the distance (for the RPE and DPE estimation processes). A standard deviation of 0 % corresponds to a perfect distance estimate. Deviations between 0 and 8 % can represent the estimates obtained by techniques that use the time of arrival of the signal, such as TOA and TDoA, which have errors smaller than 1 m, while larger deviations represent errors obtained by more imprecise methods, like RSSI. Figure 8a–d presents the error performance by varying the distance estimation inaccuracy and the number of estimates, showing 3D graphs with the combination of RPE and DPE with KF and PF. Figure 8e, f shows the cases of 1 and 50 distance estimates, respectively. When the distance estimation inaccuracy is low (between 5 and 10 % of the distance), the accuracy improvement of the target tracking using the DPE is negligible, regardless of the number of distance estimates used. When this imprecision is high (between 15 and 30 %), with 50 distance estimates or more, the average error converges to 1 m (Fig. 8b, d). However, with the RPE, the improvement is noticeable even when the distance estimation inaccuracy is low (Fig. 8a, c). It is also interesting to note in Fig. 8e, f that the Particle filter outperforms the Kalman Filter, especially when the RPE is chosen as the localization algorithm. The reason is that the non-linearity and non-Gaussian nature of the Particle Filter results in reducing a small fraction of the non-Gaussian noise introduced by the localization algorithm.
Impact of the network density

The impact of the network density is evaluated by increasing the number of nodes in the same sensor field, so that the network density varies from 0.03 to 0.07 nodes/m\(^2\). The smallest density used in this experiment allows both the RPE and DPE algorithms to estimate the location of most of the sensor nodes. In this case, Fig. 9 shows that, for the DPE algorithm, the target tracking error remains constant independently of the network density. The reason is that the same beacon structure is used regardless of the network density. However, for the RPE algorithm, the number of beacons increases with the network density, because we ensure that 5 % of the nodes are beacons. As a result of the increasing number of beacons, the tracking error reduces accordingly. Multiple distance estimates are important for improving the target tracking accuracy when the RPE is used, especially in sparse networks, as shown in Fig. 9a, c. With the DPE instead, the network density does not interfere with the target tracking error. Therefore, for 50 distance estimates or more the average error converges to 0.7 m (Fig. 9b, d). Figure 9e, f shows that with a single distance estimate during the localization process, the Particle Filter is slightly better with both RPE and DPE. The reason is that it filters a small fraction of the non-Gaussian localization errors. When we use 50 distance estimates, the performance of the Kalman Filter becomes equivalent to that of the Particle Filter in the case of RPE and better in the case of DPE, since the Kalman Filter operates under ideal conditions with low localization errors.

The impact of the network scale

In this section, we evaluate how the network scale affects the combinations of localization and tracking algorithms.

Tracking scale

In this context, we vary the number of nodes from 100 to 350, while keeping a constant density of 0.03 nodes/m\(^2\). Therefore, the monitored area is resized according to the number of sensor nodes. As the percentage of beacons used by the RPE is 5 %, the number of beacons also increases according to the number of nodes in this case. The DPE keeps using only a single structure of four beacons. Figure 10 shows that, as the network scale increases, the tracking errors with the DPE increase accordingly. The reason is that a higher number of nodes generates a higher propagation of position errors, since the same number of beacons is maintained regardless of the number of nodes. However, with the RPE, the tracking errors remain almost constant, because the number of beacons increases with the network scale (cf. [24]). When there are many nodes in the network (between 250 and 350), using multiple distance estimates in the DPE significantly improves the target tracking accuracy; however, they have little influence when there are few nodes (Fig. 10b, d). With the RPE instead, the usage of multiple distance estimates is important for any number of nodes (Fig. 10a, c).

The impact of the number of beacons

The number of beacons used by the DPE and RPE leads to different localization errors, therefore affecting the tracking solutions. For the DPE, Oliveira et al. [24] show that increasing the number of beacon nodes in the structure does not significantly improve the localization result. Therefore, this evaluation considers only the RPE algorithm. In this experiment, the number of beacons is increased from 5 to 35 % of the total number of nodes.
A greater number of beacons means that the localization algorithm has more references for estimating the location of the remaining nodes, which leads to smaller errors. Hence, the tracking error is also inversely proportional to the number of beacons. This behavior is depicted in Fig. 11.

Impact of the number of beacons

When the number of beacon nodes is significantly large, the localization error becomes so small that the Kalman Filter tends to have better results than the Particle Filter: it is up to 20 % better with 1 estimate and up to 10 % better with 50 estimates (Fig. 11c, d). The reason is that the non-Gaussian errors resulting from localization systems are reduced in such a way that the Kalman Filter starts to operate under ideal conditions, which means it converges to the optimal solution for target tracking. When there are few beacons (10 %), multiple distance estimates significantly improve the target tracking accuracy. However, when more than 10 % of the nodes are beacons, the average error converges to 0.6 m, both for target tracking with Kalman and Particle filters (Fig. 11a, b).

The impact of the beacon structure

As stated earlier, increasing the number of beacons per structure in the DPE does not improve the localization results. However, the DPE may benefit from multiple beacon structures [24]. Thus, to evaluate the performance of the target tracking algorithms with multiple beacon structures, we vary the number of such structures from 1 to 5. This experiment represents a situation for the DPE algorithm that is analogous to the previous experiment for the RPE algorithm. When performing a single distance estimate in the localization process (Fig. 12c) and using a three-beacon structure, the tracking results with Kalman and Particle filters are very close. However, when more structures are used, the Kalman Filter tracking is favored. The opposite occurs when we use fewer than three beacon structures. With 50 distance estimates, the Kalman Filter outperforms the Particle Filter with any number of beacon structures, because the non-Gaussian noise introduced by the localization is very low (Fig. 12d). Besides, when we use 50 distance estimates, the average tracking error converges to 0.6 m regardless of the number of beacon structures available (Fig. 12a, b).

Impact of the beacon structure

Conclusions and future work

In this work, we demonstrated how information fusion can reduce errors during the localization process, while assessing the impact of actual localization algorithms on target tracking algorithms. For these evaluations, we chose the RPE and DPE algorithms to compute node positions, since the RPE is a pioneer solution and the DPE is more accurate and cheaper than the RPE. The target tracking techniques we chose were the Kalman and Particle filters. These filters are very popular and can be considered canonical solutions for the target tracking problem. As a general conclusion, using up to 50 distance estimates ensures better target tracking results at a moderate cost. Above this value, the error converges, so the reduction of errors is small compared to the associated cost. Furthermore, the reduction in localization error using information fusion enhances the performance of the Kalman Filter over the Particle Filter, especially when the DPE is used. Kalman and Particle filters successfully filter the errors associated with the target location estimation. However, the errors introduced by the localization algorithms are not successfully filtered by the tracking algorithm.
The reason is that the Kalman Filter is not designed to filter non-Gaussian noise. On the other hand, the Particle Filter is designed to filter non-Gaussian noise. Consequently, the Particle Filter tends to outperform the Kalman as the localization errors increase. However, even the Particle Filter cannot significantly filter the non-Gaussian localization errors. Results show that for tracking applications with severe accuracy constraints, the localization algorithms need to improve their estimations to guarantee the performance of target tracking algorithms. Table 1 Target tracking errors This work leads to some particularly interesting directions. The first is to properly characterize the localization errors, so that we can understand the expected magnitude, direction, and orientation of the error resulting from localization algorithms. Such knowledge allows us to design new tracking algorithms that use such information to compensate and reduce the impact of localization errors, depending of the localization algorithm used. Another future direction includes reducing the location algorithms inaccuracy by using all the location information reported to nodes. These algorithms usually provide a minimum number of references (three for the RPE and two for the DPE) required for calculating the node position, ignoring the additional information received after the calculation. This approach can lead to accuracy improvement, and it does not require extra communication. Finally, the cross-layer design of localization and tracking algorithms, not explored yet, may lead to improved solutions for both problems. Akyildiz IF, Su W, Sankarasubramaniam Y, Cayirci E (2002) Wireless sensor Networks: a survey. Comput Netw 38:393–422 Albowicz J, Chen A, Zhang L (2001) Recursive position estimation in sensor networks. In: Proceedings of the 9th international conference on network protocols (ICNP'01), pp 35–41 Arulampalam MS, Maskell S, Gordon N, Clapp T (2002) A tutorial on particle filters for online nonlinear/non-gaussian bayesian tracking. IEEE Trans Signal Process 50:174–188 Bachrach J, Eames AM, Eames AM (2005) Localization in sensor networks, chapter 9, pp 277–310. Wiley, New York Boukerche A, de Oliveira HABF, Nakamura EF, Loureiro AAF (2007) Localization systems for wireless sensor networks. IEEE Wireless Commun 14:6–12 Chong CY, Zhao F, Mori S, Kumar S (2003) Distributed tracking in wireless ad hoc sensor networks. In: Proceedings of the 6th international conference of information fusion (Fusion'03), pp 431–438 Ehsan S, Bradford K, Brugger M, Hamdaoui B, Kovchegov Y, Johnson D, Louhaichi M (2012) Design and analysis of delay-tolerant sensor networks for monitoring and tracking free-roaming animals. Trans Wireless Commun 11(3):1220–1227 Fukuda K, Okamoto E (2012) Performance improvement of TOA localization using IMR-based NLOS detection in sensor networks. In: Proceedings of the 26th international conference on information networking (ICOIN'12), pp 13–18 Gibson JD (1999) The mobile communication handbook. IEEE Press, New York Grewal MS, Andrews AP (2001) Kalman filtering: theory and practice using MATLAB. Wiley, New York He T, Bisdikian C, Kaplan L, Wei W, Towsley D (2010) Multi-target tracking using proximity sensors. In: Proceedings of the military communications conference (MILCOM'10), San Jose, California, USA, pp 1777–1782 He T, Huang C, Blum BM, Stankovic JA, Abdelzaher T (2003) Range-free localization schemes for large scale sensor networks. 
In: Proceedings of the 9th ACM international conference on mobile computing and networking (MobiCom'03), pp 81–95 Jiang B, Ravindran B (2011) Completely distributed particle filters for target tracking in sensor networks. In: Proceedings of the 25th parallel distributed processing, Symposium (IPDPS'11) Julier SJ, Uhlmann JK (1997) A new extension of the kalman filter to nonlinear systems. In: Proceedings of the international aerosense, Symposium (SPIE'97), pp 182–193 Kalman RE (1960) A new approach to linear filtering and prediction problems. J Basic Eng 82:35–45 Li T, Ekpenyong A, Huang YF (2006) Source localization and tracking using distributed asynchronous sensor. IEEE Trans Signal Process 54:3991–4003 Lin CY, Peng WC, Tseng YC (2006) Efficient in-network moving object tracking in wireless sensor networks. IEEE Trans Mobile Comput 5:1044–1056 Lin KW, Hsieh MH, Tseng VS (2010) A novel prediction-based strategy for object tracking in sensor networks by mining seamless temporal movement patterns. Expert Syst Appl 37:2799–2807 Mazomenos EB, Reeve JS, White NM (2009) A range-only tracking algorithm for wireless sensor networks. In: International conference on advanced information networking and applications workshops (AINAW'07), pp 775–780 Nakamura EF, Loureiro AAF, Orgambide ACF (2007) Information fusion for wireless sensor networks: methods, models, and classifications. ACM Comput Surv 39:1–55 Neto JMRS, Silva JJC, Cavalcanti TCM, Rodrigues DP, da Rocha Neto JS, Glover IA (2010) Propagation measurements and modeling for monitoring and tracking in animal husbandry applications. In: Proceedings of the instrumentation and measurement technology conference (I2MTC'10). Austin, Texas, USA, pp 1181–1185 Niculescu D, Nath B (2001) Ad hoc positioning system (aps). In: Proceedings of the global telecommunications conference GLOBECOM'01), pp 2926–2931 Olfati-Saber, R.: Distributed kalman filter with embedded consensus filters. In: Proceedings of the 44th Conference on Decision and Control—European Control Conference (CDC-ECC'05), pp. 8179–8184 (2005) Oliveira HABF, Boukerche A, Nakamura EF, Loureiro AAF (2009) An efficient directed localization recursion protocol for wireless sensor networks. IEEE Transactions Computing 58:677–691 Oliveira, H.A.B.F., Nakamura, E.F., Loureiro, A.A.F., Boukerche, A.: Error analysis of localization systems in sensor networks. In: Proceedings of the 13th International Symposium on Geographic, Information Systems (GIS'05), pp. 71–78 (2005) Rosencrantz M, Gordon G, Thrun S (2003) Decentralized sensor fusion with distributed particle filters. In: Proceedings of the Conference on Uncertainty in AI (UAI) Sichitiu, M.L., Ramadurai, V.: Localization of wireless sensor networks with a mobile beacon. In: Proceedings of the International Conference on Mobile Ad-hoc and Sensor Systems (MASS'04), pp. 174–183 (2004) Souza, E.L., Campos, A., Nakamura, E.F.: Tracking targets in quantized areas with wireless sensor networks. In: Proceedings of the 36th Local Computer Networks (LCN'11), pp. 235–238. Bonn, Germany (2011) Souza, E.L., Nakamura, E.F., de Oliveira, H.A.: On the performance of target tracking algorithms using actual localization systems for wireless sensor networks. In: Proceedings of the 12th ACM international conference on Modeling, analysis and simulation of wireless and mobile systems (MSWiM '09), pp. 418–423. Tenerife, Canary Islands, Spain (2009) Tatham, B., Kunz, T.: Anchor node placement for localization in wireless sensor networks. 
In: Proceedings of the 7th Conference on Wireless and Mobile Computing, Networking and, Communications (WiMob'11), pp. 180–187 (2011) Tsai HW, Chu CP, Chen TS (2007) Mobile object tracking in wireless sensor networks. Computer Communications 30:1811–1825 Vercauteren T, Guo D, Wang X (2005) Joint multiple target tracking and classification in collaborative sensor networks. IEEE Journal on Selected Areas in Communications 23:714–723 Walchli, M., Skoczylas, P., Meer, M., Braun, T.: Distributed event localization and tracking with wireless sensors. In: Proceedings of the 5th International Conference on Wired/Wireless Internet, Communications (WWIC'07), pp. 247–258 (2007) Wang Z, Bulut E, Szymanski BK (2010) Distributed energy-efficient target tracking with binary sensor networks. ACM Transactions on Sensor Networks 6(4):1–32 Welch, G., Bishop, G.: An introduction to the kalman filter. In: The 28th International Conference on Computer Graphics and Interactive, Techniques (SIGGRAPH'01) (2006) Yang, H., Sikdar, B.: A protocol for tracking mobile targets using sensor networks. In: Proceedings of the 1st International Workshop on Sensor Network Protocols and Applications (SNPA'03), pp. 71–81 (2003) Zhang W, Cao G (2004) Dctc: Dynamic convoy tree-based collaboration for target tracking in sensor networks. IEEE Transactions on Wireless Communications 3:1689–1701 Zhao F, Shin J, Reich J (2002) Information-driven dynamic sensor collaboration for tracking applications. IEEE Signal Processing 19:61–72 This work is supported by the Brazilian National Council for Scientific and Technological Development (CNPq), under the grant numbers 474194/2007-8 (RastroAM), 55.4087/2006-5 (SAUIM) and 575808/2008-0 (Revelar), and also the the Amazon State Research Foundation (FAPEAM), trough the grant 2210.UNI175.3532. 03022011 (Projeto Anura—PRONEX 023/2009). Federal University of Amazonas, UFAM, Manaus, Brazil Éfren L. Souza & Horácio A. B. F. de Oliveira Analysis, Research and Technological Innovation Center, FUCAPI, Manaus, Brazil Eduardo F. Nakamura & Carlos M. S. Figueiredo Éfren L. Souza Eduardo F. Nakamura Horácio A. B. F. de Oliveira Carlos M. S. Figueiredo Correspondence to Éfren L. Souza. This work extend the previously evaluation made in Souza et al. [29] by introducing the usage of data fusion to reduce errors in the localization of sensor nodes. The results presented here show the benefits and costs of this new approach. Souza, É.L., Nakamura, E.F., de Oliveira, H.A.B.F. et al. Reducing the impact of location errors for target tracking in wireless sensor networks. J Braz Comput Soc 19, 89–104 (2013). https://doi.org/10.1007/s13173-012-0084-4 Target tracking algorithms Localization systems
Demonstration of Clausius theorem for irreversible cycles

If we have a generic reversible cycle, we can approximate it with $n$ reversible Carnot cycles like in the picture, and we obtain:
$$\sum_{i=1}^n\frac{Q_{i}}{T_{i}}=0$$
When $n \rightarrow \infty$:
$$\int_{Rev-cycle}{\frac{\delta Q}{T}}=0$$
That's ok, this is the Clausius equation. But if we have a non-reversible cycle (you can't draw it in the PV plane), how can you approximate it and say that
$$\int_{Irr-cycle}{\frac{\delta Q}{T}}<0~?$$
So, where does the Clausius inequality come from? And also, in this case, what do $T$ and $\delta Q$ represent? – Landau

Just to say your question is valid and books or other presentations which use a state diagram are indeed failing to prove the theorem. But there are some books which do it properly. See Adkins for example, from which I learned this (and later wrote a book).

Also Enrico Fermi's book on thermodynamics does an excellent job (in my opinion) of fully explaining Clausius' theorem. – Michael Burt

This is basically the application of the Clausius inequality to an irreversible cycle. The zero on the right-hand side of the equation represents the change in entropy of the working fluid over the cycle, which is equal to zero (since the initial and final states around a complete cycle are the same). On the left-hand side, $T_i$ represents the temperature at the interface between the working fluid and its surroundings, at which the heat transfer $Q_i$ is occurring. The proper statement of the Clausius inequality always requires you to use the temperature at the interface where the heat transfer is occurring.

ADDENDUM

During an irreversible cycle, all the entropy $\delta$ generated within the system (by irreversibility) in each cycle is transferred from the system to the surroundings, so that the change in entropy of the system in each cycle is zero. The entropy transferred from the surroundings to the system during a cycle is given by $\sum_{i=1}^n\frac{Q_{i}}{T_{i}}$, so the entropy transferred from the system to the surroundings during the cycle is $\left(-\sum_{i=1}^n\frac{Q_{i}}{T_{I}}\right)$. That means that
$$\delta =\left(-\sum_{i=1}^n\frac{Q_{i}}{T_{I}}\right)$$
or, equivalently,
$$\sum_{i=1}^n\frac{Q_{i}}{T_{I}}=-\delta$$
Since the irreversible generation of entropy is always positive, this means that the left-hand side of this equation for a cycle is negative, or
$$\sum_{i=1}^n\frac{Q_{i}}{T_{I}}\leq0$$
– Chet Miller

"This is basically the application of the Clausius inequality to an irreversible cycle"; but my question is exactly how I can derive the Clausius inequality. It is clear where the Clausius equation (=0) for reversible cycles comes from; I don't understand how to demonstrate the inequality for irreversible cycles. – Landau

See the ADDENDUM to the answer. – Chet Miller

The Clausius inequality represents a mathematical statement of the 2nd law of thermodynamics, applicable to closed systems (no mass entering or leaving, but exchange of heat and/or work with surroundings permitted) undergoing reversible or irreversible processes. So, since it is an empirical law, there is no general derivation required. And it is applicable to any arbitrary process, including both reversible and irreversible.
In an irreversible cycle, entropy is generated within the system, but since ΔS is zero for a cycle, all the generated entropy must be transferred to the surroundings. What is $T_I$, the same as $T_i$, the temperature of the thermal reservoir at stage $i$? The Clausius inequality requires that there be $T_i$ in the end sum. – Ján Lalinský I think the correct statement would be "the entropy accepted by the surroundings from the system during one cycle is $\left(-\sum_{i=1}^n\frac{Q_{i}}{T_{i}}\right)$". The question now is how to prove this can't be negative. So, where does the Clausius inequality come from? Consider an entropy balance on an engine interacting with $n$ reservoirs: $$\Delta S = \sum_{i=1}^n{\frac{Q_i}{T_i}} + S_{gen}$$ If the engine undergoes a cycle, $\Delta S = 0$, therefore: $$ \sum_{i=1}^n{\frac{Q_i}{T_i}} = -S_{gen}$$ When $n \rightarrow \infty$, $$\int{\frac{\delta Q}{T}}= -S_{gen}$$ Now if the cycle is irreversible, $S_{gen} > 0$, therefore: $$\int_{Irr-cycle}{\frac{\delta Q}{T}}<0$$ As you can see, the Clausius statement for an irreversible cycle comes from the second law of thermodynamics; $S_{gen} > 0$ for an irreversible process. And also, in this case, what do $T$ and $\delta Q$ represent? I assume you mean $\frac{\delta Q}{T}$? It is the entropy transfer into the system/engine. The Clausius statement therefore says that the net entropy transfer into the system is negative for an irreversible cycle. In other words, an irreversible cycle results in net entropy transfer out of the system. Thermodynamix You have not understood the question; Clausius' theorem is being used to derive the fact that there is a function of state whose change is $dQ_{\rm rev}/T$. If you employ entropy as a concept in your argument then you have quoted the very thing which one is trying to derive. @AndrewSteane, the question was: "where does the Clausius inequality come from?". I derived it using an entropy balance. Maybe that seems circular, because the Clausius theorem is a statement of entropy balance; is that what you're saying? – Thermodynamix I think the question concerns how to construct the argument correctly without assuming that $dQ_{\rm rev}/T$ is a change in a function of state. It can be done from the Kelvin statement of the 2nd law and a careful argument. Also there is the issue of whether or not you are using equilibrium thermodynamics. Clausius' theorem concerns a system not in equilibrium and this is important. You need to think carefully about the meaning of symbols such as $Q$ and $T$ when the processes are not quasistatic.
This reservoir is assumed to be always in an equilibrium state, so its temperature is defined at all stages of the cyclic process. $\delta Q$ is just the element of heat accepted by the system from the reservoir during an infinitesimal change of the system's state along the cyclic process. I don't know how to prove the general form of the Clausius theorem where $T$ changes arbitrarily during the cycle. The method that you mention - introducing fake Carnot cycles whose outline approximates the actual sequence of states - works trivially only for systems whose state is given by two quantities such as $p,V$; then, isotherms and adiabats are sufficient to arrive at any desired point close to the actual path, and the difference in integrals can be shown to converge to zero. Mathematically, the straight path can be replaced by a jagged path because the integral in 2D space does not depend on the path, only on the endpoints ($dQ/T$ turns out to be integrable and thus defines a function of state $S$, called entropy). But for systems with more than two variables, as far as I know there is no reason in general to believe that by using only isotherms and adiabats we can approximate the original path and have the difference in integrals go to zero. So, the method of fake Carnot cycles lacks a crucial ingredient to work. However, it is widely believed that the relation $dQ = TdS$ is valid even in this multi-dimensional case, so again the integral does not depend on the path, only on the endpoints, and entropy can be defined. It is often stated that this is a result of the general statement of the 2nd law of thermodynamics. However, it is a great extrapolation of experience with simple systems; it is hard to directly verify it for all multidimensional systems. If we accept the existence of entropy for general systems, then the Clausius inequality can be derived under some restricting assumptions. If the system is close to equilibrium at all stages of the cycle, so that it has a temperature $T_S$ and an entropy $S$ with $dS=dQ/T_S$, then the fact that the change of its entropy during the cycle must be zero can be written this way: $$ \oint \frac{dQ}{T_S} = 0.~~~(*) $$ If the system is to accept heat and later dump waste heat into the reservoir, its temperature $T_S$ has to differ from the temperature of the reservoir $T_R$ in such a way that $$ dQ/T_R \leq dQ/T_S. $$ Integrating both sides we obtain $$ \oint \frac{dQ}{T_R} \leq \oint \frac{dQ}{T_S} $$ and using (*) we obtain $$\oint \frac{dQ}{T_R} \leq 0. $$ I stress again that this relies on the assumption that the system is close to equilibrium at all stages, so that it has a well-defined temperature, and that it has an entropy for which $dS = dQ/T$. Ján Lalinský
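As a concrete illustration of the sign in these derivations, here is a small numerical check of $\sum_i Q_i/T_i$ for an engine exchanging heat with two reservoirs. It is only an illustration: the reservoir temperatures, the absorbed heat and the efficiencies are arbitrary made-up numbers, and the reservoir temperatures are used for $T$, as the answers above emphasize.

```python
# Numerical check of the Clausius inequality for a two-reservoir cycle.
# T_h, T_c are reservoir temperatures, Q_h is the heat absorbed from the hot
# reservoir per cycle, and eta is the engine efficiency (eta <= 1 - T_c/T_h).

def clausius_sum(T_h, T_c, Q_h, eta):
    """Return sum_i Q_i / T_i over one cycle, heats signed from the system's viewpoint."""
    W = eta * Q_h           # work delivered per cycle
    Q_c = -(Q_h - W)        # heat rejected to the cold reservoir (negative: it leaves the system)
    return Q_h / T_h + Q_c / T_c

T_h, T_c, Q_h = 400.0, 300.0, 1000.0     # kelvin, kelvin, joules (illustrative values)
eta_carnot = 1.0 - T_c / T_h             # 0.25 for these reservoirs

for eta in (eta_carnot, 0.20, 0.10, 0.0):
    s = clausius_sum(T_h, T_c, Q_h, eta)
    label = "reversible limit" if eta == eta_carnot else "irreversible"
    print(f"eta = {eta:4.2f}  ->  sum Q_i/T_i = {s:+7.3f} J/K   ({label})")
```

The sum vanishes only at the Carnot efficiency and is strictly negative for any smaller efficiency, i.e. whenever entropy is generated inside the engine, which is exactly $\sum_i Q_i/T_i = -S_{gen} \le 0$.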
Reconfigurable optomechanical circulator and directional amplifier Zhen Shen1,2 na1, Yan-Lei Zhang1,2 na1, Yuan Chen1,2 na1, Fang-Wen Sun1,2, Xu-Bo Zou1,2, Guang-Can Guo1,2, Chang-Ling Zou1,2 & Chun-Hua Dong1,2 Microresonators Optomechanics Non-reciprocal devices, which allow non-reciprocal signal routing, serve as fundamental elements in photonic and microwave circuits and are crucial in both classical and quantum information processing. The radiation-pressure-induced coupling between light and mechanical motion in travelling-wave resonators has been exploited to break the Lorentz reciprocity, enabling non-reciprocal devices without magnetic materials. Here, we experimentally demonstrate a reconfigurable non-reciprocal device with alternative functions as either a circulator or a directional amplifier via optomechanically induced coherent photon–phonon conversion or gain. The demonstrated device exhibits considerable flexibility and offers exciting opportunities for combining reconfigurability, non-reciprocity and active properties in single photonic devices, which can also be generalized to microwave and acoustic circuits. The field of classical and quantum information processing with integrated photonics has achieved significant progress during the past decades, and numerous optical devices of basic functionality have been realized1. Nonetheless, it is still a challenge to obtain devices with non-reciprocal or active gain properties. In particular, non-reciprocal devices, including the common isolator and circulator, have attracted great efforts for both fundamental and practical considerations2,3,4,5,6,7. Although their bulky counterparts play a vital role in daily optics applications, the requirement of a strong external bias magnetic field and magnetic field shields and the compatibility of lossy magneto-optic materials prevent the miniaturization of these devices8. Due to the general principle of Lorentz reciprocity or time-reversal symmetry in optics, nonlinear optical effects are one of the remaining options to circumvent these obstacles in the photonic integrated circuit9,10,11. Thus far, optical isolation based on spatiotemporal modulations and three-wave mixing effects has been developed12,13,14,15,16,17,18,19,20,21,22, and similar mechanisms have been applied to superconducting microwave circuits23,24,25,26,27. Yet, very little optimization work has been carried out on optical circulators, another important non-reciprocal device. Circulators, which allow the signal to pass in a unirotational fashion between their ports, can separate opposite signal flows or operate as isolators. For the design of full-duplex systems, circulators are the key elements, offering the opportunity to increase channel capacity and reduce power consumption28. Recently, a fibre-integrated optical circulator for single photons was realized, in which non-reciprocal behaviour arises from a chiral interaction between the atom and transversely confined light29,30. However, an optical circulator and a directional amplifier for a large dynamic range of signal power remain inaccessible. Here, we demonstrate an optomechanical circulator and directional amplifier in a two-tapered fibre-coupled silica microresonator. Although non-reciprocal photonic devices based on optomechanical interactions have been demonstrated in previous studies, they have been limited to two-port isolators. The circulator is an indispensable device that cannot be realized by the combination of other optical devices.
In contrast, the isolator can be replaced by a circulator by just using two ports of the circulator. In addition, via a simple change in the control field, the device performs as an add–drop filter and can be switched to circulator mode or directional amplifier mode. Our device has several advantages over its bulky counterparts, including reconfigurability, amplification and compactness. Theoretical model The optomechanical circulator and directional amplifier feature the photonic structure shown in Fig. 1a, where a silica microsphere resonator is evanescently coupled with two-tapered microfibres (designated α and β) as signal input–output channels. For a passive configuration without a pump, the structure acts as a four-port add–drop filter device31,32, which can filter the signal from fibre α to β or vice versa via the passive cavity resonance. Because of its travelling-wave nature, the microresonator supports pairs of degenerate clockwise (CW) and counter-clockwise (CCW) whispering-gallery modes, and the device transmission function is symmetric under 1 ↔ 2 and 4 ↔ 3 commutation. The key to the reconfigurable non-reciprocity is the nonlinear optomechanical coupling, represented by the following Hamiltonian: $$H_{{\mathrm{int}}} = g_0\left( {c_{{\mathrm{cw}}}^\dagger c_{{\mathrm{cw}}} + c_{{\mathrm{ccw}}}^\dagger c_{{\mathrm{ccw}}}} \right)\left( {m + m^\dagger } \right),$$ where \(c_{{\mathrm{cw}}}\) (\(c_{{\mathrm{ccw}}}\)) and m denote the bosonic operators of the CW (CCW) optical cavity mode and mechanical mode, respectively. The radial breathing mode can modulate the cavity resonance by changing the circumference of the microsphere, with g0 as the single-photon optomechanical coupling rate. Schematic of the optomechanical circulator and directional amplifier. a The device consists of an optomechanical resonator and two coupled microfibres. A control field launched into port 1 excites the coupling between the mechanical motion and the clockwise (CW) optical field. b, c Schematic of the circulator and directional amplifier, respectively. The routing direction of the signal light coincides with the control field (that is, CW direction). For example, the signal entering port 1 is transmitted to the adjacent port 2 as indicated by the yellow arrows, while a signal input at port 2 will drop to port 3 as indicated by the green arrows. The port number follows that of the microfibres in a. For the directional amplifier, a larger arrow size indicates a gain in the corresponding direction, while the unchanged arrows represent the absence of gain. Biased by a control field that is detuned from the resonance, either the coherent conversion or parametric coupling between the signal photon and phonon can be enhanced33. However, the bias control field can only stimulate the interaction between a phonon and a signal photon propagating along the same direction as the bias. As a result of the directional control field, which is chosen as the CW mode in our experiment, the time-reversal symmetry is broken and effective non-reciprocity is produced for the signal light. In particular, the device performs the function of either a circulator or a directional amplifier, which is determined by the frequency detuning of the control light with respect to the cavity resonance.
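In the standard linearized treatment of cavity optomechanics (the framework of ref. 33), a strong control field populating the CW mode with \(N_{\mathrm{d}}\) photons displaces \(c_{\mathrm{cw}}\) and enhances the interaction in the Hamiltonian above; the sketch below is that textbook linearization, not a derivation specific to this device, and \(\omega_c\), \(\omega_o\) and \(\omega_m\) are the control, optical and mechanical frequencies defined in the next paragraph. Writing \(c_{\mathrm{cw}} \rightarrow \sqrt{N_{\mathrm{d}}} + \delta c_{\mathrm{cw}}\) and keeping the resonant terms in the frame rotating at the control frequency gives

$$H_{{\mathrm{eff}}} \approx G\left( {\delta c_{{\mathrm{cw}}}^\dagger m + \delta c_{{\mathrm{cw}}}m^\dagger } \right)\;\;{\mathrm{for}}\;\omega _c - \omega _o \approx - \omega _m,\qquad H_{{\mathrm{eff}}} \approx G\left( {\delta c_{{\mathrm{cw}}}^\dagger m^\dagger + \delta c_{{\mathrm{cw}}}m} \right)\;\;{\mathrm{for}}\;\omega _c - \omega _o \approx \omega _m,$$

with \(G = g_0\sqrt{N_{\mathrm{d}}}\), so that the cooperativity introduced below can equivalently be written as \(4G^2/\kappa\gamma\). The CCW mode carries no control photons and retains only the bare single-photon coupling, which is the origin of the non-reciprocity.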
When the CW optical mode is excited via a red-detuned control field, that is, ωc − ωo ≈ − ωm, where ωc, ωo and ωm are the respective frequencies of the control field and optical and mechanical modes, the well-known photon–phonon coherent conversion occurs with a beam-splitter-like interaction \(\left( {c_{{\mathrm{cw}}}^\dagger m + c_{{\mathrm{cw}}}m^\dagger } \right)\)34,35. For CW signal photons sent to the cavity through fibre port 1(3), as shown in Fig. 1a, a transparent window appears in the transmittance from port 1(3) to port 2(4) when the signal is near resonance with the optical cavity mode. The signal is routed by the control field due to destructive interference between the signal light and mechanically up-converted photons from the control field34,35,36. In contrast, for the two other input ports 2 and 4, the signal light couples to the CCW optical mode and drops to ports 3 and 1, respectively. Thus, the add–drop functionality is maintained for these two ports in the absence of optomechanical interaction. In general, the device functions as a four-port circulator, in which the signal entering any port is transmitted to the adjacent port in rotation, as shown in Fig. 1b. For a control field that is blue-detuned from the CW mode (ωc − ωo ≈ ωm), an effective photon–phonon pair generation process \(\left( {c_{{\mathrm{cw}}}^\dagger m^\dagger + c_{{\mathrm{cw}}}m} \right)\) leads to signal amplification. Similar to the case of a circulator, only a signal launched in a certain direction can couple with the mechanical mode and be amplified, as shown in Fig. 1c. For example, a signal input at port 1 leads to an amplified signal output at both ports 2 and 4. Conversely, a signal input at port 2 will only drop to port 3 without amplification. Therefore, such a device can operate as a common add–drop filter, circulator or directional amplifier by programming the control field. Experimental realization The optomechanical resonator used in this study is a silica microsphere with a diameter of approximately 35 μm, where we choose a high-Q-factor whispering-gallery mode with an intrinsic damping rate κ0/2π = 3 MHz near 780 nm. The radial breathing mechanical mode has a frequency of ωm/2π = 90.47 MHz and a dissipation rate of γ/2π = 22 kHz (Supplementary Fig. 1). The two microfibres are mounted on two three-dimensional stages and the distance between the resonator and microfibres is fixed throughout the experiment. The external coupling rates of the two channels are κα/2π = 9 MHz and κβ/2π = 4.2 MHz, respectively (see Supplementary Note 1 and Supplementary Figs. 2 and 3 for more details regarding the setup). For an experimental demonstration of the optomechanical circulator, we first measure the signal transmission spectra Ti→i+1 from the i-th to the (i + 1)-th port and the reversal Ti+1→i for i ∈ {1, 2, 3, 4} (as shown in Fig. 2a) when the CW optical mode is excited by a red-detuned control laser. Here, the control laser and signal light are pulsed (pulse width τ = 10 μs) to avoid thermal instability of the microsphere19,36. With the detuning δ between the signal and the cavity mode (see Supplementary Note 1 and Supplementary Figs. 4–8 for more spectra), the spectra unambiguously present asymmetric transmittance in the forward (i → i + 1) and backward (i + 1 → i) directions around δ ≈ 0: relatively high forward transmittance (60–80%) and near-zero backward transmittance. Such performance indicates an optical circulator (Fig. 1b) with an insertion loss of approximately 1–2 dB.
The transmission in the backward direction T4→3 is slightly higher because of the imperfection imposed by the unbalanced external coupling rates of the two channels. To obtain a better understanding of the role of the optomechanical interactions, we measure the transmission spectra for control fields of different intensities. The transmissions at δ = 0 are summarized and plotted in Fig. 2b as a function of the cooperativity \(C_{{\mathrm{cw}}} \equiv 4g_0^2N_{\mathrm{d}}{\mathrm{/}}\kappa \gamma\), where Nd is the CW intracavity control photon number, and κ = κ0 + κα + κβ is the total cavity damping rate. For increasing Ccw, we observe that the non-reciprocal transmittance contrast between the forward and backward directions (Ti→i+1 − Ti+1→i) increases from 0 to approximately 60%. Demonstration of the circulator function with a red-detuned control field. a Measured port-to-port transmission spectra of the signal around the cavity resonance; the solid circles correspond to Ti→i+1 and the open squares correspond to Ti+1→i. The incident control power is 7.8 mW, corresponding to Ccw = 4.6. b The transmittance obtained at δ = 0 versus Ccw. The lines in a, b indicate theoretical expectations based on the parameters κ/2π = 16.2 MHz, ωm/2π = 90.47 MHz, and γ/2π = 22 kHz. By tuning the frequency of the control field to the upper motional sideband of the optical mode (ωc − ωo = ωm) and holding the other conditions constant, the same device is reconfigured to act as a directional amplifier. As shown in Fig. 3a, only the signal light launched into port 1 and port 3 (that is, coupled to the CW mode) will be simultaneously transferred to port 2 and port 4 with considerable gain, but not vice versa (see Supplementary Note 1 and Supplementary Figs. 4–8 for more spectra). For the channel from port 2 to port 1, the lower transmittance predicted by theory at δ = 0 is not measured due to noise. Here, the experimental results are fitted to the transient transmission spectra, and sinc-function-like oscillations around the central peak are observed due to the impulse response of the device for a 10 μs rectangular control pulse. For the transmittance at δ = 0 and Ccw = −4.0, where the negative sign of the cooperativity represents the blue-detuned drive19, the signal field from port 1 to port 2 is amplified by 15.2 dB, but in the opposite direction, it suffers a 19.1 dB loss, as shown in Fig. 3b. Hence, the maximum contrast ratio between forward and backward probe transmission is approximately 34.3 dB when Ccw = −4.0. Demonstration of a directional amplifier with a blue-detuned control field. a Typical measured transmission spectra for the function of the directional amplifier. The solid circles represent Ti→i+1 and the open squares represent Ti+1→i. The incident control power is 5.8 mW, corresponding to Ccw = −3.4. b Transmittance obtained at δ = 0 versus Ccw. The lines in a, b indicate theoretically expected values based on the parameters κ/2π = 16.2 MHz, ωm/2π = 90.47 MHz, and γ/2π = 22 kHz. To fully characterize the performance of our reconfigurable non-reciprocal devices, we measure the complete transmission spectra Ti→j between all ports (that is, i, j ∈ {1, 2, 3, 4}), with Ccw = 0 for the add–drop filter, Ccw > 0 for the circulator and Ccw < 0 for the directional amplifier. Figure 4 shows the experimental results of the transmittance matrix for Ccw = 0, 4.6, −4.0 at δ = 0, and the matrix for an ideal circulator is given as a comparison (see Supplementary Tables 1 and 2 for the values of all transmission matrices).
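The theory curves referred to in Figs. 2b and 3b follow from input–output relations for the two-fibre-coupled travelling-wave cavity. As a rough illustration only (not the full model used for those fits), the minimal sketch below assumes the textbook on-resonance result that optomechanically induced transparency rescales the effective cavity linewidth of the driven (CW) direction by a factor of 1 + Ccw; it uses the coupling rates quoted above and leaves out the blue-detuned (amplifier) case.

```python
# Minimal sketch: on-resonance through-port transmission T(1->2) of a travelling-wave
# resonator coupled to two fibres. Optomechanically induced transparency is modelled
# as an effective linewidth kappa -> kappa * (1 + C) for the driven (CW) direction.
# Rates are the values quoted in the text, in units of 2*pi*MHz.

kappa_0, kappa_a, kappa_b = 3.0, 9.0, 4.2   # intrinsic loss, fibre-alpha and fibre-beta coupling
kappa = kappa_0 + kappa_a + kappa_b         # total linewidth, 16.2 MHz

def through_transmission(C):
    """|s_out/s_in|^2 at zero detuning for a signal entering fibre alpha (port 1 -> port 2)."""
    return abs(1.0 - (2.0 * kappa_a / kappa) / (1.0 + C)) ** 2

print(f"C = 0   (no control field / backward direction): T = {through_transmission(0.0):.3f}")
print(f"C = 4.6 (red-detuned control, CW signal):        T = {through_transmission(4.6):.3f}")
```

This simple estimate gives roughly 1% transmission without the control field (the resonant signal drops out of the through port) and roughly 64% at Ccw = 4.6, consistent with the near-zero backward and 60–80% forward transmittance reported above; the complete spectra and the amplifier regime require the full model behind the theoretical fits.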
To quantify the device performance, we introduce the ideality metric I = 1 − \(\frac{1}{8}\mathop {\sum}\nolimits_{i,j} | {T_{i \to j}^{\mathrm{N}} - T_{i \to j}^{\mathrm{I}}} |\), where \(T_{i \to j}^{\mathrm{N}}\) = \(T_{i \to j}{\mathrm{/}}\eta _i\) is the normalized experimental transmittance at δ = 0 for subtracting the influence of the insertion loss (see Supplementary Note 2 for more details), \(T_{i \to j}^{\mathrm{I}}\) indicates ideal performance and \(\eta _i = \mathop {\sum}\nolimits_j {\kern 1pt} T_{i \to j}\) is the total output for the signal field entering port i. As shown in Fig. 4e, the ideality of the circulator and amplifier approaches unity with increasing \(\left| {C_{{\mathrm{cw}}}} \right|\), which agrees well with theoretical fittings (Supplementary Note 2). The best idealities of all three functions of the device exceed 75%. Although the device presented here is a proof-of-principle demonstration, it already provides considerable performance. The mechanism of the device can readily be realized in integrated photonic circuits, which is promising for better device performance with higher cooperativity in sophisticated optomechanical structures and collective enhancement in an array of optomechanical cavities37. Additionally, the large dynamic range of signal power, potential for low noise, optical real-time reconfigurability and small system size are exciting benefits of this optomechanical approach. Thus, the demonstrated optomechanical reconfigurable non-reciprocal device has great potential for practical applications. Transmission matrices. a The transmission matrix of an ideal circulator: T1→2 = T2→3 = T3→4 = T4→1 = 1, and all remaining matrix elements are 0. A circulator requires an asymmetric transmission matrix with regard to the dashed line, which breaks the reciprocity. Conversely, b shows a symmetric transmission matrix measured without a control field, representing a reciprocal device. c, d Transmission matrices for the demonstrated circulator and directional amplifier, respectively. The control power is 7.8 mW for the circulator and 6.9 mW for the directional amplifier, corresponding to Ccw = 4.6 and −4.0. The values of all transmission matrices are provided in Supplementary Tables 1 and 2. e Ideality I of the circulator, directional amplifier and add–drop filter as a function of Ccw. The lines are the results of theoretical calculations based on the parameters κ/2π = 16.2 MHz, ωm/2π = 90.47 MHz, and γ/2π = 22 kHz. See Supplementary Note 2 for more details of the calculation. The demonstrated non-reciprocal circulator and amplifier based on the optomechanical interaction in a travelling-wave resonator enable versatile photonic elements and offer the unique advantages of all-optical switching, non-reciprocal routing and amplification. Other promising applications along this direction include non-reciprocal frequency conversion, a narrowband reflector and creation of a synthetic magnetic field for light by exploiting multiple optical modes in a single cavity19,21,38. With advances in materials and nanofabrication, these devices can be implemented in photonic integrated circuits39, which will allow for stronger optomechanical interactions and a smaller device footprint. Thus, the missing block of non-reciprocity can be tailored and implemented to meet specific experimental demands. The principle demonstrated herein can also be incorporated into microwave superconducting devices as well as acoustic devices in the emerging research field of quantum phononics40.
During the preparation of this manuscript, a similar work by F. Ruesink et al. has been reported in Nature Communications41, where an optical circulator based on a microtoroid resonator was demonstrated. All data generated in this study are available from the corresponding author upon request. Saleh, B. E. A. & Teich, M. C. Fundamentals of Photonics 2nd edn (Wiley, Hoboken, 2007). Shoji, Y. & Mizumoto, T. Magneto-optical nonreciprocal devices in silicon photonics. Sci. Technol. Adv. Mater. 15, 014602 (2014). Metelmann, A. & Clerk, A. A. Nonreciprocal photon transmission and amplification via reservoir engineering. Phys. Rev. X 5, 021025 (2015). Lodahl, P. et al. Chiral quantum optics. Nature 541, 473–480 (2017). Kamal, A. & Metelmann, A. Minimal models for nonreciprocal amplification using biharmonic drives. Phys. Rev. Appl. 7, 034031 (2017). Li, Y., Huang, Y. Y., Zhang, X. Z. & Tian, L. Optical directional amplification in a three-mode optomechanical system. Opt. Express 25, 18907–18916 (2017). Malz, D. et al. Quantum-limited directional amplifiers with optomechanics. Phys. Rev. Lett. 120, 023601 (2018). Bi, L. et al. On-chip optical isolation in monolithically integrated non-reciprocal optical resonators. Nat. Photon. 5, 758–762 (2011). Yu, Z. & Fan, S. Complete optical isolation created by indirect interband photonic transitions. Nat. Photon. 3, 91–94 (2009). Shi, Y., Yu, Z. & Fan, S. Limitations of nonlinear optical isolators due to dynamic reciprocity. Nat. Photon. 9, 388–392 (2015). Tzuang, L. D., Fang, K., Nussenzveig, P., Fan, S. & Lipson, M. Non-reciprocal phase shift induced by an effective magnetic flux for light. Nat. Photon. 8, 701–705 (2014). Kang, M. S., Butsch, A. & Russell, P. S. J. Reconfigurable light-driven opto-acoustic isolators in photonic crystal fibre. Nat. Photon. 5, 549–553 (2011). Dong, C.-H. et al. Brillouin-scattering-induced transparency and non-reciprocal light storage. Nat. Commun. 6, 6193 (2015). Kim, J., Kuzyk, M. C., Han, K., Wang, H. & Bahl, G. Non-reciprocal Brillouin scattering induced transparency. Nat. Phys. 11, 275–280 (2015). Guo, X., Zou, C.-L., Jung, H. & Tang, H. X. On-chip strong coupling and efficient frequency conversion between telecom and visible optical modes. Phys. Rev. Lett. 117, 123902 (2016). Zheng, Y. et al. Optically induced transparency in a micro-cavity. Light Sci. Appl. 5, e16072 (2016). Hua, S. et al. Demonstration of a chip-based optical isolator with parametric amplification. Nat. Commun. 7, 13657 (2016). Hafezi, M. & Rabl, P. Optomechanically induced nonreciprocity in microring resonators. Opt. Express 20, 7672 (2012). Shen, Z. et al. Experimental realization of optomechanically induced non-reciprocity. Nat. Photon. 10, 657–661 (2016). Ruesink, F., Miri, M.-A., Alù, A. & Verhagen, E. Nonreciprocity and magnetic-free isolation based on optomechanical interactions. Nat. Commun. 7, 13662 (2016). Fang, K. et al. Generalized nonreciprocity in an optomechanical circuit via synthetic magnetism and reservoir engineering. Nat. Phys. 13, 465–471 (2017). Verhagen, E. & Alù, A. Optomechanical nonreciprocity. Nat. Phys. 13, 922–924 (2017). Pozar, D. M. Microwave Engineering 3rd edn (Wiley, Hoboken, NJ, 2005). Sliwa, K. M. et al. Reconfigurable Josephson circulator/directional amplifier. Phys. Rev. X 5, 041020 (2015). Peterson, G. A. et al.
Demonstration of efficient nonreciprocity in a microwave optomechanical circuit. Phys. Rev. X 7, 031001 (2017). Barzanjeh, S. et al. Mechanical on-chip microwave circulator. Nat. Commun. 8, 953 (2017). Bernier, N. R. et al. Nonreciprocal reconfigurable microwave optomechanical circuit. Nat. Commun. 8, 604 (2017). Sounas, D. L. & Alù, A. Non-reciprocal photonics based on time modulation. Nat. Photon. 11, 774 (2017). Xia, K. et al. Reversible nonmagnetic single-photon isolation using unbalanced quantum coupling. Phys. Rev. A. 90, 043802 (2014). Scheucher, M., Hilico, A., Will, E., Volz, J. & Rauschenbeutel, A. Quantum optical circulator controlled by a single chirally coupled atom. Science 354, 1577–1580 (2016). Cai, M., Hunziker, G. & Vahala, K. J. Fiber-optic add-drop device based on a silica microsphere-whispering gallery mode system. IEEE Photon. Technol. Lett. 11, 686–687 (1999). Monifi, F., Friedlein, J., Özdemir, Ş. K. & Yang, L. A robust and tunable add-drop filter using whispering gallery mode microtoroid resonator. J. Light Technol. 30, 3306–3315 (2012). Aspelmeyer, M., Kippenberg, T. J. & Marquardt, F. Cavity optomechanics. Rev. Mod. Phys. 86, 1391 (2014). Weis, S. et al. Optomechanically induced transparency. Science 330, 1520 (2010). Safavi-Naeini, A. H. et al. Electromagnetically induced transparency and slow light with optomechanics. Nature 472, 69–73 (2011). Dong, C., Fiore, V., Kuzyk, M. C. & Wang, H. Optomechanical dark mode. Science 338, 1609–1613 (2012). Cernotík, O., Mahmoodian, S. & Hammerer, K. Spatially adiabatic frequency conversion in optoelectromechanical arrays. Preprint at http://arxiv.org/abs/1707.03339 (2017). Zhang, Y.-L. et al. Optomechanical devices based on traveling-wave microresonators. Phys. Rev. A. 95, 043815 (2017). Cohen, J. D. et al. Phonon counting and intensity interferometry of a nanomechanical resonator. Nature 520, 522–525 (2015). Gustafsson, M. V. et al. Propagating phonons coupled to an artificial atom. Science 346, 207–211 (2014). Ruesink, F., Mathew, J. P., Miri, M.-A., Alù, A. & Verhagen, E. Optical circulation in a multimode optomechanical resonator. Nat. Commun. doi: 10.1038/s41467-018-04202-y (2018). The work was supported by the National Key R&D Program of China (Grant Nos. 2016YFA0301303, 2017YFA0304504, 2016YFA0301700), the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (Grant No. XDB01030200), the National Natural Science Foundation of China (Grant Nos. 61575184, 11722436 and 11704370) and the Fundamental Research Funds for the Central Universities. This work was partially carried out at the USTC Center for Micro and Nanoscale Research and Fabrication. These authors contributed equally: Zhen Shen, Yan-Lei Zhang, Yuan Chen. Key Laboratory of Quantum Information, Chinese Academy of Sciences, University of Science and Technology of China, Hefei, 230026, China Zhen Shen, Yan-Lei Zhang, Yuan Chen, Fang-Wen Sun, Xu-Bo Zou, Guang-Can Guo, Chang-Ling Zou & Chun-Hua Dong Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Anhui, 230026, China Zhen Shen Yan-Lei Zhang Yuan Chen Fang-Wen Sun Xu-Bo Zou Guang-Can Guo Chang-Ling Zou Chun-Hua Dong Z.S., C.-H.D. and C.-L.Z. conceived the experiments. Z.S., Y.C. and C.-H.D. prepared microsphere, built the experimental setup and carried out measurements. Y.-L.Z., Z.S. and Y.C. performed the numerical simulation and analysed the data. X.-B.Z. and F.-W.S. provided theoretical support. Z.S., C.-H.D. and C.-L.Z. 
wrote the manuscript with input from all co-authors. C.-H.D., F.-W.S. and G.-C.G. supervised the project. All authors contributed extensively to the work presented in this paper. Correspondence to Fang-Wen Sun, Chang-Ling Zou or Chun-Hua Dong. Shen, Z., Zhang, YL., Chen, Y. et al. Reconfigurable optomechanical circulator and directional amplifier. Nat Commun 9, 1797 (2018). https://doi.org/10.1038/s41467-018-04187-8
Volume 14, No 6 (2018), pp. 1307 - 1317 10.3745/JIPS.02.0097 Hua Zhang* and Lijia Wang** Object Tracking with the Multi-Templates Regression Model Based MS Algorithm Abstract: To deal with the problems of occlusion, pose variations and illumination changes in the object tracking system, a regression model weighted multi-templates mean-shift (MS) algorithm is proposed in this paper. Target templates and occlusion templates are extracted to compose a multi-templates set. Then, the MS algorithm is applied to the multi-templates set for obtaining the candidate areas. Moreover, a regression model is trained to estimate the Bhattacharyya coefficients between the templates and candidate areas. Finally, the geometric center of the tracked areas is considered as the object's position. The proposed algorithm is evaluated on several classical videos. The experimental results show that the regression model weighted multi-templates MS algorithm can track an object accurately in terms of occlusion, illumination changes and pose variations. Keywords: Mean Shift Algorithm, Multi-Templates, Object tracking, Regression Model Object tracking plays an important role in computer vision, such as surveillance, robotics, human computer interaction, etc. In the past decade, many successful algorithms have been proposed for robust object tracking in complex environments [1-4]. However, object tracking is still a challenging task due to appearance variations caused by occlusion, pose variations, abrupt motion, and illumination variations. In general, object tracking algorithms can be classified into two groups: the generative methods and the discriminative methods. The generative methods model an object in the first frame and search for the area with the most similar appearance as the result [5]. The generative methods include the MS tracker [6], the fragments based tracker [7], the incremental tracker (IVT) [8], and visual tracking decomposition (VTD) [9]. The mean-shift (MS) tracker often represents an object with color histograms and determines the tracking result with the highest matching score by using the iterative method [6]. The fragments based tracker models an object by using multiple image fragments or patches and determines the tracking result by combining patches instead of the whole object [7,10]. The IVT tracker learns and updates a low dimensional eigenspace representation to reflect an object's appearance changes [8]. It has been demonstrated that these methods perform well when there are appearance changes caused by lighting and pose variations. However, they are less effective in dealing with the problems of heavy occlusion, serious lighting and pose variations [11]. The discriminative methods formulate object tracking as a binary classification problem to distinguish an object from the background. The online-boosting algorithm selects discriminative features for object tracking [12]. However, it often suffers from tracking drift because only one positive sample (the tracking result) is used for classifier updating [5].
Then, Grabner et al. [13] proposed a semisupervised object tracking algorithm which labels the positive samples in the first frame. However, it suffers from ambiguity as tracking evolves. The multiple instance learning (MIL) algorithm was proposed to learn a strong classifier from multiple instances in the positive and negative bags [14]. Although the algorithm overcomes the problem of ambiguity in object tracking, it does not handle the problem of large non-rigid shape deformation [15,16]. Recently, sparse representation based object tracking algorithms have been studied [17-20]. The tracker [17] represents an object as a sparse linear combination of the object templates and trivial templates. The templates are useful for handling the problems of occlusion and appearance changes. The tracking results show that the algorithm can achieve good performance but with expensive computation. In this paper, a regression model based MS algorithm is proposed for object tracking. In the algorithm, multiple templates are captured to deal with the problems of occlusion, lighting changes, and pose variations. These templates are divided into two classes. The first candidate templates are captured from the object, the object's transformational versions, and the occlusion templates. The second candidate templates are generated when tracking fails. The color-texture descriptor is used to represent these templates. Furthermore, to implement real-time and robust object tracking, a regression model is trained by using these templates. Then, the regression model estimates the values of the Bhattacharyya coefficients of the candidate areas for determining the object's location. At last, the proposed algorithm is evaluated on several videos. The experimental results show that the algorithm is robust to occlusion, illumination variations, and pose changes. 2. Overview of the System The overview of the object tracking algorithm is shown in Fig. 1. To handle the problems of illumination changes, pose variations, and occlusion, the algorithm employs multiple templates including the object, the object's transformational versions, the illumination ones, and the occlusion templates. In the beginning of the tracking process, the MS algorithm is applied to these templates. Consequently, several candidate areas are obtained. The tracking result is determined by the geometric center of the candidate areas with larger similarity scores (the Bhattacharyya coefficient is used to measure the similarity). As tracking evolves, the templates are inserted into a training pool. As the number of the templates in the pool reaches a given value, a regression model is trained with these templates. Then, the regression model estimates the values of the Bhattacharyya coefficients between these templates and the candidate areas. At last, the object's location is determined by weighting the candidate areas which have larger similarity scores. The contributions of this paper are as follows.  Multiple templates are proposed to handle the problems of occlusion, illumination changes, and pose variations. The multi-templates are captured from the object, the object's transformational versions, the illumination models and occlusion templates.  A regression model is presented to estimate the similarity of the object and candidate area. The more templates there are, the better the algorithm performs. However, as the number of the templates increases, the algorithm will suffer from a heavy computing time.
The presented regression model trains the multi-templates to control the number of the templates.  Regression model based MS algorithm is applied on the templates. The MS algorithm is applied to the obtained templates for detecting the candidate areas. Then, the regression model estimates the similarity values of the candidate areas. The tracking result is determined by weighting the areas' similarity scores with the estimated values. The flow chart of the regression model based MS algorithm. 3. Multiple Templates In many videos, objects are often occluded or corrupted by noise [21]. To address the problem, the context-aware exclusive sparse tracker (CEST) [11] employed a dictionary combining three types of templates, DF, DO and DC. These templates incorporate information about the object, noise/occlusion, and context. It is shown that the algorithm performs well in challenging videos. Inspired by the CEST, a multiple templates based MS algorithm is proposed. In the algorithm, a final multi-templates set, which includes the first candidate templates and the second candidate templates, is defined. Then, the MS algorithm is applied to the final template set for tracking the areas with larger similarity scores. The first candidate templates contain information about the object, noise, and occlusion. They are defined as [TeX:] $$T _ { 1 i } \{ i = 1,2,3 \}$$. The first group includes the target templates which are cropped from the tracking result and the transformed versions of the result. The transformational versions of the result are robust to the pose variations. The second group constituted by occlusion templates is shown in Fig. 1. The non-zero entries in the templates indicate that the pixels are occluded [11]. The last group includes the lighting templates. The second candidate templates are obtained when tracking fails. They are indicated as [TeX:] $$T _ { 2 i } \{ i = 1,2 , \cdots , n \}$$. Then, the final templates [TeX:] $$T _ { i } \{ i = 1,2 , \cdots , M \}, M = m + n,$$ are composed of the first and second candidate templates. The process for obtaining the second candidate templates and the final templates is detailed as follows. In the beginning of the tracking process, the MS algorithm is applied to the first candidate templates. As a result, the candidate areas are obtained. The Bhattacharyya coefficient is used to measure the similarity of the candidate areas. If there are areas with similarity scores larger than a given threshold, the tracking results are determined by these areas' geometric center [TeX:] $$L _ { c o n } = \frac { 1 } { N _ { \mathrm { c } } } \sum _ { i = 1 } ^ { N _ { c } } L _ { i }$$, where Nc is the number of the selected areas and Li is an area's center location. Furthermore, we re-extract samples for the first candidate templates based on the tracking result. Once all the candidate areas' similarity scores are less than the given threshold, the MS based tracking algorithm fails. In such a case, the EKF [10] predicts the object's location. Meanwhile, the areas with larger similarity scores are selected as the second candidate templates. At last, the final templates are obtained from the first candidate templates and the second candidate templates. After obtaining the final templates, the MS algorithm is employed in the successive frames. Then, m+n candidate areas are obtained. If there are areas with similarity scores larger than the given threshold, the object location will be determined by these areas.
Meanwhile, the first candidate templates are cropped around the tracking result, while the second candidate templates are obtained following the steps mentioned above. If tracking fails, the first candidate templates remain unchanged and the second candidate templates are updated with the n areas which have larger similarity scores. In such a case, the EKF [10] predicts the object's location. 4. Regression Model The multi-templates based object tracking algorithm is useful for dealing with the problems of illumination changes and pose variations. The more the number of the templates is, the better the tracking algorithm performs. However, the computing time will increase as the number of the templates grows. To deal with the problem, a multi-templates based regression model is proposed. The goal of the regression model is to estimate a candidate area's similarity score. 4.1 Color-Texture Descriptor The color feature is often used in computer vision because it is insensitive to rotation, translation, and scale. However, it ignores the spatial information [22]. The texture feature reflects the spatial distribution of the pixels' gray levels and makes up for the shortcoming of the color feature [23]. Therefore, a color-texture descriptor is used to comprehensively represent an object. We use the HUE component in the HSV space as the color feature, while the uniform LBP feature is employed due to its lower computational complexity, scale invariance, and rotation invariance [24,25]. The color-texture descriptor is obtained as follows: [TeX:] $$\hat { q } _ { f , u } = C \sum _ { i = 1 } ^ { M } k \left( \left\| \frac { x _ { i } - x _ { 0 } } { h } \right\| \right) \delta \left( b \left( x _ { i } \right) - u \right)$$ where [TeX:] $$f = C o , T e$$ indicates the color and texture information, respectively. [TeX:] $$\hat { q } _ { f , u }$$ is the corresponding histogram. C is the normalization constant which guarantees [TeX:] $$\sum _ { u = 1 } ^ { M } \hat { q } _ { f , u } = 1$$. [TeX:] $$\delta ( \cdot )$$ is the Delta function, which determines whether the value of xi belongs to the uth bin. [TeX:] $$k ( \cdot )$$ is the Epanechnikov kernel function. h is the bandwidth of the kernel function. x0 is the center of the object area. In the tracking process, the candidate area centered at y is expressed with its pixels [TeX:] $$\left\{ x _ { i } ^ { * } \right\} , i = 1,2 , \ldots , n$$. Then, a candidate area's color-texture descriptor [TeX:] $$\hat { p } _ { f , u }$$ is obtained as follows: [TeX:] $$\hat { p } _ { f ,u } = C _ { h } \sum _ { i = 1 } ^ { M } k \left( \left\| \frac { x _ { i } ^ { * } - y } { h } \right\| \right) \delta \left( b \left( x _ { i } ^ { * } \right) - u \right).$$ To measure the similarity between the candidate area and the template, the Bhattacharyya coefficient is used: [TeX:] $$\hat { \rho } _ { f } = \hat { \rho } _ { C o } \hat { \rho } _ { T e } = \sum _ { u = 1 } ^ { M } \sqrt { \hat { p } _ { c o , u } \hat { q } _ { C o , u } } \sum _ { u = 1 } ^ { M } \sqrt { \hat { p } _ { T e , u } \hat { q } _ { T _ { e , u } } },$$ where [TeX:] $$\hat { q } _ { C o , u } \text { and } \hat { q } _ { { Te , u } }$$ are the color histogram and texture histogram of the template, respectively. [TeX:] $$\hat { p } _ { C o , u } \text { and } \hat { p } _ { { Te , u } }$$ are the color histogram and texture histogram of the candidate area, respectively. [TeX:] $$\hat { \rho } _ { C o } \text { and } \hat { \rho } _ { T e }$$ are the color similarity and texture similarity, respectively.
[TeX:] $$\hat { \rho } _ { f }$$ is the obtained similarity. 4.2 Regression Model In the multi-templates based MS algorithm, the number of the templates affects the algorithm's performance. The algorithm is more robust to the problems of occlusion, illumination variations and pose changes as the number of the templates increases. However, it means that the MS algorithm will be applied to more templates to generate an accurate result. As a result, the computing time for dealing with one frame will increase. To address the problem, a regression model is presented: [TeX:] $$R ( \hat { p } ( y ) ) = \beta ^ { T } \hat { p } ( y )$$ where R(·) is the estimated value of the Bhattacharyya coefficient for a candidate area. [TeX:] $$\hat { p } ( y ) = \left( \hat { p } _ { f 0 } ( y ) , \hat { p } _ { f 1 } ( y ) , \cdots , \hat { p } _ { f m } ( y ) \right) ^ { T }$$ is a vector with the color-texture descriptor as its elements, with [TeX:] $$\hat { p } _ { f 0 } ( y ) = 1$$. [TeX:] $$\beta = \left( \beta _ { f 0 } , \beta _ { f 1 } , \cdots , \beta _ { f m } \right) ^ { T }$$ is the model's parameter vector. The regression model is trained online by using the final templates and the candidate areas. The goal of training is to estimate the parameter β which minimizes the loss function L(β): [TeX:] $$\min _ { \beta } ( L ( \beta ) ) = \min _ { \beta } \left( \sum _ { i = 1 } ^ { n } \left[ \hat { \rho } \left( y _ { i } \right) - R \left( \hat { p } \left( y _ { i } \right) \right) \right] ^ { 2 } \right)$$ where [TeX:] $$\hat { \rho } \left( y _ { i } \right)$$ is the similarity score between the ith candidate area and the corresponding template. The closed-form solution of Eq. (5) can be obtained by linear algebra: [TeX:] $$\hat { \beta } = \left( P ^ { T } P \right) ^ { - 1 } P ^ { T } \hat { \rho }$$ where [TeX:] $$\hat { \beta }$$ is the estimated value of the parameter. [TeX:] $$P ^ { T }$$ is a matrix of dimension [TeX:] $$( m + 1 ) \times n$$, composed of the n samples' color-texture descriptors: [TeX:] $$P ^ { T } = \left( \hat { p } \left( y _ { 1 } \right) , \hat { p } \left( y _ { 2 } \right) , \cdots , \hat { p } \left( y _ { n } \right) \right)$$. [TeX:] $$\hat { \rho }$$ is a vector composed of the Bhattacharyya coefficients of the n candidate areas: [TeX:] $$\hat { \rho } = \left( \hat { \rho } \left( y _ { 1 } \right) , \hat { \rho } \left( y _ { 2 } \right) , \ldots , \hat { \rho } \left( y _ { n } \right) \right) ^ { T }$$.
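To make these quantities concrete, the following minimal sketch (synthetic histograms; the sample size, random data and threshold are made up and are not the paper's settings) strings together the Bhattacharyya similarity of Eq. (3), the least-squares solution of Eq. (6), and the score-weighted localization used later in Eq. (9).

```python
import numpy as np

rng = np.random.default_rng(0)

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two normalized histograms (one factor of Eq. (3))."""
    return float(np.sum(np.sqrt(p * q)))

def fit_regression(P, rho):
    """Estimate beta as in Eq. (6); lstsq is used instead of forming the explicit inverse."""
    beta, *_ = np.linalg.lstsq(P, rho, rcond=None)
    return beta

def weighted_center(locations, scores, threshold):
    """Eq. (9): average of the candidate centers whose estimated score exceeds the threshold,
    weighted by those scores; None signals a tracking failure (the EKF fallback case)."""
    keep = scores > threshold
    if not np.any(keep):
        return None
    s = scores[keep]
    return (s[:, None] * locations[keep]).sum(axis=0) / s.size

def rand_hist(bins):
    h = rng.random(bins)
    return h / h.sum()

# Synthetic data: 48 color bins + 36 texture bins, the dimensions stated in Section 5.1.
q_co, q_te = rand_hist(48), rand_hist(36)                        # template histograms, Eq. (1)
samples = [(rand_hist(48), rand_hist(36)) for _ in range(200)]   # candidate-area descriptors, Eq. (2)
P = np.array([np.concatenate(([1.0], p_co, p_te)) for p_co, p_te in samples])
rho = np.array([bhattacharyya(p_co, q_co) * bhattacharyya(p_te, q_te)   # Eq. (3)
                for p_co, p_te in samples])

beta = fit_regression(P, rho)
est = P @ beta                                                   # R(p_hat(y)) = beta^T p_hat(y), Eq. (4)
centers = rng.random((len(samples), 2)) * 100.0                  # made-up candidate-area centers
print("mean |estimate - rho|:", float(np.abs(est - rho).mean()))
print("weighted location    :", weighted_center(centers, est, threshold=float(rho.mean())))
```

Once β has been fitted, new candidate areas are scored with the inexpensive inner product of Eq. (4) rather than with a per-template Bhattacharyya computation, which helps keep the per-frame cost down, as discussed in Section 5.3.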
4.3 Regression Model based MS Algorithm To implement robust and real-time object tracking, the regression model based MS algorithm is used. In the beginning, the MS algorithm is applied on the final templates for detecting an object. In such a case, the Bhattacharyya coefficients are employed to measure the similarity between the tracked candidate areas and the templates. The MS algorithm is implemented as follows: Step 1: Initialize the searching area and center manually in the first frame; Step 2: Compute the color-texture descriptor according to Eq. (1); Step 3: Take the searching area and center in the previous frame as the initialization values in the current frame; Step 4: Apply the MS algorithm and compute the searching area as follows: [TeX:] $$y _ { 1 } = \sum _ { i = 1 } ^ { n _ { h } } x _ { i } w _ { i } g \left( \left\| \frac { y - x _ { i } } { h } \right\| ^ { 2 } \right) / \sum _ { i = 1 } ^ { n _ { h } } w _ { i } g \left( \left\| \frac { y - x _ { i } } { h } \right\| ^ { 2 } \right)$$ where [TeX:] $$g ( x ) = - k _ { E } ^ { \prime } ( x )$$ is the negative derivative of the Epanechnikov kernel profile. wi is the weight for the pixels in the object area and it is calculated as follows: [TeX:] $$w _ { i } = \sum _ { u = 1 } ^ { m } \sqrt { \frac { q _ { u } } { p _ { u } ( y ) } } \delta \left[ b \left( x _ { i } \right) - u \right]$$ Step 5: If [TeX:] $$\left\| y _ { 1 } - y \right\| < \varepsilon$$, then stop searching and obtain the final tracking position. Else, let [TeX:] $$y = y _ { 1 }$$ and return to Step 4. After detecting the object by using the MS algorithm, the final templates are inserted into a training pool. Once the number of the training samples reaches a given value, the regression model is trained. Then, the estimated value generated by the regression model is used to measure the candidate areas' similarity scores instead of the Bhattacharyya coefficients. The areas with the estimated values larger than a given threshold are weighted by their values to determine the object's location: [TeX:] $$L _ { c e n } = \frac { 1 } { N _ { c } } \sum _ { i = 1 } ^ { N _ { c } } R _ { i } L _ { i }$$ where Lcen is the final position of an object. Li is the position of the detected area with the estimated value larger than the given threshold. Nc is the number of the detected areas with the estimated value larger than the given threshold. Experimental results are detailed in this section to validate the performance of our algorithm. 5.1 Data-Sets To evaluate the regression model based MS algorithm, a set of challenging videos which are publicly available online is used [9]. The videos are "David indoor", "Occluded face", "Coke can", "Cliffer bar", "Tiger", and "Can". There are serious occlusion, illumination changes, and pose variations in these videos. Furthermore, our regression model based MS algorithm is compared to other state-of-the-art trackers: MIL [15], VTD [9], and the compressive tracker (CT) [5]. These algorithms are realized by using the source codes available online. The default parameters of the MIL [15], VTD [9], and CT [5] algorithms are used. For our algorithm, the number of the final templates is 3, while that of the candidate templates is 2. In the regression model, the dimension of the color feature is 48, while the dimension of the texture feature is 36. 5.2 Tracking Results The tracking position is used to evaluate the performance of the proposed algorithm. The tracking results are shown in Fig. 2. The tracked object is indicated by a rectangle. The red rectangles are for the proposed algorithm. The purple ones are for the MIL algorithm. The results of the VTD algorithm are indicated by the green rectangles. The results of the CT algorithm are indicated by the blue rectangles. The results of the "David indoor" sequences are shown in the first line. There are serious illumination changes and pose variations in frames "128" and "202". These tracking algorithms can detect the object in frame "128". However, as tracking evolves, the MIL, CT, and VTD trackers drift.
The tracking results show that the proposed algorithm is robust to illumination variations and pose changes. The results in the second line are for the "Occluded face" sequence. The "face" is occluded by a book or a hat from frame "495". The proposed regression model based MS algorithm tracks the "face" by using multiple templates and updates these templates in different cases. Therefore, the algorithm can deal with serious occlusion. The "Cliffer bar" moves fast and rotates in the tracking process. The proposed algorithm performs better than the other algorithms. In particular, in frame "412", only the proposed algorithm detects the object. The objects in the "Coke can" and "Tiger" sequences move fast and are often occluded by other objects. The tracking results show that the proposed algorithm tracks the object successfully. In the last line, the "dollar" is disturbed by a similar background. All the trackers can detect the "dollar", but the MIL, CT, and VTD trackers often drift away. The tracking results obtained by using the MIL, CT, VTD, and the proposed method. The experimental results show that the proposed algorithm is robust to illumination changes (e.g., the "face" in the "David indoor" video), serious occlusion (e.g., the "face" in the "Occluded face" video), pose variations (e.g., the "Can" in the "Coke can" video), and scale changes (e.g., the object in the "Cliffer bar" video). 5.3 Computational Cost The proposed tracker has been demonstrated to perform well in visual tracking. Then, the algorithm is evaluated in terms of computational cost. Table 1 shows the average per-frame computational cost of these algorithms. The MIL tracker suffers from a heavy computational cost, because it computes the bag probability and instance probability M times when a powerful classifier is selected. The CT algorithm is a fast tracker which processes a frame in the least time. However, the CT tracker often fails in the case of serious illumination changes and pose variations. The VTD tracker realizes tracking in the framework of IMCMC. As a result, it suffers from a long computational time. Applying the MS algorithm to multiple templates will result in a heavy computational cost. To decrease the average per-frame computational cost, the regression model is proposed in our algorithm. The experimental results show that the computing time of our method is lower than that of the VTD and MIL algorithms. Meanwhile, our method runs slower than the CT algorithm, but performs better especially when there are serious scale changes (e.g., in the "Cliffer bar" video). Overall, the presented algorithm can successfully track the object in real time.
Table 1. The average per-frame computational cost (in seconds) of the trackers MIL, CT, VTD, and the proposed tracker
Video clip      MIL    CT     VTD    Proposed tracker
David indoor    1.122  0.073  1.01   0.68
Occluded face2  1.601  0.071  0.904  0.71
Coke can        1.089  0.071  0.964  0.124
Cliff bar       1.092  0.068  0.98   0.179
Tiger2          1.092  0.073  1.003  0.194
Coupon book     1.164  0.071  1.103  0.46
In this paper, a regression model based MS algorithm is proposed for object tracking. First, multiple templates are extracted to deal with the problems of occlusion, illumination changes and pose variations. Second, a regression model is presented to estimate the similarity score between the candidate area and the template. The method decreases the number of templates and guarantees good performance.
Third, the MS algorithm is applied to the templates, and several candidate areas with larger estimated similarity scores are obtained. Then, the obtained candidate areas are weighted by the estimated similarity scores for detecting the object. The experimental results have shown that the algorithm performs well in terms of illumination changes, pose variations and occlusion. The authors would like to thank the Key Research and Development Program of Hebei Province (No. 18210329D) and the Natural Science Foundation of Hebei Province (No. F2018205178). Hua Zhang He received his M.S. degree from the College of Electrical Engineering and Automation, Beijing University of Technology, in 2009. Since September 2009, he has been with the Department of Electrical and Electronic Engineering, Shijiazhuang University of Applied Technology, as a lecturer. Lijia Wang https://orcid.org/0000-0002-1907-171X She received her M.S. degree from the College of Electrical Engineering and Automation, Zhengzhou University, in 2008. She is currently an associate professor at Hebei College of Industry and Technology. Her current research interests include machine vision and pattern recognition. 1 A. Yilmaz, O. Javed, M. Shah, "Object tracking: a survey," ACM Computing Surveys, vol. 38, no. 4, pp. 1-45, 2006. 2 J. Gao, H. Ling, W. Hu, J. Xing, in Computer Vision-ECCV 2014. Cham: Springer, pp. 188-203, 2014. 3 J. Ning, J. Yang, S. Jiang, L. Zhang, M. H. Yang, "Object tracking via dual linear structured SVM and explicit feature map," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), Las Vegas, NV, 2016, pp. 4266-4274. 4 X. Mei, H. Ling, "Robust visual tracking and vehicle classification via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 11, pp. 2259-2272, 2011. doi: 10.1109/TPAMI.2011.66 5 K. Zhang, L. Zhang, M. H. Yang, in Computer Vision-ECCV 2012. Heidelberg: Springer, pp. 864-877, 2012. 6 F. Dadgostar, A. Sarrafzadeh, S. P. Overmyer, in Affective Computing and Intelligent Interaction. Heidelberg: Springer, pp. 56-63, 2005. 7 A. Adam, E. Rivlin, I. Shimshoni, "Robust fragments-based tracking using the integral histogram," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, 2006, pp. 798-805. 8 D. A. Ross, J. Lim, R. S. Lin, M. H. Yang, "Incremental learning for robust visual tracking," International Journal of Computer Vision, vol. 77, no. 1, pp. 125-141, 2008. doi: 10.1007/s11263-007-0075-7 9 J. Kwon, K. M. Lee, "Visual tracking decomposition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, 2010, pp. 1269-1276. 10 S. M. Jia, L. J. Wang, X. Z. Li, L. F. Wen, "Person tracking system by fusing multicues based on patches," Journal of Sensors, vol. 2015, article ID 760435, 2015. doi: 10.1155/2015/760435 11 T. Zhang, B. Ghanem, S. Liu, C. Xu, N. Ahuja, "Robust visual tracking via exclusive context modeling," IEEE Transactions on Cybernetics, vol. 46, no. 1, pp. 51-63, 2016. doi: 10.1109/TCYB.2015.2393307 12 H. Grabner, M. Grabner, H. Bischof, "Real-time tracking via online boosting," in Proceedings of the British Machine Vision Conference, Edinburgh, UK, 2006, pp. 47-56. 13 H. Grabner, C. Leistner, H. Bischof, "Semi-supervised online boosting for robust tracking," in Computer Vision-ECCV 2008.
Received: April 18, 2017
Revision received: June 17, 2017
Accepted: June 21, 2017
Published (Print): December 31, 2018
Published (Electronic): December 31, 2018
Corresponding Author: Lijia Wang** ([email protected])
Hua Zhang*, College of Electrical and Electronic Engineering, Shijiazhuang University of Applied Technology, Shijiazhuang, China, [email protected]
Lijia Wang**, Dept. of Intelligent Manufacture, Hebei College of Industry and Technology, Shijiazhuang, China, [email protected]
CommonCrawl
Progress in Earth and Planetary Science Strength models of the terrestrial planets and implications for their lithospheric structure and evolution Ikuo Katayama ORCID: orcid.org/0000-0002-8664-04091 Progress in Earth and Planetary Science volume 8, Article number: 1 (2021) Cite this article Knowledge of lithospheric strength can help to understand the internal structure and evolution of the terrestrial planets, as surface topography and gravity fields are controlled mainly by deformational features within the lithosphere. Here, strength profiles of lithosphere were calculated for each planet using a recently updated flow law and taking into account the effect of water on lithospheric deformation. Strength is controlled predominantly by brittle deformation at shallow depths, whereas plastic deformation becomes dominant at greater depths through its sensitivity to temperature. Incorporation of Peierls creep, in which strain rate is exponentially dependent on stress, results in the weakening of plastic strength at higher stress levels, and the transition from brittle to ductile deformation shifts to shallower depths than those calculated using conventional power-law creep. Strength in both the brittle and ductile regimes is highly sensitive to the presence of water, with the overall strength of the lithosphere decreasing markedly under wet conditions. The markedly low frictional coefficient of clay minerals results in a further decrease in brittle strength and is attributed to expansion of the brittle field. As plastic strength is influenced by lithology, a large strength contrast can occur across the crust–mantle boundary if deformation is controlled by ductile deformation. Effective elastic thickness for the terrestrial planets calculated from the rheological models indicates its close dependence on spatiotemporal variations in temperature and the presence of water. Although application of the strength models to observed large-scale surface deformational features is subject to large extrapolation and uncertainties, I emphasize the different sensitivity of these features to temperature and water, meaning that quantifying these features (e.g., by data from orbiting satellites or rovers) should help to constrain the internal structure and evolution of the terrestrial planets. The terrestrial planets typically comprise an outermost rigid layer overlying a convective viscous layer. Although direct subsurface information is not available for these planets, except for Earth, lithospheric structures can be inferred from modeling studies using orbiting satellite data such as multispectral imaging, surface topography, and the gravity field (e.g., Watters and Schultz 2009). The flexural response to surface or subsurface loads is responsible for flexural rigidity and can constrain the thickness of an equivalent elastic plate. By applying simple models incorporating brittle, elastic, and ductile deformation, it is possible to calculate the effective elastic thickness of the lithosphere and its spatial and temporal variations (e.g., Goetze and Evans 1979; McNutt 1984; Watts and Burov 2003). The spacing of faults and fold is also sensitive to the thickness of a strong layer associated with brittle deformation and can therefore provide constraints on lithospheric structure (e.g., Phillips and Hansen 1998; Montesi and Zuber 2003). Lithospheric structure is controlled mainly by rock strength, which is highly sensitive to temperature (e.g., Ranalli 1992; Kohlstedt et al. 1995). 
Consequently, differences in elastic thickness and faults/fold spacing are commonly interpreted to reflect spatiotemporal variations in the vertical thermal gradient in the terrestrial planets. In Earth, the effective elastic thickness of oceanic lithosphere systematically changes with the time of loading, consistent with the plate-cooling model (e.g., Watts et al. 1980). The thickness of the elastic lithosphere within Mars shows spatial variations that are generally correlated with the geological epoch, most likely due to the secular cooling of the planet (e.g., Solomon and Head 1990). Rock strength is also dependent on constituent minerals, meaning that chemical composition, including compositional changes such as the crust to mantle transition, has a substantial influence on the rheological structure of the lithosphere. Because of the large strength contrast across the crust–mantle boundary within Venus, crustal deformation is likely decoupled from the underlying mantle convection, which might have resulted in the absence of plate tectonics on that planet (e.g., Mackwell et al. 1998; Azuma et al. 2014). Another key difference in internal structural features between Venus and Earth is unstable fault motion, whereby the cold and wet lithosphere may facilitate dynamic weakening within Earth, in contrast to stable sliding in the dry and hot lithosphere of Venus (Karato and Barbot 2018). Recent laboratory experiments have emphasized the influence of water on rock rheology (e.g., Paterson and Wong 2004; Karato 2008). Pore fluid pressure is well known to reduce the brittle strength of rock, and this relationship has been widely applied to enhance recovery of oil and shale gas, as well as to exploit geothermal reservoirs (e.g., Gregory et al. 2011). Earthquakes can be triggered by pore pressure buildup, whereby the temporal evolution of fault strength is likely controlled by fluid accumulation and abrupt fluctuations (e.g., Sibson 1992; Katayama et al. 2012). Another important influence of water on brittle strength is the presence of clay minerals, which has been suggested as a weakening mechanism of fault motions through its markedly low frictional coefficient (e.g., Moore and Lockner 2007). During ductile deformation, the presence of water reduces plastic strength through increasing the number of defects in crystals and promoting mass transfer at grain boundary (e.g., Karato 2008). Nominally anhydrous minerals such as olivine and pyroxene contain very small amounts of water (ppm level), but the dissolved hydrogen in crystals increases the defect concentration, meaning that even trace amounts of water can markedly enhance the rate of plastic deformation (e.g., Karato and Jung 2003; Hirth and Kohlstedt 2003). This review paper summarizes current knowledge of the rheological properties of crust and mantle materials, including the effects of water on material properties. Strength profiles of the lithosphere are calculated for Mercury, Venus, Earth, and Mars, which vary in terms of gravity, temperature, and crustal thickness (Fig. 1 and Table 1), following which the implications for the spatiotemporal evolution of the lithosphere of each planet are discussed, focusing particularly on the role of water. 
Internal structure of the terrestrial planets, composed of crust, mantle, and core

Table 1 Basic information for the terrestrial planets

Calculation of strength profiles

The type of deformation in the interior of rocky planets shifts from brittle deformation at relatively shallow levels to plastic deformation at deeper levels. This is because fracture surfaces grow easily at low temperature and pressure, resulting in brittle deformation, whereas at greater depths fracture surfaces tend to close owing to increasing pressure, and plastic deformation becomes dominant as temperature increases with depth. As brittle fracture is commonly restrained by friction on existing fault planes, frictional strength is typically used to represent the strength of the brittle region (e.g., Goetze and Evans 1979; Kohlstedt et al. 1995). In the plastic region, strength is calculated using a flow law that is exponentially dependent on temperature. Details of the strength calculation for each deformation mechanism and of the rheological layering are described below.

Frictional strength

Frictional strength is controlled mainly by the normal stress applied to the fault plane, and the friction coefficient is known to be insensitive to lithology and temperature (Byerlee 1978). The strength of the brittle region is generally expressed as follows: $$ \tau = \mu \sigma_n + C $$ where τ is shear stress, μ is the coefficient of friction, σn is normal stress, and C is the frictional cohesive strength. A frictional coefficient of 0.85 at low normal stress, decreasing to about 0.6 at higher normal stress, has frequently been used to calculate brittle strength (e.g., Kohlstedt et al. 1995). However, recent laboratory experiments and first-principles calculations have shown an extremely low frictional coefficient for clay minerals under wet conditions (e.g., Moore and Lockner 2007; Katayama et al. 2015; Sakuma et al. 2018). In addition to Earth, the presence of clay minerals has been suggested in surficial materials of Mars (e.g., Ehlmann et al. 2011). I therefore calculated brittle strength in the presence of clay minerals for Earth and Mars, using frictional coefficients as low as 0.1. Although the frictional strength obtained from Eq. (1) is a shear stress, plastic strength is typically expressed as a differential stress, so it is necessary to unify the stress components when calculating a strength profile across the brittle–ductile transition. A fault plane is generally oriented about 30° from the maximum principal stress axis, and the frictional strength on the fault plane can be expressed in terms of the principal stress components using this geometrical relationship (Kohlstedt et al. 1995; Katayama and Azuma 2017). In tensional deformation, the maximum principal stress equals the overburden (lithostatic) pressure, whereas during compression it is the minimum principal stress that equals that pressure; consequently, frictional strength in the tensional field is nearly half of that in the compressional field. In cases where pore fluids exist in the brittle region, the pore pressure acts in the direction opposite to the normal stress, so the effective normal stress on the fault plane is calculated as $$ \sigma_n^{\mathrm{eff}} = \sigma_n - \alpha P_p $$ where Pp is pore fluid pressure and α is a coefficient that is approximately unity (e.g., Gueguen and Palciauskas 1994).
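To make Eqs. (1) and (2) concrete, the following Python sketch evaluates the frictional (brittle) strength as a function of depth for a tensional (normal-faulting) regime, with hydrostatic pore pressure under wet conditions. The resolution of stresses onto a fault plane oriented 30° from the maximum principal stress, and the illustrative parameter values (density, gravity, friction coefficient), are assumptions made for this demonstration rather than the exact values or geometry used in the published calculations.

```python
import numpy as np

def brittle_differential_stress(z, rho=3300.0, g=9.8, mu=0.6, C=0.0,
                                wet=False, rho_w=1000.0):
    """Differential stress (Pa) at frictional failure, Eqs. (1)-(2), for a
    tensional (normal-faulting) regime with a fault plane 30 deg from sigma_1.
    All parameter defaults are illustrative placeholders."""
    z = np.asarray(z, dtype=float)
    P_lith = rho * g * z                      # overburden = sigma_1 in tension
    P_pore = rho_w * g * z if wet else 0.0    # hydrostatic pore pressure (alpha ~ 1)
    # On a plane 30 deg from sigma_1 (plane normal at 60 deg):
    #   tau = 0.433*(s1 - s3),  sigma_n = s1 - 0.75*(s1 - s3)
    # Substituting into tau = mu*(sigma_n - P_pore) + C and solving for (s1 - s3):
    return (mu * (P_lith - P_pore) + C) / (0.433 + 0.75 * mu)

# Example: dry vs. wet brittle strength at 10 km depth (printed in MPa)
depth = 10e3
print(brittle_differential_stress(depth) / 1e6,
      brittle_differential_stress(depth, wet=True) / 1e6)
```

A lower friction coefficient (e.g., μ of about 0.1 for clay-bearing faults, as used in the clay model) simply rescales the numerator, reproducing the further weakening of the brittle branch described above.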
This relationship indicates that the effective normal stress, and hence the frictional strength, can decrease markedly as pore fluid pressure increases. Here, in the calculation of strength profiles under hydrous conditions, the pore pressure was assumed to be hydrostatic. For the wet model, brittle strength was calculated using Byerlee's law, including the effect of pore fluid pressure. For the clay model, the strength was further decreased on account of the low frictional coefficient. Plastic strength In the region of plastic deformation, mechanical strength is strongly influenced by temperature and strain rate, whereas brittle strength is controlled predominantly by pressure. The relationship between strain rate (\( \dot{\varepsilon} \)) and differential stress (σ) is commonly expressed by a power-law relationship as follows: $$ \dot{\varepsilon}=A\frac{\sigma^n}{d^m}\mathit{\exp}\left(-\frac{E+ PV}{RT}\right) $$ where A is a pre-exponential factor, d is grain size, E is activation energy, V is activation volume, P is pressure, T is temperature and R is the gas constant (e.g., Ranalli 1992; Karato 2008). In this power-law relationship, n and m are constant exponents for stress and grain size, respectively. Plastic deformation is caused by the movement of defects in crystals, in which the dominant deformation mechanism depends on stress, temperature, and grain size, among other factors (e.g., Frost and Ashby 1982). Diffusion creep is controlled by point defects, for which the stress and grain size exponents are 1 and 3, whereas dislocation creep is insensitive to grain size and has a strong stress dependence (n ≈ 3). The power-law relationship is frequently used to calculate plastic strength in the lithosphere. However, limitations of the power-law formula have been documented for low temperatures and high stresses (Tsenn and Carter 1987), in which case the strain rate becomes exponentially dependent on stress, as follows: $$ \dot{\varepsilon}=A{\sigma}^2\exp \left(-\frac{E+ PV}{RT}{\left(1-\frac{\sigma }{\sigma_p}\right)}^2\right) $$ where σp is Peierls stress. This mechanism, known as Peierls creep, becomes dominant at depths close to the brittle–ductile transition (e.g., Katayama and Karato 2008). The parameters of flow laws used in the calculations are listed in Table 2. Table 2 Flow law parameters used for calculation of rhelogical structures Another key aspect of plastic deformation is that strength varies with lithology. Plastic strength is generally controlled by the weakest constituent mineral in rock, and the flow laws for plagioclase and olivine are commonly used to calculate plastic strength in the crust and mantle, respectively (e.g., Bürgmann and Dresen 2008). Plastic strength is also dependent on crystal orientation as well as mineral distribution and connectivity, leading to variability in strength (e.g., Yamazaki and Karato 2002); however, isotropic strength was adopted here as the simplest model for rheological calculations. Water is known to enhance the rate of plastic deformation by increasing the mobility of defects in crystals (e.g., Karato 2008). Although the amounts of water dissolved in nominally anhydrous minerals is minor (ppm level), defect mobility can be increased markedly through distortion in crystals, which enhances plastic deformation (e.g., Karato and Jung 2003; Hirth and Kohlstedt 2003). The influence of water depends on the chemistry of point defects, which varies among minerals and with the mechanism of creep. 
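The flow laws of Eqs. (3) and (4) are used by inverting them for stress at a prescribed strain rate, which gives the plastic branch of the strength profile. The Python sketch below (using NumPy and SciPy) leaves all flow-law constants as arguments, since the actual values are those compiled in Table 2 and are not reproduced here; taking the weakest mechanism as the ductile strength is a simplification of this sketch (strictly, the strain rates of parallel mechanisms sum).

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314  # gas constant (J mol^-1 K^-1)

def power_law_stress(strain_rate, T, P, A, n, m, E, V, d=1e-3):
    """Invert Eq. (3): stress (Pa) sustained at the given strain rate, e.g.
    diffusion creep (n = 1, m = 3) or dislocation creep (n ~ 3, m = 0)."""
    return (strain_rate * d**m / A * np.exp((E + P * V) / (R * T))) ** (1.0 / n)

def peierls_stress(strain_rate, T, P, A, E, V, sigma_p):
    """Invert Eq. (4) (Peierls creep) numerically; valid for stresses below
    the Peierls stress sigma_p, where the strain rate increases with stress."""
    def misfit(sigma):
        return (A * sigma**2
                * np.exp(-(E + P * V) / (R * T) * (1.0 - sigma / sigma_p) ** 2)
                - strain_rate)
    return brentq(misfit, 1.0, sigma_p * (1.0 - 1e-9))

def ductile_strength(strain_rate, T, P, flow_laws):
    """Ductile strength approximated by the weakest (lowest-stress) mechanism.
    `flow_laws` is a list of parameter dictionaries taken from Table 2."""
    stresses = []
    for law in flow_laws:
        if law.get("mechanism") == "peierls":
            stresses.append(peierls_stress(strain_rate, T, P, law["A"],
                                           law["E"], law["V"], law["sigma_p"]))
        else:
            stresses.append(power_law_stress(strain_rate, T, P, law["A"], law["n"],
                                             law["m"], law["E"], law["V"],
                                             law.get("d", 1e-3)))
    return min(stresses)
```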
Plastic strength was calculated for wet models using the flow law determined under fluid-saturated conditions (Table 2). Elastic thickness Under applied moments and loads, flexure of the lithosphere occurs on geological time-scales, and the thickness of the elastically responding layer is inferred from the strength envelope of the lithosphere (e.g., Goetze and Evans 1979; McNutt 1984). Here, the effective elastic thickness was calculated following McNutt (1984), with the moment balanced between tensional and compressional forces within the lithosphere. The integrated bending moment of the lithosphere is estimated as $$ M={\int}_0^{T_m}\sigma (z)\left(z-{z}_n\right) dz $$ where Tm is the mechanical thickness of the lithosphere, σ(z) is the strength at depth z, and zn is the depth of the neutral stress plane. The mechanical thickness is assumed to be represented by the base of the lithosphere, corresponding to the depth with a stress of < 50 MPa (e.g., McNutt 1984). The analytical solution for the bending moment of the lithosphere is expressed as $$ M=\frac{EK{T}_e^3}{12\left(1-{\upsilon}^2\right)} $$ where E is Young's modulus, υ is Poisson's ratio, K is the flexural curvature, and Te is the effective elastic thickness. The elastic constants of the lithosphere were assumed to be E = 100 GPa and υ = 0.25, and the curvature was set as 5 × 10−7 m−1. The sensitivity of these parameters is discussed in the following sections. As the bending moment of the lithosphere is controlled mainly by the strength close to the brittle–ductile transition, incorporation of Peierls creep provides a large contribution to the estimated elastic thickness. Rheological models for the terrestrial planets In the calculation of strength as a function of depth, lithostatic pressure is calculated using density and gravity for each planet (Table 1). The thermal structure in the terrestrial planets is highly uncertain, so constant thermal gradients ranging from 5 to 20 K/km were assumed, corresponding to a surface heat flow of 20–80 mW/m2 with a typical thermal conductivity of 4 W m−1 K−1 for mantle material. Although the strain rate may vary with location and tectonic processes, a constant strain rate of 10−17 s−1 was used to compare the lithospheric strength for the different planets. Variations in strain rate ranging from 10−16 to 10−18 s−1 are shown as dashed lines in Fig. 2 and have a relatively minor influence on the strength profiles. For plastic deformation, grain size was assumed to be 1 mm, resulting in deformation controlled mainly by diffusion creep at the base of lithosphere, with dislocation and Peierls creeps becoming dominant in the middle parts of the lithosphere. Figure 2 shows the calculated strength profiles of oceanic lithosphere within Earth using different thermal gradients under dry and wet conditions. Increasing temperature results in a decrease in plastic strength at depth, and a shallow brittle–ductile transition develops under a high thermal gradient. Water influences lithospheric strength in both the brittle and ductile regions of the terrestrial planets, with the overall strength of the lithosphere decreasing markedly under wet conditions. The presence of clay minerals results in a marked decrease in frictional resistance, and hence the brittle–ductile transition shifts to greater depths (Fig. 2). Figure 3 shows the strength profiles of the planets calculated using a constant thermal gradient of 10 K/km. 
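Before turning to the planet-by-planet results, the sketch below shows one way the pieces above can be assembled: the yield-strength envelope is the weaker of the brittle and ductile strengths at each depth, the mechanical thickness is cut off where strength falls below roughly 50 MPa, and Eqs. (5) and (6) convert the bending moment into an effective elastic thickness. Placing the neutral plane where the integrated strengths above and below balance, treating the plate as fully yielded (no residual elastic core), and the example surface temperature, density, and thermal gradient in the commented usage are simplifying assumptions of this sketch, not a statement of the exact published procedure.

```python
import numpy as np

def neutral_plane_depth(z, sigma):
    """Depth z_n at which the integrated yield-limited stress above equals that
    below, so that the net in-plane force of the bent plate vanishes."""
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(z))))
    return np.interp(0.5 * cum[-1], cum, z)

def effective_elastic_thickness(z, sigma, E=100e9, nu=0.25, K=5e-7, cutoff=50e6):
    """Mechanical thickness, brittle-ductile transition depth, and effective
    elastic thickness from a strength envelope sigma(z), using Eqs. (5)-(6)."""
    i_bdt = int(np.argmax(sigma))                    # strength maximum ~ BDT
    weak = np.nonzero(sigma[i_bdt:] < cutoff)[0]     # base of mechanical layer
    i_base = i_bdt + (int(weak[0]) if weak.size else len(sigma) - 1 - i_bdt)
    zz, ss = z[:i_base + 1], sigma[:i_base + 1]
    zn = neutral_plane_depth(zz, ss)
    M = np.trapz(ss * np.abs(zz - zn), zz)                     # Eq. (5), fully yielded plate
    Te = (12.0 * (1.0 - nu**2) * M / (E * K)) ** (1.0 / 3.0)   # invert Eq. (6)
    return zz[-1], z[i_bdt], Te

# Example usage (brittle/ductile functions as sketched earlier; values illustrative):
# z = np.linspace(1.0, 150e3, 600)
# T = 288.0 + 10e-3 * z                 # surface temperature + 10 K/km gradient
# P = 3300.0 * 9.8 * z                  # lithostatic pressure
# sigma = np.minimum(brittle_differential_stress(z),
#                    [ductile_strength(1e-17, Ti, Pi, flow_laws) for Ti, Pi in zip(T, P)])
# Tm, z_bdt, Te = effective_elastic_thickness(z, np.asarray(sigma))
```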
Lithospheric strength and the depth of the brittle–ductile transition are highly variable among these different planets, even if a similar thermal structure is assumed. Strength models for Earth calculated using a constant strain rate of 10−17 s−1 (solid lines) with variation in strain rates between 10−16 and 10−18 s−1 (dashed lines). The strength of shallow parts is controlled mainly by frictional sliding and that of deeper parts by plastic deformation. Arrows indicate the transition from brittle to ductile deformation for each model. An increasing thermal gradient results in a decrease in plastic strength at depth and a shallower brittle–ductile transition. Water reduces the strength of both the brittle and ductile regions; consequently, lithospheric strength is markedly lower under wet conditions compared with dry. The presence of clay results in the markedly low frictional coefficient, and hence further decreases the brittle strength Strength profiles for Mercury, Venus, Earth, and Mars calculated using a constant thermal gradient of 10 K/km and a constant strain rate of 10−17 s−1, under both dry (red lines) and wet (blue lines) conditions. Dashed lines indicate the influence of strain rate of 10−16 and 10−18 s−1, and dotted lines indicate the brittle strength in the presence of clay minerals. As plastic strength differs between crust and mantle, the strength profiles show a gap across the crust–mantle boundary (Moho) in case where deformation is plastic Strength models of Mercury are shown in Fig. 4 with different thermal gradients under dry and wet conditions. Given the relatively small gravity of this planet, the brittle strength is moderate, attributed to a deeper brittle–ductile transition. Increasing the thermal gradient results in a systematic decrease in the plastic strength, and the transition from brittle to ductile deformation occurs at shallower depths. Nimmo and Watters (2004) calculated the depth of the brittle–ductile transition within Mercury with variable crustal thickness and surface heat flow. The present results are mostly consistent with the results of those authors for a given crustal thickness under dry conditions. For wet models, the brittle–ductile transition occurs at shallower depths than those of dry models with a markedly lower transitional strength (Fig. 4). As the plastic strength of plagioclase is weaker than that of olivine, a large strength contrast can be observed across the crust–mantle boundary, particularly for a low thermal gradient under wet conditions. A significant weakness in the lower crust may result in mechanical decoupling between crust and mantle, similar to that expected for Venus. Strength profiles for Mercury with variable thermal gradients under dry (upper panel) and wet (lower panel) conditions. Brittle strength is calculated from Byerlee's law, and plastic strength is calculated from flow laws using a constant strain rate of 10−17 s−1. The stress state assumes tensional deformation at shallow depths that shifts to compression at greater depths. A flexural curvature of 5 × 10−7 m−1 was used for calculations. The red/blue shading indicates the bending moment, which is balanced between tensional and compressional forces within the lithosphere With a mean surface temperature of 730 K, the overall strength of the Venusian lithosphere is much lower than that of the other terrestrial planets. The rheological model of Venus is also highly dependent on the thermal gradient under dry conditions (Fig. 
5), whereas the strength of the lithosphere is very low under wet conditions, meaning that almost the entire lithosphere acts as a viscous layer. Mackwell et al. (1998) presented a strength profile for Venus using the flow of diabase as the crustal material. The difference in the plastic strength between diabase and plagioclase is not significant under deformation controlled by dislocation creep; however, Peierls creep becomes the dominant deformation mechanism near the brittle–ductile transition, and this type of flow law is not available for diabase. Azuma et al. (2014) conducted two-layer experiments with plagioclase and olivine under conditions corresponding to the depth of the crust–mantle boundary within Venus and found that crustal plagioclase is much weaker than mantle olivine. Because of the large strength contrast across the crust–mantle boundary, they suggested that decoupling between crustal deformation and mantle convection likely occurred during the early evolution of Venus. Strength profiles for Venus with variable thermal gradients under dry conditions. Note that strength under wet conditions is extremely low and that the mechanical thickness is expected to be less than 1 km. Calculation parameters are the same as those for Fig. 4 Figure 6 shows the calculated strength profiles for oceanic lithosphere within Earth. The thermal structure of oceanic lithosphere is known to have an age dependence (e.g., McKenzie et al. 2005), and the structures calculated for thermal gradients using 10 and 20 K/km correspond approximately to oceanic lithosphere with ages of 100 and 20 Ma, respectively. As oceanic plate cools with increasing distance from the ocean ridge, the rigidity, and thickness of oceanic lithosphere systematically increase (Fig. 6). Volatile elements are generally depleted in oceanic lithosphere during magmatic differentiations (e.g., Hirth and Kohlstedt 1996), so the dry model is appropriate for the rheological structure. However, recent geophysical observations have suggested extensive hydration along outer-rise bending faults close to the trench (e.g., Obana et al. 2019), which may modify the rigidity of oceanic lithosphere as a hydrous model, giving a weak and relatively thin elastic layer (Fig. 6). Strength profiles of the continental lithosphere are highly variable, depending on the geotherm, chemical stratification, and the distribution of water (e.g., Burov and Diament 1995). Marked strength layering, such as depicted in the "jelly sandwich" model, results in a weak lower crust and mechanical decoupling at the crust–mantle boundary, whereas the "crème brûlée" model predicts a weak mantle with strength being limited to the crust (e.g., Bürgmann and Dresen 2008). Because of these complexities, strength profiles were calculated only for oceanic lithosphere. Strength profiles for oceanic lithosphere of Earth with variable thermal gradients under dry (upper panel) and wet (lower panel) conditions. Dotted lines in the lower panels indicate brittle strength calculated in the presence of clay minerals. Calculation parameters are the same as those for Fig. 4 Strength profiles for Mars with variable thermal gradients under dry (upper panel) and wet (lower panel) conditions. Dotted lines in the lower panels indicate brittle strength calculated in the presence of clay minerals. Calculation parameters are the same as those for Fig. 4 Strength profiles within Mars for a given thermal gradient are similar to those within Mercury because of the relatively small gravity (Fig. 7). 
However, surface temperature of the present-day Mars is much cooler than the other terrestrial planets due to the small amounts of solar radiation, indicating a relatively rigid lithosphere. Grott and Breuer (2008) presented strength envelopes for various thermal models including the influence of water, and concluded that rheologically significant amounts of water can be retained in the Martian lithosphere. However, those authors used power-law creep for flow law of plastic deformation, which is not an appropriate mechanism near the brittle–ductile transition, resulting in an overestimation of lithospheric strength. Solomon and Head (1990) calculated the strength profile and elastic thickness incorporating Peierls creep, but the calculations were limited under dry conditions. Considering the Peierls creep and the effect of water, Azuma and Katayama (2017) suggested that a shallow brittle–ductile transition and low lithospheric strength under wet conditions might have changed to a thick and rigid plate owing to depletion of water during the evolution of the lithosphere in Mars. However, if clay minerals are present in shallow parts of the lithosphere, the markedly low frictional coefficient results in a deeper brittle–ductile transition, similar to that calculated for the dry model, whereas the elastic layer is thinner because of the low flexural rigidity. Effects of thermal gradient and water on elastic thickness and the depth of the brittle–ductile transition The calculated elastic thickness for each terrestrial planet is presented in Fig. 8 as a function of thermal gradient for various models (the values are listed in Table 3). For all planets, a decreasing thermal gradient results in increasing elastic thickness, although the dependencies are slightly different for each planet. I calculated elastic thickness using strain rates ranging from 10−16 to 10−18 s−1, and the variation in strain rate has a relatively minor influence on the estimated elastic thickness (Fig. 8). The calculated elastic thickness under dry conditions for Mercury, Earth, and Mars is highly dependent on thermal gradient, but less so for Venus. This is likely due to the lithospheric strength for Venus being controlled mainly by crustal material properties and less so by temperature (because of the relatively low activation energy for plagioclase). Wet models result in a markedly thin elastic layer, and the presence of clay minerals further decreases lithospheric strength and hence yields a smaller elastic thickness (Fig. 8). It should be noted that for Venus, the calculated elastic thicknesses under wet conditions are smaller than the calculated vertical resolution (1 km), even at the lowest thermal gradient. Although absolute values carry large uncertainties, as discussed in the following section, the relative changes in elastic thickness with temperature and water are robust. Effect of thermal gradient on elastic thickness for each planet for various models. Dashed, solid, and dotted lines indicate calculation results using strain rates of 10−16, 10−17, and 10−18 s−1, respectively. In the calculations, a tensional stress field is assumed at shallow depths, which shifts to compression at greater depths. The parameters for the calculations are listed in Tables 1 and 2 Table 3 Elastic thickness (km) calculated using strain-rate of 10-17 s-1 The depths of the brittle–ductile transition for the terrestrial planets are presented in Fig. 9 as a function of thermal gradient (the values are listed in Table 4). 
The brittle–ductile transition under dry conditions changes systematically with thermal gradient, whereby a low thermal gradient results in a stiff lithosphere and deeper transition depth. The wet model results in a markedly shallower brittle–ductile transition than those calculated for the dry model, although the presence of clay minerals shows deeper transitional depths than those of wet model (Fig. 9). In the case of Earth and the low thermal gradients for Mars, the brittle–ductile transitions occur at greater depths similar to those of the dry model, as the brittle fields are expanded to mantle depths owing to the low frictional coefficient of clay minerals. In the other cases, significant weakening of wet plagioclase cancels the effect of clay minerals, hence the clay model shows a similar depth of the brittle–ductile transition to that of the wet model at depths below the Moho. Consequently, the depth of the brittle–ductile transition for terrestrial planets is not a simple function of thermal gradient, but is highly variable depending on crustal thickness and the presence/absence of water and clay minerals. Effect of thermal gradient on the depth of the brittle–ductile transition in each planet for various models. Dashed, solid, and dotted lines indicate calculation results using strain rate of 10−16, 10−17, and 10−18 s−1, respectively. The tensional stress field is assumed at shallow depths, and the parameters for the calculations are listed in Tables 1 and 2 Table 4 Brittle–ductile transition (km) calculated using strain-rate of 10-17 s-1 The calculated elastic thickness shows a positive relationship with the depth of the brittle–ductile transition for each planet (Fig. 10). The slopes for the dry and wet models are similar, whereas the absolute values differ substantially depending on thermal gradient. A similar positive relationship was found by Nimmo and Watters (2004), even though they used different parameters to those of the present study and a non-linear thermal model. Elastic thickness is controlled mainly by flexural rigidity, whereas the depth of the brittle–ductile transition is insensitive to the stress level, resulting in different sensitivities to thermal gradients and the presence of water. For the clay model, the relationship between elastic thickness and the depth of the brittle–ductile transition is different overall from that of the wet model (Fig. 10). This can be explained by the substantial decrease in flexural rigidity for the clay model, although the brittle field expands to greater depths owing to the low frictional resistance of clay minerals. These different slopes among the dry, wet, and clay models for each terrestrial planet should help to assess the presence of water, if elastic thickness and the depth of the brittle–ductile transition are independently constrained. Fig. 10 Relationship between elastic thickness and depth of the brittle–ductile transition in each planet for various models. Calculations were performed using a constant strain rate of 10−17 s−1, and the other parameters are the same as those in Figs. 8 and 9. The values of thermal gradient for the calculations are labeled for each model. The relationship differs among planets and for the presence/absence of water and clay mineral Mechanical thickness, measured to the depth corresponding to a strength of < 50 MPa, also shows a positive relationship with elastic thickness, with the effective elastic thickness being approximately half of the mechanical thickness. 
Previous calculations have shown a similar relationship, although the results are highly dependent on flexural curvatures (e.g., McNutt 1984; Solomon and Head 1990). The mechanical boundary layer is sensitive to temperature at the base of the lithosphere, and therefore a constant isotherm of 600 oC has commonly been used in thermal plate models (e.g., McKenzie et al. 2005). However, plastic strength is efficiently reduced by additional water, meaning that the mechanical thickness is also sensitive to the presence of water. Sensitivity and uncertainties Using the analytical models of elastic lithosphere presented above, it is possible to capture simplified representations of the lithosphere and calculate its elastic thickness as well as the depth of the brittle–ductile transition. However, the calculations include various uncertainties introduced by the simplifying assumptions made to obtain the strength profiles of the terrestrial planets, as discussed below. The assumption of a linear thermal gradient is not completely realistic, as thermal conductivity is dependent on temperature and porosity, and the concentration of heat-producing elements are also dependent on chemical compositions. The thermal structure on Earth is known to have a non-linear concave-upward profile (e.g., McKenzie et al. 2005), and therefore the thermal gradient becomes smaller with depth, resulting in a stiffer lithosphere than that calculated using a constant thermal gradient. The thermal conductivity of crust is typically less than that of mantle (e.g., Petitjean et al. 2006), which may also contribute to a non-linear thermal gradient across the crust–mantle boundary. Strain rate is another source of uncertainty in calculations of plastic strength. The strain rate during the deformation of a planet's interior is mostly in the range 10−16 to 10−19 s−1 (e.g., Nimmo and McKenzie 1998), and an average strain rate of 10−17 s−1 was used here. This range leads to a large difference in the plastic strength determined by power-law creep; however, if deformation is controlled by Peierls creep, the exponential dependence of strain rate on stress results in a relatively minor influence of strain rate on the plastic strength. Diffusion creep has a relatively minor influence on lithospheric strength if the grain size is larger than 1 mm, although a smaller grain size enhances the rate of diffusion creep and may result in a weaker plastic strength. Grain boundary sliding has been highlighted recently as playing an important role on strain localization (e.g., Hansen et al. 2011). However, this mechanism is predominant between dislocation and diffusion creeps, and has a minor influence on the overall strength of the lithosphere. Considerable uncertainty in the strength profile can arise from variation in crustal thickness in the terrestrial planets. Plastic strength is dependent on material composition and is therefore sensitive to the crust–mantle boundary, whereas brittle strength is less sensitive to the rock type. Because of the large mineralogical variation in crustal materials, the plastic strength of the crust is less tightly constrained than that of the mantle. This study employed the flow law for plagioclase, which is most likely the weakest major constituent minerals in crustal materials; however, quartz shows a significant hydraulic weakening that may enhance the rate of ductile deformation for silica-rich materials under hydrous conditions (e.g., Paterson 1989). 
The rheology of clinopyroxene, which exhibits a strength intermediate between plagioclase and mantle olivine, has in some cases been used as an analog for basaltic crust, (e.g., Kirby and Kronenberg 1984). In the calculation of brittle strength, the frictional coefficient was assumed to be constant. However, chemical reactions may be facilitated along fault zones, where aqueous fluids can penetrate, which might result in the local presence of hydration products such as clay minerals. Spatial scaling of such reaction zones has a marked impact on brittle strength, with interconnected weak zones likely controlling the overall strength of the lithosphere. It is noted that a low frictional coefficient of clay minerals has been reported under fluid-saturated and/or high-humidity environments, but a relatively high frictional coefficient similar to that of Byerlee's law under dry conditions (e.g., Behnsen and Faulkner 2012; Tetsuka et al. 2018). Accordingly, the presence of clay minerals on fluid-saturated fault planes may facilitate frictional sliding and reduce the brittle strength of the lithosphere, whereas drained and dry conditions may cause temporal changes in brittle strength. For plastic deformation, water-saturated flow laws determined from laboratory experiments at a pressure of ~ 2 GPa were used (e.g., Karato and Jung 2003). However, the influence of water depends on the water contents in crystals, which commonly increases with pressure (e.g., Kohlstedt et al. 1996). Consequently, the influence of water on plastic strength could be underestimated at greater depths and overestimated under conditions of partial saturation. Hydrous minerals such as serpentine may further decrease plastic strength (e.g., Hilairet et al. 2007; Chernak and Hirth 2010). However, the flow law and deformation mechanism of these minerals are still unclear, and further experimental-derived constraints are needed. Calculation of elastic thickness from a strength profile is highly dependent on flexural curvature. The curvature is commonly inferred from the second derivative of gravity/topography admittance data, which are highly variable even in the same tectonic domain (e.g., McNutt 1984). Bending moments are dominant in those parts of the vertical profile with the maximum strength, so maximum values of curvature have commonly been used for calculations. A constant curvature of 5 × 10−7 m−1 was used here, although it could vary for different features in these planets. Previous modeling have shown that an order of magnitude lower curvature results in a nearly half elastic thickness (e.g., Solomon and Head 1990; Katayama et al. 2019). If the elastic layer is subjected to additional force, the stress state in the lithosphere can change. This also causes corresponding changes in the equivalent elastic thickness, but such effects are highly uncertain and were disregarded here. In contrast to elastic thickness, the depth of the brittle–ductile transition is less sensitive to these parameters and is rather well constrained under a given thermal structures. Brittle strength is dependent on the stress field, whereby the strength calculated for the tensional field is roughly half that for the compressional field, as a result of the lithostatic pressure corresponding to the maximum principal stress in tension. 
Although it was assumed here that the tensional stress field corresponds to the strength profile, winkle ridge deformation is possibly attributable to shortening and compression, meaning that brittle strength could be doubled in such compressional regions, indicating a shallower transition depth from the brittle to ductile deformation. Internal structure and evolution of the terrestrial planets Little is known about the internal structure of Mercury because of limited radar and stereo coverage. One of the few clues for assessing the physical state is the existence of lobate scarps, which likely developed in response to thermal contraction (e.g., Watters et al. 1998). Topographic profiles of the lobate scarps indicate that thrust faulting extended to depths of 30–40 km, corresponding to a lower limit for the brittle–ductile transition (Watters et al. 2002). Assuming the limiting isotherm for Mercury, Watters et al. (2002) suggested a paleo-thermal gradient in the ranges of 3–11 K/km. Our calculations of the brittle–ductile transition for Mercury are highly dependent on the model used, with 8–10 K/km for the dry model and 5–6 K/km for the wet model, to explain the observed transition depth. Nimmo and Watters (2004) estimated an effective elastic thickness of 25–30 km using a yield strength envelope model, similar to the approach taken in the present study. The slope of the brittle–ductile transitional depth and elastic thickness is similar between the dry and wet models (Fig. 10), consistent with the observed relationship, although it is difficult to distinguish these effects in the Mercury's lithosphere. Another estimate using wrinkle ridges in Caloris basin implies an elastic thickness of ~ 100 km (Melosh and McKinnon 1988), which far exceeds the thickness inferred from the lobate scarps. As the wrinkle ridge structures were probably formed before the thrust faulting related to the lobate scarps, they cannot be explained by secular cooling (Nimmo and Watters 2004). The discrepancy between the two estimates may reflect a different curvature in these features or the local depletion of water, particularly at the time of the wrinkle ridge formation. Although there is no clear evidence for plate tectonics on Venus, extensive deformational landforms have been observed on the surface (e.g., Campbell et al. 1984; Zuber and Parmentier 1995). Admittance data from the Magellan mission indicate a long-wavelength range of elastic thickness of ~ 20–30 km (Barnett et al. 2000), consistent with most previous flexural modeling (Johnson and Sandwell 1994). These estimates are similar to those observed in shields and ancient ocean basins on Earth, although the surface temperature of Venus is much higher than that of Earth, suggesting that the lithosphere of Venus is dry and maintains an elastic core even at the higher temperatures (Barnett et al. 2000). Fault motion is slow and stable under dry conditions, whereas unstable slip and dynamic weakening can occur in fluid-bearing fault zones; hence, the dry lithosphere in Venus might lead to strong plates and promote stagnant lid convection (Karato and Barbot 2018). The dry strength model calculated in the present study requires a thermal gradient of 4–8 K/km to explain the observed ranges of elastic thickness in Venus, indicating that these features occurred during the later history of Venus, most likely after global resurfacing events. 
Using inelastic flexure modeling, Brown and Grimm (1997) calculated thermal gradients as low as 4 K/km for Artemis Chasm, a large circular structure in Aphrodite Terra, to account for the absence of flexurally induced faulting and the bending moment of thick lithosphere. In contrast, relatively old surface features such as tessera suggest thin-skin tectonics with very short-wavelength features and hence a shallow brittle–ductile transition (Phillips and Hansen 1998). As the brittle–ductile transition is sensitive to temperature gradient, this difference is likely attributable to the internal thermal structures forming these features. The simplest model accounting for these observations is a transition from thin, hot lithosphere during the early history of Venus to thick, cold lithosphere as a result of planetary cooling (e.g., Nimmo and McKenzie 1998). Another possible scenario is hydraulic weakening of crustal rocks during the early history of the planet, as vigorous volcanic activity prior to or during resurfacing may have released substantial amounts of water to the surface. Grinspoon (1993) suggested that the high abundance ratio of deuterium to hydrogen in the atmosphere of Venus can be explained by efficient volcanic outgassing during this period, although this is still an open question. Plate tectonics on Earth occur as an outermost layer that acts as a mechanically rigid plate that is decoupled from the convecting mantle. Characteristics of the strong outer layer have been determined from lithospheric flexures caused by long-term surface loads such as ice sheets, oceanic islands, and subduction trenches (e.g., Watts and Burov 2003). Although the elastic layer beneath continents is highly variable owing to complex tectonic features, the effective elastic thickness of oceanic lithosphere shows a systematic correlation with time of loading (e.g., Watts et al. 1980; Bodine et al. 1981; McNutt 1984). An increase in elastic thickness with age is commonly considered as oceanic lithosphere cools over time and becomes more rigid in its response to surface loading. The depth of the brittle–ductile transition is also related to the age of oceanic lithosphere, as inferred from the lower limit of intraplate earthquakes, consistent with a plate-cooling model (e.g., McKenzie et al. 2005). These mechanical models of oceanic lithosphere allow elastic stresses to be transmitted over large distances and enable the plate to move as a rigid cap above the convecting mantle, bounded by faults at subduction trenches. However, recent detailed modeling of plate flexure beneath the Hawaiian Ridges has shown a large deficit in lithospheric strength using conventional power-law creep (Pleus et al. 2020), and has suggested that the flexural rigidity is controlled mainly by low-temperature plasticity (Peierls creep), as was calculated in the present study. Although the dry model can mostly account for the observed elastic thickness in the oceanic lithosphere, the thin elastic layer found in several deep-sea trenches cannot be explained by this model, possibly because of the weakening due to sea-water penetration into the lithosphere. Seismic reflection and refraction surveys have recently shown low-velocity anomalies even in the oceanic mantle, possibly due to water infiltration along the outer-rise faults (e.g., Fujie et al. 2013). As serpentinite, a product of mantle hydration, is known to be significantly weaker than anhydrous minerals (e.g., Hilairet et al. 
2007; Reynard 2013; Hirauchi and Katayama 2015), the plate hydration in these regions is likely responsible for decreasing the thickness of the effective elastic layer. Various missions to Mars have shown a hemispheric dichotomy in topographic and tectonic features, with the presence of a wide contrast between the high-standing southern hemisphere and the low-lying northern hemisphere (e.g., Zuber 2001). The hemispheric dichotomy is considered to reflect differing crust and mantle structure, whereby crustal thickness in the southern hemisphere is considerably greater than that in the northern hemisphere, except beneath large impact craters (Neumann et al. 2004). Elastic thickness is also correlated with these topographic variations, in which the southern hemisphere exhibits a relatively thin elastic layer (e.g., McGovern et al. 2004). As plastic strength is dependent on lithology, a thick crust can result in weak lithospheric strength owing to the weaker plastic strength of plagioclase compared with olivine, which partly explains the difference in the observed elastic thickness. An extremely thick elastic layer has been reported for the northern polar cap (up to 300 km), which may be due to a subchondritic composition in radioactive heat sources or the presence of mantle upwelling in the other regions of Mars (Phillips et al. 2008; Grott et al. 2013). In addition to these compositional variations, the elastic thickness is highly sensitive to internal thermal structures and hence to the time of loading to create the gravitational/topographic anomalies. A low temperature gradient leads to a stiff lithosphere, resulting in a thick elastic layer (Fig. 8), so that the temporal change due to the lithosphere cooling is another important source of variation in elastic thickness (e.g., Solomon and Head 1990; Grott and Breuer 2008; Ruiz et al. 2011). The presence of water and clay minerals can also lead to variation in the elastic thickness, as indicated in our models. The extremely thin elastic thickness of < 10 km observed in the Noachian terranes, such as Noachis Terra and Terra Cimmeria, can be attributed to a significant amount of water in these relatively old regions, whereby volatile elements might have been incorporated during accretion (e.g., Dreibus and Wänke 1987) or transported by plate subduction during the early evolution of Mars (e.g., Sleep 1994). However, lithospheric stress can change with time owing to viscous relaxation, and caution must be exercised when interpreting these deformational features in old terranes (e.g., Grott et al. 2013). The lateral spacing of wrinkle ridges suggests a rigid lithosphere in the northern hemisphere, where the transition from brittle to ductile deformation occurs at a greater depth than in the southern highlands (Montesi and Zuber 2003). The northern plains are characterized by a relatively thick elastic layer, so the positive correlation between the elastic thickness and the depth of the brittle–ductile transition is consistent with our models shown in Fig. 10. Although Montesi and Zuber (2003) suggested that the difference in the depth of the brittle–ductile transition is associated with the difference of crustal thickness, the appearance of water in the lithosphere can also contribute to the variation of the brittle–ductile transition. One of the main objectives of the on-going InSight mission is to detect Marsquakes and their depth distributions (e.g., Giardini et al. 
2020), results of which may help identify the local presence/absence of water in the Martian lithosphere. Strength profiles were calculated for the terrestrial planets using a recently updated flow law and considering the effect of water. Using these models, it was possible to constrain the lithospheric strengths of the different planets and to calculate elastic thickness and the depth of the brittle–ductile transition. Although these models present the maximum strength of the rocks, assuming a simple mineralogical stratification and deforming at a constant strain rate, they are useful for explaining the large-scale deformation features captured by surface topographic and gravity data for the terrestrial planets. I suggest that these features are highly sensitive to the thermal gradient as well as the presence of water in the lithosphere. Temporal changes in elastic thickness can be explained by secular cooling of planets; however, the extremely thin elastic layer in early Mars cannot be explained by temperature alone and might have been promoted by the presence of water possibly with clay minerals. The relatively shallow brittle–ductile transition within Mercury, as inferred from lobate scarp structures, might also be associated with the local presence of water. Recent orbiting satellite and rover missions have provided data showing various structures and spatiotemporal heterogeneity in deformational features of Mars. Given the sensitivity of the obtained strength models to temperature and water, these data should help to provide a more detailed understanding of the internal structure and evolution of these terrestrial planets. Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. Please contact the corresponding author for data requests. Azuma S, Katayama I (2017) Evolution of the rheological structure of Mars. Earth Planets Space 69:8. https://doi.org/10.1186/s40623-016-0593-z Azuma S, Katayama I, Nakakuki T (2014) Rheological decoupling at the Moho and implication to Venusian tectonics. Sci Rep 4:4403. https://doi.org/10.1038/srep04403 Barnett DN, Nimmo F, McKenzie D (2000) Elastic thickness estimates for Venus using line of sight accelerations from magellan cycle 5. Icarus 146:404–419. https://doi.org/10.1006/icar.2000.6352 Behnsen J, Faulkner DR (2012) The effect of mineralogy and effective normal stress on frictional strength of sheet silicates. J Struct Geol 42:49–61. https://doi.org/10.1016/j.jsg.2012.06.015 Bodine J, Steckler H, Watt MS (1981) Observations of flexure and the rheology of the oceanic lithosphere. J Geophys Res 86:3695–3707. https://doi.org/10.1029/JB086iB05p03695 Brown CD, Grimm RE (1997) Tessera deformation and contemporaneous thermal state of the plateau highlands, Venus. Earth Planet Sci Lett 147:1–10. https://doi.org/10.1016/S0012-821X(97)00007-1 Bürgmann R, Dresen G (2008) Rheology of the Lower Crust and Upper Mantle: Evidence from Rock Mechanics, Geodesy, and Field Observations. Annu Rev Earth Planet Sci 36:531–677. https://doi.org/10.1146/annurev.earth.36.031207.124326 Burov EB, Diament M (1995) The effective elastic thickness (Te) of continental lithosphere: What does it really mean? J Geophys Res 100:3905–3927. https://doi.org/10.1029/94JB02770 Byerlee J (1978) Friction of rocks. Pure Apply Geophys 116:615–626. https://doi.org/10.1007/BF00876528 Campbell D, Head J, Harmon J, Hine A (1984) Venus: Volcanism and rift formation in Beta Regio. Science 226:167–170. 
https://doi.org/10.1126/science.226.4671.167 Chernak LJ, Hirth G (2010) Deformation of antigorite serpentinite at high temperature and pressure. Earth Planet Sci Lett 296:23–33. https://doi.org/10.1016/j.epsl.2010.04.035 Dreibus G, Wänke H (1987) Volatiles on Earth and Mars–A comparison. Icarus 71:225–240. https://doi.org/10.1016/0019-1035(87)90148-5 Ehlmann BL, Mustard JF, Murchie SL, Bibring JP, Meunier A, Fraeman AA, Langevin Y (2011) Subsurface water and clay mineral formation during the early history of Mars. Nature 479:53–60. https://doi.org/10.1038/nature10582 Frost HJ, Ashby MF (1982) Deformation-mechanism maps: The plasticity and creep of metals and ceramics, Oxford: Pergamon Press, p 166. Fujie G, Kodaira S, Yamashita M, Sato T, Takahashi T, Takahashi N (2013) Systematic changes in the incoming plate structure at the Kuril trench. Geophys Res Lett 40:88–93. https://doi.org/10.1029/2012GL054340 Giardini D et al (2020) The seismicity of Mars. Nature Geoscience 13:205–212. https://doi.org/10.1038/s41561-020-0539-8 Goetze C, Evans B (1979) Stress and temperature in the bending lithosphere as constrained by experimental rock mechanics. Geophys J R 59:463–478. https://doi.org/10.1111/j.1365-246X.1979.tb02567.x Gregory K, Vidic R, Dzombak D (2011) Water management challenges associated with the production of shale gas by hydraulic fracturing. Elements 7:181–186. https://doi.org/10.2113/gselements.7.3.181 Grinspoon DH (1993) Implications of the high deuterium-to-hydrogen ratio for the sources of water in Venus' atmosphere. Nature 363:428–431. https://doi.org/10.1038/363428a0 Grott M, Breuer D (2008) The evolution of the Martian elastic lithosphere and implications for crustal and mantle rheology. Icarus 193:503–515. https://doi.org/10.1016/j.icarus.2007.08.015 Grott M et al (2013) Long-term evolution of the Martian crust-mantle system. Space Sci Rev 174:49–111. https://doi.org/10.1007/s11214-012-9948-3 Gueguen Y, Palciauskas V (1994) Introduction to the Physics of Rocks. Princeton University Press, Princeton, p 296 Hansen LN, Zimmerman ME, Kohlstedt DL (2011) Grain-boundary sliding in San Carlos olivine: Flow-law parameters and crystallographic-preferred orientation. J Geophys Res 116:B08201. https://doi.org/10.1029/2011JB008220 Hilairet N, Reynard B, Wang Y, Daniel I, Merkel S, Nishiyama N, Petitgirard S (2007) High-pressure creep of serpentine, interseismic deformation, and initiation of subduction. Science 318:1910–1913. https://doi.org/10.1126/science.1148494 Hirauchi K, Katayama I (2015) Rheological properties of serpentinite and their tectonic significance. J Geography 124:371–396. (in Japanese with English abstract). https://doi.org/10.5026/jgeography.124.371 Hirth G, Kohlstedt DL (1996) Water in the oceanic upper mantle: Implications for rheology, melt extraction and the evolution of the lithosphere. Earth Planet Sci Lett 144:93–108. https://doi.org/10.1016/0012-821X(96)00154-9 Hirth G, Kohlstedt DL (2003) Rheology of the Upper Mantle and the Mantle Wedge: A View from the Experimentalists. Geophys Monogr 138:83–105. https://doi.org/10.1029/138GM06 Johnson CL, Sandwell DT (1994) Lithospheric flexure on Venus. Geophys J Inter 119:627–647. https://doi.org/10.1111/j.1365-246X.1994.tb00146.x Karato S (2008) Deformation of Earth Materials: An introduction to the rheology of solid Earth., 463 p, Cambridge: Cambridge University Press. Karato S, Barbot S (2018) Dynamics of fault motion and the origin of contrasting tectonic style between Earth and Venus. Sci Rep 8. 
Karato S, Jung H (2003) Effects of pressure on high-temperature dislocation creep in olivine. Phil Mag A83:401–414. https://doi.org/10.1080/0141861021000025829
Katayama I, Azuma S (2017) Effect of water on rock deformation and rheological structures of continental and oceanic plates. J Geol Soc Japan 123:365–377 (in Japanese with English abstract). https://doi.org/10.5575/geosoc.2017.0019
Katayama I, Karato S (2008) Low-temperature, high-stress deformation of olivine under water-saturated condition. Phys Earth Planet Inter 168:125–133. https://doi.org/10.1016/j.pepi.2008.05.019
Katayama I, Kubo T, Sakuma H, Kawai K (2015) Can clay minerals account for the behavior of non-asperity on the subducting plate interface? Prog Earth Planet Sci 2. https://doi.org/10.1186/s40645-015-0063-4
Katayama I, Matsuoka Y, Azuma S (2019) Sensitivity of elastic thickness to water in the Martian lithosphere. Prog Earth Planet Sci 6. https://doi.org/10.1186/s40645-019-0298-6
Katayama I, Terada T, Okazaki K, Tanikawa W (2012) Episodic tremor and slow slip potentially linked to permeability contrasts at the Moho. Nature Geo 5:731–734. https://doi.org/10.1038/ngeo1559
Kirby SH, Kronenberg AK (1984) Deformation of clinopyroxenite: evidence for a transition in flow mechanisms and semibrittle behavior. J Geophys Res 89:3177–3192. https://doi.org/10.1029/JB089iB05p03177
Kohlstedt D, Evans B, Mackwell S (1995) Strength of the lithosphere: Constraints imposed by laboratory experiments. J Geophys Res 100:17587–17602. https://doi.org/10.1029/95JB01460
Kohlstedt DL, Keppler H, Rubie DC (1996) Solubility of water in the α, β and γ phases of (Mg,Fe)2SiO4. Contrib Mineral Petrol 123:345–357. https://doi.org/10.1007/s004100050161
Mackwell S, Zimmerman M, Kohlstedt D (1998) High-temperature deformation of dry diabase with application to tectonics on Venus. J Geophys Res 103:975–984. https://doi.org/10.1029/97JB02671
McGovern PJ, Solomon SC, Smith DE, Zuber MT, Simons M, Wieczorek MA, Phillips RJ, Neumann GA, Aharonson O, Head JW (2004) Correction to "Localized gravity/topography admittance and correlation spectra on Mars: Implications for regional and global evolution". J Geophys Res 107:5418. https://doi.org/10.1029/2004JE002286
McKenzie D, Jackson J, Priestley K (2005) Thermal structure of oceanic and continental lithosphere. Earth Planet Sci Lett 233:337–349. https://doi.org/10.1016/j.epsl.2005.02.005
McNutt MK (1984) Lithospheric Flexure and Thermal Anomalies. J Geophys Res 89:11180–11194. https://doi.org/10.1029/JB089iB13p11180
Melosh HJ, McKinnon WB (1988) The tectonics of Mercury. In: Vilas F, Chapman CR, Matthews MS (eds) Mercury. University of Arizona Press, Tucson, pp 374–400
Montesi LG, Zuber MT (2003) Clues to the lithospheric structure of Mars from wrinkle ridge sets and localization instability. J Geophys Res 108:5048. https://doi.org/10.1029/2002JE001974
Moore DE, Lockner DA (2007) Friction of the smectite clay montmorillonite. In: Dixon T, Moore C (eds) The Seismogenic Zone of Subduction Thrust Faults. Columbia Univ. Press, New York, pp 317–345
Neumann GA et al (2004) Crustal structure of Mars from gravity and topography. J Geophys Res 109:E08002. https://doi.org/10.1029/2004JE002262
Nimmo F, McKenzie D (1998) Volcanism and tectonics on Venus. Annu Rev Earth Planet Sci 26:23–51. https://doi.org/10.1146/annurev.earth.26.1.23
Nimmo F, Watters TR (2004) Depth of faulting on Mercury: implications for heat flux and crustal and effective elastic thickness. Geophys Res Lett 31:L02701. https://doi.org/10.1029/2003GL018847
Obana K, Fujie G, Takahashi T, Yamamoto Y, Tonegawa T, Miura S, Kodaira S (2019) Seismic velocity structure and its implications for oceanic mantle hydration in the trench–outer rise of the Japan Trench. Geophys J Int:1629–1642. https://doi.org/10.1093/gji/ggz099
Paterson MS (1989) The interaction of water with quartz and its influence in dislocation flow–an overview. In: Karato S, Toriumi M (eds) Rheology of Solids and of the Earth. Oxford Univ. Press, New York, pp 107–142
Paterson MS, Wong T (2004) Experimental Rock Deformation, The Brittle Field. Springer-Verlag, New York, p 347
Petitjean S, Rabinowicz M, Grégoire M, Chevrot S (2006) Differences between Archean and Proterozoic lithospheres: Assessment of the possible major role of thermal conductivity. Geochem Geophys Geosyst 7:Q03021. https://doi.org/10.1029/2005GC001053
Phillips RJ, Hansen VL (1998) Geological evolution of Venus: Rises, plains, plumes, and plateaus. Science 279:1492–1497. https://doi.org/10.1126/science.279.5356.1492
Phillips RJ et al (2008) Mars north polar deposits: Stratigraphy, age, and geodynamical response. Science 320:1182–1185. https://doi.org/10.1126/science.1157546
Pleus A, Ito G, Wessel P, Frazer LN (2020) Rheology and thermal structure of the lithosphere beneath the Hawaiian Ridge inferred from gravity data and models of plate flexure. Geophys J Inter 222:207–224. https://doi.org/10.1093/gji/ggaa155
Ranalli G (1992) Rheology of the Earth. Chapman and Hall, London, p 409
Reynard B (2013) Serpentine in active subduction zones. Lithos 178:171–185. https://doi.org/10.1016/j.lithos.2012.10.012
Ruiz J, McGovern PJ, Jiménez-Díaz A, López V, Williams JP, Hahn BC, Tejero R (2011) The thermal evolution of Mars as constrained by paleoheat flows. Icarus 215:508–517. https://doi.org/10.1016/j.icarus.2011.07.029
Rybacki E, Dresen G (2000) Dislocation and diffusion creep of synthetic anorthite aggregates. J Geophys Res 105:26017–26036. https://doi.org/10.1029/2000JB900223
Sakuma H, Kawai K, Katayama I, Suehara S (2018) What is the origin of macroscopic friction? Science Advances 4. https://doi.org/10.1126/sciadv.aav2268
Shelton GL (1981) Experimental deformation of single phase and polyphase crustal rocks at high pressures and temperatures. PhD thesis, Brown University, Providence
Sibson RH (1992) Implications of fault-valve behavior for rupture nucleation and recurrence. Tectonophys 211:283–293. https://doi.org/10.1016/0040-1951(92)90065-E
Sleep NH (1994) Martian plate tectonics. J Geophys Res 99:5639–5655. https://doi.org/10.1029/94JE00216
Solomon SC, Head JW (1990) Heterogeneities in the thickness of the elastic lithosphere of Mars: Constraints on heat flow and internal dynamics. J Geophys Res 95:11073–11083. https://doi.org/10.1029/JB095iB07p11073
Tetsuka H, Katayama I, Sakuma H, Tamura K (2018) Effects of humidity and interlayer cations on the frictional strength of montmorillonite. Earth Planet Space 70:56. https://doi.org/10.1186/s40623-018-0829-1
Tsenn MC, Carter NL (1987) Upper limits of power law creep in rocks. Tectonophys 136:1–26. https://doi.org/10.1016/0040-1951(87)90332-5
Watters T, Schultz R (2009) Planetary Tectonics. Cambridge University Press. https://doi.org/10.1017/CBO9780511691645
Watters TR, Robinson MS, Cook AC (1998) Topography of lobate scarps on Mercury: new constraints on the planet's contraction. Geology 26:991–994. https://doi.org/10.1130/0091-7613
Watters TR, Schultz RA, Robinson MS, Cook AC (2002) The mechanical and thermal structure of Mercury's early lithosphere. Geophys Res Lett 29:1542. https://doi.org/10.1029/2001GL014308
Watts AB, Bodine JH, Steckler MS (1980) Observations of flexure and the state of stress in the oceanic lithosphere. J Geophys Res 85:6369–6376. https://doi.org/10.1029/JB085iB11p06369
Watts AB, Burov EB (2003) Lithospheric strength and its relationship to the elastic and seismogenic layer thickness. Earth Planet Sci Lett 213:113–131. https://doi.org/10.1016/S0012-821X(03)00289-9
Yamazaki D, Karato S (2002) Fabric development in (Mg,Fe)O during large strain, shear deformation: Implications for seismic anisotropy in Earth's lower mantle. Phys Earth Planet Int 131:251–267. https://doi.org/10.1016/S0031-9201(02)00037-7
Zuber M, Parmentier E (1995) Formation of fold-and-thrust belts on Venus by thick-skinned deformation. Nature 377:704–707. https://doi.org/10.1038/377704a0
Zuber MT (2001) The crust and mantle of Mars. Nature 412:220–227. https://doi.org/10.1038/35084163

I thank Shintaro Azuma, Keishi Okazaki, Ken-ichi Hirauchi, and Yhuki Matsuoka for fruitful discussions. Comments from the Editor (Shun-ichiro Karato) and two anonymous reviewers greatly improved the paper. I also thank the Nishida prize committee at JpGU for their encouragement to prepare this review article. This work was supported by JSPS KAKENHI Grant Numbers 18H03733 and 20H00200.

Department of Earth and Planetary Systems Science, Hiroshima University, Higashi-Hiroshima, 739-8526, Japan: Ikuo Katayama. I.K. conducted the calculations and wrote the manuscript. The author read and approved the final manuscript. Correspondence to Ikuo Katayama. The author declares no competing interests.

Katayama, I. Strength models of the terrestrial planets and implications for their lithospheric structure and evolution. Prog Earth Planet Sci 8, 1 (2021). https://doi.org/10.1186/s40645-020-00388-2

Keywords: Strength profile; Rock rheology; Terrestrial planet; Thermal gradient. Subject area: Solid earth sciences
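The strength-profile construction summarised in the conclusions combines a frictional (Byerlee-type) strength that grows with depth in the brittle regime with a temperature-dependent power-law creep strength in the ductile regime; the lithospheric strength at any depth is the weaker of the two, and the brittle–ductile transition lies where the curves cross. The short Python sketch below only illustrates the structure of such a calculation: the gravity, density, thermal gradient, friction coefficient and the single dry-olivine-like flow-law parameters are generic assumptions for illustration, not the values or flow laws used in this article, and the further step of converting a strength envelope into an effective elastic thickness (via the bending moment of a flexed plate) is not included.

# Minimal sketch of a yield-strength-envelope calculation (illustrative assumptions only).
import numpy as np

g = 3.7                 # gravity, m/s^2 (Mars-like value, assumed)
rho = 3300.0            # bulk density, kg/m^3 (assumed)
dT_dz = 15e-3           # thermal gradient, K/m (assumed 15 K/km)
T_surface = 220.0       # surface temperature, K (assumed)
strain_rate = 1e-15     # constant strain rate, 1/s (assumed)
R = 8.314               # gas constant, J/mol/K

def brittle_strength(z):
    # Byerlee-type frictional strength under lithostatic load (dry, friction ~0.85)
    return 0.85 * rho * g * z        # Pa

# Generic dry-olivine-like power-law creep parameters (assumed, not from the paper)
A = 1.1e5                # MPa^-n s^-1
n = 3.5
Q = 530e3                # activation energy, J/mol

def ductile_strength(z):
    # Stress needed to sustain the imposed strain rate by power-law creep
    T = T_surface + dT_dz * z
    stress_mpa = (strain_rate / A) ** (1.0 / n) * np.exp(Q / (n * R * T))
    return stress_mpa * 1e6          # Pa

depth = np.linspace(1.0, 200e3, 2000)
envelope = np.minimum(brittle_strength(depth), ductile_strength(depth))

# Brittle-ductile transition: shallowest depth where creep becomes weaker than friction
bdt = depth[np.argmax(ductile_strength(depth) < brittle_strength(depth))]
print(f"Brittle-ductile transition at roughly {bdt/1e3:.1f} km depth")

# Example proxy for the base of the strong layer: where the envelope drops below 50 MPa
weak = depth[(depth > bdt) & (envelope < 50e6)]
if weak.size:
    print(f"Envelope falls below 50 MPa at roughly {weak[0]/1e3:.1f} km depth")

With a hotter thermal gradient or a weaker (wet) flow law, the creep curve shifts to shallower depths, so the brittle–ductile transition and the strong layer both thin; this is the sense in which the strength models are sensitive to temperature and water.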
Number of items: 4170. Hyman, Gavin and Fallon, Francis, eds. (2020) Agnosticism:Explorations in Philosophy and Religious Thought. Oxford University Press, Oxford. ISBN 9780198859123 Barber, Sarah and Peniston-Bird, Corinna, eds. (2020) Approaching Historical Sources in their Contexts:Space, Time and Performance. Routledge Guides to using Historical Sources . Routledge, London. ISBN 9780815364818 Benson, Katie and King, Colin and Walker, Clive, eds. (2020) Assets, Crimes and the State:Innovation in 21st Century Responses. Routledge. ISBN 9780367025922 Mills, Thomas and Miller, Rory, eds. (2020) Britain and the Growth of US Hegemony in Twentieth-Century Latin America:Competition, Cooperation and Coexistence. Britain and the World . Palgrave Macmillan. ISBN 9783030483203 UNSPECIFIED (2020) Debating the status of 'theory' in technology enhanced learning research. Studies in Technology Enhanced Learning, 1 (1). pp. 1-281. Jiang, Richard and Li, Chang-Tsun and Crookes, Danny and Meng, Weizhi and Rosenberger, Christopher, eds. (2020) Deep Biometrics. Springer, Cham. ISBN 9783030325831 Giannakopoulou, Georgia and Gilloch, Graeme, eds. (2020) The Detective of Modernity:Essays on the Work of David Frisby. Routledge, London. ISBN 9780367192563 Brinda, Torsten and Passey, Don and Keane, Therese, eds. (2020) Empowering Teaching for Digital Equity and Agency:IFIP TC 3 Open Conference on Computers in Education, OCCE 2020, Mumbai, India, January 6–8, 2020, Proceedings. IFIP Advances in Information and Communication Technology . Springer, Cham, Switzerland. ISBN 9783030598464 Rak, Jacek and Hutchison, David, eds. (2020) Guide to Disaster-Resilient Communication Networks. Computer Communications and Networks . Springer, Cham. ISBN 9783030446840 Walshe, Catherine and Brearley, Sarah, eds. (2020) Handbook of Theory and Methods in Applied Health Research:Questions, Methods and Choices. Edward Elgar, Cheltenham. ISBN 9781785363207 Gutsche Jr, Robert and Brennen, Bonnie, eds. (2020) Journalism research in practice:Perspectives on change, challenges, and solutions. Journalism Studies . Routledge, London. ISBN 9780367469665 UNSPECIFIED (2020) Literary Back-Translations. Translation and Literature, 29 (3). ISSN 0968-1361 McArthur, Jan and Ashwin, Paul, eds. (2020) Locating Social Justice in Higher Education Research. Bloomsbury, London. ISBN 9781350086753 Lambert, Michael and Begley, Philip and Sheard, Sally, eds. (2020) Mersey Regional Health Authority, 1974-1994. University of Liverpool Department of Public Health and Policy, Liverpool. ISBN 9781999920944 Francis, Matthew D. and Knott, Kim, eds. (2020) Minority Religions and Uncertainty. Routledge Inform Series on Minority Religions and Spiritual Movements . Routledge, London. ISBN 9781472484512 UNSPECIFIED (2020) A Misload Analysis for PLUS7 Fuel Assemblies in the 32 Burnup Credit Cask. In: UNSPECIFIED. Gutsche Jr, Robert and Hess, Kristy, eds. (2020) Reimagining journalism and social order in a fragmented media world. Routledge, London. ISBN 9780367366056 Ashe, Stephen and Busher, Joel and Macklin, Graham and Winter, Aaron, eds. (2020) Researching the Far Right:Theory, Method and Practice. Routledge Studies in Fascism and the Far Right . Routledge, London. ISBN 9781138219342 Dunn, Nick and Edensor, Tim, eds. (2020) Rethinking Darkness:Cultures, Histories, Practices. Ambiances, Atmospheres and Sensory Experiences of Spaces . Routledge, London. ISBN 9780367201159 Carruthers, Jo and Dakkak, Nour, eds. (2020) Sandscapes:Writing the British Seaside. 
Palgrave Macmillan, New York. ISBN 9783030447793 Rheindorf, Markus and Wodak, Ruth, eds. (2020) Sociolinguistic Perspectives on Migration Control:Language Policy, Identity and Belonging. Language, Mobility and Institutions . Multilingual Matters. ISBN 9781788924672 UNSPECIFIED (2020) Special Issue on "2020 Review Issue". Family Business Review, 33 (1). ISSN 0894-4865 Braun, Rebecca and Schofield, Benedict, eds. (2020) Transnational German Studies. Transnational Modern Languages . Liverpool University Press, Liverpool. ISBN 9781789621426 Benson, K. and King, C. and Walker, C., eds. (2020) Assets, crimes and the state:Innovation in 21st Century Legal Responses. Routledge, London. ISBN 9780429398834 UNSPECIFIED (2020) Cognitive Neuroscience of Cross-Language Interaction in Bilinguals. Brain Sciences. ISSN 2076-3425 Sloan, David and Batista, Rafael and Hicks, Michael and Davies, Roger, eds. (2020) Fine-Tuning in the Physical Universe. Cambridge University Press, Cambridge. ISBN 9781108484541 Kewley, Stephanie and Barlow, Charlotte, eds. (2020) Preventing Sexual Violence:Problems and Possibilities. Bristol Policy Press, Bristol. ISBN 9781529203769 Boes, Tobias and Braun, Rebecca and Spiers, Emily, eds. (2020) World Authorship. Oxford Twenty-First Century Approaches . Oxford University Press, Oxford. ISBN 9780198819653 , ATLAS Collaboration (2020) Observation and Measurement of Forward Proton Scattering in Association with Lepton Pairs Produced via the Photon Fusion Mechanism at ATLAS. Physical review letters, 125 (26). ISSN 1079-7114 , AWAKE Collaboration (2020) Experimental study of wakefields driven by a self-modulating proton bunch in plasma. Physical Review Accelerators and Beams, 23 (8). ISSN 2469-9888 , AWAKE Collaboration (2020) Proton Bunch Self-Modulation in Plasma with Density Gradient. Physical review letters, 125 (26). ISSN 1079-7114 , Benjamin Stokell and , Rajen Shah and Grose, Daniel (2020) CatReg: Solution Paths for Linear and Logistic Regression Models with SCOPE Penalty. UNSPECIFIED. , CMMID COVID-19 Working Group (2020) The potential impact of COVID-19-related disruption on tuberculosis burden. European Respiratory Journal, 56 (2). ISSN 0903-1936 , CROMIS-2 collaborators (2020) Longer term stroke risk in intracerebral haemorrhage survivors. Journal of Neurology, Neurosurgery and Psychiatry, 91 (8). pp. 840-845. ISSN 0022-3050 , D0 Collaboration (2020) Studies of Xð3872Þ and ψð2SÞ production in pp¯ collisions at 1.96 TeV. Physical Review D, 102 (7). ISSN 1550-7998 , DES Collaboration (2020) Stellar mass as a galaxy cluster mass proxy:application to the Dark Energy Survey redMaPPer clusters. Monthly Notices of the Royal Astronomical Society, 493 (4). pp. 4591-4606. ISSN 0035-8711 , DUNE Collaboration and Blake, A. and Brailsford, D. and Cross, R. and Nowak, J. and Ratoff, P. (2020) Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume 1 Introduction to DUNE. Journal of Instrumentation, 15. ISSN 1748-0221 , DUNE Collaboration and Blake, A. and Brailsford, D. and Cross, R. and Nowak, J. and Ratoff, P. (2020) Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume III DUNE Far Detector Technical Coordination. Journal of Instrumentation, 15. ISSN 1748-0221 , DUNE Collaboration and Blake, A. and Brailsford, D. and Cross, R. and Nowak, J. and Ratoff, P. (2020) Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume IV Far Detector Single-phase Technology. 
Journal of Instrumentation, 15. ISSN 1748-0221 , DUNE Collaboration and Blake, A. and Brailsford, D. and Cross, R. and Nowak, J. A. and Ratoff, P. (2020) First results on ProtoDUNE-SP liquid argon time projection chamber performance from a beam test at the CERN Neutrino Platform. Journal of Instrumentation. ISSN 1748-0221 , DUNE Collaboration and Blake, A. and Brailsford, D. and Cross, R. and Nowak, J. A. and Ratoff, P. (2020) Neutrino interaction classification with a convolutional neural network in the DUNE far detector. Physical Review D, 102. ISSN 1550-7998 , DUNE Collaboration and Blake, A. and Brailsford, D. and Dealtry, T. and Nowak, J. (2020) Design and performance of a 35-ton liquid argon time projection chamber as a prototype for future very large detectors. Journal of Instrumentation, 15. ISSN 1748-0221 , DUNE Collaboration and Brailsford, D. and Cross, R. and Nowak, J. A. and Ratoff, P. (2020) Long-baseline neutrino oscillation physics potential of the DUNE experiment. European Physical Journal C: Particles and Fields, 80. ISSN 1434-6044 , ENCEPH UK study group (2020) Neuropsychological and psychiatric outcomes in encephalitis:A multi-centre case-control study. PLoS ONE, 15 (3). ISSN 1932-6203 , Fermi Gamma-Ray Burst Monitor, the LIGO Scientific Collaboration (2020) A Joint Fermi-GBM and LIGO/Virgo Analysis of Compact Binary Mergers from the First and Second Gravitational-wave Observing Runs. The Astrophysical Journal, 893 (2). p. 100. ISSN 0004-637X , GBD 2019 Universal Health Coverage Collaborators (2020) Measuring universal health coverage based on an index of effective coverage of health services in 204 countries and territories, 1990–2019:a systematic analysis for the Global Burden of Disease Study. The Lancet, 396. pp. 1250-1284. ISSN 0140-6736 , GROUP Investigators and , Schizophrenia Working Group of the Psychiatric Genomics Consorti (2020) The Relationship Between Polygenic Risk Scores and Cognition in Schizophrenia. Schizophrenia Bulletin, 46 (2). pp. 336-344. ISSN 0586-7614 , HPTPC Collaboration and Nowak, Jaroslaw and Brailsford, Dominic (2020) Off-axis characterisation of the CERN T10 beam for low momentum proton measurements with a High Pressure Gas Time Projection Chamber. Journal of Instrumentation, 4 (3). ISSN 1748-0221 , Innovation Forum Northern Ireland (2020) Advancing and enhancing parental engagement with schools through digital applications:Leadership guidance: Purpose, planning and managing, controlling expectations and evaluating. CCEA, Belfast. , Javier Pereda and Radburn, Nicholas and South, Lois and Monaghan, Christian (2020) An interactive installation of African music and the Trans-Atlantic slave trade. In: Proceedings of EVA London 2020 (EVA 2020). BCS, GBR, pp. 106-111. ISBN 1477-9358 , LIGO Scientific Collaboration and Virgo Collaboration (2020) GW190412:Observation of a binary-black-hole coalescence with asymmetric masses. Physical Review D, 102. ISSN 1550-7998 , LIGO Scientific Collaboration and Virgo Collaboration (2020) GW190521: A Binary Black Hole Merger with a Total Mass of 150 M⊙. Physical review letters, 125 (10). ISSN 1079-7114 , LIGO Scientific Collaboration and Virgo Collaboration (2020) GW190814: Gravitational Waves from the Coalescence of a 23 Solar Mass Black Hole with a 2.6 Solar Mass Compact Object. Astrophysical Journal Letters, 896. L44. ISSN 2041-8205 , LIGO Scientific Collaboration and Virgo Collaboration (2020) Gravitational-wave Constraints on the Equatorial Ellipticity of Millisecond Pulsars. 
Astrophysical Journal Letters, 902 (1). ISSN 2041-8205 , LIGO Scientific Collaboration and Virgo Collaboration (2020) Model comparison from LIGO─Virgo data on GW170817's binary components and consequences for the merger remnant. Classical and Quantum Gravity, 37 (4). ISSN 0264-9381 , LIGO Scientific Collaboration and Virgo Collaboration (2020) Optically targeted search for gravitational waves emitted by core-collapse supernovae during the first and second observing runs of advanced LIGO and advanced Virgo. Physical Review D, 101 (8). 084002. ISSN 1550-7998 , LIGO Scientific Collaboration and Virgo Collaboration (2020) Properties and Astrophysical Implications of the 150 M⊙ Binary Black Hole Merger GW190521. Astrophysical Journal Letters, 900 (1). L13. ISSN 2041-8205 , LIGO Scientific Collaboration and Virgo Collaboration (2020) A guide to LIGO–Virgo detector noise and extraction of transient gravitational-wave signals. Classical and Quantum Gravity, 37 (5). ISSN 0264-9381 , MAPS Group (2020) Antipsychotic medication versus psychological intervention versus a combination of both in adolescents with first-episode psychosis (MAPS):a multicentre, three-arm, randomised controlled pilot and feasibility study. The Lancet Psychiatry, 7 (9). pp. 788-800. ISSN 2215-0366 , MINOS Collaboration (2020) Precision Constraints for Three-Flavor Neutrino Oscillations from the Full MINOS+ and MINOS Dataset. Physical review letters, 125 (13). ISSN 1079-7114 , Michaela Rogers and Cooper, Jennifer (2020) Systems Theory and an Ecological Approach. In: Developing Skills for Social Work Practice. UNSPECIFIED, p. 251. ISBN 9781526463258 , MicroBooNE Collaboration and Blake, A. and Devitt, D. and Lister, A. and Nowak, J. (2020) Calibration of the charge and energy response of the MicroBooNE liquid argon time projection chamber using muons and protons. Journal of Instrumentation, 15. ISSN 1748-0221 , MicroBooNE Collaboration and Blake, A. and Devitt, D. and Lister, A. and Nowak, J. (2020) A Method to Determine the Electric Field of Liquid Argon Time Projection Chambers Using a UV Laser System and its Application in MicroBooNE. Journal of Instrumentation, 15 (7). ISSN 1748-0221 , MicroBooNE Collaboration and Blake, A. and Devitt, D. and Lister, A. and Nowak, J. (2020) Reconstruction and measurement of (100) MeV energy electromagnetic activity from π 0 arrow γγ decays in the MicroBooNE LArTPC. Journal of Instrumentation, 15 (2). ISSN 1748-0221 , MicroBooNE Collaboration and Blake, A. and Devitt, D. and Lister, A. and Nowak, J. and Thorpe, C. (2020) Measurement of differential cross sections for νμ -Ar charged-current interactions with protons and no pions in the final state with the MicroBooNE detector. Physical Review D, 102 (11). ISSN 1550-7998 , MicroBooNE Collaboration and Blake, A. and Devitt, D. and Nowak, J. and Thorpe, C. (2020) Measurement of Space Charge Effects in the MicroBooNE LArTPC Using Cosmic Muons. Journal of Instrumentation, 15. ISSN 1748-0221 , MicroBooNE Collaboration and Nowak, Jaroslaw and Blake, Andrew and Devitt, Daniel and Thorpe, Chris (2020) First Measurement of Differential Charged Current Quasielastic–like νμ–Argon Scattering Cross Sections with the MicroBooNE Detector. Physical review letters, 125 (20). ISSN 1079-7114 , MicroBooNE collaboration and Blake, A. and Devitt, D. and Lister, A. and Nowak, J. (2020) Search for heavy neutral leptons decaying into muon-pion pairs in the MicroBooNE detector. Physical Review D, 101 (5). 
ISSN 1550-7998 , NA62 Collaboration (2020) An investigation of the very rare K+→ π+νν¯ decay. Journal of High Energy Physics, 2020 (11). ISSN 1029-8479 , NA62 Collaboration and Carmignani, Joe and Jones, Roger William Lewis and Ruggiero, Giuseppe and Dainton, John (2020) Search for heavy neutral lepton production in K+ decays to positrons. Physics Letters B, 807. ISSN 0370-2693 , NA62 Collaboration and Carmignani, Joe and Jones, Roger William Lewis and Ruggiero, Giuseppe and Dainton, John (2020) An investigation of the very rare K+ to π+ νν- decay. Journal of High Energy Physics, 2020. ISSN 1029-8479 , Nanobeam Collaboration (2020) A test facility for the international linear collider at SLAC end station a for prototypes of beam delivery and IR components:36th ICFA Advanced Beam Dynamics Workshop on Nano Scale Beams, NANOBEAM 2005. In: UNSPECIFIED. , Networked Learning Editorial Collective (2020) Networked Learning:Inviting Redefinition. Postdigital Science and Education, 3. 312–325. ISSN 2524-4868 , On behalf of The CoRe‑Net Co‑applicants (2020) Improving regional care in the last year of life by setting up a pragmatic evidence-based Plan–Do–Study–Act cycle:results from a cross-sectional survey. BMJ Open, 10 (11). ISSN 2044-6055 , Open Doors (2020) Editorial: Rebalancing the research agenda. Dementia, 19 (1). pp. 3-5. ISSN 1471-3012 , PACE (2020) Strategies for the implementation of palliative care education and organizational interventions in long-term care facilities:A scoping review. Palliative Medicine, 34 (5). pp. 558-570. ISSN 0269-2163 , PACE consortium (2020) Physical restraining of nursing home residents in the last week of life:An epidemiological study in six European countries. International Journal of Nursing Studies, 104. ISSN 0020-7489 , SBND Collaboration and Blake, A. S. T. and Brailsford, D. and Holt, S. and Mercer, I. and Nowak, J. and Ratoff, P. and Statter, J. and Wilson, A. (2020) Construction of precision wire readout planes for the Short-Baseline Near Detector (SBND). Journal of Instrumentation. ISSN 1748-0221 , SNO Collaboration and Kormos, L. L. and O'Keeffe, H. M. (2020) A search for $hep$ solar neutrinos and the diffuse supernova neutrino background using all three phases of the Sudbury Neutrino Observatory. Physical Review D. ISSN 2470-0010 , SNO+ Collaboration and Kormos, L. L. and O'Keeffe, H. M. (2020) Measurement of neutron-proton capture in the SNO+ water phase. Physical Review C, 102 (1). ISSN 0556-2813 , Schizophrenia Working Group of the Psychiatric Genomics Consorti (2020) Complement genes contribute sex-biased vulnerability in diverse disorders. Nature. ISSN 0028-0836 , Social Scientists Against the Hostile Environment (SSAHE) (2020) Migration, Racism and the Hostile Environment: Making the Case for the Social Sciences. UNSPECIFIED, wordpress. , T2K Collaboration (2020) First measurement of the charged current ν ¯ μ double differential cross section on a water target without pions in the final state. Physical Review D, 102 (1). ISSN 1550-7998 , T2K Collaboration and Brailsford, D. and Dealtry, T. and Doyle, T. A. and Finch, A. J. and Knox, A. and Kormos, L. L. and Lawe, M. and Nowak, J. and O'Keeffe, H. M. and Ratoff, P. N. and Walsh, J. G. (2020) First combined measurement of the muon neutrino and antineutrino charged-current cross section without pions in the final state at T2K. Physical Review D, 101 (11). ISSN 1550-7998 , T2K Collaboration and Brailsford, D. and Dealtry, T. and Doyle, T. A. and Finch, A. J. and Knox, A. and Kormos, L. L. 
and Lawe, M. and Nowak, J. and O'Keeffe, H. M. and Ratoff, P. N. and Walsh, J. G. (2020) Search for Electron Antineutrino Appearance in a Long-baseline Muon Antineutrino Beam. Physical review letters, 124. ISSN 1079-7114 , T2K Collaboration and Brailsford, D. and Dealtry, T. and Finch, A. J. and Knox, A. and Kormos, L. L. and Lamont, I. and Lawe, M. and Nowak, J. and O'Keeffe, H. M. and Shaw, D. and Walsh, J. G. and Ratoff, Peter (2020) Measurement of the muon neutrino charged-current single $π^+$ production on hydrocarbon using the T2K off-axis near detector ND280. Physical Review D, 101 (1). ISSN 1550-7998 , T2K Collaboration and Brailsford, D. and Dealtry, T. and Finch, A. J. and Knox, A. and Kormos, L. L. and Lawe, M. and Nowak, J. and O'Keeffe, H. M. and Ratoff, P. N. and Walsh, J. G. (2020) Constraint on the matter–antimatter symmetry-violating phase in neutrino oscillations. Nature, 580. pp. 339-344. ISSN 0028-0836 , T2K Collaboration and Brailsford, D. and Dealtry, T. and Finch, A. J. and Knox, A. and Kormos, L. L. and Lawe, M. and Nowak, J. and O'Keeffe, H. M. and Ratoff, P. N. and Walsh, J. G. (2020) Measurement of the charged-current electron (anti-)neutrino inclusive cross-sections at the T2K off-axis near detector ND280. Journal of High Energy Physics, 10. ISSN 1029-8479 , T2K Collaboration and Dealtry, T. and Finch, A. J. and Kormos, L. L. and Lawe, M. and Nowak, J. and O'Keeffe, H. M. and Ratoff, P. N. and Walsh, J. G. and Doyle, Tristan (2020) Simultaneous measurement of the muon neutrino charged-current cross section on oxygen and carbon without pions in the final state at T2K. Physical Review D, 101 (11). ISSN 1550-7998 , TONiC study group (2020) Do pain, anxiety and depression influence quality of life for people with amyotrophic lateral sclerosis/motor neuron disease?:A national study reconciling previous conflicting literature. Journal of Neurology, 267 (3). pp. 607-615. ISSN 0340-5354 , The Alzheimer's Disease Neuroimaging Initiative (2020) The joint lasso:high-dimensional regression for group structured data. Biostatistics, 21 (2). 219–235. ISSN 1465-4644 , The CROMIS-2 collaborators (2020) Cognitive Impairment Before Atrial Fibrillation–Related Ischemic Events:Neuroimaging and Prognostic Associations. Journal of the American Heart Association, 9 (1). ISSN 2047-9980 , The IMBIE Team (2020) Mass balance of the Greenland Ice Sheet from 1992 to 2018. Nature, 579 (7798). pp. 233-239. ISSN 0028-0836 , The Many Babies Consortium (2020) Quantifying Sources of Variability in Infancy Research Using the Infant-Directed Speech Preference. Advances in Methods and Practices in Psychological Science, 3 (1). pp. 24-52. , Tom Schofield and Skinner, Sam (2020) Crash Blossoms:IF & ONLY IF. Torque Editions, Leeds. , UK NIHR Community (2020) AGILE-ACCORD:A Randomized, Multicentre, Seamless, Adaptive Phase I/II Platform Study to Determine the Optimal Dose, Safety and Efficacy of Multiple Candidate Agents for the Treatment of COVID-19: A structured summary of a study protocol for a randomised platform trial. Trials, 21 (1). ISSN 1745-6215 , on behalf of the CHIWOS Research Team (2020) Sexual Anxiety Among Women Living with HIV in the Era of Antiretroviral Treatment Suppressing HIV Transmission. Sexuality Research and Social Policy, 17 (4). pp. 765-779. 
ISSN 1868-9884 , the CHIWOS Research Team (2020) Awareness and Understanding of HIV Non-disclosure Case Law and the Role of Healthcare Providers in Discussions About the Criminalization of HIV Non-disclosure Among Women Living with HIV in Canada. AIDS and Behavior, 24 (1). pp. 95-113. ISSN 1090-7165 Abalkhail, Leenah and Sutanto, Juliana and Fayoumi, Amjad (2020) Understanding the effective use of health information systems from multiple stakeholders perspectives. In: Proceedings of the 28th European Conference on Information Systems. AIS Electronic Library. ISBN 9781733632515 Abas, N.A. and Yusoff, R. and Aroua, M.K. and Abdul Aziz, H. and Idris, Z. (2020) Production of palm-based glycol ester over solid acid catalysed esterification of lauric acid via microwave heating. Chemical Engineering Journal, 382. ISSN 1385-8947 Abbas, Andrea and Ashwin, Paul and McLean, Monica (2020) Sociology. In: The SAGE Encyclopedia of Higher Education. Sage Publishers. Abbas, Madeline Sophie (2020) The promise of political blackness?:contesting blackness, challenging whiteness and the silencing of racism. Ethnicities, 20 (1). pp. 202-222. ISSN 1468-7968 Abbas, Noorhan (2020) Fit and appropriation model for training:an action research study to advance mobile technology training in police forces. PhD thesis, UNSPECIFIED. Abbas, R. and Rossoni, C. and Jaki, T. and Paoletti, X. and Mozgunov, P. (2020) A comparison of phase I dose-finding designs in clinical trials with monotonicity assumption violation. Clinical Trials, 17 (5). pp. 522-534. ISSN 1740-7745 Abbott, B. P. and , LIGO Scientific Collaboration and Virgo Collaboration (2020) GW190425: Observation of a Compact Binary Coalescence with Total Mass $\sim 3.4$\,\Msun. Astrophysical Journal Letters, 892 (1). ISSN 2041-8205 Abd Razak, N.N. and Pérès, Y. and Gew, L.T. and Cognet, P. and Aroua, M.K. (2020) Effect of Reaction Medium Mixture on the Lipase Catalyzed Synthesis of Diacylglycerol. Industrial and Engineering Chemistry Research, 59 (21). pp. 9869-9881. ISSN 0888-5885 Abdelfattah Ahmed Younes, Mona (2020) Blended learning and Syrian refugees' empowerment through a capability approach lens. PhD thesis, UNSPECIFIED. Abdelrazik, A.S. and Tan, K.H. and Aslfattahi, N. and Arifutzzaman, A. and Saidur, R. and Al-Sulaiman, F.A. (2020) Optical, stability and energy performance of water-based MXene nanofluids in hybrid PV/thermal solar systems. Solar Energy, 204. pp. 32-47. ISSN 0038-092X Abdulhameed, Yunus A. (2020) Nonlinear cardiovascular oscillatory dynamics in malaria:Clinical, experimental and theoretical investigations. PhD thesis, UNSPECIFIED. Abdulhameed, Yunus A. and McClintock, Peter V. E. and Stefanovska, Aneta (2020) Race-specific differences in the phase coherence between blood flow and oxygenation:A simultaneous NIRS, white light spectroscopy and LDF study. Journal of Biophotonics, 13 (4). ISSN 1864-063X Abdullah, N. and Saidur, R. and Zainoodin, A.M. and Aslfattahi, N. (2020) Optimization of electrocatalyst performance of platinum–ruthenium induced with MXene by response surface methodology for clean energy application. Journal of Cleaner Production, 277. ISSN 0959-6526 Abdulrasheed, Muhktar and MacKenzie, A.R. and Whyatt, Duncan and Chapman, Lee (2020) Allometric scaling of thermal infrared emitted from UK cities and its relation to urban form. City and Environment Interactions, 5. 
ISSN 2590-2520 Aben, Tom and van der Valk, Wendy and Selviaridis, Kostas (2020) Implications of data-sharing for contractual and relational governance in public-private partnerships. In: EurOMA 2020, 2020-06-292020-06-30. Abosag, Ibrahim and Yen, Dorothy and Barnes, Bradley and Gadalla, Eman (2020) Rethinking Guanxi and Performance:Understanding the Dark Side of Sino–U.S. Business Relationships. International Business Review. 0-0. ISSN 0969-5931 Aboulkhair, N.T. and Zhao, G. and Hague, R.J.M. and Kennedy, A.R. and Ashcroft, I.A. and Clare, A.T. (2020) Generation of graded porous structures by control of process parameters in the selective laser melting of a fixed ratio salt-metal feedstock. Journal of Manufacturing Processes, 55. pp. 249-253. ISSN 1526-6125 Acedera, Rose Anne E. and Gupta, Gaurav and Mamlouk, Mohamed and Balela, Mary Donnabelle L. (2020) Solution combustion synthesis of porous Co3O4 nanoparticles as oxygen evolution reaction (OER) electrocatalysts in alkaline medium. Journal of Alloys and Compounds, 836. ISSN 0925-8388 Acevedo-Siaca, L.G. and Coe, R. and Wang, Y. and Kromdijk, J. and Quick, W.P. and Long, S.P. (2020) Variation in photosynthetic induction between rice accessions and its potential for improving productivity. New Phytologist, 227 (4). pp. 1097-1108. ISSN 0028-646X Aceves-Martins, Magaly and Robertson, C. and Cooper, D. and Avenell, Alison and Stewart, F. and Aveyard, Paul and De Bruin, M. and , REBALANCE Team (2020) A systematic review of UK-based long-term nonsurgical interventions for people with severe obesity (BMI ≥35 kg m-2 ). Journal of Human Nutrition and Dietetics, 33 (3). pp. 351-372. Acosta-Motos, Jose Ramon and Rothwell, Shane and Massam, Maggie and Albacete, Alfonso and Zhang, Hao and Dodd, Ian (2020) Alternate wetting and drying irrigation increases water and phosphorus use efficiency independent of substrate phosphorus status of vegetative rice plants. Plant Physiology and Biochemistry, 155. pp. 914-926. ISSN 0981-9428 Acton, W. J. F. and Huang, Zhonghui and Davison, Brian and Drysdale, Will S. and Fu, Pingqing and Hollaway, Michael and Langford, Ben and Lee, James D. and Liu, Yanhui and Metzger, Stefan and Mullinger, Neil and Nemitz, Eiko and Reeves, Claire E. and Squires, Freya A. and Vaughan, Adam R. and Wang, Xinming and Wang, Zhaoyi and Wild, Oliver and Zhang, Qiang and Zhang, Yanli and Hewitt, C N (2020) Surface–atmosphere fluxes of volatile organic compounds in Beijing. Atmospheric Chemistry and Physics, 20 (23). pp. 15101-15125. ISSN 1680-7316 Adam, E. and Sleeman, K.E. and Brearley, S. and Hunt, K. and Tuffrey-Wijne, I. (2020) The palliative care needs of adults with intellectual disabilities and their access to palliative care services:A systematic review. Palliative Medicine, 34 (8). pp. 1006-1018. ISSN 0269-2163 Adams, Benjamin and Bishop, Cameron and Fahey, Matthew and Greener, Jordan and Kong, Jiayi and Sobral, David (2020) High Redshift AGN: Accretion Rates and Morphologies for X-ray and Radio SC4K Sources from z~2 to z~6. Notices of Lancaster Astrophysics (NLUAstro), 2. pp. 16-28. Adamu, Muhammad Sadi (2020) Adopting an African Standpoint in HCI4D::A Provocation. In: CHI '2020. ACM, New York, pp. 1-8. ISBN 9781450368193 Adamu, Muhammad Sadi (2020) Software project work in an African context:myths, maps and messes. In: OzCHI'2020 Proceedings. ACM, New York, pp. 558-571. 
ISBN 9781450389754 Adamu, Muhammad Sadi and Benachour, Phillip (2020) Analysing the Integration of Models of Technology Diffusion and Acceptance in Nigerian Higher Education. In: Proceedings of the 12th International Conference on Computer Supported Education. SciTePress, pp. 178-187. ISBN 9789897584176 Adamu, Muhammad Sadi and Benachour, Phillip (2020) Blended eLearning Systems in Nigerian Universities:A Context Specific Pedagogical Approach. In: Proceedings of the 12th International Conference on Computer Supported Education. SciTePress, pp. 188-199. ISBN 9789897584176 Adegoke, Elijah and Bradbury, Matthew and Kampert, Erik and Higgins, Matthew and Watson, Tim and Jennings, Paul and Ford, Colin and Buesnel, Guy and Hickling, Steve (2020) PNT Cyber Resilience: a Lab2Live Observer Based Approach, Report 1: GNSS Resilience and Identified Vulnerabilities. Working Paper. UNSPECIFIED, Coventry, UK. Adelusi, Ibitoye and Amaechi, Chiemela Victor and Andrieux, Fabrice and Dawson, Richard (2020) Multiphysics simulation of added carbon particles within fluidized bed anode zinc-electrode. Engineering Research Express, 2 (2). pp. 1-13. ISSN 2631-8695 Adelusi, Ibitoye Adebowale (2020) Development of an efficient future energy storage system incorporating fluidized bed of micro-particles:English. PhD thesis, UNSPECIFIED. Adem, Anwar (2020) Essays on international trade and labour economics. PhD thesis, UNSPECIFIED. Adewusi, J. and Burness, C. and Ellawela, S. and Emsley, H. and Hughes, R. and Lawthom, C. and Maguire, M. and McLean, B. and Mohanraj, R. and Oto, M. and Singhal, S. and Reuber, M. (2020) Brivaracetam efficacy and tolerability in clinical practice:A UK-based retrospective multicenter service evaluation. Epilepsy and Behavior, 106. ISSN 1525-5050 Adusu, Sylvia (2020) Regional cooperation over Gulf of Guinea resources. PhD thesis, UNSPECIFIED. Aeamsuksai, Natthiyar and Mueansichai, Thirawat and Charoensuppanimit, Pongtorn and Kim-Lohsoontorn, Pattaraporn and Aiouache, Farid and Assabumrungrat, Suttichai (2020) Comparison of different synthesis schemes for production of sodium methoxide from methanol and sodium hydroxide. Engineering Journal, 24 (6). pp. 63-77. ISSN 0125-8281 Aeamsuksai, Natthiyar and Mueansichai, Thirawat and Charoensuppanimit, Pongtorn and Kim-Lohsoontorn1, Pattaraporn and Aiouache, Farid and Assabumrungrat, Suttichai (2020) Process simulation of sodium methoxide production from methanol and sodium hydroxide using reactive distillation coupled with pervaporation. Engineering Journal, 24 (6). pp. 63-77. ISSN 0125-8281 Afitska, Oksana (2020) Translanguaging in diverse multilingual classrooms in England:oasis or a mirage? The European Journal of Applied Linguistics and TEFL, 9 (1). pp. 153-181. Afouxenidis, D. and Halcovitch, N.R. and Milne, W.I. and Nathan, A. and Adamopoulos, G. (2020) Films Stoichiometry Effects on the Electronic Transport Properties of Solution-Processed Yttrium Doped Indium–Zinc Oxide Crystalline Semiconductors for Thin Film Transistor Applications. Advanced Electronic Materials, 6 (4). ISSN 2199-160X Agarwal, Nivedita and Chakrabarti, Ronika and Prabhu, Jaideep and Brem, Alexander (2020) Managing dilemmas of resource mobilization through Jugaad:A multi-method study of social enterprises in Indian healthcare. Strategic Entrepreneurship Journal, 14 (3). pp. 419-443. ISSN 1932-4391 Aggarwal, Bhumika and Xiong, Qian and Schröder-Butterfill, Elisabeth (2020) Impact of the Use of the Internet on Quality of Life in Older Adults:review of literature. 
Primary Health Care Research and Development, 21. ISSN 1463-4236 Agol, Dorice and Angelopoulos, Konstantinos and Lazarakis, Spyridon and Mancy, Rebecca and Papyrakis, Elissaios (2020) Briefing Note: Turkana pastoralists at risk: Why education matters. UNSPECIFIED. Aguirrebengoa, M. and Menéndez, R. and Müller, C. and González-Megías, A. (2020) Altered rainfall patterns reduce plant fitness and disrupt interactions between below- and aboveground insect herbivores. Ecosphere, 11 (5). ISSN 2150-8925 Ahl, Helene (2020) Women Entrepreneurs for a Viable Countryside. UNSPECIFIED. Ahmad, I. and Smith, A.F. (2020) Principles for guidelines and guidelines for principles of universal airway management. Anaesthesia, 75 (12). pp. 1570-1573. ISSN 0003-2409 Ahmad, W. and Ayub, N. and Ali, T. and Irfan, M. and Awais, M. and Shiraz, M. and Glowacz, A. (2020) Towards short term electricity load forecasting using improved support vector machine and extreme learning machine. Energies, 13 (11). ISSN 1996-1073 Ahmed, Faraz and Morbey, Hazel and Harding, Andrew and Reeves, David and Swarbrick, Caroline and Davies, Linda and Hann, Mark and Holland, Fiona and Elvish, Ruth and Leroi, Iracema and Burrow, Simon and Burns, Alistair and Keady, John and Reilly, Siobhan (2020) Developing the evidence base for evaluating dementia training in NHS hospitals (DEMTRAIN):a mixed-methods study protocol. BMJ Open, 10 (1). ISSN 2044-6055 Ahmed, Mukarrum (2020) Anti-Suit Injunctions and Article 4 of the Brussels Ia Regulation. UNSPECIFIED. Ahmed, Mukarrum (2020) A Dangerous Chimera:Anti-suit Injunctions based on a "right to be sued" at the place of domicile under the Brussels Ia Regulation? Law Quarterly Review, 136 (3). ISSN 0023-933X Ahmed, Mukarrum (2020) The Future of the Law of Contract. UNSPECIFIED. Ahmed, Mukarrum (2020) Jurisdiction to Sue a Parent Company in the English Courts for the Actions of its Foreign Subsidiary. Atâtôt - Revista Interdisciplinar de Direitos Humanos da UEG, 1 (2). pp. 25-39. Ahmed, Mukarrum (2020) The Nature and Enforcement of Choice of Court Agreements:A Comparative Study. Hart Publishing, Oxford. ISBN 9781509936410 Ahmed, Mukarrum (2020) The Validity of Choice of Court Agreements in International Commercial Contracts under the Hague Choice of Court Convention and the Brussels Ia Regulation. In: The Future of the Law of Contract. Markets and the Law . Routledge, London. ISBN 9780367174033 Ahmed, Tariq and Russell, Paul and Makwashi, Nura and Hamad, Faik and Gooneratne, Samantha (2020) Design and capital cost optimisation of three-phase gravity separators. Heliyon, 6 (6). ISSN 2405-8440 Ahtzaz, S. and Sher Waris, T. and Shahzadi, L. and Anwar Chaudhry, A. and Ur Rehman, I. and Yar, M. (2020) Boron for tissue regeneration-it's loading into chitosan/collagen hydrogels and testing on chorioallantoic membrane to study the effect on angiogenesis. International Journal of Polymeric Materials and Polymeric Biomaterials, 69 (8). pp. 525-534. ISSN 0091-4037 Aiouache, Farid and Abdelraouf, Mohamed and Hegarty, Josef and Rennie, Allan and Elizalde, Remi and Burns, Neil and Geekie, Louise and Najdanovic, Vesna (2020) Sol-gel alumina coating of wired mesh packing. Ceramics International, 46 (13). pp. 20777-20787. ISSN 0272-8842 Ajjoub, Orwa and Aldoughli, Rahaf Bara'a (2020) What does ''peace'' even mean for the peoples of the Middle East? Al-Jumhuriya. 
Akhmedova, Anna and Cavallotti, Rita and Marimon, Frederic and Campopiano, Giovanna (2020) Daughters' careers in family business:Motivation types and family-specific barriers. Journal of Family Business Strategy, 11 (3). ISSN 1877-8585 Akimov, Alexey and Lee, Chyi Lin and Stevenson, Simon (2020) Interest Rate Sensitivity in European Public Real Estate Markets. Journal of Real Estate Portfolio Management, 25 (2). pp. 138-150. ISSN 1083-5547 Akintola, Kayode (2020) CREDITOR TREATMENT IN CORPORATE INSOLVENCY LAW. Edward Elgar. ISBN 9781788971386 Akintola, Kayode (2020) Creditor Treatment in Corporate Insolvency Law. Elgar Corporate and Insolvency Law and Practice . Edward Elgar, Cheltenham. ISBN 9781788971386 Akintola, Kayode and Milman, David (2020) The Rise, Fall and Potential for a Rebirth of Receivership in UK Corporate Law. Journal of Corporate Law Studies, 20 (1). pp. 99-119. ISSN 1473-5970 Akmal, Haider Ali and Coulton, Paul (2020) The Divination of Things by Things. In: CHI'20. ACM, New York. ISBN 9871450368193 Akmal, Haider Ali and Coulton, Paul (2020) The Internet of Things Game:Revealing the Complexity of the IoT. In: Proceedings of Digital Games Research Association Conference 2020 (DiGRA 2020). Digital Games Research Association - DiGRA, FIN. Akmal, Haider Ali and Coulton, Paul (2020) A Tarot of Things:a supernatural approach to designing for IoT. In: Proceedings DRS 2020. UNSPECIFIED, AUS. Akyirem, Samuel and Salifu, Yakubu and Duodu, Precious Adade (2020) The concept and use of reassurance in clinical practice: an integrative review. PROSPERO International prospective register of systematic reviews. Al Hamrashdi, Hajir (2020) The design and testing of a novel compact real-time hybrid Compton and neutron scattering instrument. PhD thesis, UNSPECIFIED. Al Hamrashdi, Hajir and Cheneler, David and Monk, Stephen (2020) Design and Optimisation of a Three Layers Thermal Neutron, Fast Neutron and Gamma-Ray Imaging System. In: EPJ Web of Conferences. EPJ Web of Conferences. ISBN 9782759890934 Al Hamrashdi, Hajir and Cheneler, David and Monk, Stephen David (2020) A fast and portable imager for neutron and gamma emitting radionuclides. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 953. ISSN 0168-9002 Al Hamrashdi, Hajir and Monk, Stephen and Cheneler, David (2020) Neutron/Gamma Pulse discrimination analysis of GS10 Lithium glass and EJ-204 plastic scintillators. Journal of Instrumentation, 15. ISSN 1748-0221 Al-Astewani, Amin (2020) Arbitration as a Legal Solution for Relationship Breakdown in the Muslim Community:The Case of the Muslim Arbitration Tribunal. In: Cohabitation and Religious Marriage. Bristol University Press, Bristol, pp. 129-142. ISBN 9781529210835 Al-Astewani, Amin (2020) To Open or Close? COVID-19, Mosques and the Role of Religious Authority within the British Muslim Community:A Socio-Legal Analysis. Religions, 12 (1). ISSN 2077-1444 Al-Bachari, Sarah and Naish, Josephine H. and Parker, Geoff J. and Emsley, Hedley and Parkes, Laura M. (2020) Blood-brain barrier leakage is increased in Parkinson's disease. Frontiers in Physiology, 11. ISSN 1664-042X Al-Bander, Baidaa and Alzahrani, Theiab and Alzahrani, Saeed and Williams, Bryan M. and Zheng, Yalin (2020) Improving Fetal Head Contour Detection by Object Localisation with Deep Learning. In: Medical Image Understanding and Analysis. Communications in Computer and Information Science . Springer, GBR, pp. 142-150. 
ISBN 9783030393427 Al-Hammadi, Israa (2020) Actions of cannabinoids on amoebae. PhD thesis, UNSPECIFIED. Al-Salami, Nasser (2020) Kleptography and steganography in blockchains. PhD thesis, UNSPECIFIED. Al-Salami, Nasser and Zhang, Bingsheng (2020) Uncontrolled Randomness in Blockchains:Covert Bulletin Board for Illicit Activity. In: 2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS). IEEE. ISBN 9781728168883 Al-Saymari, Furat (2020) Design, growth and characterisation of resonant cavity light emitting diodes (RCLEDs) for mid-infrared applications. PhD thesis, UNSPECIFIED. Al-Saymari, Furat and Craig, Adam and Lu, Qi and Marshall, Andrew and Carrington, Peter and Krier, Anthony (2020) Mid-infrared resonant cavity light emitting diodes operating at 4.5 μm. Optics Express, 28 (16). pp. 23338-23353. ISSN 1094-4087 Al-Wasity, Salim and Vogt, Stefan and Vuckovic, Aleksandra and Pollick, Frank (2020) Hyperalignment of motor cortical areas based on motor imagery during action observation. Scientific Reports, 10 (1). pp. 1-12. ISSN 2045-2322 AlJasser, Arwa and Uus, Kai and Prendergast, Garreth and Plack, Christopher (2020) Sub-Clinical Auditory Neural Deficits in Patients with Type 1 Diabetes Mellitus. Ear and Hearing, 41 (3). pp. 561-575. ISSN 0196-0202 Alalmaei, Shiyam and Elkhatib, Yehia and Bezahaf, Mehdi and Broadbent, Matthew and Race, Nicholas (2020) SDN Heading North:Towards a Declarative Intent-based Northbound Interface. In: 2020 16th International Conference on Network and Service Management (CNSM). IEEE, pp. 1-5. ISBN 9783903176317 Alam, Lubna and Sun, Ruonan and Campbell, John (2020) Helping Yourself or Others?:Motivation Dynamics for High-Performing Volunteers in GLAM Crowdsourcing. Australasian Journal of Information Systems, 24. ISSN 1326-2238 Alammar, Layla (2020) Are novelists obliged to tell the story of their private life? The Guardian, United Kingdom. Alammar, Layla (2020) A Symbolic Wa'd: Silencing Arab Women. Epoch Magazine, United Kingdom. Alanazi, Faizah (2020) Saudis in the eyes of the other:A corpus-driven critical discourse study of the representation of Saudis on Twitter. PhD thesis, UNSPECIFIED. Albrecht, Matthias and Kleijn, David and Williams, Neal M. and Tschumi, Matthias and Blaauw, Brett R. and Bommarco, Riccardo and Campbell, Alistair J. and Dainese, Matteo and Drummond, Francis A. and Entling, Martin H. and Ganser, Dominik and Arjen de Groot, G. and Goulson, Dave and Grab, Heather and Hamilton, Hannah and Herzog, Felix and Isaacs, Rufus and Jacot, Katja and Jeanneret, Philippe and Jonsson, Mattias and Knop, Eva and Kremen, Claire and Landis, Douglas A. and Loeb, Gregory M. and Marini, Lorenzo and McKerchar, Megan and Morandin, Lora and Pfister, Sonja C. and Potts, Simon G. and Rundlöf, Maj and Sardiñas, Hillary and Sciligo, Amber and Thies, Carsten and Tscharntke, Teja and Venturini, Eric and Veromann, Eve and Vollhardt, Ines M.G. and Wäckers, Felix and Ward, Kimiora and Wilby, Andrew and Woltz, Megan and Wratten, Steve and Sutter, Louis (2020) The effectiveness of flower strips and hedgerows on pest control, pollination services and crop yield:a quantitative synthesis. Ecology Letters, 23 (10). pp. 1488-1498. ISSN 1461-023X Albrecht, Olivia and Bandala Sanchez, Manuel and Monk, Stephen and Taylor, James (2020) Control of hydraulically–actuated manipulators with dead–band and time–delay uncertainties. In: UK-RAS 2020 Conference, 2020-04-172020-04-17, University of Lincoln. Albury, C. and Strain, W.D. and Brocq, S.L. and Logue, J. 
ISBN 9781728166209 Bates, Oliver and Lord, Carolynne and Alter, Hayley and Kirman, Ben (2020) Let's start talking the walk:Capturing and reflecting on our limits when working with gig economy workers. In: ICT4S2020. ACM, New York, 227–235. ISBN 9781450375955 Batterbury, Simon (2020) Abierto pero injusto:el papel de la justicia social en las publicaciones de acceso abierto. SciELO – Scientific Electronic Library Online. Batterbury, Simon (2020) Open but Unfair- The role of social justice in Open Access publishing. UNSPECIFIED. Batterbury, Simon (2020) Political ecology in, and of, the Australian bushfires. Universitat Autonoma de Barcelona. Batterbury, Simon and Bouard, Séverine and Kowasch, Matthias (2020) Indigenous responses to colonialism in an island state:a geopolitical ecology of Kanaky-New Caledonia. In: Terrestrial transformations. Lexington Books, pp. 111-120. ISBN 9781793605467 Batterbury, Simon and Kowasch, Matthias and Bouard, Séverine (2020) The geopolitical ecology of New Caledonia:territorial re-ordering, mining, and Indigenous economic development. Journal of Political Ecology, 27 (1). pp. 594-611. ISSN 1073-0451 Battiston, Marco and Ayed, Fadhel and Di Benedetto, Giuseppe (2020) A Bayesian Nonparametric Approach to Differentially Private Data. In: Privacy in Statistical Databases. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) . Springer, ESP, pp. 32-48. ISBN 9783030575205 Bauer, Florian and Rothermel, Marcella and Tarba, Shlomo and Arslan, Ahmad and Uzelac, Borislav (2020) Marketing integration decisions, intermediate goals, and market expansion in horizontal acquisitions:how marketing fit moderates the relationships on intermediate goals. British Journal of Management, 31 (4). pp. 896-917. ISSN 1045-3172 Baxter, Tess (2020) Falling between Worlds:Catalogue of video art 2016-2020. Dane Stone Publishing, Ulverston. ISBN 9780955374944 Bayoumi, Mahmoud and Rohaim, Mohammed A and Munir, Muhammad (2020) Structural and Virus Regulatory Insights Into Avian N6-Methyladenosine (m6A) Machinery. Frontiers in cell and developmental biology, 8. ISSN 2296-634X Bazhydai, Marina (2020) Social learning mechanisms of knowledge exchange:Active communication, information seeking and information transmission in infancy. PhD thesis, UNSPECIFIED. Bazhydai, Marina and Silverstein, Priya and Parise, Eugenio and Westermann, Gert (2020) Two-year old children preferentially transmit simple actions but not pedagogically demonstrated actions. Developmental Science, 23 (5). ISSN 1363-755X Bazhydai, Marina and Twomey, Katherine and Westermann, Gert (2020) Curiosity and Exploration. In: Encyclopedia of Infant and Early Childhood Development. Elsevier, pp. 370-378. ISBN 9780128165126 Bazhydai, Marina and Westermann, Gert (2020) From Curiosity, to Wonder, to Creativity:a Cognitive Developmental Psychology Perspective. In: Wonder, education, and human flourishing. VU University Press, Amsterdam, pp. 144-182. ISBN 9789086598199 Bazhydai, Marina and Westermann, Gert and Parise, Eugenio (2020) "I don't know but I know who to ask":12-month-olds actively seek information from knowledgeable adults. Developmental Science, 23 (5). ISSN 1363-755X Bazin, Somayeh and Abbasi Esbourezi, Mohammad Hossein and Akbari, Mohaddeseh (2020) 2020 28th Iranian Conference on Electrical Engineering (ICEE). In: 2020 28th Iranian Conference on Electrical Engineering, ICEE 2020. 2020 28th Iranian Conference on Electrical Engineering, ICEE 2020 . 
UNSPECIFIED. ISBN 9781728172965 Beadle, James and Taylor, C. James and Ashworth, Kirsti and Cheneler, David (2020) Plant leaf position estimation with computer vision. Sensors, 20 (20). ISSN 1424-8220 Beanland, Kevin and Kania, Tomasz and Laustsen, Niels (2020) Closed ideals of operators on the Tsirelson and Schreier spaces. Journal of Functional Analysis, 279 (8). ISSN 0022-1236 Beaujouan, Juline and Rasheed, Amjed (2020) The Syrian Refugee Crisis in Jordan and Lebanon:Impact and Implications. Middle East Policy, 27 (3). pp. 76-98. ISSN 1061-1924 Beaulieu, Claudie and Killick, Rebecca Claire and Ireland, David and Norwood, Ben (2020) Considering long-memory when testing for changepoints in surface temperature:a classification approach based on the time-varying spectrum. Environmetrics, 31 (1). ISSN 1099-095X Bebbington, Jan and Schneider, Thomas and Stevenson, Lorna and Fox, Alison (2020) Fossil fuel reserves and resources reporting and unburnable carbon:investigating conflicting accounts. Critical Perspectives on Accounting, 66. ISSN 1045-2354 Bebbington, Jan and Unerman, Jeffrey (2020) Advancing research into accounting and the UN sustainable development goals. Accounting, Auditing and Accountability Journal, 33 (7). pp. 1657-1670. ISSN 0951-3574 Becvar, T. and Siriyasatien, P. and Bates, P. and Volf, P. and Sádlová, J. (2020) Development of Leishmania (Mundinia) in guinea pigs. Parasites and Vectors, 13 (1). ISSN 1756-3305 Bedston, S.J. and Pearson, R.J. and Jay, M.A. and Broadhurst, K. and Gilbert, R. and Wijlaars, L. (2020) Data Resource:Children and Family Court Advisory and Support Service (Cafcass) public family law administrative records in England. International Journal of Population Data Science, 5 (1). Beerling, David J. and Kantzas, Euripides P. and Lomas, Mark R. and Wade, Peter and Eufrasio, Rafael M. and Renforth, Phil and Sarkar, Binoy and Andrews, M. Grace and James, Rachael H. and Pearce, Christopher R. and Mecure, Jean-Francois and Pollitt, Hector and Holden, Philip B. and Edwards, Neil R. and Khanna, Madhu and Koh, Lenny and Quegan, Shaun and Pidgeon, Nick F. and Janssens, Ivan A. and Hansen, James and Banwart, Steven A. (2020) Potential for large-scale CO2 removal via enhanced rock weathering with croplands. Nature, 583. pp. 242-248. ISSN 0028-0836 Beine, Michel and Bertinelli, Luisito and Comertpay, Rana and Litina, Anastasia and Maystadt, Jean-Francois (2020) The Gravity Model of Forced Displacement Using Mobile Phone Data. Working Paper. Lancaster University, Department of Economics, Lancaster. Bektas, Tolga and Letchford, Adam (2020) Using ℓp-norms for fairness in combinatorial optimisation. Computers and Operations Research, 120. ISSN 0305-0548 Belcher, Oliver and Bigger, Patrick and Neimark, Benjamin and Kennelly, Cara (2020) Hidden carbon costs of the "everywhere war":Logistics, geopolitical ecology, and the carbon boot‐print of the US military. Transactions of the Institute of British Geographers, 45 (1). pp. 65-80. ISSN 0020-2754 Belcher, Oliver and Neimark, Benjamin and Bigger, Patrick (2020) The U.S. military is not sustainable. Science, 367 (6481). pp. 989-990. ISSN 0036-8075 Belimov, A.A. and Dodd, I.C. and Safronova, V.I. and Dietz, K.-J. (2020) Leaf nutrient homeostasis and maintenance of photosynthesis integrity contribute to adaptation of the pea mutant sgecdt to cadmium. Biologia Plantarum, 64. pp. 447-453. 
ISSN 0006-3134 Bellec, Matthieu and Poli, Charles and Kuhl, Ulrich and Mortessagne, Fabrice and Schomerus, Henning (2020) Observation of supersymmetric pseudo-Landau levels in strained microwave graphene. Light: Science and Applications, 9. ISSN 2095-5545 Bellew-Dunn, Esme (2020) Testing Sulforaphane for chemoprevention against ageing and functional decline in male Drosophila models. Masters thesis, UNSPECIFIED. Bello, Mouktar and Chorti, Arsenia and Fijalkow, Inbar and Yu, Wenjuan and Musavian, Leila (2020) Asymptotic Performance Analysis of NOMA Uplink Networks Under Statistical QoS Delay Constraints. IEEE Open Journal of the Communications Society, 1. pp. 1691-1706. Bello, Mouktar and Yu, Wenjuan and Chorti, Arsenia and Musavian, Leila (2020) Performance Analysis of NOMA Uplink Networks under Statistical QoS Delay Constraints. In: ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC). IEEE International Conference on Communications . IEEE. Bellotti, Elisa and Spencer, Jon and Lord, Nicholas and Benson, Katie (2020) Counterfeit Alcohol Distribution:A Criminological Script Network Analysis. European Journal of Criminology, 17 (4). pp. 373-398. ISSN 1477-3708 Belton, Alexander and Guillot, Dominique and Khare, Apoorva and Putinar, Mihai (2020) A panorama of positivity. II:fixed dimension. In: Complex Analysis and Spectral Theory. Contemporary Mathematics . American Mathematical Society, Providence, RI, pp. 109-150. ISBN 9781470446925 Benaud, P. and Anderson, K. and Evans, M. and Farrow, L. and Glendell, M. and James, M.R. and Quine, T.A. and Quinton, J.N. and Rawlins, B. and Jane Rickson, R. and Brazier, R.E. (2020) National-scale geodata describe widespread accelerated soil erosion. Geoderma, 371. ISSN 0016-7061 Benedict, F. and Kumar, A. and Kadirgama, K. and Mohammed, H.A. and Ramasamy, D. and Samykano, M. and Saidur, R. (2020) Thermal performance of hybrid-inspired coolant for radiator application. Nanomaterials, 10 (6). ISSN 2079-4991 Benes, B. and Guan, K. and Lang, M. and Long, S.P. and Lynch, J.P. and Marshall-Colón, A. and Peng, B. and Schnable, J. and Sweetlove, L.J. and Turk, M.J. (2020) Multiscale computational models can guide experimentation and targeted measurements for crop improvement. The Plant Journal, 103 (1). pp. 21-31. ISSN 0960-7412 Benevene, Paula and Buonomo, Ilyria and West, Michael (2020) The relationship between leadership behaviors and volunteer commitment:The role of volunteer satisfaction. Frontiers in Psychology, 11. ISSN 1664-1078 Benkoff, Jenni (2020) Emotional experiences in emergency ambulance services. PhD thesis, UNSPECIFIED. Benkwitt, C.E. and Wilson, S.K. and Graham, N.A.J. (2020) Biodiversity increases ecosystem functions despite multiple stressors on coral reefs. Nature Ecology and Evolution, 4. pp. 919-926. ISSN 2397-334X Bennett, Benjamin and Stulz, René and Wang, Jesse (2020) Does the stock market make firms more productive? Journal of Financial Economics, 136 (2). pp. 281-306. ISSN 0304-405X Bennett, Bruce (2020) Cinema and Cycling, Technological Twins. MIT Press. Bennett, Bruce (2020) Cycling, Art and Utopian Possibilities. Goldsmiths Press. Bennett, Bruce (2020) Cycling, Art and Utopian Possibilities. MIT Press. Bennett, Bruce and Marciniak, Katarzyna (2020) Close Encounters with Foreigness. In: Transnational Screens. Routledge, London. ISBN 9780367477158 Benson, Katie (2020) Lawyers and the Proceeds of Crime: The Facilitation of Money Laundering and its Control:An overview and summary of findings. UNSPECIFIED. 
(Unpublished) Benson, Katie (2020) Lawyers and the Proceeds of Crime:The Facilitation of Money Laundering and Its Control. The Law of Financial Crime . Routledge, London. ISBN 9781138744868 Benson, Katie (2020) Occupation, Organisation and Opportunity:Theorising the Facilitation of Money Laundering as 'White-Collar Crime'. In: Assets, Crimes and the State. Transnational Criminal Justice . Routledge, London. ISBN 9780367025922 Benson, Katie and King, Colin and Walker, Clive (2020) Dirty Money and the New Responses of the 21st Century. In: Assets, Crimes and the State. Transnational Criminal Justice . Routledge, London. ISBN 9780367025922 Benson, Michaela (2020) Brexit and the Classed Politics of Bordering:The British in France and European Belongings. Sociology, 54 (3). pp. 501-517. ISSN 0038-0385 Bentabet, Najah-Imane and Juge, Rémi and El Maarouf, Ismail and Mouilleron, Virginie and Valsamou-Stanislawski, Dialekti and El-Haj, Mahmoud (2020) The Financial Document Structure Extraction Shared task (FinToc 2020). In: Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation. COLING, Barcelona, Spain (Online), pp. 13-22. ISBN 9781952148408 Bentley, Sophie and Hobbs, Laura and Stevens, Carly and Hartley, Jackie (2020) Exploring space using Minecraft. In: Space Science in Context, 2020-05-14, virtual conference. Benton, Clare and Phoenix, Jess and Smith, Freya and Robertson, Andrew and McDonald, Robbie and Wilson, Gavin and Delahay, Richard (2020) Badger vaccination in England: Progress, operational effectiveness and participant motivations. People and Nature, 2 (3). pp. 761-775. ISSN 2575-8314 Benz, Corinna and Urbaniak, Mick (2020) Proteome-wide quantitative phosphoproteomic analysis of Trypanosoma brucei insect and mammalian lifecycle stages. In: Trypanosomatids. Methods in Molecular Biology . Humana Press, New York, pp. 125-137. ISBN 9781071602935 Berchoux, T. and Watmough, G.R. and Amoako Johnson, F. and Hutton, C.W. and Atkinson, P.M. (2020) Collective influence of household and community capitals on agricultural employment as a measure of rural poverty in the Mahanadi Delta, India. AMBIO: A Journal of the Human Environment, 49 (1). pp. 281-298. ISSN 0044-7447 Berger, Christian and Deutsch, Nancy and Cuadros, Olga and Franco, Eduardo and Rojas, Matías and Roux, Gabriela and Sanchez Burgos, Felipe (2020) Adolescent peer processes in extracurricular activities:Identifying developmental opportunities. Children and Youth Services Review, 118. ISSN 0190-7409 Bergström, A. and Ehrenberg, A. and Eldh, A.C. and Graham, I.D. and Gustafsson, K. and Harvey, G. and Hunter, S. and Kitson, A. and Rycroft-Malone, J. and Wallin, L. (2020) The use of the PARIHS framework in implementation research and practice-a citation analysis of the literature. Implementation Science, 15 (1). ISSN 1748-5908 Bernabeu, Pablo (2020) Web application for the simulation of experimental data. UNSPECIFIED. Bernhard, G.H. and Neale, R.E. and Barnes, P.W. and Neale, P.J. and Zepp, R.G. and Wilson, S.R. and Andrady, A.L. and Bais, A.F. and McKenzie, R.L. and Aucamp, P.J. and Young, P.J. and Liley, J.B. and Lucas, R.M. and Yazar, S. and Rhodes, L.E. and Byrne, S.N. and Hollestein, L.M. and Olsen, C.M. and Young, A.R. and Robson, T.M. and Bornman, J.F. and Jansen, M.A.K. and Robinson, S.A. and Ballaré, C.L. and Williamson, C.E. and Rose, K.C. and Banaszak, A.T. and Häder, D.-P. and Hylander, S. and Wängberg, S. and Austin, A.T. and Hou, W.-C. and Paul, N.D. and Madronich, S. 
and Sulzberger, B. and Solomon, K.R. and Li, H. and Schikowski, T. and Longstreth, J. and Pandey, K.K. and Heikkilä, A.M. and White, C.C. (2020) Environmental effects of stratospheric ozone depletion, UV radiation and interactions with climate change:UNEP Environmental Effects Assessment Panel, update 2019. Photochemical and Photobiological Sciences, 19 (5). pp. 542-584. ISSN 1474-905X Berry, K. and Barrowclough, C. and Fitsimmons, M. and Hartwell, R. and Hilton, C. and Riste, L. and Wilson, I. and Jones, S. (2020) Overcoming challenges in delivering integrated motivational interviewing and cognitive behavioural therapy for bipolar disorder with co-morbid alcohol use:Therapist perspectives. Behavioural and Cognitive Psychotherapy, 48 (5). pp. 615-620. ISSN 1352-4658 Beshir, Habtamu and Maystadt, Jean-Francois (2020) In Utero Seasonal Food Insecurity and Cognitive Development:Evidence on Gender Imbalances From Ethiopia. Journal of African Economies, 29 (4). pp. 412-431. ISSN 0963-8024 Betar, B.O. and Alsaadi, M.A. and Chowdhury, Z.Z. and Aroua, M.K. and Mjalli, F.S. and Dimyati, K. and Hindia, M.N. and Elfghi, F.M. and Ahmed, Y.M. and Abbas, H.F. (2020) Bimetallic Mo-Fe Co-catalyst-based nano-carbon impregnated on PAC for optimumsuper-hydrophobicity. Symmetry, 12 (8). ISSN 2073-8994 Betta, G.-F.D. and Obryk, B. and Pia, M.G. and Britton, C. and Cao, L.R. and Dong, Z. and Dreyer, J. and Girard, S. and Jansson, P. and Joyce, M. and Kouzes, R. and Lyoussi, A. (2020) Comments by the Senior Editor. IEEE Transactions on Nuclear Science, 67 (4). p. 543. ISSN 0018-9499 Bettini, Giovanni and Gioli, Giovanna and Felli, Romain (2020) Clouded Skies:How digital technologies could reshape "Loss and Damage" from climate change. Wiley Interdisciplinary Reviews: Climate Change, 11 (4). ISSN 1757-7780 Bettinson, Gary (2020) Chinese Censorship, Genre Mediation, and the Puzzle Films of Leste Chen. In: Renegotiating Film Genres in East Asian Cinemas and Beyond. East Asian Popular Culture . Palgrave Macmillan, Switzerland, pp. 137-162. ISBN 9783030550769 Bettinson, Gary (2020) Film Theory. Year's Work in Critical and Cultural Theory, 8 (1). pp. 124-145. ISSN 1077-4254 Bettinson, Gary (2020) Review of Jing Jing Chang, Screening Communities:Negoiating Narratives of Empire, Nation and the Cold War in Hong Kong Cinema. The China Journal, 83 (1). pp. 246-248. ISSN 1324-9347 Bettinson, Gary (2020) Yesterday Once More:Hong Kong-China Coproductions and the Myth of Mainlandization. Journal of Chinese Cinemas, 14 (1). pp. 16-31. ISSN 1750-8061 Bettles, Robert J. and Lee, Mark D. and Gardiner, Simon A. and Ruostekoski, Janne (2020) Quantum and Nonlinear Effects in Light Transmitted through Planar Atomic Arrays. Communications Physics, 3. ISSN 2399-3650 Betz, Volker and Schäfer, Helge and Zeindler, Dirk (2020) Random permutations without macroscopic cycles. Annals of Applied Probability, 30 (3). pp. 1484-1505. ISSN 1050-5164 Beven, K. (2020) Deep learning, hydrological processes and the uniqueness of place. Hydrological Processes, 34 (16). pp. 3608-3613. ISSN 0885-6087 Beven, K.J. (2020) A history of the concept of time of concentration. Hydrology and Earth System Sciences, 24 (5). pp. 2655-2670. ISSN 1027-5606 Beven, Keith and Asadullah, Anita and Bates, Paul and Blyth, Eleanor and Chappell, Nick and Child, Stewart and Cloke, Hannah and Dadson, Simon and Everard, Nick and Fowler, Hayley J. and Freer, Jim and Hannah, David M. 
and Heppell, Kate and Holden, Joseph and Lamb, Rob and Lewis, Huw and Morgan, Gerald and Parry, Louise and Wagener, Thorsten (2020) Developing observational methods to drive future hydrological science:Can we make a start as a community? Hydrological Processes, 34 (3). pp. 868-873. ISSN 0885-6087 Bevilacqua, M. and Diggle, P.J. and Porcu, E. (2020) Families of covariance functions for bivariate random fields on spheres. Spatial Statistics, 40. ISSN 2211-6753 Bezahaf, Mehdi and Hutchison, David and King, Daniel and Race, Nicholas (2020) Internet Evolution:Critical Issues. IEEE Internet Computing, 24 (4). pp. 5-14. ISSN 1089-7801 Bezahaf, Mehdi and Perez Hernandez, Marco and Bardwell, Lawrence and Davies, Eleanor and Broadbent, Matthew and King, Daniel and Hutchison, David (2020) Self-Generated Intent-Based System. In: 2019 10th International Conference on Networks of the Future (NoF). IEEE, pp. 138-140. ISBN 9781728144467 Bezerra, C.G. and Costa, B.S.J. and Guedes, L.A. and Angelov, P.P. (2020) An evolving approach to data streams clustering based on typicality and eccentricity data analytics. Information Sciences, 518. pp. 13-28. ISSN 0020-0255 Bezerra De Souza, Thamyrys and Machado Franca, Filipe and Barlow, Jos and Dodonov, Pavel and Santos, Juliana S. and Faria, Deborah and Baumgarten, Júlio E. (2020) The relative influence of different landscape attributes on dung beetle communities in the Brazilian Atlantic forest. Ecological Indicators, 117. ISSN 1470-160X Bhargava, S. and Giles, P. A. and Romer, A. K. and Jeltema, T. and Mayers, J. and Bermeo, A. and Hilton, M. and Wilkinson, R. and Vergara, C. and Collins, C. A. and Manolopoulou, M. and Rooney, P. J. and Rosborough, S. and Sabirli, K. and Stott, J. P. and Swann, E. and Viana, P. T. P. (2020) The XMM Cluster Survey:new evidence for the 3.5 keV feature in clusters is inconsistent with a dark matter origin. Monthly Notices of the Royal Astronomical Society, 497 (1). pp. 656-671. ISSN 0035-8711 Bhat, B.V.R. and Hillier, Robin and Mallick, Nirupama and U., Vijaya Kumar (2020) Roots of completely positive maps. Linear Algebra and its Applications, 587. pp. 143-165. ISSN 0024-3795 Bhatnagar, V.R. and Jain, A.K. and Tripathi, S.S. and Giga, S. (2020) Beyond the competency frameworks-conceptualizing and deploying employee strengths at work. Journal of Asia Business Studies, 14 (5). pp. 691-709. ISSN 1558-7894 Bhattacharjee, Mitradip and Soni, Mahesh and Escobedo, Pablo and Dahiya, Ravinder (2020) PEDOT:PSS Microchannel based Highly Sensitive Stretchable Strain Sensor. Advanced Electronic Materials, 6 (8). ISSN 2199-160X Bhattacharya, Gourab and Robinson, Delores and Orme, Devon and Najman, Yani and Carter, Andrew (2020) Low-temperature thermochronology of the Indus Basin in central Ladakh, northwest India:Implications of Miocene–Pliocene cooling in the India-Asia collision zone. Tectonics, 39 (10). ISSN 0278-7407 Bhattacharyya, Gargi and Virdee, Satnam and Winter, Aaron (2020) Revisiting histories of anti-racist thought and activism. Identities: Global Studies in Culture and Power, 27 (1). pp. 1-19. ISSN 1070-289X Bi, Sheng and Zhang, Yuan and Cervini, Luca and Mo, Tangming and Griffin, John M. and Presser, Volker and Feng, Guang (2020) Permselective ion electrosorption of subnanometer pores at high molar strength enables capacitive deionization of saline water. Sustainable Energy and Fuels, 4 (3). pp. 1285-1295. ISSN 2398-4902 Bianconi, M.E. and Dunning, L.T. and Curran, E.V. and Hidalgo, O. and Powell, R.F. and Mian, S. and Leitch, I.J. 
and Lundgren, M.R. and Manzi, S. and Vorontsova, M.S. and Besnard, G. and Osborne, C.P. and Olofsson, J.K. and Christin, P.-A. (2020) Contrasted histories of organelle and nuclear genomes underlying physiological diversification in a grass species:Intraspecific dispersal of C4 physiology. Proceedings of the Royal Society B: Biological Sciences, 287 (1938). ISSN 0962-8452 Bieluczyk, Wanderlei and de Cassia Piccolo, Marisa and Pereira, Marcos Gervasio and Tuzzin de Moraes, Moacir and Soltangheisi, Amin and de Campos Bernardi, Alberto Carlos and Pezzopane, Jose Ricardo Macedo and Oliveira, Patricia Perondi Anchao and Moreira, Marcelo Zacharias and Barbosa de Camargo, Plinio and dos Santos Dias, Carlos Tadeu and Batista, Itaynara and Cherubin, Maurício Roberto (2020) Integrated farming systems influence soil organic matter dynamics in southeastern Brazil. Geoderma, 371. ISSN 0016-7061 Biggart, Michael and Stocker, Jenny and Doherty, R. M. and Wild, Oliver and Hollaway, Michael and Carruthers, David and Li, Jie and Zhang, Qiang and Wu, Ruili and Kotthaus, Simone and Grimmond, Sue and Squires, F.A. and Lee, James and Shi, Zongbo (2020) Street-scale air quality modelling for Beijing during a winter 2016 measurement campaign. Atmospheric Chemistry and Physics, 20. pp. 2755-2780. ISSN 1680-7316 Bigger, Patrick and Millington, Nate (2020) Getting soaked?:Climate Crisis, Adaptation Finance, and Racialized Austerity. Environment and Planning E: Nature and Space, 3 (3). pp. 601-623. ISSN 2514-8486 Biggerstaff, Matthew and Dahlgren, Frederick and Fitzner, Julia and George, Dylan and Hammond, Aspen and Hall, Ian and Haw, David and Imai, Natsuko and Johansson, Michael and Kramer, Sarah and McCaw, James and Moss, Robert and Pebody, Richard and Read, Jonathan and Reed, Carrie and Reich, Nicolas and Riley, Steven and Vandemaele, Katelijn and Viboud, Cecile and Wu, Joseph (2020) Coordinating the real‐time use of global influenza activity data for better public health planning. Influenza and Other Respiratory Viruses, 14 (2). pp. 105-110. ISSN 1750-2640 Biggin, Frances and Emsley, Hedley and Knight, Jo (2020) Routinely collected patient data in neurology research:a systematic mapping review. BMC Neurology, 20. ISSN 1471-2377 Billa, Laxma and Akram, M. Nadeem and Paoloni, Claudio and Chen, Xuyuan (2020) H - and E -Plane Loaded Slow Wave Structure for W -Band TWT. IEEE Transactions on Electron Devices, 67 (1). pp. 309-313. ISSN 0018-9383 Billa, Laxma and Paoloni, Claudio and Letizia, Rosa and Lewis, Edward (2020) Variable aperture horn antenna for millimeter wave wireless communications. In: 2019 12th UK-Europe-China Workshop on Millimeter Waves and Terahertz Technologies (UCMMT). IEEE, GBR, pp. 1-4. ISBN 9781728129938 Billett, D and Bland, E. C. and A. G., Burrell and Kotyk, K. and Ponomarenko, P. V. and Reimer, A. S. and Schmidt, M. T. and Shepherd, S. G. and Sterne, K. T. and Thomas, E. G. and Walach, Maria (2020) SuperDARN Radar Software Toolkit (RST) 4.4.1. Zenodo, https://zenodo.org/record/3994968#.X_XO6i10dsM. Billett, Daniel and Hosokawa, K. and Grocott, Adrian and Wild, Jim and Aruliah, A. and Ogawa, Y. and Taguchi, S. and Lester, M. (2020) Multi‐instrument Observations of Ion‐Neutral Coupling in the Dayside Cusp. Geophysical Research Letters, 47 (4). ISSN 0094-8276 Binley, Andrew and Slater, Lee (2020) Resistivity and Induced Polarization:Theory and Applications to the Near-Surface Earth. Cambridge University Press, Cambridge. 
ISBN 9781108492744 Binns, Deborah (2020) "Living On The Island But Staging the World": Can drama help children to understand how they belong in the world?:A joint exploration of values and identity by primary aged children and their teacher. PhD thesis, UNSPECIFIED. Birch, M.J. and Hargreaves, J.K. (2020) Quasi-periodic ripples in high latitude electron content, the geomagnetic field, and the solar wind. Scientific Reports, 10 (1). ISSN 2045-2322 Biren, Jonas and Harris, Andrew and Tuffen, Hugh and Gurioli, Lucia and Chevrel, Magdalena Oryaëlle and Vlastélic, Ivan and Schiavi, Federica and Benbakkar, Mhammed and Fonquernie, Claire and Calabro, Laura (2020) Chemical, Textural and Thermal Analyses of Local Interactions Between Lava Flow and a Tree – Case Study From Pāhoa, Hawai'i. Frontiers in Earth Science, 8. ISSN 2296-6463 Birney, M.E. and Roessel, J. and Hansen, K. and Rakić, T. (2020) Prologue:Language Challenges in the 21st Century. Journal of Language and Social Psychology, 39 (4). pp. 428-437. ISSN 0261-927X Bishop, Lisa and Müllner, M. and Bjurhult-Kennedy, Amalie and Lauder, Bob and Gatherer, Derek (2020) Immunological cross-reactions with paramyxovirus nucleoproteins may explain sporadic apparent ebolavirus seropositivity in European populations. Biorxiv. Bispo, D.F.A. and Batista, P.V.G. and Guimarães, D.V. and Silva, M.L.N. and Curi, N. and Quinton, J.N. (2020) Monitoring land use impacts on sediment production:A case study of the pilot catchment from the brazilian program of payment for environmental services. Revista Brasileira de Ciência do Solo, 44. pp. 1-15. ISSN 0100-0683 Biswas, Jayanta Kumar and Banerjee, Anurupa and Sarkar, Binoy and Sarkar, Dibyendu and Sarkar, Santosh Kumar and Rai, Mahendra and Vithanage, Meththika (2020) Exploration of an Extracellular Polymeric Substance from Earthworm Gut Bacterium (Bacillus licheniformis) for Bioflocculation and Heavy Metal Removal Potential. Applied Sciences, 10 (1). ISSN 2076-3417 Black, Brian (2020) In Dialogue with the Mahābhārata. Routledge, London. ISBN 9780367436001 Blackburn, Kate G. and Ashokkumar, Ashwini and Pennebaker, James W. and Brody, Nicholas and Boyd, Ryan (2020) Sounds Like a Winner, or Does It?:Exploring Football Fans' Language after Wins Versus Losses. In: Society for Personality and Social Psychology, 2020-02-272020-02-29. Bladon, Andrew and Lewis, Matthew and Bladon, Eleanor and Buckton, Sam and Corbett, Stuart and Ewing, S.R. and Hayes, Matthew and Hitchcock, Gwen and Knock, Richard and Lucas, Colin and McVeigh, Adam and Menendez Martinez, Rosa and Walker, Jonah and Fayle, Tom and Turner, Edgar (2020) How butterflies keep their cool:physical and ecological traits influence thermoregulatory ability and population trends. Journal of Animal Ecology, 89 (11). pp. 2440-2450. ISSN 0021-8790 Blagden, S. and Simpson, C. and Limmer, M. (2020) Bowel cancer screening in an English prison:a qualitative service evaluation. Public Health, 180. pp. 46-50. ISSN 0033-3506 Blagrove, M.S.C. and Caminade, C. and Diggle, P.J. and Patterson, E.I. and Sherlock, K. and Chapman, G.E. and Hesson, J. and Metelmann, S. and McCall, P.J. and Lycett, G. and Medlock, J. and Hughes, G.L. and Della Torre, A. and Baylis, M. (2020) Potential for Zika virus transmission by mosquitoes in temperate climates. Proceedings of the Royal Society B: Biological Sciences, 287 (1930). ISSN 0962-8452 Blair, Gordon (2020) A Tale of Two Cities:Reflections on Digital Technology and the Natural Environment. Patterns, 1 (5). 
ISSN 2666-3899 Blair, Ryan and Eddy, Thomas and Morrison, Nathaniel and Shonkwiler, Clayton (2020) Knots with exactly 10 sticks. Journal of Knot Theory and Its Ramifications, 29 (3). ISSN 0218-2165 Blanchy, Guillaume (2020) Geophysical image to root function. PhD thesis, UNSPECIFIED. Blanchy, Guillaume and Saneiyan, Sina and Boyd, Jimmy and McLachlan, Paul and Binley, Andrew (2020) ResIPy, an intuitive open source software for complex geoelectrical inversion/modeling. Computers and Geosciences, 137. ISSN 0098-3004 Blanchy, Guillaume and Virlet, Nicolas and Sadeghi-Tehran, Pouria and Watts, Christopher W. and Hawkesford, Malcolm J. and Whalley, William R. and Binley, Andrew (2020) Time-intensive geoelectrical monitoring under winter wheat. Near Surface Geophysics, 18 (4). pp. 413-425. ISSN 1569-4445 Blanchy, Guillaume and Watts, C.W. and Richards, J. and Bussell, J. and Huntenburg, Katharina and Sparkes, D. and Stalham, M. and Hawkesford, M. and Whalley, W.R. and Binley, Andrew (2020) Time-lapse geophysical assessment of agricultural practices on soil moisture dynamics. Vadose Zone Journal. ISSN 1539-1663 Blanchy, Guillaume and Watts, Christopher W. and Ashton, Rhys W. and Webster, Colin P. and Hawkesford, Malcolm J. and Whalley, William R. and Binley, Andrew (2020) Accounting for heterogeneity in θ-σ relationship:application to wheat phenotyping using ΕMI. Vadose Zone Journal, 19 (1). ISSN 1539-1663 Blaney, Adam (2020) Designing parametric matter:Exploring adaptive material scale self-assembly through tuneable environments. PhD thesis, UNSPECIFIED. Blaney, Adam (2020) Designing parametric matter:Exploring adaptive self-assembly through tuneable environments. PhD thesis, UNSPECIFIED. Blangiardo, M. and Boulieri, A. and Diggle, P. and Piel, F.B. and Shaddick, G. and Elliott, P. (2020) Advances in spatiotemporal models for non-communicable disease surveillance. International Journal of Epidemiology, 49 (Suppl.). i26-i37. ISSN 0300-5771 Bligh, Brett (2020) Theory disputes and the development of the technology enhanced learning research field. Studies in Technology Enhanced Learning, 1 (1). pp. 115-169. Bligh, Brett and Lee, Kyungmee (2020) Debating the status of 'theory' in technology enhanced learning research: Introduction to the Special Inaugural Issue. Studies in Technology Enhanced Learning, 1 (1). pp. 17-26. Bligh, Brett and Lee, Kyungmee (2020) Studies in Technology Enhanced Learning: A project of scholarly conversation. Studies in Technology Enhanced Learning, 1 (1). pp. 1-16. Bligh, Brett and Lorenz, Katharina (2020) Vorsprung durch Technik:Multi-display learning spaces and art-historical method. In: Culture, Technology and the Image. Intellect, Bristol, pp. 72-86. ISBN 9781789381115 Blitvic, Natasha and Fernandez, Vicente (2020) Aging a little:On the optimality of limited senescence in Escherichia coli. Journal of Theoretical Biology, 502. ISSN 0022-5193 Blitvic, Natasha and Fernandez, Vicente (2020) On a Generalized Fibonacci Recurrence. arXiv. Bloem, Esther and Forquet, Nicolas and Søiland, Astri and Binley, Andrew and French, Helen (2020) Towards understanding resistivity signals measured with time-lapse electrical resistivity during contaminated snowmelt infiltration. Near Surface Geophysics, 18 (4). pp. 399-412. ISSN 1569-4445 Bloomer, Melissa and Walshe, Catherine (2020) It's not what they were expecting:a systematic review and narrative synthesis of the role and experience of the hospital palliative care volunteer. Palliative Medicine, 34 (5). pp. 589-604. 
ISSN 0269-2163 Bloomfield, Brian and Dale, Karen (2020) Limitless? Imaginaries of cognitive enhancement and the labouring body. History of the Human Sciences, 33 (5). pp. 37-63. ISSN 0952-6951 Blower, Gordon (2020) Convexity and transport for isentropic Euler equations on the sphere. Working Paper. UNSPECIFIED. (Unpublished) Blower, Gordon and Chen, Yang (2020) On determinant expansions for Hankel operators. Concrete Operators, 7 (1). pp. 13-44. ISSN 2299-3282 Blue, Stanley and Forman, Peter and Shove, Elizabeth (2020) Flexibilities in energy supply and demand:Legacies and lessons from the past. Journal of Energy History/Revue d'Histoire de l'Énergie (5). p. 1. ISSN 2649-3055 Blue, Stanley and Forman, Peter and Shove, Elizabeth (2020) Historicising Flexibility. Journal of Energy History/Revue d'Histoire de l'Énergie (5). ISSN 2649-3055 Blue, Stanley and Shove, Elizabeth and Forman, Peter (2020) Conceptualising Flexibility:Challenging Representations of Time and Society in the Energy Sector. Time and Society, 29 (4). pp. 923-944. ISSN 0961-463X Blything, L.P. and Hardie, A. and Cain, K. (2020) Question Asking During Reading Comprehension Instruction:A Corpus Study of How Question Type Influences the Linguistic Complexity of Primary School Students' Responses. Reading Research Quarterly, 55 (3). pp. 443-472. ISSN 0034-0553 Boboc, Adela (2020) Evaluation of bacterial ligands involved in receptor-mediated phagocytosis in Tetrahymena pyriformis. Masters thesis, UNSPECIFIED. Bogolyubov, Pavel (2020) The learning company:the learning organization the British way – its origins, present, and future directions. An interview with John Burgoyne. Learning Organization, 27 (3). pp. 249-257. ISSN 0969-6474 Bohleber, P. and Casado, M. and Ashworth, K. and A. Baker, C. and Belcher, A. and Alicia Caccavo, J. and E. Jenkins, H. and Satterthwaite, E. and Spolaor, A. and Holly L. Winton, V. (2020) Successful practice in early career networks:Insights from the polar sciences. Advances in Geosciences, 53. pp. 1-14. ISSN 1680-7340 Boissinot, M. and King, H. and Adams, M. and Higgins, J. and Shaw, G. and Ward, T. A. and Steele, L. P. and Tams, D. and Morton, R. and Polson, E. and Silva, B. da and Droop, A. and Hayes, J. L. and Martin, H. and Laslo, P. and Morrison, E. and Tomlinson, D. C. and Wurdak, H. and Bond, J. and Lawler, S. E. and Short, S. C. (2020) Profiling cytotoxic microRNAs in pediatric and adult glioblastoma cells by high-content screening, identification, and validation of miR-1300. Oncogene, 39 (30). pp. 5292-5306. ISSN 0950-9232 Bolivar, H. and Jaimes Parada, H.D. and Roa, O. and Velandia, J. (2020) Multi-criteria Decision Making Model for Vulnerabilities Assessment in Cloud Computing regarding Common Vulnerability Scoring System. In: 2019 Congreso Internacional de Innovacion y Tendencias en Ingenieria, CONIITI 2019 - Conference Proceedings. IEEE. ISBN 9781728147475 Bolt, George and Muncey, Harriet and Lunagomez Coria, Simon and Nemeth, Christopher (2020) Telling the Researcher STOR-i With Data Science:Elsevier data scientists are working with Lancaster University researchers to advance our understanding of researcher behaviors. UNSPECIFIED. Bonehill, J. and von Benzon, N. and Shaw, J. (2020) 'The shops were only made for people who could walk':impairment, barriers and autonomy in the mobility of adults with Cerebral Palsy in urban England. Mobilities, 15 (3). pp. 341-361. 
ISSN 1745-0101 Boniface, S and Lewer, D and Hatch, SL and Goodwin, L (2020) Associations between interrelated dimensions of socio-economic status, higher risk drinking and mental health in South East London:A cross-sectional study. PLoS ONE, 15 (2). ISSN 1932-6203 Boochani, Behrouz and Flores, Diana and Sibanda, Agatha and Mujakachi, Victor and Temboman, and Fafa, and Huzzard, Rosie and Cape-Davenhill, Lauren and Sirriyeh, Ala and Lewis, Hannah and Lonergan, Gwyneth and Conlon, Deirdre and Bennett, Bruce and Tofighian, Omid (2020) Transnational Communities for Dismantling Detention:From Manus Island to the UK. Community Psychology in Global Perspective, 6 (1). pp. 108-128. ISSN 2421-2113 Boorman, James and Prince, Daniel and Green, Benjamin (2020) Transformers:Intrusion Detection Data In Disguise. In: 3rd International Workshop on Attacks and Defences for Internet-of-Things, 2020-09-192020-09-19, Online. Bor, Martin (2020) Towards the efficient use of LoRa for wireless sensor networks. PhD thesis, UNSPECIFIED. Borer, Elizabeth T. and Harpole, W. Stanley and Adler, Peter B. and Arnillas, Carlos A. and Bugalho, M.N. and Cadotte, Marc W. and Caldeira, Maria and Campana, S. and Dickman, Chris R. and Dickson, T.L. and Donohue, Ian and Eskelinen, A. and Firn, Jennifer and Graf, P. and Gruner, Daniel S. and Heckman, Robert W. and Koltz, A.M. and Komatsu, K.J. and Lannes, L.S. and MacDougall, Andrew S. and Martina, J.P. and Moore, J.L. and Mortensen, Brent and Ochoa-Hueso, Raul and Olde Venterink, H. and Power, S.A. and Price, J. and Risch, Anita C. and Sankaran, Mahesh and Schütz, Martin and Sitters, J. and Stevens, Carly and Virtanen, R and Wilfahrt, Peter and Seabloom, Eric W. (2020) Nutrients cause grassland biomass to outpace herbivory. Nature Communications, 11. ISSN 2041-1723 Boriwan, Pornpimon and Ehrgott, Matthias and Kuroiwa, Daishi and Petrot, Narin (2020) The Lexicographic Tolerable Robustness Concept for Uncertain Multi-Objective Optimization Problems:A Study on Water Resources Management. Sustainability, 12 (18). ISSN 2071-1050 Borragan, Maria and Casaponsa, Aina and Antón, Eneko and Duñabeitia, Jon Andoni (2020) Incidental changes in orthographic processing in the native language as a function of learning a new language late in life. Language, Cognition and Neuroscience. ISSN 2327-3798 Boström-Einarsson, Lisa and Babcock, Russell C. and Bayraktarov, Elisa and Ceccarelli, Daniela and Cook, Nathan and Ferse, Sebastian C. A. and Hancock, Boze and Harrison, Peter and Hein, Margaux and Shaver, Elizabeth and Smith, Adam and Suggett, David and Stewart-Sinclair, Phoebe J. and Vardi, Tali and McLeod, Ian M. (2020) Coral restoration – A systematic review of current methods, successes, failures and future directions. PLoS ONE, 15 (1). ISSN 1932-6203 Bosu, William (2020) Prevalence and determinants of hypertension in older adults in Ghana:A systematic review and analysis of a longitudinal study. PhD thesis, UNSPECIFIED. Bouacida, Elias and Foucart, Renaud (2020) The acceptability of lotteries in allocation problems:a choice-based approach. Working Paper. Lancaster University, Department of Economics, Lancaster. Bouacida, Elias and Martin, Daniel (2020) Predictive Power in Behavioral Welfare Economics. Working Paper. Lancaster University, Department of Economics, Lancaster. Bouamor, Houda and Zaghouani, Wajdi (2020) Proceedings of the Fifth Arabic Natural Language Processing Workshop. Association for Computational Linguistics, Barcelona, Spain. ISBN 9781952148385 Bouguettaya, A. and Lynott, D. 
and Carter, A. and Zerhouni, O. and Meyer, S. and Ladegaard, I. and Gardner, J. and O'Brien, K.S. (2020) The relationship between gambling advertising and gambling attitudes, intentions and behaviours:a critical and meta-analytic review. Current Opinion in Behavioral Sciences, 31. pp. 89-101. ISSN 2352-1546 Bourikas, Leonidas and Teli, Despoina and Amin, Rucha and James, Patrick A.B. and Bahaj, AbuBakr S. (2020) Facilitating responsive interaction between occupants and building systems through dynamic post-occupancy evaluation. IOP Conf. Series: Earth and Environmental Science, 410 (1). Bourtsoulatze, Eirina and Chadha, Aaron and Fadeev, Ilya and Giotsas, Vasileios and Andreopoulos, Ioannis (2020) Deep Video Precoding. IEEE Transactions on Circuits and Systems for Video Technology, 30 (12). 4913 - 4928. ISSN 1051-8215 Bowen, Eleanor (2020) Elucidating the mechanism of the palladium-catalysed decarboxylative asymmetric allylic alkylation of alpha-sulfonyl anions. Masters thesis, UNSPECIFIED. Bowen, James and Cheneler, David (2020) Closed-Form Expressions for Contact Angle Hysteresis:Capillary Bridges between Parallel Platens. Colloids and Interfaces, 4 (1). ISSN 2504-5377 Bowen, James and Cheneler, David and Vicary, James (2020) Manufacture and calibration of high stiffness AFM cantilevers. In: Virtual RMS AFM & SPM Meeting 2020, 2020-11-032020-11-04, Remote. Bowes, David and Petric, J. and Hall, T. (2020) BugVis:Commit slicing for fault visualisation. In: ICPC '20. ACM, New York, pp. 436-440. ISBN 9781450379588 Bowie-DaBreo, Dionne and Iles-Smith, Heather and Sunram-Lea, Sandra-Ilona and Sas, Corina (2020) Transdisciplinary ethical principles and standards for mobile mental health. In: Mental Wellbeing: Future Agenda Drawing from Design, HCI, and Big Data, 2020-07-062020-07-07. Bowie-DaBreo, Dionne and Sas, Corina and Sunram-Lea, Sandra-Ilona and Iles-Smith, Heather (2020) A call for responsible innovation in mobile mental health:findings from a content analysis and ethical review of the depression app marketplace. In: 25th annual international CyberPsychology, CyberTherapy & Social Networking Conference, 2020-06-05. Bowie-DaBreo, Dionne and Sunram-Lea, Sandra-Ilona and Sas, Corina and Iles-Smith, Heather (2020) Evaluation of depression app store treatment descriptions and alignment with clinical guidance:Systematic search and content analysis. JMIR Formative Research, 4 (11). ISSN 2561-326X Bown, Chad and Erbahar, Aksel and Zanardi, Maurizio (2020) Global Value Chains and the Removal of Trade Protection. Discussion Paper. Centre for Economic Policy Research. Boyd, Ryan and Blackburn, Kate and Pennebaker, James W. (2020) The Narrative Arc:Revealing Core Narrative Structures Through Text Analysis. Science Advances, 6 (32). ISSN 2375-2548 Boyd, Ryan and Pasca, Paola and Lanning, Kevin (2020) The Personality Panorama:Conceptualizing Personality Through Big Behavioural Data. European Journal of Personality, 34 (5). pp. 599-612. ISSN 0890-2070 Boyd, Taylor (2020) Harnessing spatial dispersion in wire media to control the shape of electromagnetic fields. PhD thesis, UNSPECIFIED. Boyden, J. and Porter, C. and Zharkevich, I. (2020) Balancing school and work with new opportunities:changes in children's gendered time use in Ethiopia (2006–2013). Children's Geographies. ISSN 1473-3285 Boyko, Christopher (2020) Introduction. In: Designing future cities for wellbeing. Routledge, London. 
ISBN 9781138600782 Boyko, Christopher and Cooper, Rachel and Coulton, Claire and Hale, James (2020) Health, Wellbeing and Urban Design. In: Designing Future Cities for Wellbeing. Routledge, London, pp. 158-170. ISBN 9781138600782 Boyko, Christopher and Dunn, Nick (2020) Designing Future Cities for Wellbeing. Routledge, London. ISBN 9781138600775 Boyko, Christopher and Pollastri, Serena and Coulton, Claire and Dunn, Nick and Cooper, Rachel (2020) Sharing in smart cities:What are we missing out on? In: Routledge Companion to Smart Cities. Routledge International Handbook (1st). Routledge, Oxon / New York, pp. 241-253. ISBN 9781138036673 Bozorgchenani, Arash and Disabato, Simone and Tarchi, Daniele and Roveri, Manuel (2020) An Energy Harvesting Solution for Computation Offloading in Fog Computing Networks. Computer Communications, 160. pp. 577-587. ISSN 0140-3664 Bozorgchenani, Arash and Tarchi, Daniele and Emanuele Corazza, Giovanni (2020) Computation Offloading Decision Bounds in SWIPT-Based Fog Networks. In: 2019 IEEE Global Communications Conference (GLOBECOM). IEEE Publishing. ISBN 9781728109633 Bradbury, Matthew and Adegoke, Elijah and Kampert, Erik and Higgins, Matthew and Watson, Tim and Jennings, Paul and Ford, Colin and Buesnel, Guy and Hickling, Steve (2020) PNT Cyber Resilience: a Lab2Live Observer Based Approach, Report 2: Specifications for Cyber Testing Facilities. Working Paper. UNSPECIFIED, Coventry, UK. Bradbury, Matthew and Maple, Carsten and Atmaca, Uger Ilker and Cannizzaro, Sara (2020) Identifying Attack Surfaces in the Evolving Space Industry Using Reference Architectures. In: IEEE Aerospace Conference. IEEE, USA. Bradbury, Matthew and Taylor, Phillip and Atmaca, Ugur Ilker and Maple, Carsten and Griffiths, Nathan (2020) Privacy Challenges with Protecting Live Vehicular Location Context. IEEE Access, 8. pp. 207465-207484. ISSN 2169-3536 Bradley, A. (2020) Lacan's war games:Cybernetics, sovereignty and war in seminar II. Filozofski Vestnik, 40 (1). pp. 89-107. Bradley, Arthur (2020) Human Interest:Usury from Luther to Bentham. Theory, Culture and Society. ISSN 0263-2764 (In Press) Bradley, Arthur Humphrey (2020) Terrors of Theory:Critical Theory of Terror from Kojève to Žižek. Telos, (Sprin (190). ISSN 0090-6514 Bradley, Steve and Navarro Paniagua, Maria and Migali, Giuseppe (2020) Spatial variations and clustering in the rates of youth unemployment and NEET:A comparative analysis of Italy, Spain and the UK. Journal of Regional Science, 60 (5). pp. 1074-1107. ISSN 1467-9787 Bradley, Steven and Crouchley, Robert (2020) The effects of test scores and truancy on youth unemployment and inactivity:A simultaneous equations approach. Empirical Economics, 59. 1799–1831. ISSN 0377-7332 Bradshaw, Andrew and Dunleavy, Lesley and Walshe, Catherine and Preston, Nancy and Cripps, Rachel and Hocaoglu, Mevhibe and Bajwah, Sabrina and Oluyase, Adejoke and Maddocks, Matthew and Sleeman, Katherine and Higginson, Irene and Fraser, Lorna and Murtagh, Fliss (2020) Understanding and addressing challenges for Advance Care Planning in the COVID-19 pandemic:An analysis of the UK CovPall survey data from specialist palliative care services. medRxiv. Braithwaite, J. J. and Watson, Derrick and Dewe, Hayley (2020) The body-threat assessment battery (BTAB):A new instrument for the quantification of threat-related autonomic affective responses induced via dynamic movie clips. International Journal of Psychophysiology, 155. pp. 16-31. ISSN 0167-8760 Bramwell, E.E. 
(2020) She Used to Doctor Us up Herself:Patent Medicines, Mothers, and Expertise in Early Twentieth-Century Britain. Twentieth Century British History, 31 (4). pp. 555-578. ISSN 0955-2359 Bramwell, Erin (2020) Rethinking British patent medicine culture in the first half of the twentieth century. PhD thesis, UNSPECIFIED. Brandt, Patric and Yesuf, Gabriel and Herold, Martin and Rufino, Mariana (2020) Intensification of dairy production can increase the GHG mitigation potential of the land use sector in East Africa. Global Change Biology, 26 (2). pp. 568-585. ISSN 1354-1013 Brandt, Silke (2020) Grammar. In: The Encyclopedia of Child and Adolescent Development. Wiley. ISBN 9781119161899 Brandt, Silke (2020) Social cognitive and later language acquisition. In: Current Perspectives in Child Language Acquisition. Trends in Language Acquisition Research . John Benjamins, pp. 155-170. ISBN 9789027207074 Brauer, Jan C. and Tsokkou, Demetra and Sanchez, Sandy and Droseros, Nikolaos and Roose, Bart and Mosconi, Edoardo and Hua, Xiao and Stolterfoht, Martin and Neher, Dieter and Steiner, Ullrich and De Angelis, Filippo and Abate, Antonio and Banerji, Natalie (2020) Comparing the excited-state properties of a mixed-cation–mixed-halide perovskite to methylammonium lead iodide. Journal of Chemical Physics, 152 (10). ISSN 0021-9606 Braun, Rebecca (2020) Celebrity:On the Different Publics of World Authorship. In: World Authorship. Twenty-First-Century Approaches to Literature . Oxford University Press, Oxford, pp. 30-44. ISBN 9780198819653 Braun, Rebecca (2020) Introduction:Twenty-First-Century Approaches to World Authorship. In: World Authorship. Twenty-First-Century Approaches to Literature . Oxford University Press, Oxford, pp. 1-16. ISBN 9780198819653 Braun, Rebecca (2020) Networks and World Literature:The Practice of Putting German Authors in their Place. In: Transnational German Studies. Transnational Modern Languages . Liverpool University Press, Liverpool, pp. 97-113. ISBN 9781789621426 Braun, Rebecca and Curry, Andrew and Gatherer, Derek and Kyne, Karena and Spiers, Emily (2020) Three Creative Futures Methods for Imagining Life in a Post-Antibiotic World:Report on a Speculative Cross-Sector and Cross-Campus Conversation. Working Paper. Institute for Social Futures, Lancaster University, Lancaster. Bremer, Christina (2020) Not (B)interested? Using Persuasive Technology to Promote Sustainable Household Recycling Behavior. In: Persuasive Technology. Designing for Future Change. Lecture Notes in Computer Science . Springer International Publishing, Cham, pp. 195-207. ISBN 9783030457112 Bremer, Christina (2020) When Mental Models Grow (C)old:A Cognitive Perspective on Home Heating Automation. IEEE Pervasive Computing, 19 (4). pp. 48-52. ISSN 1536-1268 Brennan, Hope and Westbrook, Jenna and Parry, Sarah Louise (2020) "An understanding, a way of life":An exploration of learning disability professionals' experiences of compassion. British Journal of Learning Disabilities, 48 (4). pp. 348-355. ISSN 1354-4187 Brennen, Bonnie and Gutsche Jr, Robert (2020) Journalism research in practice:strategies, innovation, and approaches to change. In: Journalism Research in Practice. Journalism Studies . Routledge, London, pp. 1-4. ISBN 9780367469665 Brewer, Hannah R and Hirst, Yasemin and Sundar, Sudha and Chadeau-Hyam, Marc and Flanagan, James M (2020) Cancer Loyalty Card Study (CLOCS):protocol for an observational case-control study focusing on the patient interval in ovarian cancer diagnosis. BMJ Open, 10 (9). 
e037459. ISSN 2044-6055 Briganti, Arianna (2020) My living-theory of International Development. PhD thesis, UNSPECIFIED. Brigham, Martin and Kavanagh, Donncha (2020) The Economic Theology of Quakerism. In: The Routledge Handbook of Economic Theology. Routledge, London. ISBN 9781138288850 Bristow, Greg C. and Thomson, David M and Openshaw, Rebecca L and Mitchell, Emma J and Pratt, Judith A and Dawson, Neil and Morris, Brian J (2020) 16p11 Duplication Disrupts Hippocampal-Orbitofrontal-Amygdala Connectivity, Revealing a Neural Circuit Endophenotype for Schizophrenia. Cell Reports, 31 (3). ISSN 2211-1247 Britten, T.K. and Kemmitt, P.D. and Halcovitch, N.R. and Coote, S.C. (2020) 1,2-Dihydropyridazines as Versatile Synthetic Intermediates. SYNLETT, 31 (5). pp. 459-462. ISSN 0936-5214 Britten, Thomas K. and McLaughlin, Mark G. (2020) Brønsted Acid Catalyzed Peterson Olefinations. The Journal of Organic Chemistry, 85 (2). pp. 301-305. Broadbent, A.A.D. and Firn, J. and McGree, J.M. and Borer, E.T. and Buckley, Y.M. and Harpole, W.S. and Komatsu, K.J. and MacDougall, A.S. and Orwin, K.H. and Ostle, N.J. and Seabloom, E.W. and Bakker, J.D. and Biederman, L. and Caldeira, M.C. and Eisenhauer, N. and Hagenah, N. and Hautier, Y. and Moore, J.L. and Nogueira, C. and Peri, P.L. and Risch, A.C. and Roscher, C. and Schütz, M. and Stevens, C.J. (2020) Dominant native and non-native graminoids differ in key leaf traits irrespective of nutrient availability. Global Ecology and Biogeography, 29 (7). pp. 1126-1138. ISSN 1466-822X Broadhurst, Karen (2020) Collaborating for the public good:working across boundaries to catalyse and inform preventive services for birth mothers who have lost children from their care. Developing Practice (54). pp. 62-78. ISSN 1445-6818 Broadhurst, Karen and Mason, Claire (2020) Child removal as the gateway to further adversity:birth mother accounts of the immediate and enduring collateral consequences of child removal. Qualitative Social Work, 19 (1). pp. 15-37. ISSN 1473-3250 Bromley, M.A. and Boxall, C. and Taylor, R. and Sarsfield, M. (2020) A new photochemical reactor for the rapid reduction of U(VI) in the development of mixed metal oxide fuel fabrication processes:14th International Nuclear Fuel Cycle Conference, GLOBAL 2019 and Light Water Reactor Fuel Performance Conference, TOP FUEL 2019. In: 14th International Nuclear Fuel Cycle Conference, GLOBAL 2019 and Light Water Reactor Fuel Performance Conference, TOP FUEL 2019, 2019-09-222021-06-27, The Westin. Brookes, Gavin (2020) Book Review: Laura L Paterson and Ian N Gregory, Representations of Poverty and Place: Using Geographical Text Analysis to Understand Discourse. Discourse and Communication, 14 (2). pp. 222-225. ISSN 1750-4813 Brookes, Gavin (2020) Corpus linguistics in illness and healthcare contexts:a case study of diabulimia support groups. In: Applying linguistics in illness and healthcare contexts. Contemporary Studies in Linguistics . Bloomsbury, London, pp. 44-72. ISBN 9781350057654 Brookes, Gavin and McEnery, Anthony (2020) Corpus linguistics. In: The Routledge Handbook of English Language and Digital Humanities. Routledge. Brookes, Gavin and McEnery, Anthony (2020) Correlation, collocation and cohesion:A corpus-based critical analysis of violent jihadist discourse. Discourse and Society, 31 (4). pp. 351-373. ISSN 0957-9265 Brookes, Gavin and McEnery, Tony (2020) Corpus linguistics. In: The Routledge Handbook of English Language and Digital Humanities. Routledge, pp. 378-404. 
ISBN 9781138901766 Brookes, Gavin and Wright, David (2020) From burden to threat:A diachronic study of language ideology and migrant representation in the British press. In: Corpora and the Changing Society. Studies in Corpus Linguistics . John Benjamins, Amsterdam, 113–140. Brooks, Eleanor and Geyer, Robert (2020) The development of EU health policy and the Covid-19 pandemic:trends and implications. Journal of European Integration, 42 (8). pp. 1057-1076. ISSN 0703-6337 Brown, Amy (2020) Impact of obstructive sleep apnoea and experiences of using positive airway pressure. PhD thesis, UNSPECIFIED. Brown, C.E. and Wilcockson, T.D.W. and Lunn, J. (2020) Does sleep affect alcohol-related attention bias? Journal of Substance Use, 25 (5). pp. 515-518. ISSN 1465-9891 Brown, Heather (2020) Understanding the role of policy on inequalities in the intergenerational correlation in health and wages:Evidence from the UK from 1991–2017. PLoS ONE, 15 (6). ISSN 1932-6203 Brown, J.L. and Delaney, C. and Short, B. and Butcher, M.C. and McKloud, E. and Williams, C. and Kean, R. and Ramage, G. (2020) Candida auris Phenotypic Heterogeneity Determines Pathogenicity In Vitro. mSphere, 5 (3). ISSN 2379-5042 Brown, K. and Vincent, T. and Castanon, E.G. and Rus, F.S. and Melios, C. and Kazakova, O. and Giusca, C.E. (2020) Contactless probing of graphene charge density variation in a controlled humidity environment. Carbon, 163. pp. 408-416. ISSN 0008-6223 Brown, Michael (2020) Wounds and Wonder:Emotion, Imagination and War in the Cultures of Romantic Surgery. Journal for Eighteenth-Century Studies, 43 (2). pp. 239-259. ISSN 1754-0194 Brown, Olivia (2020) Teamwork in extreme environments. PhD thesis, UNSPECIFIED. Brown, Olivia and Power, Nicola and Conchie, Stacey (2020) Immersive simulations with extreme teams. Organizational Psychology Review, 10 (3-4). pp. 115-135. Brown, Rosemary and Welsh, Paul and Logue, Jennifer (2020) Systematic review of clinical guidelines for lipid lowering in the secondary prevention of cardiovascular disease events. BMJ Open Heart, 7 (2). ISSN 2053-3624 Brown, Stephen and Patterson, Anthony (2020) Dante Leave Homer Without It:On Epics, Umbras and Authorpreneurs. In: Handbook of Entrepreneurship and Marketing. Edward Elgar Publishing, Cheltenham, pp. 17-33. ISBN 9781785364563 Browning, J. and Tuffen, H. and James, M.R. and Owen, J. and Castro, J.M. and Halliwell, S. and Wehbe, K. (2020) Post-fragmentation vesiculation timescales in hydrous rhyolitic bombs from Chaitén volcano. Journal of South American Earth Sciences, 104. Brož, P. and Krýza, O. and Wilson, L. and Conway, S.J. and Hauber, E. and Mazzini, A. and Raack, J. and Balme, M.R. and Sylvest, M.E. and Patel, M.R. (2020) Experimental evidence for lava-like mud flows under Martian surface conditions. Nature Geoscience, 13 (6). pp. 403-407. ISSN 1752-0894 Brunfaut, Tineke and Harding, Luke (2020) International language proficiency standards in the local context:Interpreting the CEFR in standard setting for exam reform in Luxembourg. Assessment in Education: Principles, Policy and Practice, 27 (2). pp. 215-231. ISSN 0969-594X Bryant, D.J. and Dixon, W.J. and Hopkins, J.R. and Dunmore, R.E. and Pereira, K.L. and Shaw, M. and Squires, F.A. and Bannan, T.J. and Mehra, A. and Worrall, S.D. and Bacak, A. and Coe, H. and Percival, C.J. and Whalley, L.K. and Heard, D.E. and Slater, E. and Ouyang, B. and Cui, T. and Surratt, J.D. and Liu, D. and Shi, Z. and Harrison, R. and Sun, Y. and Xu, W. and Lewis, A.C. and Lee, J.D. and Rickard, A.R. 
and Hamilton, J.F. (2020) Strong anthropogenic control of secondary organic aerosol formation from isoprene in Beijing. Atmospheric Chemistry and Physics, 20 (12). pp. 7531-7552. ISSN 1680-7316 Buchanan, Camilla (2020) What is strategic design?:An examination of new design activity in the public and civic sectors. PhD thesis, UNSPECIFIED. Buckeridge, Kate and La Rosa, Alfio Fabio and Mason, Kelly and Whitaker, Jeanette and McNamara, Niall and Grant, Helen and Ostle, Nick (2020) Sticky Dead Microbes:rapid abiotic retention of microbial necromass in soil. Soil Biology and Biochemistry, 149. ISSN 0038-0717 Buckley, David and Black, Nicola and Castanon, Eli and Melios, Christos and Hardman, Melanie and Kazakova, Olga (2020) Frontiers of graphene and 2D material-based gas sensors for environmental monitoring. 2D Materials, 7 (3). ISSN 2053-1583 Budd, Richard (2020) Looking for love in the student experience. In: Post-critical perspectives on higher education. Debating Higher Education . Springer, pp. 111-131. ISBN 9783030450182 Budd, Richard (2020) Undergraduate Degree. In: SAGE Encyclopaedia of Higher Education. Sage, London. ISBN 9781473942912 Bullivant, Andrea (2020) From Development Education to Global Learning:Exploring Conceptualisations of Theory and Practice Amongst Practitioners in DECs in England. PhD thesis, UNSPECIFIED. Bullock, Anthony J. and Garcia, Marcela and Shepherd, Joanna and Rehman, Ihtesham and MacNeil, Sheila (2020) Bacteria induced pH changes in tissue-engineered human skin detected non-invasively using Raman confocal spectroscopy. APPLIED SPECTROSCOPY REVIEWS, 55 (2). pp. 158-171. ISSN 0570-4928 Bulman, James and Garraghan, Peter (2020) A cloud gaming framework for dynamic graphical rendering towards achieving distributed game engines. In: The 12th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud '20). UNSPECIFIED. (In Press) Bunday, Richard (2020) If it looks like religion, smells like religion and tastes like religion, is it religion?:A study into why people bring their infant child to baptism when they do not attend Church. PhD thesis, UNSPECIFIED. Burbidge, Chloe and Keenan, Joseph and Parry, Sarah (2020) "I've made that little bit of difference to this child":Therapeutic Parent's Experiences of Trials and Triumphs in Therapeutic Children's Homes. Journal of Workplace Behavioral Health: employee assistance practice and research, 35 (4). pp. 256-278. ISSN 1555-5240 Burch, James (2020) Helping student teachers to see into practice:The view from a teacher-education classroom. PhD thesis, UNSPECIFIED. Burke, Thomas and Whyatt, Duncan and Rowland, Clare S. and Blackburn, Alan and Abbatt, Jon (2020) The influence of land cover data on farm-scale valuations of natural capital. Ecosystem Services, 42. ISSN 2212-0416 Burnett, Joseph W.H. and Howe, Russell F. and Wang, Xiaodong (2020) Cofactor NAD(P)H regeneration:how selective are the reactions? Trends in Chemistry, 2 (6). pp. 488-492. ISSN 2589-7209 Burnett, T. and Mozgunov, P. and Pallmann, P. and Villar, S.S. and Wheeler, G.M. and Jaki, T. (2020) Adding flexibility to clinical trial designs:an example-based guide to the practical use of adaptive designs. BMC Medicine, 18 (1). ISSN 1741-7015 Burrell, Robert and Zulke, Alana and Keil, Peter and Hoster, Harry (2020) Communication⇔identifying and managing reversible capacity losses that falsify cycle ageing tests of lithium-ion cells. Journal of The Electrochemical Society, 167 (13). ISSN 0013-4651 Bursch, K. and Toledo-Ortiz, G. and Pireyre, M. and Lohr, M. 
and Braatz, C. and Johansson, H. (2020) Identification of BBX proteins as rate-limiting cofactors of HY5. Nature Plants, 6. pp. 921-928. ISSN 2055-0278 Bush, Alex and Monk, Wendy A. and Compson, Zacchaeus G. and Peters, Daniel L. and Porter, Teresita M. and Shokralla, Shadi and Wright, Michael T. G. and Hajibabaei, Mehrdad and Baird, Donald J. (2020) DNA metabarcoding reveals metacommunity dynamics in a threatened boreal wetland wilderness. Proceedings of the National Academy of Sciences of the United States of America, 117 (15). pp. 8539-8545. ISSN 0027-8424 Bushell, Sally Caroline (2020) Introduction + Chapter 1: Wordsworth's "Preface": A Manifesto for British Romanticism. In: The Cambridge Companion to Lyrical Ballads. Cambridge Companions to Literature . Cambridge University Press, Cambridge. ISBN 9781108402835 Bushell, Sally Caroline (2020) Reading and Mapping Fiction:Spatialising the Literary Text. Cambridge University Press, Cambridge. ISBN 9781108487450 Butler, Carly and Filipska, Gudrun (2020) 'Carly Butler and Gudrun Filipska chart course from Ucluelet to the U.K. in newly digitized work':Teghan Beaudette · CBC Arts. UNSPECIFIED. Butler, Holly and Martin, Frank and Roberts, Mike and Adams, Steven and McAinsh, Martin (2020) Observation of nutrient uptake at the adaxial surface of leaves of tomato (Solanum lycopersicum) using Raman spectroscopy. Analytical Letters, 53 (4). pp. 536-562. ISSN 0003-2719 Butterworth, Christian (2020) Data-centric policies and practice:implications for publicly funded arts and cultural organisations in England. PhD thesis, UNSPECIFIED. Bux, Allah and Gu, Xiaowei and Angelov, Plamen and Habib, Zulfiqar (2020) Human action recognition using deep rule-based classifier. Multimedia Tools and Applications, 79 (41-42). pp. 30653-30667. ISSN 1380-7501 Bygate, M. (2020) Some directions for the possible survival of TBLT as a real world project. Language Teaching, 53 (3). pp. 275-288. ISSN 0261-4448 Bylund, Emanuel and Gygax, Pascal and Samuel, Steven and Athanasopoulos, Panos (2020) Back to the future?:The role of temporal focus for mapping time onto space. Quarterly journal of experimental psychology (2006), 73 (2). pp. 174-182. ISSN 1747-0218 Byrne, Ruth (2020) 'Pauper aliens' and 'political refugees':A corpus linguistic approach to the language of migration in nineteenth-century newspapers. PhD thesis, UNSPECIFIED. Cabrera Garcia, M. and Dales, H. G. and Rodríguez-Palacios, A. (2020) Maximal left ideals in Banach algebras. Bulletin of the London Mathematical Society, 52 (1). pp. 1-15. ISSN 0024-6093 Caddy, Sarah L and Moore, Anne and Bamford, Connor and Hunter, David and Gatherer, Derek and West, Robert and Michie, Susan (2020) A million deaths from coronavirus: seven experts consider key questions. The Conversation. Cagle, Alexander and Armstrong, Alona and Exley, Giles and Grodsky, Steven and Macknick, Jordan and Sherwin, John and Hernandez, Rebecca R. (2020) The Land Sparing, Water Surface Use Efficiency, and Water Surface Transformation of Floating Photovoltaic Solar Energy Installations. Sustainability, 12 (19). ISSN 2071-1050 Cai, H. and Ye, Junjie and Wang, Y. and Saafi, M. and Huang, B. and Yang, D. and Ye, J. (2020) An effective microscale approach for determining the anisotropy of polymer composites reinforced with randomly distributed short fibers. Composite Structures, 240. ISSN 0263-8223 Cai, J. and Syratchev, I. (2020) Modeling and Technical Design Study of Two-Stage Multibeam Klystron for CLIC. IEEE Transactions on Electron Devices, 67 (8). pp. 
3362-3368. ISSN 0018-9383 Cai, Jinchi and Syratchev, Igor and Burt, Graeme (2020) Accurate Modelling of Monotron Oscillations in Small and Large Signal regimes. IEEE Transactions on Electron Devices. ISSN 0018-9383 Cai, Jinchi and Syratchev, Igor and Burt, Graeme (2020) Design Study of a High-Power Ka-Band High-Order-Mode Multibeam Klystron. IEEE Transactions on Electron Devices, 67 (12). 5736 - 5742. ISSN 0018-9383 Caldecott, Elen (2020) The Short Knife. Andersen Press, London. ISBN 9781783449798 Caldecott, Elen (2020) The Writer's Mixing Desk. Writing in Education, 81. ISSN 1361-8539 Calderón-Garcidueñas, L. and González-Maciel, A. and Reynoso-Robles, R. and Hammond, J. and Kulesza, R. and Lachmann, I. and Torres-Jardón, R. and Mukherjee, P.S. and Maher, B.A. (2020) Quadruple abnormal protein aggregates in brainstem pathology and exogenous metal-rich magnetic nanoparticles (and engineered Ti-rich nanorods). The substantia nigrae is a very early target in young urbanites and the gastrointestinal tract a key brainstem portal. Environmental Research, 191. ISSN 0013-9351 Calderón-Garcidueñas, L. and Herrera-Soto, A. and Jury, N. and Maher, B.A. and González-Maciel, A. and Reynoso-Robles, R. and Ruiz-Rudolph, P. and van Zundert, B. and Varela-Nallar, L. (2020) Reduced repressive epigenetic marks, increased DNA damage and Alzheimer's disease hallmarks in the brain of humans and mice exposed to particulate urban air pollution. Environmental Research, 183. ISSN 0013-9351 Caldwell, Elizabeth (2020) Images, Imaginaries And Imagination In Communicating Dementia Through Narrative Picturebooks For Children. In: EASST/4S Conference 2020, 2020-08-182020-08-21, Virtual. (Unpublished) Calhau, Joao (2020) The co-evolution of star-forming galaxies and their supermassive black holes across cosmic time. PhD thesis, UNSPECIFIED. Calhau, João and Sobral, David and Santos, Sérgio and Matthee, Jorryt and Paulino-Afonso, Ana and Stroe, Andra and Simmons, Brooke and Barlow-Hall, Cassandra (2020) The X-ray and radio activity of typical and luminous Lya emitters from z~2 to z~6: Evidence for a diverse, evolving population. Monthly Notices of the Royal Astronomical Society, 493 (3). pp. 3341-3362. ISSN 0035-8711 Calvert, L. and Keady, J. and Khetani, B. and Riley, C. and Swarbrick, C. (2020) '… This is my home and my neighbourhood with my very good and not so good memories':The story of autobiographical place-making and a recent life with dementia. Dementia, 19 (1). pp. 111-128. ISSN 1471-3012 Calvo, Mirian and Cruickshank, Leon and Galabo, Rosendy and Perez Ojeda, David (2020) Does Participatory Architecture Work? In: Fourteenth International Conference on Design Principles & Practices, 2020-03-162020-03-18, Pratt Institute, Brooklyn Campus, Brooklyn. Calvo, Mirian and Sclater, Madeleine (2020) Co-design for social Innovation and organisational change:Developing horizontal relationships in a social enterprise through walking. International Journal of Design for Social Change, Sustainable Innovation and Entrepreneurship, 1 (1). pp. 78-98. ISSN 2184-6995 Cammelli, F. and Garrett, R.D. and Barlow, J. and Parry, L. (2020) Fire risk perpetuates poverty and fire use among Amazonian smallholders. Global Environmental Change, 63. ISSN 0959-3780 Campbell, D. and Balis, G. and Aryana, B. (2020) Bridging design thinking and entrecomp for entrepreneurship workshops:A learning experience. In: The Sixth International Conference on Design Creativity 26-28 August 2020, University of Oulu, Finland Proceedings. 
Design Society, FIN, pp. 247-254. ISBN 9781912254118 Campbell, David (2020) The Consequences of Defying the System of Natural Liberty:The Absurdity of the Misrepresentation Act 1967. In: Contract Law and the Legislature. Hart, Oxford, pp. 127-46. ISBN 978-1509926107 Campbell, David (2020) Flights of Fancy. Law Quarterly Review, 136. pp. 528-34. ISSN 0023-933X Campbell, David (2020) Our Obligations to the Common Law:Review of A Burrows, Remedies for Torts, Breach of Contract and Equitable Wrongs. Lloyd's Maritime and Commercial Law Quarterly. pp. 149-70. ISSN 0306-2945 Campbell, David (2020) Review of P Benson, Justice in Transactions. Law Quarterly Review, 136. pp. 682-84. ISSN 0023-933X Campbell, David and Cullen, Richard (2020) Understanding Authoritarian Legality in Hong Kong:What Can Dicey and Rawls Tell Us? In: Authoritarian Legality in Asia. Cambridge University Press, Cambridge, pp. 143-68. ISBN 978-1108496681 Campbell, David and Halson, Roger (2020) By Their Fruits Shall Ye Know Them. The Cambridge Law Journal, 79 (3). pp. 405-07. ISSN 0008-1973 Campbell, S.J. and Darling, E.S. and Pardede, S. and Ahmadia, G. and Mangubhai, S. and [Unknown], Amkieltiela and [No Value], Estradivari and Maire, E. (2020) Fishing restrictions and remoteness deliver conservation outcomes for Indonesia's coral reef fisheries. Conservation Letters, 13 (2). ISSN 1755-263X Campobasso, Sergio and Cavazzini, Anna and Minisci, Edmondo (2020) Rapid Estimate of Wind Turbine Energy Loss due to Blade Leading Edge Delamination Using Artificial Neural Networks. Journal of Turbomachinery, 142 (7). ISSN 0889-504X Campopiano, G. and Brumana, M. and Minola, T. and Cassia, L. (2020) Does Growth Represent Chimera or Bellerophon for a Family Business?:The Role of Entrepreneurial Orientation and Family Influence Nuances. European Management Review, 17 (3). pp. 765-783. ISSN 1740-4754 Campopiano, Giovanna and Calabrò, Andrea and Basco, Rodrigo (2020) The "Most Wanted":The Role of Family Strategic Resources and Family Involvement in CEO Succession Intention. Family Business Review, 33 (3). pp. 284-309. ISSN 0894-4865 Can, Aydin and Efstathiades, Harry and Montazeri, Allahyar (2020) Design of a chattering-free sliding mode control system for robust position control of a quadrotor. In: Proceedings of the 2020 International Conference of Nonlinearity, Information and Robotics (NIR). IEEE, SUN. ISBN 9781728187631 Canclini, A. and Croset, Pierre-Alain (2020) On the CIAM 7 Grid: from an Ideological to a Critical Tool. Plan Journal, 5 (1). pp. 89-117. Cao, C. and Zhang, M. and Li, L. and Wang, Y. and Li, Z. and Du, L. and Holland, G. and Zhou, Z. (2020) Tracing the sources and evolution processes of shale gas by coupling stable (C, H) and noble gas isotopic compositions:Cases from Weiyuan and Changning in Sichuan Basin, China. Journal of Natural Gas Science and Engineering, 78. Cao, Yue (2020) Delay Tolerant Network Routing. In: Encyclopedia of Wireless Networks. Springer, Cham. ISBN 9783319329031 Capelier-Mourguy, Arthur and Twomey, Katherine Elizabeth and Westermann, Gert (2020) Neurocomputational models capture the effect of learned labels on infants' object and category representations. IEEE Transactions on Cognitive and Developmental Systems, 12 (2). pp. 160-168. ISSN 2379-8939 Caplan, Jennifer E. and Adams, Kiki and Boyd, Ryan L (2020) Personality and language. In: The Wiley Encyclopedia of Personality and Individual Differences. John Wiley & Sons, pp. 311-316.
ISBN 9781118970744 Capparini, Chiara and To, Michelle and Dardenne, Clément and Reid, Vincent (2020) Infant detection of facial emotional expressions across the visual field. In: International Congress of Infant Studies, 2020-07-062020-07-09. Capparini, Chiara and To, Michelle and Reid, Vincent (2020) Exploring infants' sensitivity to low-level visual information across the visual field. In: International Congress of Infant Studies, 2020-07-062020-07-09. Capparini, Chiara and To, Michelle and Reid, Vincent (2020) Measuring infants' sensitivity to face-like stimuli in the mid-peripheral visual field. In: International Congress of Infant Studies, 2020-07-062020-07-09. Capparini, Chiara and To, Michelle and Reid, Vincent (2020) Measuring sensitivity to social and non-social information across the visual field. In: Budapest CEU Conference on Cognitive Development 2020, 2020-01-082020-01-11. Capponi, Antonio and Crosby, Andrew C. and Lishman, Stephen and Llewellin, Edward W. (2020) A novel experimental apparatus for investigating bubbly flows in a slot geometry. Review of Scientific Instruments, 91 (4). ISSN 0034-6748 Carare, Roxana O. and Aldea, Roxana and Agarwal, Nivedita and Bacskai, Brian J. and Bechman, Ingo and Boche, Delphine and Bu, Guojun and Bulters, Diederik and Clemens, Alt and Counts, Scott E. and de Leon, Mony and Eide, Per K. and Fossati, Silvia and Greenberg, Steven M. and Hamel, Edith and Hawkes, Cheryl A. and Koronyo-Hamaoui, Maya and Hainsworth, Atticus H. and Holtzman, David and Ihara, Masafumi and Jefferson, Angela and Kalaria, Raj N. and Kipps, Christopher M. and Kanninen, Katja M. and Leinonen, Ville and McLaurin, Jo Anne and Miners, Scott and Malm, Tarja and Nicoll, James A.R. and Piazza, Fabrizio and Paul, Gesine and Rich, Steven M. and Saito, Satoshi and Shih, Andy and Scholtzova, Henrieta and Snyder, Heather and Snyder, Peter and Thormodsson, Finnbogi Rutur and van Veluw, Susanne J. and Weller, Roy O. and Werring, David J. and Wilcock, Donna and Wilson, Mark R. and Zlokovic, Berislav V. and Verma, Ajay (2020) Clearance of interstitial fluid (ISF) and CSF (CLIC) group—part of Vascular Professional Interest Area (PIA):Cerebrovascular disease and the failure of elimination of Amyloid-β from the brain and retina with age and Alzheimer's disease-Opportunities for Therapy. Alzheimer's and Dementia: Diagnosis, Assessment and Disease Monitoring, 12 (1). ISSN 2352-8729 Carcagno, Samuele and Plack, Christopher (2020) Effects of Age on Electrophysiological Measures of Cochlear Synaptopathy in Humans. Hearing Research, 396. ISSN 0378-5955 Cardi, Olivier and Bertinelli, Luisito and Restout, Romain (2020) Relative Productivity and Search Unemployment in an Open Economy. Journal of Economic Dynamics and Control, 117. ISSN 0165-1889 Cardi, Olivier and Restout, Romain and Claeys, Peter (2020) Imperfect mobility of labor across sectors and fiscal transmission. Journal of Economic Dynamics and Control, 111. ISSN 0165-1889 Cardoso, Rafael C. and Ferrando, Angelo and Papacchini, Fabio (2020) LFC:Combining Autonomous Agents and Automated Planning in the Multi-Agent Programming Contest. In: The Multi-Agent Programming Contest 2019. Lecture Notes in Computer Science . Springer, Cham, pp. 31-58. ISBN 9783030592981 Carloni, S. and Fatibene, L. and Ferraris, M. and McLenaghan, R.G. and Pinto, P. (2020) Discrete relativistic positioning systems. General Relativity and Gravitation, 52 (2). ISSN 0001-7701 Carpendale, J.I.M. and Lewis, C. 
(2020) Tomasello's tin man of moral obligation needs a heart. Behavioral and Brain Sciences, 43. ISSN 0140-525X Carpenter, Ciara (2020) An exploration of the effect of simulation on perceptions of medical students' preparedness for professional practice; a mixed-methods, longitudinal study. PhD thesis, UNSPECIFIED. Carradus, Angela and Zozimo, Ricardo and Discua Cruz, Allan (2020) Exploring a Faith-Led Open-Systems Perspective of Stewardship in Family Businesses. Journal of Business Ethics, 163. pp. 701-714. ISSN 0167-4544 Carrington, Peter and Delli, Evangelia and Letka, Veronica and Bentley, Matthew and Hodgson, Peter and Repiso Menendez, Eva and Hayton, Jonathan and Craig, Adam and Lu, Qi and Beanland, Richard and Krier, Anthony and Marshall, Andrew (2020) Heteroepitaxial integration of InAs/InAsSb type-II superlattice barrier photodetectors onto silicon. In: Proceedings Volume 11503, Infrared Sensors, Devices, and Applications X. SPIE--The International Society for Optical Engineering. Carruthers, J. (2020) "The impatient anticipations of our reason":Rough Sympathy in Friedrich Schiller and Charlotte Brontë's Jane Eyre. In: Anticipatory Materialisms in Literature and Philosophy, 1790-1930. Palgrave Macmillan, Cham, pp. 97-112. ISBN 9783030298166 Carruthers, J. and Dakkak, N. and Spence, R. (2020) Introduction. In: Anticipatory Materialisms in Literature and Philosophy, 1790-1930. Palgrave Macmillan, Cham, pp. 1-19. ISBN 9783030298166 Carruthers, Jo (2020) Introduction:Sandscapes. In: Sandscapes. Palgrave Macmillan, New York, pp. 1-18. ISBN 9783030447793 Carruthers, Jo (2020) The Politics of Purim:Law, Sovereignty and Hospitality in the Aesthetic Afterlives of Esther. The Library of Hebrew Bible/Old Testament Studies, Scriptural Traces . Bloomsbury, London. ISBN 9780567691866 Carruthers, Jo (2020) Rough and Smooth Sands:Social Thresholds and Seaside Style. In: Sandscapes. Palgrave Macmillan, New York, pp. 125-139. ISBN 9783030447793 Carruthers, Jo and Dakkak, Nour and Spence, Rebecca (2020) Anticipatory Materialisms in Literature and Philosophy, 1790-1930. Palgrave Macmillan, Cham. ISBN 9783030298166 Carruthers, Joanne Amy (2020) Esther Summerson's Biblical Judgment:Queen Esther and the Fallen Woman in Bleak House. Religion and Literature, 50 (3). ISSN 0888-3769 Carruthers, Joanne Amy (2020) Melodrama and the 'art of government'::Jewish emancipation and Elizabeth Polack's Esther, the Royal Jewess; or The Death of Haman! Literature and History, 29 (2). pp. 144-163. ISSN 0306-1973 Carter, Sophie and Draijer, Richard and Maxwell, Joseph D. and Morris, Abigail and Pedersen, Scott and Graves, Lee and Thijssen, Dick and Hopkins, Nicola (2020) Using an e-Health Intervention to Reduce Prolonged Sitting in UK Office Workers:A Randomised Acceptability and Feasibility Study. International Journal of Environmental Research and Public Health, 17 (23). ISSN 1660-4601 Carvajal Gomez, Raziel and Bromberg, Yerom David and Elkhatib, Yehia and Reveillere, Laurent and Rivière, Etienne (2020) Emergent Overlays for Adaptive MANET Broadcast. In: The 38th International Symposium on Reliable Distributed Systems (SRDS 2019). IEEE, pp. 193-202. ISBN 9781728142227 Carvalho, Fabio and Brown, Kerry A. and Gordon, Adam D. and Yesuf, Gabriel U. and Raherilalao, Marie Jeanne and Raselimanana, Achille P. and Soarimalala, Voahangy and Goodman, Steven M. (2020) Methods for prioritizing protected areas using individual and aggregate rankings. Environmental Conservation, 47 (2). pp. 113-122. 
ISSN 0376-8929 Carvalho, Fabio and Brown, Kerry A. and Waller, Martyn P. and Razafindratsima, Onja H. and Boom, Arnoud (2020) Changes in functional, phylogenetic and taxonomic diversities of lowland fens under different vegetation and disturbance levels. Plant Ecology, 221 (6). 441–457. ISSN 1385-0237 Carvalho G Da Silva, Fabio and Armstrong, Alona (2020) What is the potential for solar parks in the UK to enhance soil carbon through active management?:A rapid systematic review of the available evidence. Open Science Framework. Carver, R. (2020) Lessons for blue degrowth from Namibia's emerging blue economy. Sustainability Science, 15 (1). 131–143. ISSN 1862-4065 Carver, Rosanna and Childs, John and Steinberg, Phil and Mabon, Leslie and Matsuda, Hiroyuki and McLellan, Ben and Esteban, Miguel (2020) A critical social perspective on deep sea mining:Lessons from the emergent industry in Japan. Ocean and Coastal Management, 193. ISSN 0964-5691 Case, Nathan and Grocott, Adrian and Fear, R.C. and Haaland, Stein and Lane, James (2020) Convection in the Magnetosphere-Ionosphere System:a Multi-Mission Survey of its Response to IMF By Reversals. Journal of Geophysical Research: Space Physics, 125 (10). ISSN 2169-9402 Casey, Sarah and Davies, Gerald (2020) Drawn to Investigate. [Exhibition] Castanon, E.G. and Scarioni, A.F. and Schumacher, H.W. and Spencer, S. and Perry, R. and Vicary, J.A. and Clifford, C.A. and Corte-León, H. (2020) Calibrated kelvin-probe force microscopy of 2d materials using pt-coated probes. Journal of Physics Communications, 4 (9). ISSN 2399-6528 Castelló, M. and McAlpine, L. and Sala-Bubaré, A. and Inouye, K. and Skakni, I. (2020) What perspectives underlie 'researcher identity'?:A review of two decades of empirical studies. Higher Education. ISSN 0018-1560 Castorrini, A. and Venturini, P. and Corsini, A. and Rispoli, F. and Takizawa, K. and Tezduyar, T.E. (2020) Computational analysis of particle-laden-airflow erosion and experimental verification. Computational Mechanics, 65. 1549–1565. Castorrini, Alessio and Cappugi, Lorenzo and Bonfiglioli, Aldo and Campobasso, Sergio (2020) Assessing wind turbine energy losses due to blade leading edge erosion cavities with parametric CAD and 3D CFD. Journal of Physics: Conference Series, 1618. ISSN 1742-6588 Castorrini, Alessio and Venturini, Paolo and Corsini, Alessandro and Rispoli, Franco (2020) Simulation of the deposit evolution on a fan blade for tunnel ventilation. Journal of Engineering for Gas Turbines and Power, 142 (4). ISSN 0742-4795 Castro Valdecantos, Pedro (2020) Physiological, phytohormonal and molecular responses of soybean to soil drying. PhD thesis, UNSPECIFIED. Catchpole, Pip (2020) Product development of fully recyclable single-use coffee cups. Masters thesis, UNSPECIFIED. Catterall, Nicholas (2020) Organisational hierarchies in English universities:Understanding roles and boundaries. PhD thesis, UNSPECIFIED. Causon, Andrew and Munro, Kevin and Plack, Christopher and Prendergast, Garreth (2020) The role of the clinically obtained acoustic reflex as a research tool for sub-clinical hearing pathologies. Trends in Hearing, 24. pp. 1-14. ISSN 2331-2165 Cavada, Marianna and Rogers, Chris (2020) Serious gaming as a means of facilitating truly smart cities:a narrative review. Behaviour and Information Technology, 39 (6). pp. 695-710. 
ISSN 0144-929X Cavalcanti, M Isabella and Discua Cruz, Allan (2020) Assessing women entrepreneurs' decision process to enter the formal economy in emerging economies: A study in São Paulo, Brazil. In: Academy of Management Global Proceedings (p. Vol. Mexico, No. 2020). Mexico City, Mexico. UNSPECIFIED. Cavaliere, G. (2020) 'Gestation, Equality and Freedom: Ectogenesis as a Political Perspective' response to commentaries. Journal of Medical Ethics, 46. pp. 91-92. ISSN 0306-6800 Cavaliere, G. (2020) Non-essential treatment?:Sub-fertility in the time of COVID-19 (and beyond). Reproductive BioMedicine Online, 41 (3). pp. 543-545. ISSN 1472-6483 Cavaliere, Giulia (2020) Ectogenesis and gender-based oppression:Resisting the ideal of assimilation. Bioethics, 34 (7). pp. 727-734. ISSN 0269-9702 Cavaliere, Giulia (2020) Gestation, equality and freedom:Ectogenesis as a political perspective. Journal of Medical Ethics, 46 (2). pp. 76-82. ISSN 0306-6800 Cavaliere, Giulia (2020) The problem with reproductive freedom:Procreation beyond procreators' interests. Medicine, Health Care and Philosophy, 23 (1). pp. 131-140. ISSN 1386-7423 Ceccarelli, Daniela M. and McLeod, Ian M. and Bostrom-Einarsson, Lisa and Bryan, Scott E. and Chartrand, Kathryn M. and Emslie, Michael J. and Gibbs, Mark T. and Rivero, Manuel Gonzalez and Hein, Margaux Y. and Heyward, Andrew and Kenyon, Tania M. and Lewis, Brett M. and Mattocks, Neil and Newlands, Maxine and Schlappy, Marie Lise and Suggett, David J. and Bay, Line K. (2020) Substrate stabilisation and small structures in coral restoration:State of knowledge, and considerations for management and implementation. PLoS ONE, 15 (10). ISSN 1932-6203 Celik, H Kursat and Rennie, Allan and Akinci, Ibrahim (2020) A Potential Research Area Under Shadow In Engineering:Agricultural Machinery Design and Manufacturing. ISPEC Journal of Agricultural Sciences, 4 (2). pp. 66-86. ISSN 2717-7238 Celik, H. Kursat and Caglayan, Nuri and Topakci, Mehmet and Rennie, Allan and Akinci, Ibrahim (2020) Strength-based Design Analysis of a Para-Plow Tillage Tool. Computers and Electronics in Agriculture, 169. ISSN 0168-1699 Celik, H. Kursat and Kose, Ozkan and Ulmeanu, Mihaela-Elena and Rennie, Allan and Abram, Tom and Akinci, Ibrahim (2020) Design and Additive Manufacturing of a Medical Face Shield for Healthcare Workers Battling Coronavirus (COVID-19). International Journal of Bioprinting, 6 (4). ISSN 2424-8002 Celina, H. and Lambton-Howard, D. and Lee, C. and Kharrufa, A. (2020) Supporting Pedagogy Through Automation and Social Structures in Student-Led Online Learning Environments. In: 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, 2020-10-25 to 2020-10-29, Online. Centis Vignali, M. and Dias De Almeida, P. and Franconi, L. and Gallinaro, M. and Gurimskaya, Y. and Harrop, B. and Holmkvist, W. and Lu, C. and Mateu, I. and McClish, M. and McDonald, K.T. and Moll, M. and Newcomer, F.M. and Otero Ugobono, S. and White, S. and Wiehe, M. (2020) Deep diffused Avalanche photodiodes for charged particles timing. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 958. ISSN 0168-9002 Cervini, L. and Barrow, N. and Griffin, J. (2020) Observing solvent dynamics in porous carbons by nuclear magnetic resonance: Elucidating molecular-level dynamics of in-pore and ex-pore species. Johnson Matthey Technology Review, 64 (2). pp. 152-164. Chaffer, Jo (2020) Leadership development:containment enough.
PhD thesis, UNSPECIFIED. Chakrabarti, Ronika and Henneberg, Stephen and S.Ivens, Björn (2020) Open sustainability:Conceptualization and considerations. Industrial Marketing Management, 89. pp. 528-534. ISSN 0019-8501 Chalfont, Garuth and Milligan, Christine and Simpson, Jane (2020) A mixed methods systematic review of multimodal non-pharmacological interventions to improve cognition for people with dementia. Dementia, 19 (4). pp. 1-45. ISSN 1471-3012 Chalfont, Garuth and Simpson, Jane and Eccles, Fiona and Milligan, Christine (2020) Views of Conventional Medicine and Integrative Medicine among Informal Dementia Caregivers and Healthcare Professionals in NW England. OBM Geriatrics, 4 (1). Chambers, J.R. and Smith, M.W. and Quincey, D.J. and Carrivick, J.L. and Ross, A.N. and James, M.R. (2020) Glacial Aerodynamic Roughness Estimates:Uncertainty, Sensitivity, and Precision in Field Measurements. Journal of Geophysical Research: Earth Surface, 125 (2). ISSN 2169-9011 Chan, Kin Chung Jacky (2020) Word learning in bilingual children. PhD thesis, UNSPECIFIED. Chan, Wai Shun (2020) Bringing the ports and port diplomacy back-in:A comparative study of the role of Hong Kong, Macao and Shanghai in contemporary EU-China relations. PhD thesis, UNSPECIFIED. Chandranathan, Preman (2020) Researching Entrepreneurial Leadership: A Review and Research Agenda. Working Paper. Lancaster University. (Unpublished) Chandranathan, Preman (2020) The activity of entrepreneurial leadership. PhD thesis, UNSPECIFIED. Chang, Ya-Ning and Taylor, Jo and Rastle, Kathy and Monaghan, Padraic (2020) The relationships between oral language and reading instruction:Evidence from a computational model of reading. Cognitive Psychology, 123. ISSN 0010-0285 Chapman, Jamie-Leigh and Eckley, Idris and Killick, Rebecca (2020) A nonparametric approach to detecting changes in variance in locally stationary time series. Environmetrics, 31 (1). ISSN 1099-095X Chapman, Jamie-Leigh and Killick, Rebecca (2020) An assessment of practitioners approaches to forecasting in the presence of changepoints. Quality and Reliability Engineering International, 36 (8). pp. 2676-2687. ISSN 0748-8017 Chapman, L.A.C. and Spencer, S.E.F. and Pollington, T.M. and Jewell, C.P. and Mondal, D. and Alvar, J. and Hollingsworth, T.D. and Cameron, M.M. and Bern, C. and Medley, G.F. (2020) Inferring transmission trees to guide targeting of interventions against visceral leishmaniasis and post-kala-azar dermal leishmaniasis. Proceedings of the National Academy of Sciences of the United States of America, 117 (41). pp. 25742-25750. ISSN 0027-8424 Charisi, V. and Malinverni, L. and Rubegni, E. and Schaper, M.-M. (2020) Empowering Children's Critical Reflections on AI, Robotics and Other Intelligent Technologies. In: NordiCHI '20. ACM, New York, pp. 1-4. ISBN 9781450375795 Charisi, V. and Malinverni, L. and Schaper, M.-M. and Rubegni, E. (2020) Creating opportunities for children's critical reflections on AI, robotics and other intelligent technologies. In: IDC '20: Proceedings of the 2020 ACM Interaction Design and Children Conference: Extended Abstracts. ACM, New York, pp. 89-95. ISBN 9781450380201 Chatterjee, Bela Bonita (2020) Child sex dolls and robots:challenging the boundaries of the child protection framework. International Review of Law, Computers and Technology, 34 (1). pp. 22-43. ISSN 1360-0869 Chatzi, G. and Mason, T. and Chandola, T. and Whittaker, W. and Howarth, E. and Cotterill, S. and Ravindrarajah, R. and McManus, E. and Bower, P. 
(2020) Sociodemographic disparities in non-diabetic hyperglycaemia and the transition to type 2 diabetes: evidence from the English Longitudinal Study of Ageing. Diabetic Medicine, 37 (9). pp. 1536-1544. ISSN 0742-3071 Chatzigeorgiou, Ioannis (2020) The impact of 5G channel models on the performance of intelligent reflecting surfaces and decode-and-forward relaying. In: 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). IEEE, GBR. ISBN 9781728144917 Chatzigeorgiou, Ioannis and Manole, Elena (2020) Decoding probability analysis of network-coded data collection and delivery by relay drones. In: 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). IEEE, GBR. ISBN 9781728144917 Che Hussin, Kasmaruddin (2020) Managing co-production in a servitization context:An analysis of the roles of contracting. PhD thesis, UNSPECIFIED. Chebaane, Ahmed and Khelil, Abdelmajid and Suri, Neeraj (2020) Time-critical fog computing for vehicular networks. In: Fog Computing. Wiley, London, pp. 431-458. ISBN 9781119551690 Cheded, Mohammed and Skandalis, Alexandros (2020) A digital textural sociological exploration of alternative modes of touch and contact:Reflections from queer digital spaces. In: TASA 2020, The Australian Sociological Conference, 2020-11-14. Chekar, Choon Key (2020) Neo-liberal Genre, not so Liberal Consumption:When a Japanese 'Morning Person' Book Crossed the South Korean Border. In: The Routledge International Handbook of Global Therapeutic Cultures. Routledge, London. ISBN 9780367110925 Chen, B. and Wan, J. and Xia, M. and Zhang, Y. (2020) Exploring Equipment Electrocardiogram Mechanism for Performance Degradation Monitoring in Smart Manufacturing. IEEE/ASME Transactions on Mechatronics, 25 (5). pp. 2276-2286. ISSN 1083-4435 Chen, Bo and Chen, Yu and Rietzke, David Michael (2020) Simple Contracts under Observable and Hidden Actions. Economic Theory, 69. pp. 1023-1047. ISSN 0938-2259 Chen, C.-E. and Liu, Y.-S. and Dunn, R. and Zhao, J.-L. and Jones, K.C. and Zhang, H. and Ying, G.-G. and Sweetman, A.J. (2020) A year-long passive sampling of phenolic endocrine disrupting chemicals in the East River, South China. Environment International, 143. ISSN 0160-4120 Chen, C.-H. and Gaillard, E. and Mentink-Vigier, F. and Chen, K. and Gan, Z. and Gaveau, P. and Rebière, B. and Berthelot, R. and Florian, P. and Bonhomme, C. and Smith, M.E. and Métro, T.-X. and Alonso, B. and Laurencin, D. (2020) Direct 17O Isotopic Labeling of Oxides Using Mechanochemistry. Inorganic Chemistry, 59 (18). pp. 13050-13066. ISSN 0020-1669 Chen, Chian-Chou and Harrison, C. M. and Smail, I. and Swinbank, A. M. and Turner, O. J. and Wardlow, J. L. and Brandt, W. N. and Calistro Rivera, G. and Chapman, S. C. and Cooke, E. A. and Dannerbauer, H. and Dunlop, J. S. and Farrah, D. and Michałowski, M. J. and Schinnerer, E. and Simpson, J. M. and Thomson, A. P. and van der Werf, P. P. (2020) Extended H$\alpha$ over compact far-infrared continuum in dusty submillimeter galaxies -- Insights into dust distributions and star-formation rates at $z\sim2$. Astronomy and Astrophysics, 635. ISSN 1432-0746 Chen, Christian and Szyniszewski, Marcin and Schomerus, Henning (2020) Many-body localization of zero modes. Physical Review Research, 2. ISSN 2643-1564 Chen, D. and Yang, C. and Gong, P. and Chang, L. and Shao, J. and Ni, Q. and Anpalagan, A. and Guizani, M. 
(2020) Resource Cube:Multi-Virtual Resource Management for Integrated Satellite-Terrestrial Industrial IoT Networks. IEEE Transactions on Vehicular Technology, 69 (10). pp. 11963-11974. ISSN 0018-9545 Chen, H. and Sangtarash, S. and Li, G. and Gantenbein, M. and Cao, W. and Alqorashi, A. and Liu, J. and Zhang, C. and Zhang, Y. and Chen, L. and Chen, Y. and Olsen, G. and Sadeghi, H. and Bryce, M.R. and Lambert, C.J. and Hong, W. (2020) Exploring the thermoelectric properties of oligo(phenylene-ethynylene) derivatives. Nanoscale, 12 (28). pp. 15150-15156. ISSN 2040-3372 Chen, Hanbo and Yang, Xing and Wang, Hailong and Sarkar, Binoy and Shaheen, Sabry M. and Gielen, Gerty and Bolan, Nanthi and Guo, Jia and Che, Lei and Sun, Huili and Rinklebe, Jörg (2020) Animal carcass- and wood-derived biochars improved nutrient bioavailability, enzyme activity, and plant growth in metal-phthalic acid ester co-contaminated soils: A trial for reclamation and improvement of degraded soils. Journal of Environmental Management, 261. ISSN 0301-4797 Chen, Lixiong and Buscher, Monika and Hu, Yang (2020) Crowding Out the Crowd:The Transformation of Network Disaster Communication Patterns on Weibo. In: ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management. Virginia Tech, Blacksburg, VA, pp. 472-479. ISBN 97819493732745 Chen, S. and Fang, T. and Xiao, S. and Lin, F. and Cheng, X. and Wang, S. and Zhu, X. and Chen, X. and Zheng, M. and Munir, M. and Huang, M. and Yu, F. (2020) Duckling short beak and dwarfism syndrome virus infection activates host innate immune response involving both DNA and RNA sensors. Microbial pathogenesis, 138. ISSN 0882-4010 Chen, S. and Luo, S. and Yu, H. and Geng, H. and Xu, G. and Li, R. and Tian, Y. (2020) Effect of beam defocusing on porosity formation in laser-MIG hybrid welded TA2 titanium alloy joints. Journal of Manufacturing Processes, 58. pp. 1221-1231. ISSN 1526-6125 Chen, T.-Y. and Rautiyal, P. and Vaishnav, S. and Gupta, G. and Schlegl, H. and Dawson, R.J. and Evans, A.W. and Kamali, S. and Johnson, J.A. and Johnson, C.E. and Bingham, P.A. (2020) Composition-structure-property effects of antimony in soda-lime-silica glasses. Journal of Non-Crystalline Solids, 544. ISSN 0022-3093 Chen, Xu and Williams, Bryan and Vallabhanehi, Srini and Czanner, Gabriela and Williams, Rachel and Zheng, Yalin (2020) Learning Active Contour Models for Medical Image Segmentation. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, USA, pp. 11632-11640. ISBN 9781728132945 Chen, Y. and Cheng, Y. and Ma, N. and Wei, C. and Ran, L. and Wolke, R. and Größ, J. and Wang, Q. and Pozzer, A. and Van Der Gon, H.A.C.D. and Spindler, G. and Lelieveld, J. and Tegen, I. and Su, H. and Wiedensohler, A. (2020) Natural sea-salt emissions moderate the climate forcing of anthropogenic nitrate. Atmospheric Chemistry and Physics, 20 (2). pp. 771-786. ISSN 1680-7316 Chen, Y. and Wild, O. and Ryan, E. and Sahu, S. K. and Lowe, D. and Archer-Nicholls, S. and Wang, Y. and McFiggans, G. and Ansari, T. and Singh, V. and Sokhi, R. S. and Archibald, A. and Beig, G. (2020) Mitigation of PM2.5 and ozone pollution in Delhi:A sensitivity study during the pre-monsoon period. Atmospheric Chemistry and Physics, 20 (1). pp. 499-514. ISSN 1680-7316 Chen, Ying (2020) COVID-19 Pandemic Imperils Weather Forecast. Geophysical Research Letters, 47 (15). ISSN 0094-8276 Chen, Z. and Li, R. and Gu, J. and Zhang, Z. and Tao, Y. and Tian, Y. 
(2020) Laser cladding of Ni60+ 17-4 PH composite for a cracking-free and corrosion resistive coating. International Journal of Modern Physics B, 34 (1n03). ISSN 0217-9792 Cheneler, D. and Kennedy, A. R. (2020) Measurement and modelling of the elastic deflection of novel metal syntactic foam composite sandwich structures in 3-point bending. Composite Structures, 235. ISSN 0263-8223 Cheneler, D. and Kennedy, A.R. (2020) A comparison of the manufacture and mechanical performance of porous aluminium and aluminium syntactic foams made by vacuum-assisted casting. Materials Science and Engineering A, 789. Cheng, Cheng and Ibar, Edo and Smail, Ian and Molina, Juan and Sobral, David and Escala, Andrés and Best, Philip and Cochrane, Rachel and Gillman, Steven and Swinbank, Mark and Ivison, R. J. and Huang, Jia-Sheng and Hughes, Thomas M. and Villard, Eric and Cirasuolo, Michele (2020) A kpc-scale resolved study of unobscured and obscured star-formation activity in normal galaxies at z = 1.5 and 2.2 from ALMA and HiZELS. Monthly Notices of the Royal Astronomical Society, 499 (4). 5241–5256. ISSN 0035-8711 Cheng, Peng (2020) Acoustic-channel attack and defence methods for personal voice assistants. PhD thesis, UNSPECIFIED. Cheng, Peng and Bagci, Ibrahim and Roedig, Utz and Yan, Jeff (2020) SonarSnoop:active acoustic side-channel attacks. International Journal of Information Security, 19. pp. 213-228. ISSN 1615-5262 Cheng, Q. and Shi, H. and Zhang, P. and Yu, Z. and Wu, D. and He, S. and Tian, Y. (2020) Microstructure, oxidation resistance and mechanical properties of stellite 12 composite coating doped with submicron TiC/B4C by laser cladding. Surface and Coatings Technology, 395. ISSN 0257-8972 Cheng, Siyu (2020) Modelling summertime ozone in North China. Masters thesis, UNSPECIFIED. Cheng, T.L. (2020) Student teachers' perception of reflective journal writing in placement practicum:Experience from a Hong Kong institution. Asia-Pacific Journal of Research in Early Childhood Education, 14 (2). pp. 27-51. ISSN 1976-1961 Cheong, Huey Fen (2020) A problematised critical approach:Constructions of metrosexualities in the UK and Malaysia. PhD thesis, UNSPECIFIED. Cherubin, Maurício R. and Lisboa, Izaias P. and Silva, Aijânio G.B. and Varanda, Letícia L. and Bordonal, Ricardo O. and Carvalho, João L.N. and Otto, Rafael and Pavinato, Paulo S. and Soltangheisi, Amin and Cerri, Carlos E.P. (2020) Correction to:Sugarcane Straw Removal: Implications to Soil Fertility and Fertilizer Demand in Brazil (BioEnergy Research, (2019), 12, 4, (888-900), 10.1007/s12155-019-10021-w). Bioenergy Research. ISSN 1939-1234 Cheung, Rachael W and Hartley, Calum and Monaghan, Padraic (2020) The early cue catches the word: how gesture supports cross-situational word learning. In: 42nd Annual Virtual Meeting of the Cognitive Science Society, 2020-07-29 to 2020-08-01, Online. Cheung, Shirley (2020) Functional specialization of phonological perception:How bilingualism modifies neural organization. PhD thesis, UNSPECIFIED. Cheurfa, Hiyem (2020) Contemporary Arab women's life writing and the politics of resistance:literary modes and postcolonial contexts. PhD thesis, UNSPECIFIED. Chi, Yin and Huang, Bo and Saafi, Mohamed and Ye, Jianqiao and Lambert, Colin (2020) Carrot-based covalently bonded saccharides as a new 2D material for healing defective calcium-silicate-hydrate in cement:Integrating atomistic computational simulation with experimental studies. Composites Part B: Engineering, 199.
ISSN 1359-8368 Chiang, Chia-yen and Angelov, Plamen and Barnes, Chloe and Jiang, Richard (2020) Deep Learning based Automated Forest Health Diagnosis from Aerial Images. IEEE Access, 8. 144064 - 144076. ISSN 2169-3536 Chiarello, M. and Auguet, J.-C. and Graham, N.A.J. and Claverie, T. and Sucré, E. and Bouvier, C. and Rieuvilleneuve, F. and Restrepo-Ortiz, C.X. and Bettarel, Y. and Villéger, S. and Bouvier, T. (2020) Exceptional but vulnerable microbial diversity in coral reef animal surface microbiomes. Proceedings of the Royal Society B: Biological Sciences, 287 (1927). ISSN 0962-8452 Chidiac, C. and Feuer, D. and Flatley, M. and Rodgerson, A. and Grayson, K. and Preston, N. (2020) The need for early referral to palliative care especially for Black, Asian and minority ethnic groups in a COVID-19 pandemic:Findings from a service evaluation. Palliative Medicine, 34 (9). pp. 1241-1248. ISSN 0269-2163 Chidiac, Claude and Feuer, David and Naismith, Jane and Flatley, Mary and Preston, Nancy (2020) Emergency Palliative Care Planning and Support in a COVID-19 Pandemic. Journal of Palliative Medicine. ISSN 1096-6218 Chifiero, Astra (2020) Knowledge management:understanding the cultural context of knowledge transfer. PhD thesis, UNSPECIFIED. Childs, John (2020) Performing 'blue degrowth'?:Critiquing seabed mining in Papua New Guinea through creative practice. Sustainability Science, 15 (1). pp. 117-129. ISSN 1862-4065 Childs, John Robert (2020) Extraction in four dimensions:time, space and the emerging geo(-)politics of deep sea mining. Geopolitics, 25 (1). pp. 189-213. ISSN 1465-0045 Chimes, M. and Boxall, C. and Edwards, S. and Sarsfield, M. and Taylor, R.J. and Woodhead, D. (2020) Reduction reactions of vanadium as a neptunium analogue with nitrogen oxide species. In: 14th International Nuclear Fuel Cycle Conference, GLOBAL 2019 and Light Water Reactor Fuel Performance Conference, TOP FUEL 2019, 2019-09-222019-09-26, The Westin Seattle. Chipperfield, Martyn P. and Hossaini, Ryan and Montzka, Stephen A. and Reimann, Stefan and Sherry, David and Tegtmeier, Susann (2020) Renewed and emerging concerns over the production and emission of ozone-depleting substances. Nature Reviews Earth & Environment, 1. pp. 251-263. Chircop, J. and Johan, S. and Tarsalewska, M. (2020) Does religiosity influence venture capital investment decisions? Journal of Corporate Finance, 62. ISSN 0929-1199 Chircop, Justin and Collins, Daniel and Hass, Lars Helge and Nguyen, Nhat (2020) Accounting Comparability and Corporate Innovative Efficiency. The Accounting Review, 95 (4). 127–151. ISSN 0001-4826 Chircop, Justin and Tarsalewska, Monika (2020) 10-K Filing length and M&A returns. European Journal of Finance, 26 (6). pp. 532-553. ISSN 1351-847X Chirombo, James and Ceccato, Pietro and Lowe, Rachel and Terlouw, Dianne J. and Thomson, Madeleine and Gumbo, Austin and Diggle, Peter and Read, Jonathan (2020) Childhood malaria case incidence in Malawi between 2004 and 2017:Spatio-temporal modelling of climate and non-climate factors. Malaria Journal, 19. ISSN 1475-2875 Chmielinski, M. and Cheung, C.M.K. and Wenninger, H. and Wollongong, Dubai Tourism; Emirates; PACIS; University of (2020) Coping with envy on professional social networking sites:24th Pacific Asia Conference on Information Systems: Information Systems (IS) for the Future, PACIS 2020. In: 24th Pacific Asia Conference on Information Systems: Information Systems (IS) for the Future, 2020-06-202020-06-24. 
Chong, Doris (2020) Under a top-down rubric policy::the perceptions and actualisations of assessment for learning and rubric in higher education in Hong Kong. PhD thesis, UNSPECIFIED. Chopra, Amit K. and V, Samuel H. Christie and Singh, Munindar P. (2020) An Evaluation of Communication Protocol Languages for Engineering Multiagent Systems. Journal of Artificial Intelligence Research, 69. pp. 351-1393. ISSN 1076-9757 Christiansen, Alex and Dance, William and Wild, Alexander (2020) Constructing Corpora from Images and Text:An introduction to Visual Constituent Analysis. In: Corpus Approaches to Social Media. Studies in Corpus Linguistics . John Benjamins, pp. 149-174. ISBN 9789027207944 Christie, A.P. and Abecasis, D. and Adjeroud, M. and Alonso, J.C. and Amano, T. and Anton, A. and Baldigo, B.P. and Barrientos, R. and Bicknell, J.E. and Buhl, D.A. and Cebrian, J. and Ceia, R.S. and Cibils-Martina, L. and Clarke, S. and Claudet, J. and Craig, M.D. and Davoult, D. and De Backer, A. and Donovan, M.K. and Eddy, T.D. and França, F.M. and Gardner, J.P.A. and Harris, B.P. and Huusko, A. and Jones, I.L. and Kelaher, B.P. and Kotiaho, J.S. and López-Baucells, A. and Major, H.L. and Mäki-Petäys, A. and Martín, B. and Martín, C.A. and Martin, P.A. and Mateos-Molina, D. and McConnaughey, R.A. and Meroni, M. and Meyer, C.F.J. and Mills, K. and Montefalcone, M. and Noreika, N. and Palacín, C. and Pande, A. and Pitcher, C.R. and Ponce, C. and Rinella, M. and Rocha, R. and Ruiz-Delgado, M.C. and Schmitter-Soto, J.J. and Shaffer, J.A. and Sharma, S. and Sher, A.A. and Stagnol, D. and Stanley, T.R. and Stokesbury, K.D.E. and Torres, A. and Tully, O. and Vehanen, T. and Watts, C. and Zhao, Q. and Sutherland, W.J. (2020) Quantifying and addressing the prevalence and bias of study designs in the environmental and social sciences. Nature Communications, 11 (1). ISSN 2041-1723 Christie-de Jong, F. and Reilly, S. (2020) Barriers and facilitators to pap-testing among female overseas Filipino workers:a qualitative exploration. International Journal of Human Rights in Healthcare. ISSN 2056-4902 Christophers, Brett and Bigger, Patrick and Johnson, Leigh (2020) Stretching scales?:Risk and sociality in climate finance. Environment and Planning A, 52 (1). pp. 88-110. ISSN 0308-518X Christou, Elisavet and Ceyhan, Pinar and Gradinar, Adrian (2020) Embedding Evaluation into Research Projects - The Percent Evaluation Method. In: The Design Research Society 2020 International Conference, 2020-08-112020-08-14, Online. Chuang, Joseph and Lazarev, Andrey (2020) Homological epimorphisms, homotopy epimorphisms and acyclic maps. Forum Mathematicum, 32 (6). 1395–1406. ISSN 0933-7741 Chubb, Andrew (2020) ASEAN Cooperation in the South China Sea amid Great Power Rivalry:Vietnam as a Middle Power? In: Ocean Governance in the South China Sea. National Political Publishing House, Hanoi, pp. 77-109. ISBN 9786045756515 Chubb, Andrew (2020) China warily watches Indian nationalism. The China Story Blog. Chubb, J. and Derrick, G.E. (2020) Correction: The impact a-gender: gendered orientations towards research Impact and its evaluation (Palgrave Communications, (2020), 6, 1, (72), 10.1057/s41599-020-0438-z). Palgrave Communications, 6 (1). ISSN 2055-1045 Chubb, J. and Derrick, G.E. (2020) The impact a-gender:gendered orientations towards research Impact and its evaluation. Palgrave Communications, 6 (1). ISSN 2055-1045 Chubb, Steven (2020) The impact of a new National Curriculum on subject leaders in primary and secondary Schools. 
PhD thesis, UNSPECIFIED. Chudasri, Disaya and Walker, Stuart and Evans, Martyn (2020) Potential areas for design and its implementation to enable the future viability of weaving practices in northern Thailand. International Journal of Design, 14 (1). pp. 95-111. ISSN 1991-3761 Chughtai, Hameed (2020) Human Values and Digital Work:An Ethnographic Study of Device Paradigm. Journal of Contemporary Ethnography, 49 (1). pp. 27-57. ISSN 0891-2416 Chughtai, Hameed (2020) Taking the human body seriously. European Journal of Information Systems, 30 (1). pp. 46-68. ISSN 0960-085X Chughtai, Hameed and Young, Amber G. and Cardo, Valentina and Morgan, Cat and Prior, Chris and Young, Eugene and Myers, Michael D. and Borsa, Tomas and Demirkol, Özlem and Morton, Stephen and Wilkin, Joanna and Özkula, Suay M. (2020) Demarginalizing interdisciplinarity in is research:Interdisciplinary research in marginalization. Communications of the Association for Information Systems, 46 (1). pp. 296-315. ISSN 1529-3181 Chung, Antony (2020) PhyForm - A cloud SDR framework for security research supporting machine learning of wireless IoT signal data sets. In: Seventeenth International Conference on Embedded Wireless Systems and Networks (EWSN 2020), 2020-02-172020-02-19. Churchman, Gordon Jock and Singh, Mandeep and Schapel, Amanda and Sarkar, Binoy and Bolan, Nanthi S (2020) Clay minerals as the key to the sequestration of carbon in soils. Clays and Clay Minerals, 68 (2). pp. 135-143. ISSN 1552-8367 Cierna, Z. and Miskovska, V. and Roska, J. and Jurkovicova, D. and Pulzova, L.B. and Sestakova, Z. and Hurbanova, L. and Machalekova, K. and Chovanec, M. and Rejlekova, K. and Svetlovska, D. and Kalavska, K. and Kajo, K. and Babal, P. and Mardiak, J. and Ward, T.A. and Mego, M. and Chovanec, Miroslav (2020) Increased levels of XPA might be the basis of cisplatin resistance in germ cell tumours. BMC Cancer, 20 (1). ISSN 1471-2407 Ciholas, Pierre and Such, Jose and Marnerides, Angelos and Green, Benjamin and Zhang, Jiajie and Roedig, Utz (2020) Fast and Furious:Outrunning Windows Kernel Notification Routines from User-Mode. In: Detection of Intrusions and Malware, and Vulnerability Assessment. DIMVA 2020. Springer, PRT, pp. 67-88. ISBN 9783030526825 Cildir, Sukru (2020) OPEC as a site of De-escalation? In: Saudi Arabia, Iran and the De-escalation in the Persian Gulf. SEPAD, Lancaster. Cin, Firdevs Melis and Karlidag-Dennis, Ecem and Temiz, Zeynep (2020) Capabilities-based gender equality analysis of educational policy-making and reform in Turkey. Gender and Education, 32 (2). pp. 244-261. ISSN 0954-0253 Cin, Melis and Süleymanoğlu-Kürüm, Rahime (2020) Participatory Video as a Tool for Cultivating Political and Feminist Capabilities of Women in Turkey. In: Participatory research, capabilities and epistemic justice. Palgrave Macmillan, pp. 165-188. ISBN 9783030561963 Cinner, Joshua E. and Zamborain-Mason, Jessica and Gurney, Georgina G. and Graham, Nicholas A. J. and MacNeil, M. Aaron and Hoey, Andrew S. and Mora, Camilo and Villeger, Sebastien and Maire, Eva and McClanahan, Tim R. and Maina, Joseph M. and Kittinger, John N. and Hicks, Christina C. and D'agata, Stephanie and Huchery, Cindy and Barnes, Michele L. and Feary, David A. and Williams, Ivor D. and Kulbicki, Michel and Vigliola, Laurent and Wantiez, Laurent and Edgar, Graham J. and Stuart-Smith, Rick D. and Sandin, Stuart A. and Green, Alison L. and Beger, Maria and Friedlander, Alan M. and Wilson, Shaun K. and Brokovich, Eran and Brooks, Andrew J. 
and Cruz-Motta, Juan J. and Booth, David J. and Chabanet, Pascale and Tupper, Mark and Ferse, Sebastian C. A. and Sumaila, U. Rashid and Hardt, Marah J. and Mouillot, David (2020) Meeting fisheries, ecosystem function, and biodiversity goals in a human-dominated world. Science, 368 (6488). pp. 307-311. ISSN 0036-8075 Cirasuolo, M. and Sobral, David and consortium, the MOONS (2020) MOONS:The New Multi-Object Spectrograph for the VLT. The Messenger, 180. pp. 10-17. Cirne-Silva, T.M. and Carvalho, W.A.C. and Terra, M.C.N.S. and de Souza, C.R. and Santos, A.B.M. and Robinson, S.J.B. and dos Santos, R.M. (2020) Environmental heterogeneity caused by anthropogenic disturbance drives forest structure and dynamics in Brazilian Atlantic Forest. Journal of Tropical Forest Science, 32 (2). pp. 125-135. Cisneros-Montemayor, A.M. and Ota, Y. and Bailey, M. and Hicks, C.C. and Khan, A.S. and Rogers, A. and Sumaila, U.R. and Virdin, J. and He, K.K. (2020) Changing the narrative on fisheries subsidies reform:Enabling transitions to achieve SDG 14.6 and beyond. Marine Policy, 117. ISSN 0308-597X Citron, Francesca M.M. and Lee, Mollie and Michaelis, Nora (2020) Affective and psycholinguistic norms for German conceptual metaphors (COMETA). Behavior Research Methods, 52. pp. 1056-1072. ISSN 1554-351X Citron, Francesca M.M. and Michaelis, Nora and Goldberg, Adele E. (2020) Metaphorical language processing and amygdala activation in L1 and L2. Neuropsychologia, 140. ISSN 0028-3932 Clair, Amy and Fledderjohann, Jasmine and Lalor, Doireann and Loopstra, Rachel (2020) The housing situations of food bank users in Great Britain. Social Policy and Society, 19 (1). pp. 55-73. ISSN 1474-7464 Clancy, Laura (2020) 'Queen of Scots':the Monarch's Body and National Identities in the 2014 Scottish Independence Referendum. European Journal of Cultural Studies, 23 (3). pp. 495-512. ISSN 1367-5494 Clancy, Laura (2020) 'This is a tale of friendship, a story of togetherness':The British Monarchy, Grenfell Tower, and Inequalities in The Royal Borough of Kensington and Chelsea. Cultural Studies. ISSN 0950-2386 Clancy, Laura Jayne and Yelin, Hannah (2020) 'Meghan's Manifesto':Meghan Markle and the Co-option of Feminism. Celebrity Studies, 11 (3). pp. 372-377. ISSN 1939-2397 Clark, D. (2020) Tech and me:An autoethnographic account of digital literacy as an identity performance. Research in Learning Technology, 28. pp. 1-14. ISSN 2156-7069 Clark, Nigel (2020) Planetary Foreclosure:Climate Change Education and Lifelong Debt. UNSPECIFIED. Clark, Nigel (2020) 'Primordial Wounds':Resilience, Trauma, and the Rifted Body of the Earth. In: Resilience in the Anthropocene. Routledge. ISBN 9781138387447 Clark, Nigel (2020) Urban Granaries, Planetary Thresholds. In: The Botanical City. Jovis Verlag, Berlin, pp. 30-37. ISBN 9783868595192 Clark, Nigel Halcomb and Szerszynski, Bronislaw (2020) Planetary Social Thought:The Anthropocene Challenge to the Social Sciences. Polity Press. ISBN 9781509526345 Clarke, Christopher (2020) Dynamic motion coupling of body movement for input control. PhD thesis, UNSPECIFIED. Clarke, Christopher and Cavdir, Doga and Chiu, Patrick and Denoue, Laurent and Kimber, Don (2020) Reactive Video:Adaptive Video Playback Based on User Motion for Supporting Physical Activity. In: ACM Symposium on User Interface Software and Technology (UIST). ACM, New York, pp. 196-208. ISBN 9781450375146 Clarke, Christopher and Ehrich, Peter and Gellersen, Hans (2020) Motion Coupling of Earable Devices in Camera View.
In: MUM 2020: 19th International Conference on Mobile and Ubiquitous Multimedia. ACM, New York, 13–17. ISBN 9781450388702 Clarke, S.A. and Vilizzi, L. and Lee, L. and Wood, L.E. and Cowie, W.J. and Burt, J.A. and Mamiit, R.J.E. and Ali, H. and Davison, P.I. and Fenwick, G.V. and Harmer, R. and Skóra, M.E. and Kozic, S. and Aislabie, L.R. and Kennerley, A. and Le Quesne, W.J.F. and Copp, G.H. and Stebbing, P.D. (2020) Identifying potentially invasive non-native marine and brackish water species for the Arabian Gulf and Sea of Oman. Global Change Biology, 26 (4). pp. 2081-2092. ISSN 1354-1013 Clarkson, Jake (2020) Optimal search in discrete locations:extensions and new findings. PhD thesis, UNSPECIFIED. Clarkson, Jake and Glazebrook, Kevin David and Lin, Kyle (2020) Fast or Slow:Search in Discrete Locations with Two Search Modes. Operations Research, 68 (2). pp. 552-571. ISSN 0030-364X Claxton, Tom and Hossaini, Ryan and Wilson, Chris and Montzka, Stephen A. and Chipperfield, Martyn P. and Wild, Oliver and Bednarz, Ewa and Carpenter, Lucy and Andrews, Stephen and Hackenberg, Sina and Mühle, Jens and Oram, David and Park, Sunyoung and Park, Mi-Kyung and Atlas, Elliot and Navarro, Maria and Schauffler, Sue and Sherry, David and Vollmer, Martin and Schuck, Tanja and Engel, Andreas and Krummel, Paul B. and Maione, Michela and Arduini, Jgor and Saito, Takuya and Yokouchi, Yoko and O'Doherty, Simon and Young, Dickon and Lunder, Chris (2020) A Synthesis Inversion to Constrain Global Emissions of Two Very Short Lived Chlorocarbons: Dichloromethane, and Perchloroethylene. Journal of Geophysical Research: Atmospheres, 125 (12). ISSN 0747-7309 Clayton, E. and Munir, M. (2020) Fundamental Characteristics of Bat Interferon Systems. Frontiers in cellular and infection microbiology, 10. ISSN 2235-2988 Clerc, S. and Donlon, C. and Borde, F. and Lamquin, N. and Hunt, S.E. and Smith, D. and McMillan, M. and Mittaz, J. and Woolliams, E. and Hammond, M. and Banks, C. and Moreau, T. and Picard, B. and Raynal, M. and Rieu, P. and Guérou, A. (2020) Benefits and lessons learned from the sentinel-3 tandem phase. Remote Sensing, 12 (17). ISSN 2072-4292 Clinch, Katherine and Nixon, Anthony and Schulze, Bernd and Whiteley, Walter (2020) Pairing symmetries for Euclidean and spherical frameworks. Discrete and Computational Geometry, 64. 483–518. ISSN 0179-5376 Clinch, Katie and Kitson, Derek (2020) Constructing isostatic frameworks for the ℓ1and ℓ∞-plane. The Electronic Journal of Combinatorics, 27 (2). ISSN 1077-8926 Clune, S. and Pollastri, S. (2020) Design for food and wellbeing in future cities. In: Designing Future Cities for Wellbeing. Routledge, London, pp. 91-104. ISBN 9781138600782 Cochrane, Louis J. and Gatherer, Derek (2020) Dynamic Programming Algorithms Applied to Musical Counterpoint in Process Composition:An Example Using Henri Pousseur's Scambi. Preprints. ISSN 2310-287X Codinhoto, Ricardo and Boyko, Christopher and Darby, Antony and Watson, Margaret (2020) Buildings for Health, Cities for Wellbeing. In: Designing Future Cities for Wellbeing. Routledge, London. ISBN 9781138600782 Coelho Simoes, Raquel and Pauksztello, David (2020) Simple-minded systems and reduction for negative Calabi-Yau triangulated categories. Transactions of the American Mathematical Society, 373 (4). pp. 2463-2498. ISSN 0002-9947 Coin, Francesca (2020) Economia della caccia alle streghe. Jacobin (6). pp. 16-21. 
Cole, Justin and Bezanson, Rachel and Wel, Arjen van der and Bell, Eric and D'Eugenio, Francesco and Franx, Marijn and Gallazzi, Anna and Houdt, Josha van and Muzzin, Adam and Pacifici, Camilla and Sande, Jesse van de and Sobral, David and Straatman, Caroline and Wu, Po-Feng (2020) Stellar Kinematics and Environment at z~0.8 in the LEGA-C Survey:Massive, Slow-Rotators are Built First in Overdense Environments. Astrophysical Journal Letters, 890 (2). ISSN 2041-8205 Coleman, Helena (2020) The impact on emotional well-being:Experiences of being a palliative care volunteer. PhD thesis, UNSPECIFIED. Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Alignment of the ATLAS Inner Detector in Run 2. European Physical Journal C: Particles and Fields, 80 (12). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) CP Properties of Higgs Boson Interactions with Top Quarks in the tt¯H and tH Processes Using H→γγ with the ATLAS Detector:Physical review letters. Physical review letters, 125 (6). ISSN 1079-7114 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Combination of the W boson polarization measurements in top quark decays using ATLAS and CMS data at √s = 8 TeV:Journal of High Energy Physics. J. High Energy Phys., 2020 (8). ISSN 1126-6708 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Determination of jet calibration and energy resolution in proton–proton collisions at √s=8TeV using the ATLAS detector. European Physical Journal C: Particles and Fields, 80 (12). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Dijet Resonance Search with Weak Supervision Using √s=13 TeV pp Collisions in the ATLAS Detector. Phys Rev Lett, 125 (13). ISSN 1079-7114 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. 
and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Erratum to: Search for diboson resonances in hadronic final states in 139 fb−1 of pp collisions at √s = 13 TeV with the ATLAS detector:(Journal of High Energy Physics, (2019), 2019, 9, (91), 10.1007/JHEP09(2019)091). J. High Energy Phys., 2020 (6). ISSN 1126-6708 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Evidence for electroweak production of two jets in association with a Zγ pair in pp collisions at √s=13 TeV with the ATLAS detector. Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 803. Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Evidence for tt¯ tt¯ production in the multilepton final state in proton–proton collisions at √s=13 TeV with the ATLAS detector:European Physical Journal C. Eur. Phys. J. C, 80 (11). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Higgs boson production cross-section measurements and their EFT interpretation in the 4 ℓ decay channel at √s= 13 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 80 (10). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurement of Azimuthal Anisotropy of Muons from Charm and Bottom Hadrons in pp Collisions at √s=13 TeV with the ATLAS Detector:Physical Review Letters. Phys Rev Lett, 124 (8). ISSN 0031-9007 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurement of azimuthal anisotropy of muons from charm and bottom hadrons in Pb+Pb collisions at √sNN=5.02 TeV with the ATLAS detector. Phys Lett Sect B Nucl Elem Part High-Energy Phys, 807. ISSN 0370-2693 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. 
and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurement of differential cross sections for single diffractive dissociation in √s = 8 TeV pp collisions using the ATLAS ALFA spectrometer. Journal of High Energy Physics, 2020 (2). ISSN 1029-8479 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurement of isolated-photon plus two-jet production in pp collisions at √s = 13 TeV with the ATLAS detector:Journal of High Energy Physics. Journal of High Energy Physics, 2020 (3). ISSN 1029-8479 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurement of soft-drop jet observables in pp collisions with the ATLAS detector at √s =13 TeV:Physical Review D. Phy. Rev. D, 101 (5). ISSN 2470-0010 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurement of the Lund Jet Plane Using Charged Particles in 13 TeV Proton-Proton Collisions with the ATLAS Detector. Physical review letters, 124 (22). ISSN 1079-7114 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurement of the Z(→ ℓ + ℓ −)γ production cross-section in pp collisions at √s = 13 TeV with the ATLAS detector:Journal of High Energy Physics. J. High Energy Phys., 2020 (3). ISSN 1126-6708 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurement of the azimuthal anisotropy of charged-particle production in Xe+Xe collisions at √sNN =5.44 TeV with the ATLAS detector. Physical Review C, 101 (2). ISSN 0556-2813 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. 
and Yexley, Melissa (2020) Measurement of the transverse momentum distribution of Drell–Yan lepton pairs in proton–proton collisions at √s=13 TeV with the ATLAS detector:European Physical Journal C. Eur. Phys. J. C, 80 (7). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurement of the tt¯ production cross-section in the lepton+jets channel at √s=13 TeV with the ATLAS experiment. Physics Letters B, 810. ISSN 0370-2693 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurements of inclusive and differential cross-sections of combined tt¯ γ and tWγ production in the eμ channel at 13 TeV with the ATLAS detector. J. High Energy Phys., 2020 (9). ISSN 1126-6708 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurements of the Higgs boson inclusive and differential fiducial cross sections in the 4 ℓ decay channel at √s = 13 TeV. European Physical Journal C: Particles and Fields, 80 (10). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurements of the production cross-section for a Z boson in association with b-jets in proton-proton collisions at √s = 13 TeV with the ATLAS detector:Journal of High Energy Physics. J. High Energy Phys., 2020 (7). ISSN 1126-6708 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Measurements of top-quark pair spin correlations in the eμ channel at √s=13 TeV using pp collisions in the ATLAS detector:European Physical Journal C. Eur. Phys. J. C, 80 (8). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. 
and Yexley, Melissa (2020) Observation of the associated production of a top quark and a Z boson in pp collisions at √s = 13 TeV with the ATLAS detector:Journal of High Energy Physics. J. High Energy Phys., 2020 (7). ISSN 1126-6708 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Performance of the missing transverse momentum triggers for the ATLAS detector during Run-2 data taking:Journal of High Energy Physics. J. High Energy Phys., 2020 (8). ISSN 1126-6708 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Performance of the upgraded PreProcessor of the ATLAS Level-1 Calorimeter Trigger. Journal of Instrumentation, 15 (11). ISSN 1748-0221 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Reconstruction and identification of boosted di-τ systems in a search for Higgs boson pairs using 13 TeV proton-proton collision data in ATLAS. Journal of High Energy Physics, 2020 (11). ISSN 1029-8479 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for Heavy Resonances Decaying into a Photon and a Hadronically Decaying Higgs Boson in pp Collisions at √s =13 TeV with the ATLAS Detector. Physical review letters, 125 (25). ISSN 1079-7114 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for Higgs Boson Decays into a Z Boson and a Light Hadronically Decaying Resonance Using 13 TeV pp Collision Data from the ATLAS Detector:Physical review letters. Phys Rev Lett, 125 (22). ISSN 1079-7114 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. 
and Yexley, Melissa (2020) Search for Higgs boson decays into two new low-mass spin-0 particles in the 4b channel with the ATLAS detector using pp collisions at √s =13 TeV. Physical Review D, 102 (11). ISSN 1550-7998 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for a scalar partner of the top quark in the all-hadronic tt¯ plus missing transverse momentum final state at √s=13 TeV with the ATLAS detector:European Physical Journal C. Eur. Phys. J. C, 80 (8). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for dijet resonances in events with an isolated charged lepton using √s = 13 TeV proton-proton collision data collected by the ATLAS detector:Journal of High Energy Physics. J. High Energy Phys., 2020 (6). ISSN 1126-6708 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for direct production of electroweakinos in final states with missing transverse momentum and a Higgs boson decaying into photons in pp collisions at √s = 13 TeV with the ATLAS detector:Journal of High Energy Physics. J. High Energy Phys., 2020 (10). ISSN 1126-6708 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for direct production of electroweakinos in final states with one lepton, missing transverse momentum and a Higgs boson decaying into two b-jets in pp collisions at √s=13 TeV with the ATLAS detector:European Physical Journal C. Eur. Phys. J. C, 80 (8). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for direct stau production in events with two hadronic τ-leptons in √s =13 TeV pp collisions with the ATLAS detector. Physical Review D, 101 (3). ISSN 1550-7998 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. 
and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for electroweak production of charginos and sleptons decaying into final states with two leptons and missing transverse momentum in √s=13 TeV pp collisions using the ATLAS detector. European Physical Journal C: Particles and Fields, 80 (2). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for light long-lived neutral particles produced in pp collisions at √s=13TeV and decaying into collimated leptons or light hadrons with the ATLAS detector:European Physical Journal C. Eur. Phys. J. C, 80 (5). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for long-lived neutral particles produced in pp collisions at √s =13 TeV decaying into displaced hadronic jets in the ATLAS inner detector and muon spectrometer. Phy. Rev. D, 101 (5). ISSN 2470-0010 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for new non-resonant phenomena in high-mass dilepton final states with the ATLAS detector. J. High Energy Phys., 2020 (11). ISSN 1029-8479 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for new resonances in mass distributions of jet pairs using 139 fb−1 of pp collisions at √s = 13 TeV with the ATLAS detector. Journal of High Energy Physics, 2020 (3). ISSN 1029-8479 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for non-resonant Higgs boson pair production in the bbℓνℓν final state with the ATLAS detector in pp collisions at s=13 TeV. Physics Letters B, 801. ISSN 0370-2693 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. 
and Yexley, Melissa (2020) Search for pairs of scalar leptoquarks decaying into quarks and electrons or muons in √s = 13 TeV pp collisions with the ATLAS detector. Journal of High Energy Physics, 2020 (10). ISSN 1029-8479 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for resonances decaying into a weak vector boson and a Higgs boson in the fully hadronic final state produced in proton-proton collisions at √s =13 TeV with the ATLAS detector. Phy. Rev. D, 102 (11). ISSN 2470-0010 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for squarks and gluinos in final states with same-sign leptons and jets using 139 fb−1 of data collected with the ATLAS detector:Journal of High Energy Physics. Journal of High Energy Physics, 2020 (6). ISSN 1029-8479 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for the HH → bb¯ bb¯ process via vector-boson fusion production using proton-proton collisions at √s = 13 TeV with the ATLAS detector:Journal of High Energy Physics. J. High Energy Phys., 2020 (7). ISSN 1126-6708 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for the Higgs boson decays H → ee and H → eμ in pp collisions at s=13TeV with the ATLAS detector. Physics Letters B, 801. ISSN 0370-2693 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Search for top squarks in events with a Higgs or Z boson using 139 fb-1 of pp collision data at √s=13 TeV with the ATLAS detector:European Physical Journal C. Eur. Phys. J. C, 80 (11). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. 
and Yexley, Melissa (2020) Search for tt¯ resonances in fully hadronic final states in pp collisions at √s = 13 TeV with the ATLAS detector. Journal of High Energy Physics, 2020 (10). ISSN 1029-8479 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) Searches for electroweak production of supersymmetric particles with compressed mass spectra in √s =13 TeV pp collisions with the ATLAS detector. Phy. Rev. D, 101 (5). ISSN 2470-0010 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) A search for the Zγ decay mode of the Higgs boson in pp collisions at √s=13 TeV with the ATLAS detector:Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics. Phys Lett Sect B Nucl Elem Part High-Energy Phys, 809. ISSN 0370-2693 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Sanderswood, Izaac and Smizanska, M. and Tee, A.S. and Wharton, A.M. and Walder, J. and Whitmore, B.W. and Yexley, Melissa (2020) Search for displaced vertices of oppositely charged leptons from decays of long-lived particles in pp collisions at s=13 TeV with the ATLAS detector. Physics Letters B, 801. ISSN 0370-2693 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Sanderswood, Izaac and Yexley, Melissa (2020) Combined measurements of Higgs boson production and decay using up to 80 fb-1 of proton-proton collision data at s =13 TeV collected with the ATLAS experiment. Physical Review D, 101 (1). ISSN 1550-7998 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Sanderswood, Izaac and Yexley, Melissa (2020) Measurement of J/ψ production in association with a W ± boson with pp data at 8 TeV. Journal of High Energy Physics, 2020 (1). ISSN 1029-8479 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Sanderswood, Izaac and Yexley, Melissa (2020) Measurement of long-range two-particle azimuthal correlations in Z-boson tagged pp collisions at √s=8 and 13 TeV. 
European Physical Journal D, 80 (1). ISSN 1434-6060 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Sanderswood, Izaac and Yexley, Melissa (2020) Measurement of the tt¯ production cross-section and lepton differential distributions in eμ dilepton events from pp collisions at √s=13TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 80 (6). ISSN 1434-6044 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Sanderswood, Izaac and Yexley, Melissa (2020) Performance of electron and photon triggers in ATLAS during LHC Run 2. European Physical Journal D, 80 (1). ISSN 1434-6060 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Sanderswood, Izaac and Yexley, Melissa (2020) Search for Magnetic Monopoles and Stable High-Electric-Charge Objects in 13 Tev Proton-Proton Collisions with the ATLAS Detector. Physical review letters, 124 (3). ISSN 1079-7114 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Sanderswood, Izaac and Yexley, Melissa (2020) Search for flavour-changing neutral currents in processes with one top quark and a photon using 81 fb−1 of pp collisions at s=13TeV with the ATLAS experiment. Physics Letters B, 800. ISSN 0370-2693 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Sanderswood, Izaac and Yexley, Melissa (2020) Test of CP invariance in vector-boson fusion production of the Higgs boson in the H → ττ channel in proton–proton collisions at √s=13 TeV with the ATLAS detector. Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 805. Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Sanderswood, Izaac and Yexley, Melissa (2020) Transverse momentum and process dependent azimuthal anisotropies in √sNN=8.16 TeV p+Pb collisions with the ATLAS detector. European Physical Journal D, 80 (1). ISSN 1434-6060 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. 
and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Smizanska, M. and Tee, A.S. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Sanderswood, Izaac and Yexley, Melissa (2020) Z boson production in Pb+Pb collisions at √sNN=5.02 TeV measured by the ATLAS experiment. Physics Letters B, 802. ISSN 0370-2693 Collaboration, ATLAS and Barton, A.E. and Bertram, I.A. and Borissov, G. and Bouhova-Thacker, E.V. and Fox, H. and Henderson, R.C.W. and Jones, R.W.L. and Kartvelishvili, V. and Long, R.E. and Love, P.A. and Muenstermann, D. and Parker, A.J. and Tee, A.S. and Sanderswood, Izaac and Smizanska, M. and Walder, J. and Wharton, A.M. and Whitmore, B.W. and Yexley, Melissa (2020) ATLAS data quality operations and performance for 2015-2018 data-taking:Journal of Instrumentation. J. Instrum., 15 (4). ISSN 1748-0221 Collaboration, The T2K (2020) Publisher Correction: Constraint on the matter–antimatter symmetry-violating phase in neutrino oscillations (Nature, (2020), 580, 7803, (339-344), 10.1038/s41586-020-2177-0). Nature, 583 (7814). E16. ISSN 0028-0836 Collingridge Moore, Danielle and Payne, Sheila (2020) Palliative Care for Older People. In: Encyclopedia of Biomedical Gerontology. Elsevier Academic Press. ISBN 9780128160756 Collingridge Moore, Danielle and Payne, Sheila and Keegan, Thomas and Froggatt, Katherine (2020) Challenges of undertaking a systematic review to measure residents' length of stay in care homes. In: EAPC RN & PACE Preconference Seminar, 10th World Research Congress of the EAPC, 2018-06-24 to 2018-06-26. Collins, L.C. (2020) Working with images and emoji in the Dukki Facebook Corpus. In: Corpus Approaches to Social Media. Studies in Corpus Linguistics . John Benjamins Publishing Company, Amsterdam, pp. 175-196. ISBN 9789027207944 Collins, Luke and Semino, Elena and Demjén, Zsófia and Hardie, Andrew and Moseley, Peter and Woods, Angela and Alderson-Day, Ben (2020) A linguistic approach to the psychosis continuum:(dis)similarities and (dis)continuities in how clinical and non-clinical voice-hearers talk about their voices. Cognitive Neuropsychiatry, 25 (6). pp. 447-465. ISSN 1354-6805 Collins, Michelle and Halliday, Emma (2020) Public engagement in health research. In: Handbook of Theory and Methods in Applied Health Research. Edward Elgar Publishing Ltd., pp. 323-340. ISBN 9781785363207 Collinson, David (2020) Donald Trump, Boris Johnson and the dangers of excessive positivity. The Conversation. Collinson, David (2020) False Positives: A Pandemic of Prozac Leadership. International Leadership Association: Leadership for the Greater Good: Reflections on Today's Challenges From Around the Globe. Collinson, David (2020) Introduction to the Special Issue:Leadership and Power. Leadership, 16 (1). pp. 3-8. ISSN 1742-7150 Collinson, David (2020) Only Connect!:Exploring the Critical Dialectical Turn in Leadership Studies. Organization Theory, 1 (2). pp. 1-22. Collinson, David (2020) A Virulent Strain of Prozac Leadership. 54 Degrees (LUMS Magazine) (10). pp. 6-9. Collinson, David and Hearn, Jeff (2020) Gendering Leadership in Times of COVID: The Case of the 'Strong Man'. International Leadership Association: Leadership for the Greater Good: Reflections on Today's Challenges From Around the Globe. Collinson, David and Hearn, Jeff (2020) Trump v Biden: a duel of contrasting masculinities. The Conversation. Collison, R.F. and Raven, E.C. and Pignon, C.P.
and Long, S.P. (2020) Light, Not Age, Underlies the Maladaptation of Maize and Miscanthus Photosynthesis to Self-Shading. Frontiers in Plant Science, 11. ISSN 1664-462X Colombo, D. and Fernández-Álvarez, J. and Suso-Ribera, C. and Cipresso, P. and Valev, H. and Leufkens, T. and Sas, C. and Garcia-Palacios, A. and Riva, G. and Botella, C. (2020) The need for change:Understanding emotion regulation antecedents and consequences using ecological momentary assessment. Emotion (Washington, D.C.), 20 (1). pp. 30-36. Colombo, Desiree and Fernandez Alvarez, Javier and Suso-Ribera, Carlos and Cipresso, Pietro and Valev, Hristo and Leufkens, Tim and Sas, Corina and Garcia-Palacios, Azucena and Riva, Giuseppe and Botella, Cristina (2020) The Need for Change:Understanding Emotion Regulation Deployment and Consequences Using Ecological Momentary Assessment. Emotion, 20 (1). pp. 30-36. ISSN 1528-3542 Coltman-Patel, Tara (2020) Weight Stigma in Britain:The Linguistic Representation of Obesity in Newspapers. PhD thesis, UNSPECIFIED. Comas-Herrera, A. and Fernandez, J.-L. and Hancock, R. and Hatton, C. and Knapp, M. and McDaid, D. and Malley, J. and Wistow, G. and Wittenberg, R. (2020) COVID-19:Implications for the Support of People with Social Care Needs in England. Journal of Aging and Social Policy, 32 (4-5). pp. 365-372. Comendeiro-Maaløe, M. and Estupiñán-Romero, F. and Thygesen, L.C. and Mateus, C. and Merlo, J. and Bernal-Delgado, E. (2020) Acknowledging the role of patient heterogeneity in hospital outcome reporting:Mortality after acute myocardial infarction in five European countries. PLoS ONE, 15 (2). ISSN 1932-6203 Conconi, Paola and Facchini, Giovanni and Steinhardt, Max and Zanardi, Maurizio (2020) The political economy of trade and migration:Evidence from the U.S. Congress. Economics & Politics, 32 (2). pp. 250-278. ISSN 1468-0343 Connah, Leoni (2020) Ayodhya's Déjà Vu. South Asia Journal. Connah, Leoni (2020) Bangladesh:Why a Pandemic is More Than a Threat to Global Health. Global Discourse Blog. Connah, Leoni (2020) Black Lives Matter:PPR PhD Students Come Together to Discuss Systemic Racism. UNSPECIFIED. Connah, Leoni (2020) Contested Kashmir: A Brief History. Epoch Magazine. Connah, Leoni (2020) The Echoes of Colonialism:Kashmir's Future Remains Uncertain Following India's Revocation of its Self-Rule. New Zealand International Review, 45 (2). Connah, Leoni (2020) India-China Relations:A Turbulent Future? Modern Diplomacy. Connah, Leoni (2020) Is Covid-19 Worsening the Already Fraught Situation in Kashmir? discoversociety.org. Connah, Leoni (2020) Kashmir:New Domicile Rules Spark Fresh Anger a Year After India Removed Region's Special Status. The Conversation. Connah, Leoni (2020) Kunal Mukherjee. 2019. Conflict in India and China's Contested Borderlands: A Comparative Study. Routledge: London. 178pp. ISBN: 9780367025731. Journal of Asian Security & International Affairs, 7 (3). pp. 395-396. Conroy-Dalton, Ruth (2020) A. D. Ekstrom, H. J. Spiers, V. Bohbot and R. S. Rosenbaum (2018) Human spatial navigation. Urban Morphology, 24 (1). ISSN 1027-4278 Conroy-Dalton, Ruth (2020) View Point: Professor Ruth Dalton. ADF (Architects' Datafile). Consterdine, Erica (2020) Diaspora Policies, Consular Services and Social Protection for UK Citizens Abroad. In: Migration and Social Protection in Europe and Beyond. IMISCOE Research Series book series (IMIS), 3 (1). Springer, Cham, pp. 433-452. 
ISBN 9783030512361 Consterdine, Erica (2020) Parties matter but institutions live on:Labour's legacy on Conservative immigration policy and the neoliberal consensus. The British Journal of Politics and International Relations. ISSN 1369-1481 Conz, E. and Lamb, P.W. and De Massis, A. (2020) Practicing resilience in family firms:An investigation through phenomenography. Journal of Family Business Strategy, 11 (2). ISSN 1877-8585 Coole, Matthew and Rayson, Paul and Mariani, John (2020) LexiDB: Patterns & Methods for Corpus Linguistic Database Management. In: Proceedings of The 12th Language Resources and Evaluation Conference. European Language Resources Association (ELRA), Paris, pp. 3128-3135. ISBN 9791095546344 Coole, Matthew and Rayson, Paul and Mariani, John (2020) Unfinished Business:Construction and Maintenance of a Semantically Tagged Historical Parliamentary Corpus, UK Hansard from 1803 to the present day. In: Proceedings of the Second ParlaCLARIN Workshop. European Language Resources Association (ELRA), Paris, pp. 23-27. ISBN 9791095546474 Cooney, Nicholas and Grabowski, Jan (2020) Slices of groupoids are group-like. arxiv.org. Cooper, Jennifer (2020) Narrative Social Work. In: Developing Skills for Social Work Practice. Sage, pp. 259-268. ISBN 9781526463258 Cooper, L. and Crouvizier, M. and Edwards, S. and French, R. and Gannaway, F. and Kemp-Russell, P. and Marin-Reyes, H. and Mercer, I. and Rendell-Read, A. and Viehhauser, G. and Yeadon, W. (2020) In situ micro gas tungsten constricted arc welding of ultra-thin walled 2.275 mm outer diameter grade 2 commercially pure titanium tubing. Journal of Instrumentation, 15 (6). ISSN 1748-0221 Cooper, R. and Dunn, N. (2020) Designing Future Cities for Wellbeing:A Summary of Implications for Design. In: Designing Future Cities for Wellbeing. Routledge, London, pp. 213-223. ISBN 9780429894473 Cooper, Rachel (2020) When answers are hard to find, change the question:Asking different causal questions can enable progress. In: Psychiatry Reborn. Oxford University Press, Oxford. ISBN 9780198789697 Cooper, Rachel (2020) The concept of disorder revisited:Robustly value-laden despite change. Aristotelian Society Supplementary Volume, 94 (1). pp. 141-161. ISSN 0309-7013 Cooper, Ryan and Newton, Bentley and Fletcher, Matthew and Muller, Catherine and Turner, Kyle and Sobral, David (2020) An ALMA [CII] survey of the environments around bright Lyman-α emitters in the epoch of re-ionisation. Notices of Lancaster Astrophysics (NLUAstro), 2. pp. 53-70. Coote, S. (2020) A light touch for complex products. Nature Chemistry, 12. pp. 889-890. ISSN 1755-4330 Copeland, Simon (2020) Kin and peer contexts and militant involvement:a narrative analysis. PhD thesis, UNSPECIFIED. Copley, Jack and Moraitis, Alexis (2020) Neoliberalism's many deaths and strange non-deaths. UNSPECIFIED. Corba, Burcin Seyda and Egrioglu, Erol and Dalar, Ali Zafer (2020) AR–ARCH Type Artificial Neural Network for Forecasting. Neural Processing Letters, 51. 819–836. ISSN 1370-4621 Cork, A. and Everson, R. and Levine, M. and Koschate, M. (2020) Using computational techniques to study social influence online:Group Processes and Intergroup Relations. Group Processes Intergroup Relat., 23 (6). pp. 808-826. ISSN 1368-4302 Correa, C.M.A. and Audino, L.D. and Holdbrook, R. and Braga, R.F. and Menéndez, R. and Louzada, J. (2020) Successional trajectory of dung beetle communities in a tropical grassy ecosystem after livestock grazing removal. Biodiversity and Conservation, 29. pp. 2311-2328. 
ISSN 0960-3115 Correani, A. and De Massis, A. and Frattini, F. and Petruzzelli, A.M. and Natalicchio, A. (2020) Implementing a Digital Strategy:Learning from the Experience of Three Digital Transformation Projects. California Management Review, 62 (4). pp. 37-56. ISSN 0008-1256 Corvellec, Herve and Bohm, Steffen and Stowell, Alison and Valenzuela, Francisco (2020) Introduction to the Special Issue on the Contested realities of the Circular Economy. Culture and Organization, 26 (2). pp. 97-102. ISSN 1475-9551 Costa, Ana and Bauer, Susanne (2020) Live project: understanding the design process from the project brief to post occupancy evaluation. In: Education, Design and Practice. UNSPECIFIED, USA, pp. 305-313. ISBN 2398-9467 Costain, Deborah and Titman, Andrew and France, Anna (2020) Stroke mortality risk factors: a framework incorporating time-dependent covariate effects and multiple imputation for missing data. Working Paper. UNSPECIFIED. Costea, Bogdan and Amiridis, Konstantinos (2020) Business, ethics and the question of value. Routledge Studies in Business Ethics . Routledge. Costley, Jamie and Fanguy II, Mik and Lange, Christopher and Baldwin, Matthew (2020) The effects of video lecture viewing strategies on cognitive load. Journal of Computing in Higher Education. ISSN 1867-1233 Cottrill, Caitlin Doyle and Jacobs, Naomi and Markovic, Milan and Edwards, Peter (2020) Sensing the City:Designing for Privacy and Trust in the Internet of Things. Sustainable Cities and Society, 63. ISSN 2210-6707 Coulton, Paul (2020) Reflections on teaching design fiction as world-building. In: Speculative and Critical Design in Education: Practice and Perspectives, 2020-07-08 to 2020-07-09, online. Cousins, Eleri (2020) INSCRIPTIONS AND MATERIALITY - (A.) Petrovic, (I.) Petrovic, (E.) Thomas (edd.) The Materiality of Text – Placement, Perception, and Presence of Inscribed Texts in Classical Antiquity. (Brill Studies in Greek and Roman Epigraphy 11.) Pp. xviii 416, b/w & colour ills, maps. Leiden and Boston: Brill, 2019. Cased, €118, US$142. ISBN: 978-90-04-37550-5. Classical Review, 70 (1). pp. 11-14. Cousins, Eleri (2020) The Sanctuary at Bath in the Roman Empire. Cambridge Classical Studies . Cambridge University Press, Cambridge. ISBN 9781108493192 Cousins, Thomas (2020) Exploring the promotional advancements for practitioners in British primary school education:'Gendered micro-promotions'. PhD thesis, UNSPECIFIED. Cousins, Thomas Anthony (2020) Collegiality vs role models:gendered discourses and the 'glass escalator' in English primary schools. Early Years, 40 (1). pp. 37-51. ISSN 0957-5146 Couth, Samuel and Prendergast, Garreth and Guest, Hannah and Munro, Kevin and Moore, David and Plack, Christopher and Ginsborg, Jane and Dawes, Piers (2020) Investigating the effects of noise exposure on self-report, behavioral and electrophysiological indices of hearing damage in musicians with normal audiometric thresholds. Hearing Research, 395. ISSN 0378-5955 Coutrot, Antoine and Manley, Ed and Conroy-Dalton, Ruth and Yesiltepe, Demet and Wiener, Jan and Hölscher, Christoph and Hornberger, Michael and Spiers, Hugo (2020) Cities have a negative impact on navigation ability:evidence from 38 countries. Biorxiv. Couzoff, Panagiotis and Banerjee, Shantanu and Pawlina, Grzegorz (2020) Effectiveness of Monitoring, Managerial Entrenchment, and Corporate Cash Holdings. Working Paper. UNSPECIFIED, Lancaster.
Cowie, Jean (2020) An exploration of influences and changes in the diagnosis and management of gastro-oesophageal reflux in infants aged 0-1 year. PhD thesis, UNSPECIFIED. Cox, Andrew and Brewster, Liz (2020) Library support for student mental health and well-being in the UK:before and during the COVID 19 pandemic. The Journal of Academic Librarianship, 46 (6). Cox, Andrew and Brewster, Liz (2020) Vernacular narratives of well-being and the practice of photo-a-day. Storytelling, Self, Society: An Interdisciplinary Journal of Storytelling Studies, 16 (2). pp. 280-299. ISSN 1550-5340 Cox, S.M. and McDonald, M. and Townsend, Anne (2020) Epistemic Strategies in Ethical Review:REB Members' Experiences of Assessing Probable Impacts of Research for Human Subjects. Journal of Empirical Research on Human Research Ethics, 15 (5). pp. 383-395. ISSN 1556-2646 Craig, Jess (2020) Minor gravitational interactions as contributors to supermassive black hole growth. Masters thesis, UNSPECIFIED. Cramond, Laura and Fletcher, Ian and Rehan, Claire (2020) Experiences of clinical psychologists working in palliative care:A qualitative study. European Journal of Cancer Care, 29 (3). ISSN 0961-5423 Cranmer, Sue (2020) Disabled Children and Digital Technologies:Learning in the Context of Inclusive Education. Bloomsbury, London. ISBN 9781350002050 Cranmer, Sue (2020) Disabled children's digital use practices for learning in mainstream schools. University of Nottingham, https://zenodo.org/record/4283430#.X76gEs37Te8. Cranmer, Sue (2020) Disabled children's evolving digital use practices to support formal learning:A missed opportunity for inclusion. British Journal of Educational Technology, 51 (2). pp. 315-330. ISSN 0007-1013 Cranmer, Sue (2020) 'I like to play on the Fifa app':Disabled children's everyday experiences with digital technologies. In: Children & Disability Webinar, 2020-09-10 to 2020-09-11, Online. Crawshaw, Robert (2020) Beyond Emotion:Empathy, Social Contagion and Cultural Literacy. Open Cultural Studies, 2 (1). pp. 676-685. ISSN 2451-3474 Cremonese, G. and Capaccioni, F. and Capria, M.T. and Doressoundiram, A. and Palumbo, P. and Vincendon, M. and Massironi, M. and Debei, S. and Zusi, M. and Altieri, F. and Amoroso, M. and Aroldi, G. and Baroni, M. and Barucci, A. and Bellucci, G. and Benkhoff, J. and Besse, S. and Bettanini, C. and Blecka, M. and Borrelli, D. and Brucato, J.R. and Carli, C. and Carlier, V. and Cerroni, P. and Cicchetti, A. and Colangeli, L. and Dami, M. and Da Deppo, V. and Della Corte, V. and De Sanctis, M.C. and Erard, S. and Esposito, F. and Fantinel, D. and Ferranti, L. and Ferri, F. and Ficai Veltroni, I. and Filacchione, G. and Flamini, E. and Forlani, G. and Fornasier, S. and Forni, O. and Fulchignoni, M. and Galluzzi, V. and Gwinner, K. and Ip, W. and Jorda, L. and Langevin, Y. and Lara, L. and Leblanc, F. and Leyrat, C. and Li, Y. and Marchi, S. and Marinangeli, L. and Marzari, F. and Mazzotta Epifani, E. and Mendillo, M. and Mennella, V. and Mugnuolo, R. and Muinonen, K. and Naletto, G. and Noschese, R. and Palomba, E. and Paolinetti, R. and Perna, D. and Piccioni, G. and Politi, R. and Poulet, F. and Ragazzoni, R. and Re, C. and Rossi, M. and Rotundi, A. and Salemi, G. and Sgavetti, M. and Simioni, E. and Thomas, N. and Tommasi, L. and Turella, A. and Van Hoolst, T. and Wilson, L. and Zambon, F. and Aboudan, A. and Barraud, O. and Bott, N. and Borin, P. and Colombatti, G. and El Yazidi, M. and Ferrari, S. and Flahaut, J. and Giacomini, L. and Guzzetta, L. and Lucchetti, A.
and Martellato, E. and Pajola, M. and Slemer, A. and Tognon, G. and Turrini, D. (2020) SIMBIO-SYS:Scientific Cameras and Spectrometer for the BepiColombo Mission. Space Science Reviews, 216 (5). ISSN 0038-6308 Crombie, Zoe (2020) Ethel and Ernest's Bedroom Wall:Reflection on a Creative-Critical Work. LUX: Undergraduate Journal of Literature and Culture (4). pp. 14-19. Cronin, Anne (2020) The secrecy−transparency dynamic:a sociological reframing of secrecy and transparency for public relations research. Public Relations Inquiry, 9 (3). pp. 219-236. Cross, Mollie and Lane, Timothy and Germond-Duret, Celine (2020) World War 'V':Emissions change if Birmingham became vegetarian and contemporary attitudes towards vegetarianism. Routes: The Journal for Student Geographers, 1 (2). pp. 198-225. ISSN 2634-4815 Crossley, S. and Marsden, E. and Ellis, N. and Kormos, J. and Morgan-Short, K. and Thierry, G. (2020) Introduction of Methods Showcase Articles in Language Learning. Language Learning, 70 (1). pp. 5-10. ISSN 0023-8333 Crowe, Martyn A and Bampouras, Theodoros M and Walker-Small, Katie and Howe, Louis P (2020) Restricted Unilateral Ankle Dorsiflexion Movement Increases Interlimb Vertical Force Asymmetries in Bilateral Bodyweight Squatting. Journal of Strength and Conditioning Research, 34 (2). pp. 332-336. ISSN 1064-8011 Crowther, L.I. and Gilbert, F. (2020) The effect of agri-environment schemes on bees on Shropshire farms. Journal for Nature Conservation, 58. ISSN 1617-1381 Cruickshank, James and Guler, Hakan and Jackson, Bill and Nixon, Anthony Keith (2020) Rigidity of linearly constrained frameworks. International Mathematics Research Notices, 2020 (12). pp. 3824-3840. ISSN 1073-7928 Csala, Dénes (2020) Sparking Change:Electricity consumption, carbon emissions and working time. Working Paper. Autonomy. Cubin, Jenny (2020) Where the waves meet the sky:Virginia Woolf, Kate Bush and the expression of musical androgyny. PhD thesis, UNSPECIFIED. Cui, K. and Mali, K.S. and Wu, D. and Feng, X. and Müllen, K. and Walter, M. and De Feyter, S. and Mertens, S.F.L. (2020) Ambient Bistable Single Dipole Switching in a Molecular Monolayer. Angewandte Chemie - International Edition. ISSN 1433-7851 Culley, Christopher and Vijayakumar, Supreeta and Zampieri, Guido and Angione, Claudio (2020) A mechanism-aware and multiomic machine-learning pipeline characterizes yeast cell growth. Proceedings of the National Academy of Sciences of the United States of America, 117 (31). pp. 18869-18879. ISSN 0027-8424 Cullingham, Tasha and Kirkby, Antonia and Eccles, Fiona and Sellwood, Bill (2020) Psychological Inflexibility and Somatisation in Non-Epileptic Attack Disorder. Epilepsy and Behavior, 111. ISSN 1525-5050 Culpeper, J. and Archer, D. (2020) Shakespeare's language:Styles and meanings via the computer. Language and Literature, 29 (3). pp. 191-202. ISSN 0963-9470 Culpeper, J. and Findlay, A. (2020) National identities in the context of Shakespeare's Henry V:Exploring contemporary understandings through collocations. Language and Literature, 29 (3). pp. 203-222. ISSN 0963-9470 Culpeper, Jonathan and Kan, Qian (2020) Communicative styles, rapport and student engagement:An online peer mentoring scheme. Applied Linguistics, 41 (5). 756–786. ISSN 0142-6001 Culpeper, Jonathan and Oliver, Samuel (2020) Pragmatic noise in Shakespeare's plays. In: Voices past and present - Studies of involved, speech-related and spoken texts. Studies in Corpus Linguistics . John Benjamins Publishing Company, Amsterdam, pp. 12-29. 
ISBN 9789027207654 Curceac, S. and Atkinson, P.M. and Milne, A. and Wu, L. and Harris, P. (2020) An evaluation of automated GPD threshold selection methods for hydrological extremes across different scales. Journal of Hydrology, 585. ISSN 0022-1694 Curceac, Stelian and Atkinson, Peter M. and Milne, Alice and Wu, Lianhai and Harris, Paul (2020) Adjusting for Conditional Bias in Process Model Simulations of Hydrological Extremes:An Experiment Using the North Wyke Farm Platform. Frontiers in Artificial Intelligence, 3. ISSN 2624-8212 Cureton, Paul (2020) Back to the future:"Roads? Where we're going we don't need… roads!". UNSPECIFIED. Cureton, Paul (2020) Drone Futures:UAS for Landscape & Urban Design. Routledge, London. ISBN 9780815380511 Cureton, Paul (2020) Drone Futures:UAS in landscape and urban design. Routledge, London. ISBN 9780815380504 Cureton, Paul (2020) How drones and aerial vehicles could change cities. The Conversation. Cureton, Paul and Dunn, Nick (2020) Digital Twins of Cities and Evasive Futures. In: Shaping Smart for Better Cities. Smart Cities . Academic Press, London/ San Diego, CA / Cambridge, MA / Oxford, pp. 267-282. ISBN 9780128186367 Cutcher, L. and Dale, K. and Tyler, M. (2020) Emotion, aesthetics and sexuality at work:Theoretical challenges and future directions. Gender, Work and Organization, 27 (1). pp. 1-5. ISSN 0968-6673 Cândido, B.M. and Quinton, J.N. and James, M.R. and Silva, M.L.N. and de Carvalho, T.S. and de Lima, W. and Beniaich, A. and Eltner, A. (2020) High-resolution monitoring of diffuse (sheet or interrill) erosion using structure-from-motion. Geoderma, 375. ISSN 0016-7061 D'Eugenio, Francesco and Wel, Arjen van der and Wu, Po-Feng and Barone, Tania M. and Houdt, Josha van and Bezanson, Rachel and Straatman, Caroline M. S. and Pacifici, Camilla and Muzzin, Adam and Gallazzi, Anna and Wild, Vivienne and Sobral, David and Bell, Eric F. and Zibetti, Stefano and Mowla, Lamiya and Franx, Marijn (2020) Inverse stellar population age gradients of post-starburst galaxies at z=0.8 with LEGA-C. Monthly Notices of the Royal Astronomical Society, 497 (1). 389–404. ISSN 0035-8711 Daaoub, Abdalghani (2020) Theory of electron transport through single molecules. PhD thesis, UNSPECIFIED. Dai, ZHIXIN and Zheng, Jiwei and Zizzo, Daniel J. (2020) Theories of reasoning and focal point play with a matched non-student sample. Working Paper. The Department of Economics, Lancaster. Dajka, Jan-Claas (2020) Confronting feedback processes on degraded coral reefs. PhD thesis, UNSPECIFIED. Dajka, Jan-Claas and Woodhead, Anna and Norström, Albert and Graham, Nick and Riechers, Maraja and Nyström, Magnus (2020) Red and green loops help uncover missing feedbacks in a coral reef social–ecological system. People and Nature, 2 (3). pp. 608-618. ISSN 2575-8314 Dalcher, Darren (2020) БОЛЬШЕ ЧЕМ РЕАЛИЗАЦИЯ ПРОЕКТА:РАЗМЫШЛЕНИЯ О КОНЦЕПЦИИ ЖИЗНЕННОГО ЦИКЛА КАК О СПОСОБЕ ОРГАНИЗАЦИИ ПРОЕКТНОЙ РАБОТЫ. УПРАВЛЕНИЕ ПРОЕКТАМИ И ПРОГРАММАМИ, 62 (02). pp. 94-104. Dalcher, Darren (2020) НЕ ОГРАНИЧИВАТЬСЯ РАЗУМОМ СОЗДАТЕЛЯ:ПРИКЛЮЧЕНИЯ В ПРОЦЕССЕ СОЗДАНИЯ ЗНАНИЙ. УПРАВЛЕНИЕ ПРОЕКТАМИ И ПРОГРАММАМИ, 64 (04). pp. 286-292. Dalcher, Darren (2020) ASSUMERSI LA RESPONSABILITÀ DELLE PROPRIE AZIONI:PERCHÉ È GIUNTO IL MOMENTO DI PENSARE ALLA STEWARDSHIP. Il Project Manager (41). pp. 4-8. ISSN 2037-7363 Dalcher, Darren (2020) Expanding our risk repertoire to encompass opportunities. PM World Journal, 9 (2). ISSN 2330-4480 Dalcher, Darren (2020) In whose interest?:Repositioning the stakeholder paradox. 
PM World Journal, 9 (9). ISSN 2330-4480 Dalcher, Darren (2020) Is now a good time for a fundamental rethink of leadership? PM World Journal, 9 (4). ISSN 2330-4480 Dalcher, Darren (2020) Leadership in times of crisis:What's different now? PM World Journal, 9 (5). ISSN 2330-4480 Dalcher, Darren (2020) Reboot for purpose:Beyond the tragedy of the commons. PM World Journal, 9 (6). ISSN 2330-4480 Dalcher, Darren (2020) Reflection on resilience for mindful managers. PM World Journal, 9 (10). ISSN 2330-4480 Dalcher, Darren (2020) Spare a thought for governance:Curating a better future. PM World Journal, 9 (11). ISSN 2330-4480 Dalcher, Darren (2020) Thinking in portfolios. PM World Journal, 9 (12). ISSN 2330-4480 Dalcher, Darren (2020) The power of surge. Project (303). p. 80. ISSN 0268-8867 Dalcher, Darren and Murray-Webster, Ruth (2020) VUCA, hybrid and adaptability:Reflections on BoK7. Association for Project Management. Dales, G. and Ulger, Ali (2020) Pointwise approximate identities in Banach function algebras. Dissertationes Mathematicae (Rozprawy Matematyczne), 557. ISSN 0012-3862 Dalton, Benjamin (2020) Forms of Freedoms: Marie Darrieussecq, Catherine Malabou, and the Plasticity of Science. Dalhousie French Studies, 115. pp. 55-73. ISSN 0711-8813 Dalton, R. and Dalton, N. and Hoelscher, C. and Veddeler, C. and Krukar, J. and Wiberg, M. and SIGCHI, ACM (2020) HabiTech: Inhabiting buildings, data & technology:2020 ACM CHI Conference on Human Factors in Computing Systems, CHI EA 2020. In: CHI EA '20, 2020-04-252020-04-30. Danos, Lefteris and Halcovitch, Nathan R and Wood, Ben and Banks, Henry and Coogan, Michael P and Alderman, Nicholas and Fang, Liping and Dzurnak, Branislav and Markvart, Tom (2020) Silicon photosensitisation using molecular layers. Faraday Discussions, 222. pp. 405-423. ISSN 1359-6640 Darbyshire, Daniel and Brewster, Liz and Goodwin, Dawn and Isba, Rachel and Body, Richard (2020) Where have all the doctors gone?':A protocol for an ethnographic study of the retention problem in Emergency Medicine in the UK. BMJ Open, 10 (11). ISSN 2044-6055 Darbyshire, Daniel and Brewster, Liz and Isba, Rachel and Body, Richard and Goodwin, Dawn (2020) Retention of doctors in emergency medicine:a scoping review protocol. JBI database of systematic reviews and implementation reports, 18 (1). 154–162. ISSN 2202-4433 Darvish, Behnam and Scoville, Nicholas and Martin, Christopher and Sobral, David and Mobasher, Bahram and Rettura, Alessandro and Matthee, Jorryt and Capak, Peter and Chartab, Nima and Hemmati, Shoubaneh and Masters, Daniel and Nayyeri, Hooshang and O'Sullivan, Donal and Paulino-Afonso, Ana and Sattari, Zahra and Shahidi, Abtin and Salvato, Mara (2020) Spectroscopic confirmation of a Coma Cluster progenitor at z~2.2. The Astrophysical Journal, 892 (1). ISSN 0004-637X Darvizeh, Mohammadyasser and Yang, Jian-Bo and Eldridge, Stephen (2020) Performance Assessment of R&D-Intensive Manufacturing Companies on Dynamic Capabilities. International Journal of Strategic Decision Sciences, 11 (4). pp. 1-23. ISSN 1947-8569 Darwish, Ahmed and Alotaibi, Saud and Elgenedy, Mohamed A. (2020) Current-source Single-phase Module Integrated Inverters for PV Grid-connected Applications. IEEE Access, 8. 53082 - 53096. ISSN 2169-3536 Daryanto, Ahmad (2020) EndoS:An SPSS macro to assess endogeneity. The Quantitative Methods for Psychology, 16 (1). pp. 56-70. Daryanto, Ahmad (2020) Tutorial on Heteroskedasticity using HeteroskedasticityV3 SPSS macro. The Quantitative Methods for Psychology, 16 (5). pp. 
8-20. Das, L. and Habib, K. and Saidur, R. and Aslfattahi, N. and Yahya, S.M. and Rubbi, F. (2020) Improved thermophysical properties and energy efficiency of aqueous ionic liquid/mxene nanofluid in a hybrid pv/t solar system. Nanomaterials, 10 (7). ISSN 2079-4991 Dasgupta, Utteeyo and Mani, Subha and Sharma, Smriti and Singhal, Saurabh (2020) Social Identity, Behavior, and Personality:Evidence from India. Working Paper. Lancaster University, Department of Economics, Lancaster. Dash, J. and Behera, M.D. and Jeganathan, C. and Jha, C.S. and Sharma, S. and Lucas, R. and Khuroo, A.A. and Harris, A. and Atkinson, P.M. and Boyd, D.S. and Singh, C.P. and Kale, M.P. and Kumar, P. and Behera, S.K. and Chitale, V.S. and Jayakumar, S. and Sharma, L.K. and Pandey, A.C. and Avishek, K. and Pandey, P.C. and Mohapatra, S.N. and Varshney, S.K. (2020) India's contribution to mitigating the impacts of climate change through vegetation management. Tropical Ecology, 61. pp. 168-171. Daskalopoulou, Athanasia and Go Jefferies, Josephine and Skandalis, Alexandros (2020) Transforming technology-mediated healthcare services through strategic sense-giving. Journal of Services Marketing, 34 (7). pp. 909-920. ISSN 0887-6045 Dauden Roquet, Claudia and Sas, Corina (2020) Body Matters:Exploration of the Human Body as a Resource for the Design of Technologies for Meditation. In: DIS '20. ACM, NLD, 533–546. ISBN 9781450369749 Dauden Roquet, Claudia and Sas, Corina (2020) A Scoping Review of Interactive Mindfulness Technologies for Mental Wellbeing:Considerations from HCI and Psychology. In: 25th annual international CyberPsychology, CyberTherapy & Social Networking Conference, 2020-06-05. David, Thomas (2020) Understanding the links between soil, plants, and pollinators. PhD thesis, UNSPECIFIED. Davies, B. and Roberts, J. and Raines, C. and Lunn, J.E. (2020) Retirement of Mary Traynor, executive editor of JXB (1995-2020):Journal of Experimental Botany. J. Exp. Bot., 71 (19). pp. 5719-5720. ISSN 0022-0957 Davies, Jessica and Batista, Pedro and Janes-Bassett, Victoria and O'Riordan, Roisin and Quinton, John and Yumashev, Dmitry (2020) Soil systems as critical infrastructure:do we know enough about soil system resilience and vulnerability to secure our soils? In: European Geosciences Union General Assembly 2020, 2020-05-042020-05-08, Online. Davies, W.J. and Shen, J. (2020) Reducing the environmental footprint of food and farming with agriculture green development. Frontiers of Agricultural Science and Engineering, 7 (1). pp. 1-4. Davies, W.J. and Ward, S.E. and Wilson, A. (2020) Can crop science really help us to produce more better quality food while reducing the world-wide environmental footprint of agriculture? Frontiers of Agricultural Science and Engineering, 7 (1). pp. 27-44. Davis, N. and Jarvis, A. and Aitkenhead, M.J. and Gareth Polhill, J. (2020) Trajectories toward maximum power and inequality in resource distribution networks. PLoS ONE, 15 (3). ISSN 1932-6203 Dayrell, Carmen and Ram-Prasad, Chakravarthi and Griffith-Dickson, Gwen (2020) Bringing Corpus Linguistics into Religious Studies::Self-representation amongst various immigrant communities with religious identity. Journal of Corpora and Discourse Studies, 3. pp. 96-121. ISSN 2515-0251 Dayrell, Carmen and Semino, Elena and Crowther, Neil (2020) How we talked about social care during the 2019 General Election. UNSPECIFIED. Dayrell, Carmen and Semino, Elena and Kinloch, Karen and Baker, Paul (2020) Social care in UK public discourse. 
ESRC Centre for Corpus Approaches to Social Science (CASS), Lancaster. De Camargo, Camilla and Whiley, Lileth A. (2020) The mythologisation of key workers:occupational prestige gained, sustained... and lost? International Journal of Sociology and Social Policy, 40 (9-10). pp. 849-859. ISSN 0144-333X De Massis, Alfredo (2020) What are the five big family business challenges posed by Covid-19? Campden FB. De Massis, Alfredo and Conz, E. and Lamb, P.W. (2020) Winemakers provide lessons on resilience. FamilyBusiness.org. De Massis, Alfredo and Di Minin, Alberto and Marullo, C and Rovelli, P and Tensen, R and Carbone, A and Crupi, A (2020) How the "EU Innovation Champions" successfully absorbed and reacted to the shock caused by the COVID-19 pandemic. European Commission, Seville. De Massis, Alfredo and Kotlar, Josip (2020) Più integrazione tra strategia d'impresa e patrimonio. We Wealth. De Massis, Alfredo and Rondi, Emanuela (2020) Covid-19 and the future of family business research. Journal of Management Studies. ISSN 0022-2380 De Massis, Alfredo and Rondi, Emanuela (2020) Harry, Meghan e la lezione per le dynasty italiane. Il Sole 24 Ore. De Massis, Alfredo and Rondi, Emanuela (2020) Imprese familiari longeve. Paura della crisi? Macché. Magazine 2020/2021 des Hoteliers- und Gastwirteverbandes (HGV) mit dem Titel "Brand your future". De Massis, Alfredo and Rondi, Emanuela (2020) Strategie per innovare attraverso la tradizione. We Wealth. De Massis, Alfredo and Rondi, Emanuela (2020) Un gioco di squadra più attivo per la proprietà. WeWealth. De Massis, Alfredo and Rondi, Emanuela (2020) Uno stress test per le imprese di famiglia. We Wealth. De Massis, Alfredo and Uhlaner, Lorraine and Jorrissen, Ann and Du, Yan (2020) The potential downside of having non-family board members. FamilyBusiness.org. De Massis, Alfredo Vittorio and Kammerlander, Nadine (2020) Handbook of Qualitative Research Methods for Family Business. Edward Elgar, Cheltenham. ISBN 9781788116442 De Rycker, M. and Horn, D. and Aldridge, B. and Amewu, R.K. and Barry, C.E. and Buckner, F.S. and Cook, S. and Ferguson, M.A.J. and Gobeau, N. and Herrmann, J. and Herrling, P. and Hope, W. and Keiser, J. and Lafuente-Monasterio, M.J. and Leeson, P.D. and Leroy, D. and Manjunatha, U.H. and McCarthy, J. and Miles, T.J. and Mizrahi, V. and Moshynets, O. and Niles, J. and Overington, J.P. and Pottage, J. and Rao, S.P.S. and Read, K.D. and Ribeiro, I. and Silver, L.L. and Southern, J. and Spangenberg, T. and Sundar, S. and Taylor, C. and Van Voorhis, W. and White, N.J. and Wyllie, S. and Wyatt, P.G. and Gilbert, I.H. (2020) Setting Our Sights on Infectious Diseases. ACS Infectious Diseases, 6 (1). pp. 3-13. De Silva, Dakshina and Hubbard, Timothy and Kosmopoulou, Georgia (2020) An Evaluation of a Bidder Training Program. International Journal of Industrial Organization, 72. ISSN 0167-7187 De Silva, Dakshina and Schiller, Anita and Slechten, Aurelie and Wolk, Leonard (2020) Tiebout Sorting and Environmental Injustice. Working Paper. Lancaster University, Department of Economics, Lancaster. De Souza, Amanda and Yu, Wang and Orr, Douglas John and Carmo-Silva, Ana Elizabete and Long, Stephen (2020) Photosynthesis across African cassava germplasm is limited by Rubisco and mesophyll conductance at steady-state, but by stomatal conductance in fluctuating light. New Phytologist, 225 (6). pp. 2498-2512. 
ISSN 0028-646X De Souza, Joanna and Froggatt, Katherine and Walshe, Catherine and Gillett, Karen (2020) Perspectives of Elders and their Adult Children of Black and Minority Ethnic Heritage on End-of-Life Conversations:A Meta-ethnography. Palliative Medicine, 34 (2). pp. 195-208. ISSN 0269-2163 De Souza Ramos, Washington and Silva, Michel M. and Araujo, Edson R. and Soriano Marcolino, Leandro and Nascimento, Erickson R. (2020) Straight to the Point:Fast-forwarding Videos via Reinforcement Learning Using Textual Data. In: Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020. IEEE, pp. 10928-10937. ISBN 9781728171685 De Witte, K. and Johnes, G. and Johnes, J. and Agasisti, T. (2020) Preface to the Special issue on Efficiency in Education, Health and Other Public Services. International Transactions in Operational Research, 27 (4). pp. 1819-1820. ISSN 0969-6016 Deakins, David and Scott, Jonathan (2020) Entrepreneurship:A contemporary and global approach. Sage, London. ISBN 9781526461148 Dean, Curran and Tyfield, David (2020) Low-Carbon Transition as Vehicle of New Inequalities?:Risk-Class, the Chinese Middle Class and the Moral Economy of Misrecognition. Theory, Culture and Society, 37 (2). pp. 131-156. ISSN 0263-2764 Dean, H.J. and Boyd, R.L. (2020) Corrigendum to Deep into that darkness peering: A computational analysis of the role of depression in Edgar Allan Poe's life and death. (Journal of Affective Disorders (2020) 266 (482–491), (S0165032719322554), (10.1016/j.jad.2020.01.098)). Journal of Affective Disorders, 269. p. 208. ISSN 0165-0327 Dean, Hannah and Boyd, Ryan (2020) Deep into that Darkness Peering:A Computational Analysis of the Role of Depression in Edgar Allan Poe's Life and Death. Journal of Affective Disorders, 266. pp. 482-491. ISSN 0165-0327 Dean, N.E. and Gsell, P.-S. and Brookmeyer, R. and Crawford, F.W. and Donnelly, C.A. and Ellenberg, S.S. and Fleming, T.R. and Halloran, E.M. and Horby, P. and Jaki, T. and Krause, P.R. and Longini, I.M. and Mulangu, S. and Muyembe-Tamfum, J.-J. and Nason, M.C. and Smith, P.G. and Wang, R. and Henao-Restrepo, A.M. and De Gruttola, V. (2020) Creating a framework for conducting randomized clinical trials during disease outbreaks. New England Journal of Medicine, 382 (14). pp. 1366-1369. ISSN 0028-4793 Deed, James (2020) A re-wetted land use capability assessment for the North West of England. Masters thesis, UNSPECIFIED. Degen, Gustaf E. (2020) Exploiting diversity in the regulation of carbon assimilation to improve wheat productivity. PhD thesis, UNSPECIFIED. Degen, Gustaf E. and Worrall, Dawn and Carmo-Silva, Elizabete (2020) An isoleucine residue acts as a thermal and regulatory switch in wheat Rubisco activase. The Plant Journal, 103 (2). pp. 742-751. ISSN 0960-7412 Degerman, Dan (2020) Maladjusted to injustice? Political agency, medicalization, and the user/survivor movement. Citizenship Studies, 24 (8). pp. 1010-1029. ISSN 1362-1025 Degerman, Dan and Flinders, Matthew and Johnson, Matthew (2020) In defence of fear:COVID-19, crises and democracy. Critical Review of International Social and Political Philosophy. ISSN 1369-8230 Degryse, H. and Ioannidou, V. and Liberti, J.M. and Sturgess, J. (2020) How Do Laws and Institutions Affect Recovery Rates for Collateral? Review of Corporate Finance Studies, 9 (1). pp. 1-43. Degryse, Hans and Ioannidou, Vasso and Liberti, Jose and Sturgess, Jason (2020) How Do Laws and Institutions affect Recovery Rates on Collateral? 
Review of Corporate Finance Studies, 9 (1). pp. 1-43. Degueldre, C. and Joyce, M.J. (2020) Evidence and uncertainty for uranium and thorium abundance:A review. Progress in Nuclear Energy, 124. ISSN 0149-1970 Degueldre, C. and McGowan, S. (2020) Simulating uranium sorption onto inorganic particles:The effect of redox potential. Journal of Environmental Radioactivity, 225. ISSN 0265-931X Deignan, Alice and Semino, Elena (2020) Translating Science for Young People through Metaphor. The Translator, 25 (4). pp. 369-384. Delli, Evangelia (2020) Monolithic integration of mid-infrared III-V semiconductor materials and devices onto silicon. PhD thesis, UNSPECIFIED. Delli, Evangelia and Hodgson, Peter and Bentley, Matthew and Repiso Menendez, Eva and Craig, Adam and Lu, Qi and Beanland, Richard and Marshall, Andrew and Krier, Anthony and Carrington, Peter (2020) Mid-infrared Type-II InAs/InAsSb Quantum Wells Integrated on Silicon. Applied Physics Letters, 117 (13). ISSN 0003-6951 Demjen, Zsofia and Marszalek, Agnes and Semino, Elena and Varese, Filippo (2020) "One gives bad compliments about me, and the other one is telling me to do things" – (Im)Politeness and power in reported interactions between voice-hearers and their voices. In: Applying Linguistics in Illness and Healthcare Contexts. Contemporary Studies in Linguistics . Bloomsbury, London, pp. 17-43. ISBN 9781350057661 Demjen, Zsofia and Semino, Elena (2020) Communicating nuanced results in language consultancy:The case of cancer and the Violence metaphor. In: Professional Communication. Communicating in Professions and Organizations . Palgrave Macmillan, London, pp. 191-210. ISBN 9783030416676 Demjen, Zsofia and Semino, Elena (2020) Metaphor, Metonymy and Framing in Discourse. In: The Cambridge Handbook of Discourse Studies. Cambridge Handbooks in Language and Linguistics . Cambridge University Press, Cambridge, pp. 213-234. ISBN 9781108425148 Demmen, Jane (2020) Issues and challenges in compiling a corpus of Early Modern English plays for comparison with those of William Shakespeare. ICAME Journal, 44 (1). pp. 37-68. ISSN 1502-5462 Dempster, Paul and Onah, Daniel and Blair, Lynne (2020) Increasing academic diversity and inter-disciplinarity of Computer Science in Higher Education. In: CEP 2020 Proceedings of the 4th Conference on Computing Education Practice. ACM, New York, pp. 1-4. Deng, W. and Fang, Z. and Wang, Z. and Zhu, M. and Zhang, Y. and Tang, M. and Song, W. and Lowther, S. and Huang, Z. and Jones, K. and Peng, P. and Wang, X. (2020) Primary emissions and secondary organic aerosol formation from in-use diesel vehicle exhaust:Comparison between idling and cruise mode. Science of the Total Environment, 699. ISSN 0048-9697 Denisov, Denis and Korshunov, Dmitry and Wachtel, Vitali (2020) Renewal Theory for Transient Markov Chains with Asymptotically Zero Drift. Transactions of the American Mathematical Society, 373 (10). pp. 7253-7286. ISSN 0002-9947 Deniz, Mehmet (2020) Effectiveness of back-track mediation in resolving protracted intractable conflicts:the case of Turkish-Kurdish peace process in Oslo. PhD thesis, UNSPECIFIED. Depetris-Chauvin, Emilio and Durante, Ruben and Campante, Filipe (2020) Building Nations Through Shared Experience:Evidence from African Football. The American Economic Review, 110 (5). pp. 1572-1602. ISSN 0002-8282 Deribe, Kebede and Florence, Lyndsey and Kelemework, Abebe and Getaneh, Tigist and Tsegay, Girmay and Cano, Jorge and Giorgi, Emanuele and Newport, Melanie J. 
and Davey, Gail (2020) Developing and validating a clinical algorithm for the diagnosis of podoconiosis. Transactions of The Royal Society of Tropical Medicine and Hygiene, 114 (12). pp. 916-925. ISSN 0035-9203 Deribe, Kebede and Fronterre, Claudio and Dejene, Tariku and Biadgilign, Sibhatu and Deribew, Amare and Abdullah, Muna and Cano, Jorge (2020) Measuring the spatial heterogeneity on the reduction of vaginal fistula burden in Ethiopia between 2005 and 2016. Scientific Reports, 10 (1). ISSN 2045-2322 Derrick, G.E. (2020) Editorial-embracing how scholarly publishing can build a new research culture, post - Covid-19:Publications. Publ., 8 (2). ISSN 2304-6775 Derrick, Gemma (2020) How COVID-19 lockdowns could lead to a kinder research culture. Nature, 581 (7806). pp. 107-108. ISSN 0028-0836 Devine, James (2020) Enabling intuitive and efficient physical computing. PhD thesis, UNSPECIFIED. Dewey, Rebecca and Francis, Susan and Guest, Hannah and Prendergast, Garreth and Rebecca, Millman and Plack, Christopher and Hall, Deborah A. (2020) The association between subcortical and cortical fMRI and lifetime noise exposure in listeners with normal hearing thresholds. NeuroImage, 204. ISSN 1053-8119 Dhamgaye, V. and Laundy, D. and Baldock, S. and Moxham, T. and Sawhney, K. (2020) Correction of the X-ray wavefront from compound refractive lenses using 3D printed refractive structures. Journal of Synchrotron Radiation, 27 (6). pp. 1518-1527. ISSN 0909-0495 Dhody, Dhruv and Lee, Young and Ceccarelli, Danielle and Shin, Jongyoon and King, Daniel (2020) Hierarchical Stateful Path Computation Element (PCE). UNSPECIFIED. Di Blase, Antonietta and Vadi, Valentina (2020) The Inherent Rights of Indigenous Peoples in International Law. Roma Tre Press, Rome. ISBN 9788832136920 Di Chiara, A. (2020) Palaeosecular variations of the geomagnetic field in Africa during the holocene:a review. In: Geomagnetic Field Variations in the Past. Geological Society Special Publication . Geological Society of London, pp. 127-141. ISBN 9781786204738 Di Mitri, S. and Latina, A. and Aicheler, M. and Aksoy, A. and Alesini, D. and Burt, G. and Castilla, A. and Clarke, J. and Cortés, H.M.C. and Croia, M. and D'auria, G. and Diomede, M. and Dunning, D. and Ferrario, M. and Gallo, A. and Giribono, A. and Goryashko, V. and Mostacci, A. and Nguyen, F. and Rochow, R. and Scifo, J. and Spataro, B. and Thompson, N. and Vaccarezza, C. and Vannozzi, A. and Wu, X. and Wuensch, W. (2020) Scaling of beam collective effects with bunch charge in the compactlight free-electron laser. Photonics, 7 (4). ISSN 2304-6732 Di Paola, D.M. and Lu, Q. and Repiso, E. and Kesaria, M. and Makarovsky, O. and Krier, A. and Patanè, A. (2020) Room temperature upconversion electroluminescence from a mid-infrared In(AsN) tunneling diode. Applied Physics Letters, 116 (14). ISSN 0003-6951 Dialameh, Maryam and Hamzeh, Ali and Rahmani, Hossein (2020) DL-Reg:A Deep Learning Regularization Technique using Linear Regression. arXiv. Dialameh, Maryam and Hamzeh, Ali and Rahmani, Hossein and Radmard, Amir Reza and Dialameh, Safoura (2020) Screening COVID-19 Based on CT/CXR Images & Building a Publicly Available CT-scan Dataset of COVID-19. arxiv.org. Dickinson, Philip (2020) The Enigma of Arrival: Migrancy and Mutability. British Library. Diep, P.-T. and Talash, K. and Kasabri, V. (2020) Hypothesis:Oxytocin is a direct COVID-19 antiviral. Medical Hypotheses, 145. 
ISSN 0306-9877 Diggle, Peter and Fronterre, Claudio and Amoah, Benjamin and Giorgi, Emanuele and Stanton, Michelle (2020) Design and analysis of elimination surveys for neglected tropical diseases. Journal of Infectious Diseases, 21 (Supple). S554–S560. ISSN 0022-1899 Diggle, Peter and Giorgi, Emanuele and Atsame, Julienne and Ella, Sylvie Ntsame and Ogoussan, Kisito and Gass, Katherine (2020) A tale of two parasites:statistical modelling to support disease control programmes in Africa. Statistical Science, 35 (1). pp. 42-50. ISSN 0883-4237 Dike, Emmanuel and Ilic, Suzana and Whyatt, Duncan and Folkard, Andrew (2020) Shoreline Delineation in Complex Intertidal Environments using Sentinel-1 SAR Imagery. In: UNSPECIFIED. Diken, Bulent and Lausten, Carsten Bagge (2020) The Collector's World. Journal for Cultural Research, 24 (2). pp. 101-112. ISSN 1479-7585 Dimairo, M. and Pallmann, P. and Wason, J. and Todd, S. and Jaki, T. and Julious, S.A. and Mander, A.P. and Weir, C.J. and Koenig, F. and Walton, M.K. and Nicholl, J.P. and Coates, E. and Biggs, K. and Hamasaki, T. and Proschan, M.A. and Scott, J.A. and Ando, Y. and Hind, D. and Altman, D.G. and Group, ACE Consensus (2020) The Adaptive designs CONSORT Extension (ACE) statement:a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. BMJ, 369. ISSN 1756-1833 Dimairo, M. and Pallmann, P. and Wason, J. and Todd, S. and Jaki, T. and Julious, S.A. and Mander, A.P. and Weir, C.J. and Koenig, F. and Walton, M.K. and Nicholl, J.P. and Coates, E. and Biggs, K. and Hamasaki, T. and Proschan, M.A. and Scott, J.A. and Ando, Y. and Hind, D. and Altman, D.G. and Group, ACE Consensus (2020) The adaptive designs CONSORT extension (ACE) statement:a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. Trials, 21 (1). ISSN 1745-6215 Dimakou, Pari (2020) The role of exchange in network ties:A qualitative study of the wine industry in Greece. PhD thesis, UNSPECIFIED. Dimopoulos, K. (2020) An analytic treatment of quartic hilltop inflation. Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 809. Dimopoulos, Konstantinos (2020) Introduction to Cosmic Inflation and Dark Energy. CRC Press. ISBN 9780815386759 Dinakaran, Ranjith K. and Easom, Philip and Bouridane, Ahmed and Zhang, Li and Jiang, Richard and Mehboob, Fozia and Rauf, Abdul (2020) Deep learning based pedestrian detection at distance in smart cities. In: Intelligent Systems and Applications - Proceedings of the 2019 Intelligent Systems Conference IntelliSys Volume 2. Advances in Intelligent Systems and Computing . Springer-Verlag, GBR, pp. 588-593. ISBN 9783030295127 Dinelli, F. and Fabbri, F. and Forti, S. and Coletti, C. and Kolosov, O.V. and Pingue, P. (2020) Scanning probe spectroscopy of ws2/graphene van der waals heterostructures. Nanomaterials, 10 (12). pp. 1-11. ISSN 2079-4991 Ding, Xiaoxuan and Hou, Xiaonan and Xia, Min and Ismail, Yaser and Ye, Jianqiao (2020) Modelling Composite Failure through Machine Learning. In: 23rd International Conference on Composite Structures & 6th International Conference on Mechanics of Composites, 2020-09-012020-09-04, University of Porto,. Dipper, A. and Jones, H.E. and Bhatnagar, R. and Preston, N.J. and Maskell, N. and Clive, A.O. (2020) Interventions for the management of malignant pleural effusions:a network meta-analysis. Cochrane Database of Systematic Reviews, 4. 
ISSN 1469-493X Discua Cruz, Allan (2020) There is no need to shout to be heard!:The paradoxical nature of CSR reporting in a Latin American family SME. International Small Business Journal, 38 (3). pp. 243-267. ISSN 0266-2426 Discua Cruz, Allan and Cavalcanti Junqueira, M. Isabella (2020) Cheers! A teaching project for exploring the opportunities and challenges of internationalization in a small local Brewery. Journal of International Business Education, 15. pp. 229-248. ISSN 1649-4946 Discua Cruz, Allan and Centeno-Caffarena, Leonardo and Solano, Marcos Vega (2020) Being different matters!:A closer look into product differentiation in specialty coffee family farms in Central America. Cross Cultural & Strategic Management, 27 (2). pp. 165-188. ISSN 2059-5794 Discua Cruz, Allan and Halliday, Sue Vaux (2020) "Living the Dream":A closer look into passionate consumer-entrepreneurship in a developing Latin American country. Journal of Small Business and Entrepreneurship. ISSN 0827-6331 Disney, George and Gurrin, Lisa and Aitken, Zoe and Emerson, Eric and Milner, Allison and Kavanagh, Anne and Petrie, Dennis (2020) Hierarchical models for international comparisons:Smoking, Disability, and Social Inequality in 21 European Countries. Epidemiology, 31 (2). pp. 282-289. Dixon, John and Tredoux, Colin and Davies, Gemma and Huck, Jonny and Hocking, Bree and Sturgeon, Brendan and Whyatt, James Duncan and Jarman, Neil and Bryan, Dominic (2020) Parallel lives:Intergroup contact, threat and the segregation of everyday activity spaces. Journal of Personality and Social Psychology, 118 (3). pp. 457-480. ISSN 0022-3514 Dixon, John and Tredoux, Colin and Sturgeon, Brendan and Hocking, Bree and Davies, Gemma and Huck, Jonathan and Whyatt, Duncan and Jarman, Neil and Bryan, Dominic (2020) 'When the walls come tumbling down':The role of intergroup proximity, threat and contact in shaping attitudes towards the removal of Northern Ireland's peace walls. British Journal of Social Psychology, 59 (4). pp. 922-944. ISSN 0144-6665 Doak, Susan (2020) The influence of individual and social factors on attitudes and stigma towards deaf people. PhD thesis, UNSPECIFIED. Dobler, Dennis and Titman, Andrew Charles (2020) Dynamic inference for non-Markov transition probabilities under random right-censoring. Scandinavian Journal of Statistics, 47 (2). pp. 572-586. ISSN 0303-6898 Dobson, K.J. and Allabar, A. and Bretagne, E. and Coumans, J. and Cassidy, M. and Cimarelli, C. and Coats, R. and Connolley, T. and Courtois, L. and Dingwell, D.B. and Di Genova, D. and Fernando, B. and Fife, J.L. and Fyfe, F. and Gehne, S. and Jones, T. and Kendrick, J.E. and Kinvig, H. and Kolzenburg, S. and Lavallée, Y. and Liu, E. and Llewellin, E.W. and Madden-Nadeau, A. and Madi, K. and Marone, F. and Morgan, C. and Oppenheimer, J. and Ploszajski, A. and Reid, G. and Schauroth, J. and Schlepütz, C.M. and Sellick, C. and Vasseur, J. and von Aulock, F.W. and Wadsworth, F.B. and Wiesmaier, S. and Wanelik, K. (2020) Quantifying Microstructural Evolution in Moving Magma. Front. Earth Sci., 8. ISSN 2296-6463 Docherty, A.B. and Harrison, E.M. and Green, C.A. and Hardwick, H.E. and Pius, R. and Norman, L. and Holden, K.A. and Read, J.M. and Dondelinger, F. and Carson, G. and Merson, L. and Lee, J. and Plotkin, D. and Sigfrid, L. and Halpin, S. and Jackson, C. and Gamble, C. and Horby, P.W. and Nguyen-Van-Tam, J.S. and Ho, A. and Russell, C.D. and Dunning, J. and Openshaw, P.J.M. and Baillie, J.K. and Semple, M.G. 
(2020) Features of 20 133 UK patients in hospital with covid-19 using the ISARIC WHO Clinical Characterisation Protocol: Prospective observational cohort study. BMJ, 369. ISSN 0959-8146 Dodd, Kerry (2020) "It belongs in a museum," or Does It?:Indiana Jones, Artifactology and the Afterlives of Objects. In: Indiana Jones and the Edited Collection of Critical Essays. McFarland & Co Inc, pp. 136-150. ISBN 9781476676920 Dodd, Kerry (2020) The archaeological weird:excavating the non-human. PhD thesis, UNSPECIFIED. Dodd, L.E. and Follmann, D. and Wang, J. and Koenig, F. and Korn, L.L. and Schoergenhofer, C. and Proschan, M. and Hunsberger, S. and Bonnett, T. and Makowski, M. and Belhadi, D. and Wang, Y. and Cao, B. and Mentre, F. and Jaki, T. (2020) Endpoints for randomized controlled clinical trials for COVID-19 treatments. Clinical Trials, 17 (5). pp. 472-482. ISSN 1740-7745 Dodd, Steven and Payne, Sheila and Preston, Nancy and Walshe, Catherine (2020) Understanding the Outcomes of Supplementary Support Services in Palliative Care for Older People:A Scoping Review and Mapping Exercise. Journal of Pain and Symptom Management, 60 (2). pp. 449-459. ISSN 0885-3924 Dodd, Steven and Preston, Nancy and Payne, Sheila and Walshe, Catherine (2020) Exploring a New Model of End-of-Life Care for Older People that Operates in the Space Between the Life World and the Healthcare System:A Qualitative Case Study. International Journal of Health Policy and Management, 9 (8). pp. 344-351. ISSN 2322-5939 Doherty, Michael (2020) Comprehensibility as a rule of law requirement:the role of legal design in delivering access to law. Journal of Open Access to Law, 8 (1). ISSN 2372-7152 Doherty, Michael (2020) Legal Design and its role in legal education. Medium. Doherty, Michael and Bleasdale, Lydia and Flint, Emma (2020) Connecting Legal Education. Association of Law Teachers. Doherty, Michael and McKee, Tina and Allbon, Emily (2020) Connecting Legal Education:the Legal Design edition. Association of Law Teachers. Dokka, T. and Goerigk, M. and Roy, R. (2020) Mixed uncertainty sets for robust combinatorial optimization. Optimization Letters, 14. 1323–1337. ISSN 1862-4472 Dokka Venkata Satyanaraya, Trivikram and Garuba, Francis and Goerigk, Marc and Jacko, Peter (2020) An Efficient Approach to Distributionally Robust Network Capacity Planning. arXiv. Dokka Venkata Satyanaraya, Trivikram and Moulin, Herve and Ray, Indrajit and Sen Gupta, Sonali (2020) Equilibrium Design by Coarse Correlation in Quadratic Games. Working Paper. Lancaster University, Department of Economics, Lancaster. Donaldson, Christopher (2020) As Conchas de Ruskin. Revista Eletrônica de Arquitetura, 29. pp. 102-110. ISSN 1984-5766 Donaldson, Christopher (2020) Authorial Effects at Work in the English Lakes:The Curious Case of Tarn Hows. Nineteenth-Century Contexts, 42 (4). pp. 433-448. ISSN 0890-5495 Donaldson, Christopher (2020) A County of Refuge: Refugees in Cumbria, 1933–1941, by Rob David (CWAAS Extra Series, No. 50, £15). [Review] Donaldson, Christopher (2020) In Memoriam:Canon Rawnsley. Cumberland and Westmorland Antiquarian and Archaeological Society Newsletter (93). pp. 14-15. ISSN 2397-8392 Donaldson, Christopher (2020) A Lakes' Guide for American GIs. Cumberland and Westmorland Antiquarian and Archaeological Society Newsletter, 94. p. 4. ISSN 2397-8392 Donaldson, Christopher (2020) 'Over Sands to the Lakes':Journeys over Morecambe Bay before and after the Age of Steam. In: Sandscapes. Palgrave Macmillan, pp. 163-178. 
ISBN 9783030447793 Donaldson, Christopher (2020) Rolling at a 'jog-trot pace':Ruskin's ethics of travel. The Ruskin Review, 14 (2). 21–31. ISSN 1745-3895 Donaldson, Christopher (2020) The Ruskin Review:Thinking Fast and Slow. The Ruskin Review, 14 (2). 1–108. ISSN 1745-3895 Donaldson, Christopher (2020) The Turret at Brantwood:Ruskin's Faulty Tower? Manchester Memoirs, 157. ISSN 0265-3575 (In Press) Donaldson, Christopher (2020) Wordsworthian Undercurrents in Álvaro de Campos's 'Barrow on Furness'. The Wordsworth Circle, 51 (1). pp. 120-134. ISSN 0043-8006 Donaldson, Christopher Elliott (2020) Deep Mapping and Romanticism:'Practical' Geography in the Poetry of Sir Walter Scott. In: Romantic Cartographies. Cambridge University Press, Cambridge, 211–231. ISBN 9781108472388 Donaldson, Christopher Elliott (2020) JULIA S. CARLSON, Romantic Marks and Measures: Wordsworth's Poetry in Fields of Print (Philadelphia, PA: University of Pennsylvania Press, 2016), 368 pp. £50 hardback. ISBN 9780 812 247 879. Romanticism, 26 (3). pp. 303-305. ISSN 1354-991X Donkersley, Philip and Elsner-Adams, Emily and Maderson, Siobhan (2020) A One-Health Model for Reversing Honeybee (Apis mellifera L.) Decline. Veterinary Sciences, 7 (3). Donkersley, Philip and Robinson, Sam and Duetsch, Ella K and Gibbons, Alistair T (2020) Microbial symbioses and host nutrition. In: Microbiomes of Soils, Plants and Animals. Cambridge University Press, Cambridge, pp. 78-97. ISBN 9781108654418 Donovan, Sophie and Mao, Yuwei and Orr, Douglas and Carmo-Silva, Elizabete and McCormick, Alistair (2020) CRISPR/Cas9-mediated mutagenesis of the Rubisco small subunit family in Nicotiana tabacum. Frontiers in Genome Editing, 2. ISSN 2673-3439 Donovan, Tim and Dunn, Kirsty and Penman, Amy and Young, Robert and Reid, Vincent (2020) Fetal Eye Movements in Response to a Visual Stimulus. Brain and Behavior, 10 (8). ISSN 2162-3279 Donovan, Tim and Milan, Stephen and Wang, Ran and Banchoff, Emma and Bradley, Patrick and Crossingham, Iain (2020) Anti‐IL‐5 therapies for chronic obstructive pulmonary disease. Cochrane Database of Systematic Reviews, 2020 (12). ISSN 1469-493X Dore, Mayane (2020) Waterloo-Redfern and the Racism Rooted in Cities. UNSPECIFIED. Dotse-Gborgbortsi, W. and Tatem, A.J. and Alegana, V. and Utazi, C.E. and Ruktanonchai, C.W. and Wright, J. (2020) Spatial inequalities in skilled attendance at birth in Ghana:a multilevel analysis integrating health facility databases with household survey data. Tropical Medicine and International Health, 25 (9). pp. 1044-1054. ISSN 1360-2276 Douglas, Timothy and Keppler, Julia K. and Vandrovcova, Marta and Plnencer, Martin and Beranova, Jana and Feuereisen, Michelle and Parakhonskiy, Bogdan V and Svenskaya, Yulia and Atkin, Vsevolod and Ivanova, Anna and Ricquier, Patrick and Balcaen, Lieve and Vanhaecke, Frank and Schieber, Andreas and Bacakova, Lucie and Skirtach, Andre G. (2020) Enhancement of biomimetic enzymatic mineralization of gellan gum polysaccharide hydrogels by plant-derived gallotannins. International Journal of Molecular Sciences, 21 (7). ISSN 1422-0067 Downs, Claire (2020) The paradox of forensic care:Supporting sexual offenders. PhD thesis, UNSPECIFIED. Doyle, Simeon and Aggidis, George (2020) Advancement of Oscillating Water Column Wave Energy Technologies through Integrated Applications and Alternative Systems. International Journal of Energy and Power Engineering, 14 (12). pp. 401-412. 
Doyle, Simeon and Aggidis, George (2020) The Evolution of Integrated Applications and Alternative Systems in Oscillating Water Column Wave Energy Technologies. In: International Conference on Wave Power and Energy Technology, 2020-06-112020-06-12. Drake, John H. and Kheiri, Ahmed and Özcan, Ender and Burke, Edmund K. (2020) Recent Advances in Selection Hyper-heuristics. European Journal of Operational Research, 285 (2). pp. 405-428. ISSN 0377-2217 Dravecz, Nikolett (2020) Insulin/IGF-like signalling and brain ageing in Drosophila melanogaster. PhD thesis, UNSPECIFIED. Draycott, Jane (2020) Extracts from the Old English Herbarium; Outside the Columbarium. In: Pestilence. Lapwing Publications, Belfast. ISBN 9781916345799 Draycott, Jane (2020) Four poems:Some Children, At this latitude, Marathon, The gloves. PN Review, 46 (6). ISSN 0144-7076 Draycott, Jane (2020) The Namesake. Times Literary Supplement. Drijbooms, E. and Groen, M.A. and Alamargot, D. and Verhoeven, L. (2020) Online management of text production from pictures:a comparison between fifth graders and undergraduate students. Psychological Research, 84. 2311–2324. ISSN 0340-0727 Duan, Q. and Duan, L. and Liu, Y. and Naidu, R. and Zhang, H. and Lei, Y. (2020) A novel in-situ passive sampling technique in the application of monitoring diuron in the aquatic environment. Environmental Technology and Innovation, 20. ISSN 2352-1864 Dudzevičiūtė, U. and Smail, Ian and Swinbank, A. M. and Stach, S. M. and Almaini, O. and da Cunha, E. and An, Fang Xia and Arumugam, V. and Birkin, J. and Blain, A. W. and Chapman, S. C. and Chen, C.-C. and Conselice, C. J. and Coppin, K. E. K. and Dunlop, J. S. and Farrah, D. and Geach, J. E. and Gullberg, B. and Hartley, W. G. and Hodge, J. A. and Ivison, R. J. and Maltby, D. T. and Scott, D. and Simpson, C. J. and Simpson, J. M. and Thomson, A. P. and Walter, F. and Wardlow, J. L. and Weiss, A. and van der Werf, P. (2020) An ALMA survey of the SCUBA-2 CLS UDS field: Physical properties of 707 Sub-millimetre Galaxies. Monthly Notices of the Royal Astronomical Society, 494 (3). pp. 3828-3860. ISSN 0035-8711 Duffy, Deirdre Niamh (2020) From Feminist Anarchy to Decolonisation:Understanding Abortion Health Activism Before and After the Repeal of the 8th Amendment. Feminist Review, 124 (1). pp. 69-85. ISSN 0141-7789 Dunleavy, Lesley and Walshe, Catherine and Machin, Linda (2020) Exploring the psychological impact of life-limiting illness using the Attitude to Health Change scales:A qualitative focus group study in a hospice palliative care setting. European Journal of Cancer Care, 29 (6). ISSN 0961-5423 Dunleavy, Lesley and Walshe, Catherine and Preston, Nancy (2020) 'Necessity is the mother of invention':Specialist palliative care service innovation and practice change in response to COVID-19. Results from a multi-national survey (CovPall). medRxiv. Dunlop, Malcolm and Yang, Junying and Dong, Xiangcheng and Freeman, Mervyn and Rogers, Neil and Wild, Jim and Forsyth, Colin and Cao, Jinbin and Lühr, Hermann and Xiong, Chao (2020) Field-aligned current ordering in ground and space measurements. In: European Geosciences Union General Assembly 2020, 2020-05-042020-05-08, Online. Dunn, Kirsty and Bremner, James Gavin (2020) Investigating the social environment of the A-not-B search task. Developmental Science, 23 (3). ISSN 1363-755X Dunn, Nick (2020) Dark Design:A New Framework for Advocacy and Creativity for the Nocturnal Commons. The International Journal of Design in Society, 14 (4). pp. 19-30. 
ISSN 2325-1328 Dunn, Nick (2020) Dark Design:Reimagining Nocturnal Ambiances. In: Ambiances, Alloaesthesia: Senses, Inventions, Worlds. International Ambiances Network, USA, pp. 114-119. ISBN 9782952094870 Dunn, Nick (2020) Edges. In: Manchester. Manchester University Press, Manchester, pp. 259-262. ISBN 9781526144140 Dunn, Nick (2020) Night. In: Manchester. Manchester University Press, Manchester, pp. 42-45. ISBN 9781526144140 Dunn, Nick (2020) Place After Dark:Urban peripheries as alternative futures. In: The Routledge Handbook of Place. Routledge, London, pp. 155-167. ISBN 9781138320499 Dunn, Nick (2020) Ring Road. In: Manchester. Manchester University Press, Manchester, pp. 91-94. ISBN 9781526144140 Dunn, Nick (2020) Shadows. In: Manchester. Manchester University Press, Manchester, pp. 202-206. ISBN 9781526144140 Dunn, Nick and Blaney, Adam (2020) Responsive Megastructures:Growing Future Cities for Global Challenges. In: Rapid Cities - Responsive Architectures, 2020-11-222020-11-24, Virtual / American University. (Unpublished) Dunn, Nick and Cureton, Paul (2020) Future Cities:A Visual Guide. Bloomsbury, London. ISBN 9781350011649 Dunn, Nick and Edensor, Tim (2020) Revisiting the Dark:Diverse Encounters and Experiences. In: Rethinking Darkness. Ambiances, Atmospheres and Sensory Experiences of Spaces . Routledge, London, pp. 229-240. ISBN 9780367201159 Dunn, Ruth and Wanless, Sarah and Daunt, Francis and Harris, Michael P. and Green, Jonathan A. (2020) A year in the life of a North Atlantic seabird:behavioural and energetic adjustments during the annual cycle. Scientific Reports, 10. ISSN 2045-2322 Dunn, W.R. and Branduardi-Raymont, Graziella and Carter-Cortez, V and Campbell, A and Elsner, R and Ness, J-U and Gladstone, G. R. and Ford, P and Yao, Zhonghua and Rodriguez, P and Clark, G and Paranicas, C. and Foster, A and Baker, D and Gray, Rebecca and Badman, Sarah and Ray, Licia C and Bunce, E. J. and Snios, B and Jackman, Caitriona M. and Rae, I.J. and Kraft, Ralph P. and Rymer, A. and Lathia, S and Achilleos, N (2020) Jupiter's X-ray Emission During Solar Minimum. Journal of Geophysical Research: Space Physics, 125 (6). ISSN 2169-9402 Dunn, W.R. and Gray, Rebecca and Wibisono, A. D. and Lamy, Laurent and Louis, C. and Badman, Sarah and Branduardi-Raymont, Graziella and Elsner, R and Gladstone, G. R. and Ebert, R. W. and Ford, P and Foster, A and Tao, C. and Ray, Licia C and Yao, Z. H. and Rae, I.J. and Bunce, E. J. and Rodriguez, P. and Jackman, Caitriona M. and Nicolaou, G and Clarke, J. and Nichols, Jonathan and Elliot, H and Kraft, R (2020) Jupiter's X-ray Emission 2007 Part 2:Comparisons with UV and Radio Emissions and In-Situ Solar Wind Measurements. Journal of Geophysical Research: Space Physics, 125 (6). ISSN 2169-9402 Durcan, Rose and Rufino, Mariana and Ostle, Nick and Banegas, Natalia and Viruel, Emilce (2020) Effects of land use change and grazing on soil carbon dynamics in the semi arid Chaco, Argentina. UNSPECIFIED. Duvall, Michael and Waldron, John and Godin, Laurent and Najman, Yani (2020) Active strike-slip faults and an outer frontal thrust in the Himalayan foreland basin. Proceedings of the National Academy of Sciences of the United States of America, 117 (30). pp. 17615-17621. ISSN 0027-8424 Duñabeitia, Jon Andoni and Borragan, Maria and De Bruin, Angela and Casaponsa, Aina (2020) Changes in the sensitivity to language-specific orthographic patterns with age. Frontiers in Psychology - Language Sciences, 11. 
Dvir, Amit and Marnerides, Angelos and Dubin, Ran and Golan, Nehor and Hajaj, Chen (2020) Encrypted Video Traffic Clustering Demystified. Computers and Security, 96. ISSN 0167-4048 Dwyer, Owen and Marnerides, Angelos and Giotsas, Vasileios and Mursch, Troy (2020) Profiling IoT-based Botnet Traffic using DNS. In: 2019 IEEE Global Communications Conference (GLOBECOM). IEEE, pp. 1-6. ISBN 9781728109626 Dziadek, Michal and Douglas, Timothy and Dziadek, Kinga and Zagrajczuk, Barbara and Serafim, Andrada and Stancu, Izabela-Cristina and Cholewa-Kowalska, Katarzyna (2020) Novel whey protein isolate-based highly porous scaffolds modified with therapeutic ion-releasing bioactive glasses. Materials Letters, 261. ISSN 0167-577X Early, Jeffrey J. and Sykulski, Adam M. (2020) Smoothing and Interpolating Noisy GPS Data with Smoothing Splines. Journal of Atmospheric and Oceanic Technology, 37 (3). pp. 449-465. ISSN 0739-0572 Easom, Philip and Bouridane, Ahmed and Qiang, Feiyu and Zhang, Li and Downs, Carolyn and Jiang, Richard (2020) In-House Deep Environmental Sentience for Smart Homecare Solutions Toward Ageing Society. In: Proceedings of 2020 International Conference on Machine Learning and Cybernetics, ICMLC 2020. Proceedings - International Conference on Machine Learning and Cybernetics . UNSPECIFIED, pp. 261-266. ISBN 9780738124261 Eastham, R. and Milligan, C. and Limmer, M. (2020) Qualitative findings about stigma as a barrier to contraception use:the case of Emergency Hormonal Contraception in Britain and implications for future contraceptive interventions. European Journal of Contraception and Reproductive Health Care, 25 (5). pp. 334-338. Eastham, Rachael and Kaley, Alex (2020) "We're Talking About You, Not to You":Methodological Reflections on Public Health Research With Families With Young Children. Qualitative Health Research, 30 (12). pp. 1888-1898. ISSN 1049-7323 Easton, Catherine (2020) Autonomous Vehicles:An Analysis of the Regulatory and Legal Landscape. In: Future Law. Future Law . Edinburgh University Press, Edinburgh, pp. 313-343. ISBN 9781474417617 Ebbatson, R. (2020) 'Perpetual recurrence':The arrest of time in Decadent poetry. In: Literature and Modern Time. Palgrave Macmillan, Cham, pp. 79-101. ISBN 9783030292782 Ebrey, Rhian and Hall, Stephen and Willis, Rebecca (2020) Is Twitter Indicating a Change in MP's Views on Climate Change? Sustainability, 12 (24). ISSN 2071-1050 Eccles, Fiona and Craufurd, David and Smith, Alistair and Davies, Rhys and Glenny, Kristian and Homberger, Maximilian and Peeren, Siofra and Rogers, Dawn and Rose, Leona and Skitt, Zara and Theed, Rachael and Simpson, Jane (2020) A feasibility investigation of mindfulness-based cognitive therapy for people with Huntington's disease. Pilot and Feasibility Studies, 6. ISSN 2055-5784 Ecker, Ullrich K.H. and Butler, Lucy H. and Cooke, John and Hurlstone, Mark John and Kurz, Tim and Lewandowsky, Stephan (2020) Using the COVID-19 economic crisis to frame climate change as a secondary issue reduces mitigation support. Journal of Environmental Psychology, 70. ISSN 0272-4944 Eckley, Idris and Kirch, Claudia and Weber, Silke (2020) A novel change point approach for the detection of gas emission sources using remotely contained concentration data. Annals of Applied Statistics, 14 (3). pp. 1258-1284. ISSN 1932-6157 Edensor, Tim and Dunn, Nick (2020) Venturing into the Dark:Gloomy Multiplicities. In: Rethinking Darkness. Ambiances, Atmospheres and Sensory Experiences of Spaces (1st). Routledge, London, pp. 1-24. 
ISBN 9780367201159 Edge, Rhiannon and Isba, Rachel (2020) Interventions delivered in secondary or tertiary medical care settings to improve routine vaccination uptake in children and young people. A scoping review protocol. JBI database of systematic reviews and implementation reports. ISSN 2202-4433 Edge, Thomas A. and Baird, Donald J. and Bilodeau, Guillaume and Gagné, Nellie and Greer, Charles and Konkin, David and Newton, Glen and Séguin, Armand and Beaudette, Lee and Bilkhu, Satpal and Bush, Alexander and Chen, Wen and Comte, Jérôme and Condie, Janet and Crevecoeur, Sophie and El-Kayssi, Nazir and Emilson, Erik J.S. and Fancy, Donna Lee and Kandalaft, Iyad and Khan, Izhar U.H. and King, Ian and Kreutzweiser, David and Lapen, David and Lawrence, John and Lowe, Christine and Lung, Oliver and Martineau, Christine and Meier, Matthew and Ogden, Nicholas and Paré, David and Phillips, Lori and Porter, Teresita M. and Sachs, Joel and Staley, Zachery and Steeves, Royce and Venier, Lisa and Veres, Teodor and Watson, Cynthia and Watson, Susan and Macklin, James (2020) The Ecobiomics project:Advancing metagenomics assessment of soil health and freshwater quality in Canada. Science of the Total Environment, 710. ISSN 0048-9697 Edmonds, Fiona (2020) Britain and Brittany:Contact, Myth and History in the Early Middle Ages. The Historian, 144. pp. 36-39. ISSN 0265-1076 Edwards, A.M. and Robinson, J.P.W. and Blanchard, J.L. and Baum, J.K. and Plank, M.J. (2020) Accounting for the bin structure of data removes bias when fitting size spectra. Marine Ecology Progress Series, 636. pp. 19-33. ISSN 0171-8630 Edwards, Leo (2020) Perception, knowledge and experience of caregivers supporting Autistic individuals or persons that may be Autistic in Grenada: An exploratory study. PhD thesis, UNSPECIFIED. Edwards, Liz and Darby, Andy and Dean, Claire (2020) From Digital Nature Hybrids to Digital Naturalists:Reviving Nature Connections Through Arts, Technology and Outdoor Activities. In: Technology, Design and the Arts - Opportunities and Challenges. Springer Series on Cultural Computing . Springer International Publishing, Cham, pp. 295-314. ISBN 9783030420963 Edwards, Michaela (2020) Ethnography in applied health research. In: Handbook of theory and methods in applied health research. Edward Elgar, pp. 88-106. ISBN 9781785363207 Efremenko, Y and Fajt, L and Febbraro, M and Fischer, F and Guitart, M and Hackett, B and Hayward, C and Hodák, R and Majorovits, B and Manzanillas, L and Muenstermann, D and Öz, E and Pjatkan, R and Pohl, M and Radford, D and Rouhana, R and Schulz, O and Štekl, I and Stommel, M (2020) Use of poly(ethylene naphthalate) as a self-vetoing structural material. Journal of Physics: Conference Series, 1468 (1). ISSN 1742-6588 Egna, N. and O'Connor, D. and Stacy-Dawes, J. and Tobler, M.W. and Pilfold, N. and Neilson, K. and Simmons, B. and Davis, E.O. and Bowler, M. and Fennessy, J. and Glikman, J.A. and Larpei, L. and Lekalgitele, J. and Lekupanai, R. and Lekushan, J. and Lemingani, L. and Lemirgishan, J. and Lenaipa, D. and Lenyakopiro, J. and Lesipiti, R.L. and Lororua, M. and Muneza, A. and Rabhayo, S. and Ole Ranah, S.M. and Ruppert, K. and Owen, M. (2020) Camera settings and biome influence the accuracy of citizen science approaches to camera trap image classification. Ecology and Evolution, 10 (21). pp. 11954-11965. ISSN 2045-7758 Egrioglu, Erol and Bas, Eren and Yolcu, Ufuk and Chen, Mu Yen (2020) Picture fuzzy time series:Defining, modeling and creating a new forecasting method. 
Engineering Applications of Artificial Intelligence, 88. ISSN 0952-1976 Ehteshami, Anoushiravan and Rasheed, Amjed and Beaujouan, Juline (2020) Islam, IS and the Fragmented State:The Challenges of Political Islam in the MENA Region. Routledge, London. ISBN 9780367234867 Ekeu-Wei, Iguniwari and Blackburn, Alan (2020) Catchment-Scale Flood Modelling in Data-Sparse Regions Using Open-Access Geospatial Technology. ISPRS International Journal of Geo-Information, 9 (9). Ekeu-Wei, Iguniwari and Blackburn, Alan and Giovannettone, Jason (2020) Accounting for the effects of climate variability in regional flood frequency estimates in western Nigeria. Journal of Water Resource and Protection, 12. pp. 690-713. ISSN 1945-3094 El Abbassi, M. and Perrin, M.L. and Barin, G.B. and Sangtarash, S. and Overbeck, J. and Braun, O. and Lambert, C.J. and Sun, Q. and Prechtl, T. and Narita, A. and Müllen, K. and Ruffieux, P. and Sadeghi, H. and Fasel, R. and Calame, M. (2020) Controlled Quantum Dot Formation in Atomically Engineered Graphene Nanoribbon Field-Effect Transistors. ACS Nano, 14 (5). pp. 5754-5762. ISSN 1936-086X El Haj, Mahmoud and Alves, Paulo and Rayson, Paul and Walker, Martin and Young, Steven (2020) Retrieving, Classifying and Analysing Narrative Commentary in Unstructured (Glossy) Annual Reports Published as PDF Files. Accounting and Business Research, 50 (1). pp. 6-34. ISSN 0001-4788 El Haj, Mahmoud and Giannakopoulos, George and AbuRa'ed, Ahmed and Litvak, Marina and Pittaras, Nikiforos (2020) The Financial Narrative Summarisation Shared Task (FNS 2020). In: The First Financial Narrative Processing Workshop. UNSPECIFIED. ISBN 9791095546238 El Kashouty, Menna (2020) Development of a novel feature based manufacturability assessment system for high-volume injection moulding tool inserts. PhD thesis, UNSPECIFIED. El Menshawi, Mustafa (2020) Leaving the Muslim Brotherhood:Self, Society and the State. Middle East Today . Palgrave Macmillan, Cham. ISBN 9783030278595 El Menshawi, Mustafa (2020) The Legitimating Power of Discourse:Constructing the 1973 War under Mubarak. Middle East Journal of Culture and Communication, 13 (3). pp. 256-275. El-Haj, Mahmoud (2020) Habibi - a multi Dialect multi National Arabic Song Lyrics Corpus. In: LREC 2020, Twelfth International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA), FRA. El-Haj, Mahmoud and Rayson, Paul and Athanasakou, Vasiliki and Bouamor, Houda and Salzedo, Catherine and Giannakopoulos, George and Litvak, Marina and Pittaras, Nikiforos and Elhag, Anas and Ferradans, Sira (2020) Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation. Association for Computational Linguistics. ISBN 9781952148408 El-Haj, Mahmoud and Rutherford, Nathan and Coole, Matthew and Ezeani, Ignatius and Prentice, Sheryl and Ide, Nancy and Knight, Jo and Piao, Scott and Mariani, John and Rayson, Paul and Suderman, Keith (2020) Infrastructure for Semantic Annotation in the Genomics Domain. In: LREC 2020, Twelfth International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA), Paris, pp. 6921-6929. ISBN 9791095546344 Eldh, A.C. and Rycroft-Malone, J. and van der Zijpp, T. and McMullan, C. and Hawkes, C. (2020) Using Nonparticipant Observation as a Method to Understand Implementation Context in Evidence-Based Practice. Worldviews on Evidence-Based Nursing, 17 (3). pp. 185-192. 
ISSN 1545-102X Elek, Gabor (2020) Amenable purely infinite actions on the non-compact Cantor set. Ergodic Theory and Dynamical Systems, 40 (6). pp. 1619-1633. ISSN 0143-3857 Elgadri, Ahmed (2020) Cross-cultural Pragmatics: Apology Strategies in Libyan Arabic. In: 4th Arabic Linguistic Forum, 2020-06-302020-07-02, Online. (Unpublished) Elhabbash, Abdessalam and Nundloll, Vatsala and Elkhatib, Yehia and Blair, Gordon and Sanz Marco, Vicent (2020) An Ontological Architecture for Principled and Automated System of Systems Composition. In: 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems. ACM, 85–95. ISBN 9781450379625 Elias, Dafydd and Ooi, Gin Teng and Razi, Mohammad Fadhil Ahmad and Robinson, Samuel and Whitaker, Jeanette and McNamara, Niall (2020) Effects of Leucaena biochar addition on crop productivity in degraded tropical soils. Biomass and Bioenergy, 142. ISSN 0961-9534 Elias, Dafydd and Robinson, Samuel and Both, Sabine and Goodall, Tim and Lee, Noreen Majalap and Ostle, Nick and McNamara, Niall (2020) Soil Microbial Community and Litter Quality Controls on Decomposition Across a Tropical Forest Disturbance Gradient. Frontiers in Forests and Global Change, 3. Elias, F. and Ferreira, J. and Lennox, G.D. and Berenguer, E. and Ferreira, S. and Schwartz, G. and Melo, L.D.O. and Reis Júnior, D.N. and Nascimento, R.O. and Ferreira, F.N. and Espirito-Santo, F. and Smith, C.C. and Barlow, J. (2020) Assessing the growth and climate sensitivity of secondary forests in highly deforested Amazonian landscapes. Ecology, 101 (3). ISSN 0012-9658 Elias, G. and Díez, S. and Zhang, H. and Fontàs, C. (2020) Development of a new binding phase for the diffusive gradients in thin films technique based on an ionic liquid for mercury determination. Chemosphere, 245. ISSN 0045-6535 Elkomy, Shimaa and Ingham, Hilary and Read, Robert (2020) The Impact of Foreign Technology & Embodied R&D On Productivity in Internationally-Oriented & High-Technology Industries in Egypt, 2006-2009. Working Paper. Lancaster University, Department of Economics, Lancaster. Ellina, Sofia (2020) Analysis of the corporate rescue procedures in the Insolvency Law of the UK and Cyprus:An empirical perspective. PhD thesis, UNSPECIFIED. Elliott, Kamilla (2020) Theorizing Adaptation. Oxford University Press, New York. ISBN 9780197511176 Elliott, Kamilla Lee (2020) Ad-app-ting the Canon. In: Adapting the Canon. Transcript . Legenda, Cambridge. ISBN 9781781887080 Ellis, David and McQueenie, Ross and Williamson, Andrea and Wilson, Philip (2020) Missed appointments in healthcare systems:A national retrospective data linkage project. In: SAGE Research Methods Cases. SAGE Research Methods . Sage. ISBN 9781529723014 Ellis, Jodie (2020) Characterising gut dysbiosis in rodent models relevant to psychiatric and neurodevelopmental disorders. Masters thesis, UNSPECIFIED. Ellsworth-Krebs, Katherine (2020) 'Bigger, but not massive':Negotiating ideals of saving energy and having a 'good family home'. In: European Association of Social Anthropologists, 2020-07-20. Elphick, Camilla and Stuart, Avelie and Philpot, Richard and Walkington, Zoe and Frumkin, Lara and Zhang, Min and Levine, Mark and Price, Blaine and Pike, Graham and Nuseibeh, Bashar and Bandara, Arosha (2020) Altruism and anxiety:Engagement with online community support initiatives (OCSIs) during Covid-19 lockdown in the UK and Ireland. arXiv. ISSN 2331-8422 Elton, Daniel Mark (2020) Decay rates at infinity for solutions to periodic Schrödinger equations. 
Proceedings of the Royal Society of Edinburgh: Section A Mathematics, 150 (3). pp. 1113-1126. ISSN 0308-2105 Embrey, Iain (2020) States of nature and states of mind:a generalized theory of decision-making. Theory and Decision, 88 (1). pp. 5-35. ISSN 0040-5833 Emerson, E. and Llewellyn, G. (2020) Identifying children at risk of intellectual disability in UNICEF's multiple indicator cluster surveys:Cross-sectional survey. Disability and Health Journal. ISSN 1936-6574 Emerson, E. and Milner, A. and Aitken, Z. and Vaughan, C. and Llewellyn, G. and Kavanagh, Anne M. (2020) Exposure to discrimination and subsequent changes in self-rated health: prospective evidence from the UK's Life Opportunities Survey. Public Health, 185. pp. 176-181. ISSN 0033-3506 Emerson, Eric and Fortune, N and Aitken, N and Hatton, Chris and Stancliffe, Roger J. and Llewellyn, Gwynnyth (2020) The Wellbeing of Working-Age Adults with and without Disability in the UK:Associations with Age, Gender, Ethnicity, Partnership Status, Educational Attainment and Employment Status. Disability and Health Journal, 13 (2). ISSN 1936-6574 Emerson, Eric and Milner, A and Aitken, Zoe and Vaughan, Cathy and Llewellyn, Gwynnyth and Kavanagh, Anne M. (2020) Exposure to discrimination and subsequent changes in self-rated health:Evidence from the UK's Life Opportunities Survey. Public Health, 185. pp. 176-181. ISSN 0033-3506 Emerson, Eric and Savage, A. and Llewellyn, G (2020) Prevalence of underweight, wasting and stunting among young children with a significant cognitive delay in 47 low and middle-income countries. Journal of Intellectual Disability Research, 64 (2). pp. 93-102. ISSN 0964-2633 Emsley, Hedley and Parkes, Laura M. (2020) Seizures in the context of occult cerebrovascular disease. Epilepsy and Behavior, 104 (B). ISSN 1525-5050 Eneje, S. and Sanni, S.O. and Pereira, C.F. (2020) Engagement in a Virtual Learning on Two Social Networks of An Engineering course using the Social Network Analysis- An approach using a case study. In: 2020 IEEE Canadian Conference on Electrical and Computer Engineering, 2020-08-302020-09-02. Eng, Teck-Yong and Ozdemir, Sena and Gupta, Suraksha and Kanungo, Rama (2020) International Social Entrepreneurship and Social Value Creation in Cause-Related Marketing through Personal Relationships and Accountability. International Marketing Review, 37 (5). pp. 945-976. ISSN 0265-1335 Engawi, Duha (2020) The Advert of AI on Graphic Designers in Saudi Arabia. In: UNSPECIFIED. Engelberg, Eliyahu Zvi and Paszkiewicz, Jan and Peacock, Ruth and Lachmann, Sagy and Ashkenazy, Yinon and Wuensch, Walter (2020) Dark current spikes as an indicator of mobile dislocation dynamics under intense dc electric fields. Physical Review Accelerators and Beams, 23 (12). ISSN 2469-9888 Enrique, A. and Duffy, D. and Lawler, K. and Richards, D. and Jones, S. (2020) An internet-delivered self-management programme for bipolar disorder in mental health services in Ireland:Results and learnings from a feasibility trial. Clinical Psychology and Psychotherapy. ISSN 1063-3995 Erdogan, I. and Rondi, Emanuela and De Massis, Alfredo Vittorio (2020) Managing the tradition and innovation paradox in family firms:A family imprinting perspective. Entrepreneurship Theory and Practice, 44 (1). pp. 20-54. ISSN 1042-2587 Escamilla Molgora, Juan Manuel and Sedda, Luigi and Atkinson, Peter (2020) Biospytial:spatial graph-based computing for ecological big data. GigaScience, 9 (5). pp. 1-25. 
ISSN 2047-217X Escudero Marin, Paula (2020) Using agent-based modelling and simulation to model performance measurement in healthcare. PhD thesis, UNSPECIFIED. Esin, Iliya and Romito, Alessandro and Gefen, Yuval (2020) Detection of Quantum Interference without an Interference Pattern. Physical review letters, 125 (2). ISSN 0031-9007 Eskandari Sabzi, H. and Rivera-Díaz-del-Castillo, P.E.J. (2020) Composition and process parameter dependence of yield strength in laser powder bed fusion alloys. Materials and Design, 195. ISSN 0261-3069 Esmaeili Bidhendi, M. and Asadi, Z. and Bozorgian, A. and Shahhoseini, A. and Gabris, M.A. and Shahabuddin, S. and Khanam, R. and Saidur, R. (2020) New magnetic Co3O4/Fe3O4 doped polyaniline nanocomposite for the effective and rapid removal of nitrate ions from ground water samples. Environmental Progress and Sustainable Energy, 39 (1). ISSN 1944-7442 Esmaeilzadeh, M. and Kadkhodayan, Mehran and Mohammadi, S. and Turvey, Geoffrey (2020) Nonlinear dynamic analysis of moving bilayer plates resting on elastic foundations. Applied Mathematics and Mechanics, 41. pp. 439-458. ISSN 0253-4827 Esmenda, Joshoua Condicion and Aguila, Myrron Albert Callera and Wang, Jyh-Yang and Lee, Teik-Hui and Yang, Chi-Yuan and Lin, Kung-Hsuan and Chang-Liao, Kuei-Shu and Katz, Nadav and Kafanov, Sergey and Pashkin, Yuri and Chen, Chii-Dong (2020) Observing off-resonance motion of nanomechanical resonators as modal superposition. arxiv.org. Etchells, Timothy (2020) Language Fragments. [Performance] Ettah, Ilokugbe (2020) Unravelling the complexities of protein conformational stability using Raman spectroscopy and two-dimensional correlation analysis. PhD thesis, UNSPECIFIED. Eusuf, D.V. and England, E.L. and Charlesworth, M. and Shelton, C.L. and Thornton, S.J. (2020) Maintaining education and professional development for anaesthesia trainees during the COVID-19 pandemic:the Self-isolAting Virtual Education (SAVEd) project. British Journal of Anaesthesia, 125 (5). E432-E434. ISSN 0007-0912 Euán, Carolina and Sun, Ying (2020) Bernoulli vector autoregressive model. Journal of Multivariate Analysis, 177. p. 104599. ISSN 0047-259X Evans, Daniel (2020) New insights into the rates of soil formation and their contribution to our understanding of soil lifespans. PhD thesis, UNSPECIFIED. Evans, Daniel (2020) Saving our Soils for Future Generations. Air Water Environment International, Dorset, UK. Evans, Daniel and Davies, Jessica (2020) Urban farming:four reasons it should flourish post-pandemic. The Conversation. Evans, Daniel and Quinton, John and Davies, Jessica and Zhao, Jianlin and Govers, Gerard (2020) Soil lifespans and how they can be extended by land use and management change. Environmental Research Letters, 15 (9). ISSN 1748-9326 Evans, Joel Christopher (2020) The Mob::J. G. Ballard's Turn to the Collective. Novel: A Forum on Fiction, 53 (3). 436–451. ISSN 0029-5132 Evans, Jonathan David and Smith, Ivan (2020) Bounds on Wahl singularities from symplectic topology. Algebraic Geometry, 7 (1). pp. 59-85. ISSN 2313-1691 Evans, Nicholas H. (2020) Lanthanide-Containing Rotaxanes, Catenanes and Knots. ChemPlusChem, 85 (4). pp. 783-792. ISSN 2192-6506 Everitt, Aluna (2020) Digital Fabrication Approaches for the Design and Development of Shape-Changing Displays. PhD thesis, UNSPECIFIED. Ewing, S.R. and Menéndez, R. and Schofield, L. and Bradbury, R.B. 
(2020) Vegetation composition and structure are important predictors of oviposition site selection in an alpine butterfly, the Mountain Ringlet Erebia epiphron. Journal of Insect Conservation, 24 (3). pp. 445-457. ISSN 1366-638X Eyre, Max and Carvalho-Pereira, Ticiana and Souza, Fábio N. and Hussein, Khalil and Hacker, Kathryn P. and Serrano, Soledad and Taylor, Joshua and Reis, Mitermayer G. and Ko, Albert I. and Begon, Mike and Diggle, Peter and Costa, Federico and Giorgi, Emanuele (2020) A multivariate geostatistical framework for combining multiple indices of abundance for disease vectors and reservoirs:a case study of rattiness in a low-income urban Brazilian community. Interface, 17 (170). ISSN 1742-5689 Eyre, Max and Stanton, Michelle and Macklin, G. and Bartoníček, Z. and O'Halloran, L. and Ombede, Dieudonné R Eloundou and Chuinteu, G.D. (2020) Piloting an integrated approach for estimation of environmental risk of Schistosoma haematobium infections in pre-school-aged children and their mothers at Barombi Kotto, Cameroon. Acta Tropica, 212. ISSN 0001-706X Eze, N.D. and Mateus, C. and Cravo Oliveira Hashiguchi, T. (2020) Telemedicine in the OECD:An umbrella review of clinical and cost-effectiveness, patient experience and implementation. PLoS ONE, 15 (8). ISSN 1932-6203 Ezeani, Ignatius and Rayson, Paul and Onyenwe, Ikechukwu and Uchechukwu, Chinedu and Hepple, Mark (2020) Igbo-English Machine Translation:An Evaluation Benchmark. arXiv. Ezeani, Ignatius and Rayson, Paul and Onyenwe, Ikechukwu E. and Chinedu, Uchechukwu and Hepple, Mark (2020) Igbo-English Machine Translation:An Evaluation Benchmark. In: Eighth International Conference on Learning Representations, 2020-04-26 - 2020-04-30, Virtual. Ezegwu, Chidi (2020) Masculinity and Access to Basic Education in Nigeria. PhD thesis, UNSPECIFIED. Fabbe-Costes, Nathalie and Lechaptois, Lucie and Spring, Martin (2020) "The map is not the territory":a boundary objects perspective on supply chain mapping. International Journal of Operations and Production Management, 40 (9). pp. 1475-1497. ISSN 0144-3577 Fagan, Des (2020) Generative social distance design:The Social Distance Lab, Lancaster, UK. In: Seoul Design International Conference, 2019-11-05, South Korea. Fairbrother, Jamie and Shone, Robert and Glazebrook, Kevin and Zografos, K. G. (2020) A Stochastic Programming Model for Slot Allocation at Congested Airports. In: 62nd Annual Conference of the Operational Research Society, 2020-09-15 - 2020-09-17, Online. Fairbrother, Jamie and Zografos, K. G. and Glazebrook, Kevin (2020) A slot scheduling mechanism at congested airports which incorporates efficiency, fairness and airline preferences. Transportation Science, 54 (1). pp. 115-138. ISSN 0041-1655 Fallon, Francis and Hyman, Gavin (2020) Introduction. In: Agnosticism. Oxford University Press, pp. 1-28. ISBN 9780198859123 Fan, J. and Wang, S. and Li, H. and Yan, Z. and Zhang, Y. and Zheng, X. and Wang, P. (2020) Modeling the ecological status response of rivers to multiple stressors using machine learning:A comparison of environmental DNA metabarcoding and morphological data. Water Research, 183. ISSN 0043-1354 Fan, Wei and Porter, Catherine (2020) Reinforcement or Compensation?:Parental Responses to Children's Revealed Human Capital Levels in Ethiopia. Journal of Population Economics, 33 (1). pp. 233-270. ISSN 0933-1433 Fan, Yiyi and Stevenson, Mark (2020) A review on supply chain risk management:Definition, theory, and research agenda.
International Journal of Physical Distribution and Logistics Management, 48 (3). pp. 205-230. ISSN 0960-0035 Fan, Yiyi and Stevenson, Mark and Li, Fang (2020) Supplier Initiating Risk Management Behaviour and Supply-Side Resilience:The Effects of Interpersonal Relationships and Dependence Asymmetry in Buyer-Supplier Relationships. International Journal of Operations and Production Management, 40 (7/8). pp. 971-995. ISSN 0144-3577 Fang, H. and Lowther, S.D. and Zhu, M. and Pei, C. and Li, S. and Fang, Z. and Yu, X. and Yu, Q. and Wang, Y. and Zhang, Y. and Jones, K.C. and Wang, X. (2020) PM2.5-bound unresolved complex mixtures (UCM) in the Pearl River Delta region:Abundance, atmospheric processes and sources. Atmospheric Environment, 226. ISSN 1352-2310 Fang, Liping and Danos, Lefteris and Markvart, Tomas and Chen, Rui (2020) Observation of energy transfer at optical frequency to an ultrathin silicon waveguide. Optics Letters, 45 (16). pp. 4618-4621. ISSN 0146-9592 Fang, Z. and Tang, C.-T. and Zhu, Y. and Xiong, T. and Sinclair, F. and Hearn, J. and Mikolajczak, K.M. and Melika, G. and Stone, G.N. and Fang, S. (2020) Lithosaphonecrus edurus Fang, Melika, and Tang, a New Cynipid Inquiline Species (Hymenoptera Cynipidae: Synergini) from Sichuan, China. Proceedings of the Entomological Society of Washington, 122 (4). pp. 805-820. ISSN 0013-8797 Faniyi, A.A. and Wijanarko, K.J. and Tollitt, J. and Worthington, J.J. (2020) Helminth Sensing at the Intestinal Epithelial Barrier—A Taste of Things to Come. Frontiers in Immunology, 11. ISSN 1664-3224 Farooq-I-Azam, M. and Ni, Q. and Dong, M. (2020) Extreme values of trilateration localization error in wireless communication systems:31st IEEE Annual International Symposium on Personal, Indoor and Mobile Radio Communications, PIMRC 2020. In: 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications, 2020-08-31 - 2020-09-03. Farooq-I-Azam, M. and Ni, Q. and Dong, M. (2020) An analytical model of trilateration localization error. In: 2019 IEEE Global Communications Conference (GLOBECOM). IEEE, pp. 1-6. ISBN 9781728109626 Farooqi, A.S. and Al-Swai, B.M. and Binti Ruslan, F.H. and Mohd Zabidi, N.A. and Saidur, R. and Faua'Ad Syed Muhammad, S.A. and Abdullah, B. (2020) Syngas production via dry reforming of methane over Ni-based catalysts. In: Energy Security and Chemical Engineering Congress 17–19 July 2019, Kuala Lumpur, Malaysia. IOP Conference Series: Materials Science and Engineering . IOP Science, MYS. Farooqi, A.S. and Al-Swai, B.M. and Ruslan, F.H. and Mohd Zabidi, N.A. and Saidur, R. and Syed Muhammad, S.A.F. and Abdullah, B. (2020) Catalytic conversion of greenhouse gases (CO2 and CH4) to syngas over Ni-based catalyst:Effects of Ce-La promoters. Arabian Journal of Chemistry, 13 (6). pp. 5740-5749. ISSN 1878-5352 Farrell, Carole and Chan, E Angela and Siouta, Eleni and Walshe, Catherine and Molassiotis, Alex (2020) Communication patterns in nurse-led chemotherapy clinics:A mixed-method study. Patient Education and Counseling, 103 (8). pp. 1538-1545. ISSN 0738-3991 Faruque Aly, Hussein and Mason, Katy and Onyas, Winfred (2020) The Institutional Work of a Social Enterprise Operating in a Subsistence Marketplace:Using the Business Model as a Market-Shaping Tool. Journal of Consumer Affairs, 55 (1). pp. 31-58. ISSN 0022-0078 Fathallah, J. and Pyakurel, P. (2020) Addressing gender in energy studies. Energy Research and Social Science, 65.
ISSN 2214-6296 Fathallah, Judith (2020) Digital fanfic in negotiation:LiveJournal, Archive of Our Own, and the affordances of read–write platforms. Convergence: The International Journal of Research into New Media Technologies, 26 (4). pp. 857-873. ISSN 1354-8565 Fathallah, Judith (2020) Emo:How Fans Defined a Subculture. Fandom and Culture . University of Iowa Press, Iowa City. ISBN 978160987242 Fathallah, Nadin (2020) The development of novel experimental strategies for elucidating the role of tryptophan metabolism in the neuropsychopathology of Human African Trypanosomiasis. PhD thesis, UNSPECIFIED. Faulconbridge, James and Jones, Ian and Anable, Jillian and Marsden, Greg (2020) Work, ICT and travel in multinational corporations:the synthetic work mobility situation. New Technology, Work and Employment, 35 (2). pp. 195-214. ISSN 0268-1072 Faulconbridge, James Robert and Muzio, Daniel (2020) Karl Polanyi on strategy:the effects of culture, morality and double-movements on embedded strategy. Critical Perspectives on Accounting, 73 (Decemb). ISSN 1045-2354 Fearnhead, Paul and Rigaill, Guillem (2020) Relating and Comparing Methods for Detecting Changes in Mean. Stat, 9 (1). ISSN 2049-1573 Fearon, David (2020) Life journeys with advanced breast cancer in Mauritania:A mixed methods case study. PhD thesis, UNSPECIFIED. Fearon, David and Hughes, Sean and Brearley, Sarah (2020) Experiences of breast cancer in Arab countries:a thematic synthesis. Quality of Life Research, 29 (2). pp. 313-324. ISSN 0962-9343 Federici, L. and Aglieri Rinella, G. and Alvarez Feito, D. and Arcidiacono, R. and Biino, C. and Bonacini, S. and Ceccucci, A. and Chiozzi, S. and Cortina Gil, E. and Cotta Ramusino, A. and Degrange, J. and Fiorini, M. and Gamberini, E. and Gianoli, A. and Kaplon, J. and Kleimenova, A. and Kluge, A. and Mapelli, A. and Marchetto, F. and Migliore, E. and Minucci, E. and Morel, M. and Noël, J. and Noy, M. and Perktold, L. and Perrin-Terrin, M. and Petagna, P. and Petrucci, F. and Poltorak, K. and Romagnoli, G. and Ruggiero, G. and Velghe, B. and Wahl, H. (2020) The Gigatracker, the silicon beam tracker for the NA62 experiment at CERN. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 958. ISSN 0168-9002 Fedorenko, Olena and Khovanov, Igor and Roberts, Stephen and Guardiani, Carlo (2020) Changes in ion selectivity following asymmetrical addition of charge to the selectivity filter of bacterial sodium channels. Entropy, 22 (12). ISSN 1099-4300 Fedorenko, Olena A. and Kaufman, Igor Kh and Gibby, William A. T. and Barabash, Miraslau L. and Luchinsky, Dmitry G. and Roberts, Stephen K. and McClintock, Peter V. E. (2020) Ionic Coulomb blockade and the determinants of selectivity in the NaChBac bacterial sodium channel. Biochimica et Biophysica Acta (BBA) - Biomembranes, 1862 (9). ISSN 0005-2736 Fellowes, S. (2020) Additional Challenges to Fair Representation in Autistic Advocacy. The New Bioethics, 20 (4). pp. 44-45. ISSN 1526-5161 Fellowes, S. (2020) The Importance of Getting Kanner's Account Right in Debates over First Descriptions of Autism. Journal of Autism and Developmental Disorders, 50. 4329–4330. ISSN 0162-3257 Fellowes, Sam (2020) Scientific Perspectivism and Psychiatric Diagnoses:Respecting History and Constraining Relativism. European Journal for the Philosophy of Science, 11. Fellowes, Sam (2020) The value of categorical polythetic diagnoses in psychiatry. British Journal for the Philosophy of Science. 
ISSN 0007-0882 (In Press) Felstead, Imogen (2020) Stage(d) hands in early modern drama and culture. PhD thesis, UNSPECIFIED. Feng, Daquan and Huang, Xiaoli and Long, Litai and Qian, Gongbin and Tang, Jianhua and Cao, Yue (2020) Energy-Aware D2D Communications. In: Wiley 5G REF. Wiley. ISBN 9781119471509 Fennell, J.T. and Wilby, A. and Sobeih, W. and Paul, N.D. (2020) New understanding of the direct effects of spectral balance on behaviour in Myzus persicae. Journal of Insect Physiology, 126. ISSN 0022-1910 Ferguson, Freya and Sandy McLennan, Prof and Urbaniak, Mick and Nigel Jones, Dr and Copeland, Nikki (2020) Re-evaluation of Diadenosine Tetraphosphate (Ap4A) From a Stress Metabolite to Bona Fide Secondary Messenger. Frontiers in Molecular Biosciences, 7. Ferguson, Harry and Disney, Tom and Warwick, Lisa and Leigh, J. and Cooner, Tarsem Singh and Beddoe, Liz (2020) Hostile relationships in social work practice:Anxiety, hate and conflict in long-term work with involuntary service users. Journal of Social Work Practice. ISSN 0265-0533 Ferguson, Harry and Leigh, Jadwiga and Liz, Beddoe and Singh Cooner, Tarsem and Warwick, Lisa and Disney, Tom (2020) From snapshots of practice to a movie:Researching long-term social work and child protection by getting as close as possible to practice and organisational life. British Journal of Social Work, 50 (6). 1706–1723. ISSN 0045-3102 Ferguson, Harry and Warwick, Lisa and Singh Cooner, Tarsem and Leigh, Jadwiga and Beddoe, Liz and Disney, Tom and Plumridge, Gillian (2020) The nature and culture of social work with children and families in long-term casework:Findings from a qualitative longitudinal study. Child and Family Social Work, 25 (3). pp. 694-703. ISSN 1356-7500 Fernandes, Nuno Octavio and Thurer, Matthias and Stevenson, Mark and Carmo-Silva, Silvio (2020) Lot Synchronization in Make-to-Order Shops with Order Release Control:An Assessment by Simulation. International Journal of Production Research, 58 (21). pp. 6724-6738. ISSN 0020-7543 Fernandez-Gallardo, Alvaro and Paya, Ivan (2020) Macroprudential Policy in the Euro Area. Working Paper. Lancaster University, Department of Economics, Lancaster. Fernando, Chitru and Hoelscher, Seth and Raman, Vikas (2020) The Informativeness of Derivatives Use:Evidence from Corporate Disclosure through Public Announcements. Journal of Banking and Finance. ISSN 0378-4266 Fernández-Aceves, Hervin (2020) County and Nobility in Norman Italy:Aristocratic Agency in the Kingdom of Sicily. Bloomsbury, London. ISBN 9781350133228 Fernández-Aceves, Hervin (2020) Michelle Hobart (Ed.), A Companion to Sardinian History, 500–1500, Leiden-Boston (Brill) 2017 (Brill's Companions to European History 11), XXX, 651 pp., ill., ISBN 978- 90-04-34123-4, € 249. Quellen und Forschungen aus italienischen Archiven und Bibliotheken, 100 (1). 639–640. ISSN 0079-9068 Fernández-Aceves, Hervin (2020) What can we learn from the Semantic Web to 'revamp' our Historical Research? EPOCH Magazine, 2. Ferracci, Valerio and Ashworth, Kirsti and Harris, Neil and Bolas, Conor and Jones, Roderic and Mahli, Yadvinder and King, Thomas and Otu-Larbi, Frederick (2020) Continuous isoprene measurements in a UK temperate forest for a whole growing season:effects of drought stress during the 2018 heatwave. Geophysical Research Letters, 47 (15). ISSN 0094-8276 Ferraresi, Massimiliano and Migali, Giuseppe and Nordi, Francesca and Rizzo, Leonzio and Secomandi, Riccardo (2020) Interazione spaziale nella spesa dei comuni Italiani:Un'analisi empirica. 
Scienze regionali, 19 (2). pp. 249-273. ISSN 1720-3929 Ferreday, D. (2020) 'No one is trash, no one is garbage, no one is cancelled':the cultural politics of trauma, recovery and rage in RuPaul's Drag Race. Celebrity Studies, 11 (4). pp. 464-478. ISSN 1939-2397 Ferrer Huerta, Miriam (2020) The Graphene Ring Nanoelectrode (GRiN) and its Application as an Electroanalytical Sensor for Environmental Monitoring. PhD thesis, UNSPECIFIED. Ferretti, Francesco and Jacoby, David M. P. and Pfleger, Mariah O. and White, Timothy D. and Dent, Felix and Micheli, Fiorenza and Rosenberg, Andrew A. and Crowder, Larry B. and Block, Barbara A. (2020) Shark fin trade bans and sustainable shark fisheries. Conservation Letters, 13 (3). ISSN 1755-263X Fiddler, Allyson (2020) 'In das Drama zurückgeworfen'.:Zu politischen und stilistischen Fragen bei Marlene Streeruwitz. In: Entwicklungen der Dramatik und Formen des Theaters in Österreich seit den 1960er Jahren. Germanistische Reihe, 93 . Innsbruck University Press, Innsbruck, pp. 85-99. ISBN 9783901064562 Fiege, Karen (2020) Emotional safety and identity expression within online learning environments in higher education:Insights from a Canadian college. PhD thesis, UNSPECIFIED. Field, J.L. and Richard, T.L. and Smithwick, E.A.H. and Cai, H. and Laser, M.S. and LeBauer, D.S. and Long, S.P. and Paustian, K. and Qin, Z. and Sheehan, J.J. and Smith, P. and Wang, M.Q. and Lynd, L.R. (2020) Robust paths to net greenhouse gas mitigation and negative emissions via advanced biofuels. Proceedings of the National Academy of Sciences of the United States of America, 117 (36). pp. 21968-21977. ISSN 0027-8424 Field, M and Puddephatt, JA and Goodwin, L and Owens, L and Reaves, D and Holmes, J (2020) Benefits of temporary alcohol restriction:a feasibility randomized trial. Pilot and Feasibility Studies, 6. ISSN 2055-5784 Fielding-Redpath, Ellie (2020) The Cult and Contemporary American Politics in Ubisoft's Far Cry 5 (2018). Implicit Religion, 23 (1). pp. 5-25. ISSN 1463-9955 Fildes, R. (2020) Learning from forecasting competitions. International Journal of Forecasting, 36 (1). pp. 186-188. ISSN 0169-2070 Fildes, Robert and Goodwin, Paul (2020) Stability and innovation in the use of forecasting systems:a case study in a supply-chain company. Working Paper. Department of Management Science, Lancaster University, Lancaster. Fildes, Robert and Schaer, Oliver and Svetunkov, Ivan and Yusupova, Alisa (2020) Survey: What's new in forecasting software? Operations Research Management Science Today, 47 (4). ISSN 1085-1038 Filipska, Gudrun (2020) 'The Arts Territory Exchange and the S Project /alternative journeys'. In: The Artist's Journey # 3: Improfessional Practices. UNSPECIFIED. Filipska, Gudrun (2020) OFF-SHORE. In: MUCK. UNSPECIFIED. Filipska, Gudrun and Butler, Carly (2020) S Project. In: Walking Bodies. Triarchy Press. ISBN 9781913743093 Findlay, A. (2020) Epilogues and last words in Shakespeare:Exploring patterns in a small corpus. Language and Literature, 29 (3). pp. 327-346. ISSN 0963-9470 Firn, J. and McGree, J.M. and Harvey, E. and Flores-Moreno, H. and Schütz, M. and Buckley, Y.M. and Borer, E.T. and Seabloom, E.W. and La Pierre, K.J. and MacDougall, A.M. and Prober, S.M. and Stevens, C.J. and Sullivan, L.L. and Porter, E. and Ladouceur, E. and Allen, C. and Moromizato, K.H. and Morgan, J.W. and Harpole, W.S. and Hautier, Y. and Eisenhauer, N. and Wright, J.P. and Adler, P.B. and Arnillas, C.A. and Bakker, J.D. and Biederman, L. and Broadbent, A.A.D. and Brown, C.S. 
and Bugalho, M.N. and Caldeira, M.C. and Cleland, E.E. and Ebeling, A. and Fay, P.A. and Hagenah, N. and Kleinhesselink, A.R. and Mitchell, R. and Moore, J.L. and Nogueira, C. and Peri, P.L. and Roscher, C. and Smith, M.D. and Wragg, P.D. and Risch, A.C. (2020) Author Correction: Leaf nutrients, not specific leaf area, are consistent indicators of elevated nutrient inputs (Nature Ecology & Evolution, (2019), 3, 3, (400-406), 10.1038/s41559-018-0790-1). Nature Ecology and Evolution, 4. pp. 886-891. ISSN 2397-334X Fisch, Alex (2020) Novel methods for anomaly detection. PhD thesis, UNSPECIFIED. Fisch, Alex and Grose, Daniel and Eckley, Idris and Fearnhead, Paul and Bardwell, Lawrence (2020) RobKF: Innovative and/or Additive Outlier Robust Kalman Filtering. UNSPECIFIED. Fisch, Alex and Grose, Daniel and Eckley, Idris A. and Fearnhead, Paul and Bardwell, Lawrence (2020) anomaly: Detection of Anomalous Structure in Time Series Data. arxiv.org. Fish, Adam Richard (2020) Drones:Visual Anthropology from the Air. In: Handbook of Ethnographic Film and Video. Routledge. ISBN 9780367185824 Fisher, Naomi and Robinson, Heather (2020) Understanding effect and effectiveness of interventions:trials and other evaluative study designs in applied health research. In: Handbook of theory and methods in applied health research. Edward Elgar, Cheltenham, pp. 232-250. ISBN 9781785363207 Fitsiou, Eleni (2020) Molecular dynamics simulations of tight junction proteins. PhD thesis, UNSPECIFIED. Fitton, D. and Cheverst, K. and Read, Janet (2020) Yayy! You Have a New Notification:Co-designing Multi-device Locative Media Experiences with Young People. In: Thematic Area on Human Computer Interaction, HCI 2020, held as part of the 22nd International Conference on Human-Computer Interaction, HCII 2020. Springer, DNK, pp. 247-233. ISBN 9783030490584 Fitzpatrick, Claire (2020) Reconsidering the Care-Crime Connection in a Climate of Crisis. Child and Family Law Quarterly, 32 (2). pp. 103-118. ISSN 1358-8184 Fleming, Susannah and Nicholson, Brian D and Bhuiya, Afsana and de Lusignan, Simon and Hirst, Yasemin and Hobbs, Richard and Perera, Rafael and Sherlock, Julian and Yonova, Ivelina and Bankhead, Clare (2020) CASNET2:evaluation of an electronic safety netting cancer toolkit for the primary care electronic health record: protocol for a pragmatic stepped-wedge RCT. BMJ Open, 10 (8). e038562. ISSN 2044-6055 Fletcher, L.N. and Simon, A.A. and Hofstadter, M.D. and Arridge, C.S. and Cohen, I.J. and Masters, A. and Mandt, K. and Coustenis, A. (2020) Ice giant system exploration in the 2020s:an introduction. Philosophical transactions. Series A, Mathematical, physical, and engineering sciences, 378 (2187). Flinders, Matthew and Degerman, Dan and Johnson, Matthew (2020) Dominic Cummings and Boris Johnson have lost control of the fear factor. The Conversation. Fokianos, Konstantinos and Stove, Bard and Tjostheim, Dag and Doukhan, Paul (2020) Multivariate count autoregression. Bernoulli, 26 (1). pp. 471-499. ISSN 1350-7265 Foley, Ronan and Bell, Sarah and Gittens, Heil and Grove, Hannah and Kaley, Alex and McLauchlan, Anna and Osbourne, Tess and Power, Andrew (2020) 'Disciplined research in undisciplined settings':Critical explorations of In-Situ and Mobile Methodologies in Geographies of Health and Wellbeing. Area, 52 (3). pp. 514-522. ISSN 0004-0894 Folkersen, L. and Gustafsson, S. and Wang, Q. and Hansen, D.H. and Hedman, Å.K. and Schork, A. and Page, K. and Zhernakova, D.V. and Wu, Y. and Peters, J. and Eriksson, N. and Bergen, S.E. 
and Boutin, T.S. and Bretherick, A.D. and Enroth, S. and Kalnapenkis, A. and Gådin, J.R. and Suur, B.E. and Chen, Y. and Matic, L. and Gale, J.D. and Lee, J. and Zhang, W. and Quazi, A. and Ala-Korpela, M. and Choi, S.H. and Claringbould, A. and Danesh, J. and Davey Smith, G. and de Masi, F. and Elmståhl, S. and Engström, G. and Fauman, E. and Fernandez, C. and Franke, L. and Franks, P.W. and Giedraitis, V. and Haley, C. and Hamsten, A. and Ingason, A. and Johansson, Å. and Joshi, P.K. and Lind, L. and Lindgren, C.M. and Lubitz, S. and Palmer, T. and Macdonald-Dunlop, E. and Magnusson, M. and Melander, O. and Michaelsson, K. and Morris, A.P. and Mägi, R. and Nagle, M.W. and Nilsson, P.M. and Nilsson, J. and Orho-Melander, M. and Polasek, O. and Prins, B. and Pålsson, E. and Qi, T. and Sjögren, M. and Sundström, J. and Surendran, P. and Võsa, U. and Werge, T. and Wernersson, R. and Westra, H.-J. and Yang, J. and Zhernakova, A. and Ärnlöv, J. and Fu, J. and Smith, J.G. and Esko, T. and Hayward, C. and Gyllensten, U. and Landen, M. and Siegbahn, A. and Wilson, J.F. and Wallentin, L. and Butterworth, A.S. and Holmes, M.V. and Ingelsson, E. and Mälarstig, A. (2020) Genomic and drug target evaluation of 90 cardiovascular proteins in 30,931 individuals. Nature Metabolism, 2 (10). pp. 1135-1148. Follis, Karolina (2020) The Gray Zone: Sovereignty, Human Smuggling, and Undercover Police Investigation in Europe. Gregory Feldman. Stanford, CA: Stanford University Press, 2019. 240 pp. American Ethnologist, 47 (3). pp. 321-322. ISSN 0094-0496 Follis, Karolina and Follis, Luca and Burns, Nicola (2020) The Disappearing Patient:Visibility, Mobility and Infectious Disease. In: Social Disappearance. Dossiers of the Forum of Transregionale Studien . Forum Transregionale Studien, Berlin, pp. 176-187. Follis, Luca and Fish, Adam Richard (2020) Hacker States. The Information Society Series . MIT Press, Cambridge, Mass.. ISBN 9780262043601 Fong, Siao Yuong (2020) Imagining film censorship in Singapore:The case of Sex.Violence.FamilyValues. Asian Cinema, 31 (1). pp. 77-98. ISSN 1059-440X Fong, Siao Yuong and Ng, How Wee (2020) Unpacking the 'Singapore New Wave'. Asian Cinema, 31 (1). pp. 3-15. ISSN 1059-440X Fonseca Braga, Mariana and Romeiro Filho, Eduardo and L.O. Mendonça, Rosângela Míriam and Gomes Ribeiro de Oliveira, Lorena and Galvão Guimarães Pereira, Haddon (2020) Design for Resilience:Mapping the Needs of Brazilian Communities to Tackle COVID-19 Challenges. Strategic Design Research Journal, 13 (3). pp. 374-386. ISSN 1984-2988 Forber, Kirsty and Rothwell, Shane and Metson, Genevieve and Jarvie, Helen and Withers, Paul (2020) Plant-based diets add to the wastewater phosphorus burden. Environmental Research Letters, 15 (9). ISSN 1748-9326 Ford, Stephen and Atkinson, Michael P. and Glazebrook, Kevin and Jacko, Peter (2020) On the dynamic allocation of assets subject to failure. European Journal of Operational Research, 284 (1). pp. 227-239. ISSN 0377-2217 Forgács, Bálint and Gervain, Judit and Parise, Eugenio and Csibra, Gergely and Gergely, György and Baross, Julia and Kiraly, Ildikó (2020) Electrophysiological Investigation of Infants' Understanding of Understanding. Developmental Cognitive Neuroscience, 43. ISSN 1878-9293 Forman, Peter J. (2020) Security and the Subsurface:Natural Gas and the Visualisation of Possibility Spaces. Geopolitics, 25 (1). pp. 143-166. ISSN 1465-0045 Formato, Federica and Tantucci, Vittorio (2020) Uno:A corpus linguistic investigation of intersubjectivity and gender. 
Journal of Language and Discrimination, 4 (1). pp. 51-73. Fornace, K.M. and Fronterrè, C. and Fleming, F.M. and Simpson, H. and Zoure, H. and Rebollo, M. and Mwinzi, P. and Vounatsou, P. and Pullan, R.L. (2020) Evaluating survey designs for targeting preventive chemotherapy against Schistosoma haematobium and Schistosoma mansoni across sub-Saharan Africa:a geostatistical analysis and modelling study. Parasites Vectors, 13 (1). ISSN 1756-3305 Fortune, N and Badland, Hannah and Clifton, S and Emerson, Eric and Rachele, J and Stancliffe, Roger J. and Zhou, Qingsheng and Llewellyn, Gwynnyth (2020) The Disability and Wellbeing Monitoring Framework:data, data gaps, and policy implications. Australian and New Zealand Journal of Public Health, 44 (3). pp. 227-232. ISSN 1326-0200 Foucart, Renaud (2020) Metasearch and market concentration. International Journal of Industrial Organization, 70. ISSN 0167-7187 Foulstone, Phoebe (2020) Assessment of the performance of an ATP based rapid bacterial indicator test on potable water samples. Masters thesis, UNSPECIFIED. Fovargue, Sara (2020) Anticipating issues with capacitous pregnant women:United Lincolnshire NHS Hospitals Trust V CD [2019] EWCOP 24 And Guys And St Thomas' NHS Foundation Trust (GSTT) And South London And Maudsley NHS Foundation Trust (Slam) V R [2020] EWCOP 4. Medical Law Review, 28 (4). 781–793. ISSN 0967-0742 Fowler, Emma (2020) Using an assessment tool to support capacity assessments undertaken remotely in the context of a global health crisis:A feasibility study. PhD thesis, UNSPECIFIED. Fox, Kathryn (2020) Teacher educator perceptions of mathematical knowledge for teaching – A phenomenographic study. PhD thesis, UNSPECIFIED. Francis, Becky and Craig, Nicole and Hodgen, Jeremy and Taylor, Becky and Tereshchenko, Antonina and Connolly, Paul and Archer, Louise (2020) The impact of tracking by attainment on pupil self-confidence over time:demonstrating the accumulative impact of self-fulfilling prophecy. British Journal of Sociology of Education, 41 (5). pp. 626-642. ISSN 0142-5692 Franklin, Emma (2020) Acts of killing, acts of meaning:an application of corpus pattern analysis to language of animal-killing. PhD thesis, UNSPECIFIED. França, F.M. and Benkwitt, C.E. and Peralta, G. and Robinson, J.P.W. and Graham, N.A.J. and Tylianakis, J.M. and Berenguer, E. and Lees, A.C. and Ferreira, J. and Louzada, J. and Barlow, J. (2020) Climatic and local stressor interactions threaten tropical forests and coral reefs. Philosophical Transactions of the Royal Society B: Biological Sciences, 375 (1794). ISSN 0962-8436 França, F.M. and Ferreira, J. and Vaz-de-Mello, F.Z. and Maia, L.F. and Berenguer, E. and Ferraz Palmeira, A. and Fadini, R. and Louzada, J. and Braga, R. and Hugo Oliveira, V. and Barlow, J. (2020) El Niño impacts on human-modified tropical forests: Consequences for dung beetle diversity and associated ecological processes:Consequences for dung beetle diversity and associated ecological processes. Biotropica, 52 (2). pp. 252-262. ISSN 0006-3606 Fraser, Emma and Wilmott, Clancy (2020) Ruins of the Smart City:A Visual Intervention. Visual Communication, 19 (3). pp. 353-368. ISSN 1470-3572 Frederikse, Thomas and Landerer, Felix and Caron, Lambert and Adhikari, Surendra and Parkes, David and Humphrey, Vincent W and Dangendorf, Sönke and Hogarth, Peter and Zanna, Laure and Cheng, Lijing and Wu, Yun-Hao (2020) The causes of sea-level rise since 1900. Nature, 584. pp. 393-397. ISSN 0028-0836 Freeman, T. and Gesesew, H.A. and Bambra, C. 
and Giugliani, E.R.J. and Popay, J. and Sanders, D. and Macinko, J. and Musolino, C. and Baum, F. (2020) Why do some countries do better or worse in life expectancy relative to income?:An analysis of Brazil, Ethiopia, and the United States of America. International Journal for Equity in Health, 19 (1). ISSN 1475-9276 Freeman, Tom (2020) Characterisation of the trans-influence, and its inverse. Masters thesis, UNSPECIFIED. Freitag, Charlotte and Berners-Lee, Mike and Widdicks, Kelly and Knowles, Bran and Blair, Gordon and Friday, Adrian (2020) The climate impact of ICT:A review of estimates, trends and regulations. UNSPECIFIED. (Unpublished) Frese, Tobias and Geiger, Ingmar and Dost, Florian (2020) An empirical investigation of determinants of effectual and causal decision logics in online and high-tech start-up firms. Small Business Economics, 54. pp. 641-664. ISSN 0921-898X Frick, Bernd and Simmons, Robert and Stein, Friedrich (2020) Timing matters:worker absenteeism in a weekly backward rotating shift model. European Journal of Health Economics, 21 (9). pp. 1399-1410. ISSN 1618-7598 Friis, Camilla and Liebst, Lasse S and Philpot, Richard and Rosenkrantz, Marie (2020) Ticket Inspectors in Action:Body-Worn Camera Analysis of Aggressive and Nonaggressive Passenger Encounters. Psychology of Violence, 10 (5). pp. 483-492. ISSN 2152-0828 Froggatt, K. and Moore, D.C. and Van den Block, Lieve and Ling, J. and Payne, S.A. and Kylanen, M. and , PACE consortium (2020) Palliative Care Implementation in Long-Term Care Facilities:European Association for Palliative Care White Paper. Journal of the American Medical Directors Association, 21 (8). pp. 1051-1057. ISSN 1525-8610 Froggatt, Katherine and Dunleavy, Lesley and Perez Algorta, Guillermo and Preston, Nancy and Walshe, Catherine (2020) A group intervention to improve quality of life for people with advanced dementia living in care homes:the Namaste feasibility cluster RCT. Health Technology Assessment, 24 (6). pp. 1-139. ISSN 1366-5278 Fromm, Ingrid and Cortez Arias, Ricardo and Bulnes, Luis Carlo and Discua Cruz, Allan (2020) The role of social remittances in promoting transformative societal change. In: Social Innovation of New Ventures. Routledge. ISBN 9780367473334 Frost, R.L.A. and Dunn, K. and Christiansen, M.H. and Gómez, R.L. and Monaghan, P. (2020) Exploring the "anchor word" effect in infants:Segmentation and categorisation of speech with and without high frequency words. PLoS ONE, 15 (12). ISSN 1932-6203 Frost, Rebecca and Jessop, Andrew and durrant, samantha and Peter, Michelle and Bidgood, Amy and Pine, Julian and Rowland, Caroline and Monaghan, Padraic (2020) Non-adjacent dependency learning in infancy, and its link to language development. Cognitive Psychology, 120. ISSN 0010-0285 Frost, Rebecca L.A. and Monaghan, P. (2020) Insights from studying statistical learning. In: Current Perspectives on Child Language Acquisition. Trends in Language Acquisition Research . John Benjamins Publishing Company, Amsterdam, pp. 65-89. ISBN 9789027207074 Fu, G. and Dai, J. and Li, Z. and Chen, F. and Liu, L. and Yi, L. and Teng, Z. and Quan, C. and Zhang, L. and Zhou, T. and Donkersley, P. and Song, S. and Shi, Y. (2020) The role of STAT3/p53 and PI3K-Akt-mTOR signaling pathway on DEHP-induced reproductive toxicity in pubertal male rat. Toxicology and Applied Pharmacology, 404. 
ISSN 0041-008X Fujita, Sayaka and Reid, Vincent and Bremner, Gavin and Linnert, Szilvia and Arioli, Martina and Dunn, Kirsty (2020) Differential Processing of Gaze Cueing from a Congruent and Incongruent Informant. In: International Conference of Infant Studies 2020, 2020-07-062020-07-09, online. Fulop, Erika and Larsonneur, Claire (2020) Auteurs numériques:Interviews. UNSPECIFIED. Fulton, C.J. and Berkström, C. and Wilson, S.K. and Abesamis, R.A. and Bradley, M. and Åkerlund, C. and Barrett, L.T. and Bucol, A.A. and Chacin, D.H. and Chong-Seng, K.M. and Coker, D.J. and Depczynski, M. and Eggertsen, L. and Eggertsen, M. and Ellis, D. and Evans, R.D. and Graham, N.A.J. and Hoey, A.S. and Holmes, T.H. and Kulbicki, M. and Leung, P.T.Y. and Lam, P.K.S. and van Lier, J. and Matis, P.A. and Noble, M.M. and Pérez-Matus, A. and Piggott, C. and Radford, B.T. and Tano, S. and Tinkler, P. (2020) Macroalgal meadow habitats support fish and fisheries in diverse tropical seascapes. Fish and Fisheries, 21 (4). pp. 700-717. ISSN 1467-2960 Fusi-Schmidhauser, T. and Froggatt, K. and Preston, N. (2020) Living with Advanced Chronic Obstructive Pulmonary Disease:A Qualitative Interview Study with Patients and Informal Carers. COPD, 17 (4). pp. 410-418. ISSN 1541-2563 Fusi-Schmidhauser, T. and Preston, N.J. and Keller, N. and Gamondi, C. (2020) Conservative Management of COVID-19 Patients—Emergency Palliative Care in Action. Journal of Pain and Symptom Management, 60 (1). e27-e30. ISSN 0885-3924 Fusi-Schmidhauser, Tanja (2020) Palliative care integration for patients with advanced chronic obstructive pulmonary disease (COPD):An action research study. PhD thesis, UNSPECIFIED. Gablasova, Dana (2020) Corpora for second language assessments. In: The Routledge Handbook of Second Language Acquisition and Language Testing. Routledge, London. ISBN 9780815352877 Gablasova, Dana (2020) Variability. In: The Routledge Handbook of Second Language Acquisition and Corpora. Routledge Handbooks . Routledge, London. ISBN 9780815352877 Gadoud, A. and Kane, E. and Oliver, S.E. and Johnson, M.J. and MacLeod, U. and Allgar, V. (2020) Palliative care for non-cancer conditions in primary care:A time trend analysis in the UK (2009-2014). BMJ Supportive and Palliative Care. ISSN 2045-435X Gaffney, Christopher (2020) High-intensity workouts may put regular gym goers at risk of rhabdomyolysis, a rare but dangerous condition. The Conversation. Gaffney, Christopher and Etheridge, Tim and Szewczyk, Nate (2020) Minimizing Muscle Atrophy. In: Handbook of Life Support Systems for Spacecraft and Extraterrestrial Habitats. Springer, Cham, pp. 1-27. ISBN 9783319095752 Gaffney, Christopher and Nartallo, Ramon and Neri, Gianluca and Zolesi, David and Torregrossa, Roberta and Deane, Colleen and Whiteman, Matt and Etheridge, Timothy and Ellwood, Rebecca and Cooke, Michael and Gharahdaghi, Nima and Piasecki, Mathew and Phillips, Bethan and Szewczyk, Nathaniel (2020) Commercial access for UK/ESA student experiments on board the ISS. In: Proceedings of the 3rd Symposium on Space Educational Activities. University of Leicester, Leicester, pp. 152-154. ISBN 9781912989096 Gainer, Paul and Linker, Sven and Dixon, Clare and Hustadt, Ullrich and Fisher, Michael (2020) Multi-scale verification of distributed synchronisation. Formal Methods in System Design, 55. 171–221. Galabo, Rosendy (2020) A framework for improving knowledge exchange tools. PhD thesis, UNSPECIFIED. 
Galabo, Rosendy and Nthubu, Badziili and Cruickshank, Leon and Perez, David (2020) Redesigning a workshop from physical to digital:Principles for designing distributed co-design approaches. In: Design: Vertical & Horizontal growth. UNSPECIFIED, SUN, pp. 64-70. ISBN 9789526490021 Galanciak, J. and Bagiński, B. and Macdonald, R. and Belkin, H.E. and Kotowski, J. and Jokubauskas, P. (2020) Relationships between monazite, apatite and chevkinite-group minerals in the rhyolitic Joe Lott Tuff, Utah, USA. Lithos, 354-35 (2). ISSN 0024-4937 Galiano, L. (2020) Genitive marking on second person plural pronouns you all, y'all, yall. Lingua, 243. ISSN 0024-3841 Galiano, Liviana (2020) Forms and functions of second person plural forms in world Englishes:A corpus-based Study. PhD thesis, UNSPECIFIED. Gallego-Burin, Araceli Rojo and Llorens-Montes, Javier and Perez-Arostegui, Maria Nieves and Stevenson, Mark (2020) Ambidextrous Supply Chain Strategy and Supply Chain Flexibility:The Contingent Effect of ISO 9001. Industrial Management and Data Systems, 120 (9). pp. 1691-1714. ISSN 0263-5577 Gamage, K.A.A. and Wijesuriya, Dilani and Ekanayake, Sakunthala and Rennie, Allan and Lambert, Chris and Gunawardhana, Nanda (2020) Online Delivery of Teaching and Laboratory Practices:Continuity of University Programmes during COVID-19 Pandemic. Educational Sciences, 10 (10). ISSN 2227-7102 Gamondi, Claudia and Pott, Murielle and Preston, Nancy and Payne, Sheila (2020) Swiss Families' Experiences of Interactions with Providers during Assisted Suicide:A Secondary Data Analysis of an Interview Study. Journal of Palliative Medicine, 23 (4). pp. 506-512. ISSN 1096-6218 Gao, Dongxu and Celik, Numan and Wu, Xiyin and Williams, Bryan M. and Stylianides, Amira and Zheng, Yalin (2020) A Novel Deep Learning Based OCTA De-striping Method. In: Medical Image Understanding and Analysis. Communications in Computer and Information Science, 2019 . Springer, GBR, pp. 189-197. ISBN 9783030393427 Gao, Jingyue and Wang, Xiting and Wang, Yasha and Yang, Zhao and Gao, Junyi and Wang, Jiangtao and Tang, Wen and Xie, Xing (2020) CAMP:Co-Attention Memory Networks for Diagnosis Prediction in Healthcare. In: 2019 IEEE International Conference on Data Mining (ICDM). IEEE. ISBN 9781728146058 Gao, N. and Qin, Z. and Jing, X. and Ni, Q. and Jin, S. (2020) Anti-Intelligent UAV Jamming Strategy via Deep Q-Networks. IEEE Transactions on Communications, 68 (1). pp. 569-581. ISSN 0090-6778 Gao, Ning and Ni, Qiang and Feng, Daquan and Jing, Xiaojun and Cao, Yue (2020) Physical Layer Authentication under Intelligent Spoofing in Wireless Sensor Networks. Signal Processing, 166. ISSN 0165-1684 Gao, Weifeng and Zhao, Zhiwei and Yu, Zhengxin and Min, Geyong and Yang, Minghang and Huang, Wenjie (2020) Edge-Computing-Based Channel Allocation for Deadline-Driven IoT Networks. IEEE Transactions on Industrial Informatics, 16 (10). pp. 6693-6702. ISSN 1551-3203 Garcia Teijeiro, R. and Belimov, A.A. and Dodd, I.C. (2020) Microbial inoculum development for ameliorating crop drought stress:A case study of Variovorax paradoxus 5C-2. Food Biotechnology, 56. pp. 103-113. ISSN 0890-5436 Garcia-Vergara, Cristina and Hodge, Jacqueline and Hennawi, Joseph F. and Weiss, Axel and Wardlow, Julie and Myers, Adam D. and Hickox, Ryan (2020) The clustering of submillimeter galaxies detected with ALMA. The Astrophysical Journal, 904 (1). ISSN 0004-637X Gardner, Emma and Breeze, Tom D. and Clough, Yann and Smith, Henrik G. and Baldock, Katherine C.R. 
and Campbell, Alistair and Garratt, Mike and Gillespie, Mark A.K. and Kunin, William E. and McKerchar, Megan and Memmott, Jane and Potts, Simon G. and Senapathi, Deepa and Stone, Graham and Wackers, Felix and Westbury, Duncan B. and Wilby, Andy and Oliver, Tom H. (2020) Reliably Predicting Pollinator Abundance:Challenges of Calibrating Process-Based Ecological Models. Methods in Ecology and Evolution, 11 (12). pp. 1673-1689. ISSN 2041-210X Garner, Ian and Holland, Carol (2020) Age-friendliness of living environments from the older person's viewpoint:development of the Age-friendly Environment Assessment Tool. Age and Ageing, 49 (2). 193–198. ISSN 0002-0729 Garner, Ian W and Burgess, Adrian P and Holland, Carol A (2020) Developing and validating the Community-Oriented Frailty Index (COM-FI). Archives of Gerontology and Geriatrics, 91. ISSN 0167-4943 Garnett, M. and O'Hara, Keiron (2020) The conservative reaction to data-driven agency. In: Life and the Law in the Era of Data-Driven Agency. Edward Elgar Publishing Ltd., Cheltenham, pp. 175-193. ISBN 9781788971997 Garrison, Stephanie and Jacobs, Naomi (2020) Life as a Networked Fan. AoIR Selected Papers of Internet Research, 2020 A. ISSN 2162-3317 Garuba, Francis (2020) Robust and stochastic approaches to network capacity design under demand uncertainty. PhD thesis, UNSPECIFIED. Garuba, Francis and Goerigk, Marc and Jacko, Peter (2020) A Comparison of Data-Driven Uncertainty Sets for Robust Network Design. arXiv. Gasparin, Marta and Brown, Steven D. and Green, William and Hugill, Andrew and Lilley, Simon and Quinn, Martin and Schinckus, Christophe and Williams, Mark and Zalasiewicz, Jan (2020) The Business School in the Anthropocene:Parasite Logic and Pataphysical Reasoning for a Working Earth. Academy of Management Learning and Education, 19 (3). ISSN 1537-260X Gates, A.B. and Swainson, M.G. and Moffatt, F. and Kerry, R. and Metsios, G.S. and Ritchie, I. (2020) Undergraduate examination and assessment of knowledge and skills is crucial in capacity planning for the future healthcare workforce in physical activity interventions. British Journal of Sports Medicine, 54 (17). pp. 1015-1016. ISSN 0306-3674 Gath-Morad, Michal and Aguilar, Leonel and Conroy-Dalton, Ruth and Hölscher, Christoph (2020) cogARCH:Simulating Wayfinding by Architecture in Multilevel Buildings. In: 2020 Proceedings of the Symposium on Simulation for Architecture and Urban Design. SimAUD, Online, pp. 27-34. ISBN 1565553713 Gatherer, Derek (2020) Climate Change and Emerging Viral Diseases:the Evidence. Working Paper. OSF Preprints. Gatherer, Derek (2020) Reflections on integrating bioinformatics into the undergraduate curriculum:The Lancaster experience. Biochemistry and molecular biology education : a bimonthly publication of the International Union of Biochemistry and Molecular Biology, 48 (2). pp. 118-127. ISSN 1470-8175 Gatherer, Derek (2020) Scambi fushi tarazu:A Musical Representation of a Drosophila Gene Expression Pattern. Preprints. ISSN 2310-287X Gatherer, Derek and Bermingham, Rowena and Others, 1100 (2020) COVID-19 outbreak: What are experts concerned about? Working Paper. The Parliamentary Office of Science and Technology. Gatherer, Derek and Johnson, Matthew and Bermingham, Rowena and others, 350 (2020) Life beyond COVID-19: What are experts concerned about? Working Paper. The Parliamentary Office of Science and Technology. Gatilova, A. and Mashkovich, E.A. and Grishunin, K.A. and Pogrebna, A. and Mikhaylovskiy, R.V. and Rasing, T. and Christianen, P.M. 
and Nishizawa, N. and Munekata, H. and Kimel, A.V. (2020) Far- and midinfrared excitation of large amplitude spin precession in the ferromagnetic semiconductor InMnAs. Physical review B, 101 (2). ISSN 1098-0121 Gauld, Jillian (2020) Rivers, rainfall, and risk factors:geostatistical and epidemiological approaches to disentangle potential transmission routes of typhoid fever. PhD thesis, UNSPECIFIED. Gauld, Jillian S and Olgemoeller, Franziska and Nkhata, Rose and Li, Chao and Chirambo, Angeziwa and Morse, Tracy and Gordon, Melita A and Read, Jonathan M and Heyderman, Robert S and Kennedy, Neil and Diggle, Peter J and Feasey, Nicholas A (2020) Domestic river water use and risk of typhoid fever:results from a case-control study in Blantyre, Malawi. Clinical Infectious Diseases, 70 (7). pp. 1278-1284. ISSN 1058-4838 Gauthier, Stephanie and Bourikas, Leonidas (2020) Investigating dependencies between indoor environmental parameters: thermal, air quality and acoustic perception. In: Windsor 2020 Resilient Comfort Conference Proceedings. UNSPECIFIED, pp. 323-330. ISBN 9781916187634 Gauthier, Stephanie and Bourikas, Leonidas and Al‐Atrash, Farah and Bae, Chihye and Chun, Chungyoon and de Dear, Richard and Hellwig, Runa T. and Kim, Jungsoo and Kwon, Suhyun and Mora, Rodrigo and Pandya, Himani and Rawal, Rajan and Tartarini, Federico and Upadhyay, Rohit and Wagner, Andreas (2020) The colours of comfort:From thermal sensation to person-centric thermal zones for adaptive building strategies. Energy and Buildings, 216. ISSN 0378-7788 Gavassi, M.A. and Dodd, I.C. and Puértolas, J. and Silva, G.S. and Carvalho, R.F. and Habermann, G. (2020) Aluminum-induced stomatal closure is related to low root hydraulic conductance and high ABA accumulation. Environmental and Experimental Botany, 179. ISSN 0098-8472 Gavilán-Arriazu, E.M. and Mercer, Michael P. and Pinto, O.A. and Oviedo, O.A. and Barraco, D.E. and Hoster, H.E. and Leiva, E.P.M. (2020) Numerical simulations of cyclic voltammetry for lithium-ion intercalation in nanosized systems:finiteness of diffusion versus electrode kinetics. Journal of Solid State Electrochemistry, 24. 3279–3287. ISSN 1432-8488 Gavini, Siva and Janke, Katharina and Gadoud, Amy and Salifu, Yakubu (2020) A systematic review of the resource use of a palliative care approach in adult non-cancer patients with a life-limiting illness. PROSPERO International prospective register of systematic reviews. Gawne, Suzanne and Fish, Rebecca and Machin, Laura (2020) Developing a Workplace-Based Learning Culture in the NHS:Aspirations and Challenges. Journal of Medical Education and Curricular Development, 7. pp. 1-8. Gayler, Tom (2020) Inbodied Interaction Design Example: Smell. Interactions, 27 (2). pp. 38-39. ISSN 1072-5520 Gayler, Tom and Sas, Corina and Kalnikaitė, Vaiva (2020) Co-Designing Flavor-Based Memory Cues with Older Adults. In: ICMI '20 Companion. ACM, 287–291. ISBN 9781450380027 Gayler, Tom and Sas, Corina and Kalnikaitė, Vaiva (2020) Material Food Probes:Personalized 3D Printed Flavors for Intimate Communication. In: DIS '20. ACM, NLD, 965–978. ISBN 9781450369749 Ge, Baozhu and Xu, Danhui and Wild, O. and Yao, X. and Wang, Junhua and Chen, Xuechun and Tan, Qixin and Pan, Xiaole and Wang, Zifa (2020) Inter-annual variations of wet deposition in Beijing during 2014-2017:implications of below-cloud scavenging of inorganic aerosols. Atmospheric Chemistry and Physics Discussions. ISSN 1680-7367 Gebhart, V. and Snizhko, K. and Wellens, T. and Buchleitner, A. and Romito, A. and Gefen, Y. 
(2020) Topological transition in measurement-induced geometric phases. Proceedings of the National Academy of Sciences of the United States of America, 117 (11). pp. 5706-5713. ISSN 0027-8424 Gencel Bek, Mine and Prieto-Blanco, Patricia (2020) (Be)Longing through visual narrative:Mediation of (dis)affect and formation of politics through photographs and narratives of migration at DiasporaTürk. International Journal of Cultural Studies, 23 (5). pp. 709-727. ISSN 1367-8779 Georgalos, Konstantinos (2020) Comparing Behavioural Models Using Data from Experimental Centipede Games. Economic Inquiry, 58 (1). pp. 34-48. ISSN 0095-2583 Georgalos, Konstantinos and Hey, John (2020) Testing for the Emergence of Spontaneous Order. Experimental Economics, 23. 912–932. ISSN 1386-4157 Georgalos, Konstantinos and Ray, Indrajit and Sen Gupta, Sonali (2020) Nash versus coarse correlation. Experimental Economics, 23. 1178–1204. ISSN 1386-4157 George Assaf, A. and Tsionas, M.G. and Andrikopoulos, A. (2020) Testing moderation effects using non-parametric regressions. International Journal of Hospitality Management, 86. ISSN 0278-4319 Gerakopoulou, Elli and Deville, Joe (2020) Negotiations between libraries and Open Access books: Towards a more open future? In: Northern Collaboration, 2020-11-18 - 2020-11-19. Gerson, Sheri Mila and Preston, Nancy and Bingley, Amanda (2020) Medical Aid in Dying, Hastened Death and Suicide:A Qualitative Study of Hospice Professionals' Experiences from Washington State. Journal of Pain and Symptom Management, 59 (3). 679-686.e1. ISSN 0885-3924 Ghaemi-Dizicheh, Hamed and Mostafazadeh, Ali and Sarısaman, Mustafa (2020) Spectral singularities and tunable slab lasers with 2D material coating. Journal of the Optical Society of America B: Optical Physics, 37 (7). pp. 2128-2133. Ghaly, Mohamed and Anh Dang, Viet and Stathopoulos, Konstantinos (2020) Institutional Investors' Horizons and Corporate Employment Decisions. Journal of Corporate Finance, 64. ISSN 0929-1199 Ghorbankarimi, Maryam (2020) Politics of inclusion: representation of minorities in Iranian cinema. In: Thirteenth Biennial Iranian Studies Conference, 2020-08-25 - 2020-08-28, University of Salamanca. Giang, N.K. and Lea, R. and Leung, V.C.M. (2020) Developing applications in large scale, dynamic fog computing:A case study. Software: Practice and Experience, 50 (5). pp. 519-532. ISSN 0038-0644 Giannakos, M. and Horn, M.S. and Rubegni, E. (2020) Advancements on Child–Computer Interaction research:Contributions from IDC 2018. International Journal of Child-Computer Interaction, 23-24. ISSN 2212-8689 Giannini, T.C. and Costa, W.F. and Borges, R.C. and Miranda, Leonardo De Sousa and Costa, Claudia Priscila Wanzeler and Saraiva, Antonio Mauro and Imperatriz Fonseca, V.L. (2020) Climate change in the Eastern Amazon:crop-pollinator and occurrence-restricted bees are potentially more affected. Regional Environmental Change, 20 (1). ISSN 1436-3798 Giannoccarro, Ilaria and Iftikhar, Anas (2020) Mitigating ripple effect in supply networks:the effect of trust and topology on resilience. International Journal of Production Research. ISSN 0020-7543 Gibberd, A. and Roy, S. (2020) Consistent multiple changepoint estimation with fused Gaussian graphical models. Annals of the Institute of Statistical Mathematics. ISSN 0020-3157 Gibbons, Jen (2020) A Bernsteinian analysis of the interplay between legal knowledge and the legal professional in university law schools in Yorkshire and Alberta:rule of Law or Rule of Lawyers? PhD thesis, UNSPECIFIED.
Giebel, C. and McIntyre, J.C. and Alfirevic, A. and Corcoran, R. and Daras, K. and Downing, J. and Gabbay, M. and Pirmohamed, M. and Popay, J. and Wheeler, P. and Holt, K. and Wilson, T. and Bentall, R. and Barr, B. (2020) The longitudinal NIHR ARC North West Coast Household Health Survey: exploring health inequalities in disadvantaged communities. BMC Public Health, 20 (1). ISSN 1471-2458 Gill, S.S. and Tuli, S. and Toosi, A.N. and Cuadrado, F. and Garraghan, P. and Bahsoon, R. and Lutfiyya, H. and Sakellariou, R. and Rana, O. and Dustdar, S. and Buyya, R. (2020) ThermoSim:Deep learning based framework for modeling and simulation of thermal-aware resource management for cloud computing environments. Journal of Systems and Software, 166. ISSN 0164-1212 Gillard, David (2020) Popular music, the Christian story, and the quest for ontological security. PhD thesis, UNSPECIFIED. Gillen, J. and Ahereza, Noah and Nyarko, Marco (2020) An exploration of language ideologies across English literacy and sign languages in multiple modes in Uganda and Ghana. In: Sign Language Ideologies in Practice. Sign Languages and Deaf Communities [SLDC] . Mouton de Gruyter, The Hague, pp. 185-200. ISBN 9781501516856 Gillen, J. and Yu, Mandy Hoi Man and Ho, Selena and Fan, Gloria Ho Nga (2020) Literacies remaking public places:The Umbrella Movement of Hong Kong, 2014. Literacy, 54 (2). pp. 40-48. ISSN 1741-4350 Gillen, Julia (2020) Afterword. In: Rebellious Writing. Peter Lang, pp. 413-418. ISBN 9781789972955 Gillen, Julia and Flewitt, Rosie and Sandberg, Helena (2020) Special issue: Children under three at home: the place of digital media in their literacy practices. Journal of Early Childhood Literacy, 20 (3). pp. 441-446. ISSN 1468-7984 Gillen, Julia and Nyarko, Marco and Akanlig-Pare, George and Akrasi-Sarpong, Esther and Toah Addo, Kwadwo and Chapman, Emily (2020) Peer to Peer Deaf Multiliteracies:Towards a Sustainable Approach to Education in Ghana. In: American Educational Research Association Annual Meeting, 2020-04-17 - 2020-04-20, Cancelled owing to COVID-19. Gillespie, Alisdair (2020) Juvenile informers:Is it appropriate to use children as Covert Human Intelligence Sources? The Cambridge Law Journal, 79 (3). pp. 459-489. ISSN 0008-1973 Gillespie, Alisdair and Magor, Samantha (2020) Tackling online fraud. ERA Forum, 20 (3). pp. 439-454. ISSN 1612-3093 Gilloch, Graeme (2020) Rythms, ornements et masses:Siegfried Kracauer et l'orchestration du pouvoir. Germanica, 66. Giorgi, Emanuele (2020) Integrating environmental, entomological, animal, and human data to model the Leishmania infantum transmission risk in a newly endemic area in northern Italy. One Health, 10. ISSN 2352-7714 Giotsas, Vasileios and Koch, Thomas and Fazzion, Elverton and Cunha, Ítalo and Calder, Matt and Madhyastha, Harsha V. and Katz-Bassett, Ethan (2020) Reduce, Reuse, Recycle:Repurposing Existing Measurements to Identify Stale Traceroutes. In: IMC 2020 - Proceedings of the 2020 ACM Internet Measurement Conference. Proceedings of the ACM SIGCOMM Internet Measurement Conference, IMC . Association for Computing Machinery (ACM), USA, pp. 247-265. ISBN 9781450381383 Giotsas, Vasileios and Livadariu, Ioana and Gigis, Petros (2020) A first look at the misuse and abuse of the IPv4 Transfer Market. In: Passive and Active Measurement Conference (PAM) 2020. Springer, USA, pp. 88-103. ISBN 9783030440800 Giovannelli, Alessandro and Massacci, Daniele and Soccorsi, Stefano (2020) Forecasting Stock Returns with Large Dimensional Factor Models. Working Paper.
Lancaster University, Department of Economics, Lancaster. Girkin, N.T. and Dhandapani, S. and Evers, S. and Ostle, N. and Turner, B.L. and Sjögersten, S. (2020) Interactions between labile carbon, temperature and land use regulate carbon dioxide and methane production in tropical peat. Biogeochemistry, 147 (1). pp. 87-97. ISSN 0168-2563 Girkin, N.T. and Lopes dos Santos, R.A. and Vane, C.H. and Ostle, N. and Turner, B.L. and Sjögersten, S. (2020) Peat Properties, Dominant Vegetation Type and Microbial Community Structure in a Tropical Peatland. Wetlands, 40. 1367–1377. ISSN 0277-5212 Girkin, N.T. and Vane, C.H. and Turner, B.L. and Ostle, N.J. and Sjögersten, S. (2020) Root oxygen mitigates methane fluxes in tropical peatlands. Environmental Research Letters, 15 (6). ISSN 1748-9326 Gleixner, Ambros and Maher, Stephen and Muller, Benjamin and Pedroso, João Pedro (2020) Price-and-verify:a new algorithm for recursive circle packing using Dantzig–Wolfe decomposition. Annals of Operational Research, 284 (2). pp. 527-555. ISSN 1572-9338 Glew, Billy (2020) Screen Dreams:a practice-based investigation of filmic dream sequences, using the dream theories of Freud, Jung, Revonsuo and Hobson. PhD thesis, UNSPECIFIED. Glover, G. and Williams, R. and Oyinlola, J. (2020) An observational cohort study of numbers and causes of preventable general hospital admissions in people with and without intellectual disabilities in England. Journal of Intellectual Disability Research, 64 (5). pp. 331-344. ISSN 0964-2633 Glądalski, M. and Mainwaring, M.C. and Bańbura, M. and Kaliński, A. and Markowski, M. and Skwarska, J. and Wawrzyniak, J. and Bańbura, J. and Hartley, I.R. (2020) Consequences of hatching deviations for breeding success:a long-term study on blue tits Cyanistes caeruleus. The European Zoological Journal, 87 (1). pp. 385-394. Goerigk, M. and Maher, S.J. (2020) Generating hard instances for robust combinatorial optimization. European Journal of Operational Research, 280 (1). pp. 34-45. ISSN 0377-2217 Goggin, Jessica (2020) It's all just suffering:the experience of pain in cystic fibrosis. PhD thesis, UNSPECIFIED. Gold, James and Donovan, Connor and Bowden, Jack and Carr, James and Philips, Joe and Sobral, David (2020) ReHILAE: is the Re-ionisation of Hydrogen-I the sole consequence of Lyman-alpha Emitters? Notices of Lancaster Astrophysics (NLUAstro), 2. pp. 42-52. Goldthorpe, Joanna and Epton, Tracy and Keyworth, Chris and Calam, Rachel and Armitage, Christopher J. (2020) Are primary/elementary school‐based interventions effective in preventing/ameliorating excess weight gain?:A systematic review of systematic reviews. Obesity Reviews, 21 (6). ISSN 1467-7881 Goldthorpe, Joanna and Epton, Tracy and Keyworth, Chris and Calam, Rachel and Brooks, Joanna and Armitage, Chris (2020) What do children, parents and staff think about a healthy lifestyles intervention delivered in primary schools?:a qualitative study. BMJ Open, 10. ISSN 2044-6055 Goldthorpe, Joanna and Pretty, Iain and Cotterill, Sarah and Hart, Jo and Peters, Sarah (2020) Protocol for the Polar Bear Study:A Feasibility and acceptability study of an electronic training and toolkit to support dental practitioner's behaviour change conversations with parents of children at risk of dental caries. UNSPECIFIED. 
Goldthorpe, Joanna and Sneddon, Jacqueline and Cameron, Elaine and Kurdi, Amanj and Kerr, Fran and Afriyie, Daniel Kwame and Sefah, Israel and Cockburn, Alison and Seaton, Andrew (2020) Supporting antimicrobial stewardship in Ghana:evaluation of the impact of training on knowledge and attitudes of healthcare professionals in two hospitals. JAC-Antimicrobial Resistance, 2 (4). Golokolenov, Ilia and Guthrie, Andrew and Kafanov, Sergey and Pashkin, Yuri and Tsepelin, Viktor (2020) On the origin of the controversial electrostatic field effect in superconductors. arxiv.org. Gong, P. and Nutter, J. and Rivera-Diaz-Del-Castillo, P. E.J. and Rainforth, W. M. (2020) Hydrogen embrittlement through the formation of low-energy dislocation nanostructures in nanoprecipitation-strengthened steels. Science Advances, 6 (46). ISSN 2375-2548 Gonzalez-Perez, V. and Keil, P. and Li, Y. and Zülke, A. and Burrel, R. and Csala, D. and Hoster, H. (2020) A Python Package to Preprocess the Data Produced by Novonix High-Precision Battery-Testers. Journal of Open Research Software, 8. pp. 1-5. ISSN 2049-9647 González, M.A. and Bell, M. and Souza, C.F. and Maciel-De-freitas, R. and Brazil, R.P. and Courtenay, O. and Hamilton, J.G.C. (2020) Synthetic sex-aggregation pheromone of lutzomyia longipalpis, the South American sand fly vector of leishmania infantum, attracts males and females over long-distance. PLoS Neglected Tropical Diseases, 14 (10). ISSN 1935-2727 González, M.A. and Dilger, E. and Ronderos, M.M. and Spinelli, G.R. and Courtenay, O. and Hamilton, J.G.C. (2020) Significant reduction in abundance of peridomestic mosquitoes (Culicidae) and Culicoides midges (Ceratopogonidae) after chemical intervention in western São Paulo, Brazil. Parasites and Vectors, 13 (1). ISSN 1756-3305 Gooch, Daniel and Mehta, Vikram and Price, Blaine and McCormick, Ciaran and Bandara, Arosha and Bennaceur, Amel and Bennasar, Mohamed and Stuart, Avelie and Clare, Linda and Levine, Mark and Cohen, Jessica and Nuseibeh, Bashar (2020) How are you feeling? using tangibles to log the emotions of older adults. In: TEI 2020 - Proceedings of the 14th International Conference on Tangible, Embedded, and Embodied Interaction. TEI 2020 - Proceedings of the 14th International Conference on Tangible, Embedded, and Embodied Interaction . Association for Computing Machinery, Inc, AUS, pp. 31-43. ISBN 9781450361071 Gooding, Patricia A. and Pratt, Daniel and Awenat, Yvonne and Drake, Richard and Elliott, Rachel and Emsley, Richard and Huggett, Charlotte and Jones, Steven and Kapur, Navneet and Lobban, Fiona and Peters, Sarah and Haddock, Gillian (2020) A psychological intervention for suicide applied to non-affective psychosis:the CARMS (Cognitive AppRoaches to coMbatting Suicidality) randomised controlled trial protocol. BMC Psychiatry, 20. ISSN 1471-244X Goodman, J.E. and Prueitt, R.L. and Boffetta, P. and Halsall, C. and Sweetman, A. (2020) "Good Epidemiology Practice" Guidelines for Pesticide Exposure Assessment. International Journal of Environmental Research and Public Health, 17 (14). pp. 1-15. ISSN 1660-4601 Goodwin, Dawn and Mays, Nicholas and Pope, Catherine (2020) Ethical Issues in Qualitative Research. In: Qualitative Research in Health Care. John Wiley & Sons, Inc., Hoboken, NJ, USA. ISBN 9781119410836 Goodwin, L. and Leightley, D. and Chui, Z. E. and Landau, S. and McCrone, P. and Hayes, R. D. and Jones, M. and Wessely, S. and Fear, N. T. 
(2020) Hospital admissions for non-communicable disease in the UK military and associations with alcohol use and mental health:a data linkage study. BMC Public Health, 20. ISSN 1471-2458 Gordon, Hannah (2020) The experience of body image for people with a left ventricular assist device. PhD thesis, UNSPECIFIED. Gordon, T A C and Radford, A N and Simpson, S D and Meekan, M G (2020) Marine restoration projects are undervalued. Science, 367. pp. 635-636. ISSN 0036-8075 Gorkovenko, Katerina and Burnett, Dan and Thorp, James and Richards, Daniel and Murray-Rust, Dave (2020) Exploring the future of data-driven product design. In: CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, New York, pp. 1-14. ISBN 9781450367080 Gorman, Emily and Kirkham, Sam (2020) Dynamic acoustic-articulatory relations in back vowel fronting:Examining the effects of coda consonants in two dialects of British English. Journal of the Acoustical Society of America, 148 (2). pp. 724-733. ISSN 0001-4966 Gorrill, Helen (2020) Body, Space, Place in Collective and Collaborative Drawing:Drawing Conversations II. Cambridge Scholars Publishing. ISBN 9781527541962 Gorrill, Helen (2020) Drawing patterns on place:Transnational collective aesthetics in contemporary drawing. In: Body, Space, Place in Collective and Collaborative Drawing. Cambridge Scholars Press, Newcastle, pp. 168-186. ISBN 9781527541962 Gorrill, Helen Laura (2020) Women can't paint:gender, the glass ceiling and values in contemporary art. International Library of Modern and Contemporary Art . I. B. Tauris. ISBN 9781788310802 Goswami, K.N. and Murphy, S.T. (2020) Influence of Lithium Vacancy Defects on Tritium Diffusion in β-Li2TiO3. Journal of Physical Chemistry C, 124 (23). pp. 12286-12294. ISSN 1932-7447 Goto, Aya and Lloyd Williams, Alison and Kuroda, Yujiro and Satoh, Kenichi (2020) Thinking and acting with school children in Fukushima:implementation of a participatory theater approach and analysis of teachers' experience. JMA Journal, 3 (1). pp. 67-72. ISSN 2433-3298 Goudet, Myriam and Orr, Douglas and Melkonian, Michael and Muller, Karin and Meyer, Moritz and Carmo-Silva, Elizabete and Griffiths, Howard (2020) Rubisco and carbon-concentrating mechanism co-evolution across chlorophyte and streptophyte green algae. New Phytologist, 227 (3). pp. 810-823. ISSN 0028-646X Goudket, P. and Ma, L. and Kalinin, A. and Beard, C. and Burt, G. and Dexter, A. (2020) Status of the crab cavity system development for the ILC:36th ICFA Advanced Beam Dynamics Workshop on Nano Scale Beams, NANOBEAM 2005. In: 36th ICFA Advanced Beam Dynamics Workshop on Nano Scale Beams, NANOBEAM 2005, 2005-10-17 - 2005-10-21, Kyoto, Japan. Goutas, Lazaros and Hess, Basil and Sutanto, Juliana (2020) If Erring is Human, is System Use Divine?:Omission Errors During Post-Adoptive System Use. Decision Support Systems, 130. ISSN 0167-9236 Govigli, V.M. and Alkhaled, S. and Arnesen, T. and Barlagne, C. and Bjerck, M. and Burlando, C. and Melnykovych, M. and Fernandez-Blanco, C.R. and Sfeir, P. and Górriz-Mifsud, E. (2020) Testing a framework to co-construct social innovation actions: Insights from seven marginalized rural areas. Sustainability, 12 (4). ISSN 2071-1050 Govin, Gwladys and van der Beek, Peter and Najman, Yani and Millar, Ian and Gemignani, Lorenzo and Huyghe, Pascale and Dupont-Nivet, Guillaume and Bernet, Matthias and Mark, Chris and Wijbrans, J. R.
(2020) Early onset and late acceleration of rapid exhumation in the Namche Barwa syntaxis, eastern Himalaya. Geology, 48 (12). 1139–1143. ISSN 0091-7613 Gowling, Helen (2020) Psychological factors associated with distress and wellbeing in dystonia. PhD thesis, UNSPECIFIED. Graaff, A.D. and Bezanson, R. and Franx, M. and van der Wel, A. and Bell, E.F. and D'Eugenio, F. and Holden, B. and Maseda, M.V. and Muzzin, A. and Pacifici, C. and Sande, J.V.D. and Sobral, D. and Straatman, C.M.S. and Wu, P.-F. (2020) Tightly coupled morpho-kinematic evolution for massive star-forming and quiescent galaxies across 7Gyr of cosmic time. Astrophysical Journal Letters, 903 (2). ISSN 2041-8205 Graaff, Anna de and Bezanson, Rachel and Franx, Marijn and Wel, Arjen van der and Bell, Eric F. and D'Eugenio, Francesco and Holden, Bradford and Maseda, Michael V. and Muzzin, Adam and Pacifici, Camilla and Sande, Jesse van de and Sobral, David and Straatman, Caroline M. S. and Wu, Po-Feng (2020) Tightly coupled morpho-kinematic evolution for massive star-forming and quiescent galaxies across 7 Gyr of cosmic time. Astrophysical Journal Letters, 903. ISSN 2041-8205 Grace, I.M. and Olsen, G. and Hurtado-Gallego, J. and Rincón-García, L. and Rubio-Bollinger, G. and Bryce, M.R. and Agraït, N. and Lambert, C.J. (2020) Connectivity dependent thermopower of bridged biphenyl molecules in single-molecule junctions. Nanoscale, 12 (27). pp. 14682-14688. ISSN 2040-3372 Graham, E. and Jaki, T. and Harbron, C. (2020) A comparison of stochastic programming methods for portfolio level decision-making. Journal of Biopharmaceutical Statistics, 30 (3). pp. 405-429. ISSN 1054-3406 Graham, Elizabeth and Evans, Daniel and Duncan, Lindsay (2020) The Waste of Time. In: The Temporalities of Waste. Routledge Environmental Humanities . Routledge, Abingdon, Oxon, pp. 151-166. ISBN 9780367321796 Graham, Emily (2020) Late stage combination drug development for improved portfolio-level decision-making. PhD thesis, UNSPECIFIED. Graham, N.A.J. and Robinson, James P.W. and Smith, S.E. and Govinden, R. and Gendron, G. and Wilson, S.K. (2020) Changing role of coral reef marine reserves in a warming climate. Nature Communications, 11 (1). ISSN 2041-1723 Grant, James (2020) Poisson process bandits:Sequential models and algorithms for maximising the detection of point data. PhD thesis, UNSPECIFIED. Grant, James A. and Leslie, David S. (2020) Learning to Rank under Multinomial Logit Choice. arXiv.org. Grant, James A. and Leslie, David S. (2020) On Thompson Sampling for Smoother-than-Lipschitz Bandits. In: 23rd International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research . Proceedings of Machine Learning Research, pp. 2612-2622. Grant, James A. and Leslie, David S. and Glazebrook, Kevin and Szechtman, Roberto and Letchford, Adam (2020) Adaptive policies for perimeter surveillance problems. European Journal of Operational Research, 283 (1). pp. 265-278. ISSN 0377-2217 Grant, James A. and Szechtman, Roberto (2020) Filtered Poisson Process Bandit on a Continuum. arXiv.org. Grass, Delphine (2020) "Our Reading Woes". UNSPECIFIED. Gratus, Jonathan and Kinsler, Paul and McCall, Martin (2020) Maxwell's Equations, Stokes' Theorem, and the Conservation of Charge. arXiv. ISSN 2331-8422 Gratus, Jonathan and Kinsler, Paul and McCall, Martin W. (2020) Temporary Singularities and Axions an analytic solution that challenges charge conservation. arxiv.org. Gratus, Jonathan and McCall, Martin W. 
and Kinsler, Paul (2020) Electromagnetism, Axions, and Topology:a first-order operator approach to constitutive responses provides greater freedom. Physical Review A - Atomic, Molecular, and Optical Physics, 101 (4). ISSN 1050-2947 Gratus, Jonathan and Pinto, Paolo and Talaganis, Spyridon (2020) The Distributional Stress-Energy Quadrupole. arxiv.org. Gratus, Jonathan and Pinto, Paolo and Talaganis, Spyridon (2020) The distributional stress-energy quadrupole. Classical and Quantum Gravity, 38 (3). ISSN 0264-9381 Gray, Emma and Mackay, Eleanor and Elliott, Alex and Folkard, Andrew and Jones, Ian (2020) Widespread inconsistency in estimation of lake mixed depth impacts interpretation of limnological processes. Water Research, 168. ISSN 0043-1354 Gray, L. and Gorman, Emma and White, I.R. and Katikireddi, S.V. and McCartney, G. and Rutherford, L. and Leyland, A.H. (2020) Correcting for non-participation bias in health surveys using record-linkage, synthetic observations and pattern mixture modelling. Statistical Methods in Medical Research, 29 (4). pp. 1212-1226. ISSN 0962-2802 Greasley, Kay and Thomas, Pete (2020) HR Analytics:The onto-epistemology and politics of metricised HRM. Human Resource Management Journal, 30 (4). pp. 494-507. ISSN 0954-5395 Green, Benjamin and Derbyshire, Ric and Knowles, William and Boorman, James and Ciholas, Pierre and Prince, Daniel and Hutchison, David (2020) ICS Testbed Tetris:Practical Building Blocks Towards a Cyber Security Resource. In: The 13th USENIX Workshop on Cyber Security Experimentation and Test (CSET '20), 2020-08-10. Green, Colin and Heywood, John Spencer and Navarro Paniagua, Maria (2020) Did the London Congestion Charge Reduce Pollution? Regional Science and Urban Economics, 84. ISSN 0166-0462 Green, Sarah (2020) Unilateral commitments to persons with disabilities of armed non-state de facto authorities that govern. Masters thesis, UNSPECIFIED. Greenberg, James and Park, Tad and Batterbury, Simon and Walsh, Casey and Liebow, Ed (2020) Conclusions. In: Terrestrial transformations. Lexington Books. ISBN 9781793605467 Greenop, A. and Cook, S.M. and Wilby, A. and Pywell, R.F. and Woodcock, B.A. (2020) Invertebrate community structure predicts natural pest control resilience to insecticide exposure. Journal of Applied Ecology, 57 (12). pp. 2441-2453. ISSN 0021-8901 Greenop, A. and Mica-Hawkyard, N. and Walkington, S. and Wilby, A. and Cook, S.M. and Pywell, R.F. and Woodcock, B.A. (2020) Equivocal evidence for colony level stress effects on bumble bee pollination services. Insects, 11 (3). ISSN 2075-4450 Greenop, Arran (2020) Using species traits to understand the mechanisms driving pollination and pest control ecosystem services. PhD thesis, UNSPECIFIED. Greenwood, J. and Kelly, C. (2020) Taking a cooperative inquiry approach to developing person-centred practice in one English secondary school. Action Research, 18 (2). pp. 212-229. ISSN 1476-7503 Greenyer, George and Coulton, Antonio and Smith, Josh and Jaques, Rhys and Wright, Nathan and Wright, Keenan and Sobral, David (2020) On the Origin of Hyper-Velocity Stars Near Sagittarius A*. Notices of Lancaster Astrophysics (NLUAstro), 2. pp. 29-41. Gregory, Ian Norman and Paterson, Laura (2020) English Language and History:Geographical representations of poverty in Historical Newspapers. In: The Routledge Handbook of English Language and Digital Humanities. Routledge Handbooks in English Language Studies . Routledge, pp. 418-439.
ISBN 9781138901766 Gregson, Rebecca (2020) Look Me in the Eyes - A Blind Spot in Our Empathic Gaze with Farmed Animals. UNSPECIFIED. (Unpublished) Grievson, Alex (2020) Time-of-flight spectrometry of the spontaneous fission neutron emission of Cm-244 and Cf-252 using EJ-309 liquid scintillators. PhD thesis, UNSPECIFIED. Griffin, Becky (2020) Printing bioelectronics. Masters thesis, UNSPECIFIED. Griffin, John M. (2020) A gateway to understanding confined ions. Nature Nanotechnology, 15. 628–629. ISSN 1748-3387 Griffiths, Cerian (2020) Researching Eighteenth-Century Fraud in the Old Bailey:Reflections on Court Records, Archives, and Digitisation. Acta Universitatis Lodziensis. Folia Iuridica, 2020 (91). pp. 9-24. ISSN 2450-2782 Griffiths, Cerian (2020) The honest cheat:a timely history of cheating and fraud following Ivey v Genting Casinos (UK) Ltd t/a Crockfords [2017] UKSC 67. Legal Studies, 40 (2). pp. 252-268. ISSN 0261-3875 Griffiths, Kieran and Halcovitch, Nathan and Griffin, John (2020) Long-Term Solar Energy Storage under Ambient Conditions in a MOF-Based Solid–Solid Phase-Change Material. Chemistry of Materials, 32 (23). pp. 9925-9936. ISSN 0897-4756 Griffiths, L.J. and Johnson, R.D. and Broadhurst, K. and Bedston, S. and Cusworth, L. and Alrouh, B. and Ford, D.V. and John, A. (2020) Maternal health, pregnancy and birth outcomes for women involved in care proceedings in Wales:a linked data study. BMC Pregnancy and Childbirth, 20 (1). ISSN 1471-2393 Griffiths, Rupert (2020) 从地平线到地面:从视觉走向物质的摄影之路. In: 场所 空间 艺术. China Academy of Art, Hangzhou, China, pp. 148-162. ISBN 9787550318212 Griffiths, Rupert (2020) Design practice as fieldwork:Describing the nocturnal biome through light and sound. In: Ambiances, Alloaesthesia: Senses, Inventions, Worlds. International Ambiances Network, pp. 108-113. ISBN 9782952094870 Griffiths, Rupert and Dunn, Nick (2020) More-than-human Nights:Intersecting lived experience and diurnal rhythms in the nocturnal city. In: ICNS Proceedings. ISCTE, Instituto Universitário de Lisboa, PRT, pp. 203-220. ISBN 9789728048 Grover, Chris (2020) Understanding material assistance in the Children and Young Persons Act 1963:Idealism and classical liberalism in England and Wales. Qualitative Social Work. ISSN 1473-3250 Grundy, Tom and Killick, Rebecca and Mihaylov, G (2020) High-Dimensional Changepoint Detection via a Geometrically Inspired Mapping. Statistics and Computing, 30. 1155–1166. ISSN 0960-3174 Gruse, J.-N and Streeter, Matthew and Thornton, C and Armstrong, C.D. and Baird, C.D. and Bourgeois, N and Cipiccia, S. and Finlay, Oliver and Gregory, C.D. and Katzir, Y and Lopes, N.C. and Mangles, S.P.D and Najmudin, Z and Neely, D and Pickard, L.R. and Potter, K.D. and Rajeev, P.P. and Rusby, D.R. and Underwood, C.I.D and Warnett, J.M. and Williams, M.A. and Wood, J.C. and Murphy, C.D. and Brenner, C.M. and Symes, D.R. (2020) Application of compact laser-driven accelerator X-ray sources for industrial imaging. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 983. ISSN 0168-9002 Grybchuk, D. and MacEdo, D.H. and Kleschenko, Y. and Kraeva, N. and Lukashev, A.N. and Bates, P.A. and Kulich, P. and Leštinová, T. and Volf, P. and Kostygov, A.Y. and Yurchenko, V. (2020) The first Non-LRV RNA virus in leishmania. Viruses, 12 (2). ISSN 1999-4915 Gräbner, Cornelia (2020) Invading Stages:Interview with Pete Bearder and Review of Stage Invasion: Poetry & The Spoken Word Renaissance. 
Liminalities: A Journal of Performance Studies, 16 (1). ISSN 1557-2935 Gräbner, Cornelia (2020) Sentipensar / FeelingThinking, and Poetic Stage Invasions:Thoughts on a Conversation with Pete Bearder. UNSPECIFIED. Gräbner, Cornelia (2020) The World, Not the Mirror:On Carolyn Forché's What You Have Heard Is True: A Memoir of Witness and Resistance (Penguin, 2019) and In the Lateness of the World (Penguin, 2020). Massachusetts Review, Inc. Gu, Xiaowei (2020) A Self-Training Hierarchical Prototype-Based Approach for Semi-Supervised Classification. Information Sciences, 535. pp. 204-224. ISSN 0020-0255 Gu, Xiaowei and Angelov, Plamen (2020) Highly interpretable hierarchical deep rule-based classifier. Applied Soft Computing, 92. ISSN 1568-4946 Gu, Xiaowei and Angelov, Plamen and Almeida Soares, Eduardo (2020) A Self-Adaptive Synthetic Over-Sampling Technique for Imbalanced Classification. International Journal of Intelligent Systems, 35 (6). pp. 923-943. ISSN 0884-8173 Gu, Xiaowei and Khan, Muhammad and Angelov, Plamen and Tiwary, Bikash and Shafipour Yourdshahi, Elnaz and Yang, Zhaoxu (2020) A Novel Self-Organizing PID Approach for Controlling Mobile Robot Locomotion. In: 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, pp. 1-10. ISBN 9781728169330 Gu, Xiaowei and Williams, Bryan and Black, S. (2020) A deep convolutional neural network-based approach for superficial dorsal hand vein pattern matching. In: European Association for Biometrics Research Projects Conference 2020, 2020-09-14 - 2020-09-16, Darmstadt. Gu, Zewen and Hou, Xiaonan and Keating, Elspeth and Ye, Jianqiao (2020) Non-linear finite element model for dynamic analysis of high-speed valve train and coil collisions. International Journal of Mechanical Sciences, 173. ISSN 0020-7403 Gubbels, Joyce and Swart, Nicole M. and Groen, Margriet (2020) Everything in moderation:ICT and reading performance of Dutch 15-year-olds. Large-scale Assessments in Education, 8. ISSN 2196-0739 Guenault, A. M. and Guthrie, A. and Haley, R. P. and Kafanov, S. and Pashkin, Yu. A. and Pickett, G. R. and Tsepelin, V. and Zmeev, D. E. and Collin, E. and Gazizulin, R. and Maillet, O. (2020) Detecting a phonon flux in superfluid He-4 by a nanomechanical resonator. Physical review B, 101 (6). ISSN 2469-9950 Guio, P. and Staniland, Ned and Achilleos, Nicholas and Arridge, Chris (2020) Trapped Particle Motion in Magnetodisk Fields. Journal of Geophysical Research: Space Physics, 125 (7). ISSN 2169-9402 Gunasekara Vidana Mestrige Fernando, Chamath (2020) "Adjunctive Effects of a Short Session of Music on Pain, Low-mood and Anxiety Modulation among Cancer Patients" – A Randomized Crossover Clinical Trial. Indian Journal of Palliative Care, 25 (3). 367 - 373. ISSN 0973-1075 Gunasekara Vidana Mestrige Fernando, Chamath and Preston, Nancy (2020) Refractory pruritus from malignant cholestasis:Management. BMJ Supportive and Palliative Care. ISSN 2045-435X Guo, Ran (2020) Essays on corporate governance and firm performance. PhD thesis, UNSPECIFIED. Guo, Tianhao and Schormans, John and Xu, Lexi and Wu, Jinze and Cao, Yue (2020) Proximity as a Service for the Use Case of Access Enhancement via Cellular Network-Assisted Mobile Device-to-Device. IEEE Access, 8. 31562 - 31573. ISSN 2169-3536 Gupta, Dhanak and Kocot, Magdalena and Tryba, Anna-Maria and Serafim, Andrada and Stancu, Izabela-Cristina and Jaegermann, Zbigniew and Pamula, Elzbieta and Reilly, G.C.
and Douglas, Timothy (2020) Novel naturally derived whey protein isolate and aragonite biocomposite hydrogels have potential for bone regeneration. Materials and Design, 188. ISSN 0261-3069 Gupta, Gaurav and Selvakumar, Karuppiah and Lakshminarasimhan, Narayanan and Senthil Kumar, Sakkarapalayam Murugesan and Mamlouk, Mohamed (2020) The effects of morphology, microstructure and mixed-valent states of MnO2 on the oxygen evolution reaction activity in alkaline anion exchange membrane water electrolysis. Journal of Power Sources, 461. ISSN 0378-7753 Gupta, N. and Pannella, M. and Mohr, J. J. and Klein, M. and Rykoff, E. S. and Annis, J. and Avila, S. and Bianchini, F. and Brooks, D. and Buckley-Geer, E. and Bulbul, E. and Rosell, A. Carnero and Kind, M. Carrasco and Carretero, J. and Chiu, L and Costanzi, M. and da Costa, L. N. and De Vicente, J. and Desai, S. and Dietrich, J. P. and Doel, P. and Everett, S. and Evrard, A. E. and Garcia-Bellido, J. and Gaztanaga, E. and Gruen, D. and Gruendl, R. A. and Gschwend, J. and Gutierrez, G. and Hollowood, D. L. and Honscheid, K. and James, D. J. and Jeltema, T. and Kuehn, K. and Lidman, C. and Lima, M. and Maia, M. A. G. and Marshall, J. L. and McDonald, M. and Menanteau, F. and Miquel, R. and Ogando, R. L. C. and Palmese, A. and Paz-Chinchon, F. and Plazas, A. A. and Reichardt, C. L. and Sanchez, E. and Santiago, B. and Saro, A. and Scarpine, V. and Schindler, R. and Schubnell, M. and Serrano, S. and Sevilla-Noarbe, I. and Shao, X. and Smith, M. and Stott, J. P. and Strazzullo, V. and Suchyta, E. and Swanson, M. E. C. and Vikram, V. and Zenteno, A. (2020) Constraining radio mode feedback in galaxy clusters with the cluster radio AGNs properties to z similar to 1. Monthly Notices of the Royal Astronomical Society, 494 (2). pp. 1705-1723. ISSN 0035-8711 Guthrie, A. and Kafanov, S. and Noble, M. T. and Pashkin, Yu A. and Pickett, G. R. and Tsepelin, V. and Dorofeev, A. A. and Krupenin, V. A. and Presnov, D. E. (2020) Nanoscale Real-Time Detection of Quantum Vortices at Millikelvin Temperatures. arxiv.org. Guthrie, Andrew (2020) Nanoscale devices for studying quantum fluids and electrostatic field-effects in superconducting nanoconstrictions. PhD thesis, UNSPECIFIED. Gutiérrez-Soto, L. A. and Gonçalves, D. R. and Akras, S. and Cortesi, A. and López-Sanjuan, C. and Guerrero, M. A. and Daflon, S. and Fernandes, M. Borges and Oliveira, C. Mendes de and Ederoclite, A. and Jr, L. Sodré and Pereira, C. B. and Kanaan, A. and Werle, A. and Ramió, H. Vázquez and Alcaniz, J. S. and Angulo, R. E. and Cenarro, A. J. and Cristóbal-Hornillos, D. and Dupke, R. A. and Hernández-Monteagudo, C. and Marín-Franch, A. and Moles, M. and Varela, J. and Ribeiro, T. and Schoenell, W. and Alvarez-Candal, A. and Galbany, L. and Jiménez-Esteban, F. M. and Logroño-García, R. and Sobral, D. (2020) J-PLUS: Tools to identify compact planetary nebulae in the Javalambre and southern photometric local universe surveys. Astronomy and Astrophysics, 633. ISSN 1432-0746 Gutsche Jr, Robert (2020) Review of the book The dynamics of news: Journalism in the 21st-Century media milieu, by R. M. Perloff. Journalism, 21 (9). pp. 1371-1372. ISSN 1464-8849 Gutsche Jr, Robert (2020) Solution journalism and participation. UNSPECIFIED. Gutsche Jr, Robert and Hess, Kristy (2020) Contesting communities:The problem of journalism and social order. In: Reimagining Journalism and Social Order in a Fragmented Media World. Routledge, London. 
ISBN 9780367366056 Gutsche Jr, Robert and Hess, Kristy (2020) "Placeification":The transformation of digital news spaces into "places" of meaning. Digital Journalism, 8 (5). pp. 586-595. ISSN 2167-082X Gutsche Jr, Robert and Hess, Kristy (2020) Total eclipse of the social:What journalism can learn from the fundamentals of Facebook. In: Journalism Research in Practice. Journalism Studies . Routledge, London. ISBN 9780367469665 Guy, Mary (2020) Can Covid-19 change the EU competition law framework in health? European Social Observatory / Observatoire Social Europeen Opinion Paper series. pp. 1-11. ISSN 1994-2893 Guénier, Amily Dongshuo Wang (2020) A Multimodal Course Design for Intercultural Business Communication. Journal of Teaching in International Business, 31 (3). pp. 214-237. Gweon, H.S. and Bowes, M.J. and Moorhouse, H.L. and Oliver, A.E. and Bailey, M.J. and Acreman, M.C. and Read, D.S. (2020) Contrasting community assembly processes structure lotic bacteria metacommunities along the river continuum. Environmental Microbiology. ISSN 1462-2912 Gwernan-Jones, R. and Britten, N. and Allard, J. and Baker, E. and Gill, L. and Lloyd, H. and Rawcliffe, T. and Sayers, R. and Plappert, H. and Gibson, J. and Clark, M. and Birchwood, M. and Pinfold, V. and Reilly, S. and Gask, L. and Byng, R. (2020) A worked example of initial theory-building:PARTNERS2 collaborative care for people who have experienced psychosis in England. Evaluation, 26 (1). pp. 6-26. ISSN 1356-3890 Gyimah, Akosua (2020) Conflict Management in South Sudan:What is Obstructing the Peace Process? The Horn Bulletin, III (VI). pp. 12-22. ISSN 2663-4996 Gyimah, Akosua (2020) Is Democracy the Right System of Government for Africans?:Deteriorating Democracy? Private Law Consulting Firm, United Kingdom. Gómez, J.A. and Ben-Gal, A. and Alarcón, J.J. and De Lannoy, G. and de Roos, S. and Dostál, T. and Fereres, E. and Intrigliolo, D.S. and Krása, J. and Klik, A. and Liebhard, G. and Nolz, R. and Peeters, A. and Plaas, E. and Quinton, J.N. and Rui, M. and Strauss, P. and Weifeng, X. and Zhang, Z. and Zhong, F. and Zumr, D. and Dodd, I.C. (2020) SHui, an EU-Chinese cooperative project to optimize soil and water management in agricultural areas in the XXI century. International Soil and Water Conservation Research, 8 (1). pp. 1-14. Habarulema, J.B. and Katamzi-Joseph, Z.T. and Burešová, D. and Nndanganeni, R. and Matamba, T. and Tshisaphungo, M. and Buchert, S. and Kosch, M. and Lotz, S. and Cilliers, P. and Mahrous, A. (2020) Ionospheric Response at Conjugate Locations During the 7–8 September 2017 Geomagnetic Storm Over the Europe-African Longitude Sector. Journal of Geophysical Research: Space Physics, 125 (10). ISSN 2169-9402 Hacker, K.P. and Sacramento, G.A. and Cruz, J.S. and De Oliveira, D. and Nery, N. and Lindow, J.C. and Carvalho, M. and Hagan, J. and Diggle, P.J. and Begon, M. and Reis, M.G. and Wunder, E.A. and Ko, A.I. and Costa, F. (2020) Influence of rainfall on leptospira infection and disease in a tropical urban setting, Brazil. Emerging Infectious Diseases, 26 (2). pp. 311-314. ISSN 1080-6040 Haddican, B. and Johnson, D.E. and Wallenberg, J. and Holmberg, A. (2020) Variation and change in the particle verb alternation across English dialects. In: Advancing Socio-grammatical Variation and Change: In Honour of Jenny Cheshire. Routledge, London, pp. 205-228. ISBN 9780367244798 Haddock, David and Manya, Shukrani and Brown, Richard J. and Jones, Thomas and Wadsworth, Fabian B. and Dobson, Katherine J. and M. 
Gernon, Thomas (2020) Syn-eruptive agglutination of kimberlite volcanic ash. Volcanica, 3 (1). pp. 169-182. ISSN 2610-3540 Hadley, Lucinda and Chatzigeorgiou, Ioannis (2020) Low Complexity Optimization of the Asymptotic Spectral Efficiency in Massive MIMO NOMA. IEEE Wireless Communications Letters, 9 (11). pp. 1928-1932. ISSN 2162-2337 Hafiz, Muneeb (2020) Critique of Muslim reason:Achille Mbembe and a study of race, racism and Islamophobia in modern Britain. PhD thesis, UNSPECIFIED. Hafiz, Muneeb (2020) Smashing the Imperial Frame: Race, Culture, (De)Coloniality. Theory, Culture and Society, 37 (1). pp. 113-145. ISSN 0263-2764 Hagopian, P. (2020) The Martin Luther King, Jr. memorial and the politics of post-racialism. History and Memory, 32 (2). pp. 36-77. Hagström, Linus and Nordin, Astrid Hanna Maria (2020) China's "Politics of Harmony" and the Quest for Soft Power in International Politics. International Studies Review, 22 (3). 507–525. Hahn, G. (2020) Optimal allocation of Monte Carlo simulations to multiple hypothesis tests. Statistics and Computing, 30 (3). pp. 571-586. ISSN 0960-3174 Hahn, Georg and Fearnhead, Paul and Eckley, Idris (2020) BayesProject:Fast computation of a projection direction for multivariate changepoint detection. Statistics and Computing, 30. 1691–1705. ISSN 0960-3174 Halac, Marina and Kremer, Ilan and Winter, Eyal (2020) Raising Capital from Heterogeneous Investors. The American Economic Review, 110 (3). pp. 889-921. ISSN 0002-8282 Hale, Alison (2020) Risk factors for the rate of progression of chronic kidney disease in secondary care patients. Masters thesis, UNSPECIFIED. Hales, J.J. and Trudeau, M.L. and Antonelli, D.M. and Kaltsoyannis, N. (2020) Formation of Mn hydrides from bis(trimethylsilylmethyl) Mn(II):A DFT study. Polyhedron, 178. ISSN 0277-5387 Hall, Angela and Mitchell, Andrew Robert John and Wood (Ashmore), Lisa and Holland, Carol (2020) Effectiveness of a single lead AliveCor electrocardiogram application for the screening of atrial fibrillation:A systematic review. Medicine, 99 (30). ISSN 1536-5964 Hall, S.G. and Gibson, H.D. and Tavlas, G.S. and Tsionas, M.G. (2020) A Monte Carlo Study of Time Varying Coefficient (TVC) Estimation. Computational Economics, 56. 115–130. ISSN 0927-7099 Halliday, Emma and Collins, Michelle and Egan, Matthew and Ponsford, Ruth and Scott, Courtney and Popay, Jennie (2020) A 'strategy of resistance'? How can a place-based empowerment programme influence local media portrayals of neighbourhoods and what are the implications for tackling health inequalities? Health and Place, 63. ISSN 1353-8292 Halliday, Emma and Popay, Jennie and Anderson de Cuevas, Rachel and Wheeler, Paula (2020) The elephant in the room?:Why spatial stigma does not receive the public health attention it deserves. Journal of Public Health, 42 (1). pp. 38-43. ISSN 1741-3842 Hamad, Rebeen Ali and Yang, Longzhi and Woo, Wai Lok and Wei, Bo (2020) Joint Learning of Temporal Models to Handle Imbalanced Data for Human Activity Recognition. Applied Sciences, 10 (15). ISSN 2076-3417 Hamasaki, Fumina (2020) Subversions of fertility:menstrual blood, breast milk and 'feminised' food in feminist philosophy, literature and art 1960-2000. PhD thesis, UNSPECIFIED. Hameiri, Shahar and Zeng, Jinghan (2020) State Transformation and China's Engagement in Global Governance:The Case of Nuclear Technologies. The Pacific Review, 33 (6). pp. 900-930. Hamilton, Bernard and Jotischky, Andrew (2020) Latin and Greek Monasticism in the Crusader States. 
Cambridge University Press, Cambridge. ISBN 9780521836388 Hamilton, John and Varey, Sandra (2020) Using action research to implement, investigate and evaluate interventions in applied health research. In: The Handbook of Theory and Methods in Applied Health Research. Edward Elgar, Cheltenham, 167–187. ISBN 9781785363207 Hammond, Alison and Sutton, Chris and Cotterill, Sarah and Woodbridge, Sarah and O'Brien, Rachel and Radford, Kate and Forshaw, Denise and Verstappen, Suzanne and Jones, Cheryl and Marsden, Antonia and Eden, Martin and Prior, Yeliz and Culley, June and Holland, Paula and Walker-Bone, Karen and Hough, Yvonne and O'Neill, Terence and Ching, Angela and Parker, Jennifer (2020) The effect on work presenteeism of job retention vocational rehabilitation compared to a written self-help work advice pack for employed people with inflammatory arthritis: protocol for a multi-centre randomised controlled trial (the WORKWELL trial). BMC Musculoskeletal Disorders, 21. ISSN 1471-2474 Han, Y. and Zhu, H. and Affolder, A. and Arndt, K. and Bates, R. and Benoit, M. and Di Bello, F. and Blue, A. and Bortoletto, D. and Buckland, M. and Buttar, C. and Caragiulo, P. and Chen, Y. and Das, D. and Doering, D. and Dopke, J. and Dragone, A. and Ehrler, F. and Fadeyev, V. and Fedorko, W. and Galloway, Z. and Gay, C. and Grabas, H. and Gregor, I.M. and Grenier, P. and Grillo, A. and Hiti, B. and Hoeferkamp, M. and Hommels, L.B.A. and Huffman, T. and John, J. and Kanisauskas, K. and Kenney, C. and Kramberger, G. and Liu, P. and Lu, W. and Liang, Z. and Mandić, I. and Maneuski, D. and Martinez-Mckinney, F. and McMahon, S. and Meng, L. and Mikuz̆, M. and Muenstermann, D. and Nickerson, R. and Peric, I. and Phillips, P. and Plackett, R. and Rubbo, F. and Ruckman, L. and Segal, J. and Seidel, S. and Seiden, A. and Shipsey, I. and Song, W. and Stanitzki, M. and Su, D. and Tamma, C. and Turchetta, R. and Vigani, L. and Volk, J. and Wang, R. and Warren, M. and Wilson, F. and Worm, S. and Xiu, Q. and Zhang, J. (2020) Study of CMOS strip sensor for future silicon tracker. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 981. ISSN 0168-9002 Handley, Joel D. and Emsley, Hedley C.A. (2020) Validation of ICD-10 codes shows intracranial venous thrombosis incidence to be higher than previously reported. Health Information Management Journal, 49 (1). pp. 58-61. ISSN 1833-3583 Hanif, M. and Lee, C. and Helal, S. (2020) Predictive topology refinements in distributed stream processing system. PLoS ONE, 15 (11). ISSN 1932-6203 Hankin, Barry and hewitt, ian and Sander, Graham and Danieli, Federico and Formetta, Giuseppe and Kamilova, Alissa and Kretzschmar, Ann and Kiradjiev, Kros and Wong, Clint and Pegler, Sam and Lamb, Rob (2020) A risk-based network analysis of distributed in-stream leaky barriers for flood risk management. Natural Hazards and Earth System Sciences, 20 (10). 2567–2584. ISSN 1561-8633 Hanks, Laura (2020) Experimental and theoretical determination of the transport properties of n-AlxGa1-xSb/GaSb. PhD thesis, UNSPECIFIED. Hansen, H.F. and Randell, D. and Zeeberg, A.R. and Jonathan, P. (2020) Directional–seasonal extreme value analysis of North Sea storm conditions. Ocean Engineering, 195. ISSN 0029-8018 Hao, T. and Zhang, Y. and Zhang, J. and Müller, C. and Li, K. and Zhang, K. and Chu, H. and Stevens, C. and Liu, X. 
(2020) Chronic nitrogen addition differentially affects gross nitrogen transformations in alpine and temperate grassland soils. Soil Biology and Biochemistry, 149. ISSN 0038-0717 Hapgood, Mike and Angling, Matthew and Attrill, Gemma and Bisi, Mario and Burnett, Catherine and Cannon, Paul and Dyer, Clive and Eastwood, Jonathan and Elvidge, Sean and Gibbs, Mark and Harrison, Richard and Hord, Colin and Horne, Richard B. and Jackson, David and Jones, Bryn and Machin, Simon and Mitchell, Cathryn N. and Preston, John and Rees, John and Rogers, Neil and Richards, Andrew and Routledge, Graham and Ryden, Keith and Tanner, Rick and Thomson, Alan and Wild, Jim and Willis, Mike (2020) Summary of space weather worst-case environments:(2nd revised edition). Science and Technology Facilities Council. Happersberger, David and Lohre, Harald and Nolte, Ingmar (2020) Estimating Portfolio Risk for Tail Risk Protection Strategies. European Financial Management, 26 (4). pp. 1107-1146. ISSN 1354-7798 Hardaker, C. (2020) New developments in corpus approaches to social media:a response. In: Corpus Approaches to Social Media. Studies in Corpus Linguistics . John Benjamins Publishing Company, Amsterdam, pp. 199-208. ISBN 9789027207944 Hardie, Andrew and Dorst, Isolde van (2020) A survey of grammatical variability in Early Modern English drama. Language and Literature, 29 (3). pp. 275-301. ISSN 0963-9470 Harding, Andrew and Hean, Sarah and Parker, Jonathan and Hemingway, Ann (2020) "It can't really be answered in an information pack…":A realist evaluation of a telephone housing options service for older people. Social Policy and Society, 19 (3). pp. 361-378. ISSN 1474-7464 Harding, H R and Gordon, T A C and Wong, K and Mccormick, M I and Simpson, S D and Radford, A N (2020) Condition-dependent responses of fish to motorboats. Biology Letters, 16. ISSN 1744-9561 Harding, Luke and Brunfaut, Tineke (2020) Trajectories of language assessment literacy in a teacher-researcher partnership:Locating elements of praxis through narrative inquiry. In: Towards a reconceptualization of second language classroom assessment. Springer, Cham, pp. 61-81. ISBN 9783030350802 Harding, Luke and Brunfaut, Tineke and Unger, Johann Wolfgang (2020) Language testing in the 'hostile environment':The discursive construction of 'secure English language testing' in the United Kingdom. Applied Linguistics, 41 (5). 662–687. ISSN 0142-6001 Harding, Luke and Harding, Matthew (2020) Misleading silence under the Australian Consumer Law:Perspectives from linguistics. In: Misleading silence. Hart Publishing, London. ISBN 9781509929252 Harding, Luke and Kremmel, Benjamin (2020) SLA researcher assessment literacy. In: The Routledge Handbook of Second Language Acquisition and Language Testing. Routledge, London. ISBN 9781138490680 Harding, Luke and Taylor, Lynda (2020) A testing time for testing: Assessment literacy as a force for social good in the time of coronavirus. Academy of Social Sciences. Harding, Luke and Winke, Paula (2020) Editorial. Language Testing, 37 (1). pp. 3-5. ISSN 0265-5322 Harding, Nicola (2020) Co-constructing Feminist Research:Ensuring meaningful participation whilst researching the experiences of criminalised women. Methodological Innovations, 13 (2). pp. 1-14. ISSN 2059-7991 Harding, Nicola (2020) Navigating gendered criminalisation:Women's experiences of punishment in the community. PhD thesis, UNSPECIFIED. Hardy, Claire (2020) Menopause and the workplace guidance:What to consider. Post Reproductive Health, 26 (1). pp. 43-45. 
Hardy, Claire (2020) Menopause and the workplace: What do menopausal employees want and what can we do? Insights from the United Kingdom:Symposium contribution: Still encountering a glass ceiling? Developments for women and work. In: European Academy of Occupational Health Psychology, 2020-09-02 - 2020-09-04, Cyprus. Hardy, Claire (2020) Menopause in the workplace. In: British Psychological Society Division of Occupational Psychology Annual Conference, 2020-01-08 - 2020-06-10, Stratford-Upon-Avon, UK. (Unpublished) Hardy, Claire (2020) Reimagining teaching: Considering how to teach sensitive or controversial topics online. In: 35th (2020) Annual Conference of SIETAR Japan, 2020-11-07 - 2020-11-08. Hardy, John and Baldock, Sara and Cummings, Damian M. and Edwards, Frances (2020) Multiphoton fabrication of bioelectronics. In: UNSPECIFIED. Hardy, John (2020) Electroactive scaffold, method of making the electroactive scaffold, and method of using the electroactive scaffold. PCT/US2015/050594. Hardy, John (2020) Electroactive supramolecular polymeric assemblies, methods of making electroactive supramolecular polymeric assemblies, and methods of using electroactive supramolecular polymeric assemblies. PCT/US2016/035227. Hardy, John (2020) Conductive nonwoven mat and method of using the conductive nonwoven mat. PCT/US2016/039230. Hardy, John (2020) Electroactive scaffolds and methods of using electroactive scaffolds. PCT/US2016/041889. Hargreaves, Anthony J. and Cavada, Marianna and Rogers, Chris (2020) Engineering for the Far Future:Rethinking the Value Proposition. Proceedings of the ICE - Engineering Sustainability, 173 (1). pp. 3-7. ISSN 1478-4629 Haro, P. Arrabal and Espinosa, J. M. Rodríguez and Muñoz-Tuñón, C. and Sobral, D. and Lumbreras-Calle, A. and Boquien, M. and Hernán-Caballero, A. and Rodríguez-Muñoz, L. and Pampliega, B. Alcalde (2020) Differences and similarities of stellar populations in LAEs and LBGs at z~3.4-6.8. Monthly Notices of the Royal Astronomical Society, 495 (2). 1807–1824. ISSN 0035-8711 Harris, Geraldine (2020) Re-Readings, Unmarked The Politics of Performance by Peggy Phelan. Contemporary Theatre Review, 30 (2). pp. 278-279. ISSN 1048-6801 Harrison, Richard and Leitch, Claire and McAdam, Maura (2020) Woman's entrepreneurship as a gendered niche:the implications for regional economic development policy. Journal of Economic Geography, 20 (4). 1041–1067. ISSN 1468-2702 Harrison, Sophie (2020) The "sub-culture" created through austere measures:Understanding the cycle to break it. PhD thesis, UNSPECIFIED. Harriss, Lydia and Mir, Zara (2020) Misuse of civilian drones. The Parliamentary Office of Science and Technology, London. Harrod, Andy (2020) Nature & Nourishment. you are here : the journal of creative geography, 21. pp. 53-55. Harrod, Andy (2020) Psychogeography and Psychotherapy: connecting pathways, Chris Rose (Ed.), PCCS Books, Monmouth (2019), p. 210, index. £18.99 paperback, ISBN: 978-1-910919-47-7. Emotion, Space and Society, 36. ISSN 1755-4586 Hart, C. and Fuoli, M. (2020) Objectification strategies outperform subjectification strategies in military interventionist discourses. Journal of Pragmatics, 162. pp. 17-28. ISSN 0378-2166 Hart, Nicholas and Rotsos, Charalampos and Giotsas, Vasileios and Race, Nicholas and Hutchison, David (2020) λBGP:Rethinking BGP programmability. In: 2020 IEEE/IFIP Network Operations and Management Symposium (NOMS 2020). IEEE, pp. 1-9.
ISBN 9781728149738 Hartescu, Ioana (2020) Project-based learning and the development of students' professional identity:A case study of an instructional design course with real clients in Romania. PhD thesis, UNSPECIFIED. Hartl, B. and Sharma, S. and Brügner, O. and Mertens, S.F.L. and Walter, M. and Kahl, G. (2020) Reliable Computational Prediction of the Supramolecular Ordering of Complex Molecules under Electrochemical Conditions. Journal of Chemical Theory and Computation, 16 (8). pp. 5227-5243. ISSN 1549-9618 Hartley, Calum and Bird, Laura and Monaghan, Padraic (2020) Comparing cross-situational word learning, retention, and generalisation in children with autism and typical development. Cognition, 200. ISSN 0010-0277 Hartley, Calum and Fisher, Sophie and Fletcher, Naomi (2020) Exploring the influence of ownership history on object valuation in typical development and autism. Cognition, 197. ISSN 0010-0277 Harvey-Samuel, T. and Norman, V.C. and Carter, R. and Lovett, E. and Alphey, L. (2020) Identification and characterization of a Masculinizer homologue in the diamondback moth, Plutella xylostella. Insect Molecular Biology, 29 (2). pp. 231-240. ISSN 0962-1075 Harzheim, Achim and Evangeli, Charalambos and Kolosov, Oleg and Gehring, Pascal (2020) Direct mapping of local Seebeck coefficient in 2D material nanostructures via scanning thermal gate microscopy. 2D Materials, 7 (4). ISSN 2053-1583 Haskew, Mathew and Hardy, John (2020) A Mini-Review of Shape-Memory Polymer-Based Materials:Stimuli-responsive shape-memory polymers. Johnson Matthey Technology Review, 64 (4). Hasson, F. and Nicholson, E. and Muldrew, D. and Bamidele, O. and Payne, S. and McIlfatrick, S. (2020) International palliative care research priorities:A systematic review. BMC Palliative Care, 19 (1). ISSN 1472-684X Hasted, Catherine and Bligh, Brett (2020) Theorising practices of relational working across the boundaries of higher education. In: Theory and Method in Higher Education Research. Emerald Group Publishing Ltd., Bingley. ISBN 9781800433212 Hatfield, Jack H. and Barlow, Jos and Joly, Carlos A. and Lees, Alexander C. and Parruco, Celso Henrique de Freitas and Tobias, Joseph A. and Orme, C. David L. and Banks-Leite, Cristina (2020) Mediation of area and edge effects in forest fragments by adjacent land use. Conservation Biology, 34 (2). pp. 395-404. ISSN 0888-8892 Hathout, R.M. and Metwally, A.A. and Woodman, T.J. and Hardy, J.G. (2020) Prediction of Drug Loading in the Gelatin Matrix Using Computational Methods. ACS Omega, 5 (3). pp. 1549-1556. ISSN 2470-1343 Hautier, Yann and Zhang, P. and Loreau, Michel and Wilcox, Kevin and Seabloom, Eric W. and Borer, Elizabeth T. and Byrnes, Jarrett and Koerner, Sally and Komatsu, Kimberly and Lefcheck, Jonathan and Hector, Andrew and Adler, Peter B. and Alberti, Juan and Arnillas, Carlos A. and Bakker, J.D. and Brudvig, Lars A. and Bugalho, M.N. and Cadotte, Marc W. and Caldeira, Maria and Carroll, Oliver and Crawley, Michael J. and Collins, Scott and Daleo, Pedro and Dee, Laura and Eisenhauer, N. and Isbell, Forest and Knops, Johannes M. H. and MacDougall, Andrew S. and McCulley, Rebecca L. and Moore, J.L. and Morgan, J.W. and Mori, Akira S. and Peri, P.L. and Pos, E. and Power, S.A. and Price, Jodie and Reich, Peter B. and Risch, Anita C. and Roscher, Christiane and Sankaran, Mahesh and Schütz, Martin and Smith, Melinda and Stevens, Carly and Tognetti, P.M. and Virtanen, R and Wardle, Glenda M. 
and Wilfahrt, Peter and Wang, Shaopeng (2020) General destabilizing effects of eutrophication on grassland productivity at multiple spatial scales. Nature Communications, 11. ISSN 2041-1723 Haw, D.J. and Pung, R. and Read, J.M. and Riley, S. (2020) Strong spatial embedding of social networks generates nonstandard epidemic dynamics independent of degree distribution and clustering. Proceedings of the National Academy of Sciences of the United States of America, 117 (38). pp. 23636-23642. ISSN 0027-8424 Hawes, J.E. and Vieira, I.C.G. and Magnago, L.F.S. and Berenguer, E. and Ferreira, J. and Aragão, L.E.O.C. and Cardoso, A. and Lees, A.C. and Lennox, G.D. and Tobias, J.A. and Waldron, A. and Barlow, J. (2020) A large-scale assessment of plant dispersal mode and seed traits across human-modified Amazonian forests. Journal of Ecology, 108 (4). pp. 1373-1385. ISSN 0022-0477 Hawkins, Jonathan D. and Lok, Lai Bun and Brennan, Paul V. and Nicholls, Keith W. (2020) HF Wire-Mesh Dipole Antennas for Broadband Ice-Penetrating Radar. IEEE Antennas and Wireless Propagation Letters, 19 (12). 2172 - 2176. ISSN 1536-1225 Hayer, Tajinder Singh (2020) Tidelands. [Performance] (In Press) Hays, G.C. and Koldewey, H.J. and Andrzejaczek, S. and Attrill, M.J. and Barley, S. and Bayley, D.T.I. and Benkwitt, C.E. and Block, B. and Schallert, R.J. and Carlisle, A.B. and Carr, P. and Chapple, T.K. and Collins, C. and Diaz, C. and Dunn, N. and Dunbar, R.B. and Eager, D.S. and Engel, J. and Embling, C.B. and Esteban, N. and Ferretti, F. and Foster, N.L. and Freeman, R. and Gollock, M. and Graham, N.A.J. and Harris, J.L. and Head, C.E.I. and Hosegood, P. and Howell, K.L. and Hussey, N.E. and Jacoby, D.M.P. and Jones, R. and Sannassy Pilly, S. and Lange, I.D. and Letessier, T.B. and Levy, E. and Lindhart, M. and McDevitt-Irwin, J.M. and Meekan, M. and Meeuwig, J.J. and Micheli, F. and Mogg, A.O.M. and Mortimer, J.A. and Mucciarone, D.A. and Nicoll, M.A. and Nuno, A. and Perry, C.T. and Preston, S.G. and Rattray, A.J. and Robinson, E. and Roche, R.C. and Schiele, M. and Sheehan, E.V. and Sheppard, A. and Sheppard, C. and Smith, A.L. and Soule, B. and Spalding, M. and Stevens, G.M.W. and Steyaert, M. and Stiffel, S. and Taylor, B.M. and Tickler, D. and Trevail, A.M. and Trueba, P. and Turner, J. and Votier, S. and Wilson, B. and Williams, G.J. and Williamson, B.J. and Williamson, M.J. and Wood, H. and Curnick, D.J. (2020) A review of a decade of lessons from one of the world's largest MPAs:conservation gains and key challenges. Marine Biology, 167 (11). ISSN 0025-3162 Hazelhurst, Jonathan and Logue, Jennifer and Parretti, Helen and Abbott, Sally and Brown, Adrian and Pournaras, Dimitri and Tahrani, Abd (2020) Developing Integrated Clinical Pathways for the Management of Clinically Severe Adult Obesity:a Critique of NHS England Policy. Current obesity reports, 9. 530–543. ISSN 2162-4968 Hazell, C.M. and Hayward, M. and Lobban, F. and Pandey, A. and Pinfold, V. and Smith, H.E. and Jones, C.J. (2020) Demographic predictors of wellbeing in Carers of people with psychosis: Secondary analysis of trial data. BMC Psychiatry, 20 (1). ISSN 1471-244X He, Qinjiang and Fu, Renli and Gao, Weijun and Zhu, Haitao and Song, Xiufeng and Su, Xinqing (2020) Novel blue-emitting KBaGdSi2O7:Eu2+ phosphor used for near-UV white-light LED. Journal of Materials Science: Materials in Electronics, 31 (4). pp. 3159-3165. ISSN 0957-4522 He, X. and Peng, Z. and Wang, J. and Yang, G. 
(2020) Generic and efficient connectivity determination for IoT applications. IEEE Internet of Things Journal, 7 (6). pp. 5291-5301. ISSN 2327-4662 He, X. and Zheng, J. and Dai, H. and Zhang, C. and Rafique, W. and Li, G. and Dou, W. and Ni, Q. (2020) Coeus: Consistent and Continuous Network Update in Software-Defined Networks:38th IEEE Conference on Computer Communications, INFOCOM 2020. In: 38th IEEE Conference on Computer Communications, INFOCOM 2020, 2020-07-06 - 2021-06-09, Toronto, Canada. Head, J.W. and Wilson, L. (2020) Rethinking Lunar Mare Basalt Regolith Formation:New Concepts of Lava Flow Protolith and Evolution of Regolith Thickness and Internal Structure. Geophysical Research Letters, 47 (20). ISSN 0094-8276 Head, J.W. and Wilson, L. and Deutsch, A.N. and Rutherford, M.J. and Saal, A.E. (2020) Volcanically Induced Transient Atmospheres on the Moon:Assessment of Duration, Significance, and Contributions to Polar Volatile Traps. Geophysical Research Letters, 47 (18). ISSN 0094-8276 Head, James W. and Wilson, Lionel (2020) Magmatic intrusion-related processes in the upper lunar crust:The role of country rock porosity/permeability in magmatic percolation and thermal annealing, and implications for gravity signatures. Planetary and Space Science, 180. ISSN 0032-0633 Healy, Alisa (2020) Energy modulation of electron bunches using a terahertz-driven dielectric-lined waveguide. PhD thesis, UNSPECIFIED. Heap, Brittany and Holden, Claire and Taylor, Jane and McAinsh, Martin (2020) ROS Crosstalk in Signalling Pathways. In: els. eLS . Wiley. ISBN 9780470015902 Heasman, Patrick (2020) A study of microporous polymeric materials for electronic applications. PhD thesis, UNSPECIFIED. Heathwaite, Louise (2020) Freshwater science - Perspectives from the Royal Society Global Environmental Research Committee. Royal Society. Heggie, Lisa and Mackenzie, Ruth M and Ells, Louisa J and Simpson, Sharon Anne and Logue, Jennifer (2020) Tackling reporting issues and variation in behavioural weight management interventions:Design and piloting of the standardised reporting of adult behavioural weight management interventions to aid evaluation (STAR-LITE) template. Clinical obesity, 10 (5). Heggie, Lisa and Mackenzie, Ruth M and Ells, Louisa J and Simpson, Sharon Anne and Logue, Jennifer (2020) Tackling reporting issues and variation in behavioural weight management interventions:Design and piloting of the standardized reporting of adult behavioural weight management interventions to aid evaluation (STAR-LITE) template. Clinical obesity, 10 (5). Helal, S. (2020) The Monkey, the Ant, and the Elephant:Addressing Safety in Smart Spaces. Computer, 53 (5). pp. 73-76. ISSN 0018-9162 Heller, Yuval and Winter, Eyal (2020) Biased Belief Equilibrium. American Economic Journal: Microeconomics, 12 (2). pp. 1-40. ISSN 1945-7669 Hemming, L.P. (2020) Death's presents:Derrida – Haunting Hegel. svensk teologisk kvartalskrift, 96 (4). pp. 335-352. ISSN 2003-6248 Hemming, Laurence Paul (2020) Death's Presents:Derrida – Haunting Hegel. svensk teologisk kvartalskrift, 96 (4). 335–352. ISSN 2003-6248 Hemming, Laurence Paul (2020) Time and History In the Black Notebooks. In: Jenseits von Polemik und Apologie. Heidegger Jahrbuch, 12 . Verlag Karl Alber, Freiburg, pp. 133-152. ISBN 3495457127 Hendershott, Terrence and Kozhan, Roman and Raman, Vikas (2020) Short Selling and Price Discovery in Corporate Bonds. Journal of Financial and Quantitative Analysis, 55 (1). pp. 77-115.
ISSN 0022-1090 Henderson, Paul and Fisher, Naomi and Ball, Judith and Sellwood, Bill (2020) Mental health practitioner experiences of engaging with service users in community mental health settings:a systematic review and thematic synthesis of qualitative evidence. Journal of Psychiatric and Mental Health Nursing, 27 (6). pp. 807-820. ISSN 1351-0126 Hennell, Kath and Piacentini, Maria and Limmer, Mark (2020) Ethical dilemmas using social media in qualitative social research:A case study of online participant observation. Sociological Research Online, 25 (3). pp. 473-489. ISSN 1360-7804 Hennell, Kath and Piacentini, Maria and Limmer, Mark (2020) Exploring health behaviour:Understanding drinking practice using the lens of practice theory. Sociology of Health and Illness, 42 (3). pp. 627-642. ISSN 0141-9889 Herbst, Christopher H. (2020) Labour market preferences, attitudes and expectations of prospective health workers in Guinea. PhD thesis, UNSPECIFIED. Heritage, Frazer (2020) Applying corpus linguistics to videogame data:Exploring the representation of gender in videogames at a lexical level. Game studies, 20 (3). p. 20. ISSN 1604-7982 Heritage, Frazer (2020) Book review: Eric Russell, The Discursive Ecology of Homophobia: Unravelling Anti-LGBTQ Speech on the European Far Right. Discourse and Society, 31 (4). pp. 448-450. ISSN 0957-9265 Heritage, Frazer and Koller, Veronika (2020) Incels, in-groups, and ideologies:The representation of gendered social actors in a sexuality-based online community. Journal of Language and Sexuality, 9 (2). pp. 152-178. ISSN 2211-3770 Hernich, André and Lutz, Carsten and Papacchini, Fabio and Wolter, Frank (2020) Dichotomies in Ontology-Mediated Querying with the Guarded Fragment. ACM Transactions on Computational Logic (TOCL), 21 (3). ISSN 1529-3785 Hernández-Verdeja, Tamara and Vuorijoki, Linda and Strand, Åsa (2020) Emerging from the darkness:interplay between light and plastid signaling during chloroplast biogenesis. Physiologia Plantarum, 169 (3). pp. 397-406. ISSN 0031-9317 Hesketh, Anthony and Sellwood-Taylor, Jo and Mullen, Sharon (2020) Are you ready to serve on a board? Harvard Business Review. ISSN 0017-8012 Hess, Kristy and Gutsche Jr, Robert (2020) Journalism and the "social sphere":reclaiming a foundational concept for beyond politics and the public sphere. In: Reimagining Journalism and Social Order in a Fragmented Media World. Routledge, London, pp. 11-26. ISBN 9780367366056 Heywood-Carr, Jordan (2020) Thermal modelling of the Lina Battery. Masters thesis, UNSPECIFIED. Hibberd, Morgan T and Healy, Alisa L and Lake, Daniel S and Georgiadis, Vasileios and Smith, Elliott JH and Finlay, Oliver J and Pacey, Thomas H and Jones, James K and Saveliev, Yuri and Walsh, David A and Sneddon, Edward W. and Appleby, Robert B. and Burt, Graeme and Graham, Darren M. and Jamison, Steven (2020) Acceleration of relativistic beams using laser-generated terahertz pulses. Nature Photonics, 14. 755–759. ISSN 1749-4885 Hibbin, Rebecca and Warin, Jo (2020) Embedding Restorative Practice in Schools. Lancaster University, Lancaster. Hibbin, Rebecca and Warin, Jo (2020) A language focused approach to supporting children with social, emotional and behavioural difficulties (SEBD). Education 3-13, 48 (3). pp. 316-331. ISSN 0300-4279 Higgins, David and Somervell, Tess and Clark, Nigel (2020) Introduction:Environmental Humanities Approaches to Climate Change. Humanities, 9. Higgins, Jack (2020) Essays on the economics of health and place. PhD thesis, UNSPECIFIED. 
Higgins, Leighanne (2020) Psycho-emotional Disability in the Marketplace. European Journal of Marketing, 54 (11). pp. 2675-2695. ISSN 0309-0566 Higgins, Leighanne and Hamilton, Kathy (2020) Pilgrimage, material objects and spontaneous communitas. Annals of Tourism Research, 81. ISSN 0160-7383 Higgit, David and France, Derek (2020) JGHE paper types. Journal of Geography in Higher Education, 44 (2). pp. 171-178. ISSN 0309-8265 Higgs, Frankie (2020) SLE scaling limits for a Laplacian growth model. arXiv. (Unpublished) Hill, Joshua and Widdicks, Kelly and Hazas, Mike (2020) Mapping the Scope of Software Interventions for Moderate Internet Use on Mobile Devices. In: ICT4S2020. ACM, 204–212. ISBN 9781450375955 Hinds, H. (2020) Mary Mollineux's Fruits of retirement (1702):Poetry in the second period of quakerism. Quaker Studies, 25 (2). pp. 135-155. ISSN 1363-013X Hird, Derek (2020) Jin Yi's "Someone Else's Story". Chinese Literature and Culture, 20. pp. 61-69. ISSN 2332-4287 Hird, Derek (2020) Knowing male subjects:Globally mobile Chinese professionals and the aesthetics of the Confucian sublime. China Perspectives, 2020-3. pp. 19-27. ISSN 1996-4617 Hirsch, Mauro Mozael and Deckmann, Iohanna and Santos-Terra, Júlio and Staevie, Gabriela Zanotto and Fontes-Dutra, Mellanie and Carello-Collar, Giovanna and Körbes-Rockenbach, Marília and Brum Schwingel, Gustavo and Bauer-Negrini, Guilherme and Rabelo, Bruna and Gonçalves, Maria Carolina Bittencourt and Corrêa-Velloso, Juliana and Naaldijk, Yahaira and Castillo, Ana Regina Geciauskas and Schneider, Tomasz and Bambini-Junior, Victorio and Ulrich, Henning and Gottfried, Carmem (2020) Effects of single-dose antipurinergic therapy on behavioral and molecular alterations in the valproic acid-induced animal model of autism. Neuropharmacology, 167. ISSN 0028-3908 Hisham, S. and Kadirgama, K. and Mohammed, H.A. and Kumar, A. and Ramasamy, D. and Samykano, M. and Rahman, S. (2020) Hybrid nanocellulose-copper (II) oxide as engine oil additives for tribological behavior improvement. Molecules, 25 (13). ISSN 1420-3049 Hobbs, Laura and Bentley, Sophie and Hartley, Jackie and Stevens, Carly and Bolton, Thomas (2020) Exploring coral reef conservation in Minecraft. Primary Science, 162. pp. 21-23. ISSN 0269-2465 Hobbs, Laura and Bentley, Sophie and Hartley, Jackie and Stevens, Carly and Bolton, Thomas (2020) Exploring coral reef conservation in Minecraft. ASE International, 9 (1). pp. 24-28. Hobbs, Laura and Hartley, Calum and Bentley, Sophie and Bibby, Jordan and Bowden, Lauren and Hartley, Jackie and Stevens, Carly (2020) Shared special interest play in a specific extra-curricular group setting:a Minecraft Club for children with Special Educational Needs. Educational and Child Psychology, 37 (4). pp. 81-95. ISSN 0267-1611 Hocking, Toby Dylan and Rigaill, Guillem and Fearnhead, Paul and Bourque, Guillaume (2020) Constrained Dynamic Programming and Supervised Penalty Learning Algorithms for Peak Detection in Genomic Data. Journal of Machine Learning Research, 21. pp. 1-40. ISSN 1532-4435 Hodge, Gary (2020) Becoming intersubjective 'in medias res' of behaviours that challenge in dementia:A layered autoethnography. PhD thesis, UNSPECIFIED. Hodges, Steve and Sentance, Sue and Finney, Joe and Ball, Thomas (2020) Physical computing:A key element of modern computer science education. IEEE Computer, 53 (4). pp. 20-30. ISSN 0018-9162 Hofman, P S and Blome, Constantin and Schleper, Martin C. 
and Subramanian, Nachiappan (2020) Supply chain collaboration and eco-innovations: An institutional perspective from China. Business Strategy and the Environment, 29 (6). pp. 2734-2754. ISSN 0964-4733 Hofstadter, M.D. and Fletcher, L.N. and Simon, A.A. and Masters, A. and Turrini, D. and Arridge, C.S. (2020) Future Missions to the Giant Planets that Can Advance Atmospheric Science Objectives:Space Science Reviews. Space Weather, 216 (5). ISSN 0038-6308 Hoggard, Shaun Russell (2020) An investigation into the attitudes and intentions of university students in Japan regarding second-language learning on social networking sites. PhD thesis, UNSPECIFIED. Hojatisaeidi, F. and Mureddu, M. and Dessi, F. and Pettinau, A. and Durand, G. and Saha, B (2020) Porous boron nitride:an effective adsorbent for carbon dioxide capture. Environmental Chemistry Group Bulletin, 2020 (July). pp. 16-17. Hojatisaeidi, F. and Mureddu, M. and Dessì, F. and Durand, G. and Saha, B (2020) Metal-Free Modified Boron Nitride for Enhanced CO2 Capture. Energies, 13 (3). ISSN 1996-1073 Holik, F. and Broadbent, M. and Findrik, M. and Smith, P. and Race, N. (2020) Safe and Secure Software-Defined Networks for Smart Electricity Substations. In: Intelligent Information and Database Systems - 12th Asian Conference, ACIIDS 2020, Proceedings. Communications in Computer and Information Science . Springer, Singapore, pp. 179-191. ISBN 9789811533792 Holland, Paula Jane and Clayton, Stephen (2020) Navigating employment retention with a chronic health condition:a meta-ethnography of the employment experiences of people with musculoskeletal disorders in the UK. Disability and Rehabilitation, 42 (8). pp. 1071-1086. ISSN 0963-8288 Hollaway, Michael J. and Dean, Graham and Blair, Gordon and Brown, Mike and Henrys, P.A and Watkins, John (2020) Tackling the Challenges of 21st-Century Open Science and Beyond:A Data Science Lab Approach. Patterns, 1. ISSN 2666-3899 Holmes, J. and Chambers, J. and Meldrum, P. and Wilkinson, P. and Boyd, James and Williamson, P. and Huntley, D. and Sattler, K. and Elwood, D. and Sivakumar, V. and Reeves, H. and Donohue, S. (2020) Four-dimensional electrical resistivity tomography for continuous, near-real-time monitoring of a landslide affecting transport infrastructure in British Columbia, Canada. Near Surface Geophysics, 18 (4). pp. 337-351. ISSN 1569-4445 Holmes, Torik and Fernandes, Josi and Palo, Teea (2020) (Re)organising markets through 'scale'. In: 36th EGOS Colloquium, 2020-07-022020-07-04, University of Hamburg. (Unpublished) Holton, Mark and Finn, Kirsty (2020) Belonging, pausing, feeling:a framework of "mobile dwelling" for U.K. university students that live at home. Applied Mobilities, 5 (1). pp. 6-20. ISSN 2380-0127 Honary, Mahsa and Bell, Beth and Clinch, Sarah and Vega, Julio and Kroll, Leo and Sefi, Aaron and McNaney, Roisin (2020) Shaping the Design of Smartphone-Based Interventions for Self-Harm. In: CHI 2020 - Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Conference on Human Factors in Computing Systems - Proceedings . Association for Computing Machinery (ACM), USA. ISBN 9781450367080 Honary, Mahsa and Lee, Jaejoon and Bull, Christopher and Wang, Jiangtao and Helal, Sumi (2020) What Happens in Peer-Support, Stays in Peer-Support:Software Architecture for Peer-Sourcing in Mental Health. In: 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC). IEEE, ESP, pp. 644-653. 
ISBN 9781728173030 Honary, Mahsa and Martinez, Veronica and Wlazlowski, Theodore and Helal, Sumi and Von Oertzen, Hans-Henning and Honary, Souroush (2020) Getting the country back to work, safely:A digital solution. ACM Digital Government: Research and Practice, 1 (4). pp. 1-5. Hong, Seok Young and Linton, Oliver (2020) Nonparametric estimation of infinite order regression and its application to the risk-return tradeoff. Journal of Econometrics, 219 (2). pp. 389-424. ISSN 0304-4076 Hong, W.X. and Sidik, N.A.C.
Stability analysis of prey-predator system with Holling type functional response and prey refuge Zhihui Ma1, Shufan Wang2, Tingting Wang1 & Haopeng Tang1 In this paper, a predator-prey system with Holling type function response incorporating prey refuge is presented. By applying the analytical approaches, the dynamics behavior of the considered system is investigated, including stability, limit cycle and bifurcation. The results show that the shape of the functional response plays an important role in determining the dynamics of the system. Especially, the interesting conclusion is that the prey refuge has a destabilizing effect under some certain conditions. Researches on predation systems are always a popular issue in contemporary theoretical ecology and applied mathematics [1–12]. Results based on non-spatial systems have shown that the effect of prey refuge played an important role in determining the dynamical consequences of predator-prey systems [1, 3, 4, 7, 8, 10, 13–19]. Incorporating the effect of prey refuge into the considered predation system is initially done by modifying the originally functional response of predator to prey population, the functional response describes the per capita consumption rates of predators depending on prey density, and quantifies the energy transfer between trophic levels, like Holling I, II, III and IV functional response [20, 21]. The most widely reported conclusions are the community/interior/positive/coexistent equilibrium of the considered predation system being stabilized and the equilibrium density of prey and/or predator was enhanced by the addition of prey refuge [3, 8, 12, 19, 22, 23]. This paper is based on the following predator-prey system with prey self-limitation and the population growth of prey is logistic in the absence of predators: $$ \textstyle\begin{cases} \dot{x}(t)=rx(1-\frac{x}{K})-f(x)y, \\ \dot{y}(t)=(cf(x)-d)y. \end{cases} $$ Here, \(x(t)\) and \(y(t)\) are the density of prey and predator populations at time t, respectively, and they are all positive numbers. The other parameters have the following biological meanings: r is the intrinsic per capita growth rate of prey population; K is the prey environmental carrying capacity; c is the efficiency with which predators convert consumed prey into new predators; d is the per capita death rate of predators. The function \(f(x)\) denotes a generalized functional response and represents the amount of prey killed per unit time by an individual predator. This paper applies a generalized representation of the functional response $$f(x)=\frac{\lambda x^{n}}{1+\lambda hx^{n}}, $$ which is introduced by Real [21] and defined as Holling type functional response, where h is the handling time of predators, λ is the attack efficiency of predator to prey population, the exponent n describes the shape of the functional response, including the Holling II functional response for \(n=1\) and Holling III functional response for \(n=2\). Incorporating a Holling type functional response into the system (1.1), the following predator-prey system could be obtained: $$ \textstyle\begin{cases} \dot{x}(t)=rx(1-\frac{x}{K})-\frac{\lambda x^{n}y}{1+\lambda hx^{n}}, \\ \dot{y}(t)=(\frac{c\lambda x^{n}}{1+\lambda hx^{n}}-d)y. \end{cases} $$ In this paper, we extend the above model by incorporating the effect of prey refuge. 
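To make the baseline model concrete before the refuge term is introduced, the following short sketch (added here, not part of the paper; the parameter values and initial densities are purely illustrative) evaluates the Holling type functional response and integrates system (1.2) numerically with SciPy's solve_ivp.

```python
from scipy.integrate import solve_ivp

def holling(x, lam, h, n):
    """Holling type functional response f(x) = lam*x**n / (1 + lam*h*x**n).
    n = 1 gives the Holling II response, n = 2 the Holling III response."""
    return lam * x**n / (1.0 + lam * h * x**n)

def predator_prey(t, state, r, K, lam, h, n, c, d):
    """Right-hand side of system (1.2): logistic prey growth minus predation,
    and predator growth from consumed prey minus natural mortality."""
    x, y = state
    f = holling(x, lam, h, n)
    return [r * x * (1.0 - x / K) - f * y, (c * f - d) * y]

# Illustrative parameter values (not taken from the paper) and initial densities.
r, K, lam, h, n, c, d = 1.0, 10.0, 1.0, 0.5, 2, 0.5, 0.2
sol = solve_ivp(predator_prey, (0.0, 200.0), [2.0, 1.0],
                args=(r, K, lam, h, n, c, d), rtol=1e-8)
print(sol.y[:, -1])  # prey and predator densities at the final time
```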
Kot [24] gave a definition for the total number of prey caught V: $$V= \lambda(T-hV)x, $$ where λ is the attack coefficient, h is the handling time required per prey, T is the total time and all parameters are positive. This definition proposes that the total number of prey caught by its predators is linearly proportional to prey's density. However, most cases may be included in this representation. Hence, without loss of generality, this paper assumes that the total number of prey caught, \(V(x)\), is $$ \textstyle\begin{cases} V(x)= \lambda T_{s}(x)x^{n}, \\ T_{s}(x)=T-hV(x), \end{cases} $$ where \(T_{s}(x)\) is the available search time and the other parameters are similar to Kot's definition [24]. According to Maynard Simth [11] and Ma et al. [23], there exists a quantity βx of prey population which occupy refuges. By modifying the total number of prey caught V, it is given that $$ \textstyle\begin{cases} V(x)= \lambda T_{s}(x)((1-\beta))x^{n}, \\ T_{s}(x)=T-hV(x). \end{cases} $$ Solving \(V(x)\) from the above equations, we have $$V (x)=\frac{\lambda T(1-\beta)^{n}x^{n}y}{1+\lambda h(1-\beta)^{n}x^{n}}. $$ Thus, the modified functional response which incorporates the effect of prey refuge is given by $$\frac{V (x)}{T}=\frac{\lambda(1-\beta)^{n}x^{n}y}{1+\lambda h(1-\beta)^{n}x^{n}}. $$ Based on the above analysis, the system (1.1) with the effect of prey refuge gets the following form: $$ \textstyle\begin{cases} \dot{x}(t)=rx(1-\frac{x}{K})-\frac{\lambda(1-\beta)^{n}x^{n}y}{1+\lambda h(1-\beta)^{n}x^{n}}, \\ \dot{y}(t)=(\frac{c\lambda(1-\beta)^{n}x^{n}}{1+\lambda h(1-\beta)^{n}x^{n}}-d)y. \end{cases} $$ Before proceeding, we use the following change of variables: $$\varphi:\bigl(R_{0}^{+}\bigr)^{2}\times R \rightarrow \bigl(R_{0}^{+}\bigr)^{2}\times R,\qquad \varphi(x,y,t)=\biggl(K \bar{x},Khr\bar{y},\frac{A+x^{n}}{r}\bar{t}\biggr) $$ and rewriting x̄, ȳ, t̄ as x, y, t, we obtain the equivalent form of the system (1.3) $$ \textstyle\begin{cases} \dot{x}(t)=x(1-x)(A+x^{n})-x^{n}y, \\ \dot{y}(t)=B( x^{n}-C(A+x^{n}))y. \end{cases} $$ We have only three parameters, where \(A=\frac{1}{K^{n} \lambda h(1-\beta)^{n}}>0\), \(B=\frac{c}{hr}>0\), \(C=\frac{dh}{c}>0\). The equilibrium points of the system (1.5) are \(E_{0}(0,0)\), \(E_{K}(K,0)\), \(\tilde{E}(\tilde{x},\tilde{y})\), where $$\begin{aligned}& \tilde{x}=\frac{1}{1-\beta}\biggl[\frac{d}{\lambda(c-dh)}\biggr]^{1/n}, \\& \tilde{y}=\frac{cr}{d(1-\beta)}\biggl[\frac{d}{\lambda(c-dh)}\biggr]^{1/n} \biggl[1-\frac {1}{K(1-\beta)}\biggl(\frac{d}{\lambda(c-dh)}\biggr)^{1/n}\biggr]. \end{aligned}$$ The equilibrium point \(\tilde{E}(\tilde{x},\tilde{y})\) is positive if and only if $$\beta< 1-\frac{1}{K}\biggl(\frac{d}{\lambda(c-dh)}\biggr)^{1/n}. $$ If \(\beta>1-\frac{1}{K}(\frac{d}{\lambda(c-dh)})^{1/n}\), the equilibrium point \(\tilde{E}(\tilde{x},\tilde{y})\) collapses with the point \(E_{K}(K,0)\). Now, differentiating ỹ with respect to β, we observe that ỹ attains its maximum value at $$\beta=1-\frac{2}{K}\biggl(\frac{d}{\lambda(c-dh)}\biggr)^{1/n}, $$ and then it decreases with further increases in β. Also we see that x̃ increases with β. Again, in order to investigate the local stability and the existence of limit cycle, we should give the equilibrium points of the system (1.6). The equilibrium points of the system (1.6) can be obtained by solving the following equations: $$ \textstyle\begin{cases} x(1-x)(A+x^{n})-x^{n}y=0, \\ B( x^{n}-C(A+x^{n}))y=0. 
\end{cases} $$ Clearly, it has three equilibrium points \(E_{0}(0,0)\), \(E_{1}(1,0)\), \(\bar{E}(\bar{x},\bar{y})\), where $$\begin{aligned}& \bar{x}=\biggl(\frac{AC}{1-C}\biggr)^{1/n}, \\& \bar{y}= \frac{1}{C}\biggl(\frac{AC}{1-C}\biggr)^{1/n}\biggl[1-\biggl( \frac{AC}{1-C}\biggr)^{1/n}\biggr]. \end{aligned}$$ The equilibrium point \(\bar{E}(\bar{x},\bar{y})\) is positive if and only if \(A<\frac{1-C}{C}\). If \(A=\frac{1-C}{C}\), the positive equilibrium point collapses with the equilibrium point \(E_{1}(1,0)\). The equilibrium point \(\bar{E}(\bar{x},\bar{y})\) lies in the fourth quadrant when \(A>\frac{1-C}{C}\). Positivity and boundedness of the solutions In order to study the positivity and boundedness for the solutions of system (1.5), we denote the function on the right hand of system (1.5) as \(\mathbf{G}=(xg_{1},yg_{2})\) in which $$\begin{aligned}& g_{1}(x,y)=r\biggl(1-\frac{x}{K}\biggr)-\frac{\lambda(1-\beta)^{n}x^{n-1} y}{1+\lambda h(1-\beta)^{n}x^{n}} , \\& g_{2}(x,y)=\frac{c \lambda(1-\beta)^{n}x^{n} }{1+\lambda h(1-\beta)^{n}x^{n}}-d . \end{aligned}$$ Clearly, \(G\in C^{1}(R^{2}_{+})\). Thus \(\mathbf{G}:R^{2}_{+} \rightarrow R^{2}\) is locally Lipschitz on \(R^{2}_{+}=\{(x,y)|x>0,y>0\}\). Hence the fundamental theorem of existence and uniqueness ensures existence and uniqueness of solution of the system (1.5) with the given initial conditions. The state space of the system is the non-negative cone in \(R^{2}_{+}\). In the theoretical ecology, positivity and boundedness of the system establishes the biological well behaved nature of the system. Theorem 3.1 All the solutions of the system (1.5) with the given initial conditions are always positive and bounded. Firstly, we wish to prove that \((x(t),y(t))\in R^{2}_{+}\) for all \(t \in[0,+\infty]\). We show this by the method of contradiction. Suppose this is not true. Hence, there must exist one \(\bar{t} \in[0,+\infty]\), such that \(x(\bar{t})\leq0\) and \(y(\bar{t})\leq 0\). From the system (1.5), we have $$\begin{aligned}& x(t)=x(0)\exp\biggl( \int_{0}^{t} g_{1}(x,y)\,dt\biggr), \\& y(t)=x(0)\exp\biggl( \int_{0}^{t} g_{2}(x,y)\,dt\biggr). \end{aligned}$$ Since \((x(t),y(t))\) are well defined and continuous on \([0,\bar{t}]\), there must exist a \(M>0\) such that \(\forall t \in[0,\bar{t}]\) $$\begin{aligned} \begin{aligned} &x(t)=x(0)\exp\biggl( \int_{0}^{t} g_{1}(x,y)\,dt\biggr)\geq x(0)\exp(-M\bar{t}), \\ &y(t)=x(0)\exp\biggl( \int_{0}^{t} g_{2}(x,y)\,dt\biggr) \geq y(0)\exp(-M\bar{t}). \end{aligned} \end{aligned}$$ It is clear that if we have the limit \(t \rightarrow\bar{t}\), we obtain $$\begin{aligned}& x(\bar{t}) \geq x(0)\exp(-M\bar{t})>0, \\& y(\bar{t}) \geq y(0)\exp(-M\bar{t})>0, \end{aligned}$$ which is a contradiction. Hence, all the solutions of the system (1.5) are always positive. Secondly, we will prove the boundedness. Letting \(V(t)=x(t)+\frac{1}{c}y(t)\), then we obtain $$\dot{V}(t)=rx\biggl(1-\frac{x}{K}\biggr)-\frac{\lambda(1-\beta)^{n}x^{n} y}{1+\lambda h(1-\beta)^{n}x^{n}}+ \frac{\lambda(1-\beta)^{n}x^{n} y}{1+\lambda h(1-\beta)^{n}x^{n}}-\frac{d}{c}y \leq-dV(t)+(d+r)K. $$ Integrating both sides of the above equation and applying the theorem of the differential inequality, we have $$0< V(t)< \frac{(d+r)K}{d}\bigl(1-e^{-dt}\bigr)+V(0)e^{-dt}, \qquad V(0)=V\bigl(x(0),y(0)\bigr) $$ and \(\lim_{t \rightarrow+\infty} V(t)\leq\frac{(d+r)K}{d}\). 
□ Stability analysis Local stability The Jacobian matrix of the system (1.6) at the equilibrium point \(E_{1}(1,0)\) is given by $$ J_{1}= \left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} -(A+1)&-1\\ 0&B(1-C-AC) \end{array}\displaystyle \right ). $$ The two eigenvalues of matrix \(J_{1}\) are \(-(A+1)\) and \(B(1-C-AC)\). Hence, if \(A<\frac{1-C}{C}\), the equilibrium point \(E_{1}(1,0)\) is a saddle point. Otherwise, the equilibrium point \(E_{1}(1,0)\) is locally asymptotically stable. For the positive equilibrium point \(\bar{E}(\bar{x},\bar{y})\), the Jacobian matrix is as follows: $$ J_{2}= \left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} A_{11}&A_{12}\\ A_{21}&0 \end{array}\displaystyle \right ). $$ $$\begin{aligned}& A_{11}=\frac{(\bar{x})^{n}}{C}\bigl[\bigl(1-n(1-C)\bigr)-\bigl(2-n(1-C) \bigr)\bar{x}\bigr] , \\& A_{12}=-(\bar{x})^{n}< 0,\qquad A_{21}= \operatorname{Bn}(1-C) (\bar{x})^{n-1}\bar{y}>0 . \end{aligned}$$ Clearly \(\operatorname{Det}J_{2}=-A_{12}A_{21}>0\). Therefore, the sign of the eigenvalues of Jacobian matrix \(J_{2}\) depends only on \(\operatorname{Tr}J_{2}=A_{11}\). Hence, the interior equilibrium point \(\bar{E}(\bar{x},\bar{y})\) is locally asymptotically stable if and only if $$ \bigl(1-n(1-C)\bigr)-\bigl(2-n(1-C)\bigr)\bar{x}< 0. $$ Case 1: If \(0< n<\frac{1}{1-C}\), then \(2-n(1-C)>0\) and \(1-n(1-C)>0\). The inequality (4.1) can easily be solved as follows: $$\bar{x}>\frac{1-n(1-C)}{2-n(1-C)}. $$ $$A>\frac{1-C}{C}\biggl[\frac{1-n(1-C)}{2-n(1-C)}\biggr]^{n}. $$ Hence, the interior equilibrium point \(\bar{E}(\bar{x},\bar{y})\) is locally asymptotically stable. The results of González-Olivares et al. [6] and Huang et al. [8] are special cases of ours for \(n=1\) and \(n=2\), respectively. Case 2: If \(\frac{1}{1-C}\leq n \leq\frac{2}{1-C}\), then the inequality (4.1) holds. Therefore, the interior equilibrium point \(\bar{E}(\bar{x},\bar{y})\) is always asymptotically stable whenever the proportion of prey refuge is. Case 3: If \(n>\frac{2}{1-C}\), then \(2-n(1-C)<0\) and \(1-n(1-C)<0\). Hence, the inequality (4.1) is equivalent to the following condition: $$\bar{x}< \frac{1-n(1-C)}{2-n(1-C)}. $$ It is easy to show that $$A< \frac{1-C}{C}\biggl[\frac{1-n(1-C)}{2-n(1-C)}\biggr]^{n}. $$ Therefore, the interior equilibrium point \(\bar{E}(\bar{x},\bar{y})\) is locally asymptotically stable under these assumptions. Existence of limit cycle Let us rewrite the system (1.6) in the following form: $$ \textstyle\begin{cases} \dot{x}(t)=xg(x)-y p(x), \\ \dot{y}(t)=(q(x)-AC)y. \end{cases} $$ Here \(g(x)=(1-x)(A+x^{n})\), \(p(x)=x^{n}\), \(q(x)=(1-C)p(x)\). We present a lemma [17] regarding uniqueness of limit cycle of the above system. Suppose the system (4.2) obeys $$\frac{d}{dx}\biggl(\frac{xg'(x)+g(x)-xg(x)(p'(x)/p(x))}{q(x)-AC}\biggr)\leq0 $$ in \(0\leq x< \bar{x}\) and \(\bar{x}< x\leq1\). Then the system (4.2) has exactly one limit cycle which is globally asymptotically stable with respect to the set \(\{(x,y)|x>0,y>0\}\backslash \{E_{2}(\bar{x},\bar{y})\}\). By employing the above lemma, we have $$\begin{aligned}& \frac{d}{dx}\biggl(\frac{xg'(x)+g(x)-xg(x)(p'(x)/p(x))}{q(x)-AC}\biggr)\leq0 \\& \quad \Leftrightarrow\quad \frac{\varphi(x)}{((1-C)x^{n}-AC)^{2}}\leq0. \end{aligned}$$ $$\begin{aligned} \varphi (x) =&-2(1-C)x^{2n}+A\bigl[2C(1+n)+(1-n) (n-2) (1-C) \bigr]x^{n} \\ &{}-An\bigl[1-n(1-C)\bigr]x^{n-1}-A^{2}C(n-2). \end{aligned}$$ This will be equivalent to prove that \(\varphi(x)\leq0\) for all \(x>0\). 
Noticing that \(\varphi(0)=-A^{2}C(n-2)\leq0\) for \(n\geq2\) and $$\begin{aligned} \varphi '(x) =&nx^{n-2}\bigl[-4(1-C)x^{n+1}+A \bigl(2C(1+n)+(1-n) (n-2) (1-C)\bigr)x \\ &{}-A(n-1) \bigl(1-n(1-C)\bigr)\bigr]. \end{aligned}$$ It is easy to show that \(x=0\) and \(x=\bar{x}\) are the solutions of the equation \(\varphi'(x)=0\). Again \(\varphi''(\bar{x})=-nA\bar{x}^{n-2}[2C(1+n)+(n-1)(n-2)(1-C)]<0\). Thus, \(x=\bar{x}\) is the maximum value point of the function \(\varphi(x)\). Now, in order to prove \(\varphi(x)\leq0\), it is enough to show that \(\varphi(\bar{x})\leq0\). $$\varphi(\bar{x})=-\frac{nA^{2}C}{\bar{x}}\bigl[\bigl(1-n(1-C)\bigr)-\bigl(2-n(1-C) \bigr)\bar{x}\bigr]< 0. $$ $$\bigl(1-n(1-C)\bigr)-\bigl(2-n(1-C)\bigr)\bar{x}>0. $$ Clearly, it is exactly the condition of the instability of the interior equilibrium point \(\bar{E}(\bar{x},\bar{y})\). Global stability In this section, we will prove the global stability of the positive equilibrium point \(\tilde{E}(\tilde{x},\tilde{y})\) of the system (1.5). We first choose a Lyapunov function defined as follows: $$ W\bigl(x(t),y(t)\bigr)= \int_{\tilde{x}}^{x} \frac{u-\tilde{x}}{u}\, du+p \int_{\tilde{y}}^{y} \frac{w-\tilde{y}}{w}\, dw \quad (p>0). $$ By a simple computation, we obtain $$\begin{aligned} \frac{dW}{dt} =&\frac{x-\tilde{x}}{x}\frac{dx}{dt}+p\frac{y-\tilde{y}}{y} \frac{dy}{dt} \\ =&(x-\tilde{x})\biggl[r\biggl(1-\frac{x}{K}\biggr)-\frac{\lambda(1-\beta)^{n}x^{n} y}{1+\lambda h(1-\beta)^{n}x^{n}} \biggr]+p(y-\tilde{y}) \biggl(\frac{c\lambda(1-\beta)^{n}x^{n} }{1+\lambda h(1-\beta)^{n}x^{n}}-d\biggr) \\ =&(x-\tilde{x})\biggl[r\biggl(1-\frac{\tilde{x}}{K}\biggr)+\frac{\lambda(1-\beta)^{n}\tilde{x}^{n-1} \tilde{y}}{1+\lambda h(1-\beta)^{n}\tilde{x}^{n}}-r \biggl(1-\frac{x}{K}\biggr)-\frac{\lambda(1-\beta)^{n}x^{n-1} y}{1+\lambda h(1-\beta)^{n}x^{n}}\biggr] \\ &{}+p(y-\tilde{y})\biggl[\frac{c\lambda(1-\beta)^{n}x^{n} }{1+\lambda h(1-\beta)^{n}x^{n}}-\frac{c\lambda(1-\beta)^{n}\tilde{x}^{n} }{1+\lambda h(1-\beta)^{n}\tilde{x}^{n}}\biggr] \\ =&-\frac{r}{K}(x-\tilde{x})^{2}-y(x-\tilde{x}) \biggl( \frac{\lambda (1-\beta)^{n}\tilde{x}^{n-1} }{1+c h(1-\beta)^{n}\tilde{x}^{n}}-\frac{\lambda(1-\beta)^{n}x^{n-1} }{1+\lambda h(1-\beta)^{n}x^{n}}\biggr) \\ &{}+(x-\tilde{x}) (y-\tilde{y})\frac{\lambda(1-\beta)^{n}\tilde{x}^{n-1} }{1+\lambda h(1-\beta)^{n}\tilde{x}^{n}}+pc\lambda(1-\beta) (y- \tilde{y}) \bigl(x^{n}-\tilde{x}^{n}\bigr) \\ =&-\frac{r}{K}(x-\tilde{x})^{2}- \lambda(1-\beta) (n-1) \tilde{x}^{n-2}y(x-\tilde{x})^{2} \\ &{}+\biggl[n c\lambda \tilde{x}^{n-1}(1-\beta)p-\frac{\lambda(1-\beta)\tilde{x}^{n-1} }{1+\lambda h(1-\beta)^{n}\tilde{x}^{n}}\biggr]. \end{aligned}$$ Selecting \(p=\frac{\lambda(1-\beta)^{n}\tilde{x}^{n-1}}{n c \lambda (1-\beta)^{n}\tilde{x}^{n-1}(1+\lambda h(1-\beta)^{n}\tilde{x}^{n})}>0\), then we have $$\frac{dW}{dt}=-\frac{r}{K}(x-\tilde{x})^{2}- \lambda(1- \beta)^{n}(n-1)\tilde{x}^{n-2}y(x-\tilde{x})^{2}. $$ Thus, \(\frac{dW}{dt}<0\) if \(n \geq 1\). Hence, the positive equilibrium point \(\tilde{E}(\tilde{x},\tilde{y})\) of the system (1.5) is globally asymptotically stable. According to the above analysis, we can obtain the following results. Assuming that \(2\leq n<\frac{1}{1-C}\), then we have: If \(0< A<\frac{1-C}{C}[\frac{1-n(1-C)}{2-n(1-C)}]^{n}\), the system (1.4) has a unique globally stable limit cycle surrounding the interior equilibrium point \(\bar{E}(\bar{x},\bar{y})\) which is unstable. 
If \(\frac{1-C}{C}[\frac{1-n(1-C)}{2-n(1-C)}]^{n}< A<\frac{1-C}{C}\), the system (1.4) has a globally asymptotically stable equilibrium point \(\bar{E}(\bar{x},\bar{y})\) at the first quadrant. Assuming that \(n>\frac{2}{1-C}\), then we have: If \(0< A<\frac{1-C}{C}[\frac{1-n(1-C)}{2-n(1-C)}]^{n}\), the system (1.4) has a globally asymptotically stable equilibrium point \(\bar{E}(\bar{x},\bar{y})\) in the first quadrant. If \(\frac{1-C}{C}[\frac{1-n(1-C)}{2-n(1-C)}]^{n}< A<\frac{1-C}{C}\), the system (1.4) has a unique globally stable limit cycle surrounding the interior equilibrium point \(\bar{E}(\bar{x},\bar{y})\) which is unstable. In reference to the original parameters of the system (1.5), the above results can be expressed as follows. Assuming that \(2\leq n<\frac{c}{c-dh}\), then we have: If \(0<\beta<1-\frac{1}{K}[\frac{d}{\lambda(c-dh)}]^{1/n}[\frac {2c-n(c-dh)}{c-n(c-dh)}]\), the prey and predator populations stably oscillate around the unique interior equilibrium point. If \(1-\frac{1}{K}[\frac{d}{\lambda(c-dh)}]^{1/n}[\frac {2c-n(c-dh)}{c-n(c-dh)}]<\beta<1-\frac{1}{K}[\frac{d}{\lambda(c-dh)}]^{1/n}\), the two populations tend to reach a globally asymptotically stable equilibrium point at the first quadrant. Assuming that \(n>\frac{2c}{c-dh}\), then we have: If \(0<\beta<1-\frac{1}{K}[\frac{d}{\lambda(c-dh)}]^{1/n}[\frac {2c-n(c-dh)}{c-n(c-dh)}]\), the two populations tend to reach a globally asymptotically stable equilibrium point in the first quadrant. If \(1-\frac{1}{K}[\frac{d}{\lambda(c-dh)}]^{1/n}[\frac {2c-n(c-dh)}{c-n(c-dh)}]<\beta<1-\frac{1}{K}[\frac{d}{\lambda(c-dh)}]^{1/n}\), the prey and predator populations stably oscillate around the unique interior equilibrium point. To obtain a complete classification of the qualitative behavior of the system (1.5), we analyze the bifurcation pattern and illustrate the results with one parameter, saying the effect of prey refuge β. The classification requires up to two codimension-one bifurcations: (i) Hopf-bifurcation point in which the coexistence equilibrium point \(\tilde{E}(\tilde{x},\tilde{y})\) exchanges stability, (ii) the bifurcation point tracking a transcritical bifurcation between the coexistence equilibrium point \(\tilde{E}(\tilde{x},\tilde{y})\) and the prey only equilibrium \(E_{K}(K,0)\), where these two equilibria coincide and exchange their stability to each other. Hopf bifurcation One-dimensional bifurcation analysis reveals the behavior of the system (1.5) when a particular system parameter is varied over a long range. Here we observe the behavior of the system (1.5) when the prey refuge intensity is varied. By simple computation, the characteristic equation of the system (1.5) at the coexistence equilibrium point \(\tilde{E}(\tilde{x},\tilde{y})\) is $$ \xi^{2}-\tilde{a}_{11} \xi+ \tilde{a}_{12}\tilde{a}_{21}=0, $$ in which $$\begin{aligned}& \tilde{a}_{11}=r\biggl(1-\frac{2\tilde{x}}{K}\biggr)-\frac{n \lambda (1-\beta)^{n}\tilde{x}^{n-1}\tilde{y}}{(1+\lambda h(1-\beta)^{n}\tilde{x}^{n})^{2}} , \\& \tilde{a}_{12}=\frac{ \lambda(1-\beta)^{n}\tilde{x}^{n} }{ 1+\lambda h(1-\beta)^{n}\tilde{x}^{n} }>0 , \\& \tilde{a}_{21}=\frac{n c \lambda (1-\beta)^{n}\tilde{x}^{n-1}y}{(1+\lambda h(1-\beta)^{n}\tilde{x}^{n})^{2}}>0 . \end{aligned}$$ It can easily be observed from the characteristic equation (6.1) that the roots become purely imaginary when \(\tilde{a}_{11}=0\), i.e. \(\beta=\beta_{c}= 1-\frac{1}{K}[\frac{d}{\lambda(c-dh)}]^{1/n}[\frac{2c-n(c-dh)}{c-n(c-dh)}]\). 
In this case, \(\operatorname{Re}(\lambda)|_{\beta=\beta_{c}}=0\), \(\operatorname{Im}(\lambda)|_{\beta=\beta_{c}} \neq0\) and \(\frac{d}{d \beta}\operatorname{Re}(\lambda)|_{\beta=\beta_{c}}<0\) (we use the standard package of Mathematica to get these results) and hence the transversality condition for a Hopf bifurcation is satisfied. Therefore, there exists a Hopf bifurcation at \(\beta=\beta_{c}\). The negative sign of \(\frac{d}{d \beta}\operatorname{Re}(\lambda)|_{\beta=\beta_{c}}<0\) implies that the oscillations in the population densities dampen as the effect of prey refuge passes from lower value to higher value through \(\beta=\beta_{c}\). Hence, we obtain the following results. The system (1.5) undergoes a Hopf bifurcation at \(\tilde{E}(\tilde{x},\tilde{y})\) when the effect of prey refuge β passes the threshold value \(\beta_{c}= 1-\frac{1}{K}[\frac{d}{\lambda(c-dh)}]^{1/n}[\frac{2c-n(c-dh)}{c-n(c-dh)}]\). Transcritical bifurcation In this section, we will consider the existence of a transcritical bifurcation for the system (1.5). In order to do this, we select the effect of prey refuge β as the bifurcation parameter. According to the analysis in Section 3, if \(1-\frac{1}{K}[\frac{d}{\lambda(c-dh)}]^{1/n}[\frac{2c-n(c-dh)}{c-n(c-dh)}] <\beta<1-\frac{1}{K}(\frac{d}{\lambda(c-dh)})^{1/n}\), the coexistence equilibrium Ẽ is stable but the axial equilibrium \(E_{K}\) is unstable. The two equilibria coincide a \(\beta=\beta_{0}=1-\frac{1}{K}(\frac{d}{\lambda(c-dh)})^{1/n}\) and exchange their stability when \(1-\frac{1}{K}(\frac{d}{\lambda(c-dh)})^{1/n}<\beta<1\). Now, we will prove that the system (1.5) undergoes a transcritical bifurcation by using Sotomayer's theorem [2]. For \(\beta=\beta_{0}\), the system (1.5) has only one axial equilibrium point \(E_{K}\). The Jacobian matrix evaluated at \(E_{K}\) is $$ \mathbf{J}= Df(\tilde{x},\tilde{y};\beta_{0})= \begin{pmatrix} -r&-\frac{d}{c}\\ 0&0 \end{pmatrix}. $$ J has an eigenvalue \(\lambda=0\). Let V and W be the eigenvectors corresponding to the eigenvalue \(\lambda=0 \) for J and \(J^{T} \), respectively, then one can calculate $$ \mathbf{V} = \begin{pmatrix} -\frac{d}{rc}\\1 \end{pmatrix} ,\qquad \mathbf{W} = \begin{pmatrix} 0\\1 \end{pmatrix} . $$ Using the expressions for V and W, we get $$\begin{aligned}& W^{T}f_{\beta}(\tilde{x},\tilde{y};\beta_{0})=0, \\& W^{T}\bigl[Df_{\beta}(\tilde{x},\tilde{y}; \beta_{0})V\bigr]=-\frac{nd(c-dh)K}{c} \biggl(\frac{(c-dh)\lambda}{d} \biggr)^{\frac{1}{n}}\neq0, \\& W^{T}\bigl[D^{2}f(\tilde{x},\tilde{y}; \beta_{0}) (V,V)\bigr]=\frac {nd^{2}}{Krc^{2}}\neq0. \end{aligned}$$ Hence, according to Sotomayor's theorem, the system (1.5) undergoes a transcritical bifurcation when the effect of prey refuge β passes through the threshold value \(\beta_{0}\). The system (1.5) undergoes a transcritical bifurcation when the effect of prey refuge β passes the threshold value \(\beta_{0}=1-\frac{1}{K}(\frac{d}{\lambda(c-dh)})^{1/n}\). In this paper, we have considered a predator-prey system with a general functional response incorporating a prey refuge. Our analysis reveals that The equilibrium point \(E_{K}(K,0)\) is locally asymptotically stable if the proportion of refuge using by prey is larger than \(1-\frac{1}{K}[\frac{d}{\lambda(c-dh)}]^{1/n}\). Therefore, when the refuge using by prey is high, the system predicts that the prey population reaches its carrying capacity and the predators go extinct, a dynamics also observed by Collings [5] for some certain parameters. 
The shape of the functional response plays an important role in determining the dynamic behavior of the system. If \(0< n<\frac{c}{c-dh}\), the effect of prey refuge has a stabilizing effect, which is consistent with results of González-Olivares and Ramos-Jiliberto [6] who has found a clear stabilizing effect on their considered system. Here, stabilization or the increase of stability refers to cases where a community equilibrium point changes from repeller to an attractor due to changes in the value of a control parameter [6]. However, if the exponent n is larger than \(\frac{2c}{c-dh}\), the stability of the interior equilibrium point changes from the globally asymptotically stable state to the unstable state surrounding a globally stable limit cycle as the refuge using by prey increases. The prey refuge can decrease the stability of the interior equilibrium point. We call this a destabilizing effect. Ruxton, GD: Short term refuge use and stability of predator-prey models. Theor. Popul. Biol. 47, 1-17 (1995) Collings, JB: Bifurcation and stability analysis of a temperature-dependent mite predator-prey interaction model incorporating a prey refuge. Bull. Math. Biol. 57(1), 63-76 (1995) González-Olivares, E, Ramos-Jiliberto, R: Dynamic consequences of prey refuges in a simple model system: more prey, fewer predators and enhanced stability. Ecol. Model. 166, 135-146 (2003) Jana, D, Agrawal, R, Upadhyay, RK: Dynamics of generalist predator in a stochastic environment: effect of delayed growth and prey refuge. Appl. Math. Comput. 268, 1072-1094 (2015) Wang, J, Pan, L: Qualitative analysis of a harvested predator-prey system with Holling-type III functional response incorporating a prey refuge. Adv. Differ. Equ. 2012, 96 (2012) Harrison, GW: Global stability of predator-prey interactions. J. Math. Biol. 8, 159-171 (1979) Huang, Y, Chen, F, Li, Z: Stability analysis of prey-predator model with Holling type response function incorporating a prey refuge. Appl. Math. Comput. 182, 672-683 (2006) Kar, TK: Stability analysis of a prey-predator model incorporating a prey refuge. Commun. Nonlinear Sci. Numer. Simul. 10, 681-691 (2005) Jana, D, Bairagi, N: Habitat complexity, dispersal and metapopulations: macroscopic study of a predator-prey system. Ecol. Complex. 17, 131-139 (2014) Jana, D, Ray, S: Impact of physical and behavioral prey refuge on the stability and bifurcation of Gause type Filippov prey-predator system. Model. Earth Syst. Environ. 2, 24 (2016) Jana, D: Chaotic dynamics of a discrete predator-prey system with prey refuge. Appl. Math. Comput. 224, 848-865 (2013) Sih, A: Prey refuges and predator-prey stability. Theor. Popul. Biol. 31, 1-12 (1987) Ives, AR, Dobson, AP: Antipredator behavior and the population dynamics of simple predator-prey systems. Am. Nat. 130, 431-447 (1987) Maynard Smith, J: Models in Ecology. Cambridge University Press, Cambridge (1974) Mukherjee, D: The effect of refuge and immigration in a predator-prey system in the presence of a competitor for the prey. Nonlinear Anal., Real World Appl. 31, 277-287 (2016) Strogatz, SH: Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Perseus Publishing, Cambridge (1994) Ghosh, J, Sahoo, B, Poria, S: Prey-predator dynamics with prey refuge providing additional food to predator. Chaos Solitons Fractals 96, 110-119 (2017) McNair, JM: The effects of refuges on predator-prey interactions: a reconsideration. Theor. Popul. Biol. 29, 38-63 (1986) Taylor, RJ: Predation. 
Chapman & Hall, New York (1984) Real, LA: The kinetics of functional response. Am. Nat. 111, 289-300 (1977) Murdoch, WW, Oaten, A: Predation and population stability. Adv. Ecol. Res. 9, 2-132 (1975) Ma, Z, Li, W, Zhao, Y, Wang, W, Zhang, H, Li, Z: Effects of prey refuges on a predator-prey model with a class of functional responses: the role of refuges. Math. Biosci. 218, 73-79 (2009) Kot, M: Elements of Mathematical Ecology. Cambridge University Press, Cambridge (2011) We would like to thank the editor and the anonymous referees very much for their valuable comments and suggestions. This work was supported by the National Natural Science Foundation of China (No. 11301238) and the Fundamental Research Funds for the Central Universities (No. lzujbky-2017-166). School of Mathematics and Statistics, Lanzhou University, Lanzhou, Gansu, 730000, People's Republic of China Zhihui Ma , Tingting Wang & Haopeng Tang School of Mathematics and Computer Science, Northwest University for Nationalities, Lanzhou, Gansu, 730000, People's Republic of China Shufan Wang Search for Zhihui Ma in: Search for Shufan Wang in: Search for Tingting Wang in: Search for Haopeng Tang in: Correspondence to Zhihui Ma. All authors contributed equally and significantly in this paper. All authors read and approved the final manuscript. Ma, Z., Wang, S., Wang, T. et al. Stability analysis of prey-predator system with Holling type functional response and prey refuge. Adv Differ Equ 2017, 243 (2017) doi:10.1186/s13662-017-1301-4 predator-prey system prey refuge limit cycle bifurcation destabilizing effect
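As a closing numerical note on the paper above (added here, not part of the paper): the two refuge thresholds derived in the Hopf and transcritical bifurcation sections can be evaluated directly from the original parameters of system (1.5). The sketch below uses purely illustrative parameter values, chosen only so that the hypothesis \(2\leq n<\frac{c}{c-dh}\) of the oscillatory case is satisfied.

```python
def refuge_thresholds(K, lam, h, n, c, d):
    """Evaluate the Hopf threshold beta_c and the transcritical threshold beta_0
    of the refuge model (1.5), using the formulas quoted in the theorems above."""
    assert c > d * h, "coexistence requires c > d*h"
    base = (d / (lam * (c - d * h))) ** (1.0 / n) / K
    beta_0 = 1.0 - base                                             # transcritical bifurcation
    beta_c = 1.0 - base * (2*c - n*(c - d*h)) / (c - n*(c - d*h))   # Hopf bifurcation
    return beta_c, beta_0

# Illustrative values chosen so that 2 <= n < c/(c - d*h) holds (here c/(c - d*h) = 5).
beta_c, beta_0 = refuge_thresholds(K=10.0, lam=1.0, h=2.0, n=2, c=0.5, d=0.2)
print(f"Hopf threshold beta_c ~ {beta_c:.3f}")           # ~0.62: oscillations predicted below this
print(f"Transcritical threshold beta_0 ~ {beta_0:.3f}")  # ~0.86: predators die out above this
```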
Signed measure
(also: generalized measure, real-valued measure)
2010 Mathematics Subject Classification: Primary: 28A33 [MSN][ZBL]
$\newcommand{\abs}[1]{\left|#1\right|}$
A signed measure is a real-valued $\sigma$-additive function defined on a $\sigma$-algebra $\mathcal{B}$ of subsets of a set $X$. More generally one can consider vector-valued measures, i.e. $\sigma$-additive functions $\mu$ on $\mathcal{B}$ taking values in a Banach space $V$ (see Vector measure). The total variation measure of $\mu$ is defined on $B\in\mathcal{B}$ as
\[
\abs{\mu}(B) :=\sup\left\{ \sum_i \abs{\mu(B_i)}_V: \{B_i\}\subset\mathcal{B} \text{ is a countable partition of } B\right\},
\]
where $\abs{\cdot}_V$ denotes the norm of $V$. In the real-valued case the above definition simplifies as
\[
\abs{\mu}(B) = \sup_{A\in \mathcal{B}, A\subset B} \left(\abs{\mu (A)} + \abs{\mu (B\setminus A)}\right).
\]
$\abs{\mu}$ is a measure, and $\mu$ is said to have finite total variation if $\abs{\mu}(X) <\infty$. If $V$ is finite-dimensional, the Radon-Nikodym theorem implies the existence of a measurable $f\in L^1 (\abs{\mu}, V)$ such that
\[
\mu (B) = \int_B f \,d\abs{\mu}\qquad \text{for all } B\in\mathcal{B}.
\]
In the case of real-valued measures this implies that each such $\mu$ can be written as the difference of two nonnegative measures $\mu^+$ and $\mu^-$ which are mutually singular (i.e. such that there are sets $B^+, B^-\in\mathcal{B}$ with $\mu^+ (X\setminus B^+)= \mu^- (X\setminus B^-) =\mu^+ (B^-)=\mu^- (B^+)=0$). This last statement is sometimes referred to as the Hahn decomposition theorem. The Hahn decomposition theorem can also be proved by defining directly the measures $\mu^+$ and $\mu^-$ in the following way:
\begin{align*}
\mu^+ (B) &= \sup \{ \mu (A): A\in \mathcal{B}, A\subset B\},\\
\mu^- (B) &= \sup \{ -\mu (A): A\in \mathcal{B}, A\subset B\}.
\end{align*}
$\mu^+$ and $\mu^-$ are sometimes called, respectively, the positive and negative variations of $\mu$. Observe that $\abs{\mu} = \mu^++\mu^-$. By the Riesz representation theorem, the space of signed measures with finite total variation on the $\sigma$-algebra of Borel subsets of a locally compact Hausdorff space is the dual of the space of continuous functions (cp. also with Convergence of measures).
This article was adapted from an original article by M.I.
Voitsekhovskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
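As a concrete illustration of the definitions above (an example added here, not part of the original encyclopedia entry), take $X=[-1,1]$ with $\mathcal{B}$ the Borel $\sigma$-algebra and $\mu(B)=\int_B x\,dx$. Then a Hahn decomposition and the positive and negative variations are
\[
B^+=[0,1],\qquad B^-=[-1,0),\qquad
\mu^+(B)=\int_{B\cap[0,1]}x\,dx,\qquad
\mu^-(B)=\int_{B\cap[-1,0)}(-x)\,dx,
\]
so that
\[
\abs{\mu}(B)=\mu^+(B)+\mu^-(B)=\int_B\abs{x}\,dx,\qquad \abs{\mu}(X)=1,
\]
while $\mu(X)=0$.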
go to structure Section 4.5: Equivalence (cite) Subsection 4.5.1: Equivalences of $\infty $-Categories Subsection 4.5.2: Categorical Pullback Squares Subsection 4.5.3: Categorical Equivalence Subsection 4.5.4: Categorical Pushout Squares Subsection 4.5.5: Isofibrations of Simplicial Sets Subsection 4.5.6: Isofibrant Diagrams Subsection 4.5.7: Detecting Equivalences of $\infty $-Categories Subsection 4.5.8: Application: Universal Property of the Join Subsection 4.5.9: Direct Image Fibrations 4.5 Equivalence Let $\operatorname{\mathcal{C}}$ and $\operatorname{\mathcal{D}}$ be categories. We say that a functor $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ is an isomorphism of categories if there exists a functor $G: \operatorname{\mathcal{D}}\rightarrow \operatorname{\mathcal{C}}$ satisfying the identities $G \circ F = \operatorname{id}_{\operatorname{\mathcal{C}}}$ and $F \circ G = \operatorname{id}_{\operatorname{\mathcal{D}}}$. This condition is somewhat unnatural, since it refers to equalities between objects of the functor categories $\operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{\mathcal{C}})$ and $\operatorname{Fun}(\operatorname{\mathcal{D}}, \operatorname{\mathcal{D}})$. For most purposes, it is better to adopt a looser definition. We say that a functor $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ is an equivalence of categories if there exists a functor $G: \operatorname{\mathcal{D}}\rightarrow \operatorname{\mathcal{C}}$ for which the composite functors $G \circ F$ and $F \circ G$ are isomorphic to the identity functors $\operatorname{id}_{\operatorname{\mathcal{C}}}$ and $\operatorname{id}_{\operatorname{\mathcal{D}}}$, respectively. In category theory, the notion of equivalence between categories plays a much more central role than the notion of isomorphism between categories, and virtually all important concepts are invariant under equivalence. In §4.5.1, we extend the notion of equivalence to the $\infty $-categorical setting. If $\operatorname{\mathcal{C}}$ and $\operatorname{\mathcal{D}}$ are $\infty $-categories, we will say that a functor $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ is an equivalence of $\infty $-categories if there exists a functor $G: \operatorname{\mathcal{D}}\rightarrow \operatorname{\mathcal{C}}$ for which the composite maps $G \circ F$ and $F \circ G$ are isomorphic to $\operatorname{id}_{\operatorname{\mathcal{C}}}$ and $\operatorname{id}_{\operatorname{\mathcal{D}}}$, when viewed as objects of the $\infty $-categories $\operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{\mathcal{C}})$ and $\operatorname{Fun}(\operatorname{\mathcal{D}}, \operatorname{\mathcal{D}})$, respectively (Definition 4.5.1.10). Phrased differently, a functor $F$ is an equivalence of $\infty $-categories if it is an isomorphism when viewed as a morphism of the category $\mathrm{h} \mathit{\operatorname{QCat}}$, whose objects are $\infty $-categories and whose morphisms are isomorphism classes of functors (Construction 4.5.1.1). In the study of $\infty $-categories, it can be technically convenient to work with simplicial sets which do not satisfy the weak Kan extension condition. 
For example, it is often harmless to replace the standard $n$-simplex $\Delta ^ n$ by its spine $\operatorname{Spine}[n] \subseteq \Delta ^ n$: for any $\infty $-category $\operatorname{\mathcal{C}}$, the restriction map $\operatorname{Fun}( \Delta ^ n, \operatorname{\mathcal{C}}) \rightarrow \operatorname{Fun}( \operatorname{Spine}[n], \operatorname{\mathcal{C}})$ is a trivial Kan fibration (see Example 1.4.7.7). In §4.5.3, we formalize this observation by introducing the notion of categorical equivalence between simplicial sets. By definition, a morphism of simplicial sets $f: X \rightarrow Y$ is a categorical equivalence if, for every $\infty $-category $\operatorname{\mathcal{C}}$, the induced functor of $\infty $-categories $\operatorname{Fun}(Y, \operatorname{\mathcal{C}}) \rightarrow \operatorname{Fun}(X, \operatorname{\mathcal{C}})$ is bijective on isomorphism classes of objects (Definition 4.5.3.1). If $X$ and $Y$ are $\infty $-categories, this reduces to the condition that $f$ is an equivalence of $\infty $-categories in the sense of §4.5.1 (Example 4.5.3.3). However, we will encounter many other examples of categorical equivalences between simplicial sets which are not $\infty $-categories: for example, every inner anodyne morphism of simplicial sets is a categorical equivalence (Corollary 4.5.3.14). Throughout this book, we will generally emphasize concepts which are invariant under categorical equivalence. In practice, this requires us to take some care when manipulating elementary constructions, such as fiber products. If $F_0: \operatorname{\mathcal{C}}_0 \rightarrow \operatorname{\mathcal{C}}$ and $F_1: \operatorname{\mathcal{C}}_1 \rightarrow \operatorname{\mathcal{C}}$ are functors of $\infty $-categories, then the fiber product $\operatorname{\mathcal{C}}_0 \times _{\operatorname{\mathcal{C}}} \operatorname{\mathcal{C}}_1$ (formed in the category of simplicial sets) need not be an $\infty $-category. Moreover, the construction $(F_0, F_1) \mapsto \operatorname{\mathcal{C}}_0 \times _{\operatorname{\mathcal{C}}} \operatorname{\mathcal{C}}_1$ does not preserve categorical equivalence in general. In §4.5.2, we remedy the situation by enlarging the fiber product $\operatorname{\mathcal{C}}_0 \times _{\operatorname{\mathcal{C}}} \operatorname{\mathcal{C}}_1$ to the homotopy fiber product $\operatorname{\mathcal{C}}_0 \times _{\operatorname{\mathcal{C}}}^{\mathrm{h}} \operatorname{\mathcal{C}}_1$, given by the formula \[ \operatorname{\mathcal{C}}_0 \times _{\operatorname{\mathcal{C}}}^{\mathrm{h}} \operatorname{\mathcal{C}}_1 = \operatorname{\mathcal{C}}_0 \times _{ \operatorname{Fun}( \{ 0\} , \operatorname{\mathcal{C}}) } \operatorname{Isom}(\operatorname{\mathcal{C}}) \times _{ \operatorname{Fun}( \{ 1\} , \operatorname{\mathcal{C}}) } \operatorname{\mathcal{C}}_1 \] (see Construction 4.5.2.1). The homotopy fiber product $\operatorname{\mathcal{C}}_0 \times _{\operatorname{\mathcal{C}}}^{\mathrm{h}} \operatorname{\mathcal{C}}_1$ is always an $\infty $-category (Remark 4.5.2.2), and the construction $(F_0, F_1) \mapsto \operatorname{\mathcal{C}}_0 \times _{\operatorname{\mathcal{C}}}^{\mathrm{h}} \operatorname{\mathcal{C}}_1$ is invariant under equivalence (Corollary 4.5.2.18). 
We will say that a commutative diagram of $\infty $-categories \begin{equation} \begin{gathered}\label{equation:square-for-pullback} \xymatrix@R =50pt@C=50pt{ \operatorname{\mathcal{C}}_{01} \ar [r] \ar [d] & \operatorname{\mathcal{C}}_0 \ar [d] \\ \operatorname{\mathcal{C}}_1 \ar [r] & \operatorname{\mathcal{C}}} \end{gathered} \end{equation} is a categorical pullback square if it induces an equivalence of $\infty $-categories $\operatorname{\mathcal{C}}_{01} \rightarrow \operatorname{\mathcal{C}}_{0} \times ^{\mathrm{h}}_{\operatorname{\mathcal{C}}} \operatorname{\mathcal{C}}_1$ (Definition 4.5.2.7). This is closely related to the notion of homotopy pullback diagram introduced in §3.4.1: A commutative diagram of Kan complexes is a homotopy pullback square if and only if it is a categorical pullback square (Proposition 4.5.2.9). The diagram of $\infty $-categories (4.19) is a categorical pullback square if and only if, for every simplicial set $X$, the induced diagram of Kan complexes \[ \xymatrix@R =50pt@C=50pt{ \operatorname{Fun}(X,\operatorname{\mathcal{C}}_{01})^{\simeq } \ar [r] \ar [d] & \operatorname{Fun}(X,\operatorname{\mathcal{C}}_0)^{\simeq } \ar [d] \\ \operatorname{Fun}(X,\operatorname{\mathcal{C}}_1)^{\simeq } \ar [r] & \operatorname{Fun}(X,\operatorname{\mathcal{C}})^{\simeq } } \] is a homotopy pullback square (Proposition 4.5.2.12). In §4.5.4 we study the dual notion of categorical pushout square (Definition 4.5.4.1), which is an $\infty $-categorical counterpart of the theory of homotopy pushout squares developed in §3.4.2. Recall that every $\infty $-category $\operatorname{\mathcal{C}}$ contains a largest Kan complex, which we denote by $\operatorname{\mathcal{C}}^{\simeq }$ and refer to as the core of $\operatorname{\mathcal{C}}$ (Construction 4.4.3.1). The construction $\operatorname{\mathcal{C}}\mapsto \operatorname{\mathcal{C}}^{\simeq }$ can often be used to reformulate questions about $\infty $-categories in terms of the classical homotopy theory of Kan complexes. It is not difficult to show that a functor of $\infty $-categories $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ is an equivalence if and only if, for every simplicial set $X$, the induced map $\operatorname{Fun}( X, \operatorname{\mathcal{C}})^{\simeq } \xrightarrow {F \circ } \operatorname{Fun}(X, \operatorname{\mathcal{D}})^{\simeq }$ is a homotopy equivalence of Kan complexes (Proposition 4.5.1.22). In §4.5.7, we show that it suffices to verify this condition in the special case $X = \Delta ^1$ (Theorem 4.5.7.1). As an application, we show that the collection of categorical equivalences is stable under the formation of filtered colimits (Corollary 4.5.7.2). In §4.5.8, we study an important class of categorical equivalences emerging from the theory of joins developed in §4.3. Recall that, if $\operatorname{\mathcal{C}}$ and $\operatorname{\mathcal{D}}$ are categories, then the join $\operatorname{\mathcal{C}}\star \operatorname{\mathcal{D}}$ is isomorphic to the iterated pushout \[ \operatorname{\mathcal{C}}\coprod _{ (\operatorname{\mathcal{C}}\times \{ 0\} \times \operatorname{\mathcal{D}})} (\operatorname{\mathcal{C}}\times [1] \times \operatorname{\mathcal{D}}) \coprod _{ (\operatorname{\mathcal{C}}\times \{ 1\} \times \operatorname{\mathcal{D}}) } \operatorname{\mathcal{D}}, \] formed in the category $\operatorname{Cat}$ of (small) categories (Remark 4.3.2.14). In the setting of $\infty $-categories, the situation is more subtle (Warning 4.3.3.31). 
For any simplicial sets $X$ and $Y$, there is a natural comparison map \[ c_{X,Y}: X \coprod _{ (X \times \{ 0\} \times Y)} (X \times \Delta ^1 \times Y) \coprod _{ (X \times \{ 1\} \times Y)} Y \rightarrow X \star Y \] (Notation 4.5.8.3), which is almost never an isomorphism. Nevertheless, we show in §4.5.8 that $c_{X,Y}$ is always a categorical equivalence of simplicial sets (Theorem 4.5.8.8). Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor of $\infty $-categories. Recall that $F$ is an inner fibration if and only if every lifting problem \begin{equation} \begin{gathered}\label{equation:lifting-problem-char-isofibration} \xymatrix@R =50pt@C=50pt{ A \ar [d]^{i} \ar [r] & \operatorname{\mathcal{C}}\ar [d]^{F} \\ B \ar@ {-->}[ur] \ar [r] & \operatorname{\mathcal{D}}} \end{gathered} \end{equation} admits a solution, provided that the morphism $i: A \hookrightarrow B$ is inner anodyne (Proposition 4.1.3.1). In §4.5.5, we show that $F$ is an isofibration if and only if the following stronger condition holds: the lifting problem (4.20) admits a solution whenever the map $i: A \hookrightarrow B$ is both a monomorphism and a categorical equivalence (Proposition 4.5.5.1). Using this characterization, we extend the notion of isofibration to simplicial sets which are not necessarily $\infty $-categories (Definition 4.5.5.5).
MathOverflow is a question and answer site for professional mathematicians. It only takes a minute to sign up. On an Inequality of Lars Hörmander Let $P(z)$ be a non-null complex polynomial in $\nu$ variables $z=(z_1,\dots,z_n)$ of degree $\mu$: \begin{equation} P(z)=\sum_{|\alpha| \leq \mu} c_{\alpha} z^{\alpha}, \end{equation} where as usual for every $\alpha=(\alpha_1,\dots,\alpha_\nu) \in \mathbb{N}^{\nu}$ (here and in the following $\mathbb{N}$ denotes the set of all non-negative integers) we set $|\alpha|=\alpha_1+\dots+\alpha_\nu$, and $z^{\alpha}=z_1^{\alpha_1}\dots z_{\nu}^{\alpha_\nu}$. Consider $P$ as a polynomial function from $\mathbb{R}^\nu$ into $\mathbb{C}$: \begin{equation} P(x)=\sum_{|\alpha| \leq \mu} c_{\alpha} x^{\alpha} \quad (x \in \mathbb{R}^{\nu}). \end{equation} For any $m \in \mathbb{N}$, any $S \subseteq \mathbb{R}^\nu$, and any $\phi \in \mathcal{D}(\mathbb{R}^\nu)$ set: \begin{equation} ||\phi||_{m,S} = \sup_{\substack{x \in S \\ |\alpha| \leq m}} |(D^{\alpha} \phi)(x)|. \end{equation} Let $M > L > 0$ and put $Q=[-M,M]^\nu$ and $E=Q \backslash (-L,L)^\nu$. I am trying to prove that for any $m \in \mathbb{N}$, there exist $K > 0$ and $m' \in \mathbb{N}$ such that we have \begin{equation} ||\phi||_{m,E} \leq K ||P\phi||_{m',E} \quad \forall \phi \in \mathcal{D}_{Q} \tag{I}, \end{equation} where as usual $\mathcal{D}_{Q}$ is the set of all complex-valued functions $\phi \in C^{\infty}(\mathbb{R}^\nu)$ with support contained in $Q$. See the notes below for an explanation of the origin and relevance of this question. Thank you very much in advance for your attention. NOTE (1). If we take $L=0$, so that $E=Q$, then (I) is an immediate corollary of a remarkable result proved by Lars Hörmander in his wonderful work On the Division of Distributions by Polynomials. Indeed, inequality (4.3) of this work (taken with $k=0$) implies that for any $n, m \in \mathbb{N}$, there exist $K > 0$ and $n', m' \in \mathbb{N}$ such that \begin{equation} \sup_{\substack{x \in \mathbb{R}^\nu \\ |\alpha| \leq m}} (1+|x|)^n |(D^{\alpha} \phi) (x)| \leq K \sup_{\substack{x \in \mathbb{R}^\nu \\ |\alpha| \leq m'}} (1+|x|)^{n'} |(D^{\alpha} (P\phi)) (x)| \quad \forall \phi \in \mathcal{S}(\mathbb{R}^\nu) \tag{II}. \end{equation} We can state (II) in another way. Define the linear subspace $\mathcal{M}_{P}$ of $\mathcal{S}(\mathbb{R}^\nu)$: \begin{equation} \mathcal{M}_{P}=\{\psi \in \mathcal{S}(\mathbb{R}^\nu): \psi=P \phi, \phi \in \mathcal{S}(\mathbb{R}^\nu) \}, \end{equation} and consider the multiplication map $M_{P}:\mathcal{S}(\mathbb{R}^\nu) \rightarrow \mathcal{M}_{P}$ defined by \begin{equation} M_{P}(\phi)=P\phi \quad (\phi \in \mathcal{S}(\mathbb{R}^\nu)), \end{equation} Then (II) is equivalent to say that $M_{P}$ has a continuous inverse (this statement is Theorem (1) in Hörmander's work). NOTE (2). Inequality (I) was stated without proof by ifw in his answer to the post Division of Distributions by Polynomials (see also my own answer for a comment). If true, (I) would allow to give a direct proof of Theorem (4) in Hörmander's paper, which states that every distribution can be divided by a non-null polynomial. In one of his comments, ifw said that (I) could be proved by localizing (II) or by modifying properly Hörmander's original proof. 
Even though I studied very carefully Hörmander's original proof (which can also be found in Trèves, Linear Partial Differential Equations with Constant Coefficients, $\S$ 5.5), I could not modify it in order to obtain (I) nor I could get (I) by localizing (II). fa.functional-analysis real-analysis schwartz-distributions 122 silver badges33 bronze badges Maurizio BarbatoMaurizio Barbato Finally, I realized how to modify Hörmander's proof in order to prove (I). I will describe here the necessary changes we have to do to Hörmander's proof. Clearly, all the notation is that of Hörmander's paper. Let $C$ be a closed, convex set of $\mathbb{R}^\nu$. Then make the following changes in Section (4) of Hörmander's paper On the Division of Distributions by Polynomials: $\bullet$ take the suprema comparing in all the relations over $C$; $\bullet$ after relation (4.2), replace everywhere $N^k$ with $N^k \cap C$ and $N^{k+1}$ with $N^{k+1} \cap C$; $\bullet$ replace $\mathbb{R}^\nu$ with $C$ in (4.13); $\bullet$ replace in every relation $S(\xi)$ with $S(\xi) \cap C$ and $S(\eta)$ with $S(\eta) \cap C$; $\bullet$ replace in every relation $S(\xi,\eta)$ with $S(\xi,\eta) \cap C$ and $S_{1}(\xi,\eta)$ with $S_{1}(\xi,\eta) \cap C$. With these changes, Hörmander's proof shows that to all $n, m \in \mathbb{N}$ and $k \leq \mu$ there are $n', m' \in \mathbb{N}$ and a constant $K > 0$ such that \begin{equation} \sup_{\xi \in C} (1+|\xi|)^n |f|_{m, (N^k \cap C)_{\xi}} \leq K \sup_{\xi \in C} (1+|\xi|)^{n'} |Pf|_{m',\xi}, \quad (f \in C^{m'}(\mathbb{R}^\nu)). \end{equation} So, if $C$ is a compact convex set, the previous inequality, taken with $k=0$, implies that to all $m \in \mathbb{N}$ there are $ m' \in \mathbb{N}$ and a constant $K > 0$ such that \begin{equation} ||f||_{m,C} \leq K ||Pf||_{m',C} \quad (f \in C^{m'}(\mathbb{R}^\nu)) \tag{III}. \end{equation} Now, assume that $E$ is a finite union of compact convex subsets $C_1,\dots, C_p$ of $\mathbb{R}^\nu$. By (III), for each $i=1,\dots,p$, there exist $m'_{i} \in \mathbb{N}$ and $K_i > 0$ such that \begin{equation} ||f||_{m,C_i} \leq K_i ||Pf||_{m'_{i},C_i} \quad (f \in C^{m'_{i}}(\mathbb{R}^\nu)) \tag{III}. \end{equation} By taking $K=\max\{K_1,\dots,K_p\}$ and $m'=\max\{m'_{1},\dots,m'_{p}\}$, we then get \begin{equation} ||f||_{m,E} \leq K ||Pf||_{m',E} \quad (f \in C^{m'}(\mathbb{R}^\nu)) \tag{IV}. \end{equation} Clearly (I) is a particular case of (IV), so we are done. QED REMARK. One could think that (IV) still holds if we take $E$ to be any compact set, but this is not the case, as the following counterexample shows. Let $\nu=1$, and set \begin{equation} E=\{ 0 \} \cup \left\{ \frac{1}{n}: n=1,2,3,\dots \right\}. \end{equation} Take $P(x)=x$ and let $K > 0$. Choose $N > K$. By the celebrated Whitney's Extension Theorem (which is Theorem (I) in Analytic Extensions of Differentiable Functions Defined in Closed Sets), there exists $f \in C^{\infty}(\mathbb{R})$, with support contained in $\left[ \frac{1}{N+1},2 \right]$, such that for $n=1,2,3,\dots,N$ \begin{equation} f^{(m)}\left( \frac{1}{n} \right) = (-1)^m m! n^{m+1} \quad (m \in \mathbb{N}). \end{equation} We have $D^{m}(Pf)(x)=0$ for all $x \in E$ and all $m=1,2,\dots$, so that $||Pf||_{m',E}=1$ for all $m' \in \mathbb{N}$. But $||f||_{0,E}=N > K$, and we conclude that (IV) cannot hold. Thanks for contributing an answer to MathOverflow! Not the answer you're looking for? Browse other questions tagged fa.functional-analysis real-analysis schwartz-distributions or ask your own question. 
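A short remark on the counterexample in the final REMARK above (this observation is added here and is not part of the original answer): the Whitney data prescribed at the points \(1/n\) are precisely the derivatives of the function \(x\mapsto 1/x\), since
\begin{equation}
\frac{d^m}{dx^m}\Bigl(\frac{1}{x}\Bigr)=(-1)^m\,m!\,x^{-(m+1)},
\qquad\text{so that at } x=\tfrac{1}{n}\ \text{this equals } (-1)^m\,m!\,n^{m+1}.
\end{equation}
Hence \(f\) agrees with \(1/x\) to infinite order at each point \(1/n\) with \(n\leq N\), so \(Pf=xf\) agrees with the constant function \(1\) to infinite order there; this is why \(D^m(Pf)(1/n)=0\) for \(m\geq1\) and \(n\leq N\), while \(\|f\|_{0,E}\geq f(1/N)=N\).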
CommonCrawl
Focusing the electromagnetic field to 10⁻⁶λ for ultra-high enhancement of field-matter interaction Xiang-Dong Chen1,2, En-Hui Wang1,2, Long-Kun Shan1,2, Ce Feng1,2, Yu Zheng1,2, Yang Dong1,2, Guang-Can Guo1,2 & Fang-Wen Sun ORCID: orcid.org/0000-0002-9625-73901,2 Nature Communications volume 12, Article number: 6389 (2021) Imaging and sensing Nanophotonics and plasmonics Nanowires Focusing the electromagnetic field to enhance its interaction with matter has been promoting research and applications in nanoelectronics and photonics. Usually, evanescent-wave coupling is adopted in various nanostructures and materials to confine the electromagnetic field into a subwavelength space. Here, based on the direct coupling with confined electron oscillations in a nanowire, we demonstrate a tight localization of the microwave field down to 10⁻⁶λ. A hybrid nanowire-bowtie antenna is further designed to focus the free-space microwave to this deep-subwavelength space. Detected by the nitrogen vacancy center in diamond, the field intensity and microwave-spin interaction strength are enhanced by 2.0 × 10⁸ and 1.4 × 10⁴ times, respectively. Such a high concentration of the microwave field will further promote integrated quantum information processing, sensing and microwave photonics in a nanoscale system. The electromagnetic field can usually be focused at the scale of its wavelength. However, in pursuit of a strong interaction with matter, the manipulation of the electromagnetic field in a subwavelength space is one of the most important tasks in nanoscience research and applications, ranging from integrated optics to biological sensing1,2,3,4. Nanostructures of dielectric5,6,7, metallic8,9,10, and two-dimensional materials11,12 have been developed to tightly confine the electromagnetic field, mainly based on evanescent-wave coupling. For example, plasmonic nanostructures have been used for light field confinement at a scale smaller than 10⁻²λ8,9,10. These confinements can dramatically reduce the mode volume to greatly increase the density of states and enhance the light-matter interaction at the nanoscale, which has advanced research on single-molecule spectroscopy13, nanolasers14, nonlinear optics15, and solar energy16. In particular, the interaction between the microwave field and matter at the nanoscale strongly drives the development of quantum information processing, sensing, and microwave photonics. The deep-subwavelength confinement of the microwave field will benefit the individual manipulation of multiple qubits17,18. Meanwhile, the enhancement of the local microwave field is of central importance to microwave-to-optics conversion19,20, fast spin qubit manipulation21, and hybrid quantum system coupling22,23. This indicates that efficient localization and detection of the microwave field at the nanoscale are essential for developing a practical quantum information device. Furthermore, wireless qubit manipulation with a compact and scalable system will decrease the power consumption and reduce the heat load in a cryostat24,25. However, directly pumping the qubit from the far field is usually inefficient26, and the gradient of the microwave is limited by diffraction27. Though the in-plane slotted patch antenna has been demonstrated for the enhancement of the local microwave field at the deep-subwavelength scale19,28, the Johnson noise of a large metal film will decrease the spin relaxation time29,30, which is important for quantum computing and sensing.
Here, we study the field confinement based on the direct coupling between the electromagnetic field and confined electrons in a low-dimensional nanomaterial. A tight confinement of a microwave field with an ultra-strong intensity is realized by utilizing the near-field radiation of the electron oscillation in an Ag nanowire. Using the NV center in diamond as a noninvasive probe, we show that the microwave field can be localized down to 291 nm, corresponding to a scale of 10⁻⁶λ. For far-field spin manipulation, we design a hybrid nanowire-bowtie structure to focus the microwave field directly from free space to a deep-subwavelength volume. As a result, the microwave-spin interaction strength is greatly enhanced: we observe a 1.4 × 10⁴-fold enhancement of the Rabi oscillation frequency, corresponding to an increase of the field intensity by 2.0 × 10⁸ times. Further considering the light-guiding effect of the Ag nanowire, this antenna can be used for delivering and concentrating both light and microwave fields. Subsequently, a wireless platform can be developed for integrated quantum information processing and quantum sensing. The design of experiments As shown in Fig. 1a, the nanowire-bowtie hybrid antenna consists of an Ag nanowire with a diameter of 120 nm and a metallic bowtie structure. The gap between the two arms of the bowtie structure is Wgap = 8 μm. The length of the bowtie structure is 6.5 cm, while the widths at the end and at the gap are 1 cm and 160 μm, respectively (details in Supplementary Fig. 1). A double-ridged horn antenna radiates the microwave signal into free space. The nanowire-bowtie structure then receives the far-field microwave (at a distance of approximately 20 cm). Fig. 1: The principle of localizing and detecting the microwave field. a Sketch of the nanowire-bowtie antenna. A single-crystal diamond plate is placed under the nanowire. The spin state transition of the NV center in diamond is pumped by the localized microwave. b The image of the microwave distribution is obtained by recording the spin-state transition of NV centers at different positions with CSD nanoscopy. The inset shows the integrated cross-section profile. The solid line is the fit with Eq. (1). The error bars represent the standard error. The power of the microwave that is radiated by the horn antenna is 14 μW. The NV center in a single-crystal diamond plate is generated by nitrogen ion implanting. The depth is approximately 20 nm. The ground state of the NV center is a spin triplet. The transition between the ms = 0 and ms = ±1 states can be pumped by a resonant microwave. It subsequently changes the fluorescence intensity of the NV center. To detect the microwave, we record the optically detected magnetic resonance (ODMR) of the NV center under continuous-wave microwave pumping. The contrast of the ODMR signal increases with the amplitude of the microwave field (Supplementary Fig. 2). To non-invasively map the localized microwave with a high spatial resolution, charge state depletion (CSD) nanoscopy31 is applied for the diffraction-unlimited ODMR measurement. It is based on the charge state manipulation and detection of the NV center. The resolution of CSD nanoscopy is approximately 100 nm here, compared with the 500 nm resolution of confocal microscopy (Supplementary Fig. 3). For microwave field imaging over a large area, a wide-field microscope is also used to detect the ODMR signal of NV centers. Microwave localization and detection In Fig.
1b, we show the magnetic component of the microwave field near the Ag nanowire with a high spatial resolution. Here, without an external magnetic field, the resonant microwave frequency for the NV center spin transition is 2.87 GHz. The result shows that the near-field microwave is confined near the Ag nanowire. The width of the cross-section profile is 291 ± 10 nm, corresponding to 2.8 × 10⁻⁶λ, where λ ≈ 10.4 cm is the microwave wavelength in vacuum. The distribution of the magnetic component (the inset of Fig. 1b) can be well fitted by a reciprocal function: $$|B_{\rm MW}(r)| \propto \frac{1}{\sqrt{r^{2}+r_{0}^{2}}},$$ where r is the distance from the nanowire in the xy plane and r0 is determined by the radius of the nanowire and the depth of the NV center. The fitting result shows that r0 = 84 ± 3 nm, which matches the expectation. In contrast, evanescent-field coupling shows an exponential decay as a function of distance4. To reveal the mechanism of the microwave confinement, we separate the ODMR signal from the four categories of NV centers with different symmetry axes and obtain the vector information of the localized microwave field, as shown in Fig. 2a. Here, the x, y, and z axes are defined as the edges of the diamond plate. The symmetry axes of the four categories of NV centers are NV1 (−$\sqrt{2}$, 0, 1), NV2 ($\sqrt{2}$, 0, 1), NV3 (0, $\sqrt{2}$, 1), NV4 (0, −$\sqrt{2}$, 1). An external static magnetic field B0 is applied to split the four categories (NVi) according to the Zeeman effect, as shown in Fig. 2b. The difference in resonant frequencies is not large here, and no obvious wavelength-selectivity of the microwave localization is observed in this experiment. Then, we can assume that the localized microwave field distribution is the same for the different resonant frequencies. By recording the ODMR signal of NVi centers at different positions, we obtain the distribution of the magnetic projection BMWi, which is perpendicular to the NVi centers' symmetry axis. Fig. 2: Detection of the microwave field vector at the nanoscale. a Illustration of the directions of the nanowire and the NV center axes. b Frequency-scanning ODMR results of NV centers with four symmetry axes. c The fluorescence intensity of NV centers is enhanced due to the interaction with the Ag nanowire. The cross-section profile at the bottom is used to locate the relative position of the nanowire (red points in the image). d The image of ∣BMW2∣. The cross-section profile is the integrated signal of the whole image. The solid line is the simulation with a straight line current. The value is normalized by the saturation amplitude Bsat, as defined in Supplementary Eq. (2). Error bars represent the standard error. e, f The microwave vector is revealed by comparing different projections. The result in e records the ratio of the fluorescence intensity with BMW1 to the fluorescence with BMW2, and f is recorded as the ratio of the fluorescence with BMW3 to the fluorescence with BMW4. With weak microwave pumping, the images in (e) and (f) approximate the distributions of ∣BMW2∣ − ∣BMW1∣ and ∣BMW4∣ − ∣BMW3∣, respectively (see Supplementary Note 2). The scale bars in all the images are 400 nm in length. The power of the microwave source is set to 174 μW. To precisely map the microwave vector distribution, the position of the nanowire is first located according to the fluorescence enhancement of the NV center, as shown in Fig. 2c.
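To make the line-current picture and the NV-axis projections concrete, the following short Python sketch is offered as an illustrative addition (it is not code from the paper). It assumes the simple Biot-Savart field of a straight current flowing along the nanowire (taken here along y), evaluated in the NV plane at the fitted offset r0 = 84 nm, and it uses the statement above that ODMR on orientation NVi senses the magnetic field component perpendicular to that NV axis. Under these assumptions the field magnitude reproduces the 1/sqrt(r^2 + r0^2) profile of Eq. (1), and the projections reproduce the symmetry relations discussed next: |BMW3| ≈ |BMW4|, and |BMW1| mirroring |BMW2| about the yz plane.

```python
import numpy as np

# Illustrative model (assumptions, not the paper's code): a line current along y
# at height r0 above the NV plane; Biot-Savart gives B ~ (-r0, 0, -x)/(x^2 + r0^2)
# up to the prefactor mu0*I/(2*pi), which is set to 1 (arbitrary units).
r0 = 84e-9                                   # fitted wire-to-NV distance (m)
x = np.linspace(-600e-9, 600e-9, 1201)       # lateral offset from the wire (m)

Bx = -r0 / (x**2 + r0**2)
By = np.zeros_like(x)
Bz = -x / (x**2 + r0**2)
B = np.stack([Bx, By, Bz], axis=1)
Bmag = np.linalg.norm(B, axis=1)             # = 1/sqrt(x^2 + r0^2), i.e. Eq. (1)

# NV symmetry axes as listed in the text (unnormalized)
axes = {
    "NV1": np.array([-np.sqrt(2), 0.0, 1.0]),
    "NV2": np.array([ np.sqrt(2), 0.0, 1.0]),
    "NV3": np.array([0.0,  np.sqrt(2), 1.0]),
    "NV4": np.array([0.0, -np.sqrt(2), 1.0]),
}

def perp_component(B, n):
    """Magnitude of the field component perpendicular to the NV axis n,
    which is the quantity sensed by the ODMR contrast of that orientation."""
    n = n / np.linalg.norm(n)
    parallel = B @ n
    return np.sqrt(np.clip(np.sum(B**2, axis=1) - parallel**2, 0.0, None))

Bperp = {name: perp_component(B, n) for name, n in axes.items()}

# Symmetry checks corresponding to the observations in the text:
# the field lies in the xz plane, so |B_MW3| = |B_MW4|, and
# |B_MW1|(x) = |B_MW2|(-x) (mirror symmetry about the yz plane).
print("max |B_MW3 - B_MW4|      :", np.max(np.abs(Bperp["NV3"] - Bperp["NV4"])))
print("max |B_MW1(x)-B_MW2(-x)| :", np.max(np.abs(Bperp["NV1"] - Bperp["NV2"][::-1])))
```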
We find that the position of the magnetic component's maximum does not always match the position of the nanowire. As shown in Fig. 2d, the maximum of BMW2 is approximately 100 nm away from the nanowire. In Fig. 2e, f, by simultaneously detecting the fluorescence signals with different microwave pumping, we further highlight the differences between the distributions with ∣BMW2∣ − ∣BMW1∣ and ∣BMW4∣ − ∣BMW3∣, respectively. With the structure in Fig. 2, the amplitudes of BMW3 and BMW4 are almost the same, while the amplitude of BMW1 mirrors that of BMW2 with respect to the yz plane. The results indicate that the vector of the magnetic component lies in the xz plane. Comparison with the simulation (Supplementary Note 5) confirms that the magnetic component of the local microwave follows the distribution of the magnetic field around a straight line current flowing through the Ag nanowire. The small mismatch between the simulation and the experiments might be caused by errors in locating the nanowire, the distribution of NV centers along the z-axis, and uncertainty in the estimated resolution of the CSD nanoscopy. Based on the results of Figs. 1 and 2, we deduce that the free-space microwave is collected by the bowtie structure, and the resulting oscillating currents flow through the Ag nanowire and generate a strong localized microwave field. Due to the small size of the nanowire, the localized microwave is highly confined, in accordance with the Biot-Savart law. The localization of the microwave is mainly determined by the Ag nanowire, so the distribution of the localized microwave field will be the same for different wavelengths. In addition, since the bowtie structure is a typical wide-band antenna, the enhancement factor of the localized microwave varies only slightly with microwave frequency in this experiment. Far-field spin manipulation The tight confinement leads to a significant enhancement of the localized microwave field's intensity. It subsequently enhances the interaction with a spin qubit. In Fig. 3, we compare the localized microwave field with three different structures: the nanowire-bowtie hybrid antenna, the bowtie antenna without the Ag nanowire, and no antenna. The results show that, with the nanowire-bowtie antenna, the microwave is significantly enhanced near the Ag nanowire, while the bowtie antenna without the Ag nanowire only slightly increases the localized microwave field in the gap between the two arms. Fig. 3: Spin manipulation with the localized microwave field. The wide-field images of the microwave field enhancement with different structures: a nanowire-bowtie antenna; b bowtie antenna without nanowire; c no structure on the diamond surface. The scale bars are 20 μm in length. d−f Rabi oscillations of the NV center with the different structures in (a−c), respectively. The inset in d shows the pulse sequences for the Rabi oscillation measurement. The Rabi oscillation in d is measured with an NV center ensemble under the nanowire, and the results in e and f are measured with single NV centers. PMW is the power of the microwave source. Error bars represent the standard error. The microwave field enhancement can be used for fast and high-spatial-resolution spin qubit manipulation. Here, we use it to pump the Rabi oscillation of the NV center. As shown in Fig. 3d, with the nanowire-bowtie antenna, the Rabi frequency of the NV center under the Ag nanowire is approximately 1.6 μs⁻¹ with a 14 μW microwave excitation.
Without the Ag nanowire, the Rabi oscillation frequency of the NV center in the gap of the bowtie antenna is approximately 0.89 μs⁻¹ with a 21 W microwave excitation. In contrast, without any nanostructure, the Rabi oscillation frequency is only 0.14 μs⁻¹ under a 21 W microwave excitation. The results indicate that, by utilizing the nanowire-bowtie antenna for spin manipulation, the Rabi frequency can be improved by at least 1.4 × 10⁴ times, corresponding to increasing the local microwave intensity by 2.0 × 10⁸ times (a simple cross-check of these factors is sketched below). The observation of Rabi oscillation also indicates that the coherence of both the spin and the localized microwave is preserved, which is crucial for quantum applications. Note that single NV centers are used to measure the slow Rabi oscillations in Fig. 3e, f. The amplitude of the Rabi oscillation with the NV center ensemble in Fig. 3d is smaller than that with a single NV center in Fig. 3e, f. This is because there are four possible symmetry axes in the NV center ensemble, and we only measure the Rabi oscillation of NV centers with one particular axis. In addition, the inhomogeneous broadening of the NV center ensemble also decreases the visibility of the Rabi oscillation. The individual addressing of multiple qubits from the far field can be further explored by utilizing the polarization dependence of the localized microwave enhancement. Here, we rotate the horn antenna to change the polarization of the free-space microwave. The results in Fig. 4 show that, with an electrical polarization parallel to the bowtie-nanowire antenna, a stronger localized microwave field is observed near the nanowire-bowtie antenna. The polarization isolation of the localized microwave intensity with the nanowire-bowtie antenna is higher than 20 dB, but lower than 40 dB. Therefore, by encoding the microwaves for different spin manipulations into the polarization, nanowire-bowtie antennas with different orientations can be used to selectively manipulate qubits at different positions from the far field. Fig. 4: The localized microwave field distribution changes with the polarization of the free-space microwave. a, b The electrical polarization of the free-space microwave is parallel to the direction of the nanowire-bowtie structure. c, d The electrical polarization is perpendicular to the nanowire-bowtie structure. The results are obtained with wide-field microscopy. The scale bars are 20 μm in length. Efree denotes the electrical component of the free-space microwave. Integrating and miniaturizing electrical and optical devices is essential for practical quantum processing and sensing applications32,33. Various electrically conductive and ferromagnetic structures have been studied to transmit the microwave signal for spin manipulation at the nanoscale34,35. Our method provides a solution for efficient spin manipulation from the far field. The Johnson noise from an Ag nanowire is negligible in comparison with that of a metal film (Supplementary Fig. 4). This will simplify the quantum processing device and avoid thermal leakage in a cryostat. In addition, the light-guiding effect of an Ag nanowire can be utilized to optically pump and collect the fluorescence of individual qubits36,37,38. The simultaneous integration of electrical and optical components will certainly help to develop a compact and integrated quantum processing device. Focusing and detecting the nanoscale electromagnetic field can be further used to enhance the sensitivity of spin-based metrology.
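As promised above, here is a rough cross-check of the quoted enhancement factors. This is an illustrative addition; it assumes the standard scaling that the Rabi frequency is proportional to the microwave field amplitude, and hence to the square root of the applied source power, and simply normalizes the two measurements to equal source power.

```python
# Back-of-the-envelope check of the quoted enhancement factors (illustrative only).
# Assumption: Rabi frequency scales with the microwave B-field amplitude,
# which scales with the square root of the source power.
f_antenna, p_antenna = 1.6, 14e-6   # Rabi frequency (1/us) and source power (W), nanowire-bowtie antenna
f_bare, p_bare = 0.14, 21.0         # Rabi frequency (1/us) and source power (W), no structure

rabi_enhancement = (f_antenna / f_bare) * (p_bare / p_antenna) ** 0.5
intensity_enhancement = rabi_enhancement ** 2    # intensity scales as the amplitude squared

print(f"Rabi-frequency enhancement ~ {rabi_enhancement:.2g}")        # ~ 1.4e4
print(f"field-intensity enhancement ~ {intensity_enhancement:.2g}")  # ~ 2.0e8
```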
Recently, magnetic concentrators made of ferrite material have been applied for the detection of static magnetic fields with NV centers39,40. The key question is how tightly the electromagnetic field can be focused and detected. Utilizing the nanowire-bowtie antenna, we can improve the sensitivity by 1.4 × 10⁴ times. Combined with coherent spin manipulation41, this will help to realize ultra-weak microwave signal sensing, such as for a quantum radar. The enhanced fluorescence intensity of the NV center with the Ag nanowire will also help to improve the sensitivity of sensing. However, the oxidation of Ag nanowires in the atmosphere will change the conductivity of nanowire networks. Potential solutions include using a monolayer-SnO2-protected nanowire42. In conclusion, we have demonstrated the high concentration of a microwave field by utilizing the confined electron oscillation in a low-dimensional material. The results can be used for integrated quantum information processing and high-sensitivity quantum sensing. The electrical-grade diamond plates with {100} surfaces and <110> edges are purchased from Element 6. The size of the plate is 2 × 2 × 0.5 mm³. The NV center ensemble is produced through nitrogen ion implanting with an energy of 15 keV and a dosage of 10¹³/cm². The diamond is annealed at 850 °C for 2 h to improve the production efficiency of NV centers. The density of NV centers is estimated to be approximately 5000/μm². The high-density NV center samples are used to image the distribution of the local microwave. A low-density NV center sample has also been used to measure the Rabi oscillations in Fig. 3e, f. It is produced by nitrogen ion implanting with a dosage of 10⁹/cm². After the NV centers are produced, Ag nanowires are dropped on the surface of the diamond with a spin processor. Then, a small metallic bowtie structure of chromium/gold (5/200 nm thickness) film is produced on the diamond surface through lift-off. Finally, a large in-plane bowtie antenna is made with copper foil tape. The Au film on the diamond plate is ohmically connected to the copper tape with silver glue. The CSD nanoscopy setup for ODMR measurement is based on a home-built confocal microscope, as shown in Fig. 5. The diamond plate is mounted on a piezo-stage (P-733.3DD, PI). CW lasers with wavelengths of 532 (Coherent), 589 (MGL-III-589nm, New Industries Optoelectronics), and 637 nm (MRL-III-637nm, New Industries Optoelectronics) are modulated by acousto-optic modulators (AOMs, MT200-0.5-VIS, AA). A vortex phase plate (VPP-1a, RPC Photonics) is used to produce a doughnut-shaped 637 nm laser beam. The lasers pump the NV center in diamond from the backside through an objective (Leica) with 0.7 NA. The collected fluorescence is time-gated by another AOM. Then, it is detected by a single-photon counting module (SPCM-AQRH-15-FC, Excelitas) after passing through a long-pass filter (edge wavelength 668.9 nm, Semrock). In the wide-field microscope, a 532 nm CW laser (MLL-III-532nm, New Industries Optoelectronics) is used to pump the NV center. The fluorescence is detected by a CCD camera (iXon897, Andor). Fig. 5: The schematic diagram of the experimental setup for the CSD nanoscopy. DM1-3, long-pass dichroic mirrors with edge wavelengths of 658.8, 536.8, and 605 nm, respectively; AOM, acousto-optic modulator; SPCM, single-photon counting module; PBS, polarizing beam splitter. Two microwave generators (SMB 100A and SMA 100A, Rohde & Schwarz) are used to produce microwave signals with different frequencies.
The microwave pulse is controlled by microwave switches (ZASWA-2-50DR, MiniCircuits). Then, the two channels are combined by a combiner (ZFRSC-42-S, MiniCircuits) and amplified by a microwave amplifier (60S1G4A, Amplifier Research). The microwave is radiated into free space by a horn antenna (LB-2080-SF, Chengdu AINFO Inc.). The data that support the findings of this study are available from the corresponding author upon reasonable request. Marpaung, D., Yao, J. & Capmany, J. Integrated microwave photonics. Nat. Photon. 13, 80–90 (2019). Saha, K., Agasti, S. S., Kim, C., Li, X. & Rotello, V. M. Gold nanoparticles in chemical and biological sensing. Chem. Rev. 112, 2739–2779 (2012). Cheben, P., Halir, R., Schmid, J. H., Atwater, H. A. & Smith, D. R. Subwavelength integrated photonics. Nature 560, 565–572 (2018). Kauranen, M. & Zayats, A. V. Nonlinear plasmonics. Nat. Photon. 6, 737–748 (2012). Hu, S. et al. Experimental realization of deep-subwavelength confinement in dielectric optical resonators. Sci. Adv. 4, eaat2355 (2018). Choi, H., Heuck, M. & Englund, D. Self-similar nanocavity design with ultrasmall mode volume for single-photon nonlinearities. Phys. Rev. Lett. 118, 223605 (2017). Fröch, J. E. et al. Photonic nanobeam cavities with nanopockets for efficient integration of fluorescent nanoparticles. Nano Lett. 20, 2784–2790 (2020). Kim, M.-K. et al. Squeezing photons into a point-like space. Nano Lett. 15, 4102–4107 (2015). Chikkaraddy, R. et al. Single-molecule strong coupling at room temperature in plasmonic nanocavities. Nature 535, 127–130 (2016). Chen, W., Zhang, S., Deng, Q. & Xu, H. Probing of sub-picometer vertical differential resolutions using cavity plasmons. Nat. Commun. 9, 801 (2018). Alcaraz Iranzo, D. et al. Probing the ultimate plasmon confinement limits with a Van der Waals heterostructure. Science 360, 291–295 (2018). Xia, F., Wang, H., Xiao, D., Dubey, M. & Ramasubramaniam, A. Two-dimensional material nanophotonics. Nat. Photon. 8, 899–907 (2014). Zhang, R. et al. Chemical mapping of a single molecule by plasmon-enhanced Raman scattering. Nature 498, 82–86 (2013). Azzam, S. I. et al. Ten years of spasers and plasmonic nanolasers. Light Sci. Appl. 9, 90 (2020). Smirnova, D. & Kivshar, Y. S. Multipolar nonlinear nanophotonics. Optica 3, 1241–1255 (2016). Atwater, H. A. & Polman, A. Plasmonics for improved photovoltaic devices. Nat. Mater. 9, 205–213 (2010). Warring, U. et al. Individual-ion addressing with microwave field gradients. Phys. Rev. Lett. 110, 173002 (2013). Harty, T. P. et al. High-fidelity trapped-ion quantum logic using near-field microwaves. Phys. Rev. Lett. 117, 140501 (2016). Salamin, Y. et al. Microwave plasmonic mixer in a transparent fibre-wireless link. Nat. Photon. 12, 749–753 (2018). Wang, C. et al. Integrated lithium niobate electro-optic modulators operating at CMOS-compatible voltages. Nature 562, 101–104 (2018). Fuchs, G. D., Dobrovitski, V. V., Toyli, D. M., Heremans, F. J. & Awschalom, D. D. Gigahertz dynamics of a strongly driven single quantum spin. Science 326, 1520–1522 (2009). Clerk, A. A., Lehnert, K. W., Bertet, P., Petta, J. R. & Nakamura, Y. Hybrid quantum systems with circuit quantum electrodynamics. Nat. Phys. 16, 257–267 (2020). Kubo, Y. et al. Strong coupling of a spin ensemble to a superconducting resonator. Phys. Rev. Lett. 105, 140502 (2010). Blais, A., Girvin, S. M. & Oliver, W. D. Quantum information processing and quantum optics with circuit quantum electrodynamics. Nat. Phys. 16, 247–256 (2020). Pauka, S. J. et al. 
A cryogenic cmos chip for generating control signals for multiple qubits. Nat. Electron. 4, 64–70 (2021). Bardin, J. C., Slichter, D. H. & Reilly, D. J. Microwaves in quantum computing. IEEE J. Microw. 1, 403–427 (2021). Bruzewicz, C. D., Chiaverini, J., McConnell, R. & Sage, J. M. Trapped-ion quantum computing: Progress and challenges. Appl. Phys. Rev. 6, 021314 (2019). Salamin, Y. et al. Direct conversion of free space millimeter waves to optical domain by plasmonic modulator antenna. Nano Lett. 15, 8342–8346 (2015). Kolkowitz, S. et al. Probing Johnson noise and ballistic transport in normal metals with a single-spin qubit. Science 347, 1129–1132 (2015). Ariyaratne, A., Bluvstein, D., Myers, B. A. & Jayich, A. Nanoscale electrical conductivity imaging using a nitrogen-vacancy center in diamond. Nat. Commun. 9, 2406 (2018). Chen, X. D. et al. Subdiffraction optical manipulation of the charge state of nitrogen vacancy center in diamond. Light Sci. Appl. 4, e230 (2015). Kim, D. et al. A CMOS-integrated quantum sensor based on nitrogen-vacancy centres. Nat. Electron. 2, 284–289 (2019). Mehta, K. K. et al. Integrated optical multi-ion quantum logic. Nature 586, 533–537 (2020). Staacke, R., John, R., Kneiß, M., Grundmann, M. & Meijer, J. Highly transparent conductors for optical and microwave access to spin-based quantum systems. npj Quantum Inf. 5, 98 (2019). Bertelli, I. et al. Magnetic resonance imaging of spin-wave transport and interference in a magnetic insulator. Sci. Adv. 6, eabd3556 (2020). Huck, A., Kumar, S., Shakoor, A. & Andersen, U. L. Controlled coupling of a single nitrogen-vacancy center to a silver nanowire. Phys. Rev. Lett. 106, 096801 (2011). Kumar, S., Huck, A. & Andersen, U. L. Efficient coupling of a single diamond color center to propagating plasmonic gap modes. Nano Lett. 13, 1221–1225 (2013). Wei, H. et al. Plasmon waveguiding in nanowires. Chem. Rev. 118, 2882–2926 (2018). Fescenko, I. et al. Diamond magnetometer enhanced by ferrite flux concentrators. Phys. Rev. Res. 2, 023394 (2020). Zhang, C. et al. Diamond magnetometry and gradiometry towards subpicotesla dc field measurement. Phys. Rev. Appl. 15, 064075 (2021). Barry, J. F. et al. Sensitivity optimization for NV-diamond magnetometry. Rev. Mod. Phys. 92, 015004 (2020). Zhao, Y. et al. Protecting the nanoscale properties of ag nanowires with a solution-grown SnO2 monolayer as corrosion inhibitor. J. Am. Chem. Soc. 141, 13977–13986 (2019). This work was supported by the National Key Research and Development Program of China (No. 2017YFA0304504); Anhui Initiative in Quantum Information Technologies (AHY130100); National Natural Science Foundation of China (Nos. 91536219, 91850102). The sample preparation was partially conducted at the USTC Center for Micro and Nanoscale Research and Fabrication. CAS Key Laboratory of Quantum Information, School of Physical Sciences, University of Science and Technology of China, Hefei, 230026, People's Republic of China Xiang-Dong Chen, En-Hui Wang, Long-Kun Shan, Ce Feng, Yu Zheng, Yang Dong, Guang-Can Guo & Fang-Wen Sun CAS Center For Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, 230026, People's Republic of China Xiang-Dong Chen En-Hui Wang Long-Kun Shan Ce Feng Yu Zheng Yang Dong Guang-Can Guo Fang-Wen Sun X.C. and F.S. conceived the idea. X.C. performed the experiments. E.W. and C.F. prepared the samples. L.S. and Y.Z. built the electrical setup. X.C., Y.D., and F.S. analyzed the data. 
All authors contributed to the discussion and editing of the manuscript. Correspondence to Fang-Wen Sun. Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Chen, XD., Wang, EH., Shan, LK. et al. Focusing the electromagnetic field to 10⁻⁶λ for ultra-high enhancement of field-matter interaction. Nat Commun 12, 6389 (2021). https://doi.org/10.1038/s41467-021-26662-5
CommonCrawl
A low-cost paper-based synthetic biology platform for analyzing gut microbiota and host biomarkers Melissa K. Takahashi ORCID: orcid.org/0000-0003-4937-29241 na1, Xiao Tan1,2,3,4,5 na1, Aaron J. Dy ORCID: orcid.org/0000-0003-0319-54161,5,6 na1, Dana Braff1,7, Reid T. Akana6, Yoshikazu Furuta ORCID: orcid.org/0000-0003-4710-13891,8, Nina Donghia4, Ashwin Ananthakrishnan2,3 & James J. Collins1,4,5,6,9,10 Nature Communications volume 9, Article number: 3347 (2018) There is a need for large-scale, longitudinal studies to determine the mechanisms by which the gut microbiome and its interactions with the host affect human health and disease. Current methods for profiling the microbiome typically utilize next-generation sequencing applications that are expensive, slow, and complex. Here, we present a synthetic biology platform for affordable, on-demand, and simple analysis of microbiome samples using RNA toehold switch sensors in paper-based, cell-free reactions. We demonstrate species-specific detection of mRNAs from 10 different bacteria that affect human health and four clinically relevant host biomarkers. We develop a method to quantify mRNA using our toehold sensors and validate our platform on clinical stool samples by comparison to RT-qPCR. We further highlight the potential clinical utility of the platform by showing that it can be used to rapidly and inexpensively detect toxin mRNA in the diagnosis of Clostridium difficile infections. The gut microbiome is an essential contributor to numerous processes in human health and disease, including proper development of the immune system1, host responses to acute and chronic infections2,3, cardiovascular disease4, and drug metabolism5. It is also an important modulator of gastrointestinal function, including inflammatory bowel disease (IBD)6,7, childhood malnutrition8,9, and cancer immunotherapy treatment10,11. Increasing evidence suggests that host–microbiome interactions also play a key role in these health conditions12,13,14. Despite the progress made in our understanding of the overall gut microbiome and the roles of individual species, large-scale longitudinal studies are needed to more directly investigate the causal relationship between microbial and host changes during disease states and responses to treatment. Current methods for profiling the gut microbiome typically involve deep sequencing coupled with high-throughput bioinformatics. These techniques are expensive, slow, and require significant technical expertise to design, run, and interpret. To reduce costs, researchers often batch samples for sequencing, which can lead to significant increases in turn-around time. These limitations have severely restricted the large-scale prospective monitoring of patient cohorts that is necessary to provide more granular data on microbial changes and human health15. Here we present a synthetic biology platform that addresses the need for affordable, on-demand, and simple analysis of microbiome samples that can aid in monitoring large-scale patient cohorts.
Our lab has developed a paper-based diagnostic platform for portable, low-cost detection of biologically relevant RNAs16,17. The platform is comprised of two synthetic biology technologies. The first technology is a molecular sensor called an RNA toehold switch that can be designed to bind and detect virtually any RNA sequence18. The second is an in vitro cell-free transcription–translation system that is freeze-dried onto paper disks for stable, long-term storage at room temperature16; upon rehydration, the cell-free system can execute any genetic circuit. We combined these two technologies to form an abiotic platform for rapid and inexpensive development and deployment of biological sensors. Recently, we reduced the limit of detection of this platform to three femtomolar (fM) by adding an isothermal RNA amplification step called NASBA (nucleic acid sequence based amplification)17. We demonstrated the utility of our platform in detecting the presence or absence of clinically relevant RNAs, including those of Ebola16 and Zika17 viruses, but we were not able to quantify their concentrations. Here, we address the need for affordable, on-demand, and simple analysis of microbiome samples by advancing our paper-based diagnostic platform for use as a research tool to quantify bacterial and host RNAs from stool samples (Fig. 1). To demonstrate the widespread applicability of our diagnostic platform, we select a panel of 10 bacteria relevant to diverse microbiome research studies. We first design toehold switch sensors that detect the V3 hypervariable region of the 16S ribosomal RNA (rRNA) for each species to mimic the standard method of identifying bacterial species through 16S ribosomal DNA sequencing. We then improve the specificity of detection by designing toehold switch sensors for species-specific mRNAs from each bacterial species, and demonstrate sensor orthogonality. Next, we develop a method that semi-quantitatively measures the concentrations of target RNAs using NASBA and toehold switch sensors, and validate this method against quantitative reverse transcription PCR (RT-qPCR) of clinical stool samples. We then develop toehold switch sensors to detect four host biomarkers, one of which, calprotectin, is well-established in clinical use19, and another, oncostatin M (OSM), which may have an immediate impact on clinical decision-making in the treatment of IBD20. We validate our method against RT-qPCR using clinical samples from patients with IBD. Finally, we demonstrate an additional potential clinical application of our RNA detection platform using the example of Clostridium difficile infection (CDI), where differentiating active infection from passive colonization has been fraught with difficulty21. Our method shows markedly different toxin mRNA expression levels in two toxigenic C. difficile strains that would otherwise be indistinguishable by standard DNA-based qPCR diagnosis. Workflow for analysis of microbiome samples using our paper-based detection platform. Once key bacteria or mRNA targets have been identified, RNA toehold switch sensors and primers for isothermal RNA amplification are designed in silico. Sensors and primers are then rapidly assembled and validated in paper-based reactions. For subsequent use, total RNA is extracted from human fecal samples using a commercially available kit. Specific RNAs are amplified via NASBA (nucleic acid sequence based amplification) and quantified using arrays of toehold switch sensors in paper-based reactions. 
Microbial and host biomarker RNA concentrations of the samples are determined using a simple calibration curve Development of toehold switch sensors to detect 16S rRNA Toehold switch sensors are synthetic riboregulators that control the translation of a gene via RNA–RNA interactions. They utilize a designed hairpin structure to block gene translation in cis by sequestration of the ribosome binding site (RBS) and start codon. Translation is activated upon the binding of a trans-acting trigger RNA to the toehold region of the switch, which relieves the sequestration of the RBS and allows translation of the downstream gene (Fig. 2a)18. Toehold switch sensors can be designed to bind nearly any RNA sequence. 16S rRNA sensors. a Schematic of toehold switch sensor function. b Best performing toehold switch sensors targeting the V3 hypervariable region of 16S rRNA for each species. Data represent mean GFP production rates from paper-based reactions with sensor alone and sensor plus 36-nucleotide trigger RNA (2 μM). Error bars represent high and low values from three technical replicates. c Schematic of NASBA-mediated RNA amplification. d Evaluation of NASBA primers. NASBA reactions were performed on 1 ng of total RNA for 90 min. Outputs from NASBA reactions were used to activate toehold switch sensors in paper-based reactions. Data represent mean values of three technical replicates. Error bars represent high and low values of the three replicates. e Orthogonality of 16S sensors. Each sensor was challenged with 2 μM of NASBA trigger RNAs from each species representing what would be amplified in a NASBA reaction. GFP production rates for an individual sensor were normalized to the production rate of the sensor plus its cognate trigger (100%). Data represent mean values of six replicates (two biological replicates × three technical replicates). Full data and s.d. are shown in Supplementary Figure 3 We designed toehold switch sensors for a panel of 10 bacteria chosen for their relevance to IBD22,23, childhood malnutrition8,9, and cancer immunotherapy10,11. To start, we targeted the 16S rRNA, because 16S rDNA profiling is a standard method for identifying bacterial species and rRNA is present at high copy numbers in bacteria. We used the series B toehold switch design from Pardee et al.17 and the Nucleic Acids Package (NUPACK)24 to design toehold switch sensors that target the V3 hypervariable region of the 16S rRNA for each target species. The candidate sensors were constructed to regulate the expression of the GFPmut3b gene25, and tested using in vitro transcribed trigger RNAs (36 nucleotides) in paper-based, cell-free reactions (Supplementary Fig. 1). The best performing sensor for each species (Fig. 2b, Supplementary Data 1) was chosen based on the lowest background GFP expression in reactions with sensor alone and highest fold activation in reactions with sensors activated by cognate trigger RNA. An individual bacterial species can comprise 1% or less of the total bacterial population within a human gut microbiome, so even highly abundant rRNA from an individual species can constitute 1–10 nanomolar (nM) RNA. Thus, unprocessed rRNA from stool samples is beyond the limit of detection of toehold switch sensors alone, which is approximately 10–30 nM17. We therefore incorporated NASBA, an isothermal RNA amplification technique26, into our sample processing steps prior to detection by toehold switch sensors to improve assay sensitivity. 
Briefly, NASBA begins with primer-directed reverse transcription of the template RNA, which creates an RNA/DNA duplex. The template RNA strand is then degraded by RNaseH, which allows a second primer containing the T7 promoter to bind and initiate double-stranded DNA synthesis. The double-stranded DNA serves as a template for T7-mediated transcription of the target RNA. Each newly synthesized RNA strand can serve as starting material for further amplification cycles (Fig. 2c)26 and can also be detected by the toehold sensors. We have previously shown that NASBA allows for the detection of single femtomolar concentrations of RNA17 using toehold switch sensors in paper-based reactions. NASBA primers were designed to amplify the V3 hypervariable region of the 16S rRNA for E. coli. We first tested the standard universal primer set routinely used to amplify the V3 region from 16S rDNA27 for sequencing applications. We used total RNA extracted from an E. coli monoculture to screen the primers. NASBA reactions were performed for 90 min on 1 ng of total RNA and then applied to paper-based reactions containing the E. coli 16S toehold switch sensor. Unexpectedly, these primers were not able to amplify the 16S V3 region from total RNA (Fig. 2d–E.c. 1). In order to investigate why the universal primer set performed poorly, we mapped the primer locations to chemical structure probing data for E. coli 30S ribosomal subunits28 and found that the forward primer targeted nucleotides that were not structurally accessible (Supplementary Fig 2). Using the 16S rRNA structure data, we designed new NASBA primer sets and screened for the highest activation of toehold switch sensors (primer set 4). We then designed and screened NASBA primers for the other nine species using the same methodology (Fig. 2d). We next investigated the specificity of our 16S toehold switch sensors. We synthesized trigger RNAs for each species representing the sequence that would be amplified by the NASBA primers (72–171 nucleotides) and measured the activation of each sensor when challenged with each of the 10 trigger RNAs (Fig. 2e, Supplementary Fig. 3, Supplementary Data 2). We observed good specificity for most of the 16S sensors; however, there was significant crosstalk among closely related bacteria. In the case of three closely related Bifidobacteria, the toehold switch sensors preferentially activate in the presence of their cognate trigger RNAs, but show significant crosstalk since the trigger sequences only differ by a few nucleotides. We also observed significant crosstalk between the C. difficile sensor and the trigger RNAs for E. rectale and F. prausnitzii. Although the C. difficile sensor is not activated by the exact 36 nucleotide triggers for E. rectale and F. prausnitzii sensors (Supplementary Fig. 4a), alignment of the NASBA-amplified RNA sequences for the three species showed that the extended sequence that is amplified by the E. rectale and F. prausnitzii NASBA primers aligned with the toehold region of the C. difficile sensor (Supplementary Fig. 4b, c). The 16S sensors can be used to identify and differentiate closely related families of bacteria, but due to crosstalk, they are not suitable for discriminating among highly related bacterial species. Bioinformatic analysis for species-specific identification To address the specificity limitations of the 16S sensors, we devised a bioinformatic pipeline to identify mRNAs that are unique to any given bacterial species (Fig. 3a). 
Our pipeline uses the phylogenetic assignment tools Metaphlan and Metaphlan229 to identify a set of unique sequences for a given bacterial species. These sequences are then evaluated using a series of BLAST30 alignments to determine the most specific markers with the highest expression in human stool (see Methods). Species-specific mRNA sensors. a Bioinformatic pipeline for identifying species-specific mRNAs. b Best performing NASBA primers and species-specific mRNA sensors for each species. NASBA reactions were performed on 10 ng of total RNA for 90 min. Outputs from NASBA reactions were used to activate toehold switch sensors in paper-based reactions. Data represent mean values of three technical replicates. Error bars represent high and low values of the three replicates. c Orthogonality of species-specific sensors. Each sensor was challenged with 2 μM of trigger RNAs from each species representing what would be amplified in a NASBA reaction. GFP production rates for an individual sensor were normalized to the production rate of the sensor plus its cognate trigger (100%). Data represent mean values of six replicates (two biological replicates × three technical replicates). Full data and s.d. are shown in Supplementary Figure 6. d Orthogonality of NASBA primer sets. NASBA reactions were performed on 10 ng of total RNA for 90 min. Data represent mean ± s.d. of six replicates (two biological replicates (NASBA reactions) × three technical replicates (paper-based reactions)) We followed the same steps described for 16S rRNA sensor development to develop sensors for the species-specific mRNAs. We tested candidate toehold switch sensors in paper-based reactions and selected the best performing sensor for each species (Supplementary Fig. 5). We then designed NASBA primers and screened them on total RNA extracted from monocultures for each species. The best performing NASBA primer sets were chosen based on the ability of the amplified RNA to activate the corresponding toehold switch sensor (Fig. 3b). We note the apparent variation in the efficiency of amplification between species and attribute this to the variation in abundance of the mRNAs in each total RNA sample and possible differences in the structural accessibility of these transcripts. Finally, we tested the specificity of our toehold switch sensors by synthesizing trigger RNAs for each species representing the sequence that would be amplified by the NASBA primers and tested each sensor against each of the 10 trigger RNAs. We observed greatly improved sensor specificity compared to our 16S sensors with no significant crosstalk detected between any of the sensors (Fig. 3c, Supplementary Fig. 6). Next, we investigated the specificity of our NASBA primers by testing the output of NASBA reactions performed on three different total RNA samples: (1) total RNA isolated from an individual species; (2) a mixed sample comprised of total RNA from each of the 10 species; and (3) a mixed sample containing total RNA from all species except for the one corresponding to the NASBA primer set being tested. To keep the total concentration of a given sample constant, we supplemented samples (1) and (3) with yeast tRNA (Ambion), which is commonly used to increase the complexity of mRNA standards in RT-qPCR, because reverse transcription efficiencies change with the total amount of RNA in a reaction31. 
For example, each NASBA reaction was run on a total of 10 ng of RNA, where sample (1) contained 1 ng of total RNA plus 9 ng of yeast tRNA and sample (2) included 1 ng of RNA from each of the 10 individual species. For each NASBA primer set, we observed equivalent activation of the toehold switch sensors by RNA amplified from samples (1) and (2). Additionally, the outputs from sample (3) were equivalent to the toehold switch sensor alone for each species indicating that there was no amplification of the test target in sample (3) (Fig. 3d). These results showed that the NASBA primers were highly specific within the tested set of 10 bacteria, which included closely related species. Toehold switch sensors quantify NASBA products Quantitation is essential for determining changes in bacterial and host gene expression and abundances of microbes. Therefore, we sought to determine if the toehold switch sensors could be used to quantify bacterial RNA in fecal samples. Previous work has shown that NASBA can be quantified using internal standards and fluorescent hybridization probes to detect amplified RNA32,33. In a previous application of the paper-based diagnostic platform, we demonstrated that the toehold switch sensors exhibit a linear response to trigger RNA inputs in the low nanomolar to micromolar range17. A mathematical model of NASBA reactions suggested that femtomolar to picomolar concentrations of RNA could be amplified to within the toehold detectable linear range, and 10-fold concentration differences would be distinguishable if NASBA reactions were stopped prior to completion (Supplementary Fig. 7). Therefore, we sought to identify NASBA reaction conditions that would allow us to quantify a broad range of RNA concentrations using the toehold switch sensors. We in vitro transcribed species-specific mRNAs and used them as standards for the NASBA reactions. We aimed to quantify standards from 3 fM to 30 picomolar (pM). To mimic the complexity of a total RNA sample, we diluted our standards into yeast tRNA (50 ng/μl). NASBA reactions with varied amplification times (30 min–3 h) were carried out on mRNA standards to determine the duration that allowed us to distinguish concentrations that differed by 10-fold (Supplementary Fig. 8). Excessive amplification times or running amplification reactions to completion did not allow for differentiation between standards, and insufficient amplification times did not allow for detection of the lowest (3 fM) standard. Using the optimal amplification time for each mRNA, we assessed the run-to-run variability of NASBA and paper-based toehold reactions. We found that there is run-to-run variation in overall signal measured from the paper-based reactions, but the relative signal between standards remains the same between runs (Fig. 4a). Normalization to a single standard allowed us to define a calibration curve that eliminated the effect of run-to-run variability on RNA quantification (Fig. 4b). Calibration curves were determined for each of the 10 species (Supplementary Fig. 9). These allow for calculation of species-specific mRNA concentrations in an unknown sample by simply running a single concurrent standard. Quantification of NASBA-mediated amplification using toehold switch sensors. a Run-to-run variation in mRNA standards amplified by NASBA and measured by toehold sensors. mRNA standards for the B. thetaiotaomicron species-specific sensor were run in NASBA reactions for 30 min. 
Outputs from NASBA reactions were used to activate toehold switch sensors in paper-based reactions. b Calibration curve for the B.t. species-specific mRNA. Values from each standard in the individual runs in a were normalized to the 300 fM standard for that specific run and averaged across runs. c Quantifying species-specific mRNAs in stool. E. coli or B. fragilis cells were spiked into 150 mg of a commercial stool sample and processed for total RNA. Species-specific mRNAs were quantified using our paper-based platform and RT-qPCR. d Analysis of clinical stool samples. Six clinical stool samples were processed for total RNA and analyzed by our paper-based platform and RT-qPCR. Data and s.d. are shown in Supplementary Figure 11. e Correlation of clinical sample results. Non-zero paper-based concentrations from d were compared to RT-qPCR determined values. Data represent mean values. Paper-based error bars in a, c, and e represent s.d. from nine replicates (three biological replicates (NASBA reactions) × three technical replicates (paper-based reactions)). RT-qPCR error bars in c and e represent s.d. from six replicates (two biological replicates (RT reactions) × three technical replicates (qPCR reactions)) To validate our calibration curves, we sought to compare RNA quantification from human stool samples using our paper-based platform and RT-qPCR. We first assessed our ability to detect target mRNA in a pool of total RNA extracted (RNeasy PowerMicrobiome kit, Qiagen) from commercial human stool (Lee BioSolutions) and compared quantification of mRNA standards in this background to standards in a yeast tRNA background. We detected our species-specific mRNAs in the stool RNA background, but the signal output for any given standard concentration was higher in total stool RNA than in the yeast tRNA background (Supplementary Fig. 10). Therefore, we experimentally corrected each of our calibration curves to account for this difference (Supplementary Fig. 9). We then compared our quantification method to RT-qPCR. We spiked between 50 μl and 1.5 ml of bacterial cells grown to mid-log phase into 150 mg of commercial human stool. These samples were processed for total RNA and quantified using our paper-based platform and RT-qPCR. We found good correlation between these methods, with R2 values of 0.855 and 0.994 for E. coli and B. fragilis, respectively (Fig. 4c). Next, we tested the performance of our quantification method with clinically acquired stool samples (Fig. 4d). In the six clinical samples tested, we detected six of the bacteria in our panel. The concentrations of species-specific mRNAs determined using our platform showed good correlation with RT-qPCR, with an R2 value of 0.766 (Fig. 4e). We had no false-positive results and seven false-negative results using RT-qPCR as the standard (Supplementary Fig. 11). Of the seven false-negative results, six contained fewer than three copies per 50 ng of total RNA (6 attomolar) quantified by RT-qPCR, a value below our limit of detection. Toehold switch sensors can detect human mRNA from stool Next, we sought to demonstrate that our platform could be used to detect mRNAs from human cells. We designed toehold switch sensors and NASBA primers to detect the mRNA of three biomarkers associated with inflammation (calprotectin, CXCL5, and IL-8) and oncostatin M (OSM), a cytokine that has recently been found to predict the efficacy of anti-tumor necrosis factor (TNF)-alpha therapies in IBD patients20.
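To illustrate the single-standard quantification scheme described above, here is a minimal numerical sketch. It is an editorial addition: the calibration values below are invented placeholders rather than the measured data of Fig. 4b, and log-linear interpolation is just one plausible way to implement the look-up against a pre-determined calibration curve.

```python
import numpy as np

# Hypothetical calibration curve: paper-based sensor signal for each standard,
# normalized to the signal of the concurrent 300 fM standard, versus the known
# standard concentration. Placeholder values for illustration only.
standard_conc = np.array([3e-15, 3e-14, 3e-13, 3e-12, 3e-11])  # mol/L (3 fM to 30 pM)
normalized_signal = np.array([0.15, 0.45, 1.0, 1.9, 2.6])      # relative to the 300 fM standard

def quantify(sample_signal, reference_signal):
    """Estimate the target mRNA concentration of an unknown sample.

    sample_signal    -- GFP production rate measured for the unknown sample
    reference_signal -- GFP production rate of the single concurrent 300 fM standard
    Interpolation is done on log10(concentration), matching the 10-fold spacing
    of the standards used to build the calibration curve.
    """
    norm = sample_signal / reference_signal
    log_conc = np.interp(norm, normalized_signal, np.log10(standard_conc))
    return 10.0 ** log_conc

# Example: an unknown whose signal is 0.7x that of the concurrent 300 fM standard
print(f"estimated concentration: {quantify(0.7, 1.0):.2e} M")
```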
To validate our sensors, we performed NASBA and toehold reactions on 50 ng of total RNA from human peripheral leukocytes (Takara Bio 636592) and demonstrated that we could detect each of the four transcripts (Supplementary Fig. 12). We then developed calibration curves for each sensor (Supplementary Fig. 13) and tested the performance of our quantification method with clinically acquired stool samples from patients with IBD (Fig. 5a). We detected each of the four host transcripts in at least two of the clinical samples. Furthermore, the concentrations of human mRNA determined using our platform showed good correlation with RT-qPCR, with an R2 value of 0.912 (Fig. 5b). Detection of host biomarkers of inflammation. a Analysis of clinical stool samples. Four clinical stool samples were processed for total RNA and analyzed by our paper-based platform and RT-qPCR. Data represent mean values. Paper-based error bars represent s.d. from nine replicates (three biological replicates (NASBA reactions) × three technical replicates (paper-based reactions)). RT-qPCR error bars represent s.d. from six replicates (two biological replicates (RT reactions) × three technical replicates (qPCR reactions)). b Correlation of clinical sample results. Non-zero paper-based concentrations from a were compared to RT-qPCR determined values RNA-based detection of C. difficile infection In a final validation of our platform, we sought to demonstrate the advantage of measuring RNA as opposed to DNA in certain clinical applications. CDI is one example where RNA-based detection may be especially useful. CDI causes significant patient morbidity and mortality34, and is responsible for nearly 2.4 million days of inpatient hospital stays at a yearly cost of over $6.4 billion in the United States35. CDI-associated diarrhea and intestinal inflammation are attributed to the direct effects of C. difficile toxins36. As such, current CDI diagnostic tests are focused on detecting the presence of toxigenic C. difficile bacteria or the toxin proteins in patient stool. The traditional gold standard tests for detecting toxigenic C. difficile organisms (toxigenic culture assay) and C. difficile toxin (cell-culture cytotoxicity neutralization assay) are slow, labor-intensive, and technically challenging37. The diagnostics currently in wide-spread use, such as enzyme-linked immunoassays (EIA) for C. difficile toxins and DNA-based qPCR assays for C. difficile toxin genes, offer greatly improved performance characteristics but have their own limitations21. The EIA tests have high clinical specificity, but reports of false-negatives and low sensitivity relative to toxigenic culture21,37 have led to the development of DNA-based qPCR assays for C. difficile toxin genes. This method is extremely sensitive for the presence of toxigenic C. difficile bacteria; however, it cannot distinguish between patients that are carriers with symptoms due to another cause and those with active CDI21. These cases are especially challenging for clinicians, and there is a debate on which testing methodology yields the highest combination of sensitivity and specificity for clinically meaningful CDI21. New ultrasensitive assays to detect C. difficile toxins are in development, but they require highly specialized and expensive laboratory equipment and in some cases have a 60-h turnaround time38. Our paper-based platform has the potential to address these limitations by providing a rapid, easy-to-use method for the diagnosis of active CDI based on the detection of C. 
difficile toxin mRNA (Fig. 6a). Paper-based detection of C. difficile infection. a Schematic of RNA-based CDI detection using a toehold switch sensor to detect toxin B mRNA. b Toxin B mRNA detection in stool samples. Two C. difficile strains (630 and VPI 10463) were grown in two different media (M1—TYG plus cysteine, M2—TY). Cells from each culture were spiked into 150 mg of a commercial stool sample and processed for total RNA. Toxin B mRNA was measured by our paper-based platform and RT-qPCR. Data represent mean values. Paper-based error bars represent s.d. from nine replicates (three biological replicates (NASBA reactions) × three technical replicates (paper-based reactions)). RT-qPCR error bars represent s.d. from six replicates (two biological replicates (RT reactions) × three technical replicates (qPCR reactions)). Toxin B DNA was confirmed in each sample using qPCR (Cq values shown in Supplementary Table 11). We designed a toehold switch sensor and NASBA primers to detect a conserved region of the C. difficile toxin B gene, which is essential for toxigenic effect and is the target of most commercial DNA-based qPCR assays for toxigenic C. difficile39. To validate our sensor, we collected total RNA from monocultures of two different toxigenic C. difficile strains: 630, a low-toxin-producing strain, and VPI10463 (VPI), a high-toxin-producing strain40. We performed NASBA and toehold reactions on 25 ng of total RNA from each strain and demonstrated that we could detect toxin mRNA from both C. difficile strains (Supplementary Fig. 14). Next, we grew the two strains under conditions that suppress (mid-log phase in media 1: TYG plus cysteine) or induce (stationary phase in media 2: TY) toxin production to mimic situations where patients are carriers of toxigenic C. difficile that produce very low levels of toxin and those with active CDI resulting from high toxin production, respectively. We then spiked the two strains grown in both conditions into commercial human stool and processed the samples for total RNA as described previously. Using our paper-based platform, we detected toxin mRNA only in the sample containing the VPI strain grown in media 2 (Fig. 6b). Analysis of the samples using RT-qPCR indicated that there was toxin mRNA in the 630 media 2 and VPI media 1 samples, but at very low levels (1 ± 4 and 1 ± 6 copies or 2 attomolar, respectively). Furthermore, all four samples were positive for toxin DNA (Supplementary Table 12). Our results therefore demonstrate a potential advantage of using toxin mRNA to diagnose CDI. All four samples would give a positive result in a DNA-based qPCR test. However, by detecting toxin mRNA using our paper-based platform, it may be possible to rapidly and readily distinguish between carriers of toxigenic C. difficile expressing low levels of toxins and those patients with active CDI expressing significantly higher levels of toxins. Here we presented a synthetic biology platform for affordable, on-demand analysis of microbiome samples that can be employed in research, clinical, and low-resource settings. We demonstrated detection of species-specific mRNAs from 10 different bacteria that have been associated with a wide variety of disease processes. To track abundance of target RNAs, we devised a method to quantify mRNA using our toehold sensors and validated our method using RT-qPCR on clinical stool samples.
To highlight the ability to probe both host and bacterial transcripts using a single platform, we validated sensors for clinically relevant human mRNAs using stool samples from IBD patients. We also demonstrated the potential advantage and clinical utility of detecting toxin mRNA in the case of CDI. As part of this study, we developed a simple method that allows for the semi-quantitative determination of mRNA concentration from human stool samples using paper-based toehold switch sensors. By running a single standard alongside test samples and referencing a standard curve, we can determine the mRNA concentration within a sample and account for variation in reagent lots with clear separation of samples that differ in concentration by 10-fold (Fig. 4a, b). Our method is analogous to those used for NASBA-based quantification with an internal control spiked into each sample and a fluorescent hybridization probe for detection32,33. Furthermore, quantification of mRNAs in stool samples using our method correlates well with RT-qPCR (Fig. 4c–e, Fig. 5b). Notably, mRNA concentrations correlate with bacterial abundance (Supplementary Fig. 15), though this correlation may fluctuate with growth conditions and will likely vary depending on the specific target. Our approach is easily adaptable to study any cellular process that results in differences in gene expression, including changes in specific biochemical pathways or cell metabolism. To illustrate the potential utility of assessing specific bacterial pathways, we selected the model of toxin production in CDI. To approximate the clinical scenarios of active CDI versus inactive colonization, we demonstrated that we could distinguish between toxigenic C. difficile that expressed high amounts of toxin and no toxin (Fig. 6), which would otherwise be indistinguishable via standard DNA-based qPCR. Recent studies have shown that fecal mRNA levels of the inflammatory markers CXCL5 and IL-8 are highly correlated with clinical outcomes and perform with significantly better clinical sensitivity than other available tests for identifying CDI41,42. Because our method is equally capable of quantifying microbial and host RNAs and is readily multiplexed, a combined diagnostic testing for C. difficile toxin, CXCL5, and IL-8 mRNA may provide improved sensitivity and specificity for detecting CDI, though further investigation using clinical samples is warranted to help address this important problem. In addition to the potential utility of our platform in the clinical diagnosis of CDI, our ability to assess both host and microbial transcripts in parallel may also be useful in management and treatment selection for IBD. The interaction between the host and resident microbiome has been shown to affect many important biological processes in health and disease, including IBD12. Recent work has demonstrated that a microbial signature can be predictive of clinical remission after treatment with vedolizumab, an anti-integrin IBD medication43. For host transcripts, calprotectin is a well-characterized biomarker routinely used in clinical practice to assess gut mucosal inflammation19; CXCL5 and IL-8 are both elevated in intestinal biopsies from patients with IBD44,45; additionally, OSM levels in intestinal tissues have recently been strongly correlated with a lack of response to anti-TNF agents20, a widely used class of medications to treat IBD. 
Although highly efficacious, roughly 30–40% of patients will not respond to the anti-TNF medication class, and there was previously no reliable way of predicting the likelihood of response. While the above study was based on intestinal biopsies, we demonstrated we could detect OSM mRNA from IBD patient stool samples. Although the low number of samples precludes any conclusions on clinical utility, our results are consistent with a connection between higher OSM levels and lack of responsiveness to anti-TNF treatment. For example, stool sample S6 with no detectable OSM mRNA was collected from a patient who had successfully responded to anti-TNF treatment. Furthermore, sample S7, which showed intermediate levels of OSM mRNA, was collected from a patient who had failed treatment with two different anti-TNF agents. Our platform provides an easy to use, low-cost method for quantifying microbial and host RNAs from complex biological samples. Its flexibility allows for reactions to be freeze-dried for use outside of a laboratory setting. All reactions can also be run fresh, as they were done here, for researchers that do not have access to a lyophilizer. Specialized lab equipment is not required to develop our sensors or run the reactions. Since our toehold switch sensors can be used to regulate the production of any protein output, reactions may be monitored on a standard microtiter plate reader, if available, or an affordable, easy-to-build, portable electronic reader that quantifies change in absorbance from LacZ production17. To accommodate incubation temperatures required for NASBA (95 °C, 41 °C) and paper-based (37 °C) reactions, existing laboratory incubators or thermocyclers may be used, or affordable incubators can be built for use in low-resource settings. Altogether, the low-cost and portable nature of our platform makes it uniquely suited for use in resource-limited environments. The major advantages of our platform over RT-qPCR are cost and the ability to analyze multiple RNA transcripts at once. Using our platform, we can quantify mRNAs in 3–5 h at a cost of approximately $16 per transcript using commercially available kits as reagents (accounting for triplicate reactions and mRNA standard). This can be reduced to under $2 per transcript by using cell-free extracts prepared in-house, which are suitable for our platform (Supplementary Fig. 16), and individually sourcing NASBA reagents (Supplementary Fig. 17). The same analysis using RT-qPCR also takes 3–5 h, but costs approximately $140 per transcript. Our platform only requires a single mRNA standard for quantification while RT-qPCR generally requires a minimum of five standards46. Our limit of detection in total stool RNA ranges between 30 aM and 3 fM, depending on the specific toehold switch sensor. While this does not match the sensitivity of RT-qPCR (3 aM)47, we believe there are applications where our current limits of detection are sufficient. Future optimization of toehold switch sensor design and NASBA reaction conditions may continue to improve this sensitivity. In a comparison of our platform to next-generation sequencing we offer fast turn-around time, simple data analysis, and on-demand assessment of samples with no change in cost per sample. Average next-generation sequencing runs at core facilities range from $700–2000 per lane (Illumina), depending on machine and run type, and can take anywhere from 4 to 72 h48 to complete. 
The sequencing cost per sample is typically reduced by running up to 96 samples per lane; however, this sample batching prevents on-demand analysis. Additionally, next-generation sequencing data sets require extensive computational power and training to process, analyze, and interpret. Our platform's data analysis can be performed quickly using a simple spreadsheet or automated program. Our paper-based platform is one of several new synthetic biology platforms that can be used for nucleic acid detection. Recent advances using the CRISPR associated enzymes Cas12a and Cas13 along with recombinase polymerase amplification (RPA)49 yielded sensitive detection of nucleic acids with the ability to discriminate between single nucleotide differences50,51,52. While detection of single nucleotide polymorphisms (SNPs) is important, for example in tracking the epidemiology of viruses, the ability of the toehold switch sensors to tolerate SNPs enables the use of a single sensor to detect multiple strains. Although RT-RPA can be used to amplify RNA, as with RT-qPCR it cannot specifically amplify RNA without thorough DNase treatment to remove genomic DNA. Since NASBA uses reverse transcription to create DNA with a T7 promoter to then transcribe that template into RNA, it is highly resistant to DNA contamination53. Our method and the CRISPR enzyme-based diagnostics, SHERLOCK50,51 and DETECTR52, could be complementary tools, the selection of which will depend on the sample type (DNA or RNA), and whether the detection of single nucleotide differences is desired. Our method for detecting and quantifying RNA sequences could be applied to a broad range of studies including samples from other human anatomical sites, and our approach is easily adaptable to a wide range of biological targets, including viruses, fungi, and eukaryotic nucleic acids from either stool or tissue samples. Furthermore, with continued optimization of sample processing, our method could be adapted for point-of-care use. Such a diagnostic platform could have many applications, including pre-screening enrollees in the field for prospective trials of therapeutic manipulations of the microbiome, at-home monitoring of research participants, and eventually for tracking changes in patient disease activity. Our easy-to-use synthetic biology platform has the potential to meet both research and clinical point-of-care needs. Toehold sensor design and cloning Toehold switch sensors were designed with NUPACK24 using the series B toehold switch design from Pardee et al.17 The script can be found in Supplementary Note 1. Toehold switch sensor designs were checked for premature stop codons and cloned into plasmids with the GFPmut3b gene using PCR amplification and blunt-end ligation. Linear toehold switch templates were generated by amplifying from these plasmids by PCR and then purified using a MinElute PCR Purification kit (Qiagen, 28004), according to manufacturer's protocol. Sequences for all toehold switch sensors can be found in Supplementary Tables 2–3. Trigger RNA and mRNA standard synthesis DNA encoding trigger RNAs or mRNA standard sequences were ordered from Integrated DNA Technologies and amplified by PCR to create a linear template with a T7 promoter. RNA was transcribed from the DNA templates using a HiScribe T7 High Yield RNA Synthesis Kit, according to the manufacturer's protocol (New England Biolabs, E2040). RNA was then purified using a Zymo RNA Clean and Concentrator kit (R1018), according to the manufacturer's protocol. 
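As an aside to the premature-stop-codon check mentioned in the sensor design step above, a small illustrative Python helper (not the authors' script, which is in Supplementary Note 1) might look like the following; the example sequence and the assumption that the input starts at the switch's start codon are for demonstration only.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def has_premature_stop(coding_dna: str) -> bool:
    # Return True if an in-frame stop codon occurs before the final codon.
    # coding_dna is assumed to start at the AUG of the switch's reading frame.
    seq = coding_dna.upper().replace("U", "T")
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    return any(codon in STOP_CODONS for codon in codons[:-1])

# Example: a short hypothetical ORF with a premature TGA in frame
print(has_premature_stop("ATGGCTTGACGTGGTTAA"))  # True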
Following purification, DNA template was degraded by DNase digestion using the TURBO DNA-free DNase kit (ThermoFisher, AM1907) for 1 h according to the manufacturer's protocol. Paper-based, cell-free reactions Cell-free reactions were performed using the PURExpress In Vitro Protein Synthesis Kit (New England Biolabs, E6800L). The cell-free reactions consisted of NEB Solution A (40%), NEB Solution B (30%), RNase inhibitor (0.5%; Roche, 3335402001), linear DNA constructs encoding toehold switch sensors (1.875 nM), and trigger RNA for a total of 5.5 µl. Paper disks (Whatman, 1442-042 blocked overnight in 5% BSA) were punched out using a 2 mm biopsy (Integra, 33-31-P/25) and placed in a 384-well plate (Corning 3544). 1.4 µl of the cell-free reaction mixture was applied to paper disks in triplicate. GFP expression (485 nm excitation, 520 nm emission) was monitored on a plate reader (Molecular Devices SpectraMax M5) every 5 min for 2 h at 37 °C. Initial sensor screen Sensor candidate designs from NUPACK were tested in paper-based reactions containing 1.875 nM of linear sensor DNA and 2 µM trigger RNA (36 nucleotides). GFP production rates were calculated (see Data analysis and RNA quantification) for reactions with sensor alone and sensor plus trigger. To select the best sensor, an activation ratio was calculated for each sensor candidate by dividing the sensor plus trigger production rate by the sensor alone production rate. Sensors were chosen based on the highest activation ratio and lowest sensor alone production rate. A minimum activation ratio of 5-fold is necessary to achieve desired sensitivity. NASBA Initial denaturation of total RNA consisted of a 2-min incubation at 95 °C followed by a 10-min incubation at 41 °C of 1.0 µl sample input, 1.675 µl reaction buffer (Life Sciences Advanced Technologies, NECB-24), 0.825 µL nucleotide mix (Life Sciences Advanced Technologies, NECN-24), 0.2 µl of 6.25 µM primers, 0.03 µl water, and 0.025 µl of RNase inhibitor (Roche) per 3.75 µl reaction. Afterwards, 1.25 µl of enzyme mix (Life Sciences NEC-1-24) was added to each reaction and the resulting 5.0 µl NASBA reactions were incubated for 30–180 min at 41 °C. Then 1.0 µl of NASBA product was added to the cell-free reaction mixture for a total of 5.5 µl. Final concentrations of buffer components in each NASBA reaction: 13.2 mM MgCl2 (VWR 97062-848), 75 mM KCl (VWR BDH7296-0), 10 mM DTT (Sigma GE17-1318-01), 40 mM Tris-HCl pH 8.5 (VWR RLMB-005), 15% DMSO, 2 mM each ATP, UTP, and CTP, 1.5 mM GTP, 0.5 mM ITP, 1 mM each dNTP (New England Biolabs, N0447L), 0.25 µM each primer. Enzyme mix: 5 U/ml RNaseH (New England Biolabs M0297L), 1000 U/ml reverse transcriptase (New England Biolabs, M0368L), 2500 U/ml T7 RNA polymerase (New England Biolabs, M0251L), 43.75 mg/ml BSA. Initial denaturation of sample was performed as above, after which 1.25 µl enzyme mix was added to each reaction. Data analysis and RNA quantification Paper-based reactions were analyzed by calculating GFP production rates for each reaction condition. GFP production rates were calculated by first subtracting the average background fluorescence measured from triplicate paper-based reactions that did not contain sensor DNA or trigger RNA. Then, the minimum value of each individual reaction was adjusted to zero by subtracting the average of its first three time points (0, 5, and 10 min) from each time point. 
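A minimal sketch of the screening and pre-processing arithmetic just described, assuming simple NumPy arrays of fluorescence readings; the helper names and example rates are illustrative, not the analysis code used in the study. (The curve-fitting step is described next.)

import numpy as np

def zero_adjust(raw_rfu, background_rfu):
    # Subtract the average background (no-sensor, no-trigger) trace, then
    # zero-adjust using the first three time points (0, 5, and 10 min).
    corrected = np.asarray(raw_rfu, float) - np.asarray(background_rfu, float)
    return corrected - corrected[:3].mean()

def activation_ratio(rate_sensor_plus_trigger, rate_sensor_alone):
    # Score a sensor candidate by (sensor + trigger rate) / (sensor alone rate)
    return rate_sensor_plus_trigger / rate_sensor_alone

# A candidate passing the 5-fold screening threshold mentioned above:
print(activation_ratio(12.0, 2.0) >= 5)  # True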
The zero-adjusted data were then fit to the equation: \(\mathrm{RFU}(\text{zero adjusted}) = \frac{a}{e^{-bt} + c}\). To compare data from different samples, the slope of the fitted equation was taken at t = 50 min, resulting in values of RFU/min. The GFP production rates were then averaged over the replicates for each reaction condition. In quantification experiments, the GFP production rate for each sample was normalized to the GFP production rate for a single mRNA standard (for standard concentrations see Supplementary Fig. 9). The normalized GFP production rate for reactions with sensor alone was then subtracted from each sample. RNA concentration was determined using the equation: Normalized GFP production = A·ln(concentration) + B. Values for A and B for each sensor can be found in Supplementary Fig. 9.

Bacterial culturing and RNA processing

All anaerobic bacteria were grown in an anaerobic chamber at 37 °C. Bifidobacterium adolescentis (ATCC 15703), Bifidobacterium breve (ATCC 15700), Bifidobacterium longum subsp longum (ATCC 15707), Bacteroides fragilis (ATCC 25285), Bacteroides thetaiotaomicron (ATCC 29148), Clostridium difficile (ATCC BAA-1382), and Eubacterium rectale (ATCC 33656) were obtained from ATCC. Faecalibacterium prausnitzii A2–165 (DSM 17677) and Roseburia hominis (DSM 16839) were obtained from DSMZ. Freeze-dried samples were rehydrated with their respective growth media and grown for 24–48 h in liquid culture on a shaker at 200 rpm. For experiments testing RNA isolated from pure cultures, 12 ml of bacterial culture was diluted 1:2 into RNAProtect before removing from the anaerobic chamber for RNA extractions. The cultures were lysed at room temperature using 200 µl of 15 mg/ml of lysozyme in TE buffer and 20 µl of proteinase K (Qiagen). RNA was then extracted using the RNeasy Mini kit (Qiagen 74104), according to the manufacturer's instructions. RNA samples were then DNase digested using TURBO DNA-free DNase kit (ThermoFisher, AM1907) for one hour. E. coli (MG1655) was grown in Luria-Bertani (LB) medium (Difco). B. adolescentis was grown in Bifidobacterium medium (prepared according to DSMZ 58: Bifidobacterium medium). B. breve, B. fragilis, and B. longum subsp longum were grown in brain heart infusion-supplemented (BHIS) medium (prepared according to ATCC medium: 1293). B. thetaiotaomicron, E. rectale, and R. hominis were grown in cooked meat medium (CMM) purchased from Hardy Diagnostics. F. prausnitzii was grown in CMM with an additional 1% glucose. C. difficile was grown in BHIS for the species-specific RNA testing, and grown in either TY medium (3% tryptone, 2% yeast extract, and 0.1% sodium thioglycolate), or TY medium with 2% glucose and 10 mM cysteine for toxin RNA testing.

RNA purification from stool samples

Commercial stool specimens were purchased from Lee Biosolutions and provided as frozen specimens. Clinical stool samples were provided by Dr. Ashwin Ananthakrishnan as anonymized specimens from the Prospective Registry in IBD Study at Massachusetts General Hospital. Approval was provided by the Partners Healthcare Human Subjects Research Committee. Informed consent was obtained from all subjects. Both commercial and clinical stool samples were stored at −80 °C and processed using the RNeasy PowerMicrobiome Kit (MoBio, now Qiagen, 26000), which was selected for its ability to isolate high quality RNA from stool54.
Each frozen stool was homogenized using a mortar and pestle cooled with liquid nitrogen55, and 150 mg of each sample was loaded into each glass bead tube. Mechanical lysis was performed using a MoBio vortex adapter and a Vortex Genie 2 (Scientific Industries Inc) at maximum speed for 10 min. The manufacturer's protocol was followed for RNA extraction with optional on-column DNase digestion included. Resulting RNA samples were then further DNase digested using TURBO DNA-free DNase kit (ThermoFisher, AM1907) for one hour. Bacterial spike-in experiments E. coli and B. fragilis were grown to mid-log phase and spiked into a commercial stool sample (Lee Biosolutions) before RNA extraction. Bacteria cultures ranging from 10 µl to 1.5 ml were spun down before being re-suspended in PM1 buffer and added to 150 mg of stool. C. difficile was grown to stationary phase in TY medium and TY medium supplemented with 2% glucose and 10 mM cysteine. Two ml of stationary C. difficile culture was spun down and re-suspended in PM1 buffer and added to 150 mg of stool. All samples were processed with the RNeasy PowerMicrobiome kit, according to the manufacturer's instructions, with an extended 30-min lysis step for C. difficile spike-ins. RNA samples were then split into two samples, one that was DNase digested with the TURBO DNA-free DNase kit (ThermoFisher, AM1907) for one hour and one that did not receive DNase treatment so that it could be used for DNA based qPCR. Computational pipeline for species-specific RNA sequences Our computational pipeline employs components from previously developed phylogenetic assignment tools, including Metaphlan and Metaphlan229. These programs use multiple bioinformatics approaches to reduce each bacterial species to a "bag of genes" and identify the set of genes or gene parts that is specifically associated with a target species or clade and not associated with any others. We extracted the Metaphlan2 markers for a given target species and used BLAST30 alignments against available genomes for our target species to ensure that the markers were present. We then assessed these preliminary markers for expression in the human fecal microbiome by using BLAST alignments against a human stool transcriptome database that we created from repositories of publically available adult human stool meta-transcriptome sequencing reads. Keeping only the markers that are expressed in human stool, we again tested for specificity by performing BLAST alignments against a pan-bacterial database that we created from all publically available reference and draft bacterial genomes. We selected the most specific markers with the highest expression in human stool and created toehold switch sensors to target these RNA sequences. In the case of C. difficile, expression was extremely low for all Metaphlan2 markers from the standard human stool transcriptome database. This was not unexpected since this species is reported to be very lowly abundant in normal healthy populations. To develop sensors for this species, we instead screened for expression using transcriptomic data from C. difficile cultures in various conditions available in public repositories. RT-qPCR validation RNA from stool samples and in vitro transcription was extracted, purified, and DNase digested as described above. 
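Looping back to the marker-selection pipeline described above, the filtering logic can be sketched in Python as below. This is illustrative only: the file names, thresholds, use of BLAST tabular output (-outfmt 6), and the assumption that the pan-bacterial search results have been pre-filtered to exclude the target species are all assumptions for demonstration, not the authors' pipeline code.

import csv

def hits(blast_tab_path, min_identity=95.0, max_evalue=1e-10):
    # Return the set of query (marker) IDs with at least one strong hit
    # in a BLAST -outfmt 6 table (pident is column 3, evalue is column 11).
    found = set()
    with open(blast_tab_path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            if float(row[2]) >= min_identity and float(row[10]) <= max_evalue:
                found.add(row[0])
    return found

markers = set(line.strip() for line in open("candidate_marker_ids.txt"))
in_target = hits("markers_vs_target_genomes.tsv")
expressed = hits("markers_vs_stool_transcriptome.tsv", min_identity=90.0)
off_target = hits("markers_vs_panbacterial_db.tsv")  # assumed pre-filtered to non-target species
selected = (markers & in_target & expressed) - off_target
print(sorted(selected))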
In vitro transcribed RNA diluted in 150 ng/µl yeast tRNA (ThermoFisher, AM7119) was used to generate standards for absolute quantitation based on calculations designed to incorporate 1 to 10^7 RNA copies per reverse transcription reaction. cDNA synthesis from stool samples was performed with 300 ng input RNA per reaction using Superscript III (ThermoFisher, 18080-400), according to the manufacturer's protocol, using gene-specific primers (reverse qPCR primers as indicated in Supplementary Table 8) at a final concentration of 2 µM and total volume of 20 µl. Quantitative PCR reactions were prepared in triplicate using 2 µl of the RT reactions, LightCycler 480 Probes Master Mix (Roche, 04707494001), primers (final concentration 5 µM) and hydrolysis probe (final concentration 1 µM) in a reaction volume of 20 µl. The qPCR reactions were performed on a LightCycler 480 96-well machine using the following program: (i) 95 °C for 10 min, (ii) 95 °C for 10 s, (iii) 48–60 °C for 50 s depending on primer Tm, (iv) 72 °C for 1 s for fluorescence measurement, (v) go to step ii and repeat 44 cycles, and (vi) 40 °C for 10 s. Absolute quantitation was performed using LightCycler 96 software version 1.1.0.1320 (Roche). When there were discordant results between triplicate amplification repeats, non-amplified reaction Cqs were set to 45 (equal to the total number of amplification cycles) prior to incorporation in copy number calculations. Dilution series reactions were performed on RNA extracted from several stool samples to demonstrate the absence of inhibition for the RT and qPCR reactions. Primers and probes used for RT-qPCR analyses are listed in Supplementary Table 8. Hydrolysis probes had a 5' 6-FAM dye, internal ZEN quencher after the 9th base, and 3' Iowa Black quencher (Integrated DNA Technologies). For Oncostatin M, we used the TaqMan RNA-to-Ct 1-Step Kit (ThermoFisher 4392653) with the commercially available probe set Hs00968300_g1 (ThermoFisher 4331182).

In-house cell-free extract preparation

Cell extract was prepared as described by Kwon and Jewett56. E. coli BL21(DE3)ΔlacZ (gift of Takane Katayama) were grown in 400 ml of LB at 37 °C at 250 rpm. Cells were harvested in mid-exponential growth phase (OD600 ~ 0.6), and cell pellets were washed three times with ice cold Buffer A containing 10 mM Tris-Acetate pH 8.2, 14 mM magnesium acetate, 60 mM potassium glutamate, and 2 mM DTT, and flash frozen and stored at −80 °C. Cell pellets were thawed and resuspended in 1 ml of Buffer A per 1 g of wet cells and sonicated in an ice-water bath. Total sonication energy to lyse cells was determined using the sonication energy equation for BL21 Star™ (DE3), [Energy] = [Volume (µL) − 33.6] × 1.8^−1. A Q125 Sonicator (Qsonica) with 3.174 mm diameter probe at a frequency of 20 kHz was used for sonication. A 50% amplitude in 10 s on/off intervals was applied until the required input energy was met. Lysate was then centrifuged at 12,000 rcf for 10 min at 4 °C, and the supernatant was incubated at 37 °C at 300 rpm for 1 h. The supernatant was centrifuged again at 12,000 rcf for 10 min at 4 °C, and flash frozen and stored at −80 °C until use.
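Before the cell-free reaction recipe that follows, here is a rough Python sketch of the absolute quantitation step from the RT-qPCR section above: fit a standard curve of Cq versus log10(copies) across the 1 to 10^7 copy standards, treat non-amplified replicates as Cq = 45 as in the text, and interpolate unknowns. The Cq values below are invented for illustration.

import numpy as np

std_copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
std_cq     = np.array([36.5, 33.2, 29.8, 26.4, 23.1, 19.7, 16.3])  # hypothetical

slope, intercept = np.polyfit(np.log10(std_copies), std_cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1  # ~1.0 corresponds to 100% PCR efficiency

def copies_from_cq(cq_replicates, no_amplification_cq=45.0):
    # Replace non-amplified replicates (None) with Cq = 45, average, interpolate
    cqs = np.array([c if c is not None else no_amplification_cq
                    for c in cq_replicates])
    return 10 ** ((cqs.mean() - intercept) / slope)

print(round(efficiency, 2), copies_from_cq([24.0, 24.3, None]))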
Using a previously published cell-free reaction protocol57, reaction mixtures were composed of 26.6 % (v/v) of in-house lysate, 1.5 mM each amino acid except leucine (1.25 mM), 1 mM DTT, 50 mM HEPES (pH 8.0), 1.5 mM ATP and GTP, 0.9 mM CTP and UTP, 0.2 mg/mL tRNA, 0.26 mM CoA, 0.33 mM NAD, 0.75 mM cAMP, 0.068 mM folinic acid, 1 mM spermidine, 30 mM 3-PGA, 2% PEG-8000, 0.5 % (v/v) Protector RNase Inhibitor (Roche), 2 nM LacZ sensor plasmid DNA, 2 uM RNA trigger, and 0.6 mg/mL chlorophenol red-ß-D-galactopyranoside (CPRG, Sigma Aldrich, 59767) for lacZ sensor. Optimal potassium glutamate (40–140 mM) and magnesium glutamate (2–8 mM) concentrations were determined for lacZ reporter product. Reactions were first assembled on ice without CPRG, incubated at 37 °C for 30 min, chilled on ice for 5 min, and then CPRG was added to reaction. 1.4 µl of reaction mixture was then applied to pre-blocked 5% BSA 2 mm paper disks, placed in a black, clear bottom 384-well plate (Corning, 3544) and incubated at 37 °C for 1.5 h for the detection of lacZ expression. All code used in this work is available in Supplementary Note 1 and 2. All toehold switch sensors from this work have been deposited at AddGene. AddGene #110696-110717, 111907-111909. All other data supporting the findings of this study are available within the article and its Supplementary Information files, or are available from the authors upon request. This Article was originally published without the accompanying Peer Review File. This file is now available in the HTML version of the Article; the PDF was correct from the time of publication. Rooks, M. G. & Garrett, W. S. Gut microbiota, metabolites and host immunity. Nat. Rev. Immunol. 16, 341–352 (2016). Article PubMed PubMed Central CAS Google Scholar Pfeiffer, J. K. & Virgin, H. W. Transkingdom control of viral infection and immunity in the mammalian intestine. Science 351, aad5872 (2016). Article PubMed CAS Google Scholar van Nood, E. et al. Duodenal infusion of donor feces for recurrent Clostridium difficile. N. Engl. J. Med. 368, 407–415 (2013). Koeth, R. A. et al. Intestinal microbiota metabolism of l-carnitine, a nutrient in red meat, promotes atherosclerosis. Nat. Med. 19, 576–585 (2013). ADS Article PubMed PubMed Central CAS Google Scholar Haiser, H. J. et al. Predicting and manipulating cardiac drug inactivation by the human gut bacterium Eggerthella lenta. Science 341, 295–298 (2013). Huttenhower, C., Kostic, A. D. & Xavier, R. J. Inflammatory bowel disease as a model for translating the microbiome. Immunity 40, 843–854 (2014). Gevers, D. et al. The treatment-naive microbiome in new-onset Crohn's disease. Cell Host. Microbe 15, 382–392 (2014). Subramanian, S. et al. Persistent gut microbiota immaturity in malnourished Bangladeshi children. Nature 510, 417–421 (2014). Blanton, L. V. et al. Gut bacteria that prevent growth impairments transmitted by microbiota from malnourished children. Science 351, aad3311 (2016). Sivan, A. et al. Commensal Bifidobacterium promotes antitumor immunity and facilitates anti – PD-L1 efficacy. Science 350, 1084–1089 (2015). Vétizou, M. et al. Anticancer immunotherapy by CTLA-4 blockade relies on the gut microbiota. Science 350, 1079–1084 (2015). Wlodarska, M., Kostic, A. D. & Xavier, R. J. An integrative view of microbiome-host interactions in inflammatory bowel diseases. Cell Host. Microbe 17, 577–591 (2015). Ilott, N. E. et al. Defining the microbial transcriptional response to colitis through integrated host and microbiome profiling. ISME J. 
10, 2389–2404 (2016). Luca, F., Kupfer, S. S., Knights, D., Khoruts, A. & Blekhman, R. Functional genomics of host–microbiome interactions in humans. Trends Genet. 34, 30–40 (2018). Gilbert, J. A. et al. Microbiome-wide association studies link dynamic microbial consortia to disease. Nature 535, 94–104 (2016). ADS Article PubMed CAS Google Scholar Pardee, K. et al. Paper-based synthetic gene networks. Cell 159, 940–954 (2014). Pardee, K. et al. Rapid, low-cost detection of zika virus using programmable biomolecular components. Cell 165, 1255–1266 (2016). Green, A. A., Silver, P. A., Collins, J. J. & Yin, P. Toehold switches: de-novo-designed regulators of gene expression. Cell 159, 925–939 (2014). Sands, B. E. Biomarkers of inflammation in inflammatory bowel disease. Gastroenterology 149, 1275–1285.e2 (2015). West, N. R. et al. Oncostatin M drives intestinal inflammation and predicts response to tumor necrosis factor–neutralizing therapy in patients with inflammatory bowel disease. Nat. Med. 23, 579 (2017). Fang, F. C., Polage, C. R. & Wilcox, M. H. Point-counterpoint: what is the optimal approach for detection of clostridium difficile infection? J. Clin. Microbiol. 55, 670–680 (2017). Walters, W. A., Xu, Z. & Knight, R. Meta-analyses of human gut microbes associated with obesity and IBD. FEBS Lett. 588, 4223–4233 (2014). Korem, T. et al. Growth dynamics of gut microbiota in health and disease inferred from single metagenomic samples. Science 349, 1101–1106 (2015). Zadeh, J. N. et al. NUPACK: Analysis and design of nucleic acid systems. J. Comput. Chem. 32, 170–173 (2011). Cormack, B. P., Valdivia, R. H. & Falkow, S. FACS-optimized mutants of the green fluorescent protein (GFP). Gene 173, 33–38 (1996). Guatelli, J. C. et al. Isothermal, in vitro amplification of nucleic acids by a multienzyme reaction modeled after retroviral replication. Proc. Natl Acad. Sci. USA 87, 1874–1878 (1990). Chakravorty, S., Helb, D., Burday, M., Connell, N. & Alland, D. A detailed analysis of 16S ribosomal RNA gene segments for the diagnosis of pathogenic bacteria. J. Microbiol. Methods 69, 330–339 (2007). McGinnis, J. L. et al. In-cell SHAPE reveals that free 30S ribosome subunits are in the inactive state. Proc. Natl Acad. Sci. USA 112, 2425–2430 (2015). Segata, N. et al. Metagenomic microbial community profiling using unique clade-specific marker genes. Nat. Methods 9, 811–814 (2012). Altschul, S. F., Gish, W., Miller, W., Myers, E. W. & Lipman, D. J. Basic local alignment search tool. J. Mol. Biol. 215, 403–410 (1990). Bustin, S. A. et al. The MIQE guidelines: minimum information for publication of quantitative real-time PCR experiments. Clin. Chem. 55, 611–622 (2009). Patterson, S. S., Casper, E. T., Garcia-Rubio, L., Smith, M. C. & Paul, J. H. I. Increased precision of microbial RNA quantification using NASBA with an internal control. J. Microbiol. Methods 60, 343–352 (2004). Sidoti, F. et al. Development of a quantitative real-time nucleic acid sequence-based amplification assay with an internal control using molecular beacon probes for selective and sensitive detection of human Rhinovirus serotypes. Mol. Biotechnol. 50, 221–228 (2012). DePestel, D. D. & Aronoff, D. M. Epidemiology of Clostridium difficile infection. J. Pharm. Pract. 26, 464–475 (2013). Zhang, S. et al. Cost of hospital management of Clostridium difficile infection in United States—a meta-analysis and modelling study. BMC Infect. Dis. 16, 447 (2016). Ryder, A. B. et al. 
Assessment of Clostridium difficile infections by quantitative detection of tcdB toxin by use of a real-time cell analysis system. J. Clin. Microbiol. 48, 4129–4134 (2010). Kociolek, L. K. Strategies for optimizing the diagnostic predictive value of Clostridium difficile molecular diagnostics. J. Clin. Microbiol. 55, 1244–1248 (2017). Pollock, N. R. Ultrasensitive detection and quantification of toxins for optimized diagnosis of Clostridium difficile infection. J. Clin. Microbiol. 54, 259–264 (2016). Cohen, S. H. et al. Clinical practice guidelines for Clostridium difficile infection in adults: 2010 Update by the Society for Healthcare Epidemiology of America (SHEA) and the Infectious Diseases Society of America (IDSA). Infect. Control. Hosp. Epidemiol. 31, 431–455 (2010). Theriot, C. M. et al. Cefoperazone-treated mice as an experimental platform to assess differential virulence of Clostridium difficile strains. Gut Microbes 2, 326–334 (2011). El Feghaly, R. E., Stauber, J. L., Tarr, P. I. & Haslam, D. B. Intestinal inflammatory biomarkers and outcome in pediatric Clostridium difficile infections. J. Pediatr. 163, 1697–1704.e2 (2013). El Feghaly, R. E. et al. Markers of Intestinal Inflammation, not bacterial burden, correlate with clinical outcomes in Clostridium difficile Infection. Clin. Infect. Dis. 56, 1713–1721 (2013). Ananthakrishnan, A. N. et al. Gut microbiome function predicts response to anti-integrin biologic therapy in inflammatory bowel diseases. Cell Host. Microbe 21, 603–610.e3 (2017). Holgersen, K. et al. High-resolution gene expression profiling using RNA sequencing in patients with inflammatory bowel disease and in mouse models of colitis. J. Crohn's Colitis 9, 492–506 (2015). Uguccioni, M. et al. Increased expression of IP-10, IL-8, MCP-1, and MCP-3 in ulcerative colitis. Am. J. Pathol. 155, 331–336 (1999). Svec, D., Tichopad, A., Novosadova, V., Pfaffl, M. W. & Kubista, M. How good is a PCR efficiency estimate: Recommendations for precise and robust qPCR efficiency assessments. Biomol. Detect. Quantif. 3, 9–16 (2015). Matsuda, K. et al. Sensitive quantification of Clostridium difficile cells by reverse transcription-quantitative PCR targeting rRNA molecules. Appl. Environ. Microbiol. 78, 5111–5118 (2012). Deurenberg, R. H. et al. Application of next generation sequencing in clinical microbiology and infection prevention. J. Biotechnol. 243, 16–24 (2017). Piepenburg, O., Williams, C. H., Stemple, D. L. & Armes, N. A. DNA Detection Using Recombination Proteins. PLoS Biol. 4, e204 (2006). Gootenberg, J. S. et al. Nucleic acid detection with CRISPR-Cas13a/C2c2. Science 356, 438–442 (2017). Gootenberg, J. S. et al. Multiplexed and portable nucleic acid detection platform with Cas13, Cas12a, and Csm6. Science 360, 439–444 (2018). Chen, J. S. et al. CRISPR-Cas12a target binding unleashes indiscriminate single-stranded DNase activity. Science 360, 436–439 (2018). Deiman, B., Aarle, P., Van & Sillekens, P. Characteristics and applications of nucleic acid sequence-based amplification (NASBA). Mol. Biotechnol. 20, 163–179 (2002). Reck, M. et al. Stool metatranscriptomics: a technical guideline for mRNA stabilisation and isolation. BMC Genom. 16, 494 (2015). Gorzelak, M. A. et al. Methods for improving human gut microbiome data by reducing variability through sample processing and storage of stool. PLoS ONE 10, e0134802 (2015). Kwon, Y.-C. & Jewett, M. C. High-throughput preparation methods of crude extract for robust cell-free protein synthesis. Sci. Rep. 5, 8663 (2015). Sun, Z. 
Z. et al. Protocols for implementing an Escherichia coli based TX-TL cell-free expression system for synthetic biology. J. Vis. Exp. 79, e50762 (2013). This work was supported by MIT's Center for Microbiome Informatics and Therapeutics, the Paul G. Allen Frontiers Group, and the Wyss Institute. X.T. is supported in part by grants from the National Institutes of Health T32 DK007191 and a Wyss Institute Clinical Fellowship. A.J.D. is supported by the National Science Foundation Graduate Research Fellowship Program. A.A. is supported in part by grants from the National Institutes of Health (K23 DK097142, R03 DK112909) and the Crohn's and Colitis Foundation. The authors would like to thank Liz Andrews, Will Tan, Heather Wilson, and Eric Rosenberg for their help with clinical stool samples. These authors contributed equally: Melissa K. Takahashi, Xiao Tan, Aaron J. Dy. Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, 02139, USA Melissa K. Takahashi, Xiao Tan, Aaron J. Dy, Dana Braff, Yoshikazu Furuta & James J. Collins Division of Gastroenterology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, 02114, USA Xiao Tan & Ashwin Ananthakrishnan Harvard Medical School, 25 Shattuck St, Boston, MA, 02115, USA Wyss Institute for Biologically Inspired Engineering, Harvard University, 3 Blackfan Circle, Boston, MA, 02115, USA Xiao Tan, Nina Donghia & James J. Collins Broad Institute of MIT and Harvard, 415 Main St, Cambridge, MA, 02142, USA Xiao Tan, Aaron J. Dy & James J. Collins Department of Biological Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, 02139, USA Aaron J. Dy, Reid T. Akana & James J. Collins Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston, MA, 02215, USA Dana Braff Division of Infection and Immunity, Research Center for Zoonosis Control, Hokkaido University, North 20, West 10 Kita-ku, Sapporo, 001-0020, Japan Yoshikazu Furuta Synthetic Biology Center, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, 02139, USA James J. Collins Harvard-MIT Program in Health Sciences and Technology, 77 Massachusetts Ave, Cambridge, MA, 02139, USA Melissa K. Takahashi Xiao Tan Aaron J. Dy Reid T. Akana Nina Donghia Ashwin Ananthakrishnan M.K.T, X.T. and A.J.D designed experiments, performed experiments, analyzed data, and wrote the manuscript. D.B. performed experiments and edited the manuscript. Y.F. wrote code for identifying species-specific mRNA sequences. R.T.A. and N.D. performed experiments. A.A. provided clinical samples. J.J.C. directed overall research and edited the manuscript. Correspondence to James J. Collins. J.J.C. is an author on a patent application for the paper-based synthetic gene networks US20160312312A1 and a patent for the RNA toehold switch sensors US9550987B2. The remaining authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Peer Review File Description of Additional Supplementary Files Supplementary Data 1 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Takahashi, M.K., Tan, X., Dy, A.J. et al. A low-cost paper-based synthetic biology platform for analyzing gut microbiota and host biomarkers. Nat Commun 9, 3347 (2018). https://doi.org/10.1038/s41467-018-05864-4
Definition:Metre/Square Metre

The square metre is the SI unit of area. The symbol for the square metre is $\mathrm m^2$.

Linguistic Note

The word metre originated with Tito Livio Burattini, who pioneered the concept of a universal set of fundamental units. He used the term metro cattolico, from the Greek μέτρον καθολικόν (métron katholikón), that is, universal measure. This word gave rise to the French word mètre, which was introduced into the English language in $1797$. The spelling metre is the one adopted by the International Bureau of Weights and Measures. Meter is the variant used in standard American English, but can be confused with the word for a general device used to measure something, in particular the standard household electricity meter, water meter and so on. While $\mathsf{Pr} \infty \mathsf{fWiki}$ attempts in general to standardise on American English, the name of this unit is one place where a deliberate decision has been made to use the international spelling.
Work and Energy Questions

1) Between time $t=0\, {\rm s}$ and $t=8\,{\rm s}$, a force $\vec{F}=\left(3\hat{i}-4.5\hat{j}\right)\,{\rm N}$ moves an $8.3\, {\rm kg}$ object along a trajectory $\Delta \vec{r}=(2.5\hat{i}-2\hat{j})\,{\rm m}$. How much work is done by this force?

The work done by a constant force $\vec{F}$ over a displacement $\vec{r}$ is $W=\vec{F}.\vec{r}=F\left|r\right|{\cos \theta\ }$, where $\theta$ is the angle between the force and the displacement. So
\[W=\vec{F}.\Delta \vec{r}=F_x\Delta x+F_y\Delta y=\left(3\right)\left(2.5\right)+\left(-4.5\right)\left(-2\right)=16.5\, {\rm J}\]

2) A roller-coaster car moves around a vertical circular loop of radius $R=10.0\,{\rm m}$.
(a) What speed must the car have so that it will just make it over the top without any assistance from the track?
(b) What speed will the car subsequently have at the bottom of the loop?
(c) What will be the normal force on a passenger at the bottom of the loop?

At the top the normal force and gravity are in the same direction. These forces provide the centripetal force $F_r$ acting on the car. If the normal force $N$ becomes zero, then the car is on the verge of falling (but not yet falling). Apply Newton's 2${}^{nd}$ law and then set $N=0$.
\[\Sigma F_r=ma_r\to N_t+mg=\frac{mv^2_t}{r}\Rightarrow mg=\frac{mv^2_t}{r}\]
\[\Rightarrow v_t=\sqrt{Rg}=\sqrt{10\times 9.8}=9.9\,\frac{{\rm m}}{{\rm s}}\]
Use the conservation of mechanical energy between the top and bottom points.
\[U_{top}=mg\left(2R\right)\ ,\ \ K_{top}=\frac{1}{2}mv^2_t\]
\[U_{bot}=0\ \ ,\ \ K_{bot}=\frac{1}{2}mv^2_b\]
\[E_{top}=E_{bot}\to 2mgR+\frac{1}{2}m\left(gR\right)=\frac{1}{2}mv^2_b\to v_b=\sqrt{5gR}=22.1\ {\rm m/s}\]
The forces acting on the passengers at the bottom are shown in the figure. Note: in this problem the normal and centripetal forces are always toward the center of the circle, i.e. $-\hat{r}$, while gravity points outward, i.e. along $\hat{r}$.
\[\Sigma F_r=ma_r\to \ N\left(-\hat{r}\right)+mg\hat{r}=\frac{mv^2_b}{r}\left(-\hat{r}\right)\]
\[\Rightarrow N_b=mg+\frac{mv^2_b}{r}=mg+\frac{5mgR}{R}=6mg\]
\[\therefore N=6mg=14.7\ {\rm kN}\]

3) The drag force $F_D$ on the dragster is plotted as a function of distance $s$ below. What is the magnitude of the work done by the drag force after the dragster has traveled $400\, {\rm m}$?

By definition, the area under the force-displacement curve represents the work done by the force. So $W=\int{\vec {F}.d\vec {x}}$, or $W={\rm area}(F\text{-}x)$. Therefore, in this case we must find the area of a trapezoid with lengths of parallel sides $a$, $b$ and height $h$.
Area of a trapezoid: $A=\frac{a+b}{2}.h$
So the work done by this force is
\[W=\frac{(50\,{\rm kN}+20\,{\rm kN})}{2}\left(400\,{\rm m}\right)=14\ {\rm MJ}\]
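As a quick numerical cross-check of the trapezoid calculation above (illustrative only: a linear force profile between the stated 50 kN and 20 kN end values is assumed, since the original plot is not reproduced here), in Python:

import numpy as np

s = np.linspace(0.0, 400.0, 401)              # position, m
F = np.interp(s, [0.0, 400.0], [50e3, 20e3])  # drag force, N (assumed linear drop)
W = np.sum((F[:-1] + F[1:]) / 2 * np.diff(s)) # trapezoid rule: area under F-s curve
print(W / 1e6, "MJ")                          # -> 14.0 MJ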
4) A $200\, {\rm g}$ rubber ball is tied to a $1.0\, {\rm m}$ long string and released from rest at angle $\theta$. It swings down and at the very bottom has a perfectly elastic collision with a $1.0\,{\rm kg}$ block. The block is resting on a frictionless surface and is connected to a $20\,{\rm cm}$ long spring of spring constant $2000\, {\rm N/m}$. After the collision, the spring compresses a maximum distance of $2.0\,{\rm cm}$. From what angle was the rubber ball released?

Conservation of mechanical energy gives the velocity of the ball before the collision:
\[E_i=E_f\Rightarrow \ mg\,l\,\left(1-{\cos \theta\ }\right)=\frac{1}{2}mv^2_1\]
$\Rightarrow \ v_1=\sqrt{2gl\,\left(1-{\cos \theta\ }\right)}$
Because the collision is elastic, kinetic energy and momentum are conserved. Let $m_1$ be the ball and $m_2$ be the block.
Use the conservation of momentum $P_i=P_f$ to determine the velocities of the ball and block after the collision:
\[\Rightarrow m_1v_1=m_1v_{1f}+m_2v_2\to v_1=v_{1f}+\frac{m_2}{m_1}v_2\ \ \ \ \ ,\ \ {\rm (I)}\]
Conservation of K.E. gives: $\frac{1}{2}m_1v^2_1=\frac{1}{2}m_1v^2_{1f}+\frac{1}{2}m_2v^2_2\to v^2_1=v^2_{1f}+\frac{m_2}{m_1}v^2_2$
From ${\rm (I)}$ we have: $v^2_{1f}=v^2_1-2\frac{m_2}{m_1}v_1v_2+{\left(\frac{m_2}{m_1}v_2\right)}^2$
\[\therefore v^2_1=v^2_1-2\frac{m_2}{m_1}v_1v_2+{\left(\frac{m_2}{m_1}\right)}^2v^2_2+\frac{m_2}{m_1}v^2_2\]
If the relation above is divided by $\frac{m_2}{m_1}v_2$, we obtain:
\[0=-2v_1+\frac{m_2}{m_1}v_2+v_2\ \ \ \Longrightarrow \ \ v_2=\frac{2v_1}{1+\frac{m_2}{m_1}}\]
Now write the conservation of mechanical energy for the spring:
\[\frac{1}{2}m_2v^2_2=\frac{1}{2}k{\left(\Delta x\right)}^2\ \ \Longrightarrow v_2=\sqrt{\frac{k}{m_2}}\Delta x=\sqrt{\frac{2000}{1}}\left(0.02\right)=0.894\,\frac{{\rm m}}{{\rm s}}\]
By substituting this into the relation above for $v_2$ and solving for $v_1$, we obtain
\[v_1=\frac{1}{2}v_2\left(1+\frac{m_2}{m_1}\right)=2.68\,\frac{{\rm m}}{{\rm s}}\]
\[{\cos \theta\ }=1-\frac{v^2_1}{2gl}\to \theta=50.7{}^\circ \]

5) A skier, whose mass is $70\, {\rm kg}$, stands at the top of a $10{}^\circ $ slope on her new frictionless skis. A strong horizontal wind blows against her with a force of $50\, {\rm N}$. Without using Newton's laws of motion, find the skier's speed after traveling $100\, {\rm m}$ down the slope.

Using the conservation of mechanical energy between the initial and final points, we obtain
\[K_i+U_i+W_{wind}=K_f+U_f\]
\[0+mg(\underbrace{100\,\sin 10{}^\circ \ }_{h_i})-50\,{\cos 10{}^\circ \ }\times 100=\frac{1}{2}mv^2_f+0\]
$U_i$ is the initial potential energy of the skier. At the top of the slope her height relative to the ground is $h_i=100\,\sin 10^\circ$. $W_{wind}$ is the work done by the force of the wind on the skier. Therefore,
\[v^2_f=100\left(2g\,{\sin 10{}^\circ \ }-\frac{2\times 50\times {\cos 10{}^\circ \ }}{m}\right)\]
\[=100\left(2\times 9.8\times {\sin 10{}^\circ \ }-\frac{2\times 50\times {\cos 10{}^\circ \ }}{70}\right)\]
\[v_f=14\,\frac{{\rm m}}{{\rm s}}\]

6) A bullet of mass $m$ and speed $v$ passes completely through a pendulum bob of mass $M$. The bullet emerges with speed of $v/2$. The pendulum bob is suspended by a stiff rod of length $l$ and negligible mass. What is the minimum value of $v$ such that the pendulum bob will barely swing through a complete vertical circle?

Use the conservation of momentum to find the velocity of the bob just after the collision.
\[P_i=P_f\to mv=Mv_f+m\frac{v}{2}\ \Rightarrow v_f=\frac{m}{M}\frac{v}{2}\ \ \ (*)\]
Because there are no frictional effects, the mechanical energy of the bob is conserved.
\[E_i=E_f\to \ U_i+K_i=U_f+K_f\]
Barely swinging through a complete vertical circle means that the bob has just enough energy to reach the top of the path. So use the conservation of mechanical energy between the bottom and top points of the circle. Take the lowest point of the pendulum as the reference point, $U_i(bottom)=0$ and $U_f(top)=Mg(2l)$.
\[\Rightarrow \frac{1}{2}Mv^2_f=Mg\left(2l\right)\to v_f=2\sqrt{gl}\ \ \ from\ \left(*\right)\ \ \Rightarrow v_{min}=\frac{4M}{m}\sqrt{g\,l}\]

7) A block of mass $M$ attached to a horizontal spring with force constant $k$ is moving in SHM with amplitude $A_1$. As the block passes through its equilibrium position, a lump of putty of mass $m$ is dropped from a small height and sticks to it.
(a) Find the new amplitude and period of the motion.
(b) Repeat the first part, finding the new amplitude and period of the motion, if the putty is dropped from a small height onto the block and sticks to it when it is at one end of its path.

Recall that the total mechanical energy in SHM is $E=\frac{1}{2}mv^2+\frac{1}{2}kx^2=\frac{1}{2}kA^2$, where $A$ is the amplitude, or maximum distance from equilibrium. Therefore, before the collision, we have
\[E_1=K_1+U_1=\frac{1}{2}Mv^2_1+0\ \left(in\ equilibrium\ position\right)=\frac{1}{2}kA^2_1\]
\[\Rightarrow v_1=\sqrt{\frac{k}{M}}A_1\]
Conservation of momentum states that $P_i=P_f\ \Rightarrow Mv_1=\left(M+m\right)v_2$
\[\Rightarrow v_2=\frac{M}{M+m}v_1\ \ ,\ after\ collision\]
So the total mechanical energy of the system after the collision is
\[E_2=U_2+K_2\ \Rightarrow \frac{1}{2}kA^2_2=0+\frac{1}{2}\left(m+M\right)v^2_2=\frac{1}{2}\frac{M^2}{M+m}v^2_1\]
\[\Rightarrow E_2=\frac{M}{M+m}E_1\]
\[\Rightarrow \frac{1}{2}kA^2_2=\frac{M}{M+m}\frac{1}{2}kA^2_1\to A_2=\sqrt{\frac{M}{M+m}}A_1\]
In SHM, the period is given by $T=2\pi\sqrt{\frac{m}{k}}$, so $T_f=2\pi\sqrt{\frac{M_{tot}}{k}}=2\pi\sqrt{\frac{m+M}{k}}$.
At the end points of the SHM, $v=0$ and $E_{tot}=\frac{1}{2}kA^2$, so
\[K_1=K_2=0\ \ ,\ E_{tot,1}=E^{'}_{tot,2}\Rightarrow \ \frac{1}{2}kA^2_1=\frac{1}{2}k{A^{'}_2}^2\]
\[\therefore A_1=A^{'}_2\ {\rm ,\ does\ not\ change!}\]
The period is the same as in part (a), $T_f=2\pi\sqrt{\frac{m+M}{k}}$.

8) Tarzan is in the path of a pack of stampeding elephants when Jane swings in to the rescue on a rope vine, hauling him off to safety. The length of the vine is $25\,{\rm m}$, and Jane starts her swing with the rope horizontal. If Jane's mass is $54\,{\rm kg}$, and Tarzan's mass is $82\,{\rm kg}$, to what height above the ground will the pair swing after she rescues him?

Jane grabbing Tarzan is a perfectly inelastic collision. We can use conservation of energy to find Jane's speed just before the collision, and then again to find how high they go after the collision.
Stage 1: Jane swings.
\begin{gather*} E_{i,mech}=U+K=m_{J}gL+0\\ E_{f,mech}=U_f+K_f=0+\frac{1}{2}m_{J}v^2_{J} \end{gather*}
Conservation of mechanical energy:
\[E_i=E_f\to m_{J}gL=\frac{1}{2}m_{J} v^2_J\Rightarrow v_J={\left(2gL\right)}^{\frac{1}{2}}\]
Stage 2: Jane collides with Tarzan.
Use conservation of momentum to find the combined velocity of Tarzan and Jane just after the collision:
\[P_i=P_f\to m_Jv_J=\left(m_T+m_J\right)V_{TJ}\Rightarrow V_{TJ}=\frac{m_J}{m_J+m_T}v_J\]
\[\Rightarrow V_{TJ}=\frac{m_J}{m_J+m_T}{\left(2gL\right)}^{\frac{1}{2}}\]
Stage 3: now use the conservation of mechanical energy again to determine the desired height:
\[E_i=E_f\]
\begin{align*} E_i=U_i+K_i&=0+\frac{1}{2}\left(m_T+m_J\right)V^2_{TJ}\\ &=\frac{1}{2}\left(m_T+m_J\right){\left(\frac{m_J}{m_J+m_T}\right)}^2 2gL \end{align*}
\[E_f=U_f+K_f=\left(m_T+m_J\right)gH+0\]
\[E_i=E_f\Rightarrow H={\left(\frac{m_J}{m_T+m_J}\right)}^2L={\left(\frac{54}{136}\right)}^2\left(25\right)=3.9\,{\rm m}\]
Note that in the above $U_i=0$ because in this problem we chose the lowest point of the path as the reference point.

9) Released from rest at the same height, a thin spherical shell and a solid sphere of the same mass $m$ and radius $R$ roll without slipping down an incline through the same vertical drop $H$. Each is moving horizontally as it leaves the ramp. The spherical shell hits the ground a horizontal distance $L$ from the end of the ramp and the solid sphere hits the ground a distance $L^{'}$ from the end of the ramp. Find the ratio $L^{'}/L$.

Working backwards, if we know the horizontal velocities at the end of the ramp, the distances traveled are $L=V\Delta t$ and $L^{'}=V^{'}\Delta t$, where $\Delta t$ is the time for both to fall vertically from the bottom of the ramp to the ground. Then $\frac{L}{L^{'}}=\frac{V}{V^{'}}$.
We can find $V\ ,\ V^{'}$ using conservation of mechanical energy (consider the end of the ramp as the reference point, i.e. $U=0$):
\[E_i=mgH\ \ ,\ \ E_f=K_f+U_f=\frac{1}{2}I\omega^2+\frac{1}{2}mV^2\]
Recall that $\left\{ \begin{array}{rcl} I & = & \frac{2}{3}mR^2\ ,\ \ \ spherical\ shell \\ I^{'} & = & \frac{2}{5}mR^2\ ,\ \ \ solid\ sphere \end{array} \right.$
So conservation of mechanical energy gives, for the spherical shell:
\begin{align*} E_i=E_f\to mgH&=\frac{1}{2}\left(\frac{2}{3}\right)mR^2{\left(\frac{V}{R}\right)}^2+\frac{1}{2}mV^2\\ &=\frac{5}{6}mV^2\Rightarrow V=\sqrt{\frac{6}{5}gH} \end{align*}
and for the solid sphere:
\begin{align*} E_i=E_f\to \ mgH&=\frac{1}{2}\left(\frac{2}{5}\right)mR^2{\left(\frac{V^{'}}{R}\right)}^2+\frac{1}{2}m{V^{'}}^2\\ &=\frac{7}{10}m{V^{'}}^2\Rightarrow V^{'}=\sqrt{\frac{10}{7}gH} \end{align*}
\[\therefore \frac{L}{L^{'}}=\frac{V}{V^{'}}=\frac{\sqrt{\frac{6}{5}gH}}{\sqrt{\frac{10}{7}gH}}=\sqrt{\frac{42}{50}}\ \Rightarrow \ \ \frac{L^{'}}{L}=1.09\]

10) The magnitude of a single force acting on a particle of mass $m$ is given by $F=bx^2$, where $b$ is a constant. The particle starts from rest. After it travels a distance $L$, determine its (a) kinetic energy and (b) speed.

(a) The work-energy theorem states that $\Delta K=K_f-K_i=W_{net}$, where $W_{net}$ is the total work done on the system. Work is related to force by $W=\int{\vec{F}.d\vec{x}}$
\begin{align*} W=\int^L_0{\vec{F}.d\vec{x}}&=\int^L_0{F\,dx\,{\cos 0{}^\circ \ }}\\ &=\int^L_0{bx^2dx}\\ &=\frac{1}{3}b{\left.x^3\right|}^L_0\\ &=\frac{1}{3}bL^3 \end{align*}
The particle starts at rest, so $K_i=0$.
\[K_f-K_i=W\Rightarrow K_f=\frac{bL^3}{3}\]
(b) Solving $K_f$ for speed, we obtain
\[K_f=\frac{1}{2}mV^2_f=\frac{bL^3}{3}\Rightarrow V_f=\sqrt{\frac{2}{3}\frac{bL^3}{m}\ }\]

11) A box of mass $M$ is at rest at the bottom of a frictionless inclined plane. The box is attached to a string that pulls with a constant tension $T$.
(a) Find the work done by the tension $T$ as the box moves through a distance $x$ along the plane.
(b) Find the speed of the box as a function of $x$.
(c) Determine the power delivered by the tension in the string as a function of $x$.

(a) Because the tension is constant along the path and parallel to the direction of motion, the work done by it is $W_T=\int^x_0{\vec{F}.d\vec{l}}=\int^x_0{T\,dl}=Tx$
(b) Use the work-energy theorem to find the speed of the box.
\[K_f-K_i=W_{tot}=W_{weight}+W_{Tension}\]
\[W_{weight}=\int^x_0{{\vec{F}}_g.d\vec{l}}=\left\{ \begin{array}{lcl} \int^x_0{\left(mg\,{\sin \theta\ }\right)dx\ {\cos 180{}^\circ \ }} & = & -mg\,x\,{\sin \theta\ } \\ \int^x_0{\left(mg\,{\cos \theta\ }\right)dx\ {\cos 90{}^\circ \ }} & = & 0 \end{array} \right.\]
The box starts at rest, $K_i=0$, so
\[\frac{1}{2}mV^2_f=-mg\,x\,{\sin \theta\ }+Tx\]
\[\Rightarrow V_f={\left(\frac{2Tx}{m}-2gx\,{\sin \theta\ }\right)}^{\frac{1}{2}}\]
(c) Power is defined as the rate at which work is done by the system:
\[P=\frac{dW}{dt}=\frac{d}{dt}\vec{F}.\vec{x}\ \xrightarrow{F\ constant}\ P=\vec{F}.\frac{d\vec{x}}{dt}=\vec{F}.\vec{V}\]
In this problem, $\vec{T}$ and $\vec{V}$ are parallel (the weight is normal to the velocity), so
\[P=TV=T{\left(\frac{2Tx}{m}-2gx\,{\sin \theta\ }\right)}^{\frac{1}{2}}\]

12) An Atwood's machine consists of masses $m_1$ and $m_2$, and a pulley of negligible mass and friction. Starting from rest, the speed of the two masses is $4.0\ {\rm m/s}$ at the end of $3.0\ {\rm s}$. At that time, the kinetic energy of the system is $80\ {\rm J}$ and each mass has moved a distance of $6.0\ {\rm m}$. Determine the values of $m_1$ and $m_2$.

Apply the work-energy theorem: $\Delta K=K_f-K_i=W_{tot}$
Starting from rest means $K_i=0$. Because the masses move in opposite directions, gravity does positive work on one of them and negative work on the other. Therefore
\[\left\{ \begin{array}{c} W_{tot}=m_1gh+\left(-m_2gh\right) \\ \Delta K=K_f-0=80\,{\rm J} \end{array} \right.\]
\[\Rightarrow 80=\left(m_1-m_2\right)gh\Rightarrow m_1-m_2=\frac{80}{gh}\ ,\ \ \ \left(*\right)\]
where $h$ is measured from the starting position. The kinetic energy of the system after $3\,{\rm s}$ is $80\ {\rm J}$, i.e.
\[K_f=\frac{1}{2}\left(m_1+m_2\right)v^2_f=80\Rightarrow m_1+m_2=\frac{160}{v^2_f}=\frac{160}{4^2}\ \ ,\ \ \left(**\right)\]
From (*) and (**):
\[\left\{ \begin{array}{l} m_1-m_2=\frac{80}{9.8\times 6}=1.36 \\ m_1+m_2=\frac{160}{4^2}=10 \end{array} \right. \Rightarrow 2m_1=11.36\Rightarrow m_1=5.68\,{\rm kg}\]
So $m_1+m_2=10\Rightarrow m_2=10-m_1=10-5.68=4.32\,{\rm kg}$

13) A block of mass $m$ rests on an inclined plane. The coefficient of static friction between the block and the plane is $\mu_{s}$. A gradually increasing force is pulling down on the spring (force constant $k$).
Find the potential energy $U$ of the spring at the moment the block begins to move. The free body diagram is as follows: There is no motion in the direction normal to the plane, i.e. \[\Sigma F_y=0\to N-mg\,\cos \theta=0\ \ ,\ \ (1)\] On the other hand, along the inclined plane $\Sigma F_x=0$, since the block is on the threshold of moving: \[\Rightarrow T-f-mg\,\sin \theta=0\ \ ,\ \ (2)\] At the instant the block begins to move, static friction has reached its maximum value, i.e. $f_{s,max}=\mu_{s}N=\mu_{s}mg\,\cos \theta$ \[\left(2\right)\Rightarrow \ T=\mu_{s}mg\,\cos \theta+mg\,\sin \theta\] If the spring is not accelerating, the force pulling down on it must equal the tension in the rope over the pulley, so $F_{spring}=kx$, where $x$ is the distance the spring is stretched. Therefore \[\left\{ \begin{array}{rcl} T & = & \mu_{s}mg\,\cos \theta+mg\,\sin \theta \\ T & = & kx \end{array} \right.\ \Rightarrow x=\frac{mg}{k}\left(\sin \theta+\mu_{s}\,\cos \theta\right)\] The potential energy stored in the spring is \[U_{spring}=\frac{1}{2}kx^2=\frac{1}{2k}{\left[mg\left(\sin \theta+\mu_{s}\,\cos \theta\right)\right]}^2\] 14) A box of mass $m$ on the floor is connected to a horizontal spring of force constant $k$. The coefficient of kinetic friction between the box and the floor is $\mu_k$. The other end of the spring is connected to a wall. The spring is initially unstressed. If the box is pulled away from the wall a distance $d_0$ and released, the box slides toward the wall. Assume the box does not slide so far that the coils of the spring touch. (a) Obtain an expression for the distance $d_1$ the box slides before it first comes to a stop. (b) Assuming $d_1>d_0$, obtain an expression for the speed of the box when it has slid a distance $d_0$ following the release. (c) Obtain the special value of $\mu_k$ such that $d_1=d_0$. (a) Use conservation of mechanical energy in the presence of dissipative forces such as friction, i.e. $\Delta E=E_f-E_i=W_f$, where $W_f$ is the work done by these forces. Initially the box is at rest, $K_i=0$, and the spring is stretched, $U_{i,s}=\frac{1}{2}kd^2_0$. When the box comes to a stop, $K_f=0$ again and the final potential energy of the spring is $U_{f,s}=\frac{1}{2}k{\left(\Delta x\right)}^2$, where $\Delta x=d_1-d_0$ is the final displacement of the spring from its equilibrium position. Kinetic friction does work on the box along this path, so \[W_{friction}=-f\,d_1=-\mu_{k}mg\,d_1\] \[E_f-E_i=W_f\to \left(U_f+K_f\right)-\left(U_i+K_i\right)=W_f\] \[\frac{1}{2}k{\left(d_1-d_0\right)}^2-\frac{1}{2}kd^2_0=-\mu_{k}mg\,d_1\] \[d^2_0-d^2_1+2d_1d_0-d^2_0=\frac{2\mu_{k}\,mg}{k}\,d_1\Rightarrow d_1=2d_0-2\mu_{k}\frac{mg}{k}\] (b) Use conservation of mechanical energy, $\Delta E=W_f$. When the box has slid a distance $d_0$, the spring is back at its natural length, so there is no spring potential energy, $U_{f,s}=0$, but at this point the box has velocity, i.e.
$K_f=\frac{1}{2}mv^2_f$ \[E_f-E_i=W_f\Rightarrow \left(0+\frac{1}{2}mv^2_f\right)-\left(\frac{1}{2}kd^2_0+0\right)=-\mu_{k}mg\,d_0\] \[v_f={\left(\frac{k}{m}d^2_0-2\mu_kg\,d_0\right)}^{\frac{1}{2}}\] (c) Setting $d_1=d_0$ in the result of part (a) and solving for $\mu_k$, we obtain \[d_1=2d_0-2\mu_k\frac{mg}{k}\ \ \xrightarrow{d_1=d_0}\ \ \mu_k=\frac{kd_0}{2mg}\] 15) A $5.00\,{\rm kg}$ block is firmly attached to a $120\,{\rm N/m}$ spring. The block is initially at rest and the entire setup is on a frictionless surface. A rope inclined at $36.9^\circ$ above the horizontal is used to slowly pull the block until the block-spring-rope system is again at rest. At this point, the spring is stretched by $40.0\,{\rm cm}$. (a) Calculate the tension in the rope at this point (spring stretched by $40.0\,{\rm cm}$). (b) Calculate the work done by the tension in the rope as the block moves from its initial position to this point (spring stretched by $40.0\,{\rm cm}$). (c) Suddenly the rope breaks. Calculate the speed of the block at the point where the spring is once again unstretched (neither stretched nor compressed). (a) Because the system is at rest at this point, \[\Sigma F_x=0\Rightarrow T\cos 36.9^\circ-kx=0 \Rightarrow T=\frac{kx}{\cos 36.9^\circ}=\frac{120\times 0.40}{\cos 36.9^\circ}=60.02\ {\rm N}\] (b) Use the work-energy theorem, $K_f-K_i=W_{net}$. Since the block is initially at rest and, after being pulled through $x=0.4\ {\rm m}$, comes to rest again, $K_i=K_f=0$: \[\Delta K=W_{net}\Rightarrow 0-0=W_T+W_s\] where $W_s=-\frac{1}{2}kx^2$ is the work done by the spring on the block and $x$ is the displacement from equilibrium. Therefore, $W_T=\frac{1}{2}kx^2=\frac{1}{2}\left(120\right){\left(0.4\right)}^2=9.6\ {\rm J}$ Important note: we cannot use the formula $W=\vec{F}\cdot\vec{x}$ for the work done by a constant force, because the block is attached to the spring and so the tension in the rope is not constant during the pull; hence $W_T\ne \vec{T}\cdot\vec{x}$. (c) Use conservation of mechanical energy between the instant the rope breaks and the instant the block reaches the equilibrium (unstretched) position: \[E_2-E_1=W_{NC}=0\Rightarrow E_2=E_1\] \[K_2+U_2=K_1+U_1\Rightarrow \frac{1}{2}mv^2_2+0=0+\frac{1}{2}kx^2 \Rightarrow v_2=\sqrt{\frac{k}{m}}\,x=\sqrt{\frac{120}{5}}\left(0.4\right)=1.96\ \frac{{\rm m}}{{\rm s}}\] Here $W_{NC}$ is the work done by non-conservative forces such as friction. Note: the potential energy stored in a spring is the negative of the work done by it, that is \[U_s=-W_s=\frac{1}{2}kx^2\] 16) In the system drawn below, the coefficient of kinetic friction between the $6.00\,{\rm kg}$ block and the horizontal surface is $0.15$, and the entire system is being held at rest (by someone who is holding the hanging $2.00\,{\rm kg}$ block in place). This person releases the block. Calculate the speed of each block after the $2.00\,{\rm kg}$ block has fallen $30.0\,{\rm cm}$.
Because a non-conservative force acts on the $6\,{\rm kg}$ block, we must use the following form of the conservation of mechanical energy: \[E_2-E_1=W_{NC}\] Let the initial position of the blocks be the origin of the coordinate system, so the $2\,{\rm kg}$ block moves to negative coordinates as it falls. Conservation of energy gives \[E_f-E_i=W_f \to \left(U_f+K_f\right)-\left(U_i+K_i\right)=-fd\] where $d$ is the distance that the $6\,{\rm kg}$ block (in fact both blocks) moves. \begin{align*} \left(-m_2gd+\frac{1}{2}\left(m_6+m_2\right)v^2_f\right)-\left(0+0\right)&=-\mu_km_6gd\\ \Rightarrow \frac{1}{2}\left(m_6+m_2\right)v^2_f&=gd\left(m_2-\mu_km_6\right)\\ \Rightarrow v_f&=\sqrt{\frac{2gd\left(m_2-\mu_km_6\right)}{m_6+m_2}}\\ &=\sqrt{\frac{2\times 9.8\times 0.30\times (2-0.15\times 6)}{6+2}}=0.899\ \frac{{\rm m}}{{\rm s}} \end{align*} Note: the friction work is always $W_f=-fd=-\mu_kNd$, where $N$ is the normal force and $d$ is the displacement of the object. Note: the $2\,{\rm kg}$ object moves below the origin, so its potential energy must be negative. 17) An $80\,{\rm g}$ arrow is fired from a bow whose string exerts an average force of $95\,{\rm N}$ on the arrow over a distance of $80\,{\rm cm}$. What is the speed of the arrow as it leaves the bow? Use the work-energy theorem, $\Delta K=K_f-K_i=W_{net}$, where $W_{net}$ is the sum of the works done on the object over the displacement $x$. Therefore, \[\frac{1}{2}m\left(v^2_f-v^2_i\right)=Fx\Rightarrow v_f=\sqrt{\frac{2Fx}{m}}=\sqrt{\frac{2\times 95\times 0.80}{0.080}}=44\ \frac{{\rm m}}{{\rm s}}\] 18) A force $\vec{F}=12\,\hat{i}-10\,\hat{j}\ {\rm (N)}$ acts on an object. How much work does this force do as the object moves from the origin to the point $\vec{r}=12\,\hat{i}+11\,\hat{j}\ {\rm (m)}$? By definition, the work done by a constant force $\vec{F}$ over a displacement $\vec{x}$ is $W=\vec{F}\cdot\vec{x}$, so \begin{align*} W&=\left(12\,\hat{i}-10\,\hat{j}\right)\cdot\left(12\,\hat{i}+11\,\hat{j}\right)\\ &=144\left(\hat{i}\cdot\hat{i}\right)-110\left(\hat{j}\cdot\hat{j}\right)\\ &=144-110\\ &=34\ {\rm J} \end{align*} Note: the dot (inner) products of the unit vectors are \[\hat{i}\cdot\hat{i}=\left|\hat{i}\right|\left|\hat{i}\right|\cos 0^\circ=1\] and similarly $\hat{j}\cdot\hat{j}=\hat{k}\cdot\hat{k}=1$, while \[\hat{i}\cdot\hat{j}=\left|\hat{i}\right|\left|\hat{j}\right|\cos 90^\circ=0\] 19) In the figure, a constant external force $P=160\,{\rm N}$ is applied to a $20\,{\rm kg}$ box, which is on a rough horizontal surface. While the force pushes the box a distance of $8\,{\rm m}$, the speed changes from $0.5\,{\rm m/s}$ to $2.6\,{\rm m/s}$. What is the work done by friction during this process? Use the work-energy theorem, $\Delta K=W_{net}$, where $W_{net}$ is the total work done on the object. In this case there are two contributions to the work, from the constant applied force and from friction. First calculate the work $W_P$ and substitute it into $\Delta K=W_{net}$, then solve for $W_f$.
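As a quick cross-check of the arithmetic carried out next, here is a minimal numerical sketch of that plan; it is not part of the original solution, and the $30^\circ$ direction of the push is an assumption read off from the $\cos 30^\circ$ appearing in the worked numbers below (the figure itself is not reproduced here).

```python
# Numerical cross-check of Problem 19 (work done by friction).
# Assumption: the applied force acts at 30 degrees to the displacement,
# as implied by the cos(30 deg) used in the worked solution (figure not shown).
import math

P, d, m = 160.0, 8.0, 20.0         # applied force (N), displacement (m), mass (kg)
v_i, v_f = 0.5, 2.6                # initial and final speeds (m/s)
theta = math.radians(30.0)         # assumed angle between P and the displacement

W_P = P * d * math.cos(theta)      # work done by the applied force
dK = 0.5 * m * (v_f**2 - v_i**2)   # change in kinetic energy
W_fric = dK - W_P                  # work-energy theorem: dK = W_P + W_fric

print(f"W_P = {W_P:.2f} J, dK = {dK:.2f} J, W_friction = {W_fric:.2f} J")
# Prints roughly: W_P = 1108.51 J, dK = 65.10 J, W_friction = -1043.41 J
```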
\[W_P=P\left|x\right|\cos \theta=160\left(8\right)\cos 30^\circ=1108.51\ {\rm J}\] \[K_f-K_i=W_f+W_P\Rightarrow \frac{1}{2}m\left(v^2_f-v^2_i\right)=W_f+W_P\] \[\frac{1}{2}\left(20\right)\left({\left(2.6\right)}^2-{\left(0.5\right)}^2\right)=W_f+1108.51\Rightarrow W_f=-1043.41\ {\rm J}\] 20) A net force along the $x$ axis, $F\left(x\right)=-C+Dx^2$, is applied to a mass $m$ that is initially at the origin, moving in the $-x$ direction with a speed of $v_0$. What is the speed of the object when it reaches a point $x_f$? Use the work-energy theorem; in this case the force varies with position, so \[K_f-K_i=W_{net}=\int^{x_f}_{x_i}{F\left(x\right)dx}\] \[\Rightarrow \frac{1}{2}m\left(v^2_f-v^2_i\right)=\int^{x_f}_{x_i=0}{\left(-C+Dx^2\right)dx}={\left.\left(-Cx+\frac{D}{3}x^3\right)\right|}^{x_f}_{x_i=0}\] \[\Rightarrow v_f=\sqrt{v^2_0+\frac{2}{m}\left(-Cx_f+\frac{D}{3}x^3_f\right)}\] 21) A boy of mass $M$ is seated on top of a hemispherical mound of ice of radius $R$ as shown below. He starts to slide down the ice and eventually flies off the mound of ice. The ice is frictionless. (a) Draw a free body diagram for the boy when he is at point $P$. (b) At angle $\theta$, what is the boy's velocity? (c) What is $\theta_0$, the angle at which the boy flies off the ice mound? (a) There are two forces acting on the boy: the normal force and gravity, as shown in the figure below. (b) Use conservation of mechanical energy between points $A$ and $P$. Take the center of the hemisphere as the reference level, so at $P$ the height above the base is $h=R\,\cos \theta$: \[E_A=E_P\Rightarrow U_A+K_A=U_P+K_P\] \[\Rightarrow \ mgR=mgR\,\cos \theta+\frac{1}{2}mv^2_P\] \[\Rightarrow v_P=\sqrt{2gR(1-\cos \theta)}\] This is the speed of the boy everywhere on the hemisphere. (c) When the normal force acting on the boy reaches zero, the boy flies off. So first apply Newton's 2nd law to the boy at point $P$, then set $N=0$: \[\Sigma F_r=ma_r\Rightarrow N\hat{r}+mg\,\cos \theta\left(-\hat{r}\right)=\frac{mv^2}{R}\left(-\hat{r}\right)\] \[\Rightarrow mg\,\cos \theta-N=\frac{mv^2}{R}\xrightarrow{N=0} v=\sqrt{Rg\,\cos \theta}\] Note: we have used polar coordinates; the centripetal acceleration always points toward the center, i.e. along $-\hat{r}$. In part (b) we found the speed of the boy at an arbitrary angle $\theta$; in part (c) we found the speed at the instant the boy flies off. Setting these two expressions equal determines the angle at which the boy flies off: \[\sqrt{2gR\,\left(1-\cos \theta\right)}=\sqrt{Rg\,\cos \theta}\Rightarrow 2gR\,\left(1-\cos \theta\right)=Rg\,\cos \theta\] \[\Rightarrow 2Rg=3Rg\,\cos \theta\Rightarrow \theta={\cos}^{-1}\left(\frac{2}{3}\right)\ \sim \ 48.19^\circ\] 22) A uniform solid sphere is rolling without slipping along a horizontal surface with a speed of $4.5\,{\rm m/s}$ when it starts up a ramp that makes an angle of $25^\circ$ with the horizontal.
What is the speed of the sphere after it has rolled $3.00\,{\rm m}$ up the ramp, measured along the surface of the ramp? Use conservation of mechanical energy, $E_i=E_f$. Here there is also rotational kinetic energy due to the rolling of the sphere: \[\frac{1}{2}mv^2_i+\frac{1}{2}I\omega^2_i+0=\frac{1}{2}mv^2_f+\frac{1}{2}I\omega^2_f+mgd\,\sin 25^\circ\] where $I$ is the moment of inertia, which for a solid sphere is $\frac{2}{5}mR^2$. Recall that in rolling motion the angular velocity is related to the linear velocity via $v=R\omega$, so \[\frac{1}{2}mv^2_i+\frac{1}{2}\left(\frac{2}{5}mR^2\right){\left(\frac{v_i}{R}\right)}^2+0=\frac{1}{2}mv^2_f+\frac{1}{2}\left(\frac{2}{5}mR^2\right){\left(\frac{v_f}{R}\right)}^2+mgd\,\sin 25^\circ\] \[\frac{7}{10}mv^2_i=\frac{7}{10}mv^2_f+mgd\,\sin 25^\circ\Rightarrow v_f=\sqrt{v^2_i-\frac{10}{7}gd\,\sin 25^\circ}\] \[\Rightarrow v_f=\sqrt{{\left(4.5\right)}^2-\frac{10}{7}\left(9.8\right)\left(3\right)\sin 25^\circ}=1.58\ \frac{{\rm m}}{{\rm s}}\] 23) A huge cannon is assembled on an airless planet having insignificant axial spin. The planet has a radius of $5\times {10}^6\,{\rm m}$ and a mass of $3.95\times {10}^{23}\,{\rm kg}$. The cannon fires a projectile straight up at $2000\,{\rm m/s}$. An observation satellite orbits the planet at a height of $1000\,{\rm km}$. What is the projectile's speed as it passes the satellite? Use conservation of mechanical energy to find the desired speed: \[E_1=E_2\Rightarrow \frac{1}{2}mv^2_1-\frac{GmM}{r_1}=\frac{1}{2}mv^2_2-\frac{GMm}{r_2}\] where the second term is the gravitational potential energy of an object of mass $m$ at a distance $r$ from the center of a planet of mass $M$. \[v^2_1-\frac{2GM}{r_1}=v^2_2-\frac{2GM}{r_2}\Rightarrow v_2=\sqrt{v^2_1+2GM\left(\frac{1}{R+h}-\frac{1}{R}\right)}\] \[\Rightarrow v_2=\sqrt{{\left(2000\right)}^2+2\left(6.67\times {10}^{-11}\right)\left(3.95\times {10}^{23}\right)\left(\frac{1}{5\times {10}^6+{10}^6}-\frac{1}{5\times {10}^6}\right)}=1498\ \frac{{\rm m}}{{\rm s}}\] Note: taking $U=0$ at infinity, the gravitational potential energy of a mass $m$ at a distance $r$ from the center of a planet of mass $M$ is \[U=-\frac{GmM}{r}\] where $G=6.67\times {10}^{-11}\ {\rm N\,m^2/kg^2}$ is the gravitational constant. 24) A roller coaster cart rolls from rest down a $50.0\ {\rm m}$ tall hill and then goes around a circular vertical loop-the-loop of radius $15.0\ {\rm m}$ as shown at right. (a) How fast is the cart going when it gets to the top of the loop? (b) If the mass of the cart (including all of its passengers) is $1.20\times {10}^{3}\ {\rm kg}$, what is the magnitude of the normal force that acts on the cart at the top of the loop? (a) Use conservation of mechanical energy between the starting point $i$ and the top of the loop $t$.
(Take the lowest point of the circle as the reference level.) \[E_i=E_t\Rightarrow U_i+K_i=U_t+K_t\Rightarrow \ mgh+0=mg\left(2R\right)+\frac{1}{2}mv^2_t\] \[v_t=\sqrt{2g(h-2R)}=\sqrt{2\left(9.8\right)\left(50-2\left(15\right)\right)}=19.79\ {\rm m/s}\] (b) In the circular path the centripetal force on the cart is provided by the normal force and gravity. Recall that the centripetal acceleration always points toward the center of the circle ($-\hat{r}$). Applying Newton's 2nd law to the cart: \[\Sigma {\vec{F}}_r=m{\vec{a}}_r\Rightarrow N\left(-\hat{r}\right)+mg\left(-\hat{r}\right)=\frac{mv^2}{r}\left(-\hat{r}\right)\] \[N=\frac{mv^2}{r}-mg\] This is the general expression for the normal force at the top of the loop. Substituting the speed at the top gives \[N=\frac{mv^2_t}{r}-mg=\left(1.2\times {10}^3\right)\left(\frac{{\left(19.79\right)}^2}{15}-9.8\right)\ \sim \ 19.6\ {\rm kN}\] 25) A uniform solid sphere of radius $r$ starts from rest at a height $h$ and rolls without slipping along a loop-the-loop track of radius $R$ as shown in the figure ($r\ll R$). (a) Draw the free body diagram for the sphere when it is at the top of the loop (point A) and moving fast enough to stay on the track. (b) What is the smallest height $h$ for which the sphere will not leave the track at the top? (a) Rolling without slipping means that the point of contact of the object with the surface does not move; this type of motion is maintained by static friction. Therefore, three forces act on the sphere: gravity, the normal force, and static friction. (b) The static friction force does no work on the sphere, since the sphere instantaneously rolls rather than slides. Therefore, use conservation of mechanical energy to find the desired height: \[E_f-E_i=W_{NC}=0\Rightarrow E_f=E_i\] \[mgh=\frac{1}{2}mv^2_A+\frac{1}{2}I\omega^2+mg\left(2R\right)\] Substituting $I=\frac{2}{5}mr^2$ and $\omega=v_A/r$, we obtain \[mgh=\frac{1}{2}mv^2_A+\frac{1}{2}\left(\frac{2}{5}mr^2\right){\left(\frac{v_A}{r}\right)}^2+mg\left(2R\right)\Rightarrow gh=\frac{7}{10}v^2_A+2gR\] Now apply Newton's 2nd law to the circular motion at point A to find $v_A$: \[\Sigma F_r=ma_r\Rightarrow N+mg=\frac{mv^2_A}{R}\] Setting $N=0$ gives the minimum value of $v_A$, so $v^{min}_A=\sqrt{Rg}$. Substituting this value into the relation above, \[gh=\frac{7}{10}\left(Rg\right)+2gR\Rightarrow \ \ h_{min}=2.7R\] 26) An $8\,{\rm kg}$ block is released from rest, $v_1=0\,{\rm m/s}$, on a rough incline. The block moves a distance of $1.6\,{\rm m}$ down the incline, in a time interval of $0.8\,{\rm s}$, and acquires a velocity of $v_2=4.0\,{\rm m/s}$. (a) Calculate the work done by the weight, friction, and normal forces. (b) What is the average rate at which the block gains kinetic energy during the $0.8\,{\rm s}$ time interval? (a) By definition, the work done by a constant force $\vec{F}$ is the product of the magnitude $\left|\Delta \vec x\right|$ of the displacement of the point of application of the force and the component of the force along the direction of the displacement, that is $F\cos \theta$.
Therefore \[W=\vec F\cdot\Delta \vec x=F\left|\Delta x\right|\cos \theta\] First, draw all of the forces acting on a body on an inclined plane, as shown in the figure. Recall that the component of a force along the direction of the displacement does work, while the component perpendicular to the displacement does no work (since the angle between that component and the displacement is $90^\circ$): \[W_g=\left(\underbrace{mg\,\sin 40^\circ}_{F_{\parallel }}\right)\times 1.6\times \cos 0^\circ=8\times 9.8\times \sin 40^\circ\times 1.6=+80\ {\rm J}\] \[W_N=Nx\,\cos 90^\circ=0\] \[W_f=fx\,\cos 180^\circ=-fx=-\mu_kNx\] Since the coefficient of kinetic friction has not been given, we must find the work done by friction by applying the work-energy theorem as follows: \[\Delta K=W_{tot}=W_g+W_N+W_f\] \[\frac{1}{2}m\left(V^2_2-V^2_1\right)=W_g+W_f+W_N\Rightarrow \frac{1}{2}\left(8\right)\left(4^2-0^2\right)=80+W_f+0\] \[W_f=-16\ {\rm J}\] (b) The time rate at which a force does work is called power. Therefore, calculate the change in the kinetic energy of the block and then use the definition of power: \[P=\frac{energy}{time}\] \[P=\frac{\Delta K}{\Delta t}=\frac{\frac{1}{2}m\left(V^2_2-V^2_1\right)}{\Delta t}=\frac{\frac{1}{2}\left(8\right)\left(4^2-0^2\right)}{0.8}=80\ {\rm W}\] 27) Imagine a toy gun in which a ball is shot out when a spring is released. The force constant of the spring is $10\,{\rm N/m}$, and it is compressed by $0.05\,{\rm m}$. The mass of the ball is $0.02\,{\rm kg}$. If no energy is lost to friction, approximately what is the speed of the ball when it is shot out? Initially the spring is compressed, so it has elastic potential energy; as the ball is shot out, all of this potential energy converts into kinetic energy. Using conservation of mechanical energy, we obtain \[E_f-E_i=W_{NC}\] where $W_{NC}$ is the work done by non-conservative forces such as friction. In this case, $W_f=0$, so we have \[E_f=E_i\] \[\Rightarrow \underbrace{\frac{1}{2}kx^2_f}_{elastic\ potential}+\underbrace{\frac{1}{2}mv^2_f}_{kinetic\ energy}=\frac{1}{2}kx^2_i+\frac{1}{2}mv^2_i\] When the spring reaches its unstretched length ($x=0$), the ball is shot out. Therefore, \[0+\frac{1}{2}mv^2_f=\frac{1}{2}kx^2+0\Rightarrow v_f=x\sqrt{\frac{k}{m}}=\left(0.05\right)\sqrt{\frac{10}{0.02}}=1.12\ \frac{{\rm m}}{{\rm s}}\] 28) A $25\ {\rm kg}$ child plays on a swing having support ropes that are $3.0\ {\rm m}$ long. A friend pulls her back until the ropes are $45^\circ$ from the vertical and releases her from rest. (a) What is the height of the child above her lowest point, at the moment she is released? (b) What is the potential energy of the child just as she is released, compared with the potential energy at the bottom of the swing? (c) How fast will she be moving at the bottom of the swing? (a) Let $h$ be the height of the child above the lowest point at the moment she is released.
From the sketch, we obtain \[\ell\,\cos 45^\circ +h=\ell \Rightarrow h=\ell\left(1-\cos 45^\circ\right)\] \[\Rightarrow h=3\left(1-\frac{\sqrt{2}}{2}\right)=0.879\ {\rm m}\] (b) The gravitational potential energy of a mass $m$ at height $h$ is $U_{grav}=mgh$, so \[U_{grav}=25\times 9.8\times 0.879=215\ {\rm J}\] (c) Using conservation of mechanical energy between the release point and the bottom of the swing, we get \[E_{top}=E_{bot}\] \[\Rightarrow mgh_i+\frac{1}{2}m\underbrace{v^2_i}_{v_i=0}=mg\underbrace{h_f}_{0}+\frac{1}{2}mv^2_f\] \[\Rightarrow v_f=\sqrt{2gh}=\sqrt{2\times 9.8\times 0.879}=4.15\ \frac{{\rm m}}{{\rm s}}\] 29) A uniform solid cylinder starts from rest at a height of $1\ {\rm m}$ and rolls without slipping down a plane inclined at an angle of $20^\circ$ from the horizontal as shown. (a) What is the speed of its center of mass when it reaches the bottom of the incline? (b) What is the magnitude of the acceleration of its center of mass? (c) What minimum coefficient of friction is required to prevent the cylinder from slipping? (a) Since the friction is static (the cylinder rolls without slipping), it does no work, so the mechanical energy of the cylinder is conserved. Applying this between the initial and final points, we get \[E_i=E_f\Rightarrow mgh=\frac{1}{2}mv^2+\underbrace{\frac{1}{2}I\omega^2}_{ \begin{array}{c} rotational \\ kinetic\ energy \end{array} }\] The condition for rolling without slipping is $v=r\omega$, where $v$ is the tangential velocity of the cylinder (in this case the center-of-mass velocity), so \[mgh=\frac{1}{2}mv^2+\frac{1}{2}\left(\frac{1}{2}mr^2\right){\left(\frac{v}{r}\right)}^2=\frac{3}{4}mv^2 \Rightarrow v_{CM}=\sqrt{\frac{4}{3}gh}\] where we have used the moment of inertia of a uniform solid cylinder about its central axis, $I_{cyl}=\frac{1}{2}mr^2$. \[\Rightarrow v_{CM}=\sqrt{\frac{4}{3}(9.8)(1)}=3.61\ {\rm m/s}\] (b) Now relate the initial and final velocities of the cylinder to its constant acceleration: \begin{align*} v^2-v^2_0=2a_{CM}\Delta s\Rightarrow \ a_{CM}&=\frac{v^2}{2\Delta s}\\ &=\frac{v^2\,\sin 20^\circ}{2h}\\ &=\frac{{\left(3.61\right)}^2\,\sin 20^\circ}{2\times 1}\\ &=2.23\ \frac{{\rm m}}{{{\rm s}}^{{\rm 2}}} \end{align*} From the geometry we see that the distance travelled by the cylinder is $\Delta s=h/\sin 20^\circ$. (c) The acceleration of the cylinder was obtained in part (b), so apply Newton's second law to the cylinder and solve for the desired quantity: \[\Sigma F=ma\to mg\,\sin 20^\circ-\mu_s\underbrace{mg\,\cos 20^\circ}_{N}=ma_{CM}\] \[\mu_s=\frac{g\,\sin 20^\circ-a_{CM}}{g\,\cos 20^\circ}=\frac{9.8\times \sin 20^\circ-2.23}{9.8\times \cos 20^\circ}=0.121\] Important note: in rolling without slipping, the cylinder is instantaneously at rest where it contacts the surface, so the relevant friction is static; the value found above is the minimum coefficient for which slipping does not occur. 30) Two rocks are thrown from a building $11\,{\rm m}$ high, each with a speed of $5\ {\rm m/s}$. One is thrown vertically upwards, the other horizontally. What is the speed at which each rock will hit the ground? Solution 1 (kinematics): use the kinematic relations to find the speed of each rock when it hits the ground.
Let us take the starting point as the reference point, so the landing point has coordinates ($x=?,\ y=-11\,{\rm m}$). The rock thrown horizontally undergoes projectile motion with launch angle $\theta=0^\circ$. In projectile motion the velocity at every point has the two components $v_x=v_{0x}=v_0\,\cos \theta$ and $v_y=v_0\,\sin \theta-gt$. First calculate the time elapsed until the rock hits the ground: \[y=-\frac{1}{2}gt^2+v_0\,\sin \theta\ t+y_0\to \ -11=-\frac{1}{2}\left(9.8\right)t^2+5t\,\sin 0^\circ+0\] \[\Rightarrow t_{tot}=\sqrt{\frac{22}{9.8}}=1.5\ {\rm s}\] Substituting this into the $y$ component of the velocity gives $v_y=5\,\sin 0^\circ-9.8\times 1.5=-14.7\ {\rm m/s}$; the negative sign indicates that the rock is moving toward the ground. Using the Pythagorean theorem, we get the speed of the horizontally thrown rock: \[v=\sqrt{v^2_x+v^2_y}=\sqrt{{\left(5\,\cos 0^\circ\right)}^2+{\left(-14.7\right)}^2}=15.52\ \frac{{\rm m}}{{\rm s}}\] For the rock thrown vertically, use the following kinematic relation to find its speed at the moment of impact: \[v^2-v^2_0=-2gh\Rightarrow v^2-5^2=-2\left(9.8\right)\left(-11\right)\Rightarrow v=15.52\ \frac{{\rm m}}{{\rm s}}\] As we can see, the two speeds are the same, since at the moment they are thrown the rocks have the same mechanical energy. Solution 2 (energy approach): \[E_i=E_f\Rightarrow \frac{1}{2}mv^2_i+mgh_i=mg\underbrace{h_f}_{0}+\frac{1}{2}mv^2_f\] Canceling the mass of the rock and rearranging, we obtain \[v_f=\sqrt{v^2_i+2gh_i}=\sqrt{5^2+2(9.8)(11)}=15.52\ \frac{{\rm m}}{{\rm s}}\] Category: Work and Energy Most useful formulas in Work-Energy: Definition of work: $W=\vec F \cdot \vec s=Fs\,\cos \phi$ Kinetic energy: $K=\frac{1}{2}mv^2$ Work-energy theorem: $W_{tot}=\Delta K=K_2-K_1$ Power: $P_{av}=\frac{\Delta W}{\Delta t}=\vec F \cdot \vec v$ Number of Questions: 30
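The closed-form results in the problems above are easy to spot-check numerically. The short script below is not part of the original problem set; it is a minimal sketch that simply re-evaluates three of the boxed answers (the fly-off angle of Problem 21, the ramp speed of Problem 22, and the loop-top speed and normal force of Problem 24) with $g=9.8\ {\rm m/s^2}$.

```python
# Spot-check of a few worked answers from this problem set.
import math

g = 9.8  # m/s^2

# Problem 21: angle at which the boy leaves the hemisphere, cos(theta) = 2/3
theta = math.degrees(math.acos(2.0 / 3.0))
print(f"Problem 21: theta = {theta:.2f} deg")            # about 48.19 deg

# Problem 22: solid sphere rolling 3.00 m up a 25 deg ramp, v_i = 4.5 m/s
v_i, d = 4.5, 3.0
v_f = math.sqrt(v_i**2 - (10.0 / 7.0) * g * d * math.sin(math.radians(25.0)))
print(f"Problem 22: v_f = {v_f:.2f} m/s")                # about 1.58 m/s

# Problem 24: cart rolls from a 50 m hill into a 15 m loop, m = 1.2e3 kg
h, R, m = 50.0, 15.0, 1.2e3
v_t = math.sqrt(2.0 * g * (h - 2.0 * R))                 # speed at the top of the loop
N = m * (v_t**2 / R - g)                                 # normal force at the top
print(f"Problem 24: v_t = {v_t:.2f} m/s, N = {N/1e3:.1f} kN")  # about 19.8 m/s, 19.6 kN
```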
Princeton U Press, Jan 2003 Reviewer 2: P. Cvitanovic notes in red font, all edits incorporated 13 Oct 2007 Some referees might want Cvitanovic to `tone down' the idiosyncratic aspects of this book, but I think that would be a big mistake. My only complaint along these lines is that the bland title {\sl Group Theory} scarcely conveys the real contents of the book, and the quirky subtitle {\sl Lie's, Tracks, and Exceptional Groups} helps only a little bit to remedy this --- for example, the word `Lie's' is not exactly English, and nobody will know what `tracks' are until after they read the book. But this is a fairly minor point. My main worry is that while the chapters on exceptional Lie groups and the magic triangle (Chaps.\ 16-22) form the real heart of the book, they are still not complete. In the version I received Chapter 22 is missing, and Chapters 16-21 have gaps and many more typos and other errors than the rest of the book. In short, this is very much a book {\it in the process of being written}. It still needs polishing and careful correction before it is suitable for publication. There are a lot of calculations that I did not have the energy to verify. Some expert needs to do this after the book is completely finished. Here are some mistakes I caught: Edits still outstanding (PC: all done) Edits incorporated p. v -- In the Acknowledgements, the author's remark ``why does he not cite my work on the magic triangle?" is graceless, and should be deleted. We all feel unjustly neglected; the wise among us know better than to complain about it. (PC: internal joke, now removed) p. 3 -- Here the word ``octonion" is misspelled ``octonian" in two places. While this misspelling is fairly common in the literature, the correct spelling is definitely ``octonion'', since this correctly parallels the word ``quaternion''. (PC: corrected throughout) p. 4 -- In the caption to Figure 1.1 he says the Freudenthal magic square is marked by the dotted line. While there are some dotted lines in this figure, they don't actually mark the magic square. (PC: done) p. 5 -- ``i.e." is misspelled ``{\sl ie.}" p. 6 -- In equation (2.2) he introduces overlined symbols without explanation. If these stand for conjugate reps, he should have introduced this notation when discussing antiquarks in the second equation on page 5. p. 7 -- The neologism ``clebsches" should be italicized since it's being defined here. p. 17 -- His definition of the conjugate of a complex vector space is wrong, since ``complex conjugation of elements $x \in V$" makes no sense when $V$ is an abstract vector space. (We can conjugate components with respect to a basis, but not elements of an abstract vector space!) The correct definition says that the conjugate space $\overline V$ has the same underlying set as $V$: for each element $v \in V$ there is a corresponding element $\overline v$ in $\overline V$. The difference is that $\overline V$ is made into a complex vector space in a different way: addition is the same, but multiplying an element $\overline{v} \in \overline{V}$ by a complex number $c$ gives $\overline{\overline{c} v}$. This mistake will come to haunt him on page 211. (PC the Haunted: after consultations with my lawyers, this has been replaced by the more palatable "... dual space $\overline V$ is the set of all linear forms on V over the field F...." ) p.
32 -- The idea of ``infinitesimal transformations" as matrices with ``small elements" is sloppy, harking back to the days before calculus was made rigorous through the concept of limit. Since this is a mathematics textbook rather than a physics one, it would be good to at least mention that a precise treatment exists. (PC: why me? OK, OK, I put some "rigor" there now...) p. 39-40 -- The word ``birdtracks'' is misspelled ``birtracks'' at least three times (PC: 6 times! done) ``succeeded'' is spelled ``suceeded'' ``down'' is spelled ``donw''. p. 44 -- The last sentence on this page also breaks off in the middle: ``Fortunately, and $3n-j$ symbol which contains as a sub-diagram a loop, with, let us say, seven vertices''.... p. 47 -- The last sentence on this page breaks off in the middle: ``If $G_1$ does not exist (the invariant relations are so stringent that there is no space on which they can be realized).'' (PC: text completed, move to p. 41 or so) p. 50 -- ``Factor $1/p!$'' should read ``A factor of $1/p!$''. p. 130 -- The factor of $(a-1)$ in equation (11.5) should be $(p-1)$. p. 145 -- The sentence beginning ``Matrix $A_a^b$'' should read ``The matrix $A_a^b$...''. p. 152 -- The author says to compare ``table ?? and table 10.1''. p. 158 -- The author uses the symbol $\overline{SO(3)}$; the usual term for the universal cover of $SO(3)$ is $\widetilde{SO}(3)$ or $Spin(3)$. This is particularly important because the author uses $\overline{SO(3)}$ to mean something completely different in Chapter 13. p. 159 - There is an equation on this page which is too long and shoots off into the margin; also, this equation contains a double-headed arrow which should probably be a minus sign. (PC: rewritten) I think that without much extra work the author could definitively answer his question ``what are spinsters?'' As he notes, just as the Clifford algebra has its spinor representation(s), the Heisenberg algebra has its Fock representation, which is infinite-dimensional. Moreover, just as spinors form a representation of the Spin group (i.e.\ the double cover of the rotation group), Fock space forms a representation of the metaplectic group (i.e. the double cover of the symplectic group). The whole Heisenberg/Fock/metaplectic story works almost like the Clifford/spinor/Spin story except for a bunch of minus signs. Thus, it would be utterly shocking if ``spinsters'' were anything other than the Fock representation of the metaplectic group. It should be easy to check the necessary diagrammatic equations to prove this. (PC: incorporated 9 Oct 2005) p. 165 -- There is an unnecessary left parenthesis in equation (15.23). p. 166 -- The author writes that the invariance group of $f^{abc}$ is $SU(3)$; this is only true if we restrict attention to unitary transformations; in general the whole group $SL(3)$ preserves this tensor. p. 171 -- In the sentence before equation (16.9), ``takes form'' should read ``takes the form''(PC: done). In the equation, I believe the plus sign should be a minus sign. Also, the symbol $A"$ should read ${A'}' $, both in this equation and the text following. (PC: fixed signs. ${A'}' $ looks horrible, no thanks. done) p. 169 -- I think that the author is implicitly assuming the equation $g^{ai}f_{ibc} = f_{abi}g^{ic}$ in all the diagrammatic calculations in Chapter 16. This equation has a nice pictorial interpretation that allows one to wiggle diagrams around without changing the tensor they represent. 
Moreover, this equation holds for the Lie bracket and Killing form in a semisimple Lie algebra, where it's equivalent to $ g([X,Y],Z) = g(X,[Y,Z]) . $ For the Lie algebra of $SO(3)$, it says simply $ (X \times Y) \cdot Z = X \cdot (Y \times Z) .$ So, I think this equation needs to be added to the definition of the $G_2$ family. (PC: here I disagree - I have disposed of the symmetric 2-index invariant tensor $g^{ab}$ in Chapter 10 "Orthogonal groups", here I am looking at subgroups of SO(n). $g^{ab}$ is not a Killing form, this is the defining, not the adjoint representation. Later on it does become the Killing form for the $E_8$ family, but that is a result of calculation, not an assumption. ) p. 172 -- ``Octonion'' is again misspelled, as is ``Gordan'' in ``Clebsch-Gordan''. p. 180 -- A purely diagrammatic proof of Hurwitz's theorem has also been given by Dominic Boos, and this should be cited here! See Dominik Boos, Ein tensorkategorieller Zugang zum Satz von Hurwitz (A tensor-categorical approach to Hurwitz's theorem), Diplomarbeit ETH Zurich, March 1998, available at http://www.math.ohio-state.edu/$\sim$rost/tensors.html and also Markus Rost, On the dimension of a composition algebra, Documenta Mathematica 1 (1996), 209-214, also available at http://www.mathematik.uni-bielefeld.de/DMV-J/vol-01/10.html Like the author's proof, Boos' proof makes use of the alternativity relation (equation (16.59)). More precisely, he defines a ``vector product algebra'' to be a vector space with a dot product and cross product satisfying properties that generalize those of $R^3$ with its usual dot product and cross product, and gives a purely diagrammatic argument that the dimension of such a thing is 0,1,3, or 7. (PC: I did it in 1975, and it was stated in 1976 paper, derived in detail in 1977 preprint, included in the 1984 Nordita book, on the web since 1996. I'am now referring to Rost and Boos, in hope that a different write-up helps some reader) p. 181 -- Again I think the author is implicitly assuming throughout this chapter that his symmetric quadratic invariant and antisymmetric cubic invariant satisfy the equation $g^{ai}f_{ibc} = f_{abi}g^{ic}$. If so, this should be pointed out explicitly. (PC: here I disagree, as in p. 169 - I have disposed of the symmetric 2-index invariant tensor $g^{ab}$ in Chapter 10 "Orthogonal groups", here I am looking at subgroups of SO(n). As it stands, $g^{ai}f_{ibc} = f_{abi}g^{ic}$ has mismatched up and down indices, but in the $SO(n)$ normalization we are using, $g^{ab}$ is the identity, anyway, so the relation is trivially satisfied. ) p. 182 -- I can't tell if ``the $E_sting$ family Lie algebras'' is a typo for ``the existing families of Lie algebra'' or ``the $E_8$ family of Lie algebras''. (PC: fixed) In equation (17.8) the operator $P_A$ has not been defined; the author probably means $C_A P_\Box$. p. 187 -- The diagram in equation (17.36) still needs to be drawn. p. 188 -- There is a reference to (??) above equation (17.43). (PC: different derivation now, not applicable) p. 190 - There is a reference to (??) in the caption of table 17.2 Equations (17.55) and (17.56) are incomplete. p. 211 -- the author's mention of the `complex conjugate' vector space $\overline A$ is meaningless here, since $A$ is not a complex vector space. And even if by $\overline A$ he means the {\it dual} vector space, equations (18.75) and (18.76) do not make sense. (PC: you are right, removed now) p. 211 -- Again ``octonion'' is misspelled. p. 
211 -- Definition (18.75) and (18.76) don't make sense, for too many reasons to easily sort out. For example, is equation (18.76) a definition of the cross product or the letter $z$? Neither option is viable. Is $\overline x$ an element of $A$ or the dual vector space of $A$? Neither option is viable. If the algebra $A$ being discussed here is really the Jordan algebra of $3 \times 3$ hermitian octonionic matrices, correct definitions are as follows: define $(x,y) = {\rm tr}(xy)$, set $\langle x,x,x \rangle = {\rm det}(x)$ and then extend $\langle \cdot, \cdot, \cdot \rangle$ to be a symmetric trilinear form, and then let $x \times y$ be the unique element of $A$ such that $(x \times y, z) = 3(x,y,z)$. Of course, for all this to make sense, we need to define the trace and (more subtly) the determinant of $3 \times 3$ hermitian octonionic matrices. A correct treatment along these lines can be found in various places, perhaps including the work by Springer on which the author claims to be basing his treatment. (PC: you are right. Now I cite Springer word for word) p. 212 - There are a lot of references to (??) on this page, and one on the previous page. p. 216 -- The first sentence should start with `the': ``The $V \otimes A$ space....'' p. 220 -- The first sentence trails off in ``we obtain''. There is a reference to a nonexistent equation called (?!). $F_4(28)$ should be $F_4(26)$. Table 19.1 apparently needs editing. p. 219 -- The author speaks of ``the exceptional simple Jordan algebra of traceless Hermitian $3 \times 3$ matrices with octonions matrix elements''. The {\it traceless} hermitian octonion matrices do not form a Jordan algebra, since they aren't closed under the Jordan product and don't include the identity. $F_4$ acts as automorphisms of the exceptional Jordan algebra, and as a rep of $F_4$ this algebra splits as the direct sum of a 1-dimensional trivial rep and a 26-dimensional irrep given by the traceless matrices. (PC: you are right - now fixed) p. 221 -- The second sentence should say ``...an $n \to -n$ substitution...'' There is a reference to chapter ??. p. 227 -- After equation (20.36), $SO(7)$ should probably be $SO(10)$. p. 228 -- There's a typo in ``fermionic''. p. 229 -- The author writes ``The Dynkin indices listed in tables ?? and ??gree with....'' p. 232 -- There's a reference to chapter ??. p. 235 -- The sentence beginning ``Role of the'' should start with a `The'. p. 236 -- There should be another `the' in ``In the dimension of the associated reps, eigenvalue....'' The author writes ``... because it contains Freudenthal's Magic Square [72], marked by the dotted line in table 21.1.'' The table being referred to is really table 21.2, and there is no dotted line marking out the magic square in this table (PC: frame drawn). Also, the magic square was actually first noticed by Rosenfeld: Boris A. Rosenfeld, Geometrical interpretation of the compact simple Lie groups of the class (Russian), Dokl. Akad. Nauk. SSSR (1956) 106, 600-603. and then independently made rigorous by Freudenthal: Hans Freudenthal, Beziehungen der $E_7$ und $E_8$ zur Oktavenebene, I, II, Indag. Math. 16 (1954), 218-230, 363-368. III, IV, Indag. Math. 17 (1955), 151-157, 277-285. V -- IX, Indag. Math. 21 (1959), 165-201, 447-474. X, XI, Indag. Math. 25 (1963) 457-487. and Tits: Jacques Tits, Alg\'ebres alternatives, alg\'ebres de Jordan et alg\'ebres de Lie exceptionnelles, Indag. Math. 28 (1966) 223-237. so it should probably just be called the Magic Square. (PC: Incorporated. 
I already had Rosenfeld, Freudenthal, Tits references, but this clarifies who did what a bit. Seems that Ned. Akad. Weternsch. Proc. = Indignations of Mathematics; go figure.) The author misspells `Freudenthal Magic Square' (PC: fixed) , which anyway should probably be just `Magic Square' (PC: incorporated) p. 238 -- The author speaks of a `division algebra of dimension 6', but there is no such thing! It's a well-known hard theorem that finite-dimensional real division algebras occur only in dimensions 1, 2, 4, and 8. (I would like to know more about the sextonions, but I already know they're not a division algebra.) (PC: you are right - removed now) p. 242 -- The author asks why the magic triangle is symmetric across the diagonal. I don't know: this is a fascinating puzzle! But it should be emphasized that while the Freudenthal-Tits construction of the magic square leaves this symmetry mysterious, Vinberg's construction makes it obvious: E. B. Vinberg, A construction of exceptional simple Lie groups (Russian), Tr. Semin. Vektorn. Tensorn. Anal. 13 (1966), 7-9. A more accessible reference is: A. L. Onishchik and E. B. Vinberg, eds., Lie Groups and Lie Algebras III, Springer, Berlin, 1991, pp. 167-178. A more recent construction of the magic square due to Barton and Sudbery also has manifest symmetry: Chris H. Barton and Anthony Sudbery, Magic squares of Lie algebras, preprint available as math.RA/0001083. A tour of all these constructions, explaining how they are related, can be found here: John Baez, The octonions, Bull. Amer. Math. Soc. 39 (2002), 145-205. These references should be cited, for the benefit of the reader who wants to ponder the mysterious symmetry of the magic triangle. (PC 1 Aug 2007: all of this now promoted to a higher level of limbo, into the \Preliminary{ parts of the webbook manuscript} ) p. 263 -- Reference [100] is missing. p. 267 -- For some reason the references from [1] to [165] are listed alphabetically by author, and then [166] to [204] go alphabetically by author starting at the beginning of the alphabet again. Now I see that some of the references I suggested including do in fact appear --- but without citations at the appropriate places in the text. Edits of the web epilogue (not in the book, so not entered yet) p. 240 -- ``I invented the planar field theory....'' should be ``I invented a planar field theory....'' p. 242 -- in the second to last sentence, ``outstriping'' should be ``outstripping''. Despite these somewhat grumpy corrections, I must emphasize that I found reading the book quite enjoyable.
Global analysis of strong solutions for the viscous liquid-gas two-phase flow model in a bounded domain On optimal controls in coefficients for ill-posed non-Linear elliptic Dirichlet boundary value problems June 2018, 23(4): 1395-1410. doi: 10.3934/dcdsb.2018156 Well-posedness in critical spaces for a multi-dimensional compressible viscous liquid-gas two-phase flow model Haibo Cui 1, , Qunyi Bie 2,, and Zheng-An Yao 3, School of Mathematical Sciences, Huaqiao University, Quanzhou 362021, China College of Science & Three Gorges Mathematical Research Center, China Three Gorges University, Yichang 443002, China School of Mathematics, Sun Yat-Sen University, Guangzhou 510275, China * Corresponding author Received August 2016 Revised February 2018 Published April 2018 Fund Project: Research Supported by the NNSF of China (Grant Nos. 11601164, 11271381 and 11701325), the National Basic Research Program of China (973 Program) (Grant No. 2010CB808002), the Natural Science Foundation of Fujian Province of China (Grant Nos. 2016J05010 and 2017J05007) and the Scientific Research Funds of Huaqiao University (Grant No.15BS201) This paper is dedicated to the study of the Cauchy problem for a compressible viscous liquid-gas two-phase flow model in $\mathbb{R}^N\,(N≥2)$. We concentrate on the critical Besov spaces based on the $L^p$ setting. We improve the range of Lebesgue exponent $p$, for which the system is locally well-posed, compared to [22]. Applying Lagrangian coordinates is the key to our statements, as it enables us to obtain the result by means of Banach fixed point theorem. Keywords: Liquid-gas two-phase flow model, local well-posedness, critical Besov spaces, Lagrangian coordinates. Mathematics Subject Classification: Primary: 35Q35, 76N10. Citation: Haibo Cui, Qunyi Bie, Zheng-An Yao. Well-posedness in critical spaces for a multi-dimensional compressible viscous liquid-gas two-phase flow model. Discrete & Continuous Dynamical Systems - B, 2018, 23 (4) : 1395-1410. doi: 10.3934/dcdsb.2018156 H. Bahouri, J. Y. Chemin and R. Danchin, Fourier Analysis and Nonlinear Partial Differential Equations, Grundlehren der Mathematischen Wissenschaften, Springer, Heidelberg, 2011. Google Scholar Q. L. Chen, C. X. Miao and Z. F. Zhang, On the well-posedness for the viscous shallow water equations, SIAM J. Math. Anal., 40 (2008), 443-474. doi: 10.1137/060660552. Google Scholar Q. L. Chen, C. X. Miao and Z. F. Zhang, Well-posedness in critical spaces for the compressible Navier-Stokes equations with density dependent viscosities, Rev. Mat. Iberoam., 26 (2010), 915-946. Google Scholar N. Chikami and R. Danchin, On the well-posedness of the full compressible Navier-Stokes system in critical Besov spaces, J. Differential Equations, 258 (2015), 3435-3467. doi: 10.1016/j.jde.2015.01.012. Google Scholar H. B. Cui, W. J. Wang, L. Yao and C. J. Zhu, Decay rates for a nonconservative compressible generic two-fluid model, SIAM J. Math. Anal., 48 (2016), 470-512. doi: 10.1137/15M1037792. Google Scholar H. B. Cui, H. Y. Wen and H. Y. Yin, Global classical solutions of viscous liquid-gas two-phase flow model, Math. Methods Appl. Sci., 36 (2013), 567-583. doi: 10.1002/mma.2614. Google Scholar R. Danchin, Global existence in critical spaces for compressible Navier-Stokes equations, Invent. Math., 141 (2000), 579-614. doi: 10.1007/s002220000078. Google Scholar R. Danchin, Lagrangian approach for the compressible Navier-Stokes equations, Ann. Inst. Fourier, 64 (2014), 753-791. doi: 10.5802/aif.2865. Google Scholar R. 
Danchin and P. B. Mucha, A Lagrangian approach for the incompressible Navier-Stokes equations with variable density, Comm. Pure Appl. Math., 65 (2012), 1458-1480. doi: 10.1002/cpa.21409. Google Scholar R. Danchin, Local theory in critical spaces for compressible viscous and heat-conductive gases, Comm. Partial Differential Equations, 26 (2001), 1183-1233. doi: 10.1081/PDE-100106132. Google Scholar S. Evje, T. Flåtten and H. Friis, Global weak solutions for a viscous liquid-gas model with transition to single-phase gas flow and vacuum, Nonlinear Anal., 70 (2009), 3864-3886. doi: 10.1016/j.na.2008.07.043. Google Scholar S. Evje and K. Karlsen, Global existence of weak solutions for a viscous two-phase model, J. Differential Equations, 245 (2008), 2660-2703. doi: 10.1016/j.jde.2007.10.032. Google Scholar S. Evje and K. Karlsen, Global weak solutions for a viscous liquid-gas model with singular pressure law, Commun. Pure Appl. Anal., 8 (2009), 1867-1894. doi: 10.3934/cpaa.2009.8.1867. Google Scholar C. C. Hao and H. L. Li, Well-posedness for a multidimensional viscous liquid-gas two-phase flow model, SIAM J. Math. Anal., 44 (2012), 1304-1332. doi: 10.1137/110851602. Google Scholar P. B. Mucha, The cauchy problem for the compressible Navier-Stokes equations in the Lp-framework, Nonlinear Anal., 52 (2003), 1379-1392. doi: 10.1016/S0362-546X(02)00270-5. Google Scholar J. Nash, Le probléme de Cauchy pour les équations différentielles d'un fluide général, Bull. Soc. Math. France, 90 (1962), 487-497. Google Scholar A. Prosperetti and G. Tryggvason, Computational Methods for Multiphase Flow, Cambridge University Press, Cambridge, 2009. Google Scholar T. Runst and W. Sickel, Sobolev Spaces of Fractional Order, Nemytskij Operators, and Nonlinear Partial Differential Equations, Volume 3, Walter de Gruyter, Berlin, 1996. Google Scholar A. Valli, An existence theorem for compressible viscous fluids, Ann. Mat. Pura Appl., 130 (1982), 197-213. doi: 10.1007/BF01761495. Google Scholar A. Valli and W. M. Zajaczkowski, Navier-Stokes equations for compressible fluids: Global existence and qualitative properties of the solutions in the general case, Comm. Math. Phys., 103 (1986), 259-296. doi: 10.1007/BF01206939. Google Scholar H. Y. Wen, L. Yao and C. J. Zhu, A blow-up criterion of strong solution to a 3D viscous liquid-gas two-phase flow model with vacuum, J. Math. Pures Appl., 97 (2012), 204-229. doi: 10.1016/j.matpur.2011.09.005. Google Scholar F. Y. Xu and J. Yuan, On the well-posedness for a multi-dimensional compressible viscous liquid-gas two-phase flow model in critical spaces, Z. Angew. Math. Phys., 66 (2015), 2395-2417. doi: 10.1007/s00033-015-0529-7. Google Scholar L. Yao, T. Zhang and C. J. Zhu, Existence and asymptotic behavior of global weak solutions to a 2D viscous liquid-gas two-phase flow model, SIAM J. Math. Anal., 42 (2010), 1874-1897. doi: 10.1137/100785302. Google Scholar L. Yao, T. Zhang and C. J. Zhu, A blow-up criterion for a 2d viscous liquid-gas two-phase flow model, J. Differential Equations, 250 (2011), 3362-3378. doi: 10.1016/j.jde.2010.12.006. Google Scholar L. Yao and C. J. Zhu, Free boundary value problem for a viscous two-phase model with mass-dependent viscosity, J. Differential Equations, 247 (2009), 2705-2739. doi: 10.1016/j.jde.2009.07.013. Google Scholar L. Yao and C. J. Zhu, Existence and uniqueness of global weak solution to a two-phase flow model with vacuum, Math. Ann., 349 (2011), 903-928. doi: 10.1007/s00208-010-0544-0. Google Scholar Guochun Wu, Yinghui Zhang. 
Global analysis of strong solutions for the viscous liquid-gas two-phase flow model in a bounded domain. Discrete & Continuous Dynamical Systems - B, 2018, 23 (4) : 1411-1429. doi: 10.3934/dcdsb.2018157 Haiyan Yin, Changjiang Zhu. Convergence rate of solutions toward stationary solutions to a viscous liquid-gas two-phase flow model in a half line. Communications on Pure & Applied Analysis, 2015, 14 (5) : 2021-2042. doi: 10.3934/cpaa.2015.14.2021 Yingshan Chen, Mei Zhang. A new blowup criterion for strong solutions to a viscous liquid-gas two-phase flow model with vacuum in three dimensions. Kinetic & Related Models, 2016, 9 (3) : 429-441. doi: 10.3934/krm.2016001 Feimin Huang, Dehua Wang, Difan Yuan. Nonlinear stability and existence of vortex sheets for inviscid liquid-gas two-phase flow. Discrete & Continuous Dynamical Systems - A, 2019, 39 (6) : 3535-3575. doi: 10.3934/dcds.2019146 K. Domelevo. Well-posedness of a kinetic model of dispersed two-phase flow with point-particles and stability of travelling waves. Discrete & Continuous Dynamical Systems - B, 2002, 2 (4) : 591-607. doi: 10.3934/dcdsb.2002.2.591 Zhichun Zhai. Well-posedness for two types of generalized Keller-Segel system of chemotaxis in critical Besov spaces. Communications on Pure & Applied Analysis, 2011, 10 (1) : 287-308. doi: 10.3934/cpaa.2011.10.287 Xiaoping Zhai, Yongsheng Li, Wei Yan. Global well-posedness for the 3-D incompressible MHD equations in the critical Besov spaces. Communications on Pure & Applied Analysis, 2015, 14 (5) : 1865-1884. doi: 10.3934/cpaa.2015.14.1865 Jan Prüss, Yoshihiro Shibata, Senjo Shimizu, Gieri Simonett. On well-posedness of incompressible two-phase flows with phase transitions: The case of equal densities. Evolution Equations & Control Theory, 2012, 1 (1) : 171-194. doi: 10.3934/eect.2012.1.171 Hongmei Cao, Hao-Guang Li, Chao-Jiang Xu, Jiang Xu. Well-posedness of Cauchy problem for Landau equation in critical Besov space. Kinetic & Related Models, 2019, 12 (4) : 829-884. doi: 10.3934/krm.2019032 Qunyi Bie, Qiru Wang, Zheng-An Yao. On the well-posedness of the inviscid Boussinesq equations in the Besov-Morrey spaces. Kinetic & Related Models, 2015, 8 (3) : 395-411. doi: 10.3934/krm.2015.8.395 Wei Luo, Zhaoyang Yin. Local well-posedness in the critical Besov space and persistence properties for a three-component Camassa-Holm system with N-peakon solutions. Discrete & Continuous Dynamical Systems - A, 2016, 36 (9) : 5047-5066. doi: 10.3934/dcds.2016019 Nikolaos Bournaveas. Local well-posedness for a nonlinear dirac equation in spaces of almost critical dimension. Discrete & Continuous Dynamical Systems - A, 2008, 20 (3) : 605-616. doi: 10.3934/dcds.2008.20.605 Fucai Li, Yanmin Mu, Dehua Wang. Local well-posedness and low Mach number limit of the compressible magnetohydrodynamic equations in critical spaces. Kinetic & Related Models, 2017, 10 (3) : 741-784. doi: 10.3934/krm.2017030 Theodore Tachim Medjo. A two-phase flow model with delays. Discrete & Continuous Dynamical Systems - B, 2017, 22 (9) : 3273-3294. doi: 10.3934/dcdsb.2017137 Kai Yan, Zhaoyang Yin. Well-posedness for a modified two-component Camassa-Holm system in critical spaces. Discrete & Continuous Dynamical Systems - A, 2013, 33 (4) : 1699-1712. doi: 10.3934/dcds.2013.33.1699 Paola Goatin, Sheila Scialanga. Well-posedness and finite volume approximations of the LWR traffic flow model with non-local velocity. Networks & Heterogeneous Media, 2016, 11 (1) : 107-121. 
doi: 10.3934/nhm.2016.11.107 Qunyi Bie, Haibo Cui, Qiru Wang, Zheng-An Yao. Incompressible limit for the compressible flow of liquid crystals in $ L^p$ type critical Besov spaces. Discrete & Continuous Dynamical Systems - A, 2018, 38 (6) : 2879-2910. doi: 10.3934/dcds.2018124 Hartmut Pecher. Local well-posedness for the nonlinear Dirac equation in two space dimensions. Communications on Pure & Applied Analysis, 2014, 13 (2) : 673-685. doi: 10.3934/cpaa.2014.13.673 T. Tachim Medjo. Averaging of an homogeneous two-phase flow model with oscillating external forces. Discrete & Continuous Dynamical Systems - A, 2012, 32 (10) : 3665-3690. doi: 10.3934/dcds.2012.32.3665 Theodore Tachim-Medjo. Optimal control of a two-phase flow model with state constraints. Mathematical Control & Related Fields, 2016, 6 (2) : 335-362. doi: 10.3934/mcrf.2016006 Haibo Cui Qunyi Bie Zheng-An Yao
CommonCrawl
Tohoku Mathematical Journal Tohoku Math. J. (2) Volume 68, Number 3 (2016), 349-375. Homotopy theory of mixed Hodge complexes Joana Cirici and Francisco Guillén We show that the category of mixed Hodge complexes admits a Cartan-Eilenberg structure, a notion introduced by Guillén-Navarro-Pascual-Roig leading to a good calculation of the homotopy category in terms of (co)fibrant objects. Using Deligne's décalage, we show that the homotopy categories associated with the two notions of mixed Hodge complex introduced by Deligne and Beilinson respectively, are equivalent. The results provide a conceptual framework from which Beilinson's and Carlson's results on mixed Hodge complexes and extensions of mixed Hodge structures follow easily. Tohoku Math. J. (2), Volume 68, Number 3 (2016), 349-375. Revised: 25 November 2014 First available in Project Euclid: 23 September 2016 https://projecteuclid.org/euclid.tmj/1474652264 doi:10.2748/tmj/1474652264 Primary: 55U35: Abstract and axiomatic homotopy theory Secondary: 32S35: Mixed Hodge theory of singular varieties [See also 14C30, 14D07] Mixed Hodge theory homotopical algebra mixed Hodge complex filtered derived category weight filtration absolute filtration diagram category Cartan-Eilenberg category décalage Cirici, Joana; Guillén, Francisco. Homotopy theory of mixed Hodge complexes. Tohoku Math. J. (2) 68 (2016), no. 3, 349--375. doi:10.2748/tmj/1474652264. https://projecteuclid.org/euclid.tmj/1474652264 A. A. Beǐlinson, Notes on absolute Hodge cohomology, Applications of algebraic $K$-theory to algebraic geometry and number theory, Part I, II (Boulder, Colo., 1983), 35–68, Contemp. Math. 55, Amer. Math. Soc., Providence, RI, 1986. Mathematical Reviews (MathSciNet): MR862628 K. S. Brown, Abstract homotopy theory and generalized sheaf cohomology, Trans. Amer. Math. Soc. 186 (1973), 419–458. Digital Object Identifier: doi:10.1090/S0002-9947-1973-0341469-9 J. A. Carlson, Extensions of mixed Hodge structures, Journées de Géometrie Algébrique d'Angers Juillet 1979/Algebraic Geometry, Angers, 1979, pp. 107–127, Sijthoff & Noordhoff, Alphen aan den Rijn–Germantown, Md., 1980. H. Cartan and S. Eilenberg, Homological algebra, Princeton Univ. Press, Princeton, N. J., 1956. Mathematical Reviews (MathSciNet): MR77480 J. Cirici, Cofibrant models of diagrams: mixed Hodge structures in rational homotopy, Trans. Amer. Math. Soc. 367 (2015), 5935–5970. Digital Object Identifier: doi:10.1090/S0002-9947-2014-06405-2 J. Cirici and F. Guillén, ${E}_1$-formality of complex algebraic varieties, Algebr. Geom. Topol. 14 (2014), 3049–3079. D.-C. Cisinski, Catégories dérivables, Bull. Soc. Math. France 138 (2010), 317–393. P. Deligne, Théorie de Hodge, II, Inst. Hautes Études Sci. Publ. Math. 40 (1971), 5–57. Digital Object Identifier: doi:10.1007/BF02684692 P. Deligne, Théorie de Hodge, III, Inst. Hautes Études Sci. Publ. Math. 44 (1974), 5–77. F. El Zein, Mixed Hodge structures, Trans. Amer. Math. Soc. 275 (1983), 71–106. S. Gelfand and Y.
Manin, Methods of homological algebra, Springer Monographs in Mathematics, Springer-Verlag, Berlin, second ed., 2003. P. Griffiths and W. Schmid, Recent developments in Hodge theory: a discussion of techniques and results, Discrete subgroups of Lie groups and applicatons to moduli, 31–127, Oxford Univ. Press, Bombay, 1975. F. Guillén, V. Navarro, P. Pascual and A. Roig, A Cartan-Eilenberg approach to homotopical algebra, J. Pure Appl. Algebra 214 (2010), 140–164. Digital Object Identifier: doi:10.1016/j.jpaa.2009.04.009 R. M. Hain, The de Rham homotopy theory of complex algebraic varieties II, $K$-Theory 1 (1987), 481–497. S. Halperin and D. Tanré, Homotopie filtrée et fibrés $C^\infty$, Illinois J. Math. 34 (1990), 284–324. Project Euclid: euclid.ijm/1255988268 P. S. Hirschhorn, Model categories and their localizations, Mathematical Surveys and Monographs 99, American Mathematical Society, Providence, RI, 2003. A. Huber, Mixed motives and their realization in derived categories, Lecture Notes in Mathematics 1604, Springer-Verlag, Berlin, 1995. L. Illusie, Complexe cotangent et déformations, I, Lecture Notes in Mathematics 239, Springer-Verlag, Berlin, 1971. B. Keller, Chain complexes and stable categories, Manuscripta Math. 67 (1990), 379–417. G. Laumon, Sur la catégorie dérivée des $\mathcal{D}$-modules filtrés, Algebraic geometry (Tokyo/Kyoto, 1982), Lecture Notes in Mathematics 1016, 151–237, Springer, Berlin, 1983. Digital Object Identifier: doi:10.1007/BFb0099964 M. Levine, Mixed motives, Handbook of $K$-theory, Vol. 1, 2, 429–521, Springer, Berlin, 2005. J. W. Morgan, The algebraic topology of smooth algebraic varieties, Inst. Hautes Études Sci. Publ. Math. 48 (1978), 137–204. V. Navarro, Sur la théorie de Hodge-Deligne, Invent. Math. 90 (1987), 11–76. K. H. Paranjape, Some spectral sequences for filtered complexes and applications, J. Algebra 186 (1996), 793–806. Digital Object Identifier: doi:10.1006/jabr.1996.0395 P. Pascual, Some remarks on Cartan-Eilenberg categories, Collect. Math. 63 (2012), 203–216. Digital Object Identifier: doi:10.1007/s13348-011-0037-9 C. Peters and J. Steenbrink, Mixed Hodge structures, A Series of Modern Surveys in Mathematics 52, Springer-Verlag, Berlin, 2008. D. Quillen, Homotopical algebra, Lecture Notes in Mathematics 43, Springer-Verlag, Berlin, 1967. M. Saito, Mixed Hodge modules, Publ. Res. Inst. Math. Sci. 26 (1990), 221–333. Digital Object Identifier: doi:10.2977/prims/1195171082 M. Saito, Mixed Hodge complexes on algebraic varieties, Math. Ann. 316 (2000), 283–331. Digital Object Identifier: doi:10.1007/s002080050014 R. W. Thomason, Homotopy colimits in the category of small categories, Math. Proc. Cambridge Philos. Soc. 85 (1979), 91–109. Digital Object Identifier: doi:10.1017/S0305004100055535 J.-L. Verdier, Des catégories dérivées des catégories abéliennes, Astérisque 239 (1997), xii+253. Tohoku University, Mathematical Institute
CommonCrawl
A unifying basis for the interplay of stress and chemical processes in the Earth: support from diverse experiments John Wheeler (ORCID: 0000-0002-7576-4465) Contributions to Mineralogy and Petrology volume 175, Article number: 116 (2020) The interplay between stress and chemical processes is a fundamental aspect of how rocks evolve, relevant for understanding fracturing due to metamorphic volume change, deformation by pressure solution and diffusion creep, and the effects of stress on mineral reactions in crust and mantle. There is no agreed microscale theory for how stress and chemistry interact, so here I review support from eight different types of experiment for a relationship between stress and chemistry which is specific to individual interfaces: (chemical potential) = (Helmholtz free energy) + (normal stress at interface) × (molar volume). The experiments encompass temperatures from -100 to 1300 degrees C and pressures from 1 bar to 1.8 GPa. The equation applies to boundaries with fluid and to incoherent solid–solid boundaries. It is broadly in accord with experiments that describe the behaviours of free and stressed crystal faces next to solutions, that document flow laws for pressure solution and diffusion creep, that address polymorphic transformations under stress, and that investigate volume changes in solid-state reactions. The accord is not in all cases quantitative, but the equation is still used to assist the explanation. An implication is that the chemical potential varies depending on the interface, so there is no unique driving force for reaction in stressed systems. Instead, the overall evolution will be determined by combinations of reaction pathways and kinetic factors. The equation described here should be a foundation for grain-scale models, which are a prerequisite for predicting larger scale Earth behaviour when stress and chemical processes interact. It is relevant for all depths in the Earth from the uppermost crust (pressure solution in basin compaction, creep on faults), reactive fluid flow systems (serpentinisation), the deeper crust (orogenic metamorphism), the upper mantle (diffusion creep), the transition zone (phase changes in stressed subducting slabs) to the lower mantle and core mantle boundary (diffusion creep). Pressure influences all chemical reactions, including those that occur in the Earth, spanning simple transformations such as diamond to graphite and complex ones involving many phases. This implies that stress, a more general state in which forces per unit area are different in different directions, must also influence reactions. Interactions of stress and chemical processes affect many aspects of Earth behaviour such as the rheology of the mantle when undergoing diffusion creep, reactive fluid flow in deforming media and fracturing of minerals due to reaction. One possible cause of intermediate-depth earthquakes in subduction zones is the volume reduction during the transformation of basalt to eclogite, possibly accommodated by huge stress buildups (Nakajima et al. 2013). These interactions are of practical importance in understanding, for example, how olivine fractures during serpentinisation, with implications for CO2 sequestration (Kelemen et al. 2011). Addition of water to anhydrite, forming gypsum, led to uplift and damage to an entire town when the solid volume increase of the reaction overcame the weight of overlying rocks (Schweizer et al.
2019); yet elsewhere the same reaction occurred without apparent deformation [e.g. Fig. 2c of De Paola et al. (2008)]. These examples show it is important to understand how stress and chemical processes influence each other. Our understanding of the effect of pressure on reaction is underpinned by standard thermodynamics, which describes systems where there is no differential stress, and the stress can be described as isotropic. However, there is no agreed theory which extends thermodynamics to include anisotropic stress and how it affects reaction and since many parts of the Earth are under stress, this is a significant gap in our understanding. It might be expected that the effects of stress are the same on rocks as on other polycrystalline materials so that Earth science could call upon such work, but to the writer's knowledge, there is no text summarising those effects in any branch of science. There are several widely quoted works on mathematical foundations, but those works are not always tied directly to the experiments conducted in other studies. In Earth science opinions differ on the importance of stress, in terms of the magnitude of effects on chemical equilibrium and even whether equilibrium exists (Hobbs and Ord 2017; Powell et al. 2018; Tajcmanova et al. 2015; Wheeler 2014, 2018). Those papers are all based on mathematical arguments and it would be useful to substantiate and test the contrasting predictions through experiments. Here I show that there are several different types of experiments already published over several decades which independently point to the same mathematical description of the effects of stress on chemical processes: namely, a single equation that applies at interfaces between crystals and relates local stresses to chemistry. To proceed, definitions of pressure and stress are needed. "Stress" is a second rank tensor σ from which the force per unit area on a notional surface of any orientation can be deduced. In this contribution, compressive stresses are taken as positive. "Pressure" strictly is used to imply an isotropic state of stress, in which the force per unit area is the same in all directions; commonly the phrase "hydrostatic pressure" is used, even though fluids need not be involved. Then, the well-established theory of thermodynamics is "hydrostatic thermodynamics", and extensions of it to address stressed systems are aspects of "non-hydrostatic thermodynamics". The nomenclature is perhaps not the best but is firmly established. The word "pressure" has been used in different ways in different works both within and outside Earth science. Some works define pressure as the average of the three principal stresses: here, to avoid ambiguity, I call this the "mean stress" σm. It is a simple and unique function of the stress tensor. Other works sometimes use the word pressure for the force per unit area across a particular interface, the "normal stress" σn. This value depends on the interface orientation. It can have different values even for a single stress tensor because interfaces of different orientation are always present in a polycrystal. The symbol P might also be used to indicate hydrostatic pressure in a reference system (not the actual system), pressure in one part of a system and so forth; I point out its differing meanings in the works I review. In this contribution, I give a brief mathematical background and then show how descriptions of eight different types of experiment are in accord with a particular equation. 
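The dependence of normal stress on interface orientation, central to the argument that follows, can be made concrete with a minimal numerical sketch. The stress tensor values below are illustrative assumptions, not data from any of the experiments reviewed.

```python
import numpy as np

# Illustrative stress tensor (MPa), compressive positive, principal axes along
# the coordinate axes: sigma_1 = 300, sigma_2 = 200, sigma_3 = 100.
sigma = np.diag([300.0, 200.0, 100.0])

# Mean stress: a single number for the whole tensor.
sigma_m = np.trace(sigma) / 3.0   # 200 MPa

def normal_stress(stress, normal):
    """Force per unit area acting across a plane with the given normal."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return n @ stress @ n

for n in ([1, 0, 0], [0, 0, 1], [1, 0, 1]):
    print(n, normal_stress(sigma, n), "MPa")
# Output: 300, 100 and 200 MPa. The normal stress depends on the interface
# orientation, whereas the mean stress (200 MPa) is unique for the tensor.
```

In a polycrystal, interfaces of all orientations coexist, so all of these normal stress values are realised simultaneously for a single stress state.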
Each work uses different language and notation, so it is necessary to explain some details to illustrate the extent to which the works overlap. Then I discuss the consequences of the equation, for a broader perspective on what the experiments tell us and to show how it should form a key part of grain-scale mathematical models: such models are a prerequisite for predicting larger-scale behaviour. Brief mathematical framework Some maths is required here to understand how the various experiments reviewed are linked back by the authors to basic thermodynamics. The physics of stress is well established; in terms of its chemical effects, although there is controversy, any mathematical description of the effects of stress must reduce to hydrostatic thermodynamics when the stress is isotropic and pressure has a single clearly defined value. Hydrostatic thermodynamics In hydrostatic thermodynamics, the Gibbs free energy is a number which is minimised in a system at equilibrium. For each phase, it has a functional dependence G(P, T) on pressure and temperature. G is measured in J or J/mol. If we express G in J, and the number of moles of a particular chemical component as N, then the chemical potential of that component is μ (\(=\partial G/\partial N\)). In this contribution, I do not address solid solutions (although the discussion here is relevant for them) because the experiments reviewed do not involve solid solutions. Consequently, if we express G in J/mol for a phase (as will be done in the rest of this contribution), then the chemical potential of a component with the composition of that phase is equal to G. This allows the thermodynamics of stressed systems to be related back to hydrostatic thermodynamics. When there is a mixture of reactants and products then the overall "driving force" for a reaction is often written as ΔG = G(products) − G(reactants), with suitable coefficients to ensure the reaction is balanced using chosen mineral formulae (e.g. MgSiO3 versus Mg2Si2O6 for enstatite). ΔG has units of J/mol and depends on the chosen coefficients. For instance, the reaction albite = jadeite + quartz could also be written as 2 albite = 2 jadeite + 2 quartz. The latter would have a ΔG double that of the former, but so long as the reaction is defined in a consistent fashion then this is not a problem. Changes in Helmholtz free energy ΔF and molar volume ΔV should be defined using the same reaction coefficients (notation used here is summarised in Table 1). In this contribution, there is a need in places to refer to the solid volume change in a reaction ΔVs which again should be defined using the same reaction coefficients as used for defining the other Δ quantities but omitting those related to the fluid. In a hydrostatic system, the driving force for reaction can be defined as affinity. There are two IUPAC definitions for affinity, but they are numerically identical. Affinity is relevant for quantifying nucleation of product phases (Pattison et al. 2011) but nucleation is outside the scope of this contribution. For an ongoing reaction involving existing phases we can write affinity A = − ΔG. When we are dealing with stoichiometric phases this can also be written in terms of chemical potentials. For instance, if we take albite into the stability field of jadeite and quartz then we would have A = μalb − μjd − μq. In stressed systems referring to μ and A rather than to G and ΔG proves advantageous.
Table 1 Notation Nonhydrostatic thermodynamics There is a common assumption that a generalised version of Gibbs free energy must exist in a stressed system, and once the generalised form of G is established, the equations, methods and databases of hydrostatic thermodynamics can be used to calculate equilibria: the choice is to use mean stress. In contrast, others (Paterson 1973; Wheeler 2018) argue that there is no Gibbs free energy in a stressed system, and local chemical potentials vary from place to place, governed by different interface orientations and local normal stresses. Both approaches reduce to hydrostatic thermodynamics when stress is isotropic, but their predictions are quite different, so clarification is required. Given the ambiguity over the existence of G, I express the mathematics in terms of local chemical potentials. There is a single equation which applies at an interface and, I will argue, is supported by the experiments I review: $$\mu =F+ {\sigma }_{n}V$$ Here μ is chemical potential (of a chemical component with the same composition as the solid), F is the Helmholtz free energy per mole (not itself strongly dependent on stress) and V is the molar volume of the crystalline solid (again not strongly dependent on stress since crystalline solids are not very compressible). The equation indicates that chemical potential is a function of each particular interface and its orientation, and is roughly linear in σn, although there are circumstances in which the small non-linear dependence of F on stress also plays a role in explaining experimental results. There are also circumstances in which the curvature of the interface is large enough that surface energy makes a significant contribution. If the surface or interface energy is γ then a term is added as follows. $$\mu =F+ {\sigma }_{n}V+\gamma \kappa V$$ where κ is the curvature (positive for convex-out surfaces). Since differences in chemical potential drive transport and reaction, and chemical potential depends on interface orientation, I argue that there is no chemical equilibrium in a stressed system (Wheeler 2014, 2018). In hydrostatic thermodynamics, for a single component solid the chemical potential is equal to the Gibbs free energy per mole as follows: $$\mu =G=F+ PV$$ When stress is isotropic, the normal stress is equal to P regardless of interface orientation, and Eq. 1 reduces to Eq. 3. Equilibrium is possible since interface orientation no longer appears in the mathematical description. In contrast to Eq. 1, other works (e.g. Tajcmanova et al. 2015; Verhoogen 1951) assert that for a single component solid the chemical potential and Gibbs free energy are as follows: $$\mu =G=F+ {\sigma }_{m}V$$ where σm is the mean stress—not dependent on any particular interface. Again, this reduces to Eq. 3 when stress is isotropic. Equation 1 was proved by Gibbs for a stressed solid next to a fluid. Disputes arise when its use is extended to solid/solid boundaries, and confusion arises when there is an adsorbed aqueous film along a solid/solid boundary. Such films are commonly described as "fluid films" and then may be incorrectly thought to be at fluid pressure Pf, but in general, they carry stress (Gratier et al. 2013; Wheeler 2018). I assert that existing theoretical work in Earth and other branches of science justifies the use of Eq. 1 as a start to describe chemical behaviour at all incoherent interfaces (Wheeler 2018), which comprise the overwhelming majority of crystalline interfaces in the Earth.
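The practical difference between Eq. 1 and Eq. 4 can be illustrated with rough numbers. In the sketch below the molar volume and stresses are assumptions chosen only for illustration: Eq. 1 assigns different chemical potentials to differently oriented interfaces of the same grain, whereas Eq. 4 assigns one value based on mean stress.

```python
# Minimal sketch comparing Eq. 1 and Eq. 4 for a one-component solid.
# Illustrative values: molar volume close to quartz, differential stress 100 MPa.
V = 2.27e-5                       # molar volume, m^3/mol
F = 0.0                           # Helmholtz free energy taken as reference zero, J/mol
s1, s2, s3 = 400e6, 350e6, 300e6  # principal stresses, Pa

mu_high = F + s1 * V                      # Eq. 1, interface normal to sigma_1
mu_low  = F + s3 * V                      # Eq. 1, interface normal to sigma_3
mu_mean = F + (s1 + s2 + s3) / 3.0 * V    # Eq. 4, one value for the whole grain

print(mu_high - mu_low)   # about 2270 J/mol between the two interfaces (Eq. 1)
print(mu_mean)            # about 7945 J/mol relative to F (Eq. 4, single value)
```

Under Eq. 1 the kilojoule-per-mole scale of the difference between interfaces is set directly by the differential stress and the molar volume, so it grows with both.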
However theoretical arguments on their own are not conclusive, in terms of whether the theory is accepted and how it is to be applied. So, in this contribution, I summarise 8 different types of the experiment (Table 2, Fig. 1) that support Eq. 1, and in some instances explain why the experiments contradict Eq. 4. The experiments encompass a wide range of conditions: temperatures from − 100 to 1300 degrees C and pressures from 1 bar to 1.8 GPa (Table 3). Because the use of G is at best ambiguous in stressed systems, it is necessary to rephrase the mathematical development used in published works in terms of chemical potential, without changing the actual mathematical results. Similarly, rather than refer to ΔG as a driving force for reaction, I generalise the affinity. As discussed for a hydrostatic system A can be written as a difference between chemical potential of reactants and products—according to Eq. 1 in a stressed system that will be dependent on the interfaces involved. In Wheeler (2014) I used the idea of a reaction pathway to describe which interfaces are involved in the reaction. This is relevant for relating the various experiments I review to each other, and the pathways are shown by grey arrows in Fig. 1. The term "generalised affinity" was in fact used previously by Schmid et al. (2009), to explain an experiment involving a solid-state reaction under stress which is discussed later. Table 2 Summary of 8 types of experiment Schematic diagrams to illustrate the experiments described. In each diagram green indicates a particular solid, other solids are labelled, and pale blue indicates an aqueous solution held in a notional beaker (grey). Red arrow indicates maximum compressive stress, pink arrow indicates minimum compressive stress, and grey arrow indicates the transport pathway for one or more chemicals. In most experiments stress is heterogeneous on some scale as described in Table 2 Table 3 Experimental conditions for experiments discussed The experiments I discuss are directly related to Earth science but some involve soluble salts and overlap with other research fields. Chemical potential and local stresses cannot easily be measured directly, so there are inferences built into the justification of Eq. 1 which will be discussed in each case. It can be used to formulate driving forces for various processes, using some mathematical details summarised in Appendix 1. In all the processes discussed, kinetics are important and are not always easy to quantify, and stress is generally heterogeneous. Despite these issues, I will show that Eq. 1 provides explanatory power. Experimental support for the equation Stressed solid next to fluid This first section discusses "free" surfaces of solids next to fluids. Mechanical equilibrium at the surface dictates that fluid pressure Pf equals normal stress σn in the solid. If σn is fixed, changes in tangential stresses σt might give rise to observable effects: changes in the chemical potential of the solid will give rise to changes in local equilibrium concentration in the fluid. Ristic et al. (1997) grew alum from a supersaturated solution, comparing unstressed crystals with others under tension. The tensile stress led to a reduction in growth rate relative to the unstressed state. The alum deformed plastically to a small extent but the work shows that the dislocations involved could not have influenced growth kinetics. The growth rate will be a function of the affinity (driving force) \({A= \mu }^{solution}- {\mu }^{solid}\). 
A slower growth rate would be in accord with the chemical potential of the solid under stress (with tangential stress σt ≠ normal stress σn) being higher than its value under unstressed conditions, reducing the affinity. McLellan (using Eq. 1) shows that, regardless of whether the tangential stress is relative tension or compression, the chemical potential always increases under stress; a version of that derivation is given in Appendix 1. Consequently, the driving force for growth is smaller in the stressed situation, in accord with the observed slower growth rate. In contrast Eq. 4 predicts that "the potential is thus increased by a compression, decreased by a traction" (Verhoogen 1951). Thus, crystals under tension (from context, his "traction" means tension) would decrease their solubility, and the growth rate should be faster. Equation 4 is therefore not in accord with observations. Morel and den Brok (2001) undertook experiments on crystals under compressive as well as tensile stress. They chose sodium chlorate (NaClO3) because it has elastic–brittle mechanical behaviour at room temperature, thus avoiding any complications introduced by plastic deformation. In each experiment, they drilled a hole in the crystal to create a heterogeneous stress state, with varying states of tangential stress around the hole (including compressive and tensile). The fluid involved was "saturated sodium chlorate solution" (i.e. a solution that would be in equilibrium with an unstressed crystal at 1 bar), with a small additional dilution, so the dissolution of the solid would be expected. They compared the dissolution behaviour of stressed and unstressed crystals. Regardless of whether the tangential stress was tension or compression, they found that stressed crystals dissolved faster than unstressed ones. They quantified the excess driving force due to stress in terms of the change in elastic strain energy given as their Eq. 1. I give a more general justification of their equation (based on Eq. 1 here) in Appendix 1. Their discussion can therefore be rephrased in terms of changes in μ given by Eq. 1. Morel and den Brok (2001) use this to show that the change in driving force (Δμ) for dissolution due to stress (~ 0.1 J/mol) is minor in comparison to the driving force due to undersaturation (~ 60 J/mol), yet the actual change in dissolution rate is disproportionately large. Therefore, these experiments are in qualitative agreement with Eq. 1 in terms of the sign, but not quantitative agreement. One explanation may relate to instabilities and roughening of the stressed surface which might modify the average dissolution rate. Such instabilities are discussed in the next section. Ostapenko et al. (1972) undertook experiments on stressed halite in solution, motivated by the need to understand, in their words, "two diametrically opposed theories" about the chemical potential of a solid under conditions of non-hydrostatic stress. They are referring to the difference between Eq. 1 (they cite Gibbs) and Eq. 4 (they cite Verhoogen (1951)). This is a reminder that the controversy I mention here is not new. One aspect of their interpretation requires modification (Appendix 3) but this does not affect their conclusion. In brief, they used an optical method to detect minute changes in concentration adjacent to a crystal of halite in solution. They applied compressive stresses and found no detectable changes in concentration adjacent to the crystal. They argued that Eq. 
4 predicts changes in concentration large enough that their method would have detected them, while Eq. 1 predicts concentration changes below the detectability limit. Consequently, this paper rejected Eq. 4 in favour of Eq. 1. Stressed solid next to fluid - instabilities den Brok and Morel (2001) put crystals of alum under compressive stress in a slightly undersaturated solution and discovered that instabilities develop. As in their experiments on sodium chlorate described above, they drilled a hole in the crystal to create a heterogeneous stress field, amplified around the hole. They found grooves developed in the initially planar crystal surface and the groove spacing in some experiments was smaller at higher stress. For example, for local amplified stress of 15 MPa near the hole, the groove wavelength was 20–40 μm (Fig. 2). [Figure 2 caption: Grooves showing surface instability in alum crystal under compressive stress (up-down) surrounded by solution. Bulk stress was 2.7 ± 0.2 MPa, amplified to 13–14 MPa around the hole (den Brok and Morel 2001).] To explain this they invoked theory from materials science. Asaro and Tiller (1972) and Grinfeld (1986) show mathematically that a stressed planar interface is chemically unstable with respect to the development of periodic undulations above a certain wavelength, named after these works as Asaro-Tiller-Grinfeld (ATG) instabilities. In an undulation, the normal stress remains fixed but the normal direction varies spatially. The stress field near the surface is then non-uniform, though it becomes uniform over a distance of a few wavelengths inside the solid (Srolovitz 1989). Figure 3 shows that in this non-uniform stress state the elastic strain energy (Helmholtz free energy) is more in a trough than in a peak. [Figure 3 caption: Helmholtz free energy near an undulating crystal surface when a N–S differential stress is applied, displayed in multiples of the uniform value in the crystal interior. Calculated using Matlab PDE solver with Poisson's ratio 0.3; when scaled this way, the pattern does not depend on the Young's modulus value.] At any interface, normal stresses must balance on either side, so here σn = Pf ≈ 0 in comparison to the larger tangential stresses. In accord with Eq. 1 and setting σn = 0, there is a chemical potential difference between troughs and peaks. This is a driving force for dissolution in troughs and precipitation at peaks, so any perturbation will amplify regardless of wavelength. However, the surface energy provides an opposite effect. The troughs are concave outwards and peaks are convex outwards, so there is a driving force for peaks to dissolve and material to precipitate in troughs, and any perturbation will diminish: the minimum energy configuration is a flat surface. Analysis incorporating both effects (Eq. 10 of Srolovitz (1989), which is Eq. 2 here) shows that for short wavelengths the surface energy effect dominates but for long wavelengths λ the stress effect dominates. Srolovitz made a "crude" initial estimate of critical wavelength as $${\lambda }_{0}^{\prime}=\frac{8\gamma E}{{\sigma }_{t}^{2}}.$$ where γ is surface energy. Using this den Brok and Morel (2001) showed that for local amplified stress of 15 MPa near the hole, the wavelength is predicted to be 35 μm. It is observed to be 20–40 μm.
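The crude estimate just quoted can be evaluated numerically. In the sketch below the surface energy and Young's modulus are assumed, order-of-magnitude values chosen only to illustrate how the predicted wavelength compares with the observations; the refined threshold is given in the next paragraph.

```python
def atg_wavelength_crude(gamma, E, sigma_t):
    """Crude ATG critical wavelength, lambda_0' = 8 * gamma * E / sigma_t**2 (SI units)."""
    return 8.0 * gamma * E / sigma_t**2

# Assumed illustrative values for alum: surface energy ~0.05 J/m^2,
# Young's modulus ~20 GPa; tangential stress of 15 MPa as quoted above.
print(atg_wavelength_crude(0.05, 20e9, 15e6) * 1e6, "micron")   # ~36 micron

# The stress enters as 1/sigma_t**2, so doubling the stress shortens the
# predicted wavelength fourfold, consistent with closer-spaced grooves at
# higher stress.
```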
I note here that a more detailed analysis shows that perturbations are predicted to amplify for, $$\lambda >{\lambda }_{0}=\frac{\pi \gamma E}{{\sigma }_{t}^{2}}.$$ There is a wavelength at which a maximum growth rate is predicted, a small multiple of λ0 depending on the kinetics (e.g. surface diffusion, volume diffusion, evaporation—condensation). Assuming that the instability develops via diffusion through the fluid, analogous to evaporation—condensation [Sect. 3B of Srolovitz (1989)] then the maximum growth rate is for λ = 2λ0 giving 28 μm for the alum example. There is broad agreement between predicted and observed wavelengths, and larger stresses decrease observed wavelengths, so ATG theory (based on Eq. 1 and Eq. 2) has explanatory power, though more experiments are needed to consolidate the link to theory. In subsequent sections I deal with situations in which the second-order terms in stress do not have a significant effect, so Eqs. 1 and 3 can both be considered linear in stress. Pressure solution Pressure solution is a deformation mechanism where strain is accommodated by diffusion of material through an aqueous grain boundary film from interfaces with high normal stress to those with low normal stress. The film "should only with the greatest care be treated as continuous with the fluid in the pore space and is perhaps better treated as a separate thermodynamic phase" (Gratier et al. 2013); it is itself stressed. As material moves away from high-stress interfaces, shortening occurs parallel to σ1, and as the material precipitates at low-stress interfaces, extension occurs parallel to σ2 and/or σ3. Natural microstructures provide evidence for this, most clearly when the regions of precipitation have distinct features. Pore water may form part of the diffusion path in porous aggregates. Figure 4 shows aspects of the flow law in experiments on pressure solution of halite from Spiers et al. (1990). The form of the flow law in deformation experiments, in general, can be described by strain rate being proportional to (stress)n × (grain size)−p. Other quantities such as temperature are involved but it is the exponents n and p which are relevant here as they give insight into the underlying processes. In experiments on pressure solution (e.g. Fig. 4) there are often two key features: first, the strain rate is linear in differential stress (n = 1) and secondly, it is inversely proportional to the cube of grain size (p = 3). (a) Log–log plot of strain rate (volumetric compaction rate) versus effective stress σe, for the values of volumetric strain (ev) shown, for compaction of halite (Fig. 4 of Spiers et al. (1990)). Note slopes near 1. (b) Log–log plot of compaction rate versus grain size d (Fig. 5 of that work). Note slopes near − 3 A flow law fitting such observations can be derived theoretically beginning with a local equilibrium relationship between chemical potential and stress at an interface (e.g. Rutter (1983) Eq. 2) $$\mu =U-TS+ {\sigma }_{n}V$$ which is the same as Eq. 1 above. The local equilibrium is between the stressed solid and its dissolved form in the adjacent grain boundary film or in an adjacent pore fluid $$\mu = {\mu }^{gbf} {\text{ or }} \mu = {\mu }^{pf}$$ In pressure solution, we have long-range diffusive transport from high-stress interfaces to low-stress interfaces, through the grain boundary film. 
The driving force is then, for a single chemical component and ignoring σ2 for simplicity, $$A={\mu }_{1}^{gbf}- {\mu }_{3}^{gbf}=\left(F +{\sigma }_{1}V\right) -\left(F+{\sigma }_{3}V\right)=\left({\sigma }_{1}-{\sigma }_{3}\right)V$$ Note this quantity is not normally thought of as a chemical affinity, but it is consistent with other usage to call it that. The driving force is linear in the differential stress, so the strain rate is linear in differential stress, as observed (Newtonian viscosity), unless some kinetic factors are nonlinear. The full derivation of the flow law has been presented many times (Gratier et al. 2013; Rutter 1976, 1983). If local equilibrium is assumed between the stressed solid and the solid dissolved in the immediately adjacent grain boundary film, so diffusion is the main rate-controlling step, then $$\dot{e}=B\frac{Dc{V}^{2}w}{RT{d}^{3}}\left({\sigma }_{1}-{\sigma }_{3}\right)$$ [e.g. Rutter (1976)] where \(\dot{e}\) is strain rate, B a dimensionless constant, D is grain boundary diffusion coefficient, c is concentration of solute in grain boundary (mol/m3), w is grain boundary width, R the gas constant and d the grain size. The constant B depends on microstructural details such as porosity (Keszthelyi et al. 2016) and grain shape (Wheeler 2010), but what matters here is that the flow law predicts n = 1 and p = 3. de Meer and Spiers (1995) show that for gypsum deforming by pressure solution, under certain circumstances the strain rate is proportional to the inverse grain size (p = 1). This is explained by considering that dissolution and precipitation of the solid may be difficult; for example, for precipitation, thought to be rate controlling for gypsum, we require supersaturation in the pore fluid. $$\mu < {\mu }^{pf}.$$ This means that the chemical potential difference \({\mu }_{1}^{gbf}- {\mu }_{3}^{gbf}\) which drives diffusion, in particular, is no longer derived from Eq. 6 and the flow law is modified by dissolution and precipitation rate terms e.g. Table 2.3, Eqs. 2.33 of Gratier et al. (2013). However, the key point is that Eq. 1 still underpins the derivation of the flow law via their Eq. 2.14 (Eq. 6 here). Equation 4 has never been used to explain pressure solution phenomena, and it is difficult to see how it could help. For example, suppose we stress a single-phase polycrystal uniformly. Then Eq. 4 predicts that G and chemical potential would have a single value everywhere, and there would be no driving force for deformation by chemical transport. Diffusion creep Diffusion creep is similar to pressure solution except the grain boundaries are essentially dry (there may be some water molecules which enhance diffusion rates but there is no aqueous film) and an additional diffusion pathway may act through grain interiors by volume diffusion. Elliott (1973) first highlighted the similarities between pressure solution and diffusion creep. Karato et al. (1986) found a deformation regime in fine-grained olivine where the stress exponent was n ~ 1.4 and the grain size exponent p ~ 2–3. To explain this they called upon flow laws such as derived by Raj and Ashby (1971). To derive that flow law, that work states the following. "Chemical equilibrium in the boundary plane means that the chemical potential μ, of vacancies at, and immediately adjacent to a point on the boundary is related to the normal stress σn, acting on the boundary at that point [Raj and Ashby (1971) Eq.
B2]: $$\mu ={\mu }_{0}- {\sigma }_{n}\Omega$$ where Ω is the atomic volume, and \({\mu }_{0}\) the chemical potential appropriate to a stress-free reference state". Here they define chemical potential per atom rather than per mole. Noting that the chemical potential of a vacancy is minus the chemical potential of the missing atom, this is the same as Eq. 1 except that the relatively small second-order stress terms in the Helmholtz free energy have been neglected. Larché and Cahn (1985) include the second-order term but reiterate it is relatively small and can be neglected under many circumstances. Raj and Ashby (1971) present flow laws (their Eqs. 22 and 23) with n = 1, and p = 2 (for volume diffusion) or p = 3 (for grain boundary diffusion). For the latter, using notation as in Eq. 7, $$\dot{e}=B\frac{DVw}{RT{d}^{3}}\left({\sigma }_{1}-{\sigma }_{3}\right)$$ Note that the two equations have the same form except that the concentration c is missing in Eq. 8; the two equations are reconciled by recognising that the concentration c of a material in a solid boundary is equal to 1/V, and putting cV = 1 in Eq. 7 gives Eq. 8. The values of D are very different between pressure solution and diffusion creep but the form of the flow law is the same. Using the flow laws for lattice and grain boundary diffusion creep Karato et al. (1986) appeal to volume diffusion (as they find p near 2) under dry conditions and grain boundary diffusion (as they find p near 3) in wet conditions. The stress exponent n > 1 is due to the operation of other deformation mechanisms in parallel with diffusion creep. As for pressure solution, then, Eq. 1 can be used to help explain the observed rheologies. Force of crystallisation—single solid Force of crystallisation is a phrase which covers several phenomena in which stress and chemical processes interact and is related to pressure solution. I suggest it is useful to distinguish two "end member" scenarios in which the phrase is used. 1. Experiments where dead weights are rested on crystals growing from supersaturated solutions (such as alum, Becker and Day (1916)). The crystal may be lifted, showing that the chemical process of crystallisation causes work to be done against an applied force. Here the force is applied externally, does not change, and the system is not confined. 2. Experiments where a solid reaction is mediated by fluid, involves a solid volume increase, and occurs in a confined space; forces may then result as the growing crystals push against their surroundings (Wolterbeek et al. 2016, 2017). The chemical processes give rise to stress, which can play a role in fracturing and lead to practical engineering problems. Because of the confinement, the volume change cannot be manifest; instead, volume is conserved and elastic stresses increase. Force is developed internally, it builds up through time, and the system is confined. These are "end member" scenarios and in reality confined experiments may allow some displacements, for example, because the confining vessel is elastic (Wolterbeek et al. 2017) or deforms plastically (Ostapenko and Yaroshenko 1975). In both scenarios, chemical disequilibrium (e.g. CaO in the presence of water, water supersaturated with alum) causes new and existing phases to grow with some contribution from transport along aqueous grain boundary films into interfaces under high stress.
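Before turning to the force-of-crystallisation experiments themselves, the grain-size and stress dependences embodied in Eqs. 7 and 8 above can be illustrated with a short sketch. All parameter values below are assumptions for illustration only, not fits to the halite or olivine data discussed above.

```python
R = 8.314  # gas constant, J/(mol K)

def gb_diffusion_strain_rate(sigma_diff, d, D, c, V, w, T, B=40.0):
    """Strain rate with the form of Eq. 7 (pressure solution); setting c = 1/V
    so that c*V = 1 recovers Eq. 8 (grain boundary diffusion creep)."""
    return B * D * c * V**2 * w * sigma_diff / (R * T * d**3)

# Assumed illustrative values for a wet, fine-grained aggregate:
T, V, w = 300.0, 2.7e-5, 1e-9      # K, m^3/mol, m
D, c = 1e-12, 5e3                  # m^2/s, mol/m^3 (solubility in the film)

for d in (1e-4, 5e-5):             # grain sizes of 100 and 50 microns
    for s in (1e6, 2e6):           # differential stresses of 1 and 2 MPa
        print(f"d={d:.0e} m, stress={s:.0e} Pa:",
              gb_diffusion_strain_rate(s, d, D, c, V, w, T), "/s")
# Halving the grain size increases the rate eightfold (p = 3); doubling the
# stress doubles it (n = 1), the two signatures highlighted in the text.
```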
Correns (1949) undertook experiments in which a crystal of alum was placed in a supersaturated solution, with a force applied from above by a weighted lever and pushrod. The sideways stress on the crystal was 1 bar, since the system was not confined, and the vertical stress was > 1 bar in accord with the pushrod force and the area over which it was applied. Despite the "extra" vertical force, crystal growth displaced the pushrod upwards; the maximum stress that would still allow upwards displacement of the pushrod depended on the level of supersaturation in the solution. At low supersaturations the relationship was linear and at high supersaturations, it was nonlinear and also depended on the crystal face being loaded (Fig. 5). [Figure 5 caption: Relationships between supersaturation in an alum solution and vertical stress that can be supported by a growing crystal: Fig. 8 of Flatt et al. (2007), itself redrawn from Correns and Steinborn (1939). (a) Calculated curve from Eq. 12. (b) Curve fitted to data for stressed (111) faces (open triangle no growth, filled triangle growth). (c) Curve fitted to data for stressed (110) faces (open circle no growth, filled circle growth).] The figure includes a theoretical curve and to understand this we need an expression for chemical potential in stressed interfaces. When these potentials are lower than those in the surroundings, inwards diffusion will occur along the interface, and upwards displacement can occur. Correns provided a framework for explaining the observations, but Flatt et al. (2007) point out various ambiguities so here we will refer to their subsequent commentary. Considering the growth of a crystal from supersaturated solution, Steiger (2005) states "the chemical potential μp of a crystal face under pressure p takes the form" (his Eq. 4, verbatim) $${\mu }_{p}={\mu }_{0}+w+ p {V}_{m}$$ Despite describing p as "pressure", from the context it is clear the system is not under hydrostatic (isotropic) pressure p, but p is actually the normal stress across a loaded interface—so I choose here to rename the chemical potential in a stressed interface as \({\mu }_{\sigma }\). Steiger defines μ0 as the chemical potential of the solid in the unstressed reference state, and w as molar [elastic] strain energy, so that \({\mu }_{0}+w=F\). Noting that Vm is the molar volume of the solid in the stressed state, his Eq. 4 is seen to be the same as Eq. 1 here. Chemicals will diffuse into a stressed interface from a pore fluid if affinity \({\mu }^{pf}- {\mu }_{\sigma }=A>0\) so the maximum stress that can be supported is when those two chemical potentials are equal. Because the alum solution has more than one ionic species in solution it is sensible to write the relationship in terms of activity; the link to concentration comes later. $${\mu }^{pf}= {\mu }_{0}^{pf}+RT\mathrm{ln}\left(a/{a}_{o}\right)$$ where \({\mu }_{0}^{pf}\) is a reference chemical potential at reference activity \({a}_{o}\). Choosing this reference as the chemical potential of the solid under hydrostatic pressure p, written here as \({\mu }_{p}\), then this equation can be expressed in terms of the activity \({a}_{s}\) of dissolved material that would be in equilibrium with that solid at hydrostatic pressure p, i.e. the activity at which the solution is saturated.
$${\mu }^{pf}= {\mu }_{p}+RT\mathrm{ln}\left(a/{a}_{s}\right).$$ Then the maximum stress that can be sustained is when A = 0 and $${\mu }_{\sigma }= {\mu }_{p}+RT\mathrm{ln}\left(a/{a}_{s}\right).$$ But \({\mu }_{\sigma }\) and \({\mu }_{p}\) are both given by Eq. 1. Assuming uniform stress in the crystal, the Helmholtz free energy has the same value on stressed and free interfaces, as does the molar volume, so $$F+ \sigma V= F+ pV+RT\mathrm{ln}\left(a/{a}_{s}\right),$$ which is rearranged to $$\sigma -p= \frac{RT}{V}\mathrm{ln}\left(a/{a}_{s}\right)$$ Here the LHS is sometimes referred to as "crystallization pressure", but it should be noted it is not a pressure: it is the numerical difference between the normal stress on a loaded interface and the pressure in a nearby fluid. When Fig. 5 was first drawn, Correns and Steinborn (1939) implicitly assumed that activity was equal to concentration so $$\sigma -p= \frac{RT}{V}\mathrm{ln}\left(c/{c}_{e}\right)$$ Flatt et al. (2007) and Appendix 2 explain that Correns had not considered the ionic nature of his alum solution, so his equation relating chemical potential to concentration was incorrect; instead $$\sigma -p=\mathrm{ n}\frac{RT}{V}\mathrm{ln}\left(c/{c}_{e}\right)$$ where n is about 3.5. The calculated curve in Fig. 5, when recalculated, no longer fits the data. Rather than modify Eq. 9, Flatt et al. (2007) suggest that for a number of reasons the measured "pressures" (stresses) were actually higher than those presented, but the original works do not provide enough experimental detail to be sure. Attempts to repeat the experiments have proved difficult (Caruso and Flatt 2014). There are many other experiments, focussed on building stone deterioration, which are interpreted using Eq. 10 but they are generally complex and are not quite direct tests of the equation. Force of crystallisation—two or more solids So far we have considered a system with just one solid, and a supersaturated solution. Force of crystallisation is also manifest in systems where reactions involve one solid reacting to another, mediated by and involving fluid. The examples I cite are all hydration reactions, one with CO2 also involved, where experiments and theory have been compared: lime to portlandite (Ostapenko and Yaroshenko 1975; Wolterbeek et al. 2017), bassanite to gypsum (Ostapenko and Yaroshenko 1975; Skarbek et al. 2018), periclase to brucite (Ostapenko 1976; Zheng et al. 2018) and olivine carbonation (Xing et al. 2018). In these papers a particular equation appears repeatedly, relating force of crystallisation to the ΔG of reaction, i.e. the change in Gibbs free energy between reactants and products, or departure from chemical equilibrium. Some works assert that G is not defined in a stressed system (Kamb 1961; Paterson 1973; Wheeler 2018), therefore such expressions should be treated with care. However, in the equation below it is defined as the ΔG the reaction would have if all reactants and products were under hydrostatic fluid pressure P, which remains a well-defined number (from context, it means G(reactants) – G(products)). Then $$\sigma -p= \frac{\Delta G}{\Delta {V}_{s}}$$ using my notation and where ΔVs is the solid volume change (in m3/mol) calculated using the same reaction coefficients as used for ΔG. Note the similarity with Eq. 11 except instead of V we have ΔVs in the denominator. There is some uncertainty over this equation, for example Kelemen et al. (2011) write "the volume used in the denominator of [their] Eq.
7 should probably be ΔVs, as written" (my italics), so it is useful to trace the history of its derivation. Wolterbeek et al. (2017) derive an expression like Eq. 13 (their Eq. 13) beginning with their Eq. 6 which apart from notation is the same as Eq. 1. $${\mu }_{i}^{\sigma T}\approx {F}_{i}^{\sigma T}+ {\sigma }_{n}{V}_{m,i}^{\sigma T}$$ The approximation is indicated because the surface energy term (Eq. 2) is omitted (Wolterbeek pers. comm.). The earliest use of Eq. 13 known to me is in Ostapenko and Yaroshenko (1975) though that work does not refer to Eq. 1. In Appendix 4 I show how their approach is equivalent to that of Wolterbeek et al. (2017) and explained by Eq. 1. I argue that the equation relates to a specific reaction pathway, in this case, water moving into an interface where both lime and portlandite are stressed, followed by lime reacting to portlandite as CaO moves across that interface. Wolterbeek et al. (2017) indicate there are other possibilities, saying "In principle, any thermodynamic driving force that can produce a supersaturation with respect to the solid product phase can generate a FoC, as long as precipitation can occur under confined conditions, e.g. within load-bearing grain contacts" and focus on one such scenario. Other pathways are in principle possible, for example, lime dissolving directly in pore fluid and precipitating portlandite in pores in which case I assert that Eq. 13 would not apply. In some hydration experiments, the stresses recorded during the ongoing reaction are much lower than those calculated using Eq. 13. Ostapenko and Yaroshenko (1975), Wolterbeek et al. (2017) and Zheng et al. (2018) all suggest that the grain boundary film is squeezed out at high normal stress, shutting down the transport pathway before the maximum predicted stress is reached. Ostapenko (1976) modified the explanation, proposing that water molecules diffuse into the periclase-brucite interfaces and cause volume expansion (hence stress) "by enlargement of inter-grain spaces" before reaction. Then the reaction itself, considering the volumes of brucite, periclase and water, has a small (in fact negative) volume change in comparison to a large positive volume change of solids. Hence reaction would not affect the stress state. This explanation was not developed into a quantitative model, but it does draw attention to the possibility that the locations of volume changes in the microstructure (depending on kinetics) are likely to influence stresses produced. That is the same as saying that specific reaction pathways have different effects, and I address this in the Discussion. A further reason why measured stresses are below theoretical maxima is that pores are clogged by reaction products, inhibiting further reaction (Wolterbeek et al. 2017). Skarbek et al. (2018) present a numerical model involving compaction of the porous bassanite + gypsum aggregate as well as reaction, which successfully predicts the initial expansion of the aggregate, i.e. work done against the applied stress, and then compaction. This includes an empirical reaction rate not explicitly coupled so cannot be regarded as a test of Eq. 1, but it would be illuminating to include that equation in future models. Xing et al. (2018) made porous cylindrical olivine "cups", filled them with olivine sand, and added NaHCO3 aqueous solution. This is out of equilibrium so carbonation and hydration reactions result, to form magnesite and other products. Evolution was observed in a synchrotron. 
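Before describing the outcome of these olivine-cup experiments, it is worth illustrating the magnitude of stress that Eq. 13 predicts. The ΔG and ΔVs values below are rough, assumed numbers of the right order for hydration reactions, not data from the papers cited.

```python
def max_interface_stress_excess(delta_G, delta_Vs):
    """Eq. 13: maximum (sigma - p) across a loaded interface, with delta_G the
    driving force G(reactants) - G(products) at hydrostatic pressure p (J/mol)
    and delta_Vs the solid volume change of the reaction (m^3/mol)."""
    return delta_G / delta_Vs

# Assumed illustrative values: delta_G ~ 30 kJ/mol and a solid volume
# increase of ~ 15 cm^3/mol.
print(max_interface_stress_excess(30e3, 15e-6) / 1e6, "MPa")   # 2000 MPa

# Stresses of order a gigapascal greatly exceed those recorded in most of the
# confined hydration experiments described above, consistent with transport
# pathways shutting down, or pores clogging, before the maximum is reached.
```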
In some experiments, the cups cracked, with fracturing beginning on the outer surfaces whilst the interior was reacting. Grain positions were tracked using the synchrotron data and this gave a strain estimate of 0.03 which, in the elastic outer layer of the cup, gives a stress estimate of ~ 300 MPa which they note is in accord with estimates of "crystallisation pressure" (Kelemen et al. 2013). However, the cracks are not at the actual reaction site, and there are many signs of dissolution in the interior, which would not be conducive to building up forces. It is not easy to link the experiments back to fundamental theory. This section has shown that experiments on more complex systems show evidence for "force of crystallisation" but in no case is it easy to link the observations back to theory. In the discussion, I will reiterate that consideration of multiple reaction pathways will help to overcome these problems. Polymorphic transformations under stress Here I will document experiments showing that the direction of reaction is in accord with Eq. 1 in some cases, while one other experiment is claimed to be in accord with Eq. 4. Vaughan et al. (1984) studied the transformation of olivine to spinel under stress in a germanate analogue system, using a Griggs apparatus with confining pressure of 1–1.8 GPa and differential stress of 0.1 to 1.2 GPa. This reaction is analogous to that which defines the 410 km seismic discontinuity in the mantle. Under stress, the microstructure showed anisotropic growth: the stress orientation had a direct effect on reaction (Fig. 6). [Figure 6 caption: Optical micrographs of spinel forming from germanate olivine, with maximum stress aligned top to bottom. Top: crossed polarizers, spinel in black; note residual lenses of olivine perpendicular to maximum stress (Fig. 1 of Vaughan et al. (1984)). Bottom: uncrossed and crossed polarizer images of spinel "fingers" (elongate parallel to maximum stress) separated by very thin spikes of olivine (Fig. 3 of that work).] Spinel nucleated and grew preferentially on interfaces perpendicular to σ1. To explain the growth, they first noted that reaction kinetics under hydrostatic pressure can be described "fairly well". For reactions under stress they state: "For the case of nonhydrostatic stress, however, the formulation is less straightforward because the generalization of the Gibbs function to nonhydrostatic situations is somewhat controversial and often misunderstood", citing reviews from Kamb (1961) and Paterson (1973). This is a reminder that the controversies motivating this contribution are not new. Vaughan et al. (1984) went from first principles to derive a condition for chemical equilibrium between olivine and spinel across an interface, in their notation $${g}^{ol}={g}^{sp}$$ $$g=u-Ts+ {\sigma }_{n}V$$ Here u is internal energy and s is entropy. Since F = u − Ts (Appendix 1), we see that their g is equal to μ here, as in Eq. 1. Consequently, their Eq. 2 states that local equilibrium between olivine and spinel is described by $${\mu }^{ol}={\mu }^{sp}$$ where μ is defined as in Eq. 1, supporting that definition of chemical potential in a stressed system. Although they use the symbol g, and state it will reduce to the usual Gibbs free energy when the stress is hydrostatic, they do not call it Gibbs free energy, and make clear it is anisotropic as follows. "When the stress is nonhydrostatic, however, g varies with the orientation of the interface because σn does.
It is this property that provides an explanation of the anisotropic grain growth that we observe". They elaborate upon this by defining a driving force for reaction, in my notation $$A= {\mu }^{ol}-{\mu }^{sp}$$ If A > 0 there is a drive for olivine to convert to spinel at the particular interface under consideration. Expanding Eq. 16 we find $$A=\left({F}^{ol}+{\sigma }_{n}{V}^{ol}\right)- \left({F}^{sp}+{\sigma }_{n}{V}^{sp}\right)= -\Delta F-{\sigma }_{n}\Delta V$$ where Δ indicates the difference (value for product spinel) – (value for reactant olivine). Note that this is different to Eq. 6 but both are derived from Eq. 1. In the polymorphic reactions discussed here, long-range transport of chemicals is not required: to turn one polymorph into another only requires transport across an interface, not along it. Hence, only a single local normal stress value appears in the mathematics. Since ΔV < 0, A is a maximum where σn is a maximum, namely where σn = σ1. On other interfaces A may be smaller (slower reaction) or even negative (no, or reverse reaction). This explains why spinel nucleates first, or preferentially, on olivine boundaries perpendicular to σ1. When it grows, the spinel then forms "fingers" parallel to σ1. This is in accord with a greater driving force on interfaces perpendicular to σ1, hence faster growth parallel to σ1. Vaughan et al. (1984) use Eq. 17 to explain finger growth morphology and aspect ratio in some detail. At higher stresses, the reaction is shear-induced across coherent interfaces (Burnley and Green 1989). Under these circumstances, a thermodynamic description different to Eq. 1 may apply (Heidug and Lehner 1985; McLellan 1980). The olivine to spinel transition (via wadsleyite) is very important in the Earth but most solid-state transformations proceed across incoherent interfaces; coherent interfaces are not discussed further here. Kirby et al. (1991) used polymorphic transformations in water ice (ice I to denser ice II) as analogues for transformational faulting in the mantle. As the sample of ice I shortened under stress, there was very little radial strain: the volume reduction due to transformation was the main deformation mechanism. Direct microstructural observations were not made; instead, the indium metal jackets, peeled off the samples, were used to determine the surface topography of the samples. Lenses of ice II, elongate perpendicular to σ1, were diagnosed. The authors assert: "Samples that transformed in bulk did so at essentially the same σ1 values as the pressures at which undeformed ice I transformed under hydrostatic conditions". What they mean is that if ice transforms under hydrostatic pressure P, it also transforms under a more general state of stress when σ1 = P. This study is in accord with the olivine to spinel study in terms of driving force and microstructure, and in accord with Eq. 1. Hirth and Tullis (1994) caused quartz to transform to coesite in experiments investigating the brittle-plastic transition in quartz. The coesite formed, in part, along grain boundaries perpendicular to σ1. To clarify the thermodynamics of this reaction they plotted the conditions of coesite formation against σ1 (Fig. 7, inverted triangles) and separately against mean stress σm. The effects of differential stress on the quartz to coesite transformation from Fig. 5 of Richter et al. (2016): round symbols are from that work; earlier results from Hirth and Tullis (1994) and Zhou et al. (2005) are incorporated.
Filled symbols show conditions where coesite was found, plotted using mean stress (above) and maximum principal stress (below). As Fig. 7 shows, using σm as a proxy for pressure does not easily explain the formation of coesite. This, together with the preferential formation of coesite along grain boundaries perpendicular to σ1, seems to be in accord with an equation similar to Eq. 17 for the olivine to spinel transformation. The authors point out that the stress field in the experiment will be heterogeneous, and near the pistons (where most coesite formed) the stress might be roughly isotropic. In those regions locally σm ≈ σ1, implying ambiguity in how the coesite formation is to be interpreted. However, away from these "strain shadows" coesite is still found, and preferentially along grain boundaries perpendicular to σ1, implying "the importance of σ1 in controlling the transition". This illustrates that it is not straightforward to test fundamental thermodynamic ideas through experiments. Stress fields are commonly heterogeneous on the scale of a sample (Table 2), and even if they are initially uniform on that scale, there are likely to be grain-scale variations brought on in part by the volume changes associated with the reaction. Richter et al. (2016) revisited the quartz to coesite transformation, using two modified Griggs apparatuses, and simple shear of a layer of quartz (initially powder) rather than pure shear of a cylinder. The layer was confined between two strong pistons cut at 45 degrees to the apparatus axis, so shortening was manifest as simple shear in the quartz layer. They found that "σ1 is the critical parameter for the quartz-to-coesite transformation—not Pc or σm" (Fig. 7 circles, and diamonds from Zhou et al. (2005)). This is in agreement with Hirth and Tullis (1994) as discussed above, but with a different strain geometry, hence extending the scope. Richter et al. (2016) also document the reverse transformation of coesite back to quartz when σ1 fell below the local equilibrium value (numerically, the value of isotropic pressure for equilibrium between quartz and coesite). During pure shear, the local strain shadows result from friction on the pistons, whereas in simple shear the friction drives an approximately uniform deformation across the width of the quartz layer, except at the ends of that layer where the pistons no longer overlap due to shear displacement. Microstructures figured in Richter et al. (2016) do show some clustering and patterns formed of coesite grains (e.g. their Fig. 9) but those patterns are themselves distributed uniformly across the slab. Cionoiu et al. (2019) undertook experiments where an ellipsoidal strong alumina inclusion was embedded in calcite and then taken to high-pressure conditions with a uniaxial load. Aragonite was produced in a non-uniform pattern, particularly above and below the inclusion. A mechanical model of the stress field is used to show that the pattern of σm mimics the pattern of aragonite abundance and thus the experiments are claimed to support Eq. 4. Do they prove that equation though? Consider the following points. 1. As the authors point out, the reaction was not completed: calcite remains in the aragonite-bearing areas. Kinetics thus need to be considered, which makes it less easy to prove the link between aragonite distribution and stress-related driving forces. 2. There were temperature variations within the sample: these were modelled, but again this makes interpretation less easy. 3. 
The stress was calculated using a 2D model with viscous power-law rheology for calcite. So, the effects of aragonite rheology are not accounted for, nor is the direct contribution of the aragonite volume reduction to the strain, nor are 3D effects. 4. There is scope for alternative explanations: for example, variation in σ1, which is not illustrated in that work. By definition, σ1 > σm, so anywhere that σm might trigger a reaction (because it is above the hydrostatic pressure for equilibrium), σ1 would also. A test could be made in the lower mean stress regions, where σ1 might be high enough to trigger a reaction but σm might be too low, but the required information is not given in the paper. 5. Fletcher (2015) modelled a microstructure of square grains of two polymorphs arranged in a chessboard pattern. He modelled the effect of stress on reaction using Eq. 1 (implicit in his Eq. 1), including reaction on both low- and high-stress interfaces. He showed that the net direction and rate of reaction would then be governed by mean stress σm. This does not mean that an equilibrium exists governed by σm, and the results break down as soon as elongate grains are considered (Fletcher 2015; Wheeler 2015). However, the model might assist in explaining the results of Cionoiu et al. (2019) in a way that accords with Eq. 1 if one assumes that we see a quenched "snapshot" of an evolving system. The model is of more general significance because it is a precise description of what happens when two reaction pathways operate in parallel (here, the pathways relate to the reaction at low and high-stress interfaces). So, these are intriguing experiments but for these five reasons fall short of "proof" that Eq. 4 applies rather than Eq. 1: there is scope for more such investigations. One key point of agreement, which is made elsewhere here as well, is "… the locally resolved knowledge of the stress-state is essential to better understand the bulk deformation and material property changes". Solid-state reaction with volume change In metamorphic reactions solid volume changes are ubiquitous, but it remains unclear how these are accommodated, and how the accommodation mechanisms affect reaction rate. As described in "Force of crystallisation—two or more solids", it seems that volume changes give rise to stresses in reactions involving water. Fracturing can result from stresses caused by solid-state volume change (e.g. coesite as inclusions in garnets transforming to quartz (Gillet et al. 1984)). It is hard to envisage how volume changes, in general, can occur without giving rise to local stresses in some fashion, and there may be feedbacks between stress and chemical processes. To understand such possibilities better, Milke et al. (2009) and Schmid et al. (2009) studied the reaction between olivine and quartz to grow orthopyroxene, involving a 6% volume reduction. If we imagine the reactants floating in fluid, which is not involved in the reaction except as a diffusion pathway in a system that changes volume easily so as to maintain pressure, then hydrostatic thermodynamics would describe the driving force for reaction. However, that is not the setting for solid-state reactions through much of the Earth; instead, solids are surrounded by other solids and porosity is negligible. Milke et al. 
(2009) hypothesised that, if one mineral were included in a matrix of the other, this volume change was too large to be accommodated by elastic strain; plastic strain would be required for the matrix to collapse inwards around the inclusion. They designed experiments in which olivine was embedded in quartz (so the quartz would have to deform) and quartz was embedded in olivine (so the olivine would have to deform). They argued that the intrinsic kinetics of orthopyroxene reaction rim growth would be the same in both configurations, so the effects of matrix deformation could be distinguished. This was shown to be the case, as reaction rims for quartz in an olivine matrix were for example 10.3 μm wide whilst those for olivine in a quartz matrix were 6.1 μm under identical imposed conditions. The quartz matrix was stronger and inhibited growth. To quantify the effect Schmid et al. (2009) provided a combined mechanical and thermodynamic theory, based on an idealised spherically symmetric system. Their model is, in brief, as follows: in the next paragraph, I will suggest how it may be rephrased and provide additional insights. Suppose the imposed large-scale pressure is P, then the overall driving force for the reaction would be, under hydrostatic conditions $${\Omega }_{0}= \Delta G\left(P\right)={G}^{fo}\left(P\right)+{G}^{q}\left(P\right)-{G}^{en}\left(P\right)$$ where the enstatite formula is Mg2Si2O6 and Ω0 is the reaction affinity at pressure P. However, in a solid system, volume change must be accommodated, and stresses build up—in this case, a state of relative radial tension in the matrix, as it must deform so as to collapse around the growing reaction rim. Stresses will modify affinity. They consider a radial stress σr (with tension as positive but I will rewrite using compression as positive as in the rest of this contribution). Far from the inclusion, we will have σr = P, the far-field confining pressure. Near the inclusion, σr decreases as 1/r³ where r is the distance from the centre. Assuming that σr on either side of the reaction rim is the same, their Eq. (26) gives the effect of stress on what they call "generalized reaction affinity" as $$\Omega = {\Omega }_{0}+\left({\sigma }_{r}-P\right)\Delta V$$ where P is the "far field" or imposed pressure. As \({\sigma }_{r}<P\) this reduces affinity and slows the reaction rim growth rate. They use this result to derive an overall reaction rate (their Eq. (29)) which shows that reaction rate is slower for higher matrix viscosity and zero when the matrix cannot deform plastically (infinite viscosity). I now re-express what they say, without changing the mathematics itself but showing some additional implications. G is undefined in the stressed system (though the definition of Ω0 in the hydrostatic reference system can be retained). Instead consider chemical potentials of the three phases, specifically on interfaces perpendicular to σr, assuming these are where the phases dissolve and precipitate (as in standard reaction rim models where growth proceeds by dissolution and precipitation on interfaces parallel to the rim). Then the affinity for that particular reaction pathway (i.e. involving those particular interfaces) is, using Eq. 1, $$\Omega = {\mu }^{fo}+{\mu }^{q}-{\mu }^{en}={F}^{fo}+{\sigma }_{r}{V}^{fo}+{F}^{q}+{\sigma }_{r}{V}^{q}-{F}^{en}-{\sigma }_{r}{V}^{en}={\Omega }_{0}+\left({\sigma }_{r}-P\right)\Delta V$$ Numerically we recover Eq. (26) of Schmid et al. (2009). So, their approach is in accord with Eq. 
1 and is then shown to be in accord with observed rim growth and known rheologies of quartz and olivine. There is an additional implication of re-expressing their argument. The affinity as defined here depends on a particular reaction pathway: exchange of chemicals from one side of the reaction rim to the other. There is then the possibility of different reaction pathways with different affinities (c.f. Wheeler (2014)). For example, if the matrix deforms by diffusion creep, chemicals for the reaction might be supplied from radial interfaces (under high normal stress \({\sigma }_{\theta }>P\)) and a different expression for affinity will result. All of the experiments I describe are qualitatively or quantitatively in accord with Eq. 1 as summarised in the last column of Table 2, apart from that of Cionoiu et al. (2019) (type 7) which could perhaps be reinterpreted. I cannot see how pressure solution (type 3) and diffusion creep (4) could be explained using thermodynamics based on mean stress; I cannot see how such an approach could explain oriented microstructures (6); and for olivine, ice, and quartz it is not in accord with conditions for polymorphic transformation (7). Surely at a fundamental level, we need consistent mathematics to explain all the experiments I refer to, so in the rest of the discussion I focus on implications of Eq. 1. Other driving forces for reaction Changes in hydrostatic pressure and temperature are fundamental drives for reaction. I argue here that stress is important, and so is surface energy as noted in Eq. 2 and "Stressed solid next to fluid-instabilities". Dislocations trapped in lattices increase energy as well. This is in truth not a distinct form of energy and the driving force it provides can be described by Eq. 1. Near dislocations, stresses are locally large and the Helmholtz free energy is elevated. Thus, regions near dislocations are less stable and more soluble, which explains why dislocations are picked out as pits by etching. Usually, the dislocation energies are averaged to provide a simple link between energy and dislocation density [e.g. Wintsch and Dunning (1985)]. Figure 8 shows a visual comparison of three driving forces [c.f. Fig. 5 of Wheeler (1991)] using quartz properties (V = 2.7 × 10⁻⁵ m³/mol, γ = 1 J/m², shear modulus = 48.4 GPa). Limits of the box mark a change in affinity of A = 1000 J/mol, with corresponding values of differential stress for (as an example) pressure solution (Eq. 6) using A/V, the radius of curvature 2γV/A and dislocation density from Wintsch and Dunning (1985) Eq. 2. The precise values are not as important as the general illustration that stress provides a relatively large driving force for chemical change in relation to common values of curvature and dislocation density. The other driving forces are not discussed further here. Figure 8 caption: Sketch showing regions of dominant driving force for chemical change, using quartz as an example. Note very high dislocation densities or curvatures are required for them to compete with stress effects. Three building-block ideas Equation 1 is, I have shown, a foundation for understanding and quantification of processes where stress and chemistry interact. It is a local quantitative link between stress and chemical potential. It implies that there is no chemical equilibrium in a stressed system because chemical potentials take different values on different interfaces, so we need to describe system evolution in terms of kinetics. 
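To make the first two ideas concrete, here is a minimal numerical sketch (my illustration, not taken from any of the cited works) of how Eq. 1 makes chemical potential orientation dependent. It uses the quartz molar volume quoted above; the mean stress of 500 MPa and the differential stress of 50 MPa are assumed values, and the small Helmholtz term discussed in Appendix 1 is neglected.

```python
import numpy as np

# Orientation dependence of chemical potential from Eq. 1, mu ~ F + sigma_n * V,
# holding F fixed (its variation is second order; see Appendix 1).
# The quartz molar volume is the value quoted in the text; P and d_sigma are assumptions.
V = 2.7e-5        # molar volume of quartz, m^3/mol
P = 500e6         # assumed mean stress, Pa
d_sigma = 50e6    # assumed differential stress (sigma_1 - sigma_3), Pa

# Axisymmetric stress state, compression positive: sigma_1 along x, sigma_2 = sigma_3
s1 = P + 2.0 * d_sigma / 3.0
s3 = P - d_sigma / 3.0
stress = np.diag([s1, s3, s3])

for theta_deg in (0, 30, 60, 90):
    theta = np.radians(theta_deg)
    n = np.array([np.cos(theta), 0.0, np.sin(theta)])   # interface normal, angle from sigma_1
    sigma_n = n @ stress @ n                            # normal stress on that interface
    d_mu = (sigma_n - P) * V                            # J/mol, relative to an interface at sigma_n = P
    print(f"normal at {theta_deg:2d} deg to sigma_1: sigma_n = {sigma_n/1e6:6.1f} MPa, "
          f"mu - mu(P) = {d_mu:7.1f} J/mol")
```

For these numbers the chemical potential on interfaces perpendicular to σ1 exceeds that on interfaces parallel to σ1 by about 1.3 kJ/mol (roughly the differential stress times V), comparable to the 1000 J/mol affinity scale used for Fig. 8; no single value of μ characterises the grain, which is why kinetics rather than equilibrium must carry the description.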
The third key idea is that of reaction pathways which have specific affinities (driving forces). In Wheeler (2014) I introduced that idea and showed that the affinities varied by significant amounts but did not discuss the likelihood that multiple pathways will operate in parallel—in general, they will, and an example is provided by comparing the types of experiment labelled 1 and 5 here. In both, the reaction is alum precipitation, but by different pathways. Figure 9 shows both pathways—surely in general they will both be active, but the papers I cite above have looked in detail at one or the other, not both. In an unstressed system, all pathways will have the same affinity but in a stressed system, the affinities may be different. The relative contribution of each pathway will be determined not just by the affinity but by the kinetics along that pathway. The kinetic factors are not intuitive. For example, one might imagine that free-face precipitation is easy, and given that that pathway also has a bigger affinity, it would be the dominant precipitation mechanism: yet in Correns's experiment, the weight moves up, so the pathway with smaller affinity still functions. I suggest that more complex pathways as in Wheeler (2014) will similarly work in parallel, with the overall evolution being a result of affinity combined with kinetic factors along each pathway. I also suggest that the reaction pathway idea, being flexible, could help explain the discrepancies between experiments and theory in the force of crystallisation experiments ("Force of crystallisation—two or more solids"). The works I refer to use Eq. 13 to predict stresses, but that is based on a particular reaction pathway in which the hydration product grows at solid–solid boundaries. Suppose instead that reaction products grow in pores (same reaction, different pathway)—then there is no reason for the matrix to deform, and no extra stress is to be expected. Growth in pores is the reaction pathway documented for gypsum dehydration (Bedford et al. 2017; Llana-Funez et al. 2012). In Table 2 I give examples of alternative reaction pathways for each type of experiment; each will have a different affinity from the main pathway discussed. Figure 9 caption: Combining experiment types 1 and 5 to illustrate the possibility of different reaction pathways. On left, grey arrows illustrate two possible transport pathways for solute (arrow length is of no significance). On right, diagram shows that the two pathways have different affinities (chemical potential drops) indicated by arrow lengths. Implications for understanding geological processes Stress is ubiquitous in the Earth—even where large scale stresses are not apparent, there are likely to be grain-scale stresses, for example in porous media where fluid pressure differs from lithostatic. Volume change during the reaction can itself produce stress. Chemical processes may occur in response to such stresses but are unlikely to relax large scale stresses since these are produced by, for example, ridge push, slab pull, orogenic topography and density structure. These large scale stresses thus evolve on long length and time scales. If stresses relax whilst rocks are still hot, then the reaction might outlive deformation and syntectonic features might be overprinted. However, there are many examples in regional metamorphism where minerals demonstrably grow during deformation. Because stress prevails in the Earth, Eq. 1 can contribute to understanding and modelling diverse processes. 
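Before turning to specific processes, a rough number helps to justify the claim that reaction-related volume change can itself produce stress. The sketch below (my illustration) takes the 6% solid volume reduction of the olivine plus quartz reaction discussed earlier and asks what elastic stress would result if the change were fully confined; the bulk modulus is an assumed representative value, and the answer is an upper bound because plastic or viscous relaxation will reduce it.

```python
# Order-of-magnitude sketch: elastic stress from a fully confined reaction volume change,
# sigma ~ K * (dV/V). K is an assumed representative silicate bulk modulus; dV/V is the
# 6% reduction quoted in the text for olivine + quartz -> orthopyroxene.
K = 100e9          # assumed bulk modulus, Pa
dV_over_V = 0.06   # fractional solid volume change

sigma = K * dV_over_V   # Pa, upper bound with no relaxation
print(f"fully confined {dV_over_V*100:.0f}% volume change -> stress of order {sigma/1e9:.0f} GPa")
```

Stresses of this order will not be sustained in practice because the surroundings yield, but the estimate shows why grain-scale stresses accompany reaction even without any imposed tectonic load, and it is comparable to the GPa-level eclogitisation stresses cited later (Kirby et al. 1996).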
Diffusion creep and pressure solution in polymineralic rocks Wheeler (1992) predicted, using Eq. 1, that chemical interactions between the phases mean that polymineralic rocks might be much weaker than their single-phase equivalents. The argument was based on a simple microstructure illustrating that in monomineralic rocks diffusion creep is controlled by the slowest diffusing chemical but in polymineralic creep it might be controlled by a faster chemical. Experiments on olivine-orthopyroxene diffusion creep find, tentatively, the predicted weakening (Sundberg and Cooper 2008) and the migration of boundaries between those phases which underlies the prediction. Zhao et al. (2019) find that mixtures of olivine and clinopyroxene deform up to ∼30 times faster than either of the end-members when scaled to the same experimental conditions and appeal to the possibility that a faster diffusing species could account for this. A precise grain-scale model for polymineralic diffusion creep is yet to be created (Ford and Wheeler 2004), so currently the links between theory and experiment are tantalising and require reinforcement. This is important since regions in the Earth undergoing pressure solution or diffusion creep are significant in extent and/or importance – for example the lower mantle, major parts of active orogens (Wintsch and Yi 2002) and fault rocks undergoing slow compaction creep in between earthquakes (Sleep and Blanpied 1992). Solid-state reaction under stress Reactions occur under stress in regional metamorphism, in subducting slabs and in hydrothermal systems where fluid pressure differs from lithostatic. It is well known that there are feedbacks between deformation and metamorphism in nature (Teall 1885) and experiment (e.g. de Ronde and Stunitz 2007). Brodie and Rutter (1985) document many feedbacks, yet the direct effects of stress on reaction are not emphasised or quantified. For solid-state reactions Wheeler (2014) predicted, using Eq. 1, that stress may cause reactions to occur under quite different conditions to those predicted by hydrostatic thermodynamics, which could modify the way we interpret metamorphic assemblages in deformed rocks. That paper shows the effects of different reaction pathways in isolation, and the predictions could be enhanced by modelling effects of pathways operating in parallel, and by experimental tests. Reactions involving fluid When fluids are involved in reaction there are important consequences: fluids from dehydration may trigger earthquakes, and the reaction of CO2-bearing fluids with solids may be a way of sequestering CO2 and mitigating climate change (Kelemen et al. 2011). Experiments show visible links between dehydration reaction and deformation (Leclere et al. 2016; Rutter and Brodie 1988). With the possibility of fluid flow as well as reaction and deformation, these situations are more complex to model than the previous two topics, which are not themselves simple. This makes it particularly important to build on robust grain-scale models and the reaction pathway idea remains helpful. For example, during dehydration, the reaction pathway may be through pore fluid (Bedford et al. 2017; Llana-Funez et al. 2012) but on longer timescales, pressure solution may play a role and pathways involving grain boundaries will have an effect. During rehydration (or reaction with CO2-bearing fluids) the effects of local stresses built up during reaction are evident in e.g. the abundance of fracture patterns in partly serpentinised olivine. 
Such fracturing is likely to assist in establishing permeable pathways and sustaining reaction, so we need to understand how chemical reactions induce stress sufficient to fracture rocks if we are to sequester CO2 in peridotite (Kelemen et al. 2011). Equation 13 has been used to predict the stresses involved (Kelemen et al. 2011; Plümper et al. 2012) but I suggest here it relates to just one of many possible reaction pathways. Different reaction pathways will have different effects: microstructures in nature and experiment and in situ studies will help to determine which pathways operate. Tectonic overpressure This contribution is not about tectonic overpressure but is relevant for that topic. Tectonic overpressure (or underpressure) is defined as (mean stress) − (lithostatic pressure), where lithostatic pressure is calculated by integrating the density–depth profile (e.g. Schmalholz et al. 2014). In dynamic situations, numerical models predict that overpressure can be considerable. This may influence the development of metamorphic assemblages so there is some overlap with the topic of this contribution. To my knowledge, the great majority of works on overpressure relate it to metamorphism by (often implicitly) assuming that mean stress takes the role of pressure in controlling assemblages. As my contribution here shows, this may not be the case. Gerya (2015) acknowledges this, mentioning how coesite produced from quartz in some experiments cannot be well explained in terms of mean stress and citing Hirth and Tullis (1994). This was discussed in "Polymorphic transformations under stress" here but is part of a broader picture in which stress affects mineral reactions as well as (relatively simple) polymorphic transformations. 10s of km of error in estimated burial depths can result from over- and under-pressure, but 10s of km of error can also result from modest applied stresses depending on the grain-scale reaction pathways, according to Wheeler (2014). I emphasise that these are separate, distinct predictions. Here, I discuss the effects of grain-scale stresses (regardless of the source of stress, whether it be tectonic or imposed in experiments) on reaction. I argue there is no equilibrium, and that mean stress does not have a central role in influencing reaction. In work on tectonic overpressure, the focus is on large scale dynamic stress systems and how models predict mean stresses quite different to lithostatic. Mineral assemblages are then discussed based on a common assumption that there is in principle an equilibrium, and it is controlled by mean stress. It will be useful, though not easy, to reconcile these predictions. Reaction is a deformation mechanism The idea that reaction is a deformation mechanism is implicit in many of the experiments that I describe here. Pressure solution (experiment type 3) and diffusion creep (4) are obviously deformation mechanisms but can also be thought of as reactions in which reactants and products have the same chemistry. Correns's experiment (5) moves a weight, and if such a system is confined (6), stress will build up with consequent elastic or inelastic deformation. Polymorphic transformation may involve a volume change in a particular direction—deformation again (7). More general solid-state reactions may trigger surrounding deformation, but again are a deformation mechanism themselves (8). 
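A minimal way to quantify "reaction as a deformation mechanism" is to convert reaction progress into strain. The sketch below (my illustration) does so for the 6% solid volume reduction of the olivine plus quartz reaction discussed earlier, assuming for simplicity that the whole change is taken up as shortening in one direction; the reacted fraction and the duration are hypothetical values chosen only to show the order of magnitude.

```python
# Illustrative conversion of reaction progress into an effective strain rate.
# The 6% volume reduction is the olivine + quartz -> orthopyroxene figure from the text;
# the reacted fraction and the duration are hypothetical.
dV_over_V = -0.06            # fractional solid volume change of the reacting material
reacted_fraction = 0.5       # assumed fraction of the rock that has reacted
duration_yr = 1.0e5          # assumed duration of the reaction, years
seconds_per_year = 3.15e7

linear_strain = reacted_fraction * dV_over_V          # all change taken as uniaxial shortening
strain_rate = abs(linear_strain) / (duration_yr * seconds_per_year)   # s^-1

print(f"shortening = {abs(linear_strain)*100:.1f} %, strain rate ~ {strain_rate:.1e} s^-1")
```

A strain rate of order 10⁻¹⁴ s⁻¹ is within the range usually quoted for tectonic deformation, underlining the point that reaction itself acts as a deformation mechanism.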
Even with a planar interface between olivine and quartz, one might expect the volume change to be manifest as shortening perpendicular to the reaction rim. What this implies is that, as numerical models are developed, it is to be expected that if stress terms appear in reaction rate, then reaction rate should appear in rheology. An example of the importance of these issues is the proposal of Nakajima et al. (2013) that, in subducting plates, the volume changes involved in oceanic crust transforming to eclogite give rise to stresses which trigger earthquakes. The volume reduction in their model involves shortening in all directions and, to accommodate this adjacent to mantle which does not undergo densification, tensional stresses are generated in the crust. These are of the order of GPa (Kirby et al. 1996) so it would be useful to know how the stresses feed back on the eclogitisation reactions and the way in which the volume changes are manifest. Ideas here could stimulate new ways of interpreting existing experiments, the design of new experiments and new ways of interpreting natural microstructures. Numerical models are always required to allow extrapolation of experimental results. All the experiments discussed here show that the kinetics of reaction will influence observations, so these need to be included in future detailed numerical models. I suggest the "reaction pathway" idea (Wheeler 2014) is useful here, though it is a simplified description of behaviour. Future grain-scale models should include local links between stress, chemistry and kinetics, and quantitative description of larger-scale behaviour should emerge (reaction pathways would then form part of a simplified qualitative description of the quantitative behaviour). Grain scale models for diffusion creep are the best example of this idea. However, it is not trivial to extend such models: for example, there are mathematical difficulties in establishing precise models for bimineralic diffusion creep (Wheeler and Ford 2007). Experiments would assist in overcoming such problems, by illuminating the processes and kinetics involved. A key aspect of any grain-scale model is that stress will be heterogeneous on the grain scale. Stress is almost certainly non-uniform on some scale in the experiments discussed (Table 2), whether they be designed thus (Cionoiu et al. 2019; den Brok and Morel 2001) or whether it be intrinsic to a process such as grain scale stress variations due to elastic (Burnley and Zhang 2008) or diffusion creep responses (Wheeler and Ford 2007). Such variations need to be included in numerical models to enable interpretation of experiments with complicated geometries (including any polycrystal), and hence extrapolate those results to describe how stress and chemical processes interact in the Earth. An equation relating chemical potential to normal stresses on crystalline interfaces and surfaces is broadly in accord with published analyses of eight diverse types of experiment that involve interactions of stress and chemical effects. Where quantitative agreement is lacking, the equation is still used, directly or indirectly, to assist the explanation. Because the equation relates to particular interfaces, there is the possibility of different reaction pathways with different affinities, depending on which interfaces are involved in the reaction. The overall behaviour of an experiment or natural system will depend on kinetics as well as the affinities of various reaction pathways. 
This idea may help to resolve the lack of quantitative agreement between the force of crystallisation experiments and theory—not all reaction pathways have been considered. Large-scale predictions of how stress interacts with chemical processes, including reaction rates and rheology, need to be based on grain-scale models for those interactions. The equation highlighted here should be used in building such grain-scale models. Aki K, Richards PG (2002) Quantitative seismology. University Science Books Asaro RJ, Tiller WA (1972) Interface morphology development during stress corrosion cracking. Part I. Via surface diffusion. Metall Trans 3:1789–2000. https://doi.org/10.1007/BF02642562 Becker GF, Day AL (1916) Note on the linear force of growing crystals. J Geol 24:313–333. https://doi.org/10.1086/622342 Bedford J, Fusseis F, Leclere H, Wheeler J, Faulkner D (2017) A new 4D view on the evolution of metamorphic dehydration reactions. Sci Rep 7:6881. https://doi.org/10.1038/s41598-017-07160-5 Brodie KH, Rutter EH (1985) On the relationship between deformation and metamorphism, with special reference to the behaviour of basic rocks. In: Thompson AB, Rubie DC (eds) Metamorphic reactions: Kinetics, Textures and Deformation, vol 4. Advances in Physical Geochemistry. Springer-Verlag, Holland, pp 138–179 Burnley PC, Green HW (1989) Stress dependence of the mechanism of the olivine spinel transformation. Nature 338:753–756. https://doi.org/10.1038/338753a0 Burnley PC, Zhang D (2008) Interpreting in situ X-ray diffraction data from high pressure deformation experiments using elastic-plastic self-consistent models: an example using quartz. J Phys-Condes Matter. https://doi.org/10.1088/0953-8984/20/28/285201 Caruso F, Flatt R (2014) Further steps towards the solution of Correns' dilemma. Third Int Conf Salt Weather Build Stone Sculpt. https://doi.org/10.13140/2.1.1176.2569 Cionoiu S, Moulas E, Tajčmanová L (2019) Impact of interseismic deformation on phase transformations and rock properties in subduction zones. Sci Rep 9:19561. https://doi.org/10.1038/s41598-019-56130-6 Correns CW (1949) Growth and dissolution of crystals under linear pressure. Faraday Soc Disc 5:267–271 Correns CW, Steinborn W (1939) Experimente zur Messung und Erklärung der sogenannten Kristallisationskraft. Zeitschrift fuer Kristallographie 101:117–133 de Meer S, Spiers CJ (1995) Creep of wet gypsum aggregates under hydrostatic loading conditions. Tectonophysics 245:171–183 de Ronde AA, Stunitz H (2007) Deformation-enhanced reaction in experimentally deformed plagioclase-olivine aggregates. Contrib Mineral Petrol 153:699–717. https://doi.org/10.1007/s00410-006-0171-7 De Paola N, Collettini C, Faulkner DR, Trippetta F (2008) Fault zone architecture and deformation processes within evaporitic rocks in the upper crust. Tectonics 27:TC4017. https://doi.org/10.1029/2007tc002230 den Brok SWJ, Morel J (2001) The effect of elastic strain on the microstructure of free surfaces of stressed minerals in contact with an aqueous solution. Geophys Res Lett 28:603–606 Elliott D (1973) Diffusion flow laws in metamorphic rocks. Geol Soc Am Bull 84:2645–2664. https://doi.org/10.1130/0016-7606(1973)84%3c2645:DFLIMR%3e2.0.CO;2 Flatt RJ, Steiger M, Scherer GW (2007) A commented translation of the paper by C.W. Correns and W Steinborn on crystallization pressure. Environ Geol 52:221–237. https://doi.org/10.1007/s00254-006-0509-5 Fletcher RC (2015) Dramatic effects of stress on metamorphic reactions: Comment. Geology 43:E354–E354. 
https://doi.org/10.1130/g36302c.1 Ford JM, Wheeler J (2004) Modelling interface diffusion creep in two-phase materials. Acta Mater 52:2365–2376. https://doi.org/10.1016/j.actamat.2004.01.045 Gerya T (2015) Tectonic overpressure and underpressure in lithospheric tectonics and metamorphism. J Metamorph Geol 33:785–800. https://doi.org/10.1111/jmg.12144 Gillet P, Ingrin J, Chopin C (1984) Coesite in subducted continental crust: P-T history deduced from an elastic model. Earth Planet Sci Lett 70:426–436 Gratier JP, Dysthe DK, Renard F (2013) The role of pressure solution creep in the ductility of the earth's upper crust. In: Dmowska R (ed) Advances in Geophysics, Vol 54, Advances in Geophysics. Elsevier, Amsterdam, pp 47–179. doi:https://doi.org/10.1016/b978-0-12-380940-7.00002-0 Grinfeld MA (1986) Instability of interface between nonhydrostatically stressed elastic body and melts. Doklady Akademii Nauk Sssr 290:1358–1363 Heidug W, Lehner FK (1985) Thermodynamics of coherent phase transformations in nonhydrostatically stressed solids. Pure Appl Geophys 123:91–98 Hirth G, Tullis J (1994) The brittle-plastic transition in experimentally deformed quartz aggregates. J Geophys Res-Solid Earth 99:11731–11747. https://doi.org/10.1029/93jb02873 Hobbs BE, Ord A (2017) Pressure and equilibrium in deforming rocks. J Metamorph Geol 35:967–982. https://doi.org/10.1111/jmg.12263 Kamb W (1961) The thermodynamic theory of nonhydrostatically stressed solids. J Geophys Res 66:259–271 Karato SI, Paterson MS, Fitz Gerald JD (1986) Rheology of synthetic olivine aggregates: influence of grain size and water. J Geophys Res 91:8151–8176. https://doi.org/10.1029/JB091iB08p08151 Kelemen PB, Matter J, Streit EE, Rudge JF, Curry WB, Blusztajn J (2011) Rates and mechanisms of mineral carbonation in peridotite: natural processes and recipes for enhanced, in situ CO2 capture and storage. In: Jeanloz R, Freeman KH (eds) Annual Review of Earth and Planetary Sciences, Vol 39, Ann Rev Earth Planetary Sci. pp 545–576. doi:https://doi.org/10.1146/annurev-earth-092010-152509 Kelemen PB, Savage H, Hirth G (2013) Reaction-driven cracking during mineral hydration, carbonation and oxidation. In: Poromechanics V. pp 823–826. doi:https://doi.org/10.1061/9780784412992.099 Keszthelyi D, Dysthe DK, Jamtveit B (2016) First principles model of carbonate compaction creep. J Geophys Res-Solid Earth 121:3348–3365. https://doi.org/10.1002/2015jb012481 Kirby SH, Durham WB, Stern LA (1991) Mantle phase-changes and deep-earthquake faulting in subducting lithosphere. Science 252:216–225 Kirby SH, Engdahl ER, Denlinger R (1996) Intermediate-depth intraslab earthquakes and arc volcanism as physical expressions of crustal and uppermost mantle metamorphism in subducting slabs (overview). In: Bebout GE, Scholl DW, Kirby SH, Platt JP (eds) Subduction: top to bottom, vol 96. Geophysical Monograph. American Geophysical Union, pp 195–214 Larché FC, Cahn JW (1985) The interactions of composition and stress in crystalline solids. Acta Metall 33:331–357 Leclere H, Faulkner D, Wheeler J, Mariani E (2016) Permeability control on transient slip weakening during gypsum dehydration: Implications for earthquakes in subduction zones. Earth Planet Sci Lett 442:1–12. https://doi.org/10.1016/j.epsl.2016.02.015 Llana-Funez S, Wheeler J, Faulkner DR (2012) Metamorphic reaction rate controlled by fluid pressure not confining pressure: implications of dehydration experiments with gypsum. Contrib Mineral Petrol 164:69–79. 
https://doi.org/10.1007/s00410-012-0726-8 McLellan AG (1980) The classical thermodynamics of deformable materials. Cambridge University Press, Cambridge Milke R, Abart R, Kunze K, Koch-Muller M, Schmid D, Ulmer P (2009) Matrix rheology effects on reaction rim growth I: evidence from orthopyroxene rim growth experiments. J Metamorph Geol 27:71–82. https://doi.org/10.1111/j.1525-1314.2008.00804.x Morel J, den Brok SWJ (2001) Increase in dissolution rate of sodium chlorate induced by elastic strain. J Crystal Growth 222:637–644 Nakajima J, Uchida N, Shiina T, Hasegawa A, Hacker BR, Kirby SH (2013) Intermediate-depth earthquakes facilitated by eclogitization-related stresses. Geology 41:659–662. https://doi.org/10.1130/g33796.1 Ostapenko GT (1976) Excess pressure upon solid-phases arising during reactions of hydration (according to experimental-data of periclase hydration). Geochem Int 13:120–138 Ostapenko GT, Yaroshenko NS (1975) Excess pressure upon solid-phases arising during hydration reaction (experimental-data on hydration of semihydrous gypsum and lime). Geochem Int 12:72–81 Ostapenko GT, Kovalevs AN, Khitarov NI (1972) Experimental check of theory of absolute chemical potential of non-hydrostatically stressed solid. Doklady Akademii Nauk Sssr 203:376 Paterson MS (1973) Non-hydrostatic thermodynamics and its geologic applications. Rev Geophys Space Phys 11:355–389 Pattison DRM, De Capitani C, Gaidies F (2011) Petrological consequences of variations in metamorphic reaction affinity. J Metamorph Geol 29:953–977. https://doi.org/10.1111/j.1525-1314.2011.00950.x Plümper O, Røyne A, Magrasó A, Jamtveit B (2012) The interface-scale mechanism of reaction-induced fracturing during serpentinization. Geology 40:1103–1106. https://doi.org/10.1130/G33390.1 Powell R, Evans KA, Green ECR, White RW (2018) On equilibrium in non-hydrostatic metamorphic systems. J Metamorph Geol 36:419–438. https://doi.org/10.1111/jmg.12298 Raj R, Ashby MF (1971) On grain boundary sliding and diffusional creep. Metall Trans 2:1113–1127 Richter B, Stunitz H, Heilbronner R (2016) Stresses and pressures at the quartz-to-coesite phase transformation in shear deformation experiments. J Geophys Res-Solid Earth 121:8015–8033. https://doi.org/10.1002/2016jb013084 Ristic RI, Sherwood JN, Shripathi T (1997) The influence of tensile strain on the growth of crystals of potash alum and sodium nitrate. J Cryst Growth 179:194–204. https://doi.org/10.1016/s0022-0248(97)00123-1 Rutter EH (1976) The kinetics of rock deformation by pressure solution. Philos Trans R Soc London A 283:203–219 Rutter EH (1983) Pressure solution in nature, theory and experiment. J Geol Soc London 140:725–740 Rutter EH, Brodie KH (1988) Experimental syntectonic dehydration of serpentinite under conditions of controlled pore water-pressure. J Geophys Res-Solid Earth Planets 93:4907–4932 Schmalholz SM, Medvedev S, Lechmann SM, Podladchikov Y (2014) Relationship between tectonic overpressure, deviatoric stress, driving force, isostasy and gravitational potential energy. Geophys J Int 197:680–696. https://doi.org/10.1093/gji/ggu040 Schmid DW, Abart R, Podladchikov YY, Milke R (2009) Matrix rheology effects on reaction rim growth II: coupled diffusion and creep model. J Metamorph Geol 27:83–91. https://doi.org/10.1111/j.1525-1314.2008.00805.x Schweizer D, Prommer H, Blum P, Butscher C (2019) Analyzing the heave of an entire city: Modeling of swelling processes in clay-sulfate rocks. Eng Geol. 
https://doi.org/10.1016/j.enggeo.2019.105259 Skarbek RM, Savage HM, Kelemen PB, Yancopoulos D (2018) Competition between crystallization-induced expansion and creep compaction during gypsum formation, and implications for serpentinization. J Geophys Res-Solid Earth 123:5372–5393. https://doi.org/10.1029/2017jb015369 Sleep NH, Blanpied ML (1992) Creep, compaction and the weak rheology of major faults. Nature 359:687–692 Spiers CJ, Schutgens PMTM, Brzesowsky RH, Peach CJ, Liezenberg JL, Zwart HJ (1990) Experimental determination of constitutive parameters governing creep of rocksalt by pressure solution. In: Knipe RJ, Rutter EH (eds) Deformation Mechanisms, Rheology and Tectonics, vol 54. Geological Society of London, London, UK, pp 215–228 Srolovitz DJ (1989) On the stability of surfaces of stressed solids. Acta Metall 37:621–625 Steiger M (2005) Crystal growth in porous materials - I: the crystallization pressure of large crystals. J Crystal Growth 282:455–469. https://doi.org/10.1016/j.jcrysgro.2005.05.007 Sundberg M, Cooper RF (2008) Crystallographic preferred orientation produced by diffusional creep of harzburgite: Effects of chemical interactions among phases during plastic flow. J Geophys Res-Solid Earth 113:B12208. https://doi.org/10.1029/2008jb005618 Tajcmanova L, Vrijmoed J, Moulas E (2015) Grain-scale pressure variations in metamorphic rocks: implications for the interpretation of petrographic observations. Lithos 216:338–351. https://doi.org/10.1016/j.lithos.2015.01.006 Teall JJH (1885) The metamorphism of a dolerite dyke into hornblende schist. Q J Geol Soc London 41:133–144 Vaughan PJ, Green HW, Coe RS (1984) Anisotropic growth in the olivine spinel transformation of Mg2GeO4 under nonhydrostatic stress. Tectonophysics 108:299–322. https://doi.org/10.1016/0040-1951(84)90241-5 Verhoogen J (1951) The chemical potential of a stressed solid. Trans Am Geophys Union 32:251–258 Wheeler J (1991) A view of texture dynamics. Terra Nova 3:123–136 Wheeler J (1992) The importance of pressure solution and Coble creep in the deformation of polymineralic rocks. J Geophys Res 97:4579–4586. https://doi.org/10.1029/91JB02476 Wheeler J (2010) Anisotropic rheology during grain boundary diffusion creep and its relation to grain rotation, grain boundary sliding and superplasticity. Philos Mag 90:2841–2864 Wheeler J (2014) Dramatic effects of stress on metamorphic reactions. Geology 42:647–650. https://doi.org/10.1130/G35718.1 Wheeler J (2015) Dramatic effects of stress on metamorphic reactions: Reply to Fletcher. Geology 43:E355–E355. https://doi.org/10.1130/g36455y.1 Wheeler J (2018) The effects of stress on reactions in the Earth: sometimes rather mean, usually normal, always important. J Metamorph Geol 36:439–461. https://doi.org/10.1111/jmg.12299 Wheeler J, Ford JM (2007) Diffusion Creep. In: Bons PD, Jessell M, Koehn D (eds) Microdynamic simulation—From microprocess to patterns in rocks. Lecture Notes in Earth Science, vol 106. Springer, Berlin/Heidelberg, pp 161–169. doi:https://doi.org/10.1007/978-3-540-44793-1 Wintsch RP, Dunning J (1985) The effect of dislocation density on the aqueous solubility of quartz and some geologic implications: a theoretical approach. J Geophys Res 90:3649–3657. https://doi.org/10.1029/JB090iB05p03649 Wintsch RP, Yi K (2002) Dissolution and replacement creep: a significant deformation mechanism in mid-crustal rocks. 
J Struct Geol 24:1179–1193 Wolterbeek TKT, Hangx SJT, Spiers CJ (2016) Effect of CO2-induced reactions on the mechanical behaviour of fractured wellbore cement. Geomech Energy Environ 7:26–46. https://doi.org/10.1016/j.gete.2016.02.002 Wolterbeek TKT, van Noort R, Spiers CJ (2017) Reaction-driven casing expansion: potential for wellbore leakage mitigation. Acta Geotech 13:341–366. https://doi.org/10.1007/s11440-017-0533-5 Xing TG, Zhu WL, Fusseis F, Lisabeth H (2018) Generating porosity during olivine carbonation via dissolution channels and expansion cracks. Solid Earth 9:879–896. https://doi.org/10.5194/se-9-879-2018 Zhao NL, Hirth G, Cooper RF, Kruckenberg SC, Cukjati J (2019) Low viscosity of mantle rocks linked to phase boundary sliding. Earth Planet Sci Lett 517:83–94. https://doi.org/10.1016/j.epsl.2019.04.019 Zheng X, Cordonnier B, Zhu W, Renard F, Jamtveit B (2018) Effects of confinement on reaction-induced fracturing during hydration of periclase. Geochem Geophys Geosyst 19:2661–2672. https://doi.org/10.1029/2017GC007322 Zhou YS, He CR, Song J, Ma SL, Ma J (2005) An experiment study of quartz-coesite transition at differential stress. Chin Sci Bull 50:446–451. https://doi.org/10.1360/982004-234 This work was funded through Natural Environment Research Council grant NE/J008303/1. I thank D. Pattison and B. Yardley for constructive reviews and S. Reddy for editorial handling. Department of Earth, Ocean and Ecological Sciences, School of Environmental Sciences, Jane Herdman Building, Liverpool University, Liverpool, L69 3GP, UK Correspondence to John Wheeler. Communicated by Steven Reddy. Appendix 1: some thermodynamic relationships To understand how chemical potential varies according to Eq. 1, I indicate here how the two terms in it vary. Properties with suffix 0 relate to zero stress. In what follows I will refer to standard results from mechanics and thermodynamics and assume mechanical isotropy for simplicity. The solid then has an isothermal bulk elastic modulus K and Poisson's ratio ν; the Young's modulus E is dependent on these. When a general stress is applied to a mechanically isotropic material the molar volume is $$V=\left(1-\frac{{\sigma }_{m}}{K}\right){V}_{0}$$ where V0 is the volume at zero pressure. When a differential stress σ is applied in one direction to an isotropic material under zero pressure, for small linear elastic strains, the Helmholtz free energy is $$F={F}_{0}+ \frac{{V}_{0}}{2E}{\sigma }^{2}$$ The second term can be described as elastic strain energy, though there is more than one value for that depending on whether the strain is adiabatic, as in seismic waves, or isothermal, as in slow metamorphic processes (Aki and Richards 2002). The second term has been described incorrectly as internal energy but that quantity (U) is distinct from Helmholtz free energy: $$F=U-TS$$ and is relevant when discussing energy changes during adiabatic, not isothermal, elasticity (the adiabatic elastic moduli differ from the isothermal versions). For a general strain, using the Einstein summation convention, the Helmholtz free energy is $$F= {F}_{0}+ \frac{{V}_{0}}{6\left(1-2\nu \right)K}\left(\left(1+\nu \right){\sigma }_{ij}{\sigma }_{ij}-9\nu {\sigma }_{m}^{2}\right)$$ In Eqs. 1 and 4, variations in the second "σV" term are generally more important than variations in F. 
This is because the F variation is of the order of (σ/E) × (σV) and for representative stress values, the σ/E term is much less than 1; consequently Eqs. 1 and 4 are both approximately linear in their relevant stress terms. This also implies that G for a solid at hydrostatic pressure P is approximately equal to the chemical potential of that solid at an interface under normal stress equal to P given by Eq. 1, because the Helmholtz free energy and volume variations under stress give only second-order effects. There are circumstances in which the small quadratic term is significant – namely where the normal stress does not vary laterally, but tangential stress does. Consider an unstressed solid adjacent to a fluid at (perhaps high) pressure P. What happens to the chemical potential if the solid is put under tension or compression parallel to the surface? McLellan (1980) derives a completely general result but here I provide a simpler illustrative derivation. Suppose that one of the tangential stresses remains at P and the other is changed to be P + σ where σ is differential stress. Then $${\sigma }_{ij}{\sigma }_{ij}=3{P}^{2}+2P\sigma +{\sigma }^{2}$$ $${\sigma }_{m}=P+\frac{\sigma }{3}$$ So, from the expression for F above, $$F= {F}_{0}+ \frac{{V}_{0}}{K}\left(\frac{1}{2}{P}^{2}+\frac{1}{3}P\sigma +\frac{{\sigma }^{2}}{6\left(1-2\nu \right)}\right)$$ So, when P is large, a relative tensional stress σ < 0 could reduce the Helmholtz free energy. This might be taken to indicate that, since it has reduced energy, the solid is more stable than in the unstressed state. This would be incorrect because the σV term in Eq. 1 must be considered too. The molar volume is $$V={V}_{0}\left(1-\frac{{\sigma }_{m}}{K}\right)={V}_{0}\left(1-\frac{P+\sigma /3}{K}\right)$$ So under relative tension, the molar volume increases and will counteract the Helmholtz energy decrease. Expanding the full expression for chemical potential, noting, in this case, σn = P, $$\mu =F+PV$$ $$= {F}_{0}+ \frac{{V}_{0}}{K}\left(\frac{1}{2}{P}^{2}+\frac{1}{3}P\sigma +\frac{{\sigma }^{2}}{6\left(1-2\nu \right)}+P\left(K-P-\frac{1}{3}\sigma \right)\right)$$ $$= {F}_{0}+ \frac{{V}_{0}}{K}\left(-\frac{1}{2}{P}^{2}+KP+\frac{{\sigma }^{2}}{6\left(1-2\nu \right)}\right)$$ $$= \mu \left(P\right)+\frac{{V}_{0}}{2E}{\sigma }^{2}$$ where μ(P) is the chemical potential at hydrostatic pressure P. Now we see that the first-order term in σ has cancelled out and, regardless of compression or tension, the stressed solid is less stable than the unstressed one because \(\mu >\mu \left(P\right)\). Appendix 2: ionic solutions A basic thermodynamic idea is relevant for understanding some of the experiments involving aqueous solutions (Flatt et al. 2007). When a substance is dissolved, the activity a is defined by $${\mu }^{pf}= {\mu }_{0}^{pf}+RT\mathrm{ln}\,a$$ relative to a reference state (suffix 0). This is always true, by definition. If this is a solution containing one molecular species (e.g. sucrose), and is ideal, then we can write this in terms of concentration c $${\mu }^{pf}= {\mu }_{0}^{pf}+RT\mathrm{ln}\left(c/{c}_{o}\right)$$ However, ionic solutions, even if ideal, behave differently, since the solid dissociates into two or more ions. Suppose we assume halite (NaCl) forms an ideal solution (for simplicity), then a = [Na+][Cl−] where the square brackets indicate concentration. Assuming we are dealing just with dissolved halite with concentration c, we then have [Na+] = c and [Cl−] = c. 
Consequently $${\mu }^{pf}= {\mu }_{0}^{pf}+2RT\mathrm{ln}\left(c/{c}_{o}\right)$$ or, in general, $${\mu }^{pf}= {\mu }_{0}^{pf}+nRT\mathrm{ln}\left(c/{c}_{o}\right)$$ where n is the number of independent ions in solution. Flatt et al. point out that Correns implicitly assumed alum dissolved as molecules and corrected this (see main text). In Appendix 3 I point out that Ostapenko et al. (1972) made a similar mistake for halite though it does not affect their conclusion. Flatt et al. further point out that non-ideality of ionic solutions, and the presence of water of crystallisation, will also affect n, and they deduce a value around 3.5 for alum. Appendix 3: details of Ostapenko et al. 1972 In my description I will use values for concentrations N (given in mol/mol) as published; they have been revised slightly in subsequent work but not enough to make a difference to their conclusion. The temperature was precisely controlled; at 41.7 °C and atmospheric pressure, the equilibrium concentration of halite is N0 = 0.1006 (more recent estimates are slightly higher but this makes no difference to the conclusion). Their method was able to detect dissolution just 0.2 °C above the equilibrium temperature, or when 0.2 ml of pure water was added to 0.75 l of saturated solution (a concentration change of 0.1006 × 0.2/750 = 2.7 × 10⁻⁵). They applied tangential stress of, for example, 150 kg/cm² (14.7 MPa) and found no detectable difference in concentration around the crystal, then used the two approaches to calculate the theoretical change in chemical potential. From Eq. 4, ignoring the relatively small second order term, $$\Delta \mu =\Delta {\sigma }_{m}V=\frac{1}{3}\left({\sigma }_{t}-1\,\mathrm{bar}\right)V$$ Then, assuming an ideal solution, if N1 is the expected new concentration in equilibrium with the stressed solid $$\Delta \mu =RT\mathrm{ln}\left({N}_{1}/{N}_{0}\right)\cong RT\Delta N/{N}_{0}$$ which gives ΔN = 0.0053. I suggest that because the solution is ionic the ideal solution equation they used is not correct (see Appendix 2) and ΔN should be halved so ΔN = 0.0027. In contrast Eq. 1 predicts, as in Appendix 1, $$\Delta \mu = \frac{{\sigma }_{t}^{2}V}{2E}$$ and ΔN = 9 × 10⁻⁶. Again, I suggest because the solution is ionic this will be nearer 4.5 × 10⁻⁶. My adjustments do not affect their qualitative conclusion: Eq. 1 predicts ΔN is somewhat below the detectability limit of 2.7 × 10⁻⁵ whilst Eq. 4 predicts it is 200 times greater. Consequently, since the actual concentration change was not detectable, Ostapenko et al. rejected Eq. 4 in favour of Eq. 1. Appendix 4: derivation of Ostapenko and Yaroshenko 1975 equation Wolterbeek et al. (2017) show how Eq. 13 relates to local chemical potentials, mathematically in agreement with what I write here but their account still refers to overall Gibbs free energy. Here I will rephrase Ostapenko and Yaroshenko (1975) in terms of chemical potential. The paper considers two different hydration reactions at room temperature and 1 bar and I will illustrate their argument using hydration of lime (CaO) to portlandite (Ca(OH)2). The solids are under stress (described further below) and in contact with water. Their Eq. 
4, verbatim, is $$\Delta {V}_{s}\left({\tilde{P}}_{S}^{\mathrm{max}}-1\right)+\Delta {G}_{298}^{0}=0$$ The "1" is the fluid pressure in bars, so I rewrite this as $$\Delta {V}_{s}\left({\tilde{P}}_{S}^{\mathrm{max}}-{P}_{f}\right)+\Delta {G}_{298}^{0}=0$$ where Pf is fluid pressure, and \(\Delta {G}_{298}^{0}\) is, from context, the Gibbs free energy change at 298 K and 1 atm. They define \({\tilde{P}}_{S}^{\mathrm{max}}\) as "the maximum pressure on the solid phases". To derive this equation they assert "… from the point of view of 'abstract' thermodynamics the system may be considered as one with unequal pressure on the solid phases and fluid". There is a problem, as follows: if the word "pressure" implies isotropy of forces in every direction, it is not possible to have a material at one pressure in contact with and in mechanical equilibrium with a material at another pressure, let alone chemical equilibrium. However, it is clear from their Fig. 7 that the authors mean the solids are under stress. There is then a second problem: in a stressed solid, a Gibbs free energy cannot be assigned, as Ostapenko et al. (1972) point out, so the meaning of ΔG in Eq. 13 must be examined critically. These problems are overcome by re-expressing the narrative, without changing the numbers. The driving force for reaction under hydrostatic pressure P is $$\Delta G\left(P\right)={G}^{po}\left(P\right)-{G}^{li}\left(P\right)-{G}^{w}\left(P\right)$$ where G(P) indicates functional dependence of G on pressure, and dependence on temperature is omitted for brevity. Now instead of a single pressure, they use G for solids calculated at Ps and for water at Pf and assume for simplicity molar volumes for solids independent of pressure. I re-express this as follows: G for a solid at Ps is approximately equal to the chemical potential of that solid at an interface under normal stress equal to Ps given by Eq. 1 (Appendix 1). Now suppose that the reaction involves movement of water into a stressed interface between lime and portlandite, where the water chemical potential is determined in the pores at pressure Pf. In that interface, reactant and product are both under normal stress Ps so the overall affinity for reaction by that mechanism is $$A={\mu }^{li}\left({P}_{s}\right)+{\mu }^{w}\left({P}_{f}\right)-{\mu }^{po}\left({P}_{s}\right)= {\mu }^{li}\left({P}_{f}\right)+\left({P}_{s}-{P}_{f}\right){V}^{li}+{\mu }^{w}\left({P}_{f}\right)-{\mu }^{po}\left({P}_{f}\right)- \left({P}_{s}-{P}_{f}\right){V}^{po}$$ $$= -\Delta G\left({P}_{f}\right)+\left({P}_{s}-{P}_{f}\right)\left({V}^{li}-{V}^{po}\right)= -\Delta G\left({P}_{f}\right)-\left({P}_{s}-{P}_{f}\right)\Delta {V}_{s}$$ where ΔVs is the solid volume increase from reactants to products. As Ps increases, A decreases, so the maximum normal stress for which reaction can proceed is given when \(A=0\), $$\left({P}_{s}-{P}_{f}\right)\Delta {V}_{s}+ \Delta G\left({P}_{f}\right)=0$$ which is in accord with their Eq. 4 (as rewritten above), and can be rearranged to give $${P}_{s}-{P}_{f}= -\frac{\Delta G\left({P}_{f}\right)}{\Delta {V}_{s}}$$ which apart from the sign of ΔG is the same as Eq. 13. The significance of re-expressing the derivation is as follows. It avoids reference to ΔG in a stressed system. It is in accord with Eq. 1, use of which focusses attention on the specific interfaces involved in reaction. It has implicit within it a particular reaction pathway, in this case, water moving into an interface where both lime and portlandite are stressed. 
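To put an illustrative number on this, the sketch below evaluates the rearranged expression for the lime-to-portlandite reaction using approximate handbook values at 298 K and 1 bar; these values are my own rounded figures for illustration, not those tabulated by Ostapenko and Yaroshenko (1975).

```python
# Illustrative evaluation of the maximum excess normal stress on the solids for
# CaO + H2O -> Ca(OH)2, using Ps - Pf = -dG(Pf) / dVs.
# dG_r and the molar volumes are approximate handbook values (my assumptions).
dG_r = -58.0e3            # Gibbs free energy of reaction at 298 K, 1 bar, J/mol (approx.)
V_lime = 16.8e-6          # molar volume of CaO, m^3/mol (approx.)
V_portlandite = 33.1e-6   # molar volume of Ca(OH)2, m^3/mol (approx.)

dVs = V_portlandite - V_lime        # solid volume increase, reactants -> products
excess_stress = -dG_r / dVs         # Ps - Pf, Pa

print(f"dVs = {dVs*1e6:.1f} cm^3/mol, maximum Ps - Pf ~ {excess_stress/1e9:.1f} GPa")
```

The several-GPa value is an upper bound from the thermodynamics of this particular pathway; as noted in the main text, measured stresses in force-of-crystallisation experiments are generally much smaller, which is part of the motivation for considering alternative pathways.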
Other pathways are in principle possible, for example lime dissolving in pore fluid and precipitating portlandite in pores. The affinity would then no longer be given by Eq. 18. Wheeler, J. A unifying basis for the interplay of stress and chemical processes in the Earth: support from diverse experiments. Contrib Mineral Petrol 175, 116 (2020). https://doi.org/10.1007/s00410-020-01750-9 Keywords: Metamorphism, Mineral physics
Hall coefficient for insulator

The Hall effect is the production of a voltage difference (the Hall voltage) across an electrical conductor, transverse to the current, when a magnetic field is applied perpendicular to the current flow. The magnetic force on the moving carriers relocates electrical charge to a specific side of the conducting body. In equilibrium this magnetic force is balanced by the resulting transverse electric force, $eE_H = Bev$; with $E_H = V_H/d$ this gives $V_H = Bvd$, and writing the current as $I = nevA$, where $n$ is the number of charge carriers per unit volume and $A$ is the conductor's cross-sectional area, the Hall voltage can be written as $V_H = BI/(net)$, where $B$ is the magnetic field applied to the sample, $I$ is the current flowing perpendicular to the magnetic field, and $t$ is the thickness of the sample.

The Hall coefficient is defined as the Hall field per unit current density per unit magnetic field. For a p-type semiconductor, in which the current-carrying charge carriers are holes of density $p$, $R_H = E_y/(J_x B_z) = 1/(ep)$; in a similar manner it can be shown that for an n-type semiconductor, in which the charge carriers are electrons with charge $-e$, $R_H = E_H/(JB) = -1/(ne)$. The Hall coefficient therefore has opposite signs for n- and p-type semiconductors, and its sign identifies the charge carriers: for most metals the coefficient is negative, as expected if the carriers are electrons, whereas in beryllium, cadmium and tungsten the coefficient is positive. The carrier mobility follows as $\mu = v/E = J/(neE) = \sigma R_H = R_H/\rho$. Because the carrier density is much lower in semiconductors than in metals, the Hall voltage is far more measurable in semiconductors: taking typical values $n \approx 10^{29}\ \mathrm{m^{-3}}$ for copper and $n \approx 10^{25}\ \mathrm{m^{-3}}$ for silicon, the Hall voltages at $B = 1$ T, $I = 10$ A and $t = 1$ mm are about 0.6 µV and 6 mV respectively. The Hall effect is used to measure the magnetic field around an electric charge and to differentiate a semiconductor from an insulator; Hall effect sensors act as magnetometers and are widely used in integrated circuits.

In strongly correlated and disordered systems the Hall coefficient behaves very differently from this free-carrier picture. Quantum Monte Carlo calculations of $R_H$ for the 2D Hubbard model at small hole doping near half filling find an electron-like $R_H$ in the weak-coupling regime, while for the t-J model on the square lattice the change of sign occurs at roughly 1/3 hole filling, in good agreement with measurements on La2-xSrxCuO4 compounds, and is weakly temperature dependent; the change in sign is not affected by short-range magnetic domains, and results for the conductivity and Hall effect in the two-dimensional Hubbard model are consistent with the picture of a Mott transition driven by the divergence of the effective mass as opposed to the vanishing of the number of charge carriers. For La2-xSrxCuO4, the normal-state transport properties (resistivity, Hall effect) have been studied over wide ranges of Sr doping and temperature: the familiar T-linear resistivity and the strongly T-dependent Hall effect $R_H(T)$ are found only near the optimal hole concentration (x ~ 0.15-0.18), Kohler's rule is neither obeyed at high nor at intermediate temperatures, and a temperature scale T*, decreasing with increasing hole concentration, provides a link between transport and magnetic properties. New sum rules have been derived for the real and imaginary parts of the frequency-dependent Hall constant and Hall conductivity; the semiclassical Hall constant of a strongly correlated Fermi system is most directly related to the high-frequency Hall conductivity, which can be directly measured in a Faraday rotation experiment. Dynamical mean-field theory, which maps the Hubbard model onto a single-impurity Anderson model solved self-consistently and becomes exact in the limit of large lattice coordination (infinite spatial dimensions), shows that the Mott transition at finite temperatures has a first-order character and has been used to study the optical, Raman, ac Hall and magnetotransport response of the doped Mott insulator. Angle-resolved photoemission on single-crystal Nd2-xCexCuO4-δ (x = 0.15 and 0.22) yields a Fermi surface in good agreement with local-density-approximation calculations, shifting with electron doping as expected in a band-filling scenario, and a positive Hall coefficient has been observed in such superconducting crystals at low temperatures. The Hall coefficient of Si:P diverges at the metal-insulator transition, and in metal-insulator composites near the transition the Hall coefficient can be up to 10^4 times larger than in the pure metal (the Giant Hall effect), attributed to an electron transfer away from the metallic phase towards the insulating phase. Related studies address orbital correlations and dynamical correlation effects in the ferromagnetic half-metal CrO2 (widely used in magnetic recording technology), the role of orbital degeneracy in double-exchange systems such as the manganites La1-xSrxMnO3, the layered organic superconductors kappa-(BEDT-TTF)2X described by a Hubbard model on an anisotropic triangular lattice, the spin Hall conductivity of Bi1-xSbx/CoFeB heterostructures (with a high plateau at Bi-rich, topological-insulator compositions), the quantum anomalous Hall effect in magnetically doped topological insulator thin films, spin diffusion driven by super-exchange in a Mott insulator of ultracold fermions, charge dynamics in the two-dimensional Hubbard model from quantum Monte Carlo simulations (where vertex corrections enhance charge fluctuations, an enhancement important for overscreening), the Hall effect of an excitonic insulator in the semimetallic limit treated with a Green-function formalism, and the insulating ferromagnetism of half-filled La4Ba2Cu2O10 elucidated with energy-resolved Wannier states.
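As a purely illustrative check (not part of the original page), the order-of-magnitude estimates quoted above can be reproduced with a few lines of Python; the carrier densities are the rough textbook values assumed in the text, and the function names are arbitrary.

# Minimal numerical check of the Hall-voltage estimates quoted above:
# V_H = B*I/(n*e*t) and |R_H| = 1/(n*e), with rough carrier densities
# of ~1e29 m^-3 for copper and ~1e25 m^-3 for doped silicon.

E_CHARGE = 1.602e-19  # elementary charge [C]

def hall_voltage(B, I, t, n):
    """Hall voltage [V] for field B [T], current I [A], thickness t [m], carrier density n [m^-3]."""
    return B * I / (n * E_CHARGE * t)

def hall_coefficient(n):
    """Magnitude of the Hall coefficient 1/(n*e) [m^3/C]; negative sign applies for electron carriers."""
    return 1.0 / (n * E_CHARGE)

for material, n in [("copper", 1e29), ("silicon (doped)", 1e25)]:
    V_H = hall_voltage(B=1.0, I=10.0, t=1e-3, n=n)
    print(f"{material}: |R_H| ~ {hall_coefficient(n):.2e} m^3/C, V_H ~ {V_H:.2e} V")
# copper: V_H ~ 6e-7 V (0.6 µV); silicon: V_H ~ 6e-3 V (6 mV)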
A price-based life cycle impact assessment method to quantify the reduced accessibility to mineral resources value

POLICIES AND SUPPORT IN RELATION TO LCA

Fulvio Ardente (ORCID: orcid.org/0000-0002-2461-4475)1, Antoine Beylot1,2 & Luca Zampori1,3

The International Journal of Life Cycle Assessment, volume 28, pages 95–109 (2023)

Abstract

Several methods were developed to quantify the damage to mineral resources in LCA. Building on these and further expanding the concept of how to assess mineral resources in LCA, the authors developed in previous articles a method to account for dissipative resource flows in life cycle inventory (LCI). This article presents a price-based life cycle impact assessment method to quantify the potential impact of dissipative uses of resources. This article firstly defines an impact pathway from resource use to resource dissipation and subsequent damage to the safeguard subject for "mineral resources". It explores the quantification of this damage through the definition of characterization factors (CFs), for application to dissipative flows reported in LCI datasets. Market prices are used as a relevant proxy for the multiple, complex and varied functions and values held by mineral resources. Price data are collected considering a 50-year timeframe. Intervals of 10, 15, 20 and 30 years are considered for sensitivity analysis. Price-based CFs are tested on one cradle-to-gate case-study (copper production), in combination with accounted resources dissipated across the life-cycle. An approach to calculate the normalization factor (NF) is explored at the EU level. CFs are calculated for 66 mineral resources, considering copper as reference substance. Precious and specialty metals have the largest CFs. Minerals are instead ranked at the bottom of the hierarchy. New insights that this method brings in LCA are discussed for the copper production case-study. Losses due to final disposal of tailings are key (90% of total value loss), as opposed to e.g. emissions to environment. Relevance, robustness, completeness and consistency of the price-based CFs are discussed. This method in particular offers a relatively large coverage of elementary flows, with underlying data of good quality. Sensitivity of CFs to the chosen time interval is relatively limited. Initial analysis for a NF based on 14 key resources dissipated in the EU in 2016 is presented. The developed CFs are relevant to address the issue of mineral resources value loss in LCA. They may be used in combination with dissipation-based methods at the LCI level, as tested in this study, or potentially (i) with classical extraction-based LCI datasets or (ii) as potential complements to existing life cycle impact assessment methods not capturing damage to resource value. Future refinements shall aim at extension to additional mineral resources and investigate the possibility of regionalisation of CFs and NF calculation.

Mineral resources are part of our daily lives and are key for our well-being and the majority of technological applications. They are also crucial for the competitiveness and growth of economies. In the European Union (EU), the publication of the Raw Material Initiative in 2008 (EC 2008) set the basis for growing focus on sustainable supply of raw materials from EU sourcing and from global markets, in addition to resource efficiency and recycling.
A secure and sustainable supply of those raw materials considered "critical" is particularly crucial for the EU, and actions to increase EU resilience and open strategic autonomy are therefore underway (EC 2020). Several methods, capturing diverse dimensions of the issue related to mineral resources, have been developed in the last two decades to quantify the damage to mineral resources in the LCA of a product or a system (Sonderegger et al. 2020). The "task force mineral resources" of the Life Cycle Initiative hosted by UN Environment classified these methods according to the questions they address, recommending existing life cycle impact assessment (LCIA) methods in relation to these questions (Berger et al. 2020). The abiotic depletion potential (ADP, ultimate reserve; Guinee et al. 2002; van Oers et al. 2002) is in particular recommended for use by practitioners when addressing the question of the relative contribution of a product system to the depletion of mineral resources (Berger et al. 2020). This recommendation is in line with that of the product and organization environmental footprint (PEF/OEF) methods, currently in its transition phase in the European Union (Zampori and Pant 2019; EC 2021). LIME2 (standing for life-cycle impact assessment method based on endpoint modelling) is moreover interim recommended to quantify the relative (economic) externalities of mineral resource use (Berger et al. 2020). It evaluates the effect of a hypothetical lack of investment of earnings from the sale of finite resources in terms of potential externality of lost future income (Berger et al. 2020; Itsubo and Inaba 2014). The future welfare loss, not published at the time of the "task force mineral resources" work and therefore not recommended (Berger et al. 2020), is a market-price-based method that aims at assessing the social cost of resource exhaustion (Huppertz et al. 2019). In these three methods (ADP, LIME2 and future welfare loss), and generally in other LCIA methods, characterization factors (CFs) are applied to resource extraction flows as classically reported in life cycle inventory (LCI) datasets. During the development of the Organization Footprint Sector Rules (OEFSR) of the copper producing sector, the Technical Secretariat (TS) in charge of drafting the rules highlighted the shortcomings of depletion-based approaches, such as ADP in its various applications (EC 2018a, b). As a follow up, the Joint Research Centre (JRC) explored the possibility of implementing the concept of resource dissipation in an LCA, with a specific focus on the environmental footprint (EF) methods (Zampori and Sala 2017). The "task force mineral resources" additionally called for the definition of the concept of dissipative resource use and for its integration in future method developments (Berger et al. 2020). Several authors have subsequently explored the concept of resource dissipation and its potential implementation in LCA. Beylot et al. (2020b) described the status of resource dissipation in the literature of life-cycle-based studies and suggested a comprehensive definition for this concept, in the absence of a common understanding (in literature) of what a dissipative flow is. In parallel, several authors have developed methods to operationalize the accounting of resource dissipation in LCI and/or LCIA (van Oers et al. 2020; Owsianiak et al. 2022; Charpentier Poncelet et al. 2021; Charpentier Poncelet et al. 2022; Beylot et al. 2020a; Beylot et al. 2021). 
In particular, the Joint Research Centre of the European Commission developed a life cycle inventory method (named as "JRC-LCI" method in the following) that relies on accounting for dissipative flows at the unit process level in LCI datasets, considering a short-term perspective (Beylot et al. 2020a, 2021). Tested on a case study, this method proved relevant to identify hotspots in terms of resource dissipation in supply chains, in mass units. Yet, so far, this method stands for a "fate model" enabling to distinguish between dissipative and non-dissipative resource flows at the unit process level, while the "effect" induced by these dissipative flows is not assessed. The "task force mineral resources" stated that the damage to the safeguard subject for "mineral resources" is the reduction or loss of the "potential to make use of the value that mineral resources can hold for humans in the technosphere" (Berger et al. 2020). In this context, this article complements the JRC-LCI method, which so far enables to account for dissipative flows in a product system at the LCI level in mass units (Beylot et al. 2021), with an impact assessment method that further quantifies the damage induced by these dissipative flows. The combination of the JRC method, at the LCI level (Beylot et al. 2021) and at the impact assessment level (this article), overall enables to quantify reduced accessibility to mineral resources value in LCA. This article is structured as follows: Sect. 2 describes the proposed method (including impact pathway description and computation of CFs); Sect. 3 presents the resulting CFs and impact assessment in a case-study; Sect. 4 discusses these results in terms of (i) the new insights this method brings as support to decision-making, (ii) sensitivity to the timeframe considered for CFs calculation, (iii) possible approach for defining normalization factor; and (iv) relevance, robustness, completeness and consistency of the method. Impact pathway description "Value": the key concept to be captured in the damage assessment The "task force mineral resources" defined the safeguard subject for "mineral resources" with the intention to account for "humans as the most relevant stakeholders for mineral resources, i.e., the focus is on the instrumental value of resources for humans" (Berger et al. 2020). "Value", or "utility" (i.e. by providing a certain function) for a certain subject (usually humans, in the common anthropogenic perspective) is more generally classically core in the definition of natural resources in the literature (Ardente et al. 2019). Value was moreover key in the intended consensus process as described for the SUPRIM project by Schulze et al. (2020), which concluded that the so-called type B perspective ("Abiotic resources are valued by humans for their functions used (by humans) in the technosphere") best summarized their view on the role of abiotic resources. Still at this stage, there is no consensus on the way either the resource value (potentially "instrumental") or the "potential to make use of the value that mineral resources can hold for humans in the technosphere" (Berger et al. 2020), shall be captured, in particular in LCA. Capturing value in the quantification of the damage to mineral resources in LCA is moreover also in line with principles of circular economy, as e.g. fostered in the EU action plan for the circular economy (EC 2015). 
In the latter, the instrumental value/function of the natural resources (extracted, harvested and overall transformed) are aimed to be maintained for the beneficial use by humans. Accounting for mineral resource dissipation and induced value loss in LCA In this context, this study suggests the following impact pathway to account for the reduction or loss of the potential to make use of the value that mineral resources can hold for humans in the technosphere. A product system requires the use ("consumption") of mineral resources. Part of these are extracted from ground (primary resources), while the remaining share stems from the life cycle of other product systems via recycling activities (secondary resources). It is noteworthy that: The concept of "resources" as intended here matches the understanding by the "task force mineral resources", which encompasses resources from both ecosphere and technosphere. It also aligns with the perspective promoted by Schulze et al. (2020), according to which "resources may originate from both primary and secondary production"; "resources" here refers to the ones used by the product system, not to the geological stock of resources as sometimes implicitly considered when referring to "resource depletion". All along the life cycle of the product system under study, part of these resources are rendered not accessible to future users due to different constraints, which prevent humans to make use of the function(s) that these resources could have in the technosphere (e.g. mineral resources emitted to environment). Building on the definition of "resource dissipation" provided by Beylot et al. (2020b), the product system under study consumes/uses mineral resources as inputs, and delivers part of these mineral resources in a dissipated form. It is noteworthy that the level of accessibility of (potentially) dissipative mineral resources may depend on technological and economic factors, which can change over time (Beylot et al. 2020b). The temporal perspective is therefore key in the determination of dissipative flows and shall be specified in any method development building on this impact pathway. Subsequently, these dissipative flows (or losses; Beylot et al. 2020b) of mineral resources further imply the loss of the value that these resources can hold for humans in the technosphere, as humans cannot access them anymore within the time horizon in the problem definition. These dissipative flows damage the safeguard subject for "mineral resources" in terms of a loss of value. Any method that is developed following this impact pathway is accordingly expected to address the question: "How can I quantify the consequences of temporal or permanent loss of the functional values of natural resources (as related to dissipation) caused by its use in a product system?". Resource "prices" as a representative proxy for resource "value" Mineral resources hold a value regarding what they can be used for by humans, i.e. in terms of "services" they may provide to humans within a product (either alone or in combination with other mineral — and sometimes non-mineral — resources). This can reflect an anthropocentric perspective focused on the role of resources in the economy (Schulze et al. 2020). For example, the use of tungsten allows mobile phones to vibrate, while gallium and indium are part of light-emitting diode technology in lamps. 
This "value" is therefore highly connected to the function(s) that the mineral resources may provide to humans, so that the term functional (or instrumental) value may sometimes be used instead of "value". The market price of mineral resources represents the way that these resources are valued by the economic actors requiring their use for product manufacturing (e.g. the electronics, automotive and building sectors). Higher priced metals (e.g. tungsten, gallium and indium) are generally used in more specialized applications than cheaper ones, in which their specific functionalities can be fully utilized. Although these natural resources may also provide some more basic functions, their high production cost normally prevents their use in lower added-value applications, for which they are substituted by lower priced resources. Metal price variations and their consequences in terms of substitution further highlight the close connection between function, value and price of mineral resources. Resource prices also reflect their availability in the market (at a given moment), as also affected by geopolitical tensions and social aspects: prices are affected by (and to some extent reflect) the competition between different production processes. In case of higher metal prices, the metals are used specifically for some of their higher-valued functionalities, e.g. copper substituted by aluminium for pipes or electrical applications when the price of copper increased in the 2000s. In that case, the increase in copper price led to its preferential use in applications of higher value. We may conclude that market prices of resources may be influenced by a large number of factors (some of them listed above), although a discussion of how prices are determined is beyond the scope of this article. The use of economic relations is also not new in LCA: for instance, the use of allocation factors based on the economic value of different co-products can summarize complex attributes of product or service quality that cannot be easily measured by physical criteria (Ardente and Cellura 2012). Overall, the general assumption and concept behind the proposed method for building CFs for resources is that price can be considered a proxy for the multiple, complex and varied functions and values that natural resources can have in highly interconnected socioeconomic systems. It is recognized that natural resources could have cultural, spiritual or emblematic "values" that cannot be captured by the economic value (Dewulf et al. 2015); still, we consider that price-based CFs can be a good proxy of the overall resource functions and values, especially for short-term evaluations, with the additional benefit that they are easy to calculate.

Impact assessment and associated characterization factors: operationalization

This section describes the operationalization of the above-presented general impact pathway. Dissipative flows of mineral resources are accounted for at the LCI level through implementation of the JRC-LCI method (Sect. 2.2.1). The associated value lost is assessed through a market-price-based method as described in Sects. 2.2.2 and 2.2.3.

Resource dissipation in life cycle inventories

The JRC-LCI method consists in reporting dissipative flows of mineral resources at the unit process level, in mass units, considering a predefined list of dissipative mineral resource flows to a number of compartments (Beylot et al. 2020a, 2021). Beylot et al.
(2020a; 2021) suggest considering a short-term perspective (25 years). In this context, any flow of resources to (i) the environment, (ii) final waste disposal facilities and (iii) products-in-use in the technosphere, without providing any significant function anymore (including due to non-functional recycling), is suggested to be reported as dissipative. The JRC-LCI method shall be implemented in two steps: (i) mapping the flows of mineral resources into and out of the unit processes under study ("resource flow analysis", RFA, i.e. substance flow analysis of the resources), and (ii) identifying the dissipative flows and reporting them in the LCI at the unit process level. The JRC-LCI method focuses on dissipation, and therefore excludes "occupation-in-use", for which by definition the function(s) that the resources could hold in the technosphere is (are) exploited (Beylot et al. 2021). Yet, it is recalled that, despite not being a form of dissipation, "occupation-in-use could be considered as potentially affecting the accessibility of the resources for other users" (Beylot et al. 2021). RFA consists in quantifying the flows entering the unit process, as resources from ground and resources embodied in products from the technosphere, and coming out of the unit process. The outputs from the unit process are in one of the three following forms: (i) embodied in the output product; in that case, resources may be conserved (that is, holding a significant function in the product) or, by opposition, dissipated (if holding no or low function); (ii) directly dissipated as emissions to the environment; (iii) embodied in a waste for further treatment; in that case, the resources may be conserved (i.e. significant function conserved, through e.g. a recycling process) or dissipated (e.g. final disposal in a landfill, without valorisation of the function). Moreover, regarding the definition of mineral resources, the same rules as the ones considered in Beylot et al. (2021) for the implementation of the JRC-LCI method have been followed in this study. In short: regarding primary mineral resources, "if the mineral or aggregate has a value as such (e.g., gypsum or sand), the mineral is considered the relevant elementary flow" (Berger et al. 2020), that is to say, it is the resource; instead, if the value of a mineral ore is to host elements only, then the target elements in the ore are considered to be the resources (as in the ecoinvent 3 database; Weidema et al. 2013). Regarding mineral resources in use in the technosphere: as long as the chemical elements, minerals and aggregates hold their value in the product system under study, they are resources. As a basis, the list of mineral resource flows derives from the EF reference package (version 3.0; EC 2019), considering all minerals classified as "resources from ground".

Damage assessment

The impact of resource dissipation in terms of value loss can be calculated as the sum of the mass of each individual resource dissipated multiplied by a CF that reflects its value with respect to a reference substance (Eq. 1):

$$\mathrm{Value\ Loss}\ (VL)=\sum_{i=1}^{n} m_i\cdot CF_i$$

where:

Value loss (VL) = impact related to the dissipation of mineral resources value [kg ref. sub. € eq];

mi = mass of the ith mineral resource dissipated [kg];

CFi = characterization factor of the ith resource, compared to a reference substance and calculated as in Eq. (2):

$$CF_i=\frac{Price_{Av,i}}{Price_{Av,ref.sub.}}\quad\left[\frac{\text{€}/kg_i}{\text{€}/kg_{ref.sub.}}\right]$$

where:

PriceAv,i = average price (over a certain timeframe) of the ith resource [€/kg];

PriceAv,ref.sub. = average price (over a certain timeframe) of the reference substance [€/kg].

This method delivers the impact of a product system on the safeguard subject "mineral resources" in terms of "Value Loss" (VL), i.e. in terms of the loss of value that mineral resources can hold for humans in the technosphere. The approach proposed in Eqs. 1 and 2 is relevant for assessment in a short-term perspective, although it might be relevant also for longer temporal scopes, especially in the absence of easy and suitable alternatives. Through this method, all the flows of different resources dissipated accounted for in the LCI phase (e.g. copper, aluminium and iron) are translated into the equivalent dissipated mass of a reference resource (e.g. copper, gold or antimony) based on their relative values. For example, assuming copper as reference substance, a hypothetical impact of VL equal to 2.5 kg Cu € eq would mean that, along the whole life cycle of the system under study, the overall amount of all the resources dissipated is equivalent, in value terms, to 2.5 kg of copper.

Price data for characterization factors determination

The Historical Statistics for Mineral and Material Commodities of the United States Geological Survey (USGS) are considered in this study for the implementation of Eqs. 1 and 2 (USGS 2020). They represent a comprehensive database of resource prices, characterized by good data availability (relatively high completeness), precision, representativeness and up-to-date information. Net present value (the economic function that allows cash flows occurring at different times to be compared) is considered in order to account for the time value of money. Prices of resources can be affected by variability, especially over short timeframes, induced by a multitude of aspects not necessarily related to the utility of the resource (e.g. political decisions, wars and tariffs), and beyond the scope and problem definition of this article. However, when considering longer periods (e.g. some decades), price fluctuations of many resources tend to be less important. One main temporal perspective (50 years) is considered, with four additional temporal perspectives (10, 15, 20 and 30 years) considered within a sensitivity analysis (see Supplementary Information—Online Resource).

Scope of the impact assessment method

The resource scope, temporal scope and geographical scope of this price-based impact assessment method may be reported building on the framework developed by Schulze et al. (2020). In this study, price-based CFs are developed considering elements (e.g. copper and zinc) and configurations (e.g. clays and gypsum). They may be applied with LCIs accounting for either dissipative flows or flows extracted from ground, at the elemental and configuration level, as in the JRC-LCI method (for dissipative flows) or standard LCI databases (for resource flows extracted from ground). The five temporal perspectives (10, 15, 20, 30 and 50 years; with 50 years as the main reference) are all rather short-term perspectives. This temporal scope of the price-based CFs makes them particularly fit for combination with the JRC-LCI method in a short-term perspective (25 years) as developed by Beylot et al. (2020a, 2021).
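As an illustration only, the following Python sketch mimics the calculation chain of Eqs. 1 and 2: yearly prices are first expressed in reference-year money and averaged over the chosen timeframe, and CFs are then obtained by normalizing to the reference substance. All price figures below are placeholders, not USGS data, and the discounting is a simplified stand-in for the net-present-value adjustment described above.

# Illustrative sketch of Eq. 2: price-based CFs from time-averaged prices.
# The yearly prices are placeholder numbers (not USGS data); a real
# implementation would read the USGS Historical Statistics series and
# express all prices in a common reference year before averaging.

def average_price(prices_by_year, discount_rate=0.0, ref_year=None):
    """Average of yearly prices expressed in ref_year money (simple NPV-style adjustment)."""
    ref_year = ref_year or max(prices_by_year)
    adjusted = [p * (1 + discount_rate) ** (ref_year - y) for y, p in prices_by_year.items()]
    return sum(adjusted) / len(adjusted)

# placeholder price series [€/kg] for a few resources over a few years
price_series = {
    "copper": {2018: 6.0, 2019: 6.5, 2020: 6.2},
    "gold":   {2018: 38000.0, 2019: 42000.0, 2020: 50000.0},
    "gypsum": {2018: 0.008, 2019: 0.009, 2020: 0.009},
}

avg = {res: average_price(series) for res, series in price_series.items()}
reference = "copper"
cf = {res: avg[res] / avg[reference] for res in avg}  # Eq. 2, [kg Cu eq / kg]
print(cf)  # copper -> 1.0, gold >> 1, gypsum << 1

In a full implementation, the price series would be read from the USGS Historical Statistics, and the 10- to 50-year averaging windows of the sensitivity analysis would simply correspond to different slices of the same series.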
Still, the proposed price-based approach might be relevant to be used for longer perspectives, but this requiring additional investigation. Finally, the developed method enables to address the mineral resource issue on a global scale, yet with potentially some differences in values (prices) depending on the region of the world where losses actually occur (see the "Sect. 4" where potential need for regionalization is addressed). This method to account for resource value loss in LCA is tested on one case study. It builds on the work of Beylot et al. (2021) who assessed direct dissipation of mineral resources along the cradle-to-gate primary production of copper, with 1 kg of copper cathode as the reference flow. Beylot et al. (2021) accounted for the mass of dissipative flows along the production process steps; in this article, the damage induced by these dissipative flows is further assessed. The system boundary first encompasses mining and concentration, which result in the production of copper concentrate (containing around 30% of copper) from sulfidic copper ore extraction and treatment. In this case study, tailings are considered to be disposed of in a tailings management facility. The copper concentrate is then further treated in pyrometallurgy, resulting in the production of copper cathodes from the treatment of copper concentrate. This case study mainly builds on the exploitation of two ecoinvent (version 3.5) datasets: "copper mine operation, sulphide ore, GLO" and "copper production, primary, GLO", respectively representing the process of copper concentrate production and copper production (from copper concentrate) at a global scale (ecoinvent 2019; Classen et al. 2009). It is highlighted that this case-study has been discussed to analyse the applicability of the proposed method, although further testing of full cradle-to-grave examples would provide further relevant insights regarding dissipation in the use and product end-of-life phases of products and systems life-cycles. Characterization factors The CFs are computed for 66 minerals and chemical elements based on their 50-year price-average, considering copper as the reference mineral resource. They are represented in Figs. 1 and 2 distinguishing four categories of metals as defined by the UNEP (2011; precious, specialty, ferrous and non-ferrous metals), in addition to one generic category of minerals. For sake of clarity, CFs have been represented in two separate figures, distinguishing resources with CFs > 1 from those with CFs < 1. CFs are available in the Supplementary Information (Online Resource) associated with this article, considering alternative timeframes (10, 15, 20 and 30 years, in addition to 50 years) and different reference substances (e.g. gold and antimony, in addition to copper). Highest CFs are observed for precious metals (gold, platinum group metals — PGMs — and to a slightly lower extent silver; Fig. 1). Precious metals have historically been prized for their relation to wealth and status, but they are increasingly used in technological applications (UNEP 2011). The CFs associated with gold and PGMs are more than three orders of magnitude larger than that of Cu, by definition set to 1 in this method as the reference chemical element. Moreover, specialty metals are also globally highly ranked in this classification (Fig. 1). They are classically used in industrial and consumer products in small amounts thanks to their specific chemical and physical properties. Rhenium, thallium, gallium, germanium, etc. 
in particular have CFs between 2 and 3 orders of magnitude larger than that of copper. Thirdly, ferrous and non-ferrous metals have CFs in-between one order of magnitude larger and one order of magnitude lower than that of copper. In particular, the basic metals lead, nickel, tin, aluminium and zinc are ranked relatively close to copper. Finally, minerals are globally ranked in the second part (and bottom) of this hierarchy. These minerals include in particular gypsum, feldspar, diatomite, salt, sand and gravel, etc. Price-based CFs for mineral resources value loss accounting: only positive values in logarithmic scale are represented (i.e. CFs > 1 [kg Cu € eq/kg]); considering Cu as reference substance and 50 year-price average Price-based CFs for mineral resources value loss accounting: only negative values in logarithmic scale are represented (i.e. CFs < 1 [kg Cu € eq/kg]); considering Cu as reference substance and 50 year-price average Application to a case study Dissipative flows of mineral resources are accounted at the LCI level, based on the JRC-LCI method, as already presented in Beylot et al. (2021), including the discussion on main assumptions and limitations. In order to produce 1 kg of copper cathode, 0.88 kg of direct dissipative resource flows is generated, mainly in the form of calcium carbonate (51% of the total mass) with copper additionally representing a significant contribution (30% in mass terms), while iron (8%), sulphur (5%), molybdenum and chromium (2%) overall represent more limited shares (Fig. 3). Tailings final disposal, and to a slightly lower extent pyrometallurgy and mining and concentration, all represent important contributions in mass terms (respectively 42%, 29% and 26%). Contributions of dissipative flows to the total mass dissipated and damage on mineral resources value, induced by copper production The further implementation of market price-based CFs as developed in this study, applied to the dissipative flows at the unit process level, enables to account for the associated resources value loss. Copper is the main contributing mineral and metal resource, representing 62% of the total impact. Mineral resource value loss is mainly associated with copper loss in tailings disposal facility (54%), and to a lower extent in slags used in construction (7%) and in environment (1%). Molybdenum is the second most contributing resource (25%, in tailings disposal facility). Other dissipative resource flows have more limited contributions (nickel, 5%; iron, 3%; calcium carbonate, 2%; etc.), as driven by smaller masses dissipated compared to copper and molybdenum (e.g. in the case of nickel) and/or due to lower CFs (e.g. regarding iron and calcium carbonate). Tailings final disposal is the hotspot process step, representing 90% of the damage to mineral resources value. Instead, dissipation as emissions to the environment (from mining and concentration and pyrometallurgy) only represents 3% of the total damage to mineral resources value. Comparison with depletion-based impact assessment method Price-based CFs may be put in perspective with abiotic depletion potential (ADP) CFs, ultimate reserve, as recommended respectively by the "task force mineral resources" and by the European Commission in the PEF method, to calculate the contribution of mineral and metal resource use to depletion (EC 2021). 
Prices and ultimate reserves capture two distinct aspects of mineral resources, respectively value and scarcity, which are only poorly interconnected and ultimately result in very different sets of CFs, as demonstrated by a poor correlation (R2 = 0.26; Fig. 4). For example, ADP CFs for gallium and germanium are very low (on the order of 10−7) because these elements are not scarce in the Earth's crust. By contrast, their price-based CFs are relatively high (beyond 200): the amount of these resources available on the market for practical uses is relatively low compared to demand in several very specific, high-value applications. Indeed, a resource may be abundant in the crust (such as magnesium, one of the most abundant elements in the Earth's crust and in seawater) and yet scarce in forms that can be mined and made available for production processes (e.g. magnesium is listed among the EU Critical Raw Materials; EC 2020). The proposed method may therefore be more relevant for assessment in a short-term perspective, focusing on the use of resources in the technosphere (as currently known).
Fig. 4 Price-based CFs (50-year average) as a function of ADP CFs (ultimate reserve), considering 45 chemical elements and with Sb as the reference substance in both cases
Timeframe and price variations
Metal prices are known to be relatively volatile. This is for example the case for rare earth elements, whose prices increased significantly in 2011, e.g. by a factor of 70 for neodymium when comparing its value in July 2011 with the 2002–2003 average (Bru et al. 2015). Despite this volatility, in the very common context of metal co-production (i.e. multi-functional processes), metal prices have classically been used in LCA as an allocation key to assign impacts to co-produced metals. For example, in their LCA study of metal production, including rare earth oxides, Nuss and Eckelman (2014) consider average prices over the period 2006–2010, whereas Arshi et al. (2018) consider rare earth prices in 2016. The choice of different short time intervals to calculate price averages as allocation keys, in particular for rare earth elements, which showed extreme price increases in 2011, may partly explain the different impact assessment results obtained in different studies. Building on this observation, the timeframe for deriving average prices in this study has been intentionally set to a relatively long period (50 years). Such a time interval is well suited to smoothing out price fluctuations and preventing sudden, short-term volatility from having a large effect on the average price. It is also assumed that the relative price differences between resources may be considered a proxy for the relative price differences between resources in the short-term future. The influence of the timeframe chosen to calculate price-based CFs is further investigated in a sensitivity analysis considering average prices over 10, 15, 20 and 30 years, as compared to the 50 years set as the reference in this study. Sixty mineral resources are covered in this sensitivity analysis, compared to 66 covered with a CF for the 50-year timeframe; this is due to a lack of recent data, which hampers the calculation of reliable CFs for the shorter timeframes.
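The sensitivity analysis just described, and the comparison with ADP above, both boil down to correlating two vectors of CFs on a logarithmic scale. The following sketch shows one possible way to structure such a check; the price series are randomly generated placeholders (the function names and base-price values are the editor's assumptions, not the article's data pipeline), so the printed R2 values are only indicative of the procedure, not of the published results.

```python
# Sketch of the timeframe sensitivity check: recompute CFs over different averaging
# windows and correlate them (on a log scale) with the 50-year reference CFs.
import math
import random

random.seed(0)

def cf_from_series(prices_by_resource, window, reference="copper"):
    """CFs from the mean of the last `window` annual prices, relative to the reference."""
    means = {r: sum(p[-window:]) / window for r, p in prices_by_resource.items()}
    return {r: m / means[reference] for r, m in means.items()}

def r_squared(x, y):
    """Coefficient of determination of y against x (simple linear fit)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# 50 years of hypothetical annual prices for a handful of resources (placeholders).
resources = ["copper", "gold", "nickel", "iron", "zinc", "tin"]
prices = {r: [random.lognormvariate(mu, 0.3) for _ in range(50)]
          for r, mu in zip(resources, [1.6, 10.3, 2.5, -0.9, 0.9, 3.0])}

cf_50 = cf_from_series(prices, 50)
for window in (10, 15, 20, 30):
    cf_w = cf_from_series(prices, window)
    xs = [math.log10(cf_50[r]) for r in resources]
    ys = [math.log10(cf_w[r]) for r in resources]
    print(f"{window:>2}-year vs 50-year CFs: R2 = {r_squared(xs, ys):.3f}")
```

Working on log-transformed CFs is a natural choice here because the CFs span several orders of magnitude, as shown in Figs. 1 and 2.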
Despite differences between the CFs of individual resources calculated for the 50-year timeframe and for 10, 15, 20 and 30 years, one observes a very good correlation, as demonstrated by R2 values in the interval [0.96; 0.99] (Fig. 5 and Supplementary Information—Online Resource). Despite price evolution over time, both in the long run and in the short run (with sudden peaks), relative prices as captured by the developed price-based CFs are relatively similar regardless of the timeframe considered. The detailed calculation of the CFs for the different timeframes, and for different reference resources, is provided in the Supplementary Information (Online Resource).
Fig. 5 CFs based on 50-year average prices as a function of CFs based on 10-year average prices: correlation for 60 mineral resources, considering Cu as the reference mineral resource
Strengths and weaknesses of the price-based characterization method are discussed, both in absolute terms (considering this method only) and in relative terms (comparison with other, widely used, impact assessment methods), with regard to the following criteria: relevance to the question to be addressed; robustness of underlying assumptions; completeness (coverage of elementary flows); data quality (including uncertainty and representativeness); consistency with other impact categories; operationalization and communication. The "task force mineral resources" of the Life Cycle Initiative hosted by UN Environment set the issue of "value" as of utmost importance regarding the safeguard subject for "mineral resources" (which is "the potential to make use of the value that mineral resources can hold for humans in the technosphere"; Berger et al. 2020). The price-based characterization method makes it possible to capture "value" in the impact assessment step, especially as concerns the functions that resources may have for humans in the technosphere. When combined with a method to account for dissipated resources at the LCI level, it captures the loss of value of mineral resources induced by a product system over its life cycle, i.e. the damage to the safeguard subject as defined by the "task force mineral resources". Methods to account for dissipation notably include the JRC-LCI method considered in the case study of this article. Further combination of price-based CFs with other methods to account for dissipative flows at the inventory level is also conceivable, e.g. with the method developed by Owsianiak et al. (2022), which distinguishes actual dissipative emissions of resources to the environment from non-dissipative ones, in the LCI step and in mass units. Yet in the latter case, the robustness of the conceptual foundations for combining such LCI-based methods with price-based CFs (e.g. regarding consistency in temporal perspectives) and the associated potential issues of operationalization (e.g. nomenclatures of elementary flows) still need to be assessed. More generally, it is noteworthy that price-based CFs may be directly combined with (i) the classical approach to resource accounting in LCIs, which consists in reporting resource extraction from the ground, or even (ii) LCIA methods that apply to current LCIs (e.g. the average dissipation rate, ADR, and lost potential service time, LPST, methods developed by Charpentier Poncelet et al. 2021 for estimating the impacts of dissipative flows of metals).
These possible combinations should be further explored with regard to a number of aspects, including actual complementarity (e.g. in terms of the conceptual grounds of these methods, and of operationalization) and consistency in terms of assumptions, problem definition and scope. This method builds on a limited number of hypotheses and layers of data, whose uncertainty propagates into the derivation of the CFs. Still, it rests on one fundamental hypothesis: historical market prices of mineral resources are assumed to be the best available proxy for their "values" in the technosphere and to be representative of resource prices in the near future. At this stage, there is not only no consensus on the way resource value should be captured in the LCIA of mineral resource use; there is also still no consensus on what the concept of "value" covers and how it should be defined. We consider that market prices of mineral resources represent the way these resources are valued by economic actors, who use them for the specific functions they provide. It is true that, on the production side, market price is driven by a number of parameters, such as energy consumption and the associated costs, scarcity, etc. Energy consumption, in particular, represents a relevant component of resource price. In any case, the value of resources should also reflect their accessibility and the difficulty of making the resource available for use in a product system (including the energy necessary to mine and refine it). Therefore, we consider that the costs of energy consumption should be included in the impact assessment of resources. As a future development of the method, the possibility of decoupling energy costs from resource value should be investigated. It is also recommended to ensure consistency between the inventoried dissipative flows and their corresponding CFs. This can be difficult in some cases; for example, metals dissipated in unrefined ores may have a lower value than refined metals. Still, we would recommend using the price of refined metals as the best available proxy (in the absence of more precise price figures). This is, however, a limitation needing further exploration. On the demand side, the price paid by economic actors is conditioned by the benefit they expect to derive from the use of these mineral resources. In terms of robustness of the underlying assumptions, it can be considered that resource prices are well suited as a proxy to determine resource value as intended in this paper. Proxies have also been adopted by other impact assessment methods. For example, ADP CFs are built considering that the resource content of the continental crust (assuming a 12 km depth) is a correct proxy for the ultimate reserve, which is further assumed to be a correct proxy for the ultimately extractable reserve in the computation of the CFs (van Oers et al. 2020). Similarly, Owsianiak et al. (2022) assume that the average concentration in the continental crust is a good proxy threshold for mineral resource accessibility, and that all chemical elements in extracted mineral ores (even those in concentrations close to 0) are mineral resources.
Therefore, we consider that (i) the price-based method developed in this article makes it possible to account for the loss of value of mineral resources, which makes it fully relevant since this topic is central to the "task force mineral resources" recommendations, and (ii) there is some trade-off regarding the robustness of the underlying assumptions (here, considering "price" a good proxy for "value"), yet this is acceptable in view of the assumptions made in other classically implemented LCIA methods for mineral resource use.
Data quality and completeness of coverage
The CFs are based on mineral resource market price data which can overall be considered of good quality. This encompasses (i) reliability, (ii) representativeness and (iii) completeness:
Prices are by definition a quantified value agreed upon and communicated between stakeholders (sellers and buyers), and potentially transparently available to the market. In this study, data are drawn from the USGS. Therefore, both the nature of the data at stake and the source used for CF compilation make them reliable, in absolute terms (relatively low uncertainty) and in relative terms compared to other types of data (in particular physical flows) classically implemented in LCA calculations;
Representativeness here refers to both temporal and geographical representativeness. Data over a 50-year interval are implemented in the CFs, so that temporal representativeness may be considered good. Moreover, shorter temporal scopes show rather limited influence on the CFs, as discussed in Sect. 4.2. Geographical representativeness is also expected to be good, with many metal markets classically operating at a worldwide scale;
Finally, the coverage of substances is rather large. CFs have been developed for 66 minerals and chemical elements, which is comparable to the good coverage of the ADR/LPST methods and of the updated ADP method (61 metals and 76 chemical elements covered, respectively; Charpentier Poncelet et al. 2022; van Oers et al. 2020). One asset of the price-based CFs lies in particular in the possibility of considering not only chemical elements but both chemical elements and minerals as "mineral resources".
Consistency with other impact categories, operationalization and communication
The price-based method to account for resource value loss, as developed in this study with respect to mineral resources, may be further extended to other types of natural resources. This in particular includes fossil resources, while the applicability to other natural resources such as land or freshwater remains to be explored and may be questionable. The price-based CFs developed in this study are available in the Supplementary Information in Excel format (Online Resource). They are ready for use by practitioners, either directly combined with the classical approach of accounting for mineral resources extracted from the ground in LCI datasets, or with more recent methods accounting for dissipative flows, such as the JRC-LCI approach. Moreover, the concept of "value loss", which is captured by these CFs when combined with dissipative flows in LCI datasets, may be adequately understood by (and therefore communicated to) decision-makers and a non-expert public audience. However, consistency with other impact categories requires careful evaluation, especially with respect to time frame and problem definition.
Improved CFs and developing a normalization factor: a way forward
Improving CF reliability and completeness
Despite the overall good quality (including representativeness) of the CFs computed in this study, they may be further improved in the future in a number of ways. Firstly, the CFs for chemical elements refer to metals in their refined form, whereas losses along the life cycle are not necessarily in metal form, but rather e.g. in oxidized form and therefore of lower value. Moreover, in one case, the price of a material (steel) was used as a proxy for the price of the major metal it is composed of (iron). The CFs listed in this study accordingly tend to overestimate the impact of dissipative losses in the production steps along the life cycle of products and systems. Secondly, it is noteworthy that some commodities (e.g. rare earths) are primarily traded on over-the-counter markets (i.e. directly between two parties, without a central exchange or broker). This implies less transparent data and, as a consequence, potentially more uncertain reporting in databases such as that of the USGS. These limits may be overcome by exploring further data sources, including e.g. data published by the London Metal Exchange (LME 2022), Fastmarkets (Fastmarkets 2022), Oanda (Oanda 2022), etc. Exploring these other data sources is beyond the scope of this study, but may be relevant in further developments, as it would potentially open the door to:
More complete data, i.e. potentially integrating additional mineral resources beyond those covered in Figs. 1 and 2, including disaggregation of the CF associated with rare earth elements (REEs) into CFs for individual REEs;
Less uncertain data, including intervals of values to be integrated in a sensitivity analysis;
Evaluation of the relevance (or not) of regionalising the CFs, at a level of disaggregation (in terms of countries/regions) to be determined.
"Relevance" could here be explored both in terms of (i) any actual, observable difference in prices for each mineral resource (as a function of the regions where it is sold) and (ii) the conceptual validity of such a regionalization (in a context where the developed impact assessment method intends to evaluate value loss for humans). This latter point would require exploring how far the LCIA method should be approached from one single global perspective (i.e. one single value of each mineral resource for humans, independently of any price difference by region), or should instead account for potentially different (regional) valuations by humans, as might be reflected by different prices per region. Moreover, while mineral resource elements may hold a value for humans, in some cases they may reduce the accessibility of other mineral resource elements, and/or the functionality of these other elements, therefore negatively affecting the value of these other resources. This is e.g. the case of arsenic in copper concentrates, which is deleterious for downstream resource recovery through smelting and refining processes. Each smelter sets different limits for contaminants, reflected in penalties that potentially escalate with increasing concentration (Salomon-de-Friedberg and Robinson 2015). More generally, the content of impurities is sometimes used to reflect the quality of recycling (Tonini et al. 2022).
It is also considered that the presence of potential impurities is to some extent factored into the price paid by a smelter. However, these aspects are not addressed in the LCIA method presented in this article. Further developments, both at the inventory level (within the JRC-LCI method) and at the impact assessment level, would be relevant to further reflect how far impurities affect the accessibility and value of other mineral resources in a material (whether a product or a waste). Finally, it is noteworthy that the CFs shall be updated on a sufficiently regular basis, at a frequency that depends on the temporal perspective considered for the CFs; that is, more frequent updates for CFs averaged over 10 years than for CFs averaged over 50 years. Using a 50-year timeframe as the reference also reduces the need for frequent updates, owing to the smoothing effect associated with such a long timeframe.
An approach to develop a normalization factor
In LCA, and in particular in the EF, normalization factors (NFs) are developed as a useful step towards a better interpretation of results. As an example, Crenna et al. (2019) developed global NFs for the EF impact categories (Footnote 3). A way forward to develop an NF for the impact category on dissipated resources is presented here. Calculations have, however, been based on annual data of resource dissipation as available for the EU. Value loss is quantified considering 14 mineral resources, based on data from material system analyses (Matos et al. 2021, 2020; Passarini et al. 2018). These references adopted a common approach to flow accounting, despite showing some inconsistencies in geographical and temporal scopes, as (i) data have mainly been drawn for the year 2016, yet also considering data for earlier or later years for some mineral resources; and (ii) they are not globally representative but generally refer to the EU-27 (i.e. without the UK), except for three substances (Al, Cu and Fe). We checked the correspondence in flows to ensure that the dissipative flows in these references were equivalent to the dissipative flows in the JRC-LCI method, including estimates of in-use dissipation (Footnote 4). The correspondence between nomenclatures and the calculations are described in the Supplementary Information (Online Resource). In this context, the estimated normalization factor for the EU amounts to 1.63E+07 tonnes Cu€eq, with a large contribution from Fe (76%) and, to a lesser extent, Al and Cu (9% each). This large contribution of Fe to the total value loss in the EU is primarily driven by its large contribution (93%) to the total mass of resources dissipated: the mass of Fe dissipated is between 1 and 5 orders of magnitude larger than that of each of the 13 other resources accounted for in this analysis. The approach described here represents a first attempt to estimate the NF based on the proposed method, and therefore the use of this NF by LCA practitioners for normalization should be implemented with caution. Additional research to develop a robust NF is needed, especially because data on the amounts dissipated are so far available for only a limited number of resources. Moreover, the approach described here for an EU-level NF should also be considered at the global scale (in particular to ensure consistency with the price series used for the calculation of the CFs).
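Structurally, the NF is simply the sum, over the covered resources, of the annual dissipated mass multiplied by the corresponding price-based CF. The sketch below illustrates that structure only; the masses and CFs are placeholders chosen by the editor, not the material-system-analysis figures used in the study, so the printed totals will not match the 1.63E+07 tonnes Cu€eq reported above.

```python
# Sketch of the normalization-factor estimate: annual dissipated mass per resource
# (EU level) multiplied by the price-based CF and summed. All figures are placeholders.

# Hypothetical annual dissipation in the EU, in tonnes per year.
annual_dissipation_t = {"iron": 4.0e7, "aluminium": 1.5e6, "copper": 3.0e5,
                        "zinc": 2.0e5, "nickel": 5.0e4}

# Hypothetical price-based CFs, in kg Cu eq per kg (equivalently t Cu eq per t).
cf = {"iron": 0.08, "aluminium": 0.35, "copper": 1.0, "zinc": 0.45, "nickel": 2.4}

# Normalization factor = sum over resources of (annual dissipated mass x CF).
nf = sum(m * cf[r] for r, m in annual_dissipation_t.items())
total_mass = sum(annual_dissipation_t.values())

print(f"normalization factor: {nf:.2e} t Cu eq per year")
for r, m in annual_dissipation_t.items():
    print(f"  {r}: {100 * m * cf[r] / nf:.1f} % of value loss, "
          f"{100 * m / total_mass:.1f} % of mass")
```

Printing both the value-loss share and the mass share side by side makes visible the point discussed above: a resource such as iron can dominate the NF through sheer mass even though its CF is low.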
Conclusions and perspectives
This article presents a price-based characterization method to quantify the impact of mineral resource use in LCA and, in particular, the loss of value that abiotic resources (primary and secondary) may hold for humans in the technosphere (Schulze et al. 2020). The price of a given resource is used as a representative proxy for the functions that the resource has in the economy. The CFs of this LCIA method are calculated for 66 mineral resources, considering both minerals (e.g. clays and gypsum) and chemical elements (e.g. copper and zinc), based on their 50-year price averages and with copper as the reference mineral resource. Alternative CFs are moreover made available for other temporal perspectives (10, 15, 20 and 30 years). General considerations on the method are not affected by changes in the chosen reference substance (e.g. copper, gold or antimony). The temporal scope of the method is rather oriented towards short-term assessment, and its geographical scope is global. The potential use for longer time frames and the relevance of regionalization should be further explored. The 50-year price-based CFs are tested in this article on one cradle-to-gate case study (copper production), in combination with the JRC-LCI method, which accounts for dissipative mineral resource flows at the inventory level. This combined use of the JRC-LCI method and the price-based CFs captures the loss of value of mineral resources induced by a product system over its life cycle. The developed price-based CFs may alternatively be combined with (i) other methods to account for resource dissipation in the LCI of products and systems, e.g. methods that consider only emissions of mineral resources to the environment as dissipative flows; (ii) the classical approach of accounting for resource extraction from the ground; or (iii) some mineral resource LCIA methods that apply to current LCI datasets, yet without capturing the value of resources (e.g. ADR and LPST). Such combinations shall be further explored, in particular in terms of consistency (e.g. of the underlying assumptions of each method) and operationalization. The developed price-based characterization method is relevant to address the issue of mineral resource value loss in LCA. The associated CFs enable good coverage of elementary flows, building on underlying data of good quality. These CFs can be easily understood by LCA practitioners and by a non-expert public (including policy makers). They moreover offer relevant perspectives for coherently accounting for natural resources (including mineral resources) in LCA. Yet, despite this overall satisfactory level of quality, the developed CFs at the same time offer perspectives for short- and long-term improvements. In the short term, these CFs may be further refined considering (i) higher disaggregation at the level of substances, including a distinction between the forms of metals (e.g. metal at high purity versus in oxidized form); (ii) extension to additional mineral resources not yet covered; (iii) potential regionalisation of the CFs (e.g. at the level of some regions of the world, such as the EU); and (iv) further development and testing of NFs following the approach proposed in the present article. Finally, it is noteworthy that the developed CFs have so far not been tested extensively on a broad number of case studies. Potential users of these CFs should be aware of this limitation.
It is recommended that these CFs should be further tested before they are applied routinely in LCA studies. Main data analysed and generated in this article is provided in the Supplementary information (SI) (Online Resource). This includes the background data for the calculation of CFs (with Cu as reference substance); the background data for the calculation of CFs (with Au and Sb as reference substances); the background data for the calculation of NF; and the correspondence between the dissipative flows in the JRC-LCI method and in the background studies used to calculate the NF. For a comprehensive discussion on value and functions of resources, we recommend dedicated articles on the subjects, as in Dewulf et al. (2015) and Schulze et al. (2020) Different and better proxies could be used in long-term assessments, although the authors believe that, due to their simplicity, long historical series of market prices may be also considered for assessments referred to longer time frames. EF impact categories include climate change, ozone depletion, human toxicity (cancer and non-cancer), ecotoxicity, particulate matter, ionizing radiation, photochemical ozone formation, acidification, eutrophication (terrestrial, marine, and freshwater), land use, water use and resource use. In use dissipation refers, for example, to loss of zinc due to corrosion of zinc coating on steel and loss of copper due to spread of copper sulphate as a fungicide. Ardente F, Beylot A, Zampori L (2019) Towards the accounting of resource dissipation in LCA. XIII Conference of Rete Italiana LCA, Rome, 14–15 June 2019. https://www.sipotra.it/wp-content/uploads/2020/01/Atti-del-XIII-Convegno-della-Rete-Italiana-LCA.pdf (Last access: June 2022) Ardente F, Cellura M (2012) Economic allocation in life cycle assessment. J Ind Ecol 16:387–398. https://doi.org/10.1111/j.1530-9290.2011.00434.x (Last access: June 2022) Arshi PS, Vahidi E, Zhao F (2018) Behind the scenes of clean energy: the environmental footprint of rare earth products. ACS Sustain Chem Eng 6:3311–3320. https://doi.org/10.1021/acssuschemeng.7b03484 (Last access: June 2022) Berger M, Sonderegger T, Alvarenga R, Bach V, Cimprich A, Dewulf J, Frischknecht R, Guinée J, Helbig C, Huppertz T, Jolliet O, Motoshita M, Northey S, Peña CA, Rugani B, Sahnoune A, Schrijvers D, Schulze R, Sonnemann G, Valero A, Weidema BP, Young SB (2020) Mineral resources in life cycle impact assessment: part II– recommendations on application-dependent use of existing methods and on future method development needs. Int J Life Cycle Ass 25:798–813. https://doi.org/10.1007/s11367-020-01737-5 (Last access: June 2022) Beylot A, Ardente F, Penedo De Sousa Marques A, Mathieux F, Pant R, Sala S, Zampori L (2020a) Abiotic and biotic resources impact categories in LCA: development of new approaches. EUR 30126 EN, Publications Office of the European Union, Luxembourg, 2020a, ISBN 978–92–76–17227–7, https://doi.org/10.2760/232839, JRC120170 Beylot A, Ardente F, Sala S, Zampori L (2021) Mineral resource dissipation in life cycle inventories. Int J Life Cycle Ass 26:497–510. https://doi.org/10.1007/s11367-021-01875-4 (Last access: June 2022) Beylot A, Ardente F, Sala S, Zampori L (2020b) Accounting for the dissipation of abiotic resources in LCA: status, key challenges and potential way forward. Resour Conserv Recy 157:104748. https://doi.org/10.1016/j.resconrec.2020.104748 (Lastaccess:June2022) Bru K, Christmann P, Labbé JF, Lefebvre G (2015) - Panorama mondial 2014 du marché des Terres Rares. 
Rapport public. BRGM/RP-65330-FR. 194 p., 58 fig. 32 tab Charpentier Poncelet A, Helbig C, Loubet P, Beylot A, Muller S, Villeneuve J, Laratte B, Thorenz A, Tuma A, Sonnemann G (2021) Life cycle impact assessment methods for estimating the impacts of dissipative flows of metals. J Ind Ecol 25:1177–1193. https://doi.org/10.1111/jiec.13136 (Last access: June 2022) Charpentier Poncelet A, Loubet P, Helbig C, Beylot A, Muller S, Villeneuve J, Laratte B, Thorenz A, Tuma A, Sonnemann G (2022) Midpoint and endpoint characterization factors for mineral resource dissipation: methods and application to 6000 data sets. Int J LIfe Cycle Ass. https://doi.org/10.1007/s11367-022-02093-2 Classen M, Althaus HJ, Blaser S, Tuchschmid M, Jungbluth N, Doka G, Faist Emmenegger M, Scharnhorst W (2009) Life cycle inventories of metals. Final report ecoinvent data v2.1 No.10. EMPA Dübendorf, Swiss Centre for Life Cycle Inventories, Dübendorf, CH Crenna E, Secchi M, Benini L, Sala S (2019) Global environmental impacts: data sources and methodological choices for calculating normalization factors for LCA. Int J Life Cycle Ass 24:1851–1877. https://doi.org/10.1007/s11367-019-01604-y (Last access: June 2022) Dewulf J, Benini L, Mancini L, Sala S, Blengini GA, Ardente F, Recchioni M, Maes J, Pant R, Pennington D (2015) Rethinking the Area of Protection "Natural Resources" in life cycle assessment. Environ Sci Technol 49:5310–5317. https://doi.org/10.1021/acs.est.5b00734 (Last Access: June 2022) EC (2008) Communication from the Commission to the European Parliament and the Council. The raw materials initiative — meeting our critical needs for growth and jobs in Europe {SEC(2008) 2741}. Brussels 4.11.2008. COM(2008) 699 final EC (2015) Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Closing the loop - An EU action plan for the Circular Economy. COM 614 EC (2018a) Website "The Environmental Footprint Pilots". Available at: http://ec.europa.eu/environment/eussd/smgp/ef_pilots.htm (Last access: 22/06/2022) EC (2018b) Organisation Environmental Footprint Sector rules – Copper Production, 2018b. Version Number 3.0. Available at: http://ec.europa.eu/environment/eussd/smgp/documents/OEFSR_Copper.pdf (Last access: 22/06/2022) EC (2019) EF reference package 3.0 (transition phase). Available at: https://eplca.jrc.ec.europa.eu/LCDN/EF_archive.xhtml (Last access: 15/10/2022) EC (2020) Communication from the Commission to the European Parliament, the Council, the European economic and social committee and the Committee of the regions. Critical Raw Materials Resilience: Charting a Path towards greater Security and Sustainability. COM/2020/474 final EC (2021) Commission recommendation of 16.12.2021 on the use of the Environmental Footprint methods to measure and communicate the life cycle environmental performance of products and organisations. Brussels 16.12.2021 C(2021) 9332 final ecoinvent (2019) The ecoinvent Database. Available at: https://ecoinvent.org/the-ecoinvent-database/ (Last access: May 2022) Fastmarkets (2022) Website. https://www.fastmarkets.com/ (Accessed 13 June 2022), (Last access: June 2022) Guinee JB (2002) Handbook on life cycle assessment operational guide to the ISO standards. Int J Life Cycle Ass 7:311. https://doi.org/10.1007/BF02978897 (Last access: June 2022) Huppertz T, Weidema BP, Standaert S, De Caevel B, van Overbeke E (2019) The social cost of sub-soil resource use. Resour 8. 
https://doi.org/10.3390/resources8010019 Itsubo N, Inaba A (2014) LIME2 - chapter 2: characterization and damage evaluation methods. Tokyo London Metal Exchange (LME) (2022) Website: https://www.lme.com/ (Accessed 13 June 2022) (Last access: June 2022) Matos CT, Ciacci L, Godoy León MF, Lundhaug M, Dewulf J, Müller DB, Georgitzikis K, Wittmer D, Mathieux F (2020) Material system analysis of five battery-related raw materials: cobalt, lithium, manganese, natural graphite, nickel, EUR 30103 EN, Publication Office of the European Union, Luxembourg, 2020, ISBN 978–92–76–16411–1. https://doi.org/10.2760/519827, JRC119950 Matos CT, Devauze C, Planchon M, Ewers B, Auberger A, Dittrich M, Wittmer D, Latunussa C, Eynard U, Mathieux F (2021) Material system analysis of nine raw materials: barytes, bismuth, hafnium, helium, natural rubber, phosphorus, scandium, tantalum and vanadium. EUR 30704 EN, Publications Office of the European Union, Luxembourg, ISBN 978–92–76–37768–9. https://doi.org/10.2760/677981, JRC125101 Nuss P, Eckelman MJ (2014) Life cycle assessment of metals: a scientific synthesis. PLoS ONE 9(7):e101298. https://doi.org/10.1371/journal.pone.0101298 (Last Access: June 2022) Oanda (2022) Website. www.oanda.com (Last access: June 2022) Owsianiak M, van Oers L, Drielsma J, Laurent A, Hauschild MZ (2022) Identification of dissipative emissions for improved assessment of metal resources in life cycle assessment. J IND ECOL 26:406–420. https://doi.org/10.1111/jiec.13209 (Last access: June 2022) Passarini F, Ciacci L, Nuss P, Manfredi S (2018) Material flow analysis of aluminium, copper, and iron in the EU-28, EUR 29220 EN. Publications Office of the European Union, Luxembourg 2018 ISBN 978–92–79–85744–7. https://doi.org/10.2760/1079, JRC 111643 Salomon-de-Friedberg H, Robinson T (2015) Tackling impurities in copper concentrates. Teck Resources Limited. https://www.teck.com/media/Tackling-Impurities-in-Copper-Concentrates.pdf (Accessed Sept 2022) Sonderegger T, Berger M, Alvarenga R, Bach V, Cimprich A, Dewulf J, Frischknecht R, Guinée J, Helbig C, Huppertz T, Jolliet O, Motoshita M, Northey S, Rugani B, Schrijvers D, Schulze R, Sonnemann G, Valero A, Weidema BP, Young SB (2020) Mineral resources in life cycle impact assessment—part I: a critical review of existing methods. Int J Life Cycle as 25:784–797. https://doi.org/10.1007/s11367-020-01736-6 (Last access: June 2022) Schulze R, Guinée J, van Oers L, Alvarenga R, Dewulf J, Drielsma J (2020) Abiotic resource use in life cycle impact assessment—part I- towards a common perspective. Resour Conserv Recy. https://doi.org/10.1016/j.resconrec.2019.104596 Tonini D, Albizzati PF, Caro D, De Meester S, Garbarino E, Blengini GA (2022) Quality of recycling. Waste Manage 146:11–19. https://doi.org/10.1016/j.wasman.2022.04.037 (Last access: September UNEP (2011) Recycling rates of metals - a status report, A report of the Working Group on the Global Metal Flows to the International Resource Panel. Graedel, T. E.; Allwood, J.; Birat, J.-P.; Reck, B. K.; Sibley, S. F.; Sonnemann, G.; Buchert, M.; Hagelüken, C. 2011 USGS (2020) Historical statistics for mineral and material commodities in the United States. https://www.usgs.gov/centers/national-minerals-information-center/historical-statistics-mineral-and-material-commodities (Last access: May 2022) van Oers L, de Koning A, Guinée JB, Huppes G (2002) Abiotic resource depletion in LCA. 
Road and Hydraulic Engineering Institute, Ministry of Transport and Water, Amsterdam van Oers L, Guinée JB, Heijungs R (2020) Abiotic resource depletion potentials (ADPs) for elements revisited—updating ultimate reserve estimates and introducing time series for production data. Int J Life Cycle as 25:294–308. https://doi.org/10.1007/s11367-019-01683-x (Last access:June 2022) Weidema BP, Bauer C, Hischier R, Mutel C, Nemecek T, Reinhard J, Vadenbo CO, Wernet G (2013) Overview and methodology. Data quality guideline for the ecoinvent database version 3. Ecoinvent Report 1(v3). St. Gallen: The ecoinvent Centre. Zampori L, Pant R (2019) Suggestions for updating the Organisation Environmental Footprint (OEF) method, EUR 29681 EN. Publications Office of the European Union, Luxembourg, ISBN 978–92–76–00651–0. https://doi.org/10.2760/577225, JRC115960 Zampori L, Sala S (2017) Feasibility study to implement resource dissipation in LCA, EUR 28994 EN. Publications Office of the European Union, Luxembourg ISBN 978-92-79-77238-2. https://doi.org/10.2760/869503, JRC109396 This research was mainly supported by the European Commission — Directorate General for Environment "Technical support for the Environmental Footprint and the Life Cycle Data Network". The authors would like also to thank Gian Andrea Blengini and Rana Pant for suggestions and discussions that helped in conceiving and framing the research. Joint Research Centre, European Commission, Via Enrico Fermi 2749, 21027, Ispra, VA, Italy Fulvio Ardente, Antoine Beylot & Luca Zampori BRGM, 45060, Orléans, France Antoine Beylot PRé Sustainability, Stationsplein 121, 3818 LE, Amersfoort, The Netherlands Luca Zampori Fulvio Ardente Correspondence to Fulvio Ardente. The views expressed in the article are personal and do not necessarily reflect an official position of the European Commission. Communicated by Matthias Finkbeiner Below is the link to the electronic supplementary material. Supplementary file1 (XLSX 120 KB) Ardente, F., Beylot, A. & Zampori, L. A price-based life cycle impact assessment method to quantify the reduced accessibility to mineral resources value. Int J Life Cycle Assess 28, 95–109 (2023). https://doi.org/10.1007/s11367-022-02102-4 Issue Date: January 2023 Life cycle impact assessment (LCIA) Normalization value
Why is the topological definition of continuous the way it is? I was learning the definition of continuous as: $f\colon X\to Y$ is continuous if $f^{-1}(U)$ is open for every open $U\subseteq Y$. For me this translates to the following implication: IF $U \subseteq Y$ is open THEN $f^{-1}(U)$ is open. However, I would have expected the definition to be the other way round, i.e. with the implication reversed. The reason for that is that just by looking at the metric space definition of continuous at $p$, with $q = f(p) \in Y$: $\forall \epsilon>0,\exists \delta >0, \forall x \in X, d(x,p) < \delta \implies d(f(x),q) < \epsilon$, it seems to be talking about balls (i.e. open sets) in X and then has a forward arrow for open sets in Y, so it seems natural to expect the direction of the implication to go that way round. However, it does not. Why does it not go that way? What is wrong with the implication going from open in X to open in Y? And of course, why is the current direction the correct one? I think conceptually I might even be confused about why the topological definition of continuous requires us to start from things in the target space Y and then require things in the domain. Can't we just say map things from X to Y and have them be close? Why do we require to posit things about Y first in either definition for the definition of continuous to work properly? I can't help but point out that this question The definition of continuous function in topology seems to be similar but perhaps lacks the detailed discussion on the direction of the implication for me to really understand why the definition is not reversed or what happens if we do reverse it. The second answer there makes an attempt at explaining why we require $f^{-1}$ to preserve the property of openness but it's not conceptually obvious to me why that's the case or what's going on. Any help? For whoever suggests closing the question, the question is quite clear: why is the reverse implication not the "correct" definition of continuous? As an additional important point, pointing out the difference between open mapping and continuous function would be very useful. Note: I encountered this in baby Rudin, so that's as far as my background in analysis goes, i.e. metric spaces is my place of understanding. Extra confusion/Appendix: Conceptually, I think I've managed to nail what my main confusion is. In conceptual terms continuous functions are supposed to map "nearby points to nearby points", so for me its metric space definition makes sense in that sense. However, that doesn't seem obvious to me unless we equate "open sets" to be the definition of "close by". Balls are open but there are plenty of sets that are open but are not "close by", for example the union of two open balls. I think this is what is confusing me most. How is the topological def respecting that conceptual requirement? real-analysis general-topology limits metric-spaces continuity Pinocchio
$\endgroup$ – Daniel Schepler Jun 18 '18 at 19:00 $\begingroup$ Note that if $f : \mathbb{R} \rightarrow \mathbb{R}$ maps $x$ to $x^2$, then the image of any open ball around $x=0$ is not in fact itself an open set, though the metric definition is satisfied because that image is contained in an open ball. $\endgroup$ – aschepler Jun 19 '18 at 0:29 $\begingroup$ Topology was originally called " analysis situs" ("position analysis") and was almost entirely about metric spaces. The modern def'n of continuity is a generalization of the "epsilon-delta" def'n, and there are many properties that are equivalent to it, and it is good to learn some of them, as some are more useful than others in some situations... If $f:X\to Y$ maps open sets to open sets then $f$ is called an open mapping (or we say that $f$ is "open"). The real function $f(x)=x^2$ is continuous but not open, as $f((-1,1))=[0,1)$. $\endgroup$ – DanielWainfleet Jun 19 '18 at 14:00 $\begingroup$ @DanielWainfleet there seems to be an important but subtle difference between continuous function and open mapping which I do not understand, that must be one of the key issues why my understanding I conjecture. $\endgroup$ – Pinocchio Jun 19 '18 at 15:22 $\begingroup$ @aschepler I think I am not understanding your point (sorry I don't understand this concept well yet). What is the concept your trying to emphasizes? $\endgroup$ – Pinocchio Jun 19 '18 at 16:37 The "normal" definition goes like this: It is claimed that, at fixed point, for any given ball $B_\epsilon$ of radius $\epsilon$ in the image, there exists a ball $B_\delta$, in the preimage, of radius $\delta$ such that $Im (B_\delta) \subset B_\epsilon$. This is the implication $$(...) < \delta \implies (...) < \epsilon $$ Very informally, you could compare the statement, for continuous $f$, For any ball $B_\epsilon$ in the image, you can find a ball $B_\delta$ mapping into $B_\epsilon$ For any ball $B_\epsilon$ in the image, its preimage contains a ball $B_\delta$ The preimages of open sets are open. In topological spaces, the last one is often taken as a definition. Regarding your interpretation IF $U \subseteq Y$ is open THEN $f^{−1}(U)$ is open This is perfectly valid and translates as "IF you give me an $\epsilon$ THEN I can find you a corresponding $\delta$". Regarding the implication, let me explain in this way, to show what happens with that implication: Let $U \subset Y$ be open, then for this set you can have its preimage, $f^{-1}(U) \subset X$, which is the set that satisfies: $$x \in f^{-1}(U) \implies f(x) \in U $$ So now you can freely say: For any open $U \subset Y$, there is a set $f^{-1}(U) \subset X.$ If is just so happens, that $f^{-1}(U)$ is open for any open $U$, then we call $f$ continuous. Translating, this means that if it just so happens that for any given radius $\epsilon$, can find a corresponding $\delta$ such that $$ x\in B_\delta \implies f(x) \in B_\epsilon, $$ then $f$ is continuous. A few more details: You have be rather careful when you state exactly what you mean with mapping "nearby points to nearby points". Given a metric, we can always have balls as subsets of that space. The open sets are precisely those that, for each $x$, have some ball around them completely contained in the open set. This is true regardless of whether the open set is a union of open intervals, the whole space, a single interval, or any other open set. 
To say that $f$ maps "nearby points to nearby points" means to say that, if you fix a point $x_0$, and look at what happens to points nearby $x_0$, they will all be mapped to points close to $f(x_0)$. The exact meaning of this is that: for each fixed $x\in f^{-1}(U)$, for any ball $B_\epsilon$ around $f(x)$ (and one exists, and satisies $B_\epsilon \subset U$, by openness), there is a ball $B_\delta$ around the point $x$ that maps into $B_\epsilon$. Since $B_\epsilon \subset U$, we have $B_\delta \subset f^{-1}(U) $, which by definition makes the preimage open. It's a ball around an arbitrary point completely in $f^{-1}(U) $. Whatever open set you have, all of the points in there will be interior, so continuity (finding matching balls $B_\delta$ and $B_\epsilon$) works at each point at a time, so to speak. And now it almost rolls off the tongue: $$\forall x \ \forall \epsilon \ \exists \delta \ (...) $$ To me, it is somehow intuitively clear that if you want a statement about how some values of $f(x)$ behave, you would start with something about its target set. Maybe that's just me. You sort of start with the question "How close to $f(x_0)$ do you want the outputs of $f$ to be", which is a question about the target set. JuliusL33tJuliusL33t $\begingroup$ thanks for helping Julius! I think what confuses me is that yes, I understand intuitively we say given a epsilon ball in the image then we can find an epsilon ball in the pre-image...but the implication starts from the "pre-image" and goes to the image. I find that really confusing. Perhaps thats what's giving me most trouble...isn't the "normal" definition of continuous the reversed one? I can't grasp why the logical implications seems to be written backwards but somehow still say the same thing. Usually the direction of implication is a big deal, so why not here? what makes it work? $\endgroup$ – Pinocchio Jun 19 '18 at 13:50 $\begingroup$ do you mind commenting on the difference between open mapping and continuous map? $\endgroup$ – Pinocchio Jun 19 '18 at 16:36 $\begingroup$ Here the $\delta - \epsilon$ implication just translates as $x \in f^{-1}(U) \implies f(x) \in U$. Its the existence of such open sets that are at the heart of "pre-images of open sets are open". Sans technical details, this becomes "for any open $U$, (for any $\epsilon$), you can find an open $V$ (find a $\delta$) such that $V =f^{-1}(U)$ and the above impication holds. $\endgroup$ – JuliusL33t Jun 19 '18 at 17:08 $\begingroup$ I've edited my answer with some details. $\endgroup$ – JuliusL33t Jun 19 '18 at 19:22 $\begingroup$ I made some further edits. Hope it helps. $\endgroup$ – JuliusL33t Jun 20 '18 at 16:38 The definition of continuity at a point $a$ for a function $f\colon A\to B$ (say between metric spaces) is: for all $\varepsilon >0$ there exists $\delta>0$ such that if $d(x,a)<\delta$, then $d(fx,fa)<\varepsilon$. Now, notice that the $\varepsilon$ is used for a condition in the codomain and the $\delta$ is used for a condition in the domain. So the order of quantification is: for all something in the codomain, there is a something in the domain such that blah blah blah. The topological definition of continuity reads: for all open in the codomain, the inverse image is open in the domain. This shows that in fact the variance in both definitions is the same: continuity of a from $f\colon A\to B$ means you can pull information back from $B$ to $A$. 
So, the contravariance in the definition of topological continuity is not anything you haven't seen in the metric definition already. You just always thought the metric definition is variant, but it was contravariant all the time. The topological formulation simply makes it unavoidable to notice. Ittay WeissIttay Weiss $\begingroup$ but the implication is still from balls in X to balls in Y (even though the quantifier is first for balls in Y). I think thats whats confusing me. I don't think I understand how the quantifiers work for the topological definition nor how they relate to the "original" definition of metric spaces. That connection would be really valuable to me, even if its just intuitive. $\endgroup$ – Pinocchio Jun 18 '18 at 14:16 $\begingroup$ @Pinocchio The implication $d(x,p) < \delta \Rightarrow d(f(x),q) < \epsilon$ really comes more from an expansion of the notion of $f^{-1}(B_\epsilon(q))$ being open, or more precisely from it being a neighborhood of $p$. Namely, if you expand "$f^{-1}(B_\epsilon(q))$ is a neighborhood of $p$," you get $\exists \delta > 0, B_\delta(p) \subseteq f^{-1}(B_\epsilon(q))$ which is equivalent to the part $\exists \delta > 0, \forall x, |x-p| < \delta \Rightarrow |f(x)-q| < \epsilon$ of the $\epsilon$-$\delta$ formulation of continuity. $\endgroup$ – Daniel Schepler Jun 18 '18 at 20:44 $\begingroup$ @DanielSchepler oh so the implication in the $\epsilon-\delta$ is just actually an explanation of the RHS of the topological definition of continuity (as you pointed out), it's not actually a re-write of the topological implication. That makes a lot more sense. I assumed the wrong thing about how the definitions were equivalent. $\endgroup$ – Pinocchio Jun 19 '18 at 15:36 I think in the translation, it might help to separate out the direct generalization of the notion of "continuity at a point" from the general topological arguments that this generalization being true at every point is equivalent to the condition on inverse images of open sets. So, recall that for a map $f : X \to Y$ between metric spaces, and $x_0 \in X$, we have $f$ is continuous at $x_0$ if and only if: $$ \forall \epsilon > 0, \exists \delta > 0, \forall x \in X, d(x, x_0) < \delta \rightarrow d(f(x), f(x_0)) < \epsilon. $$ Now let us express what this condition is saying in terms of open balls: first, $d(f(x), f(x_0)) < \epsilon$ is equivalent to $f(x) \in B_\epsilon(f(x_0))$, which is further equivalent to $x \in f^{-1}(B_\epsilon(f(x_0)))$. On the other hand, $d(x, x_0) < \delta$ is equivalent to $x \in B_\delta(x_0)$. Therefore, $f$ is continuous at $x_0$ if and only if: $$ \forall \epsilon > 0, \exists \delta > 0, \forall x \in X, x \in B_\delta(x_0) \rightarrow x \in f^{-1}(B_\epsilon(f(x_0))). $$ Now, the $\forall x \in X$ part is equivalent to a subset condition, so $f$ is continuous at $x_0$ if and only if: $$ \forall \epsilon > 0, \exists \delta > 0, B_\delta(x_0) \subseteq f^{-1}(B_\epsilon(f(x_0))). $$ Now, note that the $\exists \delta > 0, \ldots$ part is precisely equivalent by definition to: "$f^{-1}(B_\epsilon(f(x_0)))$ is a neighborhood of $x_0$." Furthermore, the collection of $B_\epsilon(f(x_0))$ for $\epsilon > 0$ is precisely the neighborhood basis at $f(x_0)$ coming from the metric on $Y$. To summarize, we have seen that more or less directly: $f$ is continuous at $x_0$ if and only if for all basic neighborhoods $N$ of $f(x_0)$, we have $f^{-1}(N)$ is a neighborhood of $x_0$. 
Now, not all topological spaces in general will have a natural system of neighborhood bases, so usually the generalization of continuity at a point to general maps of topological spaces will look something like: Definition: Let $f : X \to Y$ be a map between topological spaces, and $x_0 \in X$. Then $f$ is continuous at $x_0$ if and only if one of the following equivalent statements is true: For every neighborhood $N$ of $f(x_0)$, we have that $f^{-1}(N)$ is a neighborhood of $x_0$. For every open neighborhood $N$ of $f(x_0)$, we have that $f^{-1}(N)$ is a neighborhood of $x_0$. (In the presence of a given system of neighborhood bases on $Y$:) For every basic neighborhood $N$ of $f(x_0)$, we have that $f^{-1}(N)$ is a neighborhood of $x_0$. (Of course, I think in practice, most textbooks will likely just choose one of these conditions as the definition - in my experience, usually either (1) or (2) - and then prove the equivalence to the other conditions as separate results.) Also, we have the general topological fact: "For any subset $U \subseteq X$, $U$ is open if and only if $U$ is a neighborhood of all of its elements." Using this, it is easy to prove the first equivalence in the below revised definition of continuity: Definition: Let $f : X \to Y$ be a map between topological spaces. Then $f$ is continuous if and only if one of the following equivalent statements is true: $f$ is continuous at every point of $X$. For every open subset $V \subseteq Y$, we have that $f^{-1}(V)\subseteq X$ is open. (In the presence of a given basis for the topology of $Y$:) For every basic open subset $V \subseteq Y$, we have that $f^{-1}(V) \subseteq X$ is open. (Of course, again most textbooks will present (2) as the definition of continuity, and then prove equivalence to (1) and (3) as separate results.) Now, according to the translation above, the $\epsilon$-$\delta$ definition of continuity is most closely related to (1) above, with the continuity at a point $x_0 \in X$ being expanded from (3). Looking more closely at the initial expansion, we see that the overall structure "if $V$ is a basic open neighborhood of $f(x_0)$ then $f^{-1}(V)$ is a neighborhood of $x_0$" expands to the $\forall \epsilon > 0, \exists \delta > 0, \ldots$ part. Whereas the part the question is about, the part $d(x, x_0) < \delta \rightarrow d(f(x), f(x_0)) < \epsilon$, is actually part of the expansion of "$f^{-1}(V)$ is a neighborhood of $x_0$." Daniel ScheplerDaniel Schepler The two definitions are equivalent to each other for metric spaces. To see that the first definition implies the second, let $\epsilon>0$ and $y=f(x)$. The open ball $B_\epsilon(y)$ is open in $Y$. Therefore $f^{(-1)}(B_\epsilon(y))$ must be open in $X$. Therefore, it contains the open ball $B_\delta(x)$ for small enough $\delta>0$. Since $B_\delta(x)\subset f^{(-1)}(B_\epsilon(y))$, we have found $\delta>0$ such that $c\in X, d(x,c)<\delta \implies d(f(x),f(c))<\epsilon$. The reverse implication also uses an argument using open balls. Evan WilsonEvan Wilson I would have expected the definition to be the other way round I take you to be proposing this: $f\colon X\to Y$ is continuous if $f(U)$ is open for every open $U\subseteq X$ But that does not serve. In particular, consider constant functions. Constant functions are among those that meet our expectations for continuity, and constant functions over metric spaces are in fact continuous by the metric-space definition of continuity. 
But if $f\colon X\to Y$ is a constant function and $V \subseteq X$ is nonempty then $f(V) = \{k\}$ for some $k \in Y$, and in many cases we care about, such singleton sets are closed, not open. On the other hand, consider a constant function $f$ defined as above, and let $U\subseteq Y$ be open. The preimage $f^{-1}(U)$ of $U$ is either $\emptyset$ or $X$, which are both open by definition in every topology over $X$, so the definition you started with serves for this example. On the third hand, consider $f\colon \mathbb R \to \mathbb R$ defined by $f(x) = -1$ if $x \lt 0$ and $f(x) = 1$ if $x \ge 0$. To demonstrate that it is discontinuous, choose, say, the open interval $\left(\frac{1}{2},\frac{3}{2}\right)$. The preimage of that open set is the closed set $\left[0,\infty\right)$. More generally, the definition captures the idea of a point of discontinuity in the range of the function, and that should seem natural, because that's what you look for when visually inspecting the graph of a function for discontinuities. John Bollinger
Perhaps the following paper would be of interest to you: Velleman, D. J. (1997). Characterizing continuity. The American Mathematical Monthly, 104(4), 318-322. Link. Here is the beginning: (the opening of the paper is shown as an image in the original post) Benjamin Dickman
Looks like the result in that paper has been published (in greater generality) before: mathoverflow.net/questions/223708/… – Daniel McLaury Nov 17 '18 at 2:17
Theorems 1 and 2 are not the main results of the paper--they are just the motivation. The main result is that the answer to the question at the end of the first paragraph is, indeed, "no". In fact, there do not exist families of sets $\mathcal{F}$ and $\mathcal{G}$ such that a function $f : \mathbb{R} \to \mathbb{R}$ is continuous if and only if for every $X \in \mathcal{F}$, $f(X) \in \mathcal{G}$. – Dan Velleman May 10 at 14:25
I think I understand you. You have two topological spaces $(X,\tau)$ and $(Y,\tau')$ and a continuous map $f\colon X \rightarrow Y$. For the general definition of continuity you can say: $\forall x \in X, \forall G' \in \tau'$ with $f(x) \in G'$, $\exists G \in \tau$ with $x \in G$ and $f(G) \subseteq G'$. You can prove this using $G=f^{-1}(G')$. And if you apply this to metric spaces you obtain (I suppose that your $p$ satisfies $f(p)=q$) your definition of a continuous function in metric spaces. You ask why the implication runs from opens in $Y$ to opens in $X$, and not from opens in $X$ to opens in $Y$. I give you some reasons: 1 - The implication from opens in $Y$ to $X$ is more general, because $f^{-1}(G')$ can be $\varnothing$, and the direction from opens in $X$ to $Y$ does not cover this case. 2 - The implication from opens in $X$ to $Y$ only says that there exists some open set satisfying the condition, but does not say which one it is; with the implication from opens in $Y$ to $X$ you know which open set it is, namely $f^{-1}(G')$. If we change the definition of a continuous map to: $f\colon X\rightarrow Y$ is continuous if $f(U) \in \tau', \forall U \in \tau$, then, for example, a constant function can fail to be continuous: if we take the constant function 1 from $\mathbb{R}$ to $\mathbb{R}$ we have $f((0,1))=\{1\}$, which is not open, so $f$ would not be continuous. Anonimo
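(Editorial worked example, tying together the squaring and step-function examples mentioned in the answers and comments above; nothing here goes beyond those examples.) Take $f:\mathbb{R}\to\mathbb{R}$, $f(x)=x^2$. Preimages of open sets are open, e.g. $$f^{-1}\big((1,4)\big)=(-2,-1)\cup(1,2),$$ and the same holds for every open $U\subseteq\mathbb{R}$, so $f$ is continuous. Yet $f$ is not an open mapping, since $$f\big((-1,1)\big)=[0,1)$$ is not open. On the other hand, for the step function $g(x)=-1$ for $x<0$ and $g(x)=1$ for $x\ge 0$, $$g^{-1}\big(\big(\tfrac12,\tfrac32\big)\big)=[0,\infty)$$ is not open, so $g$ is not continuous: the preimage condition is exactly what detects the jump at $0$. So "preimages of open sets are open" (continuity) and "images of open sets are open" (openness of the map) are genuinely different conditions, and it is the preimage version that matches the $\epsilon$-$\delta$ definition.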
Now we can also define a topological space using closed sets. Hmm, $\quad \text{Open sets: Take open subsets of the codomain back (in some way...)}$ so maybe?!? $\quad \text{Closed sets: Take closed subsets of the domain forward (in some way)...}$ The OP will find this interesting: A map is continuous if and only if for every set, the image of closure is contained in the closure of image $\tag 1 \text{For any } A, \; f(\overline{A})\subseteq \overline{f(A)}$ or intuitively, all points 'close to' $A$ get mapped to points 'close to' $f(A)$. CopyPasteIt
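To make the step that Daniel Schepler's answer calls "easy to prove" concrete, here is a minimal sketch of the equivalence between (1) and (2) in the second definition, using only the stated fact that a set is open if and only if it is a neighborhood of each of its points.

$$\begin{aligned}
&\textbf{(1)}\Rightarrow\textbf{(2)}:\ \text{Let } V\subseteq Y \text{ be open and let } x\in f^{-1}(V). \text{ Then } V \text{ is an open neighborhood of } f(x),\\
&\quad\text{so continuity at } x \text{ gives that } f^{-1}(V) \text{ is a neighborhood of } x. \text{ Since } x \text{ was arbitrary, } f^{-1}(V) \text{ is open.}\\
&\textbf{(2)}\Rightarrow\textbf{(1)}:\ \text{Let } x\in X \text{ and let } N \text{ be a neighborhood of } f(x). \text{ Choose an open } V \text{ with } f(x)\in V\subseteq N.\\
&\quad\text{Then } f^{-1}(V) \text{ is open, contains } x, \text{ and is contained in } f^{-1}(N), \text{ so } f^{-1}(N) \text{ is a neighborhood of } x.
\end{aligned}$$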
Article | Open | Published: 11 January 2019 High-performance Raman quantum memory with optimal control in room temperature atoms Jinxian Guo1,2, Xiaotian Feng1, Peiyu Yang1, Zhifei Yu1, L. Q. Chen1, Chun-Hua Yuan1 & Weiping Zhang2,3 Nature Communicationsvolume 10, Article number: 148 (2019) | Download Citation Atomic and molecular interactions with photons Quantum information Ultracold gases Quantum memories are essential for quantum information processing. Techniques have been developed for quantum memory based on atomic ensembles. The atomic memories through optical resonance usually suffer from the narrow-band limitation. The far off-resonant Raman process is a promising candidate for atomic memories due to broad bandwidths and high speeds. However, to date, the low memory efficiency remains an unsolved bottleneck. Here, we demonstrate a high-performance atomic Raman memory in 87Rb vapour with the development of an optimal control technique. A memory efficiency of above 82.0% for 6 ns~20 ns optical pulses is achieved. In particular, an unconditional fidelity of up to 98.0%, significantly exceeding the no-cloning limit, is obtained with the tomography reconstruction for a single-photon level coherent input. Our work marks an important advance of atomic memory towards practical applications in quantum information processing. Quantum memory is a necessary component for quantum communications and quantum computing. A practical quantum memory should be efficient, low-noise, broadband, and as simple as possible to operate1,2,3,4,5,6. Using several approaches, including electromagnetically induced transparency (EIT), gradient echo memory (GEM), the off-resonant Faraday effect, and far off-resonant Raman memory, optical memory has been demonstrated in cold atomic ensembles2,7,8,9, atomic vapors10,11,12,13,14, and solids15,16,17,18,19. Hsiao et al.20 reported a 92.0% memory efficiency for a coherent light pulse in a cold atomic ensemble using EIT. Hosseini et al.21 used GEM to realize a 78% memory efficiency for weak coherent states with 98% fidelity. Polzik's group12 demonstrated a quantum memory with a fidelity of 70% based on the off-resonant Faraday effect. These examples12,20,21 successfully demonstrated the capability to store optical states with high efficiency and/or fidelity exceeding the classical limit22,23,24 and sub-megahertz bandwidths. However, the bandwidth is important for the practical application of quantum memory25. Quantum sources with bandwidth at the GHz level have been used in long-distance quantum communication26,27 and quantum computers28. Unlike these protocols, far-off-resonant atomic Raman memory can store short-time pulses corresponding to high bandwidths and can operate at high speeds. In addition, the far-off-resonance characteristic makes the atomic Raman memory10,29,30 robust against inhomogeneities in the ensemble and facilitates controlling the frequency of the output state. All of these properties indicate that atomic Raman memory has great potential in practical quantum information processing. The first experimental realization of an atomic Raman memory was demonstrated29 in 2010. This indeed represented significant progress in the field of Raman memory, but the limitations with low efficiency (<30%) and significant noise from the spontaneous four-wave mixing (FWM) process persist. Recently, Raman memory using photonic polarized entanglement30 was reported in a cold atomic ensemble with a fidelity of 86.9 ± 3.0%, but still an efficiency of only 20.9 ± 7.7%. 
An efficiency exceeding 50% and a fidelity exceeding 2/3 are necessary to store and retrieve an optical state within the no-cloning regime without post-selection22,23,24,31. Therefore, so far, low efficiency has appeared to exclude the broadband Raman memory as an unconditional quantum memory. In this paper, we present an optimal control technique where the atomic vapor is performed a real-time optimal response on an input signal pulse. With a 87Rb atomic vapor in paraffin-coated cell at T = 78.5 °C, we achieve a Raman quantum memory on a coherent input of 6–20 ns duration with above 82.0% memory efficiency, and more importantly, with 98% unconditional fidelity at single photon level (n ≈ 1). Experimental setup The experimental setup and atomic levels are depicted in Fig. 1. The 87Rb atomic vapor in the paraffin-coated glass cell is the core component of the current Raman memory. The atomic cell is 10.0 cm long, has a diameter of 1.0 cm, and is heated to 78.5 °C. Our Raman memory starts with a large ensemble of atoms that were initially prepared in the |m〉 = |52S1/2, F = 2〉 state by a 44-μs-long optical pumping pulse (OP). Then, the input signal pulse Ein is stored as atomic spin excitation SW induced by the strong off-resonant write pulse (W) with the Rabi frequency ΩW(t) and detuning ΔW. After a certain delay τ, the atomic excitation can be retrieved into optical state ER by the strong off-resonant read pulse (R) with the Rabi frequency ΩR(t) and detuning ΔR. The waists of the laser beams (W, R, and Ein) are all 600 μm. The two strong driving beams, W and R, can be generated by the same or different semiconductor lasers (Toptica, DLPro + Boosta) and are coupled into the same single-mode fiber. Their intensities and temporal shapes are controlled by acousto-optic modulators (AOMs). The input Ein signal comes from another semiconductor laser (Toptica, DLPro) phase-locked on the W laser. The temporal shape is controlled by a Pockels cell (Conoptics, Model No. 360-80). The shortest pulse duration of the Pockels cell is 6 ns. The W and Ein fields are two-photon resonant and spatially overlapped after passing through a Glan polarizer with 94% spatial visibility in the atomic vapor. The output signals can be separated from the strong driving pulses by another Glan polarizer with an extinction ratio of 40 dB, and detected, respectively, by intensity detection to calibrate the memory efficiency, by homodyne detection combining with tomography reconstruction to determine the memory fidelity, and by single-photon detection to analyze the excess noise in storage process. The total optical transmittance including the atomic cell and all optical elements in homodyne detection is about 89%. The four etalons with 33% transmission can filter the leaked driving photons at 115 dB. Raman memory. a Schematic, atomic energy levels and frequencies of the optical fields. |g, m〉: hyperfine levels |52S1/2, F = 1, 2〉; |e1〉 and |e2〉: excited states |52P1/2, F = 2〉 and |52P3/2〉. W write field, Ein input signal, Eleak leaked signal, SW collective atomic spin wave, R read field, ER retrieved signal. b Experimental setup. The polarizations of the weak signal beams, Ein and ER, are perpendicular to the strong driving beams, W and R. The signals can be detected by homodyne detection. OP optical pumping laser, SMF single-mode fiber, BS beam splitter, PZT piezoelectric transducer. D1 photo-detector, D2 and D3 photo-diode, D4 single-photon detector, FM1 and FM2 flip mirror. 
The flip mirrors FM1,2 allow alternative selection of detections via intensity, homodyne, and single photon. Intensity detection is chosen to calibrate the memory efficiency by flipping FM1 up, homodyne detection combining with tomography reconstruction to determine the memory fidelity by flipping FM1 down and FM2 up, and single-photon detection to measure and analyze the excess noise in storage process by flipping FM1,2 both down The Raman write process is a type of coherent absorption induced by a strong write pulse. As shown in Fig. 2a, when the write pulse is switched off, owing to the far-off-resonant frequency, almost 100% of the Ein pulse passes through the atomic vapor. Below, we use the total energy of such an Ein pulse to normalize the write and retrieve efficiencies. When the write pulse is turned on, part of the energy of the Ein pulse is converted coherently as the atomic spin wave SW(z) near the two-photon resonance frequency. The rest of the Ein energy passes through the atoms as Eleak, as shown in Fig. 1a. The full width at half maximum (FWHM) of the absorption spectrum is approximately 100 MHz, as shown in Fig. 2a. Efficient Raman memory. a Absorption rate of the weak input-signal pulse as a function of the Raman detuning frequency. ΔW is fixed at 3.0 GHz. The input signal pulse is 10 ns long. b Theoretical efficiency as a function of the energy of the strong control pulse. The input optical pulse is a 10 ns near-square pulse. All optical fields detune 3.0 GHz from atomic transition and the optical depth d = 1100 (see Methods section for details). In the write process, the efficiency is always much smaller than 1.0 when using a non-optimal write pulse (10 ns Gaussian shape), but it can approach 1.0 with the optimal write pulse when the write pulse is larger than 1.5 nJ. In the read process, the curves with Gaussian and square read pulses coincide with each other. The retrieval efficiency is waveform-independent and increases with the energy of the read pulse until approaching 1.0. c Temporal modes of the strong driving (blue, experimental shape of write pulse \(W_{{\mathrm{exp}}}^{{\mathrm{opt}}}\), read pulse R; dashed purple, theoretical shape of optimal write pulse \(W_{{\mathrm{theory}}}^{{\mathrm{opt}}}\)), input signal (black, Ein), leaked signal (orange, Eleak), and output signal (red, ER) pulses. d Waveform of the leaked signal with the \(W_{{\mathrm{exp}}}^{{\mathrm{opt}}}(t)\) (orange circle) and \(W_{{\mathrm{exp}}}^{{\mathrm{opt}}}(t + 1ns)\) (gray square) write pulse. The lines are the corresponding theoretical fits. e Storage efficiency (ηW) and retrieval efficiency (ηR) as function of the energy of the driving pulse (W and R) with the shape of \(W_{{\mathrm{exp}}}^{{\mathrm{opt}}}\) and R as shown in (c). Square represents experimental data and solid line is theoretical fitting. The error bars correspond to one standard deviation caused by the statistical uncertainty of measurement. f The write-in efficiency as a function of the width of the Ein pulse According to the theoretical analysis in ref. 
32, the spatial-distributed atomic spin wave in a far-off-resonant Raman write process is given by $$S_{\mathrm{W}}(z) = \int_0^{t_{\mathrm{W}}} q(z,t)E_{{\mathrm{in}}}(t)dt,$$ where \(q(z,t) = i\frac{{\sqrt d }}{{{\mathrm{\Delta }}_{\mathrm{W}}}}{\mathrm{\Omega }}_{\mathrm{W}}^ \ast (t)e^{i\frac{{dz + h(t,t_{\mathrm{W}})}}{{{\mathrm{\Delta }}_{\mathrm{W}}}}}J_0\left( {\frac{{2\sqrt {h(t,t_{\mathrm{W}})dz} }}{{{\mathrm{\Delta }}_{\mathrm{W}}}}} \right)\), tW is the duration of the write process, d is the optical depth of atomic ensemble, and \(h(t,t_{\mathrm{W}}) = {\int}_t^{t_{\mathrm{W}}} \left| {{\mathrm{\Omega }}_{\mathrm{W}}(t^\prime )} \right|^2dt^\prime\) with t′ is the integration variable from t to tW in the co-moving frame. Eq. (1) is an iterative function that is determined by the matching between the temporal shapes of the input Ein(t) and the write pulse ΩW(t)32,33,34. Therefore, to achieve efficient conversion, it is crucial to perform real-time control on ΩW(t) or Ein(t) to make the atoms coherently absorb as much energy Ein(t) as possible. The optimal control of Ein(t) has been used to achieve efficient memory in an EIT-based process20, where the shape of the input signal Ein(t) was adjusted according to atomic memory system. Here, we prefer the dynamical control ΩW(t) because a quantum memory system should have the ability to store and preserve quantum information of an input optical signal with an arbitrary pulse shape. To obtain the optimal ΩW(t), denoted \(\Omega _{\mathrm{W}}^{{\mathrm{opt}}}(t)\), we first use the iterative methods mentioned in ref. 32 to calculate the optimal spin wave, corresponding to the minimum Eleak. Then, the optimal spin wave establishes a one-to-one correspondence between Ein(t) and \(\Omega _{\mathrm{W}}^{{\mathrm{opt}}}(t)\) via Eq. (1). Thus, for any given shape of Ein(t), \(\Omega _{\mathrm{W}}^{{\mathrm{opt}}}(t)\) can be obtained from Eq. (1) via the optimal spin wave. Moreover, the corresponding optimal efficiency \(\eta _{\mathrm{W}}^{{\mathrm{opt}}}\) depends only on the optical depth d and the total energy of the write pulse. Figure 2b shows the theoretical efficiencies as the function of the energy of the strong driven pulses. Using a 10 ns near-square pulse as the input Ein, the write efficiency with \(\Omega _{\mathrm{W}}^{{\mathrm{opt}}}(t)\) is approximately equal to 1 when the energy of a write pulse with 10 ns duration is larger than 1.5 nJ, while the maximum write-in efficiency with a non-optimized ΩW(t) (a 10 ns Gaussian-shaped ΩW(t) is used in Fig. 2) is much smaller than one. In the read process (see Fig. 1), the spin wave SW(z) is retrieved back to the optical field ER(t) by the read pulse ΩR(t). Unlike ηW, the retrieval efficiency ηR is independent of the temporal waveform32, and a read pulse whose duration is 10 ns with strong power but without temporal optimization is sufficient for ηR ~ 1. This can be seen in Fig. 2b. ηR increases with the total energy of the read pulse, whether Gaussian or square-shaped until ηR ~ 1. Thus, with the above optimal control on ΩW(t), the total efficiency of the Raman memory process is ηT = ηW × ηR ~ 1 in principle. Here, we experimentally demonstrate a break of the efficiency in Raman memory with dynamic control over the temporal shape of the write pulse. In the experiment, the given Ein pulse is mapped in a forward-retrieval configuration. 
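Eq. (1) can be evaluated numerically on a time-position grid. The sketch below is only an illustration, not the authors' code: the pulse shapes, optical depth d, and detuning Delta_W are placeholder assumptions, and only the structure of the kernel q(z, t) follows the expression above.

```python
import numpy as np
from scipy.special import j0

t = np.linspace(0.0, 20e-9, 2001)            # write window (s)
z = np.linspace(0.0, 1.0, 200)               # scaled position along the cell
dt = t[1] - t[0]

Omega_W = 2e8 * np.exp(-((t - 10e-9) / 4e-9) ** 2)   # assumed write-pulse Rabi frequency (rad/s)
E_in = np.exp(-((t - 10e-9) / 3e-9) ** 2)            # assumed input-signal envelope (arb. units)
d, Delta_W = 1100.0, 2 * np.pi * 3.0e9               # optical depth and detuning (rad/s)

# h(t, t_W) = integral from t to t_W of |Omega_W(t')|^2 dt'
h = np.cumsum(np.abs(Omega_W[::-1]) ** 2)[::-1] * dt

# Kernel q(z, t) of Eq. (1) and the resulting spin wave S_W(z)
S_W = np.empty(z.size, dtype=complex)
for i, zi in enumerate(z):
    q = (1j * np.sqrt(d) / Delta_W * np.conj(Omega_W)
         * np.exp(1j * (d * zi + h) / Delta_W)
         * j0(2.0 * np.sqrt(h * d * zi) / Delta_W))
    S_W[i] = np.sum(q * E_in) * dt

print(np.abs(S_W).max())
```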
We derive \(\Omega _{\mathrm{W}}^{{\mathrm{opt}}}(t)\) using the iteration-based optimization strategy for the given short Ein(t) pulse and experimentally control the temporal profile of the write pulse by using an intensity modulator (here, an AOM). The theoretical shape of the optimal write pulse \(W_{{\mathrm{theory}}}^{{\mathrm{opt}}}\) and the experimentally optimized shape \(W_{{\mathrm{exp}}}^{{\mathrm{opt}}}\) are given in Fig. 2c. The experimental shape is much longer than the theoretical one before the Ein pulse is turned on, owing to the limited bandwidth of our intensity modulator, but the two curves match well within the Ein duration, which effectively guarantees a high write-in efficiency. Furthermore, to show the definite improvement brought by optimal control, two write pulses are compared: the optimal \(W_{{\mathrm{exp}}}^{{\mathrm{opt}}}(t)\) and an optimal write pulse delayed by 1.0 ns, \(W_{{\mathrm{exp}}}^{{\mathrm{opt}}}\left( {t + 1\,{\mathrm{ns}}} \right)\). The corresponding leaked optical pulses and the theoretical fits are shown in Fig. 2d. The leaked energy for the sub-optimal curve is twice that for the optimal one. Through the optimal control, the leaked energy of the input signal is greatly reduced. The storage efficiency ηW, calculated as \((\overline N _{E_{{\mathrm{in}}}} - \overline N _{E_{{\mathrm{leak}}}})/\overline N _{E_{{\mathrm{in}}}}\), reaches ~84% when the atomic temperature T is 78.5 °C and the energy of the write pulse is 10.6 nJ (Fig. 2e). Such a high write-in efficiency can be achieved when the signal duration changes from 6 to 20 ns, and ηW always remains above 83.5% under optimal control, as shown in Fig. 2f. The retrieval efficiency ηR, calculated as \(\left( {\overline N _{E_{\mathrm{R}}}/(\overline N _{E_{{\mathrm{in}}}} - \overline N _{E_{{\mathrm{leak}}}})} \right)\), can reach 98.5% when the read pulse energy is 10.6 nJ, with 3.0 GHz frequency detuning (Fig. 2e). Here, the Ein, Eleak, and ER pulses are all measured through intensity detection as shown in Fig. 1b. The optical paths for these pulses are arranged such that they are subject to the same optical losses. This allows the storage efficiency to be calibrated to characterize the atomic memory process alone. The total memory efficiency, ηT = ηW × ηR, is above 82.0% when the input signal pulse contains an average number of photons ranging from 0.4 to \(10^4\); thus, this Raman memory is a good linear absorber. The 82.0% memory efficiency is the best performance reported to date for Raman-based memory and far exceeds the no-cloning limit. In principle, ηT = ηW × ηR ~ 1. With ηR = 98.5%, further improvement of ηT mainly depends on ηW, which could be improved under better experimental conditions. According to our theoretical analysis, a larger atomic optical depth, obtained by increasing the atomic temperature or lengthening the cell, could lead to about a 3% improvement. Better temporal-mode control of the W pulse may bring about a 6% increase. Improving the spatial-mode match between the Ein and W beams can contribute about 5%. Fidelity is the ultimate performance criterion for quantum memory and reflects how well the quantum characteristics of the optical signal are maintained during the memory process. At the few-photon level, fidelity is readily degraded by excess noise, which is mainly caused by the FWM process10 and spontaneous emission. Spontaneous noise comes from the spontaneous Raman scattering between the strong write pulse and the atoms populating the |g〉 = |52S1/2, F = 1〉 state.
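Returning to the efficiency definitions above, the bookkeeping is straightforward. The sketch below uses hypothetical pulse energies chosen only to be of the same order as the reported values; they are not measured data.

```python
# Hypothetical input, leaked, and retrieved signal energies (or mean photon numbers)
N_in, N_leak, N_R = 1.00, 0.16, 0.827

eta_W = (N_in - N_leak) / N_in      # storage (write-in) efficiency
eta_R = N_R / (N_in - N_leak)       # retrieval efficiency
eta_T = eta_W * eta_R               # total memory efficiency

print(f"eta_W = {eta_W:.3f}, eta_R = {eta_R:.3f}, eta_T = {eta_T:.3f}")
```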
Having fewer |g〉 atoms helps suppress the spontaneous excess noise. In our paraffin-coated cell, more than 98% of the atoms populate the |m〉 state. The spontaneous emission noise intensity is measured by determining the photon number using single-photon detection, as shown in Fig. 1b, when the Ein pulse is turned off. On average, the spontaneous noise is approximately 0.02 photons per memory process at the end of the atomic cell for two strong driving pulses with an energy of 10.6 nJ at a detuning frequency of 3.0 GHz. The FWM excess noise is mainly attributed to anti-Stokes (ASFWM, with the same frequency as ER) and Stokes (SFWM) photons of equal intensity. We can deduce the proportion of ASFWM in the retrieved ER pulse by measuring the intensity of SFWM using single-photon detection. Our results show that the ASFWM noise is less than 10% in ER. Such low excess noise effectively guarantees the fidelity of the quantum memory process. To assess the fidelity performance of the current Raman quantum memory, we measure the fidelity using the equation \(F = \left| {Tr\left( {\sqrt {\sqrt {\rho _{{\mathrm{in}}}} \rho _{{\mathrm{out}}}\sqrt {\rho _{{\mathrm{in}}}} } } \right)} \right|^2\)35, where ρin and ρout are the reconstructed density matrices of Ein and ER, respectively. We record the quadrature amplitudes of the Ein and ER signals using homodyne measurement, and we then reconstruct the density matrices by tomographic reconstruction36. The setup used for homodyne detection is shown in Fig. 1b. To stabilize the phase difference between the Ein and ER pulses and simplify the homodyne setup, the write and read pulses are generated by the same laser and are controlled using one intensity modulator. In the measurement, the two weak signals, Ein and ER, are both short pulses. Matching the temporal modes of short pulses is difficult. Therefore, we use a strong continuous laser beam with the same frequency as the signal pulses Ein and ER as the local oscillator for homodyne detection (the detailed strategy can be found in refs. 36,37). We recorded \(10^5\) sets of quadrature amplitudes of the Ein and ER pulses while varying the phase of the local oscillator between 0 and 2π by scanning the piezoelectric transducer, multiplying the quadrature amplitudes of each pulse by the temporal shapes of the corresponding signals, and finally, integrating the product over the signal pulse duration. The temporal shape functions of the Ein and ER pulses are obtained by a pointwise variance method. The integrated quadrature amplitude, normalized to the vacuum, is shown as a function of the local oscillator phase in Fig. 3a, where the mean number of photons contained in the Ein pulse is 7.9. The phase of the retrieved ER signal pulse closely follows that of the input Ein pulse. Insets in Fig. 3a are the probability distributions of the amplitude quadratures of the Ein and ER pulses, showing good Gaussian distributions. Fidelity of the Raman memory. a Quadrature amplitudes of the input and output signal pulses at an average of 7.9 photons/pulse. Insets are the probability distributions of the Ein and ER quadrature values at the indicated phase. The density matrices of the input and output signal pulses at 4.2 (b) and 0.76 (c) photons/pulse on average. d Fidelity as a function of the number of photons contained in the input signal pulse. The red squares show the experimental data, and the black line shows the theoretical result.
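The fidelity expression quoted above can be evaluated directly from two density matrices. The following is a generic sketch using a standard matrix square root (scipy.linalg.sqrtm); the 2×2 matrices are toy inputs, not the reconstructed experimental states.

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho_in, rho_out):
    """Uhlmann fidelity F = |Tr sqrt( sqrt(rho_in) rho_out sqrt(rho_in) )|^2."""
    s = sqrtm(rho_in)
    return float(np.abs(np.trace(sqrtm(s @ rho_out @ s))) ** 2)

rho_a = np.array([[0.7, 0.0], [0.0, 0.3]])
rho_b = np.array([[0.65, 0.05], [0.05, 0.35]])
print(fidelity(rho_a, rho_a))   # ~1.0 for identical states
print(fidelity(rho_a, rho_b))
```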
The error bars correspond to one standard deviation caused by the measurement noise. The density matrix elements of the Ein and ER pulses are obtained from the quadrature-amplitude results using the maximum-likelihood reconstruction method36,38. Then, on the basis of the diagonal density matrix elements, the photon distributions of the input and output pulses can be obtained to calculate the average photon numbers. The results are plotted in Fig. 3b, c, with the input pulses containing, on average, 4.2 and 0.76 photons, corresponding to unconditional fidelities of 0.915 and 0.98, respectively. The fidelities significantly exceed the no-cloning limit, indicating that the current Raman memory is a quantum-memory process and does not introduce significant excess noise during the memory process. As mentioned above, the current Raman memory is a good linear absorber and allows the storage and retrieval of coherent optical signals from the single-photon level up to \(10^4\) photons with the same memory efficiency. Unlike the efficiency, the unconditional fidelity of the quantum memory of the coherent field is related to the average photon number contained in the input signal \(\left( {\overline N _{E_{{\mathrm{in}}}}} \right)\) and the efficiency (ηT) by \(F = 1/\left[ {1 + \overline N _{E_{{\mathrm{in}}}}(1 - \sqrt {\eta _{\mathrm{T}}} )^2} \right]\)24, which shows that if ηT < 1, the fidelity will rapidly decrease with \(\overline N _{E_{{\mathrm{in}}}}\) owing to the worsening overlap between ρin and ρout. In Fig. 3d, the fidelity is shown as a function of \(\overline N _{E_{{\mathrm{in}}}}\) with ηT = 82.0%. The experimental F value is slightly smaller than the theoretical F value because of the excess noise in the experiment. F exceeds the no-cloning limit22,23,24 at \(\overline N _{E_{{\mathrm{in}}}} \le 49\) in the current Raman memory process. Bandwidth and coherence time Far off-resonant Raman memory is a genuine broadband memory. The ability to store and retrieve broadband pulses was successfully demonstrated in ref. 29, where a retrieved-signal bandwidth larger than 1 GHz was obtained using a 300 ps, 4.8 nJ read pulse. In a practical Raman memory, the bandwidth is generated dynamically by the strong driving pulses. In Fig. 2, the shortest Ein pulse has an FWHM of 6 ns and a bandwidth of 170 MHz. The FWHM of the ER signal pulse, which mainly depends on the rise time of the R pulse, is 13 ns, corresponding to a bandwidth of 77 MHz. The bandwidth of the current memory is dozens or hundreds of times larger than the values reported for the EIT20, Faraday12, and GEM21 approaches, thus demonstrating the broadband capability of the current quantum memory scheme. In our experiment, the memory bandwidth is limited by the currently available intensity modulators (AOM and Pockels cell) and the corresponding electronic controllers (arbitrary wave generators) in our lab. The duration limit of the Pockels cell is 6 ns. As shown in Fig. 2f, the write-in efficiencies remain above 83.5% as long as the signal duration is larger than 6 ns. Shorter Ein and ER pulses, corresponding to larger bandwidths, would require faster intensity modulators and electronic controllers. The decoherence time, which is another essential criterion for good quantum memory, is measured to be approximately 1.1 μs. The delay-bandwidth product at 50% memory efficiency, an appropriate figure of merit, is defined as the ratio of the memory time to the duration of the signal pulse and is 86 in this work.
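The theoretical fidelity-versus-photon-number relation quoted above is easy to tabulate. A short sketch follows; the measured fidelities lie slightly below this curve because of excess noise.

```python
import numpy as np

def theoretical_fidelity(n_mean, eta_T=0.82):
    """F = 1 / [1 + N (1 - sqrt(eta_T))^2] for a coherent input with mean photon number N."""
    return 1.0 / (1.0 + n_mean * (1.0 - np.sqrt(eta_T)) ** 2)

for n in (0.76, 4.2, 7.9):
    print(n, round(theoretical_fidelity(n), 3))
```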
In the present atomic system, the decoherence time is mainly limited by the atomic diffusion out of the laser beam39. The delay-bandwidth product could be increased to ~103 by using a shorter signal pulse and an anti-relaxation-coated cell with the same diameter of the millimeter order as the laser beams. This can lead to a typical decoherence time of approximately several microseconds40. In summary, we have demonstrated a high-performance broadband quantum optical memory via pulse-optimized Raman memory in free space. The 82.0% memory efficiency is the highest value obtained to date for far-off-resonant Raman memory. The unconditional fidelity of 98% for an input pulse containing an average of approximately one photon significantly exceeds the classical limit. The 77 MHz bandwidth of the current memory is dozens or hundreds of times larger than the reported bandwidths for memories based on the EIT, Faraday, and GEM approaches. The delay-bandwidth product at 50% memory efficiency is 86. These attractive properties demonstrate that the Raman memory is a high-performance broadband quantum memory. Additionally, our memory is implemented in an atomic vapor system that can be easily operated and could become the core of a scalable platform for quantum information processing, long-distance quantum communication and quantum computation. Optical depth The formula of the optical depth is d = g2NL/(γc)32, where g is the atom-field coupling constant, N is the number of atoms, L is the length of atomic ensemble, γ is the decay rate of the |e1〉, and c is the speed of light. The values of these parameters are: g = 6.79 × 104 s−1 and γ = 3.613 × 107 s−1 for 87Rb D1 line, L = 10 cm, c = 3 × 108 m ⋅ s−1. The number of atoms varies with the temperature of the vapor cell. According to fluorescence measurement in experiment41, N ~ 2.58 × 1010 at the cell temperature of 78.5 °C. The data that support the findings of this study are available from the corresponding authors upon reasonable request. The authors declare no competing interests. Journal peer review information: Nature Communications thanks the anonymous reviewers for their contributions to the peer review of this work. Fleischhauer, M. & Lukin, M. D. Dark-state polaritons in electromagnetically induced transparency. Phys. Rev. Lett. 84, 5094–5097 (2000). Liu, C., Dutton, Z., Behroozi, C. H. & Hau, L. V. Observation of coherent optical information storage in an atomic medium using halted light pulses. Nature 409, 490–493 (2001). Phillips, D. F., Fleischhauer, A., Mair, A., Walsworth, R. L. & Lukin, M. D. Storage of light in atomic vapor. Phys. Rev. Lett. 86, 783–786 (2001). Mair, A., Hager, J., Phillips, D. F., Walsworth, R. L. & Lukin, M. D. Phase coherence and control of stored photonic information. Phys. Rev. A 65, 031802(R) (2002). Honda, K. et al. Storage and retrieval of a squeezed vacuum. Phys. Rev. Lett. 100, 093601 (2008). Appel, J., Figueroa, E., Korystov, D., Lobino, M. & Lvovsky, A. I. Quantum memory for squeezed light. Phys. Rev. Lett. 100, 093602 (2008). Zhao, R. et al. Long-lived quantum memory. Nat. Phys. 5, 100–104 (2008). Bao, X.-H. et al. Efficient and long-lived quantum memory with cold atoms inside a ring cavity. Nat. Phys. 8, 517–521 (2012). Parniak, M. et al. Wavevector multiplexed atomic quantum memory via spatially-resolved single-photon detection. Nat. Commun. 8, 2140 (2017). Reim, K. F. et al. Single-photon-level quantum memory at room temperature. Phys. Rev. Lett. 107, 053603 (2011). Hosseini, M., Sparkes, B. 
M., Campbell, G., Lam, P. K. & Buchler, B. C. High efficiency coherent optical memory with warm rubidium vapour. Nat. Commun. 2, 174 (2011). Julsgaard, B., Sherson, J., Cirac, J. I., Fiurasek, J. & Polzik, E. S. Experimental demonstration of quantum memory for light. Nature 432, 482–486 (2004). van der Wal, C. H. et al. Atomic memory for correlated photon states. Science 301, 196–200 (2003). Pu, Y.-F. et al. Experimental realization of a multiplexed quantum memory with 225 individually accessible memory cells. Nat. Commun. 8, 15359 (2017). Longdell, J. J., Fraval, E., Sellars, M. J. & Manson, N. B. Stopped light with storage times greater than one second using electromagnetically induced transparency in a solid. Phys. Rev. Lett. 95, 063601 (2005). Chaneliere, T., Ruggiero, J., Bonarota, M., Afzelius, M. & Le Gouët, J.-L. Efficient light storage in a crystal using an atomic frequency comb. New J. Phys. 12, 023025 (2010). Bigelow, M. S., Lepeshkin, N. N. & Boyd, R. W. Superluminal and slow light propagation in a room-temperature solid. Science 301, 200–202 (2003). Clausen, C. et al. Quantum storage of photonic entanglement in a crystal. Nature 469, 508–511 (2011). England, D. G. et al. Storage and retrieval of THz-bandwidth single photons using a room-temperature diamond quantum memory. Phys. Rev. Lett. 114, 053602 (2015). Hsiao, Y.-F. et al. Highly efficient coherent optical memory based on electromagnetically induced transparency. Phys. Rev. Lett. 120, 183602 (2018). Hosseini, M., Campbell, G., Sparkes, B. M., Lam, P. K. & Buchler, B. C. Unconditional room-temperature quantum memory. Nat. Phys. 7, 794–798 (2011). Grosshans, F. & Grangier, P. Quantum cloning and teleportation criteria for continuous quantum variables. Phys. Rev. A 64, 010301 (2001). Hétet, G., Peng, A., Johnsson, M. T., Hope, J. J. & Lam, P. K. Characterization of electromagnetically-induced-transparency-based continuous-variable quantum memories. Phys. Rev. A 77, 012323 (2008). He, Q. Y., Reid, M. D., Giacobino, E., Cviklinski, J. & Drummond, P. D. Dynamical oscillator-cavity model for quantum memories. Phys. Rev. A 79, 022310 (2009). Simon, C. et al. Quantum memories. Eur. Phys. J. D 58, 1–22 (2010). Halder, M. et al. High coherence photon pair source for quantum communication. New J. Phys. 10, 023027 (2008). Rakher, M. T. et al. Simultaneous wavelength translation and amplitude modulation of single photons from a quantum dot. Phys. Rev. Lett. 107, 083602 (2011). Lounis, B. & Orrit, M. Single-photon sources. Rep. Prog. Phys. 68, 1129–1179 (2005). Reim, K. F. et al. Towards high-speed optical quantum memories. Nat. Photonics 4, 218–221 (2010). Ding, D.-S. et al. Raman quantum memory of photonic polarized entanglement. Nat. Photonics 9, 332–338 (2015). Varnava, M., Browne, D. & Rudolph, T. Loss tolerance in one-way quantum computation via counterfactual error correction. Phys. Rev. Lett. 97, 120501 (2006). Gorshkov, A. V., André, A., Lukin, M. D. & Sørensen, A. S. Photon storage in Λ-type optically dense atomic media. II. Free-space model. Phys. Rev. A 76, 033805 (2007). Nunn, J. et al. Mapping broadband single-photon wave packets into an atomic memory. Phys. Rev. A 75, 011401(R) (2007). Wasilewski, W. & Raymer, M. G. Pairwise entanglement and readout of atomic-ensemble and optical wave-packet modes in traveling-wave Raman interactions. Phys. Rev. A 73, 063816 (2006). Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information. (Cambridge University Press, UK, 2000). Paris, M. & Řeháček, J. 
Quantum State Estimation. (Springer-Verlag Berlin Heidelberg, New York, 2004). MacRae, A., Brannan, T., Achal, R. & Lvovsky, A. I. Tomography of a high-purity narrowband photon from a transient atomic collective excitation. Phys. Rev. Lett. 109, 033601 (2012). Lvovsky, A. I. Iterative maximum-likelihood reconstruction in quantum homodyne tomography. J. Opt. B: Quantum Semiclass. Opt. 6, S556–S559 (2004). Camacho, R. M., Vudyasetu, P. K. & Howell, J. C. Four-wave-mixing stopped light in hot atomic rubidium vapour. Nat. Photonics 3, 103–106 (2009). Klein, M. et al. Slow light in narrow paraffin-coated vapor cells. Appl. Phys. Lett. 95, 091102 (2009). Zhao, M., Zhang, K. & Chen, L. Q. Determination of the atomic density of rubidium-87. Chin. Phys. B 24, 094206 (2015). This work was supported by the National Key Research and Development Program of China under Grant number 2016YFA0302001, the National Natural Science Foundation of China (Grant numbers 11874152, 91536114, 11474095, 11654005, and 11234003), the Natural Science Foundation of Shanghai (Grant no. 17ZR1442800), and Quantum Information Technology, Shanghai Science and Technology Major Project. Quantum Institute for Light and Atoms, School of Physics and Material Science, East China Normal University, Shanghai, 200062, China Jinxian Guo , Xiaotian Feng , Peiyu Yang , Zhifei Yu , L. Q. Chen & Chun-Hua Yuan School of Physics and Astronomy, and Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai, 200240, China & Weiping Zhang Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi, 030006, China Weiping Zhang Search for Jinxian Guo in: Search for Xiaotian Feng in: Search for Peiyu Yang in: Search for Zhifei Yu in: Search for L. Q. Chen in: Search for Chun-Hua Yuan in: Search for Weiping Zhang in: L.Q.C. and W.Z. planned the experiment and supervised the project. J.G., X.F., P.Y., and Z.Y. performed the experiment. J.G., X.F., and L.Q.C. analyzed the data. L.Q.C., J.G., C.-H.Y., and W.Z. wrote the paper. Correspondence to L. Q. Chen or Weiping Zhang.
Molecular Dynamics of Lithium Ion Transport in a Model Solid Electrolyte Interphase Ajay Muralidharan ORCID: orcid.org/0000-0003-0052-87801, Mangesh I. Chaudhari2, Lawrence R. Pratt1 & Susan B. Rempe ORCID: orcid.org/0000-0003-1623-21082 Scientific Reports volume 8, Article number: 10736 (2018) Cite this article Atomistic models Li+ transport within a solid electrolyte interphase (SEI) in lithium ion batteries has challenged molecular dynamics (MD) studies due to limited compositional control of that layer. In recent years, experiments and ab initio simulations have identified dilithium ethylene dicarbonate (Li2EDC) as the dominant component of SEI layers. Here, we adopt a parameterized, non-polarizable MD force field for Li2EDC to study transport characteristics of Li+ in this model SEI layer at moderate temperatures over long times. The observed correlations are consistent with recent MD results using a polarizable force field, suggesting that this non-polarizable model is effective for our purposes of investigating Li+ dynamics. Mean-squared displacements distinguish three distinct Li+ transport regimes in EDC — ballistic, trapping, and diffusive. Compared to liquid ethylene carbonate (EC), the nanosecond trapping times in EDC are significantly longer and naturally decrease at higher temperatures. New materials developed for fast-charging Li-ion batteries should have a smaller trapping region. The analyses implemented in this paper can be used for testing transport of Li+ ion in novel battery materials. Non-Gaussian features of van Hove self -correlation functions for Li+ in EDC, along with the mean-squared displacements, are consistent in describing EDC as a glassy material compared with liquid EC. Vibrational modes of Li+ ion, identified by MD, characterize the trapping and are further validated by electronic structure calculations. Some of this work appeared in an extended abstract and has been reproduced with permission from ECS Transactions, 77, 1155–1162 (2017). Copyright 2017, Electrochemical Society, INC. During charging and discharging cycles of lithium ion batteries, a solid electrolyte interphase (SEI) layer forms on the negative electrode due to decomposition of solvents like ethylene carbonate (EC). The SEI layer is a complex organic material and its composition is not operationally set1. Nevertheless, dilithium ethylene dicarbonate (Li2EDC) has been identified experimentally as the primary component of the outer part of the SEI layer2,3,4,5. Ab initio molecular dynamics (AIMD) simulations, electronic structure calculations6,7,8 and reactive force field simulations9 on the decomposition of ethylene carbonate (EC) on anode surfaces concur with those experimental results. Experimental observations also show that the SEI layer protects the electrode from further decomposition by blocking electron transport while simultaneously allowing transport of Li+ ions between the electrode and electrolyte solution. A better understanding of the transport mechanism of Li+ in EDC may lead to modified SEI layers with improved lithium ion battery performance. Molecular dynamics (MD) studies on model SEI layers carried out over long time-scales may shed new light into the mechanism of transport of Li+ ions within the SEI layer. Borodin, et al.10,11,12, performed MD calculations using a specialized polarizable force field to obtain transport properties of Li+ ion in a model SEI layer composed of ordered and amorphous Li2EDC. 
Since polarizable force fields are not readily available in standard molecular dynamics packages, we have instead identified non-polarizable force field parameters13 for simulation of the Li2EDC model of the SEI layer. The microsecond time-scales studied here, longer than earlier work10, provide additional insight into structural and transport properties of Li+ ions in this model SEI. We summarize the structural and transport properties from MD studies of a model SEI layer with 256 Li2EDC moieties (Fig. 1, redrawn from ref.13). In addition, simulations of a dilute Li+ ion in EC (single Li+ ion solvated by 249 EC) provide a perspective for comparison. Chemical structure of Li2EDC (left) and EC (right) molecules. Carbon atoms are colored gray, hydrogens white, carbonyl (Oc) or ether (Oe) oxygen red, and Li+ ions yellow. (Redrawn with permission from ECS Transactions 77, 1155–1162 (2017))13. Structural data The radial distribution functions (rdfs) and running coordination numbers (Fig. 2, redrawn from ref.13), involving Li+, carbonyl oxygen (Oc) and ether oxygen (Oe) of EDC and EC are compared at several temperatures. For EDC, the rdfs become less structured with increasing temperature, but the peak positions and overall coordination numbers of the first peak change only slightly. This structural robustness suggests an amorphous glassy matrix for the SEI. These results compare well with recent polarizable force field results10,14, supporting the applicability of the present non-polarizable force field for these structural characteristics. In EDC, the peak position for the Li-Oe distribution is shifted to a slightly larger value than Li-Oc because the charge on Oe is 40% smaller than that of Oc. Radial distribution functions, g(r) and running coordination number, \(\langle n(r)\rangle =4\pi \rho {\int }_{0}^{r}g(x){x}^{2}dx\), for Oc-Li+ and Oe-Li+ at various temperatures. For EDC (left), occupancy of the first solvation shell does not depend on the temperature, even though the peak height diminishes with increasing temperature. For liquid EC solvent (right), almost one additional Oc atom interacts with Li+ at lower temperature. (Redrawn with permission from ECS Transactions 77, 1155–1162 (2017))13. In the case of EC, the peak height increases with temperature and the peak position of the Li-Oe is farther out due to strong interactions between Li+ and Oc. The structure around Li+ changes significantly with temperature, as highlighted by the running coordination number. In contrast to glassy EDC, almost one additional carbonyl oxygen (Oc) atom interacts with Li+ at lower temperature for EC solvent. Mean-squared displacements of Li+ The mean-squared displacements (MSD, Fig. 3, redrawn from ref.13) of Li+ ion in EDC and liquid EC distinguish three distinct dynamical regimes: ballistic at short times, trapping at intermediate times, and diffusive at long times. Trapping of Li+ in EDC is qualitatively different than in liquid EC, and the trapping times in EDC diminish as temperature increases and the glassy EDC matrix softens. Diffusivities of Li+ are extracted from the slope of the diffusive regime of MSD and then extrapolated to low temperatures using an Arrhenius fit (see Supplementary Information). The diffusivity of Li+ in EDC at 333 K is 10−12 cm2/s, which is in agreement with Borodin et. al.10. The conductivity of Li2 EDC obtained from the Nernst-Einstein equation (Supplementary Information) is 4.5 × 10−9 S/cm, which is also in agreement with experiment10. 
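As a concrete illustration of the analysis just described (a sketch, not the authors' scripts), the diffusivity can be extracted from the slope of the diffusive part of the MSD and converted to a Nernst-Einstein conductivity. The MSD array and the Li+ number density below are placeholders, and only the Li+ contribution is counted.

```python
import numpy as np

kB, e = 1.380649e-23, 1.602176634e-19          # J/K, C

def diffusivity_from_msd(time_ns, msd_nm2, fit_fraction=0.5):
    """Fit <dr^2(t)> ~ 6 D t over the late, diffusive part of the MSD; returns D in cm^2/s."""
    i0 = int(fit_fraction * len(time_ns))
    slope = np.polyfit(time_ns[i0:], msd_nm2[i0:], 1)[0]   # nm^2/ns
    return slope / 6.0 * 1e-5                              # 1 nm^2/ns = 1e-5 cm^2/s

def nernst_einstein_sigma(D_cm2_s, n_per_cm3, T_K):
    """sigma = n e^2 D / (kB T), counting only the Li+ contribution; returns S/cm."""
    return (n_per_cm3 * 1e6) * e ** 2 * (D_cm2_s * 1e-4) / (kB * T_K) / 100.0

# Synthetic diffusive MSD corresponding to D = 1e-12 cm^2/s, plus a guessed Li+ density
t_ns = np.linspace(0.0, 1000.0, 1001)
msd = 6.0e-7 * t_ns
D = diffusivity_from_msd(t_ns, msd)
print(D, nernst_einstein_sigma(D, n_per_cm3=1.0e22, T_K=333.0))
```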
Mean-squared displacements, \(\langle {\rm{\Delta }}r{(t)}^{2}\rangle \), measured for Li+ in EDC (left) and EC (right). The behavior in EDC at intermediate timescales 0.001< t < 1 (ns) demonstrates trapping of the Li+ ion. Ballistic motion \((\langle {\rm{\Delta }}r{(t)}^{2}\rangle \propto {t}^{2})\) is evident at short timescales, while diffusive motion \((\langle {\rm{\Delta }}r{(t)}^{2}\rangle \propto {t}^{1})\) appears at long timescales in both EDC and EC solvents. Dashed lines with slope 1 and 2 (log-scale) are provided as visual cues. At high T, the trapping regime (shaded region) diminishes and the EDC matrix behaves more like liquid EC. Note that the time scales differ dramatically between the two systems. (Redrawn with permission from ECS Transactions 77, 1155–1162 (2017))13. Time correlation functions for Li+ transport The van Hove time correlation function $$G(r,t)=\frac{1}{N}\langle \sum _{i=1}^{N}\sum _{j=1}^{N}\delta ({\bf{r}}+{{\bf{r}}}_{j}\mathrm{(0)}-{{\bf{r}}}_{i}(t))\rangle $$ for Li+ ion describes the probability of finding, at time t, a Li+ ion displaced by r from its initial position. This van Hove function can be split into self and distinct parts, G(r, t) = Gs(r, t) + Gd(r, t), with the latter taking the form $${G}_{{\rm{d}}}(r,t)=\frac{1}{N}\langle \sum _{i\ne j}^{N}\delta ({\bf{r}}+{{\bf{r}}}_{j}(0)-{{\bf{r}}}_{i}(t))\rangle .$$ At t = 0, the van Hove function reduces to the static pair correlation function, $$G(r,\,\mathrm{0)}=\delta ({\bf{r}})+\rho g(r\mathrm{).}$$ The self part of the van Hove function provides a jump probability, and the natural initial approximation is the Gaussian model, $${G}_{{\rm{s}}}(r,t)={[\frac{3}{2\pi \langle {\rm{\Delta }}r{(t)}^{2}\rangle }]}^{\mathrm{3/2}}\times \exp [-(\frac{3{r}^{2}}{2\langle {\rm{\Delta }}r{(t)}^{2}\rangle })]\mathrm{.}$$ For fluids like EC, this Gaussian behavior should be reliable. In contrast, Gs(r, t) in the trapping regime of EDC indicates (Fig. 4, redrawn from ref.13) depletion of probability near the trap boundaries, \(r > {\langle {\rm{\Delta }}r{(t)}^{2}\rangle }^{\mathrm{1/2}}\), and replacement of that probability at shorter and longer distances. Deviation from the Gaussian behavior can be characterized by the non-Gaussian parameter15 $$\alpha (t)=\frac{3\langle {\rm{\Delta }}r{(t)}^{4}\rangle }{5{\langle {\rm{\Delta }}r{(t)}^{2}\rangle }^{2}}-1.$$ Dimensionally-scaled Gs(r, t) for Li+ ions as it depends on displacement for increasing times in EDC (left) and EC (right). Though the Gaussian model (dashed curve) is reliable in EC solvent, probability is depleted near the trap boundaries, \(r > {\langle {\rm{\Delta }}r{(t)}^{2}\rangle }^{\mathrm{1/2}}\), and replaced at shorter and longer distances for EDC. Note that these correlations decay in a few ps for EC, but require ns for EDC. (Redrawn with permission from ECS Transactions 77, 1155–1162 (2017))13. The van Hove self-correlation function is accurately Gaussian for liquid EC, hence α(t) = 0. In contrast, α(t) has non-zero values for glassy EDC (Fig. 5). For EDC, α(t) has a maximum that decreases with increasing temperature. The mean-squared displacements of the carbonyl carbons of EDC and EC molecules (Fig. 6) further verify the sluggish diffusion of EDC relative to EC. In the case of EDC, the mean-squared displacement curves are relatively flat for all temperatures, indicating little diffusion of the EDC matrix. Non-Gaussian parameter calculated for Li+ in EDC at several temperatures. Vertical lines are drawn at the maximum value of α(t). 
The non-zero value and inverse temperature dependence of α(t) attests to the glassy behavior of EDC, which becomes more fluid-like at higher temperature. Comparison between mean-squared displacements of carbonyl carbon of EDC and EC. The flat MSD at larger time-scales reflects the glassy nature of the EDC matrix. Vineyard's convolution approximation16,17, $${G}_{{\rm{d}}}(r,t)\approx \int {{\rm{d}}}^{3}r^{\prime} g(r^{\prime} ){G}_{{\rm{s}}}(|\overrightarrow{r}-\overrightarrow{r}^{\prime} |,t),$$ provides an initial characterization for the distinct part of the Li+-Li+ van Hove function (Fig. 7, left panel redrawn from ref.13). Here, the convolution of the radial distribution function is made with the self part of the van Hove function that is generated by dynamics of Li+ in EDC. This approximation is consistent with the idea that Gd(r, t) is a dynamical counterpart to g(r), the radial distribution function. The non-zero population in the core region surrounding r ≈ 0 describes correlation of Li+ jumps; that is, refilling a hole left by a Li+ ion with a neighboring Li+ ion. The Li-Li radial distribution function in EDC at t = 0 (dashed) and the corresponding distinct part, Gd(r, t), of the van Hove function within the Vineyard approximation. The non-zero population in the core region surrounding r ≈ 0 (right panel) describes correlation of Li+ jumps, i.e., refilling a hole left by a Li+ ion with a neighboring Li+ ion. (Left panel is redrawn with permission from ECS Transactions 77, 1155–1162 (2017))13. Vibrational power spectra of Li+ The vibrational power spectra are obtained by spectral decomposition of the velocity autocorrelation (Fig. 8, left). Since we are interested in Li+ transport, this analysis is carried out for Li+ atoms exclusively. This fact distinguishes our results from the FTIR spectrum of EDC molecule that was reported previously2. The spectral distribution for Li+ in EDC (Fig. 8, right) is bi-modal, with a temperature dependence at the higher frequency mode. Electronic structure calculations, using structures sampled from MD trajectories, confirm these modes and provide a molecular assignment (Fig. 9). The lower frequency mode (near 400 cm−1) corresponds to motion of a Li+ ion trapped in a cage formed by its nearest neighbors. The higher frequency mode (near 700 cm−1) corresponds to Li+ ion picking up the scissoring motion of a neighboring carbonate group. Velocity auto-correlation functions (left) for Li+ in EDC at different temperatures. Power spectra (right) for Li+ in EDC. Vertical lines near 570 cm−1 are Einstein frequencies (Eq. 7) of these ionic motions. The power spectra identify two prominent vibrational bands, near 400 cm−1 and 700 cm−1. The intensities of the high frequency modes diminish with increasing temperature. Vibrational modes due to prominent Li+ movement (left 436 cm−1) and solvent motion around 700 cm−1 (right). Li+ is surrounded by 3 EDC molecules. Adjacent carbonyl and ether oxygen of the same EDC molecule interact with Li+ (left). The Gaussian0924 software was used for these calculations with the b3lyp exchange-correlation density functional and 6–31 + g (d, p) basis set. The structures were sampled from the classical MD simulations and then the carboxyl groups not coordinating with the Li+ ions were neutralized by adding hydrogen atoms. The blue arrows indicate the atomic displacements for these normal mode frequencies25. 
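The vibrational analysis described above can be reproduced from the velocity trajectory in a few lines. In the sketch below, `vel` stands for an array of Li+ velocities of shape (n_frames, n_Li, 3) sampled every dt_fs femtoseconds; it is a placeholder for trajectory data not included here, and the window function is an assumption rather than the authors' procedure.

```python
import numpy as np

def vacf(vel):
    """Normalized velocity autocorrelation function averaged over the Li+ ions."""
    n = vel.shape[0]
    c = np.zeros(n // 2)
    for lag in range(n // 2):
        c[lag] = np.mean(np.sum(vel[: n - lag] * vel[lag:], axis=-1))
    return c / c[0]

def power_spectrum(c, dt_fs):
    """Fourier transform of the VACF; returns wavenumbers (cm^-1) and intensities."""
    spec = np.abs(np.fft.rfft(c * np.hanning(len(c))))
    freq_hz = np.fft.rfftfreq(len(c), d=dt_fs * 1e-15)
    return freq_hz / 2.99792458e10, spec          # convert Hz to cm^-1
```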
The Einstein frequency (νe) is obtained as the coefficient in the quadratic approximation to the velocity autocorrelation function at short times, $$\langle \overrightarrow{v}(t)\cdot \overrightarrow{v}(0)\rangle /\langle {v}^{2}\rangle \approx 1-{({\nu }_{{\rm{e}}}t)}^{2}$$ In a simple Einstein model, all atoms vibrate with a single frequency. Fittingly, for the bi-modal spectra, νe lies in-between the two significant modes. Non-polarizable, classical force field parameters were used to study transport characteristics of Li+ in a model SEI layer composed of EDC. The structural results in EDC are consistent with prior studies that use polarizable force fields. An advantage of non-polarizable force fields is their ready availability in standard simulation packages and accessibility to MD studies over microsecond timescales. Thus, the dynamical characteristics presented here lay a basis for careful molecular-scale examination of the mechanism of transport of Li+ ions in the SEI. These observations over microsecond simulation times provide new physical insights. Specifically, the results compare the glassy behavior of the ethylene dicarbonate SEI matrix with the fluid behavior of liquid ethylene carbonate (Fig. 6). Further, the Li+ MSDs examined in the nanosecond time intervals distinguish Li+ ion trapping in cages formed by the EDC matrix. Our results establish the sizes of the cages and the trapping lifetimes (Fig. 3), and also the dynamical motions of the Li+ ions when trapped (Fig. 8). The vibrational frequency of a trapped ion (about 440 cm−1) is confirmed by electronic structure calculations (Fig. 9). Our results invalidate a naive Einstein model of trapped ions that would be plausible otherwise. The van Hove correlation functions (Figs 4 and 7) provide information for analysis of the correlation of Li+ jumps. Li2EDC (Fig. 1) is known to be a dominant component of the SEI layer in lithium ion batteries involving carbonate solvents. Although Li2EDC is synthesized in crystalline form, its structure at the SEI layer is unknown2,10. We constructed a system of 256 Li2EDC moieties for our initial SEI studies. This system size is identical to previous molecular simulations performed using polarizable force fields10,11. For comparison, we also simulated a single Li+ ion solvated by 249 EC (Fig. 1). The GROMACS molecular dynamics simulation package18 was used for all simulations, and the necessary topology files for EDC and EC were created using non-polarizable all-atom optimized potentials for liquid simulations (OPLS-AA) force field parameters19. The partial charges on EC atoms were adjusted down to 80% to match experimental transport properties for EC20. The EDC and Li+ ions were randomly placed into the simulation cell and MD simulations were performed at 700, 500 and 333 K. Since EDC ions are sluggish, configurations from the highest temperature calculation were used to obtain starting points for further simulations, cooled down to 500 K and subsequently to 333 K to study moderate temperature phenomena. Thus, the results presented here are based on amorphous configurations of the EDC/SEI layer. Although it is unclear that Li2EDC is crystalline at the SEI layer, we have simulated ordered layers and found that the solvent density and radial distribution functions are not substantially changed compared with amorphous Li2EDC. The ordered Li2EDC is more conductive compared to amorphous Li2EDC11, but formation of an ordered SEI structure is unlikely. 
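Looping back to Eq. (7): the Einstein frequency can be extracted by a short-time quadratic fit of the normalized VACF, for example with the `c` and `dt_fs` of the previous sketch. Whether the result is quoted as an ordinary or an angular frequency (a factor of 2π) depends on the convention behind Eq. (7); the conversion to cm−1 below assumes the ordinary-frequency reading and is only illustrative.

```python
import numpy as np

def einstein_frequency_cm1(c, dt_fs, n_fit=10):
    """Fit 1 - c(t) ~ (nu_e t)^2 over the first n_fit points of the normalized VACF."""
    t = np.arange(n_fit) * dt_fs * 1e-15                # s
    slope = np.polyfit(t ** 2, 1.0 - c[:n_fit], 1)[0]   # slope = nu_e^2
    nu_e_hz = np.sqrt(max(slope, 0.0))
    return nu_e_hz / 2.99792458e10                      # Hz -> cm^-1 (illustrative convention)
```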
Therefore, we here discuss only results of amorphous Li2EDC. Periodic boundary conditions mimic the bulk environment in these calculations. A Nose-Hoover thermostat21,22 and a Parrinello-Rahman23 barostat were utilized to achieve equilibration in the NpT ensemble at 1 atm pressure. A 200 ns production run at 700 K was carried out after initial energy minimization and equilibration steps, then a 250 ns calculation at 500 K. Finally, a 1 μs trajectory at 333 K temperature was constructed. Configurations were saved after each 1 ps of those production runs. A separate 1 ns simulation with a sampling rate of 1 fs was carried out at each temperature to calculate the time-independent pair correlation functions discussed below. Piper, D. M. et al. Stable silicon-ionic liquid interface for next-generation lithium-ion batteries. Nature Comm. 6 (2015). Zhuang, G., Xu, K., Yang, H., Jow, T. & Ross, P. J. Lithium ethylene dicarbonate identified as the primary product of chemical and electrochemical reduction of EC in 1.2M LiPF6/EC:EMC electrolyte. J. Phys. Chem. B 109, 17567–73 (2005). Zorba, V., Syzdek, J., Mao, X., Russo, R. E. & Kostecki, R. Ultrafast laser induced breakdown spectroscopy of electrode/electrolyte interfaces. Applied Physics Letters 100, 234101 (2012). Xu, K., Lam, Y., Zhang, S. S., Jow, T. R. & Curtis, T. B. Solvation Sheath of Li+ in Nonaqueous Electrolytes and Its Implication of Graphite/Electrolyte Interface Chemistry. The Journal of Physical Chemistry C 111, 7411–7421 (2007). Leung, K., Soto, F., Hankins, K., Balbuena, P. B. & Harrison, K. L. Stability of Solid Electrolyte Interphase Components on Lithium Metal and Reactive Anode Material Surfaces. The Journal of Physical Chemistry C 120, 6302–6313 (2016). Martinez de la Hoz, J. M., Soto, F. A. & Balbuena, P. B. Effect of the Electrolyte Composition on SEI Reactions at Si Anodes of Li-Ion Batteries. J. Phys. Chem. C 119, 7060–68 (2015). Leung, K. & Budzien, J. L. Ab initio molecular dynamics simulations of the initial stages of solid–electrolyte interphase formation on lithium ion battery graphitic anodes. Phys. Chem. Chem. Phys. 12, 6583–6586 (2010). Borodin, O. et al. Competitive lithium solvation of linear and cyclic carbonates from quantum chemistry. Physical Chemistry Chemical Physics 18, 164–175 (2016). Kim, S.-P., Duin, A. C. T. V. & Shenoy, V. B. Effect of electrolytes on the structure and evolution of the solid electrolyte interphase (SEI) in Li-ion batteries: A molecular dynamics study. Journal of Power Sources 196, 8590–8597 (2011). Borodin, O., Zhuang, G. V., Ross, P. N. & Xu, K. Molecular Dynamics Simulations and Experimental Study of Lithium Ion Transport in Dilithium Ethylene Dicarbonate. J. Phys. Chem. C 117, 7433–7444 (2013). Bedrov, D., Borodin, O. & Hooper, J. B. Li+ Transport and Mechanical Properties of Model Solid Electrolyte Interphases (SEI): Insight from Atomistic Molecular Dynamics Simulations. The Journal of Physical Chemistry C 121, 16098–16109 (2017). Jorn, R., Kumar, R., Abraham, D. P. & Voth, G. A. Atomistic Modeling of the Electrode–Electrolyte Interface in Li-Ion Energy Storage Systems: Electrolyte Structuring. The Journal of Physical Chemistry C 117, 3747–3761 (2013). Muralidharan, A., Chaudhari, M., Rempe, S. & Pratt, L. R. Molecular dynamics simulations of lithium ion transport through a model solid electrolyte interphase (sei) layer. ECS Transactions 77, 1155–1162 (2017). Borodin, O. & Bedrov, D. 
Interfacial Structure and Dynamics of the Lithium Alkyl Dicarbonate SEI Components in Contact with the Lithium Battery Electrolyte. J. Phys. Chem. C 118, 18362–18371 (2014). Vorselaars, B., Lyulin, A. V., Karatasos, K. & Michels, M. A. J. Non-Gaussian nature of glassy dynamics by cage to cage motion. Phys. Rev. E 75, 011504 (2007). Stecki, J. & Narbutowicz, M. A. A generalization of vineyard's convolution approximation. Chem. Phys. Letts. 5, 345–346 (1970). Yeomans-Reyna, L., Acuña-Campa, H. & Medina-Noyola, M. Vineyard-like approximations for colloid dynamics. Phys. Rev. E 62, 3395–3403 (2000). Van Der Spoel, D. et al. GROMACS: Fast, Flexible, and Free. J. Comput. Chem. 26, 1701–1718 (2005). Jorgensen, W. L. & Maxwell, D. S. Development and Testing of the OPLS All-atom Force Field on Conformational Energetics and Properties of Organic Liquids. J. Am. Chem. Soc. 118, 11225–11236 (1996). Chaudhari, M. I. et al. Scaling Atomic Partial Charges of Carbonate Solvents for Lithium ion (Li+) Solvation and Diffusion. J. Chem. Theory Comp. 12, 5709–5718 (2016). Nosé, S. A Molecular Dynamics Method for Simulations in the Canonical Ensemble. Mol. Phys. 52, 255–268 (1984). Hoover, W. G. Canonical Dynamics: Equilibrium Phase-space Distributions. Phys. Rev. A 31, 1695–1697 (1985). Parrinello, M. & Rahman, A. Polymorphic Transitions in Single Crystals: A New Molecular Dynamics Method. J. App. Phys. 52, 7182–7190 (1981). Frisch, M. J. et al. Gaussian 09 Revision A.1. Gaussian Inc. Wallingford CT (2009). Rempe, S. B. & Jónsson, H. A computational exercise illustrating molecular vibrations and normal modes. Chem. Ed. 3, 1–17 (1998). Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. This work is supported by the Assistant Secretary for Energy Efficiency and Renewable Energy, Office of Vehicle Technologies of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, Subcontract No. 7060634 under the Advanced Batteries Materials Research (BMR) Program. This work was performed, in part, at the Center for Integrated Nanotechnologies (CINT), an Office of Science User Facility operated for the U.S. DOE's Office of Science by Los Alamos National Laboratory (Contract DE-AC52-06NA25296) and SNL. The views expressed in the article do not necessarily represent the views of the U.S. Department of Energy or the United States Government. Tulane University, Department of Chemical and Biomolecular Engineering, New Orleans, 70118, USA Ajay Muralidharan & Lawrence R. Pratt Sandia National Laboratories, Center for Biological and Engineering Sciences, Albuqueruque, 87185, USA Mangesh I. Chaudhari & Susan B. Rempe Search for Ajay Muralidharan in: Search for Mangesh I. Chaudhari in: Search for Lawrence R. Pratt in: Search for Susan B. Rempe in: A.M. and M.I.C. performed simulations, analysis and prepared all the figures. L.R.P. and S.B.R. wrote the manuscript and all authors reviewed the manuscript. Correspondence to Susan B. Rempe. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Muralidharan, A., Chaudhari, M.I., Pratt, L.R. et al. Molecular Dynamics of Lithium Ion Transport in a Model Solid Electrolyte Interphase. Sci Rep 8, 10736 (2018). DOI: https://doi.org/10.1038/s41598-018-28869-x
March 2013, 12(2): 1029-1047. doi: 10.3934/cpaa.2013.12.1029
The explicit nonlinear wave solutions of the generalized $b$-equation
Liu Rui, Department of Mathematics, South China University of Technology, Guangzhou 510640, China
Received January 2011; Revised July 2012; Published September 2012.
In this paper, we study the nonlinear wave solutions of the generalized $b$-equation involving two parameters $b$ and $k$. Let $c$ be the constant wave speed, $c_5=\frac{1}{2}(1+b-\sqrt{(1+b)(1+b-8k)})$, and $c_6=\frac{1}{2}(1+b+\sqrt{(1+b)(1+b-8k)})$. We obtain the following results: 1. If $-\infty < k < \frac{1+b}{8}$ and $c\in (c_5, c_6)$, then there are three types of explicit nonlinear wave solutions: hyperbolic smooth solitary wave solution, hyperbolic peakon wave solution, and hyperbolic blow-up solution. 2. If $-\infty < k < \frac{1+b}{8}$ and $c=c_5$ or $c_6$, then there are two types of explicit nonlinear wave solutions: fractional peakon wave solution and fractional blow-up solution. 3. If $k=\frac{1+b}{8}$ and $c=\frac{b+1}{2}$, then there are two types of explicit nonlinear wave solutions: fractional peakon wave solution and fractional blow-up solution. Not only is the existence of these solutions shown, but their concrete expressions are presented. We also reveal the relationships among these solutions. Besides, the correctness of these solutions is tested by using the software Mathematica.
Keywords: dynamical system, explicit nonlinear wave solution, solitary, bifurcation method, generalized $b$-equation.
Mathematics Subject Classification: Primary: 34A20, 34C35, 35B65; Secondary: 58F05, 76B2.
Citation: Liu Rui. The explicit nonlinear wave solutions of the generalized $b$-equation. Communications on Pure & Applied Analysis, 2013, 12 (2) : 1029-1047. doi: 10.3934/cpaa.2013.12.1029
December 2016, 36(12): 7029-7056. doi: 10.3934/dcds.2016106
On hyperbolicity in the renormalization of near-critical area-preserving maps
Hans Koch, Department of Mathematics, The University of Texas at Austin, Austin, TX 78712
Received February 2016; Revised June 2016; Published October 2016.
We consider MacKay's renormalization operator for pairs of area-preserving maps, near the fixed point obtained in [1]. Of particular interest is the restriction $\mathfrak{R}_{0}$ of this operator to pairs that commute and have a zero Calabi invariant. We prove that a suitable extension of $\mathfrak{R}_{0}^{3}$ is hyperbolic at the fixed point, with a single expanding direction. The pairs in this direction are presumably commuting, but we currently have no proof for this. Our analysis yields rigorous bounds on various "universal" quantities, including the expanding eigenvalue.
Keywords: invariant circle, hyperbolicity, renormalization, area-preserving maps.
Mathematics Subject Classification: Primary: 37E20; Secondary: 37F2.
Citation: Hans Koch. On hyperbolicity in the renormalization of near-critical area-preserving maps. Discrete & Continuous Dynamical Systems - A, 2016, 36 (12) : 7029-7056. doi: 10.3934/dcds.2016106
G. Arioli and H. Koch, The critical renormalization fixed point for commuting pairs of area-preserving maps, Comm. Math. Phys., 295 (2010), 415. doi: 10.1007/s00220-009-0922-1.
R. de la Llave and A. Olvera, The obstruction criterion for non-existence of invariant circles and renormalization, Nonlinearity, 19 (2006), 1907. doi: 10.1088/0951-7715/19/8/008.
J.-P. Eckmann, H. Koch and P. Wittwer, A computer-assisted proof of universality for area-preserving maps, Mem. Amer. Math. Soc., 47 (1984), 1. doi: 10.1090/memo/0289.
C. Falcolini and R. de la Llave, A rigorous partial justification of Greene's criterion, J. Stat. Phys., 67 (1992), 609. doi: 10.1007/BF01049722.
D. Gaidashev, T. Johnson and M. Martens, Rigidity for infinitely renormalizable area-preserving maps, Preprint, (2012). doi: 10.1215/00127094-3165327.
J. M. Greene, A method for determining a stochastic transition, J. Math. Phys., 20 (1979), 1183. doi: 10.1063/1.524170.
E. Hille and R. S. Phillips, Functional Analysis and Semi-groups, AMS Colloquium Publications, 31 (1974).
H. Hofer and E. Zehnder, Symplectic Invariants and Hamiltonian Dynamics, Birkhäuser Verlag, (1994). doi: 10.1007/978-3-0348-8540-9.
H. Koch, A renormalization group fixed point associated with the breakup of golden invariant tori, Discrete Contin. Dynam. Systems, 11 (2004), 881. doi: 10.3934/dcds.2004.11.881.
H. Koch, Existence of Critical Invariant Tori, Erg. Theor. Dyn. Syst., 28 (2008), 1879. doi: 10.1017/S0143385708000199.
R. S. MacKay, Renormalisation in Area Preserving Maps, Thesis, (1982). doi: 10.1142/9789814354462.
R. S. MacKay, Greene's residue criterion, Nonlinearity, 5 (1992), 161. doi: 10.1088/0951-7715/5/1/007.
A. Olvera and C. Simó, An obstruction method for the destruction of invariant curves, Physica D, 26 (1987), 181. doi: 10.1016/0167-2789(87)90222-3.
S. Ostlund, D. Rand, J. Sethna and E. Siggia, Universal transition from quasiperiodicity to chaos in dissipative systems, Phys. Rev. Lett., 49 (1982), 132. doi: 10.1103/PhysRevLett.49.132.
S. J. Shenker and L. P. Kadanoff, Critical behaviour of KAM surfaces. I. Empirical results, J. Stat. Phys., 27 (1982), 631. doi: 10.1007/BF01013439.
A. Stirnemann, Renormalization for Golden Circles, Comm. Math. Phys., 152 (1993), 369. doi: 10.1007/BF02098303.
A. Stirnemann, Towards an existence proof of MacKay's fixed point, Comm. Math. Phys., 188 (1997), 723. doi: 10.1007/s002200050185.
M. Yampolsky, Hyperbolicity of renormalization of critical circle maps, Publ. Math. Inst. Hautes Etudes Sci., 96 (2002), 1. doi: 10.1007/s10240-003-0007-1.
Ada Reference Manual, ISO/IEC 8652:2012(E), available e.g. at http://www.ada-auth.org/arm.html.
The Institute of Electrical and Electronics Engineers, Inc., IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Std 754-2008.
A free-software compiler for the Ada programming language, which is part of the GNU Compiler Collection, see http://gcc.gnu.org/.
The MPFR library for multiple-precision floating-point computations with correct rounding, see http://www.mpfr.org/.
The computer programs are available at ftp://ftp.ma.utexas.edu/pub/papers/koch/maps-spec/index.html.
Pre-coded LDPC coding for physical layer security Kyunghoon Kwon1, Taehyun Kim1 & Jun Heo ORCID: orcid.org/0000-0002-7230-78751 This paper examines a simple and practical security preprocessing scheme for the Gaussian wiretap channel. A security gap based error rate is used as a measure of security over the wire-tap channel. In previous works, information puncturing and scrambling schemes based on low-density parity-check (LDPC) codes were employed to reduce the security gap. Unlike the previous works, our goal is to improve security performance by using the precode of the feed-forward (FF) structure. We demonstrate that the FF code has an advantage for the security gap compared to the perfect scrambling scheme. Furthermore, we propose the joint iterative decoding method between LDPC and FF codes to improve the reliability/security performances. The proposed joint iterative method is able to achieve outstanding performance by using the proposed scaling and correction factors based on signal-to-noise ratio (SNR) evolution. The improved performances by these factors are demonstrated through the extrinsic information transfer (EXIT) chart and simulation results. Finally, the simulation results suggest that the proposed coding scheme is more effective than the conventional scrambling scheme. For several decades, wireless communication technologies have been available that exchange information rapidly and reliably between a sender and a receiver. Owing to the continued development of communication technologies, we can today access communication networks conveniently and with transportability, whenever and wherever we wish. In conjunction with this development, a growing interest has developed in secure information transmission over wireless networks related to the specific security vulnerabilities caused by the inherent openness of wireless media. It is difficult to detect eavesdropping because anybody can acquire transmitted information over a wireless communication channel. Shannon established communication theory in 1949 and defined the basic concept of secure communication from the information-theoretic perspective [1]. Using Shannon's approaches, a sender, Alice, securely transmits an information message M to a legitimate receiver, Bob, across a public channel. To be "perfectly secure", the requirement of the mutual information I(M;X)=0 must be satisfied between Alice's information message M and the transmitted word X. From this definition, Shannon proved that Alice and Bob must share a key string to achieve perfect security. This theory was the introduction of the key distribution problem and is the basis of symmetric key cryptography defense systems for the upper layer implemented today. Present systems based on cryptography prevent the extraction of information without a secure key string when information is exposed to the eavesdropper Eve. This public key algorithm depends on the computational limit of the eavesdropper to ensure computational security. In spite of the improvements in public key algorithms, there remains a problem for security based on the assumption of Eve's limited computational resources considering the advancement of available computing power. An alternative technology that is not based on computational complexity, is physical layer security. Unlike the key distribution problem, physical layer security utilizes the characteristics of a communication channel and allows a legitimate receiver to decode correctly. 
The important difference compared to Shannon's theory is that the eavesdropper can observe information transmitted by the sender through another channel. Physical layer security guarantees security analytically, based on information theory, regardless of the eavesdropper's computational power. Therefore, there is no elevation of risk due to the advancement of high speed computing. A security system based on the physical layer was introduced by Wyner in 1975 [2] and information-theoretically secure communication was studied in [3, 4]. According to the wiretap channel model defined by Wyner, the main channel was defined between the sender, Alice, and the legitimate receiver, Bob; the wiretap channel was defined as a degraded version of the main channel. The main and wiretap channels were assumed to be discrete memoryless channels. Suppose that Alice sends Bob an s-bit message M across the main channel. Alice encodes M into an n-bit transmitted word X. Bob and Eve receive message X across the main and wiretap channel, respectively. Bob and Eve's channel observations are denoted by Y and Z, respectively. Alice encodes the information for two objectives [2] as follows: (i) the error probability between the message M and Bob's decoded message \(\hat {M}_{B}\) of the received message Y must converge to zero (with negligibly small probability of error) [reliability]. ii) no information is shared between information message M and Eve's received message Z. For a precise expression, the formulation is articulated as the rate of mutual information \(\frac {1}{n}I(M;Z)\rightarrow 0\) when n→∞ [security]. Wyner defined that physical layer security is achieved without key distribution using forward error correction (FEC) when it corresponds to the considerations of reliability and security. Moreover, the secrecy rate is defined by the rate s/n, where s and n are the number of secret message bits and the number of bits transmitted over the channel, respectively. A detailed explanation of Wyner code could be found in [5]. Cheong generalized the Gaussian wiretap channel [6] based on Wyner's wiretap channel model as illustrated in Fig. 1. Wyner showed that if the wiretap channel is a degraded version of the main channel then secrecy capacity is positive. In [4], the authors showed that the secrecy capacity is positive when the main channel is "less noisy" than the wiretap channel such as \({\sigma _{B}^{2}}{\leq \sigma _{E}^{2}}\) (corollary 3 in [4]). Then, Bob's received signal-to-noise ratio (SNR) \(\left (P/{\sigma _{B}^{2}}\right)\) is greater than Eve's SNR \(\left (P/{\sigma _{E}^{2}}\right)\). Block diagram of a Gaussian wiretap channel Several security measurement metrics for physical layer security are used for evaluating transmissions over the wiretap channel. These security metrics depend on the characteristic of the coding scheme used for transmissions. Among the metrics, bit error rate (BER) can be a practical metric as a security measure when modulation and coding schemes (MCS) are considered in a practical system [7, 8]. Therefore, since the BER metric allows for easy measurement and straightforward assessment, in this paper, we focus on the BER security metric. Another useful metric to measure the security is the equivocation rate analysis by information-theoretic security on the secret message [9–11]. The information theoretic approach could be developed, since BER metric could not provide the same amount of information for the information theoretic approach and guarantee perfect secrecy. 
However, it is out of scope of this paper. The BER of approximately 0.5 of Eve's decoded message \(\hat {M}_{E}\) with random noise does not guarantee that she will not be able to obtain sufficient information on the transmitted message. Security measurement using BER was introduced by Klinc et al. and is called "security gap". Security gap is defined as the difference between Bob and Eve's received SNR and can be used to achieve physical layer security. It is assumed that Bob's received SNR is greater than Eve's. To achieve physical layer security for the same received messages, an average BER over Eve's channel, \({P_{e}^{E}}\) must approach 0.5 and an average BER over Bob's, \({P_{e}^{B}}\) must approach zero. Thus, the reliability and security conditions are as follows: $$\begin{array}{*{20}l} &(\mathrm{a})~\text{Reliability}~:~{P_{e}^{B}}\leq P_{e,max}^{B};\\ &(\mathrm{b})~\text{Security}~:~{P_{e}^{E}}\geq P_{e,min}^{E}, \end{array} $$ where \(P_{e,max}^{B}\) and \(P_{e,min}^{E}\) are the BER thresholds for reliability and security, respectively. Bob's near-zero BER implies a negligibly small probability of error in a practical system and Eve's BER around 0.5 implies that half of the information is corrupted by channel noise. Therefore, \(P_{e,max}^{B}\) and \(P_{e,min}^{E}\) as BER thresholds are defined by BER 10−5 and 0.4 in this paper. Thus, the security gap can be expressed in terms of the SNR as follows [7]: $$\begin{array}{@{}rcl@{}} S_{G}(security~gap)=\frac{{SNR}_{B,min}}{{SNR}_{E,max}}, \end{array} $$ where S N R B,m i n is the lowest SNR for which (a) is satisfied and S N R E,m a x is the highest SNR for which (b) holds. According to (1), the security gap should be kept as small as possible, so that the desired security is achieved with small degradation of Eve's channel. Therefore, it is important to construct an error-correcting code (ECC) to reduce the security gap. As mentioned above, the main target of this paper is to keep the security gap as small as possible. Studies on the error-correcting code for physical layer security have focused on low-density parity-check (LDPC) codes. LDPC codes [12] have a remarkable error-correcting capability and a powerful analysis tool for a belief propagation (BP) decoder, [13] called density evolution (DE) [14] or the extrinsic information transfer (EXIT) chart [15]. Klinc et al. [7] proposed a security-achieving algorithm using LDPC codes with a puncturing scheme. Only parity bits are transmitted to eliminate the exposure of secret messages and the decoders recover the punctured bits using the received parity bits. Baldi proposed non-systematic codes [16, 17] for physical layer security using a scrambling matrix inspired by the McEliece Cryptosystem [18]. This scheme causes intentional bit error propagation where transmitted bits consist of scrambled information bits. This achieves secrecy maintaining the error correction capability of FEC and the advantage of a decrease in the signal power compared with the puncturing scheme [19]. However, since the scrambling scheme produced leads to an error propagation phenomenon, an improved reliability in terms of frame error rate cannot be expected. In this paper, we propose a feed-forward (FF) pre-code that resolves the disadvantage of the puncturing scheme for linear block codes and addresses the advantage of a decrease in the signal power with respect to the conventional scrambling scheme. 
Unlike the previous scrambling scheme that uses a hard decision value for error propagation only, the proposed code has an improved reliability at a high SNR region compared to the scrambling scheme. We demonstrate that the proposed code has improved reliability performance at high SNR with a reduced security gap. The proposed system consists of an LDPC code as an inner code and an FF code as a pre-code (outer code). The outer code has a code rate approaching one to minimize the loss of transmitted information against the conventional scrambling scheme. By concatenating LDPC and FF codes, reliability is achieved using LDPC and security is realized using the FF code. Unlike the scrambling scheme, the FF code employs soft decision decoding to recover the secret message and has superior reliability performance compared to the scrambling scheme. The reliability performance can be improved by applying joint iterative decoding to the proposed system. The improved performance is demonstrated through the EXIT chart curves [20–22]. The outline of this paper is as follows. In Section 2, we introduce the wiretap channel model and review previous works, information puncturing, and scrambling schemes. In Section 3, the encoding and decoding procedures of the FF code are discussed and the performance is evaluated. In Section 4, the joint iterative decoding procedure is explained and the security and reliability performances of the proposed system are evaluated. Also, we approximate the factors used in this paper and analyze the performance of the proposed system using the EXIT chart curve. The conclusion is presented in Section 5. Preliminaries and related works This section discusses some background concepts and the previous works that will be used throughout the paper. Alice sends an n-bit transmitted sequence X n∈{x 1,x 2,⋯,x n } after encoding a k-bit pre-coded message M k∈{m 1,m 2,⋯,m k } (M k is the pre-coded message of the s-bit secret message U s∈{u 1,⋯,u s }). The received sequences of Bob and Eve are denoted as Y n and Z n, respectively. Alice sends message X using binary phase-shift keying (BPSK) modulation. The Gaussian wiretap channel model can then be generalized [9, 10] as follows: $$ \begin{array}{l} Y_{i}=~~X_{i} + N_{i}^{Bob}\\ Z_{i}=\kappa X_{i} + N_{i}^{Eve} \end{array} $$ where \(N_{i}^{Bob}\) and \(N_{i}^{Eve}\) are independent and identically distributed (i.i.d) zero-mean Gaussian random variables of variance \({\sigma _{B}^{2}}\) and \({\sigma _{E}^{2}}\), respectively, and κ is a positive constant that models the gain advantage of the eavesdropper over the destination. Let n ch be the number of transmitted bits over the channel, and n code denote the codeword block length of the LDPC code. Define the design rate \(R_{d}=\frac {k}{n_{ch}}\), the secret rate \(R_{s}=\frac {s}{n_{ch}}\), and the code rate \(R_{c}=\frac {k}{n_{code}}\). In general, if the number of the secret message bits s is equal to the dimension of the LDPC code k, then R s =R d . If R s <R d in [7], it may help to achieve the reduced security gap but higher power should be needed to achieve the reliability condition. Since the power saving is important in many applications, R s ≈R d is preferred. Punctured and scrambled code for Gaussian wiretap channel In [7], D.Klinc et al. proposed punctured LDPC codes to achieve security over the Gaussian wiretap channel. The punctured LDPC codes are employed to remove the exposure of the secret message to Eve. 
The puncturing fraction is denoted by p, which implies the fraction of the punctured secret message. To construct the R s =R d code, the mother code with rate R c =p<0.5 must be used, since the secret rate R s =p/(1−p). The authors of [7, 8] demonstrated that the punctured code can remarkably reduce the security gap compared with the non-punctured code. However, the punctured code has less reliable performance than the non-punctured code and requires higher power to achieve good performance over the main channel. To overcome these vulnerabilities, non-systematic codes using scrambling schemes were proposed by Baldi et al. [16, 17]. In the scrambling scheme, Alice generates the pre-coded message m by multiplying the secret message vector u and scrambling matrix S. Alice then sends the encoded message x by a product of the pre-coded message m=u·S and the generator matrix G to Bob. The scrambling procedure transforms the systematic code to the non-systematic code. Unlike the previous puncturing scheme, the scrambling scheme maintains that the secret and code rates are equal, that is R s =R c , and the scheme requires the same signal power to achieve reliability. The expression of scrambling can be written as $$ x=u\cdot S \cdot G=m\cdot G. $$ A 1×n pre-coded codeword x is generated by multiplying a k×n generator matrix G and 1×k pre-coded message m constructed by multiplying a 1×k secret message u and a k×k scrambling matrix S. Figure 2 illustrates a simple example of the puncturing and scrambling schemes. The received signal is first decoded using the channel decoder. The decoded message \(\hat {u}\) is solved through multiplication by the inverse scrambling (descrambling) matrix S −1 and the decoded message m, and the expression of descrambling can be written as $$ \hat{u}=(m+e)\cdot S^{-1}=u\cdot S\cdot S^{-1}+e\cdot S^{-1}=u+e\cdot S^{-1} $$ Examples of an information puncturing and scrambling schemes It is possible to recover the secret message with correct decoding. However, if decoding fails, an error propagation phenomenon is observed due to the density of the descrambling matrix S −1 in the right-side term of the above equation. In [17], perfect scrambling is denoted by a descrambling matrix with row and column weight >1 and a density close to 0.5. Thus, perfect scrambling with one (or more) error(s) causes an error rate around 0.5 in the final decoded message. Since the BER of Eve is very close to 0.5 (if errors are randomly distributed), it would be difficult to extract much information about the message. In terms of the gain of signal power, Baldi et al. showed that the puncturing scheme has worse error correcting performance than the scrambling scheme with respect to systematic LDPC coding. This is because the puncturing scheme increases the code rate and has a negative impact on the code minimum distance which is reduced [23, 24]. However, the scrambling scheme can only provide an error propagation effect, not error correction. The use of the scrambling scheme without FEC (as unitary rate coding, section 3-A in [17]) guarantees security performance on average, though it does not provide improved reliability. Feed-forward pre-code for physical layer security To achieve physical layer security with minimum loss of code rate, the difference in the dimension between secret and pre-coded messages must be minimized. This also enables low complexity of the security processing. The block diagram of the entire proposed system with the pre-coded LDPC concatenation is illustrated in Fig. 
3. The sender (Alice) encodes the s-bit secret message U using security preprocessing (FF encoder) and then encodes the FF-coded message M into an n-bit codeword X. Bob and Eve receive the message X across the main and wiretap channel, respectively; then, using the received sequence of Bob "Y" and Eve "Z", the decoded messages \(\hat {M}_{B}\) and \(\hat {M}_{E}\) are achieved by performing their own LDPC decoding procedure, respectively. The secret messages \(\hat {U}_{B}\) and \(\hat {U}_{E}\) can be recovered via the FF decoder into the decoded messages for Bob and Eve, respectively. In our simulations, BPSK modulation {+1,−1} is employed and the code rate of LDPC is 1/2. The number of transmitted bits is 960. The FF decoder employs the Bahl-Cocke-Jelinek-Raviv (BCJR) decoding algorithm for soft decision decoding. We employ an LDPC code, as specified in the IEEE 802.16e standard, in the proposed system for the following analysis [25]. For LDPC decoding, the message-passing algorithm in [13] is used. However, in this section, we only provide the encoding and decoding procedures of the FF code as a pre-code and evaluate its reliability and security performances. Block diagram of the proposed system with the pre-coded LDPC concatenation over Gaussian wiretap channel The proposed coding scheme employs the simplest convolutional encoding with one tail bit to protect the secret message for an improved reliability performance, and the decoding complexity of the proposed scheme is higher due to soft decision decoding (BCJR algorithm). Security processing with error propagation must be provided to achieve security. Thus, in this paper, we propose the FF code as a pre-code, which is the inverse form of a differential coding (DC) scheme. The proposed code has low complexity and a feed-forward structure, not a recursive form. Its generator polynomial is g FF (D)=1+D with a memory order of 1. Figure 4 presents the block diagram of the FF encoder. Block diagram of feed-forward encoder The FF encoder is a reversed form of the differential encoder, i.e., the FF encoder and differential decoder constructions are the same structure. The matrix equation of the proposed encoder is expressed as follows: $$ G_{FF} = \left[ \begin{array}{cccc} 1 & 1 & \quad & 0\\ \quad & 1 & \ddots & \quad\\ \quad & \quad & \ddots & 1\\ 0 & \quad & \quad & 1 \end{array} \right], G_{FF}^{-1} = \left[ \begin{array}{cccc} 1 & 1 & \cdots & 1\\ \quad & 1 & \cdots & 1\\ \quad & \quad & \ddots & \vdots\\ 0 & \quad & \quad & 1 \end{array} \right], $$ and the pre-coded sequence m n can be directly expressed as $$ m_{n}=u_{n-1}\oplus u_{n}. $$ Unlike the differential encoder, the output message of the FF encoder consists of the modulo-2 addition between the previous input symbol and the present input symbol. The density of the descrambling matrix \(G_{FF}^{-1}\) is close to 0.5 due to the full upper triangular matrix. For arbitrary n, the density of \(G_{FF}^{-1}\), D FF , can be written as: $$ D_{FF}=\frac{\displaystyle\sum_{i=1}^{n} i}{n^{2}}=\frac{n+1}{2n} $$ where n is the length of the secret message. If n approaches infinity, $$ {\lim}_{n\rightarrow\infty}D_{FF}={\lim}_{n\rightarrow\infty}\frac{n+1}{2n}=0.5. $$ On the case of binary phase shift keying (BPSK), the bit and frame error probability are given as $$ \left\{ \begin{aligned} &P_{e}=\frac{1}{2}\text{erfc}\left(\sqrt{\frac{E_{b}}{N_{0}}}\right),\\ &P_{f}=1-(1-P_{e})^{n}=1-\left(1-\frac{1}{2}\text{erfc}\left(\sqrt{\frac{E_{b}}{N_{0}}}\right)\right)^{n}. 
\end{aligned} \right. $$ Therefore, an upper bound (UB) of FF hard decision decoding is guaranteed as $$\begin{array}{*{20}l} P_{e,UB}^{FF}&=\bigg(\frac{n+1}{2n}\bigg)\bigg\{1-\bigg[1-\frac{1}{2}\text{erfc}\bigg(\sqrt{\frac{E_{b}}{N_{0}}}\bigg)\bigg]^{n}\bigg\} \end{array} $$ $$\begin{array}{*{20}l} &\geq\frac{1}{2}\bigg\{1-\bigg[1-\frac{1}{2}\text{erfc}\bigg(\sqrt{\frac{E_{b}}{N_{0}}}\bigg)\bigg]^{n}\bigg\}. \end{array} $$ The proposed code with density 0.5 guarantees the requirement of perfect scrambling, and achieves the limit of security performance when n goes to infinity. In contrast to the conventional scrambling scheme based on a non-singular random matrix, the FF code consists of the straightforward structures of the encoder and decoder. From (6), it is easily proved that the bit error probability after FF hard decision decoding approaches half the frame error probability, as in [16, 17]. Let j be the number of errors, \(P_{j}\) be the probability that a received n-bit vector contains j errors before FF hard decision decoding, \(m_{i}\) be the ith error position in an n-bit string which contains j errors, and \(\xi_{j}\) be the number of all possible cases after FF hard decision decoding in the n-bit string which contains j errors. \(\Omega_{e}\) denotes the expectation value of the number of errors after FF hard decision decoding. Under such assumptions, the bit error probability after FF hard decision decoding can be expressed as follows: $$\begin{array}{*{20}l} P_{e}^{FF}=\frac{\Omega_{e}}{n}, \end{array} $$ $$ {}{{\left\{ \begin{aligned} & \!\! P_{j} \,=\, {n\choose j}{P_{e}^{j}}(1-P_{e})^{n-j}\\ & \!\! \Omega_{e} \,=\, \displaystyle\sum_{j=1}^{n}\frac{P_{j}}{\xi_{j}}\!\! \left[\displaystyle\sum_{m_{1}=1}^{n-j+1}\displaystyle\sum_{m_{2}=m_{1}+1}^{n-j+2}\cdots\! \displaystyle\sum_{m_{j}=m_{j-1}+1}^{n} \!\left\{\! \displaystyle\sum_{l=1}^{j}(n \,+\, 1 \,-\, m_{l})(-1)^{l-1}\!\!\right\}\!\!\right]. \end{aligned} \right.}} $$ In Fig. 5, the BER performance of the FF hard decision decoding with the number of transmitted bits n=10 is evaluated by the upper bound, the error probability of perfect scrambling, the error probability of FF hard decision decoding, and simulation. The upper bound and error probabilities are computed from (7)–(9). The simulation results show that the performances of Eqs. (7)–(9) are very close to the simulation result. From the figure, the performance and the descrambling density of the proposed FF code are close to those of the conventional scrambling scheme. Upper bound (7), perfect scrambling (8) and the analysis of FF hard decision decoding (9) with n=10 bits The inverse generator polynomial is \(g_{FF}^{-1}(D)=\frac {1}{1+D}\) because the pre-coded message \(\hat {M}=(\hat {m}_{1},\hat {m}_{2},\cdots,\hat {m}_{n})\) consists of the generator polynomial \(g_{FF}(D)=1+D\). The FF decoder is a recursive form of the encoder. Because of this construction, the FF-decoded message \(\hat {U}=(\hat {u}_{1},\hat {u}_{2},\cdots,\hat {u}_{n})\) has a regularity as follows: $$ \hat{u}_{n}=\hat{m}_{n}\oplus\hat{u}_{n-1}. $$ The recursive form of a decoder can continuously propagate a bit error when an error occurs in the received message. The construction of the FF code is based on the convolutional code. Thus, the FF code can be expressed using a trellis diagram. The FF code can be decoded using a soft-input soft-output (SISO) decoder or a symbol-by-symbol maximum a posteriori (MAP) algorithm. The representative MAP decoding algorithm is the BCJR algorithm [26] used in classical turbo decoding.
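To make the error-propagation behaviour of the pre-code concrete, the following is a minimal NumPy sketch (not code from the paper) of the encoding rule \(m_{n}=u_{n-1}\oplus u_{n}\) and the hard-decision inverse recursion of Eq. (11). The function names ff_encode and ff_hard_decode are illustrative only, and the paper's tail-bit handling is simplified to an all-zero initial encoder state. Running it shows that a single residual error in the pre-coded sequence flips roughly half of the descrambled bits on average, which is exactly the perfect-scrambling behaviour discussed above.

```python
import numpy as np

def ff_encode(u):
    """FF pre-code: m_n = u_{n-1} XOR u_n, assuming an all-zero initial state
    (the paper uses one tail bit; that detail is simplified here)."""
    u = np.asarray(u, dtype=np.uint8)
    prev = np.concatenate(([0], u[:-1]))        # u_{n-1}
    return prev ^ u

def ff_hard_decode(m_hat):
    """Hard-decision inverse recursion of Eq. (11): u_n = m_n XOR u_{n-1}.
    The running XOR is equivalent to multiplying by the dense upper-triangular
    G_FF^{-1}, so one residual error corrupts every later position
    (on average about (n+1)/2 of the n bits)."""
    return np.cumsum(np.asarray(m_hat, dtype=np.uint8)) & 1

rng = np.random.default_rng(0)
u = rng.integers(0, 2, size=480, dtype=np.uint8)
m = ff_encode(u)
assert np.array_equal(ff_hard_decode(m), u)      # error-free round trip

m_err = m.copy()
m_err[100] ^= 1                                  # a single post-channel hard-decision error
u_hat = ff_hard_decode(m_err)
print("descrambled bit errors:", int(np.sum(u_hat != u)), "out of", u.size)
```

A soft-decision (BCJR) decoder for the same two-state trellis would replace ff_hard_decode in this sketch; that substitution is what yields the coding gain discussed next.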
By applying the symbol detection of the BCJR algorithm using soft decision, the performance loss of the sequence detection from hard decision can be reduced. The trellis diagram of the FF code is presented in Fig. 6. Trellis diagram corresponding to FF code with generator polynomial g FF (D)=1+D Figure 6 describes the nth FF-decoded message \(\hat {u}_{n}\) value 0 (1) as a solid (dotted) line. When the decoding is performed, the FF-decoded bit is correlated with all of the incoming bits. It has a coding gain in the high SNR region owing to the correlation property. Figure 7 presents the BER and frame error rate (FER) of the proposed scheme compared to the conventional scrambling scheme. BER and FER performance without forward error correction (s=479 bits, tail bit 1, and k=n=480 bits), in the presence of BPSK modulation, perfect scrambling, and FF code While the scrambling scheme only has error propagation capability, the proposed FF code, with increased minimum Hamming distance (d min =2) using redundant bit (tail bit) and coding gain using the BCJR algorithm, has a noticeable performance gain in the high SNR region. In the low SNR region, this code demonstrates a BER of 0.5. Security as defined in this paper is achieved. Moreover, this code has an improved performance of about 0.4 dB compared to the uncoded system at the BER of 10−7, owing to the BCJR decoding algorithm. Compared with the conventional scrambling scheme, the proposed code has a performance improvement of approximately 1.4 dB at the BER of 10−7. If information from other symbols with low reliability is incorrect, errors accumulate for the entire code sequence, which cause error propagation. Unlike channel errors, the error positions after FF decoding (or descrambling) are not exactly i.i.d. Moreover, the operation of the FF code employs the correlation effect between consecutive symbols and each symbol is dependent on other symbols. Therefore, we cannot state that this system has a perfect secrecy even though Eve's BER is equal to 0.5. This does not ensure the maximum entropy for Eve, since the error positions are not i.i.d. The security performance using security gap is presented in Fig. 8 and Table 1, where the number of transmitted bits is 480, and Bob's maximum BER, \(P_{e,max}^{B}\), is 10−5. From the figure, we can observe that Eve's BER converges very slowly toward the ideal value of 0.5; hereafter, \(P_{e,min}^{E}\geq 0.4\). Moreover, the security gap performances at \(P_{e,min}^{E}\geq 0.48\) are almost the same. We will refer to "\(P_{e,min}^{E}\geq 0.4\)" as a sufficient amount of physical layer security in this paper, but our schemes still apply to stricter security thresholds (\(P_{e,min}^{E}=0.5\)). Consider that when the Eve's minimum BER is \(P_{e,min}^{E}=0.4\), the uncoded scheme (only BPSK {+1,−1}) requires a large (>20 dB) security gap to achieve security performance. In the case of the scrambling scheme, to achieve \(P_{e,min}^{E}=0.4\), only a 6.29 dB security gap is required. However, the proposed FF code, unlike in the scrambling scheme, yields a security gap gain of approximately 0.74 dB at \(P_{e,min}^{E}=0.4\) compared to perfect scrambling. A security gap of only a 5.55 dB is required to achieve \(P_{e,min}^{E}=0.4\). 
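The security-gap figures quoted above can be read directly off a simulated BER-versus-SNR curve. The snippet below is a small illustrative helper, not code from the paper: given the sampled curve of one scheme, it returns the gap \(SNR_{B,min}-SNR_{E,max}\) in dB according to Eq. (1), using the thresholds \(P_{e,max}^{B}=10^{-5}\) and \(P_{e,min}^{E}=0.4\) adopted in this paper. The function name and the toy waterfall-shaped curve at the end are assumptions for demonstration only; the helper also assumes the curve actually crosses both thresholds on the sampled SNR grid.

```python
import numpy as np

def security_gap_db(snr_db, ber, p_b_max=1e-5, p_e_min=0.4):
    """Security gap of Eq. (1) in dB for one coding scheme, read off its
    simulated BER-vs-SNR curve: SNR_{B,min} is the lowest SNR meeting the
    reliability condition (a), SNR_{E,max} the highest SNR still meeting
    the security condition (b)."""
    snr_db = np.asarray(snr_db, dtype=float)
    ber = np.asarray(ber, dtype=float)
    reliable = snr_db[ber <= p_b_max]
    insecure = snr_db[ber >= p_e_min]
    if reliable.size == 0 or insecure.size == 0:
        raise ValueError("BER curve does not cross both thresholds on this SNR grid")
    return reliable.min() - insecure.max()

# toy demonstration with a made-up waterfall-shaped curve (not the paper's data)
snr = np.arange(-2.0, 12.01, 0.25)
toy_ber = 0.5 * np.exp(-0.1 * 10.0 ** (snr / 5.0))
print("security gap ~ %.2f dB" % security_gap_db(snr, toy_ber))
```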
Security gap performance without forward error correction (s=479 bits, tail bit 1, and k=n=480 bits), in the presence of BPSK modulation, perfect scrambling, and FF code Table 1 Security gap performances with uncoded BPSK, perfect scrambling and FF code over the AWGN channel One way to compare the complexity of the perfect scrambling and the pre-code (FF hard and soft decoding) is to compare the types of operations and count the number of times each operation is performed. The BCJR algorithm of the pre-code involves the following operations:
Forward/backward recursion: let t be the number of states of the FF code and n be the length of the trellis. From the trellis in Fig. 6, each state has two outgoing branches. For each state, (2t) multiplication operations and t addition operations are needed. Therefore, for a trellis of length n, a total of (2tn) multiplication operations and (tn) addition operations are required. Likewise, the operations required for the backward recursion are equal to those of the forward recursion.
Branch metric (probability): to compute the branch metrics in the probability domain, (2t) branch metrics are needed, since there are t states and each state has two outgoing branches. For each branch, two multiplications are required. Therefore, a total of (4tn) multiplications are needed for a trellis of length n.
LLR computation: the numerator (denominator) of the LLR computation is the total sum of the branch metric probabilities corresponding to 0 (1). Since the pre-code has two states and two outgoing branches per state, there are four branch metrics in the probability domain. Among them, two branch metrics correspond to the probability of 0. For each numerator and denominator, (t−1) addition operations are needed. Then, 1 logarithm operation and 1 division operation are needed to compute the LLR. In total, 2(t−1)n addition, n logarithm, and n division operations are needed.
To compute the perfect scrambling scheme (randomly generated), a 1×n hard decision vector and an n×n descrambling matrix are needed. For the 1st decoded (descrambled) bit, n multiplication operations and n−1 addition operations are needed. In total, \(n^{2}\) multiplication and n(n−1) addition operations are needed to obtain the descrambled message. The computational complexity can be decreased by using \(G_{FF}^{-1}\) as the perfect scrambling matrix (FF hard decoding). In the previous section, we showed that the matrix \(G_{FF}^{-1}\) satisfies the requirements of perfect scrambling. From Eq. (11), sequence detection can be used. Then, in total, only n−1 addition operations are needed to compute the descrambled message. The types of operations required by these algorithms (randomly generated perfect scrambling, FF hard, and soft decoding) and the number of times each operation is executed are summarized in Table 2. Table 2 The types and numbers of operations needed to implement the perfect scrambling (randomly generated), FF soft decoding (BCJR), and FF hard decoding (as perfect scrambling) From Table 2, one might incorrectly conclude that the perfect scrambling scheme (random matrix) has more complexity than FF soft decoding, since the table only counts the types and numbers of operations without distinguishing binary from real-valued computations. In terms of hardware implementation, the perfect scrambling only uses binary operations (modulo-2 operations); however, the BCJR algorithm of FF soft decoding requires operations on real values, and each such operation costs more than one binary operation of the perfect scrambling.
For those reasons, it is difficult to precisely compare the algorithms with the data in Table 2. Therefore, the matrix \(G_{FF}^{-1}\) is used as perfect scrambling for a fair comparison in this paper. Joint iterative decoding for improved reliability Joint iterative decoding (JID) in a concatenated system has been used to achieve high reliability [27] in spite of the high complexity. Since the proposed system is a serially concatenated structure, it is possible to use JID. In addition, in Section III-B, we demonstrated that the FF code has a coding gain through the use of a BCJR decoding algorithm for a few (or single) errors, and thus the performance gain from joint iterative decoding between LDPC and FF codes can be predicted in terms of the increasing SNR value. Figure 9 shows a schematic diagram of the joint iterative decoding for LDPC and FF concatenated system. The channel observations of k bit information and n−k bit parity parts are y c h,i and y c h,p , respectively. The extrinsic outputs of LDPC and FF codes are E 1 and E 2, and the a priori knowledge of LDPC and FF codes are A 1 and A 2, respectively. The dotted square shows a message transfer node (MTN) that processes the extrinsic information E 1 and E 2 to be a priori knowledge, A 1. The extrinsic output E 2 without high reliability causes performance loss of LDPC decoding due to its error propagation. To reduce the performance loss, MTN uses the extrinsic output E 1, which has higher reliability than E 2. In addition, MTN uses the correction factor α and scaling factor β to minimize error propagation by E 2 at high SNR. We define the log-likelihood ratio (LLR) as L(x)=l n(P(x=1)/P(x=0)). l i and l o are the number of LDPC decoding iterations and LDPC-FF code joint iterations, which we call inner and outer iterations, respectively. Receiver structure of the joint iterative decoding for the concatenated (LDPC decoder and pre-code decoder) system When a decoder performs joint iterative decoding, the initial incoming messages to the channel decoder are given by: $$ \begin{array}{l} L^{0}(C_{1,i})=L(y_{ch,i})\\ L^{0}(C_{1,p})=L(y_{ch,p}) \end{array} $$ where L 0(C 1,i ) and L 0(C 1,p ) are the LLR values of information and parity messages, respectively, when l o equals zero (first iteration). Then, the updated messages (a priori knowledge) from the FF decoder in the first iteration must be set up to zero as: $$ L^{0}(A_{1})=0 $$ After LDPC decoding, the extrinsic output E 1 becomes the a priori input A 2. The FF decoder takes channel observations y c h,i and a priori knowledge A 2, and computes the extrinsic output E 2 as: $$ L^{0}(C_{2})=L(y_{ch,i})+L^{0}(A_{2}) $$ where L 0(C 2) is the input LLR values of the FF decoder at the first iteration. Therefore, the input message of the information part to the channel decoder in the l o -th iteration, can be calculated recursively using $$\begin{array}{*{20}l} L^{l_{0}}(C_{1,i})&=L(y_{ch,i})+L^{{l_{0}}-1}(A_{1})\\&=L(y_{ch,i})+\alpha\cdot L^{{l_{0}}-1}(E_{1})+\beta\cdot L^{{l_{0}}-1}(E_{2}) \end{array} $$ $$\begin{array}{*{20}l} L^{l_{0}}(C_{2})&=L(y_{ch,i})+L^{l_{0}}(A_{2})\\&=L(y_{ch,i})+L^{l_{0}}(E_{1}) \end{array} $$ where α and β are the correction and scaling factors, respectively. Computing the correction and scaling factors via Monte Carlo simulation Both the α and β values are adopted to control the effect of the extrinsic messages, E 1 and E 2. 
As mentioned above, E 2 has an error propagation property and causes performance loss when it is used as a priori knowledge without any corrections of LDPC code. The extrinsic output E 1 used for correction of E 2 can also cause performance loss when LDPC output messages are taken as input messages because this would then oppose the general iterative decoding rule. Thus, the correction factor α must be less than one, 0≤α<1. Since LDPC code as FEC used in this paper is linear, we can assume without loss of generality that the all-zero codeword is transmitted for a simple analysis. For the FF decoder, channel error (L(y c h,i )<0) or LDPC decoding failure (L(y c h,i )+L(E 1)<0) can cause error propagation. To reduce the loss by these impacts, the correction factor α should be taken as $$\begin{array}{@{}rcl@{}} &|L(y_{ch,i})|\geq\alpha\cdot |L(E_{1})|. \end{array} $$ For joint iterative decoding, we assume 10 inner iterations (LDPC itr =l i =10) and the extrinsic message E 1 is the value after the inner iterations. To reduce the channel error and decoding failure after the inner iterations, the corrections factor α is chosen as $$ \alpha=\left\{ \begin{array}{cccc} &|\frac{L(y_{ch,i})}{L(E_{1})}|,&~\text{if}~|L(y_{ch,i})|<|L(E_{1})|,\\ &0.999999,& \textrm{otherwise.} \end{array}\right. $$ The value of the scaling factor β is derived based on α. Both α and β must be larger than zero and can take the maximum value of one. The channel error or decoding failure should be minimized by using the correction and scaling factors with extrinsic messages E 1 and E 2, respectively, so we have $$ L(y_{ch,i})+\alpha\cdot L(E_{1})+\beta\cdot L(E_{2})\geq 0. $$ Since we assume that all-zero codeword modulated into x=+1=[+1,+1,⋯,+1] by BPSK {+1,−1} is transmitted, the left-side of (17) must be larger than zero for the next iteration without errors. Based on Eqs. (16) and (17), the scaling factor β is computed as $$ \beta=\left\{ \begin{array}{llll} \frac{|L(y_{ch,i})+\alpha\cdot L(E_{1})|}{|L(E_{2})|}~,&~\text{if}~\frac{|L(y_{ch,i})+\alpha\cdot L(E_{1})|}{|L(E_{2})|}\leq1,\\ 1~,&~\textrm{otherwise.} \end{array} \right. $$ To achieve the suitable values of α and β for real LLR values, we use Monte Carlo simulation for simplicity. We use Monte Carlo simulation to achieve the correction and scaling factors because error propagation property of FF code depends on error positions and the estimation of the error position is a difficult task. The inner and outer iterations, l i and l o are 10 and 1, respectively, and the number of transmitted frames (trials) is 107. Figure 10 shows the obtained correction and scaling sequences via Eqs. (16) and (18) at each SNR when l i =10 and l o =1. The figure shows how the correction and scaling factors differ and how they change for different SNR values. By observing the correction factor α and the scaling factor β of the proposed JID scheme from Fig. 10, we can conclude that: (1) the correction factor α is generally smaller than the scaling factor β for every SNR region, which reflects the general iterative decoding rule; (2) the correction and scaling factor values reduce by increasing SNR because error propagation effect should be reduced; (3) as the value of SNR increases, the correction and scaling factors will become smaller and finally converge to minimum values (α→0.06, β→0.08), which infers that LDPC (FEC) decoding is becoming more reliable with the increase of SNR value. 
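Read per bit, Eqs. (16) and (18) translate directly into the following sketch; in the paper the factors are then estimated by averaging over many Monte Carlo frames at each SNR, which we indicate only by a comment (the vectorized form and the small epsilon guard are our additions).

```python
import numpy as np

def correction_and_scaling_factors(L_ych, L_E1, L_E2, eps=1e-12):
    """Per-bit correction (alpha) and scaling (beta) factors from Eqs. (16) and (18)."""
    alpha = np.where(np.abs(L_ych) < np.abs(L_E1),
                     np.abs(L_ych) / (np.abs(L_E1) + eps),
                     0.999999)
    ratio = np.abs(L_ych + alpha * L_E1) / (np.abs(L_E2) + eps)
    beta = np.minimum(ratio, 1.0)
    # Monte Carlo estimate at a fixed SNR: average alpha and beta over many frames.
    return alpha, beta
```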
Patterns of correction factor α and scaling factor β at each SNR values for the proposed JID system Extrinsic information transfer (EXIT) chart analysis The EXIT chart [20–22] is a useful analysis tool of the iterative decoding system. EXIT charts indicate mutual information exchange between the extrinsic information of two constituent codes. In most cases, the output LLR messages of these codes can be assumed to follow the Gaussian distribution. The extrinsic information between the constituent codes can then be sequentially used to process the computation. In this paper, the information from the channel (intrinsic information) and the output knowledge from the previous iteration (extrinsic information) can be used as the input of the current iteration, and the output of the current iteration can be used as the input of the next iteration. We use LDPC code and FF code as two constituent codes and assume that their input and output LLR are approximated by the Gaussian distribution. Now suppose that I A is the average mutual information between the coded bits and the a priori information, and I E is the average mutual information between the coded bits and the extrinsic output. Function T(I A ,E b /N 0)=I E is the EXIT chart function of the decoder and T(·) characterizes the information transfer in the decoder. Denoting the mutual information of the extrinsic information at the output of LDPC and the FF code by \(I_{E_{1}}\) and \(I_{E_{2}}\), and the mutual information of the a priori information at the input of LDPC and the FF code by \(I_{A_{1}}\) and \(I_{A_{2}}\), respectively, we have \(I_{E_{1}}=T(I_{A_{1}},E_{b}/N_{0})\) and \(I_{E_{2}}=T(I_{A_{2}},E_{b}/N_{0})\). To obtain the EXIT curve, we assume that the input LLR values, L(A 1) and L(A 2) are both symmetric. The symmetric conditions of LLR values are modeled as \(L(A_{1})\sim \mathcal {N}(m_{1},2m_{1})\) and \(L(A_{2})\sim \mathcal {N}(m_{2},2m_{2})\) such that m 1 and m 2 are the mean of the L(A 1) and L(A 2) messages, respectively. Therefore, the mutual information I A between X and A can be written as \(I_{A}=I(X;A)\doteq J(\sigma _{A})\), as defined in equation (12) in [28]. Similarly, the mutual information I E =I(X;E) is defined as $$\begin{array}{*{20}l} I_{E}=&\frac{1}{2}\cdot\displaystyle\sum_{x=-1,1}\int_{-\infty}^{+\infty}P_{E}(z|X=x)\\ &\times\log_{2}\frac{2\cdot P_{E}(z|X=x)}{P_{E}(z|X=-1)+P_{E}(z|X=+1)}~dz. \end{array} $$ In general, an analytical evaluation of the mutual information I E in Eq. (19) is a difficult task. For simplicity, we use an approximated equation of the mutual information. Following [28], Eq. (19) can be arbitrarily closely approximated as $$\begin{array}{*{20}l} I_{E}\approx 1-\frac{1}{N}\displaystyle\sum_{n=1}^{N}\log_{2}(1+e^{-c_{n}\cdot L(c_{n})}) \end{array} $$ where N is the number of samples, c n is the n-th codeword, and L(c n ) is the LLR value of the n-th codeword such that c n ∈{+1,−1}. Figures 11 and 12 show the EXIT chart curve of the proposed system. In Fig. 11, the EXIT curves between LDPC and FF codes without α and β (α=0, β=0) are plotted. In this case, a pinch-off limit 1 is finally at E b /N 0=2.62 dB. In Fig. 12, the EXIT curves between LDPC and FF codes are plotted with the obtained α and β in section IV-A. The pinch-off limit is then at E b /N 0=2.26 dB. As mentioned above, the FF code causes the error propagation. Without the input sequence of high reliability to the FF code, the error correction via the joint iterative decoding cannot be expected. 
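Equation (20) is straightforward to evaluate from simulated LLRs; a short sketch, assuming the BPSK mapping of the coded bits to ±1:

```python
import numpy as np

def extrinsic_mutual_information(c, L):
    """I_E ~ 1 - (1/N) * sum_n log2(1 + exp(-c_n * L(c_n))), with c_n in {+1, -1}."""
    c = np.asarray(c, dtype=float)
    L = np.asarray(L, dtype=float)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-c * L)))
```

Sweeping the a priori input by drawing L(A) from N(m, 2m) for increasing m and recording I_E for each constituent decoder traces the two EXIT curves plotted in Figs. 11 and 12.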
For its error correction capability, we need the suitable correction and scaling factors. EXIT chart curves of joint FF and LDPC codes (without using the correction factor α and the scaling factor β) EXIT chart curves of joint FF and LDPC codes using the proposed JID scheme (with optimal correction factor α and the scaling factor β) Simulation results In the previous subsections, it is suggested that the proposed scheme needs the correction and scaling factors in MTN for joint iterative decoding, and the joint iterative decoding of the proposed system is evaluated through EXIT chart curves. In this subsection, we evaluate the proposed system through BER and security gap performance. As noted in Fig. 10, the values of α and β are sensitive functions of the signal-to-noise ratio. It may cause the entire system to become very complex and lead to performance loss when both values are wrongly evaluated. Therefore, the simulation results for the fixed values are also presented to avoid the impact of wrong evaluation. The fixed values are selected in the SNR region having the gain of the joint iterative decoding compared to the perfect scrambling. The fixed values selected at low SNR region (≤1.5 dB) may cause critical error propagation at high SNR. In addition, the fixed values selected at high SNR region (≥2.8 dB) render it difficult to achieve the joint iterative decoding gain since the values are too small. Based on these rules, the values are selected as α=0.07 and β=0.18, which are optimized at 2.4 dB. The reasons for these values are as follows: (i) the JID at high SNR region has some of iterative decoding gain compared to the perfect scrambled LDPC code. For use of the fixed values, S N R B,m i n should possibly be kept as small as S N R B,m i n for use of the optimized values. That is, despite the use of the fixed values, outstanding reliability performance should be achieved at high SNR region; ii) error propagation effect is properly maintained at low SNR region by using these values. That is, the S N R E,m a x for use of the fixed values should be kept as close to the S N R E,m a x as possible for use of the optimized values. These optimized values shown in Fig. 12 are evaluated for the best security performance at each SNR. Although the fixed values (α=0.07, β=0.18) are experimentally selected to show the prevention of the wrong evaluation impact as an example, the security performance will differ following the values that are selected. In summary, these values (α=0.07, β=0.18) are relevantly selected by considering error propagation at low SNR and error correction at high SNR. For comparison purposes, LDPC codes are considered to have the same parameters as those mentioned in Section 3, k=480 and n=960. The secrecy rate is R s ≈R d =R c =0.5 and the BPSK modulation {+1,−1} is used in our simulations. The maximum LDPC iteration is 100. For the joint iterative decoding, the number of inner and outer iterations, l i and l o are 10 and 10, respectively. Figure 13 and Table 3 show the BER performances of the systematic, perfect scrambled, serially concatenated JID scheme through MTN with optimized α and β, and the JID scheme with fixed α and β values. From Fig. 13 and Table 3, it is observed that the proposed concatenated and JID systems have performance improvements of about 0.205 and 0.51 dB over the perfect scrambling scheme in [16, 17] at a BER of 10−6. 
We can observe that the joint iterative effect of the proposed JID scheme eventually increases SNR, which is in good agreement with the respective EXIT curve (pinch-off limit) of Fig. 12. A performance improvement at high SNR means that it can achieve a steep BER curve and a reduced security gap. BER performances versus the LDPC systematic, perfectly scrambled LDPC, LDPC and FF concatenated system (No joint iteration), and the joint iterative decoding (inner iteration l i =10 and outer iteration l o =10) between LDPC and FF decoders with values of the correction factor α and the scaling factor β (Total LDPC iteration 100) Table 3 Security gap performances with systematic LDPC, perfect scrambled LDPC, LDPC-FF serially concatenated (SC) and LDPC-FF JID (opt./fix.) over the AWGN channel The security gap performances for the various systems are plotted in Fig. 14, while Table 3 shows the security gap performances. Due to error floor phenomenon of LDPC code, Bob's maximum bit error probability \(P_{e,max}^{B}\) is set to 10−6, and the improved security of the proposed system is thus more tangible. From Fig. 14 and Table 3, the security gap performances of the JID system using MTN (with optimized/fixed α and β) at \(P_{e,min}^{E}=0.4\) are about 2.26 and 2.37 dB. The security gap performance of the LDPC and FF concatenation at \(P_{e,min}^{E}=0.4\) is about 2.615 dB. It is observed that the proposed systems of serial concatenation and JID using optimized α and β have performance improvements of about 0.19 dB and 0.545 dB over the perfect scrambling scheme, respectively. In the case of the JID scheme using MTN with the fixed α and β, the performance improvement for the security gap is about 0.435 dB over the perfect scrambling scheme. Security gap performances versus the LDPC systematic, perfectly scrambled LDPC, LDPC and FF concatenated system (No joint iteration), and the joint iterative decoding (inner iteration l i =10 and outer iteration l o =10) between LDPC and FF decoders with values of the correction factor α and the scaling factor β (total LDPC iteration 100) In conclusion, the SISO decoder of the FF code provides a performance improvement over the scrambled scheme and we can achieve a better performance improvement using the proposed JID scheme. Furthermore, the proposed system using the fixed factors has similar security/reliability performances to that using the optimized factors and can still achieve the performance improvement over the scrambled scheme. From the figure, we can also observe that the security gap advantage vanishes as Eve's BER tends toward the ideal value of 0.5, hereafter \(P_{e,min}^{E}\geq 0.45\). However, since \(P_{e,min}^{E}\geq 0.4\) is sufficiently significant for a practical system, physical layer security as defined in this paper can be achieved. Although the proposed JID scheme has the advantage of enhanced reliability/security performances, this is achieved from an extra complexity/decoding delay. Basically, extra decoding complexity is needed since the FF decoding procedure is performed l o times for JID. If more JID is demanded, the extra complexity will be increased. In [17], Baldi et al. demonstrated that physical layer security can be achieved by using a very simple feed-back mechanism based on Hybrid Automatic Repeat reQuest (Hybrid ARQ or HARQ) when Bob's channel is not "less noisy" than Eve's channel. 
They used the soft-combining scheme of HARQ so that Bob can exploit a number of transmissions Q<Q max for decoding each frame and Eve receives all retransmissions requested by Bob. Similarly, we provide a simulation result on the use of the HARQ scheme with the perfect scrambled LDPC and the proposed FF-LDPC JID scheme using fixed factors. The maximum number of transmissions is Q max =3. The numerical results similar to that provided in [17] are observed for HARQ with the proposed scheme. The FER performances with HARQ (soft-combining) for the perfect scrambled LDPC and the proposed JID scheme are plotted in Fig. 15. From the figure, the FER performances of the perfect scrambled LDPC and the proposed JID scheme are the dotted and solid lines, respectively. The fluctuations are observed in the figure because an average number of transmissions is a decreasing function for Bob's received SNR. Also, in this case, we could observe the improved reliability of the proposed JID scheme for each result. The error correction capability of the proposed scheme could be improved, while the correction capability of the scrambled scheme depends only on LDPC code. Hence, the proposed JID scheme allows us to achieve the desired level of physical layer security. The FER performances versus Bob's SNR for the perfect scrambled LDPC (dotted line) and the proposed FF-LDPC JID (solid line) using the fixed factors (α=0.07, β=0.18) with soft-combining HARQ (Q max =3) and different values of security gap (S g ) Randomness measurement The proposed precode includes operations between consecutive symbols. From these operations, the proposed precode has a correlation effect and is able to employ soft decision decoding. Due to the correlation effect and soft decision decoding, Eve's decoding performance is better than that of the perfect scrambling scheme. However, the security gap between Bob and Eve is maintained since both performances are equally enhanced. The proposed scheme may have a negative impact on the randomness of the produced sequence since the FF code is highly structured. For this reason, the entire distribution of errors should be analyzed since Eve's average error rate does not guarantee the randomness of the decoded sequence. From Fig. 3, Eve's received Z decodes LDPC decoded message \(\hat {M}_{E}=M+\mathbf {E}\) by using the BP decoder, where M and E are LDPC codeword and the error vector after LDPC decoding, respectively. In addition, through the FF decoder, Eve's FF decoder outputs the FF decoded message \(\hat {U}_{E}=U+\mathbf {e}\), where U and e are the information message and error vector after FF decoding, respectively. For the erroneous frame, let \({e_{i}^{l}}\) be the number of errors for the ith position in the l-th erroneous frame, l max be the total number of erroneous frames, t i be the total number of errors at the ith position, \(t_{i}=\sum _{l=1}^{l_{max}}{e_{i}^{l}}\), and T be the total number of errors, \(T=\sum _{i=1}^{n}t_{i}\), where n is the length of the information bits. Therefore, the error expectation value (or bit error probability) at the i-th position for an erroneous frame is \({P_{m}^{i}}=t_{i}/l_{max}\). Therefore, \({P_{m}^{i}}\) is the bit error probability for each position when e≠0. Figure 16 shows the results of randomness measurement (the entire distribution of errors) of the final decoded message \(\hat {U}_{E}\) with errors for the perfectly scrambled LDPC and the proposed JID scheme. 
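The position-wise error expectation defined above is easy to compute from the collection of post-decoding error vectors; a sketch (the array layout is our assumption):

```python
import numpy as np

def positionwise_error_probability(error_vectors):
    """P_m^i = t_i / l_max, computed over the erroneous frames only (e != 0).
    `error_vectors` is a (frames x n) 0/1 array of errors after FF decoding."""
    E = np.asarray(error_vectors)
    E = E[E.any(axis=1)]              # keep the l_max erroneous frames
    return E.sum(axis=0) / len(E)     # t_i / l_max for each position i
```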
In the case of the perfectly scrambled LDPC, error positions after descrambling are randomly distributed since the descrambling matrix is randomly generated with the density of 0.5. Uniform error distribution is observed for every position, even though error positions are not i.i.d. In the case of the LDPC-FF JID, \({P_{m}^{i}}\) at the front and tail parts has relatively low values since the FF decoder employs BCJR algorithm which is set up with a high probability for the zero state at the first and last step. However, even in the high SNR region, the randomness is kept to \({P_{m}^{i}}\approx 0.5\) for the positions of most of the parts, except a few of the front and last positions. In other words, when Eve uses the same decoder as that used by Bob, she cannot extract any useful information since error positions of the proposed scheme are as randomly distributed as the perfectly scrambled scheme for most of the parts. Because \({P_{m}^{i}}\geq 0.4\) means that the error probability over 0.4 has fairly high uncertainty at the ith position (we cannot state about entropy since error positions are not independent). The results of randomness measurement using the perfectly scrambled LDPC and the proposed JID scheme at 0 and 2 dB region In this paper, we have examined security-processing schemes for physical layer security. We proposed a serially concatenated system that consists of an outer code and conventional FEC as an inner code. Compared with previous works relating to channel coding for physical layer security, the puncturing scheme for LDPC code (or linear block code) has a weakness in that it should be required higher signal power to achieve reliability than scrambling scheme. The disadvantages of the scrambling scheme as unitary rate coding are that it is only capable of reducing the security gap and it does not provide the error correction capability. The proposed security scheme adopts the FF code using a SISO decoding procedure (BCJR algorithm). We demonstrated that the proposed scheme is capable of performing error correction and error propagation simultaneously. Simulation results confirm that the FF code using a BCJR algorithm has an improved reliability performance and reduced security gap. Furthermore, we proposed a joint iterative decoding algorithm between the FF code and conventional LDPC to improve the reliability performance through the bit and frame error rate with a correction factor α and scaling factor β obtained by using Monte Carlo simulation. These factors are the functions of the signal-to-noise ratio. In the case of the proposed JID using these factors, our best results indicate reliability/security-gap performance improvements of 0.51 and 0.545 dB, respectively. The reliability performance of the proposed JID scheme using these factors is observed to be only 0.145 dB away from the systematic LDPC code. Despite the use of fixed factors to avoid the impact of wrong evaluation, our results indicate reliability/security-gap performance improvements to perfect scrambled LDPC of 0.5 and 0.435 dB, respectively. It is demonstrated that error floor phenomenon of LDPC code in the high SNR region can be reduced when using joint iterative decoding with the proper α and β; thus, a reduced security gap can be achieved. This is analyzed via the EXIT chart curve. In future works, equivocation rate analysis of the proposed scheme will be performed with α and β for an information-theoretic approach. 
1 The pinch-off limit means that both transfer characteristics of the two constituent codes are just about to intersect (see [20, 21]).
1. CE Shannon, Communication theory of secrecy systems. Bell Syst. Tech. J. 28, 656–715 (1949).
2. AD Wyner, The wire-tap channel. Bell Syst. Tech. J. 54(8), 1355–1387 (1975).
3. L Ozarow, AD Wyner, Wire-tap channel II. AT&T Bell Laboratories Tech. J. 63(10), 2135–2157 (1984).
4. I Csiszar, J Korner, Broadcast channel with confidential messages. IEEE Trans. Inf. Theory 24(3), 339–348 (1978).
5. A Thangaraj, S Dihidar, AR Calderbank, S McLaughlin, JM Merolla, Applications of LDPC codes to the wiretap channels. IEEE Trans. Inf. Theory 53(8), 2933–2945 (2007).
6. SK Leung-Yan-Cheong, ME Hellman, The Gaussian wiretap channel. IEEE Trans. Inf. Theory 24(4), 451–456 (1978).
7. D Klinc, J Ha, S McLaughlin, J Barros, BJ Kwak, LDPC codes for the Gaussian wiretap channel. IEEE Trans. Inf. Forensics Secur. 6(3), 532–540 (2011).
8. D Klinc, J Ha, S McLaughlin, J Barros, BJ Kwak, LDPC codes for physical layer security, in IEEE Global Telecommunications Conference (GLOBECOM 2009) (Honolulu, USA, 2009).
9. CW Wong, TF Wong, JM Shea, LDPC code design for the BPSK-constrained Gaussian wiretap channel, in IEEE GLOBECOM Workshops (Houston, USA, 2011).
10. CW Wong, TF Wong, JM Shea, Secret-sharing LDPC codes for the BPSK-constrained Gaussian wiretap channel. IEEE Trans. Inf. Forensics Secur. 6(3), 551–564 (2011).
11. M Baldi, F Chiaraluce, N Laurenti, S Tomasin, F Renna, Secrecy transmission on parallel channels: theoretical limits and performance of practical codes. IEEE Trans. Inf. Forensics Secur. 9(11), 1765–1779 (2014).
12. RG Gallager, Low-density parity-check codes. IRE Trans. Inf. Theory 8(1), 21–28 (1962).
13. T Richardson, R Urbanke, The capacity of low-density parity-check codes under message-passing decoding. IEEE Trans. Inf. Theory 47(2), 599–618 (2001).
14. SY Chung, JGD Forney, T Richardson, R Urbanke, On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit. IEEE Commun. Lett. 5(2), 58–60 (2001).
15. S ten Brink, G Kramer, A Ashikhmin, Design of low-density parity-check codes for modulation and detection. IEEE Trans. Commun. 52(4), 670–678 (2004).
16. M Baldi, M Bianchi, F Chiaraluce, Non-systematic codes for physical layer security, in IEEE Information Theory Workshop (ITW 2010) (Dublin, Ireland, 2010).
17. M Baldi, M Bianchi, F Chiaraluce, Coding with scrambling, concatenation, and HARQ for the AWGN wire-tap channel: a security gap analysis. IEEE Trans. Inf. Forensics Secur. 7(3), 883–894 (2012).
18. RJ McEliece, A public-key cryptosystem based on algebraic coding theory. DSN Prog. Rep. 44, 114–116 (1978).
19. M Baldi, N Maturo, G Ricciutelli, F Chiaraluce, Security gap analysis of some LDPC coded transmission schemes over the flat and fast fading Gaussian wire-tap channels. EURASIP J. Wirel. Commun. 2015(1), 1–12 (2015).
20. S ten Brink, Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Trans. Commun. 49, 1727–1737 (2001).
21. S ten Brink, Designing iterative decoding schemes with the extrinsic information transfer chart. AEÜ Int. J. Electron. Commun. 54(6), 389–398 (2000).
22. S ten Brink, Convergence of iterative decoding. Electron. Lett. 35(10), 806–808 (1999).
23. JG Proakis, Digital Communications, 4th ed. (McGraw-Hill, New York, 2000).
24. S Lin, DJ Costello, Error Control Coding: Fundamentals and Applications, 2nd ed. (Pearson Prentice Hall, Upper Saddle River, 2004).
25. IEEE standard for local and metropolitan area networks-part 16: air interface for fixed and mobile broadband wireless access systems. IEEE Std P802.16e/D12 (2005).
26. L Bahl, J Cocke, F Jelinek, J Raviv, Optimal decoding of linear codes for minimizing symbol error rate. IEEE Trans. Inf. Theory 20(2), 284–287 (1974).
27. J Hagenauer, E Offer, L Papke, Iterative decoding of binary block and convolutional codes. IEEE Trans. Inf. Theory 42, 429–445 (1996).
28. M Tuechler, J Hagenauer, EXIT charts and irregular codes, in IEEE Conference on Information Sciences and Systems (CISS'02) (Princeton, USA, 2002).
This work was supported by the ICT R&D program of MSIP/IITP [1711028311, Reliable crypto-system standards and core technology development for secure quantum key distribution network].
The School of Electrical Engineering, Korea University, 5-1 Anam-dong, Sungbuk-gu, Seoul, 136-713, Republic of Korea: Kyunghoon Kwon, Taehyun Kim & Jun Heo.
Correspondence to Jun Heo.
Kwon, K., Kim, T. & Heo, J. Pre-coded LDPC coding for physical layer security. J Wireless Com Network 2016, 283 (2016). https://doi.org/10.1186/s13638-016-0761-7
Keywords: Feed-forward LDPC code; BCJR algorithm; Physical layer security; Wiretap channel; Security gap; Joint iterative decoding; EXIT chart
CommonCrawl
Distribution function of a random variable $X$ 2010 Mathematics Subject Classification: Primary: 60E05 [MSN][ZBL] The function of a real variable $x$ taking at each $x$ the value equal to the probability of the inequality $X < x$. Every distribution function $F(x)$ has the following properties: 1) $F(x') \le F(x'')$ when $x' < x''$; 2) $F(x)$ is left-continuous at every $x$; 3) $\lim\limits_{x \rightarrow -\infty} F(x) = 0$, $\lim\limits_{x \rightarrow +\infty} F(x) = 1$. (Sometimes a distribution function is defined as the probability of $X \le x$; it is then right-continuous.) In mathematical analysis, a distribution function is any function satisfying 1)–3). There is a one-to-one correspondence between the probability distributions $P_{F}$ on the $\sigma$-algebra $\mathcal{B}$ of Borel subsets of the real line $\mathbb{R}^{1}$ and the distribution functions. This correspondence is as follows: For any interval $\left[ a, b \right]$, $$ P_{F}([a, b]) = F(b+) - F(a-) $$ Any function $F$ satisfying 1)–3) can be regarded as the distribution function of some random variable $X$ (e.g. $X(x) = x$) defined on the probability space $\left( \mathbb{R}^1, \mathcal{B}, P_{F} \right)$. Any distribution function can be uniquely written as a sum $$ F(x) = \alpha_{1} F_{1}(x) + \alpha_{2} F_{2}(x) + \alpha_{3} F_{3}(x), $$ where $\alpha_{1}, \alpha_{2}, \alpha_{3}$ are non-negative numbers with sum equal to 1, and $F_{1}, F_{2}, F_{3}$ are distribution functions such that $F_{1}(x)$ is absolutely continuous, $$ F_{1}(x) = \int\limits_{-\infty}^{x} p(z) dz, $$ $F_{2}(x)$ is a "step-function", $$ F_{2}(x) = \sum\limits_{x_{k} < x} p_{k}, $$ where the $x_{k}$ are the points where $F(x)$ "jumps" and the $p_{k} > 0$ are proportional to the size of these jumps, and $F_{3}(x)$ is the "singular" component — a continuous function whose derivative is zero almost-everywhere. Example. Let $X_{k}$, $k = 1, 2, \ldots,$ be an infinite sequence of independent random variables assuming the values 1 and 0 with probabilities $0 < p_{k} \le \frac{1}{2}$ and $q_{k} = 1 - p_{k}$, respectively. Also, let $$ X = \sum\limits_{k = 1}^{\infty} \frac{X_{k}}{2^{k}} $$ 1) if $p_k = q_k = \frac{1}{2}$ for all $k$, then $X$ has an absolutely-continuous distribution function (with $p(x) = 1$ for $0 \le x \le 1$, that is, $X$ is uniformly distributed on $\left[ 0, 1 \right]$); 2) if $\sum\limits_{k = 1}^{\infty} p_k < \infty$, then $X$ has a "step" distribution function (it has jumps at all the dyadic-rational points in $\left[ 0, 1 \right]$); 3) if $\sum\limits_{k = 1}^{\infty} p_k = \infty$ and $p_k \rightarrow 0$ as $k \rightarrow \infty$, then $X$ has a "singular" distribution function. This example serves to illustrate the theorem of P. Lévy asserting that the infinite convolution of discrete distribution functions can contain only one of the components mentioned above. The "distance" between two distributions $P$ and $Q$ on the real line is often defined in terms of the corresponding distribution functions $F$ and $S$, by putting, for example, $$ \rho_1(P, Q) = \sup_{x} \left| F(x) - S(x) \right| $$ $$ \rho_2(P, Q) = \mathrm{Var} \left( F(x) - S(x) \right) $$ (see Distributions, convergence of; Lévy metric; Characteristic function). The distribution functions of the probability distributions most often used (e.g. the normal, binomial and Poisson distributions) have been tabulated. 
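A quick way to see the three cases of the example is to simulate X and inspect its empirical distribution function; the sketch below (parameter choices and truncation of the series are ours) illustrates the absolutely continuous case.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_X(p, n_samples):
    """Simulate X = sum_k X_k / 2^k with independent P(X_k = 1) = p_k."""
    p = np.asarray(p)
    Xk = rng.random((n_samples, len(p))) < p
    return Xk @ (0.5 ** np.arange(1, len(p) + 1))

def empirical_F(samples, x):
    """Empirical distribution function F(x) = fraction of samples strictly less than x."""
    return np.mean(samples < x)

# Case 1): p_k = 1/2 gives the uniform distribution on [0, 1], so F(x) is close to x.
x = sample_X(np.full(50, 0.5), 100_000)
print(empirical_F(x, 0.25))   # approximately 0.25
```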
To test hypotheses concerning a distribution function $F$ using results of independent observations, one can use some measure of the deviation of $F$ from the empirical distribution function (see Kolmogorov test; Kolmogorov–Smirnov test; Cramér–von Mises test). The concept of a distribution function can be extended in a natural way to the multi-dimensional case, but multi-dimensional distribution functions are significantly less used in comparison to one-dimensional distribution functions. For a more detailed treatment of distribution functions see Gram–Charlier series; Edgeworth series; Limit theorems. In the Russian literature distribution functions are taken to be left-continuous. In the Western literature it is common to define them to be right-continuous. Thus, the distribution function of a random variable $X$ is the function $F(x) = \mathrm{P} \lbrace X \le x \rbrace$. It then has the properties 1); 2') $F(x)$ is right-continuous at every $x$; 3). The unique probability distribution $P_{F}$ corresponding to it is now defined as $$ P_{F}((a, b]) = F(b) - F(a), $$ while the "step-function" $F_{2}(x)$ in the above-mentioned decomposition $F = \alpha_{1} F_{1} + \alpha_{2} F_{2} + \alpha_{3} F_{3}$ is $$ F_{2}(x) = \sum\limits_{x_{k} \le x} p_{k}. $$
CommonCrawl
Base-pair ambiguity and the kinetics of RNA folding Guangyao Zhou ORCID: orcid.org/0000-0002-8966-46171, Jackson Loper2 & Stuart Geman3 A folding RNA molecule encounters multiple opportunities to form non-native yet energetically favorable pairings of nucleotide sequences. Given this forbidding free-energy landscape, mechanisms have evolved that contribute to a directed and efficient folding process, including catalytic proteins and error-detecting chaperones. Among structural RNA molecules we make a distinction between "bound" molecules, which are active as part of ribonucleoprotein (RNP) complexes, and "unbound," with physiological functions performed without necessarily being bound in RNP complexes. We hypothesized that unbound molecules, lacking the partnering structure of a protein, would be more vulnerable than bound molecules to kinetic traps that compete with native stem structures. We defined an "ambiguity index"—a normalized function of the primary and secondary structure of an individual molecule that measures the number of kinetic traps available to nucleotide sequences that are paired in the native structure, presuming that unbound molecules would have lower indexes. The ambiguity index depends on the purported secondary structure, and was computed under both the comparative ("gold standard") and an equilibrium-based prediction which approximates the minimum free energy (MFE) structure. Arguing that kinetically accessible metastable structures might be more biologically relevant than thermodynamic equilibrium structures, we also hypothesized that MFE-derived ambiguities would be less effective in separating bound and unbound molecules. We have introduced an intuitive and easily computed function of primary and secondary structures that measures the availability of complementary sequences that could disrupt the formation of native stems on a given molecule—an ambiguity index. Using comparative secondary structures, the ambiguity index is systematically smaller among unbound than bound molecules, as expected. Furthermore, the effect is lost when the presumably more accurate comparative structure is replaced instead by the MFE structure. A statistical analysis of the relationship between the primary and secondary structures of non-coding RNA molecules suggests that stem-disrupting kinetic traps are substantially less prevalent in molecules not participating in RNP complexes. In that this distinction is apparent under the comparative but not the MFE secondary structure, the results highlight a possible deficiency in structure predictions when based upon assumptions of thermodynamic equilibrium. Discoveries in recent decades have established a wide range of biological roles served by RNA molecules, in addition to their better-known role as carriers of the coded messages that direct ribosomes to construct specific proteins. Non-coding RNA molecules participate in gene regulation, DNA and RNA repair, splicing and self-splicing, catalysis, protein synthesis, and intracellular transportation [1, 2]. The precursors to these actions include a multitude of processes through which primary structures are transformed into stable or metastable secondary and tertiary structures. There are many gaps in our knowledge, but accumulating evidence (cf. 
[3–8]) suggests that the full story typically includes cotranscriptional explorations of secondary and tertiary structures, possibly accompanied by finely regulated transcription speeds, as well as a selection of proteins that may participate as stabilizers, catalysts, partners in a ribonucleoprotein complex, or chaperones to guide the process and detect errors. It is not surprising, then, that although many non-coding RNA molecules can be coaxed into folding properly in artificial environments, the results rarely if ever match in vivo production in terms of speed or yield [3, 4, 9, 10]. Nevertheless, given the infamously rugged free-energy landscape of all but the smallest RNA molecules, there is good reason to expect that many of the large structural RNA molecules evolved not only towards a useful tertiary structure but also, at the same time, to help navigate the energy landscape. We reasoned that this process, a kind of co-evolution of pathway and structure, might have left a statistical signature, or "tell," in the relationships between primary and native secondary structures. The primary structures of RNA molecules typically afford many opportunities to form short or medium-length stems (Footnote 1), most of which do not participate in the native structure. This not only makes it hard for the computational biologist to accurately predict secondary structure, but might equally challenge the biological process to avoid these kinetic traps. Once formed, they require a large amount of energy (not to mention time) to be unformed. Taking this kinetic point of view a step further, we conjectured that evolutionary pressures would tend to suppress the relative prevalence of ambiguous pairings, meaning available complementary subsequences, more for those subsequences that include paired nucleotides in the native structure than for equally long subsequences that do not. The idea is that ambiguities of stem-participating subsequences would directly compete with native stem formations and therefore be more likely to inhibit folding. Here, we do not mean to suggest that these particular adaptive mechanisms would obviate the need or advantages of other adaptations [3, 5, 11, 12], including the reliance on proteins as both nonspecific and specific cofactors. Herschlag [3] (and many others since) argued convincingly that thermodynamic considerations applied to an unaccompanied RNA molecule could explain neither the folding process nor the stability of the folded product, explicitly anticipating multiple roles for protein cofactors. It is by now apparent that many mechanisms have evolved, and are still evolving, to support repeatable and efficient RNA folding [3, 5, 11–15]. We are suggesting that some of these, perhaps among the earliest, might be visible upon close examination of relationships between the availability of ambiguous pairings for stem structures and the corresponding availability for non-stem structures. Shortly, we will introduce a formal definition of this relative ambiguity, which will be a molecule-by-molecule difference between the average ambiguity counts in and around native-structure stems and the average counts from elsewhere on the molecule.
For now, we note that this measure, which we will call the ambiguity index and label d, depends on both the primary ("p") and native secondary ("s") structures of the molecule, which we emphasize by writing d(p,s) rather than simply d.Footnote 2 To the extent that for any given native structure there is evolutionary pressure to minimize relative stem ambiguities, we expect to find small values of the ambiguity indexes. But it would be a mistake to apply this line of thinking indiscriminately. The pathway to function for the many RNA molecules that operate as part of a larger, composite, complex of both RNA and protein components—the ribonucleoproteins, is considerably more complicated. The assembly of these complexes is far from fully worked out, but it stands to reason that the structures and folding of the component RNA molecules are influenced by the conformations of the accompanying proteins [8]. In such cases, the folding kinetics of the RNA molecule, as it might proceed in isolation and based only on thermodynamics and the free-energy landscape, may have little relevance to the in vivo assembly and arrival at a tertiary structure. Hence we will make a distinction between RNA molecules that are components of ribonucleoproteins (which we will refer to as "bound" RNA molecules) and RNA molecules which can function without being bound in a ribonucleoprotein complex (which we will refer to as "unbound" RNA molecules). The distinction is more relative than absolute. For example, many of the Group II introns both self-splice and reverse-splice, and both processes involve protein cofactors, some of which include a tight ribonculeoprotein complex with the maturase protein [7]. Nevertheless, we will treat these (as well as the Group I introns) as examples of "unbound," since most, if not all, can function without being bound to a specific protein [10], and since there is evidence that the adaptation of preexisting proteins to function in the splicing process evolved relatively recently [16]. The advantage of the two categories, bound and unbound, is that we can avoid making difficult absolute statements about the values of ambiguity indexes, per se, and instead focus on comparisons across the two populations. We reasoned that molecules from the bound (ribonculeoprotein) families would be less sensitive to the kinetic traps arising from ambiguities of their stem-producing subsequences than molecules from the unbound families. We therefore expected to find smaller ambiguity indexes in the unbound families. Recall now that the ambiguity index depends on both the primary and native secondary structures of the molecule, d=d(p,s), which raises the question—which secondary structure s should be used in the calculation? Our main conclusions were drawn using comparative secondary structures [17, 18] available through the RNA STRAND database[19], a curated collection of RNA secondary structures which are widely used as reference structures for single RNA molecules[20–22]. But this dependency on s also afforded us the opportunity to make comparisons to a second, much-studied, approach to secondary structure prediction: equilibrium thermodynamics. The premise, namely that the structures of non-coding RNA molecules in vivo are in thermal equilibrium, is controversial. 
Nevertheless, variations on equilibrium methods constitute the prevailing computational approaches to predicting secondary structure.Footnote 3 Typically, these approaches use estimates of the conformation-dependent contributions to the free-energy and dynamic-programming type calculations to produce either samples from the resulting equilibrium distribution or minimum free energy (MFE) secondary structures [23, 24]. Yet the biological relevance of equilibrium and minimum energy structures has been a source of misgivings at least since 1969, when Levinthal pointed out that the time required to equilibrate might be too long by many orders of magnitude [25]. In light of these observations, and considering the "frustrated" nature of the folding landscape, many have argued that when it comes to structure prediction for macromolecules, kinetic accessibility is more relevant than equilibrium thermodynamics [25–29]. In fact, a metastable state that is sufficiently long-lived and accessible might be biologically indistinguishable from an equilibrium state. Since the same issues of kinetic accessibility and the roles of kinetic traps that are behind these controversies are also behind our motivation to explore ambiguities, we also used the MFE secondary structure s′, as estimated using standard packages, to compute a second ambiguity index for each RNA molecule: d(p,s′). In this way, we could look for differences, if any, between conclusions based on the comparative structure and those based on the MFE structure. The choice of RNA families to represent the two groups was limited by the availability of reliable comparative secondary structures and the belief that the ambiguities captured by our index would be more relevant in large rather than small RNA molecules. With these considerations in mind, we chose the transfer-messenger RNAs (tmRNA), the RNAs of signal recognition particles (SRP RNA), the ribonuclease P family (RNase P), and the 16s and 23s ribosomal RNAs (16s and 23s rRNA) as representatives of "bound" (ribonucleoprotein) RNA molecules, and the Group I and Group II introns (sometimes referred to as self-splicing introns) as representatives of "unbound" molecules. See Methods for more details about the data set. In summary, we will make a statistical investigation of the ambiguity index, as it varies between two groups of molecules (bound and unbound) and as it is defined according to either of two approaches to secondary structure prediction (comparative and MFE). In line with expectations, we will demonstrate that unbound molecules have systematically lower ambiguity indexes, when computed using comparative secondary structures, than bound molecules. The effect is strong: the average ambiguity in each unbound family is lower than the average ambiguity in every bound family. And the effect is still visible at the single-molecule level: a randomly chosen molecule can be accurately classified as belonging to the unbound group versus the bound group by simply thresholding on the ambiguity index (ROC area 0.81). We will also show that the utility of the ambiguity index to distinguish unbound from bound molecules disappears when the MFE structure is substituted for the comparative structure in computing the index. A related observation is that the ambiguity index of an unbound molecule can be used to classify whether the index itself was derived from the comparative versus MFE structure. 
To the extent that the comparative secondary structures are more accurate, these latter results might be interpreted as adding to existing concerns about the relevance of equilibrium RNA structures. By using comparisons as opposed to absolute statistics, and various normalizations, and by favoring non-parametric (distribution-free) statistical methods, we have done our best to avoid subtle biases and hidden assumptions that would explain or at least influence the results. But more confidence would come with more data, especially more RNA families of both the ribonucleoprotein type and those that typically function without first forming tight assemblies with proteins. Given the rate of new discoveries and the rapid growth of accessible data sets, opportunities can not be far away. The remainder of the paper is organized as follows: In the Results section we first develop some basic notation and definitions, and then present an exploratory and largely informal statistical analysis. This is followed by formal results comparing ambiguities in molecules drawn from the unbound families to those from the bound families, and then by a comparison of the ambiguities implied by secondary structures derived from comparative analyses to those derived through minimization of free energy. The Results section is followed by Discussion and Conclusions, in which we will recap the main results, further speculate about their interpretations, suggest refinements in the index that might highlight the effects of cotranscriptional folding and the varying thermodynamic stability of stems of different lengths, and review how our results bear on current thinking about RNA folding and structure. And finally, in Methods, we include detailed information about the data and its (open) source, as well as links to code that can be used to reproduce our results or for further experimentation. Basic Notation and the Ambiguity Index Consider a non-coding RNA molecule with N nucleotides. Counting from 5 ′ to 3 ′, we denote the primary structure by $$ p = (p_{1}, p_{2}, \cdots, p_{N}), \text{where}\ p_{i} \in \{ A, G, C, U \}, i = 1, \cdots, N $$ and the secondary structure by $$ {{}\begin{aligned} s \,=\, \left\{ (j, k) :\text{nucleotides} {j} \text{and} {k} \text{are paired}, 1 \leq j < k \leq N \right\} \end{aligned}} $$ Recall that we are interested in investigating the ambiguity of different subsequences in the RNA molecule. To formalize the notion of a subsequence, we define the segment at locationi to be $$ P_{i} = \left(p_{i}, p_{i + 1},p_{i + 2},p_{i + 3}\right)\ \ \ \text{for} i=1,2,\ldots,N-3 $$ In other words, the segment at location i is the sequence of four consecutive nucleotides that starts at i and proceeds from 5 ′ to 3 ′. There is no particular reason for using segments of length four, and in fact all qualitative conclusions are identical with segment lengths three, four, or five, and quite likely, many other larger lengths. To study the ambiguity of a particular segment, we are interested in counting the locations which could feasibly form a stem with the given segment. We start by identifying which locations are viable to pair with Pi, based just on location and not nucleotide content. The only constraint on location is that an RNA molecule cannot form a loop of two or fewer nucleotides. Let Ai be the set of all segments that are potential pairs of Pi: $$ {\begin{aligned} A_{i} &= \left\{P_{j}: 1 \leq j \leq i - 7\ \text{(segment precedes} {i}) \text{or} \right.\\& \left. 
i + 7 \leq j \leq N - 3\ (\text{segment follows} {i})\right\} \end{aligned}} $$ We can now define the local ambiguity function, $$ a(p) = \left(a_{1}(p), \cdots, a_{N - 3}(p)\right) $$ which is a vector-valued function of the primary structure p, and quantifies the ambiguities at different locations of the molecule. The vector has one component, ai(p), for each segment Pi, namely the number of feasible segments that are complementary to Pi (allowing for G ·U wobble pairings in addition to Watson-Crick pairings): $$ {\begin{aligned} a_{i} (p)& = \# \{ P \in A_{i} : P \text{ and }P_{i} \text{ are complementary} \} \\ & = \#\left\{P_{j}\in A_{i}: (p_{i, k}, p_{j, 5 - k}) \in \left\{(A, U), (U, A),\right.\right.\\&\quad \left. (G, C), (C, G), (G,U), (U,G) \right\}, \\ &\left. \ \ \ \ \ \ \ \ k=1,\ldots,4 \right\} \end{aligned}} $$ Notice that ai(p) is independent of secondary structure s. It is simply the total number of subsequences that could form a stem structure with (pi,pi+1,pi+2,pi+3). We want to explore the relationship between ambiguity and secondary structure. We can do this conveniently, on a molecule-by-molecule basis, by introducing another vector-valued function, this time depending only on a purported secondary structure. Specifically, the new function assigns a descriptive label to each location (i.e. each nucleotide), determined by whether the segment at the given location is fully paired, partially paired, or fully unpaired. Formally, given a secondary structure s, as defined in Eq (2), and a location i∈{1,2,…,N−3}, let fi(s) be the number of nucleotides in Pi that are paired under s: $$ {{}\begin{aligned} f_{i}(s) \,=\, \#\left\{j\in P_{i}:(j,k)\in s\text{ or} (k,j)\in s, \text{ for some}\ 1\!\leq\! k \leq N\right\} \end{aligned}} $$ Evidently, 0≤fi(s)≤4. The "paired nucleotides function" is then the vector-valued function of secondary structure defined as f(s)=(f1(s),…,fN−3(s)). Finally, we use f to distinguish three types of locations (and hence three types of segments): location i will be labeled $$ \left\{\begin{array}{cc} \textit{single} \text{if}~ f_{i}(s) = 0 & \\ \textit{double} \text{if}~ f_{i}(s) = 4 & i = 1, 2, \cdots, N - 3 \\ \textit{transitional} \text{if}~ 0 < f_{i}(s) < 4 & \\ \end{array}\right. $$ In words, given a secondary structure, location i is single if none of the four nucleotides (pi,pi+1,pi+2,pi+3) are paired, double if all four are paired, and transitional if 1, 2, or 3 are paired. A First Look at the Data: Shuffling Nucleotides Our goals are to explore connections between ambiguities and basic characteristics of RNA families, as well as the changes in these relationships, if any, when using comparative as opposed to MFE secondary structures. For each molecule and each location i, the segment at i has been assigned a "local ambiguity" ai(p) that depends only on the primary structure, and a label (single, double, or transitional) that depends only on the secondary structure. 
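Both definitions translate directly into code; the short sketch below handles segments of length four, with 1-based indexing to mirror the notation above (the helper names are ours).

```python
COMPLEMENTARY = {("A","U"), ("U","A"), ("G","C"), ("C","G"), ("G","U"), ("U","G")}

def local_ambiguity(p):
    """a_i(p) for i = 1, ..., N-3; p is a string over {A, G, C, U}."""
    N = len(p)
    def pairs(i, j):   # can the segments at (1-based) locations i and j form a stem?
        return all((p[i - 1 + k], p[j - 1 + 3 - k]) in COMPLEMENTARY for k in range(4))
    a = []
    for i in range(1, N - 2):                  # i = 1, ..., N-3
        feasible = [j for j in range(1, N - 2) if j <= i - 7 or j >= i + 7]
        a.append(sum(pairs(i, j) for j in feasible))
    return a

def paired_nucleotides(s, N):
    """f_i(s) for i = 1, ..., N-3; s is a set of base pairs (j, k), 1-based, j < k."""
    paired = {j for pair in s for j in pair}
    return [sum((i + k) in paired for k in range(4)) for i in range(1, N - 2)]
```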
Since the local ambiguity, by itself, is strongly dependent on the length of the molecule, and possibly on other intrinsic properties, we define a relative ambiguity index: " dT−S(p,s)" which depends on both the primary (p) and purported secondary (s) structures: $$ d_{\text{T-S}}(p, s) = \frac {\sum_{j = 0}^{N - 3} a_{j} (p) c^{\text{tran}}_{j} (s)} {\sum_{j = 0}^{N - 3} c^{\text{tran}}_{j} (s)} - \frac {\sum_{j = 0}^{N - 3} a_{j} (p) c_{j}^{\text{single}} (s)} {\sum_{j = 0}^{N - 3} c_{j}^{\text{single}} (s)} $$ where we have used \(c_{i}^{\text {tran}}\) and \(c_{i}^{\text {single}}\) for indicating whether location i is transitional or single respectively. In other words, for each i=1,2,…,N−3 $$\begin{array}{*{20}l} c_{i}^{\text{tran}}(s) &= \left\{\begin{array}{ll} 1, & \text{if location} \textit{i} \text{is} \textit{transitional} \\ 0, & \text{otherwise}\\ \end{array}\right. \end{array} $$ $$\begin{array}{*{20}l} c_{i}^{\text{single}}(s) &= \left\{\begin{array}{ll} 1, & \text{if location} \textit{i} \text{is} \textit{single} \\ 0, & \text{otherwise}\\ \end{array}\right. \end{array} $$ In short, the T-S ambiguity index is the difference in the averages of the local ambiguities at transitional sites and single sites. We have also experimented with a second, closely related, index dD−S(p,s), in which averages over double locations replace averages over transitional locations. Since the definition is somewhat complicated by the observation that local ambiguities at double locations are almost always greater than one (the exceptions being certain configurations with bulges), and since the results using dD−S mirror those using dT−S (albeit somewhat weaker), we will focus exclusively on dT−S. Results using dD−S can be accessed along with data and code, as explained in the Methods section. (Since there is only one index we could write d in place of dT−S, but chose to retain the subscript as a reminder of the source.) Thinking kinetically, we might expect to find relatively small values of dT-S, at least for molecules in the unbound families, as discussed in Background. One way to look at this is that larger numbers of partial matches for a given sequence in or around a stem would likely interfere with the nucleation of the native stem structure, and nucleation appears to be a critical and perhaps even rate-limiting step. Indeed, the experimental literature [30–33] has long suggested that stem formation in RNA molecules is a two-step process. When forming a stem, there is usually a slow nucleation step, resulting in a few consecutive base pairs at a nucleation point, followed by a fast zipping step. It is important to note, though, that the application of this line of reasoning to the dT−S(p,s) index requires that s be an accurate representation of the native secondary structure. For the time being we will use the time-honored comparative structures for s, returning later to the questions about MFE structures raised in Background. How are we to gauge dT-S and compare values across different RNA families? Consider the following experiment: for a given RNA molecule we create a "surrogate" which has the same nucleotides, and in fact the same counts of all four-tuple segments as the original molecule, but is otherwise ordered randomly. If ACCU appeared eight times in the original molecule, then it appears eight times in the surrogate, and the same can be said of all sequences of four successive nucleotides—the frequency of each of the 44 possible segments is preserved in the surrogate. 
If we also preserve the locations of the transitional, double, and single labels (even though there is no actual secondary structure of the surrogate), then we can compute a new value for dT-S, say \(\tilde{d}_{\text{T-S}}\), from the surrogate. If we produce many surrogate sequences then we will get a sampling of \(\tilde{d}_{\text{T-S}}\) values, one for each surrogate, to which we can compare dT-S. We made several experiments of this type—one for each of the seven RNA families (Group I and Group II Introns, tmRNA, SRP RNA, RNase P, and 16s and 23s rRNA). To make this precise, consider an RNA molecule with primary structure p and comparative secondary structure s. Construct a segment "histogram function," \(\mathcal{H}(p)\), which outputs the number of times that each of the \(4^{4} = 256\) possible segments appears in p. Let \(\mathcal{P}(p)\) be the set of all permutations of the ordering of nucleotides in p, and let \(\mathcal{E}(p)\subseteq \mathcal{P}(p)\) be the subset of permutations that preserve the frequencies of four-tuples. If, for example, p=(A,A,U,A,A,U,U,A,A), then there are six four-tuples, (A,A,U,A),(A,U,A,A),(U,A,A,U),(A,A,U,U),(A,U,U,A),(U,U,A,A), and each happens to appear only once, i.e., the histogram function \(\mathcal{H}(p)\) assigns the number one to each of these six four-tuples and zero to every other four-tuple. The only additional sequence that preserves these frequencies (aside from p itself) turns out to be p′=(A,A,U,U,A,A,U,A,A), and in this example \(\mathcal{E}(p)=\{p,p'\}\). More generally $$ \mathcal{E}(p) = \left\{p'\in\mathcal{P}(p): \mathcal{H}(p')=\mathcal{H}(p)\right\} $$ Clever algorithms (all of which are variants and generalizations of the Euler algorithm, e.g. see [36] and references therein) exist for efficiently drawing independent samples from the uniform distribution on \(\mathcal{E}\)—see [34–36]. Let p(1),…,p(K) be K such samples, and let dT-S(p(1),s),…,dT-S(p(K),s) be the corresponding T-S ambiguity indexes. Whereas the secondary structure s remains the same across shuffles, the local ambiguity function a(p(k)), which depends on the primary structure, changes with k, and so does the resulting ambiguity index dT-S(p(k),s). How different is dT-S(p,s) from the ensemble of values dT-S(p(k),s) derived by sampling from \(\mathcal{E}(p)\)? To measure this, let αT-S(p,s)∈[0,1] be the left-tail empirical probability of choosing an ambiguity index less than or equal to dT-S(p,s) from the ensemble of values {dT-S(p,s),dT-S(p(1),s),…,dT-S(p(K),s)}: $$ \alpha_{\text{T-S}}(p,s) = \frac {1 + \#\{k\in\{1,\ldots,K\}: d_{\text{T-S}}\left(p^{(k)},s\right) \leq d_{\text{T-S}}(p,s)\}}{1+K} $$ In essence, for each RNA family the α score is a self-calibrated ambiguity index. The results are not very sensitive to K or to the particular sample, provided that K is large enough. We used K=10,000. If the number of distinct sequences in \(\mathcal{E}(p)\) is small, then so is the number of possible values of α. In such cases, α will be of little value for comparing ambiguity indexes across types of molecules or proposed secondary structures. Indeed, many short sequences, such as p=(A,C,G,U,A,C,G,U), have no histogram-preserving primary structures beyond p itself. But as we have already remarked, our methods are motivated by a kinetic viewpoint, within which the greatest challenges to folding are faced by the larger rather than smaller molecules. Hence, our experiments are with sequences that are relatively long.
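Putting the pieces together, the index and its calibrated score look as follows; this sketch assumes the local_ambiguity and paired_nucleotides helpers sketched earlier, and takes the histogram-preserving surrogates as given (generating them requires an Euler-type sampler, as in the references above).

```python
import numpy as np

def ambiguity_index(a, f):
    """d_{T-S}: mean local ambiguity over transitional locations (0 < f_i < 4)
    minus the mean over single locations (f_i = 0)."""
    a, f = np.asarray(a, dtype=float), np.asarray(f)
    transitional = (f > 0) & (f < 4)
    single = (f == 0)
    return a[transitional].mean() - a[single].mean()

def alpha_score(d_observed, d_surrogates):
    """Left-tail empirical probability of the observed index among the K surrogate
    indexes; the surrogates must preserve the four-tuple histogram of p."""
    K = len(d_surrogates)
    return (1 + sum(d <= d_observed for d in d_surrogates)) / (1 + K)
```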
In fact, none of the RNA families used in our experiments have a median length shorter than 274 nucleotides, and most are much longer—see Table 4. At these lengths it is extremely rare that a sample of 10,000 primary sequences from \(\mathcal {E}(p)\) will have any duplicates. Hence there is no built-in meaningful loss of resolution in the α statistic. It is tempting to interpret αT-S(p,s) as a p-value from a conditional hypothesis test: Given s and \(\mathcal {H}\), test the null hypothesis that dT-S(p,s) is statistically indistinguishable from dT-S(p′,s), where p′ is a random sample from \(\mathcal {E}\). If the alternative hypothesis were that dT-S(p,s) is too small to be consistent with the null, then the null is rejected in favor of the alternative with probability αT-S(p,s). The problem with this interpretation is that this null hypothesis violates the observation that given \(\mathcal {H}\) there is information in s about p, whereas p(1),…,p(K) are independent of s given \(\mathcal {H}\). In other words, dT-S(p,s) and dT-S(p′,s) have different conditional distributions given s and \(\mathcal {H}\), in direct contradiction to the null hypothesis. A larger problem is that there is no reason to believe the alternative; we are more interested in relative than absolute ambiguity indexes. Thinking of αT-S(p,s) as a calibrated intra-molecular index, we want to know how αT-S(p,s) varies across RNA families, and whether these variations depend on the differences between comparative and MFE structures. Nevertheless, αT-S(p,s) is a useful statistic for exploratory analysis. Table 1 provides summary data about the α scores for each of the seven RNA families. For each molecule in each family we use the primary structure and the comparative secondary structure, and K=10,000 samples from \(\mathcal {E}\), to compute individual T-S scores (Eq 11). Keeping in mind that a smaller value of α represents a smaller calibrated value of the corresponding ambiguity index d(p,s), there is evidently a disparity between ambiguity indexes of RNA molecules that form ribonucleoproteins and those that are already active without forming a ribonculeoprotein complex. As a group, unbound molecules have systematically lower ambiguity indexes. As already noted, this observation is consistent with, and in fact anticipated by, a kinetic point of view. Shortly, we will further support this observation with ROC curves and rigorous hypothesis tests. Table 1 Comparative Secondary Structures: calibrated ambiguity indexes, by RNA family Does the MFE structure similarly separate single-entity RNA molecules from those that form ribonucleoproteins? A convenient way to explore this question is to recalculate and recalibrate the ambiguity indexes of each molecule in each of the seven families, but using the MFE in place of the comparative secondary structures. The results are summarized in Table 2. By comparison to the results shown from Table 1, the separation of unbound from bound molecules nearly disappears when viewed under the MFE secondary structures. Possibly, the comparative structures, as opposed to the MFE structures, better anticipate the need to avoid kinetic traps in the folding landscape. Here too we will soon revisit the data using ROC curves and proper hypothesis tests. 
Table 2 MFE Secondary Structures: calibrated ambiguity indexes, by RNA family Formal Statistical Analyses The T-S ambiguity index dT-S(p,s) is an intra-molecular measure of the difference between the number of available double-stranded Watson-Crick and wobble pairings for segments in and around stems and pseudoknots versus segments within single-stranded regions. As such, dT-S depends on both p and any purported secondary structure, s. Based on a calibrated version, αT-S(p,s), and employing the comparative secondary structure for s, we found support for the idea that non-coding RNA molecules in the unbound families, which are active absent participation in ribonucleoproteins, are more likely to have small ambiguity indexes than RNA molecules that operate exclusively as part of ribonucleoproteins. Furthermore, the difference appears to be sensitive to the approach used for identifying secondary structure—there is little, if any, evidence in indexes dT-S derived from the MFE secondary structures for lower ambiguities among unbound molecules. These qualitative observations can be used to formulate precise statistical hypothesis tests. Many tests come to mind, but perhaps the simplest and most transparent are based on nothing more than the molecule-by-molecule signs of the ambiguity indexes. Whereas ignoring the actual values of the indexes is inefficient in terms of information, and probably also in the strict statistical sense, tests based on signs require very few assumptions and are, therefore, more robust to model misspecification. All of the p-values that we will report are based on the hypergeometric distribution, which arises as follows. We are given a population of M molecules, m=1,…,M, each with a binary outcome measure Bm∈{−1,+1}. There are two subpopulations of interest: the first M1 molecules make up population 1 and the next M2 molecules make up population 2; M1+M2=M. We observe n1 plus values in population 1 and n2 in population 2 $$\begin{array}{*{20}l} n_{1} & = \#\left\{m\in\{1,2,\ldots,M_{1}\}:B_{m}=+1\right\} \end{array} $$ $$\begin{array}{*{20}l} n_{2} & = \#\left\{m\in\{M_{1}+1,M_{1}+2,\ldots,M\}:B_{m}=+1\right\} \end{array} $$ We suspect that population 1 has less than its share of plus ones, meaning that the n1+n2 population of plus ones was not randomly distributed among the M molecules. To be precise, let N be the number of plus ones that appear from a draw, without replacement, of M1 samples from B1,…,BM. Under the null hypothesis, Ho, n1 is a sample from the hypergeometric distribution on N: $$ {\begin{aligned} \mathbb{P}\{N=n\} = \frac{\binom{M_{1}}{n}\binom{M_{2}}{n_{1}+n_{2}-n}} {\binom{M}{n_{1}+n_{2}}}\quad \max\{0,n_{1}+n_{2}-M_{2}\}\\[-12pt]\leq n \leq \min\{n_{1}+n_{2},M_{1}\} \end{aligned}} $$ The alternative hypothesis, Ha, is that n1 is too small to be consistent with Ho, leading to a left-tail test with p-value \(\mathbb {P}\{N\leq n_{1}\}\) (which can be computed directly or using a statistical package, e.g. hypergeom.cdf in scipy.stats). It is by now well recognized that p-values should never be the end of the story. One reason is that any departure from the null hypothesis in the direction of the alternative, no matter how small, is doomed to be statistically significant, with arbitrarily small p-value, once the sample size is sufficiently large. In other words, the effect size remains hidden. 
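As noted above, the left-tail probability \(\mathbb {P}\{N\leq n_{1}\}\) is available directly from scipy.stats.hypergeom. A minimal sketch, with variable names mirroring the notation of Eq 14:

from scipy.stats import hypergeom

def left_tail_pvalue(n1, n2, M1, M2):
    # N ~ Hypergeometric: M1 draws, without replacement, from a population of
    # M = M1 + M2 binary outcomes containing n1 + n2 plus ones in total.
    M = M1 + M2
    n_plus = n1 + n2
    # scipy convention: hypergeom(M, n, N) = population size, successes, draws
    return hypergeom.cdf(n1, M, n_plus, M1)

With the counts reported below from Table 3 (M1=150, M2=1495, n1=58, n2=1279), this call should reproduce the quoted p-value of roughly 1.2×10⁻³⁴.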
Therefore, in addition to reporting p-values, we will also display estimated ROC curves, summarizing performance of two related classification problems: (i) Classify a single RNA molecule, randomly selected from the seven families, as belonging to the unbound group or the bound group based only on thresholding dT-S(p,s). Compare performance under each of the two secondary-structure models, comparative and MFE; and (ii) Randomly select an RNA molecule from the unbound group and classify the origin of its secondary structure (comparative or MFE), here again based only on thresholding dT-S(p,s). Now Repeat the process, but selecting randomly from the bound group. Bound versus Unbound Classification. Consider an RNA molecule, m, selected from one of the seven families in our data set, with primary structure p and secondary structure s computed by comparative analysis. Given only the T-S ambiguity index of m (i.e. given only dT-S(p,s)), how accurately could we classify the origin of m as the unbound versus bound group? The foregoing exploratory analysis suggests constructing a classifier that declares a molecule to be unbound when dT-S(p,s) is small, e.g. dT-S(p,s)<t, where the threshold t governs the familiar trade off between rates of "true positives" (an unbound molecule m is declared 'unbound') and "false positives" (a bound molecule m is declared 'unbound'). Small values of t favor low rates of false positives at the price of low rates of true positives, whereas large values of t favor high rates of true-positives at the price of high rates of false positives. Since for each molecule m we have both the correct classification (unbound or bound) and the statistic d, we can estimate the ROC performance of our threshold classifier by plotting the empirical values of the pair $$ \text{(\# false positives,\ \# true positives)} $$ for each value of t. The ROC curve for the two-category (unbound versus bound) classifier based on thresholding dT-S(p,s)<t is shown in the left panel of Fig. 1. Also shown is the estimated area under the curve (AUC=0.81), which has a convenient and intuitive interpretation, as it is equal to the probability that for two randomly selected molecules, m from the unbound population and m′ from the bound population, the T-S ambiguity index of m will be smaller than the T-S ambiguity index of m′. Unbound or Bound? ROC performance of classifiers based on thresholding the T-S ambiguity index. Small values of dT-S(p,s) are taken as evidence that a molecule belongs to the unbound group as opposed to the bound group. In the left panel, the classifier is based on using the comparative secondary structure for s to compute the ambiguity index. Alternatively, the MFE structure is used for the classifier depicted in the right panel. AUC: Area Under Curve—see text for interpretation. Additionally, for each of the two experiments, a p-value was calculated based only on the signs of the individual ambiguity indexes, under the null hypothesis that positive indexes are distributed randomly among molecules in all seven RNA families. Under the alternative, positive indexes are more typically found among the unbound as opposed to bound families. Under the null hypothesis the test statistic is hypergeometric—see Eq 14. Left Panel: p=1.2×10−34. Right Panel: p=0.02. In considering these p-values, it is worth re-emphasizing the points made about the interpretation of p-values in the paragraph following Eq 14. 
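A small sketch of how such a threshold classifier and its ROC curve could be assembled. The use of scikit-learn here is an assumption for convenience (the paper does not state which tools were used); the only substantive step is that low values of dT-S(p,s) are treated as evidence for "unbound," so the negated index serves as the score. Here d_ts is the vector of T-S ambiguity indexes and is_unbound a boolean array marking molecules from the unbound families; the last line applies the light lowess smoothing mentioned in the following paragraph.

import numpy as np
from sklearn.metrics import roc_curve, auc
from statsmodels.nonparametric.smoothers_lowess import lowess

def roc_from_index(d_ts, is_unbound, smooth_frac=0.1):
    # "declare unbound when d_TS < t": smaller index -> higher score for 'unbound'
    y_true = np.asarray(is_unbound).astype(int)
    score = -np.asarray(d_ts, dtype=float)
    fpr, tpr, _ = roc_curve(y_true, score)
    area = auc(fpr, tpr)
    # light smoothing of the empirical curve, as described in the text
    tpr_smooth = lowess(tpr, fpr, smooth_frac, return_sorted=False)
    return fpr, tpr_smooth, area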
The right panel illustrates the point: the ambiguity index based on the MFE secondary structure "significantly distinguishes the two categories (p=0.02)" but clearly has no utility for classification. (These ROC curves and those in Fig. 2 were lightly smoothed by the method known as "Locally Weighted Scatterplot Smoothing," e.g. with the python command Y=lowess(Y, X, 0.1, return_sorted=False) coming from statsmodels.nonparametric.smoothers_lowess) p-Values. As mentioned earlier, we can also associate a traditional p-value to the problem of separating unbound from bound molecules, based again on the T-S ambiguity indexes. We consider only the signs (positive or negative) of these indexes, and then test whether there are fewer than expected positive indexes among the unbound as opposed to the bound populations. This amounts to computing \(\mathbb {P}\{N\leq n_{1}\}\) from the hypergeometric distribution—Eq (14). The relevant statistics can be found in Table 3, under the column labels #mol's and #dT-S>0. Specifically, M1=116+34=150 (number of unbound molecules), M2=404+346+407+279+59=1495 (number of bound molecules), n1=50+8=58 (number of positive T-S indexes among unbound molecules) and n2=368+269+379+210+53=1279 (positive bound indexes). The resulting p-value, 1.2·10−34, is essentially zero, meaning that the positive T-S indexes are not distributed proportional to the sizes of the unbound and bound populations, which is by now obvious in any case. To repeat our caution, small p-values conflate sample size with effect size, and for that reason we have chosen additional ways, using permutations as well as classifications, to look at the data. Table 3 Numbers of Positive Ambiguity Indexes, by family Table 4 Data Summary Comparative versus Minimum Free Energy As we have just seen, ambiguity indexes based on MFE secondary structures, as opposed to comparative secondary structures, do not make the same stark distinction between unbound and bound RNA molecules. To explore this a little further, we can turn the analyses of the previous paragraphs around and ask to what extent knowledge of the ambiguity index is sufficient to predict the source of a secondary structure—comparative or free energy? This turns out to depend on the group from which the molecule was drawn: The ambiguity index is strongly predictive among unbound molecules and, at best, weakly predictive among bound molecules. Consider the two ROC curves in Fig. 2. In each of the two experiments a classifier was constructed by thresholding the T-S ambiguity index, declaring the secondary structure, s, to be "comparative" when dT-S(p,s)<t and "MFE" otherwise. Comparative or MFE? As in Fig. 1, each panel depicts the ROC performance of a classifier based on thresholding the T-S ambiguity index, with small values of dT-S(p,s) taken as evidence that s was derived by comparative as opposed to MFE secondary structure analysis. Left Panel: performance on molecules chosen from the unbound group. Right Panel: performance on molecules chosen from the bound group. Conditional p-values were also calculated, using the hypergeometric distribution and based only on the signs of the indexes. In each case the null hypothesis is that comparative secondary structures are as likely to lead to positive ambiguity indexes as are MFE structures, whereas the alternative is that positive ambiguity indexes are more typical when derived from MFE structures. Left Panel: p=5.4×10−14. 
Right Panel: p=0.07. The difference between the two panels is in the population used for the classification experiments—unbound molecules in the left-hand panel (AUC=0.81) and bound molecules in the right-hand panel (AUC=0.54, barely above chance). The corresponding hypothesis tests seek evidence against the null hypotheses that in a given group (unbound or bound) the set of positive T-S ambiguity indexes (dT-S(p,s)>0) is equally distributed between the comparative and free-energy-derived indexes, and in favor of the alternatives that the T-S ambiguity indexes are less typically positive for the comparative secondary structures. The necessary data can be found in Table 3. The test results are consistent with the classification experiments: the hypergeometric p-value is 5.4·10⁻¹⁴ for the unbound population and 0.07 for the bound population. Qualitatively, these various ROC and p-value results were easy to anticipate from even a superficial examination of Table 3. Start with the first two rows (unbound molecules): a relatively small fraction of unbound molecules have positive ambiguities when the index is computed from comparative analyses, whereas most of these same molecules have positive ambiguities when the index is computed from MFE structures. Looking across the next five rows (bound molecules), no such trend is discernible. Similarly, from a glance at the column labeled #dT-S>0 (derived from comparative analyses) it is apparent that the fraction of positive indexes among the unbound molecules is much lower than among the bound molecules. What's more, this effect is missing in the MFE indexes (column labeled \(\#\tilde{d}_{\text{T-S}}>0\)).Footnote 4

Consider a non-coding RNA molecule with a native tertiary structure that is active, in vivo, without necessarily being tightly bound with other molecules in a ribonucleoprotein complex. We have labeled these molecules "unbound" and reasoned that there are likely relationships between their primary and secondary structures that not only support the tertiary structure, but also the folding process by which it emerges. Specifically, we reasoned that examination of the primary and native secondary structures might reveal evolutionary mechanisms that discourage disruptive kinetic traps. Conjecturing that the availability of non-native pairings for subsequences that are part of the native secondary structure would be particularly disruptive, we defined an intra-molecular index that we called the ambiguity index. The ambiguity index is a function of a molecule's primary and native secondary structures, devised so that lower values of the index reflect fewer opportunities for stem-participating subsequences to pair elsewhere in the molecule. We examined the Group I and Group II introns, two families of molecules that are believed to perform some of their functions (namely self-splicing) in an "unbound" state, to see if their ambiguity indexes were lower than might be expected were there no such evolutionary pressures to protect stem structures. Heuristic permutation-type tests appeared to confirm our expectation that these molecules would have low ambiguities. We sought additional evidence in two directions. The first was to compare ambiguity indexes in unbound molecules to those in "bound" molecules, i.e. molecules that are known to function as part of ribonucleoprotein complexes, where the argument against these particular kinds of ambiguities is weaker.
We found a strong separation between the unbound and bound molecules, the former having substantially lower indexes. This was demonstrated by statistical tests and, perhaps more meaningfully, by showing that the ambiguity index could be used to classify with good accuracy individual molecules as either bound or unbound. These experiments were based on comparative secondary structures available through the RNA STRAND database[19], which remains one of the most trusted sources for RNA secondary structures of single molecules[20–22]. In a second approach to additional evidence we substituted the comparative secondary structures with ones that were derived from approximations to the thermodynamic equilibrium structure (minimum free energy—"MFE" structures). Though less accurate, MFE and related equilibrium-type structures are easy and quick to compute. But one line of thinking is that active biological structures are determined more by kinetic accessibility than thermodynamic equilibrium per se[25–29]. Biological stability is relative to biological timescale; the folding of any particular RNA could just as well end in metastability, provided that the process is repeatable and the result sufficiently stable over the molecule's proper biological lifetime. Indeed, it would be arguably easier to evolve an effective tertiary structure without the additional and unnecessary burden of thermal equilibrium. To the extent that kinetic accessibility and metastability might be more relevant than thermodynamic equilibrium, there would be little reason to expect the ambiguity index to make the same separation between unbound and bound molecules when derived from MFE structures instead of comparative structures. The results were consistent with this point of view—ambiguity indexes based on MFE structures make weak classifiers. We were surprised by the strength of the effect. After all, MFE structures are superficially quite similar to comparative structures, yet the classification performance goes from strong (>80% AUC) to negligible (53% AUC, just above chance). A worthwhile follow-up would be to examine the actual differences in secondary structure (as was done, with similar motivation but different tools, in [29]) in an effort to discern how they impact ambiguity. A possible source of bias that might partially explain the strength of the observed effects was raised by an anonymous reviewer, who noted that the RNAfold program in the ViennaRNApackage [20], used here to compute MFE structures, does not allow pseudoknots, a structural feature that is commonly present in comparative structures. To explore the possible effect of pseudoknots on our results, and to make for something closer to an "apples-to-apples" comparison, we re-ran the experiments after removing all pseudoknots from the comparative structuresFootnote 5. There were only small changes in the results—e.g. classification performance, "Bound or Unbound" (Fig. 1) using comparative structures went from 81% AUC to 79% AUC, whereas performance using MFE stayed the same at 53% AUCFootnote 6. Of course it is still possible that a true MFE structure, computed without compromises in the structure of the energy and allowing for pseudoknots, were it computable, would fare better in these experiments. Another interesting point raised by the same reviewer concerns the well-known heterogeneity of structures within the Group I and Group II Introns, which constitute our unbound samples. 
In particular, these groups can be further divided into subgroups that have very different secondary structures (see Table 2 of [43]). To what extent are the differences between bound and unbound molecules consistent across subgroups? To investigate this we re-computed the αT-S indexes reported in Table 1, but this time for each subgroup of each of the Group I and Group II introns. The stark differences between bound and unbound molecules remain. In fact, the differences are more extreme for all but two of the unbound subgroups (Group IC1 and Group IIA), out of the thirteen available in our dataset6. It has often been argued (e.g. [38, 39]) that the MFE structure itself may be a poor representative of thermal equilibrium. It is possible, then, that our observations to the effect that comparative and MFE structures have substantially different relationships to the ambiguity indexes, and our interpretation that comparative structures better separate unbound from bound molecules, would not hold up as well if we were to adopt a more ensemble-oriented structure in place of the MFE, as advocated by [40], for example. In a related vein, and also within the context of thermodynamic equilibrium, Lin et al. [41] have given evidence that competing stems which are inconsistent may both contain a high measure of information about the equilibrium distribution, suggesting that in such cases both forms could be active and the notion of single (locations we have labeled "S") might itself be ambiguous. Certainly there are RNA molecules (e.g. riboswitches) that are active in more than one structural conformation. For such molecules, ambiguity is essential for their biological functioning, yet one would need to rethink the definition of an ambiguity index. The ambiguity index dT-S is derived from the difference in average ambiguities of subsequences partly paired in the native structure ("T", transition locations) from those not paired in the native structure (single locations). We expected these differences to be small in unbound as opposed to bound molecules because we expected the stem structures to be more protected from non-native pairings. But this coin has another side: low ambiguities at unpaired (single) locations of bound molecules relative to unbound molecules would have the same effect. As an example, some unpaired RNA sequences may be critical to function, as in the messenger RNA-like region ("MLR") of tmRNA, and therefore relatively unambiguous. Also, it is possible that the formation of non-native stems among single-type subsequences are particularly disruptive to, perhaps even stereochemically preventing, the binding of an RNA molecule into a ribonucleoprotein complex. More generally, it is reasonable to assume that different evolutionary forces are at play for molecules destined to operate as parts of ribonucleoprotein complexes. In any case, the folding story may be even more complicated, or at least quite different, for the ribonculeoprotein RNAs. Finally, we note that the ambiguity index, as currently formulated, is symmetric in the sense that there is no explicit difference in contributions from different locations along the 5 ′ to 3 ′ axis. Yet cotranscriptional folding, which appears to be nearly universal in non-coding RNA [42] strongly suggests that not all ambiguities are equally disruptive. 
Indeed, some non-native pairings between two subsequences, one of which is near the 3 ′ end of the molecule, might have been rendered stereochemically impossible before the 3 ′ half has even been transcribed. In addition, the current ambiguity index is calculated using segments of a fixed length (four for the results presented in the paper). Yet thermodynamic stability increases with stem lengths, which suggests that non-native pairings between two longer subsequences would be more disruptive than those between shorter subsequences. Possibly, a proper weighting of ambiguities coming from segments of different lengths would bring new insights. These further considerations open many new lines of reasoning, most of which suggest alternative indexes that could be statistically explored, especially as the data bank of known structures and functions continues to grow. Overall, our results are consistent in supporting a role for kinetic accessibility that is already visible in the relationship between primary and secondary structures. Stronger evidence will require more bound and unbound families. The limiting factors, as of today, are the availability of families with large RNA molecules for which the comparative structures have been worked out and largely agreed upon. In this paper, we have presented a statistical analysis of the relationship between the primary and secondary structures of non-coding RNA molecules. The results suggest that stem-disrupting kinetic traps are substantially less prevalent in molecules not participating in RNP complexes. In that this distinction is apparent under the comparative but not the MFE secondary structure, the results highlight a possible deficiency in structure predictions when based upon assumptions of thermodynamic equilibrium. We obtained comparative-analysis secondary structure data for seven different families of RNA molecules from the RNA STRAND database[19], a curated collection of RNA secondary structures which are widely used as reference structures for single RNA molecules[20–22]. These families include: Group I Introns and Group II Introns[43], tmRNAs and SRP RNAs[44], the Ribonuclease P RNAs[45], and 16s rRNAs and 23s rRNAs[43]. Table 4 contains information about the numbers and lengths (measured in nucleotides) of the RNA molecules in each of the seven families. Note that we excluded families like tRNAs, 5s rRNAs and hammerhead ribozymes since most of the molecules in these families are too short to be of interest for our purpose. Also, since we are focusing on comparative-analysis secondary structures, to be consistent, we excluded any secondary structures derived from X-ray crystallography or NMR structures. Note that Group I and Group II Introns are the only available families of unbound RNAs suitable for our analysis. There are some other families of unbound RNAs (e.g. ribozymes), but most of these RNAs are too short in length, and many of the structures are not derived using comparative analysis. Hence they are not included. RNA Secondary Structure Prediction Methods Comparative analysis[46] is based on the simple principle that a single RNA secondary structure can be formed from different RNA sequences. Using alignments of homologous sequences, comparative analysis has proven to be highly accurate in determining RNA secondary structures [18]. We used a large set of RNA secondary structures determined by comparative analyses to serve as ground truth. 
When it comes to computational prediction of RNA secondary structures, exact dynamic programming algorithms based on carefully measured thermodynamic parameters make up the most prevalent methods. There exist a large number of software packages for the energy minimization [20, 38, 47–51]. In this paper, we used the ViennaRNApackage [20] to obtain the MFE secondary structures for our statistical analysis. Reproducing the Results The results presented in this paper, as well as additional results on experiments with the D-S ambiguity index, pseudoknot-free comparative secondary structures, and detailed results for thirteen different unbound subgroups of RNA molecules, can be easily reproduced. Follow the instructions on https://github.com/StannisZhou/rna_statistics. Here we make a few comments regarding some implementation details. In the process of obtaining the data, we used the bpseq format, and excluded structures derived from X-ray crystallography or NMR structures, as well as structures for duplicate sequences. Concretely, this means picking a particular type, and select No for Validated by NMR or X-Ray and Non-redundant sequences only for Duplicates on the search page of the RNA STRAND database. A copy of the data we used is included in the GitHubrepository , but the same analyses can be easily applied to other data. When processing the data, we ignored molecules for which we have nucleotides other than A, G, C, U, and molecules for which we don't have any base pairs. When comparing the local ambiguities in different regions of the RNA molecules, we ignored molecules for which we have empty regions (i.e. at least one of single, double and transitional is empty), as well as molecules where all local ambiguities in single or double regions are 0. For shuffling primary structures, we used an efficient and flexible implementation of the Euler algorithm[34–36] called uShuffle [52], which is conveniently available as a pythonpackage. For removing pseudoknots from comparative secondary structures, we used the standalone implementation of methods proposed in [37]. The actual pseudoknot-free comparative secondary structures used in our experiments are available at https://github.com/StannisZhou/rna_statistics/tree/master/data_without_pseudoknots. The dataset analysed during the current study is available at RNA STRAND database [19]. To make the results easily reproducible, a copy of the dataset, as well as code for reproducing the results in the paper, is available at https://github.com/StannisZhou/rna_statistics. By which we will mean sequences of G ·U ("wobble pairs") and/or Watson-Crick pairs. Native secondary structures often include so-called pseudoknots, which are sometimes excluded, or handled separately, for computational efficiency. Pseudoknots are formed from paired complementary subsequences and therefore included, by definition, in the ambiguity index. Molecular dynamics, which might be called "agnostic" to the question of equilibrium, has proven to be exceedingly difficult, and has not yet yielded a useful tool for generic folding of large molecules. The specific values of the areas under the ROC curves depend on the specific values of the indexes. The equality—to two digits—of the areas in the left-hand panels of Figs. 2 and 1 is a coincidence. Using methods presented in [37]. 
More comprehensive results for the experiments with pseudoknot-free comparative secondary structures and detailed results for thirteen different unbound subgroups of RNA molecules can be accessed along with data and code—see Methods. MFE: Minimum free energy MLR: Messenger RNA-like region RNase P: Ribonuclease P RNP: Ribonucleoprotein rRNA: Ribosomal RNA Signal recognition particles tmRNA: Transfer-messenger RNA Morris KV, Mattick JS. The rise of regulatory RNA. Nat Rev Genet. 2014; 15(6):423–37. Kung JTY, Colognori D, Lee JT. Long noncoding RNAs: past, present, and future. Genetics. 2013; 193(3):651–69. Herschlag D. Rna chaperones and the rna folding problem. J Biol Chem. 1995; 270(36):20871–4. Pyle AM, Fedorova O, Waldsich C. Folding of group II introns: a model system for large, multidomain RNAs?Trends Biochem Sci. 2007; 32(3):138–45. Zemora G, Waldsich C. Rna folding in living cells. RNA Biol. 2010; 7(6):634–41. https://doi.org/10.4161/rna.7.6.13554. http://arxiv.org/abs/https://doi.org/10.4161/rna.7.6.13554. Solomatin SV, Greenfeld M, Chu S, Herschlag D. Multiple native states reveal persistent ruggedness of an rna folding landscape. Nature. 2010; 463(7281):681. Pyle AM. Group ii intron self-splicing. Ann Rev Biophys. 2016; 45(1):183–205. https://doi.org/10.1146/annurev-biophys-062215-011149. PMID: 27391926. Duss O, Stepanyuk GA, Grot A, O'Leary SE, Puglisi JD, Williamson JR. Real-time assembly of ribonucleoprotein complexes on nascent rna transcripts. Nature Commun. 2018; 9(1):5087. https://doi.org/10.1038/s41467-018-07423-3. Lambowitz AM, Perlman PS. Involvement of aminoacyl-tRNA synthetases and other proteins in group I and group II intron splicing. Trends Biochem Sci. 1990; 15(11):440–4. Fedorova O, Zingler N. Group II introns: structure, folding and splicing mechanism. Biol Chem. 2007; 388(7):665–78. Chu VB, Herschlag D. Unwinding RNA's secrets: advances in the biology, physics, and modeling of complex RNAs. Curr Opin Struct Biol. 2008; 18(3):305–14. Woodson SA. Taming free energy landscapes with RNA chaperones. RNA Biol. 2010; 7(6):677–86. Tan Z, Zhang W, Shi Y, Wang F. RNA folding: structure prediction, folding kinetics and ion electrostatics. Adv Exp Med Biol. 2015; 827:143–83. Leamy KA, Assmann SM, Mathews DH, Bevilacqua PC. Bridging the gap between in vitro and in vivo RNA folding. Q Rev Biophys. 2016; 49:10. Chen S-J. RNA folding: conformational statistics, folding kinetics, and ion electrostatics. Annu Rev Biophys. 2008; 37:197–214. Lambowitz AM, Caprara MG, Zimmerly S, Perlman PS. Group I and group II ribozymes as RNPs: clues to the past and guides to the future. In: In The RNA World, 2nd. Cold Spring Harbor Laboratory Press: 1999. p. 451–85. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.692.2748. James BD, Olsen GJ, Pace NR. Phylogenetic comparative analysis of RNA secondary structure. Methods Enzymol. 1989; 180:227–39. Gutell RR, Lee JC, Cannone JJ. The accuracy of ribosomal RNA comparative structure models. Curr Opin Struct Biol. 2002; 12(3):301–10. Andronescu M, Bereg V, Hoos HH, Condon A. RNA STRAND: The RNA secondary structure and statistical analysis database. BMC Bioinformatics. 2008; 9(1):340. Lorenz R, Bernhart SH, Höner zu Siederdissen C, Tafer H, Flamm C, Stadler PF, Hofacker IL. ViennaRNA package 2.0. Algorithms Mol Biol. 2011; 6(1):26. Puton T, Kozlowski LP, Rother KM, Bujnicki JM. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction. Nucleic Acids Res. 2013; 41(7):4307–23. Mathews DH. 
How to benchmark RNA secondary structure prediction accuracy. Methods. 2019; 162-163:60–7. Mathews DH, Sabina J, Zuker M, Turner DH. Expanded sequence dependence of thermodynamic parameters improves prediction of RNA secondary structure. J Mol Biol. 1999; 288(5):911–40. Zuker M, Mathews DH, Turner DH. Algorithms and thermodynamics for RNA secondary structure prediction: A practical guide In: Barciszewski J, Clark BFC, editors. RNA Biochem Biotechnol. Dordrecht: Springer: 1999. p. 11–43. Levinthal C. How to fold graciously. Mossbauer spectroscopy in biological systems. 1969; 67:22–4. Higgs PG. RNA secondary structure: physical and computational aspects. Q Rev Biophys. 2000; 33(3):199–253. Flamm C, Hofacker IL. Beyond energy minimization: approaches to the kinetic folding of RNA. Monatsh Chem. 2008; 139(4):447–57. Baker D, Agard DA. Kinetics versus thermodynamics in protein folding. Biochemistry. 1994; 33(24):7505–9. Morgan SR, Higgs PG. Evidence for kinetic effects in the folding of large RNA molecules. J Chem Phys. 1996; 105(16):7152–7. Pörschke D. Model calculations on the kinetics of oligonucleotide double helix coil transitions. evidence for a fast chain sliding reaction. Biophys Chem. 1974; 2(2):83–96. Pörschke D. A direct measurement of the unzippering rate of a nucleic acid double helix. Biophys Chem. 1974; 2(2):97–101. Pörschke D. Elementary steps of base recognition and helix-coil transitions in nucleic acids. Mol Biol Biochem Biophys. 1977; 24:191–218. Mohan S, Hsiao C, VanDeusen H, Gallagher R, Krohn E, Kalahar B, Wartell RM, Williams LD. Mechanism of RNA double Helix-Propagation at atomic resolution. J Phys Chem B. 2009; 113(9):2614–23. Kandel D, Matias Y, Unger R, Winkler P. Shuffling biological sequences. Discrete Appl Math. 1996; 71(1):171–85. Fitch WM. Random sequences. J Mol Biol. 1983; 163(2):171–6. Altschul SF, Erickson BW. Significance of nucleotide sequence alignments: a method for random sequence permutation that preserves dinucleotide and codon usage. Mol Biol Evol. 1985; 2(6):526–38. Smit S, Rother K, Heringa J, Knight R. From knotted to nested RNA structures: a variety of computational methods for pseudoknot removal. RNA. 2008; 14(3):410–6. Ding Y, Lawrence CE. A statistical sampling algorithm for RNA secondary structure prediction. Nucleic Acids Res. 2003; 31(24):7280–301. Mathews DH. Revolutions in RNA secondary structure prediction. J Mol Biol. 2006; 359(3):526–32. Ding Y, Chan CY, Lawrence CE. RNA secondary structure prediction by centroids in a boltzmann weighted ensemble. RNA. 2005; 11(8):1157–66. Lin L, McKerrow WH, Richards B, Phonsom C, Lawrence CE. Characterization and visualization of RNA secondary structure boltzmann ensemble via information theory. BMC Bioinformatics. 2018; 19(1):82. Lai D, Proctor JR, Meyer IM. On the importance of cotranscriptional rna structure formation. RNA. 2013; 19(11):1461–73. https://doi.org/10.1261/rna.037390.112. http://arxiv.org/abs/http://rnajournal.cshlp.org/content/19/11/1461.full.pdf+html. Cannone JJ, Subramanian S, Schnare MN, Collett JR, D'Souza LM, Du Y, Feng B, Lin N, Madabusi LV, Müller KM, Pande N, Shang Z, Yu N, Gutell RR. The comparative RNA web (CRW) site: an online database of comparative sequence and structure information for ribosomal, intron, and other RNAs. BMC Bioinformatics. 2002; 3:2. Andersen ES, Rosenblad MA, Larsen N, Westergaard JC, Burks J, Wower IK, Wower J, Gorodkin J, Samuelsson T, Zwieb C. The tmRDB and SRPDB resources. Nucleic Acids Res. 2006; 34(Database issue):163–8. Brown JW. 
The ribonuclease P database. Nucleic Acids Res. 1999; 27(1):314. Gutell RR, Power A, Hertz GZ, Putz EJ, Stormo GD. Identifying constraints on the higher-order structure of RNA: continued development and application of comparative sequence analysis methods. Nucleic Acids Res. 1992; 20(21):5785–95. Markham NR, Zuker M. UNAFold: software for nucleic acid folding and hybridization. Methods Mol Biol. 2008; 453:3–31. Reuter JS, Mathews DH. RNAstructure: software for RNA secondary structure prediction and analysis. BMC Bioinformatics. 2010; 11:129. Zadeh JN, Steenberg CD, Bois JS, Wolfe BR, Pierce MB, Khan AR, Dirks RM, Pierce NA. NUPACK: Analysis and design of nucleic acid systems. J Comput Chem. 2011; 32(1):170–3. Hamada M, Kiryu H, Sato K, Mituyama T, Asai K. Prediction of RNA secondary structure using generalized centroid estimators. Bioinformatics. 2009; 25(4):465–73. Reeder J, Giegerich R. RNA secondary structure analysis using the RNAshapes package. Curr Protoc Bioinforma. 2009; Chapter 12:12–8. Jiang M, Anderson J, Gillespie J, Mayne M. ushuffle: a useful tool for shuffling biological sequences while preserving the k-let counts. BMC Bioinformatics. 2008; 9:192. The authors would like to thank Yang Chen for helpful discussions and Matthew T. Harrison and Charles Lawrence for many valuable suggestions. This work was partially supported by the Office of Naval Research under contracts ONR N000141613168 and ONR N000141512267. SG was funded by grants ONR N000141613168 and ONR N000141512267 from the Office of Naval Research. The funding body did not play any role in the design of the study, or collection, analysis, and intepretation of data, or in writing the manuscript. Vicarious AI, Union City, CA, USA Guangyao Zhou Data Science Institute, Columbia University, New York, NY, USA Jackson Loper Division of Applied Mathematics, Brown University, Providence, RI, USA Stuart Geman All authors designed the study. GZ collected the data and performed the statistical analysis. All authors interpreted the data. GZ and SG wrote and revised the paper. All authors read and approved the final manuscript. Correspondence to Guangyao Zhou. Zhou, G., Loper, J. & Geman, S. Base-pair ambiguity and the kinetics of RNA folding. BMC Bioinformatics 20, 666 (2019). https://doi.org/10.1186/s12859-019-3303-6 Non-coding RNA RNA folding kinetics Comparative secondary structure Thermodynamic equilibrium Self-splicing introns Ribonucleoproteins
Atomistic-scale investigation of self-healing mechanism in Nano-silica modified asphalt through molecular dynamics simulation Zhengwu Long1, Xianqiong Tang2, Nanning Guo1, Yanhuai Ding2, Wenbo Ma2, Lingyun You1 & Fu Xu2 Journal of Infrastructure Preservation and Resilience volume 3, Article number: 4 (2022) Cite this article As one of the most widely used nanomaterials in asphalt modification, the nano-silica (nano-SiO2) can significantly improve the self-healing behavior of asphalt eco-friendly. However, understanding of the self-healing mechanism of nano-SiO2 in asphalt is still limited. The objective of the study is to reveal the self-healing mechanism of nano-SiO2 in asphalt by using molecular dynamics (MD) simulations from the nanoscale. A 10 Å (Å) vacuum pad was added between the two same stable asphalt models to represent the micro-cracks inside the asphalt. The self-healing process of virgin asphalt, oxidation aging asphalt, and nano-SiO2 modified asphalt was studied using density evolution, relative concentration, diffusion coefficient, activation energy, and pre-exponential factor. The simulation results conclude that nano-SiO2 improves the self-healing ability of asphalt by increasing the diffusion rate of molecules with aromatic structures without alkyl side chains and molecules with structures with longer alkyl chains. The self-healing capability of asphalt may be principally determined by the diffusion of light components such as saturate, while nano-SiO2 only plays an inducing role. The research findings could provide insights to understand the self-healing mechanism of nano-SiO2 in asphalt for promoting the sustainability of bitumen pavements while increasing their durability. Background and introduction Asphalt concrete is a composite material commonly used to surface roads, parking lots, and airports [1]. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century [2,3,4]. Although asphalt materials have typically used in pavement constructions, it has some defects such as low-temperature cracking and fatigue; thus, improving its self-healing performance is a critical way to solve this problem [5]. Proverbially, additive modification is also an excellent way to improve asphalt performance [6]. Due to some unique features of nanomaterials, such as high surface area, more and more researchers use nanomaterials to modify asphalt [7]. The application of nanoscale aluminum oxide (nano-Al2O3) in the virgin asphalt significantly improved the complex shear modulus. Nano-Al2O3 modification improved the high-temperature rutting resistance and low-temperature fatigue cracking resistance of asphalt [8]. The graphene was also used to enhance the high-temperature rutting resistance of asphalt [9, 10]. The carbon nanotube (CNT) can improve the ability to resist moisture damage in asphalt [11]. The hybrid CNTsgraphite powders can further enhance the mechanical properties of asphalt binders [12]. Noteworthy, the nano-silica (nano-SiO2) modification presented an excellent global performance than other nanomaterials [13, 14], such as zero-valent iron and nano-clay [15]. Moreover, the previous studies found that the addition of nano-SiO2 can remarkably enhance the properties of asphalt such as the resistances to oxidation aging [16], moisture damage [17, 18], and fatigue cracking [19]. 
Nowadays, many studies experimentally evaluate the effects of nanomaterials such as nano-graphene [20], nano-zycotherm [21], and nano-SiO2 [22,23,24] on the self-healing properties of asphalt. These studies have shown that the simultaneous addition of Forta fibers and nano-zycotherm has a significant effect on the self-healing capability of asphalt [21]. Nano-graphene plays an advantageous role in the self-healing process [20]. Notably, the addition of nano-SiO2 can significantly improve the self-healing behavior of asphalt mixtures [22, 23]. Therefore, nanomaterials can effectively enhance the self-healing properties of asphalt materials to alleviate the low-temperature cracking problem of asphalt. Researching the self-healing behavior and mechanism of asphalt materials has important theoretical significance for promoting the sustainability and eco-friendly of bitumen pavements. However, it is seldom applied to investigate the effect of nanomaterials on the self-healing mechanism of asphalt. Compared with other modifiers, the nano-SiO2 can not only significantly improve the self-healing properties of the asphalt binder, but also improve the anti-oxidation and aging properties, moisture damage, and fatigue cracking properties. Thus, it is essential to conduct more in-depth research on the analysis of the self-healing mechanism of nano-SiO2 in asphalt. The mechanism analysis of asphalt materials was mainly performed by various computational and experimental techniques, including quantum mechanics (QM) calculations [25,26,27], Monte Carlo (MC) [28], molecular dynamics (MD) [29, 30], dissipative particle dynamics [31], and analytical chemistry [32], etc. Nevertheless, the MD simulation was proved to act as a powerful tool to predict the performances of asphalt materials and reveal its modification mechanism from the nanoscale [33, 34]. Some studies simulated the performance of asphalt via the MD method, which primarily involved its thermodynamic properties [35,36,37], oxidative aging [38,39,40], modification [41,42,43,44], diffusion behavior [45,46,47], and interface behavior [48,49,50]. Our previous work also studied the effect of aggregate surface irregularity and seawater erosion on interfacial adhesion properties of nano-SiO2 modified asphalt mixtures via the MD method [51]. These studies have found that the MD method bridges the gap between macro- and micro-scope behaviors. Thus, MD has become a relatively mature computational method for asphalt materials design and performance prediction. Moreover, the aforementioned research studies also exhibit that the mechanism analysis of asphalt materials with the MD method has become a research hotspot. In addition, the MD method has been widely employed in asphalt materials to analyze the influences of crack width [52], Styrene-Butadiene-Styrene (SBS) modifier [53], healing agent [54, 55], system temperature [56], oxidation aging [39], and each component molecule [57] on the self-healing properties of asphalt. Moreover, a multi-gradient analysis of the self-healing behaviors of asphalt nano-cracks was carried out based on the MD method [58]. These studies have shown that the crack width has a more significant effect on self-healing than temperature, whereas oxidation aging harms the self-healing properties of asphalt. 
Therefore, it is critical to reveal the effect of nano-SiO2 on the self-healing properties of asphalt to better understand the role of nano-silica in the self-healing behavior of asphalt and to further develop functional nano-SiO2/polymer-asphalt composite system design from the nanoscale. The primary objectives of the current study are to provide a comprehensive understanding of the effect of nano-SiO2 on the self-healing properties of asphalt via MD simulations from the nanoscale. In this study, a vacuum pad was added between the two same stable asphalt models to represent the micro-cracks inside the asphalt. The asphalt model is verified by analyzing the density, viscosity, and glass transition temperature. The effect of nano-SiO2 on the self-healing process was studied using density evolution, relative concentration, diffusion coefficient, activation energy, and pre-exponential factor. Simulation models and methods Force field and simulation details The MD simulations were performed using Forcite-package of the Materials Studio 2017 software with the COMPASS, i.e., Condensed-Phase Optimized Molecular Potentials for Atomistic Simulation Studies, force field. The COMPASS force field is a first ab initio force field to make accurate predictions of materials properties for a wide range of compounds in isolation and condensed phases [59]. In the following all-atom MD simulation, the Lennard-Jones interactions and the Coulombic interactions were calculated with a cutoff radius of 12 Å. All the simulations were performed with a Nosé-Hoover thermostat and barostat [60, 61], and the time step was set to 1.0 fs. The conjugate gradients method was used for energy optimization, while the long-range interactions were measured with the particle-particle-particle-mesh (PPPM) algorithm. The asphalt is divided into SARA components, namely saturate (S), aromatic (A), resin (R), and asphaltene (A). To further understand the physical, rheological, and mechanical properties of asphalt, and improved 12-component asphalt model was proposed by Li and Greenfield [62] to represent the AAA-1, AAK-1, and AAM-1 asphalt. In this study, the proposed amorphous cell model for AAA-1 asphalt of the Strategic Highway Research Program was adopted to represent the original virgin asphalt. The molecular formulas of 12 kinds of molecules are shown in Fig. 1. The oxygen atom will quickly replace the hydrogen atom connected to the benzyl carbon atom in the asphalt molecule, and the sulfur atom in the sulfide can easily combine with the oxygen atom to form two main oxidizing functional groups: ketone and sulfoxide. Due to the lack of sensitive polar functional groups, the saturate molecules will not change after oxidative aging. The molecular polarity of other components will increase after the oxidative aging process of asphalt occurs. Therefore, the current method of establishing an aging asphalt model is mainly by changing the functional groups of the polar components (such as asphaltenes, resins, and aromatics) in the virgin asphalt. As the oxidation aging level of asphalt increases, the number of oxidized functional groups of each polar component will increase accordingly. The short-term aged and long-term aged asphalt molecular models proposed by Qu et al. [63] are used in this work. Figures 2 and 3 show the molecular models of short-term aged and long-term aged asphalt used in this study, respectively. The compositions of three asphalt models are demonstrated in Table 1. 
12-Component Molecular Structures of Virgin Asphalt [62]
12-Component Molecular Structures of Short-Term Aged Asphalt [63]
12-Component Molecular Structures of Long-Term Aged Asphalt [39]
Table 1 Mass Percentage and Molecular Number of Virgin, Short-Term Aged, and Long-Term Aged Asphalt [51]
A crystalline silica unit cell, with lattice parameters of a = b = 4.913 Å, c = 5.4052 Å, α = β = 90°, and γ = 120°, was adopted from the Cambridge Structural Database. Sphere-shaped silica nanoparticles were built, with the radius of the nano-SiO2 set to 5 Å, 5.5 Å, 6 Å, 6.5 Å, 7 Å, and 7.5 Å. The unsaturated boundary effect was eliminated by 1) adding hydrogen atoms to the unsaturated oxygen atoms and 2) adding hydroxyl groups to the unsaturated silicon atoms of the silica particle surface. Figure 4 shows the final nano-SiO2 molecular model with a 7.5 Å radius.
Nano-SiO2 Clusters in the Models
To build the virgin and nano-SiO2 modified asphalt models with different aging states, the assigned numbers of each type of molecule were filled into a cubic box with an initial density of 0.8 g/cm³ to randomly distribute all molecules and prevent the molecular chains from twisting with each other. Three different initial configurations were created at each temperature to average over different initial mixing of the components. After an energy optimization process, all asphalt models were equilibrated in the canonical ensemble (NVT) at 298 K for 5 ns. After NVT, an isothermal-isobaric ensemble (NPT) run of 20 ns at one atmosphere followed, to ensure system equilibration for further data analysis. The temperatures were selected at 298 K, 333 K, 368 K, 403 K, 438 K, and 473 K.
Self-healing models
After determining the equilibrated asphalt MD model, a 10 Å vacuum pad was added between the two same stable models to represent the micro-cracks inside the asphalt. The diffusion between the two asphalt layers can emulate the self-healing process. The NPT ensemble was adopted for the first 2 ns at 1 atm. During this time, the asphalt layers exhibit a short self-balancing at the beginning of the self-healing process. The relative concentration curves in the z-direction were collected during the NPT ensemble simulation. After the end of the NPT ensemble simulation, the NVT ensemble simulation was run for another 20 ns at five different temperatures to simulate the molecular diffusion across the healing micro-crack surface. The simulation temperatures were set within 298–438 K, including 298 K, 333 K, 368 K, 403 K, and 438 K. The 3D micro-crack model of virgin asphalt is demonstrated in Fig. 5(a).
Self-Healing Process in the Virgin Asphalt Model: (a) Structure with 10 Å Crack Prior to Self-Healing and (b) Structure After Self-Healing (asphaltene, resin, aromatic, and saturate are represented as blue, purple, red, and green, respectively)
MD simulation theories and methods
Viscosity: Viscosity is a measure of a fluid's resistance to flow. This study performed a non-equilibrium molecular dynamics (NEMD) simulation [65] by shearing the simulation box and applied the SLLOD equations of motion [66] to calculate viscosity. Figure 6 shows the initial and shearing simulation box during the viscosity simulation. This research used the NVT ensemble to perform the viscosity simulations for 60 ns. In addition, three different constant shear rates (10¹⁰, 10⁹, and 10⁸ /s) were used to deform the simulation box.
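Once a SLLOD run at a fixed shear rate has produced a time series of the shear stress, the corresponding Newtonian viscosity estimate is simply the mean stress divided by the shear rate. The sketch below is a post-processing illustration only; the stress trace, its units (assumed to be Pa), and the file name are placeholders, since Materials Studio reports these quantities through its own analysis tools.

import numpy as np

def viscosity_cP(shear_stress_Pa, shear_rate_per_s):
    # eta = <sigma_xy> / gamma_dot ; 1 cP = 1e-3 Pa·s
    eta_Pa_s = np.mean(shear_stress_Pa) / shear_rate_per_s
    return eta_Pa_s * 1.0e3

# illustrative use, e.g. for the 1e8 /s run:
# sigma_xy = np.loadtxt("stress_xy.dat")   # placeholder output file
# print(viscosity_cP(sigma_xy, 1.0e8))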
MD Simulation Box with Initial and Shearing Conditions for Viscosity Calculations (asphaltene, resin, aromatic, and saturate are represented as blue, purple, red, and green, respectively)
Glass transition temperature: The glass transition temperature is a significant parameter in determining the viscoelastic properties of asphalt. The glass transition is a reversible process from a hard or brittle glassy state to a molten or rubber-like viscoelastic state. The glass transition temperature refers to the temperature corresponding to the transition from a brittle glassy state to a viscoelastic state. During the glass transition, physical properties such as the specific volume change markedly. In this work, all equilibrated asphalt models were initially heated up to a temperature of 600 K and were subjected to stepwise cooling at a rate of 5 K/ns (the temperature was reduced in steps of 10 K per 2 ns) until a low temperature of 80 K was reached. The specific volume of the systems was determined by averaging its values over the second half of the 2-ns-long run at each cooling step. The glass transition temperature was defined as the intersection of two fitting lines in the brittle glass-like and viscoelastic rubber-like regions of the specific volume versus temperature curve [67].
Self-healing: The mean square displacement (MSD) of the molecules tracks the translational mobility of asphalt molecules. The diffusion coefficient is related to the MSD as a function of time, as shown in Eq. (1). The diffusion coefficient is temperature-dependent and can be expressed by the Arrhenius law, as shown in Eq. (2) [68].
$$D=\frac{a}{2d}$$
$$D=A\exp\left(-\frac{E_{\mathrm{a}}}{RT}\right)$$
or, equivalently,
$$\ln D=\ln A-\frac{E_{\mathrm{a}}}{R}\cdot\frac{1}{T}$$
where D is the diffusion coefficient; a is the slope of the straight line fitted to the MSD curve as a function of time (the unit of a is 10⁻⁴ cm²/s); d is the system dimensionality, and in this work d = 3; A is the pre-exponential factor; Ea is the activation energy of the system; R is the molar gas constant (8.314 J/mol/K); T is the temperature of the system. The activation energy and pre-exponential factor are two critical parameters for evaluating the self-healing behavior of asphalt. The activation energy can be regarded as the energy required to initiate the asphalt self-healing process. The pre-exponential factor is related to the instantaneous self-healing ability of the asphalt (the larger the value, the stronger the instantaneous healing ability).
Simulation results and discussion
Thermodynamic properties and model validation
The thermodynamic properties of the asphalt models were calculated to ensure the accuracy of the MD method and asphalt molecule models, including density, viscosity, and glass transition temperature. The density values of seven different asphalt molecular models are shown in Fig. 7. The consistent fluctuation of the instantaneous density around the average density with simulation time was observed, showing that the asphalt systems reached a stable state. The average density values at different temperatures were calculated, as shown in Fig. 8. The highest predicted density is 1.06 g/cm³ at 298 K, and the densities decrease to 0.85 g/cm³ at 473 K. The density of all asphalt models ranges from 0.94 to 1.05 g/cm³ at 333 K, slightly lower than the experimental data of 0.99 to 1.33 g/cm³ at 333 K [69]. This is reasonable because the evaporation of saturates in the aging process was not considered in the simulation process.
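The post-processing implied by Eqs. (1)–(3) is straightforward. A minimal sketch, assuming the MSD is given in Å² and time in ps (so that the fitted slope a is directly in units of 10⁻⁴ cm²/s), with the Arrhenius parameters recovered from a linear regression of ln D on 1/T:

import numpy as np

def diffusion_coefficient(time_ps, msd_A2, d=3):
    # Eq. (1): D = a / (2 d), with a the slope of MSD versus time.
    # With MSD in Angstrom^2 and time in ps, a is in units of 1e-4 cm^2/s.
    a, _ = np.polyfit(time_ps, msd_A2, 1)
    return a / (2.0 * d) * 1.0e-4          # D in cm^2/s

def arrhenius_fit(T_K, D_cm2_s):
    # Eq. (3): ln D = ln A - (Ea / R) * (1 / T)
    R = 8.314                              # J/mol/K
    slope, intercept = np.polyfit(1.0 / np.asarray(T_K), np.log(D_cm2_s), 1)
    return -slope * R, np.exp(intercept)   # (Ea in J/mol, A in cm^2/s)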
Density Curves of the Asphalt Models at 333 K during 20 ns All-Atom MD Simulations. It shall be noted that VA is for virgin asphalt, SA is for short-term aged asphalt, and LA is for long-term aged asphalt. The number accompanying a label indicates the corresponding nano-SiO2 modified asphalt, with the value giving the radius of the nano-SiO2 particles in Å.
Density of Asphalt Models at Different Aging Levels and with Different Sized Nano-SiO2 during 20 ns All-Atom MD Simulations: (a) Different Aging Levels, and (b) Different Sized Nano-SiO2
Figure 9(a) shows the viscosity values for six different bulk asphalt models at 333 K at different shear rates. The decrease in the simulated viscosity at higher shear rates indicates that the asphalt models exhibit shear-thinning behavior at high shear rates. Figure 9(b) presents the viscosity values for the six bulk asphalt models at a shear rate of 10⁸ s⁻¹ at different temperatures. The stable viscosity value of the virgin asphalt model at a shear rate of 10⁸ s⁻¹ at 403 K was around 12.96 cP, close to Kim's simulation results of 7.32 to 13.9 cP at a 10⁸ s⁻¹ shear rate at 408 K [40].
Viscosity of Asphalt Models at Different Aging Levels during 60 ns All-Atom MD Simulations: (a) Different Shear Rates, and (b) Different Temperatures
Figure 10 shows the relationship between specific volume and temperature for the AAA-1 virgin asphalt model. In general, the specific volume increases with temperature, and the rate of growth in the high-temperature zone is significantly higher than in the low-temperature region. A visible glass transition zone exists between the high-temperature and low-temperature zones. The glass transition temperature of the virgin asphalt model is around 266 K, in good agreement with experimental data reported in the literature, which range from 223 K to 303 K [70]. The glass transition temperature results of the six different asphalt models are presented in Fig. 11. As shown, the glass transition temperature increases as the asphalt becomes stiffer due to oxidative aging. This finding is consistent with previous reports in the literature [71]. The glass transition temperature of nano-SiO2 modified asphalt is higher than that of unmodified asphalt. This is because nano-SiO2 can enhance the high-temperature properties of asphalt materials [7].
Specific Volume versus Temperature Diagrams for Virgin Asphalt Model at Temperatures from 80 to 600 K
Glass Transition Temperature of the Asphalt Models
The relationship between the energy components and temperature of the AAA-1 virgin asphalt model is shown in Fig. 12. The relationship between intramolecular energy and temperature is almost a straight line. Conversely, a glass transition region appears in the intermolecular energy (van der Waals energy and electrostatic energy) versus temperature curve. This means that, from an energy perspective, the glass transition behavior is mainly related to the intermolecular energy.
Model Energies versus Temperature Diagrams for the Asphalt Models at Temperatures from 80 to 600 K: (a) Intramolecular Energy, (b) Intermolecular Energy, (c) Van der Waals Energy, and (d) Electrostatic Energy
According to the above analysis, the MD simulation results are consistent with previous laboratory studies on real asphalt. Therefore, it is reasonable to conclude that the simulation method and the asphalt molecular models can predict the properties of asphalt accurately.
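The intersection-of-two-fitting-lines procedure used above to define the glass transition temperature can be expressed compactly in code. The sketch below (NumPy only; the fitting windows and the synthetic specific-volume data are hypothetical, chosen only to illustrate the method) fits the glassy and rubbery branches separately and returns the temperature at which the two lines cross.

```python
import numpy as np

def glass_transition_temperature(T_K, v_specific, T_glassy_max, T_rubbery_min):
    """Tg from the intersection of linear fits to the glassy (low-T) and
    rubbery (high-T) branches of the specific volume vs temperature curve."""
    T = np.asarray(T_K, dtype=float)
    v = np.asarray(v_specific, dtype=float)
    a1, b1 = np.polyfit(T[T <= T_glassy_max], v[T <= T_glassy_max], 1)
    a2, b2 = np.polyfit(T[T >= T_rubbery_min], v[T >= T_rubbery_min], 1)
    return (b2 - b1) / (a1 - a2)   # temperature where a1*T + b1 = a2*T + b2

# Synthetic cooling data (80-600 K in 10 K steps) with a kink placed at 266 K.
T = np.arange(80.0, 601.0, 10.0)
v = np.where(T < 266.0,
             0.90 + 2.0e-4 * T,
             0.90 + 2.0e-4 * 266.0 + 6.0e-4 * (T - 266.0))
print(f"Tg ~ {glass_transition_temperature(T, v, 200.0, 350.0):.0f} K")
```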
Influence of nano-SiO2 on the self-healing potential of asphalt
Density and relative concentration in the self-healing process
Virgin asphalt molecules diffuse across the nano-crack, and the vacuum layer inside the model gradually disappears. The layer structure after self-healing is shown in Fig. 5(b). The density evolution during the self-healing process is shown in Fig. 13. As shown, the density of all models begins to enter a stable state at 0.5 ns. Compared with Fig. 7, the self-healed asphalt recovers the same density as the original asphalt. The self-healing process of the asphalt nano-crack is divided into two stages in Fig. 13. The first stage (0–0.5 ns), the typical artificial nano-crack healing stage, was the period during which the two asphalt layers moved closer together until the vacuum layer disappeared. The density of the self-healing model increases rapidly at first and then more gradually, until it finally approaches the density of the original model. However, the density change between 0 and 0.5 ns did not follow an obvious pattern. This is because the initial self-healing model is not stable, resulting in relatively large energy fluctuations in the initial model. The second stage (0.5–2 ns), the actual self-healing stage, was characterized by a stable density curve close to the original asphalt density.
Model Density Changes during Self-Healing Process at 333 K
The relative concentrations of asphalt molecules in the z-direction were calculated to study the self-healing behavior. The relative concentration distribution of the virgin asphalt 3D nano-crack model in the z-direction at 298 K is shown in Fig. 14. At 0 ps, the relative concentration of the asphalt model showed a bimodal distribution. The relative concentrations at the middle and at the edges were both 0.0, indicating that the asphalt had not started to heal. From 200 ps to 400 ps, the relative concentration of the two peaks decreased significantly. The relative concentration of the middle valley began to increase, and the crack width gradually decreased, indicating that the asphalt molecules began to diffuse toward the middle crack. From 600 ps to 1000 ps, both the peaks and the valley disappear and the relative concentration approaches 1.0; the molecules on both sides of the crack come into contact, and the asphalt molecular model begins to heal. This finding is consistent with the changes in density.
Relative Concentration Distribution of Nano-Crack Model in the z-direction at 298 K at Different Running Times (from 0 to 1000 ps)
As shown in Fig. 15, the relative concentration values were close to 0 in the nano-crack area of the z-direction before healing. As the temperature rises, the two peaks decrease, and the width of the artificial crack in the middle gradually decreases. When the temperature rises to 368 K, the valley is no longer visible, and the relative concentration values along the entire length are close to 1.0, indicating that the artificial crack has disappeared. This is because the increase in temperature leads to increased molecular diffusion and accelerates the self-healing behavior of asphalt. Therefore, it is necessary to use the diffusion coefficient to analyze the self-healing behavior of asphalt.
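The relative concentration profiles described above are, in essence, histograms of atomic z coordinates normalized so that a homogeneous (fully healed) region reads close to 1.0 and the empty crack reads close to 0.0. A minimal sketch of such a profile calculation is given below; the snapshot data are synthetic and only illustrative.

```python
import numpy as np

def relative_concentration_profile(z_coords, box_z, n_bins=50):
    """Bin atomic z coordinates and normalize by the mean bin occupancy,
    so a uniform (healed) system gives values near 1 and a crack gives ~0."""
    counts, edges = np.histogram(z_coords, bins=n_bins, range=(0.0, box_z))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / counts.mean()

# Synthetic snapshot: two asphalt slabs separated by a vacuum "crack" (z = 40-60).
rng = np.random.default_rng(0)
box_z = 100.0
z = np.concatenate([rng.uniform(0.0, 40.0, 4000), rng.uniform(60.0, 100.0, 4000)])
centers, rel = relative_concentration_profile(z, box_z)
print(np.round(rel[18:32], 2))  # values drop towards 0 inside the crack region
```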
Relative Concentration Distribution of Nano-Crack Model in the z-direction at 100 ps at Different Temperatures (from 298 to 438 K)
Diffusion coefficient analysis in the self-healing process
Asphalt typically behaves as a Newtonian fluid above the glass transition temperature, and self-healing can then occur [72]. Van der Waals forces are the main factor affecting the self-healing of asphalt [57]. It was found in Section 3.1 that the glass transition behavior is related to the van der Waals interaction. This also means that there may be a close relationship between the glass transition behavior and the self-healing behavior of asphalt. Therefore, this paper mainly studies the diffusion rate above the glass transition temperature, because this is the crucial factor affecting the self-healing behavior of asphalt. The MSD curves of the seven asphalts at 298 K are shown in Fig. 16. As expected, the MSD increases with simulation time, and the slope for nano-SiO2 modified virgin asphalt is slightly higher than that for virgin asphalt. The slope for aged asphalt is significantly lower than that for virgin asphalt. This means that the addition of nano-SiO2 can promote the translational mobility of asphalt molecules, whereas oxidative aging is not conducive to the migration of asphalt molecules.
MSD Curves of the Asphalt Models during Self-Healing Process at the Temperature of 298 K
The diffusion coefficients of the seven asphalts at 298 K and 333 K are shown in Fig. 17. It can be seen that a higher temperature facilitates the self-diffusion of asphalt molecules and promotes the healing capability. Overall, the diffusion coefficients of nano-SiO2 modified asphalt are higher than those of the virgin asphalt, and oxidatively aged asphalt has a lower diffusion coefficient than virgin asphalt. This conclusion is consistent with past findings [39, 57]. The diffusion coefficient of VA-7.5 is the highest, followed by VA-6.5, VA, SA-7.5, SA, and LA; the diffusion coefficients of LA and LA-7.5 are similar. This means that nano-SiO2 can enhance the self-healing ability of asphalt, while oxidative aging reduces the self-healing capability of asphalt. In other words, the nano-SiO2 additive indirectly improves the ability to resist oxidative aging.
Diffusion Coefficient of the Asphalt Models during Self-Healing Process at the Temperatures of 298 K and 333 K
To further study the effect of nano-SiO2 on self-healing behavior, the diffusion coefficients of the four SARA components of the seven asphalt models at 333 K were calculated, as shown in Fig. 18. In general, the diffusion coefficient of asphaltene is lower than those of the other three components (excluding nano-SiO2) for all seven asphalt models. Figure 18(a) shows that the diffusion coefficients of the four components generally increase after the addition of nano-SiO2. Conversely, Fig. 18(b) shows that the diffusion coefficients of the four components show a general downward trend as the degree of oxidative aging increases. For virgin asphalt, the diffusion coefficient of saturate is the highest, followed by aromatic, resin, and asphaltene. This is related to the molecular weights of the four asphalt components. In particular, with the addition of the nano-SiO2 modifier, the diffusion coefficient of asphaltene decreases compared to the virgin asphalt, whereas the diffusion coefficients of the other three components increase noticeably. This indicates that nano-SiO2 has a significant enhancement effect on the diffusion rates of saturate, aromatic, and resin.
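The MSD curves above follow the standard definition MSD(t) = ⟨|r(t) − r(0)|²⟩, averaged over molecules. A minimal NumPy sketch for an unwrapped trajectory array is shown below (the random-walk data are synthetic; a production analysis would also average over time origins). Together with the Arrhenius fit given earlier, this reproduces the pipeline used to obtain the diffusion coefficients.

```python
import numpy as np

def mean_square_displacement(traj):
    """MSD(t) averaged over particles, measured from the first frame.

    traj : unwrapped positions, shape (n_frames, n_particles, 3), in Angstrom
    Returns an array of length n_frames with the MSD in Angstrom^2.
    """
    disp = traj - traj[0]                       # displacement from frame 0
    return (disp ** 2).sum(axis=-1).mean(axis=-1)

# Synthetic random-walk trajectory: 500 frames, 100 particles, 3 dimensions.
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(scale=0.05, size=(500, 100, 3)), axis=0)
msd = mean_square_displacement(traj)
print(np.round(msd[:5], 4))   # grows roughly linearly with frame index
```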
In contrast, the diffusion coefficient of nano-SiO2 itself is low. In other words, the self-healing capability of asphalt may be mainly determined by the diffusion of light components such as saturate, while nano-SiO2 only plays an inducing role.
Comparison of the Molecular Diffusion Coefficient of the Components in the Asphalt Models at 333 K: (a) VA, VA-6.5, and VA-7.5, (b) VA, SA, and LA, and (c) VA-7.5, SA-7.5, and LA-7.5. It shall be noted that As is for asphaltene, Re is for resin, Ar is for aromatic, Sa is for saturate, and NS is for nano-SiO2
The virgin asphalt molecular model established in this work consists of 12 different molecules, whose diffusion rates differ. Studying the diffusion rates of all molecule types helps in selecting appropriate modifiers to precisely increase the molecular diffusion rate and enhance the self-healing capability of the asphalt. Figure 19 compares the diffusion coefficients of the 12 different molecules in the virgin asphalt, long-term aged asphalt, and nano-SiO2 modified asphalt at 333 K. For all three asphalt models, the molecule with the largest diffusion coefficient is Re1, and the molecule with the lowest diffusion coefficient is As1 (excluding nano-SiO2). The diffusion coefficients of all 12 molecules in the long-term aged asphalt model are lower than those in the virgin asphalt model. Specifically, in the virgin asphalt, the three molecules with the largest diffusion coefficients are Ar1, Sa1, and Re1 in ascending order. For the long-term aged asphalt, the three molecules with the largest diffusion coefficients are Re1, Ar1, and Sa2 in descending order. As for the nano-SiO2 modified asphalt model, the three molecules with the largest diffusion coefficients are Re1, Sa2, and Re5 in descending order. However, the diffusion coefficient of the nano-SiO2 modifier itself is the lowest among all species in the nano-SiO2 modified asphalt. Thus, molecules with aromatic structures without alkyl side chains (Re1) and molecules with longer alkyl chain structures (Ar1, Sa2, Re5) may diffuse more easily, whereas molecules with complex aromatic structures and more short alkyl side chains (As1) may be more difficult to diffuse. Nano-SiO2 mainly improves the self-healing ability of the asphalt by enhancing the diffusion rate of molecules with aromatic structures without alkyl side chains and of molecules with longer alkyl chain structures.
Molecular Diffusion Coefficients of Virgin Asphalt, Long-Term Aged Asphalt, and nano-SiO2 Modified Asphalt at 333 K. It shall be noted that As1 is for asphaltene-phenol, As2 is for asphaltene-pyrrole, As3 is for asphaltene-thiophene, Re1 is for benzobisbenzothiophene, Re2 is for pyridinohopane, Re3 is for quinolinohopane, Re4 is for thioisorenieratane, Re5 is for trimethylbenzeneoxane, Ar1 is for dioctyl-cyclohexane-naphthalene (DOCHN), Ar2 is for perhydrophenanthrene-naphthalene (PHPN), Sa1 is for hopane, and Sa2 is for squalane
Activation energy analysis in the self-healing process
To further study the effect of nano-SiO2 on the self-healing behavior of asphalt, the activation energy and pre-exponential factor of the self-healing process were calculated based on the Arrhenius law. Figure 20 plots the logarithm of the diffusion coefficient against the reciprocal of temperature for virgin asphalt, short-term aged asphalt, and long-term aged asphalt. As depicted in Fig. 20,
the diffusion rate increases with temperature for all three asphalts. As seen from the fitting equations in Table 2, Ea/R is a constant for each of the seven asphalt nano-crack models; that is, the logarithm of the diffusion coefficient is linearly correlated with the reciprocal of temperature. Figure 21 compares the activation energies and pre-exponential factors of the seven different asphalts. The activation energy calculated for VA-6.5 and VA-7.5 is a little lower than that of VA, while the pre-exponential factor of VA is much larger than those of VA-6.5 and VA-7.5. This indicates that nano-SiO2 modified asphalt has a lower activation energy barrier, although its instantaneous self-healing ability is weaker. At the same time, nano-SiO2 modified asphalt has a larger diffusion coefficient, so it is believed that nano-SiO2 can effectively improve the self-healing properties of asphalt as long as the temperature is higher than the glass transition temperature.
Diffusion Coefficient versus Simulation Temperatures for Virgin Asphalt, Short-Term Aged Asphalt, and Long-Term Aged Asphalt
Table 2 Fitting Results of the Diffusion Coefficient versus System Temperature Based on Arrhenius Law
Activation Energy and Pre-Exponential Factor of the Asphalt Models
For the three different oxidative aging states of asphalt, the pre-exponential factor of VA is the highest, followed by SA and LA. This means that as the degree of oxidative aging of asphalt increases, the instantaneous self-healing ability of asphalt weakens. However, the trend of the activation energy follows that of the pre-exponential factor, which is inconsistent with previous literature reports [39]. This may be because, in the aging model, only the oxidation of molecules is considered, and the change in the SARA component fractions is not. Therefore, the influence of oxidative aging on the self-healing properties of asphalt may require more in-depth research. Since LA has a lower activation energy, healing is achieved faster in the initial stage of self-healing. This can also explain why the healing time after short-term aging is longer than that of virgin asphalt, whereas the healing time after long-term aging is shorter (Fig. 13). Moreover, comparing the activation energy and pre-exponential factor of VA-7.5, SA, and LA, it can be found that VA-7.5 has the lowest activation energy barrier and thus a stronger instantaneous self-healing ability. This is consistent with the expected results. Figure 22 compares the activation energy and pre-exponential factor of each SARA component for four different asphalt binders. Table 3 gives the relationship between the diffusion coefficients of the four asphalt components and temperature. From Fig. 22(a), it can be found that the activation energy of each component of VA-6.5 and VA-7.5 is smaller than that of the corresponding component of VA. The activation energy of each component of LA is less than that of the corresponding VA component, except for the saturate component. This means that the addition of nano-SiO2 can reduce the activation energy for each component of asphalt. The decrease in the activation energy of the asphaltene, resin, and aromatic components of the long-term aged asphalt may be related to the oxidation of the molecules (saturate components are not subject to oxidative aging). It can be seen from Fig. 22(b) that the pre-exponential factor of each VA component is higher than those of the other three asphalts.
The pre-exponential factor for each component of VA-6.5 and VA-7.5 is higher than that of LA, except for the saturate component. This indicates that the instantaneous self-healing ability of the virgin asphalt is stronger than that of nano-SiO2 modified asphalt and long-term aged asphalt, and that the instantaneous healing ability of nano-SiO2 modified asphalt is stronger than that of long-term aged asphalt.
Self-Healing Parameters of SARA Components in Asphalt Models: (a) Activation Energy, and (b) Pre-Exponential Factor
Table 3 Relationship between the Diffusion Coefficient of the SARA Components and System Temperatures
Moreover, for virgin asphalt, the activation energy of resin is the lowest, followed by asphaltene, aromatic, and saturate. Nevertheless, the pre-exponential factors of the four components of virgin asphalt are, in ascending order, asphaltene, resin, aromatic, and saturate. It can be found that the saturate and aromatic components mainly provide the instantaneous self-healing capacity of asphalt. After the addition of nano-SiO2, the activation energies and pre-exponential factors of the four components of asphalt change. Therefore, the self-healing ability of virgin asphalt can be improved by adding additives that change the activation energy and pre-exponential factor of the SARA components of the asphalt material. Figure 23 compares the activation energies and pre-exponential factors of the 12 different molecules in the virgin asphalt, long-term aged asphalt, and nano-SiO2 modified asphalt. The activation energies of the 12 molecules in VA, LA, and VA-7.5 are all around 40 kJ/mol. The spread in activation energy among the 12 molecules is small in VA and VA-7.5, while it is large in LA. This means that the colloidal structure of long-term aged asphalt may be less stable. For the virgin asphalt model, the molecule with the largest pre-exponential factor is Re2. However, for the long-term aged asphalt and the nano-SiO2 modified asphalt, the molecule with the largest pre-exponential factor is Sa1. In all three asphalt models, one molecule has a significantly higher pre-exponential factor than the others, whose values are relatively close to one another. Therefore, it may be possible to enhance the self-healing ability of asphalt by adding additives that reduce the differences in activation energy among the molecules in the system and increase the pre-exponential factor of a given molecule.
Activation Energy and Pre-Exponential Factor of the Molecules in Asphalt Models: (a) VA, (b) LA, and (c) VA-7.5
Conclusions and outlook
This study provides a comprehensive understanding of the effect of nano-SiO2 on the self-healing behavior of asphalt via MD simulations at the nanoscale. The main conclusions are as follows: The self-healing process of an asphalt nano-crack involves two phases, the typical artificial nano-crack healing stage and the actual self-healing stage. The actual self-healing stage is characterized by a stable density curve and is close to the original asphalt density. Nano-SiO2 can enhance the self-healing ability of asphalt, while oxidative aging harms the self-healing of asphalt. With the addition of the nano-SiO2 modifier, the diffusion coefficient of asphaltene decreases compared to the virgin asphalt. Nano-SiO2 has a significant enhancement effect on the diffusion rates of saturate, aromatic, and resin, whereas the diffusion coefficient of nano-SiO2 itself is lower than those of the four SARA components.
Therefore, the self-healing capability of asphalt may be mainly determined by the diffusion of light components such as saturate, while nano-SiO2 only plays an inducing role. Molecules with aromatic structures without alkyl side chains (Re1) and molecules with longer alkyl chain structures (Ar1, Sa2, and Re5) may diffuse more easily; molecules with complex aromatic structures and more short alkyl side chains (As1) may be more difficult to diffuse. Nano-SiO2 mainly improves the self-healing ability of the asphalt by enhancing the diffusion rate of molecules with aromatic structures without alkyl side chains and of molecules with longer alkyl chain structures. Nano-SiO2 modified asphalt has a lower activation energy barrier than virgin asphalt, although its instantaneous self-healing ability is weaker. At the same time, nano-SiO2 modified asphalt has a larger diffusion coefficient; thus, it is believed that nano-SiO2 can effectively improve the self-healing properties of asphalt if the temperature is higher than the glass transition temperature. As the degree of oxidative aging of asphalt increases, the instantaneous self-healing ability of asphalt weakens. The saturate and aromatic components mainly provide the instantaneous self-healing capacity of asphalt. The addition of nano-SiO2 can reduce the activation energy for each component of asphalt. The decrease in the activation energy of the asphaltene, resin, and aromatic components of the long-term aged asphalt may be related to the oxidation of the molecules (saturate components are not subject to oxidative aging). The current study findings provide a fundamental understanding of the self-healing behavior and mechanism of nano-SiO2 in asphalt from the perspective of molecules. The research opens new directions for further study of the effect of other environmental factors (such as moisture) on the self-healing behavior of asphalt at the nanoscale. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Man J, Yan K, Miao Y, Liu Y, Yang X, Diab A, You L (2021) 3D spectral element model with a space-decoupling technique for the response of transversely isotropic pavements to moving vehicular loading. Road Mater Pavement Des (ahead of print):1–25 Polaczyk P, Huang B, Shu X, Gong H (2019) Investigation into locking point of asphalt mixtures utilizing Superpave and Marshall compactors. J Mater Civil Eng 31:4019188 Raab C, Camargo I, Partl MN (2017) Ageing and performance of warm mix asphalt pavements. J Traffic Trans Eng (English Edition) 4:388–394 Zhang E, Qi X, Shan L, Li D (2021) Investigation of rheological properties of asphalt emulsions. J Infrastruct Preserv Resilience 2:22 Fakhri M, Bahmai BB, Javadi S, Sharafi M (2020) An evaluation of the mechanical and self-healing properties of warm mix asphalt containing scrap metal additives. J Clean Prod 253:119963 You L, Jin D, Guo S, Wang J, Dai Q, You Z (2021) Leaching evaluation and performance assessments of asphalt mixtures with recycled cathode ray tube glass: a preliminary study. J Clean Prod 279:123716 Li R, Xiao F, Amirkhanian S, You Z, Huang J (2017) Developments of nano materials and technologies on asphalt materials – a review. Constr Build Mater 143:633–648 Ali SIA, Ismail A, Karim MR, Yusoff NIM, Al-Mansob RA, Aburkaba E (2017) Performance evaluation of Al2O3 nanoparticle-modified asphalt binder.
Road Mater Pavement 18:1251–1268 Moreno-Navarro F, Sol-Sánchez M, Gámiz F, Rubio-Gámez MC (2018) Mechanical and thermal properties of graphene modified asphalt binders. Constr Build Mater 180:265–274 Kuntikana G, Singh D (2017) Contemporary issues related to utilization of industrial byproducts. Adv Civil Eng Mater 6:444–479 Mamun AA, Arifuzzaman M (2018) Nano-scale moisture damage evaluation of carbon nanotube-modified asphalt. Constr Build Mater 193:268–275 Guo S, Dai Q, Wang Z, Yao H (2017) Rapid microwave irradiation synthesis of carbon nanotubes on graphite surface and its application on asphalt reinforcement. Compos Part B 124:134–143 Mousavi SE, Karamvand A (2017) Assessment of strength development in stabilized soil with CBR PLUS and silica sand. J Traffic Trans Eng 4:412–421 Crucho JML, Neves JMCD, Capitão SD, Picado-Santos LGD (2018) Mechanical performance of asphalt concrete modified with nanoparticles: Nanosilica, zero-valent iron and nanoclay. Constr Build Mater 181:309–318 Ortega FJ, Navarro FJ, García-Morales M, McNally T (2015) Thermo-mechanical behaviour and structure of novel bitumen/nanoclay/MDI composites. Compos Part B 76:192–200 Karnati SR, Oldham D, Fini EH, Zhang L (2019) Surface functionalization of silica nanoparticles to enhance aging resistance of asphalt binder. Constr Build Mater 211:1065–1072 Yusoff NIM, Breem AAS, Alattug HNM, Hamim A, Ahmad J (2014) The effects of moisture susceptibility and ageing conditions on nano-silica/polymer-modified asphalt mixtures. Constr Build Mater 72:139–147 Ho C, Martin Linares C, Shan J, Almonnieay A (2017) Material testing apparatus and procedures for evaluating freeze-thaw resistance of asphalt concrete mixtures. Adv Civil Eng Mater 6:429–443 Nazari H, Naderi K, Moghadas Nejad F (2018) Improving aging resistance and fatigue performance of asphalt binders using inorganic nanoparticles. Constr Build Mater 170:591–602 Valizadeh M, Janalizadeh Choobbasti A (2020) Evaluation of nano-graphene effect on mechanical behavior of clayey sand with microstructural and self-healing approach. J Adhes Sci Technol 34:299–318 HasaniNasab S, Arast M, Zahedi M (2019) Investigating the healing capability of asphalt modified with nano-zycotherm and Forta fibers. Case Stud Constr Mater 11:e235 Amin GM, Esmail A (2017) Application of nano silica to improve self-healing of asphalt mixes. J Cent South Univ 24:1019–1026 Kie Badroodi S, Reza Keymanesh M, Shafabakhsh G (2020) Experimental investigation of the fatigue phenomenon in nano silica-modified warm mix asphalt containing recycled asphalt considering self-healing behavior. Constr Build Mater 246:117558 Ganjei MA, Aflaki E (2019) Application of nano-silica and styrene-butadiene-styrene to improve asphalt mixture self healing. Int J Pavement Eng 20:89–99 Hosseinnezhad S, Shakiba S, Mousavi M, Louie SM, Karnati SR, Fini EH (2019) Multiscale evaluation of moisture susceptibility of biomodified bitumen. ACS Appl Bio Mater 2:5779–5789 Mousavi M, Oldham DJ, Hosseinnezhad S, Fini EH (2019) Multiscale evaluation of synergistic and antagonistic interactions between bitumen modifiers. ACS Sustain Chem Eng 7:15568–15577 Mousavi M, Fini E (2020) Silanization mechanism of silica nanoparticles in bitumen using 3-Aminopropyl Triethoxysilane (APTES) and 3-Glycidyloxypropyl Trimethoxysilane (GPTMS). ACS Sustain Chem Eng 8:3231–3240 Xu M, Yi J, Feng D, Huang Y, Wang D (2016) Analysis of adhesive characteristics of asphalt based on atomic force microscopy and molecular dynamics simulation. 
ACS Appl Mater Inter 8:12393–12403 Samieadel A, Høgsaa B, Fini EH (2019) Examining the implications of wax-based additives on the sustainability of construction practices: multiscale characterization of wax-doped aged asphalt binder. ACS Sustain Chem Eng 7:2943–2954 Long Z, Zhou S, Jiang S, Ma W, Ding Y, You L, Tang X, Xu F (2021) Revealing compatibility mechanism of nanosilica in asphalt through molecular dynamics simulation. J Mol Model 27:81 Tang J, Wang H (2022) Coarse grained modeling of nanostructure and asphaltene aggregation in asphalt binder using dissipative particle dynamics. Constr Build Mater 314:125605 Fini EH, Hung AM, Roy A (2019) Active mineral fillers arrest migrations of alkane acids to the Interface of bitumen and siliceous surfaces. ACS Sustain Chem Eng 7:10340–10348 Chen Z, Pei J, Li R, Xiao F (2018) Performance characteristics of asphalt materials based on molecular dynamics simulation – a review. Constr Build Mater 189:695–710 Yao H, Liu J, Xu M, Ji J, Dai Q, You Z (2022) Discussion on molecular dynamics (MD) simulations of the asphalt materials. Adv Colloid Interfac 299:102565 Khabaz F, Khare R (2015) Glass transition and molecular mobility in styrene–butadiene rubber modified asphalt. J Phys Chem B 119:14261–14269 Yao H, Dai Q, You Z (2016) Molecular dynamics simulation of physicochemical properties of the asphalt model. FUEL 164:83–93 Su M, Si C, Zhang Z, Zhang H (2020) Molecular dynamics study on influence of Nano-ZnO/SBS on physical properties and molecular structure of asphalt binder. FUEL 263:116777 Pan J, Tarefder RA (2016) Investigation of asphalt aging behaviour due to oxidation using molecular dynamics simulation. Mol Simulat 42:667–678 Xu G, Wang H (2017) Molecular dynamics study of oxidative aging effect on asphalt binder properties. Fuel 188:1–10 Fallah F, Khabaz F, Kim Y, Kommidi SR, Haghshenas HF (2019) Molecular dynamics modeling and simulation of bituminous binder chemical aging due to variation of oxidation level and saturate-aromatic-resin-asphaltene fraction. Fuel 237:71–80 Yu R, Wang Q, Wang W, Xiao Y, Wang Z, Zhou X, Zhang X, Zhu X, Fang C (2021) Polyurethane/graphene oxide nanocomposite and its modified asphalt binder: preparation, properties and molecular dynamics simulation. Mater Design 209:109994 Su M, Zhou J, Lu J, Chen W, Zhang H (2022) Using molecular dynamics and experiments to investigate the morphology and micro-structure of SBS modified asphalt binder. Mater Today Commun 30:103082 Cui B, Wang H (2022) Molecular interaction of asphalt-aggregate interface modified by silane coupling agents at dry and wet conditions. Appl Surf Sci 572:151365 Cui W, Huang W, Hassan HMZ, Cai X, Wu K (2022) Study on the interfacial contact behavior of carbon nanotubes and asphalt binders and adhesion energy of modified asphalt on aggregate surface by using molecular dynamics simulation. Constr Build Mater 316:125849 Ding Y, Huang B, Shu X, Zhang Y, Woods ME (2016) Use of molecular dynamics to investigate diffusion between virgin and aged asphalt binders. Fuel 174:267–273 Huang M, Zhang H, Gao Y, Wang L (2019) Study of diffusion characteristics of asphalt–aggregate interface with molecular dynamics simulation. Int J Pavement Eng:1–12 Xu M, Yi J, Feng D, Huang Y (2019) Diffusion characteristics of asphalt rejuvenators based on molecular dynamics simulation. Int J Pavement Eng 20:615–627 Xu G, Wang H (2016) Molecular dynamics study of interfacial mechanical behavior between asphalt binder and mineral aggregate. 
Constr Build Mater 121:246–254 Dong Z, Liu Z, Wang P, Gong X (2017) Nanostructure characterization of asphalt-aggregate interface through molecular dynamics simulation and atomic force microscopy. Fuel 189:155–163 Gao Y, Zhang Y, Yang Y, Zhang J, Gu F (2019) Molecular dynamics investigation of interfacial adhesion between oxidised bitumen and mineral surfaces. Appl Surf Sci 479:449–462 Long Z, You L, Tang X, Ma W, Ding Y, Xu F (2020) Analysis of interfacial adhesion properties of nano-silica modified asphalt mixtures using molecular dynamics simulation. Constr Build Mater 255:119354 Shen S, Lu X, Liu L, Zhang C (2016) Investigation of the influence of crack width on healing properties of asphalt binders at multi-scale levels. Constr Build Mater 126:197–205 Sun D, Lin T, Zhu X, Tian Y, Liu F (2016) Indices for self-healing performance assessments based on molecular dynamics simulation of asphalt binders. Comp Mater Sci 114:86–93 He L, Zheng Y, Alexiadis A, Cannone Falchetto A, Li G, Valentin J, Van den Bergh W, Emmanuilovich Vasiliev Y, Kowalski KJ, Grenfell J (2021) Research on the self-healing behavior of asphalt mixed with healing agents based on molecular dynamics method. Constr Build Mater 295:123430 Tian Y, Zheng M, Liu Y, Zhang J, Ma S, Jin J (2021) Analysis of behavior and mechanism of repairing agent of microcapsule in asphalt micro crack based on molecular dynamics simulation. Constr Build Mater 305:124791 Sun D, Sun G, Zhu X, Ye F, Xu J (2018) Intrinsic temperature sensitive self-healing character of asphalt binders based on molecular dynamics simulations. Fuel 211:609–620 He L, Li G, Lv S, Gao J, Kowalski KJ, Valentin J, Alexiadis A (2020) Self-healing behavior of asphalt system based on molecular dynamics simulation. Constr Build Mater 254:119225 Yu T, Zhang H, Wang Y (2020) Multi-gradient analysis of temperature self-healing of asphalt nano-cracks based on molecular simulation. Constr Build Mater 250:118859 Sun H (1998) COMPASS: an ab initio force-field optimized for condensed-phase ApplicationsOverview with details on alkane and benzene compounds. J Phys Chem B 102:7338–7364 Nosé S (1984) A unified formulation of the constant temperature molecular dynamics methods. J Chem Phys 81:511–519 Hoover WG (1985) Canonical dynamics: equilibrium phase-space distributions, physical review. Gen Phys 31:1695–1697 Li DD, Greenfield ML (2014) Chemical compositions of improved model asphalt systems for molecular simulations. Fuel 115:347–356 Qu X, Liu Q, Guo M, Wang D, Oeser M (2018) Study on the effect of aging on physical properties of asphalt binder from a microscale perspective. Constr Build Mater 187:718–729 Jones DR (1993) SHRP materials reference library: asphalt cements: a concise data compilation. National Research Council, Washington, DC DJ MGE (2014) Statistical mechanics of nonequilibrium liquids. Cambridge University, New York Daivis PJ, Todd BD (2006) A simple, direct derivation and proof of the validity of the SLLOD equations of motion for generalized homogeneous flows. J Chem Phys 124:194103 Han J, Gee RH, Boyd RH (1994) Glass transition temperatures of polymers from molecular dynamics simulations. Macromolecules 27:7781–7784 Laidler KJ (1984) The development of the Arrhenius equation. J Chem Educ 61:494 Robertson RE, Branthaver JF, Harnsberger PM, Petersen JC, Dorrence SM, McKay JF, Turner TF, Pauli AT (2001) Fundamental Properties of Asphalt and Modified Asphalts, vol. 
I, Interpretive Report Tabatabaee HA, Velasquez R, Bahia HU (2012) Predicting low temperature physical hardening in asphalt binders. Constr Build Mater 34:162–169 Daly WH, Negulescu II, Glover I (2010) A comparative analysis of modified binders: original asphalt and material extracted from existing pavement. Louisiana State University, Baton Rouge, Louisiana García Á (2012) Self-healing of open cracks in asphalt mastic. Fuel 93:264–272 The authors acknowledge the financial support of the Hunan Provincial Natural Science Foundation of China (2019JJ50622) and the Fundamental Research Funds for the Central Universities (2020kfyXJJS127). School of Civil and Hydraulic Engineering, Huazhong University of Science and Technology, Wuhan, 430074, Hubei Province, China: Zhengwu Long, Nanning Guo & Lingyun You. College of Civil Engineering and Mechanics, Xiangtan University, Xiangtan, 411105, Hunan Province, China: Xianqiong Tang, Yanhuai Ding, Wenbo Ma & Fu Xu. ZL collected and synthesized references, conducted the simulations, and drafted and wrote the manuscript. XT proposed the simulation program. NG, YD and WM analyzed the simulation data. LY developed the research plan and reviewed and edited the manuscript. FX initiated the project and led the conceptualization. All authors read and approved the final manuscript. Correspondence to Lingyun You or Fu Xu. Long, Z., Tang, X., Guo, N. et al. Atomistic-scale investigation of self-healing mechanism in Nano-silica modified asphalt through molecular dynamics simulation. J Infrastruct Preserv Resil 3, 4 (2022). https://doi.org/10.1186/s43065-022-00049-2 Received: 03 December 2021 Keywords: Asphalt binder, Molecular dynamics, Self-healing mechanism, Nano-silica (SiO2), Diffusion coefficient. Pavement Preservation and Resilience: Strategies and Innovative Technologies
PRE-GALACTIC CONSTRAINTS ON THE GALACTIC EVOLUTION Hyun, J.J. 51 The characteristic size and mass of galaxies as pre-galactic constraints on the Galactic evolution are reviewed, and the general constraints for their existence in gravitationally bound systems are examined. Implications for self-similar gravitational clustering are also discussed.
A SIMPLE DISK-HALO MODEL FOR THE CHEMICAL EVOLUTION OF OUR GALAXY Lee, S.W.; Ann, H.B. 55 On the basis of observational constraints, particularly the relationship between metal abundance and cumulative stellar mass, a simple two-zone disk-halo model for the chemical evolution of our Galaxy was investigated, assuming different chemical processes in the disk and halo and infall rates of the halo gas defined by the halo evolution. The main results of the present model calculations are: (i) The halo formation requires more than 80% of the initial galactic mass and takes a period of $2{\sim}3{\times}10^9$ yrs. (ii) The halo evolution is divided into two phases, a fast collapse phase ($t=2{\sim}3{\times}10^8$ yrs) during which most of the halo stars (~95%) are formed, and a later slow collapse phase characterized by chemical enrichment due to the inflow of external matter to the halo. (iii) The disk evolution is also divided into two phases, an active disk formation phase with a time-dependent initial mass function (IMF) up to $t{\approx}6{\times}10^9$ yrs and a later steady, slow formation phase with a constant IMF. It is found that at the very early time $t{\approx}5{\times}10^8$ yrs, the metal abundance in the disk increases rapidly to ${\sim}1/3$ of the present value while the total stellar mass reaches only ~10% of the present value, both finally reaching about 80% of the present values toward the end of the active formation phase.
KINEMATICAL PROPERTIES OF THE SPECTRAL GROUP OF NEARBY DWARFS Lee, S.G. 73 On the basis of the recently available data, we have analysed the kinematical properties of nearby dwarfs, which are grouped by their spectral types, and derived their ages from the kinematical properties. Discontinuities in the kinematical properties are found around late F stars, which appear to be caused mainly by the fact that the spectral groups earlier than late F are rather homogeneous in age while the later ones are a mixture of two different age groups.
AN ANALYSIS OF SELECTED MOLECULAR LINES IN SUNSPOTS Lee, H.M.; Yun, H.S.; Lee, Y.B. 79 Theoretical profiles of selected rotational lines of $C_2$, CH, CN, TiO and MgH are computed by using the current models of sunspot umbrae and penumbrae. It is found that the lines of the diatomic carbides are enhanced in penumbrae relative to umbrae, while MgH lines are more strongly enhanced in umbrae than in penumbrae and the quiet photosphere. The results are discussed with respect to selecting lines suitable for studying the structure of sunspots.
TWO COMPONENT MODEL OF INITIAL MASS FUNCTION Hong, S.S. 89 Weibull analyses applied to the initial mass function (IMF) deduced by Miller and Scalo (1979) have shown that the mass dependence of the IMF has an exp$[-{\alpha}m]$ form in the low-mass range, while in the high-mass range it assumes an exp$[-{\alpha}\sqrt{m}]/\sqrt{m}$ form, with the break occurring at about the solar mass. Various astrophysical arguments are given for identifying the exp$[-{\alpha}m]$ and exp$[-{\alpha}\sqrt{m}]/\sqrt{m}$ forms with halo and disk star characteristics, respectively.
The physical conditions during the halo formation were such that low-mass stars were preferentially formed, while the conditions in the disk favoured high-mass stars. The two-component nature of the IMF is in general accord with the dichotomies in various stellar properties.
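To make the two functional forms above concrete, the sketch below evaluates a two-component IMF with an exp[−αm] low-mass branch and an exp[−α√m]/√m high-mass branch joined at one solar mass. The slope parameters and normalization are hypothetical illustration values, not fits from the paper; the high-mass normalization is chosen only so that the two branches meet continuously at the break mass.

```python
import numpy as np

def two_component_imf(m, alpha_lo=1.0, alpha_hi=1.0, m_break=1.0, c_lo=1.0):
    """Piecewise IMF (arbitrary normalization, masses in solar units):
    c_lo * exp(-alpha_lo * m)                    for m <  m_break
    c_hi * exp(-alpha_hi * sqrt(m)) / sqrt(m)    for m >= m_break
    with c_hi fixed by continuity at m_break."""
    m = np.asarray(m, dtype=float)
    c_hi = (c_lo * np.exp(-alpha_lo * m_break)
            * np.sqrt(m_break) / np.exp(-alpha_hi * np.sqrt(m_break)))
    low = c_lo * np.exp(-alpha_lo * m)
    high = c_hi * np.exp(-alpha_hi * np.sqrt(m)) / np.sqrt(m)
    return np.where(m < m_break, low, high)

masses = np.array([0.1, 0.5, 1.0, 2.0, 10.0, 50.0])   # solar masses
print(two_component_imf(masses))
```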
Quantum computational advantage with a programmable photonic processor
Lars S. Madsen, Fabian Laudenbach, Mohsen Falamarzi. Askarani, Fabien Rortais, Trevor Vincent, Jacob F. F. Bulmer, Filippo M. Miatto, Leonhard Neuhaus, Lukas G. Helt, Matthew J. Collins, Adriana E. Lita, Thomas Gerrits, Sae Woo Nam, Varun D. Vaidya, Matteo Menotti, Ish Dhand, Zachary Vernon, Nicolás Quesada & Jonathan Lavoie
Nature volume 606, pages 75–81 (2022)
A quantum computer attains computational advantage when outperforming the best classical computers running the best-known algorithms on well-defined tasks. No photonic machine offering programmability over all its quantum gates has demonstrated quantum computational advantage: previous machines1,2 were largely restricted to static gate sequences. Earlier photonic demonstrations were also vulnerable to spoofing3, in which classical heuristics produce samples, without direct simulation, lying closer to the ideal distribution than do samples from the quantum hardware. Here we report quantum computational advantage using Borealis, a photonic processor offering dynamic programmability on all gates implemented. We carry out Gaussian boson sampling4 (GBS) on 216 squeezed modes entangled with three-dimensional connectivity5, using a time-multiplexed and photon-number-resolving architecture. On average, it would take more than 9,000 years for the best available algorithms and supercomputers to produce, using exact methods, a single sample from the programmed distribution, whereas Borealis requires only 36 μs. This runtime advantage is over 50 million times as extreme as that reported from earlier photonic machines. Ours constitutes a very large GBS experiment, registering events with up to 219 photons and a mean photon number of 125. This work is a critical milestone on the path to a practical quantum computer, validating key technological features of photonics as a platform for this goal.
Only a handful of experiments have used quantum devices to carry out computational tasks that are outside the reach of present-day classical computers1,2,6,7.
In all of these, the computational task involved sampling from probability distributions that are widely believed to be exponentially hard to simulate using classical computation. One such demonstration relied on a 53-qubit programmable superconducting processor6, whereas another used a non-programmable photonic platform implementing Gaussian boson sampling (GBS) with 50 squeezed states fed into a static random 100-mode interferometer1. Both were shortly followed by larger versions, respectively enjoying more qubits7,8 and increased control over brightness and a limited set of circuit parameters2. In these examples, comparison of the duration of the quantum sampling experiment to the estimated runtime and scaling of the best-known classical algorithms placed their respective platforms within the regime of quantum computational advantage. The superconducting quantum supremacy demonstrations serve as crucial milestones on the path to full-scale quantum computation. On the other hand, the choice of technologies used in the photonic machines1,2, and their consequential lack of programmability and scalability, places them outside any current proposed roadmap for fault-tolerant photonic quantum computing9,10,11 or any GBS application12,13,14,15,16,17,18. A demonstration of photonic quantum computational advantage incorporating hardware capabilities required for the platform to progress along the road to fault-tolerance is still lacking. In photonics, time-domain multiplexing offers a comparatively hardware-efficient19 path for building fault-tolerant quantum computers, but also near-term subuniversal machines showing quantum computational advantage. By encoding quantum information in sequential pulses of light—effectively multiplexing a small number of optical channels to process information on a large number of modes20—large and highly entangled states can be processed with a relatively small number of optical components. This decouples the required component count and physical extent of the machine from the size of the quantum circuit being executed; provided device imperfections can be maintained sufficiently small, this decoupling represents a substantial advantage for scaling. Moreover, the relatively modest number of optical pathways and control components avoids many of the challenges of traditional, planar two-dimensional implementations of optical interferometers, which suffer from high complexity and burdensome parallel control requirements, especially when long-range connectivity is desired. Although attractive for scaling, hardware efficiency must not come at the cost of unnacceptably large errors. Implementations of time-domain multiplexing must therefore be tested in demanding contexts to validate their promise for building practically useful quantum computers. Using time-domain multiplexing, large one- and two-dimensional cluster states have been deterministically generated21,22,23 with programmable linear operations implemented by projective measurements24,25, whereas similar operations have been implemented in ref. 26 using a single loop with reconfigurable phase. These demonstrations leverage low-loss optical fibre for delay lines, which allows photonic quantum information to be effectively buffered. Although groundbreaking, these demonstrations have remained well outside the domain of quantum computational advantage, as they lacked non-Gaussian elements and were unable to synthesize states of sufficient complexity to evade efficient classical simulation27. 
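To give a concrete picture of how a loop-based, time-multiplexed interferometer of the kind discussed above acts on a pulse train, the following NumPy sketch builds the transfer matrix of a single fibre-loop stage: at each time bin a variable beamsplitter couples the incoming pulse to the pulse stored in the loop a fixed number of bins earlier, and cascading stages with different delays produces long-range couplings. All parameters here are illustrative placeholders (small mode count, random angles), not the settings of the device described below, and light still stored in the loop after the last bin is simply ignored, so the matrix is sub-unitary.

```python
import numpy as np

def loop_transfer_matrix(M, delay, thetas, phis):
    """Transfer matrix of one fibre-loop stage acting on M time-bin modes.

    At bin k a beamsplitter with angle thetas[k] and phase phis[k] mixes the
    fresh input pulse with the pulse injected into the loop `delay` bins ago.
    Row k of the result expresses output bin k in terms of the input bins."""
    T = np.zeros((M, M), dtype=complex)
    loop = [np.zeros(M, dtype=complex) for _ in range(delay)]  # loop contents
    for k in range(M):
        a_in = np.zeros(M, dtype=complex)
        a_in[k] = 1.0                                  # fresh pulse at bin k
        stored = loop[k % delay]                       # pulse from bin k - delay
        c, s, ph = np.cos(thetas[k]), np.sin(thetas[k]), np.exp(1j * phis[k])
        T[k] = c * a_in - ph * s * stored              # towards the output
        loop[k % delay] = np.conj(ph) * s * a_in + c * stored  # back into loop
    return T

# Three cascaded stages with delays 1, 2 and 4 bins on a 12-bin pulse train.
rng = np.random.default_rng(7)
M = 12
T_total = np.eye(M, dtype=complex)
for d in (1, 2, 4):
    T_total = loop_transfer_matrix(M, d,
                                   rng.uniform(0, np.pi / 2, M),
                                   rng.uniform(0, 2 * np.pi, M)) @ T_total
print(np.round(np.abs(T_total), 2))
```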
The demonstration of a set of hardware capabilities needed for universal fault-tolerant quantum computing, in the demanding context of quantum computational advantage, would serve as a validating signal that the corresponding technologies are advancing as needed. Yet no such demonstration is available for time-domain multiplexing. In this work, we solve technological hurdles associated with time-domain multiplexing, fast electro-optical switching, high-speed photon-number-resolving detection technology and non-classical light generation, to build a scalable and programmable Gaussian boson sampler, which we name Borealis. These features allow us to synthesize a 216-mode state with a three-dimensional entanglement topology. This is particularly notable because three-dimensional cluster states are sufficient for measurement-based fault-tolerant quantum computing28,29; although the states we synthesize are themselves not cluster states, the device can be readily programmed to generate cluster states by selecting appropriate phase and beam-splitting ratios at the loops. Borealis uses 216 independent quantum systems to achieve quantum computational advantage, placing it well beyond the capabilities of current state-of-the-art classical simulation algorithms30. Our use of photon-number-resolving detectors unlocks access to sampling events with much larger total photon number, a regime inaccessible to earlier experiments that used traditional threshold detectors. In the same vein, our use of time-domain multiplexing allows us access to more squeezed modes without increasing the physical extent or complexity of the system. In addition, its output cannot be efficiently spoofed in cross-entropy benchmarks using a generalization of the most recent polynomial-time algorithms3. We leave as an open question to the community whether better polynomial-time algorithms for spoofing can be developed. The optical circuit we implement, depicted in Fig. 1, is fully programmable, provides long-range coupling between different modes and allows all such couplings to be dynamically programmed. It implements linear-optical transformations on a train of input squeezed-light pulses, using a sequence of three variable beamsplitters (VBSs) and phase-stabilized fibre loops that act as effective buffer memory for light, allowing interference between modes that are either temporally adjacent, or separated by six or 36 time bins. This system synthesizes a programmable multimode entangled Gaussian state in a 6 MHz pulse train, which is then partially demultiplexed to 16 output channels and sampled from using photon-number-resolving detectors. Fig. 1: High-dimensional GBS from a fully programmable photonic processor. A periodic pulse train of single-mode squeezed states from a pulsed OPO enters a sequence of three dynamically programmable loop-based interferometers. Each loop contains a VBS, including a programmable phase shifter, and an optical fibre delay line. At the output of the interferometer, the Gaussian state is sent to a 1-to-16 binary switch tree (demux), which partially demultiplexes the output before readout by PNRs. The resulting detected sequence of 216 photon numbers, in approximately 36 μs, comprises one sample. The fibre delays and accompanying beamsplitters and phase shifters implement gates between both temporally adjacent and distant modes, enabling high-dimensional connectivity in the quantum circuit. 
Above each loop stage is depicted a lattice representation of the multipartite entangled Gaussian state being progressively synthesized. The first stage (τ) effects two-mode programmable gates (green edges) between nearest-neighbour modes in one dimension, whereas the second (6 τ) and third (36 τ) mediate couplings between modes separated by six and 36 time bins in the second and third dimensions (red and blue edges, respectively). Each run of the device involves the specification of 1,296 real parameters, corresponding to the sequence of settings for all VBS units. Unlike some quantum algorithms whose correct functioning on a quantum computer can be readily verified using a classical computer, it remains an open question how to verify that a GBS device is operating correctly. In what follows, we present evidence that our machine is operating correctly, that is, it samples from the GBS distribution specified by the device transfer matrix T and vector of squeezing parameters r, which together define the ground truth of the experiment. In previous experiments1,2 the results were benchmarked against a ground truth obtained from tomographic measurements of a static interferometer, whereas for Borealis, the ground truth is obtained from the quantum program specified by the user, that is the squeezing parameters and phases sent to the VBS components in the device. The transfer matrix is obtained by combining the three layers of VBSs acting over the different modes, together with common (to all modes) losses due to propagation and the finite escape efficiency of the source, as well as imperfect transmittance through the demultiplexing and detection systems; it corresponds classically (quantum mechanically) to the linear transformation connecting input and output electric fields (annihilation operators). As noted in refs. 5,31, if one were to target a universal and programmable interferometer, with depth equal to the number of modes, that covers densely the set of unitary matrices, the exponential accumulation of loss would prohibit showing a quantum advantage. There are then two ways around this no-go result: one can either give up programmability and build an ultralow loss fixed static interferometer, as implemented in refs. 1,2, or give up universality while maintaining a high degree of multimode entanglement using long-ranged gates. We first consider the regime of few modes and low photon number, in which it is possible to collect enough samples to estimate outcome probabilities, and also calculate these from the experimentally characterized lossy transmission matrix T and the experimentally obtained squeezing parameters r programmed into the device. In Fig. 2 we show the probabilities inferred from the random samples collected in the experiment and compare them against the probabilities for different samples S obtained from simulations, under the ground truth assumption. We cover the output pattern of all possible permutations \((\begin{array}{c}N+M-1\\ N\end{array})\), in which N is the number of photons, from 3 to 6, and M = 16 is the number of modes. To quantify the performance of Borealis we calculate the fidelity (F) and total variation distance (TVD) of the 3, 4, 5 and 6 total photon-number probabilities relative to the ground truth. For a particular total photon number, fidelity and TVD are, respectively, defined as \(F={\sum }_{i}\sqrt{{p}_{i}{q}_{i}}\) (also known as the Bhattacharyya coefficient) and \({\rm{TVD}}={\sum }_{i}|{p}_{i}-{q}_{i}|/2\). 
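The fidelity (Bhattacharyya coefficient) and total variation distance just defined are simple functionals of two probability vectors over the same set of outcome patterns; a minimal sketch with made-up numbers is given below.

```python
import numpy as np

def fidelity_and_tvd(p, q):
    """F = sum_i sqrt(p_i q_i) and TVD = (1/2) sum_i |p_i - q_i| for two
    probability vectors defined over the same outcome patterns."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float(np.sum(np.sqrt(p * q))), float(0.5 * np.sum(np.abs(p - q)))

# Illustrative (made-up) probabilities for a handful of outcome patterns.
p_theory = [0.40, 0.30, 0.20, 0.10]
q_experiment = [0.38, 0.31, 0.22, 0.09]
F, tvd = fidelity_and_tvd(p_theory, q_experiment)
print(f"F = {F:.4f}, TVD = {tvd:.4f}")
```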
Parameters pi and qi represent the theoretical and experimental probability of the ith output pattern, respectively, and are normalized by the probability of the respective total photon number. For the total photon-number sectors considered we find fidelities in excess of 99% and TVDs below or equal to 6.5%, thus showing that our machine is reasonably close to the ground truth in the low-N regime addressed by these data. Note that, because we are calculating all the possible probabilities with N photons, estimating outcome probabilities from the experimentally characterized transmission matrix would require us to obtain orders of magnitude more samples, beyond our current processing abilities. This limitation will lead to TVD growing as N increases and, beside the impractical computational cost, is the reason that data past N > 6 were left for subsequent benchmarks. Fig. 2: Experimental validation of the GBS device. Each panel compares experimentally obtained sample probabilities, against those calculated from the ground truth (r, T), for up to six-photon events in a 16-mode state. A total of 84.1 × 106 samples were collected and divided according to their total photon number N and further split according to the collision pattern, from no collision (no more than one photon detected per PNR) to collisions of different densities (more than one photon per PNR). The overall fidelity (F) and TVD to simulations for each photon-number event is shown below. Further analysis of TVD for classical adversaries in the 16-mode GBS instance can be found in the Supplementary Information. In an intermediate mode- and photon-number regime, we calculate the cross entropy of the samples generated by the experiment for each total photon-number sector for a high-dimensional GBS instance with M = 216 computational modes and total mean photon number \(\bar{N}=21.120\pm 0.006\). For a set of K samples \({\{{S}_{i}\}}_{i=1}^{K}\), each having a total of N photons, the cross-entropy benchmark under the ground truth given by (r, T) is $${\rm{XE}}({\{{S}_{i}\}}_{i=1}^{K})=\frac{1}{K}\mathop{\sum }\limits_{i=1}^{K}\mathrm{ln}\left(\frac{{{\rm{\Pr }}}^{(0)}({S}_{i})}{{\mathscr{N}}}\right),$$ where \({\mathscr{N}}={{\rm{\Pr }}}^{(0)}(N)/(\begin{array}{c}N+M-1\\ N\end{array})\) is a normalization constant determined by the total number of ways in which N photons can be placed in M modes and Pr(0)(N) is the probability of obtaining a total of N photons under the ground truth assumption. We then compare the average score (Fig. 3a) of the 106 samples, divided in 10,000 samples per total photon number N, generated by our machine in the cross entropy against classical adversarial spoofers that try to mimic the ground truth distribution (r, T). These adversaries are constructed with the extra constraint that they must have the same first-order (mean) photon-number cumulants as the ground truth distribution. The five adversaries considered send (1) squashed, (2) thermal, (3) coherent and (4) distinguishable squeezed light into the interferometer specified by T, or (5) use a greedy algorithm to mimic the one- and two-mode marginal distributions of the ground truth, as was used in ref. 3 to spoof earlier large GBS experiments1,2. 
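Given ground-truth probabilities for a batch of collected samples, the cross-entropy score of equation (1) reduces to an average of log-probability ratios against the normalization constant N = Pr(N)/C(N+M−1, N). The sketch below assumes those per-sample probabilities have already been computed elsewhere (for a real GBS instance this is the hard, hafnian-based step); the numbers shown are placeholders.

```python
import numpy as np
from math import comb, log

def cross_entropy_score(sample_probs, prob_total_N, N, M):
    """XE = (1/K) * sum_i ln( Pr(S_i) / norm ), following Eq. (1), with
    norm = Pr(N) / C(N + M - 1, N).

    sample_probs : ground-truth probabilities of the K samples, each with
                   N photons distributed over M modes
    prob_total_N : ground-truth probability of seeing N photons in total"""
    norm = prob_total_N / comb(N + M - 1, N)
    probs = np.asarray(sample_probs, dtype=float)
    return float(np.mean(np.log(probs) - log(norm)))

# Placeholder probabilities only; real values require hafnian computations.
probs = [2.3e-12, 8.1e-13, 5.5e-12, 1.2e-12]
print(cross_entropy_score(probs, prob_total_N=0.03, N=14, M=216))
```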
Squashed states (1) are the classical-Gaussian states with the highest fidelity to lossy-squeezed states31, that is they are optimal within the family of Gaussian states that are classical, and thus provide a more powerful adversary than thermal, coherent or distinguishable squeezed states, which were the only adversaries considered in previous photonic quantum computational advantage claims1,2. In all cases, the samples from Borealis perform significantly better than any adversary at having a high cross entropy with respect to the ground truth; equivalently, none of the adversaries are successful spoofers in this benchmark. In particular, the best-performing adversary—the greedy sampler—remains significantly below the experiment in cross-entropy, and shows no trend towards outperforming the experiment for larger N. Given the supercomputing resources and time needed to estimate all scores for N = 26 (22 h), we can extrapolate this time and estimate that it would take roughly 20 days to benchmark our data for N = 30. For this reason, and the lack of evidence that the scores may change in favour of any alternative to the ground truth, we are confident that the studied range of N = [10,26] is sufficient to rule out all classical spoofers considered, even in the regime in which it is unfeasible to perform these benchmarks. Fig. 3: Benchmarks against the ground truth. a, Cross-entropy benchmark against the ground truth. Experimental samples from a high-dimensional GBS instance of 216 modes, averaging \(\bar{N}=21.120\pm 0.006\) photons per sample, are bundled according to their total photon number N, from 10 to 26. Each point (score) corresponds to an average (equation (1)) over 10,000 samples per N. Genuine samples from the quantum hardware score higher than all classical spoofers, validating the high device fidelity with the ground truth. Error bars are standard errors of the mean. b, Bayesian log average score against the ground truth. Experimental samples from a 72-mode GBS instance and \(\bar{N}=22.416\pm 0.006\) photon number per sample. Each score is averaged over 2,000 samples with N from 10 to 26. Error bars are standard errors of the mean. All scores are above zero, including error bar, indicating that the samples generated by Borealis are closer to the ground truth than from the adversarial distribution corresponding to squashed, thermal, coherent and distinguishable squeezed spoofers. Next, we consider another test—a Bayesian method similar to that used in other GBS demonstrations1,2. For each subset of samples generated in the experiment with a given total photon number N, we calculate the ratio of the probability that a sample S could have come from the lossy ground truth specified by T and r to the probability that S came from any of the alternative spoofing hypotheses (1)–(4). 
For a particular sample Si and a particular adversary I this ratio is given by $$R^{0|I}(S_{i})=\frac{{\Pr}^{(0)}(S_{i}|N)}{{\Pr}^{(I)}(S_{i}|N)}=\frac{{\Pr}^{(0)}(S_{i})\,{\Pr}^{(I)}(N)}{{\Pr}^{(I)}(S_{i})\,{\Pr}^{(0)}(N)},$$ which allows us to form the Bayesian log average $$\Delta H_{0|I}=\frac{1}{K}\sum_{i=1}^{K}\ln R^{0|I}(S_{i}).$$ If \(\Delta {H}_{0|I} > 0\) we conclude that the samples generated by Borealis are more likely to have come from the ground truth than from the adversarial distribution corresponding to the first four spoofers (1)–(4); the greedy adversary (5) can generate samples mimicking the ground truth but there is no known expression or algorithm to obtain the 'greedy probability distribution', thus we cannot use it to generate a Bayesian score. One can see in Fig. 3b that the Bayesian log average is strictly above zero for all remaining adversaries. Finally, we consider the regime of many modes and large photon number, in which calculating the probability of even a single event using a classical computer is infeasible. In this regime we consider the first- and second-order cumulants of the photon-number distributions of 216 modes and 10⁶ samples against the lossy ground truth and the different spoofer distributions. Note that these samples are generated from the same family of unitaries as the samples generated in the intermediate regime; we only change the brightness of the squeezed input light. In Fig. 4a we plot the total photon-number probability distributions measured in the experiment, and calculated from the ground truth and different spoofers. By construction, the samples generated from each classical adversary have the same first-order cumulants (mode photon-number means) as the ground truth and thus they also have the same total mean photon number centred at \(\bar{N}=125\). Deliberately matching the first moments exactly to the ground truth ensures that we give our adversaries fair conditions to spoof our experiment. However, their second-order cumulants, defined between mode i and mode j as \({C}_{ij}=\langle {n}_{i}{n}_{j}\rangle -\langle {n}_{i}\rangle \langle {n}_{j}\rangle \) with ni the photon number in mode i, are different. We calculate the distribution of all Cij obtained experimentally and compare the result with those obtained from theoretical predictions and different adversaries, as shown in Fig. 4b. These cumulants can be calculated efficiently. Overall, it is clear that the statistics of experimental samples diverge from the adversarial hypotheses considered and agree with the ground truth of our device (as seen in the top left panel of Fig. 4b), where they cluster around the identity line at 45°. Fig. 4: Quantum computational advantage. a, Measured photon statistics of 10⁶ samples of a high-dimensional Gaussian state compared with those generated numerically from different hypotheses. The inset shows the same distribution in a log scale having significant support past 160 photons, up to 219. b, Scatter plot of two-mode cumulants Cij for all the pairs of modes comparing experimentally obtained ones versus the ones predicted by four different hypotheses. A perfect hypothesis fit would correspond to the experimentally obtained cumulants lying on the straight line at 45° (shown in the plot). Note that the ground truth is the only one that explains the cumulants well.
Moreover, to make a fair comparison all the hypotheses have exactly the same first-order cumulants (mean photon number in each mode). c, Distribution of classical simulation times for each sample from this experiment, shown as Borealis in red and for Jiuzhang 2.0 in blue2. For each sample of both experiments, we calculate the pair (Nc, G) and then construct a frequency histogram populating this two-dimensional space. Note that because the samples from Jiuzhang 2.0 are all threshold samples they have G = 2, whereas samples from Borealis, having collisions and being photon-number resolved, have G ≥ 2. Having plotted the density of samples for each experiment in (Nc, G) space, we indicate with a star the sample with the highest complexity in each experiment. For each experiment, the starred sample is at the very end of the distribution and occurs very rarely; for Jiuzhang 2.0 this falls on the line G = 2. Finally, we overlay lines of equal simulation time as given by equation (4) as a function of Nc and G. To guide the eye we also show boundaries delineating two standard deviations in the plotted distributions (dashed lines). Unlike earlier experiments1,2 in which more than half of the input ports of the interferometer are empty, in the current work every input port of the time-domain interferometer is populated with a squeezed state. This property indicates that the third- and fourth-order photon-number cumulants with no modes repeated are extremely small (≈10⁻⁶) in our ground truth. The greedy spoofer we implemented using first- and second-order cumulant information automatically produces third-order cumulants on the order of 10⁻⁵, and thus no extra gain can be attained by using a greedy algorithm with third-order correlations, as they are well explained using only single-mode and pairwise correlations. Note that the differences between the ground truth cumulants and the ones from the greedy samples are more than accounted for by finite-size statistics. For Gaussian states undergoing only common loss (including the special case of lossless GBS), it is straightforward to show that the third-order photon-number cumulants involving any three distinct modes are all strictly zero. Thus, the fact that significant third- and fourth-order cumulants are observed in refs. 1,2 is simply a reflection of the fact that most of their inputs are vacuum and that their experiment lacks photon-number resolution. The latter observation could in principle be exploited by a classical adversary to speed up the simulation of GBS with mostly vacuum inputs because strategies exist to speed up the simulation of GBS when the number of input squeezed states is fixed and is a small fraction of the total number of photons observed. These strategies use the fact that hafnians of low-rank matrices32,33 can be calculated faster than hafnians of full-rank matrices of equal size. For our system, the matrices needed for simulation are all full rank as every input is illuminated with squeezed light. Finally, note that in Fig. 4b, we do not compare against the cumulants of the greedy sampler. These are, by construction, very close to the ground truth (see details in Supplementary Information). But for the brightnesses for which one calculates cross entropy, they do not perform as well as the samples from our machine. In the experimental distribution of the total photon number in Fig. 4a, the outcome with the highest probability is N = 124.35 ± 0.02 and the distribution has significant support past 160 photons as shown in the inset.
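To make the two remaining benchmarks concrete, the sketch below shows the Bayesian log average (with prob_gt and prob_adv as hypothetical stand-ins for the sample probabilities under the ground truth and under an adversarial hypothesis) and the estimation of the second-order cumulants directly from a K × M array of photon-number samples:

import math
import numpy as np

def bayesian_log_average(samples, prob_gt, prob_adv, prob_N_gt, prob_N_adv):
    # samples: K samples sharing the same total photon number N;
    # prob_N_gt and prob_N_adv are Pr^(0)(N) and Pr^(I)(N)
    logs = [math.log(prob_gt(S) * prob_N_adv / (prob_adv(S) * prob_N_gt))
            for S in samples]
    return sum(logs) / len(logs)

def two_mode_cumulants(samples):
    # samples: K x M array; entry [k, i] is the photon number in mode i of sample k
    n = np.asarray(samples, dtype=float)
    mean = n.mean(axis=0)                  # <n_i>
    second = n.T @ n / n.shape[0]          # <n_i n_j>
    return second - np.outer(mean, mean)   # C_ij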
The best-known algorithm to simulate GBS30,34 scales with the total number of modes and the time it takes to calculate a probability amplitude of a pure-state GBS instance. Thus we can estimate the time it would take to simulate a particular sample S = (n1, …, nM) on Fugaku, currently the most powerful supercomputer in the world35, to be $${\rm{time}}(N_{c},G)=\frac{1}{2}c_{{\rm{Fugaku}}}\,M\,N_{c}^{3}\,G^{N_{c}/2},$$ where the collision parameter is \(G=\left(\prod_{i=1}^{M}(n_{i}+1)\right)^{1/N_{c}}\), ni is the number of photons in the ith mode and Nc is the number of non-zero detector outcomes. We estimate cFugaku = cNiagara/122.8 from the ratio of the LINPACK benchmark scores (a measure of a computer's floating-point rate of execution) measured on Fugaku and Niagara5, where cNiagara = 5.42 × 10⁻¹⁵ s was found, from which we get cFugaku = 4.41 × 10⁻¹⁷ s. Finally, we take M = 216 for both our system and the experiment in ref. 2. This assumption slightly overestimates the time it takes a supercomputer to simulate the experiment of ref. 2, as it has two-thirds the number of modes of the largest Borealis instance we consider, but simplifies the analysis. Equation (4) captures the collision-free complexity of the hafnian of an N × N matrix of \(O({N}_{c}^{3}{2}^{{N}_{c}/2})\) because in that case G = 2. For the purposes of sampling, a threshold detection event, which in an experiment can be caused by one or many photons, can always be assumed to have been caused by a single photon; thus threshold samples have the same complexity as in the formula above with G = 2 (ref. 30), which is quadratically faster than the estimates in refs. 1,2,36. One could hope that tensor network techniques37 could speed up the simulation of a circuit such as the one we consider here, but this possibility is ruled out in ref. 5, where it is shown that, even when giving tensor network algorithms effectively infinite memory, they require significantly more time than hafnian-based methods to calculate probability amplitudes. On the basis of these assumptions we estimate that, on average, it would take Fugaku 9,000 years to generate one sample, or 9 billion years for the million samples we collected from Borealis. Using the same assumptions, we estimate that Fugaku would require 1.5 h, on average, to generate one sample from the experiment in ref. 2, or 8,500 years for the 50 million generated in their experiment. In Fig. 4c, we plot the distribution of classical runtimes on Fugaku for each sample drawn in the experiment, and show the sample with the largest runtime as a star. We also compare to the highest-brightness experiment from Jiuzhang 2.0 (ref. 2). The regime we explore in our experiment is seven orders of magnitude harder to simulate than previous experiments and, moreover, we believe it cannot be spoofed by current state-of-the-art greedy algorithms or classical-Gaussian states in cross entropy.
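A sketch of this runtime estimate for a single photon-number-resolved sample, using the constant quoted above:

import numpy as np

C_FUGAKU = 4.41e-17  # seconds, as estimated in the text

def classical_simulation_time(sample, M=216, c=C_FUGAKU):
    # sample: photon numbers n_i, one entry per mode
    n = np.asarray(sample, dtype=float)
    Nc = np.count_nonzero(n)                     # non-zero detector outcomes
    G = np.exp(np.sum(np.log(n + 1.0)) / Nc)     # collision parameter
    return 0.5 * c * M * Nc**3 * G**(Nc / 2.0)   # estimated time in seconds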
Discussion and outlook We have successfully demonstrated quantum computational advantage in GBS using a photonic time-multiplexed machine. Unlike previous photonic devices used for such demonstrations, Borealis offers dynamic programmability over all gates used, shows true photon-number-resolved detection and requires a much more modest number of optical components and paths. Among all demonstrations of quantum computational advantage, photonic or otherwise, our machine uses the largest number of independent quantum systems: 216 squeezed modes injected into a 216-mode interferometer having three-dimensional connectivity, with up to 219 detected photons. Our demonstration is also more resistant to classical spoofing attacks than all previous photonic demonstrations, enabled by the high photon numbers and photon-number resolution implemented in the experiment. The programmability and stability of our machine enable its deployment for remote access by users wishing to encode their own gate sequences in the device. Indeed, the machine can be accessed by such users without any knowledge of the underlying hardware, a key property for exploring its use at addressing problems on structured, rather than randomized, data. Furthermore, besides demonstrating variable beam-splitting and switching (both in the loops and the demultiplexing system), the successful use in our machine of several phase-stabilized fibre loops to act as effective buffer memory for quantum modes is a strong statement on the viability of this technique, which is a requirement in many proposed architectures for fault-tolerant photonic quantum computers9,10,11,38. Our demonstration thus marks a significant advance in photonic technology for quantum computing. Optical circuit The input of the interferometer is provided by a single optical parametric oscillator (OPO), emitting pulsed single-mode squeezed states at a 6 MHz rate that are then sent to three concatenated, programmable, loop-based interferometers. Each loop contains a VBS, including a programmable phase shifter, and an optical fibre delay line acting as a buffer memory for light, and allows for the interference of modes that are temporally adjacent (τ = (6 MHz)⁻¹), or separated by six or 36 time bins (6 τ or 36 τ), in the first, second and third loop, respectively. Optical delays provide a compact and elegant method to mediate short- and long-range couplings between modes. The high-dimensional Gaussian state generated for this experiment can be visualized, as depicted above the three loops in Fig. 1, using a three-dimensional lattice representation. Given a lattice of size a = 6, where a is the number of modes separating two interacting pulses in the second loop, one can form a cubic lattice by injecting M = a³ = 216 squeezed-light pulses into the interferometer. Owing to the use of a single time-multiplexed squeezed-light source, all temporal modes are, to very good approximation, indistinguishable in all degrees of freedom except time signature, and passively phase-locked with respect to each other; the squeezer is driven by pump pulses engineered to generate nearly single-temporal-mode squeezed-light pulses on a 6 MHz clock. Spatial overlap is ensured by using single-mode fibre coupling at the entrance and exit of each loop delay, and samples are collected using an array of photon-number resolving (PNR) detectors based on superconducting transition-edge sensors (TES) with 95% detection efficiency39,40. These samples consist of 216 (integer) photon-number measurement outcomes for as many modes. To bridge the gap between the 6 MHz clock, chosen to maintain manageable fibre loop lengths, and the slower relaxation time of the TES detectors, a 1-to-16 binary-tree switching network was used to partially demultiplex the pulse train after the loops and before the detectors.
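One convenient way to picture this lattice, assuming modes are simply labelled by their time-bin index, is sketched below: the three loops couple modes separated by 1, a and a² time bins, that is, nearest neighbours along the three lattice dimensions.

def lattice_coordinates(i, a=6):
    # map time-bin index i (0 .. a**3 - 1) to coordinates on the a x a x a lattice;
    # loop 1 couples i and i+1, loop 2 couples i and i+a, loop 3 couples i and i+a**2
    return (i % a, (i // a) % a, i // a**2)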
Experimental challenges Despite the simple conceptual design of Borealis (Fig. 1), building a machine capable of delivering quantum computational advantage in a programmable fashion using photonics, in a large photon-number regime, required solving considerable technological hurdles that were previously outstanding. These include: (1) lack of PNR-compatible single-mode squeezed-light sources and non-invasive phase stabilization techniques requiring bright laser beams, (2) slow PNR reset times that would necessitate unfeasibly long fibre loops and (3) lack of sufficiently fast and low-loss electro-optic modulators (EOMs) preventing programmability. Our solutions to these challenges for this work are, respectively, (1) the design of a bright and tunable source of single-mode squeezed states and phase stabilization techniques (OPO and interferometer) using locking schemes compatible with PNR detectors, (2) active demultiplexing to increase the effective rate of PNR acquisition by a factor of 60, compared to previous systems40, by constructing a low-loss 1-to-16 binary switch tree and developing new photon-number extraction techniques and (3) the use of new, efficient and fast customized EOMs (QUBIG GmbH) that enable arbitrary dynamic programming of photonic gates with low loss and high speeds. The success of this experiment also relies on a robust calibration routine, accurately extracting all experimental parameters contained in the transfer matrix T and the squeezing parameters r that define each GBS instance. We describe each of these advances in the following sections. Other details pertinent to the apparatus can be found in the Supplementary Information. With further fabrication and device optimization, the raw operational speed of PNR detectors can be increased, eliminating the need for the demultiplexer (demux) and associated losses (roughly 15%). Improvements to the filter stack (20% loss) would also considerably increase performance. Several paths thus exist to even further increase the robustness of our machine against hypothetical improved classical adversaries. In addition, in trial runs we have extended the number of accessible modes to 288 (see Supplementary Information) without any changes to the physical architecture, and expect further scalability in this number to be readily achievable by improving the long-time stabilization of the device. Such scaling will place the device even further ahead of the regime of classical simulability and potential vulnerability to spoofing. For applications requiring a universal interferometer, a recirculation loop long enough to accommodate all 216 modes could be implemented41, replacing any two of the three existing loops. The remaining existing loop would be nested in the larger 216-mode loop, allowing repeated application of the remaining VBS to all 216 modes, albeit at the cost of higher losses. Pulsed squeezed-light source The main laser is an ultralow phase noise fibre laser with a sub-100 Hz linewidth centred at 1,550 nm, branched out into different paths. To prepare the pump, in one path pulses are carved using a 4 GHz lithium niobate electro-optic intensity modulator. It is then amplified and upconverted to 775 nm using a fibre-coupled MgO:LN ridge waveguide. The resulting pump is a 6 MHz stream of 3-ns-duration rectangular pulses with an average power of 3.7 mW. 
Squeezed-light pulses are generated in a doubly resonant, phase-stabilized hemilithic cavity42 comprising a 10-mm-long plano-convex potassium titanyl phosphate crystal with its temperature stabilized at 32.90 °C using a Peltier element, for optimal Type-0 phase matching (Supplementary Information). All spectral side bands of the OPO cavity, around the degenerate frequency band, are suppressed by more than 25 dB using a pair of fibre Bragg gratings (0.04 nm bandwidth at 0.5 dB), one in reflection and the other in transmission (more details in Supplementary Information). Programmable photonic processor A train of single-mode squeezed vacuum pulses is emitted by the OPO, coupled into a single-mode fibre and directed towards the programmable photonic processor consisting of three loop-based interferometers in series, as shown in Fig. 1. Each loop \({\ell }=0,1,2\) is characterized by a VBS with transfer matrix $$BS^{\ell}(\alpha_{k},\varphi_{k})=\begin{pmatrix}e^{i\varphi_{k}}\cos\alpha_{k} & i\sqrt{\eta_{\ell}}\,e^{i\mu_{\ell}}\sin\alpha_{k}\\ ie^{i\varphi_{k}}\sin\alpha_{k} & \sqrt{\eta_{\ell}}\,e^{i\mu_{\ell}}\cos\alpha_{k}\end{pmatrix},$$ where each phase ϕk ∈ [−π/2, π/2] and αk ∈ [0, π/2] can be programmed independently, \({\mu }_{{\ell }}\) is a phase offset associated with each loop and \({\eta }_{{\ell }}\) is the energy transmittance coefficient associated with one complete circulation in loop \({\ell }\). The time delay experienced in the first loop, τ = 1/(6 MHz), equals the delay between two consecutive squeezed-light pulses, whereas the second and third loops have 6 τ and 36 τ time delays, respectively. The transmittance tk of a VBS with parameter αk is given by tk = cos²αk. For tk = 1 all the incoming light is directed into the fibre delay, whereas the light entering the VBS from the fibre delay is fully coupled out. The output of the last loop is coupled into a single-mode fibre and directed towards the final sampling stage of the experiment. All three loops are independently phase stabilized using a counter-propagating laser beam, piezo transducers and lock-in techniques. To avoid stray light from reflections of this beam towards the detectors, we alternate between measurement (65 μs) and phase stabilization of the loops (35 μs), leading to a sampling rate of 10 kHz. The estimated phase noise (standard deviation from the mean) inside the interferometer is 0.02, 0.03 and 0.15 rad for the first, second and third loops, respectively, as measured with classical pulses. We carefully reduced mode mismatch throughout the entire interferometer: spatial overlap is ensured using single-mode fibres, with coupling efficiencies >97%, and the length of each loop delay is carefully adjusted to have >80% classical visibility between 250-ps-long classical pulses, which gives >99% temporal overlap for the squeezed states. The programmable time-domain multiplexed architecture implemented here and introduced in ref. 5 generates sufficiently connected transmission matrices (in which two-thirds of the entries of the matrix are non-zero) to furnish a high level of entanglement between the modes (we estimate the log negativity between modes 0…i−1 and i…216 for the ground truth to be on average 5.96 for \(i\in \{36,72,108,144,180\}\)), while keeping losses sufficiently low (with transmission above 33%).
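A direct transcription of this 2 × 2 transfer matrix as a small Python helper (a sketch, taking the loop round-trip transmittance and phase offset as known calibration inputs):

import numpy as np

def vbs_matrix(alpha, phi, eta_loop, mu_loop):
    # 2 x 2 VBS transfer matrix, transcribing the equation above
    loop = np.sqrt(eta_loop) * np.exp(1j * mu_loop)
    return np.array([
        [np.exp(1j * phi) * np.cos(alpha),      1j * loop * np.sin(alpha)],
        [1j * np.exp(1j * phi) * np.sin(alpha),       loop * np.cos(alpha)],
    ])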
This combination of connectivity and low loss is not available in other architectures, in which one either has to give up programmability1,2 or suffer steep losses that, in the asymptotic limit of many modes, render the sampling task approximately classically simulable as the loss scales exponentially with the system size31. In a universal programmable interferometer each mode passes through a number of lossy components (each with transmission ηunit) proportional to the number of modes. For the interferometers considered here, each mode sees a fixed number (six) of beamsplitters, and the loss is dominated by the transmission of the largest loop. If the shortest loop, which accommodates only one mode, has transmission ηunit then the largest loss is given by \({\eta }_{{\rm{unit}}}^{36}\), which should be contrasted with \({\eta }_{{\rm{unit}}}^{216}\) for a universal interferometer. Although we sacrifice some connectivity, the many-mode entanglement predicted in our ground truth (logarithmic negativity43 of 6.08 when splitting the modes of the ground truth between the first and last 108) is comparable to that found in a Gaussian state prepared using a Haar-random interferometer with a comparable net transmission and brightness (for which the logarithmic negativity across the same bipartition is 15.22). For the largest experiment considered below, the net transmittance is around 33%. As discussed in the Methods, this, combined with the high brightness of our source averaging r ≈ 1.1, places our experiment well beyond any attempt at a currently known polynomial-time approximate classical simulation31. Sampling of high-dimensional GBS instances All temporal modes of our synthesized high-dimensional Gaussian states are sampled using superconducting TESs, allowing photon-number resolution of up to 13 photons per detector in our data. The relaxation time of our TESs, back to baseline following illumination, is of the order of 10 to 20 μs, corresponding to 50–100 kHz (ref. 40), and depends on the expected photon number. At this speed, the length of the shortest loop delay would be 2 km, leading to excessive losses and more challenging phase stabilization in our system. Our pulse-train and processing speed of 6 MHz, chosen to maintain manageable loop lengths, is thus too fast for reliable photon-number extraction. To bridge the gap between the typical PNR speed and our processing speed, we use a demultiplexing device providing an effective 16× speed-up, and develop a postprocessing 'tail-subtraction' scheme, described below, enabling operation of each PNR at 375 kHz. The role of the demux, depicted as a binary tree in Fig. 1, is to reroute squeezed-light pulse modes from the incoming train into 16 separate and independent spatial modes, each containing a fibre-coupled PNR-TES detector. There are 15 low-loss resonant EOMs grouped into four layers. EOMs in each layer have a preset frequency: one at 3 MHz, two at 1.5 MHz, four at 750 kHz and eight at 375 kHz. Each EOM is sandwiched between two polarizing beamsplitters, with a quarter-waveplate at 45° in front. The modulators are driven by a standalone unit, generating several phase-locked sine wave signals temporally synchronized with the input train. The switching extinction ratio is measured to be above 200:1 for all modulators. Several methods have been demonstrated to extract photon numbers from a PNR's output voltage waveform, each with their own advantages44,45,46,47. Here we use a modified version of the method presented in ref. 47.
First, each detector is calibrated using well-separated pulses of squeezed light with a high mean photon number (⟨n⟩ ≈ 1) and 500 × 10³ repetitions. This gives enough high photon-number events to ensure that at least the 0 to 11 photon clusters can be identified using the area method. From each cluster, the mean shape of the waveforms is defined. To extract the photon-number arrays from the experiment, the mean square distance between each waveform and the mean shape is estimated. The photon number is then assigned to the closest cluster. Because we operate the individual PNRs at 375 kHz, faster than the relaxation time (back to baseline following illumination), the tail of each pulse still persists when the next pulse arrives at the same PNR. To avoid these tails reducing the photon-number extraction fidelity in a pulse, the mean shape for the identified previous photon number is subtracted. See Supplementary Information for details. Estimation of the ground truth parameters Given that all the squeezed states come from the same squeezer, and given the programmability of our system, we can parametrize and characterize the loss budget of our system using a very small set of parameters. The first set of parameters corresponds to the relative efficiencies of the 16 different demux-detector channels, ηdemux,i for \(i\in \{0,1,\ldots ,15\}\). The second parameter is simply the common transmittance ηC. Finally, we have the transmittance associated with a round-trip through each loop, ηk for \(k\in \{0,1,2\}\). To characterize the first two parameter sets, namely the demux and common loss, we set all the loops to a 'bar' state (αk = π/2), preventing any light from entering the delays. As the input energy is the same, we can simply estimate the ratio of the transmittances of the different demux-detector channels as \({\eta }_{{\rm{demux}},i}/{\eta }_{{\rm{demux}},j}={\bar{n}}_{i}/{\bar{n}}_{j}\), where \({\bar{n}}_{j}\) is the mean photon number measured in detector j. Without loss of generality, we can take the largest of the ηdemux,i to be equal to one and assign any absolute loss from this and any other channel into the common loss ηC. To determine the common loss, we use the noise reduction factor (NRF), defined as48,49 $${\rm{NRF}}=\frac{{\Delta }^{2}({n}_{i}-{n}_{j})}{\langle {n}_{i}+{n}_{j}\rangle },$$ where ni and nj are the photon-number random variables measured in modes i and j, and we write variances as \({\Delta }^{2}X=\langle X^{2}\rangle -{\langle X\rangle }^{2}\). If losses can be considered as uniform, which is an excellent approximation if we use only the loop with the shortest delay, it is straightforward to show that the NRF of a two-mode squeezed vacuum directly gives the loss seen by the two modes as NRFTMSV = 1 − η. To prepare the two-mode squeezed vacuum we set our VBS matrix to be proportional to \(\begin{pmatrix}1 & i\\ i & 1\end{pmatrix}\) when the two single-mode squeezed pulses meet at the beamsplitter. To this end, we use the sequence [t0 = 0, t1 = 1/2, t2 = 0], where, recall, we write ti = cos²αi to indicate the transmittance of a particular loop time bin i. We can now scan the controllable phase of the VBS, ϕk, and determine where the minimum occurs \(({\varphi }_{k}^{{\rm{\min }}}={\mu }_{0}\,{\rm{mod}}\,\pi )\), which at the same time provides the relative phase offset of the first loop and the net transmittance of the setup. This observation can be used to obtain the phase offset of any other loop round-trip.
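A minimal sketch of this estimator, assuming arrays of photon numbers recorded in the two output modes over repeated shots:

import numpy as np

def nrf(ni, nj):
    # noise reduction factor: Var(n_i - n_j) / <n_i + n_j>
    ni, nj = np.asarray(ni, float), np.asarray(nj, float)
    return np.var(ni - nj) / np.mean(ni + nj)

def total_transmittance(ni, nj):
    # for (approximately) uniform loss on a two-mode squeezed vacuum, NRF = 1 - eta
    return 1.0 - nrf(ni, nj)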
Although in the current version of our system these phase offsets are set by the locking system, they can in principle also be made programmable. The transmittance η = 1 − NRFTMSV = ηC × η0 × ηdemux is the product of the common transmittance ηC, the round-trip transmittance of the first loop η0 and the average transmittance of the two demux-detector channels used to detect the two halves of the two-mode squeezed vacuum, \({\eta }_{{\rm{demux}}}=\frac{1}{2}({\eta }_{{\rm{demux}},i}+{\eta }_{{\rm{demux}},j})\). From this relation, we can find $${\eta }_{C}=\frac{1-{{\rm{NRF}}}_{{\rm{TMSV}}}}{{\eta }_{0}\times {\eta }_{{\rm{demux}}}}.$$ This calibration depends on knowing the value of the round-trip transmittance factor associated with the first loop. To estimate the round-trip transmittance of a particular loop \({\ell }\), we bypass the other loop delays and compare the amount of light detected when light undergoes a round-trip through that loop relative to when all the round-trip channels are closed, that is, all loops in a 'bar' state. We obtain \({\eta }_{{\ell }}\), which we can then plug into equation (7) to complete the calibration sequence. Finally, having characterized the loss budget in the experiment, we can obtain the brightness and squeezing parameters at the source by measuring photon numbers when all the loops are closed and then dividing by the net transmittance. For any of the three regimes considered in the main text the standard deviation of the estimated squeezing parameters and mean photon numbers is below 1% of the respective means. From the same data acquired above for a pair of modes, we calculate the unheralded second-order correlation $${g}^{(2)}=\frac{\langle {n}_{i}^{2}\rangle -\langle {n}_{i}\rangle }{{\langle {n}_{i}\rangle }^{2}}$$ for each pair of temporal modes. When we attain the minimum NRF at ϕk = μ0, that is, when we prepare two-mode squeezed vacuum, it is easy to see that50 $${g}^{(2)}=1+\frac{1}{K},$$ where K is the so-called Schmidt number of the source. This quantifies the amount of spectral mixedness in the generated squeezed light. An ideal squeezed vacuum light source (K = 1) would yield g(2) = 2. We report K = 1.12 for g(2) = 1.89 for the dataset used in the large mode- and photon-number regime. Theory sections Transfer matrix, T The loop-based interferometer, as well as any other interferometer, can be described by a transfer matrix T that uniquely specifies the transformation effected on the input light. For our GBS implementation, this interferometer is obtained by combining three layers of phase gates and beamsplitters (two-mode gates), interfering modes that are contiguous, or separated by six or 36 time bins, which we write as $$T=\sqrt{{\eta }_{C}}\,T_{{\rm{demux}}}\left[\prod_{d=0}^{D-1}\prod_{i=0}^{M-a^{d}}B_{i,i+a^{d}}\!\left({\rm{VBS}}^{d}(\alpha_{i},\varphi_{i})\right)\right],$$ where in our case D = 3 gives the number of loops, while \({a}^{d}{|}_{d\in \{0,1,2\}}=\{1, 6, 36\}\) with a = 6 gives the number of modes that each loop can hold. \({B}_{i,i+{a}^{d}}({\rm{VBS}})\) is an M × M transfer matrix that acts like the VBS in the subspace of modes i and \(j = i + a^{d}\) and like the identity elsewhere. In the last equation, ηC is the common transmittance throughout the interferometer associated with the escape efficiency of the squeezer cavity and the propagation loss in common elements.
Tdemux is a diagonal matrix that contains the square roots of the energy transmittance into which any of the modes are rerouted for measurement using the demux. Because the demux has 16 channels, it holds that \({({T}_{{\rm{demux}}})}_{i,i}={({T}_{{\rm{demux}}})}_{i+16,i+16}=\sqrt{{\eta }_{{\rm{demux}},i}}\). Finally, we set the phases of the VBS to be uniformly distributed in the range [−π/2, π/2] and the transmittances to be uniformly in the range [0.45, 0.55]. This range highlights the programmability of the device while also generating high degrees of entanglement that are typically achieved when the transmittance is half. In the idealized limit of a lossless interferometer, the matrix representing it is unitary, otherwise the matrix T is subunitary (meaning its singular values are bounded by 1). The matrix T together with the input squeezing parameters r defines a GBS instance. Squeezed states interfered in an interferometer (lossy or lossless) always lead to a Gaussian state, that is, one that has a Gaussian Wigner function. Moreover, loss is never able to map a non-classical state (having noise in a quadrature below the vacuum level) to a classical state. Thus there exists a finite separation in Hilbert space between lossy-squeezed states and classical states. To gauge this separation, and how it influences sampling, we use the results from ref. 31 to show in the section 'Regimes of classical simulability' that the probability distribution associated with the ground truth programmed into the device cannot be well-approximated by any classical-Gaussian state. Similar to previous GBS experiments in which the ground truth to which a quantum computer is compared contains imperfections due to loss, we also benchmark our machine against the operation of a lossy unitary. In this more realistic scenario in which losses are included, the state generated at the output cannot be described by a state vector and thus one cannot assign probability amplitudes to an event. In this case, probabilities are calculated from the density matrix of the Gaussian state using the standard Born rule and then the probability of an N photon event is proportional to the hafnian of a 2N × 2N matrix. Regimes of classical simulability As a necessary but not sufficient test for beyond-classical capabilities of our machine, we consider the GBS test introduced in ref. 31. This test states that a noisy GBS device can be classically efficiently simulated up to error ϵ if the following condition is satisfied: $$\text{sec}{\rm{h}}\left\{\frac{1}{2}\,\max \left[0,\,\mathrm{ln}\,\frac{1-2{q}_{D}}{\eta {e}^{-2r}+1-\eta }\right]\right\} > {e}^{-\frac{{{\epsilon }}^{2}}{4M}}.$$ Here qD is the dark count probability of the detectors, η is the overall transmittance of the interferometer, r is the squeezing parameter of the M input squeezed states (assumed to be identical) and ϵ is a bound in the TVD of the photon-number probability distributions of GBS instance and the classical adversary. For our experiment, we estimate an average transmittance of η = Tr(TT†)/M = 0.32, qD = 10−3, an average squeezing parameter of r = 1.10 and M is the total number of modes. With these parameters we find that the inequality above has no solution for \({\epsilon }\in [0,1]\), meaning that our machine passes this non-classicality test. 
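A sketch of this check with the numbers quoted above (η = 0.32, r = 1.10, qD = 10⁻³, M = 216); the device passes the test when the inequality has no solution for ϵ in [0, 1]:

import numpy as np

def passes_nonclassicality_test(eta, r, q_dark, M):
    lhs = 1.0 / np.cosh(0.5 * max(0.0, np.log((1.0 - 2.0 * q_dark) /
                                              (eta * np.exp(-2.0 * r) + 1.0 - eta))))
    eps = np.linspace(0.0, 1.0, 1001)
    rhs = np.exp(-eps**2 / (4.0 * M))
    # classically simulable only if lhs > rhs for some eps in [0, 1]
    return not np.any(lhs > rhs)

print(passes_nonclassicality_test(eta=0.32, r=1.10, q_dark=1e-3, M=216))  # True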
Greedy adversarial spoofer The greedy adversarial spoofer tries to mimic the low order correlations of the distribution and takes as input the k order, \(k\in \{1,2\}\), marginal distributions and optimizes a set of samples (represented as an array of size M × K) so as to minimize the distance between the marginals associated with this array and the ones associated with the ground truth. In a recent preprint Villalonga et al.3 argue that, using a greedy algorithm such as the one just described, they can obtain a better score at the cross-entropy benchmark against the ground truth of the experiment in refs. 1,2 than the samples generated in the same experiment. We generalized the greedy algorithm introduced by Villalonga et al.3 to work with photon-number-resolved samples and find that it is unable to spoof the samples generated by our machine at the cross-entropy benchmark that we use for scoring the different adversaries. Details of the algorithm are provided in the Supplementary Information. The datasets generated and analysed for this study are available from this link: https://github.com/XanaduAI/xanadu-qca-data. Zhong, H.-S. et al. Quantum computational advantage using photons. Science 370, 1460–1463 (2020). Zhong, H.-S. et al. Phase-programmable Gaussian boson sampling using stimulated squeezed light. Phys. Rev. Lett. 127, 180502 (2021). Villalonga, B. et al. Efficient approximation of experimental Gaussian boson sampling. Preprint at https://arxiv.org/abs/2109.11525 (2021). Hamilton, C. S. et al. Gaussian boson sampling. Phys. Rev. Lett. 119, 170501 (2017). Article ADS PubMed Google Scholar Deshpande, A. et al. Quantum computational advantage via high-dimensional gaussian boson sampling. Sci. Adv. 8, eabi7894 (2022). Arute, F. et al. Quantum supremacy using a programmable superconducting processor. Nature 574, 505–510 (2019). Wu, Y. et al. Strong quantum computational advantage using a superconducting quantum processor. Phys. Rev. Lett. 127, 180501 (2021). Zhu, Q. et al. Quantum computational advantage via 60-qubit 24-cycle random circuit sampling. Sci. Bull. 67, 240–245 (2022). Bourassa, J. E. et al. Blueprint for a scalable photonic fault-tolerant quantum computer. Quantum 5, 392 (2021). Bartolucci, S. et al. Fusion-based quantum computation. Preprint at https://arxiv.org/abs/2101.09310 (2021). Larsen, M. V., Chamberland, C., Noh, K., Neergaard-Nielsen, J. S. & Andersen, U. L. Fault-tolerant continuous-variable measurement-based quantum computation architecture. PRX Quantum 2, 030325 (2021). Bromley, T. R. et al. Applications of near-term photonic quantum computers: software and algorithms. Quantum Sci. Technol. 5, 034010 (2020). Huh, J., Guerreschi, G. G., Peropadre, B., McClean, J. R. & Aspuru-Guzik, A. Boson sampling for molecular vibronic spectra. Nat. Photonics 9, 615–620 (2015). Arrazola, J. M. & Bromley, T. R. Using Gaussian boson sampling to find dense subgraphs. Phys. Rev. Lett. 121, 030503 (2018). Banchi, L., Fingerhuth, M., Babej, T., Ing, C. & Arrazola, J. M. Molecular docking with Gaussian boson sampling. Sci. Adv. 6, eaax1950 (2020). Jahangiri, S., Arrazola, J. M., Quesada, N. & Killoran, N. Point processes with Gaussian boson sampling. Phys. Rev. E 101, 022134 (2020). Jahangiri, S., Arrazola, J. M., Quesada, N. & Delgado, A. Quantum algorithm for simulating molecular vibrational excitations. Phys. Chem. Chem. Phys. 22, 25528–25537 (2020). Banchi, L., Quesada, N. & Arrazola, J. M. Training Gaussian boson sampling distributions. Phys. Rev. A 102, 012417 (2020). 
Article ADS MathSciNet CAS Google Scholar Takeda, S. & Furusawa, A. Toward large-scale fault-tolerant universal photonic quantum computing. APL Photonics 4, 060902 (2019). Motes, K. R., Gilchrist, A., Dowling, J. P. & Rohde, P. P. Scalable boson sampling with time-bin encoding using a loop-based architecture. Phys. Rev. Lett. 113, 120501 (2014). Yoshikawa, J.-i et al. Invited article: generation of one-million-mode continuous-variable cluster state by unlimited time-domain multiplexing. APL Photonics 1, 060801 (2016). Larsen, M. V., Guo, X., Breum, C. R., Neergaard-Nielsen, J. S. & Andersen, U. L. Deterministic generation of a two-dimensional cluster state. Science 366, 369–372 (2019). Article ADS MathSciNet CAS PubMed MATH Google Scholar Asavanant, W. et al. Generation of time-domain-multiplexed two-dimensional cluster state. Science 366, 373–376 (2019). Article ADS MathSciNet CAS PubMed Google Scholar Asavanant, W. et al. Time-domain-multiplexed measurement-based quantum operations with 25-MHz clock frequency. Phys. Rev. Appl. 16, 034005 (2021). Larsen, M. V., Guo, X., Breum, C. R., Neergaard-Nielsen, J. S. & Andersen, U. L. Deterministic multi-mode gates on a scalable photonic quantum computing platform. Nat. Phys. 17, 1018–1023 (2021). Enomoto, Y., Yonezu, K., Mitsuhashi, Y., Takase, K. & Takeda, S. Programmable and sequential Gaussian gates in a loop-based single-mode photonic quantum processor. Sci. Adv. 7, eabj6624 (2021). Bartlett, S. D., Sanders, B. C., Braunstein, S. L. & Nemoto, K. Efficient classical simulation of continuous variable quantum information processes. Phys. Rev. Lett. 88, 097904 (2002). Raussendorf, R., Harrington, J. & Goyal, K. A fault-tolerant one-way quantum computer. Ann. Phys. 321, 2242–2270 (2006). Article ADS MathSciNet CAS MATH Google Scholar Raussendorf, R., Harrington, J. & Goyal, K. Topological fault-tolerance in cluster state quantum computation. New J. Phys. 9, 199 (2007). Bulmer, J. F. et al. The boundary for quantum advantage in Gaussian boson sampling. Sci. Adv. 8, eabl9236 (2021). Qi, H., Brod, D. J., Quesada, N. & García-Patrón, R. Regimes of classical simulability for noisy Gaussian boson sampling. Phys. Rev. Lett. 124, 100502 (2020). Björklund, A., Gupt, B. & Quesada, N. A faster Hafnian formula for complex matrices and its benchmarking on a supercomputer. J. Exp. Algor. 24, 11 (2019). MathSciNet Google Scholar Gupt, B., Izaac, J. & Quesada, N. The walrus: a library for the calculation of Hafnians, hermite polynomials and Gaussian boson sampling. J. Open Source Softw. 4, 1705 (2019). Quesada, N. et al. Quadratic speed-up for simulating gaussian boson sampling. PRX Quantum 3, 010306 (2022). 56th edition of the top 500 Top 500 the List https://www.top500.org/lists/top500/2020/11/ (2020). Li, Y. et al. Benchmarking 50-photon Gaussian boson sampling on the sunway taihulight. IEEE Trans. Parallel Distrib. Syst. 33, 1357-1372 (2021). Gray, J. & Kourtis, S. Hyper-optimized tensor network contraction. Quantum 5, 410 (2021). Rohde, P. P. Simple scheme for universal linear-optics quantum computing with constant experimental complexity using fiber loops. Phys. Rev. A 91, 012306 (2015). Lita, A. E., Miller, A. J. & Nam, S. W. Counting near-infrared single-photons with 95% efficiency. Opt. Express 16, 3032–3040 (2008). Arrazola, J. M. et al. Quantum circuits with many photons on a programmable nanophotonic chip. Nature 591, 54–60 (2021). Qi, H., Helt, L. G., Su, D., Vernon, Z. & Brádler, K. 
Linear multiport photonic interferometers: loss analysis of temporally-encoded architectures. Preprint at https://arxiv.org/abs/1812.07015 (2018). Mehmet, M. et al. Squeezed light at 1550 nm with a quantum noise reduction of 12.3 dB. Opt. Express 19, 25763–25772 (2011). Weedbrook, C. et al. Gaussian quantum information. Rev. Mod. Phys. 84, 621 (2012). Figueroa-Feliciano, E. et al. Optimal filter analysis of energy-dependent pulse shapes and its application to tes detectors. Nucl. Instrum. Methods Phys. Res., Sect. A 444, 453–456 (2000). Humphreys, P. C. et al. Tomography of photon-number resolving continuous-output detectors. New J. Phys. 17, 103044 (2015). Morais, L. A. et al. Precisely determining photon-number in real-time. Preprint at https://arxiv.org/abs/2012.10158 (2020). Levine, Z. H. et al. Algorithm for finding clusters with a known distribution and its application to photon-number resolution using a superconducting transition-edge sensor. J. Opt. Soc. Am. B. 29, 2066–2073 (2012). Harder, G. et al. Single-mode parametric-down-conversion states with 50 photons as a source for mesoscopic quantum optics. Phys. Rev. Lett. 116, 143601 (2016). Aytür, O. & Kumar, P. Pulsed twin beams of light. Phys. Rev. Lett. 65, 1551 (1990). Christ, A., Laiho, K., Eckstein, A., Cassemiro, K. N. & Silberhorn, C. Probing multimode squeezing with correlation functions. New J. Phys. 13, 033027 (2011). Article ADS MATH Google Scholar We thank J. M. Arrazola and M. V. Larsen for providing feedback on the manuscript, S. Fayer and D. Phillips for assistance with the PNR detectors, M. Seymour and J. Hundal for assistance with data acquisition code, D.H. Mahler for helpful discussions, K. Brádler for guidance and A. Fumagalli for assistance with software. N.Q. thanks H. Qi, A. Deshpande, A. Mehta, B. Fefferman, S. S. Nezhadi and B. A. Bell for discussions. We thank SOSCIP for their computational resources and financial support. We acknowledge the computational resources and support from SciNet. SciNet is supported by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund: Research Excellence and the University of Toronto. SOSCIP is supported by the Federal Economic Development Agency of Southern Ontario, IBM Canada Ltd and Ontario academic member institutions. Certain commercial equipment, instruments or materials are identified in this paper to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose. These authors contributed equally: L. S. Madsen, F. Laudenbach, M. F. Askarani Xanadu, Toronto, ON, Canada Lars S. Madsen, Fabian Laudenbach, Mohsen Falamarzi. Askarani, Fabien Rortais, Trevor Vincent, Jacob F. F. Bulmer, Filippo M. Miatto, Leonhard Neuhaus, Lukas G. Helt, Matthew J. Collins, Varun D. Vaidya, Matteo Menotti, Ish Dhand, Zachary Vernon, Nicolás Quesada & Jonathan Lavoie National Institute of Standards and Technology, Boulder, CO, USA Adriana E. Lita, Thomas Gerrits & Sae Woo Nam Lars S. Madsen Fabian Laudenbach Mohsen Falamarzi. Askarani Fabien Rortais Trevor Vincent Jacob F. F. Bulmer Filippo M. Miatto Leonhard Neuhaus Lukas G. Helt Matthew J. Collins Adriana E. Lita Thomas Gerrits Sae Woo Nam Varun D. Vaidya Matteo Menotti Ish Dhand Zachary Vernon Nicolás Quesada Jonathan Lavoie L.S.M., M.F.A. and J.L. designed and built the experiment. F.L. 
developed the software stack for programmable hardware and data analysis with L.G.H. and L.N. F.R., M.J.C., T.G., A.E.L. and S.W.N. developed and built the PNR detector system. T.V. carried out high-performance computations and generated plots for the manuscript. J.F.F.B., I.D. and N.Q. provided guidance on theory, approach and benchmarking. F.M.M. implemented the greedy sampler algorithm. V.D.V. and M.M. designed and simulated the squeezed-light source. N.Q. and J.L. led the project, and cowrote the manuscript with Z.V., with input from all authors. Correspondence to Nicolás Quesada or Jonathan Lavoie. Nature thanks Sergio Boixo and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Supplementary Information. Madsen, L.S., Laudenbach, F., Askarani, M.F. et al. Quantum computational advantage with a programmable photonic processor. Nature 606, 75–81 (2022). https://doi.org/10.1038/s41586-022-04725-x Issue Date: 02 June 2022
Visual Computing for Industry, Biomedicine, and Art Computational aesthetics and applications Yihang Bo1, Jinhui Yu2 & Kang Zhang3 Visual Computing for Industry, Biomedicine, and Art volume 1, Article number: 6 (2018) Cite this article Computational aesthetics, which bridges science and art, is emerging as a new interdisciplinary field. This paper concentrates on two main aspects of computational aesthetics: aesthetic measurement and quantification, generative art, and then proposes a design generation framework. On aesthetic measurement and quantification, we review different types of features used in measurement, the currently used evaluation methods, and their applications. On generative art, we focus on both fractal art and abstract paintings modeled on well-known artists' styles. In general, computational aesthetics exploits computational methods for aesthetic expressions. In other words, it enables computer to appraise beauty and ugliness and also automatically generate aesthetic images. Computational aesthetics has been widely applied to many areas, such as photography, fine art, Chinese hand-writing, web design, graphic design, and industrial design. We finally propose a design generation methodology, utilizing techniques from both aesthetic measurements and generative art. The term "aesthetic" originated in Greek "aisthitiki" means perception through sensation. In Cambridge Dictionary, aesthetic is "related to the enjoyment or study of beauty", or "an aesthetical object or a work of art is one that throws great beauty". Aesthetics is subjective to a great extent, since there is no standard to judge beauty and ugliness. People from various domains may have totally different understandings to an art work, influenced by their backgrounds, experiences, genders or other uncertain factors. With the rapid advances of digital technology, computers may play useful roles in aesthetic evaluation, such as aesthetic computing, making aesthetics decision, and simulating human to understand and deduce aesthetics [1]. One can use scientific approaches to measure the aesthetics of art works. Relating to digital technology and visual art, two interdisciplinary areas emerge: computational aesthetics and aesthetic computing. Both areas focus on bridging fine art, design, computer science, cognitive science and philosophy. Specifically, computational aesthetics aims to solve the problems of how computers could generate various visual aesthetic expressions or evaluate aesthetics of various visual expressions automatically. For example, automatic generation of abstract paintings in different styles, such as those of Malevich or Kandinsky and aesthetic assessment of photo, calligraphy, painting, or other forms of art works. On the other hand, aesthetic computing aims to answer the questions of how traditional visual art theory and techniques could aid in beautifying the products of modern technology or enhance their usability. This paper will concentrate on the former, i.e., computational aesthetics. The first quantitative aesthetic theory was proposed in "Aesthetic Measure" by Birkhoff in 1933 [2], which is considered the origin of computational aesthetics. Birkhoff proposed a simple formula for aesthetic measurement: $$ M=O/C $$ where O is the order of the object to be measured, C is the complexity of the object, and M is the aesthetic measurement of the object. This implies that orderly and simple objects appear to be more beautiful than chaotic and/or complex objects. 
Often regarded as two opposite aspects, order plays a positive role in aesthetics while complexity often plays a negative role. Birkhoff assumed that order properties, such as symmetry, contrast and rhythm, could bring comfortable and harmonious feelings. These properties apply to shape and composition at a high level, and also to color and texture at a low level. Color perception is one of the most important factors for aesthetics, and color harmony [3] is frequently used to evaluate aesthetics. On the other hand, fractal theory is another major element of aesthetics, since self-similar objects can be perceived more easily. In 1967, Mandelbrot proposed the self-similarity of the coast of Britain [4], and in 1975, he created fractal theory and studied the properties and applications of fractals. Spehar et al. [5] compare fractals with human aesthetic preferences, enabling fractal theory to serve as a measurement of order. According to Birkhoff, complexity is another vital factor to quantify aesthetics. People prefer simple and neat objects to complicated and burdensome ones. The more effort the human visual processing system makes in viewing an object, the more complex the object is. For example, one could measure a photograph's complexity by counting the number of objects, colors or edges in it. This paper concentrates on two main aspects of computational aesthetics (as shown in Fig. 1). Section "Aesthetic Measurements" describes the aesthetic criteria and measurements, and reviews their applications. Section "Generative Art" discusses generative art, including fractal art and abstract painting modeled on well-known artists' styles. Section "Computational Aesthetics for Design Generation" proposes a design generation methodology, combining the techniques in Sections "Aesthetic Measurements" and "Generative Art". Section "Conclusions" concludes the paper. Structure of this paper Aesthetic measurements How to simulate the human visual system and brain to measure and quantify aesthetics is a great challenge. It, however, becomes possible with the rapid development of artificial intelligence, machine learning, pattern recognition, and computer vision. This section will first discuss various possible aesthetic criteria to be used for measurements, and then consider evaluation approaches using the criteria. We will then sample a few application domains. Similar to most computer vision and pattern recognition algorithms, aesthetic measurements need to consider an object's features and their descriptions. This subsection will discuss composition criteria at a high level and image attributes at a low level. Photographers always apply certain rules to make their photos appealing, including the Rule of Thirds, the Golden Ratio (visual weight balance), focus, ISO speed rating, geometric composition and shutter speed [6]. Studies show that photographic compositions conforming to human visual stimulation can give high aesthetic quality. Rule of thirds As an important guideline for photographic composition, the "Rule of Thirds" means dividing a photo into a 3 × 3 grid of equal parts, as shown in Fig. 2. The four intersection points formed by the four dividing lines are preferred places for the photo's main object. Placing the foreground object at an intersection point or on a dividing line would make the composition more interesting and aesthetic than placing it in the center. Rules of Thirds Bhattacharya et al. [7] define a relative foreground position to measure the coincidence of the foreground and the strong focal points.
They use the following equation to characterize a 4-dimensional feature vector (F): $$ \mathrm{F}=\frac{1}{\mathrm{H}\times \mathrm{W}}\left[{\left\Vert {\mathrm{x}}_0-{\mathrm{s}}_1\right\Vert}_2,{\left\Vert {\mathrm{x}}_0-{\mathrm{s}}_2\right\Vert}_2,{\left\Vert {\mathrm{x}}_0-{\mathrm{s}}_3\right\Vert}_2,{\left\Vert {\mathrm{x}}_0-{\mathrm{s}}_4\right\Vert}_2\right] $$ where H and W are the frame's height and width respectively, x0 is the foreground object center, and si (i = 1,2,3,4) represents one of the four red crosses as shown in Fig. 2. Dhar et al. uses a similar approach to compute the minimal distance between the center of mass of the predicted saliency mask and the four red crosses [8]. If the foreground is centered at or near one of the strong focal points, the photo becomes more attractive. Figure 3 shows an example with the foreground centered at two different positions in the same frame with the same background [8]. In Fig. 3a, the foreground is in the middle of the frame, while in Fig. 3b, the visual attention moves to the bottom-left focal point. Figure 3b appears more comfortable and harmonious than Fig. 3a. However, this measurement is only applicable to photographs with a single foreground object. Additionally, Zhou et al. [9] compute a Rule of Thirds feature for saliency regions from the average hue, saturation and value of the inner third region, similar to the work of Datta et al. [10] and Wong et al. [11]. An example of the foreground in the middle (a) and in the one-third position (b) [3]
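A minimal sketch of this relative foreground-position feature, assuming the foreground object's centre has already been located:

import numpy as np

def thirds_feature(center, H, W):
    # center: (x, y) of the foreground object's centre; H, W: frame height and width
    # returns the 4-dimensional feature F: distances to the four strong focal
    # points, normalized by the frame area H * W
    x0 = np.asarray(center, dtype=float)
    focal_points = [np.array([W * i / 3.0, H * j / 3.0]) for i in (1, 2) for j in (1, 2)]
    return np.array([np.linalg.norm(x0 - s) for s in focal_points]) / (H * W)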
In computational aesthetics, we usually measure color in terms of colorfulness, color harmony, and opposing colors. Colorfulness [10, 13, 18,19,20,21] is determined by the average Chroma and the spread of the Chroma distribution, computed from brightness and saturation in a 1D form. Specifically, the average Chroma represents the average distance of colors to the neutral axis. Hasler and Suesstrunk [17] propose an approach for measuring colorfulness via an image's color pixel distribution in the CIELab color space. Colorfulness (CFN) is a linear combination of the color variance and the Chroma magnitude: $$ \mathrm{CFN}={\sigma}_{ab}+0.37\cdot {\mu}_{ab} $$ where \( {\sigma}_{ab}=\sqrt{{\sigma}_{a}^2+{\sigma}_{b}^2} \) represents the trigonometric length of the standard deviation in the ab plane, and μab represents the distance of the center of gravity in the ab plane to the neutral axis. The experiment by Obrador et al. [18] shows that a colorful image can receive a high rating in image appeal even if the image's content is not attractive at all. As another important factor for image quality, color harmony refers to a color combination that is harmonious to human eyes and sensation. In general, color harmony studies which colors are suitable for simultaneous occurrence. This theory is based on the color wheel, on which purity and saturation increase along the radius from the center outward. In other words, the color at the center of the circle has the lowest purity and saturation. Lu et al. [22] divide current color harmony models into two types: empirical-based [23,24,25] and learning-based [26,27,28,29]. The former, defined by designers or artists, tends to be subjective, while the latter behaves rationally and objectively. Most of the learning models focus on tuning parameters by training on sample data. To make the two distinct types of models benefit each other, Lu et al. [22] propose a Bayesian framework to model color harmony. Photos with harmonious colors appear comfortable to humans and are usually rated with high aesthetic scores. Opponent color theory, on the other hand, states that human eyes perceive light in three opposing components, i.e., light vs. dark, red vs. green, and blue vs. yellow. One cannot sense mixtures of red and green, or of blue and yellow. Therefore, in reality, there is no color perceivable by humans that is reddish green or bluish yellow. Opposing colors can create maximum contrast and stability in an image. Photos or images with opposing colors are more appealing than those without. In addition, complementary colors occurring simultaneously in a photo can also enhance the foreground saliency. Dhar et al. [8] train a classifier to detect and predict complementary colors, which is useful for aesthetic assessment. Ke et al. [30] use color contrast as one of their aesthetic assessment criteria. They believe that foreground and background should have complementary colors to highlight prominent subjects. Luminance and exposure In addition to color, luminance and exposure are two other important factors for photograph aesthetic assessment. Researchers use overexposure and underexposure to penalize the overall image appeal by calculating the luminance distribution. For example, Obrador and Moroney [18] use the average luminance histogram and its standard deviation to measure the penalization. The wider the range of luminance values, the less penalty is imposed. Wong and Low [11] consider that a professional photograph should be well exposed. Obrador et al.
[13] use luminance to compute the contrast of a region. Edges Researchers have also proposed to use the edge spatial distribution [11, 30] to measure the simplicity of photos and images. A simple photo should have a salient foreground and a concise background. Figure 5a gives an example by an amateur photographer with a noisy background and an obscure foreground. The edge spatial distribution appears scattered. On the other hand, Fig. 5b shows the edge map of a professional photograph. The foreground contour stands out clearly with few edges in the background. Ke et al. [30] propose two different methods to measure the compactness of the spatial distribution of edges. A compact and clear distribution of edges in a photograph makes the photograph visually aesthetic. Examples of different edge distributions [30]: (a) an amateur photo, (b) a professional photo Sharpness As a feature to measure contrast, sharpness can be calculated from color, luminance, focus or edges. Obrador et al. [18] measure sharpness by contrasting edges. High contrast edges usually generate high sharpness. Wong et al. [11] extend the Fourier transform [30] to compute sharpness. Regions Apart from the low-level features mentioned above, regions and contents may be considered high-level features. An appealing image does not require all of its regions to be aesthetic. Hence, researchers attempt to estimate the aesthetics of regions rather than of the entire image. Usually, regions are segmented before aesthetic assessment. Obrador et al. [18] develop a region-based image evaluation framework, which includes measurements of sharpness, contrast and colorfulness. All of these region features are combined to render an appeal map. Exposure, size and homogeneity measurements of the appeal map are then applied. Zhou et al. [9] propose to use salient region detection for photograph aesthetic measurement. Wong et al. [11] use Itti's visual saliency model [31] to obtain the salient locations of a photograph, and then compute the exposure, sharpness and texture details of these salient regions. Their approach also analyzes the position, distribution and number of salient locations to obtain further evaluation results. Similarly, Obrador et al. [13] generate contrast regions, rather than salient regions, by analyzing five low-level features, including sharpness or focus, luminance, chroma, relevance and saliency. The approach generates five segmentation maps to aid aesthetic measurement. Contents The contents of a photograph always make great contributions to human aesthetic judgment. Different types of objects give viewers different visual experiences. People are the most common target in photography. Dhar et al. [8] estimate whether there are people in a photograph using a face detection method [32]. If the detected face area is larger than 25% of the whole image, the photo is considered a portrait depiction. Meanwhile, the presence of faces [18, 20, 33] is another key factor that impacts a photograph's appeal. Based on the face detection result, one may calculate the aesthetic score by assessing the size, color, and expression of the face [34]. In addition to the average luminance, contrast, color and size of the face, Obrador et al. [18] detect smiles [35] with a probability if the face region covers more than 3.6% of the image. Another approach trains an SVM classifier to judge the presence of animals in photographs [8]. It also divides the content into indoor and outdoor scenes, and proposes 15 attributes to describe various general scene types.
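To make the edge-based simplicity idea concrete, the following Python sketch computes a compactness score of the edge spatial distribution in the spirit of Ke et al. [30]. The gradient-based edge detector, the 96% energy fraction and the marginal-projection bounding box are simplifying assumptions of ours, not the exact procedure of [30].

```python
import numpy as np

def edge_compactness(gray, energy_frac=0.96):
    """Area (relative to the image) of the bounding box that contains
    `energy_frac` of the edge energy -- smaller values suggest a simpler,
    more compact composition."""
    gy, gx = np.gradient(gray.astype(float))
    energy = gx ** 2 + gy ** 2

    def span(marginal):
        c = np.cumsum(marginal) / marginal.sum()
        lo = np.searchsorted(c, (1 - energy_frac) / 2)
        hi = np.searchsorted(c, 1 - (1 - energy_frac) / 2)
        return max(hi - lo, 1)

    h_span = span(energy.sum(axis=1))   # extent along the vertical axis
    w_span = span(energy.sum(axis=0))   # extent along the horizontal axis
    return (h_span * w_span) / (gray.shape[0] * gray.shape[1])

# toy usage: edges concentrated in the centre yield a low (simple) score
img = np.zeros((100, 100)); img[40:60, 40:60] = 1.0
print(edge_compactness(img))
```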
For outdoor scenes, it uses sky-illumination attributes to measure the lighting, which affects the perception of photographs. Photos taken on a sunny day show a clear sky, while those taken on a cloudy day show a dark sky. Naturally, photos with a clear sky appear more aesthetic. Overall evaluation After selecting and measuring appropriate features, the next step is to combine their aesthetic scores into an overall evaluation. There are two types of evaluation methods: binary classification and rating. A binary method classifies photos into beautiful and not beautiful. A rating method scores photos according to their appeal, typically from 1 to 10. User studies Obrador et al. [18] conduct user surveys to help select appropriate features. They give image appeal six ratings: excellent, very good, good, fair, poor and very poor. For example, excellent photos require higher sharpness, while very good ones need not be in perfect focus. The results of user surveys can also provide a reliable basis for feature selection. Users are asked to list and sort the features used in their aesthetic judgments. The selected features can then be used for automatic aesthetic quantification. The Support Vector Machine (SVM) is one of the most popular methods for binary classification [8]. Based on aesthetic features, one may train SVM classifiers on labeled training data and classify photos into professional and amateur, or appealing and unappealing. Zhou et al. [9] choose the LIBSVM implementation and use the standard RBF kernel to perform classification. The n-fold cross-validation runs 10 times per feature and filters out the top 27 features. A greedy algorithm is then used to find the top 15 features among the 27 to build an SVM classifier. Although SVM is a strong binary classifier, it performs poorly when many irrelevant features exist. CART (Classification and Regression Tree) [36] is a tree-based and fast approach, which can help analyze the influence of various features. Probabilistic methods [30, 37] are also commonly used for classification. Given a set of aesthetic quality metrics, researchers usually create a weighted linear combination of the metrics. Ke et al. [30] propose a Naïve Bayes classifier using the following equation: $$ q=\frac{P\left( pr\mid {q}_1,{q}_2,\dots, {q}_n\right)}{P\left( am\mid {q}_1,{q}_2,\dots, {q}_n\right)}=\frac{P\left({q}_1,{q}_2,\dots, {q}_n\mid pr\right)P(pr)}{P\left({q}_1,{q}_2,\dots, {q}_n\mid am\right)P(am)} $$ However, one cannot ensure the independence of the features. If some features are interrelated, the classifier becomes unreliable. Apart from photographs and images as discussed above, computational aesthetics has been applied to other fields, such as paintings, handwritings, and webpages, as summarized below. The aesthetics of digital or digitized paintings is subjective, varying with different painters, types of paintings, drawing techniques, etc. Li et al. [38, 39] build an aesthetic visual quality assessment model, which includes two steps. Step one is a questionnaire. Participants are asked to list at least two factors that affect the aesthetics of paintings. These factors are then grouped into color, composition, content, texture/brushstroke, shape, feeling of motion, balance, style, mood, originality, and unity. Step two is a rating survey. The assessment model uses the survey data to perform training and testing. One could consider aesthetic visual assessment of paintings a machine learning problem.
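The likelihood ratio above can be instantiated, for example, with per-feature Gaussian likelihoods estimated from professional and amateur training sets. The Python sketch below is a hypothetical illustration of this Naïve Bayes combination; the Gaussian modeling choice and all names are ours, not those of [30].

```python
import numpy as np
from scipy.stats import norm

def quality_ratio(q, prof_feats, amat_feats, prior_prof=0.5):
    """Naive-Bayes style quality ratio: values > 1 favour the 'professional' class.

    q          : 1-D array of aesthetic metrics q_1..q_n for a test photo
    prof_feats : (m, n) array of the same metrics for professional photos
    amat_feats : (m', n) array for amateur photos
    """
    def log_lik(feats):
        mu, sd = feats.mean(axis=0), feats.std(axis=0) + 1e-8
        return norm.logpdf(q, loc=mu, scale=sd).sum()   # independence assumption

    log_ratio = (log_lik(prof_feats) + np.log(prior_prof)
                 - log_lik(amat_feats) - np.log(1 - prior_prof))
    return np.exp(log_ratio)

# toy usage with two metrics (e.g. edge compactness and colorfulness)
rng = np.random.default_rng(0)
prof = rng.normal([0.2, 60], [0.05, 10], size=(200, 2))
amat = rng.normal([0.5, 30], [0.10, 10], size=(200, 2))
print(quality_ratio(np.array([0.25, 55]), prof, amat))
```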
Using the prior survey results and knowledge, one could generate features to represent a given image both globally and locally. Global features include color distribution, brightness, blurring, and edge distribution. Local features include segment shape and color, contrast between segments, and focus regions. Given the global and local features, one could use a Naïve Bayes classifier to classify paintings into high-quality and low-quality categories. Assumed to be independent of each other, the features are given equal weights. Different types of features may, however, carry unequal weights in an aesthetic assessment. AdaBoost [40] assigns different weights adaptively. It performs better than the Naïve Bayes classifier, but both perform distinctly better than random chance, as shown in Fig. 6. Comparison of Naïve Bayes and AdaBoost methods [0] Aiming to discover whether white space in Chinese ink paintings is not simply a blank background but rather meaningful for aesthetic perception, Fan et al. [41] examine the effect of white space on perceiving Chinese paintings. Applying a computational saliency model to analyze the influence of white space on viewers' visual information processing, the authors conducted an eye-tracking experiment. Taking paintings of the well-known artist Wu Guanzhong as a case study, they collect users' subjective aesthetic ratings. Their results (Fig. 7) show that white space is not just a silent background: it is intentionally designed to convey certain information and has a significant effect on viewers' aesthetic experience. Calculated saliency result of "Twin Swallows" (a) and heat map of eye movements on Wu Guanzhong's "Twin Swallows" (b) Fan et al. [42] further quantify white space using a quadtree decomposition approach, as shown in Fig. 8, in computing the visual complexity of Chinese ink paintings. By conducting regression analysis, they validate the influences of white space, stroke thickness, and color richness on perceived complexity. Their findings indicate that all three of these factors influence the complexity of abstract paintings, whereas only white space influences the complexity of representational paintings. a An example of a quadtree decomposition on "A Big Manor". b The quadtree decomposition result of white space in "A Big Manor" Chinese handwriting Due to the special structure of Chinese handwriting, one should design features different from those for photos or paintings. Sun et al. [43] propose two types of features based on the balance between feature generality and sophistication, i.e., global features and component layout features. Global features Global features refer to three aesthetic aspects: alignment and stability, distribution of white space, and gaps between strokes. As shown in Fig. 9, Sun et al. [43] use the rectangularity of the convex hull, the slope and intersection of the axis, and the center of gravity to measure alignment and stability. A larger rectangularity value of the convex hull indicates more regular and stable handwriting. The slope and intersection of the axis divide a character into two subsets; a symmetrical and balanced character should have an approximately perpendicular axis. The center of gravity lies inside the bounding box of the character and describes the stability of the handwriting from the perspective of physics. Global features of Chinese handwritings [43] The Write Space Ratio (WSR) is a common aesthetic rule in calligraphy, representing the crowdedness of characters. Sun et al.
[43] use the convexity, the ratio of the axis cutting the convex hull, the ratio of pixel distribution in quadrants and the elastic mesh layout to evaluate the distribution of white space. The orientation and position of a character's strokes also influence the aesthetics of handwriting. In Chinese characters, there are four types of strokes, projected onto the X-axis rotated by 0°, 45°, 90° and 135° respectively. Sun et al. [43] use the variance of each pixel's projection and the maximum gap proportion to measure the gaps between strokes. Component layout features Apart from the above global features, layout features divide every Chinese character into several components, each constructed from a set of strokes. As shown in Fig. 10, a component feature vector is constructed from the horizontal overlap, vertical overlap, area overlap and distance between points from two different components. Examples of component layout [43] As with paintings, no public Chinese handwriting datasets are available for aesthetic evaluation. Sun et al. [43] build a dataset for this purpose based on the agreement in aesthetic judgments of various people. They compute the aesthetic score of a Chinese character by $$ S=100\times {p}_{g}+50\times {p}_{m}+0\times {p}_{b} $$ which gives an average human evaluation score. The variables pg, pm, and pb are the probabilities of the labels good, medium and bad respectively. To evaluate the aforementioned features, the authors construct a back-propagation neural network, and show that their approach gives performance comparable to human ratings. Researchers have also attempted to study the relationships between webpages' computational aesthetic analysis and users' aesthetic judgment [44]. Thirty web pages are selected from different types of network sources with various visual effects. The participants include 6 women and 16 men with normal vision and no color blindness, who are tested independently. Each participant labels a page component on a 7-point scale from repelling to appealing, complicated to simple, unprofessional to professional, and dull to captivating. For the computational aesthetic analysis, Zheng et al. [44] compute the aesthetics based on low-level image statistics including color, intensity and texture, and regions from minimum-entropy decomposition. They also evaluate the quad-tree decomposition on aesthetic dimensions including symmetry, balance, and equilibrium, as shown in Fig. 11. They find that the human subjective ratings and the computational analysis are highly correlated on several of the aforementioned aspects. Quad-tree decomposition of a web page [44] The last application example is the evaluation of logos [45, 46], illustrated in Fig. 12. The authors select features such as balance, contrast and harmony based on design principles. To obtain reliable training data, they also collect human ratings of the above features. Using a supervised machine-learning method to train a statistical linear regression model to perform the evaluation, they obtain a high correlation of 0.85. Examples of black and white logos [45] Digital art is becoming increasingly expressive and humanized. With the emergence and development of computational aesthetics, advanced artificial intelligence technology can help to generate interesting and unique art works. Levels of automation complexity Machine intelligence is the key to computer-generated abstract paintings.
We may classify computer-generated abstract paintings into four levels based on their computational complexity rather than their visual complexity [47]. Level 1 needs full human participation using an existing painting software package or platform. First, the software producer prepares various visual components, either generated manually or automatically. Users can select visual components or draw them using the digital brush provided by the software. Of course, they can change visual attributes as needed. The best representative of Level 2 is fractal art, which originated in the late 1980s [48]. Fractals require users to provide various attributes, styles and mathematical formulas as inputs. A programmed computer can then generate results automatically. In other words, at Level 2, results are usually generated from mathematical formulas parameterized with certain degrees of randomness. The next section will discuss fractal art further. Methods at Level 3 are often heuristics-based, using knowledge-based machine intelligence. There are two general approaches to producing abstract paintings at this level: generative and transformational. Using the generative method, one encodes artists' styles into computational rules or algorithms. One of the pioneering works, by Noll [49], makes a subjective comparison of Mondrian's "Composition with Lines" with computer-generated images. On the other hand, a transformational method attempts to transform digital images into abstract paintings using image processing techniques. For example, a transformational method can mimic brush strokes or textures and apply them to an input image to transform it into an abstract picture [50]. The best representative of transformational methods is so-called non-photorealistic rendering [51], which is out of the scope of this paper. Level 4 is an AI-powered and promising direction for approaches that generate highly creative artistic and design forms. For instance, such an approach detects specific styles from existing paintings and gives an objective aesthetic evaluation automatically, or produces results adaptive to the audience's emotional and cultural background. The current advances in deep learning and artificial intelligence have created tremendous opportunities for breakthroughs at this level. Fractal art Fractal geometry, a term coined by mathematician Benoit Mandelbrot (1924–2010) in 1975, studies the properties and behavior of fractals and describes many situations that cannot be explained easily by classical geometry. Fractals have been applied to computer-generated art and used to model weather, plants, fluid flow, planetary orbits, music, etc. Different from traditional art, fractal art realizes the unity of mathematics and aesthetics. The curve is the simplest and most classical expression in fractal art; it can be generated recursively or iteratively by a computer program. We can easily identify four characteristics of fractal art works: Self-similarity: if an enlarged local part of a geometric object is similar to the entire object, we call the object self-similar. Infinitely fine: a fractal has fine structure at arbitrarily small scales. Irregularity: many fractal objects cannot be described using simple geometric figures. Fractional dimension: the fractional dimension is an index for characterizing fractal patterns or sets by quantifying their complexity. Singh [52] believes that there is a conversation between him and his computer when he creates his images.
In other words, when he talks to his computer, the computer functions as the translator. He builds an element library and uses various types of string fractals as compositional elements rather than as the main subject of the image. Figure 13 shows an example of a combined result used in the Unfractal series. An example of combined result [52] Seeley uses fractals as the starting point of his art works [53], making them look less computer-generated. A number of fractal software packages may be used to create this type of artwork, such as Fractal Studio, Fractal Explorer, Apophysis, ChaosPro, and XaoS. For the work named Yellow Dreamer, shown in Fig. 14, Seeley creates the base image using Fractal Studio, and then transforms it with Filter Forge filters, Topaz Adjust 5, and AKVIS Enhancer. Yellow Dreamer [53] Modeling abstract painting of well-known styles According to Arnheim [54], abstract art uses a visual language of shape, form, color and line to create a composition which may exist with a degree of independence from visual references in the world. It is thus clear that a large variety of styles of abstract paintings exist. Accordingly, style analysis is an essential step in generative art, which involves analyzing basic components, background color, component colors and their layout. The components may be independent of each other or related by certain rules. Geometrical components can easily be modeled by computers, while interwoven irregular shapes can be modeled using a layered approach. Style analysis Abstract paintings may be divided into two classes, i.e., geometric abstraction and lyrical abstraction. Here we begin with the pioneer of abstract painting, Wassily Kandinsky, and analyze the style of his abstract paintings during his Bauhaus years (1922–1933), such as "Composition VIII" (1923), "Black Circles" (1924), "Yellow Red Blue" (1925), "Several Circles" (1926), "Hard But Soft" (1927) and "Thirteen Rectangles" (1930). We take "Composition VIII", shown in Fig. 15, as an example. According to Kandinsky himself, three primary forms occur frequently: sharp angles in yellow, circles in deeper colors, and lines and curves in yellow and deeper colors respectively. He also proposed three pairs of contrast forms: The contrast color pair: yellow vs. blue. For example, a yellow circle is always nested inside a blue circle, or vice versa. A straight line intersected with a curve. Several straight lines are intersected with a curve, or with individual lines and curves. Some lines are in one color, while others are in segmented colors. Circle(s) with triangle(s): one circle overlaps one triangle, multiple circles overlap one triangle, or several half circles lie abreast. Kandinsky's "Composition VIII" Piet Mondrian is another well-known abstract artist, whose style is based on geometric and figurative shapes. While his art forms are drastically different from Kandinsky's, he took black vertical and horizontal lines as the principal elements and used the primary colors red, yellow and blue to fill some of the grid cells, as modeled in Fig. 16. Computer generated Mondrian's abstract painting example Russian artist Kazimir Malevich is the originator of an avant-garde movement; his most famous work "Black Square" (1913) represents the birth of Suprematism. He used different types of basic Suprematist elements, such as quads, ovals, crosses, triangles, circles, straight lines and semi-crescent shapes. As noted by Tao et al. [55] in Fig.
17, his works frequently feature boldly colored, opaque geometric figures on a white or light-colored background. In addition, a large quad determines the orientation of the other subsidiary shapes. Malevich's abstract painting example Prolific artist Joan Miro developed a unique style in the 1920s. He arranged isolated and detailed elements in deliberate compositions. During his middle age, his art works became known as organic abstraction, featuring deformed objects as shown in Fig. 18. Xiong and Zhang [56] classify these abstract pictorial elements according to their shapes and appearances. It is easy to see that the colors of Miro's works are always trenchant and bright. He enjoys using a few specific colors, such as red, yellow, blue, black, and white. Miro's "Ciphers and Constellations in Love with a Woman" Zheng et al. [57] attempt to analyze the style of Jackson Pollock, an influential modern American painter. He created his paintings by dripping and pouring paint onto canvases instead of using traditional painting methods, as shown in Fig. 19. His approach is considered revolutionary in creating aesthetics, and was analyzed by Taylor et al. [58] for its visual forms characterized by fractals [5]. Having carefully analyzed Pollock's various paintings, Zheng et al. [57] divide Pollock's dripping style into four independent layers, i.e., a background layer, an irregular shape layer, a line layer and a paint drop layer, from bottom up. The elements on each layer are positioned randomly. A computer generated image of Pollock's "Number 8" [57] Rule-based modeling After analyzing the styles of various types of abstract paintings, researchers use different approaches to generate abstract images that mimic the original artists' styles. The components of an abstract painting are usually interrelated. In fact, their spatial arrangements on the canvas follow certain rules. For instance, in "Composition VIII" by Kandinsky, full circles with contrasting colors are often surrounded by shades with gradually changing colors, and grid forms are always filled with interleaving colors. A rule-based approach usually follows five steps: Step 1: Choose a specific style for automatic generation of the styled images; Step 2: Generate the background; Step 3: Decide the composition and prepare basic components; Step 4: Position the components based on the designed composition following the analytical rules; Step 5: Add texture and decoration, such as worn signs or pepper noise, if necessary. Zhang and Yu [59] select four abstract paintings of Kandinsky from his Bauhaus period, including "Composition VIII", "Black and Violet", "On White II", and "Several Circles", to automatically generate images in the artist's style. Based on their analysis of the paintings and their reading of Kandinsky's abstract art theories, they summarize a set of rules, for example: thin vertical and horizontal lines build the foundations and are intersected by angled lines; dark boundaries are filled with light colors; red and black always occur together to create a salient effect. Zhang and Yu [59] parameterize various attributes of the artist's typical components, such as boundary color, fill color, size, and location. They then use the above analytical rules to color and position the components, while randomizing other attributes. Example abstract images automatically generated using this approach are shown in Fig. 20.
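As a toy illustration of the five-step rule-based pipeline, the sketch below generates a composition by placing parameterized components according to a handful of hard-coded rules. The "rules" used here (a plain background, nested contrasting circles, thin crossing lines) are drastically simplified stand-ins and not the actual rule set of Zhang and Yu [59]; the output is written as an SVG string.

```python
import random

def generate_toy_composition(n_circles=6, n_lines=8, width=400, height=300, seed=1):
    """Minimal rule-based generator: background, components, placement."""
    rng = random.Random(seed)
    palette = ['#1f3a93', '#f7ca18', '#cf000f', '#222222']   # blue / yellow / red / black
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">',
             f'<rect width="{width}" height="{height}" fill="#f2ead9"/>']   # Step 2: background
    # Steps 3-4: nested circles with contrasting colours (a toy 'contrast pair' rule)
    for _ in range(n_circles):
        cx, cy, r = rng.randint(40, width - 40), rng.randint(40, height - 40), rng.randint(15, 40)
        outer, inner = rng.sample(palette, 2)
        parts.append(f'<circle cx="{cx}" cy="{cy}" r="{r}" fill="{outer}"/>')
        parts.append(f'<circle cx="{cx}" cy="{cy}" r="{r // 2}" fill="{inner}"/>')
    # thin straight lines crossing the canvas
    for _ in range(n_lines):
        x1, y1 = rng.randint(0, width), rng.randint(0, height)
        x2, y2 = rng.randint(0, width), rng.randint(0, height)
        parts.append(f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" '
                     f'stroke="{rng.choice(palette)}" stroke-width="2"/>')
    parts.append('</svg>')
    return '\n'.join(parts)

with open('toy_composition.svg', 'w') as f:   # Step 5 (texture/decoration) is omitted
    f.write(generate_toy_composition())
```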
Kandinsky's styles automatically generated: "Composition VIII" (top left), "Several Circles" (top right), "On White II" (bottom left), "Black and Violet" (bottom right) [59] Tao et al. [55] attempt to automatically generate Malevich-style abstract paintings. They first decide the color and decorations of the background, and then prepare the basic components with varying complexity and flexibility. Different from the generation approach of Zhang and Yu for the Kandinsky style [59], they define a bounding box for each component to avoid overlaps among components, and distribute the components evenly on the canvas. Figure 21 gives three computer-generated results for the "Mixed Shapes Style". Computer generated Malevich's "Mixed Shapes Style" [55] Layered approach With non-geometrical styles, one can observe the artist's painting process and follow it with layers of structures and components. A typical example is Pollock's drip style, which is quite different from those of Kandinsky and Malevich. It is difficult or even impossible to come up with rules or observe regular patterns. Based on careful analysis, Zheng et al. [57] divide the structure of Pollock's drip paintings into four independent layers, including a background layer, an irregular shape layer, a line layer and a paint drop layer, from bottom up, as shown in Fig. 22. The background layer covers the entire canvas and sets the fundamental tone of each painting. The irregular shape layer includes ellipses and polygons of random sizes. The line layer is composed of curved lines of varied lengths and widths. The top layer has all the paint drops of varied sizes. Paint drops are filled with different colors and randomly positioned on the canvas. The generation order is bottom up, as illustrated in Fig. 22. Layered approach for modeling Pollock's drip style [57] Also using a layered approach, Xiong and Zhang [56] propose a process modeling approach to generating Miro's style of abstract paintings, in the following steps: Step 1: Structured drawing; Step 2: Adaptive coloring; Step 3: Space filling; Step 4: Noise injection. Figure 23 shows an example of a computer-modeled image of "Ciphers and Constellations in Love with a Woman" and an example of a generated "Poetess", both from Miro's well-known "Constellation" series. Of course, one could obtain varied and restructured versions of the same style by resetting or randomizing different parameters and attributes. Generated images of Miro's "Ciphers and Constellations in Love with a Woman" (top) and "Poetess" (bottom) [56] In summary, using the aforementioned generative methods, it is entirely feasible to generate more diversified, personalized and innovative images as desired. Neural nets approaches To simulate human aesthetics in depth, Gatys et al. [60, 61] proposed an image style transfer approach using a Deep Neural Network (DNN). Briefly, a DNN is a network constructed from layers of many small computational units. In each layer, the units can be considered image filters that extract certain features from the input image. A DNN processes the visual information hierarchically in a feed-forward manner, and the output of each layer is a feature map. Such an approach captures the texture information and obtains a multi-scale representation of the input image. Figure 24 shows an example that combines the content of a photograph by Andreas Praefcke with the style of the painting "The Starry Night" by Vincent van Gogh (1889).
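At the core of the approach of Gatys et al. [60, 61] is the Gram matrix of a layer's feature maps, which serves as the style representation; matching Gram matrices across layers yields the style term of the transfer objective. The Python sketch below shows only this building block, with random arrays standing in for CNN activations; layer weights, the content term and the normalization constant are our simplifications.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map of shape (channels, height, width),
    assumed to come from one layer of a pretrained CNN."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_features, style_features):
    """Mean squared difference of the Gram matrices of two feature maps;
    summing this over several layers gives the style term (a sketch)."""
    g1, g2 = gram_matrix(gen_features), gram_matrix(style_features)
    return float(np.mean((g1 - g2) ** 2))

# toy usage with random 'feature maps' standing in for CNN activations
rng = np.random.default_rng(0)
print(style_loss(rng.standard_normal((64, 32, 32)), rng.standard_normal((64, 32, 32))))
```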
Example that combines the content of a photo with a well-known artwork Computational aesthetics for design generation Utilizing the techniques of aesthetic measurement and generative art discussed above, we propose an automatic or semi-automatic design generation framework, initially presented as a poster at VINCI'2017 [62]. In Fig. 25, rectangular boxes are manual operations, and oval and diamond boxes are automatic or semi-automatic. Design generation methodology Information elicitation Design information and requirements are collected, including sample design images partially meeting the requirements. Rule specification and refinement Based on the collected information and requirements, designers use their knowledge and experience to specify design rules, such as the logical and spatial relationships among objects, in the first round of the design process. The rules may be specified in an established formalism, such as Shape Grammar [63]. During subsequent rounds of the design, given a selected subset of all the generated designs, rules are automatically adjusted via machine learning approaches. Designers can then refine the adjusted rules. Design generation Given the set of design rules, and/or supervised learning based on the sample designs, the design generation system (e.g. a shape grammar interpreter [64, 65]) automatically generates a large number of designs, while applying a set of aesthetic rules and guidelines pre-coded in the system. Deep learning algorithms, e.g., Convolutional Neural Networks (CNNs), can extract styles from design samples [61], such as distortion, texture, and rendering. The design rules are responsible for generating a variety of basic designs, and the deep learning methods help enrich the designs with the extracted design styles. In this way, each design both satisfies the design principles and has a distinct artistic style. Moreover, the framework also considers the designer's preferences, which can be modeled by personalized recommendation methods, e.g., collaborative filtering and content-based filtering. This step gives priority to the styles preferred by the designer. Some of the generated designs may not have been thought of or imagined by the designers. This saves much of the designers' time, and enhances their creativity and imagination. Design selection This step is the same as in the traditional design process, except that the design choices presented are automatically generated. In digital form, they are easily modifiable, selectable and printable. Rules learning and modification When a designer discards many designs, he or she must have used unwritten guidelines and constraints. The design generation system is equipped with an AI tool, such as a constraint solver, that can extract the constraints used by the designers, or deep learning techniques that can learn from elicited sample designs (dashed arrows). The rules involve fundamental visual elements. Deep learning methods, e.g., CNNs and Deep Belief Networks (DBNs), can detect shapes or contours from the design samples. Based on the extracted objects, this step can formulate new elements. Deep learning methods, e.g., sparse autoencoders, can learn color features from the samples to modify the existing coloring rules or generate new rules. According to the new elements, more concrete rules can be learnt automatically. These automatically modified design rules may or may not be further refined before another round of automatic design generation.
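The overall workflow of the proposed framework can be summarized as an iterative loop, sketched below in Python. All component functions are placeholders for the stages described above (generation, measurement, designer selection, rule learning); the sketch only fixes the control flow, not any concrete implementation, and the stand-in components in the toy usage are ours.

```python
import random

def design_generation_loop(initial_rules, generate, measure, select, learn_rules,
                           max_rounds=10, target_score=0.8):
    """Schematic sketch of the iterative design generation framework."""
    rules = initial_rules
    kept = []
    for _ in range(max_rounds):
        candidates = generate(rules)                     # design generation
        scored = [(d, measure(d)) for d in candidates]   # quantitative aesthetic measurement
        kept = select(scored)                            # designer-in-the-loop selection
        if kept and max(s for _, s in kept) >= target_score:
            break
        rules = learn_rules(rules, kept, candidates)     # rules learning and modification
    return kept, rules

# toy usage with stand-in components
result, final_rules = design_generation_loop(
    initial_rules={'n_circles': 4},
    generate=lambda r: [dict(r, seed=i) for i in range(5)],
    measure=lambda d: random.random(),
    select=lambda scored: [x for x in scored if x[1] > 0.5],
    learn_rules=lambda r, kept, cand: dict(r, n_circles=r['n_circles'] + 1),
)
print(result, final_rules)
```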
It may be undesirable to judge whether a design is satisfactory solely by the designer's own subjective assessment. The selected designs are inspected by the designer and quantitatively measured for their aesthetics, possibly via their complexity [45], in a semi-automatic fashion. Quantitative aesthetic measurement methods usually include two steps: aesthetic feature evaluation and decision. In the aesthetic feature evaluation, we select the features that best represent the work for the specific design application. One example is to objectively evaluate color and shape convexity in logo aesthetic measurement [10]. For advertisement designs, one also needs to consider saliency [9] and composition in the aesthetic measurement. In the decision step, there are two types of methods: binary classification and rating. We obtain a positive or negative result from a binary classification method, or a soft result providing a score ranking, which can help the designer select designs objectively. By combining the judgements of the human designer and the automated approach, the design generation system can deliver a result improved over the previous round. If one or more designs meet the requirements, they are adopted, before further refinement and the final design application. This final adaptation and application process feeds back to the first step, i.e. information elicitation, to help enhance the generation system. This iterative process can continue as many times as necessary until a satisfactory design is generated. This paper has introduced the current state of the art of computational aesthetics. It includes two main parts: aesthetic measurement and generative art. Researchers attempt to automate the assessment of aesthetics using different features in an image. Numerous measurement approaches have been applied to paintings, photographs, Chinese handwriting, webpages, and logo designs. They are also applicable to film snapshots, advertisements and costume designs on the same principles. Generative art includes fractal art and abstract paintings generated from existing art styles, both of which can generate distinctive art works although they use totally different methods. Fractal art transforms mathematical formulas into visual elements, while abstract image generation models the basic elements of existing abstract painting styles. Given the techniques in aesthetic measurement and generative art, one may generate aesthetic designs automatically or semi-automatically, as presented in the last section. With the further development of artificial intelligence and machine learning, computational aesthetics will become easily accessible and significantly influence and change our daily life. A realistic application example would be automatic design generation as discussed above. Hoenig F. Defining computational aesthetics. In: Computational aesthetics'05 proceedings of the first eurographics conference on computational aesthetics in graphics, visualization and imaging. Girona: Eurographics Association; 2005. p. 13–8. Birkhoff GD. Aesthetic measure. Cambridge: Harvard University Press; 1933. Nemcsics A. The coloroid color system. Color Res Appl. 1980;5:113–20. Mandelbrot B. How long is the coast of Britain? Statistical self-similarity and fractional dimension. Science. 1967;156:636–8. Spehar B, Clifford CWG, Newell BR, Taylor RP. Chaos and graphics: universal aesthetic of fractals. Comput Graph. 2003;27:813–20. Aber JS, Marzolff I, Ries JB. Photographic composition.
In: Aber JS, Marzolff I, Ries JB, editors. Small-format aerial photography: principles, techniques and geoscience applications. Amsterdam: Elsevier; 2010. Bhattacharya S, Sukthankar R, Shah M. A framework for photo-quality assessment and enhancement based on visual aesthetics. In: Proceedings of the 18th ACM international conference on multimedia. Firenze: ACM; 2010. p. 271–80. Dhar S, Ordonez V, Berg TL. High level describable attributes for predicting aesthetics and interestingness. In: CVPR 2011. Colorado Springs: IEEE; 2011. p. 1657–64. Zhou YM, Tan YL, Li GY. Computational aesthetic measurement of photographs based on multi-features with saliency. In: Huang DS, Bevilacqua V, Premaratne P, editors. Intelligent computing theory. Cham: Springer; 2014. p. 357–66. Datta R, Joshi D, Li J, Wang JZ. Studying aesthetics in photographic images using a computational approach. In: Leonardis A, Bischof H, Pinz A, editors. Computer vision – ECCV 2006. Berlin Heidelberg: Springer; 2006. p. 288–301. Wong LK, Low KL. Saliency-enhanced image aesthetics class prediction. In: Proceedings of the 16th IEEE international conference on image processing. Cairo: IEEE; 2009. p. 993–6. Livio M. The golden ratio: the story of PHI, the world's most astonishing number. New York: Broadway Books. 2003;51:18–21. Obrador P, Saad MA, Suryanarayan P, Oliver N. Towards category-based aesthetic models of photographs. In: Schoeffmann K, Merialdo B, Hauptmann AG, Ngo CW, Andreopoulos Y, Breiteneder C, editors. Advances in multimedia modeling. Berlin, Heidelberg: Springer-Verlag; 2012. p. 63–76. Obrador P, Schmidt-Hackenberg L, Oliver N. The role of image composition in image aesthetics. In: Proceedings of 2010 IEEE international conference on image processing. Hong Kong, China: IEEE; 2010. p. 3185–8. Daubechies I. Ten lectures on wavelets. Philadelphia: SIAM; 1992. Matsuda Y. Color design. Tokyo: Asakura Shoten; 1995. Hasler D, Suesstrunk SE. Measuring colorfulness in natural images. In: Proceedings volume 5007, human vision and electronic imaging VIII. Santa Clara: SPIE; 2003. p. 87–95. Obrador P, Moroney N. Low level features for image appeal measurement. In: Proceedings volume 7242, image quality and system performance VI. San Jose: SPIE; 2009. p. 72420T-1-12. Tong HH, Li MJ, Zhang HJ, He JR, Zhang CS. Classification of digital photos taken by photographers or home users. In: Proceedings of the 5th Pacific rim conference on advances in multimedia information processing. Tokyo: Springer-Verlag; 2004. p. 198–205. Jiang W, Loui AC, Cerosaletti CD. Automatic aesthetic value assessment in photographic images. In: Proceedings of 2010 IEEE international conference on multimedia and expo. Suntec City: IEEE; 2010. p. 920–5. Winkler S. Visual fidelity and perceived quality: toward comprehensive metrics. In: Proceedings volume 4299, human vision and electronic imaging VI. San Jose: SPIE; 2001. p. 114–25. Lu P, Peng XJ, Li RF, Wang XJ. Towards aesthetics of image: a Bayesian framework for color harmony modeling. Signal Processing Image Commun. 2015;39:487–98. Itten J. The art of color: the subjective experience and objective rationale of color. Hoboken: John Wiley & Sons; 1997. Moon P, Spencer DE. Geometric formulation of classical color harmony. J Opt Soc Am. 1944;34:46–59. Hård A, Sivik L. A theory of colors in combination—a descriptive model related to the NCS color-order system. Color Res Appl. 2001;26:4–28. Tang XO, Luo W, Wang XG. Content-based photo quality assessment. IEEE Trans Multimed. 2013;15:1930–43. Chamaret C, Urban F. 
No-reference harmony-guided quality assessment. In: Proceedings of 2013 IEEE conference on computer vision and pattern recognition workshops. Portland: IEEE; 2013. p. 961–7. Tang Z, Miao ZJ, Wan YL, Wang ZF. Color harmonization for images. J Electron Imaging. 2011;20:023001. Nishiyama M, Okabe T, Sato I, Sato Y. Aesthetic quality classification of photographs based on color harmony. In: Proceedings of 2011 IEEE conference on computer vision and pattern recognition (CVPR). Colorado Springs: IEEE; 2011. p. 33–40. Ke Y, Tang XO, Jing F. The design of high-level features for photo quality assessment. In: Proceedings of 2006 IEEE computer society conference on computer vision and pattern recognition. New York: IEEE; 2006. p. 419–26. Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans Pattern Anal Mach Intell. 1998;20:1254–9. Viola P, Jones MJ. Robust real-time object detection. Int J Comput Vis. 2002;57:137–54. You JY, Perkis A, Hannuksela MM, Gabbouj M. Perceptual quality assessment based on visual attention analysis. In: Proceedings of the 17th ACM international conference on multimedia. Beijing: ACM; 2009. p. 561–4. Ravì F, Battiato S. A novel computational tool for aesthetic scoring of digital photography. In: European conference on colour in graphics, imaging, and vision. Amsterdam: CGIV; 2012. p. 349–54. Chen XW, Huang T. Facial expression recognition: a clustering-based approach. Pattern Recogn Lett. 2003;24:1295–302. Trendowicz A, Jeffery R. Classification and regression trees. In: Trendowicz A, Jeffery R, editors. Software project effort estimation. Cham: Springer; 2014. Zhang LM, Gao Y, Zhang C, Zhang HW, Tian Q, Zimmermann R. Perception-guided multimodal feature fusion for photo aesthetics assessment. In: Proceedings of the 22nd ACM international conference on multimedia. Orlando: ACM; 2014. p. 237–46. Li CC, Loui AC, Chen TH. Towards aesthetics: a photo quality assessment and photo selection system. In: Proceedings of the 18th ACM international conference on multimedia. Firenze: ACM; 2010. p. 827–30. Li CC, Gallagher A, Loui AC, Chen T. Aesthetic quality assessment of consumer photos with faces. In: Proceedings of 2010 IEEE international conference on image processing. Hong Kong, China: IEEE; 2010. p. 3221–4. Li C, Chen T. Aesthetic visual quality assessment of paintings. IEEE J Sel Top Signal Process. 2009;3:236–52. Fan ZB, Zhang K, Zheng XS. Evaluation and analysis of white space in Wu Guanzhong's Chinese paintings. Leonardo: MIT Press; 2016. Fan ZB, Li YN, Yu JH, Zhang K. Visual complexity of Chinese ink paintings. In: Proceedings of ACM symposium on applied perception. Cottbus: ACM; 2017. p. 9. 1–8. Sun RJ, Lian ZH, Tang YM, Xiao JG. Aesthetic visual quality evaluation of Chinese handwritings. In: Proceedings of the 24th international conference on artificial intelligence. Buenos Aires: AAAI Press; 2015. p. 2510–6. Zheng XS, Chakraborty I, Lin JJW, Rauschenberger R. Correlating low-level image statistics with users' rapid aesthetic and affective judgments of web pages. In: Proceedings of the 27th international conference on human factors in computing systems. Boston: ACM; 2009. p. 1–10. Zhang JJ, Yu JH, Zhang K, Zheng XS, Zhang JS. Computational aesthetic evaluation of logos. ACM Trans Appl Percept. 2017;14:20. Li YN, Zhang K, Li DJ. Rule-based automatic generation of logo designs. Leonardo. 2017;50:177–81. Zhang K, Harrell S, Ji X.
Computational aesthetics: on the complexity of computer-generated paintings. Leonardo. 2012;45:243–8. Mandelbrot BB. The fractal geometry of nature. San Francisco: W. H. Freeman and Company; 1982. Noll AM. Human or machine: a subjective comparison of Piet Mondrian's "composition with lines" (1917) and a computer-generated picture. Psychol Rec. 1966;16:1–10. Haeberli P. Paint by numbers: abstract image representations. ACM SIGGRAPH Comput Graph. 1990;24:207–14. Green S, Curtis C, Gooch AA, Gooch B, Hertzmann A, Salesin D, et al. SIGGRAPH 99 full day course: non-photorealistic rendering. 2018. http://www.mrl.nyu.edu/publications/npr-course1999/, Accessed June 2018. Singh G. Stringing the fractals. IEEE Comput Graph Appl. 2005;25:4–5. Singh G. Transforming fractals. IEEE Comput Graph Appl. 2014;34:4–5. Arnheim R. Visual thinking. Berkeley: University of California Press; 1969. Tao WY, Liu YX, Zhang K. Automatically generating abstract paintings in Malevich style. In: Proceedings of IEEE/ACS 13th international conference on computer and information science. Taiyuan: IEEE; 2015. p. 201–5. Xiong L, Zhang K. Generation of Miro's surrealism. In: Proceedings of the 9th international symposium on visual information communication and interaction. Dallas: ACM; 2016. p. 130–7. Zheng Y, Nie XC, Meng ZP, Feng W, Zhang K. Layered modeling and generation of Pollock's drip style. Vis Comput. 2015;31:589–600. Pollock TR. Mondrian and nature: recent scientific investigations. In Orsucci FF, Sala N editors, Chaos Complexity Res Compend. Nova Science Publishers, Inc. 2011;1(17):229–41. Zhang K, Yu JH. Generation of Kandinsky art. Leonardo. 2016;49:48–55. Gatys LA, Ecker AS, Bethge M. A neural algorithm of artistic style. arXiv preprint. 2015;arXiv:1508.06576v2. Gatys LA, Ecker AS, Bethge M. Image style transfer using convolutional neural networks. In: Proceedings of 2016 IEEE conference on computer vision and pattern recognition. Las Vegas: IEEE; 2016. p. 2414–23. Zhang K, Bo YH, Yang Y. Towards a design generation methodology. In: Proceedings of the 10th international symposium on visual information communication and interaction. Bangkok: ACM; 2017. p. 71–2. Brown K. Grammatical design. IEEE Exp. 1997;12:27–33. Trescak T, Esteva M, Rodriguez I. General shape grammar interpreter for intelligent designs generations. In: Proceedings of sixth international conference on computer graphics, imaging and visualization. Tianjin: IEEE; 2009. p. 235–40. Wang XY, Zhang K. Enhancements to a shape grammar interpreter. In: Proceedings of the 3rd international workshop on interactive and spatial computing. Richardson: ACM; 2018. p. 8–14. The work is partially supported by the National Social Science Fund Art Project (No.17BG134) and the Natural Science Foundation of the Beijing Municipal Education Committee (No.KM201710050001), National NSFC project (Grant number 61772463) and National NSFC project (Grant number 61572348). Authors' contribution YB wrote the first draft with most of the data. JY provided additional data and discussions. KZ added further discussions, the Design Generation Framework, and proofread the manuscript. All authors read and approved the final manuscript. Department of Fine Art, Beijing Film Academy, Beijing, 100088, China Yihang Bo The State Key Lab. of CAD&CG, Zhejiang University, Hangzhou, 310058, China Jinhui Yu Department of Computer Science, The University of Texas at Dallas, Richardson, TX, 75080, USA Kang Zhang Correspondence to Kang Zhang. Bo, Y., Yu, J. & Zhang, K.
Computational aesthetics and applications. Vis. Comput. Ind. Biomed. Art 1, 6 (2018). https://doi.org/10.1186/s42492-018-0006-1 Keywords: Computational aesthetics; Aesthetic measurement
Approximation of SDEs: a stochastic sewing approach Oleg Butkovsky1, Konstantinos Dareiotis2 & Máté Gerencsér ORCID: orcid.org/0000-0002-7276-70543 Probability Theory and Related Fields volume 181, pages 975–1034 (2021). We give a new take on the error analysis of approximations of stochastic differential equations (SDEs), utilizing and developing the stochastic sewing lemma of Lê (Electron J Probab 25:55, 2020. https://doi.org/10.1214/20-EJP442). This approach allows one to exploit regularization by noise effects in obtaining convergence rates. In our first application we show convergence (to our knowledge for the first time) of the Euler–Maruyama scheme for SDEs driven by fractional Brownian motions with non-regular drift. When the Hurst parameter is \(H\in (0,1)\) and the drift is \(\mathcal {C}^\alpha \), \(\alpha \in [0,1]\) and \(\alpha >1-1/(2H)\), we show the strong \(L_p\) and almost sure rates of convergence to be \(((1/2+\alpha H)\wedge 1) -\varepsilon \), for any \(\varepsilon >0\). Our conditions on the regularity of the drift are optimal in the sense that they coincide with the conditions needed for the strong uniqueness of solutions from Catellier and Gubinelli (Stoch Process Appl 126(8):2323–2366, 2016. https://doi.org/10.1016/j.spa.2016.02.002). In a second application we consider the approximation of SDEs driven by multiplicative standard Brownian noise where we derive the almost optimal rate of convergence \(1/2-\varepsilon \) of the Euler–Maruyama scheme for \(\mathcal {C}^\alpha \) drift, for any \(\varepsilon ,\alpha >0\). Since the 1970s, it has been observed that the addition of a random forcing into an ill-posed deterministic system could make it well-posed. Such a phenomenon is called regularization by noise. One of the prime examples concerns differential equations of the form $$\begin{aligned} dX_t = b(X_t)\, dt, \end{aligned}$$ where b is a bounded vector field. While Eq. (1.1) might have infinitely many solutions when b fails to be Lipschitz continuous and might possess no solution when b fails to be continuous, Zvonkin [39] and Veretennikov [38] (see also the paper of Davie [9]) showed that the stochastic differential equation (SDE) $$\begin{aligned} dX_t = b(X_t)\,dt + dB_t \end{aligned}$$ driven by a Brownian motion B has a unique strong solution when b is merely bounded measurable. This result was extended to the case of the fractional Brownian noise in [4, 8, 27, 32, 33]. These papers study the equation $$\begin{aligned} dX_t=b(X_t)\,dt+\,dB^H_t,\qquad X_0=x_0 \end{aligned}$$ where \(B^H\) is a d-dimensional fractional Brownian motion with Hurst parameter \(H\in (0,1)\). It is known [8, Theorem 1.9] that this equation has a unique strong solution if b belongs to the Hölder–Besov space \(\mathcal {C}^\alpha \) and \(\alpha >1-1/(2H)\). Thus, the presence of the noise not only produces solutions in situations where there was none but also singles out a unique physical solution in situations where there were multiple. However, to the best of our knowledge, no construction of this solution through discrete approximations has been known (unless \(H=1/2\)). In this article, we develop a new approach which allows us to construct this solution and even obtain a rate of convergence of the discrete approximations. Before the formal setup of Sect. 2, let us informally overview the results.
First, let us recall that in the standard Brownian case (\(H=1/2\)) the seminal work of Gyöngy and Krylov [18] established the convergence in probability of the Euler–Maruyama scheme $$\begin{aligned} dX^n_t=b(X^n_{\kappa _n(t)})\,dt+\,dB^H_t,\qquad X_0^n=x_0^n,\quad t\geqslant 0 \end{aligned}$$ to the solution of (1.3). Here b is a bounded measurable function and $$\begin{aligned} \kappa _n(t):=\lfloor nt\rfloor /n, \quad n\in \mathbb {N}. \end{aligned}$$ In the present paper, we significantly extend these results by (a) establishing the convergence of the Euler–Maruyama scheme for all \(H\in (0,1)\); (b) showing that the convergence takes place in a stronger (\(L_p(\Omega )\) and almost sure) sense; (c) obtaining the explicit rate of convergence. More precisely, in Theorem 2.1 we show that if b is bounded and Hölder-continuous with exponent \(\alpha >1-1/(2H)\), then the Euler–Maruyama scheme converges with rate \(((1/2+\alpha H)\wedge 1)-\varepsilon \) for any \(\varepsilon >0\). Thus, the approximation results are obtained under the minimal assumption on the drift b that is needed for strong uniqueness of solutions [8, 32] and for the well-posedness of scheme (1.4). Let us also point out that in particular, for \(H<1/2\), one does not need to require any continuity from b to obtain a convergence rate \(1/2-\varepsilon \). Concerning approximations of SDEs driven by fractional Brownian motions with regular coefficients, we refer the reader to the recent works [15, 22] and references therein. Concerning the implementation of such schemes and in particular the simulation of increments of fractional Brownian motions we refer to [37, Section 6] and its references. Our second application is to study equations with multiplicative noise in the standard Brownian case: $$\begin{aligned} dX_t=b(X_t)\,dt+\sigma (X_t)\,dB_t,\qquad X_0=x_0,\quad t\geqslant 0 \end{aligned}$$ and their discretisations $$\begin{aligned} dX^{n}_t=b(X^{n}_{\kappa _n(t)})\,dt+\sigma (X^{n}_{\kappa _n(t)})\,dB_t,\quad X_0^{n}=x_0^n, \quad t\geqslant 0. \end{aligned}$$ Here b, \(\sigma \) are measurable functions, B is a d-dimensional Brownian motion, and \(\kappa _n\) is defined in (1.5). To ensure well-posedness, a nondegeneracy assumption on \(\sigma \) has to be assumed. In the standard Brownian case the rate of convergence for irregular b has been recently actively studied, see among many others [2, 28,29,30, 36] and their references. However, the obtained rate deteriorates as b becomes more irregular: in the setting of (1.6)-(1.7), the best known rate is only proven to be (at least) \(\alpha /2\) for \(b\in \mathcal {C}^\alpha \), \(\alpha >0\) in [2]. It was first shown in [10] that, at least for additive noise, the strong rate does not vanish as the regularity \(\alpha \) approaches 0, and one in fact recovers the rate \(1/2-\varepsilon \) for arbitrary \(\varepsilon >0\), for all \(\alpha >0\). In the present paper we establish the same for multiplicative noise, in which case the rate 1/2 is well-known to be optimal. Our proof offers several other improvements to earlier results: all moments of the error can be treated in the same way, the scalar and multidimensional cases are also not distinguished, and the main error bound (2.9) is uniform in time, showing that \(X_\cdot \) and \(X^{n}_\cdot \) are close as paths. The topology (in time) where the error is measured is in fact even stronger, see Remark 2.3. 
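For readers who wish to experiment numerically, a minimal Python sketch of the Euler–Maruyama scheme (1.7) on the uniform grid is given below; the drift and diffusion in the toy usage are our own choices, made only to illustrate a Hölder-continuous drift with nondegenerate noise, and are not taken from the paper.

```python
import numpy as np

def euler_maruyama(b, sigma, x0, n, T=1.0, rng=None):
    """Euler--Maruyama approximation (1.7) on the grid k/n, k = 0..nT.

    b, sigma : drift R^d -> R^d and diffusion R^d -> R^{d x d}
    n        : grid points per unit time, so kappa_n(t) = floor(nt)/n
    Between grid points the coefficients are frozen at the last grid point,
    as in the scheme above.  Returns the array of grid values X^n_{k/n}.
    """
    rng = rng or np.random.default_rng()
    x0 = np.atleast_1d(np.asarray(x0, dtype=float))
    d = x0.shape[0]
    steps = int(round(n * T))
    dt = T / steps
    x = np.empty((steps + 1, d)); x[0] = x0
    for k in range(steps):
        dB = rng.standard_normal(d) * np.sqrt(dt)
        x[k + 1] = x[k] + b(x[k]) * dt + np.asarray(sigma(x[k])) @ dB
    return x

# toy usage: a bounded 1/2-Hölder drift (i.e. in C^{1/2}) with sigma = identity
b = lambda x: np.sqrt(np.abs(np.sin(x)))
sigma = lambda x: np.eye(1)
path = euler_maruyama(b, sigma, x0=[0.1], n=1000)
print(path[-1])
```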
To obtain these results we develop a new strategy which utilizes the stochastic sewing lemma (SSL) of Lê [27] as well as some other specially developed tools. We believe that these tools might be also of independent interest; let us briefly describe them here. First, we obtain a new stochastic sewing–type lemma, see Theorem 3.3. It provides bounds on the \(L_p\)-norm of the increments of a process, with the correct dependence on p. This improves the corresponding bounds from SSL of Lê (although, under more restrictive conditions). This improved bound is used for proving stretched exponential moment bounds that play a key role in the convergence analysis of the Euler–Maruyama scheme for (1.3), see Sect. 4.3. In particular, using this new sewing-type lemma, we are able to extend the key bound of Davie [9, Proposition 2.1] (this bound was pivotal in his paper for establishing uniqueness of solutions to (1.2) when the driving noise is the standard Brownian motion) to the case of the fractional Brownian noise, see Lemma 4.3. Second, in Sect. 5 we derive density estimates of (a drift-free version of) the solution of (1.7) via Malliavin calculus. Classical results in this direction include that of Gyöngy and Krylov [18], and of Bally and Talay [5, 6]: the former gives sharp short time asymptotics but no smoothness of the density, and the latter vice versa (see Remark 5.1 below). Since our approach requires both properties at the same time, we give a self-contained proof of such an estimate (5.2). Finally let us mention that, as in [10, 11, 34], efficient quadrature bounds play a crucial role in the analysis. These are interesting approximation problems in their own right, see, e.g., [25] and the references therein. Such questions in the non-Markovian setting of fractional Brownian motion have only been addressed recently in [1]. However, there are a few key differences to our quadrature bounds from Lemma 4.1. First, we derive bounds in \(L_p(\Omega )\) for all p, which by Proposition 2.9 also imply the corresponding almost sure rate (as opposed to \(L_2(\Omega )\) rates only in [1]). Second, unlike the standard fractional Brownian motions considered here, [1] requires starting them at time 0 from a random variable with a density, which provides a strong smoothing effect. Third, when approximating the functional of the form $$\begin{aligned} \Gamma _t:=\int _0^tf(B^H_s)\,ds, \end{aligned}$$ also called 'occupation time functional', by the natural discretisation $$\begin{aligned} \Gamma ^n_t=\int _0^tf(B^H_{\kappa _n(s)})\,ds, \end{aligned}$$ our results not only imply pointwise error estimates on \(|\Gamma _T-\Gamma ^n_T|\), but also on the error of the whole path \(\Vert \Gamma _{\cdot }-\Gamma ^n_\cdot \Vert _{\mathcal {C}^\beta }\) measured in a Hölder norm \(\mathcal {C}^\beta \) with some \(\beta >1/2\). This is an immediate consequence of the bounds (4.1) in combination with Kolmogorov's continuity theorem. The rest of the article is structured as follows. Our main results are presented in Sect. 2. In Sect. 3 we outline the main strategy and collect some necessary auxiliary results, including the new sewing lemma–type bound Theorem 3.3. Section 4 is devoted to the error analysis in the additive fractional noise case. In Sect. 5 we prove an auxiliary bound on the probability distribution of the Euler–Maruyama approximation of certain sufficiently nice SDEs. The proofs of the convergence in the multiplicative standard Brownian noise case are given in Sect. 6. 
We begin by introducing the basic notation. Consider a probability space \((\Omega , \mathcal {F}, \mathbb {P})\) carrying a d-dimensional two-sided Brownian motion \((W_t)_{t \in \mathbb {R}}\). Let \(\mathbb {F}=(\mathcal {F}_t)_{t \in \mathbb {R}}\) be the filtration generated by the increments of W. The conditional expectation given \(\mathcal {F}_s\) is denoted by \(\;\;{\mathbb {E}}\;^s\). For \(H \in (0,1)\) we define the fractional Brownian motion with Hurst parameter H by the Mandelbrot-van Ness representation [35, Proposition 5.1.2] $$\begin{aligned} B^H_t := \int _{-\infty }^0 \bigl (|t-s|^{H-1/2}- |s|^{H-1/2}\bigr ) \, dW_s + \int _0^t |t-s|^{H-1/2} \, dW_s. \end{aligned}$$ Recall that the components of \(B^H\) are independent and each component is a Gaussian process with zero mean and covariance $$\begin{aligned} C(s,t):=\frac{c_H}{2}(s^{2H}+t^{2H}-|t-s|^{2H}),\quad s,t\geqslant 0, \end{aligned}$$ where \(c_H\) is a certain positive constant, see [35, (5.1)]. For \(\alpha \in (0,1]\) and a function \(f:Q\rightarrow V\), where \(Q\subset \mathbb {R}^k\) and \((V,|\cdot |)\) is a normed space, we set $$\begin{aligned}{}[f]_{\mathcal {C}^\alpha (Q,V)}:=\sup _{x\ne y\in Q}\frac{|f(x)-f(y)|}{|x-y|^\alpha }. \end{aligned}$$ For \(\alpha \in (0,\infty )\) we denote by \(\mathcal {C}^\alpha (Q,V)\) the space of all functions \(f:Q\rightarrow V\) having derivatives \(\partial ^\ell f\) for all multi-indices \(\ell \in ( \mathbb {Z}_+)^k\) with \(|\ell |<\alpha \) such that $$\begin{aligned} \Vert f\Vert _{\mathcal {C}^\alpha (Q,V)}:=\sum _{|\ell |< \alpha } \sup _{x\in Q}|\partial ^\ell f(x)|+ \sum _{\alpha -1< |\ell |< \alpha }[\partial ^\ell f]_{\mathcal {C}^{\alpha -|\ell |}(Q,V)}< \infty . \end{aligned}$$ If \(\ell =(0,\ldots ,0)\), then as usual, we use the convention \(\partial ^\ell f=f\). In particular, the \(\mathcal {C}^\alpha \) norm always includes the supremum of the function. We also set \(\mathcal {C}^0(Q,V)\) to be the space of bounded measurable functions with the supremum norm. We emphasize that in our notation elements of \(\mathcal {C}^0\) need not be continuous! If \(\alpha <0\), then by \(\mathcal {C}^\alpha (\mathbb {R}^d,\mathbb {R})\) we denote the space of all distributions \(f \in \mathcal {D}'( \mathbb {R}^d)\), such that $$\begin{aligned} \Vert f \Vert _{\mathcal {C}^\alpha } := \sup _{\varepsilon \in (0,1]} \varepsilon ^{-\alpha /2} \Vert \mathcal {P}_\varepsilon f\Vert _{\mathcal {C}^0(\mathbb {R}^d,\mathbb {R})}< \infty , \end{aligned}$$ where \(\mathcal {P}_\varepsilon f \) is the convolution of f with the d-dimensional Gaussian heat kernel at time \(\varepsilon \). In some cases we use shorthands: if \(Q=\mathbb {R}^d\), or \(V=\mathbb {R}^d\) or \(V=\mathbb {R}^{d\times d}\), they are omitted from the notation. For instance, the reader understands that requiring the diffusion coefficient \(\sigma \) of (1.6) to be of class \(\mathcal {C}^\alpha \) is to require it to have finite \(\Vert \cdot \Vert _{\mathcal {C}^\alpha (\mathbb {R}^d,\mathbb {R}^{d\times d})}\) norm. If \(V=L_p(\Omega )\) for some \(p\geqslant 2\), we write $$\begin{aligned}{}[\![ f ]\!]_{\mathscr {C}^\alpha _p,Q}:=\Vert f\Vert _{\mathcal {C}^\alpha (Q,L_p(\Omega ))}. \end{aligned}$$ Convention on constants: throughout the paper N denotes a positive constant whose value may change from line to line; its dependence is always specified in the corresponding statement. 
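To connect the Mandelbrot–van Ness representation with the covariance formula above, here is a crude numerical check (again a sketch, not part of the paper): the kernel is discretised on a truncated grid, and the empirical covariance of \((B^H_{0.4},B^H_{0.9})\), normalised by the empirical variance at time 1 so that the unknown constant \(c_H\) drops out, is compared with \((0.4^{2H}+0.9^{2H}-0.5^{2H})/2\). The truncation point, the mesh and the choice \(H=0.7\) are arbitrary; for \(H<1/2\) the kernel is singular on the diagonal and the naive discretisation below is not appropriate.

```python
import numpy as np

def mvn_kernel(t_grid, hurst, s):
    """Discretised Mandelbrot-van Ness kernel: entry [i, j] is the weight of dW_{s_j}
    in B^H_{t_i}, namely |t-s|^{H-1/2} - |s|^{H-1/2} for s < 0 and |t-s|^{H-1/2} for
    0 <= s < t.  Only meant for H > 1/2; for H < 1/2 the kernel blows up at s = t."""
    t = np.asarray(t_grid, dtype=float)[:, None]
    return np.where(s < 0,
                    np.abs(t - s) ** (hurst - 0.5) - np.abs(s) ** (hurst - 0.5),
                    np.where(s < t, np.abs(t - s) ** (hurst - 0.5), 0.0))

# Illustration only: the truncation point -50, mesh 0.01 and H = 0.7 are ad-hoc choices.
# Normalising by the empirical variance at time 1 eliminates the constant c_H.
rng = np.random.default_rng(2)
H, grid, mesh = 0.7, np.array([0.4, 0.9, 1.0]), 0.01
s = np.arange(-50.0, grid[-1], mesh)                      # truncated integration grid
K = mvn_kernel(grid, H, s)                                # shape (3, len(s))
sims = K @ rng.standard_normal((len(s), 1000)) * np.sqrt(mesh)  # samples of (B_0.4, B_0.9, B_1)
emp = np.cov(sims)                                        # empirical 3 x 3 covariance
ratio = emp[0, 1] / emp[2, 2]                             # Cov(B_0.4, B_0.9) / Var(B_1)
exact = 0.5 * (0.4 ** (2 * H) + 0.9 ** (2 * H) - 0.5 ** (2 * H))
print(round(ratio, 3), round(exact, 3))                   # the two values should roughly agree
```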
Additive fractional noise Our first main result establishes the convergence of the numerical scheme (1.4) to the solution of Eq. (1.3). Fix \(H\in (0,1)\). It is known (see [8, Theorem 1.9]) that if the drift \(b\in \mathcal {C}^\alpha \) for some \(\alpha \in [0,1]\) satisfying \(\alpha >1-1/(2H)\), then for any fixed \(x_0\in \mathbb {R}^d\), Eq. (1.3) admits a unique strong solution, which we denote by X. For any \(n\in \mathbb {N}\) we take \(x_0^n\in \mathbb {R}^d\) and denote the solution of (1.4) by \(X^n\). For a given \(\alpha \in [0,1]\) and \(H\in (0,1)\), we set $$\begin{aligned} \gamma =\gamma (\alpha ,H):=(1/2+\alpha H)\wedge 1. \end{aligned}$$ Now we are ready to present our first main result. Its proof is placed in Sect. 4; a brief outline of it is provided in Sect. 3.1. Theorem 2.1 Let \(\alpha \in [0,1]\) satisfy $$\begin{aligned} \alpha >1-1/(2H). \end{aligned}$$ Suppose \(b\in \mathcal {C}^\alpha \), let \(\varepsilon ,\delta >0\) and \(p\geqslant 2\). Then there exists a constant \(\tau =\tau (\alpha ,H,\varepsilon )>1/2\) such that for all \(n\in \mathbb {N}\) the following bound holds $$\begin{aligned} \Vert X-X^n\Vert _{\mathcal {C}^\tau ([0,1],L_p(\Omega ))}\leqslant N n^{\delta }|x_0-x^n_0| + N n^{-\gamma +\varepsilon +\delta } \end{aligned}$$ with some constant \(N=N(p,d,\alpha ,H,\varepsilon ,\delta ,\Vert b\Vert _{\mathcal {C}^\alpha })\). Remark 2.2 An interesting question left open is whether one can reach \(\alpha =0\) in the \(H=1/2\) case. In dimension 1, this was answered positively in [10] using PDE methods, but the sewing approach at the moment does not seem to handle such endpoint situations. For \(H\ne 1/2\) even weak existence or uniqueness is not known for the endpoint \(\alpha =1-1/(2H)\). From (2.6), Kolmogorov's continuity theorem, and Jensen's inequality, one gets the bound $$\begin{aligned} \big \Vert \Vert X-X^n\Vert _{\mathcal {C}^{\tau -\varepsilon '}([0,1],\mathbb {R}^d)}\big \Vert _{L_p(\Omega )}\leqslant N n^\delta |x_0-x^n_0| + N n^{-\gamma +\varepsilon +\delta } \end{aligned}$$ for any \(\varepsilon '>0\) (with N also depending on \(\varepsilon '\)). In the literature it is more common to derive error estimates in supremum norm, which of course follows: $$\begin{aligned} \big \Vert \sup _{t\in [0,1]}|X_t-X^n_t|\big \Vert _{L_p(\Omega )}\leqslant N n^\delta |x_0-x^n_0| + N n^{-\gamma +\varepsilon +\delta }, \end{aligned}$$ but (2.7) is quite a bit stronger. A trivial lower bound on the rate of convergence of the solutions is the rate of convergence of the initial conditions. In (2.6) we lose \(\delta \) compared to this rate, but \(\delta >0\) can be chosen arbitrarily small. This becomes even less of an issue if one simply chooses \(x_0^n=x_0\). The fact that the error is well-controlled even between the gridpoints is related to the choice of how we extend \(X^n\) to continuous time from the points \(X_0^n,X_{1/n}^n,\ldots \). For other types of extensions and their limitations we refer the reader to [31]. Corollary 2.6 Assume \(\alpha \in [0,1]\) satisfies (2.5) and suppose \(b\in \mathcal {C}^\alpha \). Take \(x_0=x_0^n\) for all \(n\in \mathbb {N}\). 
Then for a sufficiently small \(\theta >0\) and any \(\varepsilon >0\) there exists an almost surely finite random variable \(\eta \) such that for all \(n\in \mathbb {N}\), \(\omega \in \Omega \) the following bound holds $$\begin{aligned} \sup _{t\in [0,1]}|X_t-X^n_t|\leqslant \Vert X-X^n\Vert _{\mathcal {C}^{1/2+\theta }([0,1],\mathbb {R}^d)} \leqslant \eta n^{-\gamma +\varepsilon }, \end{aligned}$$ where \(\gamma \) was defined in (2.4). Proof. An immediate consequence of (2.7), Proposition 2.9 below, and the fact that \(\tau >1/2\). \(\square \) Multiplicative Brownian noise In the multiplicative case we work under the ellipticity and regularity conditions $$\begin{aligned} \sigma \in \mathcal {C}^2,\qquad \qquad \sigma \sigma ^T\succeq \lambda I, \end{aligned}$$ in the sense of positive definite matrices, with some \(\lambda >0\). This, together with \(b\in \mathcal {C}^0\), guarantees the strong well-posedness of equations (1.6) and (1.7) [38, Theorem 1], whose solutions we denote by X and \(X^{n}\), respectively. The second main result then reads as follows; its proof is the content of Sect. 6. Theorem 2.7 Let \(\alpha \in (0,1]\). Suppose \(b\in \mathcal {C}^\alpha \), let \(\varepsilon >0\), \(\tau \in [0,1/2)\), and \(p\geqslant 2\). Suppose \(\sigma \) satisfies (2.8). Then for all \(n\in \mathbb {N}\) the following bound holds $$\begin{aligned} \Vert X-X^n\Vert _{\mathcal {C}^\tau ([0,1],L_p(\Omega ))}\leqslant N|x_0-x_0^n| + N n^{-1/2+\varepsilon } \end{aligned}$$ with some \(N=N(p,d,\alpha ,\varepsilon ,\tau ,\lambda ,\Vert b\Vert _{\mathcal {C}^\alpha }, \Vert \sigma \Vert _{\mathcal {C}^2})\). Corollary 2.8 Let \(\alpha \in (0,1]\), assume \(x_0=x_0^n\) for all \(n\in \mathbb {N}\), suppose \(b\in \mathcal {C}^\alpha \), and suppose \(\sigma \) satisfies (2.8). Let \(\varepsilon >0\), \(\tau \in [0,1/2)\). Then there exists an almost surely finite random variable \(\eta \) such that for all \(n\in \mathbb {N}\), \(\omega \in \Omega \) the following bound holds $$\begin{aligned} \sup _{t\in [0,1]}|X_t-X^n_t|\leqslant \Vert X-X^n\Vert _{\mathcal {C}^{\tau }([0,1],\mathbb {R}^d)} \leqslant \eta n^{-1/2+\varepsilon }. \end{aligned}$$ Proof. An immediate consequence of (2.9), Kolmogorov's continuity theorem, and Proposition 2.9 below. \(\square \) Let us conclude by invoking a simple fact used in the proof of Corollaries 2.6 and 2.8, which goes back to at least [20, proof of Theorem 2.3] (see also [13, Lemma 2]). Proposition 2.9 Let \(\rho >0\) and let \((Z_n)_{n\in \mathbb {N}}\) be a sequence of random variables such that for all \(p>0\) and all \(n\in \mathbb {N}\) one has the bound $$\begin{aligned} \Vert Z_n\Vert _{L_p(\Omega )}\leqslant N n^{-\rho } \end{aligned}$$ for some \(N=N(p)\). Then for all \(\varepsilon >0\) there exists an almost surely finite random variable \(\eta \) such that for all \(n\in \mathbb {N}\), \(\omega \in \Omega \) $$\begin{aligned} |Z_n|\leqslant \eta n^{-\rho +\varepsilon }. \end{aligned}$$ Proof. Notice that for any \(q>0\) $$\begin{aligned} \sum _{n\in \mathbb {N}}\mathbb {P}(|Z_n|>n^{-\rho +\varepsilon })\leqslant \sum _{n\in \mathbb {N}}\frac{\;\;{\mathbb {E}}\;|Z_n|^q}{n^{q(-\rho +\varepsilon )}}\leqslant \sum _{n\in \mathbb {N}} N n^{-q\varepsilon }. \end{aligned}$$ Choosing \(q=2/\varepsilon \), the above sum is finite, so by the Borel-Cantelli lemma there exists an almost surely finite \(\mathbb {N}\)-valued random variable \(n_0\) such that \(|Z_n|\leqslant n^{-\rho +\varepsilon }\) for all \(n>n_0\). 
This yields the claim by setting $$\begin{aligned} \eta :=1\vee \max _{n\leqslant n_0}(|Z_n|n^{\rho -\varepsilon }). \end{aligned}$$ \(\square \) The outline of the strategy The purpose of this section is to outline the main steps in a simple example. Hopefully this gives a clear picture of the strategy to the reader, which otherwise may be blurred by the complications arising in the proofs of Theorems 2.1 and 2.7. The 'simple example' will be the setting of (1.3) and (1.4) with \(H=1/2\) and \(b\in \mathcal {C}^\alpha \) for some \(\alpha >0\). We furthermore assume \(x_0=x_0^n\) and that the time horizon is given by \([0,T_0]\) instead of [0, 1], with some small \(1\geqslant T_0>0\) to be chosen later. Finally, we will only aim to prove (2.6) with \(\tau =1/2\). Step 1 ("Quadrature bounds"). Our first goal is to bound the quantity $$\begin{aligned} \mathcal {A}_{T_0}:=\int _0^{T_0} b(B_r)-b(B_{\kappa _n(r)})\,dr. \end{aligned}$$ From the Hölder continuity of b, one would have the trivial bound of order \(n^{-\alpha /2}\) in any \(L_p(\Omega )\) norm, but in fact one can do much better, as follows. Fix \(\varepsilon \in (0,1/2)\) and define (recall that by \(\;\;{\mathbb {E}}\;^s\) we denote the conditional expectation given \(\mathcal {F}_s\)) $$\begin{aligned} A_{s,t}=\;\;{\mathbb {E}}\;^s(\mathcal {A}_t-\mathcal {A}_s)=\;\;{\mathbb {E}}\;^s \int _s^t b(B_r)-b(B_{\kappa _n(r)})\,dr. \end{aligned}$$ The stochastic sewing lemma, Proposition 3.2 below, allows one to bound \(\mathcal {A}\) through bounds on A. Given this choice of \(A_{s,t}\), and provided that conditions (3.8) and (3.9) are satisfied, it is easy to check that the unique adapted process \(\mathcal {A}\) constructed in Proposition 3.2 coincides with the one in (3.1). Indeed, the process in (3.1) satisfies (3.10) and (3.11) with \(\varepsilon _1=\varepsilon \), \(\varepsilon _2=1\), \(K_1=\Vert b\Vert _{\mathcal {C}^0}\) and \(K_2=0\). Therefore it remains to find \(C_1\) and \(C_2\). In fact, it is immediate that one can choose \(C_2=0\), since \(\;\;{\mathbb {E}}\;^s\delta A_{s,u,t}=\;\;{\mathbb {E}}\;^s(A_{s,t}-A_{s,u}-A_{u,t})=0\). We now claim that one can take \(C_1=Nn^{-1/2-\alpha /2+\varepsilon }\) in (3.8). Since \(\Vert b(B_r)-b(B_{\kappa _n(r)})\Vert _{L_p(\Omega )}\leqslant \Vert b\Vert _{\mathcal {C}^\alpha }n^{-\alpha /2}\), if \(|t-s|\leqslant 2 n^{-1}\), then one easily gets by conditional Jensen's inequality $$\begin{aligned} \Vert A_{s,t}\Vert _{L_p(\Omega )}\leqslant N |s-t|n^{-\alpha /2}\leqslant N |s-t|^{1/2+\varepsilon }n^{-1/2-\alpha /2+\varepsilon }. \end{aligned}$$ If \(|t-s|>2n^{-1}\), let \(s'=\kappa _n(s)+2n^{-1}\) be the second gridpoint to the right of s. In particular, \(r\geqslant s'\) implies \(\kappa _n(r)\geqslant s\). Let us furthermore notice that for any \(u\geqslant v\) and any bounded measurable function f, one has \(\;\;{\mathbb {E}}\;^v f(B_u)=\mathcal {P}_{u-v}f(B_v)\), where \(\mathcal {P}\) is the standard heat kernel (see (3.22) below for a precise definition). 
One can then write $$\begin{aligned} \Vert A_{s,t}\Vert _{L_p(\Omega )}\leqslant & {} \int _s^{s'}\Vert b(B_r)-b(B_{\kappa _n(r)})\Vert _{L_p(\Omega )}\,dr+\big \Vert \int _{s'}^t\;\;{\mathbb {E}}\;^s b(B_r)-\;\;{\mathbb {E}}\;^s b(B_{\kappa _n(r)})\,dr\big \Vert _{L_p(\Omega )} \nonumber \\\leqslant & {} N n^{-1-\alpha /2}+\int _{s'}^t\Vert (\mathcal {P}_{r-s}-\mathcal {P}_{\kappa _n(r)-s})b\Vert _{\mathcal {C}^0}\,dr \nonumber \\\leqslant & {} N n^{-1-\alpha /2}+N\int _{s'}^t(r-s')^{-1/2+\varepsilon }n^{-1/2-\alpha /2+\varepsilon }\,dr \nonumber \\\leqslant & {} N|t-s|^{1/2+\varepsilon }n^{-1/2-\alpha /2+\varepsilon } \end{aligned}$$ where in the third line we used a well-known estimate for heat kernels, see Proposition 3.7 (ii) with exponents \(\beta =0\), \(\delta =1/2+\alpha /2-\varepsilon \), and time points \(\kappa _n(r)-s\) in place of s, \(r-s\) in place of t. We also used that for \(r\geqslant s'\), one has \(\kappa _n(r)-s\geqslant r-s'\). By (3.2) and (3.3) we indeed get (3.8) with \(C_1=N n^{-1/2-\alpha /2+\varepsilon }\). Applying the stochastic sewing lemma, (3.12) yields $$\begin{aligned} \Vert \mathcal {A}_t-\mathcal {A}_s\Vert _{L_p(\Omega )}=\big \Vert \int _s^t b(B_r)-b(B_{\kappa _n(r)})\,dr\big \Vert _{L_p(\Omega )}\leqslant N|t-s|^{1/2+\varepsilon }n^{-1/2-\alpha /2+\varepsilon } \end{aligned}$$ for all \(0\leqslant s\leqslant t\leqslant T_0\). Here the constant N depends on \(p,\varepsilon ,\alpha ,d,\Vert b\Vert _{C^\alpha }\), but not on \(T_0\). Step 1.5 (Girsanov transform). An easy application of Girsanov's theorem yields $$\begin{aligned} \big \Vert \int _s^t b(X_r^n)-b(X^n_{\kappa _n(r)})\,dr\big \Vert _{L_p(\Omega )}\leqslant N|t-s|^{1/2+\varepsilon }n^{-1/2-\alpha /2+\varepsilon }. \end{aligned}$$ In general (for example, for fractional Brownian motions) the Girsanov transformation can become involved, but for our present example this is completely straightforward. Step 2 ("Regularization bound"). Next, we estimate the quantity $$\begin{aligned} \mathcal {A}_{T_0}=\int _0^{T_0}b(B_r+\psi _r)-b(B_r+\varphi _r)\,dr \end{aligned}$$ for some adapted processes \(\psi ,\varphi \) whose Lipschitz norm is bounded by some constant K. As suggested by the above notation, we use the stochastic sewing lemma again, with \(A_{s,t}\) defined as $$\begin{aligned} A_{s,t}=\;\;{\mathbb {E}}\;^s\int _s^t b(B_r+\psi _s)-b(B_r+\varphi _s)\,dr. \end{aligned}$$ We do not give the details of the calculations at this point. It is an instructive exercise for the interested reader to verify that (3.8) and (3.9) are satisfied with \(\varepsilon _1=\alpha /2\), \(C_1=N[\![ \psi -\varphi ]\!]_{\mathscr {C}^0_p,[0,T_0]}\) and \(\varepsilon _2=\alpha /2\), \(C_2=N[\![ \psi -\varphi ]\!]_{\mathscr {C}^{1/2}_p,[0,T_0]}\). Here N depends on \(p,\alpha ,d,K,\Vert b\Vert _{\mathcal {C}^\alpha }\), but not on \(T_0\). The bound (3.10) is straightforward, with \(K_1=\Vert b\Vert _{\mathcal {C}^0}\). Concerning (3.11), one can write $$\begin{aligned}&|\;\;{\mathbb {E}}\;^s(\mathcal {A}_t-\mathcal {A}_s-A_{s,t})|\leqslant \;\;{\mathbb {E}}\;^s\int _s^t\big |b(B_r+\psi _r)\\&\quad -b(B_r+\psi _s)\big |+\big |b(B_r+\varphi _r)-b(B_r+\varphi _s)\big |\,dr, \end{aligned}$$ and so \(K_2=2K\Vert b\Vert _{\mathcal {C}^\alpha }\) does the job. 
Therefore, by (3.12), we get $$\begin{aligned} \Vert \mathcal {A}_t-\mathcal {A}_s\Vert _{L_p(\Omega )}= & {} \big \Vert \int _s^t b(B_r+\psi _r)-b(B_r+\varphi _r)\,dr\big \Vert _{L_p(\Omega )} \\\leqslant & {} N |t-s|^{1/2+\alpha /2}[\![ \psi -\varphi ]\!]_{\mathscr {C}^0_p,[0,T_0]}\\&+N |t-s|^{1+\alpha /2}[\![ \psi -\varphi ]\!]_{\mathscr {C}^{1/2}_p,[0,T_0]}. \end{aligned}$$ We will only apply the following simple corollary of this bound: if \(\psi _0=\varphi _0\), then $$\begin{aligned} \big \Vert \int _s^t b(B_r+\psi _r)-b(B_r+\varphi _r)\,dr\big \Vert _{L_p(\Omega )}\leqslant N |t-s|^{1/2+\alpha /2}[\![ \psi -\varphi ]\!]_{\mathscr {C}^{1/2}_p,[0,T_0]}.\nonumber \\ \end{aligned}$$ Step 3 ("Buckling"). Let \(\psi \) and \(\psi ^n\) be the drift components of X and \(X^n\), respectively: $$\begin{aligned} \psi _t=x_0+\int _0^t b(X_r)\,dr,\qquad \psi ^n_t=x_0+\int _0^t b(X^n_{\kappa _n(r)})\,dr. \end{aligned}$$ We apply (3.4) and (3.5) with \(\varphi =\psi ^n\) to get $$\begin{aligned} \Vert (\psi -\psi ^n)_t-(\psi -\psi ^n)_s\Vert _{L_p(\Omega )}\leqslant & {} Nn^{-1/2-\alpha /2+\varepsilon }|t-s|^{1/2+\varepsilon } \\&+ N |t-s|^{1/2+\alpha /2}[\![ \psi -\psi ^n ]\!]_{\mathscr {C}^{1/2}_p,[0,T_0]}. \end{aligned}$$ Dividing by \(|t-s|^{1/2}\) and taking the supremum over \(0\leqslant s\leqslant t\leqslant T_0\), one gets $$\begin{aligned}{}[\![ \psi -\psi ^n ]\!]_{\mathscr {C}^{1/2}_p,[0,T_0]}\leqslant Nn^{-1/2-\alpha /2+\varepsilon } +NT_0^{\alpha /2} [\![ \psi -\psi ^n ]\!]_{\mathscr {C}^{1/2}_p,[0,T_0]}. \end{aligned}$$ Since so far N does not depend on \(T_0\), one can choose \(T_0\) sufficiently small so that \(NT_0^{\alpha /2}\leqslant 1/2\). This yields the desired bound $$\begin{aligned}{}[\![ X-X^n ]\!]_{\mathscr {C}^{1/2}_p,[0,T_0]}=[\![ \psi -\psi ^n ]\!]_{\mathscr {C}^{1/2}_p,[0,T_0]} \leqslant Nn^{-1/2-\alpha /2+\varepsilon }. \end{aligned}$$ Let us point out that the rate of convergence is determined only by the first step. Also, the second step is similar in spirit to the 'averaging bounds' appearing in sewing-based uniqueness proofs for SDEs (see e.g. [8, 27]). In the proof of Theorem 2.1, the more difficult part will be the regularization bound. Applying only the stochastic sewing lemma of Lê apparently does not lead to an optimal result for \(H>1/2\). Therefore at some point one has to move from almost sure bounds (which are similar to [8]) to \(L_p\) bounds. This requires an extension of Davie's moment bound [9, Proposition 2.1] to the case of fractional Brownian motion. This is done in Lemma 4.3 using the new stochastic sewing lemma (Theorem 3.3). In contrast, for Theorem 2.7 establishing the quadrature bound will be more difficult. In the above arguments, the heat kernel bounds have to be replaced by estimates on the transition densities of the Euler–Maruyama scheme. These bounds are established via Malliavin calculus; this is the content of Sect. 5. Sewing lemmas As mentioned above, the proof strategy relies on the sewing and stochastic sewing lemmas. For the convenience of the reader, we recall them here. The first two lemmas are well known; the third one is new. We define for \(0\leqslant S\leqslant T\leqslant 1\) the set \([S,T]_\leqslant :=\{(s,t):\,S\leqslant s\leqslant t\leqslant T\}\). If \(A_{\cdot ,\cdot }\) is a function \([S,T]_\leqslant \rightarrow \mathbb {R}^d\), then for \(s\leqslant u\leqslant t\) we put \(\delta A_{s,u,t}:=A_{s,t}-A_{s,u}-A_{u,t}\). The first statement is the sewing lemma of Gubinelli. 
Proposition 3.1 ([14, Lemma 2.1], [19, Proposition 1]) Let \(0\leqslant S\leqslant T\leqslant 1\) and let \(A_{\cdot ,\cdot }\) be a continuous function from \([S,T]_\leqslant \) to \(\mathbb {R}^d\). Suppose that for some \(\varepsilon >0\) and \(C>0\) the bound $$\begin{aligned} |\delta A_{s,u,t}| \leqslant C |t-s|^{1+\varepsilon } \end{aligned}$$ holds for all \(S\leqslant s\leqslant u\leqslant t\leqslant T\). Then there exists a unique function \(\mathcal {A}:[S,T]\rightarrow \mathbb {R}^d\) such that \(\mathcal {A}_S=0\) and the following bound holds for some constant \(K>0\): $$\begin{aligned} |\mathcal {A}_t -\mathcal {A}_s-A_{s,t}|\leqslant K |t-s|^{1+\varepsilon }, \quad (s,t)\in [S,T]_\leqslant . \end{aligned}$$ Moreover, there exists a constant \(K_0\) depending only on \(\varepsilon \), d such that \(\mathcal {A}\) in fact satisfies the above bound with \(K\leqslant K_0 C\). The next statement is the stochastic extension of the above result obtained by Lê. Recall that for any \(s\geqslant 0\) we are using the convention \(\;\;{\mathbb {E}}\;^s[...]:=\;\;{\mathbb {E}}\;[...|\mathcal {F}_s]\). Proposition 3.2 ([27, Theorem 2.4]) Let \(p\geqslant 2\), \(0\leqslant S\leqslant T\leqslant 1\) and let \(A_{\cdot ,\cdot }\) be a function \([S,T]_\leqslant \rightarrow L_p(\Omega ,\mathbb {R}^d)\) such that for any \((s,t)\in [S,T]_\leqslant \) the random vector \(A_{s,t}\) is \(\mathcal {F}_t\)-measurable. Suppose that for some \(\varepsilon _1,\varepsilon _2>0\) and \(C_1,C_2\) the bounds $$\begin{aligned} \Vert A_{s,t}\Vert _{L_p(\Omega )}\leqslant & {} C_1|t-s|^{1/2+\varepsilon _1}, \end{aligned}$$ $$\begin{aligned} \Vert \;\;{\mathbb {E}}\;^s\delta A_{s,u,t}\Vert _{L_p(\Omega )}\leqslant & {} C_2 |t-s|^{1+\varepsilon _2} \end{aligned}$$ hold for all \(S\leqslant s\leqslant u\leqslant t\leqslant T\). Then there exists a unique (up to modification) \(\mathbb {F}\)-adapted process \(\mathcal {A}:[S,T]\rightarrow L_p(\Omega ,\mathbb {R}^d)\) such that \(\mathcal {A}_S=0\) and the following bounds hold for some constants \(K_1,K_2>0\): $$\begin{aligned} \Vert \mathcal {A}_t -\mathcal {A}_s-A_{s,t}\Vert _{L_p(\Omega )}&\leqslant K_1 |t-s|^{1/2+\varepsilon _1}+K_2 |t-s|^{1+\varepsilon _2},\quad (s,t)\in [S,T]_\leqslant , \end{aligned}$$ $$\begin{aligned} \Vert \;\;{\mathbb {E}}\;^s\big (\mathcal {A}_t -\mathcal {A}_s-A_{s,t}\big )\Vert _{L_p(\Omega )}&\leqslant K_2|t-s|^{1+\varepsilon _2},\quad (s,t)\in [S,T]_\leqslant . \end{aligned}$$ Moreover, there exists a constant K depending only on \(\varepsilon _1,\varepsilon _2\), d such that \(\mathcal {A}\) satisfies the bound $$\begin{aligned} \Vert \mathcal {A}_t-\mathcal {A}_s\Vert _{L_p(\Omega )} \leqslant KpC_1 |t-s|^{1/2+\varepsilon _1}+KpC_2 |t-s|^{1+\varepsilon _2},\quad (s,t)\in [S,T]_\leqslant .\nonumber \\ \end{aligned}$$ The final statement of this section is new. It provides bounds on \(\Vert \mathcal {A}_s-\mathcal {A}_t\Vert _{L_p(\Omega )}\) with the correct dependence on p: namely these bounds are of order \(\sqrt{p}\), rather than p as in (3.12). This will be crucial for the proof of Theorem 2.1; in particular, this will allow us to extend the corresponding Davie bound [9, Proposition 2.1] to the case of fractional Brownian motion. The price to pay, though, is that the assumptions of this theorem are more restrictive than the corresponding assumptions of [27, Theorem 2.4]. Theorem 3.3 Fix \(0\leqslant S\leqslant T\leqslant 1\). Let \((\mathcal {A}_t)_{t\in [S,T]}\) be an \(\mathbb {F}\)–adapted process with values in \(\mathbb {R}^d\). 
For \((s,t)\in [S,T]_\leqslant \) we will write \(\mathcal {A}_{s, t}:=\mathcal {A}_t-\mathcal {A}_s\). Let \(p\geqslant 2\). Suppose that for some \(m\geqslant 2\), \(\varepsilon _1>0\), \(\varepsilon _2\geqslant 0\), \(\varepsilon _3\geqslant 0\), and \(C_1,C_2, C_3>0\) the bounds $$\begin{aligned}&\Vert \mathcal {A}_{s,t}\Vert _{L_{p\vee m}(\Omega )}\leqslant C_1 |t-s|^{1/2+\varepsilon _1}\end{aligned}$$ $$\begin{aligned}&\Vert \;\;{\mathbb {E}}\;^s\mathcal {A}_{u,t}-\;\;{\mathbb {E}}\;^u\mathcal {A}_{u,t}\Vert _{L_m(\Omega )}\leqslant C_1 |u-s|^{1/m+\varepsilon _1}\end{aligned}$$ $$\begin{aligned}&\Vert \;\;{\mathbb {E}}\;^s\mathcal {A}_{s,t}\Vert _{L_p(\Omega )}\leqslant C_2 |t-s|^{\varepsilon _2}\end{aligned}$$ $$\begin{aligned}&\bigl \Vert \;\;{\mathbb {E}}\;^s[(\;\;{\mathbb {E}}\;^s\mathcal {A}_{u,t}-\;\;{\mathbb {E}}\;^u\mathcal {A}_{u,t})^2]\bigr \Vert _{L_{p/2}(\Omega )}\leqslant C_3 |u-s||t-s|^{\varepsilon _3} \end{aligned}$$ hold for all \(S\leqslant s\leqslant u\leqslant t\leqslant T\). Then there exist a universal constant \(K=K(d,\varepsilon _2,\varepsilon _3)>0\) which does not depend on p, \(C_j\), such that $$\begin{aligned} \Vert \mathcal {A}_{t}-\mathcal {A}_s\Vert _{L_p(\Omega )}\leqslant C_2K|t-s|^{\varepsilon _2}+K\sqrt{p}\,C_3^{1/2}|t-s|^{1/2+\varepsilon _3/2}. \end{aligned}$$ Note that the right–hand side of bound (3.17) does not depend on \(C_1\). Let us recall that the proof of stochastic sewing lemma in [27] requires to apply the BDG inequality infinitely many times but each time to a discrete-time martingale, thus yielding a constant p in the right–hand side of bound (3.12). In our proof we apply the BDG inequality only once, but to a continuous time martingale. This allows to get a better constant (namely \(\sqrt{p}\) instead of p), since the constant in the BDG inequality for the continuous-time martingales is better than in the BDG inequality for general martingales. Proof of Theorem 3.3 This proof is inspired by the ideas of [3, proof of Proposition 3.2] and [8, proof of Theorem 4.3]. For the sake of brevity, in this proof we will write \(L_p\) for \(L_p(\Omega )\). Fix \(s,t\in [S,T]_{\leqslant }\) and for \(i\in \{1,\ldots ,d\}\) consider a martingale \(M^{i}=(M^i_r)_{r\in [s,t]}\), where $$\begin{aligned} M^i_r:=\;\;{\mathbb {E}}\;^r[\mathcal {A}^i_{s,t}],\quad r\in [s,t]. \end{aligned}$$ We will frequently use the following inequality. For \(s\leqslant u\leqslant v\leqslant t \) one has $$\begin{aligned} |M^i_u-M^i_v|\leqslant |\mathcal {A}^i_{u,v}|+|\;\;{\mathbb {E}}\;^u \mathcal {A}^i_{u,v}|+|\;\;{\mathbb {E}}\;^u\mathcal {A}^i_{v,t}-\;\;{\mathbb {E}}\;^v\mathcal {A}^i_{v,t}|. \end{aligned}$$ We begin by observing that $$\begin{aligned} \Vert \mathcal {A}_{s,t}\Vert _{L_p(\Omega )}&\leqslant \sum _{i=1}^d\Vert \mathcal {A}^{i}_{s,t}\Vert _{L_p(\Omega )}=\sum _{i=1}^d\Vert M^{i}_{t}\Vert _{L_p(\Omega )}\nonumber \\&\leqslant \sum _{i=1}^d\Vert M^{i}_{s}\Vert _{L_p(\Omega )}+\sum _{i=1}^d\Vert M^{i}_{t}-M^{i}_{s}\Vert _{L_p(\Omega )}\nonumber \\&=:\sum _{i=1}^dI_1^{i}+\sum _{i=1}^dI_2^{i}. \end{aligned}$$ The first term in (3.19) is easy to bound. By assumption (3.15) we have $$\begin{aligned} I_1^{i}=\Vert \;\;{\mathbb {E}}\;^{s} \mathcal {A}_{s,t}^{i}\Vert _{L_p(\Omega )}\leqslant C_2 |t-s|^{\varepsilon _2}. \end{aligned}$$ To estimate \(I_2^i\) we first observe that for each \(i=1,\dots ,d\) the martingale \(M^{i}\) is continuous. 
Indeed, for any \(s\leqslant u\leqslant v\leqslant t\) we have using (3.18), (3.13), and (3.14) $$\begin{aligned} \Vert M^{i}_u-M^{i}_v\Vert _{L_m}&\leqslant 2\Vert \mathcal {A}^{i}_{u,v}\Vert _{L_m}+\Vert \;\;{\mathbb {E}}\;^u\mathcal {A}^{i}_{v,t}-\;\;{\mathbb {E}}\;^v\mathcal {A}^{i}_{v,t}\Vert _{L_m}\\&\leqslant 3C_1|u-v|^{1/m+\varepsilon _1}. \end{aligned}$$ Therefore, the Kolmogorov continuity theorem implies that the martingale \(M^{i}\) is continuous. Hence, its quadratic variation \([M^{i}]\) equals its predictable quadratic variation \(\langle M^{i}\rangle \) [24, Theorem I.4.52]. Thus, applying a version of the Burkholder–Davis–Gundy inequality with a precise bound on the constant [7, Proposition 4.2], we get that there exists a universal constant \(N>0\) such that $$\begin{aligned} \Vert M^{i}_t-M^{i}_s\Vert _{L_p(\Omega )}\leqslant N\sqrt{p}\,\Vert \langle M^{i}\rangle _t\Vert _{L_{p/2}}^{1/2}. \end{aligned}$$ For \(n\in \mathbb {N}\), \(j\in \{1,\ldots ,n\}\) put \(t^n_j:=s+(t-s)j/n\). Then, it follows from [23, Theorem 2] that \(\sum _{j=0}^{n-1}\;\;{\mathbb {E}}\;^{t_j^n}[(M^i_{t^n_{j+1}}-M^i_{t^n_j})^2]\) converges to \(\langle M^{i}\rangle _t\) in \(L_1(\Omega )\). In particular, a subsequence indexed over \(n_k\) converges almost surely. Therefore, applying Fatou's lemma, Minkowski's inequality, (3.18) and using the assumptions of the theorem, we deduce $$\begin{aligned} \Vert \langle M^{i}\rangle _t\Vert _{L_{p/2}}&=\Bigl \Vert \lim _{k\rightarrow \infty }\sum _{j=0}^{n_k-1}\;\;{\mathbb {E}}\;^{t_j^{n_k}}(M^{i}_{t^{n_k}_{j+1}}-M^{i}_{t^{n_k}_j})^2\Bigr \Vert _{L_{p/2}} \\&\leqslant \liminf _{k \rightarrow \infty }\sum _{j=0}^{n_k-1}\bigl \Vert \;\;{\mathbb {E}}\;^{t_j^{n_k}}(M^{i}_{t^{n_k}_{j+1}}-M^{i}_{t^{n_k}_j})^2\bigr \Vert _{L_{p/2}}\\&\leqslant 3\lim _{k\rightarrow \infty }\sum _{j=0}^{n_k-1}\bigl (2\Vert \mathcal {A}^{i}_{t^{n_k}_{j},t^{n_k}_{j+1}}\Vert _{L_p(\Omega )}^2+\Vert \;\;{\mathbb {E}}\;^{t_j^{n_k}}(\;\;{\mathbb {E}}\;^{t_j^{n_k}}\mathcal {A}^{i}_{t^{n_k}_{j+1},t}-\;\;{\mathbb {E}}\;^{t_{j+1}^{n_k}}\mathcal {A}^{i}_{t^{n_k}_{j+1},t})^2\bigr \Vert _{L_{p/2}}\bigr )\\&\leqslant \lim _{k\rightarrow \infty }6C_1^2T^{1+2\varepsilon _1}n_k^{-2\varepsilon _1}+3\lim _{k\rightarrow \infty }C_3|t-s|^{1+\varepsilon _3}n_k^{-1-\varepsilon _3}\sum _{j=0}^{n_k-1}(n_k-j)^{\varepsilon _3}\\&\leqslant N C_3|t-s|^{1+\varepsilon _3}. \end{aligned}$$ Substituting this into (3.21) and combining this with (3.19) and (3.20), we obtain (3.17). \(\square \) Some useful estimates In this section we establish a number of useful technical bounds related to Gaussian kernels. Their proofs are mostly standard, however we were not able to find them in the literature. Therefore for the sake of completeness, we provide the proofs of these results in the "Appendix A". Fix an arbitrary \(H\in (0,1)\). Define $$\begin{aligned} c(s,t):=\sqrt{(2H)^{-1}}|t-s|^{H},\quad 0\leqslant s\leqslant t\leqslant 1. \end{aligned}$$ Let \(p_t\), \(t>0\), be the density of a d-dimensional vector with independent Gaussian components each of mean zero and variance t: $$\begin{aligned} p_t(x)=\frac{1}{(2\pi t)^{d/2}}\exp \Bigl (-\frac{|x|^2}{2t}\Bigr ),\quad x\in \mathbb {R}^d. \end{aligned}$$ For a measurable function \(f:\mathbb {R}^d\rightarrow \mathbb {R}\) we write \(\mathcal {P}_t f:=p_t*f\), and occasionally we denote by \(p_0\) the Dirac delta function. Our first statement provides a number of technical bounds related to the fractional Brownian motion. Its proof is placed in the "Appendix A". 
Let \(p\geqslant 1\). The process \(B^H\) has the following properties: (i): \(\Vert B^H_t-B^H_s\Vert _{L_p(\Omega )}= N |t-s|^H\), for all \(0\leqslant s\leqslant t\leqslant 1\), with \(N=N(p,d,H)\); (ii): for all \(0\leqslant s\leqslant u\leqslant t\leqslant 1\), \(i=1,\ldots ,d\), the random variable \(\;\;{\mathbb {E}}\;^sB_t^{H,i}-\;\;{\mathbb {E}}\;^uB_t^{H,i}\) is independent of \(\mathcal {F}^s\); furthermore, this random variable is Gaussian with mean 0 and variance $$\begin{aligned} \;\;{\mathbb {E}}\;(\;\;{\mathbb {E}}\;^sB_t^{H,i}-\;\;{\mathbb {E}}\;^uB_t^{H,i})^2= c^2(s,t)-c^2(u,t)=:v(s,u,t); \end{aligned}$$ (iii): \(\;\;{\mathbb {E}}\;^s f(B^H_t)=\mathcal {P}_{c^2(s,t)}f(\;\;{\mathbb {E}}\;^sB^H_t)\), for all \(0\leqslant s\leqslant t\leqslant 1\); (iv): \(|c^2(s,t)-c^2(s,u)|\leqslant N|t-u||t-s|^{2H-1}\), for all \(0\leqslant s\leqslant u\leqslant t\) such that \(|t-u|\leqslant |u-s|\), with \(N=N(H)\); (v): \(\Vert \;\;{\mathbb {E}}\;^sB^H_t-\;\;{\mathbb {E}}\;^sB^H_u\Vert _{L_p(\Omega )}\leqslant N|t-u||t-s|^{H-1}\), for all \(0\leqslant s\leqslant u\leqslant t\) such that \(|t-u|\leqslant |u-s|\), with \(N=N(p,d,H)\); The next statement gives the heat kernel bounds which are necessary for the proofs of the main results. Its proof is also placed in the "Appendix A". Recall the definition of the function v in (3.23). Let \(f\in \mathcal {C}^\alpha \), \(\alpha \leqslant 1\) and \(\beta \in [0,1]\). The following hold: There exists \(N=N(d, \alpha , \beta )\) such that $$\begin{aligned} \Vert \mathcal {P}_tf\Vert _{\mathcal {C}^\beta (\mathbb {R}^d)}\leqslant N t^{\frac{(\alpha -\beta )\wedge 0}{2}} \Vert f\Vert _{\mathcal {C}^\alpha (\mathbb {R}^d)}, \end{aligned}$$ for all \(t\in (0,1]\). For all \(\delta \in (0,1]\) with \(\delta \geqslant \frac{\alpha }{2}-\frac{\beta }{2}\), there exists \(N=N(d, \alpha , \beta , \delta )\) such that $$\begin{aligned} \Vert \mathcal {P}_tf-\mathcal {P}_sf\Vert _{\mathcal {C}^\beta (\mathbb {R}^d)}\leqslant N \Vert f\Vert _{\mathcal {C}^{\alpha }(\mathbb {R}^d)} s^{\frac{\alpha }{2}-\frac{\beta }{2}-\delta }(t-s)^{\delta }, \end{aligned}$$ for all \(0\leqslant s\leqslant t \leqslant 1\). For all \(H\in (0,1)\), there exists \(N=N(d,\alpha ,\beta , H)\) such that $$\begin{aligned} \Vert \mathcal {P}_{c^2(s,t)}f-\mathcal {P}_{c^2(u,t)}f\Vert _{\mathcal {C}^\beta (\mathbb {R}^d)}\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha (\mathbb {R}^d)}(u-s)^{\frac{1}{2}}(t-u)^{(H(\alpha -\beta )-\frac{1}{2})\wedge 0}, \end{aligned}$$ for all \(0<s\leqslant u \leqslant t\leqslant 1\). For all \(H\in (0,1)\), \(p\geqslant 2\), there exists \(N=N(d,\alpha ,H,p)\) such that $$\begin{aligned} \Vert \mathcal {P}_{c^2(u,t)}f(x)-\mathcal {P}_{c^2(u,t)}f(x+\xi )\Vert _{L_p(\Omega )} \leqslant N\Vert f\Vert _{\mathcal {C}^\alpha } (u-s)^{\frac{1}{2}}(t-u)^{(H\alpha -\frac{1}{2})\wedge 0}; \end{aligned}$$ for all \(x\in \mathbb {R}^d\), \(0<s\leqslant u\leqslant t\leqslant 1\) and all random vectors \(\xi \) whose components are independent, \(\mathcal {N}(0,v(s,u,t))\) random variables. Our next statement relates to the properties of Hölder norms. Its proof can be found in "Appendix A". Let \(\alpha \in \mathbb {R}\), \(f\in \mathcal {C}^\alpha (\mathbb {R}^d,\mathbb {R}^k)\), \(\delta \in [0,1]\). Then there exists \(N=N(\alpha ,\delta ,d , k)\) such that for any \(x\in \mathbb {R}^d\) $$\begin{aligned} \Vert f(x+\cdot )-f(\cdot )\Vert _{\mathcal {C}^{\alpha -\delta }}\leqslant N|x|^\delta \Vert f\Vert _{\mathcal {C}^\alpha }. 
\end{aligned}$$ Finally, we will also need the following integral bounds. They follow immediately from a direct calculation. Let \(a,b>-1\), \(t>0\). Then for some \(N=N(a,b)\) one has $$\begin{aligned} \int _0^t(t-r)^ar^b\,dr=N t^{a+b+1}. \end{aligned}$$ Let \(a>-2\), \(b<1\), \(t>0\). Then for some \(N=N(a,b)\) one has $$\begin{aligned} \Big |\int _0^t(t-r)^{a}(t^{b}r^{-b}-1)\,dr\Big |=N t^{a+1}. \end{aligned}$$ Girsanov theorem for fractional Brownian motion One of the important tools for the proof of Theorem 2.1 is the Girsanov theorem for fractional Brownian motion [12, Theorem 4.9], [32, Theorem 2]. We will frequently use the following technical corollary of this theorem. For the convenience of the reader we put its proof into "Appendix B". Proposition 3.10 Let \(u:\Omega \times [0,1]\rightarrow \mathbb {R}^d\) be an \(\mathbb {F}\)–adapted process such that with a constant \(M>0\) we have $$\begin{aligned} \Vert u \Vert _{L_\infty (0,1)} \leqslant M, \end{aligned}$$ almost surely. Further, assume that one of the following holds: \(H\leqslant 1/2\); \(H>1/2\) and there exists a random variable \(\xi \) such that $$\begin{aligned} \int _0^1\Bigl (\int _0^t\frac{(t/s)^{H-1/2}|u_t-u_s|}{(t-s)^{H+1/2}}\,ds\Bigr )^2\,dt\leqslant \xi \end{aligned}$$ and \(\;\;{\mathbb {E}}\;\exp (\lambda \xi )<\infty \) for any \(\lambda >0\). Then there exists a probability measure \({\widetilde{\mathbb {P}}}\) which is equivalent to \(\mathbb {P}\) such that the process \({\widetilde{B}}^H:=B^H+\int _0^\cdot u_s\,ds\) is a fractional Brownian motion with Hurst parameter H under \({\widetilde{\mathbb {P}}}\). Furthermore for any \(\lambda >0\) we have $$\begin{aligned} \;\;{\mathbb {E}}\;\Bigl (\frac{d \mathbb {P}}{d{\widetilde{\mathbb {P}}}}\Bigr )^\lambda \leqslant {\left\{ \begin{array}{ll} \exp (\lambda ^2 NM^2)\qquad \qquad \qquad \qquad \text {if }H\in (0,1/2]\\ \exp (\lambda ^2 NM^2)\;\;{\mathbb {E}}\;[\exp (\lambda N\xi )]\qquad \text {if }H\in (1/2,1) \end{array}\right. } <\infty , \end{aligned}$$ where \(N=N(H)\). In order to simplify the calculation of the integral in (3.27), we provide the following technical but useful lemma. Since the proof is purely technical, we put it in the "Appendix B". Lemma 3.11 Let \(H\in (1/2,1)\) and let \(\rho \in (H-1/2,1]\). Then there exists a constant \(N=N(H,\rho )\), such that for any function \(f\in \mathcal {C}^\rho ([0,1],\mathbb {R}^d)\) and any \(n\in \mathbb {N}\) one has $$\begin{aligned}&\int _0^1\Bigl (\int _0^t\frac{(t/s)^{H-1/2} |f_{\kappa _n(t)}-f_{\kappa _n(s)}|}{(t-s)^{H+1/2}}\,ds\Bigr )^2\,dt\leqslant N[f]_{\mathcal {C}^\rho }^2 \end{aligned}$$ and $$\begin{aligned}&\int _0^1\Bigl (\int _0^t\frac{(t/s)^{H-1/2} |f_{t}-f_{s}|}{(t-s)^{H+1/2}}\,ds\Bigr )^2\,dt\leqslant N[f]_{\mathcal {C}^\rho }^2. \end{aligned}$$ In this section we provide the proof of Theorem 2.1. We follow the strategy outlined in Sect. 3.1: in Sects. 4.1 and 4.2 we prove the quadrature bound and the regularization bound, respectively. Based on these bounds, the proof of the theorem is placed in Sect. 4.3. Quadrature estimates The goal of this subsection is to prove the quadrature bound (4.7). The proof consists of two steps. First, in Lemma 4.1 we prove this bound for the case of fractional Brownian motion; then we extend this result to the process \(X^n\) by applying the Girsanov theorem. Recall the definitions of the functions \(\kappa _n\) in (1.5) and \(\gamma \) in (2.4). Lemma 4.1 Let \(H\in (0,1)\), \(\alpha \in [0,1]\), \(p>0\), and take \(\varepsilon \in (0,1/2]\). 
Then for all \(f\in \mathcal {C}^\alpha \), \(0\leqslant s\leqslant t\leqslant 1\), \(n\in \mathbb {N}\), one has the bound $$\begin{aligned} \Bigl \Vert \int _s^t (f(B^H_r)-f(B^H_{\kappa _n(r)}))\, dr\Bigr \Vert _{L_p(\Omega )} \leqslant N\Vert f\Vert _{\mathcal {C}^\alpha } n^{-\gamma (\alpha , H)+\varepsilon }|t-s|^{1/2+\varepsilon } , \end{aligned}$$ with some \(N=N(p, d,\alpha ,\varepsilon , H)\). It suffices to prove the bound for \(p\geqslant 2\). Define for \(0\leqslant s\leqslant t\leqslant 1\) $$\begin{aligned} A_{s,t}:=\;\;{\mathbb {E}}\;^s \int _s^t (f(B^H_r)-f(B^H_{\kappa _n(r)}))\, dr. \end{aligned}$$ Then, clearly, for any \(0\leqslant s\leqslant u\leqslant t\leqslant 1\) $$\begin{aligned} \delta A_{s,u,t}:&=A_{s,t}-A_{s,u}-A_{u,t}\\&=\;\;{\mathbb {E}}\;^s \int _u^t (f(B^H_r)-f(B^H_{\kappa _n(r)}))\, dr-\;\;{\mathbb {E}}\;^u \int _u^t(f(B^H_r)-f(B^H_{\kappa _n(r)}))\, dr. \end{aligned}$$ Let us check that all the conditions of the stochastic sewing lemma (Proposition 3.2) are satisfied. Note that $$\begin{aligned} \;\;{\mathbb {E}}\;^s \delta A_{s,u,t}=0, \end{aligned}$$ and so condition (3.9) trivially holds, with \(C_2=0\). To establish (3.8), let \(s \in [k/n, (k+1)/n)\) for some \(k \in \{0,\ldots ,n-1\}\). Suppose first that \(t \in [(k+4)/n, 1]\). We write $$\begin{aligned} |A_{s,t}|\leqslant \Big (\int _s^{(k+4)/n} +\int _{(k+4)/n}^t\Big ) |\;\;{\mathbb {E}}\;^s \big ( f(B^H_r)-f(B^H_{\kappa _n(r)})\big )|\, dr=:I_1+I_2. \end{aligned}$$ The bound for \(I_1\) is straightforward: by conditional Jensen's inequality, the definition of \(\mathcal {C}^\alpha \) norm, and Proposition 3.6 (i) we have $$\begin{aligned} \Vert I_1\Vert _{L_p(\Omega )}&\leqslant \int _s^{(k+4)/n} \Vert f(B^H_r)-f(B^H_{\kappa _n(r)}) \Vert _{L_p(\Omega )} \, dr \nonumber \\&\leqslant N \Vert f\Vert _{\mathcal {C}^\alpha }n^{-1-\alpha H} \leqslant N \Vert f\Vert _{\mathcal {C}^\alpha } n^{-\gamma +\varepsilon }|t-s|^{1/2+\varepsilon }, \end{aligned}$$ where the last inequality follows from the fact that \(n^{-1}\leqslant |t-s|\). Now let us estimate \(I_2\). Using Proposition 3.6 (iii), we derive $$\begin{aligned} I_2\leqslant&\int _{(k+4)/n}^t|\mathcal {P}_{c^2(s,r)}f(\;\;{\mathbb {E}}\;^sB^H_r)-\mathcal {P}_{c^2(s,\kappa _n(r))}f(\;\;{\mathbb {E}}\;^sB^H_r)|\,dr \nonumber \\&\quad +\int _{(k+4)/n}^t |\mathcal {P}_{c^2(s,\kappa _n(r))}f(\;\;{\mathbb {E}}\;^sB^H_r)-\mathcal {P}_{c^2(s,\kappa _n(r))}f(\;\;{\mathbb {E}}\;^sB^H_{\kappa _n(r)})|\,dr\nonumber \\ =:&I_{21}+I_{22}. \end{aligned}$$ To bound \(I_{21}\), we apply Proposition 3.7 (ii) with \(\beta =0\), \(\delta =1\) and Proposition 3.6 (iv). We get $$\begin{aligned} \Vert I_{21}\Vert _{L_p(\Omega )}&\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha } \int _{(k+4)/n}^t\big (c^2(s,r)-c^2(s,\kappa _n(r))\big )c^{\alpha -2}(s,\kappa _n(r))\,dr \nonumber \\&\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }\int _{(k+4)/n}^t n^{-1}|r-s|^{2H-1}|r-s|^{H(\alpha -2)}\,dr \nonumber \\&\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }n^{-1}\int _s^t |r-s|^{-1+\alpha H}\,dr \nonumber \\&\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha } n^{-1}|t-s|^{\alpha H}. \end{aligned}$$ To deal with \(I_{22}\), we use Proposition 3.7 (i) with \(\beta =1\) and Proposition 3.6 (v). 
We deduce $$\begin{aligned} \Vert I_{22}\Vert _{L_p(\Omega )}&\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }\int _{(k+4)/n}^t \Vert \;\;{\mathbb {E}}\;^sB^H_r-\;\;{\mathbb {E}}\;^sB^H_{\kappa _n(r)}\Vert _{L_p(\Omega )}c^{\alpha -1}(s,\kappa _n(r))\,dr \nonumber \\&\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }\int _{(k+4)/n}^t n^{-1}|r-s|^{H-1}|r-s|^{-H(1-\alpha )}\,dr \nonumber \\&\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }n^{-1}|t-s|^{\alpha H}, \end{aligned}$$ where in the second inequality we have also used that \(\kappa _n(r)-s\geqslant (r-s)/2\). Combining (4.5) and (4.6), and taking again into account that \(n^{-1}\leqslant |t-s|\), we get $$\begin{aligned} \Vert I_2\Vert _{L_p(\Omega )} \leqslant N\Vert f\Vert _{\mathcal {C}^\alpha } n^{-\gamma +\varepsilon }|t-s|^{1/2+\varepsilon }. \end{aligned}$$ Recalling (4.3), we finally conclude $$\begin{aligned} \Vert A_{s,t}\Vert _{L_p(\Omega )}\leqslant N \Vert f\Vert _{\mathcal {C}^\alpha } n^{-\gamma +\varepsilon }|t-s|^{1/2+\varepsilon }. \end{aligned}$$ It remains to show the same bound for \(t \in (s, (k+4)/n]\). However this is almost straightforward. We write $$\begin{aligned} \Vert A_{s,t}\Vert _{L_p(\Omega )}&\leqslant \int _s^t \Vert f(B^H_r)-f(B^H_{\kappa _n(r)}) \Vert _{L_p(\Omega )} \, dr \\&\leqslant N \Vert f\Vert _{\mathcal {C}^\alpha } n^{-\alpha H}|t-s| \leqslant N \Vert f\Vert _{\mathcal {C}^\alpha } n^{-\gamma +\varepsilon }|t-s|^{1/2+\varepsilon }, \end{aligned}$$ where the last inequality uses that in this case \(|t-s|\leqslant 4 n^{-1}\). Thus, (3.8) holds with \(C_1:=N \Vert f\Vert _{\mathcal {C}^\alpha } n^{-\gamma +\varepsilon }\) and \(\varepsilon _1:=\varepsilon \), and all the conditions of the stochastic sewing lemma are satisfied. The process $$\begin{aligned} {\tilde{\mathcal {A}}}_t:=\int _0^t (f(B^H_r)-f(B^H_{\kappa _n(r)}))\,dr \end{aligned}$$ is also \(\mathbb {F}\)-adapted, satisfies (3.11) trivially (the left-hand side is 0), and $$\begin{aligned} \Vert {\tilde{{\mathcal {A}}}}_t-{\tilde{{\mathcal {A}}}}_s-A_{s,t}\Vert _{L_p(\Omega )} \leqslant \Vert f\Vert _{\mathcal {C}^0} |t-s|\leqslant N |t-s|^{1/2+\varepsilon }, \end{aligned}$$ which shows that it also satisfies (3.10). Therefore by uniqueness \(\mathcal {A}_t={\tilde{\mathcal {A}}}_t\). The bound (3.12) then yields precisely (4.1). \(\square \) Let \(H\in (0,1)\), \(\alpha \in [0,1]\) such that \(\alpha >1-1/(2H)\), \(p>0\), \(\varepsilon \in (0,1/2]\). Let \(b\in \mathcal {C}^\alpha \) and \(X^n\) be the solution of (1.4). Then for all \(f\in \mathcal {C}^\alpha \), \(0\leqslant s\leqslant t\leqslant 1\), \(n\in \mathbb {N}\), one has the bound $$\begin{aligned} \Bigl \Vert \int _s^t (f(X_r^n)-f(X^n_{\kappa _n(r)}))\, dr\Bigr \Vert _{L_p(\Omega )} \leqslant N\Vert f\Vert _{\mathcal {C}^\alpha } |t-s|^{1/2+\varepsilon } n^{-\gamma +\varepsilon } \end{aligned}$$ with some \(N=N(\Vert b\Vert _{\mathcal {C}^\alpha },p, d,\alpha ,\varepsilon , H)\). Without loss of generality, we assume \(\alpha <1\). Let $$\begin{aligned} \psi ^n(t):=\int _0^t b(X^n_{\kappa _n(r)})\,dr. \end{aligned}$$ Let us apply the Girsanov theorem (Proposition 3.10) to the function \(u(t)=b(X^n_{\kappa _n(t)})\). Let us check that all the conditions of this proposition hold. First, we obviously have \(|u(t)|\leqslant \Vert b\Vert _{\mathcal {C}^0}\), and thus (3.26) holds with \(M= \Vert b\Vert _{\mathcal {C}^0}\). Second, let us check condition (3.27) in the case \(H>1/2\). 
Fix \(\lambda >0\) and small \(\delta >0\) such that \(\alpha (H-\delta )>H-1/2\); such \(\delta \) exists thanks to the assumption \(\alpha >1-1/(2H)\). We apply Lemma 3.11 for the function \(f:=b(X^n)\) and \(\rho :=\alpha (H-\delta )\). We have $$\begin{aligned}&\int _0^1\Bigl (\int _0^t\frac{(t/s)^{H-1/2}|b(X^n_{\kappa _n(t)})-b(X^n_{\kappa _n(s)})|}{(t-s)^{H+1/2}}\,ds\Bigr )^2\,dt\\&\quad \leqslant N[b(X^n)]_{\mathcal {C}^{\alpha (H-\delta )}}^2\\&\quad \leqslant N\Vert b\Vert _{\mathcal {C}^{\alpha }}^2[X^n]_{\mathcal {C}^{H-\delta }}^{2\alpha }\\&\quad \leqslant N \Vert b\Vert _{\mathcal {C}^{\alpha }}^2 (\Vert b\Vert _{\mathcal {C}^{0}}^{2\alpha }+ [B^H]_{\mathcal {C}^{H-\delta }}^{2\alpha })=:\xi \end{aligned}$$ Moreover, $$\begin{aligned} \;\;{\mathbb {E}}\;e^{\lambda \xi }\leqslant N(\Vert b\Vert _{\mathcal {C}^{\alpha }},\alpha ,\delta ,H,\lambda )<\infty , \end{aligned}$$ where we used the fact that the Hölder constant \([B^H]_{\mathcal {C}^{H-\delta }}\) satisfies \(\;\;{\mathbb {E}}\;\exp (\lambda [B^H]_{\mathcal {C}^{H-\delta }}^{2\alpha })\leqslant N\) for any \(\lambda \geqslant 0\). Thus, condition (3.27) is satisfied. Hence all the conditions of Proposition 3.10 hold. Thus, there exists a probability measure \({\widetilde{\mathbb {P}}}\) equivalent to \(\mathbb {P}\) such that the process \({\widetilde{B}}^H:=B^H+\psi ^n\) is a fractional Brownian motion with Hurst parameter H on [0, 1] under \({\widetilde{\mathbb {P}}}\). Now we can derive the desired bound (4.7). We have $$\begin{aligned}&\;\;{\mathbb {E}}\;^{\mathbb {P}} \Bigl | \int _s^t \left( f(X^n_r)- f(X^n_{\kappa _n(r)}) \right) \, dr \Bigr |^p\nonumber \\&\qquad =\;\;{\mathbb {E}}\;^{{\widetilde{\mathbb {P}}}} \Bigl [\Bigl | \int _s^t \left( f(X^n_r)- f(X^n_{\kappa _n(r)}) \right) \, dr \Bigr |^p\frac{d\mathbb {P}}{d{\widetilde{\mathbb {P}}}}\Bigr ]\nonumber \\&\qquad \leqslant \Bigl (\;\;{\mathbb {E}}\;^{{\widetilde{\mathbb {P}}}} \Bigl | \int _s^t \left( f(X^n_r)- f(X^n_{\kappa _n(r)}) \right) \, dr \Bigr |^{2p}\Bigr )^{1/2}\Bigl (\;\;{\mathbb {E}}\;^{{\widetilde{\mathbb {P}}}}\Bigl [\frac{d\mathbb {P}}{d{\widetilde{\mathbb {P}}}}\Bigr ]^2\Bigr )^{1/2} \nonumber \\&\qquad =\Bigl (\;\;{\mathbb {E}}\;^{{\widetilde{\mathbb {P}}}} \Bigl | \int _s^t \left( f(\widetilde{B}^H_r+x_0^n)- f(\widetilde{B}^H_{\kappa _n(r)}+x_0^n) \right) \, dr \Bigr |^{2p}\Bigr )^{1/2}\Bigl (\;\;{\mathbb {E}}\;^{\mathbb {P}}\frac{d\mathbb {P}}{d{\widetilde{\mathbb {P}}}}\Bigr )^{1/2} \nonumber \\&\qquad =\Bigl (\;\;{\mathbb {E}}\;^{\mathbb {P}} \Bigl | \int _s^t \left( f( B^H_r+x_0^n)- f( B^H_{\kappa _n(r)}+x_0^n) \right) \, dr \Bigr |^{2p}\Bigr )^{1/2}\Bigl (\;\;{\mathbb {E}}\;^{\mathbb {P}}\frac{d\mathbb {P}}{d{\widetilde{\mathbb {P}}}}\Bigr )^{1/2}. \end{aligned}$$ Taking into account (4.8), we deduce by Proposition 3.10 that $$\begin{aligned} \;\;{\mathbb {E}}\;^{\mathbb {P}}\frac{d\mathbb {P}}{d{\widetilde{\mathbb {P}}}}\leqslant N(\Vert b\Vert _{\mathcal {C}^{\alpha }},\alpha ,\delta ,H,\lambda ). \end{aligned}$$ Hence, using (4.1), we can continue (4.9) in the following way: $$\begin{aligned} \;\;{\mathbb {E}}\;^{\mathbb {P}} \Bigl | \int _s^t \left( f(X^n_r)- f(X^n_{\kappa _n(r)}) \right) \, dr \Bigr |^p\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }^p n^{-p(\gamma (\alpha , H)-\varepsilon )}|t-s|^{p(1/2+\varepsilon )}, \end{aligned}$$ which implies the claimed bound (4.7). \(\square \) A regularization lemma The goal of this subsection is to establish the regularization bound (4.26). Its proof consists of a number of steps. 
First, in Lemma 4.3 we derive an extension of the corresponding bound of Davie [9, Proposition 2.1] for the fractional Brownian motion case. It is important that the right–hand side of this bound depends on p as \(\sqrt{p}\) (rather than p); this will be crucial later in the proof of Lemma 4.6 and Theorem 2.1. Then in Lemma 4.6 we obtain the pathwise version of this lemma and extend it to a wider class of processes (fractional Brownian motion with drift instead of a fractional Brownian motion). Finally, in Lemma 4.7 we obtain the desired regularization bound. Let \(H\in (0,1)\), \(\alpha \in (-1/(2H),0]\). Let \(f\in \mathcal {C}^\infty \). Then there exists a constant \(N=N(d,\alpha ,H)\) such that for any \(p\geqslant 2\), \(s,t\in [0,1]\) we have $$\begin{aligned} \Bigl \Vert \int _s^t f(B_r^H)\,dr\Bigr \Vert _{L_p(\Omega )}\leqslant N\sqrt{p}\Vert f\Vert _{\mathcal {C}^\alpha } (t-s)^{H\alpha +1}. \end{aligned}$$ Note that the right–hand side of bound (4.10) depends only on the norm of f in \(\mathcal {C}^\alpha \) and does not depend on the norm of f in other Hölder spaces. Proof of Lemma 4.3 Fix \(p\geqslant 2\). We will apply Theorem 3.3 to the process $$\begin{aligned} \mathcal {A}_{t}:=\int _0^t f(B_r^H)\,dr, \quad t\in [0,1]. \end{aligned}$$ As usual, we write \(\mathcal {A}_{s,t}:=\mathcal {A}_t-\mathcal {A}_s\). Let us check that all the conditions of that theorem hold with \(m=4\) It is very easy to see that $$\begin{aligned} \Vert \mathcal {A}_{s,t}\Vert _{L_{p\vee 4}(\Omega )}\leqslant \Vert f\Vert _{\mathcal {C}^0}|t-s|. \end{aligned}$$ Thus (3.13) holds. By Proposition 3.6 (iii) and Proposition 3.7 (i) we have for some \(N_1=N_1(d,\alpha ,H)\) (recall that by assumptions \(\alpha \leqslant 0\)) $$\begin{aligned} |\;\;{\mathbb {E}}\;^s \mathcal {A}_{s,t}|\leqslant \int _s^t |P_{c^2(s,r)}f(\;\;{\mathbb {E}}\;^s B_r^H)|dr\leqslant N_1\Vert f\Vert _{\mathcal {C}^\alpha }(t-s)^{H\alpha +1}. \end{aligned}$$ $$\begin{aligned} \Vert \;\;{\mathbb {E}}\;^s \mathcal {A}_{s,t}\Vert _{L_p(\Omega )}\leqslant N_1\Vert f\Vert _{\mathcal {C}^\alpha } (t-s)^{H\alpha +1} \end{aligned}$$ and condition (3.15) is met. We want to stress here that the constant \(N_1\) here does not depend on p (this happens thanks to the a.s. bound (4.11); it will be crucial later in the proof) Thus, it remains to check conditions (3.14) and (3.16). Fix \(0\leqslant s\leqslant u\leqslant t\leqslant 1\). Using Proposition 3.6 (iii), we get $$\begin{aligned} {\;\;{\mathbb {E}}\;^{s}}\mathcal {A}_{u,t}-{\;\;{\mathbb {E}}\;^{u}}\mathcal {A}_{u,t}&=\int _u^t \bigl (P_{c^2(s,r)}f(\;\;{\mathbb {E}}\;^s B_r^H)- P_{c^2(u,r)}f(\;\;{\mathbb {E}}\;^u B_r^H)\bigr )\,dr\nonumber \\&=\int _u^t \bigl (P_{c^2(s,r)}f(\;\;{\mathbb {E}}\;^s B_r^H)- P_{c^2(s,r)}f(\;\;{\mathbb {E}}\;^u B_r^H)\bigr )\,dr\nonumber \\&\quad +\int _u^t \bigl (P_{c^2(s,r)}f(\;\;{\mathbb {E}}\;^u B_r^H)- P_{c^2(u,r)}f(\;\;{\mathbb {E}}\;^u B_r^H)\bigr )\,dr\nonumber \\&=: I_1+I_2. \end{aligned}$$ Note that by Proposition 3.6 (ii), the random vector \(\;\;{\mathbb {E}}\;^u B_r^H-\;\;{\mathbb {E}}\;^s B_r^H\) is independent of \(\mathcal {F}^s\). 
Taking this into account and applying the conditional Minkowski inequality, we get $$\begin{aligned} \Bigl (\;\;{\mathbb {E}}\;^s |I_1|^4\Bigr )^{\frac{1}{4}}&\leqslant \int _u^t \Bigl (\;\;{\mathbb {E}}\;^s\bigl [P_{c^2(s,r)}f(\;\;{\mathbb {E}}\;^s B_r^H)- P_{c^2(s,r)}f(\;\;{\mathbb {E}}\;^u B_r^H)\bigr ]^4\Bigr )^{\frac{1}{4}}\,dr\nonumber \\&\leqslant \int _u^t g_r(\;\;{\mathbb {E}}\;^s B_r^H)\,dr, \end{aligned}$$ where for \(x\in \mathbb {R}^d\), \(r\in [u,t]\) we denoted $$\begin{aligned} g_r(x):=\Vert P_{c^2(s,r)}f(x)-P_{c^2(s,r)}f(x+\;\;{\mathbb {E}}\;^u B_r^H-\;\;{\mathbb {E}}\;^s B_r^H)\Vert _{L_4(\Omega )}. \end{aligned}$$ By Proposition 3.6 (ii), the random vector \(\;\;{\mathbb {E}}\;^u B_r^H-\;\;{\mathbb {E}}\;^s B_r^H\) is Gaussian and consists of d independent components with each component of mean 0 and variance v(s, u, t) (recall its definition in (3.23)). Hence Proposition 3.7 (iv) yields now for some \(N_2=N_2(d,\alpha ,H)\) and all \(x\in \mathbb {R}^d\), \(r\in [u,t]\) $$\begin{aligned} g_r(x)\leqslant N_2\Vert f\Vert _{\mathcal {C}^\alpha }(u-s)^{\frac{1}{2}}(r-u)^{H\alpha -\frac{1}{2}}. \end{aligned}$$ Substituting this into (4.13), we finally get $$\begin{aligned} \Bigl (\;\;{\mathbb {E}}\;^s |I_1|^4\Bigr )^{\frac{1}{4}}\leqslant & {} N_2\Vert f\Vert _{\mathcal {C}^\alpha }(u-s)^{\frac{1}{2}}\int _u^t (r-u)^{H\alpha -\frac{1}{2}}\, dr\nonumber \\\leqslant & {} N_3\Vert f\Vert _{\mathcal {C}^\alpha }(u-s)^{\frac{1}{2}} (t-u)^{H\alpha +\frac{1}{2}}, \end{aligned}$$ for some \(N_3=N_3(d,\alpha ,H)\) where we used that, by assumptions, \(H\alpha -1/2>-1\). Similarly, using Proposition 3.7 (iii) with \(\beta =0\), we get for some \(N_4=N_4(d,\alpha ,H)\) $$\begin{aligned} |I_2|\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }(u-s)^{\frac{1}{2}}\int _u^t (r-u)^{H\alpha -\frac{1}{2}}\,dr\leqslant N_4\Vert f\Vert _{\mathcal {C}^\alpha }(u-s)^{\frac{1}{2}} (t-u)^{H\alpha +\frac{1}{2}},\nonumber \\ \end{aligned}$$ where again we used that, by assumptions, \(H\alpha -1/2>-1\). We stress that both \(N_3\), \(N_4\) do not depend on p. Now to verify (3.14), we note that by (4.12), (4.14),(4.15), we have $$\begin{aligned} \Vert \;\;{\mathbb {E}}\;^s\mathcal {A}_{u,t}-\;\;{\mathbb {E}}\;^u\mathcal {A}_{u,t}\Vert _{L_4(\Omega )}&\leqslant \Vert I_1\Vert _{L_4(\Omega )}+ \Vert I_2\Vert _{L_4(\Omega )}\nonumber \\&\leqslant \bigl (\;\;{\mathbb {E}}\;[\;\;{\mathbb {E}}\;^s|I_1|^4]\bigr )^{\frac{1}{4}}+\Vert I_2\Vert _{L_4(\Omega )}\nonumber \\&\leqslant (N_3+N_4)\Vert f\Vert _{\mathcal {C}^\alpha }(u-s)^{\frac{1}{2}}. \end{aligned}$$ Thus, condition (3.14) holds. In a similar manner we check (3.16). We have $$\begin{aligned} \;\;{\mathbb {E}}\;^s[|\;\;{\mathbb {E}}\;^s\mathcal {A}_{u,t}-\;\;{\mathbb {E}}\;^u\mathcal {A}_{u,t}|^2]&\leqslant 2 \;\;{\mathbb {E}}\;^s|I_1|^2 +2 \;\;{\mathbb {E}}\;^s|I_2|^2\leqslant 2\bigl (\;\;{\mathbb {E}}\;^s|I_1|^4\bigr )^{1/2} +2 \;\;{\mathbb {E}}\;^s|I_2|^2\\&\leqslant 2(N_3^2+N_4^2)\Vert f\Vert _{\mathcal {C}^\alpha }^2(u-s)(t-u)^{2H\alpha +1}. \end{aligned}$$ Thus, $$\begin{aligned} \bigl \Vert \;\;{\mathbb {E}}\;^s[|\;\;{\mathbb {E}}\;^s\mathcal {A}_{u,t}-\;\;{\mathbb {E}}\;^u\mathcal {A}_{u,t}|^2]\bigr \Vert _{L_{p/2}(\Omega )}\leqslant 2(N_3^2+N_4^2)\Vert f\Vert _{\mathcal {C}^\alpha }^2(u-s)(t-u)^{2H\alpha +1} \end{aligned}$$ and the constant \(2(N_3^2+N_4^2)\) does not depend on p. Therefore condition (3.16) holds. Thus all the conditions of Theorem 3.3 hold. The statement of the theorem follows now from (3.17). 
\(\square \) To establish the regularization bound we need the following simple corollary of the above lemma. Corollary 4.5 Let \(H\in (0,1)\), \(\delta \in (0,1]\), \(\alpha -\delta \in (-1/(2H),0]\). Let \(f\in \mathcal {C}^\infty \). Then there exists a constant \(N=N(d,\alpha ,H,\delta )\) such that for any \(p\geqslant 2\), \(s,t\in [0,1]\), \(x,y\in \mathbb {R}^d\) we have $$\begin{aligned} \Bigl \Vert \int _s^t (f(B_r^H+x)-f(B_r^H+y))\,dr\Bigr \Vert _{L_p(\Omega )}\leqslant N\sqrt{p}\Vert f\Vert _{\mathcal {C}^\alpha } (t-s)^{H(\alpha -\delta )+1}|x-y|^\delta .\nonumber \\ \end{aligned}$$ Fix \(x,y\in \mathbb {R}^d\). Consider the function \(g(z):=f(z+x)-f(z+y)\), \(z\in \mathbb {R}^d\). Then, by Lemma 4.3 $$\begin{aligned} \Bigl \Vert \int _s^t (f(B_r^H+x)-f(B_r^H+y))\,dr\Bigr \Vert _{L_p(\Omega )}&= \Bigl \Vert \int _s^t g(B_r^H)\,dr\Bigr \Vert _{L_p(\Omega )}\\&\leqslant N\sqrt{p}\Vert g\Vert _{\mathcal {C}^{\alpha -\delta }} (t-s)^{H(\alpha -\delta )+1}. \end{aligned}$$ The corollary follows now immediately from Proposition 3.8. \(\square \) The next lemma provides a pathwise version of bound (4.17). It also allows us to replace the fractional Brownian motion by a fractional Brownian motion with drift. Lemma 4.6 Let \(H\in (0,1)\), \(\alpha >1-1/(2H)\), \(\alpha \in [0,1]\), \(f\in \mathcal {C}^\infty \). Let \(\psi :\Omega \times [0,1]\rightarrow \mathbb {R}^d\) be an \(\mathbb {F}\)–adapted process such that \(\psi _0\) is deterministic and for some \(R>0\) $$\begin{aligned} \Vert \psi \Vert _{\mathcal {C}^1([0,1],\mathbb {R}^d)}\leqslant R,\quad a.s. \end{aligned}$$ Suppose that for some \(\rho >H+1/2\) we have for any \(\lambda >0\) $$\begin{aligned} \;\;{\mathbb {E}}\;\exp \big (\lambda \Vert \psi \Vert ^2_{\mathcal {C}^{\rho }([0,1],\mathbb {R}^d)}\big )=:G(\lambda )<\infty . \end{aligned}$$ Then for any \(M>0\), \(\varepsilon >0\), \(\varepsilon _1>0\) there exists a constant \(N=N(d,\alpha ,H,\varepsilon ,\varepsilon _1,G,R,M)\) and a random variable \(\xi \) finite almost everywhere such that for any \(s,t\in [0,1]\), \(x,y\in \mathbb {R}^d\), \(|x|, |y|\leqslant M\) we have $$\begin{aligned}&\Bigl |\int _s^t (f(B_r^H+\psi _r+x)-f(B_r^H+\psi _r+y))dr\Bigr |\nonumber \\&\quad \leqslant \xi \Vert f\Vert _{\mathcal {C}^\alpha } (t-s)^{H(\alpha -1)+1-\varepsilon }|x-y| \end{aligned}$$ $$\begin{aligned} \;\;{\mathbb {E}}\;\exp (\xi ^{2-\varepsilon _1})<N<\infty . \end{aligned}$$ First we consider the case \(\psi \equiv 0\). Fix \(\varepsilon ,\varepsilon _1>0\). By the fundamental theorem of calculus we observe that for any \(x,y\in \mathbb {R}^d\), \(0\leqslant s \leqslant t \leqslant 1\) $$\begin{aligned}&\int _s^t (f(B_r^H+x)-f(B_r^H+y))\,dr\nonumber \\&\quad =(x-y)\cdot \int _0^1\int _s^t \nabla f(B_r^H+\theta x+(1-\theta )y)\,dr\,d\theta . \end{aligned}$$ Consider the process $$\begin{aligned} F(t,z):=\int _0^t \nabla f(B_r^H+z)\,dr. \end{aligned}$$ Take \(\delta >0\) such that \(\alpha -1-\delta >-1/(2H)\). By Lemma 4.3 and Corollary 4.5, there exists \(N_1=N_1(\alpha ,d,H,\delta )\) such that for any \(p\geqslant 2\), \(s,t\in [0,1]\), \(x,y\in \mathbb {R}^d\) we have $$\begin{aligned} \Vert F(t,x)-F(s,y)\Vert _{L_p(\Omega )}&\leqslant \Vert F(t,x)-F(s,x)\Vert _{L_p(\Omega )}+\Vert F(s,x)-F(s,y)\Vert _{L_p(\Omega )}\\&\leqslant N_1\sqrt{p}\Vert \nabla f\Vert _{\mathcal {C}^{\alpha -1}}((t-s)^{H(\alpha -1)+1}+|x-y|^\delta ). \end{aligned}$$ We stress that \(N_1\) does not depend on p. 
Taking into account that the process F is continuous (because \(f\in \mathcal {C}^\infty \)), we derive from the above bound and the Kolmogorov continuity theorem ([26, Theorem 1.4.1]) that for any p large enough one has $$\begin{aligned} \sup _{\begin{array}{c} x,y\in \mathbb {R}^d, |x|,|y|\leqslant M\\ s,t\in [0,1] \end{array}} \frac{|F(t,x)-F(s,y)|}{(t-s)^{H(\alpha -1)+1-\varepsilon }+|x-y|^{\delta /2}}=: \xi \Vert f\Vert _{\mathcal {C}^{\alpha }}<\infty \,\,a.s., \end{aligned}$$ and \(\Vert \xi \Vert _{L_p(\Omega )}\leqslant NN_1\sqrt{p}\), where \(N=N(\alpha ,d,H,\delta ,\varepsilon ,M)\). Since N and \(N_1\) do not depend on p, we see that by the Stirling formula $$\begin{aligned} \;\;{\mathbb {E}}\;\exp (\xi ^{2-\varepsilon _1})=\sum _{n=0}^\infty \frac{\;\;{\mathbb {E}}\;\xi ^{n(2-\varepsilon _1)}}{n!}\leqslant \sum _{n=0}^\infty \frac{(NN_1)^{n(2-\varepsilon _1)}n^{n(1-\varepsilon _1/2)}}{n!}<\infty , \end{aligned}$$ since the n-th summand is bounded by \(\bigl (e(NN_1)^{2-\varepsilon _1}n^{-\varepsilon _1/2}\bigr )^n\). Therefore we obtain from (4.22) that for any \(x,y\in \mathbb {R}^d\), \(|x|,|y|\leqslant M\) we have $$\begin{aligned}&\Bigl |\int _s^t (f(B_r^H+x)-f(B_r^H+y))\,dr\Bigr | \nonumber \\&\qquad \qquad \leqslant |x-y|\int _0^1 |F(t,\theta x+(1-\theta )y)-F(s,\theta x+(1-\theta )y)|\,d\theta \nonumber \\&\qquad \qquad \leqslant \xi \Vert f\Vert _{\mathcal {C}^{\alpha }} (t-s)^{H(\alpha -1)+1-\varepsilon }|x-y|. \end{aligned}$$ Now we consider the general case. Assume that the function \(\psi \) satisfies (4.19). Then by Proposition 3.10, bound (3.30) and assumption (4.19) the process $$\begin{aligned} {\widetilde{B}}^H_t:=B^H_t+\psi _t-\psi _0 \end{aligned}$$ is a fractional Brownian motion with Hurst parameter H under some probability measure \({\widetilde{\mathbb {P}}}\) equivalent to \(\mathbb {P}\). Hence, from (4.25) (applied under \({\widetilde{\mathbb {P}}}\) to \({\widetilde{B}}^H\), with \(M+|\psi _0|\) in place of M), we get $$\begin{aligned}&\Bigl |\int _s^t (f(B_r^H+\psi _r+x)-f(B_r^H+\psi _r+y))\,dr\Bigr |\\&\quad =\Bigl |\int _s^t (f({\widetilde{B}}_r^H+x+\psi _0)-f({\widetilde{B}}_r^H+y+\psi _0))\,dr\Bigr |\\&\quad \leqslant \eta \Vert f\Vert _{\mathcal {C}^{\alpha }} (t-s)^{H(\alpha -1)+1-\varepsilon }|x-y| \end{aligned}$$ where \(\eta \) is a random variable with \(\;\;{\mathbb {E}}\;^{{\widetilde{\mathbb {P}}}} \exp (\eta ^{2-\varepsilon _1})<\infty \). Note that we have used here our assumption that \(\psi _0\) is non-random. The latter implies that for any \(\varepsilon _2>\varepsilon _1\) $$\begin{aligned} \;\;{\mathbb {E}}\;^{\mathbb {P}} \exp (\eta ^{2-\varepsilon _2})&=\;\;{\mathbb {E}}\;^{{\widetilde{\mathbb {P}}}}\Bigl [ \exp (\eta ^{2-\varepsilon _2}) \frac{d \mathbb {P}}{d {\widetilde{\mathbb {P}}}}\Bigr ]\\&\leqslant \Bigl (\;\;{\mathbb {E}}\;^{{\widetilde{\mathbb {P}}}} \exp (2\eta ^{2-\varepsilon _2})\Bigr )^{1/2} \Bigl (\;\;{\mathbb {E}}\;^{ \mathbb {P}}\frac{d\mathbb {P}}{d{\widetilde{\mathbb {P}}}} \Bigr )^{1/2}\\&\leqslant \Bigl (\;\;{\mathbb {E}}\;^{{\widetilde{\mathbb {P}}}} \exp (2\eta ^{2-\varepsilon _2})\Bigr )^{1/2} e^{NR}\;\;{\mathbb {E}}\;^{ \mathbb {P}} \exp (N \Vert \psi \Vert ^2_{\mathcal {C}^{\rho }([0,1],\mathbb {R}^d)}) \end{aligned}$$ where the last inequality follows from (3.28) and (3.30). This concludes the proof of the lemma. \(\square \) Now we are ready to present the main result of this subsection, the regularization lemma. Let \(H\in (0,1)\), \(\alpha >1-1/(2H)\), \(\alpha \in [0,1]\), \(p\geqslant 2\), \(f\in \mathcal {C}^\alpha \), \(\varepsilon ,\varepsilon _1>0\). Let \(\tau \in (H(1-\alpha ),1)\).
Let \(\varphi , \psi :\Omega \times [0,1]\rightarrow \mathbb {R}^d\) be \(\mathbb {F}\)–adapted processes satisfying condition (4.18). Assume that \(\psi \) satisfies additionally (4.19) for some \(\rho >H+1/2\), \(\rho \in [0,1]\). Suppose that \(\psi _0\) and \(\varphi _0\) are deterministic. Then there exists a constant \(N=N(H,\alpha ,p,d,\tau ,G,R,\varepsilon ,\varepsilon _1)\) such that for any \(L>0\), and any \(s,t\in [0,1]\) we have $$\begin{aligned}&\Bigl \Vert \int _s^t (f(B_r^H+\varphi _r)-f(B_r^H+\psi _r))\,dr\Bigr \Vert _{L_p(\Omega )}\nonumber \\&\quad \leqslant NL \Vert f\Vert _{\mathcal {C}^\alpha } (t-s)^{H(\alpha -1)+1-\varepsilon }\big (\Vert \varphi _s-\psi _s\Vert _{L_p(\Omega )}+ \Vert [\varphi -\psi ]_{C^{\tau }([s,t])}\Vert _{L_p(\Omega )}(t-s)^\tau \big )\nonumber \\&\qquad + N \Vert f\Vert _{\mathcal {C}^0}|t-s|\exp (-L^{2-\varepsilon _1}). \end{aligned}$$ We begin by assuming additionally that \(f\in \mathcal {C}^\infty (\mathbb {R}^d,\mathbb {R}^d)\). Fix \(S,T\in [0,1]_{\leqslant }\), \(\varepsilon _1>0\). Choose any \(\varepsilon >0\) small enough such that $$\begin{aligned} H(\alpha -1)-\varepsilon +\tau >0. \end{aligned}$$ Let us apply the deterministic sewing lemma (Proposition 3.1) to the process $$\begin{aligned} A_{s,t}:=\int _s^t (f(B_r^H+\psi _r+\varphi _s-\psi _s)-f(B_r^H+\psi _r))\,dr,\quad (s,t)\in [S,T]_{\leqslant }. \end{aligned}$$ Let us check that all the conditions of the above lemma are satisfied. First, the process A is clearly continuous, since f is bounded. Then, using Lemma 4.6 with \(M:=4R\), we derive that for any \(S\leqslant s\leqslant u\leqslant t\leqslant T\) there exists a random variable \(\xi \) with \(\;\;{\mathbb {E}}\;\exp (\xi ^{2-\varepsilon _1})\leqslant N=N(d,\alpha ,H,\varepsilon ,\varepsilon _1,G,|\varphi _0|,|\psi _0|,R)<\infty \) such that $$\begin{aligned} |\delta A_{s,u,t}|&=\Bigl |\int _u^t (f(B_r^H+\psi _r+\varphi _u-\psi _u)-f(B_r^H+\psi _r+\varphi _s-\psi _s))\,dr\Bigr |\\&\leqslant \xi \Vert f\Vert _{\mathcal {C}^\alpha }|(\psi _u-\varphi _u)-(\psi _s-\varphi _s)|(t-s)^{H(\alpha -1)+1-\varepsilon }\\&\leqslant \xi \Vert f\Vert _{\mathcal {C}^\alpha }[\psi -\varphi ]_{\mathcal {C}^\tau ([S,T])}(t-s)^{H(\alpha -1)+1-\varepsilon +\tau }. \end{aligned}$$ Since, by (4.27), \(H(\alpha -1)+1-\varepsilon +\tau >1\), we see that condition (3.6) is satisfied with \(C=\xi \Vert f\Vert _{\mathcal {C}^\alpha }[\psi -\varphi ]_{\mathcal {C}^\tau ([S,T])}\). Thus, all the conditions of Proposition 3.1 hold. By setting now $$\begin{aligned} {\tilde{\mathcal {A}}}_t:=\int _S^t (f(B_r^H+\varphi _r)-f(B_r^H+\psi _r))\,dr, \end{aligned}$$ we see that for \(S\leqslant s\leqslant t\leqslant T\) $$\begin{aligned} |{\tilde{\mathcal {A}}}_t-{\tilde{\mathcal {A}}}_s-A_{s,t}|&=\Bigl |\int _s^t (f(B_r^H+\varphi _r)-f(B_r^H+\psi _r+\varphi _s-\psi _s))\,dr\Bigr |\\&\leqslant \Vert f\Vert _{\mathcal {C}^1}[\psi -\varphi ]_{\mathcal {C}^\tau ([S,T])}|t-s|^{1+\tau } \\&\leqslant \Vert f\Vert _{\mathcal {C}^1}[\psi -\varphi ]_{\mathcal {C}^\tau ([S,T])}|t-s|^{H(\alpha -1)+1-\varepsilon +\tau }. \end{aligned}$$ Thus, the process \({\tilde{\mathcal {A}}}\) satisfies (3.7) and therefore coincides with \(\mathcal {A}\).
Proposition 3.1 implies now that for any \(S\leqslant s\leqslant t\leqslant T\) $$\begin{aligned}&\Bigl |\int _s^t (f(B_r^H+\varphi _r)-f(B_r^H+\psi _r))\,dr\Bigr |\\&\quad \leqslant |A_{s,t}|+N \xi \Vert f\Vert _{\mathcal {C}^\alpha }[\psi -\varphi ]_{\mathcal {C}^\tau ([S,T])}(t-s)^{H(\alpha -1)+1-\varepsilon +\tau }\\&\quad \leqslant N\xi \Vert f\Vert _{\mathcal {C}^\alpha }(t-s)^{H(\alpha -1)+1-\varepsilon }\bigl (|\psi -\varphi |_{\mathcal {C}^0([S,T])}+ [\psi -\varphi ]_{\mathcal {C}^\tau ([S,T])}(t-s)^{\tau }\bigr ), \end{aligned}$$ where the bound on \(|A_{s,t}|\) follows again from Lemma 4.6. By putting in the above bound \(s=S\) and \(t=T\) and using that \(|\psi -\varphi |_{\mathcal {C}^0([S,T])}\leqslant |\psi _S-\varphi _S|+ [\psi -\varphi ]_{\mathcal {C}^\tau ([S,T])}(T-S)^{\tau }\), we obtain for \(S,T\in [0,1]_{\leqslant }\) $$\begin{aligned}&\Bigl |\int _S^T (f(B_r^H+\varphi _r)-f(B_r^H+\psi _r))\,dr\Bigr |\\&\quad \leqslant N\xi \Vert f\Vert _{\mathcal {C}^\alpha }(T-S)^{H(\alpha -1)+1-\varepsilon }\bigl (|\psi _S-\varphi _S|+ [\psi -\varphi ]_{\mathcal {C}^\tau ([S,T])}(T-S)^{\tau }\bigr ). \end{aligned}$$ On the other hand, we have the following trivial bound. $$\begin{aligned} \Bigl |\int _S^T (f(B_r^H+\varphi _r)-f(B_r^H+\psi _r))\,dr\Bigr |\leqslant 2\Vert f\Vert _{\mathcal {C}^0}|T-S|. \end{aligned}$$ $$\begin{aligned}&\Bigl \Vert \int _S^T (f(B_r^H+\varphi _r)-f(B_r^H+\psi _r))\,dr\Bigr \Vert _{L_p(\Omega )} \\&\quad \leqslant \Bigl \Vert \mathbf {1}_{\xi \leqslant L}\int _S^T (f(B_r^H+\varphi _r)-f(B_r^H+\psi _r))\,dr\Bigr \Vert _{L_p(\Omega )} \\&\qquad +\Bigl \Vert \mathbf {1}_{\xi \geqslant L}\int _S^T (f(B_r^H+\varphi _r)-f(B_r^H+\psi _r))\,dr\Bigr \Vert _{L_p(\Omega )} \\&\quad \leqslant LN \Vert f\Vert _{\mathcal {C}^\alpha }(T-S)^{H(\alpha -1)+1-\varepsilon }\bigl (\Vert \psi _S-\varphi _S\Vert _{L_p(\Omega )}+ \Vert [\psi -\varphi ]_{\mathcal {C}^\tau ([S,T])}\Vert _{L_p(\Omega )}(T-S)^{\tau }\bigr ) \\&\qquad +2\big (\mathbb {P}(\xi \geqslant L)\big )^{1/p}\Vert f\Vert _{\mathcal {C}^0}|T-S|. \end{aligned}$$ By Chebyshev inequality and (4.21), we finally get (4.26) for the case of smooth f. Now we are ready to remove the extra assumption on the smoothness of f. Let us set \(f_n= \mathcal {P}_{1/n}f \in \mathcal {C}^\infty \). By applying the statement of the lemma to \(f_n\) and using that \(\Vert f_n\Vert _{\mathcal {C}^ \beta } \leqslant \Vert f\Vert _{\mathcal {C}^\beta }\) for \(\beta =\alpha , 0\) we get $$\begin{aligned}&\Bigl \Vert \int _s^t ( f_n(B_r^H+\varphi _r)- f_n(B_r^H+\psi _r))\,dr\Bigr \Vert _{L_p(\Omega )}\nonumber \\&\quad \leqslant NL \Vert f\Vert _{\mathcal {C}^\alpha } (t-s)^{H(\alpha -1)+1-\varepsilon }(\Vert \varphi _s-\psi _s\Vert _{L_p(\Omega )}+ \Vert [\varphi -\psi ]_{C^{\tau }([s,t])}\Vert _{L_p(\Omega )}(t-s)^\tau )\nonumber \\&\qquad + N \Vert f\Vert _{\mathcal {C}^0}|t-s|\exp (-L^{2-\varepsilon _1}). \end{aligned}$$ If \(\alpha >0\), then \(f_n(x) \rightarrow f(x)\) for all \(x \in \mathbb {R}^d\) and the claim follows by Fatou's lemma. So we only have to consider the case \(\alpha =0\). Clearly, it suffices to show that for each \(r>0\), almost surely $$\begin{aligned} ( f_n(B_r^H+\varphi _r)- f_n(B_r^H+\psi _r)) \rightarrow ( f(B_r^H+\varphi _r)- f(B_r^H+\psi _r)), \end{aligned}$$ as \(n \rightarrow \infty \). Notice that almost surely \(f_n(B^H_r) \rightarrow f(B^H_r)\) as \(n \rightarrow \infty \), since the law of \(B^H_r\) is absolutely continuous (for \(r>0\)). 
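Indeed, since f is bounded, the Gaussian mollification converges at every Lebesgue point of f, $$\begin{aligned} f_n(x)=\int _{\mathbb {R}^d}p_{1/n}(y)f(x-y)\,dy\rightarrow f(x)\quad \text {for Lebesgue-a.e. }x\in \mathbb {R}^d, \end{aligned}$$ and the exceptional null set is not charged by the (absolutely continuous) law of \(B^H_r\).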
Moreover, since \(\alpha =0\), we have by assumption that \(H< 1/2\). By Proposition 3.10 (recall that \(\varphi \) satisfies (4.18), therefore is Lipschitz) there exists a measure equivalent to \(\mathbb {P}\) under which \(B^H+ \varphi \) is a fractional Brownian motion. Consequently, for all \(r >0\), almost surely $$\begin{aligned} f_n(B_r^H+\varphi _r) \rightarrow f(B_r^H+\varphi _r), \end{aligned}$$ as \(n \rightarrow \infty \). With the same reasoning we obtain that almost surely \(f_n(B_r^H+\psi _r) \rightarrow f(B_r^H+\psi _r)\). The lemma is now proved. \(\square \) Without loss of generality we assume \(\alpha \ne 1\). Let us denote $$\begin{aligned} \psi _t:=x_0+\int _0^t b(X_r)\,dr,\quad \psi ^n_t:=x^n_0+\int _0^t b(X^n_{\kappa _n(r)})\,dr,\quad t\in [0,1]. \end{aligned}$$ Fix \(\varepsilon >0\) such that $$\begin{aligned} \varepsilon <\frac{1}{2}+H(\alpha -1). \end{aligned}$$ By assumption (2.5) such \(\varepsilon \) exists. Fix now large enough \(p\geqslant 2\) such that $$\begin{aligned} d/p<\varepsilon /2. \end{aligned}$$ Fix \(0\leqslant S\leqslant T\leqslant 1\). Then, taking into account (4.7), for any \(S\leqslant s\leqslant t\leqslant T\) we have $$\begin{aligned}&\Vert (\psi _t- \psi _s)-(\psi ^n_t- \psi ^n_s)\Vert _{L_p(\Omega )}\nonumber \\&\quad =\Bigl \Vert \int _s^t (b(X_r)-b(X^n_{\kappa _n(r)}))\,dr\Bigr \Vert _{L_p(\Omega )}\nonumber \\&\quad \leqslant \Bigl \Vert \int _s^t (b(X_r)-b(X^n_r))\,dr\Bigr \Vert _{L_p(\Omega )}+N|t-s|^{1/2+\varepsilon } n^{-\gamma +\varepsilon }. \end{aligned}$$ Let \(M\geqslant 1\) be a parameter to be fixed later. We wish to apply Lemma 4.7 with \(\psi ^n\) in place of \(\varphi \), \(\frac{1}{2}+H(\alpha -1)-\varepsilon \) in place of \(\varepsilon \), \(\tau :=1/2+\varepsilon /2\), and with M in place of L and some \(\varepsilon _0>0\) (to be chosen later) in place of \(\varepsilon _1\). Let us check that all the conditions of this lemma are satisfied. First, we note that by (4.29) we have \(\frac{1}{2}+H(\alpha -1)-\varepsilon >0\), which is required by the assumptions of the lemma. Second, we note that \(1/2+\varepsilon /2>H(1-\alpha )\) thanks to (2.5), thus this choice of \(\tau \) is allowed. Next, it is clear that \(\psi _0\) and \(\psi ^n_0\) are deterministic. Further, since the function b is bounded, we see that \(\psi \) and \(\psi ^n\) satisfy (4.18). Finally, let us verify that \(\psi \) satisfies (4.19). If \(H<1/2\), this condition holds automatically thanks to the boundedness of b. If \(H\geqslant 1/2\) then pick \(H'\in (0,H)\) such that $$\begin{aligned} \alpha H'>H-\frac{1}{2}. \end{aligned}$$ Note that such \(H'\) exists thanks to assumption (2.5). Then, by definition of \(\psi \), we clearly have $$\begin{aligned}{}[\psi ]_{\mathcal {C}^{1+\alpha H'}}\leqslant |x_0|+\Vert b\Vert _{\mathcal {C}^0}+[b(X_{\cdot })]_{\mathcal {C}^{\alpha H'}} \leqslant |x_0|+\Vert b\Vert _{\mathcal {C}^0}+\Vert b\Vert _{\mathcal {C}^0}^{\alpha }+[B^H]_{{\mathcal {C}^{ H'}}}^\alpha . \end{aligned}$$ Therefore for any \(\lambda >0\) we have $$\begin{aligned} \;\;{\mathbb {E}}\;e^{\lambda [\psi ]_{\mathcal {C}^{1+\alpha H'}}^2}\leqslant N(|x_0|,\Vert b\Vert _{\mathcal {C}^0})\;\;{\mathbb {E}}\;\exp ([B^H]_{{\mathcal {C}^{ H'}}}^{2\alpha })<\infty . \end{aligned}$$ By taking \(\rho := 1+\alpha H'\) and recalling (4.32), we see that \(\rho >H+1/2\) and thus condition (4.19) holds. Therefore all conditions of Lemma 4.7 are met.
Applying this lemma, we get $$\begin{aligned}&\Bigl \Vert \int _s^t (b(X_r)- b(X^n_r))\,dr\Bigr \Vert _{L_p(\Omega )}\\&\quad =\Bigl \Vert \int _s^t (b(B^H_r+\psi _r)-b(B^H_r+\psi ^n_r))\,dr\Bigr \Vert _{L_p(\Omega )} \\&\quad \leqslant M N|t-s|^{\frac{1}{2}+\varepsilon }\Vert \psi _S-\psi _S^n\Vert _{L_p(\Omega )} \\&\qquad + MN|t-s|^{1+3\varepsilon /2} \Vert [\psi -\psi ^n]_{\mathcal {C}^{1/2+\varepsilon /2}([s,t])}\Vert _{L_p(\Omega )}+ N \exp (-M^{2-\varepsilon _0})|t-s|\\&\quad \leqslant M N|t-s|^{\frac{1}{2}+\varepsilon }\Vert \psi _S-\psi _S^n\Vert _{L_p(\Omega )} \\&\qquad + MN|t-s|^{1+3\varepsilon /2} [] \psi -\psi ^n []_{\mathscr {C}^{1/2+\varepsilon }_p,[s,t]}+ N \exp (-M^{2-\varepsilon _0})|t-s|, \end{aligned}$$ where the last inequality follows from the Kolmogorov continuity theorem and (4.30). Using this in (4.31), dividing by \(|t-s|^{1/2+\varepsilon }\) and taking supremum over \(S\leqslant s\leqslant t\leqslant T\), we get for some \(N_1\geqslant 1\) independent of M, n $$\begin{aligned}&[] \psi -\psi ^n []_{\mathscr {C}^{1/2+\varepsilon }_p,[S,T]}\nonumber \\&\quad \leqslant MN_1 \Vert \psi _S-\psi ^n_S\Vert _{L_p(\Omega )} +MN_1|T-S|^{1/2+\varepsilon /2}[] \psi -\psi ^n []_{\mathscr {C}^{1/2+\varepsilon }_p,[S,T]}\nonumber \\&\quad + N_1 n^{-\gamma +\varepsilon }+N_1 \exp (-M^{2-\varepsilon _0}). \end{aligned}$$ Fix now m to be the smallest integer so that \(N_1M m^{-1/2-\varepsilon /2}\leqslant 1/2\) (we stress that m does not depend on n). One gets from (4.33) $$\begin{aligned}&[] \psi -\psi ^n []_{\mathscr {C}^{1/2+\varepsilon }_p,[S,S+1/m]} \leqslant 2M N_1 \Vert \psi _S-\psi ^n_S\Vert _{L_p(\Omega )} \nonumber \\&\quad + 2N_1 n^{-\gamma +\varepsilon }+2N_1 \exp (-M^{2-\varepsilon _0}), \end{aligned}$$ and thus $$\begin{aligned}&\Vert \psi _{S+1/m}-\psi ^n_{S+1/m}\Vert _{L_p(\Omega )} \leqslant 2MN_1 \Vert \psi _S-\psi ^n_S\Vert _{L_p(\Omega )} \\&\quad + 2N_1 n^{-\gamma +\varepsilon }+2N_1 \exp (-M^{2-\varepsilon _0}). \end{aligned}$$ Starting from \(S=0\) and applying the above bound k times, \(k=1,\ldots ,m\), one can conclude $$\begin{aligned} \Vert \psi _{k/m}-\psi ^n_{k/m}\Vert _{L_p(\Omega )}&\leqslant (2MN_1)^k \Bigl (\Vert \psi _0-\psi ^n_0\Vert _{L_p(\Omega )}\\&\quad + 2N_1 n^{-\gamma +\varepsilon }+2N_1 \exp (-M^{2-\varepsilon _0})\Bigr )\\&\leqslant (2MN_1)^m \Bigl (|x_0-x^n_0| \\&\quad + 2N_1 n^{-\gamma +\varepsilon } +2N_1 \exp (-M^{2-\varepsilon _0})\Bigr ). \end{aligned}$$ Substituting back into (4.34), we get $$\begin{aligned}{}[] \psi -\psi ^{n} []_{\mathscr {C}^{1/2+\varepsilon }_p([0,1])}&\leqslant m \sup _{k=0,\dots ,m-1}[] \psi -\psi ^{n} []_{\mathscr {C}^{1/2+\varepsilon }_p([k/m,(k+1)/m])}\nonumber \\&\leqslant (2N_1M)^{m+5}\Bigl (|x_0-x_0^n|+N_1 n^{-\gamma +\varepsilon }+N_1 \exp (-M^{2-\varepsilon _0})\Bigr ). \end{aligned}$$ It follows from the definition of m that \(m\leqslant 2N_1^2M^{2-\varepsilon }\). At this point we choose \(\varepsilon _0=\varepsilon /2\) and note that for some universal constant \(N_2\) one has $$\begin{aligned} (2N_1M)^{m+5}=e^{(m+5)\log (2 N_1M)}\leqslant e^{(2N_1^2M^{2-\varepsilon }+5)\log (2 N_1M)}\leqslant N_2 e^{\frac{1}{2}M^{2-\varepsilon /2}}. \end{aligned}$$ Thus, we can continue (4.35) as follows. $$\begin{aligned}&[] \psi -\psi ^{n} []_{\mathscr {C}^{1/2+\varepsilon }_p([0,1])} \leqslant e^{N_3M^{2-\varepsilon }\log M}\nonumber \\&\quad \Bigl (|x_0-x_0^n|+N_1 n^{-\gamma +\varepsilon }\Bigr )+N_1N_2 \exp (-\frac{1}{2}M^{2-\varepsilon /2}).
\end{aligned}$$ Fix now \(\delta >0\) and choose \(N_4=N_4(\delta )\) such that for all \(M>0\) one has $$\begin{aligned} \exp (\frac{1}{2}M^{2-\varepsilon /2})\geqslant N_4 e^{\delta ^{-1}N_3M^{2-\varepsilon }\log M}. \end{aligned}$$ It remains to notice that by choosing \(M>1\) such that $$\begin{aligned} e^{N_3M^{2-\varepsilon }\log M}= n^{\delta }, \end{aligned}$$ one has $$\begin{aligned} e^{-\frac{1}{2}M^{2-\varepsilon /2}}\leqslant N n^{-1}. \end{aligned}$$ Substituting back to (4.36) and since \(X-X^n=\psi -\psi ^n\), we arrive to the required bound (2.6). \(\square \) Malliavin calculus for the Euler–Maruyama scheme In the multiplicative standard Brownian case, we first consider Euler–Maruyama schemes without drift: for any \(y\in \mathbb {R}^d\) define the process \({\bar{X}}^n(y)\) by $$\begin{aligned} d{\bar{X}}^n_t(y)=\sigma ({\bar{X}}^n_{\kappa _n(t)}(y))\,dB_t,\quad {\bar{X}}^n_0=y. \end{aligned}$$ This process will play a similar role as \(B^H\) in the previous section. Similarly to the proof of Lemma 4.1, we need sharp bounds on the conditional distribution of \({\bar{X}}^n_t\) given \(\mathcal {F}_s\), which can be obtained from bounds of the density of \({\bar{X}}^n_t\). A trivial induction argument yields that for \(t>0\), \({\bar{X}}^n_t\) indeed admits a density, but to our knowledge such inductive argument can not be used to obtain useful quantitative information. While the densities of Euler–Maruyama approximations have been studied in the literature, see e.g. [5, 6, 18], none of the available estimates suited well for our purposes. In [18], under less regularity assumption on \(\sigma \), \(L_p\) bounds of the density (but not its derivatives) are derived. In [5, 6], smoothness of the density is obtained even in a hypoelliptic setting, but without sharp control on the short time behaviour of the norms. Let \(\sigma \) satisfy (2.8), \({\bar{X}}^n\) be the solution of (5.1), and let \(G\in \mathcal {C}^1\). Then for all \(t=1/n,2/n,\ldots ,1\) and \(k=1,\ldots ,d\) one has the bound $$\begin{aligned} |\;\;{\mathbb {E}}\;\partial _k G({\bar{X}}^n_t)|\leqslant N \Vert G\Vert _{\mathcal {C}^0}t^{-1/2} + N\Vert G\Vert _{\mathcal {C}^1}e^{-cn} \end{aligned}$$ with some constant \(N=N(d,\lambda ,\Vert \sigma \Vert _{\mathcal {C}^2})\) and \(c=c(d,\Vert \sigma \Vert _{\mathcal {C}^2})>0\). We will prove Theorem 5.2 via Malliavin calculus. In our discrete situation, of course this could be translated to finite dimensional standard calculus, but we find it more instructive to follow the basic terminology of [35], which we base on the lecture notes [21]. Define \(H=\{h=(h_i)_{i=1,\ldots ,n}:\,h_i\in \mathbb {R}^d\}\), with the norm $$\begin{aligned} \Vert h\Vert ^2_H=\frac{1}{n}\sum _{i=1}^n|h_i|^2=\frac{1}{n}\sum _{i=1}^n\sum _{k=1}^d|h_i^k|^2. \end{aligned}$$ One can obtain a scalar product from \(\Vert \cdot \Vert _H\), which we denote by \(\langle \cdot ,\cdot \rangle _H\). Let us also denote \(\mathcal {I}=\{1,\ldots ,n\}\times \{1,\ldots ,d\}\). One can of course view H as a copy of \(\mathbb {R}^\mathcal {I}\), with a rescaled version of the usual \(\ell _2\) norm. We denote by \(e_{(i,k)}\) the element of H whose elements are zero apart from the i-th one, which is the k-th unit vector of \(\mathbb {R}^d\). Set \(\Delta W_{(i,k)}:=W^{k}_{i/n}-W^k_{(i-1)/n}\). 
Then for any \(\mathbb {R}\)-valued random variable X of the form $$\begin{aligned} X=F(\Delta W_{(i,k)}:\,(i,k)\in \mathcal {I}), \end{aligned}$$ where F is a differentiable function, with at most polynomially growing derivative, the Malliavin derivative of X is defined as the H-valued random variable $$\begin{aligned} \mathscr {D}X := \sum _{(i,k)\in \mathcal {I}}(\mathscr {D}^k_i X)e_{(i,k)} :=\sum _{(i,k)\in \mathcal {I}}\partial _{(i,k)}F( \Delta W_{(j,\ell )}:\,(j,\ell )\in \mathcal {I})e_{(i,k)}. \end{aligned}$$ For multidimensional random variables we define \(\mathscr {D}\) coordinatewise. In the sequel we also use the matrix norm on \(\mathbb {R}^{d\times d}\) defined in the usual way \(\Vert M\Vert :=\sup _{x\in \mathbb {R}^d, |x|=1}|Mx|\). Recall that if M is positive semidefinite, then one has \(\Vert M\Vert =\sup _{x\in \mathbb {R}^d, |x|=1}x^*Mx\). It follows that \(\Vert \cdot \Vert \) is monotone increasing with respect to the usual order \(\preceq \) on the positive semidefinite matrices. The following few properties are true in far larger generality; for the proofs we refer to [21]. One easily sees that the derivative \(\mathscr {D}\) satisfies the chain rule: namely, for any differentiable \(G:\mathbb {R}^d\rightarrow \mathbb {R}\), one has \(\mathscr {D}G(X)=\nabla G(X)\cdot \mathscr {D}X\). The operator \(\mathscr {D}\) is closable, and its closure will also be denoted by \(\mathscr {D}\), whose domain we denote by \(\mathcal {W}\subset L_2(\Omega )\). The adjoint of \(\mathscr {D}\) is denoted by \(\delta \). One then has that \(\mathcal {W}(H)\) is included in the domain of \(\delta \) and that for \(u\in \mathcal {W}(H)\) the following identity holds: $$\begin{aligned} \;\;{\mathbb {E}}\;|\delta u|^2=\;\;{\mathbb {E}}\;\Vert u\Vert ^2_H+\;\;{\mathbb {E}}\;\frac{1}{n^2}\sum _{(i,k),(j,m)\in \mathcal {I}}(\mathscr {D}^k_i u^m_j)(\mathscr {D}^m_j u^k_i). \end{aligned}$$ Stochastic difference equations First let us remark that the equation (5.1) does not define an invertible stochastic flow: indeed, for any \(t>0\), \(y\rightarrow {\bar{X}}^n_t(y)\) may not even be one-to-one. Therefore in order to invoke arguments from the Malliavin calculus for diffusion processes, we consider a modified equation that does define an invertible flow. Unfortunately, this new process will not have a density, but its singular part (as well as its difference from the original process) is exponentially small. Take a smooth function \(\varrho :\mathbb {R}\rightarrow \mathbb {R}\) such that \(|\varrho (r)|\leqslant |r|\) for all \(r\in \mathbb {R}\), \(\varrho (r)=r\) for \(|r|\leqslant (4\Vert \sigma \Vert _{\mathcal {C}^1} d^2)^{-1}\), \(\varrho (r)=0\) for \(|r|\geqslant (2\Vert \sigma \Vert _{\mathcal {C}^1} d^2)^{-1}\), and that satisfies \(|\partial ^k\varrho |\leqslant N\) for \(k=0,\ldots ,3\) with some \(N=N(d,\Vert \sigma \Vert _{\mathcal {C}^1})\). Define the recursion, for \(x\in \mathbb {R}^d\) and \(j=1,\ldots , n\), \(k=1,\ldots ,d\) $$\begin{aligned} \mathcal {X}_{j}^{k}(x)=\mathcal {X}_{j-1}^k(x)+\sum _{\ell =1}^d\sigma ^{k\ell }\big (\mathcal {X}_{j-1}(x)\big )\varrho (\Delta W_{(j,\ell )}),\qquad \mathcal {X}_{0}(x)=x. \end{aligned}$$ By our definition of \(\varrho \), for any j, (5.4) defines a diffeomorphism from \(\mathbb {R}^d\) to \(\mathbb {R}^d\) by \(x\rightarrow \mathcal {X}_{j}(x)\).
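Note also that if every increment satisfies \(|\Delta W_{(j,\ell )}|\leqslant (4\Vert \sigma \Vert _{\mathcal {C}^1} d^2)^{-1}\), then \(\varrho \) acts as the identity and (5.4) reduces to $$\begin{aligned} \mathcal {X}_{j}(x)=\mathcal {X}_{j-1}(x)+\sigma \big (\mathcal {X}_{j-1}(x)\big )\Delta W_{j},\qquad \Delta W_{j}:=(\Delta W_{(j,1)},\ldots ,\Delta W_{(j,d)}), \end{aligned}$$ that is, to the gridpoint recursion generated by the Euler–Maruyama scheme (5.1); in other words, \(\mathcal {X}\) deviates from the iterates of (5.1) only on an event of exponentially small probability, as quantified below.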
It is easy to see that its Jacobian \(J_{j}(x)=\big (J_{j}^{m,k}(x)\big )=\big (\partial _{x^m}\mathcal {X}^k_{j}(x)\big )_{k,m=1,\ldots ,d; \,j=1,\ldots ,n}\) satisfies the recursion $$\begin{aligned} J_{j}^{m,k}(x)=J_{j-1}^{m,k}(x)+ \sum _{q=1}^d J_{j-1}^{m,q}(x) \Big [\sum _{\ell =1}^d\partial _{q}\sigma ^{k\ell }\big (\mathcal {X}_{j-1}(x)\big )\varrho (\Delta W_{(j,\ell )})\Big ],\qquad J_{0}(x)=\mathrm {id}. \end{aligned}$$ It is also clear that \(\mathscr {D}_i^m\mathcal {X}^k_j=0\) for \(j<i\), while for \(j>i\) we have the recursion $$\begin{aligned} \mathscr {D}_i^m\mathcal {X}^k_j(x)= & {} \mathscr {D}_i^m\mathcal {X}^k_{j-1}(x) + \sum _{q=1}^d\mathscr {D}_i^m \mathcal {X}^q_{j-1}(x) \Big [ \sum _{\ell =1}^d \partial _q\sigma ^{k\ell }\big (\mathcal {X}_{j-1}(x)\big ) \varrho (\Delta W_{(j,\ell )})\Big ], \\ \mathscr {D}^m_i\mathcal {X}^k_i= & {} \sigma ^{km}\big (\mathcal {X}_{i-1}(x)\big )\varrho '(\Delta W_{(i,m)}). \end{aligned}$$ From now on we will usually suppress the dependence on x in the notation. Save for the initial conditions, the two recursions coincide for the matrix-valued processes \(J_\cdot \) and \(\mathscr {D}_i \mathcal {X}_\cdot \). Since the recursion is furthermore linear, \(j\mapsto J_j^{-1}\mathscr {D}_i \mathcal {X}_j\) is constant in time for \(j\geqslant i\geqslant 1\). In particular, $$\begin{aligned} J_{j}^{-1}\mathscr {D}_i \mathcal {X}_j=J_i^{-1}\big (\sigma ^{km}(\mathcal {X}_{i-1})\varrho '(\Delta W_{(i,m)})\big )_{k,m=1,\ldots ,d}\,, \end{aligned}$$ or, with the notation \(J_{i,j}=J_jJ_i^{-1}\), $$\begin{aligned} \mathscr {D}_i \mathcal {X}_j=J_{i,j}\big (\sigma ^{km}(\mathcal {X}_{i-1})\varrho '(\Delta W_{(i,m)})\big )_{k,m=1,\ldots ,d}\,. \end{aligned}$$ Let us now define the event \({\hat{\Omega }}\subset \Omega \) by $$\begin{aligned} {\hat{\Omega }}=\{|\Delta W_{(i,k)}|\leqslant (4\Vert \sigma \Vert _{\mathcal {C}^1} d^2)^{-1}, \forall (i,k)\in \mathcal {I}\} \end{aligned}$$ as well as the (matrix-valued) random variables \(\mathcal {D}_{i,j}\) by $$\begin{aligned} \mathcal {D}_{i,j}=J_{i,j}\sigma (\mathcal {X}_{i-1}). \end{aligned}$$ Clearly, on \({\hat{\Omega }}\) one has \(\mathcal {D}_{i,j}=\mathscr {D}_i \mathcal {X}_j\). Note that for fixed j, m one may view \(\mathcal {D}_{\cdot ,j}^{\cdot ,m}\) as an element of H, while for fixed i, j one may view \(\mathcal {D}_{i,j}\) as a \(d\times d\) matrix. One furthermore has the following exponential bound on \({\hat{\Omega }}\). There exist N and \(c>0\) depending only on d and \(\Vert \sigma \Vert _{\mathcal {C}^1}\), one has \(\mathbb {P}({\hat{\Omega }})\geqslant 1-Ne^{-cn}\). For each \((i,k)\in \mathcal {I}\), since \(\Delta W_{(i,k)}\) is zero mean Gaussian with variance \(n^{-1}\), one has $$\begin{aligned} \mathbb {P}\big (\varrho (\Delta W_{(i,k)})\ne \Delta W_{(i,k)}\big )\leqslant \mathbb {P}\big (|\Delta W_{(i,k)}|\geqslant (4\Vert \sigma \Vert _{\mathcal {C}^1} d^2)^{-1}\big )\leqslant N'e^{-c'n} \end{aligned}$$ with some \(N'\) and \(c'>0\) depending only on d and \(\Vert \sigma \Vert _{\mathcal {C}^1}\), by the standard properties of the Gaussian distribution. Therefore, by the elementary inequality \((1-x)^\alpha \geqslant 1-\alpha x\), valid for all \(x\in [0,1]\) and \(\alpha \geqslant 1\), one has $$\begin{aligned} \mathbb {P}({\hat{\Omega }})\geqslant \big (1-(N'e^{-c'n}\wedge 1)\big )^{nd}\geqslant 1-N'nde^{-c'n}\geqslant 1-Ne^{-(c'/2)n}. 
\end{aligned}$$ We now fix \((j,k)\in \mathcal {I}\), \(G\in \mathcal {C}^1\), and we aim to bound \(|\;\;{\mathbb {E}}\;\partial _k G(X_j)|\) in terms of \(t:=j/n\) and \(\Vert G\Vert _0\), and some additional exponentially small error term. To this end, we define the Malliavin matrix \(\mathscr {M}\in \mathbb {R}^{d\times d}\) $$\begin{aligned} \mathscr {M}^{m,q}=\langle \mathcal {D}_{\cdot ,j}^{\cdot ,m},\mathcal {D}_{\cdot ,j}^{\cdot ,q}\rangle _H=\frac{1}{n}\sum _{(i,v)\in \mathcal {I}}\mathcal {D}_{i,j}^{v,m}\mathcal {D}_{i,j}^{v,q}, \end{aligned}$$ with \(m,q=1,\ldots ,d\). As we will momentarily see (see (5.21)), \(\mathscr {M}\) is invertible. Define $$\begin{aligned} Y=\sum _{m=1}^d(\mathcal {D}_{\cdot ,j}^{\cdot ,m})(\mathscr {M}^{-1})^{m,k}\in H. \end{aligned}$$ One then has by the chain rule that on \({\hat{\Omega }}\), \(\partial _k G(\mathcal {X}_j)= \langle \mathscr {D}G(X_j),Y\rangle _H\). Therefore, $$\begin{aligned} \;\;{\mathbb {E}}\;\partial _k G(\mathcal {X}_j)= & {} \;\;{\mathbb {E}}\;\langle \mathscr {D}G(X_j),Y\rangle _H+\;\;{\mathbb {E}}\;\partial _k G(\mathcal {X}_j)\mathbf {1}_{{\hat{\Omega ^c}}}-\;\;{\mathbb {E}}\;\langle \mathscr {D}G(\mathcal {X}_j),Y\rangle _H\mathbf {1}_{{\hat{\Omega ^c}}} \nonumber \\= & {} \;\;{\mathbb {E}}\;(G( X_j),\delta Y)+\;\;{\mathbb {E}}\;\partial _k G(\mathcal {X}_j)\mathbf {1}_{{\hat{\Omega ^c}}}-\;\;{\mathbb {E}}\;\langle \mathscr {D}G(\mathcal {X}_j),Y\rangle _H\mathbf {1}_{{\hat{\Omega ^c}}} \nonumber \\=: & {} \;\;{\mathbb {E}}\;(G( \mathcal {X}_j),\delta Y)+I_1+I_2. \end{aligned}$$ Recalling (5.3), one has $$\begin{aligned} \;\;{\mathbb {E}}\;|\delta Y|^2\leqslant \;\;{\mathbb {E}}\;\Vert Y\Vert ^2_H+\;\;{\mathbb {E}}\;\frac{1}{n^2}\sum _{(i,q),(r,m)\in \mathcal {I}}(\mathscr {D}^q_i Y^m_r)(\mathscr {D}^m_rY^q_i). \end{aligned}$$ Theorem 5.2 will then follow easily once we have the appropriate moment bounds of the objects above. Recall the notation \(t=j/n\). Assume the above notations and let \(\sigma \) satisfy (2.8). Then for any \(p>0\), one has the bounds $$\begin{aligned} \;\;{\mathbb {E}}\;\sup _{i=1,\ldots ,j}\Vert J_{i,j}(x)\Vert ^p+\;\;{\mathbb {E}}\;\sup _{1\leqslant i\leqslant j}\Vert J_{i,j}^{-1}(x)\Vert ^p\leqslant & {} N, \end{aligned}$$ $$\begin{aligned} \;\;{\mathbb {E}}\;\sup _{i=1,\ldots ,j}\Vert \mathcal {D}_{i,j}(x)\Vert ^p\leqslant & {} N, \end{aligned}$$ $$\begin{aligned} \;\;{\mathbb {E}}\;\Vert \mathscr {M}^{-1}(x)\Vert ^p\leqslant & {} Nt^{-p}, \end{aligned}$$ $$\begin{aligned} \sup _{i =1,\ldots , j}\;\;{\mathbb {E}}\;\sup _{r=1,\ldots ,j}\Vert \mathscr {D}_i Y_r(x)\Vert ^p\leqslant & {} N t^{-p}. \end{aligned}$$ for all \(x \in \mathbb {R}^d\), with some \(N=N(p,d,\lambda ,\Vert \sigma \Vert _{\mathcal {C}^2})\). As before, we omit the dependence on \(x\in \mathbb {R}^d\) in order to ease the notation. We first bound the moments of \(\sup _j\Vert J_j\Vert \). Recall that we have the recursion $$\begin{aligned} J_j= J_{j-1}(I+ \Gamma _{j/n}), \end{aligned}$$ where the matrix \(\Gamma _t=(\Gamma _t)_{q,k=1}^d\) is given by $$\begin{aligned} \Gamma ^{q,k}_t = \sum _{\ell =1}^d \partial _q \sigma ^{k \ell } ( \mathcal {X}_{n \kappa _n(t)}) \varrho ( W^\ell _t -W^\ell _{\kappa _n(t)}), \end{aligned}$$ By Itô's formula it follows that $$\begin{aligned} \varrho ( W^\ell _t -W^\ell _{\kappa _n(t)})= \int _{{\kappa _n(t)}}^t \varrho '(W^\ell _s-W^\ell _{\kappa _n(t)}) \, dW^\ell _s + \frac{1}{2}\int _{{\kappa _n(t)}}^t\varrho ''(W^\ell _s-W^\ell _{\kappa _n(t)}) \, ds. 
\end{aligned}$$ Consequently, for \(j=0, \ldots , n\) we have that \(J_j= Z_{j/n}\), where the matrix-valued process \(Z_t\) satisfies $$\begin{aligned} dZ_t = Z_{\kappa _n(t)}\mathcal {A}_t \, dt + \sum _{\ell =1}^d Z_{\kappa _n(t)} \mathcal {B}^{\ell }_t dW^\ell _t, \qquad Z_0= I, \end{aligned}$$ with matrices \(\mathcal {A}_s=(\mathcal {A}^{q, k}_s)_{q,k=1,\ldots ,d}\) and \(\mathcal {B}^\ell _s=(\mathcal {B}^{\ell ,q,k}_s)_{q,k=1,\ldots ,d}\) given by $$\begin{aligned} \mathcal {A}^{q,k}_s= & {} \frac{1}{2} \sum _{\ell =1}^d \partial _q \sigma ^{ k \ell } (\mathcal {X}_{n \kappa _n(s)})\varrho ''(W^\ell _s-W^\ell _{\kappa _n(s)}) \\ \mathcal {B}^{\ell ,q,k}_s= & {} \partial _q \sigma ^{k \ell } (\mathcal {X}_{n \kappa _n(s)})\varrho '(W^\ell _s-W^\ell _{\kappa _n(s)}). \end{aligned}$$ Notice that there exists a constant \(N=N (\Vert \sigma \Vert _{\mathcal {C}^1}, \Vert \varrho \Vert _{\mathcal {C}^2})\) such that almost surely, for all \((t, x) \in [0,1] \times \mathbb {R}^d\) $$\begin{aligned} \Vert \mathcal {A}_t\Vert + \sum _{\ell =1}^d\Vert \mathcal {B}^{\ell }_t\Vert \leqslant N. \end{aligned}$$ This bound combined with the fact that \(Z_t\) satisfies (5.14) implies the bound $$\begin{aligned} \;\;{\mathbb {E}}\;\sup _{t \leqslant 1} \Vert Z_t\Vert ^p \leqslant N \end{aligned}$$ for all \(p>0\). Hence, $$\begin{aligned} \;\;{\mathbb {E}}\;\sup _{j=1,..,n}\Vert J_j\Vert ^p \leqslant \;\;{\mathbb {E}}\;\sup _{t \leqslant 1} \Vert Z_t\Vert ^p \leqslant N. \end{aligned}$$ We now bound the moments of \(\sup _j \Vert J^{-1}_j\Vert \). By (5.12) we get $$\begin{aligned} J_j^{-1}=(I+ \Gamma _{j/n})^{-1} J_{j-1}^{-1}. \end{aligned}$$ Recall that for \(t \in [ (j-1)/n, j/n]\) $$\begin{aligned} \Gamma _t= \int _{(j-1)/n}^t \mathcal {A}_s \, ds + \sum _{\ell =1}^d \int _{(j-1)/n}^t \mathcal {B}^\ell _s \, dW^\ell _s, \end{aligned}$$ and that by the definition of \(\varrho \) and (5.13), for all \(t \in [0,T]\), the matrix \(I+\Gamma _t\) is invertible. Hence, by Itô's formula, we have for \(t \in [ (j-1)/n, j/n]\) $$\begin{aligned} (I+\Gamma _t)^{-1}= I +\int _{(j-1)/n}^t {\tilde{{\mathcal {A}}}}_s \, ds + \sum _{\ell =1}^d \int _{(j-1)/n}^t {\tilde{{\mathcal {B}}}}^\ell _s \, d W^\ell _s, \end{aligned}$$ where $$\begin{aligned} {\tilde{{\mathcal {A}}}}_s= & {} \sum _{\ell =1}^d (I+\Gamma _s)^{-1} \mathcal {B}_s^\ell (I+\Gamma _s)^{-1}\mathcal {B}_s^\ell (I+\Gamma _s)^{-1} -(I+\Gamma _s)^{-1} \mathcal {A}_s(I+\Gamma _s)^{-1}, \\ {\tilde{{\mathcal {B}}}}_s^\ell= & {} -(I+\Gamma _s)^{-1} \mathcal {B}^\ell _s(I+\Gamma _s)^{-1}. \end{aligned}$$ Moreover, by definition of \(\varrho \), almost surely, for all \((t,x) \in [0,T] \times \mathbb {R}^d\) one has $$\begin{aligned} \Vert {\tilde{{\mathcal {A} }}}_t\Vert +\sum _{\ell =1}^d \Vert {\tilde{{\mathcal {B}}}}^\ell _t \Vert \leqslant N. \end{aligned}$$ By (5.17) and (5.18), for \(j=1,...,n\) we have that \( J^{-1}_j= {\tilde{Z}}_{j/n}\), where the matrix-valued process \({\tilde{Z}}_t\) is defined by $$\begin{aligned} d{\tilde{Z}}_t= \tilde{\mathcal {A}}_t {\tilde{Z}}_{\kappa _n(t)} \, dt + \sum _{\ell =1}^d \tilde{\mathcal {B}}^\ell _t {\tilde{Z}}_{\kappa _n(t)} \, dW^\ell _t, \qquad {\tilde{Z}}_0 =I . \end{aligned}$$ By this and the bound (5.19) we have $$\begin{aligned} \;\;{\mathbb {E}}\;\sup _{t \leqslant 1} \Vert {\tilde{Z}}_t\Vert ^p \leqslant N \end{aligned}$$ for all \(p>0\).
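Indeed, a bound of this type is standard for linear equations with bounded coefficients: writing \(m(t):=\;\;{\mathbb {E}}\;\sup _{s \leqslant t} \Vert {\tilde{Z}}_s\Vert ^p\) for \(p\geqslant 2\), the almost sure bound (5.19) together with the Burkholder-Davis-Gundy inequality gives $$\begin{aligned} m(t)\leqslant N+N\int _0^t \;\;{\mathbb {E}}\;\Vert {\tilde{Z}}_{\kappa _n(s)}\Vert ^p \, ds\leqslant N+N\int _0^t m(s)\,ds,\qquad t\in [0,1], \end{aligned}$$ so that \(m(1)\leqslant N\) by Gronwall's lemma; the case \(p\in (0,2)\) then follows by Jensen's inequality.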
Consequently, $$\begin{aligned} \;\;{\mathbb {E}}\;\sup _{j=1,...,n} \Vert J^{-1}_j\Vert ^p \leqslant \;\;{\mathbb {E}}\;\sup _{t \leqslant 1} \Vert {\tilde{Z}}_t\Vert ^p \leqslant N. \end{aligned}$$ Finally, from (5.16) and (5.20) we obtain (5.8). The bound (5.9) then immediately follows from (5.8), the definition (5.5), and the boundedness of \(\sigma \). Next, we show (5.10). On the set of positive definite matrices we have that on one hand, matrix inversion is a convex mapping, and on the other hand, the function \(\Vert \cdot \Vert ^p\) is a convex increasing mapping for \(p\geqslant 1\). It is also an elementary fact that if \(B\succeq \lambda I\), then \(\Vert (ABA^*)^{-1}\Vert \leqslant \lambda ^{-1}\Vert (AA^*)^{-1}\Vert \). One then writes $$\begin{aligned} \Vert \mathscr {M}^{-1}\Vert ^p= & {} \Big (\frac{n}{j}\Big )^p\Big \Vert \Big (\frac{1}{j}\sum _{i=1}^j\big [J_{i,j}\sigma (\mathcal {X}_{i-1})\big ]\big [J_{i,j}\sigma (\mathcal {X}_{i-1})\big ]^*\Big )^{-1}\Big \Vert ^p \nonumber \\\leqslant & {} t^{-p}\frac{1}{j}\sum _{i=1}^j\Vert \big (\big [J_{i,j}\sigma (\mathcal {X}_{i-1})\big ]\big [J_{i,j}\sigma (\mathcal {X}_{i-1})\big ]^*\big )^{-1}\Vert ^p \nonumber \\\leqslant & {} \lambda ^{-p}t^{-p}\frac{1}{j}\sum _{i=1}^j\Vert J_{i,j}^{-1}\Vert ^{2p} \nonumber \\\leqslant & {} \lambda ^{-p}t^{-p}\sup _{i=1,\ldots ,j}\Vert J_{i,j}^{-1}\Vert ^{2p}. \end{aligned}$$ Therefore (5.10) follows from (5.8) We now move to the proof of (5.11). First of all, notice that the above argument yields $$\begin{aligned} \sup _{i = 1,...,n} \;\;{\mathbb {E}}\;\sup _{j=1,...,n} \Vert \mathscr {D}_i \mathcal {X}_j\Vert ^p \leqslant N. \end{aligned}$$ for all \(p>0\). Indeed, the proof of this is identical to the proof of (5.16) since \((\mathscr {D}_i \mathcal {X}_j)_{j \geqslant i}\) has the same dynamics as \((J_j)_{j\geqslant 0} \) and initial condition \(\mathscr {D}^k_i \mathcal {X}^m_i=\sigma ^{km} ( \mathcal {X}_{i-1}) \varrho '(\Delta W_{(i,m)})\) which is bounded. Recall that $$\begin{aligned} Y_r = \sum _{m=1}^d ( \mathcal {D}^{\cdot , m}_{r,j}) (\mathscr {M}^{-1})^{m,k}. \end{aligned}$$ By Leibniz's rule, for each \(i, r \in \{0,..,n\}\), \(\mathscr {D}_iY^r\) is a \(\mathbb {R}^d \otimes \mathbb {R}^d\)-valued random variable given by $$\begin{aligned} \mathscr {D}_iY_r= \sum _{m=1}^d ( \mathscr {D}_i \mathcal {D}^{\cdot , m}_{r,j}) (\mathscr {M}^{-1})^{m,k}+ \sum _{m=1}^d \mathcal {D}^{\cdot , m}_{r,j} \otimes \mathscr {D}_i (\mathscr {M}^{-1})^{m,k} \end{aligned}$$ We start with a bound for \(\sup _r \Vert \mathscr {D}_i \mathcal {D}_{r,j}\Vert \). By definition of \(\mathcal {D}_{i,j}\) we have that $$\begin{aligned} \mathscr {D}_i\mathcal {D}_{r,j} = (\mathscr {D}_iJ_j ) J^{-1}_r \sigma (\mathcal {X}_{r-1})+ J_j (\mathscr {D}_iJ^{-1}_r) \sigma (\mathcal {X}_{r-1})+ J_j J^{-1}_r (\mathscr {D}_i \sigma (\mathcal {X}_{r-1})),\nonumber \\ \end{aligned}$$ where for \(A \in (\mathbb {R}^d)^{\otimes 2}\), \(B \in (\mathbb {R}^d)^{\otimes 3}\), the product AB or BA is an element of \((\mathbb {R}^d)^{\otimes 3}\) that arises by considering B as a \(d\times d\) matrix whose entries are elements of \(\mathbb {R}^d\). We estimate the term \(\mathscr {D} _i J_j\). As before, we have that \(\mathscr {D}_i J_j = \mathscr {D}_i Z_{j/n}\), where Z is given by (5.14). 
We have that \(\mathscr {D}_i Z_t=0\) for \(t <i/n\) while for \(t \geqslant i/n\) the process \(\mathscr {D}_i Z_t=:\mathscr {Z}^i_t\) satisfies $$\begin{aligned} d\mathscr {Z}^i_t= & {} \left( \mathscr {Z}^i_{\kappa _n(t)} \mathcal {A}_t + Z_{\kappa _n(t)} \mathscr {D}_i\mathcal {A}_t \right) \, dt+ \sum _{\ell =1}^d \left( \mathscr {Z}^i_{\kappa _n(t)} \mathcal {B}^\ell _t + Z_{\kappa _n(t)} \mathscr {D}_i \mathcal {B}^\ell _t \right) dW^\ell _t \nonumber \\ \mathscr {Z}^i_{i/n}= & {} Z_{i/n}\sum _{\ell =1}^d \mathcal {B}^\ell _{i/n} \end{aligned}$$ By the chain rule and (5.22) it follows that for \(p>0\) there exists \(N=N(\Vert \sigma \Vert _{\mathcal {C}^2}, \Vert \varrho \Vert _{\mathcal {C}^3}, d,p)\) such that $$\begin{aligned} \sup _{i=1,...,n} \;\;{\mathbb {E}}\;\left( \sup _{t \leqslant 1} \Vert \mathscr {D}_i \mathcal {A}_t\Vert ^p + \sum _{\ell =1}^d\sup _{t \leqslant 1}\Vert \mathscr {D}_i \mathcal {B}^\ell _t \Vert ^p \right) \leqslant N \end{aligned}$$ This combined with (5.16) shows that for the 'free terms' of (5.25) we have $$\begin{aligned} \sup _{i=1,...,n} \;\;{\mathbb {E}}\;\left( \sup _{t \leqslant 1} \Vert Z_{\kappa _n(t)}\mathscr {D}_i \mathcal {A}_t\Vert ^p + \sum _{\ell =1}^d\sup _{t \leqslant 1}\Vert Z_{\kappa _n(t)} \mathscr {D}_i \mathcal {B}^\ell _t \Vert ^p \right) \leqslant N. \end{aligned}$$ This, along with (5.15) and (5.16), implies that $$\begin{aligned} \sup _{i=1,...,n} \;\;{\mathbb {E}}\;\sup _{j=1,...,n} \Vert \mathscr {D}_i J_j\Vert ^p \leqslant \sup _{i=1,...,n} \;\;{\mathbb {E}}\;\sup _{i/n \leqslant t \leqslant 1} \Vert \mathscr {Z}^i_t \Vert ^p \leqslant N. \end{aligned}$$ This in turn, combined with (5.20) and the boundedness of \(\sigma \), implies that $$\begin{aligned} \sup _{i=1,...,n} \;\;{\mathbb {E}}\;\sup _{r=1,...,n} \Vert (\mathscr {D}_iJ_j ) J^{-1}_r \sigma (\mathcal {X}_{r-1})\Vert ^p \leqslant N. \end{aligned}$$ Next, by the chain rule we have $$\begin{aligned} \Vert J_j (\mathscr {D}_iJ^{-1}_r) \sigma (\mathcal {X}_{r-1})\Vert \leqslant \Vert J_j \Vert \Vert J_r^{-1}\Vert ^{2}\Vert \mathscr {D}_iJ_r\Vert \Vert \sigma (\mathcal {X}_{r-1})\Vert . \end{aligned}$$ By (5.16), (5.20), (5.27), and the boundedness of \(\sigma \), we see that $$\begin{aligned} \sup _{i=1,...,n} \;\;{\mathbb {E}}\;\sup _{r=1,...,n}\Vert J_j (\mathscr {D}_iJ^{-1}_r) \sigma (\mathcal {X}_{r-1})\Vert ^p \leqslant N. \end{aligned}$$ Finally, from (5.16), (5.20), the boundedness of \(\nabla \sigma \), and (5.22) we get $$\begin{aligned} \sup _{i=1,...,n} \;\;{\mathbb {E}}\;\sup _{r=1,...,n}\Vert J_j J^{-1}_r (\mathscr {D}_i \sigma (\mathcal {X}_{r-1}))\Vert ^p \leqslant N. \end{aligned}$$ Recalling (5.24), we obtain $$\begin{aligned} \sup _{i=1,...,n} \;\;{\mathbb {E}}\;\sup _{r=1,...,n}\Vert \mathscr {D}_i\mathcal {D}_{r,j}\Vert ^p \leqslant N, \end{aligned}$$ which combined with (5.10) gives $$\begin{aligned} \sup _{i=1,...,n} \;\;{\mathbb {E}}\;\sup _{r=1,...,n}\Vert \sum _{m=1}^d ( \mathscr {D}_i \mathcal {D}^{\cdot , m}_{r,j}) (\mathscr {M}^{-1})^{m,k} \Vert ^p \leqslant N t^{-p}. \end{aligned}$$ We proceed by obtaining a similar bound for the second term at the right-hand side of (5.23). First, let us derive a bound for \(\mathscr {D}_i \mathscr {M}\).
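Note that, by the chain rule, $$\begin{aligned} \mathscr {D}_i (\mathscr {M}^{-1})=-\mathscr {M}^{-1}(\mathscr {D}_i \mathscr {M})\mathscr {M}^{-1}, \end{aligned}$$ so such a bound, combined with (5.10), will indeed control the second term on the right-hand side of (5.23).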
For each entry \(\mathscr {M}^{m,q}\) of the matrix \(\mathscr {M}\) we have $$\begin{aligned} \mathscr {D}_i \mathscr {M}^{m,q} = \frac{1}{n} \sum _{\ell =1}^n \sum _{v=1}^d \left( \mathcal {D}_{\ell ,j}^{v,q}\mathscr {D}_i \mathcal {D}_{\ell ,j}^{v,m} + \mathcal {D}_{\ell ,j}^{v,m} \mathscr {D}_i\mathcal {D}_{\ell ,j}^{v,q}\right) . \end{aligned}$$ Then, notice that on \(\hat{\Omega }\), for \(\ell >j\) we have \( \mathcal {D}_{\ell ,j}= \mathscr {D}_\ell \mathcal {X}_j=0\). Hence, by taking into account (5.9) and (5.28) we get $$\begin{aligned} \sup _{i=1,...,n} \big (\;\;{\mathbb {E}}\;\Vert \mathscr {D}_i \mathscr {M}^{m,q}\Vert ^p \big ) ^{1/p} \leqslant N\big ( \frac{j}{n}+ n (\mathbb {P}(\hat{\Omega }^c))^{1/p}\big )\leqslant N \big ( \frac{j}{n}+ n e^{-cn/p}\big ) \leqslant N \frac{j}{n}=Nt . \end{aligned}$$ Summation over m, q gives $$\begin{aligned} \sup _{i=1,...,n} \big (\;\;{\mathbb {E}}\;\Vert \mathscr {D}_i \mathscr {M}\Vert ^p \big ) ^{1/p} \leqslant N t . \end{aligned}$$ Therefore, we get $$\begin{aligned} \Vert \sum _{m=1}^d \mathcal {D}^{\cdot , m}_{r,j} \otimes \mathscr {D}_i (\mathscr {M}^{-1})^{m,k}\Vert \leqslant N \Vert \mathcal {D}_{r,j}\Vert \Vert \mathscr {M}^{-1}\Vert ^2 \Vert \mathscr {D}_i\mathscr {M}\Vert , \end{aligned}$$ which by virtue of (5.9), (5.10), and (5.30) gives $$\begin{aligned} \;\;{\mathbb {E}}\;\Vert \sum _{m=1}^d \mathcal {D}^{\cdot , m}_{r,j} \otimes \mathscr {D}_i (\mathscr {M}^{-1})^{m,k}\Vert ^p \leqslant N t^{-p}. \end{aligned}$$ This combined with (5.29), by virtue of (5.23), proves (5.11). This finishes the proof. \(\square \) Recalling that \(Y_i=0\) for \(i>j\), we can write, using (5.9) and (5.10), $$\begin{aligned} \;\;{\mathbb {E}}\;\Vert Y\Vert _H^2\leqslant \;\;{\mathbb {E}}\;\frac{1}{n}\sum _{i=1}^j(\sup _{i=1,\ldots ,j}\Vert \mathcal {D}_{i,j}\Vert \Vert \mathscr {M}^{-1}\Vert )^2\leqslant N(j/n)t^{-2}\leqslant Nt^{-1}. \end{aligned}$$ One also has $$\begin{aligned} |\;\;{\mathbb {E}}\;\frac{1}{n^2}\sum _{(i,q),(r,m)\in \mathcal {I}}(\mathscr {D}^q_i Y^m_r)(\mathscr {D}^m_rY^q_i)| \leqslant t^2 \;\;{\mathbb {E}}\;\sup _{i,r=1,\ldots j}\Vert \mathscr {D}_i Y_r\Vert ^2\leqslant N. \end{aligned}$$ Therefore, by (5.7), we have the following bound on the main (first) term on the right-hand side of (5.6) $$\begin{aligned} |\;\;{\mathbb {E}}\;(G(\mathcal {X}_j),\delta Y)|\leqslant \Vert G\Vert _{\mathcal {C}^0}(\;\;{\mathbb {E}}\;|\delta Y|^2)^{1/2}\leqslant N t^{-1/2}\Vert G\Vert _{\mathcal {C}^0}. \end{aligned}$$ As for the other two terms, Proposition 5.3 immediately yields $$\begin{aligned} |I_1|\leqslant N\Vert G\Vert _{\mathcal {C}^1}e^{-cn}, \end{aligned}$$ while for \(I_2\) we can write $$\begin{aligned} |I_2|\leqslant & {} Ne^{-cn}\Big [\;\;{\mathbb {E}}\;\Big (\frac{1}{n}\sum _{i=1}^j(\mathscr {D}_iG(\mathcal {X}_j),Y_i)\Big )^2 \Big ]^{1/2} \\\leqslant & {} N e^{-cn} t\frac{1}{j}\sum _{i=1}^j \big (\;\;{\mathbb {E}}\;\sup _{i=1,\ldots ,j}|\mathscr {D}_i G(\mathcal {X}_j)|^6\big )^{1/6} \big (\;\;{\mathbb {E}}\;\sup _{i=1,\ldots ,j}\Vert \mathcal {D}_{i,j}\Vert ^6\big )^{1/6} \big (\;\;{\mathbb {E}}\;\Vert \mathscr {M}^{-1}\Vert ^6\big )^{1/6} \\\leqslant & {} N\Vert G\Vert _{\mathcal {C}^1}e^{-cn}. 
\end{aligned}$$ Therefore, by (5.6), we obtain $$\begin{aligned} |\;\;{\mathbb {E}}\;\partial _k G(\mathcal {X}_j)|\leqslant N \Vert G\Vert _{\mathcal {C}^0}t^{-1/2} + N\Vert G\Vert _{\mathcal {C}^1}e^{-cn}, \end{aligned}$$ and since on \({\hat{\Omega }}\), one has \(\mathcal {X}_j={\bar{X}}^n_{j/n}={\bar{X}}^n_t\), the bound (5.2) follows. \(\square \) Let \(y\in \mathbb {R}^d\), \(\varepsilon _1\in (0,1/2)\), \(\alpha \in (0,1)\), \(p>0\). Suppose that \(\sigma \) satisfies (2.8) and that \({\bar{X}}^n:={\bar{X}}^n(y)\) is the solution of (5.1). Then for all \(f\in \mathcal {C}^\alpha \), \(0\leqslant s\leqslant t\leqslant 1\), \(n\in \mathbb {N}\), one has the bound $$\begin{aligned} \big \Vert \int _s^t (f({\bar{X}}_r^n)-f({\bar{X}}_{\kappa _n(r)}^n))\, dr\big \Vert _{L_p(\Omega )} \leqslant N\Vert f\Vert _{\mathcal {C}^\alpha } n^{-1/2+2 \varepsilon _1}|t-s|^{1/2+\varepsilon _1} , \end{aligned}$$ with some \(N=N(\alpha , p, d,\varepsilon _1,\lambda ,\Vert \sigma \Vert _{\mathcal {C}^2})\). It clearly suffices to prove the bound for \(p\geqslant 2\), and, as in [10], for \(f\in \mathcal {C}^\infty \). We put for \(0\leqslant s\leqslant t\leqslant T\) $$\begin{aligned} A_{s,t}:=\;\;{\mathbb {E}}\;^s \int _s^t (f({\bar{X}}_r^n)-f({\bar{X}}_{\kappa _n(r)}^n))\, dr. \end{aligned}$$ Then, clearly, for any \(0\leqslant s\leqslant u\leqslant t\leqslant T\) $$\begin{aligned} \delta A_{s,u,t}:&=A_{s,t}-A_{s,u}-A_{u,t}\\&=\;\;{\mathbb {E}}\;^s \int _u^t (f({\bar{X}}_r^n)-f({\bar{X}}_{\kappa _n(r)}^n))\, dr-\;\;{\mathbb {E}}\;^u \int _u^t(f({\bar{X}}_r^n)-f({\bar{X}}_{\kappa _n(r)}^n))\, dr. \end{aligned}$$ Let us check that all the conditions (3.8)-(3.9) of the stochastic sewing lemma are satisfied. Note that \(\;\;{\mathbb {E}}\;^s\delta A_{s,u,t}=0\), and so condition (3.9) trivially holds, with \(C_2=0\). As for (3.8), let \(s \in [k/n, (k+1)/n)\) for some \(k \in \mathbb {N}_0\). Suppose first that \(t \in [(k+4)/n, 1]\). We write $$\begin{aligned} |A_{s,t}|= | I_1+I_2|:= \Big |\Big (\int _s^{(k+4)/n} +\int _{(k+4)/n}^t\Big ) \;\;{\mathbb {E}}\;^s \big ( f({\bar{X}}^n_r)-f({\bar{X}}^n_{\kappa _n(r)})\big )\, dr\Big |. \end{aligned}$$ For \(I_2\) we write $$\begin{aligned} I_2 = \;\;{\mathbb {E}}\;^s \int _{(k+4)/n}^t \;\;{\mathbb {E}}\;^{(k+1)/n}\big (\;\;{\mathbb {E}}\;^{\kappa _n(r)} f({\bar{X}}^n_r)-f({\bar{X}}^n_{\kappa _n(r)})\big ) \, dr. \end{aligned}$$ Next, denote by \(p_{\Sigma }\) the density of a Gaussian vector in \(\mathbb {R}^d\) with covariance matrix \(\Sigma \) and let \(\mathcal {P}_{\Sigma } f =p_{\Sigma }* f\) (recall that for \(\theta \geqslant 0\), we denote \(p_\theta := p _{\theta I}\), where I is the \(d \times d \) identity matrix). With this notation, we have $$\begin{aligned} \;\;{\mathbb {E}}\;^{\kappa _n(r)} f\left( {\bar{X}}^n_{\kappa _n(r)}+\sigma ({\bar{X}}^n_{\kappa _n(r)}) (W_r-W_{\kappa _n(r)})\right) =\mathcal {P}_{\sigma \sigma ^{\intercal }({\bar{X}}^n_{\kappa _n(r)})(r-\kappa _n(r))} f ({\bar{X}}^n_{\kappa _n(r)}), \end{aligned}$$ so with $$\begin{aligned} g(x):=g^n_r(x):=f(x)-\mathcal {P}_{\sigma \sigma ^{\intercal } (x)(r-\kappa _n(r))}f(x) \end{aligned}$$ we have $$\begin{aligned} I_2=\;\;{\mathbb {E}}\;^s\int _{(k+4)/n}^t\;\;{\mathbb {E}}\;^{(k+1)/n}g^n_r({\bar{X}}^n_{\kappa _n(r)})\,dr. \end{aligned}$$ Moreover, notice that by (2.8) we have for a constant \(N=N(\Vert \sigma \Vert _{\mathcal {C}^1}, \alpha )\) $$\begin{aligned} \Vert g\Vert _{\mathcal {C}^{\alpha /2}} \leqslant N \Vert f\Vert _{\mathcal {C}^\alpha }. \end{aligned}$$ Let us use the shorthand \(\delta =r-\kappa _n(r)\leqslant n^{-1}\).
We can then write $$\begin{aligned} \mathcal {P}_\varepsilon g (x) =&\int _{\mathbb {R}^d}\int _{\mathbb {R}^d} p_\varepsilon (z) p_{\sigma \sigma ^{\intercal } (x-z)\delta }( y) \big (f(x-z)- f(x-y-z) \big ) \, dy \,dz \nonumber \\ =&\int _{\mathbb {R}^d}\int _{\mathbb {R}^d} p_\varepsilon (z) p_{\sigma \sigma ^{\intercal } (x-z)\delta }( y) \int _0^1 y_i \partial _{z_i}f(x-z-\theta y) \,d\theta dy \,dz \nonumber \\ =&\int _{\mathbb {R}^d}\int _{\mathbb {R}^d} \partial _{z_i}\big ( p_\varepsilon (z) p_{\sigma \sigma ^{\intercal } (x-z)\delta }( y) \big ) \int _0^1 y_i f(x-z-\theta y) \,d\theta dy \,dz. \end{aligned}$$ with summation over i implied. It is well known that $$\begin{aligned} | \partial _{z_i} p_\varepsilon (z)| \leqslant N |z|\varepsilon ^{-1} p_\varepsilon (z). \end{aligned}$$ Furthermore, with the notation \(\Sigma (z):= \sigma \sigma ^{\intercal } (x-z) \), we have $$\begin{aligned} |\partial _{z_i} p_{ \Sigma (z) \delta }( y)|= & {} \Big | \frac{ \partial _{z_i} ( y^{\intercal } \Sigma ^{-1}(z) y ) }{2\delta } + \frac{\partial _{z_i} \det \Sigma (z) }{ 2 \det \Sigma (z)} \Big | p_{\Sigma (z)\delta }( y) \nonumber \\\leqslant & {} N (\delta ^{-1}|y|^2+1) p_{\Sigma (z)\delta }( y), \end{aligned}$$ where for the last inequality we have used (2.8). Therefore, by (6.4), (6.5), and (6.6) we see that $$\begin{aligned} \Vert \mathcal {P}_\varepsilon g\Vert _{\mathcal {C}^0}&\leqslant N\Vert f\Vert _{\mathcal {C}^0} \int _{\mathbb {R}^d}\int _{\mathbb {R}^d}\Big (\varepsilon ^{-1}|z|+\delta ^{-1}|y|^2+1\Big ) \Big ( |y| p_\varepsilon (z) p_{\sigma \sigma ^{\intercal } (x-z)\delta }( y)\Big )\,dy\,dz \\&\leqslant N\Vert f\Vert _{\mathcal {C}^0}(\varepsilon ^{-1/2}\delta ^{1/2}+\delta ^{1/2}) \leqslant N\Vert f\Vert _{\mathcal {C}^0}\varepsilon ^{-1/2}n^{-1/2}. \end{aligned}$$ One also has the trivial estimate \(\Vert \mathcal {P}_\varepsilon g\Vert _{\mathcal {C}^0}\leqslant 2 \Vert f\Vert _{\mathcal {C}^0}\), and combining these two bounds yields $$\begin{aligned} \Vert g\Vert _{\mathcal {C}^\beta }\leqslant N\Vert f\Vert _{\mathcal {C}^0} n^{\beta /2} \end{aligned}$$ for all \(\beta \in [-1,0)\). Note that the restriction of \({\bar{X}}^n_t(\cdot )\) to the gridpoints \(t=0,1/n,\ldots ,1\) is a Markov process with state space \(\mathbb {R}^d\). Therefore we can write $$\begin{aligned} |\;\;{\mathbb {E}}\;^{(k+1)/n}g\big ({\bar{X}}^n_{\kappa _n(r)}(y)\big )|&=|\;\;{\mathbb {E}}\;g\big ({\bar{X}}^n_{\kappa _n(r)-(k+1)/n}(x)\big )|\Big |_{x={\bar{X}}^n_{(k+1)/n}(y)} \nonumber \\&\leqslant \sup _{x\in \mathbb {R}^d} |\;\;{\mathbb {E}}\;g\big ({\bar{X}}^n_{\kappa _n(r)-(k+1)/n}(x)\big )|. \end{aligned}$$ Since \(g \in \mathcal {C}^{\alpha /2}\) we have that \((I-\Delta )u= g\) where \(u \in \mathcal {C}^{2+(\alpha /2)}\) and $$\begin{aligned} \Vert u\Vert _{\mathcal {C}^{2+(\alpha /2)}} \leqslant N \Vert g\Vert _{\mathcal {C}^{\alpha /2}}, \qquad \Vert u\Vert _{\mathcal {C}^{{1+2\varepsilon _1}}} \leqslant N \Vert g\Vert _{\mathcal {C}^{-1+2\varepsilon _1}}.
\end{aligned}$$ Hence, by combining (6.8), (5.2), (6.9), (6.7), and (6.3), we get $$\begin{aligned} |\;\;{\mathbb {E}}\;^{(k+1)/n}g\big ({\bar{X}}^n_{\kappa _n(r)}(y)\big )|\leqslant & {} \sup _{x\in \mathbb {R}^d} |\;\;{\mathbb {E}}\;(u-\Delta u) \big ({\bar{X}}^n_{\kappa _n(r)-(k+1)/n}(x)\big )| \\\leqslant & {} N \Vert u \Vert _{\mathcal {C}^1} |\kappa _n(r)-(k+1)/n|^{-1/2}+N \Vert u \Vert _{\mathcal {C}^2} e^{-cn} \\\leqslant & {} N \Vert u \Vert _{\mathcal {C}^{1+2\varepsilon _1}} |\kappa _n(r)-(k+1)/n|^{-1/2}+N \Vert u \Vert _{\mathcal {C}^2} e^{-cn} \\\leqslant & {} N \Vert g\Vert _{\mathcal {C}^{-1+2\varepsilon _1}} |\kappa _n(r)-(k+1)/n|^{-1/2}+N \Vert g \Vert _{\mathcal {C}^{\alpha /2}} e^{-cn} \\\leqslant & {} N \Vert f\Vert _{\mathcal {C}^\alpha } n^{-1/2+\varepsilon _1}|\kappa _n(r)-(k+1)/n|^{-1/2}. \end{aligned}$$ Putting this back into (6.2) one obtains $$\begin{aligned} \Vert I_2\Vert _{L_p(\Omega )}\leqslant & {} N\Vert f\Vert _{\mathcal {C}^\alpha }n^{-1/2+\varepsilon _1}\int _{(k+4)/n}^t|\kappa _n(r)-(k+1)/n|^{-1/2}\,dr \\\leqslant & {} N\Vert f\Vert _{\mathcal {C}^\alpha }|t-s|^{1/2}n^{-1/2+\varepsilon _1} \\\leqslant & {} N\Vert f\Vert _{\mathcal {C}^\alpha }|t-s|^{1/2+\varepsilon _1}n^{-1/2+2\varepsilon _1}, \end{aligned}$$ where we have used that \(n^{-1} \leqslant |t-s|\). The bound for \(I_1\) is straightforward: $$\begin{aligned} \Vert I_1\Vert _{L_p(\Omega )}\leqslant & {} \int _s^{(k+4)/n} \Vert f({\bar{X}}^n_r)-f({\bar{X}}^n_{\kappa _n(r)}) \Vert _{L_p(\Omega )} \, dr \\\leqslant & {} N \Vert f\Vert _{\mathcal {C}^0}n^{-1} \leqslant N \Vert f\Vert _{\mathcal {C}^0} n^{-1/2+\varepsilon _1}|t-s|^{1/2+\varepsilon _1}. \end{aligned}$$ Combining the bounds on \(I_1\) and \(I_2\), we obtain $$\begin{aligned} \Vert A_{s,t}\Vert _{L_p(\Omega )}\leqslant N \Vert f\Vert _{\mathcal {C}^\alpha } n^{-1/2+2\varepsilon _1}|t-s|^{1/2+\varepsilon _1}. \end{aligned}$$ It remains to show the same bound for \(t \in (s, (k+4)/n]\). Similarly to the above we write $$\begin{aligned} \Vert A_{s,t}\Vert _{L_p(\Omega )}\leqslant & {} \int _s^t \Vert f({\bar{X}}^n_r)-f({\bar{X}}^n_{\kappa _n(r)}) \Vert _{L_p(\Omega )} \, dr \\\leqslant & {} N \Vert f\Vert _{\mathcal {C}^0} |t-s| \leqslant N \Vert f\Vert _{\mathcal {C}^0} n^{-1/2+\varepsilon _1}|t-s|^{1/2+\varepsilon _1}, \end{aligned}$$ using that \(|t-s|\leqslant 4 n^{-1}\) and \(\varepsilon _1<1/2\). Thus, (3.8) holds with \(C_1=N \Vert f\Vert _{\mathcal {C}^\alpha } n^{-1/2+2\varepsilon _1}\). From here we conclude the bound (6.1) exactly as in Lemma 4.1. \(\square \) Let \(\alpha \in [0,1]\), take \(\varepsilon _1\in (0,1/2)\). Let \(b\in \mathcal {C}^0\), \(\sigma \) satisfy (2.8), and \(X^n\) be the solution of (1.7). Then for all \(f\in \mathcal {C}^\alpha \), \(0\leqslant s\leqslant t\leqslant 1\), \(n\in \mathbb {N}\), and \(p>0\), one has the bound $$\begin{aligned} \big \Vert \int _s^t (f(X_r^n)-f(X_{\kappa _n(r)}^n))\, dr\big \Vert _{L_p(\Omega )} \leqslant N\Vert f\Vert _{\mathcal {C}^\alpha } n^{-1/2+2\varepsilon _1}|t-s|^{1/2+\varepsilon _1} \end{aligned}$$ with some \(N=N(\Vert b\Vert _{\mathcal {C}^0},p, d,\alpha ,\varepsilon _1, \lambda ,\Vert \sigma \Vert _{\mathcal {C}^2})\). Let us set $$\begin{aligned} \rho = \exp \left( -\int _0^1 (\sigma ^{-1}b)(X_{\kappa _n(r)}^n) \, dB_r - \frac{1}{2}\int _0^1 \big |(\sigma ^{-1}b)(X_{\kappa _n(r)}^n)\big |^2 \, dr \right) \end{aligned}$$ and define the measure \({\tilde{\mathbb {P}}}\) by \(d {\tilde{\mathbb {P}}} = \rho d \mathbb {P}\).
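To see how the density \(\rho \) removes the drift, set \({{\tilde{B}}}_t:=B_t+\int _0^t (\sigma ^{-1}b)(X_{\kappa _n(r)}^n) \, dr\); then the Euler–Maruyama equation (1.7) for \(X^n\) takes the driftless form $$\begin{aligned} dX^n_t=\sigma (X^n_{\kappa _n(t)})\big [(\sigma ^{-1}b)(X^n_{\kappa _n(t)})\,dt+dB_t\big ]=\sigma (X^n_{\kappa _n(t)})\,d{\tilde{B}}_t. \end{aligned}$$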
By Girsanov's theorem, \(X^n\) solves (5.1) with a \({\tilde{\mathbb {P}}}\)-Wiener process \({{\tilde{B}}}\) in place of B. Since Lemma 6.1 only depends on the distribution of \({\bar{X}}^n\), we can apply it to \(X^n\), to bound the desired moments with respect to the measure \({\tilde{\mathbb {P}}}\). Going back to the measure \(\mathbb {P}\) can then be done precisely as in [10]: the only property needed is that \(\rho \) has finite moments of any order, which follows easily from the boundedness of b and (2.8). \(\square \) The replacement for the heat kernel bounds from Proposition 3.7 is the following estimate on the transition kernel \({\bar{\mathcal {P}}}\) of (1.6). Similarly to before, we denote \({\bar{\mathcal {P}}}_t f(x)=\;\;{\mathbb {E}}\;f(X_t(x))\), where \(X_t(x)\) is the solution of (1.6) with initial condition \(X_0(x)=x\). The following bound then follows from [16, Theorem 9.4.2]. Assume \(b\in \mathcal {C}^\alpha \), \(\alpha >0\) and \(f\in \mathcal {C}^{\alpha '}\), \(\alpha '\in [0,1]\). Then for all \(0< t\leqslant 1\), \(x,y\in \mathbb {R}^d\) one has the bound $$\begin{aligned} |{\bar{\mathcal {P}}}_tf(x)-{\bar{\mathcal {P}}}_tf(y)|\leqslant N\Vert f\Vert _{\mathcal {C}^{\alpha '}}|x-y| t^{-(1-\alpha ')/2} \end{aligned}$$ with some \(N=N(d,\alpha ,\lambda ,\Vert b\Vert _{\mathcal {C}^\alpha },\Vert \sigma \Vert _{\mathcal {C}^1})\). Let \(\alpha \in (0,1]\) and \(\tau \in (0,1]\) satisfy $$\begin{aligned} \tau +\alpha /2-1/2>0. \end{aligned}$$ Let \(b\in \mathcal {C}^\alpha \), \(\sigma \) satisfy (2.8), and X be the solution of (1.6). Let \(\varphi \) be an adapted process. Then for all sufficiently small \(\varepsilon _3,\varepsilon _4>0\), for all \(f\in \mathcal {C}^\alpha \), \(0\leqslant s\leqslant t\leqslant 1\), and \(p>0\), one has the bound $$\begin{aligned}&\big \Vert \int _s^t f(X_r) -f(X_r+\varphi _{r}) \,dr\big \Vert _{L_p(\Omega )} \leqslant N |t-s|^{1+\varepsilon _3} [] \varphi []_{\mathscr {C}^\tau _p,[s,t]}\nonumber \\&\quad + N|t-s|^{1/2+\varepsilon _4}[] \varphi []_{\mathscr {C}^0_p,[s,t]} \end{aligned}$$ with some \(N=N(p,d,\alpha ,\tau ,\lambda ,\Vert \sigma \Vert _{\mathcal {C}^1})\). Set, for \(s\leqslant s'\leqslant t'\leqslant t\), $$\begin{aligned} A_{s',t'}=\;\;{\mathbb {E}}\;^{s'}\int _{s'}^{t'} f(X_r)-f(X_r+\varphi _{s'})\,dr. \end{aligned}$$ Let us check the conditions of the stochastic sewing lemma. We have $$\begin{aligned} \delta A_{s',u,t'}=\;\;{\mathbb {E}}\;^{s'} \int _{u}^{t'} (f(X_r)-f(X_r+\varphi _{s'}))\, dr-\;\;{\mathbb {E}}\;^u \int _u^{t'}(f(X_r)-f(X_r+\varphi _u))\, dr, \end{aligned}$$ so \(\;\;{\mathbb {E}}\;^{s'}\delta A_{s',u,t'}=\;\;{\mathbb {E}}\;^{s'}\hat{\delta }A_{s',u,t'}\), with $$\begin{aligned} {\hat{\delta }} A_{s',u,t'}= & {} \;\;{\mathbb {E}}\;^u\int _u^{t'}\big (f(X_r)-f(X_r+\varphi _{s'})\big )-\big (f(X_r)-f(X_r+\varphi _u)\big )\,dr \\= & {} \int _u^{t'} {\bar{\mathcal {P}}}_{r-u}f(X_u+\varphi _{s'})-{\bar{\mathcal {P}}}_{r-u}f(X_u+\varphi _u)\,dr. \end{aligned}$$ Invoking (6.11), we can write $$\begin{aligned} |{\hat{\delta }} A_{s',u,t'}|&\leqslant N \int _{u}^{t'}|\varphi _{s'}-\varphi _u||r-u|^{-(1-\alpha )/2}\,dr.
\end{aligned}$$ Hence, using also Jensen's inequality, $$\begin{aligned} \Vert \;\;{\mathbb {E}}\;^{s'}\delta A_{s',u,t'}\Vert _{L_p(\Omega )} \leqslant \Vert {\hat{\delta }} A_{s',u,t'}\Vert _{L_p(\Omega )}&\leqslant N[] \varphi []_{\mathscr {C}^\tau _p,[s,t]}|t'-s'|^{1+\tau -(1-\alpha )/2} \end{aligned}$$ The condition (6.12) implies that for some \(\varepsilon _3>0\), one has $$\begin{aligned} \Vert \;\;{\mathbb {E}}\;^{s'}\delta A_{s',u,t'}\Vert _{L_p(\Omega )} \leqslant N |t'-s'|^{1+\varepsilon _3} [] \varphi []_{\mathscr {C}^\tau _p,[s,t]}. \end{aligned}$$ Therefore (3.9) is satisfied with \(C_2=N[] \varphi []_{\mathscr {C}^\tau _p,[s,t]}\). Next, to bound \(\Vert A_{s',t'}\Vert _{L_p(\Omega )}\), we write $$\begin{aligned} |\;\;{\mathbb {E}}\;^s f(X_r)- \;\;{\mathbb {E}}\;^s f(X_r+\varphi _{s'})|= & {} |{\bar{\mathcal {P}}}_{r-s'}f(X_{s'})-{\bar{\mathcal {P}}}_{r-s'}f(X_{s'}+\varphi _{s'})| \\\leqslant & {} N |\varphi _{s'}||r-s'|^{-(1-\alpha )/2}. \end{aligned}$$ So after integration with respect to r and by Jensen's inequality, we get the bound, for any sufficiently small \(\varepsilon _4>0\), $$\begin{aligned} \Vert A_{s',t'}\Vert _{L_p(\Omega )}\leqslant N|t'-s'|^{1/2+\varepsilon _4}[] \varphi []_{\mathscr {C}^0_p,[s,t]}. \end{aligned}$$ Therefore (3.8) is satisfied with \(C_1=N[] \varphi []_{\mathscr {C}^0_p,[s,t]}\), and we can conclude the bound (6.1) as usual. \(\square \) First let us recall the following simple fact: if g is a predictable process, then by the Burkholder-Gundy-Davis and Hölder inequalities one has $$\begin{aligned} \;\;{\mathbb {E}}\;\big |\int _s^t g_r\,dB_r\big |^p\leqslant N\;\;{\mathbb {E}}\;\int _s^t|g_r|^p\,dr|t-s|^{(p-2)/2}\end{aligned}$$ with \(N=N(p)\). This in particular implies $$\begin{aligned}{}[] g []_{\mathscr {C}^{1/2-\varepsilon }_p,[s,t]}\leqslant N \Vert g\Vert _{L_p(\Omega \times [s,t])}. \end{aligned}$$ whenever \(p\geqslant 1/\varepsilon \). Without the loss of generality we will assume that p is sufficiently large and \(\tau \) is sufficiently close to 1/2. Let us rewrite the equation for \(X^n\) as $$\begin{aligned} dX^n_t=b(X^n_{\kappa _n(t)})\,dt+\big [\sigma (X_t)+(\sigma (X^n_t)-\sigma (X_t))+R^n_r\big ]\,dB_t, \end{aligned}$$ where \(R^n_t=\sigma (X^n_{\kappa _n(t)})-\sigma (X^n_t)\) is an adapted process such that one has $$\begin{aligned} \Vert R^n_t\Vert _{L_p(\Omega )}\leqslant N n^{-1/2} \end{aligned}$$ for all \(t\in [0,1]\). Let us denote $$\begin{aligned} -\varphi ^n_t= & {} x_0-x^n_0+\int _0^t b(X_r)\,dr-\int _0^t b(X_{\kappa _n(r)}^n)\,dr, \\ \mathcal {Q}^n_t= & {} \int _0^t\sigma (X^n_r)-\sigma (X_r)\,dB_r, \\ \mathcal {R}^n_t= & {} \int _0^tR^n_r\,dB_r. \end{aligned}$$ Take some \(0\leqslant S\leqslant T\leqslant 1\). Choose \(\varepsilon _1\in (0,\varepsilon /2)\) so that \((1/2-2\varepsilon _1)\geqslant 1/2-\varepsilon \). Then, taking into account (6.10), for any \(S\leqslant s< t\leqslant T\), we have $$\begin{aligned} \Vert \varphi ^n_t-\varphi ^n_s\Vert _{L_p(\Omega )}= & {} \big \Vert \int _s^t (b(X_r)-b(X^n_{\kappa _n(r)}))\,dr\big \Vert _{L_p(\Omega )}\nonumber \\\leqslant & {} \big \Vert \int _s^t (b(X_r)-b(X^n_r))\,dr\big \Vert _{L_p(\Omega )}+N|t-s|^{1/2+\varepsilon _1} n^{-1/2+\varepsilon }.\nonumber \\ \end{aligned}$$ We wish to apply Lemma 6.4, with \(\varphi =\varphi ^n+\mathcal {Q}^n+\mathcal {R}^n\). It is clear that for sufficiently small \(\varepsilon _2>0\), \(\tau =1/2-\varepsilon _2\) satisfies (6.12). 
Therefore, $$\begin{aligned}&\big \Vert \int _s^t (b(X_r)-b(X^n_r))\,dr\big \Vert _{L_p(\Omega )} =\big \Vert \int _s^t (b(X_r)-b(X_r+\varphi _r))\,dr\big \Vert _{L_p(\Omega )} \\&\quad \leqslant N|t-s|^{1/2+\varepsilon _4\wedge (1/2+\varepsilon _3)}\big ([] \varphi ^n []_{\mathscr {C}^\tau _p,[s,t]} +[] \mathcal {Q}^n []_{\mathscr {C}^\tau _p,[s,t]} +[] \mathcal {R}^n []_{\mathscr {C}^\tau _p,[s,t]}\big ) \end{aligned}$$ By (6.14), for sufficiently large p, we have $$\begin{aligned}{}[] \mathcal {Q}^n []_{\mathscr {C}^\tau _p,[s,t]}\leqslant & {} N\Vert X-X^n\Vert _{L_p(\Omega \times [0,T])}, \\ [] \mathcal {R}^n []_{\mathscr {C}^\tau _p,[s,t]}\leqslant & {} Nn^{-1/2}. \end{aligned}$$ Putting these in the above expression, and using \(\tau <1/2\) repeatedly, one gets $$\begin{aligned}&\big \Vert \int _s^t (b(X_{r})-b(X^n_{r}))\,dr\big \Vert _{L_p(\Omega )} \\&\quad \leqslant N |t-s|^{\tau }|T-S|^{\varepsilon _5} \big ([] \varphi ^n []_{\mathscr {C}^\tau _p,[S,T]}+\Vert X-X^n\Vert _{L_p(\Omega \times [0,T])}+n^{-1/2}\big ) \end{aligned}$$ with some \(\varepsilon _5>0\). Combining with (6.15), dividing by \(|t-s|^\tau \) and taking supremum over \(s<t\in [S,T]\), we get $$\begin{aligned}{}[] \varphi ^n []_{\mathscr {C}^\tau _p,[S,T]}&\leqslant N\Vert \varphi ^n_S\Vert _{L_p(\Omega )}+|T-S|^{\varepsilon _5}[] \varphi ^n []_{\mathscr {C}^\tau _p,[S,T]} \nonumber \\&\qquad +N\Vert X-X^n\Vert _{L_p(\Omega \times [0,T])}+Nn^{-1/2+\varepsilon }. \end{aligned}$$ Fix an \(m\in \mathbb {N}\) (not depending on n) such that \(Nm^{-\varepsilon _5}\leqslant 1/2\). Whenever \(|S-T|\leqslant m^{-1}\), the second term on the right-hand side of (6.16) can be therefore discarded, and so one in particular gets $$\begin{aligned}{}[] \varphi ^n []_{\mathscr {C}^\tau _p,[S,T]} \leqslant N\Vert \varphi ^n_S\Vert _{L_p(\Omega )}+N\Vert X-X^n\Vert _{L_p(\Omega \times [0,T])}+Nn^{-1/2+\varepsilon }, \end{aligned}$$ and thus also $$\begin{aligned} \Vert \varphi ^n_{T}\Vert _{L_p(\Omega )} \leqslant N\Vert \varphi ^n_S\Vert _{L_p(\Omega )}+N\Vert X-X^n\Vert _{L_p(\Omega \times [0,T])}+Nn^{-1/2+\varepsilon }. \end{aligned}$$ Iterating this inequality at most m times, one therefore gets $$\begin{aligned} \Vert \varphi ^n_T\Vert _{L_p(\Omega )} \leqslant N\Vert \varphi ^n_0\Vert _{L_p(\Omega )}+N\Vert X-X^n\Vert _{L_p(\Omega \times [0,T])}+Nn^{-1/2+\varepsilon }. \end{aligned}$$ We can then write, invoking again the usual estimates for the stochastic integrals \(\mathcal {Q}^n\), \(\mathcal {R}^n\) $$\begin{aligned} \sup _{t\in [0,T]}\big \Vert X_t-X_t^n\big \Vert _{L_p(\Omega )}^p\leqslant & {} N\sup _{t\in [0,T]}\big \Vert \varphi ^n_t\big \Vert _{L_p(\Omega )}^p \\&\quad +N\sup _{t\in [0,T]}\big \Vert \mathcal {Q}^n_t\big \Vert _{L_p(\Omega )}^p +N\sup _{t\in [0,T]}\big \Vert \mathcal {R}^n_t\big \Vert _{L_p(\Omega )}^p \\\leqslant & {} N\Vert \varphi ^n_0\Vert _{L_p(\Omega )}^p+N\int _0^T\Vert X_t-X^n_t\Vert _{L_p(\Omega )}^p\,dt+Nn^{-p(1/2-\varepsilon )}. \end{aligned}$$ Gronwall's lemma then yields $$\begin{aligned} \sup _{t\in [0,T]}\big \Vert X_t-X_t^n\big \Vert _{L_p(\Omega )}\leqslant N\Vert \varphi ^n_0\Vert _{L_p(\Omega )}+Nn^{-1/2+\varepsilon }. \end{aligned}$$ Putting (6.17)–(6.18)–(6.19) together, we obtain $$\begin{aligned}{}[] \varphi ^n []_{\mathscr {C}^\tau _p,[0,1]} \leqslant N\Vert \varphi ^n_0\Vert _{L_p(\Omega )}+Nn^{-1/2+\varepsilon }. 
\end{aligned}$$ Therefore, recalling (6.14) again, $$\begin{aligned}{}[] X-X^n []_{\mathscr {C}^\tau _p,[0,1]}\leqslant & {} [] \varphi ^n []_{\mathscr {C}^\tau _p,[0,1]} +[] \mathcal {Q}^n []_{\mathscr {C}^\tau _p,[0,1]} +[] \mathcal {R}^n []_{\mathscr {C}^\tau _p,[0,1]} \\\leqslant & {} N\Vert \varphi ^n_0\Vert _{L_p(\Omega )}+Nn^{-1/2+\varepsilon }+\sup _{t\in [0,1]}\big \Vert X_t-X_t^n\big \Vert _{L_p(\Omega )} \\\leqslant & {} N\Vert \varphi ^n_0\Vert _{L_p(\Omega )}+Nn^{-1/2+\varepsilon }, \end{aligned}$$ as desired. \(\square \) Altmeyer, R.: Estimating occupation time functionals. ArXiv e-prints arXiv: 1706.03418 Bao, J., Huang, X., Yuan, C.: Convergence rate of Euler–Maruyama scheme for SDEs with Hölder–Dini continuous drifts. J. Theo. Probob. (2018). https://doi.org/10.1007/s10959-018-0854-9 Article MATH Google Scholar Butkovsky, O., Mytnik, L.: Regularization by noise and flows of solutions for a stochastic heat equation. Ann. Probab. 47(1), 165–212 (2019). https://doi.org/10.1214/18-AOP1259 Article MathSciNet MATH Google Scholar Baños, D., Nilssen, T., Proske, F.: Strong existence and higher order Fréchet differentiability of stochastic flows of fractional Brownian motion driven SDE's with singular drift. ArXiv e-prints arXiv: 1511.02717 Bally, V., Talay, D.: The law of the euler scheme for stochastic differential equations. Probab. Theory Relat. Fields 104(1), 43–60 (1996). https://doi.org/10.1007/BF01303802 Bally, V., Talay, D.: The law of the euler scheme for stochastic differential equations: Ii: convergence rate of the density. Monte Carlo Methods Appl. 2(2), 93–128 (1996). https://doi.org/10.1515/mcma.1996.2.2.93 Barlow, M.T., Yor, M.: Semimartingale inequalities via the Garsia-Rodemich-Rumsey lemma, and applications to local times. J. Funct. Anal. 49(2), 198–229 (1982). https://doi.org/10.1016/0022-1236(82)90080-5 Catellier, R., Gubinelli, M.: Averaging along irregular curves and regularisation of odes. Stoch. Process. Appl. 126(8), 2323–2366 (2016). https://doi.org/10.1016/j.spa.2016.02.002 Davie, A.M.: Uniqueness of solutions of stochastic differential equations. Int. Math. Res. Not. IMRN 26, 24 (2007). https://doi.org/10.1093/imrn/rnm124 Dareiotis, K., Gerencsér, M.: On the regularisation of the noise for the Euler-Maruyama scheme with irregular drift. Electron. J. Probab. 25, 1–18 (2020). https://doi.org/10.1214/20-EJP479 De Angelis, T., Germain, M., Issoglio, E.:. A numerical scheme for stochastic differential equations with distributional drift. ArXiv e-prints arXiv: 1906.11026 (2019) Decreusefond, L., Üstünel, A.S.: Stochastic analysis of the fractional Brownian motion. Potential Anal. 10(2), 177–214 (1999). https://doi.org/10.1023/A:1008634027843 Faure, O.: Simulation du Mouvement Brownien et des Diffusions. Ph.D. thesis, Ecole National des Ponts et Chausses (1992) Feyel, D., de La Pradelle, A.: Curvilinear integrals along enriched paths. Electron. J. Probab. 11(34), 860–892 (2006). https://doi.org/10.1214/EJP.v11-356 Friz, P., Riedel, S.: Convergence rates for the full gaussian rough paths. Ann. Inst. H. Poincaré Probab. Stat. 50(1), 154–194 (2014). https://doi.org/10.1214/12-AIHP507 Friedman, A.: Partial differential equations of parabolic type. R.E. Krieger Pub. Co. (1983) Gubinelli, M., Imkeller, P., Perkowski, N.: Paracontrolled distributions and singular PDEs. Forum Math. Pi 3, e6 (2015). https://doi.org/10.1017/fmp.2015.2 Gyöngy, I., Krylov, N.: Existence of strong solutions for Itô's stochastic equations via approximations. Probab. Theory Relat. 
Fields 105(2), 143–158 (1996). https://doi.org/10.1007/BF01203833 Gubinelli, M.: Controlling rough paths. J. Funct. Anal. 216(1), 86–140 (2004). https://doi.org/10.1016/j.jfa.2004.01.002 Gyöngy, I.: A Note on Euler's Approximations. Potential Anal. 8(3), 205–216 (1998). https://doi.org/10.1023/A:1016557804966 Hairer, M.: Advanced stochastic analysis, 2016. http://hairer.org/notes/Malliavin.pdf. Lecture notes Hu, Y., Liu, Y., Nualart, D.: Rate of convergence and asymptotic error distribution of euler approximation schemes for fractional diffusions. Ann. Appl. Probab. 26(2), 1147–1207 (2016). https://doi.org/10.1214/15-AAP1114 Jakubowski, A.: An almost sure approximation for the predictable process in the Doob-Meyer decomposition theorem. In: Séminaire de Probabilités XXXVIII, vol. 1857 of Lecture Notes in Math., pp. 158–164. Springer, Berlin (2005). https://doi.org/10.1007/978-3-540-31449-3_11 Jacod, J., Shiryaev, A. N.: Limit theorems for stochastic processes, vol. 288 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, second ed. (2003). https://doi.org/10.1007/978-3-662-05265-5 Kohatsu-Higa, A., Makhlouf, A., Ngo, H.: Approximations of non-smooth integral type functionals of one dimensional diffusion processes. Stoch. Process. Appl. 124(5), 1881–1909 (2014). https://doi.org/10.1016/j.spa.2014.01.003 Kunita, H.: Stochastic flows and stochastic differential equations, vol. 24 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1997. Reprint of the 1990 original Lê, K.: A stochastic sewing lemma and applications. Electron. J. Probab. 25, 55 (2020). https://doi.org/10.1214/20-EJP442 Leobacher, G., Szölgyenyi, M.: Convergence of the Euler-Maruyama method for multidimensional SDEs with discontinuous drift and degenerate diffusion coefficient. Numerische Mathematik 138(1), 219–239 (2018). https://doi.org/10.1007/s00211-017-0903-9 Mikulevičius, R., Xu, F.: On the rate of convergence of strong Euler approximation for SDEs driven by Levy processes. Stochastics 90(4), 569–604 (2018). https://doi.org/10.1080/17442508.2017.1381095 Article MathSciNet Google Scholar Müller-Gronbach, T., Yaroslavtseva,L.: On the performance of the Euler-Maruyama scheme for SDEs with discontinuous drift coefficient. ArXiv e-prints arXiv: 1809.08423 (2018) Neuenkirch, A.: Optimal approximation of sde's with additive fractional noise. J. Compl. 22(4), 459–474 (2006). https://doi.org/10.1016/j.jco.2006.02.001 Nualart, D., Ouknine, Y.: Regularization of differential equations by fractional noise. Stoch. Process. Appl. 102(1), 103–116 (2002). https://doi.org/10.1016/S0304-4149(02)00155-2 Nualart, D., Ouknine,Y.: Stochastic differential equations with additive fractional noise and locally unbounded drift. In: Stochastic inequalities and applications, vol. 56 of Progr. Probab., pp. 353–365. Birkhäuser, Basel (2003) Neuenkirch, A., Szölgyenyi, M.: The Euler–Maruyama scheme for SDEs with irregular drift: convergence rates via reduction to a quadrature problem. IMA. J. Numer. Anal. 41(2), 1164–1196 (2021). https://doi.org/10.1093/imanum/draa007 Nualart, D.: The Malliavin Calculus and Related Topics. Springer-Verlag (2006). https://doi.org/10.1007/3-540-28329-3 Pamen, O.M., Taguchi, D.: Strong rate of convergence for the Euler-Maruyama approximation of SDEs with Hölder continuous drift coefficient. Stoch. Process. Appl. 127(8), 2542–2559 (2017). 
https://doi.org/10.1016/j.spa.2016.11.008 Shevchenko, G.: Fractional Brownian motion in a nutshell. In: Analysis of fractional stochastic processes, vol. 36 of Int. J. Modern Phys. Conf. Ser., 1560002, 16. World Sci. Publ., Hackensack, NJ (2015) Veretennikov, A.J.: Strong solutions and explicit formulas for solutions of stochastic integral equations. Math. Sb. 111(153), 434–452 (1980) MathSciNet MATH Google Scholar Zvonkin, A.K.: A transformation of the phase space of a diffusion process that will remove the drift. Mat. Sb. 93(135), 129–149 (1974) OB has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 683164) and from the DFG Research Unit FOR 2402. MG was supported by the Austrian Science Fund (FWF) Lise Meitner programme M2250-N32. Part of the work on the project has been done during the visits of the authors to IST Austria, Technical University Berlin, and Hausdorff Research Institute for Mathematics (HIM). We thank them all for providing excellent working conditions, support and hospitality. Finally, we thank the referee for the careful reading of our paper and several useful comments. Open access funding provided by TU Wien (TUW). Weierstrass Institute, Mohrenstraße 39, 10117, Berlin, Germany Oleg Butkovsky University of Leeds, Woodhouse, Leeds, LS2 9JT, UK Konstantinos Dareiotis TU Vienna, Wiedner Hauptstrasse 8-10, 1040, Wien, Austria Máté Gerencsér Correspondence to Máté Gerencsér. A: Proofs of the auxiliary bounds from Section 3.3 Proof of Proposition 3.6 (i). Fix \(0\leqslant s\leqslant t\leqslant 1\). It follows from the definition of \(B^H\) that \(B^H_t-B^H_s\) is a Gaussian vector consisting of d independent components, each of them having zero mean and variance $$\begin{aligned} C(t,t)-2C(s,t)+C(s,s)=c_H(t-s)^{2H}, \end{aligned}$$ where the function C was defined in (2.2). This implies the statement of the proposition. (ii). We have $$\begin{aligned} \;\;{\mathbb {E}}\;^uB_t^{H,i}-\;\;{\mathbb {E}}\;^sB_t^{H,i}=\int _s^u (t-r)^{H-1/2} dW_r^i. \end{aligned}$$ Therefore, \(\;\;{\mathbb {E}}\;^sB_t^{H,i}-\;\;{\mathbb {E}}\;^uB_t^{H,i}\) is a Gaussian random variable independent of \(\mathcal {F}_s\). It is of mean 0 and variance \(c^2(s,t)-c^2(u,t)\). This implies the statement of the lemma. (iii). It suffices to notice that the random vector \(B^H_t-\;\;{\mathbb {E}}\;^s B^H_t\) is Gaussian, independent of \(\mathcal {F}_s\), consists of d independent components, and each of its components has zero mean and variance $$\begin{aligned} \;\;{\mathbb {E}}\;\big (\int _s^t|t-r|^{H-1/2}\,dW_r\big )^2=c^2(s,t). \end{aligned}$$ (iv). One can simply write by the Newton-Leibniz formula $$\begin{aligned} c^2(s,t)-c^2(s,u)\leqslant N \int _u^t |r-s|^{2H-1}\,dr\leqslant N |t-u||t-s|^{2H-1}, \end{aligned}$$ since by our assumption on s, u, t, for all \(r\in [u,t]\) one has \(r-s\leqslant t-s\leqslant 2(r-s)\). (v). It follows from (2.1) that $$\begin{aligned} \;\;{\mathbb {E}}\;^sB^H_t-\;\;{\mathbb {E}}\;^sB^H_u=\int _{-\infty }^s (|t-r|^{H-1/2}-|u-r|^{H-1/2})\,dW_r. 
\end{aligned}$$ Therefore, by the Burkholder–Davis–Gundy inequality one has $$\begin{aligned} \Vert \;\;{\mathbb {E}}\;^sB^H_t-\;\;{\mathbb {E}}\;^sB^H_r\Vert _{L_p(\Omega )}^2\leqslant & {} N \int _{-\infty }^s \bigl (|t-r|^{H-1/2}-|u-r|^{H-1/2}\bigr )^2\,dr\\\leqslant & {} N\int _{-\infty }^s\Bigl (\int _u^t|v-r|^{H-3/2}\,dv\Bigr )^2\,dr \\\leqslant & {} N\int _{-\infty }^s|t-u|^2|u-r|^{2H-3}\,dr \\\leqslant & {} N(t-u)^2 (u-s)^{2H-2}\leqslant N(t-u)^2 (t-s)^{2H-2}, \end{aligned}$$ where the last inequality follows from the fact that by the assumption \(u-s\geqslant (t-s)/2\). \(\square \) (i). Case \(\beta \leqslant \alpha \): There is nothing to prove since $$\begin{aligned} \Vert \mathcal {P}_tf\Vert _{\mathcal {C}^\beta (\mathbb {R}^d)}\leqslant \Vert \mathcal {P}_tf\Vert _{\mathcal {C}^\alpha (\mathbb {R}^d)}\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha (\mathbb {R}^d)}. \end{aligned}$$ Case \(\beta =0\), \(\alpha <0\): The bound follows immediately from the definition of the norm. Case \(\alpha =0\), \(\beta \in (0, 1]\): By differentiating the Gaussian density we have $$\begin{aligned} \Vert \nabla \mathcal {P}_t f\Vert _{\mathcal {C}^0} \leqslant N t^{-1/2} \Vert f\Vert _{\mathcal {C}^0}. \end{aligned}$$ Consequently, $$\begin{aligned} | \mathcal {P}_tf(x)-\mathcal {P}_tf(y)|\leqslant & {} | \mathcal {P}_tf(x)-\mathcal {P}_tf(y)|^\beta \Vert f\Vert _{\mathcal {C}^0}^{(1-\beta )} \\\leqslant & {} N t^{-\beta /2} |x-y|^\beta \Vert f\Vert _{\mathcal {C}^0}, \end{aligned}$$ which implies that $$\begin{aligned}{}[\mathcal {P}_t f]_{\mathcal {C}^\beta } \leqslant N t^{-\beta /2} \Vert f\Vert _{\mathcal {C}^0}. \end{aligned}$$ This, combined with the trivial estimate \( \Vert \mathcal {P}_tf\Vert _{\mathcal {C}^0} \leqslant \Vert f\Vert _{L_\infty }\) give the desired estimate. Case \(0< \alpha< \beta <1\): We refer the reader to [17, Lemma A.7] where the estimate is proved in the Besov scale. The desired estimate then follows from the equivalence \(\mathcal {B}^\gamma _{\infty , \infty } \sim \mathcal {C}^\gamma \) for \(\gamma \in (0,1)\). Case \(\alpha \in (0,1)\), \(\beta =1\): We have $$\begin{aligned} \Vert \nabla \mathcal {P}_t f \Vert _{L\infty }= & {} \sup _{x \in \mathbb {R}^d} \Big | \int _{\mathbb {R}^d} \nabla p_t(x-y)f(y) \, dy \Big | \\= & {} \sup _{x \in \mathbb {R}^d} \Big | \int _{\mathbb {R}^d} \nabla p_t(x-y)\big (f(y)-f(x) \big ) \, dy \Big | \\\leqslant & {} N [f]_{\mathcal {C}^\alpha } \int _{\mathbb {R}^d} | \nabla p_t(y)| |y|^\alpha \, dy \\\leqslant & {} N [f]_{\mathcal {C}^\alpha } t^{(\alpha -1)/2}, \end{aligned}$$ which again combined with \(\Vert \mathcal {P}_tf\Vert _{\mathcal {C}^0} \leqslant \Vert f \Vert _{\mathcal {C}^0}\) proves the claim. Case \(\alpha < 0\), \(\beta \in [0,1]\): $$\begin{aligned} \Vert \mathcal {P}_t f\Vert _{\mathcal {C}^\beta } = \Vert \mathcal {P}_{\frac{t}{2}+\frac{t}{2}} f\Vert _{\mathcal {C}^\beta } \leqslant N t^{-\beta /2} \Vert \mathcal {P}_{t/2}f\Vert _{\mathcal {C}^0}\leqslant & {} N t^{(\alpha -\beta )/2} \sup _{\varepsilon \in (0,1]} \varepsilon ^{-\alpha /2}\Vert \mathcal {P}_\varepsilon f\Vert _{\mathcal {C}^0} \\= & {} N t^{(\alpha -\beta )/2} \Vert f\Vert _{\mathcal {C}^\alpha }. \end{aligned}$$ (ii). Fix \(\delta \in (0,1]\) such that \(\delta \geqslant {\frac{\alpha }{2}-\frac{\beta }{2}}\). 
Then we have $$\begin{aligned} \Vert \mathcal {P}_tf-\mathcal {P}_sf\Vert _{\mathcal {C}^{\beta }(\mathbb {R}^d)}&\leqslant \int _s^t \Bigl \Vert \frac{\partial }{\partial r}\mathcal {P}_rf\Bigr \Vert _{\mathcal {C}^{\beta }(\mathbb {R}^d)}\,dr\\&=\int _s^t \Bigl \Vert \mathcal {P}_r \Delta f\Bigr \Vert _{\mathcal {C}^{\beta }(\mathbb {R}^d)}\,dr\\&\leqslant N\int _s^t r^{\frac{\alpha -\beta -2}{2}} \Bigl \Vert \Delta f\Bigr \Vert _{\mathcal {C}^{\alpha -2}(\mathbb {R}^d)}\,dr\\&\leqslant N\Vert f\Vert _{\mathcal {C}^{\alpha }(\mathbb {R}^d)}\int _s^t r^{\frac{\alpha }{2}-\frac{\beta }{2}-\delta }r^{-1+\delta }\,dr\\&\leqslant N\Vert f\Vert _{\mathcal {C}^{\alpha }(\mathbb {R}^d)}s^{\frac{\alpha }{2}-\frac{\beta }{2}-\delta }(t-s)^{\delta }, \end{aligned}$$ where the last inequality follows from the facts that \(r\geqslant s\) and \(r\geqslant r-s\), and that both of the exponents in the penultimate inequality are nonpositive thanks to the conditions on \(\delta \). This yields the statement of (ii). (iii). First let us deal with the case \(H\leqslant 1/2\). Then the bound follows easily by applying part (ii) of the proposition with \(\delta =1/2\). Indeed, for any \(0\leqslant s\leqslant u \leqslant t\) we have $$\begin{aligned}&\Vert \mathcal {P}_{c^2(s,t)}f-\mathcal {P}_{c^2(u,t)}f\Vert _{\mathcal {C}^\beta }\leqslant N \Vert f\Vert _{\mathcal {C}^\alpha } c^{\alpha -\beta -1}(u,t) (c^2(s,t)-c^2(u,t))^{\frac{1}{2}}\\&\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }(t-u)^{H(\alpha -\beta -1)}(u-s)^{\frac{1}{2}} (t-u)^{H-\frac{1}{2}}\\&=N\Vert f\Vert _{\mathcal {C}^\alpha }(u-s)^{\frac{1}{2}} (t-u)^{H(\alpha -\beta )-\frac{1}{2}}, \end{aligned}$$ where we also used the fact that $$\begin{aligned} c^2(s,t)-c^2(u,t)\leqslant N (u-s)(t-u)^{2H-1}. \end{aligned}$$ (A.1) This establishes the desired bound. Now let us consider the case \(H>1/2\) (in this case \(2H-1>0\) and thus bound (A.1) does not hold). Put for \(0\leqslant s\leqslant u \leqslant t\) $$\begin{aligned} k(s,u,t):= & {} c^2(u,t)+(u-s)\partial _tc^2(u,t)\nonumber \\= & {} (2H)^{-1}(t-u)^{2H}+(u-s)(t-u)^{2H-1}. \end{aligned}$$ Note that by convexity of the function \(z\mapsto z^{2H}\) one has for any \(0\leqslant z_1\leqslant z_2\) $$\begin{aligned} z_1^{2H}+2H(z_2-z_1)z_1^{2H-1}\leqslant z_2^{2H} \leqslant z_1^{2H}+2H(z_2-z_1)z_1^{2H-1}+(z_2-z_1)^{2H}. \end{aligned}$$ Hence for \(0\leqslant s\leqslant u \leqslant t\) we have $$\begin{aligned} c^2(u,t)\leqslant k(s,u,t)\leqslant c^2(s,t) \leqslant k(s,u,t)+c^2(s,u) \end{aligned}$$ Now we are ready to obtain the desired bound. We have $$\begin{aligned} \Vert \mathcal {P}_{c^2(s,t)}f-\mathcal {P}_{c^2(u,t)}f\Vert _{\mathcal {C}^\beta }&\leqslant \Vert \mathcal {P}_{c^2(s,t)}f-\mathcal {P}_{k(s,u,t)}f\Vert _{\mathcal {C}^\beta } +\Vert \mathcal {P}_{k(s,u,t)}f-\mathcal {P}_{c^2(u,t)}f\Vert _{\mathcal {C}^\beta }\nonumber \\&\leqslant I_1+I_2. \end{aligned}$$ We bound \(I_1\) and \(I_2\) using part (ii) of the proposition but with different \(\delta \). First, we apply part (ii) with \(\delta =\frac{1}{4H}\vee (\alpha /2-\beta /2)\). 
Recalling (A.3), we deduce $$\begin{aligned} I_1\leqslant N\Vert f\Vert _{C^\alpha }k(s,u,t)^{\frac{\alpha }{2}-\frac{\beta }{2}-\delta }c^{2\delta }(s,u) \leqslant N\Vert f\Vert _{C^\alpha }(u-s)^{\frac{1}{2}}(t-u)^{(H(\alpha -\beta )-\frac{1}{2})\wedge 0}.\nonumber \\ \end{aligned}$$ Applying now part (ii) with \(\delta =1/2\), we obtain $$\begin{aligned} I_2\leqslant N \Vert f\Vert _{C^\alpha }c^{\alpha -\beta -1}(u,t)(u-s)^{\frac{1}{2}}(t-u)^{H-\frac{1}{2}}\leqslant N \Vert f\Vert _{C^\alpha } (u-s)^{\frac{1}{2}} (t-u)^{H(\alpha -\beta )-\frac{1}{2}}. \end{aligned}$$ This, combined with (A.4) and (A.5) implies the desired bound for the case \(H>1/2\). (iv). We begin with the case \(H\leqslant 1/2\). Then, applying part (i) of the theorem with \(\beta =1\), we deduce for \(0\leqslant s \leqslant u \leqslant t\leqslant 1\) $$\begin{aligned} |\mathcal {P}_{c^2(u,t)}f(x)-\mathcal {P}_{c^2(u,t)}f(x+\xi )|\leqslant N\Vert f\Vert _{C^\alpha }(t-u)^{H(\alpha -1)}|\xi |. \end{aligned}$$ Hence for any \(p\geqslant 2\) we have $$\begin{aligned} \Vert \mathcal {P}_{c^2(u,t)}f(x)-\mathcal {P}_{c^2(u,t)}f(x+\xi )\Vert _{L_p(\Omega )}&\leqslant N\Vert f\Vert _{C^\alpha }(t-u)^{H(\alpha -1)}\Vert \xi \Vert _{L_p(\Omega )}\\&\leqslant N\Vert f\Vert _{C^\alpha }(u-s)^{\frac{1}{2}}(t-u)^{H\alpha -\frac{1}{2}}, \end{aligned}$$ where the last inequality follows from the bound (A.1) and the definition of the random variable \(\xi \). This completes the proof for the case \(H\leqslant 1/2\). Now let us deal with the case \(H\in (1/2,1)\). Fix \(0\leqslant s \leqslant u \leqslant t\leqslant 1\). Let \(\eta \) and \(\rho \) be independent Gaussian random vectors consisting of d independent identically distributed components each. Suppose that for any \(i=1,\ldots ,d\) we have \(\;\;{\mathbb {E}}\;\eta ^i=\;\;{\mathbb {E}}\;\rho ^i=0\) and $$\begin{aligned} \mathop {\mathrm{Var}}(\eta ^i)=(u-s)(t-u)^{2H-1};\quad \mathop {\mathrm{Var}}(\rho ^i)=v(s,u,t)-(u-s)(t-u)^{2H-1}. \end{aligned}$$ It is clear that $$\begin{aligned}&\Vert \mathcal {P}_{c^2(u,t)}f(x)-\mathcal {P}_{c^2(u,t)}f(x+\xi )\Vert _{L_p(\Omega )}\nonumber \\&\quad = \Vert \mathcal {P}_{c^2(u,t)}f(x)-\mathcal {P}_{c^2(u,t)}f(x+\eta +\rho )\Vert _{L_p(\Omega )}\nonumber \\&\quad \leqslant \Vert \mathcal {P}_{c^2(u,t)}f(x)-\mathcal {P}_{c^2(u,t)}f(x+\eta )\Vert _{L_p(\Omega )}\nonumber \\&\qquad + \Vert \mathcal {P}_{c^2(u,t)}f(x+\eta )-\mathcal {P}_{c^2(u,t)}f(x+\eta +\rho )\Vert _{L_p(\Omega )}\nonumber \\&\quad =:I_1+I_2. \end{aligned}$$ Applying part (i) of the theorem with \(\beta =1\), we get $$\begin{aligned} I_1 \leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }c^{\alpha -1}(u,t)\Vert \eta \Vert _{L_p(\Omega )}\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }(u-s)^{\frac{1}{2}}(t-u)^{\alpha H-\frac{1}{2}}. \end{aligned}$$ Similarly, using part (i) of the theorem with \(\beta =\frac{1}{2H}\vee \alpha \) and recalling (A.3), we deduce $$\begin{aligned} I_2 \leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }c^{(\alpha -\frac{1}{2H})\wedge 0}(u,t) \,\Vert \,|\rho |^{\frac{1}{2H}\vee \alpha }\,\Vert _{L_p(\Omega )}\leqslant N\Vert f\Vert _{\mathcal {C}^\alpha }(u-s)^{\frac{1}{2}}(t-u)^{(\alpha H-\frac{1}{2})\wedge 0}. \end{aligned}$$ Combined with (A.6) and (A.7), this yields the required bound. \(\square \) Obviously it suffices to show it for \(k=1\). 1. Case \(\alpha -\delta =0\): The statement follows directly by definition of the \(\mathcal {C}^\alpha \) norm. 2. Case \(\alpha -\delta \in (0,1]\): First, let us consider \(\alpha \in (0,1]\). 
For all \(\beta \in [0,1]\) we have $$\begin{aligned} |f(y+x)-f(y)-f(z+x)-f(z)|&\leqslant (2|x|^\alpha [f]_{\mathcal {C}^\alpha })^ \beta (2|y-z|^{\alpha } [f]_{\mathcal {C}^\alpha })^{(1-\beta )} \end{aligned}$$ which upon dividing by \(|y-z|^{\alpha -\delta }\), choosing \(\beta = \delta / \alpha \), and taking suprema over \(y \ne z\) gives $$\begin{aligned}{}[f(\cdot +x)-f(\cdot )]_{\mathcal {C}^{\alpha -\delta }} \leqslant 4 |x|^\delta [f]_{\mathcal {C}^\alpha }. \end{aligned}$$ Similarly, we have $$\begin{aligned} \Vert f(\cdot +x)-f(\cdot )\Vert _{\mathcal {C}^0} \leqslant |x|^\delta [f]_{\mathcal {C}^\alpha }^{\delta /\alpha } (2\Vert f\Vert _{\mathcal {C}^0})^{1-\delta /\alpha } \leqslant 2 |x|^\delta \Vert f\Vert _{\mathcal {C}^\alpha }, \end{aligned}$$ which combined with the inequality above gives $$\begin{aligned} \Vert f(\cdot +x)-f(\cdot )\Vert _{\mathcal {C}^{\alpha -\delta }} \leqslant 6 |x|^\delta \Vert f\Vert _{\mathcal {C}^\alpha }. \end{aligned}$$ Now let us consider the case \(\alpha \in (1,2]\). By the fundamental theorem of calculus we have for any \(\beta \in [0,1]\) $$\begin{aligned}&\frac{|f(y+x)-f(y)-f(z+x)-f(z)|}{|y-z|^{\alpha -\delta }} \\&\quad = \frac{1}{|y-z|^{\alpha -\delta }} \Big |\int _0^1 x_i\big ( \partial _{x_i}f(y+\theta x)-\partial _{x_i}f(z+\theta x)\big ) \, d \theta \Big |^\beta \\&\qquad \times \Big |\int _0^1 (y_i-z_i)\big (\partial _{x_i}f(z+x + \theta (y-z))- \partial _{x_i}f(z + \theta (y-z)) \big ) \, d \theta \Big |^{(1-\beta )} \\&\quad \leqslant N \frac{( | x| [\nabla f]_{\mathcal {C}^{\alpha -1}}|y-z|^{\alpha -1} )^\beta (|y-z| [ \nabla f ]_{C^{\alpha -1}} |x|^{\alpha -1})^{1-\beta } }{|y-z|^{\alpha -\delta }} \\&\quad \leqslant N |x|^{\beta +(\alpha -1)(1-\beta )}\Vert f\Vert _{\mathcal {C}^\alpha } |y-z|^{(\alpha -1)\beta +1-\beta -\alpha +\delta }, \end{aligned}$$ which upon choosing \(\beta =(\delta +1-\alpha )/2\alpha \) and taking suprema over \( y \ne z\) gives $$\begin{aligned}{}[f(x+\cdot )-f(\cdot )]_{\mathcal {C}^{\alpha -\delta }} \leqslant N |x|^\delta \Vert f\Vert _{\mathcal {C}^\alpha }. \end{aligned}$$ In addition, we have $$\begin{aligned} \Vert f(\cdot +x)-f(\cdot )\Vert _{\mathcal {C}^0} \leqslant |x|^\delta [f]_{\mathcal {C}^\delta } \leqslant N |x|^\delta \Vert f\Vert _{C^\alpha }, \end{aligned}$$ which combined with the above proves the claim. 3. Case \(\alpha -\delta \in (k, k+1]\) for \(k \in \mathbb {N}\): The statement follows by proceeding as above, considering also derivatives of f up to sufficiently high order. 4. Case \(\alpha - \delta <0\): We first consider the case \(\alpha \in [0,1)\), for which we have by virtue of Proposition 3.7 (i) $$\begin{aligned} \Vert f(x+\cdot )- f(\cdot )\Vert _{\mathcal {C}^{\alpha -\delta }}= & {} \sup _{\varepsilon \in (0,1]} \varepsilon ^{\frac{\delta -\alpha }{2}} \Vert \mathcal {P}_\varepsilon f(x+ \cdot )- \mathcal {P}_\varepsilon f(\cdot )\Vert _{\mathcal {C}^0} \\\leqslant & {} \sup _{\varepsilon \in (0,1]} \varepsilon ^{\frac{\delta -\alpha }{2}} |x|^ \delta \Vert \mathcal {P}_\varepsilon f \Vert _{\mathcal {C}^\delta } \\\leqslant & {} N \sup _{\varepsilon \in (0,1]} \varepsilon ^{\frac{\delta -\alpha }{2} }|x|^\delta \varepsilon ^{\frac{\alpha -\delta }{2}} \Vert f\Vert _{\mathcal {C}^\alpha }= N |x|^\delta \Vert f\Vert _{\mathcal {C}^\alpha }. \end{aligned}$$ We move to the case \(\alpha <0\). 
We have $$\begin{aligned} \Vert f(x+\cdot )- f(\cdot )\Vert _{\mathcal {C}^{\alpha -\delta }}= & {} \sup _{\varepsilon \in (0,1]} \varepsilon ^{\frac{\delta -\alpha }{2}} \Vert \mathcal {P}_\varepsilon f(x+ \cdot )- \mathcal {P}_\varepsilon f(\cdot )\Vert _{\mathcal {C}^0} \\\leqslant & {} \sup _{\varepsilon \in (0,1]} \varepsilon ^{\frac{\delta -\alpha }{2}} |x|^ \delta \Vert \mathcal {P}_\varepsilon f \Vert _{\mathcal {C}^\delta } \\= & {} \sup _{\varepsilon \in (0,1]} \varepsilon ^{\frac{\delta -\alpha }{2}} |x|^ \delta \Vert \mathcal {P}_{\frac{\varepsilon }{2}+\frac{\varepsilon }{2}} f \Vert _{\mathcal {C}^\delta } \\\leqslant & {} N \sup _{\varepsilon \in (0,1]} \varepsilon ^{\frac{\delta -\alpha }{2} }|x|^\delta \varepsilon ^{\frac{-\delta }{2}} \Vert \mathcal {P}_{\frac{\varepsilon }{2}}f\Vert _{\mathcal {C}^0} \leqslant N |x|^\delta \Vert f\Vert _{\mathcal {C}^\alpha }. \end{aligned}$$ The proposition is proved. \(\square \) B: Proofs of the results from Section 3.4 related to the Girsanov theorem Proof of Proposition 3.10 If \(H=1/2\), then there is nothing to prove; the statement of the proposition follows from the standrad Girsanov theorem for Brownian motion. Otherwise, if \(H\ne 1/2\), let us verify that all the conditions of the Girsanov theorem in the form of [32, Theorem 2] are satisfied. Note that even though this theorem is stated in [32] in the one–dimensional setting, its extension to the multidimensional setup is immediate. First, let us check condition (i) of [32, Theorem 2]. If \(H<1/2\), then \(\int _0^1 u_s^2 ds\leqslant M^2<\infty \) and thus this condition is satisfied by the statement given at [32, last paragraph of Section 3.1]. If \(H>1/2\), then $$\begin{aligned} \bigl [D_{0+}^{H-1/2}u\bigr ](t)=Nu_tt^{-H+1/2}+N(H-1/2)\int _0^t\frac{u_t-u_s}{(t-s)^{H+1/2}}\,ds, \end{aligned}$$ where \(D_{0+}^{\beta }\) denotes the left-sided Riemann–Liouville derivative of of order \(\beta \) at 0, \(\beta \in (0,1)\), see [32, formula (4)]. Therefore, taking into account that \(H<1\) and assumption 3.27, $$\begin{aligned} \int _0^1\Bigl |\bigl [D_{0+}^{H-1/2}u\bigr ](t)\Bigr |^2\,dt\leqslant NM^2+ N\int _0^1\Bigl (\int _0^t\frac{|u_t-u_s|}{(t-s)^{H+1/2}}\,ds\Bigr )^2\,dt<\infty \,\,\text {a.s.}. \end{aligned}$$ Thus, \(D_{0+}^{H-1/2}u\in L_2([0,1])\) a.s. and hence condition (i) of [32, Theorem 2] is satisfied. Now let us verify condition (ii) of [32, Theorem 2]. Consider the following kernel: $$\begin{aligned} K_H(t,s):=(t-s)^{H-1/2}F(H-1/2,1/2-H,H+1/2,1-t/s),\quad 0\leqslant s\leqslant t\leqslant 1, \end{aligned}$$ where F is the Gauss hypergeometric function, see [12, equation (2)]. It follows from [12, Corollary 3.1], that there exists a constant \(k_H>0\) and d–dimensional Brownian motion \({\widetilde{W}}\) such that $$\begin{aligned} B^H(t)=k_H\int _0^t K_H(t,s)\, d{\widetilde{W}}_s,\quad 0\leqslant t\leqslant 1. \end{aligned}$$ Consider a random variable $$\begin{aligned} \rho :=\exp \Bigl (-\int _0^1 v_s d{\widetilde{W}}_s-\frac{1}{2}\int _0^1 |v_s|^2 ds\Bigr ), \end{aligned}$$ where the vector v is defined in the following way. 
If \(H<1/2\), then $$\begin{aligned} v_t:=\frac{\sin (\pi (H+1/2))}{\pi k_H}t^{H-1/2}\int _0^t (t-s)^{-H-1/2}s^{1/2-H}u_s\, ds, \end{aligned}$$ (B.1) and if \(H>1/2\), then $$\begin{aligned} v_t:=\frac{\sin (\pi (H-1/2))}{\pi k_H (H-1/2)}\Bigl (t^{1/2-H}u_t+(H-1/2)\int _0^t \frac{u_t-t^{H-1/2}s^{1/2-H}u_s}{{(t-s)}^{H+1/2}}\, ds \Bigr ).\nonumber \\ \end{aligned}$$ Taking into account [32, formulas (11) and (13)], we see that condition (ii) of [32, Theorem 2] is equivalent to the following one: \(\;\;{\mathbb {E}}\;\rho =1\). We claim that actually $$\begin{aligned} \;\;{\mathbb {E}}\;\exp (\lambda \int _0^1|v_t|^2 dt)\leqslant R(\lambda )<\infty \end{aligned}$$ $$\begin{aligned}&R(\lambda ):=\exp (\lambda N(H)M^2)\quad \text {if }H<1/2;\\&R(\lambda ):=\exp (\lambda N(H)M^2)\;\;{\mathbb {E}}\;\exp (\lambda N(H)\xi )\quad \text {if }H\in (1/2,1). \end{aligned}$$ By the Novikov theorem this, of course, implies that \(\;\;{\mathbb {E}}\;\rho =1\). Now let us verify (B.3). If \(H<1/2\), then it follows from (B.1) and (3.24) that $$\begin{aligned} |v_t|\leqslant N(H)Mt^{-H+1/2}, \end{aligned}$$ which immediately yields (B.3). If \(H>1/2\), then we make use of (B.2) and (3.25) to deduce $$\begin{aligned} |v_t|\leqslant&N(H)Mt^{1/2-H}+N(H)\int _0^t\frac{|u_t| (t^{H-1/2}s^{1/2-H}-1)}{{(t-s)}^{H+1/2}}\, ds\\&+N(H)\int _0^t\frac{|u_t-u_s|t^{H-1/2}s^{1/2-H}}{{(t-s)}^{H+1/2}}\, ds\\ \leqslant&N(H)Mt^{1/2-H}+N(H)\int _0^t\frac{|u_t-u_s|t^{H-1/2}s^{1/2-H}}{{(t-s)}^{H+1/2}}\, ds. \end{aligned}$$ Taking into account assumption (3.27), we obtain (B.3). Thus, by above, condition (ii) of [32, Theorem 2] is satisfied. Therefore all the conditions of [32, Theorem 2] are satisfied. Hence the process \({\widetilde{B}}^H\) is indeed a fractional Brownian motion with Hurst parameter H under \({\widetilde{\mathbb {P}}}\) defined by \(d{\widetilde{\mathbb {P}}}/d\mathbb {P}=\rho \). Finally, to show (3.28), we fix \(\lambda >0\). Then, applying the Cauchy–Schwarz inequality, we get $$\begin{aligned} \;\;{\mathbb {E}}\;\rho ^\lambda =&\;\;{\mathbb {E}}\;\exp \Bigl (-\lambda \int _0^1 v_s d{\widetilde{W}}_s-\frac{\lambda }{2}\int _0^1 |v_s|^2 ds\Bigr )\\ =&\;\;{\mathbb {E}}\;\exp \Bigl (-\lambda \int _0^1 v_s d{\widetilde{W}}_s-\lambda ^2\int _0^1 |v_s|^2 ds+(\lambda ^2-\lambda /2)\int _0^1 |v_s|^2 ds\Bigr )\\ \leqslant&\Bigl [\;\;{\mathbb {E}}\;\exp \Bigl (-2\lambda \int _0^1 v_s d{\widetilde{W}}_s-2\lambda ^2\int _0^1 |v_s|^2 ds\Bigr )\Bigr ]^{1/2} \Bigl [\;\;{\mathbb {E}}\;\exp \Bigl ((2\lambda ^2-\lambda )\int _0^1 |v_s|^2 ds\Bigr )\Bigr ]^{1/2}\\ =&\Bigl [\;\;{\mathbb {E}}\;\exp \Bigl ((2\lambda ^2-\lambda )\int _0^1 |v_s|^2 ds\Bigr )\Bigr ]^{1/2}\\ \leqslant&R(2\lambda ^2)^{1/2} <\infty , \end{aligned}$$ where the last inequality follows from (B.3). This completes the proof of the proposition. \(\square \) Proof of Lemma 3.11 We begin with establishing bound (3.29). Fix \(n\in \mathbb {N}\) and let us split the inner integral in (3.29) into two parts: the integral over \([0,\kappa _n(t)-(2n)^{-1}]\) and \([\kappa _n(t)-(2n)^{-1},t]\). 
For the first part we have $$\begin{aligned} I_1(t)&:=\int _0^{\kappa _n(t)-(2n)^{-1}}\frac{(t/s)^{H-1/2} |f_{\kappa _n(t)}-f_{\kappa _n(s)}|}{(t-s)^{H+1/2}}\,ds\nonumber \\&\leqslant [f]_{\mathcal {C}^\rho }t^{H-1/2} \int _0^{\kappa _n(t)-(2n)^{-1}} s^{1/2-H} |\kappa _n(t)-\kappa _n(s)|^\rho (t-s)^{-H-1/2}\,ds\nonumber \\&\leqslant N[f]_{\mathcal {C}^\rho }t^{H-1/2} \int _0^{\kappa _n(t)-(2n)^{-1}} s^{1/2-H} |t-s|^{\rho -H-1/2}\,ds\nonumber \\&\leqslant N[f]_{\mathcal {C}^\rho }t^{H-1/2} \int _0^{t} s^{1/2-H} |t-s|^{\rho -H-1/2}\,ds\nonumber \\&\leqslant N[f]_{\mathcal {C}^\rho }t^{\rho -H+1/2}, \end{aligned}$$ where we used bound (3.24), the assumption \(\rho -H-1/2>-1\), and the fact that for \(s\in [0,\kappa _n(t)-(2n)^{-1}]\) one has $$\begin{aligned} \kappa _n(t)-\kappa _n(s)\leqslant t-s+1/n\leqslant 3(t-s). \end{aligned}$$ Now let us move on and estimate the second part of the inner integral in (3.29). If \(t\geqslant 1/n\), then we have $$\begin{aligned} I_2(t)&:=\int _{\kappa _n(t)-(2n)^{-1}}^t\frac{(t/s)^{H-1/2} |f_{\kappa _n(t)}-f_{\kappa _n(s)}|}{(t-s)^{H+1/2}}\,ds\nonumber \\&= t^{H-1/2}|f_{\kappa _n(t)}-f_{\kappa _n(t)-1/n}| \int _{\kappa _n(t)-(2n)^{-1}}^{\kappa _n(t)} s^{1/2-H} (t-s)^{-H-1/2}\,ds\nonumber \\&\leqslant N [f]_{\mathcal {C}^\rho }n^{-\rho }\frac{t^{H-1/2}}{(\kappa _n(t)-(2n)^{-1})^{H-1/2}} \int _{\kappa _n(t)-(2n)^{-1}}^{\kappa _n(t)} (t-s)^{-H-1/2}\,ds\nonumber \\&\leqslant N [f]_{\mathcal {C}^\rho }n^{-\rho }(t-\kappa _n(t))^{-H+1/2}, \end{aligned}$$ where in the last inequality we used that for \(t\geqslant 1/n\) one has $$\begin{aligned} t\leqslant \kappa _n(t)+\frac{1}{n}\leqslant 4\kappa _n(t)-\frac{2}{n}=4\Bigl (\kappa _n(t)-\frac{1}{2n}\Bigr ). \end{aligned}$$ Now, using (B.5) and (B.5), we can bound the left–hand side of (3.29). We deduce $$\begin{aligned}&\int _0^1\Bigl (\int _0^t\frac{(t/s)^{H-1/2} |f_{\kappa _n(t)}-f_{\kappa _n(s)}|}{(t-s)^{H+1/2}}\,ds\Bigr )^2\,dt\\&\qquad \leqslant N\int _0^1 I_1(t)^2\,dt+ N\int _0^1 I_2(t)^2\,dt\\&\qquad \leqslant N[f]_{\mathcal {C}^\rho }^2+N[f]_{\mathcal {C}^\rho }^2n^{-2\rho } \sum _{i=1}^{n-1} \int _{\frac{i}{n}}^{\frac{i+1}{n}}|t-\kappa _n(t)|^{1-2H}\,dt\\&\qquad \leqslant N[f]_{\mathcal {C}^\rho }^2+N[f]_{\mathcal {C}^\rho }^2n^{-2\rho } \sum _{i=1}^{n-1} n^{-(2-2H)}\\&\qquad \leqslant N[f]_{\mathcal {C}^\rho }^2+N[f]_{\mathcal {C}^\rho }^2n^{2H-1-2\rho }\leqslant N[f]_{\mathcal {C}^\rho }^2, \end{aligned}$$ where the very last inequality follows from the assumption \(\rho >H-1/2\). This establishes (3.29). Not let us prove (3.30). Using the assumption \(\rho >H-1/2\) and identity (3.24), we deduce $$\begin{aligned} \int _0^1\Bigl (\int _0^t\frac{(t/s)^{H-1/2} |f_{t}-f_{s}|}{(t-s)^{H+1/2}}\,ds\Bigr )^2\,dt&\leqslant [f]_{\mathcal {C}^\rho }^2 \int _0^1\Bigl (\int _0^t s^{-H+1/2} (t-s)^{\rho -H-1/2}\,ds\Bigr )^2\,dt\\&\leqslant N[f]_{\mathcal {C}^\rho }^2. \end{aligned}$$ This proves (3.30). \(\square \) Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Butkovsky, O., Dareiotis, K. & Gerencsér, M. Approximation of SDEs: a stochastic sewing approach. Probab. Theory Relat. Fields 181, 975–1034 (2021). https://doi.org/10.1007/s00440-021-01080-2
Revised: 02 July 2021. Issue Date: December 2021.
Keywords: Stochastic differential equations, Regularization by noise, Irregular drift, Strong rate of convergence, Fractional Brownian motion.
09 Nov 15 questions on Real Analysis for NET and GATE aspirants Posted at 01:22h in Articles, English, NET / GATE / SET, Problems by Manjil Saikia Find the correct option: Let $$f:[2,4]\rightarrow R$$ be a continuous function such that $$f(2)=3$$ and $$f(4)=6.$$ The most we can say about the set $$f([2,4])$$ is that A. It is a set which contains [3,6]. B. It is a closed interval. C. It is a set which contains 3 and 6. D. It is a closed interval which contains [3,6]. Let $$f:]1,5[\rightarrow R$$ be a continuous function such that $$f(2)=3$$ and $$f(4)=6.$$ The most we can say about the set $$f(]1,5[)$$ is that A. It is an interval which contains [3,6]. B. It is an open interval which contains [3,6]. C. It is a bounded set which contains [3,6]. D. It is a bounded interval which contains [3,6]. Let $$f:]1,5[\rightarrow R$$ be a uniformly continuous function such that $$f(2)=3$$ and $$f(4)=6.$$ The most we can say about the set $$f(]1,5[)$$ is that A. It is a bounded set which contains [3,6]. C. It is a bounded interval which contains [3,6]. D. It is an open bounded interval which contains [3,6]. Let $$A$$ be a set. What does it mean for $$A$$ to be finite? A. is a proper subset of the natural numbers. B. There exists a natural number $$n$$ and a bijection $$f$$ from $${i\in N:i<n}$$ to $$A.$$ C. There is a bijection from $$A$$ to a proper subset of the natural numbers. D. There exists a natural number $$n$$ and a bijection $$f$$ from $${i\in N:i\leq n}$$ to $$A.$$ Let $$A$$ be a set. What does it mean for $$A$$ to be countable? A. One can assign a different element of $$A$$ to each natural number in $$N.$$ B. There is a way to assign a natural number to every element of $$A,$$ such that each natural number is assigned to exactly one element of $$A.$$ C. $$A$$ is of the form $${a_1,a_2,a_3,\dots}$$ for some sequence $$a_1,a_2,a_3,\dots$$ D. One can assign a different natural number to each element of $$A.$$ Let $$A$$ be a set. What does it mean for $$A$$ to be uncountable? A. There is no way to assign a distinct element of $$A$$ to each natural number. B. There exist elements of $$A$$ which cannot be assigned to any natural number at all. C. There is no way to assign a distinct natural number to each element of $$A.$$ D. There is a bijection $$f$$ from $$A$$ to the real numbers $$R.$$ $$A$$ and $$B$$ be bounded non-empty sets. Following are two groups of statements: (i) $$\inf(A)\leq \inf(B)$$ (ii) $$\inf(A)\leq \sup(B)$$ (iii) $$\sup(A)\leq \inf(B)$$ (iv) $$\sup(A)\leq \sup(B)$$ (p) For every $$\epsilon >0$$ $$\exists a\in A$$ & $$b\in B$$ s.t. $$a<b+\epsilon.$$ (q) For every $$b\in B$$ and $$\epsilon >0$$ $$\exists a\in A$$ s.t. $$a<b+\epsilon.$$ (r) For every $$a\in A$$ and $$\epsilon >0$$ $$\exists b\in B$$ s.t. $$a<b+\epsilon.$$ (s) For every $$a\in A$$ and $$b\in B,$$ $$a\leq b.$$ Find the correct option from the following: A. $$(i)\Rightarrow (p), (ii)\Rightarrow (s), (iii)\Rightarrow (q), (iv)\Rightarrow (r).$$ B. $$(i)\Rightarrow (q), (ii)\Rightarrow (r), (iii)\Rightarrow (p), (iv)\Rightarrow (s).$$ C. $$(i)\Rightarrow (q), (ii)\Rightarrow (p), (iii)\Rightarrow (s), (iv)\Rightarrow (r).$$ D. $$(i)\Rightarrow (s), (ii)\Rightarrow (q), (iii)\Rightarrow (r), (iv)\Rightarrow (s).$$ The radius of convergence of the power series $$\sum a_nx^n$$ is $$R$$ and $$k$$ be a positive integer. Then the radius of convergent of the power series $$\sum a_nx^{kn}$$ is A. $$\frac{R}{k}.$$ B. $$R.$$ C. not depend on $$k.$$ D. $$R^{\frac{1}{k}}.$$ Let $$f:R\rightarrow R$$ s.t. 
$$f(x)=\left\{\begin{array}{ll} \frac{|x|}{x} & \mbox{if } x\neq0;\\ 0 & \mbox{if } x=0, \end{array}\right.$$ and $$g:R\rightarrow R$$ s.t. $$g(x)=\left\{\begin{array}{ll} \frac{|x|}{x} & \mbox{if } x\neq0;\\ 1 & \mbox{if } x=0. \end{array}\right.$$
A. f and g both are continuous at x=0.
B. Neither f nor g is continuous at x=0.
C. f is continuous at x=0, but g is not.
D. g is continuous at x=0, but f is not.

Let $$f(x)=\left\{\begin{array}{ll} 8x & \mbox{for } x\in Q;\\ 2x^2+8 & \mbox{for } x\in Q^c. \end{array}\right.$$
A. f is not continuous.
B. f is continuous at x=0.
C. f is continuous at x=2.
D. f is continuous at both x=0 and x=2.

$$f(x)=\left\{\begin{array}{ll} x^2-1 & \mbox{if } x\in Q;\\ 0 & \mbox{if } x\in Q^c. \end{array}\right.$$
B. f is continuous at x=1, but not continuous at x=-1.
C. f is continuous at both x=1 and x=-1.
D. f is continuous at x=-1, but not continuous at x=1.

Let $$f:R\rightarrow R$$ be continuous and $$f(x)=\sqrt{2} \;\forall x\in Q.$$ Then $$f(\sqrt{2})$$ equals to
A. $$\sqrt{2}.$$
B. $$0.$$
C. Neither $$\sqrt{2}$$ nor $$0.$$
D. None of these.

Let $$f:R\rightarrow R$$ be continuous, $$f(0)<0$$ and $$f(1)>1.$$ Then,
(i) There exists $$c\in (0,1)$$ such that $$f(c)=c^2.$$
(ii) There exists $$d\in (0,1)$$ such that $$f(d)=d.$$
A. (i) is true, but (ii) is not true.
B. (ii) is true, but (i) is not true.
C. Both (i) and (ii) are true.
D. None of the above.

Let $$f:R\rightarrow \{-1,1\}$$ be onto. Then
B. f is continuous.
C. f is differentiable everywhere.
D. f is continuous, but not differentiable anywhere.

The sequence $$\left\{\frac{\sin{\frac{n\pi}{2}}}{n}\right\}_{n=1}^{\infty}$$
A. is convergent.
B. is divergent.
C. converges to 0.
D. converges to 1.

The answers are here.

Manjil Saikia, Managing Editor of the English Section, Gonit Sora, and Research Associate, Cardiff University, UK. www.manjilsaikia.in
mathematical analysis, Real Analysis
We've created a LaTeX template here for you to use that contains the prompts for each question. This assignment is a modified version of the Driverless Car assignment written by Chris Piech. A study by the World Health Organization found that road accidents kill a shocking 1.24 million people a year worldwide. In response, there has been great interest in developing autonomous driving technology that can drive with calculated precision and reduce this death toll. Building an autonomous driving system is an incredibly complex endeavor. In this assignment, you will focus on the sensing system, which allows us to track other cars based on noisy sensor readings. Getting started. You will be running two files in this assignment - grader.py and drive.py. The drive.py file is not used for any grading purposes, it's just there to visualize the code you will be writing and help you gain an appreciation for how different approaches result in different behaviors (and to have fun!). Let's start by trying to drive manually. python drive.py -l lombard -i none You can steer by either using the arrow keys or 'w', 'a', and 'd'. The up key and 'w' accelerates your car forward, the left key and 'a' turns the steering wheel to the left, and the right key and 'd' turns the steering wheel to the right. Note that you cannot reverse the car or turn in place. Quit by pressing 'q'. Your goal is to drive from the start to finish (the green box) without getting in an accident. How well can you do on the windy Lombard street without knowing the location of other cars? Don't worry if you're not very good; the teaching staff were only able to get to the finish line 4/10 times. An accident rate of 60% is pretty abysmal, which is why we're going to use AI to do this. Flags for python drive.py: -a: Enable autonomous driving (as opposed to manual). -i <inference method>: Use none, exactInference, particleFilter to (approximately) compute the belief distributions over the locations of the other cars. -l <map>: Use this map (e.g. small or lombard). Defaults to small. -d: Debug by showing all the cars on the map. -p: All other cars remain parked (so that they don't move). Modeling car locations We assume that the world is a two-dimensional rectangular grid on which your car and $K$ other cars reside. At each time step $t$, your car gets a noisy estimate of the distance to each of the cars. As a simplifying assumption, we assume that each of the $K$ other cars moves independently and that the noise in sensor readings for each car is also independent. Therefore, in the following, we will reason about each car independently (notationally, we will assume there is just one other car). At each time step $t$, let $C_t \in \mathbb R^2$ be a pair of coordinates representing the actual location of the single other car (which is unobserved). We assume there is a local conditional distribution $p(c_t \mid c_{t-1})$ which governs the car's movement. Let $a_t \in \mathbb R^2$ be your car's position, which you observe and also control. To minimize costs, we use a simple sensing system based on a microphone. The microphone provides us with $D_t$, which is a Gaussian random variable with mean equal to the true distance between your car and the other car and variance $\sigma^2$ (in the code, $\sigma$ is Const.SONAR_STD, which is about two-thirds the length of a car). In symbols, $D_t \sim \mathcal N(\|a_t - C_t\|_2, \sigma^2)$. 
For example, if your car is at $a_t = (1,3)$ and the other car is at $C_t = (4,7)$, then the actual distance is $5$ and $D_t$ might be $4.6$ or $5.2$, etc. Use util.pdf(mean, std, value) to compute the probability density function (PDF) of a Gaussian with given mean mean and standard deviation std, evaluated at value. Note that evaluating a PDF at a certain value does not return a probability -- densities can exceed $1$ -- but for the purposes of this assignment, you can get away with treating it like a probability. The Gaussian probability density function for the noisy distance observation $D_t$, which is centered around your distance to the car $\mu = \|a_t - C_t\|_2$, is shown in the following figure: Your job is to implement a car tracker that (approximately) computes the posterior distribution $\mathbb P(C_t \mid D_1 = d_1, \dots, D_t = d_t)$ (your beliefs of where the other car is) and update it for each $t = 1, 2, \dots$. We will take care of using this information to actually drive the car (i.e., set $a_t$ to avoid a collision with $c_t$), so you don't have to worry about that part. To simplify things, we will discretize the world into tiles represented by (row, col) pairs, where 0 <= row < numRows and 0 <= col < numCols. For each tile, we store a probability representing our belief that there's a car on that tile. The values can be accessed by: self.belief.getProb(row, col). To convert from a tile to a location, use util.rowToY(row) and util.colToX(col). Here's an overview of the assignment components: In Problems 1 and 2 (code), you will implement ExactInference, which computes a full probability distribution of another car's location over tiles (row, col). In Problem 3 (code), you will implement ParticleFilter, which works with particle-based representation of this same distribution. Problem 4 (written) gives you a chance to extend your probability analyses to a slightly more realistic scenario where there are multiple other cars and we can't automatically distinguish between them. A few important notes before we get started: Past experience suggests that this will be one of the most conceptually challenging assignments of the quarter for many students. Please start early, especially if you're low on late days! We strongly recommend that you attend/watch the lectures on Bayesian networks and HMMs before getting started, and keep the slides handy for reference while you're working. The code portions of this assignment are short and straightforward -- no more than about 30 lines in total -- but only if your understanding of the probability concepts is clear! (If not, see the previous point.) As a notational reminder: we use the lowercase expressions $p(x)$ or $p(x|y)$ for local conditional probability distributions, which are defined by the Bayesian network. We use the uppercase expressions $\mathbb P(X = x)$ or $\mathbb P(X = x | Y = y)$ for joint and posterior probability distributions, which are not pre-defined in the Bayesian network but can be computed by probabilistic inference. Please review the lecture slides for more details.
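The handout above already names the pieces needed for the observe step of ExactInference: the belief grid accessed through self.belief.getProb(row, col), the tile-to-coordinate helpers util.rowToY and util.colToX, the Gaussian density util.pdf, and the noise level Const.SONAR_STD. Below is a rough, self-contained sketch of that update written against plain lists and stand-in helper arguments; the starter code's actual object interface (for example a setProb or normalize method on the belief) is an assumption and may differ.

```python
import math

def reweight_belief(belief, agent_x, agent_y, observed_distance, sonar_std,
                    row_to_y, col_to_x):
    """One 'observe' update of exact inference over a grid of tiles.

    belief: 2D list, belief[row][col] = P(car on that tile) before the reading.
    row_to_y / col_to_x: functions mapping tile indices to map coordinates
    (stand-ins for util.rowToY and util.colToX in the starter code).
    Returns the posterior grid after weighting each tile by the Gaussian
    emission probability of the observed distance and renormalizing.
    """
    def gaussian_pdf(mean, std, value):  # stand-in for util.pdf
        return math.exp(-((value - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

    posterior = [[0.0] * len(belief[0]) for _ in belief]
    for row in range(len(belief)):
        for col in range(len(belief[0])):
            x, y = col_to_x(col), row_to_y(row)
            true_dist = math.hypot(agent_x - x, agent_y - y)
            # Posterior is proportional to prior * P(D_t = observed | car on this tile).
            posterior[row][col] = belief[row][col] * gaussian_pdf(true_dist, sonar_std, observed_distance)

    total = sum(map(sum, posterior))
    if total > 0:
        posterior = [[p / total for p in r] for r in posterior]
    return posterior
```

In the assignment itself you would loop over self.belief with getProb, use util.pdf with Const.SONAR_STD, and store the reweighted values back before normalizing.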
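For the particle filter in Problem 3, the same emission weighting is applied to a finite collection of particles, followed by resampling. The dictionary-of-tile-counts representation and the function signature below are assumptions made for illustration, not the starter code's actual interface.

```python
import math
import random

def particle_observe(particles, agent_x, agent_y, observed_distance, sonar_std,
                     row_to_y, col_to_x):
    """One observe-plus-resample step of a particle filter.

    particles: dict mapping (row, col) tiles to how many particles sit there.
    Each particle is weighted by the Gaussian emission probability of the
    observed distance, then a fresh set of the same size is resampled.
    """
    def gaussian_pdf(mean, std, value):  # stand-in for util.pdf
        return math.exp(-((value - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

    weights = {}
    for (row, col), count in particles.items():
        true_dist = math.hypot(agent_x - col_to_x(col), agent_y - row_to_y(row))
        weights[(row, col)] = count * gaussian_pdf(true_dist, sonar_std, observed_distance)

    total = sum(weights.values())
    if total == 0:  # all weights vanished; keep the old particles rather than dividing by zero
        return dict(particles)

    num_particles = sum(particles.values())
    tiles = list(weights)
    probs = [weights[t] / total for t in tiles]
    resampled = {}
    for tile in random.choices(tiles, weights=probs, k=num_particles):
        resampled[tile] = resampled.get(tile, 0) + 1
    return resampled
```

The elapse-time step, not shown here, would move each particle according to the car's transition model before the next observation arrives.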
Valuation - Advanced

How do you value banks and financial institutions differently from other companies?
You mostly use the same methodologies, EXCEPT:
- You look at P/E and P/BV multiples rather than EV/Rev, EV/EBITDA, and other "normal" multiples, since banks have a unique capital structure.

Walk me through an IPO valuation for a company that's about to go public.
1. Unlike normal valuations, for an IPO valuation we ONLY CARE ABOUT PUBLIC COMPS.
2. After picking the public comps we decide on the most relevant multiple to use and then estimate our company's Enterprise Value based on that.
3. Once we have the Enterprise Value, we work backward to get to Equity Value and also subtract the IPO proceeds because this is "new" cash.
4. Then we divide by the total number of shares (old and newly created) to get its per-share price. When people say "An IPO is PRICED at..." this is what they're referring to.
If you were using P/E or any other "Equity Value-based multiple" for the multiple in step #2 here, then you would get to Equity Value directly instead and then subtract the IPO proceeds from there.

I'm looking at financial data for a public comp, and it's April (Q2) right now. Walk me through how you would "calendarize" this company's financial statements to show the TTM as opposed to just the last Fiscal Year.
The "formula" to calendarize financial statements is as follows: TTM = Most Recent Fiscal Year + New Partial Period - Old Partial Period.
So in the example above, we would take the company's Q1 numbers, add the most recent fiscal year's numbers, and then SUBTRACT the Q1 numbers from that most recent fiscal year. For US companies you can find these quarterly numbers in the 10-Q; for international companies they're in the "interim" reports.

Walk me through an M&A premiums analysis.
The purpose of this analysis is to look at similar transactions and see the PREMIUMS that buyers have paid to sellers' share prices when acquiring them. For example, if a company is trading at $10.00/share and the buyer acquires it for $15.00/share, that's a 50% premium.
1. First, select the precedent transactions based on industry, date (past 2-3 years for example), and size (example: over $1B market cap).
2. For each transaction, get the seller's share price 1 day, 20 days, and 60 days before the transaction was announced (you can also look at even longer intervals, or 30 days, 45 days, etc.).
3. Then, calculate the 1-day premium, 20-day premium, etc. by dividing the per-share purchase price by the appropriate share prices on each day.
4. Get the medians for each set, and then apply them to your company's current share price, share price 20 days ago, etc. to estimate how much of a premium a buyer might pay for it.
Note that you ONLY use this analysis when valuing public companies because private companies don't have share prices. Sometimes the set of companies here is exactly the same as your set of precedent transactions but typically it is BROADER.

Walk me through a future share price analysis.
The purpose of this analysis is to PROJECT what a company's share price might be 1 or 2 years from now and then DISCOUNT IT BACK TO ITS PRESENT VALUE.
1. Get the median historical (usually TTM) P/E of your public company comps.
2. Apply this P/E multiple to your company's 1-year forward or 2-year forward projected EPS to get its implied future share price.
3. Then, discount this back to its present value by using a discount rate in line with the company's Ke figures.
You normally look at a range of P/E multiples as well as a range of discount rates for this type of analysis, and make a sensitivity table with these as inputs.

Both M&A premiums analysis and precedent transactions involve looking at previous M&A transactions. What's the difference in how we select them?
- All the sellers in the M&A premiums analysis must be public.
- Usually we use a BROADER set of transactions for M&A premiums; we might use fewer than 10 precedent transactions but we might have dozens of M&A premiums. The industry and financial screens are usually less stringent.
- Aside from those, the screening criteria are similar: financial, industry, geography, and date.

Walk me through a Sum-of-the-Parts analysis.
In a Sum-of-the-Parts analysis, you value each division of a company using separate comps and transactions, get to separate multiples, and then add up each division's value to get the total for the company. We have a manufacturing division with $100 million EBITDA, an entertainment division with $50 million EBITDA and a consumer goods division with $75 million EBITDA. We've selected comps and transactions for each division, and the median multiples come out to 5x EBITDA for manufacturing, 8x EBITDA for entertainment, and 4x EBITDA for consumer goods. Our calculation would be $100 * 5x + $50 * 8x + $75 * 4x = $1.2 billion for the company's total value.

How do you value NOLs and take them into account in a valuation?
You value NOLs based on how much they'll save the company in taxes in future years, and then take the present value of the sum of tax savings in future years. Two ways to assess the tax savings in future years:
1. Assume that a company can use its NOLs to completely offset its taxable income until the NOLs run out.
2. In an acquisition scenario, use Section 382 and multiply the adjusted long-term rate by the equity purchase price of the seller to determine the maximum allowed NOL usage in each year - and then use that to figure out the offset to taxable income.
You might look at NOLs in a valuation but you rarely add them in - if you did, they would be similar to cash and you would subtract NOLs to go from Equity Value to Enterprise Value and vice versa.

I have a set of public comps and need to get the projections from equity research. How do I select which report to use?
This varies by bank and group, but two common methods:
1. You pick the report with the most detailed information.
2. You pick the report with numbers in the middle of the range.
Note that you DO NOT pick reports based on which bank they're coming from. So if you're at Goldman Sachs, you would not pick all Goldman Sachs equity research - in fact that would be bad because then your valuation would not be objective.

I have a set of precedent transactions but I'm missing information like EBITDA for a lot of companies - how can I find it if it's not available via public sources?
1. Search online and see if you can find press releases or articles in the financial press with these numbers.
2. Failing that, look in equity research for the BUYER around the time of the transaction and see if any of the analysts estimate the seller's numbers.
3. Also look on online sources like Capital IQ, FactSet, PitchBook, and CrunchBase, and see if any of them disclose numbers or give estimates.

How far back and forward do we usually go for public comps and precedent transaction multiples?
Usually you look at the TTM period for both sets, and then you look forward either 1 or 2 years.
You're more likely to look backward more than 1 year and go forward more than 2 years for public comps; for precedent transactions it's odd to go forward more than 1 year because your information is more limited.

I have one company with a 40% EBITDA margin trading at 8x EBITDA, and another company with a 10% EBITDA margin trading at 16x EBITDA. What's the problem with comparing these two valuations directly? There's no "rule" that says this is wrong or not allowed, but it can be misleading to compare companies with dramatically different margins. Due to basic arithmetic, the 40% margin company will usually have a lower multiple - whether or not its actual value is lower. In this situation, we might consider screening based on margins and remove the outliers - you would never try to "normalize" the EBITDA multiples based on margins.

Walk me through how we might value an oil & gas company and how it's different from a "standard" company. You use the same methodologies, except: - You look at the industry-specific multiples like P/MCFE and P/NAV in addition to the more standard ones. - You need to project the prices of commodities like oil and natural gas, and also the company's reserves to determine its revenue and cash flows in future years. - Rather than a DCF, you use a NAV (Net Asset Value) model - it's similar, but everything flows from the company's RESERVES rather than simple revenue growth/EBITDA margin projections. In addition to all of the above, there are also some accounting complications with energy companies and you need to think about what a "proven" reserve is vs. what is more speculative.

Walk me through how we would value a REIT and how it differs from a "normal" company. Similar to energy, real estate is asset-intensive and a company's value depends on how much cash flow specific properties generate. - You look at Price/FFO (Funds from Operations) and Price/AFFO (Adjusted Funds From Operations), which add back Depreciation and subtract gains on property sales; NAV (Net Asset Value) is also important. - You VALUE PROPERTIES by dividing NOI (Net Operating Income = Property's Gross Income - Operating Expenses) by the CAPITALIZATION RATE (based on market data). - REPLACEMENT VALUATION is more common because you can actually estimate the cost of buying new land and building new properties. - A DCF is still a DCF, but it flows from specific properties and it might be useless depending on what kind of company you're valuing.
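Two of the calculations above lend themselves to a quick numerical check. The sketch below (Python, with all inputs hypothetical) reproduces the sum-of-the-parts arithmetic from the earlier example and applies the NOI / capitalization rate rule for valuing a single property; the 6% cap rate and the income figures are made-up illustrations, not data from any real company.

```python
# Hypothetical figures throughout -- a rough sketch, not a real valuation.

# Sum-of-the-parts: division EBITDA (in $M) and the median multiple chosen for it.
divisions = {
    "manufacturing": (100, 5),   # $100M EBITDA at 5x
    "entertainment": (50, 8),    # $50M EBITDA at 8x
    "consumer goods": (75, 4),   # $75M EBITDA at 4x
}
total_value = sum(ebitda * multiple for ebitda, multiple in divisions.values())
print(f"Sum-of-the-parts value: ${total_value}M")          # $1,200M, i.e. $1.2 billion

# Property valuation: NOI / capitalization rate.
gross_income, operating_expenses, cap_rate = 2_000_000, 800_000, 0.06
noi = gross_income - operating_expenses
print(f"Implied property value: ${noi / cap_rate:,.0f}")   # $20,000,000
```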
MRCS: matrix recovery-based communication-efficient compressive sampling on temporal-spatial data of dynamic-scale sparsity in large-scale environmental IoT networks

Zhonghu Xu1, Linjun Zhang1, Jinqi Shen1, Hao Zhou1, Xuefeng Liu2, Jiannong Cao3 & Kai Xing ORCID: orcid.org/0000-0002-3449-88421

In the past few years, a large variety of IoT applications has emerged alongside the fast proliferation of IoT devices (e.g., environment surveillance devices, wearable devices, city-wide NB-IoT devices). However, launching data collection from these massive numbers of IoT devices raises a challenge due to their limited computation, storage, bandwidth, and energy support. Existing solutions either rely on traditional data gathering methods that relay data from each node to the sink, which keep the data unaltered but suffer from costly communication, or compress the spatial data effectively in a proper basis in order to reduce the magnitude of data to be collected, which implicitly assumes the sparsity of the data and may therefore result in poor data recovery when that sparsity is violated. Note that these data collection approaches focus on either the fidelity or the magnitude of data: each problem can be solved well in isolation, but never both simultaneously. This paper presents a new attempt to tackle both problems at the same time, from theoretical design to practical experiments, and validates the design on real environmental datasets. Specifically, we exploit data correlation in both the temporal and spatial domains, then provide a cross-domain basis to collect data and a low-rank matrix recovery design to recover the data. To evaluate our method, we conduct an extensive experimental study with real datasets. The results indicate that the recovered data generally achieve an SNR 10 times (10 dB) better than the compressive sensing method, while the communication cost is kept the same.

As Internet-of-Things (IoT) applications have proliferated rapidly in recent years, the vastly distributed IoT devices and the resulting large volume of sensing data have attracted substantial research effort and triggered a wide range of applications, such as smart cities, transportation, and agriculture, owing to their capability of completing complex social and geographical sensing tasks. Such social and geographical sensing usually requires a large number of participants (usually IoT devices) to sense the surrounding environment, and the sensed data must be collected from these devices to a data sink because of their limited computation, storage, and energy support. Due to the scale of mass data generated in IoT networks, it is difficult to continuously gather the original data from the network, since such collection usually requires considerable communication and storage effort at intermediate nodes. Traditional ways of solving this problem include wavelet-based collaborative aggregation [1], cluster-based aggregation and compression [2, 3], and distributed source coding [4, 5]. All of them utilize the spatial correlation of readings among device nodes, but they may run into robustness issues when dealing with cross-domain (temporal and spatial) event readings and offer limited compression capacity. In recent years, it has been suggested that compressed sensing (CS) may benefit compression in data aggregation scenarios. It avoids introducing excessive computational, communication, and storage overheads at each device. Therefore, it fits the capacity limitations of each sensing device and is viewed as a promising technology for data gathering in IoT networks.
However, compressive sensing is based on constant sparsity, which means a stable/fixed transform basis (though it need not be known explicitly) is required according to prior information about the sensed data. Such a situation hardly holds in real cases, and data with changing sparsity would impact the recovery quality significantly. In order to address this problem, Wang et al. [27] proposed an adaptive data gathering scheme based on CS. The "adaptive" here has a twofold meaning: for one, the CS reconstruction becomes adaptive to the sensed data, which is accomplished by the adjustment of autoregressive (AR) parameters in the objective function; for the other, the number of measurements required for the sensed data is tuned adaptively according to the variation of the data. To further deal with the varying sensing data, Wang et al. suggested that each time the reconstruction is accomplished at the sink node, the result is approximately evaluated and forms a feedback to the device nodes. The intuition here is that the temporal correlation among historically reconstructed data could help estimate the current reconstruction result at the sink node. It is notable that compression of original readings with CS-based methods [6-8] or matrix completion-based methods [9, 10] will reduce the quality of recovered data at the sink node, while routing the raw data to the sink to preserve fidelity brings considerable overhead. There is a conflict between high compression ratio and high fidelity. Data gathering and recovery with event readings is another problem studied in compressive sensing-based data gathering. A well-known method to tackle this problem is to decompose the data d into d = dn + dα, assuming dα is sparse in the time domain since the abnormal readings are usually sporadic. However, when environment changes occur, a significant amount of readings may begin to change, which would further make dα not sparse in the spatial domain. Besides, though dn is sparse in the spatial domain under a proper basis and dα is sparse in the time domain, they are not necessarily sparse at the same time under the same basis. Therefore, it is doubtful that the sparsity of d is preserved across the time and space domains. Furthermore, the proper basis may vary in accordance with different events. In this paper, we consider the temporal and spatial correlation of the sensed data and provide a low-rank matrix recovery-based data aggregation design, which could compress the data and address the event data gathering problem at the same time. Compared with the existing work on data gathering in device networks, our approach has the following contributions: In IoT sensing and data aggregation scenarios, note that either the fidelity problem or the magnitude problem can be solved well, but never both simultaneously. This paper presents a new attempt to tackle both problems at the same time in large-scale IoT networks with diverse time/space-scale events and reduce global-scale communication cost without introducing intensive computation or complicated transmissions at each IoT device. The experiments of this paper on a real environmental IoT sensing network show that constant sparsity hardly holds in real cases with diverse time/space-scale events, while the low-rank property may still hold. This observation may provide a fresh vision for research in both compressive sampling applications and IoT sensing and data aggregation scenarios.
This paper further generalizes the low-rank-based optimization design to a nuclear norm-based optimization design, to make the proposed approach more general and robust. Theoretical analysis indicates that our matrix recovery-based method is robust over diverse time/space-scale event readings. The extensive experimental results show that event readings are almost kept unaltered under the proposed design and our method outperforms typical compressive sensing [11] in terms of SNR by 10 times (10 db) generally in the meanwhile. This paper is organized as follows. Section 2 introduces the preliminaries and the network model. Section 3 proposes the data gathering and recovery design. Section 4 analyzes communication overhead of the proposed method with comparison to compressive sensing. Section 5 presents the experimental results with real environmental IoT sensing datasets from [12]. Then, we summarize related work in Section 6. Finally, we give out the discussion in Section 7 and conclude this paper in Section 8. Preliminaries and assumptions Matrix recovery Let X denote the original data, where X is a M×N matrix and is no longer to be sparse even in a proper basis (different from compressive sensing). Let rank(X)=r, where r is assumed to be much smaller than min{M,N}. According to [13], in order to recover X from a linear combinations of Xij, the number of the combinations needed is no larger than cr(M+N), where c is a constant. Let A denote a linear map from RM×N space to Rp space; we have the following optimization problem: $$ \underset{\boldsymbol{X}}{\text{min}} ||\boldsymbol{X}||_{*} \quad \text{s.t.} A\boldsymbol{X} = \boldsymbol{b} $$ where b is the vector, and ||·||∗ denotes the nuclear norm (the sum of σii in SVD decomposition). Note that we replace the rank (the number of the nonzero singular values) with the nuclear norm (the sum of the singular values), which makes the problem become a convex optimization problem and be solvable if p≥Cn5/4rlogn (where n=max(M,N) and C is a constant) [13]. Considering noisy measurements, we further modify the problem in the following format: $$ \underset{\boldsymbol{X}}{\text{min}}\quad \mu ||\boldsymbol{X}||_{*}+ ||A\boldsymbol{X-b}||_{L_{2}} \quad \text{s.t.} A(\boldsymbol{X})=\boldsymbol{b} $$ Network model We consider a participatory IoT sensing network in which a base station (BS) continuously collects data from participatory IoT devices. Due to the scale of mass data and the limited ability in computation, storage, bandwidth, and energy at each device, these devices need to compress data with light computation overhead before data transmission. Suppose there are N resource-constrained IoT devices in the network, whose positions can be determined after deployment via a self-positioning mechanism such as those proposed in [14–16]. Then, the data collection path could be predetermined by the base station and be aware by each device. We further assume that the clocks of all nodes are loosely synchronized [17–19]. In particular, t1,t2,⋯,ti,⋯,tj,⋯ are used to represent the time instants in the network, where ti<tj given i<j, i,j∈Z+. Every time instance, a device generates a reading. \(\mathcal {N}(u)\) is denoted as the n nearest neighbors of an open neighborhood of u. Note that \(\mathcal {N}(u)\) could be the one-hop neighborhood or any neighboring area containing more nodes. In this paper, we assume that participatory IoT devices follow a semi-honest model [20]. 
Specifically speaking, they are honest and follow the protocol properly except that they may record intermediate results. We assume that the messages are securely transmitted within the network, which can be achieved via conventional symmetric encryption and key distribution schemes. Temporal-spatial compressive sampling design Given M time instances and N devices in the network, the original data in the network can be represented by a m×n matrix X, where each row represents the readings in the network at a time instant and each column represents the readings of an IoT device at a different time instant. Xij(1≤i≤M,1≤j≤N) denotes the reading of each node j at time instant ti. Let A denote a linear map from RM×N space to Rp space and vec denote a linear map to transform a matrix to a vector by overlaying one column on another; we have $$A(\boldsymbol{X})=\boldsymbol{\Phi} \cdot vec(\boldsymbol{X}) $$ where Φ is a p×MN matrix. Let Φ be a random matrix satisfying the RIP condition [21]. Before deployment, each device is equipped with a pseudo-random number generator. Once the device produces a reading at some time instance, the pseudo-random number generator will generate a random vector of length p with the combination of current time instance and the device's ID as random seed. The elements of this random vector is i.i.d. sampled from a Gaussian distribution of mean 0 and variance 1/p. Note that this pseudo-random number generation at each device could be reproduced by the base station by using the same generator. p, the dimension of A, means the number of elements (namely the combinations) to recover X. Typically, p should be not less than cr(3m+3n−5r) [13]. Therefore, the problem can be formulated as the following optimization problem: $$ \underset{\boldsymbol{X} \in R^{M\times N}}{\text{min}}\quad \frac{1}{2} ||A(\boldsymbol{X})-b||^{2}_{F}+ \mu ||\boldsymbol{X}||_{*} \quad \text{s.t.} A{\boldsymbol{X}}=\boldsymbol{b} $$ where the first part is for noise and the second part is for low rank. In IoT networks, devices may produce erroneous readings due to noisy environment or error-prone hardware. The erroneous readings usually occur at sporadic time and locations and thus may have few impacts on the data sparsity of the network. Thus, outlier/abnormal reading recovery/detection could still work in compressive sensing-based data gathering. However, device measurements on the same event usually have strong inter-correlations and geographically concentrated in a group of devices in close proximity. Such events may spread in diverse time and space scale and result in dynamic sparsity of the data, which would further violate the assumption of constant sparsity in compressive sensing and thus lead to poor recovery. Given N M−dim signal vectors generated from N devices within M time instances, a good basis to make these vectors sparse may not be easy to find. Interestingly, [22] has analyzed different sets of data from two independent device network testbeds. The results indicate that the N×Mdim data matrix may be approximately low rank under various scenarios under investigation. Therefore, such N×M temporal-spatial signal gathering problem with diverse scale event data that cannot be well addressed by CS method could be tackled under the low-rank frameworkFootnote 1. 
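As a concrete (if simplified) illustration of the sampling design just described, the sketch below builds the per-device, per-instant Gaussian vectors from a reproducible seed and applies the resulting linear map A(X) = Φ·vec(X). It assumes NumPy; the network sizes, the assumed rank used to size p, and the way the (time, ID) pair is folded into a seed are all illustrative choices, not part of the original design.

```python
import numpy as np

M, N = 100, 55            # time instants and devices (illustrative sizes)
r_assumed, c = 3, 1.5     # assumed rank and constant, so that p >= c*r*(3M + 3N - 5r)
p = int(c * r_assumed * (3 * M + 3 * N - 5 * r_assumed))

def node_vector(t, node_id):
    """Length-p pseudo-random vector for device node_id at time instant t.

    Entries are i.i.d. zero-mean Gaussian with variance 1/p; the seed combines the
    time instant and the device ID (the exact seeding rule here is an assumption),
    so the base station can regenerate the same vector offline."""
    rng = np.random.default_rng(hash((t, node_id)) & 0xFFFFFFFF)
    return rng.normal(0.0, 1.0 / np.sqrt(p), size=p)

def apply_A(X):
    """A(X) = Phi . vec(X); the column of Phi for entry (t, j) is node_vector(t, j)."""
    b = np.zeros(p)
    for t in range(M):
        for j in range(N):
            b += X[t, j] * node_vector(t, j)
    return b

# Example: measure a synthetic low-rank data matrix.
X_true = np.random.randn(M, r_assumed) @ np.random.randn(r_assumed, N)
b = apply_A(X_true)       # what the sink ultimately needs in order to recover X
```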
Path along compressive collection
In this paper, we provide a generalization of current data gathering methods to temporal-spatial signals with diverse-scale events, during which device readings are compressively collected along the relay paths, e.g., a chain-type or mesh topology, to the sink. At each device sj, given the reading produced by sj at time instance t1, sj generates a random vector Φ1j of length p, with time instance t1 and its ID sj as the seed, and computes the vector X1jΦ1j. At the next time instance t2, sj generates a random vector Φ2j, computes X2jΦ2j, and adds it to the previous vector X1jΦ1j. At time instance tM, sj computes XMjΦMj and thus holds the summation \(S_{j}=\sum \limits ^{M}_{i=1} \boldsymbol {X}_{ij}\Phi _{ij}\). In the network, each device sj continuously updates its vector sum Sj until time instance tM. After that, device sj relays the vector Sj to the next device si. Then, si adds Sj to its own vector sum Si and forwards Si+Sj to the next device. After the collection along the relay paths, the sink receives \(\boldsymbol{b}=\sum \limits ^{N}_{j=1}\sum \limits ^{M}_{i=1} \boldsymbol {X}_{ij}\Phi _{ij}\). During data gathering, each node sends out only one vector of fixed length along the collection path, regardless of its distance to the sink (the property of the fixed-length vector will be discussed in Section 4). Considering event data, recall that each row of the data matrix X (the signal in the network) represents the data acquired at one time instance from all devices, and each column of X represents the data obtained from one device at different time instances. Outlier readings could come from internal errors at error-prone devices, for example, noise or systematic errors, or be caused by external events due to environmental changes. The former (internal errors) are often sparse in the spatial domain, while the latter (event readings) are usually low rank in the time domain. Each stays sparse or low rank in its own domain, but together they may lead to dynamic changes of data sparsity. Let the matrix X be decomposed into two parts, the normal one and the abnormal one: X=Xn+Xs. We then have: $$ \begin{aligned} A(\boldsymbol{X})&=A\left(\boldsymbol{X}_{n}+\boldsymbol{X}_{s}\right)\\ &=A\cdot[\!I,I]\left(\boldsymbol{X}_{n},\boldsymbol{X}_{s}\right)^{T}\\ &=[\!A,A]\left[\boldsymbol{X}_{n},\boldsymbol{X}_{s}\right]^{T}\\ \end{aligned} $$ Based on Eq. 4, [A,A] is a new linear map, and the formulated problem can be solved in the framework of matrix recovery. That is, given the observation vector y∈Rp, the original data matrix X∗ can be recovered as an element of R2M×N.
A basic design of data recovery
This section generalizes the data recovery method from compressive sensing to the realm of matrix recovery. The advantages of such an extension are twofold: (1) it exploits the data correlation in both the time and space domains, and (2) the diverse scale of event data, which would undermine the power of the CS method due to sparsity changes, can be tackled with the proposed method. According to Eqs. 3 and 4, the general form of the problem can be expressed with the following minimization problem: $$ \underset{\boldsymbol{X} \in R^{M\times N}}{\text{min}}\quad \frac{1}{2} ||A(\boldsymbol{X})-\boldsymbol{b}||^{2}_{F}+ \mu ||\boldsymbol{X}||_{*} \quad \text{s.t.} A(\boldsymbol{X})=\boldsymbol{b} $$ where A(X)=Φ T(X), with T(·) the transformation of a matrix into a vector by overlaying one column on another, and Φ is a p×MN random matrix. Note that Eq. 3 is the Lasso form of Eq. 2.
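The path-along collection described above can be mimicked in a few lines: each device keeps a single length-p partial sum, adds what its upstream neighbor hands over, and forwards one fixed-length vector, so the sink ends up with b = Σj Σi Xij Φij. This is only a sketch under assumed data structures (readings and random vectors are passed in as plain arrays); in the real protocol each device of course accumulates its own sum incrementally over time before relaying.

```python
import numpy as np

def collect_along_path(readings, vectors):
    """Sketch of in-network aggregation along one relay path toward the sink.

    readings[j][i] is X_ij, the reading of the j-th device on the path at time t_i;
    vectors[j][i] is the length-p pseudo-random vector Phi_ij that device generated.
    Every device forwards exactly one length-p vector, regardless of its distance
    from the sink."""
    p = len(vectors[0][0])
    relayed = np.zeros(p)                                  # handed over by the previous device
    for x_j, phi_j in zip(readings, vectors):
        s_j = sum(x * phi for x, phi in zip(x_j, phi_j))   # S_j = sum_i X_ij * Phi_ij
        relayed = relayed + s_j                            # add the local sum, forward downstream
    return relayed                                         # received at the sink as b
```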
In relaxed conditions, the solution of Eq. 3 coincides with the solution of Eq. 2 [23]. Therefore, we consider Eq. 3 (Eqs. 3 and 5 are essentially the same) instead of the original problem in Eq. 2. This problem could be further transformed into the following form: $$ \underset{\boldsymbol{X} \in R^{M\times N}}{\text{min}}\quad F(\boldsymbol{X})\triangleq f(\boldsymbol{X}) +P(\boldsymbol{X}) $$ where \(f(\boldsymbol{X})=\frac {1}{2} ||A(\boldsymbol {X})-\boldsymbol{b}||^{2}_{F}\) and \(P(\boldsymbol{X})=\mu ||\boldsymbol{X}||_{*}\). Note that both parts are convex, but only the first part is differentiable while the second part may not be. Then, we have $$\nabla f(\boldsymbol{X})= A^{*}(A(\boldsymbol{X})-\boldsymbol{b}) $$ where A∗ is the dual operator of A. Since \(A^{*}(\boldsymbol{y})=\Phi^{T}\boldsymbol{y}\) (reshaped into an M×N matrix), we have $$\nabla f(\boldsymbol{X})= A^{*}(A(\boldsymbol{X})-\boldsymbol{b})=\Phi^{T}(\Phi\, T(\boldsymbol{X})-\boldsymbol{b}) $$ Because ∇f is linear, it is Lipschitz continuous. Hence there exists a positive constant Lf satisfying the following inequality: $$||\nabla f(\boldsymbol{X})-\nabla f(\boldsymbol{Y})||_{F}\leq L_{f} ||\boldsymbol{X}-\boldsymbol{Y}||_{F}\quad\forall \boldsymbol{X},\boldsymbol{Y}\in R^{M\times N} $$ Lemma 1 A rough estimate of Lf is $$\sqrt{MN\cdot \underset{i}{\text{max}}\, \left\{\left\|\left(\Phi^{T}\Phi\right)_{i}\right\|^{2}_{2}\right\}}, $$ where \((\Phi ^{T}\Phi)_{i}\) denotes the ith column of the matrix ΦTΦ. Indeed, \(||\nabla f(\boldsymbol {X})-\nabla f(\boldsymbol {Y})||_{F}^{2}=||\Phi ^{T}\Phi\, T(\boldsymbol {X}-\boldsymbol {Y})||^{2}_{2}\). Set \(\Phi ^{T}\Phi = \left (\begin {array}{ccc} a_{11} & \cdots & a_{1,MN} \\ \vdots & & \vdots \\ a_{MN,1} &\cdots &a_{MN,MN} \end {array}\right)\), \(T(\boldsymbol{X}-\boldsymbol{Y})=\left (\begin {array}{c} x_{1} \\ \vdots \\ x_{MN} \end {array}\right)\), and \(h= \underset {i}{\text {max}}\; \left \{\left\|\left(\Phi ^{T}\Phi \right)_{i}\right\|^{2}_{2}\right\}\); then $$\begin{aligned} ||\Phi^{T} \Phi\, T(\boldsymbol{X}-\boldsymbol{Y}) ||^{2}_{2}&=\sum\limits^{MN}_{j=1}\left(\sum\limits^{MN}_{i=1} a_{ji}x_{i}\right)^{2}\\ &\leq h\left(|x_{1}|+\ldots+|x_{MN}|\right)^{2}\\ &\leq MN \cdot h\left(x_{1}^{2}+\ldots+x_{MN}^{2}\right)\\ &=MNh||\boldsymbol{X}-\boldsymbol{Y}||^{2}_{F} \end{aligned} $$ Thus, \(L_{f}\leq \sqrt {MNh}\). □ A much smaller Lf can often be found in real scenarios and may help the iteration converge quickly. The experimental results of this paper show that Lf could be much smaller than the rough estimate above, given a matrix sampled from a Gaussian distribution. Consider the following quadratic approximation of F(·) of Eq. 6 at Y: $$ \begin{aligned} Q_{\tau}(\boldsymbol{X},\boldsymbol{Y})&\triangleq f(\boldsymbol{Y})+\langle\nabla f(\boldsymbol{Y}),\boldsymbol{X}-\boldsymbol{Y}\rangle\\ &\quad+\frac{\tau}{2}||\boldsymbol{X}-\boldsymbol{Y}||^{2}_{F} +P(\boldsymbol{X})\\ &= \frac{\tau}{2}||\boldsymbol{X}-\boldsymbol{G}||^{2}_{F}+P(\boldsymbol{X})+f(\boldsymbol{Y})\\ &\quad-\frac{1}{2\tau}||\nabla f(\boldsymbol{Y})||^{2}_{F}\\ \end{aligned} $$ where τ>0 is a given parameter and \(G=Y-\tau^{-1}\nabla f(Y)\). Since the above function of X is strongly convex, it has a unique global minimizer. Consider the minimization problem $$ \underset{\boldsymbol{X}\in R^{M\times N}}{\text{min}}\quad \frac{\tau}{2}||\boldsymbol{X}-\boldsymbol{G}||^{2}_{F}+\mu||\boldsymbol{X}||_{*} $$ where G∈RM×N. Note that if \(G=Y-\tau^{-1}A^{*}(A(Y)-b)\), then the above minimization problem is a special case of Eq. 7 with \(f(\boldsymbol{X})=\frac {1}{2}||A(\boldsymbol{X})-\boldsymbol{b}||^{2}_{2}\) and P(X)=μ||X||∗ when we ignore the constant term. Let Sτ(G) denote the minimizer of this problem. According to [24], we further have $$S_{\tau}(\boldsymbol{G})=U\cdot diag((\delta-\mu/\tau)_{+})\cdot V^{T} $$ given the SVD decomposition \(G=Y-\tau^{-1}A^{*}(A(Y)-b)=U\cdot diag(\delta)\cdot V^{T}\). Here, for a given vector x∈Rp, we let x+=max{x,0}, where the maximum is taken component-wise. Based on the accelerated proximal gradient (APG) design given in [13, 24], we further denote t0=t1=1 and τk=Lf and {Xk},{Yk},{tk} as the sequence generated by APG.
For i=1,2,3,⋯, we have Step 1: Set \(Y_{k}=X_{k}+\frac {t^{k-1}-1}{t^{k}}\left (X_{k}-X_{k-1}\right)\) Step 2: Set Gk=Yk−(τk)−1A∗(A(Yk)−b). Compute \(S_{\tau _{k}}(G_{k})\) from the SVD of Gk Step 3: Set \(X^{k+1}=S_{\tau _{k}}(G_{k})\) Step 4: Set \(t_{k+1}=\frac {1+\sqrt {1+4(t_{k})^{2}}}{2}\) For any μ>0, the optimal solution X∗ of Eq. 3 is bounded according to [13, 24]. And ||X||F<χ where $$ \chi = \left\{ \begin{array}{ll} min\left\{||b||^{2}_{2}/(2\mu), ||X_{LS}||_{*}\right\} & \text{if A is surjective}\\ ||b||^{2}_{2}/(2\mu) & \text{Otherwise} \end{array} \right. $$ with XLS=A∗(AA∗)−1b Based on this lemma, we could reach a deterministic estimation of the procedure and speed of convergence of data recovery. Let {Xk},{Yk},{tk} be the sequence generated by APG. Then, for any k≥1, we could have $$F(X_{k})-F(X^{*})\leq \frac{2L_{f}||X^{*}-X_{0}||^{2}_{F}}{(k+1)^{2}} $$ $$F(X_{k})-F(X^{*})\leq \varepsilon \quad \text{if}\quad k\geq \sqrt{\frac{2L_{f}}{\varepsilon}}(||X_{0}||_{F}+\chi)-1. $$ Let δ(x) denote dist(0,∂(f(x))+μ||X||∗), where δ(x) represents the convergence speed of data recovery. It is easy to see that the process naturally stops when δ(x) is small enough. Since ||X||∗ is not differential, it may not be easy to compute δ(x). However, there is a good upper bound for δ(x) provided by APG designs [24]. $$\begin{aligned} \tau_{k}(G_{k}-X_{k+1})&=\tau_{k}(Y_{k}-X_{k+1})-\nabla f(Y_{k})\\ &=\tau_{k} (Y_{k}-X_{k+1})\\ &\quad-\Phi^{T}(\Phi\cdot vec(Y_{k})-b) \end{aligned} $$ Note that $$\partial (\mu ||X_{k+1}||_{*}) \geq \tau_{k}(G_{k}-X_{k+1}) $$ $$\begin{aligned} S_{k+1}&\triangleq \tau_{k}(Y_{k}-X_{k+1})+\nabla f(X_{k+1})-\nabla f(Y_{k})\\ &= \tau_{k}(Y_{k}-X_{k+1}) +A^{*}(A(X_{k+1})-A(Y_{k}))\\ &=\tau_{k}(Y_{k}-X_{k+1})+\Phi^{T}(\Phi \cdot T(Y_{k}-X_{k+1})) \end{aligned} $$ we could have $$S_{k+1} \in \partial (f(X_{k+1})+\mu ||X_{k+1}||_{*}) $$ Therefore, we have δ(Xk+1)≤||Sk+1||. According to the derivation above, the stopping condition could be given as follows, $$\hspace{45pt} \frac{||S_{k+1}||_{F}}{\tau_{k} \text{max}\{1,||X_{k+1}||_{F}\}}\leq Tol $$ where Tol is a tolerance defined by user, usually moderately small threshold. Advanced design of data recovery This section provides a generalization of previous low-rank-based matrix recovery design to a nuclear-form-based design. Suppose X0 denotes an M×N matrix with rank r given the singular value decomposition (SVD) UΣV∗, where M≤N, Σ is r×r, U is M×r, and V is N×r. Let subspace T denote the set of matrices of the form UY∗+XV∗, where X (Y) is an arbitrary M×r (N×r) matrix. UY∗ and XV∗ are both M×N matrices. The span of UY∗ and XV∗ have dimension of Mr and Nr, respectively, and the intersection of two spans has dimension of r2. Therefore, $$d_{T} = dim(T) = r(M + N-r) $$ Let T⊥ denote the subspace of matrices spanned by the family (xy∗) and x and y denote arbitrary vectors orthogonal to U and V, respectively. Note that the spectral norm ||·|| is dual to the nuclear norm. We have the subdifferential of the nuclear norm at X0 $$\partial ||\boldsymbol{X}_{0}||_{*}=\{\boldsymbol{Z} : P_{T}(\boldsymbol{Z})=\boldsymbol{UV^{*}} and ||P_{T^{\perp}}(\boldsymbol{Z}) ||\leq 1\} $$ where UV∗ is equal to \(\sqrt r\) under the Euclidean norm. Given X0, an arbitrary M×N rank-r-matrix, and ||·||, the matrix nuclear norm, considering a Gaussian mapping Φ with m≤c·r(3M+3N−5r) for some c>1, the recovery is exact with probability at least 1−−2e(1−c)n/8, where n=max(M,N) [13]. 
Here the Gaussian mapping Φ is defined by M×N random matrices Φi with i.i.d., zero-mean Gaussian entries of variance 1/p, acting as the linear operator \([\Phi (Z)]_{i} = \text {tr}(\Phi _{i}^{*}\cdot Z)\). By stacking the columns of Z on top of one another, Φ can equivalently be written as a p×(MN)-dimensional matrix. Then, we have the dual multiplier $$Y = \Phi^{*}\cdot\Phi_{T} (\Phi^{*}_{T}\cdot\Phi_{T})^{-1}(UV^{*}) $$ According to this theorem, each device sends out only one vector of fixed length cr(3M+3N−5r) along the collection path at the end of time instance tM, and the original data are recovered with overwhelming probability. Given p<(M+N−r)r, we can always find two distinct matrices Z and Z0 of rank at most r with the property A(Z)=A(Z0), no matter what A is. Let U∈RM×r, V∈RN×r be two matrices with orthogonal columns, and consider the linear space of matrices $$T = \left\{U\boldsymbol{Y}^{*}+\boldsymbol{X} V^{*} : \boldsymbol{X} \in R^{M\times r}, \boldsymbol{Y} \in R^{N\times r}\right\} $$ Note that the dimension of T is r(M+N−r); if p<(M+N−r)r, there exists a nonzero Z=UY∗+XV∗ in T such that Φ(Z)=0, i.e., Φ(UY∗)=Φ(−XV∗) for two distinct matrices of rank at most r. Interestingly, and different from the results in compressive sensing, the number of measurements required is within a constant of the theoretical lower limit, with no extra log factor. Compare this with the compressive sensing (CS)-based data gathering design, in which the length of the vector sent by each device at each time instance is O(logN). Based on recent results on the bounds for low-complexity recovery models [25], the total amount of data collected during all M time instances will be O(MN log(N)) in compressive sensing. When M is larger than O(N/ log(N)), the proposed design will exhibit an advantage in communication overhead. When M and N have the same order of magnitude, the proposed method has similar communication overhead compared with the CS-based method. Before estimating the recovery error and its upper bound, we first introduce the restricted isometry property (RIP): For r=1,2,…,n, the isometry constant δr of A is the smallest quantity such that $$(1-\delta_{r}) || X ||^{2}_{F}\leq || A(X) ||^{2}_{2}\leq (1 +\delta_{r}) || X ||^{2}_{F} $$ holds for all matrices of rank at most r. If δr is bounded by a sufficiently small constant between 0 and 1, we say that A satisfies the RIP at rank r. Suppose X∗ is the solution of the recovery method. Given noise z satisfying ||Φ∗(z)||≤ε and ||ΦT(z)||∞≤η for some ε≤η, if \(\delta_{r}<\frac{1}{3}\), then ||X−X∗||F ≤ C(ε+η) for some constant C.
STCDG may suffer a much larger overhead compared with our method under large-scale IoTs. According to the analysis, the larger the sampling period M at each device, the better the communication overhead efficiency the proposed method has. To evaluate data recovery quality and robustness of the proposed method, we conduct experiments on both artificial datasets and real sensor datasets. Artificial datasets are constructed by a 100×100 matrix representing a random deployed sensor network of 100 nodes within a 100-h duration. The real sensor datasets are extracted from CitySee project [12], which has deployed a large-scale wireless sensor network consisting of multiple sub-networks in a urban area in Wuxi, China. Specially, we compare the proposed method with a compressive sensing (CS)-based method proposed in [11] on the temperature and humidity data generated from 55 sensors in 115 h. The CS-based method proposed in [11] generates sampling matrix randomly and keeps original readings sparse in DCT domain. To detect abnormal readings, [11] decomposes the original reading d=d0+ds where d0 contains the normal readings and ds contains the deviated values of abnormal readings and constructs a sparse basis for d=[d0,ds]. The sink reconstructs sensor readings with linear programming (LP) techniques [26]. We generate sampling matrix with the same distribution in CS-based method and our proposed method with sampling rate about 47% on original readings (115×55 matrix). In data gathering and recovery problem, event readings usually result in dynamic and diverse sparsity changes in both time and space domains, which may seriously undermine the foundation of CS-based method during environment changes. Prior works [6, 27–30] have made an attempt to tackle data recovery with small-scale event readings with CS-based methods, e.g., events reported from several devices brought by device accidents or small-range environment change. However, when events spread in large range and various time scales, it is doubtful whether CS-based data gathering method could deal with it or not. In this paper, we conduct the experiments to study the data recovery quality on the data with both small-range events and large-range events on both CS-based method and the proposed method. Recovery quality and robustness study on data with large-scale events As shown in Figs. 1 and 2, the proposed method achieves high recovery quality with large-range event in Figs. 1c and 2c. Event readings are recovered almost exactly the same as the original data in spatial domain. As shown in Fig. 3b, c, although large-range event leads to dynamic and diverse scale of sparsity changes and brings more challenge to data recovery, the proposed method generally achieves about 10 db better recovery quality than that of CS-base method. We further confirm the observation in (1) snapshot in spatial domain and time domain of humidity data with large-range event recovered by the proposed method and CS-based method at the 5th, 25th, and 50th nodes in Figs. 4, 5, and 6, respectively, and (2) snapshot in spatial domain of temperature data with large-range event at 115 h recovered by MR method and CS-based method in Fig. 7. 3D contour map of humidity data with a large-range event. a Original humidity data with a large-range event. b CS-recovered humidity data with a large-range event. c MR-recovered humidity data with a large-range event 3D contour map of temperature data with a large-range event at 115 h. a Original temperature data with a large-range event. 
b CS-recovered temperature data with a large-range event. c MR-recovered temperature data with a large-range event SNR comparison of MR and CS methods among different datasets. a SNR of MR- and CS- recovered temperature data. b SNR of MR- and CS-recovered temperature data with a small-range event. c SNR of MR- and CS-recovered temperature data with a large-range event Comparison of MR and CS methods on humidity data with large-range event at the 5th node. a Data recovered by MR at the 5th node. b Data recovered by CS at the 5th node Comparison of MR and CS methods on humidity data with large-range event at the 25th node. a Data recovered by MR at the 25th node. b Data recovered by CS at the 25th node Comparison of MR and CS methods on temperature data with large-range event. a Data recovered by MR. b Data recovered by CS In the meanwhile, CS-based method could not recover the data as shown in Figs. 1b and 2b. The recovered data in the event area are almost overwhelmed in the noise due to the changes of the sparsity foundation of CS-based method. And the recovery quality of CS method in other areas (except event area) is affected by event readings due to the violation of static sparsity. Therefore, CS-based method has limited recovery capability and less robustness against large-scale event compared with the proposed method. Recovery quality and robustness study on data with small-scale events As shown in Fig. 8, the humidity data with small-range event is plotted in 3D contour maps. It is obvious that the proposed method recovers the data in high quality. Event readings are easy to observe by the small hill in the map of Fig. 8a, c, while CS-based method can only recover the data to some degree as shown in Fig. 8b, since event readings are recovered in low quality as the recovered data in the event area are almost overwhelmed in the noise. What is worse is that some areas are obviously altered due to the change of sparsity. It is easy to find that CS-based method provides much worse recovery robustness against small-scale event compared with the proposed method. 3D contour map of humidity data with a small-range event. a Original humidity data with a small-range event. b CS-recovered humidity data with a small-range event. c MR-recovered humidity data with a small-range event As shown in Fig. 3b, our method generally achieves about 10 db better recovery quality than CS-based method under small-scale events. We further confirm the observation in comparison of snapshot in time domain of humidity data with small-range event recovered by the proposed method with that of CS-based method at arbitrary mode respectively in Fig. 9. Comparison of MR and CS methods on humidity data with small-range event. a Humidity data recovered by MR at an arbitrary node. b Humidity data recovered by CS at an arbitrary node Recovery quality and robustness study on data without events The temperature data and humidity data (original, recovered by CS-based method and recovered by the proposed method) are plotted in a 3D contour map in Figs. 10 and 11. As shown in Figs. 10c and 11c compared with Figs. 10a and 11a, the contour map of data recovered by the proposed method makes little change compared with the original data. As artificial datasets are generated randomly, the 2D contour map in Fig. 12 gives the comparison more obviously. We also plot the temperature data and humidity data in 2D contour maps as shown in Figs. 13 and 14 to make the result more clear. 
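For the quantitative comparisons that follow, recovery quality is summarized by per-node SNR and by RMSE in the time and spatial domains (the formal definitions appear in the next paragraph). A minimal sketch of how these metrics might be computed, assuming NumPy and the convention used throughout the paper that rows of the data matrix index time instants and columns index devices:

```python
import numpy as np

def snr_per_node(X, X_hat):
    """SNR_j = -20 * log10( ||X_j - X_hat_j||_2 / ||X_j||_2 ) for each node j (columns)."""
    err = np.linalg.norm(X - X_hat, axis=0)
    sig = np.linalg.norm(X, axis=0)
    return -20.0 * np.log10(err / sig)

def rmse_per_node(X, X_hat):
    """RMSE over time for each node (column-wise, i.e., time domain)."""
    return np.sqrt(np.mean((X_hat - X) ** 2, axis=0))

def rmse_per_slot(X, X_hat):
    """RMSE over nodes for each time slot (row-wise, i.e., spatial domain)."""
    return np.sqrt(np.mean((X_hat - X) ** 2, axis=1))
```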
The result is further confirmed on the study of recovery quality quantitatively measured in SNR. Assuming Xj∈RM denoting the reading of the jth node and \(\widehat {X}_{j}\) denoting the recovered reading respectively, SNR of node j in time domain is defined as \({SNR}_{j}=-20\log _{10}\frac {||X_{j}-\widehat {X_{j}}||_{2}}{||X_{j}||_{2}}\). It is shown that our proposed method achieves about 20 db gain in the recovered data in Fig. 3a. We also measure the recovery performance by the root mean square error (RMSE). In time domain, the RMSE of node j is \({RMSE}_{j}=\sqrt {\frac {\sum _{i=1}^{M}{\left (\widehat {X_{ij}}-X_{ij}\right)^{2}}}{M}}\). In spatial domain, the RMSE of time slot i is \({RMSE}_{t_{i}}=\sqrt {\frac {\sum _{j=1}^{N}{\left (\widehat {X_{ij}}-X_{ij}\right)^{2}}}{N}}\). RMSE measurement on temperature and humidity data is shown in Figs. 15 and 16 which indicates that our method brings less error than CS method. 3D contour map of temperature data. a Original temperature data. b CS-recovered temperature data. c MR-recovered temperature data 3D contour map of humidity data. a Original humidity data, b CS-recovered humidity data. c MR-recovered humidity data 2D contour map of artificial data. a Original artificial data. b CS-recovered artificial data. c MR-recovered artificial data 2D contour map of artificial data. a Original contour data. b CS-recovered contour data. c MR-recovered contour data RMSE comparison of MR and CS methods on temperature data. a RMSE in spatial domain. b RMSE in time domain RMSE comparison of MR and CS methods on humidity data. a RMSE in spatial domain. b RMSE in time domain As shown in Fig. 10b, compressive sensing-based method can recover the temperature data in some degree. It is interesting to observe that CS-based method could hardly keep recovery quality stable, while the proposed method can achieve much better recovery quality as well as robustness. It can be further confirmed with the comparison of the SNR result of both methods at each sensor. The proposed method outperforms CS-based method in SNR with about 10 times (10 db) as shown in Fig. 3a. In device networks, data gathering usually results in considerable communication overhead. Traditional approaches dealing with such problem include distributed source coding [31, 32], in-network collaborative wavelet transform [33–35], holistic aggregation [36], and clustered data aggregation and compression [37, 38]. Though these approaches to some extent utilize the spatial correlation of device readings, they lack the ability to support the recovery of diverse-scale events. In the past decade, compressive sensing (CS) has gained increasing attention due to its capacity of sparse signal sampling and reconstruction [39, 40] and triggered a large variety of applications, ranging from image processing to gathering geophysics data [41]. In terms of data gathering, various CS-based approaches have been proposed to the decentralized data compression and gathering of networked devices, aiming to efficiently collect data among a vast number of distributed nodes [6, 27, 28]. Liu et al. [7] present a novel compressive data collection scheme for IoT sensing networks adopting a power-law decaying data model verified by real data sets. Zheng et al. [8] propose another method handling with data gathering in IoT sensing networks by random walk algorithm. Xie and Jia [42] develop a clustering method that uses hybrid CS for device networks reducing the number of transmissions significantly. Li et al. 
[43] apply compressive sensing technique into data sampling and acquisition in IoT sensing networks and Internet of Things (IoT). Mamaghanian et al. [44] propose the potential of the compressed sensing for signal acquisition and compression in low complexity ECG data in wireless body device networks (WBSN). Zhang et al. [29] propose a compressive sensing-based approach for sparse target counting and positioning in IoT sensing networks. Tian and Giannakis [45] utilize compressed sensing technique for the coarse sensing task of spectrum hole identification. In addition, there are several papers researching in CS for device network focusing on throughput, routing, video streaming processing, and sparse event detection in [30, 46–48]. Cheng et al. [49] focus on dealing with continuous sensed data. Extracting kernel or dominant dataset from big sensory data in WSN provides another compressing method in [50, 51]. In recent years, low-rank matrix recovery (LRMR) extends the vectors' sparsity to the low rank of matrices, becoming another important method to obtain and represent data after CS given only incomplete and indirect observations [10]. Keshavan et al. compared the performance of three matrix completion algorithms based on low-rank matrix completion with noisy observations [52]. Zhang et al. [53] present a spatio-temporal compressive sensing framework on Internet traffic matrices. Yi et al. [9] take advantage of both the low-rankness and the DCT compactness features improving the recovery accuracy. Compared with prior work based on LRMR, our method achieves better compression ratio and lower communication overhead. According to the analysis and experimental study, it is interesting to observe that the proposed method enables IoT networks the ability of dealing with both fidelity problem and magnitude problem simultaneously with diverse time/space-scale events and reduce global-scale communication cost without intensive edge computation. The experiments of this paper on a real environmental IoT sensing network also reveal that constant sparsity hardly holds in real cases with diverse time/space-scale events, while low-rank property may be true. While events may violate constant sparsity in compressive sensing and reduce the recovery quality severely, the recovery quality of the proposed method still keeps the fidelity of event readings, which is about 10 times (10 db) better than typical compressive sensing [11] in terms of SNR. This observation may provide a fresh vision for research in both compressive sampling applications and IoT sensing and data aggregation scenarios. However, it is worth noting that there is still limitation in the cases that low rank property does not hold in the network. To deal with this problem, this paper further generalizes the low-rank-based optimization design to a nuclear norm-based optimization design, to make the proposed approach more general and robust. In future work, we would like to focus on enhancing the performance of our method in IoT networks with events. In this paper, we have shown the effectiveness and validity of cross-domain matrix recovery in data compression, gathering, and recovery through the study on environmental IoT sensing datasets. It is obvious that the proposed method could be further extended to a large variety of other IoT application scenarios. 
In particular, we have demonstrated the capacity of the proposed MRCS method dealing with both data fidelity and magnitude problems simultaneously in data gathering of IoT networks, via both theoretical analysis and experimental study. The results show that the proposed MRCS method outperforms the original CS method in terms of recovery quality. Our work provides a new approach in both compressive sampling applications and IoT networks with diverse time/space-scale events and suggests a general design given the relaxation from low-rank-based optimization to nuclear norm-based optimization. Indeed, the matrix could be recovered by solving the nuclear-norm based MR optimization problem rather than the low rank based MR optimization problem, the details would be elaborated in Section 4. BS: CS: IoT: LRMR: Low-rank matrix recovery SNR: A. L. S. Orozco, J. R. Corripio, J. C. Hernandez-Castro, Source identification for mobile devices, based on wavelet transforms combined with sensor imperfections. Computing. 96(9), 829–841 (2014). P. Kasirajan, C. Larsen, S. Jagannathan, A new data aggregation scheme via adaptive compression for wireless sensor networks. ACM Trans. Sens. Netw.9(1), 1–26 (2012). G. Yang, M. Xiao, S. Zhang, Data aggregation scheme based on compressed sensing in wireless sensor network. Netw. J.8(1), 556–561 (2013). C. Tapparello, O. Simeone, M. Rossi, Dynamic compression-transmission for energy-harvesting multihop networks with correlated sources. IEEE ACM Trans. Netw.22(6), 1729–1741 (2014). A. Zahedi, J. Ostergaard, S. H. Jensen, P. Naylor, S. Bech, in Data Compression Conference (DCC), 2014. Distributed remote vector gaussian source coding for wireless acoustic sensor networks (IEEE, 2014), pp. 263–272. J. Haupt, W. U. Bajwa, M. Rabbat, R. Nowak, Compressed sensing for networked data. IEEE Signal Process. Mag.25(2), 92–101 (2008). X. Y. Liu, Y. Zhu, L. Kong, C. Liu, Cdc : Compressive data collection for wireless sensor networks. IEEE Trans. Parallel Distrib. Syst.26(8), 2188–2197 (2015). H. Zheng, F. Yang, X. Tian, X. Gan, X. Wang, S. Xiao, Data gathering with compressive sensing in wireless sensor networks: a random walk based approach. IEEE Trans. Parallel Distrib. Syst.26(1), 35–44 (2014). K. Yi, J. Wan, T. Bao, L. Yao, A DCT regularized matrix completion algorithm for energy efficient data gathering in wireless sensor networks. Int. Distrib. J. Sensor Netw.2015(1), 96 (2015). S. Maok, Y. Xie, Maximum entropy low-rank matrix recovery. IEEE J. Sel. Top. Sign. Process. (2017). IEEE. C. Luo, F. Wu, J. Sun, C. W. Chen, in Proceedings of the 15th annual international conference on Mobile computing and networking. Compressive data gathering for large-scale wireless sensor networks (ACM, 2009), pp. 145–156. X. Mao, X. Miao, Y. He, X. Y. Li, Y. Liu, in INFOCOM, 2012 Proceedings IEEE. Citysee: urban CO2 monitoring with sensors (IEEE, 2012), pp. 1611–1619. E. J. Candes, B. Recht, Exact matrix completion via convex optimization. Found. Comput. Math.9(6), 717 (2009). X. Cheng, A. Thaeler, G. Xue, D. Chen, in INFOCOM 2004. Twenty-third AnnualJoint Conference of the IEEE Computer and Communications Societies. vol. 4. TPS: a time-based positioning scheme for outdoor wireless sensor networks (IEEE, 2004), pp. 2685–2696. https://doi.org/10.1109/INFCOM.2004.1354687. F. Liu, X. Cheng, D. Hua, D. Chen, in Wireless Sensor Networks and Applications. TPSS: a time-based positioning scheme for sensor networks with short range beacons (Springer, 2005), pp. 175–193. A. Thaeler, M. Ding, X. 
Cheng, iTPS: an improved location discovery scheme for sensor networks with long-range beacons. Parallel. J. Distrib. Comput.65(2), 98–106 (2005). K. Sun, P. Ning, C. Wang, Secure and resilient clock synchronization in wireless sensor networks. IEEE Sel. J. Areas Commun.24(2), 395–408 (2006). L. Chen, J. Leneutre, in Parallel Processing Workshops, 2006. ICPP 2006 Workshops. 2006 International Conference on. A secure and scalable time synchronization protocol in ieee 802.11 ad hoc networks (IEEE, 2006), pp. 8–pp. K. Römer, in Proceedings of the 2nd ACM international symposium on Mobile ad hoc networking & computing. Time synchronization in ad hoc networks (ACM, 2001), pp. 173–182. https://doi.org/10.1145/501416.501440. O. Goldreich, Foundations of cryptography volume 2. 2(1), 1–14 (2004). Cambridge University Press. E. J. Candes, T. Tao, Near-optimal signal recovery from random projections: universal encoding strategies?IEEE Trans. Inf Theory. 52(12), 5406–5425 (2004). J. Cheng, Q. Ye, H. Jiang, D. Wang, C. Wang, STCDG: an efficient data gathering algorithm based on matrix completion for wireless sensor networks. IEEE Trans. Wirel. Commun.12(2), 850–861 (2013). E. Richard, P. A. Savalle, N. Vayatis, Estimation of simultaneously sparse and low rank matrices. Int. Conf. Mach. Learn., 51–58 (2012). K-C Toh, S Yun, An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Pac. Optim. J.6(3), 615–640 (2009). MathSciNet MATH Google Scholar E. Candes, B. Recht, Simple bounds for low-complexity model reconstruction. Acta Bot. Gallica Bull. Soc. Bot Fr.156(3), 477–486 (2011). D. L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory. 52:, 1289–1306 (2006). J. Wang, S. Tang, B. Yin, X. Y. Li, in INFOCOM, 2012 Proceedings IEEE. Data gathering in wireless sensor networks through intelligent compressive sensing (IEEE, 2012), pp. 603–611. B. Zhang, X. Cheng, N. Zhang, Y. Cui, Y. Li, Q. Liang, in INFOCOM, 2011 Proceedings IEEE. Sparse target counting and localization in sensor networks based on compressive sensing (IEEE, 2011), pp. 2255–2263. J. Meng, H. Li, Z. Han, in Information Sciences and Systems, 2009. CISS 2009. 43rd Annual Conference on. Sparse event detection in wireless sensor networks using compressive sensing (IEEE, 2009), pp. 181–185. A. J. Aljohani, S. X. Ng, L. Hanzo, Distributed source coding and its applications in relaying-based transmission. IEEE Access. 4:, 1940–1970 (2016). A. Ciancio, S. Pattem, A. Ortega, B. Krishnamachari, in Proceedings of the 5th international conference on Information processing in sensor networks. Energy-efficient data representation and routing for wireless sensor networks based on a distributed wavelet compression algorithm (ACM, 2006), pp. 309–316. M. Crovella, E. Kolaczyk, in INFOCOM 2003. Twenty-Second Annual Joint Conference of the IEEE Computer and Communications. IEEE Societies. Graph wavelets for spatial traffic analysis, vol. 3 (IEEE, 2003), pp. 1848–1857. X. H. Xu, X. Y. Li, P. J. Wan, S. J. Tang, Efficient scheduling for periodic aggregation queries in multihop sensor networks. IEEE/ACM Trans. Netw.20(3), 690–698 (2012). J. Li, S. Cheng, Y. Li, Z. Cai, Approximate holistic aggregation in wireless sensor networks. ACM Trans. Sensor Netw.13(2), 11 (2017). A. Sinha, D. K. Lobiyal, Performance evaluation of data aggregation for cluster-based wireless sensor network. Human-centric Comput. Inf. Sci.3(1), 13 (2013). X. Xu, R. Ansari, A. Khokhar, A. V. 
Vasilakos, Hierarchical data aggregation using compressive sensing (HDACS) in WSNs. ACM Trans. Sensor Netw.11(3), 1–25 (2015). E. J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory. 52(2), 489–509 (2006). D. L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory. 52(4), 1289–1306 (2006). S. Qaisar, R. M. Bilal, W. Iqbal, M. Naureen, S. Lee, Compressive sensing: from theory to applications, a survey. Commun. J. Netw.15(5), 443–456 (2013). R. Xie, X. Jia, Transmission-efficient clustering method for wireless sensor networks using compressive sensing. IEEE Trans. Parallel Distrib. Syst.25(3), 806–815 (2014). S. Li, L. D. Xu, X. Wang, Compressed sensing signal and data acquisition in wireless sensor networks and Internet of Things. IEEE Trans. Ind. Inform.9(4), 2177–2186 (2013). H. Mamaghanian, N. Khaled, D. Atienza, P. Vandergheynst, Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes. IEEE Trans. Biomed. Eng.58(9), 2456–2466 (2011). Z. Tian, G. B. Giannakis, in IEEE International Conference on Acoustics, Speech and Signal Processing. Compressed sensing for wideband cognitive radios (Michigan Technological Univ Houghton, 2007), pp. 1357–1360. J. Luo, L. Xiang, C. Rosenberg, in Communications (ICC), 2010 IEEE international conference on. Does compressed sensing improve the throughput of wireless sensor networks? (IEEE, 2010), pp. 1–6. S. Lee, S. Pattem, M. Sathiamoorthy, B. Krishnamachari, A. Ortega, in International Conference on GeoSensor Networks. Spatially-localized compressed sensing and routing in multi-hop sensor networks (Springer, 2009), pp. 11–20. S. Pudlewski, A. Prasanna, T. Melodia, Compressed-sensing-enabled video streaming for wireless multimedia sensor networks. IEEE Trans. Mob. Comput.11(6), 1060–1072 (2012). S. Cheng, Z. Cai, J. Li, Curve query processing in wireless sensor networks. IEEE Trans. Veh. Technol.64(11), 5198–5209 (2015). S. Cheng, Z. Cai, J. Li, H. Gao, Extracting kernel dataset from big sensory data in wireless sensor networks. IEEE Trans. Knowl. Data Eng.29(4), 813–827 (2017). S. Cheng, Z. Cai, J. Li, X. Fang, in Computer Communications (INFOCOM), 2015 IEEE Conference on. Drawing dominant dataset from big sensory data in wireless sensor networks (IEEE, 2015), pp. 531–539. R. H. Keshavan, A. Montanari, S. Oh, in Communication, Control, and Computing, 2009. Allerton 2009. 47th Annual Allerton Conference on. Low-rank matrix completion with noisy observations: a quantitative comparison (IEEE, 2009), pp. 1216–1222. Y. Zhang, M. Roughan, W. Willinger, L. Qiu, in ACM SIGCOMM Computer Communication Review. Spatio-temporal compressive sensing and Internet traffic matrices, vol. 39(4) (ACM, 2009), pp. 267–278. This work was financially supported by NSFC 61332004. The experiment is based on CitySee project [12]. We conduct the experiment on a subnetwork of 55 sensors within 115 h duration. The data can be found at: https://github.com/oleotiger/experimental-data. 
Author affiliations: Zhonghu Xu, Linjun Zhang, Jinqi Shen, Hao Zhou and Kai Xing: School of Computer Science, University of Science and Technology of China, Hefei, Anhui, 230026, China; Xuefeng Liu: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China; Jiannong Cao: Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China. The authors have contributed jointly to the manuscript. All authors have read and approved the final manuscript. Correspondence to Kai Xing. Citation: Xu, Z., Zhang, L., Shen, J. et al. MRCS: matrix recovery-based communication-efficient compressive sampling on temporal-spatial data of dynamic-scale sparsity in large-scale environmental IoT networks. J Wireless Com Network 2019, 18 (2019). https://doi.org/10.1186/s13638-018-1312-1
FBQS J1644+2619: multiwavelength properties and its place in the class of gamma-ray emitting Narrow Line Seyfert 1s (1801.08750) J. Larsson, F. D'Ammando, S. Falocco, M. Giroletti, M. Orienti, E. Piconcelli, S. Righini Jan. 26, 2018 astro-ph.GA, astro-ph.HE A small fraction of Narrow Line Seyfert 1s (NLSy1s) are observed to be gamma-ray emitters. Understanding the properties of these sources is of interest since the majority of NLSy1s are very different from typical blazars. Here, we present a multi-frequency analysis of FBQS J1644+2619, one of the most recently discovered gamma-ray emitting NLSy1s. We analyse an ~80 ks XMM-Newton observation obtained in 2017, as well as quasi-simultaneous multi-wavelength observations covering the radio - gamma-ray range. The spectral energy distribution of the source is similar to the other gamma-ray NLSy1s, confirming its blazar-like nature. The X-ray spectrum is characterised by a hard photon index (Gamma = 1.66) above 2 keV and a soft excess at lower energies.The hard photon index provides clear evidence that inverse Compton emission from the jet dominates the spectrum, while the soft excess can be explained by a contribution from the underlying Seyfert emission. This contribution can be fitted by reflection of emission from the base of the jet, as well as by Comptonisation in a warm, optically thick corona. We discuss our results in the context of the other gamma-ray NLSy1s and note that the majority of them have similar X-ray spectra, with properties intermediate between blazars and radio-quiet NLSy1s. Very Deep Inside the SN 1987A Core Ejecta: Molecular Structures Seen in 3D (1706.04675) F. J. Abellán, R. Indebetouw, J. M. Marcaide, M. Gabler, C. Fransson, J. Spyromilio, D. N. Burrows, R. Chevalier, P. Cigan, B. M. Gaensler, H. L. Gomez, H.-Th. Janka, R. Kirshner, J. Larsson, P. Lundqvist, M. Matsuura, R. McCray, C.-Y. Ng, S. Park, P. Roche, L. Staveley-Smith, J. Th. Van Loon, J. C. Wheeler, S. E. Woosley June 14, 2017 astro-ph.GA, astro-ph.SR, astro-ph.HE Most massive stars end their lives in core-collapse supernova explosions and enrich the interstellar medium with explosively nucleosynthesized elements. Following core collapse, the explosion is subject to instabilities as the shock propagates outwards through the progenitor star. Observations of the composition and structure of the innermost regions of a core-collapse supernova provide a direct probe of the instabilities and nucleosynthetic products. SN 1987A in the Large Magellanic Cloud (LMC) is one of very few supernovae for which the inner ejecta can be spatially resolved but are not yet strongly affected by interaction with the surroundings. Our observations of SN 1987A with the Atacama Large Millimeter/submillimeter Array (ALMA) are of the highest resolution to date and reveal the detailed morphology of cold molecular gas in the innermost regions of the remnant. The 3D distributions of carbon and silicon monoxide (CO and SiO) emission differ, but both have a central deficit, or torus-like distribution, possibly a result of radioactive heating during the first weeks ("nickel heating"). The size scales of the clumpy distribution are compared quantitatively to models, demonstrating how progenitor and explosion physics can be constrained. ALMA spectral survey of Supernova 1987A --- molecular inventory, chemistry, dynamics and explosive nucleosynthesis (1704.02324) M. Matsuura, V. Bujarrabal, M. J. Barlow, J. Spyromilio, J. Larsson, R. Chevalier, M. 
Meixner (author affiliations: Cardiff University, UK; University of Virginia, USA; University of California, Santa Cruz, USA; Observatorio Astronomico Nacional; University of California, Berkeley, USA; Stockholm University, Sweden; Garching, Germany; ARC Centre of Excellence for All-Sky Astrophysics; University of Oxford, UK; Keele University, UK; Universiteit Gent, Belgium; Space Telescope Science Institute, USA; Johns Hopkins University, USA) April 7, 2017 astro-ph.SR, astro-ph.HE We report the first molecular line survey of Supernova 1987A in the millimetre wavelength range. In the ALMA 210--300 and 340--360 GHz spectra, we detected cold (20--170 K) CO, 28SiO, HCO+ and SO, with weaker lines of 29SiO from ejecta. This is the first identification of HCO+ and SO in a young supernova remnant. We find a dip in the J=6--5 and 5--4 SiO line profiles, suggesting that the ejecta morphology is likely elongated. The difference between the CO and SiO line profiles is consistent with hydrodynamic simulations, which show that Rayleigh-Taylor instabilities cause mixing of gas, with heavier elements much more disturbed, making the structure more elongated. We obtained isotopologue ratios of 28SiO/29SiO>13, 28SiO/30SiO>14, and 12CO/13CO>21, with the most likely limits of 28SiO/29SiO>128, 28SiO/30SiO>189. Low 29Si and 30Si abundances in SN 1987A are consistent with nucleosynthesis models that show inefficient formation of neutron-rich isotopes in a low metallicity environment, such as the Large Magellanic Cloud. The deduced large mass of HCO+ (~5x10^-6 Msun) and small SiS mass (<6x10^-5 Msun) might be explained by some mixing of elements immediately after the explosion. The mixing might have caused some hydrogen from the envelope to sink into carbon and oxygen-rich zones after the explosion, enabling the formation of a substantial mass of HCO+. Oxygen atoms may have penetrated into silicon and sulphur zones, suppressing formation of SiS. Our ALMA observations open up a new window to investigate chemistry, dynamics and explosive nucleosynthesis in supernovae. Tale of J1328+2752: a misaligned double-double radio galaxy hosted by a binary black-hole? (1612.06452) S. Nandi, M. Jamrozy, R. Roy, J. Larsson, D.J. Saikia, M. Baes, M. Singh Dec. 19, 2016 astro-ph.CO, astro-ph.GA We present a radio and optical study of the double-double radio galaxy J1328+2752 based on new low-frequency GMRT observations and SDSS data. The radio data were used to investigate the morphology and to perform a spectral index analysis. In this source we find that the inner double is misaligned by $\sim$30$^\circ$ from the axis of the outer diffuse structure. The SDSS spectrum shows that the central component has double-peaked line profiles with different emission strengths. The average velocity offset of the two components is 235$\pm$10.5 km s$^{-1}$. The misaligned radio morphology, along with the double-peaked emission lines, indicates that this source is a potential candidate binary supermassive black hole. This study further supports mergers as a possible explanation for repeated jet activity in radio sources. Ultrafast terahertz-field-driven ionic response in ferroelectric BaTiO$_3$ (1608.08470) F. Chen, Y. Zhu, S. Liu, Y. Qi, H.Y. Hwang, N.C. Brandt, J. Lu, F. Quirin, H. Enquist, P. Zalden, T. Hu, J. Goodfellow, M.-J. Sher, M.C. Hoffmann, D. Zhu, H. Lemke, J. Glownia, M. Chollet, A. R. Damodaran, J. Park, Z. Cai, I.W. Jung, M.J. Highland, D.A. Walko, J. W. Freeland, P.G. Evans, A. Vailionis, J. Larsson, K.A. Nelson, A.M. Rappe, K. Sokolowski-Tinten, L. W.
Martin, H. Wen, A.M. Lindenberg Oct. 31, 2016 cond-mat.mtrl-sci The dynamical processes associated with electric field manipulation of the polarization in a ferroelectric remain largely unknown but fundamentally determine the speed and functionality of ferroelectric materials and devices. Here we apply sub-picosecond duration, single-cycle terahertz pulses as an ultrafast electric field bias to prototypical BaTiO$_3$ ferroelectric thin films with the atomic-scale response probed by femtosecond x-ray scattering techniques. We show that electric fields applied perpendicular to the ferroelectric polarization drive large amplitude displacements of the titanium atoms along the ferroelectric polarization axis, comparable to that of the built-in displacements associated with the intrinsic polarization and incoherent across unit cells. This effect is associated with a dynamic rotation of the ferroelectric polarization switching on and then off on picosecond timescales. These transient polarization modulations are followed by long-lived vibrational heating effects driven by resonant excitation of the ferroelectric soft mode, as reflected in changes in the c-axis tetragonality. The ultrafast structural characterization described here enables direct comparison with first-principles-based molecular dynamics simulations, with good agreement obtained. Three-dimensional distribution of ejecta in Supernova 1987A at 10 000 days (1609.04413) J. Larsson, C. Fransson, J. Spyromilio, B. Leibundgut, P. Challis, R. A. Chevalier, K. France, A. Jerkstrand, R. P. Kirshner, P. Lundqvist, M. Matsuura, R. McCray, N. Smith, J. Sollerman, P. Garnavich, K. Heng, S. Lawrence, S. Mattila, K. Migotto, G. Sonneborn, F. Taddia, J. C. Wheeler Sept. 14, 2016 astro-ph.SR, astro-ph.HE Due to its proximity, SN 1987A offers a unique opportunity to directly observe the geometry of a stellar explosion as it unfolds. Here we present spectral and imaging observations of SN 1987A obtained ~10,000 days after the explosion with HST/STIS and VLT/SINFONI at optical and near-infrared wavelengths. These observations allow us to produce the most detailed 3D map of H-alpha to date, the first 3D maps for [Ca II] \lambda \lambda 7292, 7324, [O I] \lambda \lambda 6300, 6364 and Mg II \lambda \lambda 9218, 9244, as well as new maps for [Si I]+[Fe II] 1.644 \mu m and He I 2.058 \mu m. A comparison with previous observations shows that the [Si I]+[Fe II] flux and morphology have not changed significantly during the past ten years, providing evidence that it is powered by 44Ti. The time-evolution of H-alpha shows that it is predominantly powered by X-rays from the ring, in agreement with previous findings. All lines that have sufficient signal show a similar large-scale 3D structure, with a north-south asymmetry that resembles a broken dipole. This structure correlates with early observations of asymmetries, showing that there is a global asymmetry that extends from the inner core to the outer envelope. On smaller scales, the two brightest lines, H-alpha and [Si I]+[Fe II] 1.644 \mu m, show substructures at the level of ~ 200 - 1000 km/s and clear differences in their 3D geometries. We discuss these results in the context of explosion models and the properties of dust in the ejecta. A Panchromatic View of Relativistic Jets in Narrow-Line Seyfert 1 Galaxies (1609.04434) F. D'Ammando, M. Orienti, J. Larsson, M. Giroletti, C. M. Raiteri Sept. 
14, 2016 astro-ph.HE The discovery by the Large Area Telescope on board Fermi of variable gamma-ray emission from radio-loud narrow-line Seyfert 1 (NLSy1) galaxies revealed the presence of a possible third class of Active Galactic Nuclei (AGN) with relativistic jets in addition to blazars and radio galaxies. Considering that NLSy1 are usually hosted in spiral galaxies, this finding poses intriguing questions about the nature of these objects and the formation of relativistic jets. We report on a systematic investigation of the gamma-ray properties of a sample of radio-loud NLSy1, including the detection of new objects, using 7 years of Fermi-LAT data with the new Pass 8 event-level analysis. In addition we discuss the radio-to-very-high-energy properties of the gamma-ray emitting NLSy1, their host galaxy, and black hole mass in the context of the blazar scenario and the unification of relativistic jets at different scales. GAMMA-400 gamma-ray observatory (1507.06246) N.P. Topchiev, A.M. Galper, V. Bonvicini, O. Adriani, R.L. Aptekar, I.V. Arkhangelskaja, A.I. Arkhangelskiy, A.V. Bakaldin, L. Bergstrom, E. Berti, G. Bigongiari, S.G. Bobkov, M. Boezio, E.A. Bogomolov, L. Bonechi, M. Bongi, S. Bottai, G. Castellini, P.W. Cattaneo, P. Cumani, O.D. Dalkarov, G.L. Dedenko, C. De Donato, V.A. Dogiel, N. Finetti, D. Gascon, M.S. Gorbunov, Yu.V. Gusakov, B.I. Hnatyk, V.V. Kadilin, V.A. Kaplin, A.A. Kaplun, M.D. Kheymits, V.E. Korepanov, J. Larsson, A.A. Leonov, V.A. Loginov, F. Longo, P. Maestro, P.S. Marrocchesi, M. Martinez, A.L. Menshenin, V.V. Mikhailov, E. Mocchiutti, A.A. Moiseev, N. Mori, I.V. Moskalenko, P.Yu. Naumov, P. Papini, J.M. Paredes, M. Pearce, P. Picozza, A. Rappoldi, S. Ricciarini, M.F. Runtso, F. Ryde, O.V. Serdin, R. Sparvoli, P. Spillantini, Yu.I. Stozhkov, S.I. Suchkov, A.A. Taraskin, M. Tavani, A. Tiberio, E.M. Tyurin, M.V. Ulanov, A. Vacchi, E. Vannuccini, G.I. Vasilyev, J.E. Ward, Yu.T. Yurkin, N. Zampa, V.N. Zirakashvili, V.G. Zverev Nov. 12, 2015 astro-ph.IM The GAMMA-400 gamma-ray telescope with excellent angular and energy resolutions is designed to search for signatures of dark matter in the fluxes of gamma-ray emission and electrons + positrons. Precision investigations of gamma-ray emission from Galactic Center, Crab, Vela, Cygnus, Geminga, and other regions will be performed, as well as diffuse gamma-ray emission, along with measurements of high-energy electron + positron and nuclei fluxes. Furthermore, it will study gamma-ray bursts and gamma-ray emission from the Sun during periods of solar activity. The energy range of GAMMA-400 is expected to be from ~20 MeV up to TeV energies for gamma rays, up to 20 TeV for electrons + positrons, and up to 10E15 eV for cosmic-ray nuclei. For high-energy gamma rays with energy from 10 to 100 GeV, the GAMMA-400 angular resolution improves from 0.1{\deg} to ~0.01{\deg} and energy resolution from 3% to ~1%; the proton rejection factor is ~5x10E5. GAMMA-400 will be installed onboard the Russian space observatory. Strongly interacting few-fermion systems in a trap (1504.01303) C. Forssén, R. Lundmark, J. Rotureau, J. Larsson, D. Lidberg April 6, 2015 cond-mat.quant-gas Few- and many-fermion systems on the verge of stability, and consisting of strongly interacting particles, appear in many areas of physics. The theoretical modeling of such systems is a very difficult problem. In this work we present a theoretical framework that is based on the rigged Hilbert space formulation. 
The few-body problem is solved by exact diagonalization using a basis in which bound, resonant, and non-resonant scattering states are included on an equal footing. Current experiments with ultracold atoms offer a fascinating opportunity to study universal properties of few-body systems with a high degree of control over parameters such as the external trap geometry, the number of particles, and even the interaction strength. In particular, particles can be allowed to tunnel out of the trap by applying a magnetic-field gradient that effectively lowers the potential barrier. The result is a tunable open quantum system that allows detailed studies of the tunneling mechanism. In this Contribution we introduce our method and present results for the decay rate of two distinguishable fermions in a one-dimensional trap as a function of the interaction strength. We also study the numerical convergence. Many of these results have been previously published (R. Lundmark, C. Forss\'en, and J. Rotureau, arXiv: 1412.7175). However, in this Contribution we present several technical and numerical details of our approach for the first time. Extremely narrow spectrum of GRB110920A: further evidence for localised, subphotospheric dissipation (1503.05926) S. Iyyani, F. Ryde, B. Ahlgren, J. M. Burgess, J. Larsson, A. Pe'er, C. Lundman, M. Axelsson, S. McGlynn March 31, 2015 astro-ph.HE Much evidence points towards that the photosphere in the relativistic outflow in GRBs plays an important role in shaping the observed MeV spectrum. However, it is unclear whether the spectrum is fully produced by the photosphere or whether a substantial part of the spectrum is added by processes far above the photosphere. Here we make a detailed study of the $\gamma-$ray emission from single pulse GRB110920A which has a spectrum that becomes extremely narrow towards the end of the burst. We show that the emission can be interpreted as Comptonisation of thermal photons by cold electrons in an unmagnetised outflow at an optical depth of $\tau \sim 20$. The electrons receive their energy by a local dissipation occurring close to the saturation radius. The main spectral component of GRB110920A and its evolution is thus, in this interpretation, fully explained by the emission from the photosphere including localised dissipation at high optical depths. A separation of electrons and protons in the GAMMA-400 gamma-ray telescope (1503.06657) A.A. Leonov, A.M. Galper, V. Bonvicini, N.P. Topchiev, O. Adriani, R.L. Aptekar, I.V. Arkhangelskaja, A.I. Arkhangelskiy, L. Bergstrom, E. Berti, G. Bigongiari, S.G. Bobkov, M. Boezio, E.A. Bogomolov, S. Bonechi, M. Bongi, S. Bottai, G. Castellini, P.W. Cattaneo, P. Cumani, G.L. Dedenko, C. De Donato, V.A. Dogiel, M.S. Gorbunov, Yu.V. Gusakov, B.I. Hnatyk, V.V. Kadilin, V.A. Kaplin, A.A. Kaplun, M.D. Kheymits, V.E. Korepanov, J. Larsson, V.A. Loginov, F. Longo, P. Maestro, P.S. Marrocchesi, V.V. Mikhailov, E. Mocchiutti, A.A. Moiseev, N. Mori, I.V. Moskalenko, P.Yu. Naumov, P. Papini, M. Pearce, P.Picozza, A.V. Popov, A. Rappoldi, S. Ricciarini, M.F. Runtso, F. Ryde, O.V. Serdin, R. Sparvoli, P. Spillantini, S.I. Suchkov, M. Tavani, A.A. Taraskin, A. Tiberio, E.M. Tyurin, M.V. Ulanov, A. Vacchi, E. Vannuccini, G.I. Vasilyev, Yu.T. Yurkin, N. Zampa, V.N. Zirakashvili, V.G. Zverev March 23, 2015 physics.ins-det, astro-ph.IM The GAMMA-400 gamma-ray telescope is intended to measure the fluxes of gamma rays and cosmic-ray electrons and positrons in the energy range from 100 MeV to several TeV. 
Such measurements address the following scientific goals: search for signatures of dark matter, investigation of gamma-ray point and extended sources, studies of the energy spectra of Galactic and extragalactic diffuse emission, studies of gamma-ray bursts and gamma-ray emission from the active Sun, as well as high-precision measurements of spectra of high-energy electrons and positrons, protons, and nuclei up to the knee. The main components of cosmic rays are protons and helium nuclei, whereas the fraction of the lepton component in the total flux is ~10E-3 at high energies. In the present paper, the capability of the GAMMA-400 gamma-ray telescope to distinguish electrons and positrons from protons in cosmic rays is investigated. The individual contribution to the proton rejection is studied for each detector system of the GAMMA-400 gamma-ray telescope. Using combined information from all detector systems allows us to provide the proton rejection from electrons with a factor of ~4x10E5 for vertically incident particles and ~3x10E5 for particles with an initial inclination of 30 degrees. The calculations were performed for the electron energy range from 50 GeV to 1 TeV. Study of the Gamma-ray performance of the GAMMA-400 Calorimeter (1502.03287) P. Cumani, A.M. Galper, V. Bonvicini, N.P. Topchiev, O. Adriani, R.L. Aptekar, A. Argan, I.V. Arkhangelskaja, A.I. Arkhangelskiy, L. Bergstrom, E. Berti, G. Bigongiari, S.G. Bobkov, M. Boezio, E.A. Bogomolov, S. Bonechi, M. Bongi, S. Bottai, A. Bulgarelli, G. Castellini, P.W. Cattaneo, G.L. Dedenko, C. De Donato, V.A. Dogiel, I. Donnarumma, V. Fioretti, M.S. Gorbunov, Yu.V. Gusakov, B.I. Hnatyk, V.V. Kadilin, V.A. Kaplin, A.A. Kaplun, M.D. Kheymits, V.E. Korepanov, J. Larsson, A.A. Leonov, V.A. Loginov, F. Longo, P. Maestro, P.S. Marrocchesi, A.L. Menshenin, V.V. Mikhailov, E. Mocchiutti, A.A. Moiseev, N. Mori, I.V. Moskalenko, P.Yu. Naumov, F. Palma, P. Papini, M. Pearce, G. Piano, P. Picozza, A.V. Popov, A. Rappoldi, S. Ricciarini, M.F. Runtso, F. Ryde, S. Sabatini, R. Sarkar, O.V. Serdin, R. Sparvoli, P. Spillantini, S.I. Suchkov, M. Tavani, A.A. Taraskin, A. Tiberio, E.M. Tyurin, M.V. Ulanov, A. Vacchi, E. Vannuccini, G.I. Vasilyev, V. Vittorini, Yu.T. Yurkin, N. Zampa, V.N. Zirakashvili, V.G. Zverev March 7, 2015 astro-ph.IM, astro-ph.HE GAMMA-400 is a new space mission, designed as a dual experiment, capable of studying both high energy gamma rays (from $\sim$100 MeV to a few TeV) and cosmic rays (electrons up to 20 TeV and nuclei up to $\sim$10$^{15}$ eV). The full simulation framework of GAMMA-400 is based on the Geant4 toolkit. The details of the gamma-ray reconstruction pipeline in the pre-shower and calorimeter will be outlined. The performance of GAMMA-400 (PSF, effective area) has been obtained using this framework. The most up-to-date results will be shown. Evidence for jet launching close to the black hole in GRB 101219B - a Fermi GRB dominated by thermal emission (1502.00645) J. Larsson, J. L. Racusin, J. M. Burgess Feb. 12, 2015 astro-ph.HE We present observations by the Fermi Gamma-Ray Space Telescope Gamma-Ray Burst Monitor (GBM) of the nearby (z=0.55) GRB 101219B. This burst is a long GRB, with an associated supernova and with a blackbody component detected in the early afterglow observed by the Swift X-ray Telescope (XRT). Here we show that the prompt gamma-ray emission has a blackbody spectrum, making this the second such burst observed by Fermi GBM.
The properties of the blackbody, together with the redshift and our estimate of the radiative efficiency, makes it possible to calculate the absolute values of the properties of the outflow. We obtain an initial Lorentz factor Gamma=138\pm 8, a photospheric radius r_phot=4.4\pm 1.9 \times 10^{11} cm and a launch radius r_0=2.7\pm 1.6 \times 10^{7} cm. The latter value is close to the black hole and suggests that the jet has a relatively unobstructed path through the star. There is no smooth connection between the blackbody components seen by GBM and XRT, ruling out the scenario that the late emission is due to high-latitude effects. In the interpretation that the XRT blackbody is prompt emission due to late central engine activity, the jet either has to be very wide or have a clumpy structure where the emission originates from a small patch. Other explanations for this component, such as emission from a cocoon surrounding the jet, are also possible. The GAMMA-400 Space Mission (1502.02976) P.Cumani, A.M. Galper, V. Bonvicini, N.P. Topchiev, O. Adriani, R.L. Aptekar, I.V. Arkhangelskaja, A.I. Arkhangelskiy, L. Bergstrom, E. Berti, G. Bigongiari, S.G. Bobkov, M. Boezio, E.A. Bogomolov, S. Bonechi, M. Bongi, S. Bottai, G. Castellini, P.W. Cattaneo, G.L. Dedenko, C. De Donato, V.A. Dogiel, M.S. Gorbunov, Yu.V. Gusakov, B.I. Hnatyk, V.V. Kadilin, V.A. Kaplin, A.A. Kaplun, M.D. Kheymits, V.E. Korepanov, J. Larsson, A.A. Leonov, V.A. Loginov, F. Longo, P. Maestro, P.S. Marrocchesi, A.L. Menshenin, V.V. Mikhailov, E. Mocchiutti, A.A. Moiseev, N. Mori, I.V. Moskalenko, P.Yu. Naumov, P. Papini, M. Pearce, P. Picozza, A.V. Popov, A. Rappoldi, S. Ricciarini, M.F. Runtso, F. Ryde, O.V. Serdin, R. Sparvoli, P. Spillantini, S.I. Suchkov, M. Tavani, A.A. Taraskin, A. Tiberio, E.M. Tyurin, M.V. Ulanov, A. Vacchi, E. Vannuccini, G.I. Vasilyev, Yu.T. Yurkin, N. Zampa, V.N. Zirakashvili, V.G. Zverev Feb. 10, 2015 astro-ph.IM, astro-ph.HE GAMMA-400 is a new space mission which will be installed on board the Russian space platform Navigator. It is scheduled to be launched at the beginning of the next decade. GAMMA-400 is designed to study simultaneously gamma rays (up to 3 TeV) and cosmic rays (electrons and positrons from 1 GeV to 20 TeV, nuclei up to 10$^{15}$-10$^{16}$ eV). Being a dual-purpose mission, GAMMA-400 will be able to address some of the most impelling science topics, such as search for signatures of dark matter, cosmic-rays origin and propagation, and the nature of transients. GAMMA-400 will try to solve the unanswered questions on these topics by high-precision measurements of the Galactic and extragalactic gamma-ray sources, Galactic and extragalactic diffuse emission and the spectra of cosmic-ray electrons + positrons and nuclei, thanks to excellent energy and angular resolutions. The GAMMA-400 space observatory: status and perspectives (1412.4239) A.M. Galper, V. Bonvicini, N.P. Topchiev, O. Adriani, R.L. Aptekar, I.V. Arkhangelskaja, A.I. Arkhangelskiy, L. Bergstrom, E. Berti, G. Bigongiari, S.G. Bobkov, M. Boezio, E.A. Bogomolov, S. Bonechi, M. Bongi, S. Bottai, K.A. Boyarchuk, G. Castellini, P.W. Cattaneo, P. Cumani, G.L. Dedenko, C. De Donato, V.A. Dogiel, M.S. Gorbunov, Yu.V. Gusakov, B.I. Hnatyk, V.V. Kadilin, V.A. Kaplin, A.A. Kaplun, M.D. Kheymits, V.E. Korepanov, J. Larsson, A.A. Leonov, V.A. Loginov, F. Longo, P. Maestro, P.S. Marrocchesi, V.V. Mikhailov, E. Mocchiutti, A.A. Moiseev, N. Mori, I.V. Moskalenko, P.Yu. Naumov, P. Papini, M. Pearce, P. Picozza, A.V. Popov, A. 
Rappoldi, S. Ricciarini, M.F. Runtso, F. Ryde, O.V. Serdin, R. Sparvoli, P. Spillantini, S.I. Suchkov, M. Tavani, A.A. Taraskin, A. Tiberio, E.M. Tyurin, M.V. Ulanov, A. Vacchi, E. Vannuccini, G.I. Vasilyev, Yu.T. Yurkin, N. Zampa, V.N. Zirakashvili, V.G. Zverev Dec. 13, 2014 physics.ins-det, astro-ph.IM, astro-ph.HE The present design of the new space observatory GAMMA-400 is presented in this paper. The instrument has been designed for the optimal detection of gamma rays in a broad energy range (from ~100 MeV up to 3 TeV), with excellent angular and energy resolution. The observatory will also allow precise and high-statistics studies of the electron component in the cosmic rays up to the multi-TeV region, as well as of proton and nuclei spectra up to the knee region. The GAMMA-400 observatory will make it possible to address a broad range of science topics, such as the search for signatures of dark matter, studies of Galactic and extragalactic gamma-ray sources, Galactic and extragalactic diffuse emission, gamma-ray bursts, and charged cosmic-ray acceleration and diffusion mechanisms up to the knee. The GAMMA-400 gamma-ray telescope characteristics. Angular resolution and electrons/protons separation (1412.1486) A.A. Leonov, A.M. Galper, V. Bonvicini, N.P. Topchiev, O. Adriani, R.L. Aptekar, I.V. Arkhangelskaja, A.I. Arkhangelskiy, L. Bergstrom, E. Berti, G. Bigongiari, S.G. Bobkov, M. Boezio, E.A. Bogomolov, S. Bonechi, M. Bongi, S. Bottai, K.A. Boyarchuk, G. Castellini, P.W. Cattaneo, P. Cumani, G.L. Dedenko, C. De Donato, V.A. Dogiel, M.S. Gorbunov, Yu.V. Gusakov, B.I. Hnatyk, V.V. Kadilin, V.A. Kaplin, A.A. Kaplun, M.D. Kheymits, V.E. Korepanov, J. Larsson, V.A. Loginov, F. Longo, P. Maestro, P.S. Marrocchesi, V.V. Mikhailov, E. Mocchiutti, A.A. Moiseev, N. Mori, I.V. Moskalenko, P.Yu. Naumov, P. Papini, M. Pearce, P. Picozza, A.V. Popov, A. Rappoldi, S. Ricciarini, M.F. Runtso, F. Ryde, O.V. Serdin, R. Sparvoli, P. Spillantini, S.I. Suchkov, M. Tavani, A.A. Taraskin, A. Tiberio, E.M. Tyurin, M.V. Ulanov, A. Vacchi, E. Vannuccini, G.I. Vasilyev, Yu.T. Yurkin, N. Zampa, V.N. Zirakashvili, V.G. Zverev Dec. 11, 2014 astro-ph.IM The measurements of gamma-ray fluxes and cosmic-ray electrons and positrons in the energy range from 100 MeV to several TeV, which will be implemented by the specially designed GAMMA-400 gamma-ray telescope, address the following broad range of science topics: searching for signatures of dark matter, surveying the celestial sphere in order to study gamma-ray point and extended sources, measuring the energy spectra of Galactic and extragalactic diffuse gamma-ray emission, studying gamma-ray bursts and gamma-ray emission from the Sun, as well as high-precision measurements of the spectra of high-energy electrons and positrons, protons and nuclei up to the knee. To clarify these scientific problems with new experimental data, the GAMMA-400 gamma-ray telescope possesses unique physical characteristics compared with previous and present experiments. For gamma-ray energies above 100 GeV, GAMMA-400 provides an energy resolution of ~1% and an angular resolution better than 0.02 deg. The methods developed to reconstruct the direction of an incident gamma photon are presented in this paper, and the capability of the GAMMA-400 gamma-ray telescope to distinguish electrons and positrons from protons in cosmic rays is investigated. The most powerful flaring activity from the NLSy1 PMN J0948+0022 (1410.7144) F. D'Ammando, M. Orienti, C. M. Raiteri, T. Hovatta, J. Larsson, W. Max-Moerbeck, J.
Perkins, A. C. S. Readhead, J. L. Richards, M. Beilicke, W. Benbow, K. Berger, R. Bird, V. Bugaev, J. V. Cardenzana, M. Cerruti, X. Chen, L. Ciupik, H. J. Dickinson, J. D. Eisch, M. Errando, A. Falcone, J. P. Finley, H. Fleischhack, P. Fortin, L. Fortson, A. Furniss, L. Gerard, G. H. Gillanders, S. T. Griffiths, J. Grube, G. Gyuk, N. Hakansson, J. Holder, T. B. Humensky, P. Kar, M. Kertzman, Y. Khassen, D. Kieda, F. Krennrich, S. Kumar, M. J. Lang, G. Maier, A. McCann, K. Meagher, P. Moriarty, R. Mukherjee, D. Nieto, A. O' Faolain de Bhroithe, R. A. Ong, A. N. Otte, M. Pohl, A. Popkow, H. Prokoph, E. Pueschel, J. Quinn, K. Ragan, P. T. Reynolds, G. T. Richards, E. Roache, J. Rousselle, M. Santander, G. H. Sembroski, A. W. Smith, D. Staszak, I. Telezhinsky, J. V. Tucci, J. Tyler, A. Varlotta, V. V. Vassiliev, S. P. Wakely, A. Weinstein, R. Welsing, D. A. Williams, B. Zitzer Oct. 27, 2014 astro-ph.HE We report on multifrequency observations performed during 2012 December-2013 August of the first narrow-line Seyfert 1 galaxy detected in gamma rays, PMN J0948+0022 ($z$ = 0.5846). A gamma-ray flare was observed by the Large Area Telescope on board Fermi during 2012 December-2013 January, reaching a daily peak flux in the 0.1-100 GeV energy range of (155 $\pm$ 31) $\times$10$^{-8}$ ph cm$^{-2}$ s$^{-1}$ on 2013 January 1, corresponding to an apparent isotropic luminosity of about 1.5$\times$10$^{48}$ erg s$^{-1}$. The gamma-ray flaring period triggered Swift and VERITAS observations in addition to radio and optical monitoring by OVRO, MOJAVE, and CRTS. A strong flare was observed in optical, UV, and X-rays on 2012 December 30, quasi-simultaneously to the gamma-ray flare, reaching a record flux for this source from optical to gamma rays. VERITAS observations at very high energy (E > 100 GeV) during 2013 January 6-17 resulted in an upper limit of F (> 0.2 TeV) < 4.0$\times$10$^{-12}$ ph cm$^{-2}$ s$^{-1}$. We compared the spectral energy distribution (SED) of the flaring state in 2013 January with that of an intermediate state observed in 2011. The two SEDs, modelled as synchrotron emission and an external Compton scattering of seed photons from a dust torus, can be modelled by changing both the electron distribution parameters and the magnetic field. The First Pulse of the Extremely Bright GRB 130427A: A Test Lab for Synchrotron Shocks (1311.5581) R. Preece, J. Michael Burgess, A. von Kienlin, P. N. Bhat, M. S. Briggs, D. Byrne, V. Chaplin, W. Cleveland, A. C. Collazzi, V. Connaughton, A. Diekmann, G. Fitzpatrick, S. Foley, M. Gibby, M. Giles, A. Goldstein, J. Greiner, D. Gruber, P. Jenke, R. M. Kippen, C. Kouveliotou, S. McBreen, C. Meegan, W. S. Paciesas, V. Pelassa, D. Tierney, A. J. van der Horst, C. Wilson-Hodge, S. Xiong, G. Younes, H.-F. Yu, M. Ackermann, M. Ajello, M. Axelsson, L. Baldini, G. Barbiellini, M. G. Baring, D. Bastieri, R. Bellazzini, E. Bissaldi, E. Bonamente, J. Bregeon, M. Brigida, P. Bruel, R. Buehler, S. Buson, G. A. Caliandro, R. A. Cameron, P. A. Caraveo, C. Cecchi, E. Charles, A. Chekhtman, J. Chiang, G. Chiaro, S. Ciprini, R. Claus, J. Cohen-Tanugi, L. R. Cominsky, J. Conrad, F. D'Ammando, A. de Angelis, F. de Palma, C. D. Dermer, R. Desiante, S. W. Digel, L. Di Venere, P. S. Drell, A. Drlica-Wagner, C. Favuzzi, A. Franckowiak, Y. Fukazawa, P. Fusco, F. Gargano, N. Gehrels, S. Germani, N. Giglietto, F. Giordano, M. Giroletti, G. Godfrey, J. Granot, I. A. Grenier, S. Guiriec, D. Hadasch, Y. Hanabata, A. K. Harding, M. Hayashida, S. Iyyani, T. Jogler, G. Jóannesson, T. 
Kawano, J. Knödlseder, D. Kocevski, M. Kuss, J. Lande, J. Larsson, S. Larsson, L. Latronico, F. Longo, F. Loparco, M. N. Lovellette, P. Lubrano, M. Mayer, M. N. Mazziotta, P. F. Michelson, T. Mizuno, M. E. Monzani, E. Moretti, A. Morselli, S. Murgia, R. Nemmen, E. Nuss, T. Nymark, M. Ohno, T. Ohsugi, A. Okumura, N. Omodei, M. Orienti, D. Paneque, J. S. Perkins, M. Pesce-Rollins, F. Piron, G. Pivato, T. A. Porter, J. L. Racusin, S. Rainò, R. Rando, M. Razzano, S. Razzaque, A. Reimer, O. Reimer, S. Ritz, M. Roth, F. Ryde, A. Sartori, J. D. Scargle, A. Schulz, C. Sgrò, E. J. Siskind, G. Spandre, P. Spinelli, D. J. Suson, H. Tajima, H. Takahashi, J. G. Thayer, J. B. Thayer, L. Tibaldo, M. Tinivella, D. F. Torres, G. Tosti, E. Troja, T. L. Usher, J. Vandenbroucke, V. Vasileiou, G. Vianello, V. Vitale, M. Werner, B. L. Winer, K. S. Wood, S. Zhu Nov. 21, 2013 astro-ph.HE Gamma-ray burst (GRB) 130427A is one of the most energetic GRBs ever observed. The initial pulse up to 2.5 s is possibly the brightest well-isolated pulse observed to date. A fine time resolution spectral analysis shows power-law decays of the peak energy from the onset of the pulse, consistent with models of internal synchrotron shock pulses. However, a strongly correlated power-law behavior is observed between the luminosity and the spectral peak energy that is inconsistent with curvature effects arising in the relativistic outflow. It is difficult for any of the existing models to account for all of the observed spectral and temporal behaviors simultaneously. Dark Matter Search Perspectives with GAMMA-400 (1307.2345) A.A. Moiseev, A.M. Galper, O. Adriani, R.L. Aptekar, I.V. Arkhangelskaja, A.I. Arkhangelskiy, G.A.Avanesov, L.Bergstrom, M.Boezio, V.Bonvicini, K.A.Boyarchuk, V.A.Dogiel, Yu.V. Gusakov, M.I. Fradkin, Ch. Fuglesang, B.I. Hnatyk, V.A. Kachanov, V.A. Kaplin, M.D. Kheymits, V. Korepanov, J. Larsson, A.A. Leonov, F. Longo, P. Maestro, P. Marrocchesi, E.P. Mazets, V.V. Mikhailov, E. Mocchiutti, N. Mori, I. Moskalenko, P.Yu. Naumov, P. Papini, M. Pearce, P. Picozza, M.F. Runtso, F. Ryde, R. Sparvoli, P. Spillantini, S.I. Suchkov, M. Tavani, N.P. Topchiev, A. Vacchi, E. Vannuccini, Yu.T. Yurkin, N. Zampa, V.N. Zarikashvili, V.G. Zverev July 9, 2013 astro-ph.IM, astro-ph.HE GAMMA-400 is a future high-energy gamma-ray telescope, designed to measure the fluxes of gamma-rays and cosmic-ray electrons + positrons, which can be produced by annihilation or decay of dark matter particles, and to survey the celestial sphere in order to study point and extended sources of gamma-rays, measure energy spectra of Galactic and extragalactic diffuse gamma-ray emission, gamma-ray bursts, and gamma-ray emission from the Sun. GAMMA-400 covers the energy range from 100 MeV to ~3000 GeV. Its angular resolution is ~0.01 deg(Eg > 100 GeV), and the energy resolution ~1% (Eg > 10 GeV). GAMMA-400 is planned to be launched on the Russian space platform Navigator in 2019. The GAMMA-400 perspectives in the search for dark matter in various scenarios are presented in this paper The Space-Based Gamma-Ray Telescope GAMMA-400 and Its Scientific Goals (1306.6175) A.M. Galper, O. Adriani, R.L. Aptekar, I.V. Arkhangelskaja, A.I. Arkhangelskiy, G.A. Avanesov, L. Bergstrom, E.A. Bogomolov, M. Boezio, V. Bonvicini, K.A. Boyarchuk, V.A. Dogiel, Yu.V. Gusakov, M.I. Fradkin, Ch. Fuglesang, B.I. Hnatyk, V.A. Kachanov, V.V. Kadilin, V.A. Kaplin, M.D. Kheymits, V. Korepanov, J. Larsson, A.A. Leonov, F. Longo, P. Maestro, P. Marrocchesi, V.V. Mikhailov, E. 
Mocchiutti, A.A. Moiseev, N. Mori, I. Moskalenko, P.Yu. Naumov, P. Papini, M. Pearce, P. Picozza, M.F. Runtso, F. Ryde, R. Sparvoli, P. Spillantini, S.I. Suchkov, M. Tavani, N.P. Topchiev, A. Vacchi, E. Vannuccini, G.I. Vasiliev, Yu.T. Yurkin, N. Zampa, V.N. Zarikashvili, V.G. Zverev June 26, 2013 astro-ph.IM The design of the new space-based gamma-ray telescope GAMMA-400 is presented. GAMMA-400 is optimized for the energy 100 GeV with the best parameters: the angular resolution ~0.01 deg, the energy resolution ~1%, and the proton rejection factor ~10E6, but is able to measure gamma-ray and electron + positron fluxes in the energy range from 100 MeV to 10 TeV. GAMMA-400 is aimed to a broad range of science topics, such as search for signatures of dark matter, studies of Galactic and extragalactic gamma-ray sources, Galactic and extragalactic diffuse emission, gamma-ray bursts, as well as high-precision measurements of spectra of cosmic-ray electrons + positrons, and nuclei. Direct measurement of time-dependent density-density correlations in a solid through the acoustic analog of the dynamical Casimir effect (1301.3503) M. Trigo, M. Fuchs, J. Chen, M. P. Jiang, M. E. Kozina, G. Ndabashimiye, M. Cammarata, G. Chien, S. Fahy, D. M. Fritz, K. Gaffney, S. Ghimire, A. Higginbotham, S. L. Johnson, J. Larsson, H. Lemke, A. M. Lindenberg, F. Quirin, K. Sokolowski-Tinten, C. Uher, J. S. Wark, D. Zhu, D. A. Reis Jan. 15, 2013 cond-mat.mtrl-sci The macroscopic characteristics of a solid, such as its thermal, optical or transport properties are determined by the available microscopic states above its lowest energy level. These slightly higher quantum states are described by elementary excitations and dictate the response of the system under external stimuli. The spectrum of these excitations, obtained typically from inelastic neutron and x-ray scattering, is the spatial and temporal Fourier transform of the density-density correlation function of the system, which dictates how a perturbation propagates in space and time. As frequency-domain measurements do not generally contain phase information, time-domain measurements of these fluctuations could yield a more direct method for investigating the excitations of solids and their interactions both in equilibrium and far-from equilibrium. Here we show that the diffuse scattering of femtosecond x-ray pulses produced by a free electron laser (FEL) can directly measure these density-density correlations due to lattice vibrations in the time domain. We obtain spectroscopic information of the lattice excitations with unprecedented momentum- and frequency- resolution, without resolving the energy of the outgoing photon. Correlations are created via an acoustic analog of the dynamical Casimir effect, where a femtosecond laser pulse slightly quenches the phonon frequencies, producing pairs of squeezed phonons at momenta +q and -q. These pairs of phonons manifest as macroscopic, time-dependent coherences in the displacement correlations that are then probed directly by x-ray scattering. Since the time-dependent correlations are preferentially created in regions of strong electron-phonon coupling, the time-resolved approach is natural as a spectroscopic tool of low energy collective excitations in solids, and their microscopic interactions, both in linear response and beyond. APEX sub-mm monitoring of gamma-ray blazars (1206.3799) S. Larsson, L. Fuhrmann, A. Weiss, E. Angelakis, T. P. Krichbaum, I. Nestoras, J. A. Zensus, M. Axelsson, D. Nilsson, F. Ryde, L. 
Hjalmarsdotter, J. Larsson, A. Lundgren, F. Mac-Auliffe, R. Parra, G. Siringo June 17, 2012 astro-ph.HE So far, no systematic long-term blazar monitoring programs and detailed variability studies exist at sub-mm wavelengths. Here, we present a new sub-mm blazar monitoring program using the APEX 12-m telescope. A sample of about 40 gamma-ray blazars has been monitored since 2007/2008 with the LABOCA bolometer camera at 345 GHz. First light curves, preliminary variability results and a first comparison with the longer cm/mm bands (F-GAMMA program) are presented, demonstrating the extreme variability characteristics of blazars at such short wavelengths. X-ray illumination of the ejecta of Supernova 1987A (1106.2300) J. Larsson, C. Fransson, G. Östlin, P. Gröningsson, A. Jerkstrand, C. Kozma, J. Sollerman, P. Challis, R. P. Kirshner, R. A. Chevalier, K. Heng, R. McCray, N. B. Suntzeff, P. Bouchet, A. Crotts, J. Danziger, E. Dwek, K. France, P. M. Garnavich, S. S. Lawrence, B. Leibundgut, P. Lundqvist, N. Panagia, C. S. J. Pun, N. Smith, G. Sonneborn, L. Wang, J. C. Wheeler June 12, 2011 astro-ph.SR We present the late-time optical light curve of the ejecta of SN 1987A measured from HST imaging observations spanning the past 17 years. We find that the flux from the ejecta declined up to around year 2001, powered by the radioactive decay of 44Ti. Then the flux started to increase, more than doubling by the end of 2009. We show that the increase is the result of energy deposited by X-rays produced in the interaction with the circumstellar medium. We suggest that the change of the dominant energy input to the ejecta, from internal to external, marks the transition from supernova to supernova remnant. The details of the observations and the modelling are described in the accompanying supplementary information. Spectral components in the bright, long GRB 061007: properties of the photosphere and the nature of the outflow (1102.4739) J. Larsson, F. Ryde, C. Lundman, S. McGlynn, S. Larsson, M. Ohno. K. Yamaoka We present a time-resolved spectral analysis of the bright, long GRB 061007 (z=1.261) using Swift BAT and Suzaku WAM data. We find that the prompt emission of GRB 061007 can be equally well explained by a photospheric component together with a power law as by a Band function, and we explore the implications of the former model. The photospheric component, which we model with a multicolour blackbody, dominates the emission and has a very stable shape throughout the burst. This component provides a natural explanation for the hardness-intensity correlation seen within the burst and also allows us to estimate the bulk Lorentz factor and the radius of the photosphere. The power-law component dominates the fit at high energies and has a nearly constant slope of -1.5. We discuss the possibility that this component is of the same origin as the high-energy power laws recently observed in some Fermi LAT bursts. Identification and properties of the photospheric emission in GRB090902B (0911.2025) F. Ryde, M. Axelsson, B.B. Zhang, S. McGlynn, A. Pe'er, C. Lundman, S. Larsson, M. Battelino, B. Zhang, E. Bissaldi, J. Bregeon, M.S. Briggs, J. Chiang, F. de Palma, S. Guiriec, J. Larsson, F. Longo, S. McBreen, N. Omodei, V. Petrosian, R. Preece, A.J. van der Horst Dec. 26, 2009 astro-ph.CO, astro-ph.HE The Fermi Gamma-ray Space Telescope observed the bright and long GRB090902B, lying at a redshift of z = 1.822. 
Together, the Large Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM) cover the spectral range from 8 keV to >300 GeV. Here we show that the prompt burst spectrum is consistent with emission from the jet photosphere combined with non-thermal emission described by a single power-law with photon index -1.9. The photosphere gives rise to a strong quasi-blackbody spectrum which is somewhat broader than a single Planck function and has a characteristic temperature of ~290 keV. We derive the photospheric radius Rph = (1.1 \pm 0.3) x 10^12 Y^{1/4} cm and the bulk Lorentz factor of the flow, which is found to vary by a factor of two and has a maximal value of Gamma = 750 Y^{1/4}. Here Y is the ratio between the total fireball energy and the energy emitted in the gamma-rays. We find that during the first quarter of the prompt phase the photospheric emission dominates, which explains the delayed onset of the observed flux in the LAT compared to the GBM. We model the photospheric emission with a multi-color blackbody, and its shape indicates that the photospheric radius increases at higher latitudes. We interpret the broad-band emission as synchrotron emission at R ~ 4x10^15 cm. Our analysis emphasizes the importance of having high temporal resolution when performing spectral analysis on GRBs, since there is strong spectral evolution.
Length of Curve Calculator. Learn how to calculate the length of a curve. The length of a curve, which is also called the arc length of a function, is the total distance traveled by a point when it follows the graph of a function along an interval [a, b]. To visualize what the length of a curve looks like, we can pretend a function such as y = f(x) = x² is a rope that was laid down on the x-y coordinate plane starting at x = -2 and ending at x = 2. This rope is not pulled tight since it is laid down in the shape of a parabola. However, if we were to pull on both ends it would become a tight, linear rope. The length of this rope is the length of the curve y = f(x) = x² from x = -2 to x = 2, notated as the interval [-2, 2]. Finding the length of a curve is another useful tool for our problem-solving toolbox. We can use this concept for more than just completing some math problems. What if we are designing a rocket nozzle and want to find out how much ablative coating is required for the inside of this nozzle? An ablative coating protects a substrate that is exposed to high-velocity, high-temperature gases and particles for a finite amount of time. A substrate is the material that the coating is applied to. Ablatives are commonly used on space vehicles and launch equipment to protect critical components under the resulting thermal loads seen during launch and flight. (Image credit: NASA.) If we are designing a bell-shaped rocket nozzle that is parabolic in shape, we can take the function for the inner profile of the nozzle and revolve that profile around an axis to find the surface area of the nozzle's inner surface. This essentially boils down to finding the length of the curve (function), multiplying it by a constant and the function itself, and then integrating over the interval of interest to find the surface area. Additionally, by taking the coating's weight per unit area into account, we can then determine how much weight is added to our system when the coating is applied to the surface area of interest. The formula for calculating the length of a curve is given as: $$\begin{align} L = \int_{a}^{b} \sqrt{1 + \left( \frac{dy}{dx} \right)^2} \: dx \end{align}$$ where L is the length of the function y = f(x) on the x interval [a, b] and dy/dx is the derivative of the function y = f(x) with respect to x. The arc length formula is derived from the methodology of approximating the length of a curve. To approximate, we break a curve into many segments. If each segment is treated as a straight line, we may use the distance formula to determine each line's length. Adding up the lengths from these many straight lines gives an approximation of the curve's length. The accuracy of that approximation gets better as we break the curve into a greater number of shorter straight lines. After setting up the distance formula for the length of these line segments, we may use an integral to make those line segments infinite in quantity and infinitesimally short. Each small change in x value is the dx from the arc length formula.
In fact, the arc length formula is a simplified summation of an infinite number of distance formula evaluations for the straight lines. Example Problem 1: $$\begin{align} & \text{1.) Find the length of } y = f(x) = x^{2} \text{ between } -2 \leq x \leq 2 \hspace{20ex} \\ \\ & \hspace{3ex} \text{ Using the arc length formula } \: L = \int_{a}^{b} \sqrt{1 + \left( \frac{dy}{dx} \right)^2} \: dx \\ \\ & \text{2.) Given } y = f(x) = x^2\text{, find } \frac{dy}{dx} : \\ \\ & \hspace{3ex} \frac{dy}{dx} = 2 \cdot x\\ \\ & \text{3.) Plug lower x limit } a \text{, upper x limit } b \text{, and } \frac{dy}{dx} \text{ into the arc length formula} : \\ \\ & \hspace{3ex} L = \int_{-2}^{2} \sqrt{1 + \left(2 \cdot x\right)^2} \: dx\\ \\ & \text{4.) Solve for length by evaluating the integral}: \\ \\ & \hspace{3ex} L = \int_{-2}^{2} \sqrt{1 + \left(2 \cdot x\right)^2} \: dx = 9.2936 \end{align}$$ Example Problem 2: $$\begin{align} & \text{1.) Find the length of } y = f(x) = \ln \left(x\right) \: - \: 2x \text{ between } \frac{2}{3} \leq x \leq 12 \hspace{20ex} \\ \\ & \hspace{3ex} \text{ Using the arc length formula } \: L = \int_{a}^{b} \sqrt{1 + \left( \frac{dy}{dx} \right)^2} \: dx \\ \\ & \text{2.) Given } y = f(x) = \ln \left(x\right) \: - \: 2x \text{, find } \frac{dy}{dx} : \\ \\ & \hspace{3ex} \frac{dy}{dx} = - 2 + \frac{1}{x}\\ \\ & \text{3.) Plug lower x limit } a \text{, upper x limit } b \text{, and } \frac{dy}{dx} \text{ into the arc length formula} : \\ \\ & \hspace{3ex} L = \int_{\frac{2}{3}}^{12} \sqrt{1 + \left(- 2 + \frac{1}{x}\right)^2} \: dx\\ \\ & \text{4.) Solve for length by evaluating the integral}: \\ \\ & \hspace{3ex} L = \int_{\frac{2}{3}}^{12} \sqrt{1 + \left(- 2 + \frac{1}{x}\right)^2} \: dx = 22.8515 \end{align}$$ How the Calculator Works: The Voovers Length of Curve Calculator is written in the web programming languages HTML (HyperText Markup Language), CSS (Cascading Style Sheets), and JS (JavaScript). The HTML creates the architecture of the calculator, the CSS provides the visual styling, and the JS provides all functionality and interactivity. When the "calculate" button is clicked, your inputted function and interval are read by the JS routine. The routine calls on a JS computer algebra system (CAS) which can perform a derivative symbolically, maneuvering equations and applying derivative rules just like a person! The CAS performs the differentiation to find dy/dx. Then, that expression is plugged into the arc length formula. The integral is evaluated, and that answer is rounded to the fourth decimal place. The final length of curve result is printed to the answer area along with the solution steps. Then, a large list of (x, y) coordinate pairs is generated for the function. These coordinate pairs are fed to a JS graphing utility that draws a smooth curve through the points. This function plot is displayed below the solution steps.
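The same computation is easy to reproduce outside the calculator. Below is a minimal sketch in Python (an illustrative choice; the actual Voovers implementation described above is JavaScript, and the function and parameter names here are invented for the example). It performs the symbolic differentiation step with SymPy, evaluates the arc length integral numerically with SciPy, and also computes the straight-line-segment approximation discussed in the lesson.

```python
import numpy as np
import sympy as sp
from scipy.integrate import quad

def arc_length(f_expr, a, b, n_segments=1000):
    """Return (integral estimate, polyline estimate) of the arc length of f_expr on [a, b]."""
    a, b = float(a), float(b)
    x = sp.symbols('x')
    dfdx = sp.diff(f_expr, x)                                   # symbolic derivative (the CAS step)
    integrand = sp.lambdify(x, sp.sqrt(1 + dfdx**2), 'numpy')

    # Arc length formula: L = integral from a to b of sqrt(1 + (dy/dx)^2) dx
    L_integral, _ = quad(integrand, a, b)

    # Segment (distance-formula) approximation described in the lesson
    xs = np.linspace(a, b, n_segments + 1)
    ys = sp.lambdify(x, f_expr, 'numpy')(xs)
    L_polyline = float(np.sum(np.hypot(np.diff(xs), np.diff(ys))))

    return L_integral, L_polyline

if __name__ == "__main__":
    x = sp.symbols('x')
    print(arc_length(x**2, -2, 2))               # ~9.2936, Example Problem 1
    print(arc_length(sp.log(x) - 2*x, 2/3, 12))  # ~22.8515, Example Problem 2
```

For smooth functions like these, the 1000-segment polyline estimate should already agree with the integral to roughly four decimal places, which is a concrete way of seeing the limiting argument behind the arc length formula.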
Integrating sol-gel and carbon dots chemistry for the fabrication of fluorescent hybrid organic-inorganic films. Stefania Mura, Róbert Ludmerczki, Luigi Stagi, Sebastiano Garroni, Carlo Maria Carbonaro, Pier Carlo Ricci, Maria Francesca Casula, Luca Malfatti & Plinio Innocenzi. Scientific Reports volume 10, Article number: 4770 (2020). Subjects: Optical properties and devices; Organic–inorganic nanostructures. Highly fluorescent blue and green-emitting carbon dots have been designed to be integrated into sol-gel processing of hybrid organic-inorganic materials through surface modification with an organosilane, 3-(aminopropyl)triethoxysilane (APTES). The carbon dots have been synthesised using citric acid and urea as precursors; the intense fluorescence exhibited by the nanoparticles, among the highest reported in the scientific literature, has been stabilised against quenching by APTES. When the modification is carried out in an aqueous solution, it leads to the formation of silica around the C-dots and an increase of luminescence, but also to the formation of large clusters which do not allow the deposition of optically transparent films. On the contrary, when the C-dots are modified in ethanol, the APTES improves the stability in the precursor sol even if a passivating thin silica shell does not form. Hybrid films containing APTES-functionalized C-dots are transparent with no traces of C-dots aggregation and show an intense luminescence in the blue and green range. Carbon dots (C-dots) are fluorescent nanomaterials with optical properties comparable to semiconductor quantum dots. C-dots, however, have a much lower cost and environmental impact, which make them a hot topic of research1,2. A major advantage is the possibility of producing C-dots from an almost endless variety of precursors and methods. On the other hand, strict control of the properties through the process is still challenging to achieve, and the main efforts are now dedicated to obtaining reliable and reproducible syntheses. Citric acid (CA), alone or in combination with other compounds, is one of the most popular precursors for C-dots3. CA-based C-dots have different carboxy groups on their surface, which increase the solubility and allow surface passivation or functionalization with organic molecules4,5,6 or polymers7. In general, pure CA C-dots, without any modification, show a weak emission, and doping1 with B, N, S, Si and P atoms is a possible solution to improve their quantum yield. Most of the CA C-dots are doped with nitrogen, which enhances the luminescence by producing azo-compounds through the reaction between the carboxylic and amino groups; after carbonisation, they form water-dispersible and highly emitting C-dots8. Different amines have been used for this purpose, such as ethylenediamine (EDA)2,5,9, hexamethylenetetramine6, o-phenylenediamine (o-PD)10, triethylenetetramine11, hexadecylamine (HDA)8, and triethanolamine6. Quantum yields (QY) under 8% have been obtained for most of the amines, with the exception of EDA6, which gives a QY of 86%. Urea, because of its high nitrogen content, can be used for doping CA C-dots1,5,12,13,14,15,16,17, and different methods have been developed so far. Hydrothermal treatment in an autoclave and microwave exposure are simple synthesis routes for producing luminescent C-dots from citric acid and urea.
A low QY (16%)1 has been obtained by processing the dots via oven treatments, with even lower QY values observed in microwave-processed samples (10–15%)11,13,14. An exception has been reported15 for a synthesis carried out in a microwave; however, the synthesis was followed by purification of the product through size-exclusion chromatography, achieving a QY of 73%. By using the autoclave treatment, one of the best QY values has been obtained in toluene (51.2%), using rhodamine 6G in ethanol as a standard12. A higher value (78.8% QY) has been reported after an autoclave and annealing treatment at 250 °C, using quinine sulphate as the reference standard17. All the recent works point out the difficulty of producing C-dots with high quantum yield (QY) and emissive products at higher wavelengths (>500 nm)2. In any case, a comparison of the different QY values reported in the literature should be made with care because most of the measurements have been performed using a reference fluorescent dye (Relative Quantum Yield) and at different wavelengths; this can result in much higher QYs with respect to the real absolute value. Another important goal is the production of functionalized C-dots to be integrated into materials and processed in the form of films for solid-state photonic applications. C-dots usually exhibit a fair solubility in water and other polar solvents; however, surface modification is expected to increase the solubility, avoiding aggregation and improving the stability towards quenching. Up to now, only a few examples have reported on surface modification for embedding the C-dots into polymeric matrices, and even fewer in inorganic or hybrid organic-inorganic hosts obtained via sol-gel processing9. To fulfil this goal, we have developed highly emissive blue and green C-dots which have been functionalised by 3-aminopropyltriethoxysilane to achieve effective incorporation within a hybrid film. The post-synthesis grafting process, in fact, allows for a better control of the C-dots properties to be used for the design of sol-gel nanocomposites. Incorporation of the C-dots in a sol-gel material is, in fact, usually realised by dissolving the nanoparticles into the precursor sol17,18,19. This is the simplest route, but homogeneous dispersion of the C-dots within the film is difficult to achieve; at the same time, full integration with the sol-gel chemistry20,21,22 via sol-gel hydrolysis and condensation reactions23 is also hampered. In the present work, we have successfully obtained blue and green hybrid organic-inorganic films using C-dots modified with an organosilane, which has allowed a full integration with the sol-gel chemistry. This step is very important for the future development of solid-state optical devices based on C-dots. Keeping in mind the purpose of the present work, we have developed a simple but effective synthesis which allows the preparation of highly fluorescent C-dots modified with an organofunctional alkoxide, 3-(aminopropyl)triethoxysilane (APTES). These C-dots can be well integrated into any sol-gel process to obtain fluorescent hybrid materials with controlled properties. We have prepared several batches of samples using different CA/urea molar ratios, and we have found that the best performances in the blue and green emission ranges are obtained using 1:2 (CU2) (blue) and 1:25 (CU25) (green) CA/urea molar ratios. Blue C-dots The UV-Vis spectra of the CU2 sample dispersed in water (Fig.
1a) (black line) show two UV absorption bands attributed to the π- π* transition of aromatic sp2 domains (234 nm) and to the n - π* transition of C=O bond (345 nm)13. Another absorption band, peaking at 425 nm, is assigned to n- π* transitions of the functional groups on the C-dots surfaces. (a) UV-vis absorption spectra of CU2 aqueous solutions (concentration 0.1 mg mL−1) (black line), and after the functionalization with APTES (red line). (b) Emission spectra (λex = 350 nm) of CU2 samples (1 mg L−1 concentration) (black line), and after the functionalization with APTES (red line). (c) 3D emission-excitation-intensity spectra of CU2 sample before and (d) after the functionalization with APTES in water. Interestingly this band at longer wavelengths completely disappears upon functionalization of the CU2 dots with APTES in water (red line). Figure 1c,d show the 3D photoluminescence spectra (excitation (y-axis), emission (x-axis), intensity (false colours scale)) of the C-dots samples in water before (Fig. 1c) and after the functionalization with APTES (Fig. 1d). The CU2 spectra are characterised by two main emissions, with maxima observed at 445 and 520 nm in emission and 350 and 420 nm in excitation, in accordance with the UV-vis spectra. The emission peaking at 445 nm (Fig. 1b) (λex = 350 nm) increases in intensity after the functionalization with APTES while the 520 nm band in CU2 is no longer observed, such as in the UV-vis spectra (Fig. 1d). Because the UV-Vis absorption of citrazinic acid is characterised by a main band peaking around 340 nm, this molecule and its derivates should be the primary source of the fluorescence at around 450 nm1,7,13 (Supplementary Information, Fig. S1). The emission at 520 nm is, instead, quenched upon functionalization by APTES (vide infra). The C-dots after the functionalization with APTES show a 12% increase of the absorbance, (Fig. 1a), a blue shift of the absorption maximum from 345 to 337 nm, and a 10% increase of emission intensity (Fig. 1b). We have also experimentally observed a significant change in the optical response if the reaction of APTES with the C-dots is carried out in ethanol or water (vide infra). In ethanol, in fact, the reactivity of APTES is slower and the solutions are more stable over time. If APTES is mixed with the C-dots in EtOH, the 425 nm band is not quenched anymore, an increase in absorbance and emission intensity are also observed (Supplementary Information, Fig. S2a,b). The quantum yield (QY) of CU2, using quinine sulfate24 as reference (λex = 365 nm, QY 55% in 0.1 M H2SO4) (Fig. S3), is 30% (Supplementary Information, Table S1). This is one of the highest values reported in the literature for blue C-dots synthesised by citric acid and urea. Green C-dots To obtain green C-dots with a very high emission, we have increased the urea content up to CA: urea = 1: 25 molar ratio (CU25). This is an optimised value which gives highly fluorescent green C-dots. The CU25 UV-vis spectra (Fig. 2a) show the presence of different absorption bands around 213, 248, 272 and 410 nm. In a recent article, Kasprzyk et al.13 have discussed the origin of the green emission of CA-urea C-dots formed at high molar ratios. They have identified 4-hydroxy-1H-pyrrolo[3,4-c]pyridine-1,3,6(2 H,5 H)-trione (HPPT), which forms from the citrazinic acid, as the main source of fluorescence. 
By comparing both the absorption and the photoluminescence of our samples with respect to those results, we attribute the green emission of CU2 and CU25 samples to the formation of HPPT fluorophores. The strong UV band gives a signature of this molecule at 410 nm attributed to n- π* transitions of the functional groups on the C-dots surfaces. The emission shows (λex = 400 nm) a 70 nm shift of the maximum with respect to the blue C-dots, from 450 to 520 nm (Fig. 2b) and is 85% lower with respect to the blue C-dots. The absorption band at 410 nm after the APTES functionalization is quenched, while a new band at 328 nm is, instead, detected (Fig. 2a). The emission spectra (Fig. 2b) show the decrease in intensity of the band in the green (520 nm) and the rise of a 466 nm band in the blue (Fig. 2c,d). These effects are attributed, such as in the case of blue C-dots, to the formation of a silica passivation layer. Because the main source of green emission in the C-dots is mainly the surface5, the modification by APTES causes the changes in the absorption and emission spectra. (a) UV-vis absorption spectra of CU25 C-dots in aqueous solutions (0.1 mg mL−1 concentration) (black line) and after the functionalization with APTES (red line). (b) Emission spectra (λex = 400 nm) of CU25 C-dots (1 mg L−1 concentration), (black line) and after the functionalization with APTES (red line). 3D emission-excitation-intensity spectra (c) before and (d) after the functionalization with APTES. The UV-Vis and emission spectra (Supplementary Information, Fig. S4a,b) do not change if the C-dots are dissolved in ethanol, instead of water. The intensity of the spectra also results three times larger in comparison with the samples obtained by functionalization in water (Fig. 3). The PL emission spectra and digital images of CU2 and CU25 in water and EtOH have been directly compared in Fig. S5 for the sake of clarity. The CU2 dots show an asymmetric emission 3D map, which is composed of two components, by the UV-vis spectra. The CU25 spectra show, instead, differently from what has been observed in water (Fig. 3), only one component. 3D emission-excitation-intensity spectra of CU2 C-dots before (a) and after (b) the functionalization with APTES in ethanol. CU25 C-dots before (c) and after (d) the functionalization with APTES in ethanol (1 mg L−1 concentration). The dot lines in the spectra are a guide for eyes to identify the maxima of emission and excitation. The CU25 C-dots in ethanol have shown a very high QY, 99.5% (Supplementary Information, Table S1), the highest to our knowledge reported in the literature for the method23,24 based on rhodamine 6 G (QY 95% in ethanol) as reference (Supplementary Information, Fig. S6). We have also measured the Absolute Quantum Yield (AQY) using an integrating sphere. This value is not dependent on the reference standard (Relative Quantum Yield)24 and can be used to compare the QY of C-dots prepared in different experimental conditions. The AQY of blue C-dots is: 9.3% (CU2 in H2O), 29.1% (CU2 in EtOH) and 30.7% (CU25 in EtOH) and 12.12% (CU25 in H2O) (Supplementary Information, Fig. S7). The difference between the relative and absolute QY of CU2 (30 and 29 respectively) and CU25 (99.5 and 30) is due to the dyes used as a standards for the relative QY estimate. In fact, these dyes show different QY depending on their purity, the solvent used, the concentration and the environmental conditions like the temperature. 
These differences can lead to a large variability of the relative quantum yield for Rhodamine B and 6 G24. These absolute values are lower than those obtained against a reference standard, which in general gives an overestimation of the QY. TR-PL measurements The time-resolved photoluminescence (TR-PL) streak images have been collected by exciting four samples (CU2-APTES and CU25-APTES in water and ethanol) at 350, 370 and 400 nm in a time range of 100 ns, allowing extraction of both the PL spectra and the decay time profiles. Figure 4a reports the case of CU2 C-dots modified with APTES in water as an example (the inset shows the streak image; the time profile and the PL spectrum have been extracted by integration of the full image over wavelength or time scale, respectively). (a) Emission spectrum (red curve, bottom x-axis) and decay time profile (blue curve, top x-axis) of CU2 C-dots modified with APTES in water upon excitation at 350 nm. The inset shows the streak image. (b) Emission spectrum integrated over a time window of 10 ns (black curve) and of 80 ns (blue curve) of CU25 C-dots modified with APTES in ethanol excited at 350 nm. (c) Decay time profile of CU25 C-dots modified with APTES in water (blue curve) and in ethanol (green curve) excited at 350 nm and 400 nm, respectively. The whole set of samples has been analysed by fitting the time profiles with two exponential decays, of about 2–4 ns and 10 ns, with an estimated uncertainty of about 1.4 ns (estimated through the 10–90% rise time). The faster decay is beyond the scope of the present investigation, being at the resolution limit of the experimental set-up for this time scale. We have also checked it over faster time windows (at 5 ns, with an estimated uncertainty of 0.2 ns), confirming the value of about 2 ns. We have also observed that this decay affects the emission spectrum as a whole, suggesting the presence of a non-radiative contribution that can be accessed from the entire chromophore ensemble related to the recorded emission. The slower decay (≈10 ns) can generally be fitted in the energy space with two Gaussian bands peaking around 2.5 eV (500 nm) and 2.8 eV (450 nm). The relative contribution of these bands depends on the synthesis conditions (blue or green C-dots), thus confirming the results gathered with the excitation/emission maps. We have not been able to single out the decay of each band, a feature indicating that the two emissions have comparable decays. To extract more information from the streak images, we have integrated the signal over different time windows, the first with a duration of 10 ns and the following one of 80 ns. The comparison of the two spectra shows that the blue band is faster than the green one, its relative contribution being reduced in the second time window (see Fig. 4b). Then, considering the effect of APTES in water and ethanol in CU25 C-dots samples (Fig. 4c), we have compared the streak images excited at 350 and 400 nm for the water and ethanol cases, respectively. Indeed, in the first case we have been able to isolate a single emission band at about 450 nm with a decay time of 8.7 ns, while, in the second one, we have singled out the band at 510 nm with a decay time of 11.6 ns. For a better comparison, the lifetimes of the C-dots before and after the functionalization with APTES have been reported in Supplementary Information Table S2.
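As an illustration of this kind of two-component analysis, the sketch below shows how a decay profile could be fitted with two exponential components using SciPy; it is not the authors' actual fitting routine, and the time axis, amplitudes and lifetimes are invented for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, a1, tau1, a2, tau2):
    """Two-component decay, I(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Illustrative decay trace over a 100 ns window; the lifetimes below are
# assumptions, roughly matching the ~2-4 ns and ~10 ns components in the text.
t = np.linspace(0, 100, 500)                      # time axis in ns
rng = np.random.default_rng(0)
signal = biexponential(t, 0.6, 3.0, 0.4, 10.0) + rng.normal(0, 0.01, t.size)

# Fit with rough initial guesses for amplitudes and lifetimes.
p0 = [0.5, 2.0, 0.5, 8.0]
popt, pcov = curve_fit(biexponential, t, signal, p0=p0)
a1, tau1, a2, tau2 = popt
print(f"fast component: tau1 = {tau1:.1f} ns, slow component: tau2 = {tau2:.1f} ns")
```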
The C-dots structure upon surface modification Figure 5a shows the FTIR absorption spectra in the 3050–2800 cm−1 range of the CU25 C-dots modified with APTES in water or ethanol; the APTES spectrum is also shown as a reference. (a) FTIR absorption spectra in the 3050–2800 cm−1 range and (b) in the 1250–900 cm−1 range of the CU25 C-dots modified with APTES in water (black line) or in ethanol (red line); the APTES spectrum is also shown as a reference (blue line). In this wavenumber range the stretching bands, symmetric and antisymmetric, of CH2 and CH3 are detected. The CH3 stretching modes (νas at 2974 cm−1 and νs at 2882 cm−1) are assigned to the alkoxy groups (–OCH2CH3) in APTES, and the CH2 stretching modes (νas at 2926 cm−1 and νs at 2865 cm−1) to CH2 in the alkoxide and in the propyl chain in APTES. The spectra of the APTES-modified C-dots prepared using water or ethanol as solvent show a remarkable difference. The signal due to CH3 stretching in the alkoxy groups (νas at 2974 cm−1) completely disappears in the sample prepared using water as solvent. This indicates that APTES has reacted under hydrolytic conditions and that the reaction has been autocatalysed by APTES itself, whose addition to the aqueous solution has increased the pH from neutral to basic (pH = 10). In ethanol, instead, APTES does not react, as shown by the FTIR spectra. Figure 5b shows the FTIR absorption spectra at lower wavenumbers, between 1250 and 900 cm−1. The spectra of APTES and of CU25 C-dots functionalized with APTES in ethanol are characterised by the symmetric stretching νs (C-O) mode at 1073 cm−1 and the rocking ρ (CH3) modes at 1165 and 951 cm−1. The ρ(CH3) vibrational mode disappears in the sample functionalized with APTES in hydrolytic conditions, in accordance with Fig. 5a. This is an indication of the reaction of the alkoxide groups, which hydrolyse and form a silica structure. The CU25 sample modified with APTES in water shows the rise of a large band peaking around 1030 cm−1 with a shoulder around 1078 cm−1. These bands are the signature of the formation of silica structures and are assigned to νas (Si-O-Si). In principle, the C-dots surface carboxyls may react with the amines of APTES to form amide bonds. In general, this reaction is quite difficult to observe because the amines deprotonate the carboxylic acid and do not form reactive carboxylates. In the present conditions, the FTIR analysis (Supplementary Information, Fig. S8) has confirmed that a very small amount of amides is formed, and only when the CU25 dots react with APTES in water. Raman bands of CU25 C-dots modified with APTES in ethanol have been recorded in the spectral region between 1000 and 1700 cm−1 (Fig. 6a), where most of the bands can be assigned to the Raman spectrum of APTES molecules25,26. The main signal, at 1449 cm−1, is assigned to the CH2 scissoring mode27, while the scattering bands at 1307, 1349 and 1412 cm−1 are attributed to the stretching modes of C–C, N–H and CN, respectively28. The Raman spectrum of APTES, however, shows two additional bands between 1600 and 1650 cm−1 25, related to the NH2 vibration modes (bending and stretching, respectively)29, that cannot be clearly observed in our spectra. Whilst no vibrational features of silica can be distinguished in the ethanol sample, the ω1 and D1 silica fingerprint bands, related to the five- and sixfold ring structures and to fourfold ring structures, respectively30,31, are clearly detected in the sample modified by APTES in water (Fig. 6b).
These results confirm that, in ethanol, APTES does not react and does not form interconnected silica structures. (a) Raman spectrum in the 1100–1750 cm−1 range of CU25-APTES in ethanol; Raman spectra in the 350–1000 cm−1 range of the CU25-APTES in (b) ethanol and (c) water. The analysis of FTIR and Raman data combined with the emission spectra of the samples suggests that the C-dots surface modification by APTES in hydrolytic conditions forms a thin silica passivating layer, which protects the C-dots from emission quenching. The autocatalytic effect of APTES, which causes an increase of pH, determines the hydrolysis and condensation reactions of silica. To get a better understanding of the effect of C-dots surface modification by APTES, a systematic analysis of the samples' Z-potential has been performed. Table 1 lists the different values measured for CU2 and CU25 samples in water and ethanol without and with APTES. The samples are stable in water and ethanol, but after the addition of APTES in water, the Z-potential decreases in absolute value, indicating lower stability in solution. This is an effect of silica condensation, which causes quick precipitation of the C-dots. On the other hand, the C-dots/EtOH/APTES system keeps its stability, and the small decrease of the Z-potential is an indication of a small modification, probably via secondary bonding, of the particles' surface. The surface modification by APTES does not affect the stability in solution; however, it plays a primary role in increasing the fluorescence by avoiding the quenching effect due to the solvent molecules (vide supra). Table 1 Zeta potential measurements of CU2 and CU25 samples before and after APTES modification. C-dots have been analysed by TEM to study the morphological changes after APTES modification in ethanol. Figures 7a,b show the as-prepared CU2 and CU25 samples, as a reference. In both cases, the dots are round-shaped with a large size distribution, which ranges from 80 to ≈190 nm. The TEM images, however, do not allow measuring a precise C-dots size because of a partial clusterisation of several dots in the dried state. The C-dots have been deposited on the TEM grids by casting a few droplets of samples dispersed in ethanol, and the solvent evaporation results in an aggregation of C-dots. (a,b) Representative TEM images of CU2 and CU25 in ethanol; (c) and (d) CU2 and CU25 modified with APTES in ethanol. The insets show the magnification of single C-dots in the different samples (scale bar = 50 nm). After surface modification, the C-dots do not show specific morphological differences (Fig. 7c,d). The low contrast provided by the C-dots deposited on the TEM grids makes it, however, extremely difficult to identify the edges of the nanoparticles with precision. We have also tried to magnify isolated C-dots, reported as insets, to get a better view of the nanoparticle surface; we did not observe any evidence of a core-shell structure possibly formed by APTES polycondensation on the C-dots surface, in agreement with the FTIR and Raman data. Sols and films with C-dots Mixing the as-prepared C-dots with the precursor sol results in the formation of aggregates; these solutions have not been employed, therefore, for the film deposition. When the aqueous solution of APTES-functionalized C-dots has been added to the MTES-TEOS precursor sol, the mixture also turned opaque and did not allow depositing optically transparent and homogeneous films.
On the contrary, the ethanol solution of APTES-functionalized C-dots has allowed obtaining a stable sol with a high concentration of nanoparticles. We have analysed the two sols by light scattering, and the results have been compared with those of MTES-TEOS sols containing naked C-dots, previously dispersed in water and ethanol. The C-dots hydrodynamic radii in the four sols show that the naked particles (black bars in Fig. 8) have the tendency to agglomerate and form clusters with a Gaussian distribution peaking around 2 μm (Fig. 8). Upon functionalization by APTES, the average cluster size is reduced to ≈300 nm. Light scattering analysis of C-dots in the precursor solution (black bars) and upon functionalization with APTES in ethanol (red bars) for CU2 (a) and CU25 (b) samples. The FTIR and Raman characterizations (Figs. 5 and 6) have suggested that in ethanol APTES does not react; however, as shown by light scattering, it is very effective in reducing aggregation of the particles in the precursor sol. These experimental results suggest that APTES works as a dispersing agent, with the amine groups bonded to the surface of the C-dots via secondary bonds, which stabilise the particles, avoiding or reducing aggregation. Considering that C-dots are generally negatively charged32, the interaction between APTES and the C-dots surface should occur through hydrogen bonding, probably between the carboxylic or carbonyl ending groups of the nanoparticles and the amino groups of the APTES (Fig. 9a). In the absence of water, which is necessary for the hydrolysis and polycondensation of the alkoxy groups, the hybrid silica precursor serves as a stabilising agent for the C-dots, avoiding clustering and precipitation through steric hindrance. On the other hand, thin silica shells form on APTES-modified C-dots in water (Fig. 9b). Schematics of the C-dots/APTES interactions in ethanol and water (a,b). We have, therefore, used the MTES-TEOS sol containing the alcoholic solutions of APTES-functionalized C-dots to deposit hybrid organic-inorganic silica films. The films are fluorescent under UV illumination with maxima at 440 (CU2) and 490 nm (CU25) (Fig. 10). The green dots are more sensitive to the external environment than the blue dots and, upon incorporation in the sol-gel matrix, show a shift in the emission maximum of around 30 nm to shorter wavelengths. The films have a thickness of around 1 μm and a refractive index of 1.464 and 1.446 for CU2 and CU25, respectively. (a) Emission spectra of the hybrid sol-gel films containing CU2 C-dots dissolved in ethanol (blue line) or CU25 C-dots (green line) functionalized with APTES in ethanol. The inset shows a picture of the films under a UV lamp. (b) Refractive index as a function of the wavelength for CU2 (blue line) and CU25 (green line) films. A simplified synthesis based on citric acid and urea has been designed to produce highly fluorescent blue and green C-dots with a high quantum yield, among the highest reported so far. Time-resolved photoluminescence has revealed that both blue and green emissions have a decay time of around 10 ns, the blue contribution being faster than the green. Functionalization by APTES has proven to be very effective for incorporating the C-dots into a hybrid film. In the aqueous sol, APTES reacts with water and forms a silica shell around the C-dots that avoids emission quenching, but causes the formation of large aggregates which are not suitable for film deposition.
In ethanol, the hydrolysis and condensation of APTES do not occur and the organosilane stabilises the C-dots acting as a surfactant via secondary bonding of the primary amines with the surface groups. C-dots functionalized with APTES in ethanol can be used to prepare hybrid-inorganic materials via liquid phase deposition and optically transparent blue and green fluorescent films have been obtained. The present method allows achieving a full integration of C-dots chemistry with sol-gel processing. Chemicals and reagents Citric acid monohydrate (CA) (purity 99.9%) (Fluka), urea for electrophoresis (purity 98%) (Sigma), 3-(aminopropyl)triethoxysilane (APTES, 98%) (Sigma), tetraethylorthosilicate (TEOS) (Aldrich, 99% purity), methyltriethoxysilane (MTES) (Aldrich, 98% purity), ethanol (EtOH) (Fluka, >99.8%), hydrochloric acid (Sigma-Aldrich, 37% wt/wt), quinine anhydrous (Sigma, >98%), rhodamine 6 G (Sigma), sulfuric acid (Sigma, 99.9%) and water (milli-Q) were used as received without further purification. 381 mm-thick silicon wafers (Si-Mat) and silica slides (Heraeus suprasil 25 × 25 mm slides) were used as substrates for film deposition. The silicon wafers and silica slides were washed with acetone and ethanol and then dried with compressed air before film deposition. Synthesis of C-dots from citric acid and urea The synthesis of C-dots has been done using two components: citric acid and urea. Two different citric acid / urea molar ratios: 1:2 and 1:25 (samples CU2 and CU25, respectively) were employed. CA and urea were dissolved in 10 mL of mQ water and heated in an open vessel using an oil bath at 190 °C for 2 h. The final product was dissolved in water or ethanol and purified in a centrifuge at 12000 rpm, for 20 min, to remove the large particles. Then the surnatant was filtered with a 0.20 µm syringe filter. Solutions with a concentration of 0.1 mg mL−1 and 1 mg mL−1 were prepared for the UV-Vis and for photoluminescence (PL) analysis, respectively. The final product obtained after 2 hours of reaction appeared as a black solid. For the functionalization with APTES, 10 mg of the final product were dissolved in 10 mL of mQ water, sonicated 5 minutes to complete the dissolution and then analyzed by UV-Vis and fluorescence spectroscopy. Functionalization of C-dots with APTES The C-dots prepared with the two different syntheses were functionalized with APTES using ethanol or water as solvent. 330 µL of APTES (3.33% v/v) were added to 10 mL of C-dots in water or EtOH (1 mg mL−1). The solutions were reacted at 25 °C under stirring at 500 rpm in a closed vessel for 72 h; UV and PL spectra were recorded at different reaction times. The pH of the C-dots solutions in water changed from neutral to basic (pH > 10) after the addition of APTES for both CU2 and CU25. Precursor sol preparation The precursor sol to be employed for film deposition was prepared by mixing 3 mL C-dots/APTES in EtOH with 1 mL EtOH, 6 mL MTES, 2 mL TEOS, 0.3 mL water and 0.2 mL HCl (2 M) in a glass vial (molar ratios, MTES:TEOS:EtOH:H2O:HCl = 3.4: 1.0: 7.7: 3.7: 0.1). The sol was stirred for 21 h at room temperature in a closed glass bottle. Then 200 mL of milli-Q water were added to the hybrid sol (MTES–TEOS) and the solution was left to react under stirring for 2 h before deposition. Preparation of hybrid organic-inorganic films The hybrid films were deposited by dip-coating onto silicon wafers and silica slides with a withdrawal rate of 15 cm min−1. 
The deposition was performed at 25 °C and 24% relative humidity; the as deposited films were then placed in an oven at 60 °C. Transparency of the hybrid films was measured with a Nicolet Evolution 300 UV–Vis spectrophotometer in the range 200–600 nm with a bandwidth of 1.5 nm. A clean quartz slide was used as a background reference. PL measurements were collected on silica slides with excitation at 350 and 400 nm for blue and green emitting films, respectively. UV-Vis measurements were performed in absorbance, using a Nicolet Evolution 300 spectrophotometer from 200 to 600 nm, on C-dots solutions at concentration 0.1 mg mL−1. Fluorescence spectroscopic measurements of C-dots solubilized in water and EtOH and on films deposited on silica slides, were performed using a Horiba Jobin Yvon FluoroMax-3 spectrofluorometer; three-dimensional mapping was obtained with a 450 W xenon lamp as the excitation source. Three-dimensional maps were collected with an excitation range of 200–600 nm and an emission range of 200–600 nm with a 5 nm slit for excitation and emission. Fourier-transform infrared (FTIR) spectroscopy analysis was performed with a Bruker infrared Vertex 70 interferometer. The spectra were recorded in absorbance mode between 4000 and 400 cm−1 by averaging 256 scans with 4 cm−1 of resolution. 2 mg of solid were used to prepare KBr pellets for measurements in transmission mode. Attenuated total reflectance (ATR) mode was used to direclty analyze the solids or the evaporating solutions. For the films a silicon wafer was used as the background; the data were analyzed with OPUS 7.0 and ORIGIN Pro 8 software. Micro Raman scattering measurements were carried out in back scattering geometry with the 1064 nm line of an Nd:YAG laser. Measurements were performed in air at room temperature with a B&W TEK (Newark-USA) i-Raman Ex integrated system with a resolution of about 8 cm−1. As previously reported, a Wollam-spectroscopic ellipsometer with fixed angle geometry was used to estimate the thickness and refractive index dispersions of the hybrid films deposited on silicon substrates, by fitting the experimental data with a model for transparent films on Si substrates. The fit showed an average mean square error always lower than 6.533. Time resolved photoluminescence (TR-PL) measurements were recorded by exciting the samples with 200 fs long pulses delivered by an optical parametric amplifier (Light Conversion TOPAS-C) pumped by a regenerative Ti:sapphire amplifier (Coherent Libra-HE). The repetition frequency was 1 kHz and the PL signal was recovered by a streak camera (Hamamatsu C10910) equipped with a grating spectrometer (Princeton Instruments Acton SpectraPro SP-2300). All the measurements were collected in the front face configuration to reduce inner filter effects. Proper emission filters were applied to remove the reflected contribution of the excitation light, as reported in previous studies34. Zeta potential (z) and light scattering were measured on a Zetasizer Nano ZSP (Malvern Instruments) operating in backscatter configuration (θ = 173°) at laser wavelength of λ = 633 nm. Samples were syringe filtered and transferred into a capillary folded elecrophoretic cell for analysis at 25 °C. Quantum Yield (QY) of the C-dots were measured by a relative method; quinine sulfate (QY = 55% in 0.1 M H2SO4) was used as the reference for the 400–480 nm emission range (for CA: Urea = 1: 2), rhodamine 6 G (QY = 95% in ethanol) for the 480–560 nm emission range (for CA: Urea = 1: 25). 
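As a rough numerical illustration of the relative quantum-yield relation detailed in Eq. (1) below, the following sketch computes the QY of a test sample against a reference dye; the reference QY and refractive indices are those quoted in the text, while the intensities and absorbances are invented placeholders, not measured values.

```python
def relative_qy(phi_ref, I_sample, I_ref, A_sample, A_ref, n_sample, n_ref):
    """Relative quantum yield: phi = phi_ref * (I/I_ref) * (A_ref/A) * (n^2 / n_ref^2)."""
    return phi_ref * (I_sample / I_ref) * (A_ref / A_sample) * (n_sample**2 / n_ref**2)

# Illustrative numbers only: quinine sulfate reference (QY = 0.55 in 0.1 M H2SO4)
# and refractive indices 1.33 (water) / 1.36 (ethanol) as quoted in the text;
# the integrated intensities and absorbances below are made-up placeholders.
phi = relative_qy(phi_ref=0.55,
                  I_sample=1.2e6, I_ref=2.0e6,   # integrated emission intensities
                  A_sample=0.05, A_ref=0.08,     # optical densities at the excitation wavelength
                  n_sample=1.33, n_ref=1.33)     # both measured in water here
print(f"relative QY ≈ {phi:.2f}")
```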
The QY was then calculated using Eq. (1) from ref. 22. $$\phi ={\phi }^{{\prime} }\frac{{A}^{{\prime} }}{{I}^{{\prime} }}\frac{{\rm{I}}}{{\rm{A}}}\frac{{{\rm{n}}}^{2}}{{{{\rm{n}}}^{{\prime} }}^{2}}$$ where ϕ is the QY of the testing sample, I is the testing sample's integrated emission intensity, n is the refractive index (1.33 for water and 1.36 for ethanol), and A is the optical density. Φ′, A′, I′ and n′ are values of the referenced fluorescence dyes of known QYs. To obtain the QY values for CU2 we have used the absorbance at 365 nm and an excitation at the same wavelength; the emission spectra have been integrated in the 380–600 nm range, using quinine as reference. The QY for CU25 has been, instead, calculated using the absorbance and excitation at 420 nm and integrating the emission between 430 and 700 nm. To obtain reliable results, the C-dots and reference dyes solutions were diluted to have an optical absorbance between 0 and 0.1. The data were analyzed using ORIGIN software. QYs were calculated by comparison of the integrated PL intensity vs absorbance curves (refractive index, n, was also considered). Absolute photoluminescence quantum yield (AQY) measurements have been performed using the quanta-ϕ (HORIBA) integrating sphere accessory, attached to the "NanoLog" spectrofluorometer. Water or ethanol were used as a blank ref. 34. Transmission electron microscopy (TEM) images were obtained by using a FEI TECNAI 200 microscope working with a field emission electron gun operating at 200 kV. Sample preparation was done by dispersing the nanoparticles in ethanol by ultrasonication and then dropping them onto a carbon-coated copper grid and drying them for observations35. Zholobak, N. M. et al. Facile fabrication of luminescent organic dots by thermolysis of citric acid in urea melt, and their use for cell staining and polyelectrolyte microcapsule labelling. Beilstein J. Nanotechnol. 7, 1905–1917 (2016). Ding, H., Wei, J. S., Zhong, N., Gao, Q. Y. & Xiong, H. M. Highly Efficient Red-Emitting Carbon Dots with Gram-Scale Yield for Bioimaging. Langmuir 33, 12635–12642 (2017). Ludmerczki, R. et al. Carbon dots from citric acid and its intermediates formed by thermal decomposition. Chem. Eur. J. 25, 11963–11974 (2019). Khan, W. U. et al. High quantum yield green-emitting carbon dots for Fe(III) detection, biocompatible fluorescent ink and cellular imaging. Sci. Rep. 7, 1–9 (2017). Schneider, J. et al. Molecular fluorescence in citric acid-based carbon dots. J. Phys. Chem. C 121, 2014–2022 (2017). Lin, Y. et al. Tunable Fluorescent Silica-Coated Carbon Dots: A Synergistic Effect for Enhancing the Fluorescence Sensing of Extracellular Cu 2+ in Rat Brain. ACS Appl. Mater. Interfaces 7, 27262–27270 (2015). Panniello, A. et al. Luminescent Oil-Soluble Carbon Dots toward White Light Emission: A Spectroscopic Study. J. Phys. Chem. C 122, 839–849 (2018). Suzuki, K. et al. Design of Carbon Dots Photoluminescence through Organo-Functional Silane Grafting for Solid-State Emitting Devices. Sci. Rep. 7, 1–11 (2017). Bhattacharya, D., Mishra, M. K. & De, G. Carbon Dots from a Single Source Exhibiting Tunable Luminescent Colors through the Modification of Surface Functional Groups in ORMOSIL Films. J. Phys. Chem. C 121, 28106–28116 (2017). Vassilakopoulou, A., Georgakilas, V. & Koutselas, I. Encapsulation and protection of carbon dots within MCM-41 material. J. Sol-Gel Sci. Technol. 82, 795–800 (2017). Cheng, J., Wang, C.-F., Zhang, Y., Yang, S. & Chen, S. 
Zinc ion-doped carbon dots with strong yellow photoluminescence. RSC Adv. 6, 37189–37194 (2016). Simões, E. F. C., Leitão, J. M. M. & da Silva, J. C. G. E. Carbon dots prepared from citric acid and urea as fluorescent probes for hypochlorite and peroxynitrite. Microchim. Acta 183, 1769–1777 (2016). Kasprzyk, W. et al. Luminescence phenomena of carbon dots derived from citric acid and urea-a molecular insight. Nanoscale 10, 13889–13894 (2018). Sciortino, A. et al. β-C3N4 Nanocrystals: Carbon Dots with Extraordinary Morphological, Structural, and Optical Homogeneity. Chem. Mater. 30, 1695–1700 (2018). Lai, C. W., Hsiao, Y. H., Peng, Y. K. & Chou, P. T. Facile synthesis of highly emissive carbon dots from pyrolysis of glycerol; Gram scale production of carbon dots/mSiO2 for cell imaging and drug release. J. Mater. Chem. 22, 14403–14409 (2012). Peng, Y., Zhou, X., Zheng, N., Wang, L. & Zhou, X. Strongly tricolor-emitting carbon dots synthesized by a combined aging-annealing route and their bio-application. RSC Adv. 7, 50802–50811 (2017). Chandra, S., Beaune, G., Shirahata, N. & Winnik, F. M. A one-pot synthesis of water soluble highly fluorescent silica nanoparticles. J. Mater. Chem. B 5, 1363–1370 (2017). Liu, C. et al. Fluorescence-Converging Carbon Nanodots-Hybridized Silica Nanosphere. Small 12, 4702–4706 (2016). ADS CAS Article Google Scholar Malfatti, L. & Innocenzi, P. Sol-Gel Chemistry for Carbon Dots. Chem. Rec. 18, 1192–1202 (2018). Suzuki, K. et al. Energy transfer induced by carbon quantum dots in porous zinc oxide nanocomposite films. J. Phys. Chem. C 119, 2837–2843 (2015). Innocenzi, P., Malfatti, L. & Carboni, D. Graphene and carbon nanodots in mesoporous materials: an interactive platform for functional applications. Nanoscale 7, 12759–12772 (2015). Carbonaro, C. M. et al. Carbon Dots in Water and Mesoporous Matrix: Chasing the Origin of their Photoluminescence. J. Phys. Chem. C 122, 25638–25650 (2018). Jiang, K. et al. Red, green, and blue luminescence by carbon dots: Full-color emission tuning and multicolor cellular imaging. Angew. Chemie - Int. Ed. 54, 5360–5363 (2015). Würth, C., Grabolle, M., Pauli, J., Spieles, M. & Resch-Genger, U. Relative and absolute determination of fluorescence quantum yields of transparent samples. Nat. Protoc. 8, 1535–1550 (2013). Hiraoui, M. et al. Spectroscopy studies of functionalized oxidized porous silicon surface for biosensing applications. Mater. Chem. Phys. 128, 151–156 (2011). Jang, M., Cho, I. & Callahan, P. Adsorption of Globular Proteins to Vaccine Adjuvants. Journal of Biochemistry and Molecular Biology 30, 346–351 (1997). Ruan, C., Wang, W. & Gu, B. Detection of alkaline phosphatase using surface-enhanced raman spectroscopy. Anal. Chem. 78, 3379–3384 (2006). Pryce, R. S. & Hench, L. L. Tailoring of bioactive glasses for the release of nitric oxide as an osteogenic stimulus. J. Mater. Chem. 14, 2303–2310 (2004). Shih, P. T. K. & Koenig, J. L. Raman studies of silane coupling agents. Mater. Sci. Eng. 20, 145–154 (1975). King, S. Ring Configurations in a Random Network Model of Vitreous Silica. Nature 213, 1112–1113 (1967). Galeener, F. L., Barrio, R. A., Martinez, E. & Elliot, R. J. Vibrational Decoupling of Rings in Amorphous Solids. Phys. Rev. Lett. 53, 2429–2432 (1984). Sciortino, A., Cannizzo, A. & Messina, F. Carbon Nanodots: A Review—From the Current Understanding of the Fundamental Photophysics to the Full Control of the Optical Response. C 4, 67–102 (2018). Jang, Y. et al. 
Hard X-rays for processing of hybrid organic-inorganic thick films. J. Sync. Rad. 23, 267–273 (2016). Ren, J. et al. Boron oxynitride two-colour fluorescent dots and their incorporation in a hybrid organic-inorganic film. J. Colloid Interface Sci. 560, 398–406 (2019). ADS Article Google Scholar Carboni, D., Lasio, B., Malfatti, L. & Innocenzi, P. Magnetic core-shell nanoparticles coated with a molecularly imprinted organogel for organophosphate hydrolysis. J. Sol-Gel Sci. Technol. 79, 295–302 (2016). We acknowledge the CeSAR (Centro Servizi d'Ateneo per la Ricerca) of the University of Cagliari, Italy for the Time Resolved Photoluminescence experiments. S. Oreta is gratefully acknowledged for helpful discussion. Italian Ministry of Education, University and Research (MIUR) is acknowledged for funding through projects of national interest (PRIN) "CANDL2" (Grant 2017W75RAE). Laboratory of Materials Science and Nanotechnology, CR-INSTM, Department of Chemistry and Pharmacy, University of Sassari, Via Vienna 2, 07100, Sassari, Italy Stefania Mura, Róbert Ludmerczki, Luigi Stagi, Luca Malfatti & Plinio Innocenzi Department of Chemistry and Pharmacy, University of Sassari, Via Vienna 2, 07100, Sassari, Italy Sebastiano Garroni Department of Physics, University of Cagliari, Campus of Monserrato, sp n.8, km 0.700, 09042, Monserrato, Italy Carlo Maria Carbonaro & Pier Carlo Ricci DIMCM-Department of Mechanical, Chemical, and Materials Engineering INSTM and University of Cagliari Via Marengo 2, I, 09123, Cagliari, Italy Maria Francesca Casula Stefania Mura Róbert Ludmerczki Luigi Stagi Carlo Maria Carbonaro Pier Carlo Ricci Luca Malfatti Plinio Innocenzi S.M., R.L., P.I. and L.M. planned the experiments, S.M., S.G., C.M.C., M.F.C, P.C.R., M.C. and R.L, performed the experiments, S.M., L.S., C.M.C., M.F.C, L.M. and P.I. analysed the results. P.I., S.M., L.S. and L.M. wrote the article. All authors reviewed the manuscript. Correspondence to Plinio Innocenzi. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Supplementary Information. Mura, S., Ludmerczki, R., Stagi, L. et al. Integrating sol-gel and carbon dots chemistry for the fabrication of fluorescent hybrid organic-inorganic films. Sci Rep 10, 4770 (2020). https://doi.org/10.1038/s41598-020-61517-x DOI: https://doi.org/10.1038/s41598-020-61517-x Top 100 in Materials Science
Limitations of the study Bayesian spatiotemporal analysis of malaria infection along an international border: Hlaingbwe Township in Myanmar and Tha-Song-Yang District in Thailand Aung Minn Thway1, Chawarat Rotejanaprasert1, Jetsumon Sattabongkot2, Siam Lawawirojwong3, Aung Thi4, Tin Maung Hlaing5, Thiha Myint Soe6 and Jaranit Kaewkungwal1Email author Received: 10 August 2018 Accepted: 9 November 2018 One challenge in moving towards malaria elimination is cross-border malaria infection. The implemented measures to prevent and control malaria re-introduction across the demarcation line between two countries require intensive analyses and interpretation of data from both sides, particularly in border areas, to make correct and timely decisions. Reliable maps of projected malaria distribution can help to direct intervention strategies. In this study, a Bayesian spatiotemporal analytic model was proposed for analysing and generating aggregated malaria risk maps based on the exceedance probability of malaria infection in the township-district adjacent to the border between Myanmar and Thailand. Data of individual malaria cases in Hlaingbwe Township and Tha-Song-Yang District during 2016 were extracted from routine malaria surveillance databases. Bayesian zero-inflated Poisson model was developed to identify spatial and temporal distributions and associations between malaria infections and risk factors. Maps of the descriptive statistics and posterior distribution of predicted malaria infections were also developed. A similar seasonal pattern of malaria was observed in both Hlaingbwe Township and Tha-Song-Yang District during the rainy season. The analytic model indicated more cases of malaria among males and individuals aged ≥ 15 years. Mapping of aggregated risk revealed consistently high or low probabilities of malaria infection in certain village tracts or villages in interior parts of each country, with higher probability in village tracts/villages adjacent to the border in places where it could easily be crossed; some border locations with high mountains or dense forests appeared to have fewer malaria cases. The probability of becoming a hotspot cluster varied among village tracts/villages over the year, and some had close to no cases all year. The analytic model developed in this study could be used for assessing the probability of hotspot cluster, which would be beneficial for setting priorities and timely preventive actions in such hotspot cluster areas. This approach might help to accelerate reaching the common goal of malaria elimination in the two countries. Border areas Spatiotemporal analysis As malaria transmission continually declines, control measures will progressively rely upon precise information of the identified risk factors, as well as the capacity to characterize high-risk areas and populations for targeted interventions [1]. One challenge consistently noted in malaria elimination is cross-border malaria infection, owing to migrant populations being particularly difficult to monitor [2]. Measures to prevent and control malaria re-introduction along the border between countries require the analyses and interpretation of data from disease surveillance systems of both sides in border areas, to make correct and timely decisions [2]. Reliable maps of projected malaria distribution can help in directing intervention strategies, to optimize the use of limited human and financial resources in the areas with greatest need [3]. 
In this study, a spatiotemporal analytic model was proposed for assessing the malaria distribution along the border between Myanmar and Thailand, where there has been consistent malaria incidence for decades. In Myanmar, malaria has been reported in 284 out of 330 townships. The morbidity and mortality rates of malaria in Myanmar were 24.35 per 1000 population and 12.62 per 100,000 population in 1990, and 6.44 per 1000 population and 0.48 per 100,000 population in 2013, respectively. As per the World Malaria Report 2015, 32 million individuals are residing in malaria transmission zones, of whom 16% are in high-risk areas. Although the morbidity and mortality owing to malaria has been declining in Myanmar, concerns remain about population movement, especially of migrants at border areas, and the occurrence of multidrug resistance of Plasmodium falciparum. Such situations may arise from the fact that the Myanmar national malaria control programme (NMCP) cannot provide adequate coverage in some border areas because of the local political setting and military conflicts in those fringe areas [1, 4]. In Thailand, local malaria transmission was reported in 46 of 77 provinces, 155 of 930 districts, and 5502 of 74,956 villages in 2014; this has declined continually to 4512 villages in 2016, as reported by the Thailand NMCP. Malaria cases in Thailand have generally occurred in provinces bordering Myanmar, Cambodia and Malaysia. The challenges for NMCPs in different countries in controlling malaria situations are due to differing government policies, sociocultural and political situations, economic status, and public health infrastructure. However, timely and accurate malaria disease mapping of both sides in border areas could help with understanding the contemporaneous conditions, assessing malaria transmission patterns, and conducting objective attempts at situation management. This approach could yield more precise data for local disease control units to, for example, assess malaria management actions involving the activities of rapid response teams and village health volunteers, or enforce mosquito control and drug allocation [1, 5]. The specific aim of this study was to develop and propose a spatiotemporal analytic model for assessing the malaria situation along the border between Myanmar and Thailand, using surveillance data from Hlaingbwe Township in Myanmar and Tha-Song-Yang District in Thailand. Based on the model, the exceedance probability of malaria incidence in space and time and the effect size of demographic factors associated with malaria infections in the two countries were explored. In addition, the hierarchical modelling framework [3, 6–8] used for the analysis of malaria cases was used to identify spatiotemporal hot-spot clustering of malaria cases in each country. Study area and settings The study was conducted in Hlaingbwe Township in Myanmar and Tha-Song-Yang District in Thailand. Hlaingbwe Township is the third largest township of Kayin State in Myanmar, with a population of 265,883. The area of the township is 4329.8 km2, and it is sub-divided into 75 village tracts. Tha-Song-Yang District is situated in Tak Province of Thailand, with a population of 61,161. The area of the district is 1920.38 km2, and it has 6 sub-districts with a total of 66 villages (Fig. 1). The names of the village tracts and villages in both Tha-Song-Yang and Hlaingbwe are described in the supplemental material (Additional file 1: Table S1, Additional file 2: Table S2). 
Malaria cases were identified as patients who were diagnosed with malaria, with either P. falciparum, Plasmodium vivax, or mixed infection, using microscopy or rapid diagnostic test. Fig. 1: Map of the study area. Data of individual malaria cases were extracted from the routine paper-based surveillance database of the Myanmar NMCP and from Thailand's national electronic Malaria Information System (eMIS). The malaria data used in this study included malaria cases reported at malaria clinics in both countries during the period of January to December 2016. Population and other demographic parameters for Thailand were obtained from the Statistical Yearbook Thailand 2016 [9]. For Myanmar, the information was obtained from the 2014 Myanmar Population and Housing Census published by the Department of Population, Ministry of Immigration and Population [10]. Statistical analytic models A Bayesian spatiotemporal zero-inflated Poisson (ZIP) model was proposed to assess the probability of disease-clustering areas. The ZIP was an appropriate model in this case because the count data of malaria cases in all village tracts/villages over 12 months in the study areas had excess zeros. Some other studies have also used the ZIP model in developing risk maps, for example for mapping the malaria vector sporozoite rate [11, 12] or for mapping schistosomiasis [13], with inference made using the MCMC approach. Suitable mapping is more likely to be obtained with the selection of a better model, which will help in evaluating risk areas. The spatiotemporal ZIP regression model with the Bayesian approach [14–16] was assembled using WinBUGS software, version 1.4.3 (MRC Biostatistics Unit, Cambridge, UK). The monthly numbers of reported cases of P. vivax, P. falciparum, and mixed malaria infection (January to December 2016) in each village and village tract of Tha-Song-Yang and Hlaingbwe were analysed. An assumption was set such that the case counts are independently distributed Poisson variates. With predictors in the model (Xs) of age, gender and the interaction between age and gender for each case, the Poisson regression model was:
$$ Y_{ij} \sim \text{Poisson}(\mu_{ij}) $$
$$ \log(\mu_{ij}) = \log(E_{ij}) + \log(\theta_{ij}), $$
$$ \log(\mu_{ij}) = \log(E_{ij}) + \alpha + \beta_{1} X_{ijAge} + \beta_{2} X_{ijSex} + \beta_{3} X_{ijAge*Sex} + \psi_{i} + \varphi_{j} + \delta_{ij}. $$
To estimate counts over regions and time periods, a form of indirect standardization, the standardized incidence rate (SIR), was used; the expected rates were computed as \( E_{ij} = n_{ij} \left( \sum_{i} \sum_{j} y_{ij} / \sum_{i} \sum_{j} n_{ij} \right) \), where yij is the disease count and nij is the population in the i–jth space–time unit. Although other, more complex standardizations, such as stratification of the population, could be pursued, such population characteristics were not available in the study databases.
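A minimal numerical sketch of this indirect standardization is given below; the counts and populations are invented for illustration and are not the study data.

```python
import numpy as np

# Illustrative data: 4 areal units (rows) x 3 months (columns).
y = np.array([[0, 2, 5],
              [1, 0, 0],
              [3, 4, 6],
              [0, 0, 1]], dtype=float)           # observed case counts y_ij
n = np.array([[1200, 1200, 1200],
              [3500, 3500, 3500],
              [ 800,  800,  800],
              [2600, 2600, 2600]], dtype=float)  # population n_ij (static over months)

overall_rate = y.sum() / n.sum()                 # sum_ij y_ij / sum_ij n_ij
E = n * overall_rate                             # expected counts E_ij (population offset)
SIR = np.divide(y, E, out=np.zeros_like(y), where=E > 0)  # standardized incidence ratio
print(np.round(SIR, 2))
```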
This calculation was thus used to estimate the expected rates, which also represent the population offset for each space–time unit. For the excess zero counts, a spatiotemporal ZIP mixture model can be defined as:
$$ P(Y_{ij} = y_{ij}) = \begin{cases} \omega_{ij} + (1 - \omega_{ij}) e^{-\mu_{ij}}, & y_{ij} = 0 \\ \dfrac{(1 - \omega_{ij}) e^{-\mu_{ij}} \mu_{ij}^{y_{ij}}}{y_{ij}!}, & y_{ij} > 0; \end{cases} $$
$$ \log(\mu_{ij}) = \log(E_{ij}) + \log(\theta_{ij}) $$
$$ \log(\mu_{ij}) = \log(E_{ij}) + \alpha + \beta_{1} X_{ijAge} + \beta_{2} X_{ijSex} + \beta_{3} X_{ijAge*Sex} + \psi_{i} + \varphi_{j} + \delta_{ij}, $$
where ωij is the probability of a Bernoulli zero in village or village tract i in month j and 1 − ωij is the probability of a Poisson count in village or village tract i in month j, either zero or non-zero. A beta distribution was specified for the probability of a Bernoulli zero and a Poisson distribution for the Poisson count. In the model, Yij is the observed number of cases in village or village tract i in month j, and Eij is the expected monthly number of cases in village or village tract i, which does not change by month because the population is considered static and acts as an offset. θij is the relative risk, the parameter α is the intercept, and β1, β2, β3 are the coefficients for the covariates XijAge, XijSex, and XijAge*Sex; ψi is the spatial variability and φj is the temporal effect for each month. The spatiotemporal component δij is the space–time interaction effect, that is, it captures space–time clusters of risk. The shared interaction term δij is given an exchangeable hierarchical structure, i.e., δij ∼ N(0, τδ−1), with a constant variance. The temporal effect φj is driven by a first-order random walk process, RW(1), in which each month except the first depends on the previous month. The spatial, temporal, and space–time random effects have a uniform prior distribution for their precisions. The spatial variability has two components: the unstructured random effect vi with a mean of zero and variance τν−1, and the spatially structured random effect ui with a mean of zero and variance τμ−1. The spatially structured random effect is specified by a conditional autoregressive prior structure. Spatial relationships between villages or village tracts were determined using an adjacency weights matrix: if two villages or village tracts share a border, a weight of 1 is assigned, whereas if they do not, the weight is 0. A normal prior distribution is specified for the coefficients, whereas a flat prior distribution is specified for the intercept. Uniform prior distributions for the precisions (inverses of the variances) are specified for the unstructured and spatially structured random effects. Markov chain Monte Carlo (MCMC) simulation techniques [17, 18] were used to estimate the model parameters, and two chains with a burn-in of 20,000 iterations were run. Thinning was used to lessen the autocorrelation of the main parameters.
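To make the mixture explicit, the sketch below evaluates the zero-inflated Poisson probability mass function defined above for illustrative values of ω and μ; the actual inference in the study was carried out with MCMC in WinBUGS, not with this code.

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(y, omega, mu):
    """Zero-inflated Poisson pmf: P(Y=0) = omega + (1-omega)*exp(-mu);
    P(Y=y) = (1-omega)*Poisson(y; mu) for y > 0."""
    y = np.asarray(y)
    poisson_part = (1.0 - omega) * poisson.pmf(y, mu)
    return np.where(y == 0, omega + (1.0 - omega) * np.exp(-mu), poisson_part)

# Illustrative parameters: 40% structural zeros, mean of 1.8 cases per unit-month.
omega, mu = 0.4, 1.8
for k in range(6):
    print(k, round(float(zip_pmf(k, omega, mu)), 3))

# Random draws from the same mixture, to see the excess zeros directly.
rng = np.random.default_rng(1)
draws = np.where(rng.random(10_000) < omega, 0, rng.poisson(mu, 10_000))
print("simulated proportion of zeros:", (draws == 0).mean())
```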
The deviance information criterion (DIC) [19] as well as the DICr [20], which is more appropriate for the mixture model, were used for model evaluation in this study. An exceedance probability was used to examine localized behaviour of a model; this is one of the main tools for determining unusual elevations of disease. The exceedance probability is usually calculated from the posterior sample values and is defined as \( P(\theta_{i} > c) = \frac{1}{G} \sum_{g=1}^{G} I(\theta_{i}^{(g)} > c) \), where G is the sampler sample size and I(·) is the indicator function. There are two components in the exceedance probability to be assessed. The first one is the cut-off point c for the relative risk θ, and it can be 1, 2 or 3, according to the extent of extreme risk. The second is the exceedance probability threshold used to flag an unusual risk area; commonly selected thresholds are 0.95, 0.975, and 0.99 for P(θi > c). The levels of extreme risk depend on the values of c [21]. Temporal pattern of malaria cases In 2016, there were a total of 266 cases (incidence: 9.99 per 10,000) in Hlaingbwe Township and 561 cases (incidence: 66.72 per 10,000) in Tha-Song-Yang District. Both Hlaingbwe Township and Tha-Song-Yang District showed a similar seasonal pattern, with two major peaks during May–June–July and December–January–February. As shown in Fig. 2, the highest number of malaria cases in the border township-district was found during the rainy season from May to July, and another high number of cases was seen in January, in comparison with other months. When classified by gender, the overall ratio of male:female cases in Hlaingbwe Township was 163:103 (incidence per 10,000, ratio: 12.52:7.57); in Tha-Song-Yang, it was 342:219 (incidence ratio: 78.64:53.96). This demonstrated that approximately two-thirds of the total cases involved males in both Tha-Song-Yang and Hlaingbwe. The pattern of malaria cases reported on a monthly basis was similar for both genders, as shown in Fig. 2. Fig. 2: Malaria cases and incidences in Tha-Song-Yang and Hlaingbwe, 2016. When classified by age group, the ratio of malaria cases involving individuals aged ≥ 15 years versus < 15 years in Hlaingbwe Township was 154:112 and that in Tha-Song-Yang was 302:259. The result showed that among male cases, there were more cases among adults than younger males; 65.64% vs. 34.36% for Hlaingbwe Township, and 58.77% vs. 41.23% in Tha-Song-Yang District (Table 1). In contrast, among female cases, there were more cases among younger females than adult females; 54.37% vs. 45.63% in Hlaingbwe Township and 53.88% vs. 46.12% in Tha-Song-Yang District. Interestingly, more malaria infections occurred in the age group of ≥ 15 years in Hlaingbwe Township. It should be noted, however, that the ratios of malaria cases by age group in Tha-Song-Yang District were not consistent across the 12 months; there were more cases among individuals aged ≥ 15 years during the peak months (January, June, July) but more cases among those aged < 15 years in other months of the year, as shown in Fig. 3a. For malaria cases in Tha-Song-Yang District, there were more non-Thai people with malaria than cases among Thais across all months of the year; the cases in Hlaingbwe Township were all reported to be Myanmar nationals, as shown in Fig. 3b. Table 1. Malaria cases classified by gender and age groups in Tha-Song-Yang and Hlaingbwe, 2016. Fig. 3: Malaria cases in Tha-Song-Yang and Hlaingbwe in 2016, classified by different demographic characteristics. Spatial distribution of malaria cases and incidences As shown in Fig.
4, mapping of malaria cases and incidences in Hlaingbwe Township was presented using village-tract shapes within the township; those in Tha-Song-Yang District were plotted using village centre points (as there were no shape files for the villages). Mapping of the total cases over the year (Fig. 4a) revealed that cases were scattered over more than half of the area (50 of 66 villages) in Tha-Song-Yang District and over approximately half of the area (30 of 75 village tracts) in Hlaingbwe Township. Mapping of the incidences over the year (Fig. 4b) showed that villages and village tracts with high numbers of cases were most likely to be those with high incidence rates per 10,000 population. A higher number of malaria cases and incidences could be seen in the inner and northern parts of Hlaingbwe Township, except Tar Le, which is situated along the Thai–Myanmar border. The village tracts in the inner and upper side of Hlaingbwe Township with a high number of cases (> 20 cases) included: Yin Baing, Me Tha Mu, Ka Mawt Le (Ma Ae) (Ah Lel) and Tar Le; those with high incidence rates (> 10 per 10,000 population) also included Yin Baing, Me Tha Mu, Ka Mawt Le (Ma Ae) (Ah Lel), Tar Le, Ka Mawt Le (Kyaung) and Me LaYaw. In the Tha-Song-Yang region, a higher number of malaria cases (> 20 cases) occurred in Ban Mo Ku Tu, Ban Mae Tun, Ban Tha-Song-Yang, Ban Suan Oi, Ban Mae Chawang, Ban Mae Salit Luang, and Ban Mae La Thai. The incidence rates were much higher in Tha-Song-Yang District compared to those in Hlaingbwe Township. High incidence rates (> 10 per 10,000 population) were shown in about half of the total number of villages; similar to the villages with a high number of cases, those with a very high incidence rate (> 135 per 10,000 population) included Ban Mo Ku Tu, Ban Mae Tun, Ban Suan Oi, Ban Mae Chawang, Ban Lam Rong, and Ban Mae La Thai. All of these are situated both in the interior and along the Thai–Myanmar border (Fig. 4). Fig. 4: Total malaria cases and incidences in Tha-Song-Yang and Hlaingbwe regions during 2016. (a) Total malaria cases. (b) Incidence rate per 10,000 population. Bayesian spatiotemporal models Figures 4, 5, 6 and 7 show that several study areas had no cases in some months during 2016, which indicates a potential issue of excess zeros. The zero-inflated Poisson (ZIP) regression model was then fitted to link the count data of monthly malaria cases in different villages (Tha-Song-Yang) and village tracts (Hlaingbwe) with 3 observed variables (covariates): gender (male vs female cases), age group (≥ 15 years vs < 15 years), and the interaction between age and gender. Table 2 describes the spatiotemporal regression coefficients of the covariates as predictors of monthly malaria cases in Tha-Song-Yang District and Hlaingbwe Township. The 3 covariates were statistically significant in the model. In the Tha-Song-Yang and Hlaingbwe models, the incidence rate ratios of malaria infection among those aged ≥ 15 years compared with those aged < 15 years were 1.87 (95% CI 1.31, 2.48) and 2.82 (95% CI 2.03, 3.62), respectively. Regarding gender, in the Tha-Song-Yang and Hlaingbwe models, the incidence rate ratios of malaria infection among male compared with female cases were 2.27 (95% CI 1.71, 2.86) and 2.87 (95% CI 2.07, 3.68), respectively. The interaction between age and gender showed negative associations with malaria cases: the estimated interaction term was − 2.68 (95% CI − 3.50, − 1.88) for the Tha-Song-Yang model and − 3.45 (95% CI − 4.58, − 2.35) for the Hlaingbwe model.
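Before turning to the exceedance-probability maps presented next, the sketch below illustrates how such probabilities are typically computed from posterior MCMC samples of the relative risk; the sample matrix is simulated and purely illustrative, not the posterior from the study model.

```python
import numpy as np

# Illustrative posterior: G = 2000 retained MCMC draws of the relative risk theta_i
# for 5 areal units (columns). A real analysis would use the thinned WinBUGS output.
rng = np.random.default_rng(2)
theta_samples = rng.lognormal(mean=0.0, sigma=0.25, size=(2000, 5)) \
    * np.array([1.0, 1.3, 0.8, 2.2, 1.1])        # assumed unit-level risk levels

c = 1.0                                           # cut-off for "elevated" relative risk
exceedance = (theta_samples > c).mean(axis=0)     # P(theta_i > c) estimated per unit
hotspot = exceedance > 0.95                       # flag units above the 0.95 threshold

for i, (p, h) in enumerate(zip(exceedance, hotspot)):
    print(f"unit {i}: P(theta > {c}) = {p:.3f}  hotspot = {bool(h)}")
```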
Fig. 5 Exceedance probability of relative risk in Tha-Song-Yang and Hlaingbwe regions from January to April, 2016
Fig. 6 Exceedance probability of relative risk in Tha-Song-Yang and Hlaingbwe regions from May to August, 2016
Fig. 7 Exceedance probability of relative risk in Tha-Song-Yang and Hlaingbwe regions from September to December, 2016
Table 2 Regression coefficients with 95% credible interval and deviance information criterion (DIC) from zero-inflated Poisson models for malaria cases in 2016
Exceedance probability map
Based on the ZIP model, the exceedance probability of relative risk was estimated with a cut-off point of 1. As shown in Figs. 5, 6 and 7, maps of hotspot clusters were generated for each month over the year. For Tha-Song-Yang District, disease-clustering areas were indicated in villages in the upper (Tha-Song-Yang sub-district), middle (Mae Song and Mae U Su sub-districts), and lower (Mae Tan and Mae La sub-districts) parts of the area along the Thai–Myanmar border, particularly during the months of highest malaria infection (June and July), and only in the upper and lower parts during the other months. Some apparent hotspots appeared along the border and some were scattered over other areas, including Mae Wa Luang sub-district, which is far from the border area in Tha-Song-Yang District. In Hlaingbwe Township, most of the aggregated risk occurred in village tracts farther from the border, and frequently in the northern part. However, some aggregated risk exceeding higher thresholds existed in areas adjacent to Tha-Song-Yang District during some months. Some apparent hotspots were spread out over interior parts of Hlaingbwe, far from the border area, and only one or two hotspots were situated along the Thai–Myanmar border during some months.
Discussion
This study used data from malaria surveillance systems collected by the healthcare sectors in the adjacent township-district of Myanmar and Thailand, where malaria cases have been consistently reported. In addition to basic spatial and temporal analyses, a Bayesian spatiotemporal ZIP model was developed to determine the probability of aggregated risk in each village tract or village of the two study areas. As reported in many other studies [22–24], there are two peaks of malaria infection in this region, with the higher peak during the rainy months when rainfall is abundant. This pattern can be explained by proliferation of the vector in aquatic habitats and by more work-related activities in the agricultural sector than in other sectors in both study areas. It is clear that there were many more cases in Tha-Song-Yang District than in the adjacent Hlaingbwe Township. However, from the data shown in this study, more than half of the monthly reported malaria cases in Tha-Song-Yang District involved non-Thai individuals. Migration, as well as work-related population movement in this region, has been indicated as an important factor contributing to malaria epidemiology [25–35]. Moreover, displaced minority populations also increase the risk of malaria infection in this region; it has been suggested in the literature that political instability among minority ethnic groups in Myanmar has led to considerable cross-border population movement. Among non-Thais, migrant workers from Myanmar represent the largest population of foreign workers [25, 36].
There are also considerable numbers of displaced people without a nationality and illegal immigrants [37, 38]. A study of mosquito vectors in these study areas found that P. falciparum infections were more concentrated seasonally among the recent migrant population, while P. vivax cases were significantly associated with the dynamics of the local mosquito population and less with migrant status [37]. It would be interesting for a further study to collect detailed migration patterns of infected cases prospectively, rather than using surveillance data, and then to fit models with variations in risk based on species together with other specific host characteristics. Some previous studies in the Greater Mekong Sub-region have indicated that malaria clustered along the international borders is associated with forests and forest edges [2, 25, 39]. Even though the present study did not analyse the association between the number of malaria cases and geographical features, it can be seen from the map of malaria case/incidence distribution that the border areas with high mountains and heavily forested terrain, in some upper and middle areas along the border, tended to have fewer malaria cases. On the other hand, there appeared to be more cases/incidences in village tracts or villages along the border that are situated on a flat plain or with a shallow river as a natural demarcation line. Particularly along the lower plain agricultural zone, large mobile cross-border population movement could occur in both directions between the two countries. With year-round movement as well as river crossings, migrants from a malaria-endemic zone can bring the parasite to a new, non-malaria zone [40]. The cross-border workforce that spends time on either side in the malaria-endemic area can be infected with the parasite and may then carry it back and forth across the border [2]. Studies on the spread of drug-resistant strains [41–44] have noted that migrants can transport drug-resistant strains from visited areas to new locations, regardless of whether malaria is present in these new sites. This means that individuals who enter malaria regions can influence epidemiological dynamics [45, 46]. This situation along the border represents a difficult challenge for the management of imported malaria on both sides. Similar to other studies on the epidemiological dynamics of risk factors for malaria infection [3, 23, 47], there were more malaria cases among males in both study sites. In both Tha-Song-Yang District and Hlaingbwe Township, the number of cases and the incidence of malaria infection among males appeared to be higher than among females throughout the year. The observed differences in infection rates by gender may indicate behavioural differences and occupational orientations. The main reason is clearly related to the nature of men's work in high-risk areas with high human–vector contact, such as on farms, in orchards, or in forests. However, it should be noted that the number of cases of malaria among women was not significantly lower than that among men. Target groups should therefore include both genders when planning strategies for malaria prevention and control. Regarding malaria infection among different age groups in the study areas, it is interesting to see that there were more cases among those aged ≥ 15 years in Hlaingbwe Township, but more cases among those aged < 15 years in Tha-Song-Yang District.
It should be noted, however, that this observation is based on case counts and not incidence (as there were no denominators for age groups). Malaria infection in the higher age group (≥ 15 years) might be associated with individuals who engaged in more activities in high-risk areas and used inadequate protective measures; it could also be related to lower natural immunity with increased age [48]. A cross-sectional survey of sub-clinical malaria infections in Southeast Asia, including similar Thai–Myanmar border areas, also reported the age distributions of their study participants; the median age was 21 years, with 37% under 15 years of age [49]. Another report of malaria case finding among mobile populations and migrant workers in Myanmar indicated that migrant workers from rural areas were likely to migrate to other rural areas, and that the majority of migrants were men (> 60%), with about 80% between the ages of 11 and 30 years [50]. However, the higher number of malaria infections among those aged < 15 years in Tha-Song-Yang District might be owing to the fact that children often accompany their parents to their workplaces or to forests where vectors thrive, or spend their time in or near the home or at school in areas with high vector populations [51]. A study on the ecology and epidemiology of malaria along the Thai–Myanmar border suggested that the presence and distribution of mosquito vectors was directly related to the availability of hosts and the contact patterns between vectors and hosts; the mosquito vectors abundant in the region bit as frequently indoors as outdoors in open houses in forests [52]. It was also noted that their feeding patterns (early versus late) were sometimes contradictory, even for the same site and species across different years or locations. A few studies of the entomological determinants of malaria transmission in the same areas as this study along the Thai–Myanmar border also reported that Anopheles mosquitoes exhibited an outdoor and early biting pattern, with active timing between 06:00 and 07:00 [53, 54]. Several studies have reported high malaria morbidity among children [55, 56]; interestingly, one study in the Laiza refugee camp along the Myanmar–China border [57] speculated that daytime malaria transmission might occur near the primary school attended by younger children. It would be interesting to conduct an entomological investigation to explore this hypothesis in Tha-Song-Yang District. However, there is a possibility that this high morbidity is related to unequal health service utilization or variation in behavioural exposure to disease. Regarding the ZIP model, all of the covariates, including age, gender and the interaction of age and gender, showed statistically significant associations with the number of malaria cases. This confirmed that, in general, malaria cases occur more often among males and those aged ≥ 15 years in the study area. It is interesting that the interaction of the covariates age and gender had a negative relationship with malaria infection cases. The interaction can be explained by the pattern shown in Table 1: there were more cases among adult males than younger males, in contrast to more cases among younger females than adult females. Similar to other studies [49, 58], there were differences between children and adults in malaria cases among both females and males.
This finding of more infections among adult males was consistent with those of other studies [33, 59, 60], and might be associated with their work-related activities in high-risk areas, as discussed above. With respect to the high number of malaria cases among younger females, it could be that some transmission was occurring in or close to schools or areas where children spend most of their time, as previously discussed [51], or it could be speculated that when girls become adults, they tend to go outside less and are more likely to work indoors or in less risky settings. The contrasting numbers of malaria cases among different age groups and genders in Tha-Song-Yang District and Hlaingbwe Township require further investigation. Based on the ZIP model, exceedance probabilities of relative risk, with a threshold of 0.9, were used to detect disease clustering by highlighting localized concentrations of risk, which differ from isolated hotspots [21]. Mapping to detect hotspot clusters using exceedance probability in this study revealed that hotspots in risky areas could vary across spatial and temporal parameters. Some village tracts/villages had a consistently high probability of malaria infection, whereas others had a consistently moderate or low probability. The probability of becoming a hotspot, however, varied among certain village tracts/villages in different months, with some having nearly no cases at all times. It is also interesting to note that the probabilities of hotspot development varied over the year in the township-district adjacent to the border. This model can be helpful in characterizing the spatiotemporal pattern of malaria and in determining linkages between spatiotemporal patterns and the driving factors of malaria transmission risk. Appropriate interventions and resource allocation can then be managed in the respective areas if the government, as well as malaria control and prevention partners, have better knowledge of the spatiotemporal clustering of malaria. The models developed in this study were based only on the association between human population density and malaria cases in space and time, with the parameters age and gender. Several other important factors that affect malaria incidence could be included to obtain a better model. It should be noted that the data used in this study were secondary data from government surveillance systems. These data may have limitations in terms of data quality, such as completeness, underreporting, and validity. Some data were unavailable (e.g., the number of non-Thai or migrant workers, and age groups at the township or district level); the analyses therefore had to be based on the number of cases rather than the incidence.
Conclusions
In this study, a Bayesian spatial and temporal model was applied to assess malaria infections in the township-district adjacent to the Myanmar–Thailand border. The findings of this study confirmed commonly known information about gender and age group as risk factors for malaria infection. However, contrasting proportions of malaria cases among the different gender and age groups were noted in this study. The analytic model developed in this study could be used to assess the probability of hotspot areas, which could be beneficial for establishing priority zones for preventive actions, with appropriate timetables, in high-risk border areas.
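As a complement to the hotspot-probability assessment just mentioned, the sketch below shows one way area-level exceedance probabilities and hotspot flags could be computed once posterior samples of the relative risk are available. The relative-risk cut-off of 1 and the probability threshold of 0.9 follow the text; the posterior draws themselves are simulated placeholders, not output of the study's model.

```python
# Sketch of hotspot flagging from posterior samples of area-specific relative risk.
# In the study the samples would come from the Bayesian spatiotemporal ZIP model;
# here they are simulated placeholders for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_areas, n_draws = 66, 4000                       # e.g. villages x MCMC draws
log_rr_samples = rng.normal(loc=rng.normal(0, 0.5, size=(n_areas, 1)),
                            scale=0.3, size=(n_areas, n_draws))
rr_samples = np.exp(log_rr_samples)               # posterior draws of relative risk

rr_cutoff = 1.0                                   # exceedance cut-off for RR
prob_threshold = 0.9                              # probability threshold for a hotspot
exceedance_prob = (rr_samples > rr_cutoff).mean(axis=1)     # Pr(RR_i > 1 | data)
hotspots = np.flatnonzero(exceedance_prob > prob_threshold)

print(f"{len(hotspots)} of {n_areas} areas flagged as hotspots")
```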
To obtain a more inclusive view of malaria risk, a future advanced model is planned that will include infectivity, densities and distribution of the vector, identifiable breeding sites of the vector, as well as climatic and environmental factors, as underlying causes of increased risk in the identified areas.
Authors' contributions
AMT and JK conceived the study. AMT compiled the data; TMH, AT, and TMS assisted in data collection in Hlaingbwe Township and collection of other data in Myanmar; JS assisted in data collection in Tha-Song-Yang District and collection of other data in Thailand. CR and AMT developed the programme for the analytic models, and analysed and interpreted the models. SL assisted in providing geographical information and GIS analysis. AMT and JK drafted the manuscript. All authors read and approved the final manuscript.
Acknowledgements
The authors gratefully thank the Mahidol Vivax Research Unit of the Faculty of Tropical Medicine, Mahidol University for supporting this study. In addition, the authors would like to sincerely thank all the individuals who assisted in the data collection and analyses in this study. We also thank Analisa Avila, ELS, of Edanz Group (http://www.edanzediting.com/ac) for editing a draft of this manuscript.
Ethics approval
The data were collected with the permission of authorized persons from both the Myanmar and Thailand sides of the border. This study was approved by the Institutional Review Board of the Faculty of Tropical Medicine, Mahidol University (MUTM 2017-064-01) and the Institutional Review Board of the Defence Services Medical Research Centre (DSMRC), Myanmar (IRB/2017/127).
Mahidol Vivax Research Unit of the Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand.
Additional files
12936_2018_2574_MOESM1_ESM.docx Additional file 1: Table S1. Name of villages, Tha-Song-Yang District.
12936_2018_2574_MOESM2_ESM.docx Additional file 2: Table S2. Name of village tracts, Hlaingbwe Township.
Author affiliations
Department of Tropical Hygiene, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand
Mahidol Vivax Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand
Geo-Informatics and Space Technology Development Agency, Bangkok, Thailand
National Malaria Control Program, Nay Pyi Taw, Myanmar
Defence Services Medical Research Centre, Nay Pyi Taw, Myanmar
National Malaria Control Program, Hpa-An, Myanmar
References
Hu Y, Zhou G, Ruan Y, Lee MC, Xu X, Deng S, et al. Seasonal dynamics and microgeographical spatial heterogeneity of malaria along the China–Myanmar border. Acta Trop. 2016;157:12–9.
Cui L, Yan G, Sattabongkot J, Cao Y, Chen B, Chen X, et al. Malaria in the Greater Mekong Subregion: heterogeneity and complexity. Acta Trop. 2012;121:227–39.
Zayeri F, Salehi M, Pirhosseini H. Geographical mapping and Bayesian spatial modeling of malaria incidence in Sistan and Baluchistan province, Iran. Asian Pac J Trop Med. 2011;4:985–92.
Ministry of Health, Nay Pyi Taw. Health in Myanmar 2014. The Republic of the Union of Myanmar; 2014.
USAID. President's Malaria Initiative Greater Mekong Sub-Region-Malaria Operational Plan FY 2017. 2017. https://www.pmi.gov/resource-library/mops/fy-2017. Accessed 4 Nov 2018.
Zacarias OP, Majlender P. Comparison of infant malaria incidence in districts of Maputo province, Mozambique. Malar J. 2011;10:93.
Reid HL, Haque U, Roy S, Islam N, Clements AC. Characterizing the spatial and temporal variation of malaria incidence in Bangladesh, 2007. Malar J. 2012;11:170.
Villalta D, Guenni L, Rubio-Palis Y, Arbeláez RR. Bayesian space–time modeling of malaria incidence in Sucre state, Venezuela. AStA Adv Stat Anal. 2013;97:151–71.
MICT. Statistical Yearbook Thailand 2016. National Statistical Office of Thailand; 2017.
MIP. Myanmar Population and Housing Census 2014, The Union Report. Nay Pyi Taw: Department of Population; 2015.
Amek N, Bayoh N, Hamel M, Lindblade KA, Gimnig J, Laserson KF, et al. Spatio-temporal modeling of sparse geostatistical malaria sporozoite rate data using a zero-inflated binomial model. Spat Spatiotemporal Epidemiol. 2011;2:283–90.
Nobre AA, Schmidt AM, Lopes HF. Spatio-temporal models for mapping the incidence of malaria in Pará. Environmetrics. 2005;16:291–304.
Vounatsou P, Raso G, Tanner M, N'Goran EK, Utzinger J. Bayesian geostatistical modelling for mapping schistosomiasis transmission. Parasitology. 2009;136:1695–705.
Berk R, MacDonald JM. Overdispersion and Poisson regression. J Quant Criminol. 2008;24:269–84.
Lambert D. Zero-inflated Poisson regression, with an application to defects in manufacturing. Technometrics. 1992;34:1–14.
McCulloch CE, Searle SR. Generalized, linear, and mixed models. New York: Wiley; 2004.
Gelman A, Meng XL, Brooks S, Jones GL. Handbook of Markov Chain Monte Carlo. Boca Raton: CRC Press, Taylor & Francis Group; 2011.
Robert C, Casella G. Monte Carlo statistical methods. Berlin: Springer Science & Business Media; 2013.
Spiegelhalter DJ, Best NG, Carlin BP, Van Der Linde A. Bayesian measures of model complexity and fit. J R Stat Soc B. 2002;64:583–639.
Gelman A, Carlin JB, Stern HS, Rubin DB. Bayesian data analysis. 2nd ed. Boca Raton: Chapman & Hall/CRC; 2004.
Lawson AB, Rotejanaprasert C. Childhood brain cancer in Florida: a Bayesian clustering approach. Stat Public Policy. 2014;1:99–107.
Wangdi K, Kaewkungwal J, Singhasivanon P, Silawan T, Lawpoolsri S, White NJ. Spatio-temporal patterns of malaria infection in Bhutan: a country embarking on malaria elimination. Malar J. 2011;10:89.
Zhao Y, Zeng J, Zhao Y, Liu Q, He Y, Zhang J, et al. Risk factors for asymptomatic malaria infections from seasonal cross-sectional surveys along the China–Myanmar border. Malar J. 2018;17:247.
Chowdhury FR, Ibrahim QSU, Bari MS, Alam MJ, Dunachie SJ, Rodriguez-Morales AJ, et al. The association between temperature, rainfall and humidity with common climate-sensitive infectious diseases in Bangladesh. PLoS ONE. 2018;13:e0199579.
Bhumiratana A, Intarapuk A, Sorosjinda-Nunthawarasilp P, Maneekan P, Koyadun S. Border malaria associated with multidrug resistance on Thailand–Myanmar and Thailand–Cambodia borders: transmission dynamic, vulnerability, and surveillance. Biomed Res Int. 2013;2013:363417.
Delacollette C, D'Souza C, Christophel E, Thimasarn K, Abdur R, Bell D, et al. Malaria trends and challenges in the Greater Mekong Subregion. Southeast Asian J Trop Med Public Health. 2009;40:674.
Kitvatanachai S, Rhongbutsri P. Malaria in asymptomatic migrant workers and symptomatic patients in Thamaka District, Kanchanaburi Province, Thailand. Asian Pacific J Trop Dis. 2012;2:S374–7.
Markwardt R, Sorosjinda-Nunthawarasilp P, Saisang V. Human activities contributing to a malaria outbreak in Thong Pha Phum District, Kanchanaburi, Thailand. Southeast Asian J Trop Med Public Health. 2008;39:10.
Singhanetra-Renard A. Malaria and mobility in Thailand. Soc Sci Med. 1993;37:1147–54.
Tatem AJ, Smith DL. International population movements and regional Plasmodium falciparum malaria elimination strategies. Proc Natl Acad Sci USA. 2010;107:12222–7.
Stoddard ST, Morrison AC, Vazquez-Prokopec GM, Paz Soldan V, Kochel TJ, Kitron U, et al. The role of human movement in the transmission of vector-borne pathogens. PLoS Negl Trop Dis. 2009;3:e481.
Martens P, Hall L. Malaria on the move: human population movement and malaria transmission. Emerg Infect Dis. 2000;6:103–9.
Erhart A, Thang ND, Van Ky P, Tinh TT, Van Overmeir C, Speybroeck N, et al. Epidemiology of forest malaria in central Vietnam: a large scale cross-sectional survey. Malar J. 2005;4:58.
WHO. Malaria in the Greater Mekong Subregion: regional and country profiles. New Delhi, India; 2010. http://www.searo.who.int/myanmar/documents/malariainthegreatermekongsubregion.pdf. Accessed 4 Nov 2018.
Ward C, Motus N, Mosca D. A global report on population mobility and malaria: moving towards elimination with migration in mind. Geneva; 2013. https://www.iom.int/files/live/sites/iom/files/What-We-Do/docs/REPORT-14Aug2013-v3-FINAL-IOM-Global-Report-Population-Mobility-and-Malaria.pdf. Accessed 4 Nov 2018.
Richards AK, Banek K, Mullany LC, Lee CI, Smith L, Oo EK, et al. Cross-border malaria control for internally displaced persons: observational results from a pilot programme in eastern Burma/Myanmar. Trop Med Int Health. 2009;14:512–21.
Sriwichai P, Karl S, Samung Y, Kiattibutr K, Sirichaisinthop J, Mueller I, et al. Imported Plasmodium falciparum and locally transmitted Plasmodium vivax: cross-border malaria transmission scenario in northwestern Thailand. Malar J. 2017;16:258.
Haddawy P, Hasan AI, Kasantikul R, Lawpoolsri S, Sa-angchai P, Kaewkungwal J, et al. Spatiotemporal Bayesian networks for malaria prediction. Artif Intell Med. 2017;84:127–38.
WHO. Malaria in the Greater Mekong Subregion: regional and country profiles. New Delhi: World Health Organization, Regional Office for South-East Asia; 2010. http://www.wpro.who.int/mvp/documents/MAL-260/en/. Accessed 4 Nov 2018.
Le Menach A, Tatem AJ, Cohen JM, Hay SI, Randell H, Patil AP, et al. Travel risk, malaria importation and malaria transmission in Zanzibar. Sci Rep. 2011;1:93.
Wongsrichanalai C, Sirichaisinthop J, Karwacki JJ, Congpuong K, Miller RS, Pang L, et al. Drug resistant malaria on the Thai–Myanmar and Thai–Cambodian borders. Southeast Asian J Trop Med Public Health. 2001;32:41–9.
Klein E. Antimalarial drug resistance: a review of the biology and strategies to delay emergence and spread. Int J Antimicrob Agents. 2013;41:311–7.
Pinichpongse S, Doberstyn E, Cullen J, Yisunsri L, Thongsombun Y, Thimasarn K. An evaluation of five regimens for the outpatient therapy of falciparum malaria in Thailand 1980–81. Bull World Health Organ. 1982;60:907–12.
Ariey F, Duchemin JB, Robert V. Metapopulation concepts applied to falciparum malaria and their impacts on the emergence and spread of chloroquine resistance. Infect Genet Evol. 2003;2:185–92.
Baird JK, Basri H, Weina P, MaGuire J, Barcus M, Picarema H, et al. Adult Javanese migrants to Indonesian Papua at high risk of severe disease caused by malaria. Epidemiol Infect. 2003;131:791–7.
Lalloo DG, Trevett AJ, Paul M, Korinhona A, Laurenson IF, Mapao J, et al. Severe and complicated falciparum malaria in Melanesian adults in Papua New Guinea. Am J Trop Med Hyg. 1996;55:119–24.
Wang RB, Zhang J, Zhang QF. Malaria baseline survey in four special regions of northern Myanmar near China: a cross-sectional study. Malar J. 2014;13:302.
Pradhan N, Pradhan P, Pati S, Hazra RK. Epidemiological profile of malaria among various socio-demographic groups in a western district of Odisha, India. Infez Med. 2018;26:37–45.
Imwong M, Nguyen TN, Tripura R, Peto TJ, Lee SJ, Lwin KM, et al. The epidemiology of subclinical malaria infections in South-East Asia: findings from cross-sectional surveys in Thailand–Myanmar border areas, Cambodia, and Vietnam. Malar J. 2015;14:381.
Kheang ST, Lin MA, Lwin S, Naing YH, Yarzar P, Kak N, et al. Malaria case detection among mobile populations and migrant workers in Myanmar: comparison of 3 service delivery approaches. Glob Health Sci Pract. 2018;6:381–6.
Parker DM. Border demography and border malaria among Karen populations along the Thailand–Myanmar border. A dissertation in Anthropology and Demography. The Graduate School College of the Liberal Arts, The Pennsylvania State University; 2014.
Parker DM, Carrara VI, Pukrittayakamee S, McGready R, Nosten FH. Malaria ecology along the Thailand–Myanmar border. Malar J. 2015;14:388.
Chaumeau V, Fustec B, Hsel SN, Montazaeu C, Nyo SN, Metaane S, et al. Entomological determinants of malaria transmission in Kayin state, Eastern Myanmar: a 24-month longitudinal study in four villages. Wellcome Open Res. 2018;3:109.
NetWorks Project. Vector control assessment report in the GMS. Review of malaria prevention strategies and tools: stakeholder, target group segmentation, behavioral issues, private sector development options. 2012.
Svenson JE, MacLean JD, Gyorkos TW, Keystone J. Imported malaria: clinical presentation and examination of symptomatic travelers. Arch Intern Med. 1995;155:861–8.
Munyekenye OG, Githeko AK, Zhou G, Mushinzimana E, Minakawa N, Yan G. Plasmodium falciparum spatial analysis, western Kenya highlands. Emerg Infect Dis. 2005;11:1571–7.
Huang F, Takala-Harrison S, Liu H, Xu JW, Yang HL, Adams M, et al. Prevalence of clinical and subclinical Plasmodium falciparum and Plasmodium vivax malaria in two remote rural communities on the Myanmar–China border. Am J Trop Med Hyg. 2017;97:1524–31.
Shannon KL, Khan WA, Sack DA, Alam MS, Ahmed S, Prue CS, et al. Subclinical Plasmodium falciparum infections act as year-round reservoir for malaria in the hypoendemic Chittagong Hill districts of Bangladesh. Int J Infect Dis. 2016;49:161–9.
Lin H, Lu L, Tian L, Zhou S, Wu H, Bi Y, et al. Spatial and temporal distribution of falciparum malaria in China. Malar J. 2009;8:130.
Incardona S, Vong S, Chiv L, Lim P, Nhem S, Sem R, et al. Large-scale malaria survey in Cambodia: novel insights on species distribution and risk factors. Malar J. 2007;6:37.
Research article | Published: 13 June 2019
Identification of critical connectors in the directed reaction-centric graphs of microbial metabolic networks
Eun-Youn Kim, Daniel Ashlock & Sung Ho Yoon (ORCID: orcid.org/0000-0003-0171-944X)
Abstract
Detection of central nodes in asymmetrically directed biological networks depends on centrality metrics quantifying individual nodes' importance in a network. In topological analyses of metabolic networks, various centrality metrics have mostly been applied to metabolite-centric graphs. However, centrality metrics, including those that do not depend on high connectivity, are largely unexplored for directed reaction-centric graphs. We applied directed versions of centrality metrics to directed reaction-centric graphs of microbial metabolic networks. To investigate the local role of a node, we developed a novel metric, the cascade number, which considers how many nodes are closed off from information flow when a particular node is removed. High modularity and scale-freeness were found in the directed reaction-centric graphs, and nodes with high betweenness centrality tended to belong to densely connected modules. The cascade number and bridging centrality identified cascade subnetworks controlling local information flow and irreplaceable bridging nodes between functional modules, respectively. Reactions highly ranked by bridging centrality and cascade number tended to be essential, compared with reactions detected by other centrality metrics. We demonstrate that the cascade number and bridging centrality are useful for identifying key reactions controlling local information flow in directed reaction-centric graphs of microbial metabolic networks. Knowledge about local flow connectivity and connections between local modules will contribute to understanding how metabolic pathways are assembled.
Background
Models and methods from graph theory have been developed to characterize structural properties in various kinds of complex networks in social, technological, and biological areas [1, 2]. In the analysis of biological networks, graph theory has been successful in detecting global topological features of biological networks, such as short path lengths, scale-freeness with the appearance of hubs [3], hierarchical modular structures [4], and network motifs [5]. While topological analysis as a whole can give insight into network evolution and cellular robustness [3, 6], investigation of the influences of individual nodes in a biological network has practical applicability, such as the identification of drug targets, design of effective strategies for disease treatment [7], and development of microbial hosts for mass production of various bioproducts [8]. Ranking of a node by its topological features depends on various centrality metrics, each of which identifies central nodes affecting the network architecture from a global or local perspective [1, 9]. For example, degree centrality and the clustering coefficient, which are based on a node's degree, identify nodes of global topological importance, namely hubs and modules, respectively. Examples of centrality metrics based on information flow are betweenness centrality, which is the proportion of shortest paths passing through a node [10], and bridging centrality, which identifies bridging nodes lying between modules [11]. Such global topological analyses have mostly been performed using undirected bionetworks.
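As a quick illustration of the degree- and shortest-path-based metrics mentioned above, the snippet below computes in/out-degree and directed betweenness centrality on a small hypothetical directed graph; the graph and node names are invented for illustration only.

```python
# Quick illustration of degree- and shortest-path-based centralities on a
# small hypothetical directed graph (names and edges are invented).
import networkx as nx

G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"),
                ("D", "E"), ("E", "F"), ("F", "G"), ("F", "H")])

in_degree = dict(G.in_degree())
out_degree = dict(G.out_degree())
betweenness = nx.betweenness_centrality(G)   # respects edge direction for a DiGraph

for node in G:
    print(node, in_degree[node], out_degree[node], round(betweenness[node], 3))
```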
Recent studies have extended several global measures, such as in/out-degree distribution, betweenness, closeness, clustering coefficient, and modularity, for application to directed networks [1, 12, 13]. These measures are strongly correlated with high degrees, focusing on densely connected sub-structures. Although they have uncovered global topological properties and global roles of individual nodes, they are insufficient to explain connections between modules and local connectivity, typically within a few steps of the neighbors surrounding a node, in networks with directed flows. For example, nodes of high degree have global topological importance in a network; however, the fact that they have so many interactions means that they are poor channels for conveying information. A signal that controls a specific cellular process must have some specificity in how its signal is received and interpreted [14, 15]. If systems in several parts of the cell responded to the signal, as they do with high-degree nodes, the node in question would not be a control for the specific process. Such need for specificity of signal effect means that high-degree nodes in the network may be ignored or removed when performing topological analysis to locate nodes that are critical in particular pathways. As the majority of biological networks, such as metabolic, gene regulatory, and signal transduction networks, show the sequential interaction of elements, they are best represented as directed graphs [1]. Unlike in undirected networks, there is a directed information flow, creating an asymmetric influence between the nodes in a directed network. Any directed path in a network represents a sequence of reactions, ordered in pairs where each is a pre-requisite of the next. Information flow arises from these reaction cascades, and thus it can represent the potential for temporal correlation of activity changes in a network. The information flow through a node in a network can be estimated as the number of nodes downstream from it whose behavior will be influenced if that node is removed or disabled. Thus, centrality metrics based on a node's information flow are well suited to reflect the directionality of information flow in real biological networks. Metabolism is the totality of all biochemical reactions that produce building blocks, energy, and redox requirements for cellular functions. Metabolism consists of metabolic pathways, each of which is a directed path from source metabolites to target metabolites mediated by a sequence of biochemical reactions. Recent sequencing technology and databases of metabolic pathways allow the reconstruction of genome-wide metabolic networks in diverse organisms [16, 17]. Databases of metabolic pathways, such as KEGG [18], Reactome [19], MetaCyc, and BioCyc [20], are available, and methods have been developed for the (semi-)automated reconstruction of metabolic networks [21, 22]. The existing availability of databases of metabolic networks has greatly facilitated the computational analysis of metabolic networks. In general, metabolic networks have been represented as metabolite-centric graphs with the metabolites as nodes and reactions as edges [23,24,25]. In a metabolite-centric graph, two metabolites are connected if there is a reaction using one metabolite as a substrate and the other as a product. The other way is a reaction-centric graph, where two reactions are connected by at least one arc representing a substrate or product metabolite.
The practical advantage of the reaction-centric graph is that its topological analysis can yield testable biological insights, such as the identification of essential reactions, which can be experimentally verified by gene deletion studies. Another way to describe metabolic networks is a bipartite graph with two types of nodes representing metabolites and reactions [26]; however, centrality metrics used for the topological analysis of unipartite metabolic networks cannot be directly applied to bipartite metabolic graphs [13]. So far, topological analyses of unipartite metabolic networks using centrality metrics have mostly been performed with metabolite-centric graphs. Only a few studies have attempted to apply centrality metrics to reaction-centric graphs, such as the topological analysis of cancer metabolic networks using degree-based centrality metrics [13]. In particular, to our knowledge, centrality metrics that are not based on high connectivity are unexplored for directed reaction-centric graphs. In this work, we investigated the topological roles of individual reaction nodes in directed reaction-centric graphs using centrality metrics, including those that do not depend on a node's degree. We applied various centrality metrics to the analysis of directed reaction-centric graphs of the metabolic networks of five phylogenetically diverse microorganisms: Escherichia coli (Gammaproteobacteria), Bacillus subtilis (Firmicutes), Geobacter metallireducens (Deltaproteobacteria), Klebsiella pneumoniae (Gammaproteobacteria), and Saccharomyces cerevisiae (Eukaryota). To identify nodes of global topological importance, centrality metrics depending on high connectivity (degree, modularity, clustering coefficient, and betweenness centrality) were applied. To investigate the role of a node more locally, we modified bridging centrality to reflect reaction directionality and developed a novel metric called the cascade number. To link reactions highly ranked by each centrality metric to their biological importance, the proportions of essential reactions predicted by flux balance analysis (FBA) were calculated according to the centrality metrics. These analyses identified topological features of individual nodes in the directed reaction-centric graphs from global and local connectivity perspectives. We begin by explaining the concepts of the centrality metrics using a toy network model. Next, we investigated global features and roles of existing centrality metrics in the five directed reaction-centric graphs, each of which was derived from the metabolic network model of E. coli (iJO1366) [27], B. subtilis (iYO844) [28], G. metallireducens (iAF987) [29], K. pneumoniae (iYL1228) [30], or S. cerevisiae (iMM904) [31] (Table 1). Then, for the five reaction graphs, global and local features of the centrality metrics were assessed, followed by analysis of the cascade number. As the E. coli metabolic network is the most accurate and comprehensive metabolic model developed to date [27, 32], we provide in-depth analyses using the reaction-centric network of E. coli.
Table 1 Metabolic networks and their reaction-centric graphs
Toy example: topological roles of centrality metrics in a directed network
In graph theory, various kinds of centrality metrics have been developed, and each of them expresses an individual node's importance in a network by summarizing relations among the nodes from a different perspective.
The most frequently used centrality metrics are degree, betweenness centrality, and clustering coefficient, and each of them detects a central node with a different character. Bridging centrality combines two measurements, betweenness centrality and the bridging coefficient; therefore, it detects nodes which act as bottlenecks of information flow, as well as bridges (Additional file 1: Figure S1). We explain the properties of the centrality metrics using a synthetic directed network (Fig. 1 and Table 2). Node A has the highest cascade number, with a cascade set of {B,C,D,E}, meaning that the removal of node A closes off the information flow from A to nodes B, C, D, and E. This also implies that the removal of node A would result in the separation of local connectivity if the exemplified network belonged to a larger network. A node with high bridging centrality tends to be in a cascade set; for example, node E, with the highest bridging centrality, belongs to the cascade set of node A. Nodes B and C have zero values of betweenness centrality and bridging centrality, as no shortest path passes through them. This implies that a bridging node plays an important role in connecting information flow; it has to be located between modules. The clustering coefficients of nodes B and C are the highest, as all of their neighbors are still connected after their removal. Node D has the highest betweenness centrality, as there are many shortest paths passing through it. As node D has the highest degree in a module and is connected to a bridge, it has the lowest bridging coefficient, resulting in a moderate value of bridging centrality. Node E has the highest bridging coefficient, as it is located between two neighbors with high degrees. It also has high betweenness centrality, resulting in the highest bridging centrality value. This indicates that the bridging centrality modified for directed network analysis in this study properly reflects the topological location of a bridging node as a connector of information flow.
Fig. 1 Example of a synthetic network
Table 2 Centrality values, cascade numbers, and cascade sets for the network shown in Fig. 1
The toy example demonstrates that both bridging centrality and the cascade number measure a type of influence of a node on the flow of information within a network. Nodes with high bridging centrality are at points where large parts of the graph, called modules, are connected to one another, and so have relatively high information flow through them. Nodes with a high cascade number have locally large influence, as they have many downstream nodes that depend on them, which means that they have substantial control of information flow in their neighborhood.
Global topology in the reaction-centric metabolic graphs
There are many ways to translate metabolites and reactions into a graph [33]. In many cases, metabolic networks have been represented as metabolite-centric graphs with metabolites as nodes and reactions as arcs [23,24,25]. In this study, we represented a metabolic network as a directed reaction-centric graph (reaction graph, hereafter) with reactions as nodes and metabolites as arcs. To measure modularity in each of the five reaction graphs, we generated 1000 random networks in which the in-degrees and out-degrees were set to those of the corresponding reaction graph; a minimal sketch of this degree-preserving randomization is given below.
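The sketch below shows one way such in/out-degree-matched random networks could be generated and scored against the observed modularity; it is an illustration under stated assumptions, not the study's pipeline. In particular, the community partition is obtained greedily on the undirected projection (the partitioning method used in the study is not specified here), and collapsing parallel arcs and self-loops perturbs the matched degrees slightly.

```python
# Sketch of a modularity significance test: the observed modularity of a
# directed graph is compared with degree-matched random networks built with
# the directed configuration model.  The community partition is found greedily
# on the undirected projection, which is a simplifying assumption.
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

def directed_modularity(G, communities):
    """Q = (1/m) * sum over within-community pairs of [A_ij - kout_i * kin_j / m]."""
    m = G.number_of_edges()
    q = 0.0
    for comm in communities:
        for i in comm:
            for j in comm:
                a_ij = 1.0 if G.has_edge(i, j) else 0.0
                q += a_ij / m - G.out_degree(i) * G.in_degree(j) / m ** 2
    return q

def modularity_of(G):
    partition = greedy_modularity_communities(G.to_undirected())
    return directed_modularity(G, partition)

def modularity_p_value(G, n_random=1000, seed=0):
    """Empirical P-value of G's modularity against in/out-degree-matched random graphs."""
    observed = modularity_of(G)
    rng = np.random.default_rng(seed)
    in_seq = [d for _, d in G.in_degree()]
    out_seq = [d for _, d in G.out_degree()]
    null = []
    for _ in range(n_random):
        R = nx.directed_configuration_model(in_seq, out_seq,
                                            seed=int(rng.integers(10 ** 9)))
        R = nx.DiGraph(R)                                   # collapse parallel arcs
        R.remove_edges_from(list(nx.selfloop_edges(R)))     # drop self-loops
        null.append(modularity_of(R))
    null = np.array(null)
    return observed, null.mean(), null.std(), float((null >= observed).mean())

# Toy usage on a small hypothetical directed graph (not a real reaction graph)
G = nx.gnp_random_graph(50, 0.08, directed=True, seed=1)
print(modularity_p_value(G, n_random=50))
```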
Modularity is widely used to measure how strongly a network is segregated into modules [34], and is defined as the fraction of arcs that fall within the given modules minus the expected fraction if arcs were distributed at random. All five reaction graphs were strongly modularized (Additional file 1: Table S1). For example, the modularity of the E. coli reaction graph (0.6103) was significantly higher (P-value = 0) than that of the degree-matched random networks (mean modularity of 0.2009 and standard deviation of 0.003). In the five reaction graphs studied, the distributions of the in-, out-, and total-degrees (k) followed a power law (Fig. 2). For example, in the E. coli reaction graph, the degree distributions of in-, out-, and total-degrees followed a power law with γin = − 1.32, γout = − 1.50, and γtotal = − 1.29, respectively. These indicate that the reaction graph is scale-free, characterized by a small number of heavily connected reaction nodes (hubs).
Fig. 2 Degree distribution in the reaction-centric metabolic networks. (a) Escherichia coli (iJO1366), (b) Bacillus subtilis (iYO844), (c) Geobacter metallireducens (iAF987), (d) Klebsiella pneumoniae (iYL1228), and (e) Saccharomyces cerevisiae (iMM904). In-degree (denoted as a red square), out-degree (blue triangle), or total-degree (black circle) was plotted against their probabilities on logarithmic scales
Relation of centrality metrics and reaction essentiality
Centrality metrics can give a ranking of nodes according to their importance in a network. To address the biological importance of reactions ranked highly by each centrality metric, we calculated and compared the proportions of predicted essential reactions in the top 5% of reactions by degree, betweenness, and bridging centralities in the five reaction graphs (Table 3). The essential reactions were predicted using FBA, which is a constrained optimization method based on reaction stoichiometry and a steady-state assumption [35]. Reactions with high bridging centralities tended to be essential, compared with those with high degree centralities. The exception was the reaction graph of K. pneumoniae, where the percentages of essential reactions with each centrality metric were almost the same.
Table 3 Proportions of the predicted essential reactions in the top 5% of reactions with high centralities in the reaction-centric metabolic networks
To expand insights into the influence of each centrality metric (bridging centrality, betweenness centrality, clustering coefficient, and degree) in the reaction graph of E. coli, the numbers of total reactions and essential reactions were plotted according to each centrality metric (Fig. 3). Reaction deletion simulation by FBA predicted 246 of the total 1251 reactions to be essential. Among them, 29 were ranked in the top 5% of high bridging centralities (P-value = 1.52 × 10−7) and 23 were listed in the top 5% of high betweenness centralities (P-value = 2.86 × 10−4). Reactions with high bridging centrality tended to be essential (correlation coefficient (r) between bridging centrality and percentage of essential reactions = 0.87) (Fig. 3a). For example (Additional file 1: Figure S2a), among the reactions with high bridging centralities, DHDPRy and HSK were identified as essential reactions by FBA, and were placed on the bridges branched from ASAD to synthesize lysine and threonine, respectively. They also connected each pathway to the reaction which produced input metabolites for the synthesis of the target.
Moreover, HSK was located on the tree comprising the cascade set led by ASAD. In another example (Additional file 1: Figure S2b), RBFSb and RBFSa were identified as essential reactions by FBA, and they were located on the linear pathway of riboflavin biosynthesis. Interestingly, they were connected with the cascade set whose leading reaction is GTPCI. Reactions with high betweenness centrality tended to be essential as well (r = 0.82) (Fig. 3b). Reactions with high clustering coefficients tended to be non-essential (r = − 0.86) (Fig. 3c), since in their absence there was an alternative connection between their neighbors. Unexpectedly, degree and the percentage of essential reactions were not correlated (r = 0.21) (Fig. 3d). Reaction deletion simulation showed that the average degree of essential reactions was 14.34, which was quite close to the average degree of all reactions (14.54). This indicates that reactions with high degree tend to have backup or alternative pathways, which act as substitutes when the high-degree reaction is removed.
Fig. 3 Number distributions of total reactions and essential reactions according to each of the centrality measures in the reaction-centric network of E. coli. (a) bridging centrality, (b) betweenness centrality, (c) clustering coefficient, and (d) total degree. In each stacked bar, the numbers of predicted essential and non-essential reactions are colored in black and gray, respectively, and their summation is equal to the number of total reactions in E. coli. A reaction was considered essential if its removal from the model led to a growth rate less than the default threshold of 5% of the growth objective value simulated for the wild-type strain. The percentage of essential reactions among the total reactions is denoted as a black circle
As illustrated in the synthetic network (Fig. 1 and Table 2), the modified bridging centrality detected nodes functioning as bottlenecks of information flow, as well as bridges. One of the major differences between nodes with high bridging centrality and those with high betweenness centrality is their position in the network. For example, in the reaction graph of E. coli, while nodes with high betweenness centrality tended to belong to densely connected modules (such as the pyruvate metabolism pathway or the citric acid cycle) (Additional file 1: Table S2), nodes with high bridging centrality were located on bridges between local biosynthesis modules with few connections (mostly cofactor and prosthetic group biosynthetic pathways) (Additional file 1: Table S3). Moreover, nodes with high bridging centrality had much smaller metabolic flux values in FBA of wild-type E. coli than nodes with high betweenness centrality. For a node to have high bridging centrality, the node itself has to have a low degree while its neighbors have relatively high degrees. The majority of such cases were found among reactions involved in cofactor biosynthesis. Cofactors are non-protein chemical compounds required for the activity of some enzymes. They participate in catalysis but are not used as substrates in the enzymatic reactions. In many cases, cofactors are required in minute amounts, and their cellular compositions are very low. For example, the serial reactions RBFSa and RBFSb for riboflavin (vitamin B2) biosynthesis showed high bridging centrality scores in the E. coli reaction graph.
Riboflavin can be synthesized by six other reactions using the reduced form of riboflavin (rbfvrd), which needs to be converted from riboflavin by NAD(P)H-associated reactions. RBFSb is the only riboflavin biosynthetic reaction which does not use rbfvrd. As riboflavin has a stoichiometry of 0.000223 in the E. coli growth objective function, the metabolic flux through RBFSb was quite small (0.0004 mmol/gDCW/h) in FBA of the wild-type E. coli, although RBFSb was predicted to be essential by the reaction deletion simulation.
Analysis of cascade sets and cascade numbers
In evaluating the local influence of a node, it is logical to say that the node has a high degree of control over information flow if its deletion or inactivation deprives its downstream neighbors of information flow within a network. In this study, we developed the cascade algorithm, based on counting the nodes which are closed off from information flow when a particular node is removed. Thus, the cascade number of a node can measure the local controllability of the node. To address the importance of the cascade number in the reaction-centric metabolic networks, we checked whether the removal of a leading reaction node generating a cascade set led to no growth in the reaction deletion simulation of the metabolic network models. The percentage of such essential leading cascade reactions among the total leading cascade reactions was calculated according to the cascade number (Table 4). In all five graphs, more than half of the reactions had a cascade number of zero and did not belong to any cascade set of another reaction. In other words, more than half of the reactions neither affected the network flow when removed nor were affected by the removal of any other single reaction. This indicates that the majority of reactions did not have any influence over their local connectivity.
Table 4 Proportions of essential leading cascade reactions according to the cascade number in the reaction-centric metabolic networks
Nodes with higher cascade numbers tended to be essential (r > 0.63) (Table 4). The exception was the reaction graph converted from iYO844 of B. subtilis (r = 0.43), mainly due to the presence of non-essential reactions having high cascade numbers. Interestingly, leading cascade reactions were essential or not depending on whether the growth objective function of a metabolic network included the metabolite(s) associated with the cascade set. For example, the cascade set reactions led by GLUTRS make uroporphyrinogen III (uppg3), which is required to make the prosthetic group siroheme (sheme) (Additional file 1: Figure S2c). The cascade numbers of GLUTRS are 7 and 10 in the reaction graphs of iJO1366 (E. coli) and iYO844 (B. subtilis), respectively. In the reaction deletion simulation, GLUTRS was essential in iJO1366 and non-essential in iYO844. The discrepancy in the essentiality of the same reaction in different metabolic models was caused by the fact that sheme is included only in the growth objective function of iJO1366. In other words, since the growth objective function of iJO1366 contains sheme, growth cannot occur without GLUTRS, and thus GLUTRS is essential in iJO1366. However, GLUTRS is non-essential in iYO844, whose growth objective function does not include sheme. This example demonstrates that the essentiality of a node with a high cascade number can be used in refining a metabolic network model. When the E. coli reaction graph was analyzed using the cascade algorithm, 959 of 1251 reactions had a cascade number of zero, implying that most reactions do not have any influence over their local connectivity.
Twenty-three reactions had a cascade number of ≥ 4, and each had an independent cascade set forming an acyclic subnetwork (Additional file 1: Table S4). Of the 23 leading cascade reactions, 8 were predicted to be essential by the reaction deletion simulation. Remarkably, all the reactions with a cascade number of 7 (MECDPDH5, ASAD, GTPCI, and GLUTRS) were predicted to be essential, indicating that their removal will result in severe system failure (Table 5). For example (Additional file 1: Figure S2a), the reaction ASAD (catalyzed by aspartate-semialdehyde dehydrogenase) generates 'aspsa' (L-aspartate-semialdehyde), which is involved in both lysine biosynthesis and homoserine biosynthesis. Its cascade set has seven member reactions performing the intermediate steps in the biosynthetic pathways of the branched-chain amino acids (leucine, isoleucine, and valine), serine, and glycine. In another example (Additional file 1: Figure S2b), two reactions (GTPCI and GTPCII2) catalyzed by GTP cyclohydrolases, which share the source metabolite GTP, are involved in the first steps of riboflavin biosynthesis and tetrahydrofolate biosynthesis, respectively. The cascade sets of GTPCI, with a cascade number of 7, and GTPCII2, with a cascade number of 3, form a tree subnetwork and a linear path, respectively. The cascade set of MECDPDH5 connects the biosynthetic pathways of isoprenoid and ubiquinol. The cascade sets involved many reactions with high bridging centralities, while they had far fewer intersections with reactions with high betweenness centralities (Additional file 1: Figure S3). This is not surprising, considering that reactions with high bridging centrality tended to be placed on bridges between modules with few connections.
Table 5 Cascade sets with the highest cascade number in the reaction-centric metabolic network of E. coli
The idea of breakage of information flow was also implemented in the topological flux balance (TFB) failure algorithm, which is based on a flux balance criterion and was devised to search for bidirectional failures along a directed bipartite metabolic graph having two types of nodes (metabolites and reactions) [36]. Under the steady-state assumption of a metabolic network, TFB detects large-scale cascading failure, where the removal of a single reaction can delete downstream neighboring nodes which lose all their inputs as well as upstream neighbors which lose all their outputs [36]; thus, it is more suitable for measuring the global robustness of a directed bipartite network. By contrast, the cascade algorithm developed in this study searches only for the downstream neighbors which lose all their inputs when a specific node is removed, focusing on the local cascading failure in a directed network.
Discussion
Topological analysis of a metabolic network provides valuable insights into the internal organization of the network and the topological roles of individual nodes [1, 9]. Detection of central nodes in asymmetrically directed biological networks depends on the biological questions asked about the global and local topology of the network. Various centrality metrics seek to quantify an individual node's prominence in a network by summarizing structural relations among the nodes, although most centrality metrics correlate with degree, indicating that high connectivity among nodes is important. In this study, for the topological analysis of metabolic networks, we applied various centrality metrics to the directed reaction-centric graphs of five phylogenetically distant organisms.
Degree centrality, betweenness centrality, clustering coefficient, and modularity were found to be useful in discovering global topological properties and modular structures of the reaction graphs. To explain connections between modules and local connectivity in directed reaction-centric graphs, we modified bridging centrality and developed the cascade number. We demonstrated that the cascade algorithm and the modified bridging centrality can identify cascade subnetworks controlling local information flow and irreplaceable bridging nodes between functional modules, respectively. When metabolic and biochemical networks are represented as metabolite graphs, they have been known to be scale-free and small-world [3, 24, 37]. In this work, we found that the degree distributions of the reaction graphs of all five phylogenetically distant microorganisms followed a power law (Fig. 2). This agrees with a previous report that reaction graphs of cancer metabolic networks follow a power-law degree distribution [13]. However, it contrasts with a previous work showing that the E. coli reaction graph with undirected edges was not scale-free [38]. This discrepancy can be attributed to differences in network size and directionality: we used a directed reaction graph of the E. coli metabolic network that is much bigger than that in the previous study [38], and considered the directionality of the reaction flow, which added more nodes and information to the network. In this study, we found that reaction nodes linking modules need not be hubs with high degree. This contrasts with the metabolite hubs which connect modules in metabolite-centric metabolic networks [3, 24]. There were two types of connections among the modules in the reaction graphs: bottlenecks with high betweenness centrality and bridges with high bridging centrality. The high-betweenness reactions had the potential to disconnect the network and damage the organism's growth rate when removed. Although betweenness centrality was not correlated with degree, the degrees of high-betweenness reactions were relatively high or medium (Additional file 1: Table S2), suggesting that betweenness centrality measures global connectivity among central modules with many connections. On the other hand, bridging centrality could detect nodes placed on the bridges between local biosynthesis modules with few connections (Additional file 1: Table S3). We developed a novel metric, called the cascade number, to identify local connectivity structures in directed graphs. The cascade number counts how many reactions shut down if one reaction is perturbed at a steady state, and thus measures a reaction's influence over local connectivity for metabolite flow. Perturbation of a node with a high cascade number could alter the local route of a metabolic process, or cause damage to the metabolic system. In the E. coli reaction graph, 959 of the 1251 total reactions had a cascade number of zero, which implies that most reactions did not have any influence over their local connectivity. It has been known that universal metabolic pathways across species, such as the citric acid cycle and glycolytic pathways, have relatively few essential reactions [39, 40]. This fact indicates that important reactions are more likely to have a backup pathway [40, 41], and therefore the cascade numbers of such reactions tended to be low or zero.
By contrast, nodes with higher cascade numbers tended to be essential, implying that their removal will result in severe breakage of information flow in a metabolic network (Table 4 and Additional file 1: Table S4). Both bridging centrality and the cascade number are local properties, reflecting local information flow within a metabolic network. Bridging centrality can be used to locate nodes in the network that lie on the boundaries of modules within a network. The nodes with high bridging centrality, even though they are located with local information, can have global importance, forming breakpoints in the information flow. The importance of the cascade number is also potentially global, though less so than bridging centrality. A node with a high cascade number is a node with larger degree of influence on the network. The global impact of a node with high local influence can be accessed by simulation or biological experimentation. Knowing the nodes with a large cascade number informs the design of such experiments: these nodes are more likely than others to have a large influence and can be looked at first. In this study, we explored topological features of individual reaction nodes in reaction-centric metabolic networks from global and local perspectives. In particular, we demonstrated that the cascade number and the modified bridging centrality can identify reaction nodes that control the local information flow in the reaction graphs. Identification of central connectors between local modules with the modified bridging centrality, together with local flow connectivity, which was ascertained with the cascade algorithm, is critical to understand how metabolic pathways are assembled. A metabolic network is a map that assembles central and local biosynthesis pathways where the metabolites run through the reactions. Identifying reaction nodes and their associated genes important in global and local connectivity between modules can be useful to prioritize targets in the fields of metabolic engineering and medicine. Centrality metrics in a directed network Several centrality metrics have been developed to identify important components in a network from different centrality viewpoints [1]. Among them, we applied the clustering coefficient and betweenness centrality to the analysis of directed networks. As bridging centrality had been developed for undirected networks [11], we modified it to be applied for directed networks. Clustering coefficient The neighbors of a node i are defined as a set of nodes connected directly to the node i. The clustering coefficient of a node in a network quantifies how well its neighbors are connected to each other [42]. The clustering coefficient of a node i, C(i), is the ratio of the number of arcs between the neighbors of i to the total possible number of arcs between its neighbors. For a directed network, C(i) can be calculated as: $$ C(i)=\frac{n_i}{k_i\left({k}_i-1\right)}, $$ where ni is the number of arcs between neighbors of the node i, and ki is the number of neighbors of the node i. The closer the clustering coefficient of a node is to 1, the more likely it is for the node and its neighbors to form a cluster. By definition, it measures the tendency of a network to be divided into clusters, and thus, is related to network modularity. The majority of biological networks have a considerably higher average value for the clustering coefficient in comparison to random networks, indicating that they have a modular nature [1]. 
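To make this concrete, the directed clustering coefficient above can be computed with a short sketch of the following form (Python; the dictionary-of-sets graph layout and the function name are illustrative assumptions, not taken from the paper):

    def clustering_coefficient(succ, pred, i):
        # succ, pred: dicts mapping each node to its set of successors / predecessors.
        # Neighbors of i are all nodes directly connected to i in either direction;
        # n_i counts the arcs running between those neighbors.
        neighbors = (succ.get(i, set()) | pred.get(i, set())) - {i}
        k = len(neighbors)
        if k < 2:
            return 0.0
        n_arcs = sum(1 for u in neighbors
                       for v in succ.get(u, set())
                       if v in neighbors and v != u)
        return n_arcs / (k * (k - 1))

A value close to 1 indicates that the node and its neighbors form a tightly knit cluster, matching the interpretation above.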
Betweenness centrality The betweenness centrality of a node is the fraction of shortest paths from all nodes to all others that pass through the particular node [10]. The betweenness centrality of a node i, B(i), is calculated as: $$ B(i)=\sum \limits_{j\ne i\ne k}\frac{\sigma_{jk}(i)}{\sigma_{jk}}, $$ where σjk is the total number of shortest paths from node j to node k, and σjk(i) is the total number of those paths that pass through node i. The higher the betweenness centrality of a node is, the higher the number of shortest paths that pass through the node. A node with a high betweenness centrality has a large influence on the information flow through the network, under the assumption that reaction flow follows the shortest paths [43]. The node with a high betweenness centrality tends to be a linker between modules, and has often been called a bottleneck in the network [44]. Although a bottleneck node does not necessarily have many interactions like a hub node, its removal often results in a higher fragmentation of a network, than when a hub node is removed. Modification of bridging centrality The bridging centrality identifies bridging nodes lying between densely connected regions called modules [11]. The bridging centrality of node i, BrC(i), is calculated as the product of the betweenness centrality, B(i), and the bridging coefficient, BC(i), which measure the global and local features of a node, respectively [11]. $$ BrC(i)=B(i)\times BC(i) $$ Previously, the bridging coefficient in an undirected network was defined [11] as: $$ BC(i)=\frac{{\left( degree(i)\right)}^{-1}}{\sum_{j\ in\ \varLambda (i)}{\left( degree(j)\right)}^{-1}}, $$ where Λ(i) is the set neighbors of the node i. In a directed network where the information flows through a node, the node needs to have both incoming and outgoing edges. Thus, we modified the bridging coefficient in a directed network as: $$ BC(i)=\left\{\begin{array}{c}\ \frac{{\left( degre{e}_{total}(i)\right)}^{-1}}{\sum_{j\ in\ \varLambda (i)}{\left( degre{e}_{total}(j)\right)}^{-1}}\kern0.5em if\ degre{e}_{in}(i)\ne 0\ and\ degre{e}_{out}(i)\ne 0\\ {}0\kern9.5em otherwise\end{array}\right., $$ where degreetotal(i) is the sum of degreein(i) and degreeout(i) of node i. By definition, for a node to have a high bridging coefficient, degrees of the node and the number of its neighbors have to be low and high, respectively. Both betweenness centrality and bridging coefficient have a positive effect on bridging centrality. These indicate that from the perspective of information flow, a good example of a node with high bridging centrality would be a bridge in the form of a path with length two, uniquely delivering information between neighbors that themselves have high degrees (Additional file 1: Figure S1). Development of a cascade algorithm We devised a cascade algorithm for detecting how many nodes are closed off from information flow when a particular node is removed in a directed network. If a node is locked down or suffers an accidental shutdown, such a change is propagated through the network. Any nodes dependent on the failed node cannot receive the information if there are no alternate path(s) bypassing the failed node. We defined the "cascade set" of a node as the set of nodes that cease to receive information when the node fails, and the "cascade number" of a node as the number of nodes in the cascade set. For two cascade sets A and B, if a leading cascade node generating A belongs to B, A is included in B. 
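A minimal sketch of the modified bridging coefficient and the resulting bridging centrality defined above, assuming the reaction graph is stored as a networkx DiGraph (the use of networkx and the helper names are assumptions for illustration only):

    import networkx as nx

    def bridging_coefficient(G, i):
        # Directed variant: defined only for nodes with both incoming and outgoing arcs.
        if G.in_degree(i) == 0 or G.out_degree(i) == 0:
            return 0.0
        neighbors = set(G.predecessors(i)) | set(G.successors(i))
        denom = sum(1.0 / (G.in_degree(j) + G.out_degree(j)) for j in neighbors)
        return (1.0 / (G.in_degree(i) + G.out_degree(i))) / denom

    def bridging_centrality(G):
        # BrC(i) = B(i) * BC(i), with B(i) the betweenness centrality on the directed graph.
        B = nx.betweenness_centrality(G)
        return {i: B[i] * bridging_coefficient(G, i) for i in G.nodes()}

Here the neighbor set combines predecessors and successors, consistent with the definition of neighbors used for the clustering coefficient above.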
A cascade set becomes independent if its member nodes are not included in any other cascade sets. A node generating an independent cascade set was referred to as a "leading cascade node". Let a directional network be an ordered pair, (V, A), where V is the set of nodes and A is the set of arcs of the network. Then, the cascade set and cascade number are computed by the following algorithm: Graph representation of a directed reaction-centric metabolic network The reaction graph was represented as a directed graph with metabolic reactions as nodes and metabolites as arcs. The reactions and metabolites were collected from the metabolic network models of E. coli (iJO1366) [27], B. subtilis (iYO844) [28], G. metallireducens (iAF987) [29], K. pneumonia (iYL1228) [30], and S. cerevisiae (iMM904) [31] (Table 1), which were downloaded from the BIGG database [45] in the SBML file format. For each of the metabolic network models, the collected reactions and metabolites were used to reconstruct the reaction graph (Table 1). For example, 1805 unique metabolites and 2583 metabolic reactions in iJO1366 of E. coli were reconstructed to the reaction graph consisting of 1251 nodes (reactions) and 9099 arcs associated with 2014 metabolites. Adjacency matrices of the five reaction graphs converted from the downloaded metabolic network models are provided as Additional file 2. A reaction graph is G = (V, A) where V is a set of reaction nodes, and A is a set of V's arcs. There exists an arc from the reaction B to the reaction C when a product of B is consumed by C. For example, consider following three consecutive reactions: ASAD: 4pasp ↔ aspsa HSDy: aspsa ↔ hom-L HSK: hom-L → phom The corresponding arcs are ASAD→HSDy, HSDy→ASAD, and HSDy→HSK (i.e., ASAD↔HSDy→ HSK), where two consecutive reversible reactions of ASAD and HSDy form the directed cycle with length of two. Currency metabolites such as ATP, NAD, and H2O are ubiquitously associated with metabolic reactions. However, they are not incorporated into the final products. As pathways routing through the currency metabolites result in a biologically meaningless short path length, the currency metabolites were removed [24, 38, 46]. Similarly, transport and exchange reactions occurring at the cell boundary were removed, as they do not affect any relationship or reaction flow among intracellular reactions, while they inflate the size of the network and the average path length, and weaken the modular structure of intracellular connectivity. In the converted reaction graph, the degree of a reaction node is the number of other reactions that produce (or consume) metabolites which are consumed (or produced) by the reaction node. For example, consider a reaction AACPS1 (ACP[c] + atp[c] + ttdca[c] - > amp[c] + myrsACP[c] + ppi[c]). AACPS1 has two metabolites of ACP[c] and ttdca[c] as reactants, and one metabolite of myrsACP[c] as a product. (Recall that the currency metabolites of atp[c], amp[c], and ppi[c] were removed in the reaction graph.) ACP[c] and ttdca[c] are produced from other 57 reactions, and myrsACP[c] is consumed in 7 reactions. Therefore, the in-degree and out-degree of the reaction node AACPS1 are 57 and 7, respectively. Simulation of reaction essentiality in the metabolic networks To identify reactions which are essential for cell growth, flux balance analysis (FBA) [47] was performed to simulate cell growth when each reaction was removed from each metabolic network model. 
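Before turning to the essentiality simulations, the reaction-graph construction and the cascade computation described above can be sketched as follows (Python; the input layout, with per-reaction reactant and product sets after currency metabolites are removed, is an assumption made for the example):

    from collections import defaultdict

    def build_reaction_graph(reactions):
        # reactions: dict mapping reaction id -> (set_of_reactants, set_of_products).
        # Adds an arc B -> C whenever a product of B is consumed by C.
        producers, consumers = defaultdict(set), defaultdict(set)
        for rxn, (reactants, products) in reactions.items():
            for m in products:
                producers[m].add(rxn)
            for m in reactants:
                consumers[m].add(rxn)
        succ = defaultdict(set)
        for m, prods in producers.items():
            for b in prods:
                for c in consumers.get(m, set()):
                    if b != c:
                        succ[b].add(c)
        return succ

    def cascade_set(succ, failed):
        # Nodes that cease to receive information when `failed` is removed:
        # iteratively remove any downstream node that loses all of its inputs.
        pred = defaultdict(set)
        for b, cs in succ.items():
            for c in cs:
                pred[c].add(b)
        dead, changed = {failed}, True
        while changed:
            changed = False
            for node, inputs in pred.items():
                if node not in dead and inputs and inputs <= dead:
                    dead.add(node)
                    changed = True
        return dead - {failed}   # the cascade number is the size of this set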
The default flux boundaries in the downloaded SBML files were used for the simulation condition and maximum growth rate was for the objective function. In FBA, the allowed nutrients for iJO1366 (E. coli) were Ca2+, Cl−, CO2, Co2+, Cob(I)alamin, Cu2+, Fe2+, Fe3+, glucose, H+, H2O, HPO42−, K+, Mg2+, Mn2+, MoO42−, Na+, NH4+, Ni2+, O2, selenate, selenite, SO42−, tungstate, and Zn2+; for iYO844 (B. subtilis), Ca2+, CO2, Fe3+, glucose, H+, H2O, HPO42−,K+, Mg2+, Na+, NH4+, O2, and SO42−; for iYL1228 (K. pneumoniae), Ca2+, Cl−, CO2, Co2+, Cu2+, Fe2+, Fe3+, glucose, H+, H2O, HPO42−, K+, Mg2+, Mn2+, MoO42−, Na+, NH4+, O2, SO42−, tungstate, and Zn2+; for iMM904 (S. cerevisiae), Fe2+, glucose, H+, H2O, HPO42−, K+, O2, Na+, NH4+, and SO42−; and for iAF987 (G. metallireducens), acetate, Cd2+, Ca2+, Cl−, chromate, CO2, Co2+, Cu+, Cu2+, Fe2+, Fe3+, H+, H2O, HPO42−, K+, Mg2+, Mn2+, MoO42−, Na+, N2, NH4+, Ni2+, SO42−, SO32−, tungstate, and Zn2+. A reaction was considered essential if when its removal from the model led to a growth rate less than the default threshold of 5% of the growth objective value simulated for the wild type strain [48]. The simulation was carried out using COBRA toolbox version 2.0 [49] in MATLAB R2016a (Mathworks Inc.). Flux balance analysis Pavlopoulos GA, Secrier M, Moschopoulos CN, Soldatos TG, Kossida S, Aerts J, Schneider R, Bagos PG. Using graph theory to analyze biological networks. BioData Min. 2011;4:10. Albert R, Barabási A-L. Statistical mechanics of complex networks. Rev Mod Phys. 2002;74(1):47–97. Jeong H, Tombor B, Albert R, Oltvai ZN, Barabasi AL. The large-scale organization of metabolic networks. Nature. 2000;407(6804):651–4. Ravasz E, Somera AL, Mongru DA, Oltvai ZN, Barabasi AL. Hierarchical organization of modularity in metabolic networks. Science. 2002;297(5586):1551–5. Milo R, Shen-Orr S, Itzkovitz S, Kashtan N, Chklovskii D, Alon U. Network motifs: simple building blocks of complex networks. Science. 2002;298(5594):824–7. Barabasi AL, Oltvai ZN. Network biology: understanding the cell's functional organization. Nat Rev Genet. 2004;5(2):101–13. Somvanshi PR, Venkatesh KV. A conceptual review on systems biology in health and diseases: from biological networks to modern therapeutics. Syst Synth Biol. 2014;8(1):99–116. Lee SY, Kim HU. Systems strategies for developing industrial microbial strains. Nat Biotechnol. 2015;33(10):1061–72. Wuchty S, Stadler PF. Centers of complex networks. J Theor Biol. 2003;223(1):45–53. Freeman LC. A set of measures of centrality based on betweenness. Sociometry. 1977;40:35–41. Hwang W, Cho Y, Zhang A, Ramanathan M. Bridging centrality: identifying bridging nodes in scale-free networks. Proceedings of the 12th ACM SIGKDD international conference on knowledge discovery and data mining. 2006. Winterbach W, Van Mieghem P, Reinders M, Wang H, de Ridder D. Topology of molecular interaction networks. BMC Syst Biol. 2013;7:90. Asgari Y, Salehzadeh-Yazdi A, Schreiber F, Masoudi-Nejad A. Controllability in cancer metabolic networks according to drug targets as driver nodes. PLoS One. 2013;8(11):e79397. Holmstrom KM, Finkel T. Cellular mechanisms and physiological consequences of redox-dependent signalling. Nat Rev Mol Cell Biol. 2014;15(6):411–21. Clark AR, Toker A. Signalling specificity in the Akt pathway in breast cancer. Biochem Soc Trans. 2014;42(5):1349–55. Feist AM, Herrgard MJ, Thiele I, Reed JL, Palsson BO. Reconstruction of biochemical networks in microorganisms. Nat Rev Microbiol. 2009;7(2):129–43. Kim H, Kim S, Yoon SH. 
Metabolic network reconstruction and phenome analysis of the industrial microbe, Escherichia coli BL21(DE3). PLoS One. 2018;13(9):e0204375. Kanehisa M, Furumichi M, Tanabe M, Sato Y, Morishima K. KEGG: new perspectives on genomes, pathways, diseases and drugs. Nucleic Acids Res. 2017;45(D1):D353–d61. Fabregat A, Jupe S, Matthews L, Sidiropoulos K, Gillespie M, Garapati P, Haw R, Jassal B, Korninger F, May B, et al. The Reactome pathway knowledgebase. Nucleic Acids Res. 2018;46(D1):D649–d55. Caspi R, Billington R, Ferrer L, Foerster H, Fulcher CA, Keseler IM, Kothari A, Krummenacker M, Latendresse M, Mueller LA, et al. The MetaCyc database of metabolic pathways and enzymes and the BioCyc collection of pathway/genome databases. Nucleic Acids Res. 2016;44(D1):D471–80. Devoid S, Overbeek R, DeJongh M, Vonstein V, Best AA, Henry C. Automated genome annotation and metabolic model reconstruction in the SEED and model SEED. Methods Mol Biol. 2013;985:17–45. Thiele I, Palsson BO. A protocol for generating a high-quality genome-scale metabolic reconstruction. Nat Protoc. 2010;5(1):93–121. Guimera R, Nunes Amaral LA. Functional cartography of complex metabolic networks. Nature. 2005;433(7028):895–900. Ma H, Zeng AP. Reconstruction of metabolic networks from genome data and analysis of their global structure for various organisms. Bioinformatics. 2003;19(2):270–7. Resendis-Antonio O, Hernandez M, Mora Y, Encarnacion S. Functional modules, structural topology, and optimal activity in metabolic networks. PLoS Comput Biol. 2012;8(10):e1002720. Pavlopoulos GA, Kontou PI, Pavlopoulou A, Bouyioukos C, Markou E, Bagos PG. Bipartite graphs in systems biology and medicine: a survey of methods and applications. Gigascience. 2018;7(4):1–31. Orth JD, Conrad TM, Na J, Lerman JA, Nam H, Feist AM, Palsson BO. A comprehensive genome-scale reconstruction of Escherichia coli metabolism-2011. Mol Syst Biol. 2011;7:535. Oh YK, Palsson BO, Park SM, Schilling CH, Mahadevan R. Genome-scale reconstruction of metabolic network in Bacillus subtilis based on high-throughput phenotyping and gene essentiality data. J Biol Chem. 2007;282(39):28791–9. Feist AM, Nagarajan H, Rotaru AE, Tremblay PL, Zhang T, Nevin KP, Lovley DR, Zengler K. Constraint-based modeling of carbon fixation and the energetics of electron transfer in Geobacter metallireducens. PLoS Comput Biol. 2014;10(4):e1003575. Liao YC, Huang TW, Chen FC, Charusanti P, Hong JS, Chang HY, Tsai SF, Palsson BO, Hsiung CA. An experimentally validated genome-scale metabolic reconstruction of Klebsiella pneumoniae MGH 78578, iYL1228. J Bacteriol. 2011;193(7):1710–7. Mo ML, Palsson BO, Herrgard MJ. Connecting extracellular metabolomic measurements to intracellular flux states in yeast. BMC Syst Biol. 2009;3:37. Reed JL, Palsson BO. Thirteen years of building constraint-based in silico models of Escherichia coli. J Bacteriol. 2003;185(9):2692–9. Planes FJ, Beasley JE. Path finding approaches and metabolic pathways. Discret Appl Math. 2009;157:2244–56. Newman ME. Modularity and community structure in networks. Proc Natl Acad Sci U S A. 2006;103(23):8577–82. O'Brien EJ, Monk JM, Palsson BO. Using genome-scale models to predict biological capabilities. Cell. 2015;161(5):971–87. Smart AG, Amaral LA, Ottino JM. Cascading failure and robustness in metabolic networks. Proc Natl Acad Sci U S A. 2008;105(36):13223–8. Cai JJ, Borenstein E, Petrov DA. Broker genes in human disease. Genome Biol Evol. 2010;2:815–25. Wagner A, Fell DA. The small world inside large metabolic networks. 
Proc Biol Sci. 2001;268(1478):1803–10. Gerdes SY, Scholle MD, Campbell JW, Balazsi G, Ravasz E, Daugherty MD, Somera AL, Kyrpides NC, Anderson I, Gelfand MS, et al. Experimental determination and system level analysis of essential genes in Escherichia coli MG1655. J Bacteriol. 2003;185(19):5673–84. Ghim CM, Goh KI, Kahng B. Lethality and synthetic lethality in the genome-wide metabolic network of Escherichia coli. J Theor Biol. 2005;237(4):401–11. Kim PJ, Lee DY, Kim TY, Lee KH, Jeong H, Lee SY, Park S. Metabolite essentiality elucidates robustness of Escherichia coli metabolism. Proc Natl Acad Sci U S A. 2007;104(34):13638–42. Watts DJ, Strogatz SH. Collective dynamics of 'small-world' networks. Nature. 1998;393(6684):440–2. Goh KI, Oh E, Jeong H, Kahng B, Kim D. Classification of scale-free networks. Proc Natl Acad Sci U S A. 2002;99(20):12583–8. Yu H, Kim PM, Sprecher E, Trifonov V, Gerstein M. The importance of bottlenecks in protein networks: correlation with gene essentiality and expression dynamics. PLoS Comput Biol. 2007;3(4):e59. King ZA, Lu J, Drager A, Miller P, Federowicz S, Lerman JA, Ebrahim A, Palsson BO, Lewis NE. BiGG models: a platform for integrating, standardizing and sharing genome-scale models. Nucleic Acids Res. 2016;44(D1):D515–22. Croes D, Couche F, Wodak SJ, van Helden J. Inferring meaningful pathways in weighted metabolic networks. J Mol Biol. 2006;356(1):222–36. Orth JD, Thiele I, Palsson BO. What is flux balance analysis? Nat Biotechnol. 2010;28(3):245–8. Ebrahim A, Lerman JA, Palsson BO, Hyduke DR. COBRApy: COnstraints-based reconstruction and analysis for python. BMC Syst Biol. 2013;7:74. Schellenberger J, Que R, Fleming RM, Thiele I, Orth JD, Feist AM, Zielinski DC, Bordbar A, Lewis NE, Rahmanian S, et al. Quantitative prediction of cellular metabolism with constraint-based models: the COBRA toolbox v2.0. Nat Protoc. 2011;6(9):1290–307. SHY was supported by the National Research Foundation of Korea through the Technology Development Program to Solve Climate Changes on Systems Metabolic Engineering for Biorefineries (NRF-2012M1A2A2026559) and Bio and Medical Technology Development Program (NRF-2018M3A9F3021968). E-YK was supported by the National Research Foundation of Korea (NRF-2017R1E1A1A03070806). DA was supported by the Canadian Natural Sciences and Engineering Research Council. The funders played no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. All data generated or analysed during this study are included in this published article [and its supplementary information files]. MATLAB implementation of the cascade algorithm is available at https://github.com/sybirg/cascade with a DOI of https://doi.org/10.5281/zenodo.2634927. School of Basic Sciences, Hanbat National University, Daejeon, 34158, Republic of Korea Eun-Youn Kim Department of Mathematics and Statistics, the University of Guelph, Guelph, Ontario, N1G 2W1, Canada Daniel Ashlock Department of Bioscience and Biotechnology, Konkuk University, Seoul, 05029, Republic of Korea Sung Ho Yoon Search for Eun-Youn Kim in: Search for Daniel Ashlock in: Search for Sung Ho Yoon in: E-YK and SHY conceived and designed the analyses. E-YK and DA developed the algorithm. E-YK implemented the methods, and performed the analyses. E-YK and SHY wrote the manuscript. All authors reviewed and approved the final manuscript. Correspondence to Sung Ho Yoon. Table S1. Modularity and scale-freeness of the reaction-centric metabolic networks; Table S2. 
The top 2% of reactions with high betweenness centrality in the reaction-centric metabolic network of E. coli; Table S3. The top 2% of reactions with high bridging centrality scores in the reaction-centric metabolic network of E. coli; Table S4. Cascade sets (with a cascade number of ≥4) and their characteristics in the reaction-centric metabolic network of E. coli; Figure S1. Example of a bridge node (n) with high bridging centrality; Figure S2. Examples of cascade sets consisting of a linear path and a tree; Figure S3. Comparison of reactions with high centralities identified in the reaction-centric metabolic network of E. coli. (DOCX 221 kb) Adjacency matrices of the five reaction graphs. (XLSX 13419 kb) Directed network Metabolic network Reaction-centric graph Cascade number Centrality metric Information flow Networks analysis
Q: What are the five key traits present in nearly all plants but not in charophytes?
A: Alternation of generations; multicellular, dependent embryos; walled spores produced in sporangia; multicellular gametangia; and apical meristems.
Q: Plants alternate between two multicellular generations, a reproductive cycle called _____ __ ________.
A: Alternation of generations.
Q: The ________ generation is haploid and produces haploid gametes by mitosis.
A: Gametophyte.
Q: Fusion of a sperm and egg gives rise to the diploid ______, which produces haploid spores by meiosis.
A: Sporophyte.
Q: Plants are called __________ because of the dependency of the embryo on the parent.
A: Embryophytes.
Q: The sporophyte produces spores in organs called ______.
A: Sporangia.
Q: Gametes are produced within organs called ________.
A: Gametangia.
Q: Plants sustain continual growth in length by repeated cell division within the _______ _______.
A: Apical meristems.
Q: _________, symbiotic associations between fungi and plants, may have helped plants without true roots obtain nutrients.
A: Mycorrhizae.
Q: Most plants have _____ ______, cells joined into tubes for the transport of water and nutrients.
A: Vascular tissue.
Q: What does it mean when we say bryophytes do not form a monophyletic group?
A: They are nonvascular plants; they form a grade, not a clade.
Q: A ______ is an embryo and nutrients surrounded by a protective coat.
A: Seed.
Q: _______ anchor gametophytes to substrate.
A: Rhizoids.
Q: What are the characteristics of vascular plants?
A: Life cycles with dominant sporophytes, vascular tissues called xylem and phloem, well-developed roots and leaves, and spore-bearing leaves called sporophylls.
Q: What does gymnosperm mean?
A: "Naked seed."
Q: What are angiosperms?
A: Angio = vessel, sperm = seed.
Q: What is a flower?
A: A flower is an angiosperm structure specialized for sexual reproduction.
Q: What is a fruit?
A: A fruit is formed when the ovary thickens and matures.
Q: What are monocots?
A: One cotyledon.
Q: What are eudicots?
A: Two cotyledons.
Q: Which of the following could occur only after plants moved from the oceans to land?
A: Animals could also move onto land because there were opportunities for new food sources.
Q: The most direct ancestors of land plants were probably ____.
A: Green algae (charophytes).
Q: Why have biologists hypothesized that the first land plants had a low, sprawling growth habit?
A: The ancestors of land plants, green algae, lacked the structural support to stand erect in air.
Q: Spores and seeds have basically the same function, dispersal, but are vastly different because spores ________.
A: Are unicellular; seeds are not.
Q: You find a green organism in a pond near your house and believe it is a plant, not an alga. The mystery organism is most likely a plant and not an alga if it ________.
A: Is surrounded by a cuticle.
Q: Which of the following statements about stomata is accurate?
A: Stomata are important in terrestrial plants because they allow CO2 to diffuse into the plant.
Q: Grades, as opposed to clades, ________.
A: Represent groups with similar traits.
Q: Liverworts, hornworts, and mosses are grouped together as bryophytes. Besides not having vascular tissue, what do they all have in common?
A: They require water for reproduction.
Q: Which of these are spore-producing structures?
A: The sporophyte (capsule) of a moss.
Q: Which of the following features of how seedless land plants get sperm to egg are the same as for some of their algal ancestors?
A: Flagellated sperm swim to the eggs in a water drop.
Q: The evolution of a vascular system in plants allowed which of the following to occur?
A: Increased height, improved competition for light, and increased spore dispersal distance.
Q: If you walk through an area with mosses and ferns, you are seeing ________.
A: Both sporophyte and gametophyte generations.
Q: Which of the following is a major trend in land plant evolution?
A: The trend toward a sporophyte-dominated life cycle.
Q: The advantages of seeds, compared to spores, include ________.
A: Containing a nutrient store for a developing sporophyte.
Q: The agouti is most directly involved with the Brazil nut tree's dispersal of ________.
A: Sporophyte embryos.
Q: In onions (Allium), cells of the sporophyte have 16 chromosomes within each nucleus. How many chromosomes should be in the nucleus of an egg within the embryo sac prior to fertilization?
A: 8 (the egg is haploid).
Q: Robbie and Saurab are pre-med and pre-pharmacy students, respectively. They complain to their biology professor that they should not have to study plants because plants have little relevance to their chosen professions. Which of the following statements are correct with regard to what physicians and pharmacists need to know about plants?
A: Land plants produce poisons and medicines.
npj climate and atmospheric science
Modeling fine-grained spatio-temporal pollution maps with low-cost sensors
Shiva R. Iyer, Ananth Balashankar, William H. Aeberhard, Sujoy Bhattacharyya, Giuditta Rusconi, Lejo Jose, Nita Soans, Anant Sudarshan, Rohini Pande & Lakshminarayanan Subramanian
npj Climate and Atmospheric Science volume 5, Article number: 76 (2022)
The use of air quality monitoring networks to inform urban policies is critical especially where urban populations are exposed to unprecedented levels of air pollution. High costs, however, limit city governments' ability to deploy reference grade air quality monitors at scale; for instance, only 33 reference grade monitors are available for the entire territory of Delhi, India, spanning 1500 sq km with 15 million residents. In this paper, we describe a high-precision spatio-temporal prediction model that can be used to derive fine-grained pollution maps. We utilize two years of data from a low-cost monitoring network of 28 custom-designed low-cost portable air quality sensors covering a dense region of Delhi. The model combines message-passing recurrent neural networks with conventional spatio-temporal geostatistics models to achieve high predictive accuracy in the face of high data variability and intermittent data availability from low-cost sensors (due to sensor faults, network, and power issues). Using data from reference grade monitors for validation, our spatio-temporal pollution model can make predictions within 1-hour time-windows at 9.4, 10.5, and 9.6% Mean Absolute Percentage Error (MAPE) over our low-cost monitors, reference grade monitors, and the combined monitoring network respectively. These accurate fine-grained pollution sensing maps provide a way forward to build citizen-driven low-cost monitoring systems that detect hazardous urban air quality at fine-grained granularities.
Pollution prediction in cities with dense populations can be critical for generating fine-grained policy recommendations and public health warnings1,2,3. The scale of accurate sensor-based monitoring required to achieve this can come at a huge cost and thus inhibit building a dense fine-grained pollution sensing map. The deployment of low-cost particulate matter sensors to replace or augment reference grade pollution air quality monitoring systems has been studied extensively recently, and have addressed issues of calibration4,5,6, design7,8, data selection9, and personal exposure quantification10,11. However, building a highly accurate large scale fine-grained pollution sensing and monitoring map that leverages the size of a pollution network has been largely unexplored. Specifically, modeling the behavior of noisy low-cost sensors in cities with high pollution and population density has not been studied previously, with recent state-of-the-art mapping approaches providing errors only in the range of 30–40%12,13. This high error lends the pollution sensing map unusable for policymaking and air quality hazard detection. Prior work on deploying low-cost sensor networks for air pollution have been successful on a small scale (within 2 km radius) with high rates of agreement for PM 2.5 measurements in Southeastern United States14. Survey studies have shown that there is a necessity for a paradigm shift towards crowd-funded sensor networks to enable fine-grained sensing-based applications on a large scale15. The question of calibration issues in such large scale settings has been explored recently with promising results without the need for significant recalibration16 after well-controlled laboratory calibration17. PM 2.5 prediction models recently have explored deep neural networks like long-short term memory (LSTM), convolution neural networks (CNN), attention-based models; vector regression, partial differential equations, but focus on a single unified model at a single location, rather than in a large scale sensor network setting18,19,20,21,22,23,24. Recent work has also explored the use of distributed sensor networks to gather information on air pollution and other meteorological variables in urban contexts25,26,27,28,29. Clements et al. 30 provide a comprehensive review of many such works. Researchers have sought to learn more about how pollution sensing systems of low-cost sensors may be deployed in urban contexts14,31,32,33,34,35,36. With the exception of Gao et al. 36, who examine the performance of fine particulate sensors in Xi'an in China, most of these deployments have occurred in areas with significantly lower air pollution than the city of Delhi in India. Gao et al. 36 also point out that low-cost PM2.5 sensors may perform worse in very low pollution environments, suggesting that they may be relatively more useful when particulate concentrations are high. Related approaches in this space can be broadly classified into three groups—spatial interpolation approaches, land-use regression, and dispersion models Xie et al. 37, Jerrett et al. 38. In the case of dispersion models, they assume that an appropriate chemical transport model is identified along with their parameter values, and a high-quality emissions inventory. In the case of land-use regression models, having access to environmental characteristics that significantly influence pollution is critical. 
This additional data is often suited for longer range predictions, as the geographical and meteorological data vary over longer temporal and coarser spatial grids39,40. In this paper, we describe a methodology to model and predict urban air quality at a fine-grained level using dense, noisy, low-cost sensors. There are two main questions we seek to answer in this paper: (i) how can we use a network of low-cost, portable air quality monitors to build a fine-grained pollution heatmap of a city that provides accurate predictions? and (ii) does it help to augment existing monitoring networks operated by local governments with low-cost air quality sensors? We deploy a network of 28 low-cost sensors, many of them concentrated in the south Delhi area, in collaboration with Kaiterra41, a company that makes low-cost air quality monitors and air filters. We dramatically increase the density of the deployment by 28x in Delhi (area 573 mi2) with 28 sensors, compared to previous deployments (Xi'an: area 3898 mi2, 8 low-cost sensors). Further, the large longitudinal dataset we have been able to capture over 2 years, as compared to prior work that captured at most a few weeks of data, allows us to model long-term seasonal changes and train more complex neural network models that can adapt to seasonal and daily patterns. We build on prior work and model the pollution network in its entirety, with prediction models at each sensor location using data from nearby sensor locations. We model pollution at any location in Delhi, as measured by the concentration of fine particulate matter (PM2.5) in μgm−3, using historical data of up to 8 h from all the sensors in the network. We make this choice of building a fine-grained pollution sensing map over shorter timelines to leverage the primary advantage of low-cost sensors while overcoming the drawback of noise by aggregating numerous spatio-temporal measurements. By learning the variability of each of these noisy measurements through message-passing recurrent neural networks (MPRNN), which have the ability to model each sensor separately, we learn not only to separate the signal from the noise, but to build an accurate sensing network of low-cost sensors that achieves <10% root mean squared error (RMSE) in predicting up to one hour in advance over a fine-grained spatio-temporal grid, as compared to baseline modeling approaches that provide 30% RMSE. By using a sparse network of sensors, whose signals are shared through neural network embeddings, we learn to capture information from sources that affect the readings of nearby sensors (e.g., a factory) and ignore those that are heavily localized (e.g., a food cart). Such an accurate, fine-grained pollution sensing map (≤10% MAPE) is usable by policymakers in deciding which neighborhoods of the city need interventions to improve air quality and population health. To the best of our knowledge, we are the first to attempt to model a city-scale sensor network deployment with low-cost sensors augmenting high-quality government monitoring stations. With a city-scale sensor network of 60 sensors spread across Delhi (700 sq km), capturing spatio-temporal variations and constructing accurate pollution maps necessitates modeling each sensor separately. By increasing the scale and addressing the corresponding modeling challenges, our work has widespread implications for pollution sensing and its low-cost deployability.
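For reference, the two error criteria quoted throughout (RMSE and MAPE) can be computed as in this small sketch (numpy; array names are illustrative):

    import numpy as np

    def rmse(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

    def mape(y_true, y_pred):
        # Reported as a percentage; zero readings are masked to avoid division by zero.
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        mask = y_true > 0
        return float(np.mean(np.abs(y_pred[mask] - y_true[mask]) / y_true[mask]) * 100)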
Our data consist of PM2.5 concentrations averaged to the hour from the 28 low-cost sensors and the 32 government monitors, a total of 60 monitors, collected over a period of 24 months, from May 1, 2018, to May 1, 2020. We use the data until Oct 30, 2019, for training (75%) and hold out the remaining data (25%) for testing. We report two criteria: the RMSE and the mean absolute percentage error (MAPE). We evaluate our models on the data from the combined set of our 28 low-cost sensors and the 32 government monitors, as well as separately on each set. For each of these locations, we compare our model-based predictions with the ground-truth measurement of the pollution sensor. Overall, the MPRNN model with data imputed using STHM, together with the spline correction, provides a highly accurate estimate of the PM concentration level across all locations (ref Table 1). The best performing model is able to predict PM2.5 concentrations with an average RMSE of 10.1 μgm−3 and MAPE of 9.6% across all the locations and over the testing period. While estimating a spline per location provides the best predictive performance, we note that using an average spline across all observed locations only marginally increases the RMSE and MAPE errors. The average spline is computed after averaging the data over all the locations. Across all locations, the median RMSE and MAPE are 9.15 μgm−3 and 8.64%, respectively (ref Fig. 1). The best case values are 4.28 μgm−3 and 5.57%, respectively, and the worst case values are 24.1 μgm−3 and 19.64%, respectively. The location with the minimum MAPE is in Green Park, a very busy area of south Delhi, further validating the need for fine-grained pollution sensing in a large city like Delhi. Table 1 RMSE and MAPE of prediction of PM concentrations, averaged across all the sensor locations. Fig. 1: Prediction errors of PM2.5 during the test period (Nov 1, 2019–May 1, 2020) shown as the mean absolute percentage error (MAPE) of the ground truth and predicted PM2.5 concentration. In this period, the PM2.5 concentration values range between 0 and 1000 μgm−3, with the average value being ~130 μgm−3. a Bar plot comparing our methodology with other competing approaches. We note that modeling spatiotemporal interactions using a neural network such as MPRNN and accounting for intra-day periodic patterns in the form of spline corrections together make a big difference in the performance. b Distribution of MAPE for the best performing model (Per-Sensor Spline with STHM imputation + MPRNN) across all the locations, shown as a cumulative distribution function (CDF). c Prediction errors of the best performing model (MPRNN+Spline) at every monitoring location on the map. d Errors of the final prediction zoomed into the regions with the highest concentration of sensors (New Delhi and South Delhi). Spatial variations The 3-way cubic spline fit shows a common trend of baseline pollution rising steadily up to 8 am, then decreasing up to 4 pm and then increasing again until midnight. We note that this is the composite polynomial model of the PM concentrations in an average day (ref Fig. 2). The median error of this model is about 40 μgm−3 in each of the three windows, 12 am–8 am, 8 am–4 pm and 4 pm–12 am, and this is reduced to about 10 μgm−3 after the neural network model is fit on the residuals. Figure 2 and Supplementary Fig. 2 show the per-sensor splines and the average spline in detail.
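The per-slot cubic fits behind these baselines can be reproduced, for a single sensor, with a sketch along these lines (numpy polynomial fitting; the slot boundaries follow the 8-hour windows used in the paper, and all names are illustrative rather than the authors' code):

    import numpy as np

    SLOTS = [(0, 8), (8, 16), (16, 24)]   # 12am-8am, 8am-4pm, 4pm-12am

    def fit_daily_splines(hours, pm):
        # hours: hour-of-day (0-23) for each observation; pm: PM2.5 values at one sensor.
        # Returns one cubic polynomial (alpha, beta, kappa, nu) per 8-hour slot.
        hours, pm = np.asarray(hours), np.asarray(pm, float)
        fits = []
        for lo, hi in SLOTS:
            m = (hours >= lo) & (hours < hi)
            fits.append(np.poly1d(np.polyfit(hours[m], pm[m], deg=3)))
        return fits

    def baseline(fits, hour):
        # Evaluate the composite piecewise-cubic baseline at a given hour of day.
        for (lo, hi), p in zip(SLOTS, fits):
            if lo <= hour < hi:
                return float(p(hour))

Note that this fits independent cubic polynomials per slot, which is how the piecewise "spline" is described here, rather than a smoothing spline with continuity constraints across slot boundaries.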
Not only do the per-sensor splines vary widely across space; we also notice that the regions with significantly high spline residual errors, such as the sensors A838, E8E4, and 2E9C in Supplementary Fig. 2, are all located in central parts of Delhi with well-established commercial activity, namely Connaught Place, Safdarjung Enclave, and Lado Sarai, respectively. Further, in Supplementary Fig. 2, the outliers with significantly high residual error splines among the government monitoring stations are Patparganj DPCC, Punjabi Bagh DPCC, and DKSSR DPCC. While Patparganj is situated next to an industrial area, Punjabi Bagh is a well-known residential locality with established commercial activity centers, and DKSSR, short for Dr. Karni Singh Shooting Range, is a shooting range located on the outskirts of Delhi next to an interstate highway. The diversity of these splines across various geographical regions further indicates the need to model fine-grained pollution profiles in seemingly remote as well as central locations of Delhi. We also note that the average spline can serve as a reasonable bootstrap at locations where we do not have enough sensor data to begin with. Fig. 2: The interpretation of the spline correction, and its effect on the residual. The top two rows show the distribution of the residuals (in PM units of μg/m3) over space, before and after the spline correction. Three different splines were fitted over the residuals in three different time slots in the day. We observe that, for the most part, locations that exhibited high residual errors after the MPRNN fit (in the upper quantiles of the residual error distribution) continued to show high error (relative to other locations) even after spline correction, even though the magnitude of the residual does decrease. This phenomenon is partially explained by the high baseline values of the sensors with high residual errors, which is often coupled with high variance in measurement. a Slot 1 (12 AM–8 AM). b Slot 2 (8 AM–4 PM). c Slot 3 (4 PM–12 AM). d Composite cubic spline correction consisting of three splines fitted for three non-overlapping parts of the day: midnight to early morning (12 AM to 8 AM), midday (8 AM to 4 PM), and evening to midnight (4 PM to 12 AM). e Ground truth PM2.5 (blue), along with MPRNN prediction (green) and final prediction after spline correction (red) at one of our sensor locations in Chanakyapuri in New Delhi. f Ground truth PM2.5 (blue), along with MPRNN prediction (green) and final prediction after spline correction (red) at the CPCB monitor at Sirifort in South Delhi. Effect of network size and training data The more monitors we used in our hybrid model, the better was the final prediction performance. As Supplementary Fig. 3 shows, with only one monitor in the network, the predictive errors are about 35 and 20 μgm−3, respectively, for the low-cost sensor network and the government network. However, as we include data from more nodes in the network, the final prediction error drops sharply to about 15% and then gradually tails off at about 10%.
The error flattens out about 30 sensors, which is approximately the number of sensors of each type that we have in our experiment. We infer that having an even denser deployment likely adds little value to the predictive performance. Further, decreasing the amount of training data to train the model shows that at minimum, one year of data is required to capture the seasonal trends and achieve RMSE of almost 10% (Supplementary Table 3). The low MAPE and RMSE across all monitors in Delhi provided by our Per-Sensor Spline+MPRNN with STHM imputation model are significant as it means that our model can detect hazardous air quality with high precision. The RMSE error is significantly lower than the observed variance in PM2.5 concentrations in a day, making it useful for short-term and intraday analyses as well. The WHO air quality standards prescribe that PM2.5 levels should not exceed 5 and 15 μgm−3 at an annual and daily average levels, while the Indian Government air quality standards prescribe 40 and 60 μgm−3, respectively. We note that for the 60 sensors, Delhi has exceeded these prescribed levels 371 out of the 641 days on a daily level, across 2 years of our measurement. The 9.6 % MAPE error that we are able to achieve, corresponds to the ability to detect hazardous air quality as per Indian government standards with 93.5% precision and 90.8% recall. This further indicates that the low error rate we have obtained leads to an almost exact prediction of hazardous air quality. This enables citizen-driven sensing where pollution sensor readings can be crowdsourced, and effective policy interventions like clean energy policies that penalize construction sites that have PM2.5 levels more than 25% higher than the nearest monitoring center can be operationalized42. Specifically, the improvement in predictive power is achieved in specific pollution hotspots like bus stations, markets, etc. (Fig. 1). In addition, we can provide transparency of the overall average pollution of the city43 and contribute towards increasing the co-benefits of clean energy policies44,45. Since the data used to measure the model performance is new, it is important to understand the spatial variations and heterogeneity in measurements that underlies the sensor network. To further ensure that the improvement in model's prediction performance is better than the noise in the data, we performed extensive calibration of the sensors. For this, we leveraged the calibration performed in-house by the sensor manufacturer (Kaiterra46) (more info in Appendix) which confirms that re-calibration is not required47, and also perform validation by comparing our sensor readings with the readings provided by the nearest government pollution monitoring station. Supplementary Figure 5 shows the cross-calibration of the average pollution value reported by the 28 government monitors with the average value of the 18 sensors in our testbed in the locality of South Delhi. We observe that the sensors have been fairly well calibrated with the reference monitors and report a similar average value across the city despite individual sensor level and spatio-temporal variations. This provides confidence in the data generated from this pilot to be useful as a reference for pollution modeling and forecasting. Further, we also performed a nearest neighbor calibration where we compute temporal correlation of our sensor with the nearest government monitoring station of that sensor. 
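This nearest-neighbour check can be expressed as a short sketch (pandas/numpy; column and variable names are assumptions for illustration):

    import numpy as np
    import pandas as pd

    def nearest_monitor_correlation(sensor_series, monitor_series, sensor_locs, monitor_locs):
        # For each low-cost sensor, find the nearest government monitor (Euclidean
        # distance on lat/lon) and report the Pearson correlation of their hourly series.
        out = {}
        for s, (slat, slon) in sensor_locs.items():
            dists = {m: np.hypot(slat - mlat, slon - mlon)
                     for m, (mlat, mlon) in monitor_locs.items()}
            nearest = min(dists, key=dists.get)
            pair = pd.concat([sensor_series[s], monitor_series[nearest]], axis=1).dropna()
            out[s] = (nearest, pair.iloc[:, 0].corr(pair.iloc[:, 1]))
        return out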
Supplementary Table 4 shows that on average the correlation coefficients are >0.8, which indicates that there is no statistical significant difference between them on average (t-test, confidence level: 0.05, p-value: 0.0011). Further, in Supplementary Fig. 4, we see that when we order our sensors by the nearest neighboring government station, the cross-correlations between our sensors are correspondingly aligned, with high correlation between nearby sensors and low correlation between farther sensors. This further emphasizes the importance of the improvement in modeling as it significantly improves the prediction capabilities of a fine-grained sensor network, which can capture spatial variations in pollution of Delhi. The development of fine-grained pollution sensing maps at low-costs can further catalyze the deployment of such monitoring networks in other polluted cities, where the pollution networks are sparse. With citizens procuring, deploying, and modeling pollution of cities accurately, this paper provides a way forward for developing high-quality fine-grained pollution sensing maps. We model the spatio-temporal prediction problem as a graph prediction problem, where we predict a value at every node at a certain time using as input the historical values from neighboring nodes. In our setting, each sensor location v ∈ V is a node in an undirected graph. Assuming that air pollutants diffuse uniformly in all directions and exert their influence throughout our region of interest, in this case the greater Delhi region, we make the graph complete, where an edge exists between every pair of nodes. The end goal is to train a model that predicts at any node, the pollution level, measured in terms of the concentration of fine particulate matter PM2.5, at time t given one or more readings from neighboring locations prior to t. The first step is to interpolate the gaps in the data. We use a geostatistics model for this task, called the Spatio-temporal Hierarchical Model (STHM). Then we fit a cubic spline based on daily trends at each sensor location, and then finally train a Message-Passing Recurrent Neural Network (MPRNN) (Section 4.4) to predict residuals over the baseline. In order to account for the amount of influence based on the pairwise distances, we include the Euclidean distance between sensors as part of our feature embedding in our message-passing formulation. We test this model by predicting values at locations where sensors, and therefore ground truth information, are present, but the model is generalized enough to be used to predict at locations where there is no ground truth data available. If \(y_{v,t}\) is the reading of the sensor at location v, at timestamp t, and \({\hat{y}}_{v,t}\) is our corresponding prediction, the prediction model aims to minimize the mean absolute percentage loss: $${\rm{MAPE}} = \sum_v \sum_t \frac{|{\hat y}_{v,t} - y_{v,t}|}{y_{v,t}}$$ Our pollution forecasting model for estimating the PM2.5 particulate matter concentration across space and time consists of three important steps. Given the variations in data availability across our pollution sensors, the first step of our method uses a standard Spatio-Temporal Hierarchical Model (STHM) to estimate the missing data. Our STHM model is a standard statistical modeling framework from geostatistics that combines multiple sources of information, accommodates missing values, and computes predictions in both space and time. 
Based on daily variation patterns observed at each of the pollution sensors, the second step in our method estimates a three-way cubic spline at each sensor location, one for each disjoint 8 h interval in a 24 h period (12 am to 8 am, 8 am to 4 pm and 4 pm to 12 am), representing three different patterns in the PM2.5 variations. The cubic splines for each sensor represented a baseline level of PM2.5 concentration. The cubic splines may provide a good approximation to the overall average daily variations across sensors but do not capture short term spatio-temporal variations represented by the residual errors in the baseline. The final step of our method is to train a Message-Passing Recurrent Neural Network (MPRNN) across the pollution monitoring points to estimate the residual errors from neighboring sensors. We will briefly describe the characteristics of our data and then explain the cubic spline and MPRNN methodology in this section. We refer the reader to the supplementary text for a detailed description of the STHM model. The data used for the modeling the air pollution levels in Delhi was sourced from a combination of 32 local government monitors and a network of 28 low-cost sensors deployed by us in various locations of Delhi from May 2018 to May 2020. The average availability of each of these sensors are about 90 and 30% over the measured period, respectively. This disparity is attributed to a variety of factors such as disconnection for periodic necessary calibration, network outages and periodic servicing of sensors. The sensors are calibrated against the government sensors, by conducting a longitudinal comparison study by measuring in proximity to the location of the government monitoring centers. The locations and their summary statistics of the sensors by location is given by the Supplementary Tables 1 and 2, and are shown visually in the box plots in Supplementary Fig. 1. Cubic splines We observe that on a daily basis, depending on the time of the day and the location, there is a low-frequency component that makes up an approximate "baseline level" of PM concentration. Based on this observation, we fit a piecewise polynomial function, called a spline, to model this low-frequency component. We divided a single day into a number of epochs and fit a spline for each epoch. Prior to implementing the cubic splines, we observed that the residual errors from the MPRNN model exhibits different errors at different times in the day. We then proceeded to fit cubic splines based on the daily spatio-temporal patterns per sensor and per location. For example, if our prediction error follows a temporal pattern of say, higher prediction error in the morning, while lower in the afternoon, we can leverage this fitting separate splines for morning and afternoon to subtract out this component. The spline can be of any order, but given our residual error patterns, but we found that piecewise cubic spline works best. Suppose at time t and location v, the raw PM value is given by yv,t. 
Then the piecewise spline prediction of y for time period p is given by: $${\hat{y}}_{p}(v,t)={\alpha }_{v,p}{t}^{3}+{\beta }_{v,p}{t}^{2}+{\kappa }_{v,p}t+{\nu }_{v,p}$$ Note that the chosen parameters per sensor αv,p, βv,p, κv,p, νv,p, where p ∈ {"morning", "afternoon", "evening"}, depend on the patterns in our residual errors and are fit accordingly to minimize the root mean-squared residual error: $${\rm{RMSE}}(v)=\mathop{\sum}\limits_{t}\mathop{\sum}\limits_{p}\sqrt{{(y(v,t)-{\hat{y}}_{p}(v,t))}^{2}}$$ Message-passing recurrent neural network MPRNN, based on refs. 48,49, is a neural network architecture that is applied on a graph in order to predict values at each node in the graph. This approach enables us to incorporate spatial interactions between each pair of nodes as "messages" that are broadcast from every node to its neighbors. Each node has a modified version of a long short-term memory (LSTM) network that iterates between message-passing and the recurrent computations. Suppose yv,t is a quantity of interest at node v and time t, for which we would like to build a predictive model. Mathematically, we would like to learn a function \(\mathcal{F}\) such that \({y}_{v,t+1}=\mathcal{F}({v}_{1},{y}_{{v}_{1},t},{v}_{2},{y}_{{v}_{2},t},\ldots ;{v}_{j}\in \mathcal{V})\), where \(\mathcal{V}\) denotes the set of all the nodes in the graph. A recurrent neural network unit is assigned to each node in the graph, with each node v maintaining a hidden state hv,t at time t. Through a message-passing phase and a time-recurrent phase, our model infers the next hidden state, hv,t+1, from which the PM value at v is decoded. A message-passing operation allows one node to observe the hidden states of its neighboring nodes. The computation proceeds in five steps, as five layers of the neural network. In the first phase, the observation phase, the input observations \({Y}_{t}=\{{y}_{v,t}| v\in \mathcal{V}\}\) at time t are encoded into hv,t by the observation operation Ov. In the second and third phases, one or more iterations of messaging (M) and updating (U) operations are performed to propagate the observations in the graph. In the fourth phase, for each node, a time-recurrent operator Tv utilizing an LSTM unit takes as input the final hidden state hv,t and predicts the next hidden state hv,t+1. The final phase is the readout operation Rv, which decodes the hidden state to produce the output value to be predicted, \({\hat{y}}_{v,t+1}\). These five steps are shown below. The message function takes as input the hidden states of a pair of nodes v and n and the Euclidean distance between them, dv,n, since the influence of the pollution at a given location on the pollution at another location would depend on the distance between them. Hence, we include the distance in the embedding. $${h}_{v,t}={O}_{v}({h}_{v,t-1},{y}_{v,t})$$ $${m}_{v,t}=\mathop{\sum}\limits_{n\in V\setminus \{v\}}M({h}_{v,t},{h}_{n,t},{d}_{v,n})$$ $${h}_{v,t}=U({h}_{v,t},{m}_{v,t})$$ $${h}_{v,t+1}={T}_{v}({h}_{v,t})$$ $${\hat{y}}_{v,t+1}={R}_{v}({h}_{v,t+1})$$ For a selection of nodes \(\mathcal{W}\) in the graph, the components of the model \(\{{O}_{w},M,U,{T}_{w},{R}_{w}| w\in \mathcal{W}\}\) are defined. During inference, the states \({H}_{t}=\{{h}_{w,t}| w\in \mathcal{W}\}\) are maintained at each time step. The hidden state for each node is initialized randomly at t = 0 during training and evaluation, \({h}_{v,0} \sim \mathcal{N}(0,1)\).
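A minimal PyTorch sketch of one message-passing and recurrent step of this kind is given below. It follows the five operations above in spirit, with the 256-dimensional hidden size reported in Fig. 3, but the specific layer choices (e.g., a GRU cell for the update operation) are plausible assumptions rather than the authors' exact architecture:

    import torch
    import torch.nn as nn

    class MPRNNCell(nn.Module):
        def __init__(self, hidden=256):
            super().__init__()
            self.obs = nn.Linear(hidden + 1, hidden)        # O_v: encode y_{v,t}
            self.msg = nn.Linear(2 * hidden + 1, hidden)    # M: message from (h_v, h_n, d_vn)
            self.upd = nn.GRUCell(hidden, hidden)           # U: fold aggregated messages into h_v
            self.rec = nn.LSTMCell(hidden, hidden)          # T_v: time-recurrent update
            self.readout = nn.Linear(hidden, 1)             # R_v: decode PM2.5

        def forward(self, y, state, dist):
            # y: (N, 1) observations, state: (h, c) each (N, H), dist: (N, N) distances
            hx, cx = state
            h = torch.relu(self.obs(torch.cat([hx, y], dim=1)))
            N = h.size(0)
            hi = h.unsqueeze(1).expand(N, N, -1)            # h_v
            hj = h.unsqueeze(0).expand(N, N, -1)            # h_n
            m = torch.relu(self.msg(torch.cat([hi, hj, dist.unsqueeze(-1)], dim=-1)))
            m = m.sum(dim=1) - m.diagonal(dim1=0, dim2=1).t()   # sum over n != v
            h = self.upd(m, h)
            hx, cx = self.rec(h, (hx, cx))
            return self.readout(hx), (hx, cx)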
Training and validation
We used the data from May 1, 2018, to Nov 1, 2019, a period of 18 months, as the training period. The number of samples we had for training was 166,979 from our low-cost sensor network, and 371,806 from the government network, resulting in a total of 538,785 samples. The model was trained at each sensor location, using as input the data from all the other monitors except itself, over the entire training period. We used the Adam optimizer50 with a learning rate of 0.001, and ran the training for 30 epochs to ensure a robust and well-trained model. To validate the model, we used the remaining 6 months of data from Nov 1, 2019, to May 1, 2020. The number of ground truth samples available in this period was 20,408 and 91,493 in the low-cost network and government network, respectively, resulting in a total of 111,901 samples. However, only 12 out of the 28 low-cost sensors were operational in the testing phase, since many of them had not been serviced properly, partly owing to the COVID-19 pandemic. The testing error reported under Results (§2), therefore, shows the predictions tested at 12 low-cost sensor locations and 32 government monitors, a total of 44 locations combined. Further, to understand the implications of having less data available during training, we evaluated our model as shown in Supplementary Table 3 and found that with less than a year of training data, the model's performance decreases significantly because seasonal trends are not well captured. The MPRNN is implemented using the Deep Graph Library51 and PyTorch52 in Python. The model diagram is shown in Fig. 3.
Fig. 3: Message passing recurrent neural network for pollution monitoring in Delhi. a Network of air quality monitors in the entire greater Delhi region. b Model architecture, showing M sensor inputs feeding into the layers and producing a single real output, illustrated by zooming in on the selected region in (a). The computation goes from top to bottom. The green boxes represent input PM concentrations from a set of locations, the gray boxes the hidden linear transformation layers, with the numbers in the boxes representing the number of internal parameters to be learned, and the orange box shows the RNN with the LSTM cells. Here 256 is the embedding size of the hidden layer messages passed, which was chosen empirically based on performance. The final output is the single real value of PM concentration. The input to the RNN is the vector output of length 256 from the hidden layer. More details are in the supplementary text. c Sample model of a low-cost sensor. d Our experimental testbed of monitors, and the quality of the PM2.5 data obtained. We had to contend with frequent outages and communication issues that plagued our sensor network and affected data availability.
We contrast our combined model with two alternative modeling approaches in order to set a baseline to benchmark the MPRNN model performance. The first one is the STHM itself, a state-of-the-art spatio-temporal modeling methodology. When the STHM is used solely for the prediction, it performs poorly, as it does not model unknown non-linear spatial dependencies due to dispersion. The second baseline is an alternative neural network formulation that collects information from a specified number (K) of nearest neighbors to a location L, and feeds them into a trained recurrent neural network, to predict the value at L.
Unlike the MPRNN, this model does not account for explicit spatial influence between every pair of sensors, thus allowing us to see how a more simplified multi-variate non-linear model might perform. We call this model the k-Nearest Neighbor (k-NN) Spatial Neural Network. The data that supports the findings of this study comprises two parts—the PM2.5 data from the government monitors and the data collected from our low-cost sensor network. The former is public data and can be accessed here53. The data can also be provided by the authors upon request. The latter is third-party data and the authors are bound by a confidentiality agreement with Kaiterra, the makers of the low-cost sensors, and can only be made available for confidential peer review, if requested by reviewers, within the terms of the data use agreement and if compliant with ethical and legal requirements. All the relevant code can be obtained upon request from the corresponding author. The code is also available on GitHub: https://github.com/shivariyer/epod-nyu-delhi-pollution. Shaddick, G., Thomas, M., Mudu, P., Ruggeri, G. & Gumy, S. Half the world's population are exposed to increasing air pollution. NPJ Clim. Atmos. Sci. 3, 1–5 (2020). Rao, N. D., Kiesewetter, G., Min, J., Pachauri, S. & Wagner, F. Household contributions to and impacts from air pollution in India. Nat. Sustain. 4, 1–9 (2021). Geng, G. et al. Drivers of pm2. 5 air pollution deaths in china 2002–2017. Nat. Geosci. 14, 645–650 (2021). Liu, H.-Y., Schneider, P., Haugen, R. & Vogt, M. Performance assessment of a low-cost pm2. 5 sensor for a near four-month period in Oslo, Norway. Atmosphere 10, 41 (2019). Liu, X. et al. Low-cost sensors as an alternative for long-term air quality monitoring. Environ. Res. 185, 109438 (2020). Giordano, M. R. et al. From low-cost sensors to high-quality data: A summary of challenges and best practices for effectively calibrating low-cost particulate matter mass sensors. J. Aerosol Sci. 158, 105833 (2021). Tryner, J. et al. Design and testing of a low-cost sensor and sampling platform for indoor air quality. Building Environ. 206, 108398 (2021). Prakash, J. et al. Real-time source apportionment of fine particle inorganic and organic constituents at an urban site in Delhi city: An iot-based approach. Atmospheric Pollution Res. 12, 101206 (2021). Bi, J. et al. Publicly available low-cost sensor measurements for pm2.5 exposure modeling: Guidance for monitor deployment and data selection. Environ. Int. 158, 106897 (2022). Zusman, M. et al. Calibration of low-cost particulate matter sensors: Model development for a multi-city epidemiological study. Environ. Int. 134, 105329 (2020). Mahajan, S. & Kumar, P. Evaluation of low-cost sensors for quantitative personal exposure monitoring. Sustainable Cities Soc. 57, 102076 (2020). Spyropoulos, G. C., Nastos, P. T. & Moustris, K. P. Performance of aether low-cost sensor device for air pollution measurements in urban environments. accuracy evaluation applying the air quality index (aqi). Atmosphere 12, 1246 (2021). Chu, H.-J., Ali, M. Z. & He, Y.-C. Spatial calibration and pm 2.5 mapping of low-cost air quality sensors. Sci. Rep. 10, 1–11 (2020). Jiao, W. et al. Community air sensor network (cairsense) project: Evaluation of low-cost sensor performance in a suburban environment in the southeastern united states. Atmos. Meas. Tech. 9, 5281–5292 (2016). Morawska, L. et al. Applications of low-cost sensing technologies for air quality monitoring and exposure assessment: How far have they gone? 
Environ. Int. 116, 286–299 (2018). Stavroulas, I. et al. Field evaluation of low-cost pm sensors (purple air pa-ii) under variable urban air quality conditions, in Greece. Atmosphere 11, 926 (2020). Tancev, G. & Pascale, C. The relocation problem of field calibrated low-cost sensor systems in air quality monitoring: a sampling bias. Sensors 20, 6198 (2020). Kim, H. S. et al. Development of a daily pm 10 and pm 2.5 prediction system using a deep long short-term memory neural network model. Atmos. Chem. Phys. 19, 12935–12951 (2019). Kalajdjieski, J., Mirceva, G. & Kalajdziski, S. Attention models for pm 2.5 prediction. In 2020 IEEE/ACM International Conference on Big Data Computing, Applications and Technologies (BDCAT) 1–8 (IEEE, 2020). Lin, L., Chen, C.-Y., Yang, H.-Y., Xu, Z. & Fang, S.-H. Dynamic system approach for improved pm 2.5 prediction in Taiwan. IEEE Access 8, 210910–210921 (2020). Pérez, P., Trier, A. & Reyes, J. Prediction of pm2. 5 concentrations several hours in advance using neural networks in Santiago, Chile. Atmos. Environ. 34, 1189–1196 (2000). Song, L., Pang, S., Longley, I., Olivares, G. & Sarrafzadeh, A. Spatio-temporal pm 2.5 prediction by spatial data aided incremental support vector regression. In 2014 International Joint Conference on Neural Networks (ijcnn) 623–630 (IEEE, 2014). Wang, Y., Wang, H., Chang, S. & Avram, A. Prediction of daily pm 2.5 concentration in china using partial differential equations. PLoS One 13, e0197666 (2018). Qin, D. et al. A novel combined prediction scheme based on cnn and lstm for urban pm 2.5 concentration. IEEE Access 7, 20050–20059 (2019). Liu, T. et al. Seasonal impact of regional outdoor biomass burning on air pollution in three Indian cities: Delhi, Bengaluru, and Pune. Atmos. Environ. 172, 83–92 (2018). Chambliss, S. E. et al. Local- and regional-scale racial and ethnic disparities in air pollution determined by long-term mobile monitoring. Proc. Natl Acad. Sci. USA 118, e2109249118 (2021). Liang, Y. et al. Wildfire smoke impacts on indoor air quality assessed using crowdsourced data in California. Proc. Natl Acad. Sci. USA 118, e2106478118 (2021). Ferraro, P. J. & Agrawal, A. Synthesizing evidence in sustainability science through harmonized experiments: Community monitoring in common pool resources. Proc. Natl Acad. Sci. USA 118, e2106489118 (2021). Ludescher, J. et al. Network-based forecasting of climate phenomena. Proc. Natl Acad. Sci. USA 118, e1922872118 (2021). Clements, A. L. et al. Low-cost air quality monitoring tools: From research to practice (a workshop summary). Sensors 17, 2478 (2017). Lin, C. et al. Evaluation and calibration of aeroqual series 500 portable gas sensors for accurate measurement of ambient ozone and nitrogen dioxide. Atmos. Environ. 100, 111–116 (2015). Shusterman, A. A. et al. The Berkeley atmospheric co 2 observation network: Initial evaluation. Atmos. Chem. Phys. 16, 13449–13463 (2016). Moltchanov, S. et al. On the feasibility of measuring urban air pollution by wireless distributed sensor networks. Sci. Total Environ. 502, 537–547 (2015). Sun, L. et al. Development and application of a next generation air sensor network for the hong kong marathon 2015 air quality monitoring. Sensors 16, 211 (2016). Tsujita, W., Yoshino, A., Ishida, H. & Moriizumi, T. Gas sensor network for air-pollution monitoring. Sensors Actuators B: Chem. 110, 304–311 (2005). Gao, M., Cao, J. & Seto, E. A distributed network of low-cost continuous reading sensors to measure spatiotemporal variations of pm2. 
5 in Xi'an, China. Environ. Pollution 199, 56–65 (2015). Xie, X. et al. A review of urban air pollution monitoring and exposure assessment methods. ISPRS Int. J. Geo-Inform. 6, 389 (2017). Jerrett, M. et al. A review and evaluation of intraurban air pollution exposure models. J. Exposure Sci. Environ. Epidemiol. 15, 185 (2005). Yeh, C. et al. Using publicly available satellite imagery and deep learning to understand economic well-being in Africa. Nat. Commun. 11, 1–11 (2020). U.S. Environmental Protection Agency, Office of Air Quality Planning and Standards. Air Quality Assessment Division, Research Triangle Park, NC (2021). Technologies, K. Laser egg. kaiterra.com (2022). Harigovind, A. Dust management committee recommends air quality monitors at all large construction sites in Delhi. https://indianexpress.com/article/cities/delhi/dust-management-committee-recommends-air-quality-monitors-at-large-delhi-construction-sites-7437599/ (2021). Somvanshi, A. Delhi's air quality and number games. https://www.downtoearth.org.in/blog/air/delhi-s-air-quality-and-number-games-76214 (2021). Qian, H. et al. Air pollution reduction and climate co-benefits in china's industries. Nat. Sustain. 4, 417–425 (2021). Tibrewal, K. & Venkataraman, C. Climate co-benefits of air quality and clean energy policy in India. Nat. Sustain. 4, 305–313 (2021). Johnson, C. How kaiterra ensures that sensedge devices are accurate and correctly calibrated. https://learn.kaiterra.com/en/resources/how-sensedge-devices-are-accurate-and-correctly-calibrated (2022). Technologies, K. Does the laser egg need to be recalibrated? https://support.kaiterra.com/does-the-laser-egg-need-to-be-recalibrated (2022). Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O. & Dahl, G. E. Neural message passing for quantum chemistry. Proceedings of the 34th International Conference on Machine Learning (ICML). Vol. 70, 1263–1272 (2017). Iyer, S. R., An, U. & Subramanian, L. Forecasting sparse traffic congestion patterns using message-passing rnns. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 3772–3776 (2020). Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR). (2015). Wang, M. et al. Deep graph library: A graph-centric, highly-performant package for graph neural networks. Preprint at https://arxiv.org/abs/1909.01315 (2019). Paszke, A. et al. H. Advances in Neural Information Processing Systems 32 (eds. Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché- Buc, F., Fox, E., & Garnett, R.) 8024–8035 (Curran Associates, Inc., 2019). Central Pollution Control Board (CPCB). Central Control Room for Air Quality Management - All India. https://app.cpcbccr.com/ccr/#/caaqm-dashboard-all/caaqm-landing/caaqm-comparison-data (2022). The work done by the authors Shiva Iyer, Ananth Balashankar, and Lakshminarayanan Subramanian in this paper was supported by funding from industrial affiliates in the NYUWIRELESS research group (https://www.nyuwireless.com), that funded Shiva Iyer in part as well as the air quality sensors used in the study. Shiva was also funded in part by an NSF Grant (award number OAC-2004572) titled "A Data-informed Framework for the Representation of Sub-grid Scale Gravity Waves to Improve Climate Prediction". Mr. Balashankar is a Ph.D. student at New York University, and is also funded in part, by the Google Student Research Advising Program. 
We acknowledge our collaboration with Kaiterra for their efforts in the development and installation of the low-cost sensors. We acknowledge the data availability from CPCB on their public portal. We also acknowledge the contributions of Ulzee An, a former master's student, in writing code for older baseline models. Any opinions, findings and conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NYUWIRELESS or Kaiterra.
Department of Computer Science, New York University, New York, NY, USA: Shiva R. Iyer, Ananth Balashankar & Lakshminarayanan Subramanian
Swiss Data Science Center, ETH Zurich, Zurich, Switzerland: William H. Aeberhard
Columbia University, New York, NY, USA: Sujoy Bhattacharyya
Evidence for Policy Design (EPoD) at the Institute for Financial Management and Research (IFMR), New Delhi, India: Sujoy Bhattacharyya & Giuditta Rusconi
State Secretariat for Education, Research and Innovation (SERI), Bern, Switzerland: Giuditta Rusconi
Kai Air Monitoring Pvt Ltd, Gautam Buddha Nagar, UP, India: Lejo Jose & Nita Soans
Department of Economics, University of Chicago, Chicago, IL, USA: Anant Sudarshan
Department of Economics, Yale University, New Haven, CT, USA: Rohini Pande
S.I., A.S., R.P., and L.S. contributed to problem conceptualization and design. S.I., A.B., W.A., and L.S. contributed to the spatio-temporal models. S.I., A.B., and W.A. contributed to the code, data analysis, and visualizations. S.B. and G.R. contributed to the sensor network deployment and data gathering efforts in Delhi under the guidance of R.P. S.I., A.B., W.A., R.P., A.S., and L.S. helped in writing and editing various sections of the paper. Correspondence to Lakshminarayanan Subramanian. Prof. Subramanian declares no competing non-financial interests but the following competing financial interests: Prof. Subramanian is a co-founder of Entrupy Inc, Velai Inc, and Gaius Networks Inc and has served as a consultant for the World Bank and the Governance Lab. Dr. Subramanian reports that Velai Inc broadly works in the area of socio-economic predictive models. All other authors declare no competing interests. Iyer, S.R., Balashankar, A., Aeberhard, W.H. et al. Modeling fine-grained spatio-temporal pollution maps with low-cost sensors. npj Clim Atmos Sci 5, 76 (2022). https://doi.org/10.1038/s41612-022-00293-z
Proof of general state space similarity transformation to controllable canonical form
Given a state space model of the form
$$ \begin{align} \dot{x} &= A\,x + B\,u \\ y &= C\,x + D\,u \end{align} \tag{1} $$
(I think this would also apply to a discrete-time model) and assuming that this state space model is controllable, I would like to find a nonsingular similarity transform $z=T\,x$, which would transform the state space to the following model,
$$ \begin{align} \dot{z} &= \underbrace{T\,A\,T^{-1}}_{\bar{A}}\,z + \underbrace{T\,B}_{\bar{B}}\,u \\ y &= \underbrace{C\,T^{-1}}_{\bar{C}}\,z + \underbrace{D}_{\bar{D}}\,u \end{align} \tag{2} $$
such that it is in the controllable canonical form with
$$ \bar{A} = \begin{bmatrix} 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & \cdots & 0 & 1 \\ -a_n & -a_{n-1} & \cdots & -a_2 & -a_1 \end{bmatrix} \tag{3a} $$
$$ \bar{B} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \tag{3b} $$
When $A$ is in the Jordan canonical form, with Jordan blocks of size at most one by one (so no off-diagonal terms),
$$ A = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} \tag{4} $$
each eigenvalue has algebraic multiplicity of at most one. In the canonical form $(3a)$ each state can be seen as the integral of the next state, and the derivative of the last state is a linear combination of all states; therefore it can be shown that similarity transforms of the form
$$ T = \left[\begin{array}{c c} \alpha_1 \begin{pmatrix} 1 \\ \lambda_1 \\ \lambda_1^2 \\ \vdots \\ \lambda_1^{n-1} \end{pmatrix} & \alpha_2 \begin{pmatrix} 1 \\ \lambda_2 \\ \lambda_2^2 \\ \vdots \\ \lambda_2^{n-1} \end{pmatrix} & \cdots & \alpha_n \begin{pmatrix} 1 \\ \lambda_n \\ \lambda_n^2 \\ \vdots \\ \lambda_n^{n-1} \end{pmatrix} \end{array}\right] \tag{5} $$
would bring $(4)$ to $(3a)$.
The values for $\alpha_i$ can be solved for using $\bar{B}=T\,B$ and $(3b)$. When defining $B$ as
$$ B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} \tag{6} $$
this equality can be written as
$$ \begin{bmatrix} b_1 & b_2 & \cdots & b_n \\ \lambda_1\,b_1 & \lambda_2\,b_2 & \cdots & \lambda_n\,b_n \\ \lambda_1^2\,b_1 & \lambda_2^2\,b_2 & \cdots & \lambda_n^2\,b_n \\ \vdots & \vdots & \cdots & \vdots \\ \lambda_1^{n-1}\,b_1 & \lambda_2^{n-1}\,b_2 & \cdots & \lambda_n^{n-1}\,b_n \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \tag{7} $$
It can be noted that in this case the matrix in equation $(7)$ is the same as the transpose of the controllability matrix,
$$ \mathcal{C} = \begin{bmatrix}B & A\,B & A^2B & \cdots & A^{n-1}B\end{bmatrix} \tag{8} $$
so the solution to equation $(7)$ can also be written as
$$ \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix} = \mathcal{C}^{-T} \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \tag{9a} $$
$$ \vec{\alpha} = \mathcal{C}^{-T} \bar{B} \tag{9b} $$
The transpose of $T$ can, similarly to equation $(7)$, also be written as
$$ T^T = \begin{bmatrix}\vec{\alpha} & A\,\vec{\alpha} & A^2\vec{\alpha} & \cdots & A^{n-1}\vec{\alpha}\end{bmatrix} \tag{10} $$
or, if we define a new row vector $\vec{v}$ as the transpose of $\vec{\alpha}$ and substitute $\vec{\alpha}$ with the right hand side of equation $(9b)$,
$$ \vec{v} = \begin{bmatrix}0 & \cdots & 0 & 1\end{bmatrix} \mathcal{C}^{-1} \tag{11a} $$
$$ T = \begin{bmatrix} \vec{v} \\ \vec{v}\, A \\ \vec{v}\, A^2 \\ \vdots \\ \vec{v}\, A^{n-1} \end{bmatrix} \tag{11b} $$
From this expression it can also be seen that if $\mathcal{C}$ is not full-rank, then such a transformation would not exist. After some testing it seems that this expression also holds for any $A$ and $B$, as long as $\mathcal{C}$ is full-rank/invertible, but in that case equation $(10)$ should contain $A^T$ instead of $A$ (when using equation $(4)$, $A=A^T$ anyway). However, I do not know how I could go about proving that this is always the case. Also a small side question: How could one define this transformation when $B$ is of size $n$ by $m$, with $m>1$? I suspect that in the controllable canonical form $\bar{B}$ should be of the form
$$ \bar{B} = \begin{bmatrix} 0 & \cdots & 0 \\ \vdots & \cdots & \vdots \\ 0 & \cdots & 0 \\ 1 & \cdots & 1 \end{bmatrix} \tag{12} $$
linear-transformations control-theory linear-control
Kwin van der Veen
$$\mathcal{C}^{-1}=\left[\matrix{X\\ \hline q}\right]$$ This property ensures that $$qA^{i-1}b=\begin{cases}0,\quad i=1,\cdots,n-1\\ 1,\quad i=n \end{cases}$$ which can be used along with the Cayley–Hamilton theorem to prove that $$Tb=\left[\matrix{qb \\ \vdots \\ qA^{n-2}b \\qA^{n-1}b}\right]=\left[\matrix{0 \\ \vdots \\ 0 \\1}\right]=\bar{B}$$ $$TA=\left[\matrix{qA \\ \vdots \\ qA^{n-1} \\qA^{n}}\right]=\left[\matrix{qA \\ \vdots \\ qA^{n-1} \\-q\sum_{i=1}^{n}a_{n-i+1}A^{i-1}}\right]=\left[\matrix{0 & 1 & 0& \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\0 & 0 & 0 & \cdots & 1\\-a_n & -a_{n-1} & -a_{n-2}& \cdots & -a_1}\right]\left[\matrix{q \\ qA\\ \vdots \\ qA^{n-2} \\qA^{n-1}}\right]=\bar{A}T$$ For the multiple-input case $B\in\mathbb{R}^{n\times m}$ the situation is more complex. The calculation involves the so-called controllability indices $\mu_1,\mu_2,\cdots,\mu_m$, and $\bar{B}$ is of the form $$\bar{B}=\left[\matrix{0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\0 & 0 & 0 & \cdots & 0\\ 1 & * & * & \cdots & *\\\hline 0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\0 & 0 & 0 & \cdots & 0\\ 0 & 1 & * & \cdots & *\\\hline \vdots & \vdots & \vdots & \ddots & \vdots\\\hline 0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\0 & 0 & 0 & \cdots & 0\\ 0 & 0 & 0 & \cdots & 1}\right] $$ where $*$ denotes a not necessarily zero element. The $m$ nonzero rows of $\bar{B}$ are the rows $\mu_1,\mu_1+\mu_2,\cdots,\mu_1+\mu_2+\cdots+\mu_m$. For more details I suggest you consult the book Antsaklis and Michel, "A linear systems primer". RTJ
I think I understand the proof for $\bar{B}$, but I will have to take a closer look for $\bar{A}$. But how would you calculate $\mathcal{C}^{-1}$ when $m>1$, since then $\mathcal{C}$ is not a square matrix, or do you then need to use a different definition for $\mathcal{C}$? – Kwin van der Veen
@fibonatic You extract a new $n\times n$ matrix $\bar{\mathcal{C}}$ from $\mathcal{C}$, which is of dimension $n\times nm$, through a proper selection of $n$ linearly independent columns of $\mathcal{C}$. Its form is $\bar{\mathcal{C}}=[\matrix{b_1 & Ab_1 & \cdots & A^{\mu_1-1}b_1 & b_2 & Ab_2&\cdots & A^{\mu_2-1}b_2 & \cdots & A^{\mu_m-1}b_m}]$ with $b_1,\cdots,b_m$ the columns of $B$. – RTJ
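As a quick numerical sanity check of the single-input construction above (an illustration added here, not part of the original exchange), the transformation built from the last row of $\mathcal{C}^{-1}$ can be verified in a few lines; variable names are ours, and a randomly drawn system is assumed, which is controllable with probability one.

```python
# Numerical check: T = [q; qA; ...; qA^{n-1}], with q the last row of C^{-1},
# brings a random controllable single-input pair (A, b) to controllable
# canonical form. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
b = rng.standard_normal((n, 1))

# Controllability matrix C = [b, Ab, ..., A^{n-1} b]
C = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
q = np.linalg.inv(C)[-1, :]                      # last row of C^{-1}

T = np.vstack([q @ np.linalg.matrix_power(A, k) for k in range(n)])
A_bar = T @ A @ np.linalg.inv(T)
B_bar = T @ b

np.set_printoptions(precision=3, suppress=True)
print(A_bar)            # ones on the superdiagonal, -a_i in the last row
print(B_bar.ravel())    # approximately [0, 0, ..., 1]
```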
Environmental Science and Pollution Research
Endocrine disruptors in bottled mineral water: total estrogenic burden and migration from plastic bottles
Martin Wagner · Jörg Oehlmann
AREA 6 • PERSISTENT ORGANIC POLLUTANTS • RESEARCH ARTICLE
Background, aim, and scope Food consumption is an important route of human exposure to endocrine-disrupting chemicals. So far, this has been demonstrated by exposure modeling or analytical identification of single substances in foodstuff (e.g., phthalates) and human body fluids (e.g., urine and blood). Since the research in this field is focused on a few chemicals (and thus misses mixture effects), the overall contamination of edibles with xenohormones is largely unknown. The aim of this study was to assess the integrated estrogenic burden of bottled mineral water as model foodstuff and to characterize the potential sources of the estrogenic contamination. Materials, methods, and results In the present study, we analyzed commercially available mineral water in an in vitro system with the human estrogen receptor alpha and detected estrogenic contamination in 60% of all samples with a maximum activity equivalent to 75.2 ng/l of the natural sex hormone 17β-estradiol. Furthermore, breeding of the molluskan model Potamopyrgus antipodarum in water bottles made of glass and plastic [polyethylene terephthalate (PET)] resulted in an increased reproductive output of snails cultured in PET bottles. This provides first evidence that substances leaching from plastic food packaging materials act as functional estrogens in vivo. Discussion and conclusions Our results demonstrate a widespread contamination of mineral water with xenoestrogens that partly originates from compounds leaching from the plastic packaging material. These substances possess potent estrogenic activity in vivo in a molluskan sentinel. Overall, the results indicate that a broader range of foodstuff may be contaminated with endocrine disruptors when packed in plastics.
Keywords: Endocrine disrupting chemicals · Estradiol equivalents · Human exposure · In vitro effects · In vivo effects · Mineral water · Plastic bottles · Plastic packaging · Polyethylene terephthalate · Potamopyrgus antipodarum · Yeast estrogen screen · Xenoestrogens
Responsible editor: Markus Hecker
1 Background, aim, and scope
With the publication of Theo Colborn's scientific best-seller Our stolen future (Colborn et al. 1996), endocrine disruption became a public, political, and scientific issue. Since then, the list of suspected endocrine disrupting chemicals (EDCs) has been steadily growing, and the research in this field has made substantial progress (Hotchkiss et al. 2008). However, the causality between the exposure to EDCs and adverse human health effects is still controversially discussed (Safe 2000, 2005; Sharpe 2003; Waring and Harris 2005) due to the multifactorial etiology of hormone-related diseases, although the evidence for a causal link between exposure to xenohormones and developmental as well as reproductive disorders is strengthening (Sharpe 2003). For instance, in utero exposure to phthalate plasticizers has been shown to be associated with a decreased anogenital distance in male infants, indicating undervirilization induced by environmental levels of these endocrine disruptors (Swan et al. 2005). Vice versa, phthalate exposure of girls has been claimed to correlate with an earlier onset of puberty (Colón et al. 2000), an effect that has been experimentally verified in mice in the case of the plastic component bisphenol A as well (Howdeshell et al. 1999).
Recently, the debate about endocrine disruption has been heated up by findings that some EDCs may exhibit epigenetic transgenerational effects (Anway et al. 2005). Though many endocrine disruptors are ubiquitous in the environment and humans are known to be contaminated with a wide range of compounds (Damstra et al. 2002), exact routes of human exposure remain largely unknown (Damstra 2003; Schettler 2006; Sharpe 2003). Apparently there are various sources and pathways of xenohormone uptake: inhalation (i.e., from indoor air), dermal absorption (i.e., from personal care products), and ingestion of food. The contamination of foodstuff by production-related compounds has been documented analytically. Nonylphenols, as degradation products of commercial and industrial surfactants, for example, are identified ubiquitously in a broad variety of nourishments (Guenther et al. 2002). Another source of xenobiotics in foodstuff is rarely taken into account when dealing with endocrine disruption: substances migrating from packaging material into edibles (Lau and Wong 2000). In order to optimize the properties of packaging materials (i.e., durability, elasticity, color), a variety of additives, such as stabilizers, antioxidants, coupling agents, and pigments, is used in the formulation. Especially additives from plastics (so-called plasticizers) are known to leach out of the packaging and consequently accumulate in the foodstuff (Biles et al. 1998; Casajuana and Lacorte 2003; Fankhauser-Noti et al. 2006; Mcneal et al. 2000; Zygoura et al. 2005). Given the fact that some of these compounds are known EDCs (i.e., bisphenol A, vom Saal and Hughes 2005), we hypothesize that the migration of substances from packaging material into foodstuff may contribute to human exposure with xenohormones. In the current study, bottled mineral water serves as a model foodstuff because it is a simple matrix and it does not contain endogenous hormones, like for example dairy products. Moreover, consumption of mineral water is increasing worldwide (Montuori et al. 2008). On the German market, mineral water is available in two major sorts of packaging material: glass and PET (PETE, polyethylene terephthalate, resin identification code 1) bottles. Moreover, some brands of mineral water are sold in a packaging called Tetra Pak (Tetra Brick) although only to a minor extent. These paperboard boxes are coated with an inner plastic film and are more commonly used for packing milk and fruit juices. 2 Materials and methods 2.1 Samples Twenty brands of mineral water (coded as A–O, nine bottled in glass and plastic each, two bottled in Tetra Pak) from different price segments were chosen either because of their high market shares or ratio of glass/PET use in Germany. The samples included mineral water from four producers (A–D) that were obtained both in glass and in PET bottles. For each brand, six mineral water bottles were purchased in local shops and stored under consumer relevant conditions (dark, 4°C before analysis). Mineral water samples were taken from three bottles per brand and tested directly with the yeast estrogen screen (YES) in three independent experiments. A representative subset of four brands of glass bottles and six brands of plastic bottles (brands A–E) was chosen for the reproduction test with Potamopyrgus antipodarum (three bottles per brand). 
2.2 Yeast estrogen screen The yeast strain contains the stably transfected human estrogen receptor alpha (hERα) gene and an expression plasmid containing the reporter gene lacZ encoding β-galactosidase under the control of estrogen response elements (ERE). Upon receptor activation and ERE binding, β-galactosidase is expressed. The β-galactosidase activity is measured as the change in absorbance at 540 nm caused by cleavage of the chromogenic substrate chlorophenol red-β-d-galactopyranoside (CPRG). Assay procedure and data analysis were conducted as described previously (Routledge and Sumpter 1996; Rutishauser et al. 2004), with several modifications to test water samples directly and to accelerate the sample throughput. Minimal medium was prepared by supplementing ultrapure water with 0.67% w/v yeast nitrogen base without amino acids, 2% w/v d-(+)-glucose, and the appropriate amino acids. Yeast cultures were grown overnight to log-phase. Seventy-five microliters water sample were added to a 96-well microtiter plate in eight replicates. Preliminary experiments indicated that laboratory tap water was least contaminated and hence served as a negative control (eight replicates on each plate). Fivefold minimal medium was supplemented with 100 µM copper(II)sulfate, 0.67 mg/ml ampicillin, and streptomycin. Twenty-five microliters medium (containing 1% v/v ethanol) were added to each well. A serial dilution of 17β-estradiol in fivefold medium (in 1% v/v ethanol, 3 pM–100 nM final concentration) served as a positive control (eight replicates per concentration in each experiment). Yeast cells from the log-phase culture were diluted 1:5 in fresh minimal medium, and 20 µl of the cell suspension were added to each well. For blank values (eight replicates on one plate) medium was added instead of cells. The microtiter plates were sealed with a gas permeable membrane (Breathe-Easy, Diversified Biotech, Boston, MA, USA) to avoid cross contamination and were incubated for 24 h (30°C, 750 rpm). Relative cell density was determined by measuring optical density at 595 nm. For β-galactosidase assay, buffer Z was supplemented with 40% w/v CPRG, 0.25% v/v β-mercaptoethanol, and 170 U/ml lyticase. One hundred microliters were added to each well. Optical density at 540 nm was determined in 30-min intervals over a period of 4 h. Optical densities were corrected according to blank values and relative cell density. For each time of measurement, dose–response relationship for 17β-estradiol was calculated using a four-parameter logistic function (Prism 5.0, GraphPad Software Inc., San Diego, CA, USA). To assure comparability of independent experiments, only those measurements were considered whose half maximal response (EC50) was next to 1 × 10−10 M 17β-estradiol (range, 9 × 10−11 to 2 × 10−10 M) with a correlation coefficient r 2 >0.9. Estradiol equivalents (EEQ) of the water samples were calculated by inserting the corrected optical densities (corrected OD) in the inversion of the four parameter logistic function (Eq. 1) fitted with the curve parameters obtained from the appropriate positive control. EEQ values of the samples were corrected for background values in the negative control (1.28 ± 0.18 ng/l EEQ, n = 226) and dilution factor (1.6). 
$$\log \text{EEQ} = \log \text{EC}_{50} - \frac{\log\left(\dfrac{\text{top}-\text{bottom}}{\text{corrected OD}-\text{bottom}} - 1\right)}{\text{hillslope}} \tag{1}$$
2.3 Reproduction test with Potamopyrgus antipodarum
The test was conducted with the following specifications: All bottles were rinsed three times with ultrapure water and filled with 700 ml defined culturing water (pH 8.0 ± 0.5, conductivity 770 ± 100 µS/cm). One hundred individuals of P. antipodarum (from a laboratory stock consisting exclusively of parthenogenetic females, see Schmitt et al. 2008) were inserted in each bottle. Borosilicate Erlenmeyer flasks with 700 ml culturing water served as a negative solvent control (0.0014% v/v ethanol, n = 3); 17α-ethinylestradiol (EE2, CAS 57-63-6) was used as positive control at a nominal concentration of 25 ng/l (in 0.0014% v/v ethanol, n = 3). The test was conducted under defined conditions (15 ± 1°C, constant aeration, 16/8 h light/dark rhythm, random placement of replicates). P. antipodarum were fed every 4 to 5 days with 0.2 mg TetraPhyll® per replicate. At the same time, all vessels were cleaned. A maximum mortality of 5% was observed at the end of the experiment. In each replicate, 20 mudsnails were investigated for parthenogenetic production of embryos after 14, 28, and 56 days following a relaxation in 2.5% w/v magnesium chloride as reported previously (Duft et al. 2003b; Jobling et al. 2004). The total number of embryos per female after 56 days exposure proved to be a robust parameter for assessing endocrine disruptive effects in vivo.
2.4 Statistical analysis
Data analysis was performed using GraphPad Prism® 5.0 (GraphPad Software, Inc., San Diego, USA). Nonparametric Mann–Whitney tests (two-tailed) were applied to compare the medians of data sets. In case of the YES, data outliers were detected using Grubb's test (p < 0.05) and excluded. All presented data comprise mean ± standard error of the mean (SEM).
Screening of mineral water samples with the YES revealed a significantly elevated estrogenic activity in 12 of 20 brands (Fig. 1). Results were consistent for the three independent replicates per brand as well as for the three independent experiments performed. Average estrogenic potencies of the individual brands expressed in concentrations equivalent to 17β-estradiol (EEQ) ranged from below quantification limit (seven brands) to a maximum of 75.2 ± 5.95 ng/l EEQ (brand C–P, n = 67). The calculated average estrogenic burden of all samples was 18.0 ± 0.80 ng/l EEQ (n = 1363). Furthermore, we detected a significantly increased hormonal activity in 33% of all mineral water samples bottled in glass (three of nine brands). Compared to that, 78% of the waters from PET bottles (seven of nine brands) and both samples bottled in Tetra Pak were estrogen positive.
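For illustration, the EEQ calculation described in the yeast estrogen screen section (Eq. 1) can be sketched in a few lines. This is not the original analysis script; the curve parameters below are placeholders rather than fitted values, and the function name is hypothetical.

```python
# Sketch of converting blank- and cell-density-corrected absorbances into
# estradiol equivalents (EEQ) by inverting the four-parameter logistic fit
# of the 17beta-estradiol standard curve (Eq. 1). Placeholder parameters only.
import numpy as np

def eeq_from_od(corrected_od, top, bottom, log_ec50, hillslope):
    """Invert the four-parameter logistic standard curve (Eq. 1);
    log_ec50 is the base-10 log of the half-maximal concentration."""
    ratio = (top - bottom) / (corrected_od - bottom) - 1.0
    return 10.0 ** (log_ec50 - np.log10(ratio) / hillslope)

# Hypothetical corrected OD540 values and placeholder curve parameters
# (EC50 close to 1e-10 M 17beta-estradiol, as required for a valid run).
od = np.array([0.45, 0.80, 1.10])
print(eeq_from_od(od, top=1.6, bottom=0.2, log_ec50=-10.0, hillslope=1.0))
# The resulting values are subsequently corrected for the negative-control
# background and the dilution factor of 1.6, as described above.
```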
Fig. 1 Estrogenic potencies of mineral water expressed as estradiol equivalent concentrations (EEQ) measured with the yeast estrogen screen. Mineral water from three individual bottles of several brands (A–O) was tested in three independent experiments (each sample in eight replicates). Negative control (NC), n = 226; water samples, n = 65 to 75. b.q.l. below quantification limit, double stars p < 0.01 and triple stars p < 0.001 as determined by Mann–Whitney test
To evaluate the influence of the packaging material directly, we analyzed four mineral waters that originated from the same source (A–D) but were bottled in both glass and plastic. Apart from source D, mineral waters purchased in glass bottles were less estrogenic than the corresponding samples in PET bottles (Table 1). Water produced by spring B is available in three different plastic bottles (B-PC/M/N). Estrogenic activity was not detectable in samples taken from glass bottles (B-G) and one sort of PET bottle (B-PM) that is returnable and is cleaned and filled several times. In contrast, the same water is estrogenic when supplied in one-way containers (B-PC/N). The same phenomenon is observed in case of spring D: Again, estrogenic activity in water from multi-use PET bottles (D-P) is not elevated compared to samples from the same bottler purchased in glass packaging (D-G).
Table 1 Properties of the mineral water brands tested in the yeast estrogen screen: bottle volume (l), reusability, and estrogenic activity, EEQ (ng/l); b.q.l. = below quantification limit
P. antipodarum responds highly sensitively to estrogens: After 56 days of exposure to 25 ng/l ethinylestradiol (positive control), the reproductive output of mudsnails (embryos per female) was more than doubled (211 ± 12.6%) compared to the negative control group (100 ± 11.6%). Culturing of P. antipodarum in water bottles filled with defined culturing water led to a significant increase of reproduction (139.4 ± 13.9% to 222 ± 12.9%) in all PET brands of vessels (Fig. 2). In contrast, embryo production by mudsnails cultured in glass mineral water bottles was slightly but not significantly enhanced (108.11 ± 14.3% to 131.3 ± 14.5%). Reproduction of P. antipodarum bred in two brands of plastic bottles (B-PM and E-P, see Fig. 2) increased only moderately to 140%. Interestingly, we also did not detect estrogenic activity in mineral water from the multi-use PET bottles of brand B (B-PM, see Fig. 1). Vice versa, reproduction of P. antipodarum in brand D PET vessels (D-P, also reusable) increased significantly to 220 ± 12.9% (see Fig. 2), whereas in the YES, the mineral water itself (D-P, see Fig. 1) did not contain higher estrogenicity when compared to the corresponding glass bottles (D-G).
Fig. 2 Number of embryos produced by Potamopyrgus antipodarum after a 56-day period of culturing in glass and plastic bottles (three replicates per brand with 20 snails each). Negative control (NC), positive control (EE2), and samples, n = 60. Single star p < 0.05, double star p < 0.01, and triple stars p < 0.001 as determined by Mann–Whitney test
Yet, only limited data exists on the total estrogenic burden of edibles, integrating known and unidentified EDCs as well as potential mixture effects. Therefore it seems justifiable to put the estrogenic potencies measured in mineral water in context with endogenous estrogens found in food and beverages. Hartmann et al. (1998) proposed that dairy products are the main source for steroidal estrogens and calculated a total daily intake of 80–100 ng estrogens per day for adults. Based on our data, a theoretical daily consumption of 3 l mineral water (drinking water required to maintain hydration, Howard and Bartram 2003) would result in a mean total intake of 54 ng EEQ per day. In a worst case scenario (3 l of brand C-P), the total daily intake would increase to 226 ng EEQ per day, exceeding the intake of estrogens naturally found in food (Hartmann et al. 1998) by more than 100%. In a more recent study, Courant et al. (2007) analytically determined concentrations of 23 ng/l 17ß-estradiol in milk. The concentration of natural estrogen in milk is comparable to the mean hormonal potency we measured in 20 brands of mineral water (18 ng/l EEQ) and three times lower than the maximal EEQ detected in one brand of water (75 ng/l EEQ). Therefore, consumption of mineral water results in a human exposure to xenoestrogens with at least the same hormonal potency as steroidal estrogens naturally occurring in food. Few authors utilized the YES or other in vitro assays to assess the total estrogenicity of beverages or foodstuff. Klinge et al. (2003) detected a maximum 84 ng/l EEQ in red wine. Promberger et al. (2001) calculated 23–41 ng/l EEQ for beer, a result that was confirmed by Takamura-Enya et al. (2003), who detected an estrogenic potency of approximately 30 ng/l EEQ in beer; extracts from soy based food (miso, tofu, and soy sauce) and coffee also contained EEQ in low nanogram per liter range. The range of estrogenic burden we found in several brands of mineral water is comparable to the hormonal activity of wine, beer, and soy products, detected with the same in vitro assay. Again, the distinction is that the abovementioned studies confirmed naturally occurring phytoestrogens as an endogenous source of estrogenicity, whereas mineral water does not contain phytoestrogens. Using a well-established in vitro assay, we provide first quantitative data on the estrogenic burden of commercially available mineral water. The high abundance (60% of all samples) and potency (mean of 18 ng/l EEQ) of estrogenicity clearly demonstrates that food obviously lacking endogenous estrogens significantly contributes to human exposure with estrogenic compounds. In contrast to estrogens naturally occurring in foodstuff and beverages, the sources of estrogens in water must be exogenous. From the in vitro data shown in this study, we conclude that there are three sources for the estrogenic contamination of mineral water: First of all, the water may be estrogenic by itself, implying that the untreated groundwater from the spring contains substances with hormonal potency. So far, there is no evidence for the presence of intrinsic estrogenicity in water. Another source of hormonal activity in groundwater may be the reflux of synthetic estrogens like 17α-ethinylestradiol and other pharmaceuticals from wastewater discharge. Although shown for surface water (Cargouet et al. 2004), no clear evidence for the entry of 17α-ethinylestradiol in groundwater is available up to now. 
A second source of the estrogenic activity of mineral water is the production process. Especially with regard to the water samples from springs A and C (see Fig. 1), which contained a conspicuously high estrogenic burden independent of the packaging material, a production-related contamination with xenoestrogens seems probable. The presence of several phthalate plasticizers in citrus essential oils, for example, was attributed to new plastic components used in the production (Di Bella et al. 2001). Another source might be residual detergents and disinfectants used for cleaning the filling system. Guenther et al. (2002) detected estrogenic nonylphenols in a broad range of foodstuff and concluded that part of it might originate from nonionic surfactants used in cleaning agents. As a third source of xenoestrogens in mineral water, we propose the migration of EDCs from the packaging material. The analysis of data according to the packaging material (Fig. 3a) demonstrates that the estrogenic contamination of mineral water bottled in plastic (PET and Tetra Pak) is significantly higher compared to that of water bottled in glass (p < 0.001). This implies an influence of the packaging material: Substances with estrogenic potency leaching from the plastic packaging could contribute to the hormonal burden of mineral water reported in this study. With regard to the four mineral waters that originated from the same source but were purchased in glass and plastic (A–D), the lower estrogenicity of the appropriate glass bottled waters (apart from spring D, see Fig. 1) supports our hypothesis of estrogenic contamination by the plastic packaging. The estrogenic burden of water purchased in Tetra Pak has to be interpreted tentatively since only two brands were examined due to low market shares. Again, higher contamination compared to samples from glass bottles (see Fig. 3 a, p < 0.001) could be attributed to the migration of EDCs from the inner lining of the Tetra Pak packaging, which consists of a polyethylene plastic film. Estrogenic potencies of mineral water in vitro (a) and reproduction of Potamopyrgus antipodarum (b). Data were pooled according to packaging material. a Estrogenic potencies (EEQ) measured with the yeast estrogen screen. Negative control (NC), n = 226; glass, n = 610; PET, n = 620; Tetra Pak, n = 133. b Number of embryos produced by Potamopyrgus antipodarum after a 56-day period of culturing in glass and plastic bottles. Negative control (NC) and positive control (EE2), n = 60; glass, n = 240; PET, n = 360. Triple stars p < 0.001 as determined by Mann–Whitney test To exclude the influence of the first two sources of xenoestrogens in mineral water (contamination by spring or production) and to exclusively characterize the estrogenic potency emerging from the packaging material, we conducted a reproduction test with the New Zealand mudsnail P. antipodarum. In this study, the prosobranch snail acts as a sentinel for xenoestrogens. In the current experiment, this is documented by the positive control with the synthetic estrogen 17α-ethinylestradiol (EE2). After 56 days of exposure to 25 ng/l EE2, the reproduction of P. antipodarum, expressed as number of embryos produced per female, was more than doubled compared to control animals (see Fig. 2). Other studies emphasize that the sensitivity of P. antipodarum is not limited to EE2 but applies for a wider range of EDCs: Jobling et al. (2004) observed effects of EE2, bisphenol A, and octylphenol on P. 
antipodarum that were very similar to the ones shown in this study. Duft et al. (2003a, b) provide data on xenoestrogens (bisphenol A, octylphenol, and nonylphenol) as well as on the xenoandrogens triphenyltin and tributyltin. Although the mechanism, through which EDCs act on mollusks, is not yet elucidated (Köhler et al. 2007), many gastropod species are known to be susceptible to EDC exposure (Oehlmann et al. 2006; Oehlmann et al. 2007). In the current experiment, breeding of P. antipodarum in several brands of glass bottles resulted in a slightly enhanced production of embryos compared to the negative control group (see Fig. 2), which consisted of inert Borosilicate glass vessels. Since this difference is not statistically significant, there is little evidence for the occurrence of xenoestrogens migrating from the glass material into the culturing water. In contrast, reproductive patterns of specimens kept in PET bottles changed distinctly during the test period: Females cultured in four brands of plastic bottles produced approximately twice as much embryos compared to the negative control (90–120%, see Fig. 2). Again, analysis of data according to the packaging material reveals a significantly enhanced progeny of P. antipodarum from the PET group compared to specimens from the glass group and negative control (p < 0.001, Fig. 3b). Taking into account that the same culturing water was filled in all vessels at the beginning of the experiment, it is obvious that the observed effects can only be attributed to xenoestrogen leaching from these plastic bottles. Moreover, the compounds released by the PET material were potent to trigger estrogenic effects in vivo similar to those of EE2 at a concentration of 25 ng/l. A similar observation was made by Howdeshell et al. (2003): Uterine weight of prepubertal female mice housed in cages made of polycarbonate increased by 16% compared to mice from polypropylene cages. Although this estrogenic effect was not statistically significant, it was linked to the exposure to bisphenol A leaching from polycarbonate cages. In contrast to polycarbonates, PET is believed to be free from bisphenol A. Still, Toyo'oka and Oshige (2000) detected 3–10 ng/l bisphenol A in several brands of mineral water from PET bottles but did not confirm its origin. These results were not confirmed by Shao et al. (2005), who could not detect bisphenol A in different beverages from plastic bottles (material not stated) including mineral water. Nonetheless, estrogenic nonylphenols were detected in both studies in concentrations from 16–465 ng/l. The leaching of p-nonylphenol from plastic tubes used in the laboratory was first described by Soto et al. (1991). Because bisphenol A and nonylphenols are ubiquitous, we cannot exclude their presence in mineral water. The same is true for endocrine-disrupting phthalates: Despite some claims that phthalates are not used in manufacturing PET food packaging (Enneking 2006), Kim et al. (1990) extracted several phthalates from PET water bottles (among them DEHP, DBP, and DEP). The migration of DEHP from PET into mineral water was reported by Biscardi et al. (2003). Casajuana and Lacorte (2003) monitored several phthalates in mineral water and found increased concentrations of DMP, DEP, DBP, and DEHP after storing water in PET bottles for 10 weeks. In a recent study, Montuori et al. 
(2008) compared mineral waters bottled in glass and PET and detected significantly higher amounts of phthalates (DMP, DEP, DiBP, DBP, and DEHP) in plastic bottled water. The sum of studied compounds was thus 12 times higher in water from PET bottles compared to samples from glass bottles. Taken together, there is good analytical evidence for the migration of certain phthalates from PET food packaging materials, some of them well-known xenoestrogens (Jobling et al. 1995). However, the estrogenicity reported in this study might also arise from unexpected compounds: Shotyk et al. (2006) found antimony in up to 30 times higher concentrations in mineral water from PET compared to glass bottles and confirmed its leaching from PET (Shotyk and Krachler 2007), in whose manufacturing antimony trioxide is used as catalyst. The maximum concentrations detected in mineral water (1–2 µg/l) have been shown to exhibit estrogenic activity in vitro (Choe et al. 2003). The analytical data from the literature suggest that the observed estrogenic effects reported in this paper cannot be attributed to one of the compounds alone, owing to the fact that individual concentrations are too low to be effective. Since the assays used in this study integrate all xenoestrogens present in the samples, we propose that there are either potent estrogenic mixtures causing the in vitro and in vivo effects (Rajapakse et al. 2002; Silva et al. 2002) or so far unidentified compounds with strong estrogenic potency. Our findings provide first evidence for a broad contamination of mineral water with xenoestrogens, typically in the range of 2–40 ng/l EEQ with maximum values of 75 ng/l EEQ. Consumption of commercially bottled mineral water may therefore contribute to the overall exposure of humans with endocrine disruptors. Moreover, it is probable that this estrogenic contamination originates from plastic food packaging materials because mineral water bottled in PET and Tetra Pak is more estrogenic than water bottled in glass. This gives rise to the assumption that additives such as plasticizers or catalysts migrate from the plastic packaging into the foodstuff. Though yet unidentified, these substances act as functionally active estrogens in vitro on the human estrogen receptor alpha and in vivo in a molluskan model. Therefore, we may have identified just the tip of the iceberg in that plastic packaging may be a major source for xenohormone contamination of many other edibles. Still, this study was not designed to evaluate whether the consumption of plastic packed nourishments comprehends the risk of endocrine disruptive effects in humans. It instead provides an insight into the potential exposure to EDCs due to unexpected sources of contamination. This work was supported by the German Federal Environment Agency (206 67 448/04). We thank Prof. Dr. F. vom Saal, Dr. U. Schulte-Oehlmann, P. Di Benedetto, and M. Hess for the positive input to the project and comments on this manuscript and Dr. S. Jobling (Brunel University London) for providing the YES strain. This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. Anway MD, Cupp AS, Uzumcu M, Skinner MK (2005) Epigenetic transgenerational actions of endocrine disruptors and male fertility. 
Anway MD, Cupp AS, Uzumcu M, Skinner MK (2005) Epigenetic transgenerational actions of endocrine disruptors and male fertility. Science 308:1466–1469
Biles JE, McNeal TP, Begley TH, Hollifield HC (1998) Determination of bisphenol A in reusable polycarbonate food-contact plastics and migration to food-simulating liquids. J Agric Food Chem 46:2894–2894
Biscardi D, Monarca S, De Fusco R, Senatore F, Poli P, Buschini A, Rossi C, Zani C (2003) Evaluation of the migration of mutagens/carcinogens from PET bottles into mineral water by Tradescantia/micronuclei test, Comet assay on leukocytes and GC/MS. Sci Total Environ 302:101–108
Cargouet M, Perdiz D, Mouatassim-Souali A, Tamisier-Karolak S, Levi Y (2004) Assessment of river contamination by estrogenic compounds in Paris area (France). Sci Total Environ 324:55–66
Casajuana N, Lacorte S (2003) Presence and release of phthalic esters and other endocrine disrupting compounds in drinking water. Chromatographia 57:649–655
Choe SY, Kim SJ, Kim HG, Lee JH, Choi Y, Lee H, Kim Y (2003) Evaluation of estrogenicity of major heavy metals. Sci Total Environ 312:15–21
Colborn T, Dumanoski D, Myers JP (1996) Our stolen future: are we threatening our fertility, intelligence, and survival? A scientific detective story. Dutton, New York
Colón I, Caro D, Bourdony CJ, Rosario O (2000) Identification of phthalate esters in the serum of young Puerto Rican girls with premature breast development. Environ Health Perspect 108:895–900
Courant F, Antignac JP, Maume D, Monteau F, Andre F, Le Bizec B (2007) Determination of naturally occurring oestrogens and androgens in retail samples of milk and eggs. Food Addit Contam 24:1358–1366
Damstra T, Barlow S, Bergman A, Kavlock R, Van Der Kraak G (2002) Global Assessment of the State-of-the-Science of Endocrine Disruptors. WHO publication no. WHO/PCS/EDC/02.2. World Health Organisation, Geneva, Switzerland
Damstra T (2003) Endocrine disrupters: the need for a refocused vision. Toxicol Sci 74:231–232
Daxenberger A, Ibarreta D, Meyer HHD (2001) Possible health impact of animal oestrogens in food. Hum Reprod Update 7:340–355
Di Bella G, Saitta M, Lo Curto S, Salvo F, Licandro G, Dugo G (2001) Production process contamination of citrus essential oils by plastic materials. J Agric Food Chem 49:3705–3708
Duft M, Schulte-Oehlmann U, Weltje L, Tillmann M, Oehlmann J (2003a) Stimulated embryo production as a parameter of estrogenic exposure via sediments in the freshwater mudsnail Potamopyrgus antipodarum. Aquat Toxicol 64:437–449
Duft M, Schulte-Oehlmann U, Tillmann M, Markert B, Oehlmann J (2003b) Toxicity of triphenyltin and tributyltin to the freshwater mudsnail Potamopyrgus antipodarum in a new sediment biotest. Environ Toxicol Chem 22:145–152
Enneking PA (2006) Phthalates not in plastic food packaging. Environ Health Perspect 114:A89–A90
Fankhauser-Noti A, Biedermann-Brem S, Grob K (2006) PVC plasticizers/additives migrating from the gaskets of metal closures into oily food: Swiss market survey June 2005. Eur Food Res Technol 223:447–453
Fritsche S, Steinhart H (1999) Occurrence of hormonally active compounds in food: a review. Eur Food Res Technol 209:153–179
Guenther K, Heinke V, Thiele B, Kleist E, Prast H, Raecker T (2002) Endocrine disrupting nonylphenols are ubiquitous in food. Environ Sci Technol 36:1676–1680
Hartmann S, Lacorn M, Steinhart H (1998) Natural occurrence of steroid hormones in food. Food Chem 62:7–20
Hotchkiss AK, Rider CV, Blystone CR, Wilson VS, Hartig PC, Ankley GT, Foster PM, Gray CL, Gray LE (2008) Fifteen years after 'Wingspread'—Environmental endocrine disrupters and human and wildlife health: where we are today and where we need to go. Toxicol Sci 105:235–259
Howard G, Bartram J (2003) Domestic water quantity, service level and health. World Health Organisation, Geneva
Howdeshell KL, Hotchkiss AK, Thayer KA, Vandenbergh JG, vom Saal FS (1999) Environmental toxins—exposure to bisphenol A advances puberty. Nature 401:763–764
Howdeshell KL, Peterman PH, Judy BM, Taylor JA, Orazio CE, Ruhlen RL, vom Saal FS, Welshons WV (2003) Bisphenol A is released from used polycarbonate animal cages into water at room temperature. Environ Health Perspect 111:1180–1187
Jobling S, Reynolds T, White R, Parker MG, Sumpter JP (1995) A variety of environmentally persistent chemicals, including some phthalate plasticizers, are weakly estrogenic. Environ Health Perspect 103:582–587
Jobling S, Casey D, Rogers-Gray T, Oehlmann J, Schulte-Oehlmann U, Pawlowski S, Baunbeck T, Turner AP, Tyler CR (2004) Comparative responses of molluscs and fish to environmental estrogens and an estrogenic effluent. Aquat Toxicol 66:207–222
Kim H, Gilbert SG, Johnson JB (1990) Determination of potential migrants from commercial amber polyethylene terephthalate bottle wall. Pharm Res 7:176–179
Klinge CM, Risinger KE, Watts MB, Beck V, Eder R, Jungbauer A (2003) Estrogenic activity in white and red wine extracts. J Agric Food Chem 51:1850–1857
Köhler HR, Kloas W, Schirling M, Lutz I, Reye AL, Langen JS, Triebskorn R, Nagel R, Schonfelder G (2007) Sex steroid receptor evolution and signalling in aquatic invertebrates. Ecotoxicology 16:131–143
Lau OW, Wong SK (2000) Contamination in food from packaging material. J Chromatogr A 882:255–270
McNeal TP, Biles JE, Begley TH, Craun JC, Hopper ML, Sack CA (2000) Determination of suspected endocrine disruptors in foods and food packaging. In: Analysis of Environmental Endocrine Disruptors, Vol. 747, pp 33–52. American Chemical Society, Washington
Montuori P, Jover E, Morgantini M, Bayona JM, Triassi M (2008) Assessing human exposure to phthalic acid and phthalate esters from mineral water stored in polyethylene terephthalate and glass bottles. Food Addit Contam 25:511–518
Oehlmann J, Schulte-Oehlmann U, Bachmann J, Oetken M, Lutz I, Kloas W, Ternes TA (2006) Bisphenol A induces superfeminization in the ramshorn snail Marisa cornuarietis (Gastropoda: Prosobranchia) at environmentally relevant concentrations. Environ Health Perspect 114:127–133
Oehlmann J, Di Benedetto P, Tillmann M, Duft M, Oetken M, Schulte-Oehlmann U (2007) Endocrine disruption in prosobranch molluscs: evidence and ecological relevance. Ecotoxicology 16:29–43
Promberger A, Dornstauder E, Fruhwirth C, Schmid ER, Jungbauer A (2001) Determination of estrogenic activity in beer by biological and chemical means. J Agric Food Chem 49:633–640
Rajapakse N, Silva E, Kortenkamp A (2002) Combining xenoestrogens at levels below individual no-observed-effect concentrations dramatically enhances steroid hormone action. Environ Health Perspect 110:917–921
Routledge EJ, Sumpter JP (1996) Estrogenic activity of surfactants and some of their degradation products assessed using a recombinant yeast screen. Environ Toxicol Chem 15:241–248
Rutishauser BV, Pesonen M, Escher BI, Ackermann GE, Aerni HR, Suter MJF, Eggen RIL (2004) Comparative analysis of estrogenic activity in sewage treatment plant effluents involving three in vitro assays and chemical analysis of steroids. Environ Toxicol Chem 23:857–864
Safe SH (2000) Endocrine disruptors and human health—is there a problem? An update. Environ Health Perspect 108:487–493
Safe SH (2005) Clinical correlates of environmental endocrine disruptors. Trends Endocrinol Metab 16:139–144
Schettler T (2006) Human exposure to phthalates via consumer products. Int J Androl 29:134–139
Schmitt C, Oetken M, Dittberner O, Wagner M, Oehlmann J (2008) Endocrine modulation and toxic effects of two commonly used UV screens on the aquatic invertebrates Potamopyrgus antipodarum and Lumbriculus variegatus. Environ Pollut 152:322–329
Shao B, Han H, Hu JY, Zhao J, Wu GH, Xue Y, Ma YL, Zhang SJ (2005) Determination of alkylphenol and bisphenol A in beverages using liquid chromatography/electrospray ionization tandem mass spectrometry. Anal Chim Acta 530:245–252
Sharpe RM (2003) The 'oestrogen hypothesis'—where do we stand now. Int J Androl 26:2–15
Shotyk W, Krachler M, Chen B (2006) Contamination of Canadian and European bottled waters with antimony from PET containers. J Environ Monit 8:288–292
Shotyk W, Krachler M (2007) Contamination of bottled waters with antimony leaching from polyethylene terephthalate (PET) increases upon storage. Environ Sci Technol 41:1560–1563
Silva E, Rajapakse N, Kortenkamp A (2002) Something from 'nothing'—eight weak estrogenic chemicals combined at concentrations below NOECs produce significant mixture effects. Environ Sci Technol 36:1751–1756
Soto AM, Justicia H, Wray JW, Sonnenschein C (1991) Para-nonyl-phenol—an estrogenic xenobiotic released from modified polystyrene. Environ Health Perspect 92:167–173
Swan SH, Main KM, Liu F, Stewart SL, Kruse RL, Calafat AM, Mao CS, Redmon JB, Ternand CL, Sullivan S, Teague JL (2005) Decrease in anogenital distance among male infants with prenatal phthalate exposure. Environ Health Perspect 113:1056–1061
Takamura-Enya T, Ishihara J, Tahara S, Goto S, Totsuka Y, Sugimura T, Wakabayashi K (2003) Analysis of estrogenic activity of foodstuffs and cigarette smoke condensates using a yeast estrogen screening method. Food Chem Toxicol 41:543–550
Toyo'oka T, Oshige Y (2000) Determination of alkylphenols in mineral water contained in PET bottles by liquid chromatography with coulometric detection. Anal Sci 16:1071–1076
vom Saal FS, Hughes C (2005) An extensive new literature concerning low-dose effects of bisphenol A shows the need for a new risk assessment. Environ Health Perspect 113:926–933
Waring RH, Harris RM (2005) Endocrine disrupters: a human risk. Mol Cell Endocrinol 244:2–9
Zygoura PD, Paleologos EK, Riganakos KA, Kontominas MG (2005) Determination of diethylhexyladipate and acetyltributylcitrate in aqueous extracts after cloud point extraction coupled with microwave assisted back extraction and gas chromatographic separation. J Chromatogr A 1093:29–35
© The Author(s) 2009
1. Department of Aquatic Ecotoxicology, Johann Wolfgang Goethe University, Frankfurt am Main, Germany
Wagner, M. & Oehlmann, J. Environ Sci Pollut Res (2009) 16: 278. https://doi.org/10.1007/s11356-009-0107-7
Characterization of efficient solutions for a class of PDE-constrained vector control problems
Savin Treanţă
University Politehnica of Bucharest, Faculty of Applied Sciences, Department of Applied Mathematics, 313 Splaiul Independentei, 060042 Bucharest, Romania
Numerical Algebra, Control & Optimization, March 2020, 10(1): 93-106. doi: 10.3934/naco.2019035
Received October 2018; Revised March 2019; Published May 2019
In this paper, we define a V-KT-pseudoinvex multidimensional vector control problem. More precisely, we introduce a new condition on the functionals which are involved in a multidimensional multiobjective (vector) control problem and we prove that a V-KT-pseudoinvex multidimensional vector control problem is characterized so that all Kuhn-Tucker points are efficient solutions. Also, the theoretical results derived in this paper are illustrated with an application.
Keywords: Multidimensional vector control problem, Kuhn-Tucker optimality conditions, V-KT-pseudoinvexity, multiple integral cost functional, flow.
Mathematics Subject Classification: Primary: 26B25, 65K10, 90C29; Secondary: 90C30, 49K20, 46T20, 58J32.
Citation: Savin Treanţă. Characterization of efficient solutions for a class of PDE-constrained vector control problems. Numerical Algebra, Control & Optimization, 2020, 10(1): 93-106. doi: 10.3934/naco.2019035
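For orientation only (the notation here is generic; the paper's precise formulation may differ), a multidimensional vector control problem with multiple integral cost functionals has the form
$$\min_{(x(\cdot),u(\cdot))}\ \biggl(\int_{\Omega_{t_{0},t_{1}}}f^{1}\bigl(t,x(t),u(t)\bigr)\,dt,\ \ldots,\ \int_{\Omega_{t_{0},t_{1}}}f^{r}\bigl(t,x(t),u(t)\bigr)\,dt \biggr)$$
subject to a flow-type PDE constraint \(\frac{\partial x}{\partial t^{\alpha}}=X_{\alpha}\bigl(t,x(t),u(t)\bigr)\), \(\alpha=1,\ldots,m\), together with boundary conditions on the state x. Efficiency is understood in the Pareto sense and, according to the abstract, V-KT-pseudoinvexity is precisely the condition under which every Kuhn-Tucker point (that is, every point satisfying the first-order necessary optimality conditions) of such a problem is an efficient solution.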
Figure 1. Graphical illustrations for x(t) and u(t)
Existence results for some nonlinear elliptic equations with measure data in Orlicz-Sobolev spaces
Ge Dong & Xiaochun Fang
An Erratum to this article was published on 10 July 2017.
We prove existence results in the setting of Orlicz spaces for the following nonlinear elliptic equation:
$$A(u)+g(x,u,Du)=\mu, $$
where A is a Leray-Lions operator defined on \(D(A)\subset W_{0}^{1}L_{M}(\Omega)\), while g is a nonlinear term having a growth condition with respect to Du but not satisfying any sign condition. The right-hand side μ is a bounded Radon measure.
Let Ω be a bounded domain in \(\mathbb{R}^{N}\). In the classical Sobolev space \(W^{1,p}_{0}(\Omega)\), Porretta [1] studied solutions of the following problem:
$$ -\operatorname{div} a(x,u,D u)=H(x,u,D u)+\mu, $$
where a satisfies a polynomial growth condition with respect to u and Du, and H has natural growth with respect to Du and is not assumed to satisfy the sign condition \(H(x,s,\xi)s\geq0\); that is, a and H satisfy
(a1) \(|a(x,s,\xi)|\leq\beta(k(x)+|s|^{p-1}+|\xi|^{p-1})\), with \(k(x)\in L^{p'}(\Omega)\), \(\beta>0\), \(p>1\), \(\frac{1}{p}+\frac{1}{p'}=1\);
(a2) \(|H(x,s,\xi)|\leq\gamma(x)+g(s)|\xi|^{p}\), with \(\gamma(x)\in L^{1}(\Omega)\) and \(g:\mathbb{R}\rightarrow\mathbb{R}^{+}\) continuous, \(g\geq0\), \(g\in L^{1}(\mathbb{R})\),
for almost every \(x\in\Omega\) and for all \(s\in\mathbb{R}\), \(\xi\in \mathbb{R}^{N}\). The right-hand side μ is a nonnegative bounded Radon measure on Ω. The model example is the equation
$$-\Delta_{p}(u)+g(u)|D u|^{p}=\mu $$
in Ω coupled with a Dirichlet boundary condition. Aharouch et al. [2] proved existence results in the setting of Orlicz spaces for the unilateral problem associated with the following equation:
$$ A(u)+g(x,u,D u)=f, $$
where \(A(u)=-\operatorname{div} a(x,u,D u)\) is a Leray-Lions operator defined on \(D(A)\subset W_{0}^{1}L_{M}(\Omega)\), and a and g satisfy the following growth conditions:
\(|a(x,s,\xi)|\leq c(x)+k_{1}\bar{P}^{-1}(M(k_{2}|s|))+k_{3}\bar{M}^{-1}(M(k_{4}|\xi|))\), \(k_{1},k_{2},k_{3},k_{4} \geq0\), \(c(x)\in E_{\bar{M}}(\Omega)\);
\(|g(x,s,\xi)|\leq\gamma(x)+\rho(s)M(|\xi|)\), \(\gamma(x)\in L^{1}(\Omega)\), where \(\rho:\mathbb{R}\rightarrow\mathbb{R}^{+}\) is continuous, \(\rho\geq0\), \(\rho\in L^{1}(\mathbb{R})\),
for almost every \(x\in\Omega\), for all \(s\in\mathbb{R}\), \(\xi\in \mathbb{R}^{N}\), where M and P are N-functions such that \(P\ll M\). The right-hand side f belongs to \(L^{1}(\Omega)\), and the obstacle is a measurable function. Youssfi et al. [3] proved the existence of bounded solutions of problem (2) whose principal part has a degenerate coercivity, where g does not satisfy the sign condition and f is an appropriate integrable source term.
Some elliptic equations in Orlicz spaces with variational structure of the form
$$\int_{\Omega}M\bigl(|Du|\bigr)\,dx $$
have been studied, where \(u:\Omega\rightarrow\mathbb{R}^{N}\) and \(\Omega \subset\mathbb{R}^{n}\) is a bounded open set (see, e.g., [4–7]). The associated Euler-Lagrange system is
$$-\operatorname{div} \biggl(M'\bigl(|Du|\bigr)\frac{Du}{|Du|} \biggr)=0\quad (\mbox{see}, \textit{e.g.}, \mbox{[5]}). $$
In this case methods from the calculus of variations can be used and the regularity of solutions can be established. However, the assumptions there are strong; for example, in [4] and [6] it is required that M satisfy the \(\Delta_{2}\) condition.
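For orientation, note that in the power case this operator is, up to a constant, the p-Laplacian: taking \(M(t)=t^{p}\) with \(p>1\), so that \(M'(t)=pt^{p-1}\),
$$-\operatorname{div} \biggl(M'\bigl(|Du|\bigr)\frac{Du}{|Du|} \biggr)=-p\operatorname{div}\bigl(|Du|^{p-2}Du\bigr)=-p\Delta_{p}u, $$
so the Orlicz framework below contains the classical \(W^{1,p}_{0}(\Omega)\) setting of [1], while choices such as \(M(t)=t\ln(1+t)\) or \(M(t)=e^{t}-1\) fall outside it.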
The purpose of this paper is to study the existence of a solution for the following nonlinear Dirichlet problem: $$ A(u)+g(x,u,D u)=\mu, $$ where \(A(u)=-\operatorname{div} a(x,u,D u)\) is a Leray-Lions operator defined on \(D(A)\subset W_{0}^{1}L_{M}(\Omega)\) having the following growth condition: $$\bigl|a(x,s,\xi)\bigr|\leq\beta \bigl[c(x)+\bar{M}^{-1} \bigl(M\bigl(|s|\bigr) \bigr)+ \bar{M}^{-1} \bigl(M\bigl(|\xi |\bigr) \bigr) \bigr],\quad \beta>0, c(x)\in E_{\bar{M}}(\Omega) $$ for almost every \(x\in\Omega\), for all \(s\in\mathbb{R}\), \(\xi\in \mathbb{R}^{N}\), g is a nonlinear term having the growth condition (g) without any sign condition, and μ is a nonnegative bounded Radon measure on Ω. When trying to relax the restriction on a and H in Eq. (1), we are led to replace Sobolev spaces by Orlicz-Sobolev spaces without assuming any restriction on M (i.e., without the \(\Delta _{2}\) condition). The choice \(M(t)=t^{p}\), \(p>1\), \(t>0\) leads to [1]. A nonstandard example is \(M(t)=t\ln(1+t)\), \(t>0\) (see, e.g., [8, 9]). Taking \(M(t)=e^{t}-1\), \(t>0\), M does not satisfy \(\Delta_{2}\)-condition. Moreover, the elimination of the term g in Eq. (3) can lead to [10]. A specific example to which our result applies includes the following: $$-\operatorname{div} \biggl(a(u)\frac{M(|Du|)Du}{|Du|^{2}} \biggr)+ a'(u)\int _{0}^{|D u|}\frac{M(t)}{t}\,dt=\delta, $$ where \(a(s)\) is a smooth function, and δ is a Dirac measure. This paper is organized as follows. In Section 2, we recall some preliminaries and some technical lemmas which will be needed in Section 3. In Section 3, we first prove that there exist solutions in \(W_{0}^{1}E_{M}(\Omega)\) for approximate equations by using a linear functional analysis method; next, following [1–3, 10], we prove the existence results for problem (9)-(10) and show that solutions belong to Orlicz-Sobolev spaces \(W_{0}^{1}L_{B}(\Omega)\) for any \(B\in\mathcal{P}_{M}\), where \(\mathcal{P}_{M}\) is a special class of N-functions (see Theorem 3.1 below). For some classical results on equations, we refer to [11–18]. N-function Let \(M:\mathbb{R}^{+}\to\mathbb{R}^{+}\) be an N-function; i.e., M is continuous, convex with \(M(u)>0\) for \(u>0\), \(M(u)/u\to 0\) as \(u\to0\), and \(M(u)/u\to\infty\) as \(u\to\infty\). Equivalently, M admits the representation \(M(u)=\int_{0}^{u}\phi(t)\,dt\), where \(\phi:\mathbb{R}^{+}\to\mathbb{R}^{+}\) is a nondecreasing, right-continuous function with \(\phi(0)=0\), \(\phi(t)>0\) for \(t>0\), and \(\phi(t)\to\infty\) as \(t\to\infty\). The conjugated N-function M̄ of M is defined by \(\bar{M}(v)=\int_{0}^{v}\psi(s)\,ds\), where \(\psi:\mathbb{R}^{+}\to\mathbb{R}^{+}\) is given by \(\psi(s)=\sup\{t:\phi(t)\leq s\}\). The N-function M is said to satisfy the \(\Delta_{2}\) condition if, for some \(k>0\), $$M(2u)\leq kM(u),\quad \forall u\geq0. $$ The N-function M is said to satisfy the \(\Delta_{2}\) condition near infinity if, for some \(k>0\) and \(u_{0}>0\), \(M(2u)\leq kM(u)\), \(\forall u\geq u_{0}\) (see [19, 20]). Moreover, one has the following Young inequality: $$\forall u, v\geq0,\quad uv\leq M(u)+\bar{M}(v). $$ We will extend these N-functions into even functions on all \(\mathbb{R}\). Let P, Q be two N-functions, \(P\ll Q\) means that P grows essentially less rapidly than Q; i.e., for each \(\varepsilon>0\), \(P(t)/Q(\varepsilon t)\to0\) as \(t\to\infty\). This is the case if and only if \(\lim_{t\to\infty} Q^{-1}(t)/P^{-1}(t)=0\) (see [19, 21]). 
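Returning to the two model N-functions mentioned in the introduction, the \(\Delta_{2}\) condition can be checked directly. For \(M(t)=t\ln(1+t)\) one has, since \(1+2t\leq(1+t)^{2}\) for \(t\geq0\),
$$M(2t)=2t\ln(1+2t)\leq2t\ln\bigl((1+t)^{2}\bigr)=4t\ln(1+t)=4M(t), $$
so the \(\Delta_{2}\) condition holds with \(k=4\); whereas for \(M(t)=e^{t}-1\),
$$\frac{M(2t)}{M(t)}=\frac{e^{2t}-1}{e^{t}-1}=e^{t}+1\longrightarrow\infty \quad\mbox{as } t\to\infty, $$
so no constant k can work and the \(\Delta_{2}\) condition fails.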
Orlicz spaces Let Ω be an open subset of \(\mathbb{R}^{N}\) and M be an N-function. The Orlicz class \(K_{M}(\Omega)\) (resp. the Orlicz space \(L_{M}(\Omega)\)) is defined as the set of (equivalence classes of) real valued measurable functions u on Ω such that $$\int_{\Omega}M \bigl(u(x) \bigr)\,dx< +\infty \quad \biggl( \mbox{resp. } \int_{\Omega}M \biggl(\frac{u(x)}{\lambda} \biggr) \,dx< + \infty \mbox{ for some }\lambda>0 \biggr). $$ \(L_{M}(\Omega)\) is a Banach space under the Luxemburg norm $$\|u\|_{(M)}=\inf \biggl\{ \lambda>0: \int_{\Omega}M \biggl(\frac{u(x)}{\lambda} \biggr)\,dx\leq1 \biggr\} , $$ and \(K_{M}(\Omega)\) is a convex subset of \(L_{M}(\Omega)\) but not necessarily a linear space. The closure in \(L_{M}(\Omega)\) of the set of bounded measurable functions with compact support in Ω̄ is denoted by \(E_{M}(\Omega)\). The equality \(E_{M}(\Omega)=L_{M}(\Omega)\) holds if and only if M satisfies the \(\Delta_{2}\) condition for all t or for t large according to whether Ω has infinite measure or not. The dual space of \(E_{M}(\Omega)\) can be identified with \(L_{\bar {M}}(\Omega)\) by means of the pairing \(\int_{\Omega}u(x)v(x)\,dx\), and the dual norm of \(L_{\bar{M}}(\Omega)\) is equivalent to \(\|\cdot\| _{(\bar{M})}\). Orlicz-Sobolev spaces We now turn to the Orlicz-Sobolev spaces. The class \(W^{1}L_{M}(\Omega)\) (resp., \(W^{1}E_{M}(\Omega)\)) consists of all functions u such that u and its distributional derivatives up to order 1 lie in \(L_{M}(\Omega)\) (resp., \(E_{M}(\Omega)\)). The classes \(W^{1}L_{M}(\Omega)\) and \(W^{1}E_{M}(\Omega)\) of such functions may be given the norm $$\|u\|_{\Omega,M}=\sum_{|\alpha|\leq1}\bigl\| D^{\alpha}u \bigr\| _{(M)}. $$ These classes will be Banach spaces under this norm. We refer to spaces of the forms \(W^{1}L_{M}(\Omega)\) and \(W^{1}E_{M}(\Omega)\) as Orlicz-Sobolev spaces. Thus \(W^{1}L_{M}(\Omega)\) and \(W^{1}E_{M}(\Omega)\) can be identified with subspaces of the product of \(N+1\) copies of \(L_{M}(\Omega)\). Denoting this product by \(\Pi L_{M}\), we will use the weak topologies \(\sigma(\Pi L_{M},\Pi E_{\bar{M}})\) and \(\sigma(\Pi L_{M},\Pi L_{\bar{M}})\). If M satisfies \(\Delta_{2}\) condition (near infinity only when Ω has finite measure), then \(W^{1}L_{M}(\Omega)=W^{1}E_{M}(\Omega)\). The space \(W_{0}^{1}E_{M}(\Omega)\) is defined as the (norm) closure of the Schwartz space \(\mathcal{D}(\Omega)\) in \(W^{1}E_{M}(\Omega)\) and the space \(W^{1}_{0}L_{M}(\Omega)\) as the \(\sigma(\Pi L_{M},\Pi E_{\bar{M}})\) closure of \(\mathcal{D}(\Omega)\) in \(W^{1}L_{M}(\Omega)\). We recall that a sequence \(u_{n}\) converges to u for the modular convergence in \(W^{1}L_{M}(\Omega)\) if there exists \(\lambda>0\) such that $$\int_{\Omega}M \biggl(\frac{|D^{\alpha}u_{n}-D^{\alpha}u|}{\lambda} \biggr)\,dx\rightarrow0 \quad\mbox{as } n\rightarrow\infty \mbox{ for all } |\alpha|\leq1. $$ Let \(W^{-1}L_{\bar{M}}(\Omega)\) (resp. \(W^{-1}E_{\bar{M}}(\Omega)\)) denote the space of distributions on Ω which can be written as sums of derivatives of order ≤1 of functions in \(L_{\bar{M}}(\Omega)\) (resp. \(E_{\bar{M}}(\Omega)\)). It is a Banach space under the usual quotient norm. If the open set Ω has the segment property, then the space \(\mathcal{D}(\Omega)\) is dense in \(W^{1}_{0}L_{M}(\Omega)\) for the modular convergence and thus for the topology \(\sigma(\Pi L_{M},\Pi L_{\bar{M}})\). Consequently, the action of a distribution in \(W^{-1}L_{\bar{M}}(\Omega)\) on an element of \(W^{1}_{0}L_{M}(\Omega)\) is well defined. 
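Before turning to the lemmas, it may help to keep the power case in mind: for \(M(t)=t^{p}\), \(1<p<\infty\), the Luxemburg norm reduces to the usual Lebesgue norm, since
$$\int_{\Omega} \biggl(\frac{|u(x)|}{\lambda} \biggr)^{p}dx\leq1 \quad\Longleftrightarrow\quad \lambda\geq \biggl(\int_{\Omega}|u|^{p}\,dx \biggr)^{1/p}, \qquad\mbox{hence}\quad \|u\|_{(M)}=\|u\|_{L^{p}(\Omega)}, $$
and in this case \(E_{M}(\Omega)=L_{M}(\Omega)=L^{p}(\Omega)\) because \(t^{p}\) satisfies the \(\Delta_{2}\) condition.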
The dual space of \(W^{1}_{0}E_{M}(\Omega)\) is \(W^{-1}L_{\bar{M}}(\Omega )\) and the dual space of \(W^{-1}E_{\bar{M}}(\Omega)\) is \(W^{1}_{0}L_{M}(\Omega )\) (see [21, 22]). For the above results, the reader may also be referred to [8, 23–25]. We recall some lemmas which will be used later.
Lemma 2.1 (see [26]) For all \(u\in W_{0}^{1}L_{M}(\Omega)\), one has
$$\int_{\Omega}M\bigl(|u|/\operatorname{diam}\Omega\bigr)\,dx\leq\int _{\Omega} M\bigl(|Du|\bigr)\,dx, $$
where diamΩ is the diameter of Ω.
Lemma 2.2 If the open set Ω has the segment property and \(u\in W_{0}^{1}L_{M}(\Omega)\), then there exist \(\lambda>0\) and a sequence \(u_{k}\in\mathcal {D}(\Omega)\) such that for any \(|\alpha|\leq1\), \(\rho_{M}(|D^{\alpha}u_{k}-D^{\alpha}u|/\lambda)\rightarrow0\) as \(k\rightarrow\infty\).
Definition 2.1 Let \(V_{m}=\operatorname{span}\{\omega_{1},\ldots,\omega_{m}\}\); then \(u_{m}\in V_{m}\) is called a Galerkin solution of \(A(u)=f\) in \(V_{m}\) if and only if
$$\bigl(A(u_{m}),v \bigr)=(f,v)\quad \forall v\in V_{m}. $$
The proof of the following lemma can be found in Lemma 5.12.1 in [28].
Lemma 2.3 Let \(f:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\) be a continuous mapping with
$$ \lim_{|x|\rightarrow\infty}\frac{\langle x,f(x)\rangle}{|x|}=a, $$
where a is a constant with \(-\infty\leq a<0\) or \(0< a\leq+\infty\), \(|\cdot|\) is a norm in \(\mathbb{R}^{m}\), and \(\langle\cdot,\cdot\rangle\) is the inner product defined by \(\langle x,f(x)\rangle=\sum_{i=1}^{m}x_{i}f_{i}(x)\) with \(x=(x_{1},x_{2},\ldots,x_{m})\) and \(f(x)=(f_{1}(x),f_{2}(x),\ldots,f_{m}(x))\). Then the range of f is the whole of \(\mathbb{R}^{m}\).
Proof. Let \(u_{0}\in\mathbb{R}^{m}\) and define \(f^{*}(x)=f(x)-u_{0}\). Then \(f^{*}\) satisfies (4). Consequently, it is sufficient to prove that the range of any map satisfying (4) contains the origin. If \(0< a\leq+\infty\), using (4) we see that we may choose r large enough so that
$$ \frac{\langle x,f(x)\rangle}{|x|}>0 \quad\mbox{for }|x|=r. $$
Suppose, for contradiction, that \(f(\xi)\neq0\) for every \(\xi\in B(0,r)\), where \(B(0,r)=\{x\in \mathbb{R}^{m}: |x|\leq r\}\) (otherwise the origin already belongs to the range of f). Then the mapping
$$w(\xi)=-\frac{rf(\xi)}{|f(\xi)|},\quad |\xi|\leq r, $$
is well defined, and \(w:B(0,r)\rightarrow B(0,r)\) is continuous. By the Brouwer fixed point theorem, w has a fixed point, i.e., there exists \(x\in B(0,r)\) such that \(x=w(x)\). Then
$$|x|=\bigl|w(x)\bigr|= \biggl\vert -\frac{rf(x)}{|f(x)|} \biggr\vert =r, $$
which implies that
$$\frac{\langle x,f(x)\rangle}{|x|}=\frac{\langle x,-\frac {x}{r}|f(x)|\rangle}{|x|} =-\frac{|f(x)|\langle x,x\rangle}{r|x|}=-\frac{|f(x)|\cdot |x|^{2}}{r|x|} =-\frac{|f(x)|\cdot|x|}{r}< 0. $$
This contradicts (5). Hence the origin belongs to the range of f and, by the reduction above, f is surjective. If \(-\infty\leq a<0\), then let \(g=-f\). Thanks to (4), we have
$$\lim_{|x|\rightarrow\infty}\frac{\langle x,g(x)\rangle}{|x|}=-a>0. $$
From this we deduce that g is surjective, and therefore \(f=-g\) is surjective as well, i.e., the range of f is the whole of \(\mathbb{R}^{m}\). □
Remark 2.1 Let V be a vector space of finite dimension and let \(A:V\rightarrow V^{*}\) be a continuous mapping with
$$ \lim_{\|u\|_{V}\rightarrow\infty}\frac{(A(u),u)}{\|u\|_{V}}=a, $$
where a is the constant in Lemma 2.3 and \(V^{*}\) is the dual space of V. Then A is surjective. Clearly, condition (4) is weaker than the one of Lemma 5.12.1 in [28]. Note also that if condition (4) is replaced by
$$\lim_{|x|\rightarrow\infty}\frac{|\langle x,f(x)\rangle|}{|x|}=a, $$
then f need not be surjective.
For example, let \(f(x)=|x|\), then \(f:\mathbb {R}\rightarrow\mathbb{R}\) is continuous and $$\frac{|\langle x,f(x)\rangle|}{|x|}=\frac{| x\cdot|x||}{|x|} =|x|\rightarrow+\infty \quad\mbox{as }|x| \rightarrow+\infty. $$ However, the range of f is \([0,+\infty)\). Therefore, Lemma 1 in Landes [27] should be without absolute. (see [20] and [21]) If a sequence \(u_{n}\in L_{M}(\Omega)\) converges a.e. to u and if \(u_{n}\) remains bounded in \(L_{M}(\Omega)\), then \(u\in L_{M}(\Omega)\) and \(u_{n}\rightharpoonup u\) for \(\sigma(L_{M}, E_{\bar{M}})\). Let \(u_{k}, u\in L_{M}(\Omega)\). If \(u_{k}\rightarrow u\) with respect to the modular convergence, then \(u_{k}\rightarrow u\) for \(\sigma(L_{M}, L_{\bar{M}})\). For N-function M, \(\mathcal{T}_{0}^{1,M}(\Omega)\) is defined as the set of measurable functions \(u:\Omega\rightarrow\mathbb{R}\) such that for all \(k>0\) the truncated functions \(T_{k}(u)\in W_{0}^{1}L_{M}(\Omega)\) with \(T_{k}(s)=\max(-k,\min(k,s))\). The following lemmas will be applied to the truncation operators. (see [2, 23] and [24]) Let \(F:\mathbb{R}\rightarrow\mathbb{R}\) be uniformly Lipschitzian with \(F(0) = 0\). Let M be an N-function, and let \(u\in W^{1}L_{M}(\Omega)\) (resp. \(W^{1}E_{M}(\Omega)\)). Then \(F(u)\in W^{1}L_{M}(\Omega)\) (resp. \(W^{1}E_{M}(\Omega)\)). Moreover, we have \(\frac{\partial}{\partial x_{i}}F(u)=F^{\prime}(u)\frac{\partial }{\partial x_{i}}u\), a.e. in \(\{x\in\Omega|u(x)\notin D\}\), and \(\frac {\partial}{\partial x_{i}}F(u)=0\), a.e. in \(\{x\in\Omega|u(x)\in D\}\), where D is the set of discontinuity points of \(F^{\prime}\). If \(u\in W^{1}L_{M}(\Omega)\), then \(u^{+},u^{-}\in W^{1}L_{M}(\Omega)\) and $$ Du^{+}= \textstyle\begin{cases} Du, &\textit{if }u>0, \\ 0, & \textit{if }u\leq0, \end{cases}\displaystyle \quad\textit{and} \quad Du^{-}= \textstyle\begin{cases} 0, & \textit{if }u\geq0, \\ -Du,& \textit{if }u< 0. \end{cases} $$ (see [2]) For every \(u\in\mathcal {T}_{0}^{1,M}(\Omega)\), there exists a unique measurable function \(v:\Omega\rightarrow\mathbb{R}\) such that \(DT_{k}(u)=v\chi_{\{|u|< k\}}\) almost everywhere in Ω for every \(k>0\). Define the gradient of u as the function v, and denote it by \(v=Du\). Existence theorem Let \(\Omega\subset\mathbb{R}^{N}\) be a bounded domain with the segment property, \(N\geq2\), M be an N-function, M̄ be a complementary function of M. Assume that M is twice continuously differentiable. Denote by \(\mathcal{P}_{M}\) the following subset of N-functions defined as: $$\begin{aligned} \mathcal{P}_{M} =& \biggl\{ B:\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}: N\mbox{-function}: B \mbox{ is twice continuously differentiable}, \\ &{} B^{\prime\prime}/B^{\prime}\leq M^{\prime\prime}/M^{\prime}; \int_{0}^{1}B\circ H^{-1} \bigl(1/t^{1-1/N} \bigr)\,dt< \infty \biggr\} , \end{aligned}$$ where \(H(r)=M(r)/r\). Assume that there exists \(Q\in\mathcal{P}_{M}\) such that $$ Q\circ H^{-1} \mbox{ is an }N\mbox{-function}. $$ Let μ be a bounded nonnegative Radon measure on Ω. We consider the following Dirichlet problem: $$\begin{aligned}& A(u)+g(x,u,D u)=\mu \quad\mbox{in } \Omega, \end{aligned}$$ $$\begin{aligned}& u=0, \quad\mbox{on } \partial\Omega, \end{aligned}$$ where \(A:D(A)\subset W_{0}^{1}L_{M}(\Omega)\to W^{-1}L_{\bar{M}}(\Omega)\) is a mapping given by \(A(u)=-\mathop{\operatorname{div}} a(x,u,D u)\). \(a:\Omega\times\mathbb{R}\times\mathbb{R}^{N}\to\mathbb{R}^{N}\) is a Carathéodory function satisfying for a.e. 
\(x\in\Omega\) and all \(s\in\mathbb{R}\), \(\xi,\eta\in\mathbb{R}^{N}\) with \(\xi\neq\eta\): $$\begin{aligned}& \bigl|a(x,s,\xi)\bigr|\leq\beta \bigl[c(x)+\bar{M}^{-1} \bigl(M\bigl(|s|\bigr) \bigr)+ \bar{M}^{-1} \bigl(M\bigl(|\xi|\bigr) \bigr) \bigr], \end{aligned}$$ $$\begin{aligned}& { \bigl[}a(x,s,\xi)-a(x,s,\eta) \bigr] [\xi-\eta]>0, \end{aligned}$$ $$\begin{aligned}& a(x,s,\xi)\xi\geq\alpha M\bigl(|\xi|\bigr), \end{aligned}$$ where \(\alpha,\beta>0\), \(k_{1},k_{2}\geq0\), \(c(x)\in E_{\bar{M}}(\Omega)\). \(g:\Omega\times\mathbb{R}\times\mathbb{R}^{N}\to\mathbb{R}\) is a Carathéodory function satisfying for a.e. \(x\in\Omega\) and all \(s\in\mathbb{R}\), \(\xi\in\mathbb{R}^{N}\): $$ \bigl|g(x,s,\xi)\bigr|\leq\gamma(x)+\rho(s)M\bigl(|\xi|\bigr), $$ where \(\rho:\mathbb{R}\rightarrow\mathbb{R}^{+}\) is a continuous positive function which belongs to \(L^{1}(\mathbb{R})\) and \(\gamma(x)\) belongs to \(L^{1}(\Omega)\). For example, \(g(x,u,D u)=\gamma(x)+|\sin u|e^{-u}M(|D u|)\) (see [2]). We have the following theorem. Assume that (8)-(14) hold. Then there exists at least one solution of the following problem: $$ \textstyle\begin{cases} u\in\mathcal{T}_{0}^{1,M}(\Omega)\cap W_{0}^{1}L_{B}(\Omega),& \forall B\in\mathcal{P}_{M}, \\ \langle A(u),\phi\rangle+\int_{\Omega}g(x,u,D u)\phi \,dx =\langle\mu,\phi\rangle,&\forall\phi\in\mathcal{D}(\Omega). \end{cases} $$ It is well known that there exists a sequence \(\mu_{n}\in\mathcal {D}(\Omega)\) such that \(\mu_{n}\) converges to μ in the distributional sense with \(\|\mu_{n}\|_{L^{1}(\Omega)}\leq\|\mu\|_{\mathcal{M}_{b}(\Omega)}\) and \(\mu_{n}\) is nonnegative if μ is nonnegative. (1) Benkirane and Bennouna [30, Remark 2.2] give some examples of N-functions M for which the set \(\mathcal{P}_{M}\) is not empty. For example, assume that the N-function M is defined only at infinity, and let \(M(t)=t^{2}\log t\) and \(B(t)=t\log t\), then \(H(t)=t \log t\) and \(H^{-1}(t)=t(\log t)^{-1}\) at infinity (see, e.g., [30] or [20]). Hence, the N-function B belongs to \(\mathcal{P}_{M}\). (2) Let \(M(t)=|t|^{p}\) and \(B(t)=|t|^{q}\), then \(B\in\mathcal{P}_{M}\Leftrightarrow1< q<\tilde{p}=\frac{N(p-1)}{N-1}\) and \(p>2-\frac{1}{N}\). So that we find the same result given in [1]. Our theorem gives a refinement of the regularity result. For example, take \(B_{1}(t)=\frac{t^{\tilde{p}}}{\log^{\alpha}(e+t)}\) with \(\alpha>1\). We have the following proposition. Proposition 3.1 Assume that (8)-(14) hold. Then, for any \(n\in \mathbb{N}\), there exists at least one solution \(u_{n}\in W_{0}^{1}E_{M}(\Omega)\) of the following approximate equation: $$ \int_{\Omega} \bigl[a(x,u,Du)Dv+g_{n}(x,u,Du)v \bigr]\,dx=\int_{\Omega}\mu_{n}v\,dx, \quad\forall v \in W_{0}^{1}L_{M}(\Omega), $$ where \(g_{n}(x,s,\xi)=\frac{g(x,s,\xi)}{1+\frac{1}{n}|g(x,s,\xi)|}\). Denote \(V=W_{0}^{1}E_{M}(\Omega)\). Define \(A_{n}:V\rightarrow V^{\ast}\), $$(A_{n}u,w):=\int_{\Omega} \bigl[a(x,u,Du)Dw(x) +g_{n}(x,u,Du)w(x) \bigr]\,dx,\quad \forall w\in V. $$ Then \(A_{n}\) is well defined. Indeed, from (11) we have $$\int_{\Omega}\bar{M} \biggl(\frac{1}{3\beta}\bigl|a(x,u,Du)\bigr| \biggr)\,dx \leq\int_{\Omega}\frac{1}{3} \bigl[\bar{M} \bigl(c(x) \bigr)+M\bigl(|u|\bigr) +M\bigl(|Du|\bigr) \bigr]\,dx< \infty. $$ Therefore, \(a(x,u,Du)\in(L_{\bar{M}}(\Omega))^{N}\). On the other hand, for every fixed n, \(\int_{\Omega}\bar{M}(|g_{n}(x,u, Du)|)\,dx\leq\bar{M}(n) \operatorname{meas}(\Omega)<\infty\). Thus \(g_{n}(x,u, Du)\in L_{\bar{M}}(\Omega)\). 
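The first estimate in the preceding display is obtained as follows (an elementary step, recorded for completeness): by the growth condition (11) and the monotonicity of \(\bar{M}\), and then by the convexity of \(\bar{M}\) together with \(\bar{M}(\bar{M}^{-1}(s))=s\),
$$\bar{M} \biggl(\frac{1}{3\beta}\bigl|a(x,u,Du)\bigr| \biggr)\leq\bar{M} \biggl(\frac{c(x)+\bar{M}^{-1}(M(|u|))+\bar{M}^{-1}(M(|Du|))}{3} \biggr)\leq\frac{1}{3} \bigl[\bar{M}\bigl(c(x)\bigr)+M\bigl(|u|\bigr)+M\bigl(|Du|\bigr) \bigr], $$
and integrating over Ω gives the finiteness claimed above.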
There exists a sequence \(\{w_{j}\}_{n=1}^{\infty}\subset\mathcal{D}(\Omega)\) such that \(\{w_{j}\}_{n=1}^{\infty}\) dense in V. Let \(V_{m}=\operatorname{span}\{w_{1},\ldots, w_{m}\}\) and consider \(A_{n}|_{V_{m}}\). \(\int_{\Omega}|Du|\,dx\) and \(\|Du\|_{(M)}\) to be two norms of \(V_{m}\) equivalent to the usual norm of finite dimensional vector spaces. Claim: the mapping \(u\rightarrow A_{n}|_{V_{m}}u:V_{m}\rightarrow V_{m}^{*}\) is continuous. Indeed, if \(u_{j}\rightarrow u\) in \(V_{m}\) and there exists \(\varepsilon_{0}>0\) such that $$ \|A_{n}|_{V_{m}}u_{j}-A_{n}|_{V_{m}}u \|_{V_{m}^{\ast}}\geq\varepsilon_{0}, $$ and since \(u_{j}\rightarrow u\) strongly in \(V_{m}\), $$\int_{\Omega}M\bigl(2|u_{j}-u|\bigr)\,dx\rightarrow0 \quad \mbox{and}\quad \int_{\Omega}M\bigl(2|Du_{j}-Du|\bigr)\,dx \rightarrow0, $$ then there exists a subsequence of \(\{u_{j}\}\) still denoted by \(\{u_{j}\}\) and \(f_{1}, f_{2}\in L^{1}(\Omega)\) such that \(M(2|u_{j}-u|)\leq f_{1}\) and \(M(2|Du_{j}-Du|)\leq f_{2}\). By the convexity of M, we deduce that $$ M\bigl(|u_{j}|\bigr)\leq \frac{1}{2}M\bigl(2|u_{j}-u|\bigr)+ \frac{1}{2}M\bigl(2|u|\bigr)\leq\frac{1}{2}f_{1} + \frac{1}{2}M\bigl(2|u|\bigr). $$ Similarly, $$ M\bigl(|Du_{j}|\bigr)\leq\frac{1}{2}f_{2} + \frac{1}{2}M\bigl(2|Du|\bigr). $$ For \(\forall w\in V_{m}\), by (11), (18), (19) and Young inequality, one has $$\begin{aligned} & \bigl|a(x,u_{j},Du_{j})Dw(x) +g_{n}(x,u_{j},Du_{j})w(x)\bigr| \\ &\quad\leq \beta \bigl[c(x)+ \bar{M}^{-1} \bigl(M\bigl(|u_{j}|\bigr) \bigr)+ \bar{M}^{-1}M\bigl(|Du_{j}|\bigr) \bigr]|Dw| +n|w| \\ &\quad\leq \beta \bigl[\bar{M} \bigl(c(x) \bigr)+3M\bigl(|Dw|\bigr) + M\bigl(|u_{j}|\bigr)+M\bigl(|Du_{j}|\bigr) \bigr] + \bigl[\bar{M}(n)+M\bigl(|w|\bigr) \bigr] \\ &\quad\leq\beta \biggl[\bar{M} \bigl(c(x) \bigr)+3M\bigl(|Dw|\bigr)+ \frac{1}{2}f_{1} +\frac{1}{2}M\bigl(2|u|\bigr)+\frac{1}{2}f_{2} + \frac{1}{2}M\bigl(2|Du|\bigr) \biggr] \\ &\qquad{} +\bar{M}(n)+M\bigl(|w|\bigr). \end{aligned}$$ Hence \((A_{n}|_{V_{m}}u_{j},w)<\infty\) for all \(w\in V_{m}\). By the Banach-Steinhaus theorem \(\{\|A_{n}|_{V_{m}}u_{j}\|_{V_{m}^{\ast}}\}_{j}\) is bounded. Hence \(\{A_{n}|_{V_{m}}u_{j}\}_{j}\) is relatively sequently compact in \(V_{m}^{\ast}\). Passing to a subsequence if necessary, there exists \(\eta_{n}\in V_{m}^{\ast}\) such that $$\|A_{n}|_{V_{m}}u_{j}-\eta_{n} \|_{V_{m}^{\ast}}\rightarrow 0. $$ On the other hand, passing to a subsequence if necessary, $$u_{j}(x)\rightarrow u(x) \quad\mbox{a.e. in } \Omega \quad\mbox{and} \quad Du_{j}(x)\rightarrow Du(x) \quad\mbox{a.e. in } \Omega. $$ By the Lebesgue theorem, we know that for each \(w\in V_{m}\), $$\lim_{j\rightarrow\infty}(A_{n}|_{V_{m}}u_{j},w) =(A_{n}|_{V_{m}}u,w). $$ Hence \(A_{n}|_{V_{m}}u=\eta_{n}\), it is a contradiction with (17). 
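The second inequality in (20), and similar bounds later in the proofs, all follow the same pattern: each product term is split by the Young inequality \(uv\leq M(u)+\bar{M}(v)\) recalled above. For instance,
$$|Dw|\cdot\bar{M}^{-1}\bigl(M\bigl(|u_{j}|\bigr)\bigr)\leq M\bigl(|Dw|\bigr)+\bar{M}\Bigl(\bar{M}^{-1}\bigl(M\bigl(|u_{j}|\bigr)\bigr)\Bigr)=M\bigl(|Dw|\bigr)+M\bigl(|u_{j}|\bigr), $$
and likewise for the terms \(c(x)|Dw|\), \(\bar{M}^{-1}(M(|Du_{j}|))|Dw|\), and \(n|w|\), which is how the three copies of \(M(|Dw|)\) and the term \(\bar{M}(n)+M(|w|)\) arise in (20).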
Thanks to (13) and Lemma 2.1, for all \(u\in V_{m}\), $$\begin{aligned} (A_{n}u,u) =& \int_{\Omega} \bigl[a(x,u,Du)Du +g_{n}(x,u,Du)u \bigr]\,dx \\ \geq& \int_{\Omega} \bigl[\alpha M\bigl(|Du|\bigr)-n|u| \bigr]\,dx \\ \geq& \alpha\int_{\Omega} M\bigl(|Du|\bigr)\,dx-\int_{\Omega} \biggl[\bar{M} \biggl(\frac {1}{\alpha_{0}}(n\operatorname{diam}\Omega) \biggr) +M \biggl(\alpha_{0}\frac{|u|}{\operatorname{diam}\Omega} \biggr) \biggr]\,dx \\ \geq& \alpha\int_{\Omega} M\bigl(|Du|\bigr)\,dx- \bar{M} \biggl( \frac{1}{\alpha _{0}}(n\operatorname{diam}\Omega) \biggr)\operatorname{meas}\Omega- \int_{\Omega} \alpha_{0}M\bigl(|Du|\bigr)\,dx \\ =&(\alpha-\alpha_{0})\int_{\Omega} M\bigl(|Du|\bigr)\,dx- \bar{M} \biggl(\frac{1}{\alpha _{0}}(n\operatorname{diam}\Omega) \biggr) \operatorname{meas}\Omega, \end{aligned}$$ where \(\alpha_{0}=\min\{\frac{\alpha}{2},1\}\). By Lemma 2.1, one has \(\|u\|_{(M)}\leq \operatorname{diam}\Omega\|Du\| _{(M)}\). It follows that \(\|u\|_{\Omega, M}\leq(1+\operatorname{diam}\Omega)\| Du\|_{(M)}\). We have $$ \frac{\int_{\Omega} M(|Du|)\,dx}{\|u\|_{\Omega, M}} \geq\frac{1}{ 1+\operatorname{diam}\Omega} \frac{\int_{\Omega} M(|Du|)\,dx}{\|Du\| _{(M)}} \geq \frac{1}{1+\operatorname{diam}\Omega} $$ since \(\int_{\Omega}M(u)\,dx>\|u\|_{(M)}\) whenever \(\|u\|_{(M)}>1\). Combining (21) and (22), one has $$ \frac{(A_{n}u,u)}{\|u\|_{\Omega, M}}\geq\frac{1}{1+\operatorname{diam}\Omega}. $$ By Remark 2.1, \(A_{n}\) is surjective, i.e., there exists a Galerkin solution \(u_{m}\in V_{m}\) for every m such that $$ (A_{n}u_{m},v)=(\mu_{n},v),\quad \forall v\in V_{m}. $$ We will show that the sequence \(\{u_{m}\}\) is bounded in V. In fact, for every \(u_{m}\in V\), if \(\|u_{m}\|_{\Omega, M}\rightarrow \infty\), then by (23), \((A_{n}u_{m},u_{m})\rightarrow\infty\). It is a contradiction with (24). Therefore \(\{u_{m}\}\) is bounded in V. It follows from (20) that we can deduce \(\{\| A_{n}|_{V_{m}}u_{m}\|_{V^{*}}\}_{m}\) is bounded. So we can extract a subsequence \(\{u_{k}\}_{k=1}^{\infty}\) of \(\{ u_{m}\}_{m=1}^{\infty}\) such that $$ u_{k}\rightharpoonup u_{0} \quad\mbox{in } V \mbox{ for } \sigma(\Pi L_{M}, \Pi E_{\bar{M}}),\qquad A_{n}u_{k}\rightharpoonup\xi_{n} \quad\mbox{in } V^{*} \mbox{ for } \sigma (\Pi L_{\bar{M}}, \Pi E_{M}), $$ as \(k\rightarrow\infty\) and \((\xi_{n},w)=(\mu_{n},w)\) for all \(w\in\bigcup _{m=1}^{\infty}\{w_{m}\}\). By the density of \(\{w_{m}\}\), we get $$(\xi_{n},w)=(\mu_{n},w),\quad \forall w\in V. $$ By the imbedding theorem (see, e.g., [31]) we have $$ u_{k}\rightarrow u_{0} \quad\mbox{strongly in } L_{M}(\Omega) \mbox{ as }k\rightarrow\infty. $$ Hence, passing to a subsequence if necessary $$ u_{k}(x)\rightarrow u_{0}(x) \quad \mbox{a.e. }x \in\Omega \mbox{ as }k\rightarrow\infty. $$ On the other hand, thanks to (26), we have $$\int_{\Omega}g_{n}(x,u_{k},Du_{k}) (u_{k}-u_{0})\,dx\rightarrow0 \quad\mbox{and}\quad\int _{\Omega}\mu_{n}(u_{k}-u_{0})\,dx \rightarrow0 $$ as \(k\rightarrow\infty\). Thus we obtain that $$\begin{aligned} & \int_{\Omega}a(x,u_{k},Du_{k}) (Du_{k}-Du_{0})\,dx \\ &\quad=\int_{\Omega}\mu_{n}(u_{k}-u_{0}) \,dx -\int_{\Omega}g_{n}(x,u_{k},Du_{k}) (u_{k}-u_{0})\,dx \rightarrow0. \end{aligned}$$ Fix a positive real number r and define \(\Omega_{r}=\{x\in\Omega :|Du_{0}(x)|\leq r\}\) and denote by \(\chi_{r}\) the characteristic function of \(\Omega_{r}\). 
Taking \(s\geq r\), one has $$\begin{aligned} 0 \leq&\int_{\Omega_{r}} \bigl[a(x,u_{k},Du_{k})-a(x,u_{k},Du_{0}) \bigr] (Du_{k}-Du_{0})\,dx \\ \leq&\int_{\Omega_{s}} \bigl[a(x,u_{k},Du_{k})-a(x,u_{k},Du_{0}) \bigr] (Du_{k}-Du_{0})\,dx \\ =&\int_{\Omega_{s}} \bigl[a(x,u_{k},Du_{k})-a(x,u_{k},Du_{0} \chi_{s}) \bigr] (Du_{k}-Du_{0} \chi_{s})\,dx \\ \leq&\int_{\Omega} \bigl[a(x,u_{k},Du_{k})-a(x,u_{k},Du_{0} \chi_{s}) \bigr] (Du_{k}-Du_{0} \chi_{s})\,dx. \end{aligned}$$ On the other hand, $$\begin{aligned} &\int_{\Omega}a(x,u_{k},Du_{k}) (Du_{k}-Du_{0})\,dx \\ &\quad=\int_{\Omega} \bigl[a(x,u_{k},Du_{k})-a(x,u_{k},Du_{0} \chi_{s}) \bigr] (Du_{k}-Du_{0} \chi_{s})\,dx \\ &\qquad{} -\int_{\Omega}a(x,u_{k},Du_{k})Du_{0} \chi_{\Omega\backslash\Omega_{s}}\,dx +\int_{\Omega}a(x,u_{k},Du_{0} \chi_{s}) (Du_{k}-Du_{0}\chi_{s}) \,dx. \end{aligned}$$ $$\begin{aligned} & \int_{\Omega} \bigl[a(x,u_{k},Du_{k})-a(x,u_{k},Du_{0} \chi_{s}) \bigr] (Du_{k}-Du_{0} \chi_{s})\,dx \\ &\quad=\int_{\Omega}a(x,u_{k},Du_{k}) (Du_{k}-Du_{0})\,dx \\ & \qquad{} +\int_{\Omega}a(x,u_{k},Du_{k})Du_{0} \chi_{\Omega\backslash\Omega_{s}}\,dx -\int_{\Omega}a(x,u_{k},Du_{0} \chi_{s}) (Du_{k}-Du_{0}\chi _{s}) \,dx. \end{aligned}$$ In view of (28) the first term of the right-hand side of (29) tends to 0 as \(k\rightarrow\infty\). \(\{a(x,u_{k},Du_{k})\}_{k}\) is bounded in \((L_{\bar{M}}(\Omega))^{N}\). Indeed, for every \(w\in(E_{M}(\Omega))^{N}\), $$\begin{aligned} & \int_{\Omega}a(x,u_{k},Du_{k})w\,dx \\ &\quad=\int_{\Omega}\mu_{n}w\,dx -\int _{\Omega}g_{n}(x,u_{k},Du_{k})w \,dx \\ &\quad\leq\|\mu_{n}\|_{\bar{M}}\cdot\|w\|_{(M)}+\|n \|_{\bar{M}}\cdot\|w\|_{(M)} =\bigl(\|\mu_{n} \|_{\bar{M}}+\|n\|_{\bar{M}}\bigr)\|w\|_{(M)}< +\infty. \end{aligned}$$ By the Banach-Steinhaus theorem, \(\{\|a(x,u_{k},Du_{k})\|_{\bar{M}}\} _{k}\) is bounded. Thus, there exists \(h\in(L_{\bar{M}}(\Omega))^{N}\) such that (for a subsequence still denoted by \(\{u_{k}\}\)) $$a(x,u_{k},Du_{k})\rightharpoonup h \quad\mbox{in } \bigl(L_{\bar{M}}(\Omega ) \bigr)^{N} \mbox{ for } \sigma(\Pi L_{\bar{M}},\Pi E_{M}). $$ It follows that the second term of the right-hand side of (29) tends to \(\int_{\Omega\backslash\Omega_{s}}hDu_{0}\,dx\) as \(k\rightarrow\infty\). Since \(a(x,u_{k},Du_{0}\chi_{s})\rightharpoonup a(x,u_{0},Du_{0}\chi_{s})\) strongly in \((E_{\bar{M}}(\Omega))^{N}\), while by (25) \(Du_{k}-Du_{0}\chi_{s}\rightharpoonup Du_{0}-Du_{0}\chi_{s}\) tends weakly in \((E_{M}(\Omega))^{N}\) for \(\sigma((L_{M}(\Omega))^{N},(E_{\bar {M}}(\Omega))^{N})\), the third term of the right-hand side of (29) tends to \(-\int_{\Omega}a(x,u_{0},Du_{0}\chi_{s})(Du_{0}-Du_{0}\chi_{s})\,dx =-\int_{\Omega\backslash\Omega_{s}}a(x,u_{0},0)Du_{0}\,dx\). $$\begin{aligned} &\int_{\Omega} \bigl[a(x,u_{k},Du_{k})-a(x,u_{k},Du_{0} \chi_{s}) \bigr] (Du_{k}-Du_{0} \chi_{s})\,dx \\ &\quad=\int_{\Omega\backslash\Omega_{s}} \bigl[h-a(x,u_{0},0) \bigr]Du_{0}\,dx +\varepsilon(k). \end{aligned}$$ We have then proved that $$\begin{aligned} 0 \leq&\limsup_{k\rightarrow\infty} \int_{\Omega_{r}} \bigl[a(x,u_{k},Du_{k})-a(x,u_{k},Du_{0}) \bigr] (Du_{k}-Du_{0})\,dx \\ =&\int_{\Omega\backslash\Omega_{s}} \bigl[h-a(x,u_{0},0) \bigr]Du_{0}\,dx. 
\end{aligned}$$ Using the fact that \([h-a(x,u_{0},0)]Du_{0}\in L^{1}(\Omega)\) and letting \(s\rightarrow\infty\), we get, since \(\operatorname{meas}(\Omega\backslash \Omega_{s})\rightarrow0\), $$\int_{\Omega_{r}} \bigl[a(x,u_{k},Du_{k})-a(x,u_{k},Du_{0}) \bigr] (Du_{k}-Du_{0})\,dx\rightarrow0 \quad\mbox{as }k \rightarrow \infty, $$ which gives $$ \bigl[a(x,u_{k},Du_{k})-a(x,u_{k},Du_{0}) \bigr] (Du_{k}-Du_{0})\,dx \rightarrow0 \quad\mbox{a.e. in } \Omega_{r} $$ (for a subsequence still denoted by \(\{u_{k}\}\)), say, for each \(x\in \Omega_{r}\backslash Z\) with \(\operatorname{meas}(Z)=0\). As the proof of Eq. (3.23) in [32], we can construct a subsequence such that $$ Du_{k}(x)\rightarrow Du_{0}(x) \quad\mbox{a.e. in }\Omega. $$ Consequently, we get $$a(x,u_{k},Du_{k})\rightarrow a(x,u_{0},Du_{0}) \quad\mbox{a.e. in }\Omega, $$ $$g_{n}(x,u_{k},Du_{k})\rightarrow g_{n}(x,u_{0},Du_{0}) \quad\mbox{a.e. in } \Omega. $$ By Lemma 2.4, we get $$a(x,u_{k},Du_{k})\rightharpoonup a(x,u_{0},Du_{0}) \quad \mbox{in } \bigl(L_{\bar{M}}(\Omega) \bigr)^{N} \mbox{ for } \sigma \bigl( \bigl(L_{\bar{M}}(\Omega ) \bigr)^{N}, \bigl(E_{M}( \Omega) \bigr)^{N} \bigr), $$ $$g_{n}(x,u_{k},Du_{k})\rightharpoonup g_{n}(x,u_{0},Du_{0}) \quad\mbox{in } \bigl(L_{\bar{M}}(\Omega) \bigr)^{N} \mbox{ for }\sigma \bigl( \bigl(L_{\bar {M}}(\Omega) \bigr)^{N}, \bigl(E_{M}( \Omega) \bigr)^{N} \bigr). $$ $$\int_{\Omega} \bigl[a(x,u_{k},Du_{k})Dw+g_{n}(x,u_{k},Du_{k})w \bigr]\,dx \rightarrow\! \int_{\Omega} \bigl[a(x,u_{0},Du_{0})Dw+g_{n}(x,u_{0},Du_{0})w \bigr]\,dx $$ for every \(w\in V\). Thus, we get \((A_{n}u_{k},w)\rightarrow(A_{n}u_{0},w)\) for every \(w\in V\). It follows that \(A_{n}u_{0}=\xi_{n}\). Therefore, $$(A_{n}u_{0},w)=(\mu_{n},w), \quad\forall w\in W_{0}^{1}E_{M}(\Omega). $$ Furthermore, by Lemmas 2.2 and 2.5, we have $$(A_{n}u_{0},v)=(\mu_{n},v),\quad \forall v\in W_{0}^{1}L_{M}(\Omega). $$ Hence, for every n, there exists at least one solution \(u_{n}\) of (16) with \(u_{n}\in W_{0}^{1}E_{M}(\Omega)\). □ From Proposition 3.1, we have the following approximate equations: $$ \int_{\Omega} \bigl[a(x,u_{n},Du_{n})Dv+g_{n}(x,u_{n},Du_{n})v \bigr]\,dx=\int_{\Omega}\mu_{n}v\,dx, \quad\forall v \in W_{0}^{1}L_{M}(\Omega), $$ where \(u_{n}\in W_{0}^{1}E_{M}(\Omega)\). Clearly, condition (11) is weaker than $$ \bigl|a(x,s,\xi)\bigr|\leq\beta \bigl[c(x)+\bar{P}^{-1} \bigl(M\bigl(|s|\bigr) \bigr)+\bar{M}^{-1} \bigl(M\bigl(|\xi|\bigr) \bigr) \bigr], $$ whenever \(P\ll M\). If condition (11) is replaced by (33) in Proposition 3.1, the approximate equations (16) has at least one solution \(u_{n}\in W_{0}^{1}L_{M}(\Omega)\) by the classical result of [33]. The proof of the following proposition is similar to the proof of Lemma 2.2 in [34]. Assume that (8)-(14) hold true, and let \(\{u_{n}\} _{n}\) be a solution of the approximate problem (16). Let \(\varphi\in W_{0}^{1}L_{M}(\Omega)\cap L^{\infty}(\Omega)\) with \(\varphi\geq0\). 
Then \(\exp(G(T_{k}(u_{n})))\varphi\) can be taken as a test function in (32) and $$\begin{aligned} &\int_{\Omega}a(x,u_{n},Du_{n}) \exp \bigl(G(u_{n}) \bigr)D\varphi \,dx \\ &\quad\leq\int_{\Omega}\mu_{n}\exp \bigl(G(u_{n}) \bigr)\varphi \,dx +\int_{\Omega} \gamma(x) \exp \bigl(G(u_{n}) \bigr)\varphi \,dx; \end{aligned}$$ \(\exp(-G(T_{k}(u_{n})))\varphi\) can be taken as a test function in (32) and $$\begin{aligned} &\int_{\Omega}a(x,u_{n},Du_{n}) \exp \bigl(-G(u_{n}) \bigr)D\varphi \,dx +\int_{\Omega} \gamma(x)\exp \bigl(-G(u_{n}) \bigr)\varphi \,dx \\ &\quad\geq\int_{\Omega}\mu_{n}\exp \bigl(-G(u_{n}) \bigr)\varphi \,dx. \end{aligned}$$ (1) Choosing \(\exp(G(T_{k}(u_{n})))\varphi\) as a test function in (32), we have $$\begin{aligned} &\int_{\Omega}a(x,u_{n},Du_{n}) \exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr) \frac{\rho (T_{k}(u_{n}))}{\alpha} DT_{k}(u_{n})\varphi \,dx \\ &\qquad{} +\int_{\Omega}a(x,u_{n},Du_{n}) \exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr)D\varphi \,dx \\ &\qquad{} +\int_{\Omega}g_{n}(x,u_{n},Du_{n}) \exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr)\varphi \,dx \\ &\quad=\int_{\Omega}\mu_{n}\exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr)\varphi \,dx, \end{aligned}$$ which implies by (13) $$\begin{aligned} &\int_{\Omega}\alpha M \bigl(\bigl|DT_{k}(u_{n})\bigr| \bigr)\exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr) \frac{\rho (T_{k}(u_{n}))}{\alpha} \varphi \,dx \\ &\quad\leq\int_{\Omega}a \bigl(x,T_{k}(u_{n}),DT_{k}(u_{n}) \bigr)\exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr) \frac{\rho(T_{k}(u_{n}))}{\alpha} DT_{k}(u_{n})\varphi \,dx \\ &\quad= \int_{\Omega}a(x,u_{n},Du_{n})\exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr)\frac{\rho (T_{k}(u_{n}))}{\alpha} DT_{k}(u_{n})\varphi \,dx. \end{aligned}$$ Since \(T_{k}(u_{n})\rightarrow u_{n}\) and \(DT_{k}(u_{n})\rightarrow Du_{n}\) a.e. in Ω as \(k\rightarrow\infty\), by the Fatou lemma, we get $$\begin{aligned} &\int_{\Omega}\alpha M\bigl(|Du_{n}|\bigr)\exp \bigl(G(u_{n}) \bigr) \frac{\rho(u_{n})}{\alpha } \varphi \,dx \\ &\quad\leq\liminf_{k\rightarrow\infty}\int_{\Omega}\alpha M \bigl(\bigl|DT_{k}(u_{n})\bigr| \bigr)\exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr) \frac{\rho(T_{k}(u_{n}))}{\alpha } \varphi \,dx \\ &\quad\leq\liminf_{k\rightarrow\infty} \int_{\Omega}a(x,u_{n},Du_{n}) \exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr) \frac{\rho (T_{k}(u_{n}))}{\alpha} DT_{k}(u_{n})\varphi \,dx. \end{aligned}$$ On the other hand, the functions \(a(x,u_{n},Du_{n})D\varphi\), \(g_{n}(x,u_{n},Du_{n})\varphi\), and \(\mu_{n}\varphi\) are summable, and the functions \(\exp(G(T_{k}(u_{n})))\) are bounded in \(L^{\infty}(\Omega )\); so Lebesgue's dominated convergence theorem may be applied in the remaining integrals. Indeed, thanks to (11) and Young inequality, one has $$\begin{aligned} &\bigl|a(x,u_{n},Du_{n})\exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr)D\varphi\bigr| \\ &\quad\leq e^{\frac{\|\rho\|_{L^{1}(\mathbb{R})}}{\alpha}}\beta \bigl[c(x)+\bar {M}^{-1} \bigl(M\bigl(|u_{n}|\bigr) \bigr) +\bar{M}^{-1} \bigl(M\bigl(|Du_{n}|\bigr) \bigr) \bigr]|D\varphi| \\ &\quad\leq e^{\frac{\|\rho\|_{L^{1}(\mathbb{R})}}{\alpha}}\beta \bigl[\bar {M} \bigl(c(x) \bigr) +M\bigl(|u_{n}|\bigr) +M\bigl(|Du_{n}|\bigr)+3M\bigl(|D\varphi|\bigr) \bigr]. \end{aligned}$$ Since \(a(x,u_{n},Du_{n})\exp(G(T_{k}(u_{n})))D\varphi\rightarrow a(x,u_{n},Du_{n})\exp(G(u_{n}))D\varphi\) a.e. 
in Ω as \(k\rightarrow\infty\), and by Lebesgue's dominated convergence theorem, we deduce that $$\int_{\Omega}a(x,u_{n},Du_{n})\exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr)D\varphi \,dx \rightarrow \int_{\Omega}a(x,u_{n},Du_{n})\exp \bigl(G(u_{n}) \bigr)D\varphi \,dx $$ as \(k\rightarrow\infty\). Since \(g_{n}(x,u_{n},Du_{n})\exp(G(T_{k}(u_{n})))\varphi\rightarrow g_{n}(x,u_{n},Du_{n})\exp(G(u_{n}))\varphi\) a.e. in Ω as \(k\rightarrow\infty\), and $$\bigl|g_{n}(x,u_{n},Du_{n})\exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr)\varphi\bigr| \leq n e^{\frac{\|\rho\|_{L^{1}(\mathbb{R})}}{\alpha}}\|\varphi\| _{\infty}, $$ by Lebesgue's dominated convergence theorem one has $$\int_{\Omega}g_{n}(x,u_{n},Du_{n}) \exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr)\varphi \,dx \rightarrow\int_{\Omega}g_{n}(x,u_{n},Du_{n}) \exp \bigl(G(u_{n}) \bigr)\varphi \,dx $$ as \(k\rightarrow\infty\). Since \(\mu_{n}\exp(G(T_{k}(u_{n})))\varphi \rightarrow\mu_{n}\exp(G(u_{n}))\varphi\) a.e. in Ω as \(k\rightarrow\infty\), and $$\bigl|\mu_{n}\exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr)\varphi\bigr| \leq e^{\frac{\|\rho\|_{L^{1}(\mathbb{R})}}{\alpha}}\mu_{n}\|\varphi\| _{\infty}, $$ $$\int_{\Omega}\mu_{n}\exp \bigl(G \bigl(T_{k}(u_{n}) \bigr) \bigr)\varphi \,dx \rightarrow\int _{\Omega}\mu_{n} \exp \bigl(G(u_{n}) \bigr) \varphi \,dx $$ as \(k\rightarrow\infty\). Thus, letting k tend to ∞ in (36), we obtain $$\begin{aligned} &\int_{\Omega}M\bigl(|Du_{n}|\bigr)\exp \bigl(G(u_{n}) \bigr)\rho(u_{n}) Du_{n}\varphi \,dx \\ &\qquad{} +\int_{\Omega}a(x,u_{n},Du_{n}) \exp \bigl(G(u_{n}) \bigr)D\varphi \,dx +\int_{\Omega}g_{n}(x,u_{n},Du_{n}) \exp \bigl(G(u_{n}) \bigr)\varphi \,dx \\ &\quad\leq\int_{\Omega}\mu_{n}\exp \bigl(G(u_{n}) \bigr)\varphi \,dx. \end{aligned}$$ By (14), (37) is reduced to (34). (2) Similarly, taking \(\exp(-G(T_{k}(u_{n})))\varphi\) as a test function in (32), we obtain (35). □ Assume that (8)-(14) hold true, and let \(\{u_{n}\} _{n}\) be a solution of the approximate problem (16). Then, for all \(k>0\), there exists a constant C (which does not depend on the n and k) such that $$ \int_{\Omega}M \bigl(\bigl|DT_{k}(u_{n})\bigr| \bigr)\,dx\leq Ck. $$ Let \(\varphi=T_{k}(u_{n})^{+}\) in (34). Also let \(G(\pm\infty )=\frac{1}{\alpha}\int_{0}^{\pm\infty}\rho(s)\,ds\) which are well defined since \(\rho\in L^{1}(\mathbb{R})\), then \(G(-\infty)\leq G(s)\leq G(+\infty)\) and \(|G(\pm\infty)|\leq\|\rho\|_{L^{1}(\mathbb{R})}/\alpha\). We have $$\int_{\Omega}a(x,u_{n},Du_{n})\exp \bigl(G(u_{n}) \bigr)DT_{k}(u_{n})^{+} \,dx \leq e^{\frac{\|\rho\|_{L^{1}(\mathbb{R})}}{\alpha}}k \bigl[\|\mu\|_{\mathcal {M}_{b}(\Omega)} +\bigl\| \gamma(x) \bigr\| _{L^{1}(\Omega)} \bigr]. $$ Immediately, by (13) we get $$ \int_{\Omega}M \bigl(\bigl|DT_{k}(u_{n})^{+}\bigr| \bigr)\,dx \leq Ck . $$ Similarly, let \(\varphi=T_{k}(u_{n})^{-}\) in (35). We obtain $$ \int_{\Omega}M \bigl(\bigl|DT_{k}(u_{n})^{-}\bigr| \bigr)\,dx \leq Ck . $$ Combing (39) and (40), we deduce (38). □ Assume that (8)-(14) hold true, and let \(\{u_{n}\} _{n}\) be a solution of the approximate problem (16). Then there exists a measurable function u such that for all \(k>0\) we have (for a subsequence still denoted by \(\{u_{n}\}_{n}\)), \(u_{n}\rightarrow u\) a.e. in Ω; \(T_{k}(u_{n})\rightharpoonup T_{k}(u)\) weakly in \(W_{0}^{1}E_{M}(\Omega)\) for \(\sigma(\Pi L_{M}, \Pi E_{\bar{M}})\); \(T_{k}(u_{n})\rightarrow T_{k}(u)\) strongly in \(E_{M}(\Omega )\) and a.e. in Ω. 
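The proof below rests on the level-set estimate \(\operatorname{meas}\{|u_{n}|>k\}\leq Ck/M(k/\operatorname{diam}\Omega)\), which follows from (38) and Lemma 2.1. It is worth noting why this bound is small for large k, uniformly in n: since M is an N-function, \(M(t)/t\rightarrow\infty\) as \(t\rightarrow\infty\), and therefore
$$\frac{Ck}{M \bigl(\frac{k}{\operatorname{diam}\Omega} \bigr)}=C\operatorname{diam}\Omega\cdot\frac{k/\operatorname{diam}\Omega}{M \bigl(\frac{k}{\operatorname{diam}\Omega} \bigr)}\longrightarrow0 \quad\mbox{as } k\to\infty, $$
and this uniform smallness of the sets \(\{|u_{n}|>k\}\) is what allows the extraction of an a.e. convergent subsequence.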
$$\begin{aligned} & M \biggl(\frac{k}{\operatorname{diam}\Omega} \biggr)\operatorname{meas} \bigl\{ \bigl|T_{k}(u_{n})\bigr|=k \bigr\} \\ &\quad= \int_{\{|T_{k}(u_{n})|=k\}}M \biggl(\frac{|T_{k}(u_{n})|}{\operatorname{diam}\Omega} \biggr)\,dx \leq \int_{\Omega}M \bigl(\bigl|DT_{k}(u_{n})\bigr| \bigr)\,dx \leq Ck \end{aligned}$$ and \(\{|u_{n}|>k\}\subset\{|T_{k}(u_{n})|=k\}\), we get $$\operatorname{meas}\bigl\{ |u_{n}|>k\bigr\} \leq\operatorname{meas} \bigl\{ \bigl|T_{k}(u_{n})\bigr|=k \bigr\} \leq\frac{Ck}{M(\frac{k}{\operatorname{diam}\Omega})} $$ for all n and for all k. Similar to the proof of Proposition 4.3 in [2], assertions (1)-(3) hold. □ Assume that (8)-(14) hold true, and let \(\{u_{n}\} _{n}\) be a solution of the approximate problem (16). Then, for all \(k>0\), \(\{a(x,T_{k}(u_{n}), DT_{k}(u_{n}))\}_{n}\) is bounded in \(L_{\bar{M}}(\Omega)^{N}\); \(Du_{n}\rightarrow Du\) a.e. in Ω (for a subsequence) as \(n\rightarrow\infty\). (1) Let \(w\in(E_{M}(\Omega))^{N}\) be arbitrary. By condition (11) and Young inequality, we have $$\begin{aligned} &\int_{\Omega}a \bigl(x,T_{k}(u_{n}), DT_{k}(u_{n}) \bigr)w\,dx \\ &\quad\leq\beta\int_{\Omega} \bigl[\bar {M} \bigl(c(x) \bigr)+M(k)+M \bigl(\bigl|DT_{k}(u_{n})\bigr| \bigr)+3M\bigl(|w|\bigr) \bigr] \,dx \\ &\quad\leq\beta \biggl[\int_{\Omega}\bar{M} \bigl(c(x) \bigr) \,dx+M(k) \operatorname{meas}\Omega+\int_{\Omega}M \bigl(\bigl|DT_{k}(u_{n})\bigr| \bigr)\,dx+3\int_{\Omega}M\bigl(|w|\bigr) \,dx \biggr] \\ &\quad\leq\beta \biggl[\int_{\Omega}\bar{M} \bigl(c(x) \bigr) \,dx+M(k) \operatorname{meas}\Omega+Ck+3\int_{\Omega}M\bigl(|w|\bigr)\,dx \biggr]=C(k)< +\infty, \end{aligned}$$ where \(C(k)\) is a constant independent of n. By the Banach-Steinhaus theorem \(\{\|a(x,T_{k}(u_{n}), DT_{k}(u_{n}))\| _{\bar{M}}\}_{n}\) is bounded; this completes the proof of assertion (1). (2) Let \(\Omega_{s}=\{x\in\Omega||DT_{k}(u_{n})|< s\}\) and denote by \(\chi_{s}\) the characteristic function of \(\Omega_{s}\). Clearly, \(\Omega _{s}\subset\Omega_{s+1}\) and \(\operatorname{meas}(\Omega\backslash\Omega _{s})\rightarrow0\) as \(s\rightarrow\infty\). Step (i). We shall show the following assertion: $$ \lim_{j\rightarrow\infty}\limsup_{n\rightarrow\infty}\int _{\{-(j+1)\leq u_{n}\leq-j\}}a(x,u_{n}, Du_{n})Du_{n} \,dx=0. $$ Indeed, the term in (35) with \(\mu_{n}\) can be neglected since it is nonnegative. Hence $$ -\int_{\Omega}a(x,u_{n},Du_{n}) \exp \bigl(-G(u_{n}) \bigr)D\varphi \,dx \leq\int_{\Omega} \gamma(x)\exp \bigl(-G(u_{n}) \bigr)\varphi \,dx. $$ Taking \(\varphi=T_{1}(u_{n}-T_{j}(u_{n}))^{-}\) in (42), we obtain $$\begin{aligned} &\int_{\{-(j+1)\leq u_{n}\leq-j\}}a(x,u_{n},Du_{n})Du_{n} \exp \bigl(-G(u_{n}) \bigr)\,dx \\ & \quad \leq\int_{\Omega}\gamma(x)\exp \bigl(-G(u_{n}) \bigr)T_{1} \bigl(u_{n}-T_{j}(u_{n}) \bigr)^{-}\,dx. \end{aligned}$$ Since \(|\gamma(x)\exp(-G(u_{n}))T_{1}(u_{n}-T_{j}(u_{n}))^{-}| \leq e^{\frac{\|\rho\|_{L^{1}(\mathbb{R})}}{\alpha}}|\gamma(x)|\), we deduce $$\lim_{j\rightarrow\infty}\lim_{n\rightarrow\infty} \int _{\Omega}\gamma (x)\exp \bigl(-G(u_{n}) \bigr)T_{1} \bigl(u_{n}-T_{j}(u_{n}) \bigr)^{-}\,dx=0, $$ by Lebesgue's dominate convergence theorem, which implies (41). Step (ii). Taking \(\varphi =(T_{k}(u_{n})-T_{k}(v_{i}))^{-}[1-|T_{1}(u_{n}-T_{j}(u_{n}))|]\) and \(\varphi =(T_{k}(v_{i})-T_{k}(u_{n}))^{-}[1-|T_{1}(u_{n}-T_{j}(u_{n}))|]\) in (42) with \(j>k\), as in [2], we can deduce that, by passing to a subsequence if necessary, $$ DT_{k}(u_{n})\rightarrow DT_{k}(u) \quad \mbox{a.e. 
in } \Omega, $$ $$ Du_{n}\rightarrow Du \quad\mbox{a.e. in } \Omega. $$ Proof of Theorem 3.1 (1) We are going to show that as \(n\rightarrow\infty\), $$ g_{n}(x,u_{n}, Du_{n}) \rightarrow g(x,u, Du)\quad \mbox{in } L^{1}(\Omega). $$ Indeed, taking \(v=\exp(-G(T_{k}(u_{n})))\int_{T_{k}(u_{n})}^{0}\rho (s)\chi_{\{s<-h\}}\,ds\) as a test function in (32), we have $$\begin{aligned} & \int_{\Omega}a(x,u_{n},D u_{n})DT_{k}(u_{n}) \frac{\rho(T_{k}(u_{n}))}{\alpha}\exp \bigl(-G \bigl(T_{k}(u_{n}) \bigr) \bigr) \int_{T_{k}(u_{n})}^{0}\rho(s)\chi_{\{s< -h\}}\,ds \,dx \\ &\qquad{} + \int_{\Omega}a(x,u_{n},Du_{n})D T_{k}(u_{n}) \exp \bigl(-G \bigl(T_{k}(u_{n}) \bigr) \bigr)\rho \bigl(T_{k}(u_{n}) \bigr) \chi_{\{T_{k}(u_{n})< -h\}}\,dx \\ &\quad=\int_{\Omega}g_{n}(x,u_{n},Du_{n}) \exp \bigl(-G \bigl(T_{k}(u_{n}) \bigr) \bigr) \int _{T_{k}(u_{n})}^{0}\rho(s)\chi_{\{s< -h\}}\,ds\,dx \\ &\qquad{} - \int_{\Omega}\mu_{n}\exp \bigl(-G \bigl(T_{k}(u_{n}) \bigr) \bigr)\int_{T_{k}(u_{n})}^{0} \rho(s)\chi_{\{s< -h\}}\,ds\,dx. \end{aligned}$$ Using (13) and by Fatou's lemma and Lebesgue's theorem, we can deduce that $$\begin{aligned} & \int_{\Omega}\alpha M\bigl(|D u_{n}|\bigr)\exp \bigl(-G(u_{n}) \bigr) \frac{\rho(u_{n})}{\alpha} \int_{u_{n}}^{0} \rho(s)\chi_{\{s< -h\}}\,ds\,dx \\ &\qquad{} + \int_{\Omega}\alpha M\bigl(|D u_{n}|\bigr)\exp \bigl(-G(u_{n}) \bigr) \rho(u_{n})\chi_{\{u_{n}< -h\}}\,dx \\ &\quad\leq\int_{\Omega}g_{n}(x,u_{n},Du_{n}) \exp \bigl(-G(u_{n}) \bigr) \int_{u_{n}}^{0} \rho(s)\chi_{\{s< -h\}}\,ds\,dx \\ &\qquad{} - \int_{\Omega}\mu_{n}\exp \bigl(-G(u_{n}) \bigr)\int_{u_{n}}^{0} \rho(s)\chi _{\{s< -h\}}\,ds\,dx, \end{aligned}$$ $$\begin{aligned} &\int_{\Omega}\alpha M\bigl(|D u_{n}|\bigr)\exp \bigl(-G(u_{n}) \bigr) \rho(u_{n})\chi_{\{u_{n}< -h\}}\,dx \\ &\quad\leq \int_{\Omega}\gamma(x) \exp \bigl(-G(u_{n}) \bigr) \int_{u_{n}}^{0}\rho(s)\chi_{\{s< -h\}}\,ds \,dx \\ & \qquad{}- \int_{\Omega}\mu_{n}\exp \bigl(-G(u_{n}) \bigr)\int_{u_{n}}^{0} \rho(s)\chi_{\{ s< -h\}}\,ds\,dx. \end{aligned}$$ Since \(\rho\geq0\), we get $$\int_{u_{n}}^{0}\rho(s)\chi_{\{s< -h\}}\,ds \leq \int_{-\infty}^{-h}\rho(s)\,ds. $$ $$\begin{aligned} & \int_{\Omega}M\bigl(|D u_{n}|\bigr)\exp \bigl(-G(u_{n}) \bigr)\rho(u_{n})\chi_{\{u_{n}< -h\}}\,dx \\ & \quad\leq\frac{1}{\alpha}e^{\frac{\|\rho\|_{L^{1}(\mathbb {R})}}{\alpha}} \int_{-\infty}^{-h} \rho(s)\,ds \bigl(\|\gamma\|_{L^{1}(\Omega)}+\|\mu\|_{\mathcal{M}_{b}(\Omega)}\bigr) =C\int _{-\infty}^{-h}\rho(s)\,ds. \end{aligned}$$ Consequently, one has $$\int_{\Omega}M\bigl(|D u_{n}|\bigr)\rho(u_{n}) \chi_{\{u_{n}< -h\}}\,dx \leq C\int_{-\infty}^{-h}\rho(s) \,ds. $$ Letting \(h\rightarrow+\infty\), one has $$\int_{-\infty}^{-h}\rho(s)\,ds\rightarrow0. $$ $$\lim_{h\rightarrow+\infty}\sup_{n\in N} \int _{\{u_{n}< -h\}}M\bigl(|D u_{n}|\bigr)\rho(u_{n})\,dx=0. $$ Taking \(v=\exp(G(T_{k}(u_{n})))\int_{0}^{T_{k}(u_{n})}\rho(s)\chi_{\{ s>h\}}\,ds\) as a test function in (32), similarly we obtain that $$\lim_{h\rightarrow+\infty}\sup_{n\in N} \int _{\{u_{n}>h\}}M\bigl(|D u_{n}|\bigr)\rho(u_{n})\,dx=0. $$ $$ \lim_{h\rightarrow+\infty}\sup_{n\in N} \int _{\{|u_{n}|>h\}}M\bigl(|D u_{n}|\bigr)\rho(u_{n})\,dx=0. $$ Following the proof of step 1 in Theorem 3.1 of [2], we can deduce (45). (2) We will prove that \(a(x,u_{n},D u_{n})\rightharpoonup a(x,u,Du)\) weakly for \(\sigma(\Pi L_{Q\circ H^{-1}}, \Pi E_{\overline{Q\circ H^{-1}}})\). By (44), we have $$ a(x,u_{n},D u_{n})\to a(x,u,D u) \quad\mbox{a.e. in } \Omega. 
$$ By \(Q\in\mathcal{P}_{M}\) and (8), one has \(Q^{\prime\prime}/Q^{\prime}\leq M^{\prime\prime}/M^{\prime}\). Then $$\int\frac{Q^{\prime\prime}(t)}{Q^{\prime}(t)}\,dt \leq\int\frac{M^{\prime\prime}(t)}{M^{\prime}(t)}\,dt. $$ Thus, there exists a constant C such that \(\ln|Q^{\prime}(t)|\leq\ln|M^{\prime}(t)|+C\). Therefore, $$Q^{\prime}(t)\leq CM^{\prime}(t). $$ It implies that $$ Q(r)=\int_{0}^{r}Q^{\prime}(t)\,dt\leq C \int_{0}^{r}M^{\prime }(t)\,dt=CM(r). $$ Let \(s=H(r)\), then \(s=\frac{M\circ H^{-1}(s)}{H^{-1}(s)}\). By Young inequality we have $$M\circ H^{-1} \biggl(\frac{s}{2} \biggr)=\frac{s}{2} \cdot H^{-1} \biggl(\frac{s}{2} \biggr) \leq\frac{1}{2} \bar{M}(s)+\frac{1}{2}M\circ H^{-1} \biggl(\frac{s}{2} \biggr). $$ Hence $$ M\circ H^{-1} \biggl(\frac{s}{2} \biggr)\leq\bar{M}(s). $$ In view of (48) and (49), we get $$ \int_{\Omega}Q\circ H^{-1} \biggl( \frac{1}{2}c(x) \biggr)\,dx \leq C\int_{\Omega}M\circ H^{-1} \biggl( \frac{1}{2}c(x) \biggr)\,dx \leq C\int _{\Omega}\bar{M} \bigl(c(x) \bigr)\,dx< \infty. $$ Since \(\bar{M}^{-1}(M(|D u_{n}|))\leq2\frac{M(|D u_{n}|)}{|D u_{n}|}\), we have $$\frac{1}{2}\bar{M}^{-1} \bigl(M\bigl(|D u_{n}|\bigr) \bigr) \leq \frac{M(|D u_{n}|)}{|D u_{n}|}. $$ $$\int_{\Omega}Q\circ H^{-1} \biggl(\frac{1}{2} \bar{M}^{-1} \bigl(M\bigl(|D u_{n}|\bigr) \bigr) \biggr)\,dx \leq\int _{\Omega}Q\circ H^{-1} \biggl(\frac{M(|D u_{n}|)}{|D u_{n}|} \biggr) \,dx =\int_{\Omega}Q\bigl(|D u_{n}|\bigr)\,dx, $$ $$\int_{\Omega}Q\circ H^{-1} \biggl(\frac{1}{2} \bar{M}^{-1} \bigl(M\bigl(|u_{n}|\bigr) \bigr) \biggr)\,dx \leq\int _{\Omega} Q\bigl(|u_{n}|\bigr)\,dx. $$ For \(t>0\), by taking \(T_{h}(u_{n}-T_{t}(u_{n}))\) as a test function in (32), from (14) and (46), we can deduce that $$\int_{\{t< |u_{n}|\leq t+h\}}a(x,u_{n},D u_{n})D u_{n}\,dx \leq Ch, $$ where C is a constant independent of n, h, t, which gives $$\frac{1}{h}\int_{\{t< |u_{n}|\leq t+h\}}M\bigl(|D u_{n}|\bigr)\,dx\leq C, $$ and by letting \(h\to0\), $$-\frac{d}{dt}\int_{\{|u_{n}|>t\}}M\bigl(|D u_{n}|\bigr)\,dx\leq C. $$ Let now \(B\in\mathcal{P}_{M}\). Following the lines of [35], it is easy to deduce that $$\int_{\Omega}B\bigl(|D u_{n}|\bigr)\,dx\leq C,\quad \forall n. $$ This implies that \(\{u_{n}\}\) is bounded in \(W_{0}^{1}L_{Q}(\Omega)\) and converges to u strongly in \(L_{Q}(\Omega)\). Consequently, using the convexity of \(Q\circ H^{-1}\) and by (50), we have $$\begin{aligned} &\int_{\Omega}Q\circ H^{-1} \biggl(\frac{|a(x,u_{n},D u_{n})|}{6\beta} \biggr)\,dx \\ &\quad\leq\frac{1}{3}\int_{\Omega}Q\circ H^{-1} \biggl(\frac{1}{2}c(x) \biggr)\,dx +\frac{1}{3}\int _{\Omega}Q\circ H^{-1} \biggl(\frac{1}{2}\bar {M}^{-1} \bigl(M\bigl(|u_{n}|\bigr) \bigr) \biggr)\,dx \\ &\qquad{} +\frac{1}{3}\int_{\Omega}Q\circ H^{-1} \biggl( \frac{1}{2}\bar {M}^{-1} \bigl(M\bigl(|D u_{n}|\bigr) \bigr) \biggr)\,dx \\ &\quad\leq\frac{1}{3} \biggl[C\int_{\Omega}\bar{M} \bigl(c(x) \bigr)\,dx+\int_{\Omega }Q\bigl(|u_{n}|\bigr)\,dx+\int _{\Omega}Q\bigl(|D u_{n}|\bigr)\,dx \biggr]\leq C, \end{aligned}$$ where C is independent of n. Thus we get $$ a(x,u_{n},D u_{n})\rightharpoonup a(x,u,D u) \quad\mbox{weakly for } \sigma(\Pi L_{Q\circ H^{-1}} \Pi E_{\overline{Q\circ H^{-1}}}). $$ Thanks to (45) and (51) we can pass to the limit in (32) and we obtain that u is a solution of (15). □ An erratum to this article has been published. Porretta, A: Nonlinear equations with natural growth terms and measure data. Electron. J. Differ. Equ. Conf. 
09, 183-202 (2002) MathSciNet MATH Google Scholar Aharouch, L, Benkirane, A, Rhoudaf, M: Existence results for some unilateral problems without sign condition with obstacle free in Orlicz spaces. Nonlinear Anal. 68, 2362-2380 (2008) MathSciNet Article MATH Google Scholar Youssfi, A, Benkirane, A, El Moumni, M: Bounded solutions of unilateral problems for strongly nonlinear equations in Orlicz spaces. Electron. J. Qual. Theory Differ. Equ. 2013, 21 (2013) Breit, D, Stroffolini, B, Verde, A: A general regularity theorem for functionals with φ-growth. J. Math. Anal. Appl. 383, 226-233 (2011) Diening, L, Stroffolini, B, Verde, A: Everywhere regularity of functionals with φ-growth. Manuscr. Math. 129, 449-481 (2009) Fuchs, M: Local Lipschitz regularity of vector valued local minimizers of variational integrals with densities depending on the modulus of the gradient. Math. Nachr. 284, 266-272 (2011) Marcellini, P, Papi, G: Nonlinear elliptic systems with general growth. J. Differ. Equ. 221, 412-443 (2006) Aharouch, L, Rhoudaf, M: Existence of solutions for unilateral problems with \(l^{1}\) data in Orlicz spaces. Proyecciones 23, 293-317 (2004) MathSciNet Article Google Scholar Breit, D, Diening, L, Fuchs, M: Solenoidal Lipschitz truncation and applications in fluid mechanics. J. Differ. Equ. 253, 1910-1942 (2012) Dong, G: Elliptic equations with measure data in Orlicz spaces. Electron. J. Differ. Equ. 2008, 76 (2008) Elmahi, A, Meskine, D: Existence of solutions for elliptic equations having natural growth terms in Orlicz spaces. Abstr. Appl. Anal. 12, 1031-1045 (2004) Boccardo, L, Gallouet, T: Non-linear elliptic and parabolic equations involving measure as data. J. Funct. Anal. 87, 149-169 (1989) Dong, G, Shi, Z: An existence theorem for weak solutions for a class of elliptic partial differential systems in Orlicz spaces. Nonlinear Anal. 68, 1037-1042 (2008) Dong, G: An existence theorem for weak solutions for a class of elliptic partial differential systems in general Orlicz-Sobolev spaces. Nonlinear Anal. 69, 2049-2057 (2008) Fang, X, Hou, E, Dong, G: Solutions to the system of operator equations \(a_{1}x=c_{1}\), \(xb_{2}=c_{2}\), and \(a_{3}xb_{3}=c_{3}\) on Hilbert \(C^{*}\)-modules. Abstr. Appl. Anal. (2013). doi:10.1155/2013/826564 Fang, X, Yu, J: Solutions to operator equations on Hilbert \(C^{*}\)-modules ii. Integral Equ. Oper. Theory 68, 23-60 (2010) Fang, X, Yu, J, Yao, H: Solutions to operator equations on Hilbert \(C^{*}\)-modules. Linear Algebra Appl. 431, 2142-2153 (2009) Vecchio, T: Nonlinear elliptic equations with measure data. Potential Anal. 4, 185-203 (1995) Adams, R: Sobolev Spaces. Academic Press, New York (1975) Krasnosel'skii, M, Rutickii, Y: Convex Functions and Orlicz Space. Noordhoff, Groningen (1961) Gossez, J: Nonlinear elliptic boundary value problems for equations with rapidly (or slowly) increasing coefficients. Trans. Am. Math. Soc. 190, 163-205 (1974) Gossez, J: Some approximation properties in Orlicz-Sobolev spaces. Stud. Math. 74, 17-24 (1982) Aharouch, L, Bennouna, J: Existence and uniqueness of solutions of unilateral problems in Orlicz spaces. Nonlinear Anal. 72, 3553-3565 (2010) Benkirane, A, Elmahi, A: An existence for a strongly nonlinear elliptic problem in Orlicz spaces. Nonlinear Anal. 36, 11-24 (1999) Meskine, D: Parabolic equations with measure data in Orlicz spaces. J. Evol. Equ. 5, 529-543 (2005) Lieberman, G: The natural generalization of the natural conditions of Ladyzhenskaya an Ural'tseva for elliptic equations. Commun. Partial Differ. 
Equ. 16, 311-361 (1991) Landes, R: On Galerkin's method in the existence theory of quasilinear elliptic equations. J. Funct. Anal. 39, 123-148 (1980) Morrey, C: Multiple Integrals in the Calculus of Variations. Springer, York (1966) Rodrigues, J, Teymurazyan, R: On the two obstacles problem in Orlicz-Sobolev spaces and applications. Complex Var. Elliptic Equ. 56, 769-787 (2011) Benkirane, A, Bennouna, J: Existence and uniqueness of solution of unilateral problems with \(L^{1}\)-data in Orlicz spaces. Ital. J. Pure Appl. Math. 16, 87-102 (2004) García-Huidobro, M, Le, V, Manásevich, R, Schmitt, K: On principal eigenvalues for quasilinear elliptic differential operators: an Orlicz-Sobolev space setting. Nonlinear Differ. Equ. Appl. 6, 207-225 (1999) Benkirane, A, Emahi, A: Almost everywhere convergence of the gradients of solutions to elliptic equations in Orlicz spaces and application. Nonlinear Anal. 28, 1769-1784 (1997) Gossez, J, Mustonen, V: Variational inequalities in Orlicz-Sobolev spaces. Nonlinear Anal. 11, 379-392 (1987) Boccardo, L, Segura, S, Trombeti, C: Bounded and unbounded solutions for a class of quasi-linear elliptic problems with a quadratic gradient term. J. Math. Pures Appl. 80, 919-940 (2001) Talenti, G: Nonlinear elliptic equations, rearrangements of functions and Orlicz spaces. Ann. Mat. Pura Appl. (4) 120, 159-184 (1979) The authors are highly grateful for the referees' careful reading and comments on this paper. The first author was supported by 'Chen Guang' Project (supported by Shanghai Municipal Education Commission and Shanghai Education Development Foundation) (10CGB25), and Shanghai Universities for Outstanding Young Teachers' Scientific Research Selection and Training Special Fund (sjq08011). The second author was supported by the National Natural Science Foundation of China (11371279). Department of Mathematics, Tongji University, Siping Road, Shanghai, 200092, China Ge Dong & Xiaochun Fang Department of Basic Teaching, Shanghai Jianqiao College, Kangqiao Road 1500, Shanghai, 201319, China Ge Dong Xiaochun Fang Correspondence to Xiaochun Fang. All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript. The original version of this article was revised: In the publication of this article (1), there was an error in the References. The error in reference 3: 'Youssfi, A, Benkirane, A, Moumni, A: Bounded solutions of unilateral problems for strongly nonlinear equations in Orlicz spaces. Electron. J. Qual. Theory Differ. Equ. 2013, 21 (2013)' Should instead read: 'Youssfi, A, Benkirane, A, El Moumni, M: A: Bounded solutions of unilateral problems for strongly nonlinear equations in Orlicz spaces. Electron. J. Qual. Theory Differ. Equ. 2013, 21 (2013)'. An erratum to this article is available at https://doi.org/10.1186/s13661-017-0839-0. Dong, G., Fang, X. Existence results for some nonlinear elliptic equations with measure data in Orlicz-Sobolev spaces. Bound Value Probl 2015, 18 (2015). https://doi.org/10.1186/s13661-014-0278-0 nonlinear elliptic equations measure data
Spatial distribution and behavior of dissolved selenium speciation in the South China Sea and Malacca Straits during spring inter-monsoon period Wanwan Cao, Yan Chang, Shan Jiang, Jian Li, Zhenqiu Zhang, Jie Jin, Jianguo Qu, Guosen Zhang, Jing Zhang Selenium (Se) has been recognized as a key trace element that is associated with growth of primary producers in oceans. During March and May 2018, surface water (67 samples) was collected and measured by HG-ICP-MS to investigate the distribution and behavior of selenite [Se(IV)], selenate [Se(VI)] and dissolved organic selenides (DOSe) concentrations in the Zhujiang River Estuary (ZRE), South China Sea (SCS) and Malacca Straits (MS). It showed that Se(IV) (0.14–3.44 nmol/L) was the dominant chemical species in the ZRE, related to intensive manufacture in the watershed; while the major species shifted to DOSe (0.05–0.79 nmol/L) in the MS, associated with the wide coverage of peatland and intensive agriculture activities in the Malaysian Peninsula. The SCS was identified as the northern and southern sections (NSCS and SSCS) based on the variations of surface circulation. The insignificant variation of Se(IV) in the NSCS and SSCS was obtained in March, potentially resulting from the high chemical activity and related preferential assimilation by phytoplankton communities. Contrastively, the lower DOSe concentrations in the SSCS likely resulted from higher primary production and utilization during March. During May, the concentration of Se(IV) remained low in the NSCS and SSCS, while DOSe concentrations increased notably in the SSCS, likely due to the impact of terrestrial inputs from surface current reversal and subsequent accumulation. On a global scale, DOSe is the dominant Se species in tropical oceans, while Se(IV) and Se(VI) are major fractions in high-latitude oceans, resulting from changes in predominated phytoplankton and related biological assimilation. Analysis of the spatial and temporal distributions of ecological variables and the nutrient budget in the Beibu Gulf Huanglei Pan, Dishi Liu, Dalin Shi, Shengyun Yang, Weiran Pan Based on a hydrodynamic-ecological model, the temperature, salinity, current, phytoplankton (Chl a), zooplankton, and nutrient (dissolved inorganic nitrogen, DIN, and dissolved inorganic phosphorous, DIP) distributions in the Beibu Gulf were simulated and the nutrient budget of 2015 was quantitatively analyzed. The simulated results show that interface processes and monsoons significantly influence the ecological processes in the gulf. The concentrations of DIN, DIP, phytoplankton and zooplankton are generally higher in the eastern and northern gulf than that in the western and southern gulf. The key regions affected by ecological processes are the Qiongzhou Strait in winter and autumn and the estuaries along the Guangxi coast and the Red River in summer. In most of the studied domains, biochemical processes contribute more to the nutrient budget than do physical processes, and the DIN and DIP increase over the year. Phytoplankton plays an important role in the nutrient budget; phytoplankton photosynthetic uptake is the nutrient sink, phytoplankton dead cellular release is the largest source of DIN, and phytoplankton respiration is the largest source of DIP. The nutrient flux in the connected sections of the Beibu Gulf and open South China Sea (SCS) inflows from the east and outflows to the south. There are 113 709 t of DIN and 5 277 t of DIP imported from the open SCS to the gulf year-around. 
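The budget bookkeeping described above can be illustrated with a toy, single-box calculation. The sketch below is not the coupled hydrodynamic-ecological model used in the study; apart from the open-SCS DIN import quoted in the abstract, every rate is an invented placeholder.

```python
# Minimal sketch of nutrient-budget bookkeeping for one well-mixed box.
# NOT the coupled hydrodynamic-ecological model of the study; except for the
# open-SCS DIN import quoted in the abstract, all values are placeholders.

def net_annual_change(sources, sinks):
    """Net annual change (t/a) given dictionaries of source and sink terms."""
    return sum(sources.values()) - sum(sinks.values())

din_sources = {
    "import_from_open_SCS": 113_709.0,           # t/a, from the abstract
    "riverine_input": 50_000.0,                  # assumed
    "phytoplankton_dead_cell_release": 90_000.0, # assumed
}
din_sinks = {
    "phytoplankton_uptake": 200_000.0,           # assumed
    "outflow_to_south": 40_000.0,                # assumed
}

print("Net DIN change:", net_annual_change(din_sources, din_sinks), "t/a")
```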
Porewater-derived dissolved inorganic carbon and nutrient fluxes in a saltmarsh of the Changjiang River Estuary Xiaogang Chen, Jinzhou Du, Xueqing Yu, Xiaoxiong Wang Saltmarshes are among the most productive ecosystems and contribute significantly to coastal nutrient and carbon budgets. However, limited information is available on soil nutrient and carbon losses via porewater exchange in saltmarshes. Here, porewater exchange and associated fluxes of nutrients and dissolved inorganic carbon (DIC) in the largest saltmarsh wetland (Chongming Dongtan) in the Changjiang River Estuary were quantified. The porewater exchange rate was estimated to be (37±35) cm/d during December 2017 using a radon (222Rn) mass balance model. The porewater exchange delivered 67 mmol/(m2·d), 38 mmol/(m2·d) and 2 690 mmol/(m2·d) of dissolved inorganic nitrogen (DIN), dissolved silicon (DSi) and DIC into the coastal waters, respectively. The dominant species of porewater DIN was ${\rm NH}_4^+$ (>99% of DIN). However, in contrast to other ecosystems, the dissolved inorganic phosphorus (DIP) concentration in saltmarsh porewater was significantly lower than that in surface water, indicating that saltmarshes seem to be a DIP sink in Chongming Dongtan. The porewater-derived DIN, DSi and DIC accounted for 12%, 5% and 18% of the riverine inputs, respectively, and are important components of coastal nutrient and carbon budgets. Furthermore, porewater-derived nutrients had markedly high N/P ratios (160–3 995), indicating that the porewater exchange process may change the nutrient characteristics of the Changjiang River Estuary and further alter the coastal ecological environment. The decomposition rate of the organic carbon content of suspended particulate matter in the tropical seagrass meadows A'an Johan Wahyudi, Karlina Triana, Afdal Afdal, Hanif Budi Prayitno, Edwards Taufiqurrahman, Hanny Meirinawati, Rachma Puspitasari, Lestari Lestari, Suci Lastrini In terms of downward transport, suspended particulate matter (SPM) from marine or terrigenous sources is an essential contributor to the carbon cycle. Within mesoscale environments such as seagrass ecosystems, SPM flux is an essential part of the total carbon budget that is transported within the ecosystem. By assessing the total SPM transport from the water column to the sediment, potential carbon burial can be estimated. However, SPM may decompose or re-form aggregates during transport, so estimating the vertical flux without knowing the decomposition rate will lead to over- or underestimation of the total carbon budget. This paper presents the potential decomposition rate of the SPM in seagrass ecosystems in an attempt to elucidate the carbon dynamics of SPM. SPM was collected from the seagrass ecosystems located at Sikka and Sorong in Indonesia. In situ experiments using SPM traps were conducted to assess the vertical downward flux and decomposition rate of SPM. The isotopic profile of SPM was measured together with organic carbon and total nitrogen content. The results show that SPM was transported to the bottom of the seagrass ecosystem at a rate of up to (129.45±53.79) mg/(m2·h) (according to carbon). Considering the whole period of inundation of seagrass meadows, the SPM downward flux reached a maximum of 3 096 mg/(m2·d) (according to carbon). The decomposition rate was estimated to range from 5.9 µg/(mg·d) (according to carbon) to 26.6 µg/(mg·d) (according to carbon).
Considering the total downward flux of SPM in the study site, the maximum decomposed SPM was estimated at 39.9 mg/(m2·d) (according to carbon) and 82.6 mg/(m2·d) (according to carbon) for the study sites at Sorong and Sikka, respectively. The decomposed SPM can be 0.6%–2.7% of the total SPM flux, indicating that it is a small proportion of the total flux. The SPM in the Sorong and Sikka seagrass ecosystems shows an autochthonous tendency, with a primary composition of marine-end materials. Lipid biomarker composition in surface sediments from the Carlsberg Ridge near the Tianxiu Hydrothermal Field Shengyi Mao, Hongxiang Guan, Lihua Liu, Xiqiu Han, Xueping Chen, Juan Yu, Yongge Sun, Yejian Wang Hydrothermal venting has a profound effect on the chemical and biological properties of local and distal seawater and sediments. In this study, lipid biomarkers were analyzed to examine the potential influence of hydrothermal activity on the fate of organic matter (OM) in surface sediments around the Tianxiu Hydrothermal Field in the Carlsberg Ridge (CR), Northwest Indian Ocean. By comparing the biomarker distributions of the samples with those of other typical hydrothermal sediments on mid-ocean ridges, it is shown that the sampling locations are not affected by the hydrothermal activity. The relatively low abundances of terrestrial n-alkyl lipids and riverine 1,15-C32 diol suggested a minor contribution of terrigenous OM to the study area. Bacteria contributed predominantly to sedimentary marine OM; however, other marine source organisms, e.g., eukaryotes (i.e., phytoplankton and fungi), could not be completely neglected. The marine-originated biomarkers showed significantly variable distributions between the two sediments, suggesting different dynamic physical and biogeochemical processes controlling the fate of marine OM. This study identified various diagnostic biomarkers (5,5-diethyl alkanes, diols and β-OH FAs), which may have significant environmental implications for future work in this region. Spatiotemporal variations in the organic carbon accumulation rate in mangrove sediments from the Yingluo Bay, China, since 1900 Yao Zhang, Xianwei Meng, Peng Xia, Jun Zhang, Dahai Liu, Zhen Li, Wanzhu Wang Mangroves not only provide multiple ecosystem services but are also efficient carbon producers, capturers, and sinks. The estimation of the organic carbon accumulation rate (OCAR) in mangrove sediments is fundamental for elucidating the role of mangroves in the global carbon budget. In particular, understanding the past changes in the OCAR in mangrove sediments is vital for predicting the future role of mangroves in the rapidly changing environment. In this study, three dated sediment cores from the interior and fringe of mangroves in the Yingluo Bay, China, were used to reconstruct the spatiotemporal variations of the OCAR since 1900 in this area. The increasing OCAR in the mangrove interior was attributed to mangrove flourishing induced by climate change, characterized by rising temperature. However, in the mangrove fringe, the strengthening hydrodynamic conditions under sea level rise were responsible for the decreasing OCAR, particularly after the 1940s. Furthermore, the duration of inundation by seawater was the primary factor controlling the spatial variability of the OCAR from the mangrove fringe to the interior, while the strengthened hydrodynamic conditions after the 1940s broke this original pattern.
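For dated cores such as those in the mangrove study above, the OCAR is conventionally obtained as the product of the sediment accumulation rate, the dry bulk density and the organic carbon content. The sketch below only illustrates that arithmetic with invented values; it does not use the Yingluo Bay data.

```python
# Minimal sketch of how an organic carbon accumulation rate (OCAR) is usually
# computed from a dated sediment core. The numbers are illustrative assumptions,
# not values reported for the Yingluo Bay cores.

def ocar(sar_cm_per_yr, dry_bulk_density_g_cm3, oc_fraction):
    """OCAR in g C m^-2 yr^-1 from sediment accumulation rate, dry bulk density
    and organic-carbon mass fraction (g C per g dry sediment)."""
    g_c_per_cm2_yr = sar_cm_per_yr * dry_bulk_density_g_cm3 * oc_fraction
    return g_c_per_cm2_yr * 1e4   # 10^4 cm^2 per m^2

# Hypothetical example: 0.5 cm/yr, 0.8 g/cm^3, 2% organic carbon
print(round(ocar(0.5, 0.8, 0.02), 1), "g C m^-2 yr^-1")   # -> 80.0
```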
The influence of coupling mode of methane leakage and debris input on anaerobic oxidation of methane Rui Xie, Daidai Wu, Jie Liu, Guangrong Jin, Tiantian Sun, Lihua Liu, Nengyou Wu Anaerobic oxidation of methane (AOM) is an important biogeochemical process with significant implications for global climate change and atmospheric evolution. This research examined the δ34S, the terrigenous clastic indices of TiO2 and Al2O3, and the times of formation of the Ba front at site SH1, site SH3 and site 973-4 in the South China Sea. Three different coupling mechanisms of deposition rate and methane flux were discovered. The different coupling mechanisms had different effects on the role of AOM. At site 973-4, a high deposition rate caused a rapid vertical downward migration of the sulphate–methane transition zone (SMTZ), and the higher input resulted in mineral dissolution. At site SH3, the deposition rate and methane flux were basically in balance, so the SMTZ and paleo-SMTZ were the most stable of any site, and these were in a slow process of migration. At site SH1, the methane flux dominated the coupled mode, so the movement of the SMTZ at site SH1 was consistent with the general understanding. Identifying the factors influencing the SMTZ is important for understanding the early diagenesis process. Population dynamics and condition index of natural stock of blood cockle, Tegillarca granosa (Mollusca, Bivalvia, Arcidae) in the Marudu Bay, Malaysia Joanna W. Doinsing, Vienna Anastasia Admodisastro, Laditah Duisan, Julian Ransangan The population parameters of blood cockles, Tegillarca granosa, in the intertidal zone of Marudu Bay, Sabah, Malaysia, were investigated based on monthly length-weight frequency data (July 2017 to June 2018). A total of 279 cockle individuals with shell length and weight ranging from 27.7 mm to 82.2 mm and 13.11 g to 192.7 g were subjected to analysis. T. granosa in Marudu Bay showed a consistent, moderately high condition index of 4.98±0.86 throughout the year. The exponent b of the length-weight relationship was 2.6, demonstrating negative allometric growth. The asymptotic length (L∞), growth coefficient (K) and growth performance index (ϕ) of the T. granosa population in Marudu Bay were estimated at 86.68 mm, 0.98 a–1 and 3.87, respectively. The observed maximum shell length was 82.55 mm and the predicted maximum shell length was 84.44 mm, with an estimated maximum life span (tmax) of 3.06 years. The estimated mean lengths at the end of 2, 4, 6, 8, 10 and 12 months of age were 21.31 mm, 31.16 mm, 39.53 mm, 46.63 mm, 52.67 mm and 57.79 mm. Total, natural, and fishing mortalities were estimated at 2.39 a–1, 1.32 a–1 and 1.07 a–1. The exploitation level (E) was 0.45. Results of the current study also demonstrated that T. granosa in the Marudu Bay has two major recruitment peaks: one in March and another in October. The exploitation level revealed that the natural stock of T. granosa in the Marudu Bay was approaching the maximum exploitation level. If this trend continues or demand for T. granosa increases, and no effective fisheries management is put in place, the T. granosa population in the Marudu Bay is increasingly likely to collapse.
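The T. granosa growth figures quoted above can be cross-checked with the von Bertalanffy growth function and the standard growth performance index. In the sketch below, L∞ and K come from the abstract, while t0 is an assumed value (the abstract does not report it) chosen so that the predicted lengths-at-age match the quoted ones.

```python
import math

# Cross-check of the Tegillarca granosa growth parameters quoted above using the
# von Bertalanffy growth function (VBGF). L_inf and K are from the abstract;
# t0 is NOT reported there, so the value below is an assumption chosen to
# reproduce the quoted mean lengths-at-age (to within about 0.1 mm).

L_inf = 86.68   # asymptotic shell length, mm (from the abstract)
K = 0.98        # growth coefficient, 1/yr   (from the abstract)
t0 = -0.12      # yr, assumed (not given in the abstract)

# Growth performance index phi' = log10(K) + 2*log10(L_inf)
phi = math.log10(K) + 2 * math.log10(L_inf)
print(f"phi' = {phi:.2f}")            # ~3.87, matching the reported value

def vbgf(t_yr):
    """Mean shell length (mm) at age t (years) under the VBGF."""
    return L_inf * (1.0 - math.exp(-K * (t_yr - t0)))

for months in (2, 4, 6, 8, 10, 12):
    print(months, "months:", round(vbgf(months / 12.0), 2), "mm")
```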
Feeding ecology of Japanese Spanish mackerel (Scomberomorus niphonius) along the eastern coastal waters of China Haozhi Sui, Ying Xue, Yunkai Li, Binduo Xu, Chongliang Zhang, Yiping Ren Feeding activities provide necessary nutrition and energy to support the reproduction and development of fish populations. The feeding ecology and dietary plasticity of fish are important factors determining their recruitment and population dynamics. As a top predator, Japanese Spanish mackerel (Scomberomorus niphonius) supports one of the most valuable fisheries in China. In this study, the feeding ecology and diet composition of Japanese Spanish mackerel spawning groups were analysed based on samples collected from six spawning grounds along the eastern coastal waters of China during spring (March to May) in 2016 and 2017. Both stomach contents and stable isotope analysis were conducted. Stomach content analysis showed that spawning groups of Japanese Spanish mackerel mainly fed on fish, consuming more than 40 different prey species. Diets were significantly different among sampling locations. The most important prey species were Stolephorus in Fuzhou, Japanese anchovy Engraulis japonicus in Xiangshan, Euphausia pacifica in Lüsi, sand lance Ammodytes personatus in Qingdao and Weihai, and Leptochela gracilis in Laizhou Bay. Stable isotope analysis showed that the trophic level of Japanese Spanish mackerel was relatively high and generally increased with latitude from south to north. In the 1980s, the diet of Japanese Spanish mackerel was dominated solely by Japanese anchovies in the eastern coastal waters of China. The results in the present study showed that the importance of Japanese anchovies declined considerably, and this fish was not the most dominant diet in most of the investigated waters. Both the spatial variations in diet composition and changes in the dominant diet over the long term indicated the high adaptability of Japanese Spanish mackerel to the environment. Combining the results of stomach analysis and stable isotope analysis from different tissues provided more comprehensive and accurate dietary information on Japanese Spanish mackerel. The study provides essential information about the feeding ecology of Japanese Spanish mackerel and will benefit the management of its populations in the future. Developing an intermediate-complexity projection model for China's fisheries: A case study of small yellow croaker (Larimichthys polyactis) in the Haizhou Bay, China Ming Sun, Yunzhou Li, Yiping Ren, Yong Chen Projection models are commonly used to evaluate the impacts of fishing. However, previously developed projection tools were not suitable for China's fisheries as they are either overly complex and data-demanding or too simple to reflect the realistic management measures. Herein, an intermediate-complexity projection model was developed that could adequately describe fish population dynamics and account for management measures including mesh size limits, summer closure, and spatial closure. A two-patch operating model was outlined for the projection model and applied to the heavily depleted but commercially important small yellow croaker (Larimichthys polyactis) fishery in the Haizhou Bay, China, as a case study. The model was calibrated to realistically capture the fisheries dynamics with hindcasting. Three simulation scenarios featuring different fishing intensities based on status quo and maximum sustainable yield (MSY) were proposed and evaluated with projections. 
Stochastic projections were additionally performed to investigate the influence of uncertainty associated with recruitment strengths and the implementation of control targets. It was found that fishing at FMSY level could effectively rebuild the depleted stock biomass, while the stock collapsed rapidly in the status quo scenario. Uncertainty in recruitment and implementation could result in variabilities in management effects; but they did not much alter the management effects of the FMSY scenario. These results indicate that the lack of science-based control targets in fishing mortality or catch limits has hindered the achievement of sustainable fisheries in China. Overall, the presented work highlights that the developed projection model can promote the understanding of the possible consequences of fishing under uncertainty and is applicable to other fisheries in China. Comparing different spatial interpolation methods to predict the distribution of fishes: A case study of Coilia nasus in the Changjiang River Estuary Shaoyuan Pan, Siquan Tian, Xuefang Wang, Libin Dai, Chunxia Gao, Jianfeng Tong Spatial-temporal distribution of marine fishes is strongly influenced by environmental factors. To obtain a more continuous distribution of these variables usually measured by stationary sampling designs, spatial interpolation methods (SIMs) is usually used. However, different SIMs may obtain varied estimation values with significant differences, thus affecting the prediction of fish spatial distribution. In this study, different SIMs were used to obtain continuous environmental variables (water depth, water temperature, salinity, dissolved oxygen (DO), pH, chlorophyll a and chemical oxygen demand (COD)) in the Changjiang River Estuary (CRE), including inverse distance weighted (IDW) interpolation, ordinary Kriging (OK) (semivariogram model: exponential (OKE), Gaussian (OKG) and spherical (OKS)) and radial basis function (RBF) (regularized spline function (RS) and tension spline function (TS)). The accuracy and effect of SIMs were cross-validated, and two-stage generalized additive model (GAM) was used to predict the distribution of Coilia nasus from 2012 to 2014 in CRE. DO and COD were removed before model prediction due to their autocorrelation coefficient based on variance inflation factors analysis. Results showed that the estimated values of environmental variables obtained by the different SIMs differed (i.e., mean values, range etc.). Cross-validation revealed that the most suitable SIMs of water depth and chlorophyll a was IDW, water temperature and salinity was RS, and pH was OKG. Further, different interpolation results affected the predicted spatial distribution of Coilia nasus in the CRE. The mean values of the predicted abundance were similar, but the differences between and among the maximum value were large. Studies showed that different SIMs can affect estimated values of the environmental variables in the CRE (especially salinity). These variations further suggest that the most applicable SIMs to each variable will also differ. Thus, it is necessary to take these potential impacts into consideration when studying the relationship between the spatial distribution of fishes and environmental changes in the CRE. 
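Of the interpolation methods compared above, inverse distance weighting is the simplest to state. The sketch below is a generic illustration, not the study's GIS implementation; the station coordinates and salinity values are made up.

```python
import numpy as np

# Minimal sketch of inverse distance weighted (IDW) interpolation, the simplest
# of the methods compared above. Not the implementation used in the study; the
# station coordinates and salinity values below are invented for illustration.

def idw(x0, y0, xs, ys, zs, power=2.0, eps=1e-12):
    """Estimate the value at (x0, y0) from stations (xs, ys) with values zs."""
    d = np.hypot(np.asarray(xs) - x0, np.asarray(ys) - y0)
    if np.any(d < eps):                      # point coincides with a station
        return float(np.asarray(zs)[d.argmin()])
    w = 1.0 / d**power
    return float(np.sum(w * np.asarray(zs)) / np.sum(w))

# Hypothetical stations (longitude, latitude, salinity)
lon = [121.9, 122.1, 122.3, 122.0]
lat = [31.0, 31.2, 31.1, 30.9]
sal = [12.5, 18.0, 22.4, 10.8]

print(round(idw(122.05, 31.05, lon, lat, sal), 2))
```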
Spatio-temporal distribution of Konosirus punctatus spawning and nursing ground in the South Yellow Sea Xiangyu Long, Rong Wan, Zengguang Li, Yiping Ren, Pengbo Song, Yongjun Tian, Binduo Xu, Ying Xue In recent years, Konosirus punctatus has accounted for a large portion in catch composition and become important economic species in the South Yellow Sea. However, the distribution of K. punctatus early life stages is still poorly understood. In this study, generalized additive models with Tweedie distribution were used to analyze the relationships between K. punctatus ichthyoplankton and environmental factors (longitude and latitude, sea surface temperature (SST), sea surface salinity (SSS) and depth), and predict distribution K. punctatus spawning ground and nursing ground, based on samplings collected in 6 months during 2014–2017. The results showed that K. punctatus' spawning ground were mainly distributed in central and north study area (from 33.0°N to 37.0°N). By comparison, the nursing ground shifted southward, which were approximately located along central and south coast of study area (from 31.7°N to 35.5°N). The optimal models identified that suitable SST, SSS and depth for eggs were 19–26°C, 25–30 and 9–23 m, respectively. The suitable SSS for larvae were 29–31. The K. punctatus spawning habit might have changed in the past decades, which was a response to increasing SST and fishing pressure. That needs to be proved in further study. The study provides references of conservation and exploitation for K. punctatus. Influences of environmental factors on the spawning stock-recruitment relationship of Portunus trituberculatus in the northern East China Sea Li Gao, Yingbin Wang Based on the Ricker-type models, the spawning stock-recruitment (S-R) relationship of Portunus trituberculatus was analysed under the impacts of environmental factors (including red tide area (AORT), sea level height (SLH), sea surface salinity (SSS) and typhoon landing times (TYP)) in the northern East China Sea in 2001 and 2014. Besides the traditional Ricker model, two other Ricker-type S-R models were built: Ricker model with ln-linear environmental impact (Ricker-type 2) and Ricker model with ln-quadratic polynomial environmental impact (Ricker-type 3). Results showed that AORT, SLH, SSS and TYP had great influences on the recruitment of P. trituberculatus. When SSS reached 29 and 31, recruitment decreased from 20.7×103 million to 8.3×103 million individuals. In this case, recruitment declined, whereas AORT and TYP increased. Analysis of the S-R model showed that the Akaike information criterion (AIC) value of the traditional Ricker model was 14.619, which remarkably decreased after addition of the environmental factors. Different numbers of environmental factors were added to the Ricker model, and the best result was obtained when four factors were added to the model together. Moreover, Ricker-type 2 model, with the AIC value of −5.307, was better than Ricker-type 3 model (add above four environmental factors at the same time). The findings indicated that the mechanisms by which various environmental factors affect the S-R relationship are different. Application of DNA metabarcoding to characterize the diet of the moon jellyfish Aurelia coerulea polyps and ephyrae Tingting Sun, Lei Wang, Jianmin Zhao, Zhijun Dong Dietary studies of polyps and ephyrae are important to understand the formation and magnitude of jellyfish blooms and provide important insights into the marine food web. 
However, the diet of polyps and ephyrae in situ is largely unknown. Here, prey species of the polyps and ephyrae of the moon jellyfish Aurelia coerulea in situ were identified using high-throughput DNA sequencing techniques. The results show that A. coerulea polyps and ephyrae consume a variety of prey items. The polyps consume both planktonic and benthic prey, including hydromedusae, copepods, ciliates, polychaetes, stauromedusae, and phytoplankton. A. coerulea ephyrae mainly feed on copepods and hydromedusae. Gelatinous zooplankton, including Rathkea octopunctata and Sarsia tubulosa, were frequently found as part of the diet of A. coerulea polyps and ephyrae. The utilization of high-throughput sequencing technique is a useful tool for studying the diet of polyps and ephyrae in the field, complementing the traditional techniques towards a better understanding of the complex role of gelatinous animals in marine ecosystems. Characterization of DNA polymerase δ from deep-sea hydrothermal vent shrimp Rimicaris exoculata Wenlin Wu, Hongyun Li, Tiantian Ma, Xiaobo Zhang DNA polymerase δ (Polδ) plays a crucial and versatile role in DNA replication and DNA repair processes. Vent shrimp Rimicaris exoculata is the primary megafaunal community living in hydrothermal vents. In this study, the Polδ from shrimp Rimicaris exoculata was cloned, expressed and characterized. The results showed that the Polδ catalytic subunit (POLD1), 852 amino acids in length, shared high homology with crayfish Procambarus clarkii and shrimp Oratosquilla oratoria. The recombinant POLD1 expressed in Escherichia coli showed that the enzyme was active in a range of 20°C to 40°C with an optimum temperature at 25°C and in a wide range of pH with an optimum at pH 6.0. The activities of POLD1 were significantly enhanced in the presence of Triton-X 100, Tween 20 and Mn2+. The Km (dNTP) value of POLD1 was 4.7 μmol/L. The present study would be helpful to reveal the characterization of Polδ of deep-sea vent animals. Evidence of return of chum salmon released from Tangwang River by strontium marking method Jilong Wang, Wei Liu, Peilun Li, Fujiang Tang, Wanqiao Lu, Jian Yang, Tao Jiang In order to assess the effect of enhancement release of chum salmon (Oncorhynchus keta), otolith strontium (Sr) marking method was used to tag chum salmon released in Tangwang River in 2016. The homing chum salmon were detected and the samples were collected in Tangwang River, Ussuri River and Suifen River in the autumn of 2018. The samples were analyzed by examining Sr and calcium (Ca) fingerprints in the otolith using electron probe microanalysis. The results suggested that two samples collected in Tangwang River had the marking ring near the core of otolith where the Sr concentration and Sr/Ca ratio were significantly higher than comparative samples. Proving that the two fish belonged to the released population in Tangwang River in 2016. This article indicated the success of the enhancement release of chum salmon from the Tangwang River for the first time and also confirmed the validity of Sr marking in enhancement release of fishes. Application research of narrow band Internet of things buoy and surface hydrodynamics monitoring Yiqun Xu, Jia Wang, Li Guan This paper applies the narrow band Internet of things communication technology to develop a wireless network equipment and communication system, which can quickly set up a network with a radius of 100 km on water surface. 
A disposable micro buoy based on the narrow-band Internet of things and the Beidou positioning function is also developed and used to collect surface hydrodynamic data online. In addition, a web-based public service platform is designed for the analysis and visualization of the data collected by the buoys. Combined with satellite remote sensing data, the study carries out a series of marine experiments and studies, such as sediment deposition tracking and floating garbage tracking.
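The paper summarized above does not specify its uplink message format. As a purely illustrative sketch of the kind of compact payload a narrow-band IoT drifter typically sends, the following packs a timestamp, a Beidou position and a drift speed into 15 bytes; the field layout is an assumption, not the authors' protocol.

```python
import struct, time

# Illustrative sketch only: the paper does not specify its payload format, so
# the field layout below is an assumption, not the authors' protocol. NB-IoT
# uplinks are small, so positions are scaled to 32-bit integers.

def pack_observation(lat_deg, lon_deg, speed_cm_s, battery_pct, t=None):
    """Pack one buoy observation into a 15-byte big-endian message."""
    t = int(t if t is not None else time.time())
    return struct.pack(
        ">IiihB",
        t,                           # UNIX time, seconds
        int(round(lat_deg * 1e6)),   # latitude,  micro-degrees
        int(round(lon_deg * 1e6)),   # longitude, micro-degrees
        int(round(speed_cm_s)),      # surface drift speed, cm/s
        int(battery_pct),            # battery level, %
    )

msg = pack_observation(21.475312, 109.103458, 87, 93)  # hypothetical fix
print(len(msg), msg.hex())
```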
On the other hand, other atmospheric gases such as CO2 and CH4 are strong absorbers of infrared radiation and therefore are crucial in determining the temperature of the atmosphere - they are greenhouse gases. J. Belak, R. Lesar, and R. D. Etters, J. Chem. To get a sense for the magnitude of electric dipole moments, 1 D corresponds to about 0.21 e Å, where e is the elementary charge. Fig. conversion from wavenumber to period is given by: Convert wavenumbers to Angular Frequency, ω. Angular frequency is the angular displacement per second, or 2π/T: The zero-point energy of a polyatomic molecule is the sum This implies that diatomic molecules whose dipole moment doesn't change with displacement don't absorb any infrared radiation. So the three normal modes of vibration for water have the symmetries A 1, A 1 and B 1. The probability density of the ground state shows that even in this lowest-energy state, the chemical bond length is not sharply defined. For example, hydrogen fluoride HF with \(m_1\approx 1\,\mathrm{u}\) and \(m_2\approx 19\,\mathrm{u}\) has a reduced mass of approximately \(1\,\mathrm{u}\cdot 19\,\mathrm{u}/(1\,\mathrm{u}+19\,\mathrm{u}) \approx 0.95\,\mathrm{u}\). For a large molecule, this vibrational zero-point energy can be substantial. Right: Simple case of a neutral diatomic molecule with partial positive and negative charges on the two atoms. rocking: This motion is like a pendulum on a clock going back and forth only here an atom is the pendulum and there are two instead of one. The shapes of real-world inter-atomic potentials are derived from experimental data. Phys. has the value -1.1141, and represents the derivative of the "x" coordinate of 4.5 The wavefunctions of the harmonic oscillator. For this operation, the cgs system will be used. A nitrogen molecule, N2, provides a good, The vibration frequencies,νi are obtained from the eigenvalues,λi, of the matrix product GF. For a transition from level n to level n+1 due to absorption of a photon, the frequency of the photon is equal to the classical vibration frequency The difference is mostly due to the difference in force constants (a factor of 5), and not from the difference in reduced mass (9.5 u vs. 7 u). and associate eigenvectors, ψi. Therefore, the theory of molecular vibrations is based on the harmonic potential. NOSYM, and Both parts needed for the calculation of Tμ, the "displacement" for a mode involves movements of many atoms, with varying relative amplitude. The reduced mass of hydrogen fluoride, 1H19F, is 0.95 u, and the force constant of the bond is 959 N/m. Szabó and R. Scipioni, small explanation of vibrational spectra and a table including force constants, Character tables for chemically important point groups, Resonance-enhanced multiphoton ionization, Cold vapour atomic fluorescence spectroscopy, Conversion electron Mössbauer spectroscopy, Glow-discharge optical emission spectroscopy, Inelastic electron tunneling spectroscopy, Vibrational spectroscopy of linear molecules, https://en.wikipedia.org/w/index.php?title=Molecular_vibration&oldid=988010362, Creative Commons Attribution-ShareAlike License, Stretching: a change in the length of a bond, such as C–H or C–C, Bending: a change in the angle between two bonds, such as the HCH angle in a methylene group. The bond delocalization depends on the reduced mass and on the force constant. where "A" refers to atom A, and "x", "y", and "z" refer The final step in preparing the matrix for Let's revisit what this means. 
The functions \(H_n(\cdot)\) are called Hermite polynomials (see Appendix). They are independent vibrations that can simultaneously occur in a molecule. 3. Phys. Nitrogen has only one non-trivial vibration, so: and the eigenvector: ψ1 Comparison between a real-world inter-atomic potential and the harmonic potential. We can characterize the state of such a spring by its length \(r\). This is slightly different to the renormalization used in this analysis. For these, the energy difference is \(\Delta E = \hbar\omega_\mathrm{e}\), independent of the value of \(n\). output can be used in The first term (\(V(r_\mathrm{e})=-D_\mathrm{e}\)) shifts all energies by the same amount and does not affect any observed transitions. Their number is given by 3N − 6 (3N − 5 for a linear molecule). Since the molecular geometry can distort along each of these degrees of freedom, these constitute vibrational normal modes. In this case, $\mathrm{ZPVE}=\omega_1/2$. If it is anharmonic or has rovibrational coupling, even the expression for a diatomic does not allow you to determine $\omega_1$ from just the ZPVE. A. F. Goncharov, E. Gregoryanz, H. Mao, Z. Liu, and R. J. Hemley, Phys. Followup (SO 2) Would CO 2 and SO 2 have a different number for degrees of vibrational freedom? Displacement. Skoog, D. A.; Holler, F. J.; Crouch, S. R. https://simple.wikipedia.org/w/index.php?title=Molecular_vibrations&oldid=6674266, Creative Commons Attribution/Share-Alike License. The more atoms in the molecule the more ways they can be combined. D. Schiferl, D. T. Cromer, and R. L. Mills, High Temp.-High Press. Illustrations of symmetry–adapted coordinates for most small molecules can be found in Nakamoto.[6]. Think of the atoms as round balls that are attached by a spring that can stretch back and forth. The most common of these is the Lennard-Jones potential: Fig. In general, the reduced mass of a diatomic molecule, AB, is expressed in terms of the atomic masses, mA and mB, as μ = mAmB/(mA + mB). Each new vibrational mode is basically a different combination of the six shown above. femtoseconds. where N is the number of atoms, for HCl, This is useful because like a spring, a bond requires energy to stretch it out and it also takes energy to squeeze it together. These subfields are known as Near IR, Mid IR and Far IR spectroscopy. A fundamental vibration is evoked when one such quantum of energy is absorbed by the molecule in its ground state.
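The diatomic formulas scattered through the text above can be checked numerically with the hydrogen fluoride values quoted there (reduced mass of roughly 0.95 u and force constant of roughly 959 N/m). The sketch below recovers a harmonic wavenumber close to the observed H-F stretch.

```python
import math

# Worked check of the diatomic harmonic-oscillator formulas using the HF values
# quoted above (reduced mass ~0.95 u, force constant ~959 N/m).

u = 1.66054e-27          # kg per atomic mass unit
c = 2.99792458e10        # speed of light, cm/s
hbar = 1.054572e-34      # J*s

mu = 0.95 * u            # reduced mass, kg
k = 959.0                # force constant, N/m

omega = math.sqrt(k / mu)            # angular frequency, rad/s
nu = omega / (2.0 * math.pi)         # frequency, Hz
wavenumber = nu / c                  # cm^-1
zpe = 0.5 * hbar * omega             # zero-point energy, J

print(f"omega      = {omega:.3e} rad/s")       # ~7.8e14
print(f"nu         = {nu:.3e} Hz")             # ~1.24e14
print(f"wavenumber = {wavenumber:.0f} cm^-1")  # ~4140, close to the HF stretch
print(f"ZPE        = {zpe:.2e} J (~{zpe/1.602e-19:.2f} eV)")
```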
Computing relative Lie algebra cohomology (as appears in Borel-Weil-Bott theorem) Suppose $G$ is a complex Lie group, $P$ a Borel subgroup, $E$ a representation of $P$ that induces a vector bundle ${\cal E}$ over $G/P$. The general version of the Borel-Weil-Bott theorem, as stated in Bott's 1957 paper, says that $H^*(G/P,{\cal E}) = \sum K\otimes H^*(p,v,{\rm Hom}(K,E))$, where $p$ is the Lie algebra of $P$, $v$ the Lie algebra of the intersection of $P$ with the maximal compact subgroup $M$ of $G$, and the sum is over all irreducible representations $K$ of $M$. My question is how to compute the relative Lie algebra cohomology appearing on the RHS of this formula in practice, say when $M$ is of ADE type (and $G$ its complexification). I understand that in the degree $0$ case, ${\rm H}^0$ is computed simply as homomorphisms from $K$ to $E$ over $p$. What is an efficient way to compute the higher cohomology groups? Also: the more commonly seen version of the theorem deals with line bundles ($E$ being a $1$-dimensional representation of $P$). In this case, the RHS is usually expressed in terms of the highest weight representation given by a Weyl group transformation of the weight vector associated with that $1$-dimensional representation. How does this result follow from the general formula above expressed in terms of relative Lie algebra cohomology, and in particular, why does the length of the Weyl group element translate into the degree of the cohomology group? rt.representation-theory vector-bundles lie-algebra-cohomology flag-varieties Just in case you didn't already know this: Proposition 6.3 of Kostant (ams.org/mathscinet-getitem?mr=142696) recasts Bott's theorem in a way that bypasses relative Lie algebra cohomology. (See especially loc. cit., equation (6.3.1). Kostant then shows how this reduces the computation of $H^*(G/P,\mathcal{E})$ to his Theorem 5.14.) – Francois Ziegler As Francois points out, Kostant's Annals paper (available online via JSTOR) is a basic reference. Aside from that, I'd urge you to edit your notation to make the distinction between Lie groups and Lie algebras precise. Your formulation is confusing. – Jim Humphreys The theorems of Borel-Weil and then Bott, along with Kostant's translation of the ideas into the language of Lie algebra cohomology, do much to illuminate classical representation theory (Cartan-Weyl) but probably can't be viewed as a computational tool. As in other situations, cohomological language provides a natural setting for concrete older ideas and also suggests new possibilities (for instance in the work of Schmid on infinite dimensional Lie group representations). But in Lie theory you don't usually get easier ways to deal with the combinatorics. While Bott approached the subject rather indirectly, the idea of Borel and Weil is fairly direct and geometric: realize irreducible finite dimensional representations of compact (or complex) semisimple Lie groups in terms of the cohomology of associated line bundles relative to the flag variety. For more general vector bundles, as in Bott's paper, you can expect less explicit results but perhaps more flexibility in the methods. By using the then-recent spectral sequence methods of Grothendieck in algebraic geometry, Demazure was able to streamline the original Borel-Weil-Bott treatments (working over an algebraically closed field of characteristic 0).
He wrote a short paper (in French) here and then a much shorter version of the main step (in English) here. In all these papers the notation tends to differ a lot, but once you get into the spirit of Demazure's short proof you may be able to see more clearly how the lengths of Weyl group elements correlate with the possible degrees of nonvanishing cohomology groups for arbitrary line bundles. Basically the argument goes step-by-step inductively, starting with a single simple reflection to get from a weight in the dominant Weyl chamber to a weight in an adjacent chamber where the same irreducible representation typically occurs but with a new non-dominant weight attached. (Here the Weyl group reflections occur with origin shifted to $-\rho$, since a canonical line bundle is also involved.) Again I'd emphasize that no new representations turn up (unless you venture into prime characteristic), but Bott's theorem and by implication the Lie algebra interpretation may seem more natural. Since the combinatorial description of these representations is already intricate (in terms of weight spaces and characters), one has to expect things to get much more complicated to compute for arbitrary vector bundles. But at least there is a coherent pattern. Jim HumphreysJim Humphreys Not the answer you're looking for? Browse other questions tagged rt.representation-theory vector-bundles lie-algebra-cohomology flag-varieties or ask your own question. Highest weights of the restriction of an irreducible representation of a simple group to a Levi subgroup Relative Lie Algebra cohomology and sheaf cohomology Is there a generalization of Borel-Weil-Bott theorem for not completely reducible vector bundles? T-bundles and the Borel-Weil-Bott theorem Cohomology ring of a flag variety and representation theory What are cohomology of Lie algebra with coefficients geometrically? What's the most simple proof of Kostant's version of Borel-Weil-Bott for Lie Algebra cohomology?
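As an editorial illustration (not part of the question or answer above) of how the length of the Weyl group element becomes the cohomological degree, here is the line-bundle case for $G=\mathrm{SL}_2(\mathbb{C})$ with Borel subgroup $B$, so $G/B\cong\mathbb{P}^1$, stated under one common normalization (sign conventions for $\mathcal{O}(n)$ vary between references):
$$ W=\{e,s\},\qquad \ell(e)=0,\ \ \ell(s)=1,\qquad \rho\leftrightarrow 1,\qquad w\cdot\lambda := w(\lambda+\rho)-\rho, $$
$$ H^q(\mathbb{P}^1,\mathcal{O}(n)) \;=\; \begin{cases} V_n, & q=0,\ n\ge 0 \quad (w=e),\\ 0\ \text{for all } q, & n=-1 \quad (n+\rho\ \text{singular}),\\ V_{-n-2}, & q=1,\ n\le -2 \quad (w=s,\ s\cdot n=-n-2), \end{cases} $$
where $V_m$ denotes the irreducible $\mathrm{SL}_2$-module of highest weight $m$ (dimension $m+1$). The single nonvanishing degree equals $\ell(w)$ for the unique $w$ making $w\cdot n$ dominant, which is exactly the pattern that Demazure's inductive argument produces one simple reflection at a time.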
Why is a holomorphic map between compact connected Riemann surfaces a branched covering?

I have seen it claimed that a non-constant holomorphic map $f:X \rightarrow Y$ between compact connected Riemann surfaces is a branched covering i.e. surjective and there is a finite set $\Sigma \subset Y$ and $r \in \mathbb{Z}_+$ such that $|f^{-1}(q)|=r$ for all $q \in Y \setminus \Sigma$. I can see why such a map is surjective, but I don't understand why the rest of the statement is true. – Matt R

This argument may have some holes (and I would appreciate it if people would kindly point them out, as I need to go now), but I think the overall gist is correct: Let $A \subset X$ be the set of all $p \in X$ where $f$ is not locally injective, i.e. let $A$ be the set of $p \in X$ such that there exists no open neighbourhood $U$ of $p$ such that $f$ is injective on $U$. (People usually refer to $A$ as the set of branch points of $f$ in $X$.) We can show that $A$ is a finite set. To do this, we observe that around any $p \in A$, we can find local coordinates in which $f$ is represented as the mapping $z \mapsto z^k$, with $k \geq 2$. This follows from the open mapping theorem of complex analysis. (Of course, the point $p$ is meant to be the point $z = 0$ in these local coordinates.) But having written $f$ in this way, it's clear that $f$ is not locally injective at any point in this little coordinate patch around $p$, except at $p$ itself. In other words, $p$ is an isolated point in $A$. Since $p$ was chosen arbitrarily, this means that every point in $A$ is isolated. By a compactness argument, it follows that $A$ is a finite set. Now define your $\Sigma$ to be $f(A)$. So $\Sigma$ is a finite set in $Y$, as required. Moreover, $f$ is locally injective at any point in $f^{-1}(Y\backslash \Sigma)$, by construction, and therefore, by another application of the open mapping theorem, $f$ is locally a homeomorphism at every point in $f^{-1}(Y\backslash \Sigma)$. And clearly, the preimage of any point in $Y \backslash \Sigma$ is a finite set, by a compactness argument. From these facts, it should be simple to check that the restriction of $f$ to the subset $f^{-1}(Y\backslash \Sigma)$ is a covering map. Finally, $Y \backslash \Sigma$ is connected (because $\Sigma$ is finite), so $|f^{-1}(y)|$ is constant as $y$ varies over $Y\backslash \Sigma$: this constant is simply the number of sheets of the covering map. – Kenny Wong

First of all, your definition of a branched covering (between surfaces) is incomplete. The correct definition is: It is a map $f: X\to Y$ such that there exists a finite subset $W\subset Y$ such that for $Z:=f^{-1}(W)$, the restriction $$ f|_{X - Z}: X- Z\to Y- W $$ is a covering map. To prove that every nonconstant holomorphic map between compact connected Riemann surfaces is a branched covering, let $W\subset Y$ denote the set of critical values of $f$ (i.e. points $w\in Y$ such that there exists $z\in f^{-1}(w)$ so that $f'(z)=0$; such $z$ is called a critical point of $f$). Then observe that since $X$ is compact and $f$ is nonconstant, it has only finitely many critical points (otherwise, the set of critical points has an accumulation point in $X$, which is impossible). Now, the restriction $f|_{X- Z}$ is a local diffeomorphism. It is also a proper map, i.e. preimages of compact subsets in $Y -W$ are compact. (This follows easily from compactness of $X$.) Furthermore, by the maximum principle, nonconstant holomorphic maps are open. Since $X$ is compact, $f(X)$ is closed in $Y$; since $Y$ is connected, it follows that $f(X)=Y$. Thus, by the construction, $f|_{X- Z}: X- Z\to Y- W$ is surjective. Lastly, applying Ehresmann's "Stack of Records" theorem, we conclude that $f|_{X- Z}: X- Z\to Y- W$ is a covering map. – Moishe Kohan
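A concrete example, added here as an illustration (it is not part of either answer): the squaring map on the Riemann sphere already shows the whole structure.
$$ f:\widehat{\mathbb{C}}\to\widehat{\mathbb{C}},\qquad f(z)=z^{2},\qquad \Sigma=\{0,\infty\},\qquad |f^{-1}(q)|=2\ \text{ for every } q\in\widehat{\mathbb{C}}\setminus\Sigma . $$
Near each of the two critical points the map has the local normal form $z\mapsto z^{k}$ with $k=2$, and the Riemann-Hurwitz formula
$$ \chi(X)=\deg(f)\,\chi(Y)-\sum_{p\in X}\bigl(e_{p}-1\bigr) $$
checks out: $2 = 2\cdot 2 - (1+1)$.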
Contact Hamiltonian and Lagrangian systems with nonholonomic constraints
Manuel de León (1,3), Víctor M. Jiménez (2) and Manuel Lainz (1)
1. Instituto de Ciencias Matemáticas (CSIC-UAM-UC3M-UCM), C/ Nicolás Cabrera 13-15, Campus Cantoblanco, UAM, 28049 Madrid, Spain
2. Universidad de Alcalá (UAH), Campus Universitario, Ctra. Madrid-Barcelona Km. 33,600, 28805 Alcalá de Henares, Madrid, Spain
3. Real Academia de Ciencias Exactas, Físicas y Naturales, C/ de Valverde 22, 28004 Madrid, Spain
Received November 2019; Revised October 2020; Published December 2020.
In this article we develop a theory of contact systems with nonholonomic constraints. We obtain the dynamics from Herglotz's variational principle, by restricting the variations so that they satisfy the nonholonomic constraints. We prove that the nonholonomic dynamics can be obtained as a projection of the unconstrained Hamiltonian vector field. Finally, we construct the nonholonomic bracket, which is an almost Jacobi bracket on the space of observables and provides the nonholonomic dynamics.
Keywords: Nonholonomic constraints, contact Hamiltonian systems, Herglotz principle, dissipative systems, nonholonomic mechanics, Jacobi nonholonomic bracket.
Mathematics Subject Classification: 37J60; 70F25; 53D10; 70H33.
Citation: Manuel de León, Víctor M. Jiménez, Manuel Lainz. Contact Hamiltonian and Lagrangian systems with nonholonomic constraints. Journal of Geometric Mechanics, doi: 10.3934/jgm.2021001.
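For orientation, the unconstrained objects that the abstract refers to have the following standard coordinate form (a generic textbook statement added here for the reader; the paper's own sign conventions may differ). The Herglotz principle determines the motion by extremizing the final value $z(T)$ of the action variable along curves $q(t)$ with fixed endpoints, where $z$ solves
$$ \dot z = L\bigl(q^{i},\dot q^{i},z\bigr), $$
and on $T^{*}Q\times\mathbb{R}$ with contact form $\eta=\mathrm{d}z-p_{i}\,\mathrm{d}q^{i}$ the associated contact Hamiltonian dynamics reads
$$ \dot q^{i}=\frac{\partial H}{\partial p_{i}},\qquad \dot p_{i}=-\frac{\partial H}{\partial q^{i}}-p_{i}\frac{\partial H}{\partial z},\qquad \dot z=p_{i}\frac{\partial H}{\partial p_{i}}-H . $$
The article's contribution is to impose nonholonomic constraints on the admissible variations in this setting and to describe the resulting dynamics and its almost Jacobi bracket.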
Could a cave-in or avalanche in low gravity be dangerous?

I have a character on the surface of Enceladus, one of the moons of Saturn. It has an icy surface on top of what is believed to be a liquid ocean. There is no atmosphere, though near its south polar region there are what appear to be canyons filled with long lines of geysers that eject mostly water vapor into space. I need to have a way for this character to be trapped and buried on Enceladus (don't worry about his fate), either as a result of the ground caving in beneath him causing him to fall below the surface, or in something like an avalanche of ice down the side of one of the canyons. The trouble is, the surface gravity on Enceladus is only 1.13% of Earth's. In that case, it seems to me most scenarios would be easily escapable, both because a cave-in or avalanche would occur very slowly and because the character could jump so high he should be able to simply leap away from the danger. I thought of having him hit by a large piece of ice and knocked out, thus preventing him from escaping; but could a person really be knocked out by a large mass that is slow-moving, or would he just be crushed? So my question is: is there a realistic way for someone to get trapped and buried under these conditions? – nanoguy

If the ground opens up below you, then there's nothing for you to push off of for a jump. Thus, you'll fall. – RonJohn Apr 9 '18 at 14:26

If he's wearing a spacesuit I'm not sure how he can be "knocked out" without suffering decompression – SilverCookies Apr 9 '18 at 15:15

If a pound of feathers lands on you... does it weigh you down more or less than a pound of rocks... – WernerCD Apr 9 '18 at 16:25

Your astronaut won't be able to hear the avalanche. There is less sunlight at Saturn compared to Earth and you have to ask how far electric lights will illuminate the surroundings. There's probably very little warning before the avalanche hits. – CJ Dennis Apr 10 '18 at 1:36

A couple of comments not worthy of an answer: 1) I think you (the author) can contrive an artificial scenario where the character drops something valuable in a crevasse, has to retrieve it, and then gets trapped by a mini cave-in or similar. 2) I might suggest the New Yorker article on Antarctic expeditioner Henry Worsley for some insight on visible and hidden dangers (crevasses!) in a similarly harsh environment newyorker.com/magazine/2018/02/12/the-white-darkness – BurnsBA Apr 10 '18 at 13:26

Yes, though Enceladus is probably much safer than Earth for these sorts of things. It all depends on how high the avalanche starts from and how much material is involved. A ton of rock or ice hitting you at 50 mph is going to hurt regardless of whether it's on Earth or Enceladus -- if anything, Enceladus would be worse because of the likelihood of damage to your spacesuit. It's certainly true that an avalanche starting from the same height will do less damage on Enceladus, but if it's high enough, it will still kill. (Also, there's the distinct possibility that Enceladus's low gravity may make much greater elevation differences more common.) Likewise with getting buried. The same volume avalanche will weigh less on Enceladus, and for that reason will be easier to get out of (pressure suit damage aside). But a big enough avalanche will still bury you under too much overlying material for you to dig your way out even if you survived. Next there's the question of escape. Once more, it's probably easier to escape on Enceladus -- though just how athletic and controlled you can be in a spacesuit is an open question -- but escape is far from guaranteed. Further, in a vacuum, will you always be aware of an oncoming mass of ice? And if you're in a confined space, will you be able to escape? Consider a deep valley and an avalanche which starts far above you. By the time you're aware of it, it's moving 30 mph and is quite inexorable, with the same momentum it would have on Earth. Can you escape? Probably not. The lack of atmosphere has a negligible effect, as air resistance doesn't play a large role in the dynamics. (Its major impact is that the lack of air on Enceladus forces people into space suits and this makes them more vulnerable.) So, assuming your space suit is reasonably rugged, you're most likely safer on Enceladus, but a large avalanche can still trap you and kill you. – Mark Olson

@JanDoggen But it has something to do with the weight... – wizzwizz4 Apr 9 '18 at 15:47

@JanDoggen And that's to do with inertia(l mass) – momentum – kinetic energy. None of those have anything to do with weight. – wizzwizz4 Apr 9 '18 at 15:50

In H. Beam Piper's Cosmic Computer, he has the lines: "Yves Jacquemont began posting signs in conspicuous places: WEIGHT IS WHAT YOU LIFT, MASS IS WHAT HURTS WHEN IT HITS YOU. WEIGHT DEPENDS ON GRAVITY; MASS IS ALWAYS CONSTANT." – Mark Olson Apr 9 '18 at 15:55

@MarkOlson I'm not convinced by your answer. Lack of air resistance means terminal velocities are higher and things like a cave collapse can happen without needing to push the air out of the way and also less warning. (You won't hear an avalanche coming although you may feel a rumble in the ground). On the other hand though fast-moving events often ride on or mix with air in order to reduce friction so without air the avalanche may behave differently. I don't know the answer to all of this but I doubt the net effect is "nothing". – Tim B♦ Apr 9 '18 at 16:03

@Tim B: It's not "nothing", it's "not much, negligible". One way to look at it is to consider how much effect air resistance has on Earth -- it will be less on Enceladus. And when you're dealing with materials like ice and rock, it has very little effect on Earth. (Snow's another matter, of course, but that's not what the OP was asking about.) – Mark Olson Apr 9 '18 at 16:14

The surface gravity of Enceladus is 0.113 m/s². At such a low gravity, you cannot run, for the force you would use in a step will send you on a very long jump that may last more than a minute (if you don't hit anything along the way before you touch ground again). This may be quite dangerous. If you don't have the means to fly, like a jetpack, you may end up landing on a sharp shard of ice that will rip your spacesuit open. Alternatively, you may accidentally jump from a high place to a lower one. And while lower gravity means smaller acceleration, the fact that you can jump dozens to hundreds of meters upwards, to fall into a hole/crater/depression that might be dozens to hundreds of meters lower than your starting point, means that you can land with enough speed on hardened ice to break bones and equipment. If you want to see what walking in such gravity might look like, I can recommend a simulator. Like any simulator, this one does not model reality with 100% accuracy, but it is close enough to reality to give you a general idea. Get yourself a copy of Kerbal Space Program and go take a walk on Gilly (surface gravity = 0.049 m/s²) or Pol (surface gravity = 0.373 m/s²), which are the bodies with gravity closest to Enceladus. That said, unless your astronaut has a jetpack, even walking may be suicidal. But if he does have a jetpack, he would never be in trouble in the first place. As for whether the snow can crush him... the density of snow on Earth is 0.1 to 0.8 g/cm³. Let us assume that the density of snow on Enceladus is around the lowest range, 0.1 g/cm³, so as to be nice to your astronaut. Now let's say that he gets 100 meters of snow on him. Let's do some calculations. Under 100 meters of snow, the mass of snow above a section of one square meter is:
$$ 10^2\,\mathrm{m} \times 1\,\mathrm{m^2} \times 10^{-1}\,\mathrm{g/cm^3} = \frac{10\,\mathrm{m^3\,g}}{\mathrm{cm^3}} = \frac{10^7\,\mathrm{cm^3\,g}}{\mathrm{cm^3}} = 10^7\,\mathrm{g} = 10\ \text{metric tons} $$
Impressive, right? But at 1.13% of the gravity of the Earth, those ten metric tons press down with a weight of only about 113 kilograms-force per square meter. The average surface area of an adult human is around 2 m². This means that, lying down, your astronaut is exposing about one square meter to the snow. We can then infer that even under a hundred meters of snow he would be facing only about 113 kilograms-force of load. That is a laughable fraction of an atmosphere. So is he off the hook? No. Don't forget that the astronaut is considerably denser than the snow around him. If he were naked, he could be ten times as dense as that snow - I figure the equipment in his spacesuit might be denser yet. In other words, he will sink in the snow. The snow will behave like a very viscous liquid, and it should feel like sinking in quicksand for the astronaut. In the end, he is in for a very slow death in the dark and cold bottom of the avalanche. – Renan

What goes up must come down. Get in the way of one of the geysers and that's going to be a bad day. – Mazura Apr 9 '18 at 23:13

Reminds me of A Fall Of Moondust by Clarke – tox123 Apr 10 '18 at 2:25

For the KSP surface gravities, I'm assuming you meant 0.049\0.373 meters per second per second, rather than 0.049\0.373 square meters? – Sean Apr 10 '18 at 16:55

@Sean thanks, I will fix the metrics in the post. – Renan Apr 10 '18 at 17:10

What about some sort of "gecko tape" like substance on the boots to help maintain traction? You'd still be able to "peel" your boot away from most surfaces just through the normal heel-to-toe roll of a footfall, but if you walk such that your other foot makes contact (and thus adheres) before your first foot's toe departs from the surface, you won't be launching yourself into space in the course of ordinary walking. Incidentally, it would make jumping almost impossible, thus preventing that means of escape! – Doktor J Apr 10 '18 at 18:50

Yes, it's definitely feasible for either a cave-in or an avalanche to trap this character, and ice is heavy enough that chunks the size of two sedans would be very difficult for the average person to move even under Enceladus' gravity.

Material Required To Trap a Person: Since the gravity is ~1% of Earth's (rounded for easier math), 100 kg on Earth would be only 1 kg on Enceladus. Assuming that the character did get trapped under some amount of ice, let's see how much is needed to prevent the character from just pushing their way out once they've been buried. Benchpress world records are around 485 kg, so if your character is a world-record bodybuilder they could theoretically lift 48,500 kg, or about 40 Toyota Corollas. Let's assume a more modest 100 kg to make the math easy. This site claims the volume of their truck trailers is 82 cubic meters, and this site claims that 82 cubic meters of ice is about 75,000 kg. A Toyota Corolla is about 12 cubic meters, which at that density is about 11,000 kg of ice. So, a chunk of ice the size of one and a half sedans could trap, but not completely crush, someone on Enceladus, and presumably your disaster would involve much more than that.

Avoiding a Cave-In: This is trivially easy to avoid if the character is next to a stable wall to grab on to since they'd fall slowly, so let's assume the entire area around the character is collapsing. This question covers the idea of climbing up falling debris; however, the answer's best-case scenario involves large pieces of rubble that you were already about to jump off of. If the character is just standing, then they will fall at the same speed as the ground below them, so they would not be able to push off of anything. Therefore, the character could not jump to safety if the ground below them caves in and they have no solid ground to grab onto.

Avoiding an Avalanche: Although it would be moving slowly, it would actually be pretty hard to avoid being buried in an avalanche on Enceladus. I don't have enough physics degrees to understand the math, however I'd imagine that since the avalanche would behave much like a liquid, trying to stay on top of it would be like trying to walk through a flood of quicksand or molasses. This, coupled with a cloud of powdery ice blocking attempts to find a safe route, could definitely lead to the character sinking and getting buried. – Giter

You would probably not be able to bench the Toyota. Mass is the same. It would take you minutes of pushing at your full strength to accelerate the mass and get it moving. Once it finally moves it would blast off, taking you with it if you made the mistake of holding on. – Andrey Apr 9 '18 at 15:42

@Andrey: Unless there's no gravity then you don't lift mass, you lift weight. Lifting 'Thing A' that weighs 100kg on Earth would take the same effort as lifting 'Thing B' that weighs 100kg on Enceladus. However, Thing A would weigh 1kg on Enceladus, and Thing B would weigh 10,000kg on Earth. – Giter Apr 9 '18 at 15:50

Not exactly. a = F/m, so even at 0 gravity it takes a huge amount of force to put any useful velocity on an object. Heavy objects in low G are extremely dangerous. They soak energy like a sponge and then become freight trains slowly moving forward, crushing you. – Andrey Apr 9 '18 at 16:07

@Andrey I happen to have pushed vehicles as heavy as a Toyota Landcruiser, and I can verify that it does NOT take minutes worth of output from a human to get meaningful velocities in multi-ton objects. Think accelerations on the order of ~0.1 m/s². Absolutely doable and worthwhile, although one thing that I haven't seen mentioned is that the posture of the person trapped may prevent the same kind of leverage one applies with the proper form. Anyone who has benched can tell you proper form is everything. – wedstrom Apr 10 '18 at 17:44

@wedstrom that's a good point. Just try pushing a car on a 25 degree incline, and you will have 10% gravity perfectly simulated. See if you can still move it. On a flat surface you are converting 95% of your energy into acceleration, with just a little loss to friction; on a lift most of it is being lost to fighting gravity. – Andrey Apr 10 '18 at 18:59

Despite the lower gravity, a cave-in in a sufficiently deep crevasse or cave could still easily happen quickly enough to block the escape. And once the only entrance is blocked by several tons of ice (and remember, the ice still has the same mass as on Earth, so you can't just push it away), your explorer is truly trapped without advanced mining equipment. In fact, the use of such equipment might even pose another hazard, if it melts the ice or causes tremors, which could destabilize the rest of the ice. Being knocked out is also a possibility: even if the actual collapse is far slower than on Earth, the large masses colliding can cause shards and boulders of ice to be ejected at dangerous velocities. Finally, ice can be quite sharp, so if your character falls on or is hit by an icicle, they could end up pinned in place, with the ice stuck in their pressure suit being the only thing between them and decompression. – Surpriser

I think the other answers have sufficiently covered how dangerous a cave-in or avalanche might be, but I want to point out that the chance and severity of them will be far higher. The lower gravity will create a far steeper angle of repose, as the cohesiveness of snow, rock, etc. will be much greater relative to gravity than we are used to on Earth. That means you can have far more material build up into very steep, even over-hanging and exotic structures. Add to that the lower atmospheric disturbance (no wind) and no critters or humans to disturb this moon's surface, and you will probably have large, critically balanced structures that are ready to be knocked over at any moment. Whether or not cave-ins or avalanches would be as dangerous as on Earth, you will have them occur far more often in virgin territory, and the mass of the material involved will likely be much greater.

What I miss in other answers is that the morphology of mountains and avalanche material will be completely different at 1.13% of gravity. The amount of material stacking up before surfaces get crushed to the point where sticking friction gives way to sliding friction will be considerably higher. So when things finally start getting ugly, the amount of ugliness unleashed will be quite different from that on Earth, and the amount of potential energy available to drive a chain reaction will be comparable, making the masses involved considerably larger. Avalanches will be considerably slower at picking up speed, but they will be just as deadly in their effects, and the heights that colliding material falls from will bear a similar relation to the jumping height of a human as on Earth. It's not just the human energy and time frame getting a better payoff.

You apparently didn't read my answer because I addressed that. – BlackThorn Apr 11 '18 at 15:18

Absolutely yes. Even though the force of gravity is only 1.13% of the Earth's, the planetary weight of a landslide or cave-in could still be fatal. Weight is the force of gravity on a mass. Newton's second law formula (F = m·a) shows the relationship between mass, acceleration and force. The corresponding weight formula is w = m·g, where w is the weight (the force of gravity on a mass), m is the mass and g is the acceleration due to gravity. Using this weight calculator with a g value of about 0.111 m/s² (1.13% of 9.8 m/s²), you can make some simple calculations. If 1,000 pounds of material would crush you on Earth (453.59 kg), that's a fraction of the weight of a VW Beetle. On your planet, roughly 40,000 kg would have the same effect, and that is only on the order of 15 cubic meters of solid rock, i.e. not much. – Kurt Heckman
Why do we not have to prove definitions? I am a beginning level math student and I read recently (in a book written by a Ph. D in Mathematical Education) that mathematical definitions do not get "proven." As in they can't be proven. Why not? It seems like some definitions should have a foundation based on proof. How simple (or intuitive) does something have to be to become a definition? I mean to ask this and get a clear answer. Hopefully this is not an opinion-based question, and if it is will someone please provide the answer: "opinion based question." definition philosophy BunsOfWrath ZduffZduff $\begingroup$ What does it mean to "prove" a definition? A proof is a demonstration of the truth of a certain claim about something. Definitions are not claims; just like cats are not Dolphins. $\endgroup$ – user230734 Aug 16 '15 at 23:54 $\begingroup$ "Proving" a definition makes no sense, since a definition is a decision to introduce and use a particular concept. But mathematicians have not yet advanced to the point where a motivation of each definition, conforming to the rules of the logic of motivation, follows definitions in the way in which proofs follow theorems. ${}\qquad{}$ $\endgroup$ – Michael Hardy Aug 17 '15 at 0:18 $\begingroup$ Definitions are motivated, I think, not proved. $\endgroup$ – Akiva Weinberger Aug 17 '15 at 0:31 $\begingroup$ @Zduff that's correct. Definitions are like the entries of the mathematical dictionary. They are simply stated facts and don't have any deeper meaning in and of themselves, but when we start putting them to use, we can make some really nice things out of them. $\endgroup$ – Cameron Williams Aug 17 '15 at 1:48 $\begingroup$ Because otherwise it would be turtles all the way down! Also, see the third option in this trilemma $\endgroup$ – James Webster Aug 17 '15 at 7:33 I'd like to take a somewhat broader view, because I suspect your question is based on a very common problem among people who are starting to do "rigorous" or "theorem-proof" mathematics. The problem is that they often fail to fully recognize that, when a mathematical term is defined, its meaning is given exclusively by the definition. Any meaning the word has in ordinary English is totally irrelevant. For example, if I were to define "A number is called teensy if and only if it is greater than a million", this would conflict what English-speakers and dictionaries think "teensy" means, but, as long as I'm doing mathematics on the basis of my definition, the opinions of all English-speakers and dictionaries are irrelevant. "Teensy" means exactly what the definition says. If the word "teensy" already had a mathematical meaning (for example, if you had already given a different definition), then there would be a question whether my definition agrees with yours. That would be something susceptible to proof or disproof. (And, while the question is being discussed, we should use different words instead of using "teensy" with two possibly different meanings; mathematicians would often use "Zduff-teensy" and "Blass-teensy" in such a situation.) But if, as is usually the case, a word has only one mathematical definition, then, there is nothing that could be mathematically proved or disproved about the definition. If my definition of "teensy" is the only mathematical one (which I suspect is the case), and if someone asked "Does 'teensy' really mean 'greater than a million'?" then the only possible answer would be "Yes, by definition." 
A long discussion of the essence of teensiness would add no mathematically relevant information. (It might show that the discussants harbor some meaning of "teensy" other than the definition. If so, they should get rid of that idea.) (I should add that mathematicians don't usually give definitions that conflict so violently with the ordinary meanings of words. I used a particularly bad-looking example to emphasize the complete irrelevance of the ordinary meanings.) Andreas BlassAndreas Blass $\begingroup$ +1 Adding to the confusion is the fact that students are very likely to encounter different definitions for common objects in different textbooks: in one class real numbers are complete "by definition," in the next they satisfy the last upper bound property "by definition," and so on. The student is left with the impression that there is a platonic concept of a "real number" with a hodgepodge of properties, some of which require proof and some of which don't, and no obvious difference between the two. $\endgroup$ – user7530 Aug 17 '15 at 0:38 $\begingroup$ If I ever write a text book, I shall include the term "teensy" as you define it. $\endgroup$ – PyRulez Aug 17 '15 at 2:44 $\begingroup$ 'I don't know what you mean by "glory",' Alice said. Humpty Dumpty smiled contemptuously. 'Of course you don't — till I tell you. I meant "there's a nice knock-down argument for you!"' 'But "glory" doesn't mean "a nice knock-down argument",' Alice objected. 'When I use a word,' Humpty Dumpty said, in rather a scornful tone, 'it means just what I choose it to mean — neither more nor less.' 'The question is,' said Alice, 'whether you can make words mean so many different things.' 'The question is,' said Humpty Dumpty, 'which is to be master — that's all.' -- Lewis Carroll,ThroughtheLookingGlass $\endgroup$ – Jeffrey Bosboom Aug 17 '15 at 22:43 $\begingroup$ @JeffreyBosboom: Humpty Dumpty is actually not justified in saying that, because any language is something that is a common consensus between different people to use certain lexical units to denote certain grammatical or semantic concepts. The Egg therefore cannot claim to be free to choose whatever meaning he likes for his words, otherwise communication would utterly break down. What if his "mean[t]" actually means "do[es]/did not mean"? The Egg is just being a proud character who falls. $\endgroup$ – user21820 Aug 18 '15 at 4:30 $\begingroup$ Except that Humpty Dumpty redefined some words, such as portmanteau, and his definition has since stuck. So he could do what he wanted with his definitions, and often did so successfully $\endgroup$ – Henry Aug 18 '15 at 12:51 The other answers did not explain the background of logic that is the key to understanding this issue. In any formal system where we write proofs, we have to use some formal language that specifies the valid syntax of sentences, and we must follow some formal rules that specify which sentences we can write down in which contexts. In mathematics we usually use classical first-order logic, which consists of both the language of first-order logic and classical inference rules. This language is sufficient but extremely cumbersome if we were not allowed to make any definitions. 
For example, if we are working in Peano Arithmetic where the only objects are natural numbers, then if we want to prove that an odd number multiplied by an odd number is odd, we effectively have to prove: $\def\imp{\Rightarrow}$ $\forall m \forall n ( \exists a ( m = 2a+1 ) \land \exists b ( n = 2b+1 ) \imp \exists c ( mn = 2c+1 ) )$. Now certainly we can do this and completely avoid defining "odd", but as the theorems grow in complexity (and this example is an incredibly trivial theorem) it would become simply impossible to refrain from definitions. What is a definition, then? In first-order logic it can be understood to be simply a shortform for some expression. Continuing the above example, if for any expression $E$ we define "$odd(E)$" to mean "$\exists x ( E = 2x+1 )$" where "$x$" is a variable not used in "$E$", then we can rewrite the theorem as: $\forall m \forall n ( odd(m) \land odd(n) \imp odd(mn) )$. See? Much shorter and clearer. $\begingroup$ +1, gets to the heart of the issue, deservesmore upvotes. $\endgroup$ – 6005 Aug 17 '15 at 15:22 $\begingroup$ I want to add that there is a sense in which some "proof" is required in association with a definition, usually in order to make sure that the definition "makes sense". For example, we can define the degree of a polynomial as the index of the largest nonzero term, and this certainly defines an expression, but to show that it has any meaning at all, say to prove that this yields a nonnegative integer, we will need to prove that there is a largest nonzero term (and in the course of that proof you will have to assume that the polynomial is nonzero), which clarifies the "domain" of the definition. $\endgroup$ – Mario Carneiro Aug 17 '15 at 23:33 $\begingroup$ @MarioCarneiro: Yes, but it depends on the exact rules of the formal system. If you always prove unique existential quantification before you define something to be that, then you can indeed define something that uses an instantiation of that. Of course we should intuitively devise definitions backward in the way you describe, but that does not mean that a formal proof must be written backwards. They are two separate things. Since there are a lot of such finer details in any formal system, I didn't mention any of them in my answer. Certainly one has to be very careful when actually doing it. $\endgroup$ – user21820 Aug 18 '15 at 4:25 $\begingroup$ +1 for this wonderful answer. The need for definitions has perhaps never been expressed in so concise and clear manner. $\endgroup$ – Paramanand Singh Nov 7 '17 at 14:23 In a definition, there is nothing to prove because the general form of a definition is: An object $X$ is called [name] provided [conditions hold]. The reason that there is nothing to prove is that before the definition [name] is undefined (so it has no content). The [conditions] are like a checklist of properties. If all the properties of the [conditions] are true, then $X$ is whatever [name] is. The reason that a definition can't be proven is that it isn't a mathematical statement. There's no if-then statements in a definition, a definition is merely a list of conditions; if all the conditions are true then $X$ is [name]. Since [name] had no meaning before the definition, you can't even check that [name] means the same as the conditions. Michael BurrMichael Burr $\begingroup$ I wouldn't go so far as to say it isn't a mathematical statement. 
It can even be thought of as an axiom, though it is an axiom involving a new symbol, so the axiom is logically independent from all previous axioms. Many definitions are stated as axioms--e.g., whether you include $=$ in the axioms of set theory or whether you define later two sets to be equal if they contain the same elements. $\endgroup$ – 6005 Aug 17 '15 at 15:20 $\begingroup$ @6005 It isn't a mathematical statement because a statement is a sentence that must be either true or false. $\endgroup$ – Michael Burr Aug 17 '15 at 16:29 $\begingroup$ Certainly it can be regarded as true or false--just involving a previously-undefined symbol, so its truth value is independent of previous assertions. I'm not just arguing here--this is literally the way definitions can be formalized in, say, model theory. How could a mathematical definition not be a mathematical statement anyway? And again see my set theory example. $\endgroup$ – 6005 Aug 17 '15 at 16:31 $\begingroup$ Here's a definition of $1$: $1 := S(0)$. What it is is an axiom. I am asserting that $1$ means $S(0)$, and from this point on will take the statement $1 = S(0)$ to be true. The reason we don't prove definitions is that we take them to be true by default, not because they aren't syntactically true-or-false. $\endgroup$ – 6005 Aug 17 '15 at 16:33 Think about English definitions. They just assign meanings to symbols. It's really the same thing here. If I told you to prove that $1 + 1 = 2$, you would probably object that $2$ is defined as being $1+1$. What more is there to say? guestguest $\begingroup$ 1 + 1 = 2 may not be as simple as you think. Whitehead and Russell spent hundreds of pages on it blog.plover.com/math/PM.html $\endgroup$ – Brice M. Dempsey Aug 17 '15 at 7:33 $\begingroup$ @JamesT.Huggett But that depends on which definition you choose to have for $2$. $\endgroup$ – JiK Aug 17 '15 at 12:04 $\begingroup$ @Jik Which I think is precisely Huggetts point. 2 is very seldom defined as the result of 1+1. The closest you get is that 2 is defined as the sucessor of 1. $\endgroup$ – Taemyr Aug 19 '15 at 11:16 $\begingroup$ @JamesT.Huggett It's inaccurate to say that Principia Mathematica spends hundred of pages proving that 1+1=2. They spendt most of those pages setting up the language in which they are able to prove that 1+1=2. $\endgroup$ – Taemyr Aug 19 '15 at 11:19 Frequently, a definition is given, and then an example or proof follows to show that whatever has been defined actually exists. Some authors will also attempt to motivate a definition before they give it: for example, by studying the symmetries of triangles and squares and how those symmetries are related to each other before going on to define a general group. A definition is distinguished from a theorem or proposition or lemma in that a definition does not declare some fact to be true, it merely assigns meaning to some group of words or symbols. The statement of a theorem says that "such-and-such" thing is true, and then must back up the claim with a proof. Ben ShellerBen Sheller $\begingroup$ But how is a definition different from an axiom? $\endgroup$ – user117644 Aug 16 '15 at 23:50 $\begingroup$ An axiom, as I understand it, is something which is considered to be uncontroversially true, whereas a definition is merely an assignment of meaning to a collection of symbols or words, with no assumption of truth. 
$\endgroup$ – Ben Sheller Aug 16 '15 at 23:54 $\begingroup$ I would be careful with saying "uncontroversially true", but perhaps it's better to say that it is an accepted truth. $\endgroup$ – Cameron Williams Aug 17 '15 at 1:54 $\begingroup$ @mistermarko: See my answer where I explain what Ben means by assigning meaning to an expression. $\endgroup$ – user21820 Aug 17 '15 at 2:18 $\begingroup$ Also, I would not even say what Cameron says. An axiom in a formal system is merely a given statement that can be used as a true statement. In other formal systems that axiom may not be given, or even its negation may be given instead. Of course, we try to choose a formal system with axioms that are generally accepted (especially if they seem to accurately describe the real world), but there is nothing wrong with considering other formal systems with different axioms (and possibly different rules as well). $\endgroup$ – user21820 Aug 17 '15 at 2:21 Mathematics is a kind of exploration of consistent systems. It needs a language to do this exploring. In order to communicate with each other about these systems, one needs common reference points, or things we all agree upon. In daily life we all agree that the word "chair" has a set number of meanings, the most common of which is something to sit on. Often, when we translate from one language to another, we find a problem because one language doesn't have a word for something so meanings become fuzzy or intuitive. This can't work in mathematics, so we have to agree on the meanings of certain things. We define things and agree upon those definitions so that we can move forward and see the ramifications of statements about those definitions. Euclid, in his Elements started with definitions. For example, "I say a point is that which has location but no dimension." or "I say, a line is that which has length but not width or height." The rest of the Elements are then statements about those definitions.. Back in his time, someone could challenge him and say "I don't see that. Look, I put my finger in the sand and it has size." Euclid might answer, "I can see that. But I think if you follow what I'm saying and see where my statements lead, you'll find some interesting things that are true not just for lines but also for apples and oranges or building things." Euclid's definitions are useful and produce results. Often for learning mathematics, one needs a book and has to start from the "beginning," however, in mathematical exploration the people doing the innovating didn't start that way. They made discoveries and then had to work backwards or develop a system, and to teach that, they had to start with definitions or a common language, so the student can follow along. Getting back to what I was saying about the chair, imagine a world in which nobody agrees on the definition of a chair. There would be confusion and a complete lack of communication. Imagine if I say "A chair is something that you sit in." and someone counters, "Really? Prove it to me." or "I need a new office chair," but they defined chair as a device for adding and subtracting numbers, and you defined chair as a place to park your car. Definitions aren't wrong or right and they don't require proof. They don't say something and they don't arise from a logical progression of ideas. I don't feel that they are intuitive. You might want to check out Euclid's Elements and see how things are worded there. 
I think this will help you to get your head around this, and give you a feeling for the roots of mathematics and how things started from ground zero. michael_timofeevmichael_timofeev If you want to prove a statement, you need to first tell me what the objects involved in the statement are. For example, if you want to prove that the product of two even numbers is even, you first need to tell me what an even number is, what a product is, ... even what a "number" is. Mathematics is about finding out what relationships/results hold after starting with certain objects that you define. Yes in some sense, the definitions seem to come from nowhere (why do we need imaginary numbers? they're just "made-up"?), but are usually well-motivated (we want roots of negative numbers, solutions to quadratics, etc.). angryavianangryavian Definitions assign meanings, not truths. They describe how you are going to talk about stuff. Definitions are basically arbitrary. It does not make sense to try proving them. Axioms describe what you are going to talk about, the identity of some system. Given a mathematical axiom system, you cannot prove one axiom from the others, but unlike with definitions, that is a matter of proof: basically you show that there is at least one possibility of meeting all the axioms' conditions, and then you show that you can also find one possibility of meeting all the resulting axioms' conditions when replacing one axiom with something incompatible with it, so no axiom is a necessary consequence from the others. Theorems are necessary consequences from a set of axioms. They trivially include the axioms themselves. They constitute knowledge about the properties of a mathematical system defined by its axioms, described in terms of basic definitions. You cant prove a definition, because the act of defining is to give a meaning to a particular concept. For example, the normal English definition of an even number is an integer divisible by 2. That's just what an even number is. We can later prove that if we add even numbers together, we will always get an even number. If we alternatively decide to define an "even number" as a positive integer with all its decimal digits the same (not recommended because it goes against the normal English definition) the very act of defining means that in our language system, this relationship is true. We can then proceed to prove some facts about these "even numbers." For example, they can all be factorised into a single-digit integer and an integer with all its digits 1 (I will leave the proof of this to the reader!) Furthermore, the integers with all digits $1$ are of the form $\dfrac{10^n - 1}{9}$ where $n>0$. So we can say that according to our new definition of "even numbers", "even numbers" are of the form $m\left(\dfrac{10^n - 1}{9}\right)$ , where $n>0$ and $0 < m < 10$. How do you like my new definition of "even numbers"? Level River StLevel River St Of course definitions, to become accepted as standard useful concepts, undergo some kind of testing process, by examining their consequences to see if they correctly express what was meant to be captured by the definition. This is a more subjective process than proof in the sense of "formally proving theorems", but it does correspond to the original English language meaning of prove as in test, check, verify, attempt to refute. Unlike theorems, definitions do not come with an objective (or anywhere near objective) notion of correctness. 
The only judgement to be made about a definition is whether to use it or not. Mapping out the consequences of the definition is a way of testing whether the definition is effective. Experience in using competing definitions can sort out which ones to keep and which ones to avoid in the long term. – ASCII Advocate

How simple (or intuitive) does something have to be to become a definition? I don't think the difference between a definition and a theorem is a measure of how true that statement is (or is expected to be). Yes, definitions require motivation; they had to occur to someone. But to me it has more to do with how much you can do with a set of definitions, how far you can extend them, and how many beautiful results one can draw out of them. The crux of mathematics to me is in this process. I believe that is what we study in this discipline: the art (or science) of drawing meaningful and useful conclusions from a set of definitions (or axioms, I choose not to distinguish). I am starting to learn Topology and these two statements intrigue me.
A subset $A$ of a metric space $M$ is closed if for every convergent sequence $(a_n)$ of points in $A$ the limit $a$ of $a_n$ lies in $A$.
A subset $A$ of a metric space $M$ is closed if it contains its boundary.
Some treatments give the first statement as a definition and prove the second as a theorem using the first. Some treatments introduce the latter as the definition and prove the former using it. To me, what is interesting is purely the process of arriving at one using the other. The investigation of this process is what Mathematics is all about. "Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true" - Bertrand Russell. So it's not about proving everything you suspect to be true. It's about how much one can do by assuming a few things to be true. Disclaimer: This is very much an answer from an amateur. No heed is given to formal matters of philosophy, logic and set theory. – Ishfaaq

I may be over-reacting to one sentence in your question ("It seems like some definitions should have a foundation based on proof."), but perhaps you are troubled that a definition might be inherently contradictory. A rational person will not define something that obviously leads to a contradiction. However, if a set is defined as a collection of objects, you can get all of Cantor's original results before you run into a contradiction that is traceable back to the original definition. Another way a definition "should have a foundation based on proof" occurs when, for example, you were to consider all functions which satisfy some particular constraint, and then define the yungen-value of such a function to be the global maximum of the function. For the definition to make sense you would need to prove that the constraint actually forces the function to have a global maximum. – euler1944

Definitions are not within math; they are within linguistics, on the edge of math. Saying "A is true of B by definition" simply states that you are assuming (as an axiom) that the subject B has a property A, and that you will not be trying to prove it. If the listener does not agree that B has a property A, then the listener needs to stop right there and say, "this proof may be valid, but its underlying assumptions do not match mine, so I cannot apply the conclusion to my understanding of what B is without digging further into that assumption."
That's an odd statement to make. – Asaf Karagila♦ Aug 17 '15 at 15:36
@AsafKaragila What's odd about it? – Cort Ammon Aug 17 '15 at 18:09
If you say "A is true of B by definition", you are not assuming anything. The statement A is either true or false depending on what the definition of B says. Of course we often write "A is true of B by definition" when we really mean "We can prove that A is true from the definition of B, but the proof is straightforward and uninteresting, and we are not going to give it in full here because the reader should be able to prove it easily for him- or herself." – alephzero Aug 17 '15 at 18:43
@alephzero Ahh, I'm used to the latter case being "an exercise for the reader." I'm used to saying "And because the sum of the sequence of numbers X grows without bound, its sum approaches infinity, by definition." That definition implies that the listener agrees with your definition of natural numbers. A transfinitist may find your definition invalid, unless they are aware you are using a different set of words than they are. As a rule, I find making a claim by definition rather than by proof quite distasteful, because that's just one more place an error can creep into the math. – Cort Ammon Aug 17 '15 at 19:13
I think the only thing wrong with saying "And because the sum of the sequence of numbers X grows without bound, its sum approaches infinity, by definition." is the two words "by definition". I still remember some good advice on exam technique at university: never write "it follows from X that Y". If you just write "Y", and Y is true, the grader in a hurry will probably give you the credit, even if Y doesn't follow from anything else in your proof ;) – alephzero Aug 17 '15 at 20:26
Sintering Behavior and Mechanical Properties of NiAl, Al2O3, and NiAl-Al2O3 Composites
M. Chmielewski, S. Nosewicz, K. Pietrzak, J. Rojek, A. Strojny-Nędza, S. Mackiewicz, J. Dutkiewicz
Journal of Materials Engineering and Performance, Issue 11/2014, 01-11-2014, Open Access

Sintering is a complex thermally activated physicochemical process which takes place at an increased temperature (Ref 1). A great number of factors linked to both the technological process conditions and the physical characteristics of the source materials exert an influence on sintering. The predominant parameters of a technological nature include the temperature, time, heating and cooling rate, pressure, and atmosphere (Ref 2, 3). Determining the relation between the process conditions and the structure and properties of single-phase materials does not constitute a problem. This correlation is much more complex in the case of multiphase materials such as ceramic-metal composites. The properties of metals and ceramic materials are very diverse because of their structure. Difficulties in coupling these materials result from their dissimilar atomic bonds, the non-wettability of ceramic materials by liquid metals, and the generation of high thermal stresses during cooling caused by differences in their mechanical and thermal properties (Ref 4-9). A good example of such two-phase materials is a composite with a NiAl matrix reinforced by a ceramic phase in the form of Al2O3 particles. The intermetallic compound NiAl is a promising material for high-temperature applications, particularly suitable as a bond coat in thermal barrier coatings (TBCs) (Ref 10). Ni-Al intermetallic phases belong to the group of modern constructional materials of low density and with advantageous properties. They are characterized, e.g., by a high melting temperature, high resistance to oxidation at high temperatures (up to about 1200 °C), a high value of the Young's modulus, stability at increased temperature, high mechanical, fatigue, tensile, and compressive strengths (also at high temperatures), and good frictional wear resistance (Ref 11-14). This unparalleled combination of unique physicochemical and mechanical properties offers a wide range of application possibilities for these materials. They are widely used in technologically developed countries in the automobile, aircraft, spacecraft, metallurgical, chemical, and power generation industries. However, intermetallic compounds also manifest drawbacks, i.e., they are quite brittle at room temperature (RT), which makes their mechanical processing very difficult and restricts their application range. These drawbacks can be obviated by modifying the material's chemical composition. The authors of (Ref 15) report that non-reactive particles improve the properties of the NiAl matrix as a result of changing the fracture mode from intergranular to transgranular in accordance with the Cook-Gordon mechanism (Ref 16). In (Ref 17) the author indicates plastic deformation as an additional factor in the toughening process.
One of the most interesting and, at the same time, most complex problems in relation to composite materials is the determination of the relation between the structure of the matrix/reinforcement interface and the properties of the composite. For ceramic-metal composite materials, various coupling types between the two phases are possible, i.e., mechanical coupling, coupling as a result of wetting and the partial formation of solid solutions, as well as the formation of a compound resulting from a reaction at the boundary of the components. The formation mechanism determines the quality of all the enumerated couplings and, at the same time, the properties of the composite, which means that the coupling can be mechanical, adhesive, diffusive, or reactive. Each of these couplings is characterized by a different formation mechanism, quality, and durability linked to the interaction between the composite's components. The authors of (Ref 18) examined the quality of coupling at the NiAl-Al2O3 boundary and found it to be the weakest area in the composite, where cracks were likely to appear and be propagated. An optimized bond coat in the case of NiAl-Al2O3 contains various additives such as Co, Cr or Pt, and reactive elements (REs) such as Y, Hf, or Dy (Ref 10, 28). In the present work, the impact of the technological parameters of the sintering process on the microstructure of the NiAl-Al2O3 composite was analyzed, with the main focus on the metal-ceramic interface. In addition, both the sintering kinetics of the separately sintered composite components and the changes in the mechanical properties of the materials observed during successive sintering phases were determined. The results presented in this paper constitute an integral part of a broader research programme focused on the manufacture of NiAl-based composite materials. These works are aimed at elaborating technological conditions enabling the production of composites with target properties. The resultant material is planned to be used in internal combustion engines as valve seats. At present, applicability tests are being performed under conditions resembling true engine loads in order to verify the suitability of the developed materials. Commercially available NiAl powders (Fig. 1a, by Goodfellow) and aluminum oxide powders (Fig. 1b, by NewMet Koch) were used in the present work.
Fig. 1 SEM imaging of starting materials: (a) NiAl, (b) Al2O3
Research tasks included the characterization of the grain size distribution of the starting powders, performed using the Clemex image analysis system. The particle size distribution was analyzed as a function of Feret's diameter (d). As a result, the average Feret's diameter (d_A) was calculated. Based on the analysis, the average size of the NiAl particles was found to be d_NiAl = 9.71 µm (the size range is from 1 to almost 100 µm) and that of the Al2O3 particles d_Al2O3 = 2.28 µm (from 0.2 to 5 µm). Technological sintering trials were carried out for pure NiAl, pure Al2O3, as well as for mixtures of these powders with the following volume fraction: 80%NiAl/20%Al2O3. This composition was selected following theoretical studies of the mechanical properties of the components of composite materials, taking into consideration their possible application in an internal combustion engine as a valve seat. The mixing test was carried out in a Pulverisette 6 planetary mill with a 250 ml container in an air atmosphere.
Both the lining of the container and the milling balls (Ø10 mm in diameter) were made of tungsten carbide doped with cobalt. The mixing process parameters were experimentally selected based on previous works of the authors (Ref 19, 20). An even distribution of aluminum oxide in the mixture was obtained under the following conditions: rotational speed ω = 100 rpm, BPR coefficient 5:1, and a time of 1 h. All materials were pressure-sintered in an ASTRO HP50-7010 press using hot-pressing in a graphite cylindrical die with a diameter of 13 mm in an argon protective atmosphere. A variety of technological conditions were used to describe the sintering kinetics of both the individual components (NiAl, Al2O3) and the two-phase material. The parameters of sintering were as follows: sintering temperature Ts: 1300, 1350 and 1400 °C; sintering time ts: 0, 10 and 30 min; pressure p: 5 and 30 MPa. The temperature was increased from RT to the sintering temperature Ts at a heating rate of 15 °C/min for all examined materials. The samples were kept at Ts during the interval (sintering time) ts and naturally cooled down to RT. In the case of ts = 0 min sintering time, the samples were heated to the sintering temperature and immediately cooled down. The pressure was applied from the beginning of the sintering process to the end of the thermal cycle. Microstructural investigations included analyses using scanning electron microscopy (SEM, Hitachi S4100) and transmission electron microscopy (TEM, Tecnai G2). The samples were mechanically cut using a diamond saw, then ground and polished. For the purpose of SEM observations, they were covered with a thin layer of carbon, and for TEM they were additionally thinned using abrasive paper. Thin lamellae were cut using FIB Quanta 200 3D FEI instruments or thinned using a Leica EM RES 101 ion beam thinner. The density of the NiAl, Al2O3, and NiAl-Al2O3 composites was examined using the hydrostatic method. To verify the elastic properties of the sintered samples, measurements based on ultrasonic wave propagation were conducted. Ultrasonic measurements are a common tool used to determine the elastic properties of particulate materials (Ref 21-23). In the present studies, the obtained sintered samples were assumed to be isotropic due to the random orientation of the grains distributed throughout the material's volume. Accordingly, the elastic properties were described in terms of two independent elastic constants, the Young's modulus, E, and Poisson's ratio, $\upsilon$, which can be deduced from elastic wave theory (Ref 21):
$$E = \rho \frac{3V_{\text{L}}^{2} V_{\text{T}}^{2} - 4V_{\text{T}}^{4}}{V_{\text{L}}^{2} - V_{\text{T}}^{2}},$$
$$\upsilon = \frac{0.5V_{\text{L}}^{2} - V_{\text{T}}^{2}}{V_{\text{L}}^{2} - V_{\text{T}}^{2}},$$
where ρ is the bulk density, V_L the velocity of longitudinal ultrasonic waves, and V_T the velocity of shear ultrasonic waves. For the measurements of ultrasonic velocities in the sintered samples, the pulse-echo contact technique was employed. A detailed description of the ultrasonic measurements used in the determination of the elastic constants of the sintered NiAl/Al2O3 composite material was introduced by the authors in (Ref 24).
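The two relations above are straightforward to evaluate numerically. The following minimal sketch (not part of the original paper) implements them; the density and velocity values in the example are illustrative assumptions for a dense NiAl-like sample, not measured data from this study.

```python
def elastic_constants(rho, v_l, v_t):
    """Young's modulus E (Pa) and Poisson's ratio nu for an isotropic solid,
    from bulk density rho (kg/m^3), longitudinal ultrasonic velocity v_l (m/s)
    and shear ultrasonic velocity v_t (m/s), using the relations quoted above."""
    e = rho * (3.0 * v_l**2 * v_t**2 - 4.0 * v_t**4) / (v_l**2 - v_t**2)
    nu = (0.5 * v_l**2 - v_t**2) / (v_l**2 - v_t**2)
    return e, nu

# Illustrative (assumed) input values:
E, nu = elastic_constants(rho=5900.0, v_l=7000.0, v_t=3600.0)
print(f"E = {E / 1e9:.0f} GPa, nu = {nu:.2f}")   # ~202 GPa, ~0.32
```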
Porosity Evaluation
The sintering degree was assessed based on the relative density, ρ_rel, defined as the ratio of the measured bulk (apparent) density ρ to the theoretical density ρ_theo of the fully dense material (Ref 25, 26):
$$\rho_{\text{rel}} = \frac{\rho }{{\rho_{\text{theo}} }}.$$
Results of the density measurements are provided in Tables 1, 2, and 3. Density evolution curves for different combinations of temperature and time parameters are plotted in Fig. 2.
Table 1 Evolution of the bulk and relative density of the hot-pressed Al2O3 ceramics sintered in different combinations of sintering temperatures and pressures (Al2O3 theoretical density: 3.97 g/cm3)
Table 2 Evolution of the bulk and relative density of the hot-pressed NiAl intermetallic compound sintered in different combinations of sintering temperatures and pressures (NiAl theoretical density: 5.91 g/cm3)
Table 3 Evolution of the bulk and relative density of the hot-pressed NiAl/Al2O3 composite sintered in different combinations of sintering temperatures and pressures (NiAl/20%Al2O3 theoretical density: 5.52 g/cm3)
(The tabulated values of the bulk density ρ (g/cm3) and relative density ρ_rel for each temperature/pressure combination and sintering time ts = 0, 10, 30 min are not reproduced here.)
Fig. 2 Density evolution of: (a) pure Al2O3, (b) pure NiAl and (c) NiAl/Al2O3 composite material sintered under 5 and 30 MPa pressures at sintering temperatures of 1300, 1350 and 1400 °C
It can be deduced from the density measurements that in the selected range of sintering parameters almost fully dense materials were obtained. All three technological parameters (temperature, time, and pressure) have a significant influence on the degree of the material's densification. The importance of each of them is different, but they cannot be analyzed separately, because only their proper combination is likely to provide the best results in terms of densification. As can be seen in Tables 1, 2, and 3, the pressure plays a very important role in the sintering process of all three types of materials. The relative densities of the single- and two-phase materials were different, and the best results were obtained for the pure ceramics (almost 96%). An increase of the pressure to the level of 30 MPa results in a much higher degree of densification even at lower temperatures. This is correlated with an easier grain rearrangement process at the early stage, the activation of diffusion flows, and easier pore elimination at the final stage of sintering. Sintering is a time-dependent process, and it is expected that when the sintering time is extended, the density of the material should also rise. Figure 2 shows that in all examined cases an increase in the sintering time improved the relative density of the materials for both pressures, i.e., 5 and 30 MPa. The sintering temperature depends on the physical-chemical properties of the sintered powders as well as the grain size and shape. In the case of unary systems, it is assumed that the sintering temperature is 0.6-0.8 of the material's melting point. For multiphase materials, the choice of the sintering temperature is more complicated. It is related to the volume fractions of the components, their solubility and wettability, the surface energy connected with the grain size and specific surface area, etc. The mass flow is strictly correlated with the sintering temperature. Depending on the sintering temperature, different mass transport mechanisms (surface diffusion, evaporation-condensation, grain boundary diffusion, viscous flow, volume diffusion) are dominant; e.g., surface diffusion dominates low-temperature sintering.
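As a quick cross-check of the theoretical densities quoted above, a simple rule-of-mixtures estimate over the volume fractions reproduces the composite value used in Table 3. The sketch below is illustrative only: it assumes the theoretical composite density is a volume-weighted average of the component densities, and the bulk density in the last lines is a hypothetical sample value, not one of the reported results.

```python
# Rule-of-mixtures estimate of the theoretical density of the 80/20 vol.%
# NiAl/Al2O3 composite, and the relative density of a (hypothetical) sample.
rho_nial, rho_al2o3 = 5.91, 3.97   # g/cm^3, theoretical densities quoted above
f_nial = 0.80                      # volume fraction of NiAl

rho_theo = f_nial * rho_nial + (1.0 - f_nial) * rho_al2o3
print(f"theoretical composite density ~ {rho_theo:.2f} g/cm^3")  # ~5.52 g/cm^3

rho_bulk = 5.30                    # hypothetical measured bulk density
print(f"relative density rho_rel = {rho_bulk / rho_theo:.3f}")
```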
Based on Fig. 2, it can be concluded that densification improves with rising temperature for all types of sintered materials.
Microstructure Investigations
Sintered Al2O3 Ceramics
Depending on the type, chemical composition, and required properties, elements based on sintered aluminum oxide can be sintered under different technological conditions and, most importantly, over a wide temperature range, i.e., between 1400 and 1900 °C (Ref 27). An essential prerequisite for initiating the sintering of loose powders is packing them in a manner that enables their interaction, and bringing the arrangement to a temperature at which the atomic activity is sufficient to start the process. At the initial stage of Al2O3 sintering, intergranular contacts are created, which is a precondition for mass transport between grains. Contacts with the largest possible surface areas are formed through proper dispersion of the sintered powders or the application of an external force. The concentration gradient of lattice vacancies at the contact points of the grains and, more specifically, between the free grain surface and the contact surface, i.e., the nucleus of the future neck, is the driving force of the sintering process. This can be observed in Fig. 3a. The intermediate stage of aluminum oxide sintering begins with changes in the grain boundaries and in the size of the pores, which tend toward cylindrical shapes. This stage ends once pore shrinkage has taken place (Fig. 3b).
Fig. 3 SEM images of sintered aluminum oxide at different conditions: (a) Ts = 1300 °C, ts = 0 min, p = 5 MPa, (b) Ts = 1300 °C, ts = 0 min, p = 30 MPa
There are two alternative final stages of sintering, each of which begins with a significant reduction of the pore volume. The first one takes place under conditions enabling pores to eventually locate in the corners of three or four grains (Fig. 4), whereas the second one occurs when a fast discontinuous growth of grains precedes the movement of pores toward energetically suitable areas and closes them inside crystallites (Ref 28). The latter phenomenon was not observed in the studied case.
Fig. 4 SEM images of sintered aluminum oxide at different conditions: (a) Ts = 1300 °C, ts = 30 min, p = 30 MPa, (b) Ts = 1350 °C, ts = 30 min, p = 30 MPa, (c) Ts = 1400 °C, ts = 30 min, p = 30 MPa
Any geometrical changes in the shape and dimensions of the aluminum oxide grains are a result of diffusive processes. The diffusive transfer of ions is, in turn, directly related to the movement of vacancies, which agglomerate on the pore surface, move toward the grain boundaries, and subsequently migrate along them. As a result of these changes, a densified material with a negligible number of pores is eventually obtained.
Sintered NiAl Material
During the sintering process, the NiAl particulate is converted into a polycrystalline material. The evolution of the microstructure during sintering is shown in Fig. 5. At the initial stage (Fig. 5a), cohesive bonds are formed between particles. When the sintering process is continued (Fig. 5b), the necks between particles grow due to mass transport. Surface and grain boundary diffusion are usually the dominant mass transport mechanisms in sintering (Ref 1). As a result of the stresses in the neck and the surface tension, the particles are attracted to each other, which leads to the shrinkage of the system.
The described processes, shrinkage and mass transport, are inextricably linked to the overall reduction of material porosity (Fig. 5c).
Fig. 5 SEM images of sintered NiAl material at different conditions: (a) Ts = 1300 °C, ts = 10 min, p = 5 MPa, (b) Ts = 1400 °C, ts = 10 min, p = 5 MPa, (c) Ts = 1400 °C, ts = 30 min, p = 30 MPa
Sintered NiAl-Al2O3 Composite
The formation of adhesive contacts between particular grains should be treated as the starting point of the sintering process of composite materials. Subjecting the system of particles to temperature results in the appearance of intergranular necks (Fig. 6). It was observed that necks were formed between grains of the same material (NiAl-NiAl and Al2O3-Al2O3) as well as between metallic phase particles and aluminum oxide grains. At first, the necks were relatively small and of low durability; however, with time they enlarged and the material gradually densified. Smaller aluminum oxide grains either occurred in the form of single grains on the surface of the NiAl grains or formed bigger clusters in areas where a few NiAl grains were in contact.
Fig. 6 SEM images of sintered NiAl/Al2O3 material at an early stage of the sintering process: (a) Ts = 1300 °C, ts = 30 min, p = 5 MPa
The intermediate stage of sintering is characterized by simultaneous pore rounding, densification, and grain growth, and is controlled by the diffusion processes. As seen in Fig. 7a, with increasing temperature the necks begin to grow and a more compact structure is formed. Bonded grains can be observed in systems consisting of more than two grains. The average distance between adjacent grains decreases and the size of the necks grows. The ceramic grains are also connected with each other. The rise in temperature to 1400 °C results in the sintering of the Al2O3 particles (Fig. 7b).
Fig. 7 Intermediate stage of NiAl-Al2O3 composite material sintering: (a) Ts = 1350 °C, ts = 30 min, p = 5 MPa, (b) Ts = 1400 °C, ts = 30 min, p = 5 MPa
The final stage of the NiAl/Al2O3 sintering process is characterized by the elimination of pores from the composite structure. Compared to the initial and intermediate stages, final-stage sintering is a relatively slow process. A complex interaction between particles, pores, and grain boundaries plays a crucial role in the final densification. As seen in Fig. 8b, some tiny pores can be trapped between ceramic particles. Such structural defects, especially at the NiAl/Al2O3 interface, can affect the mechanical properties of the materials.
Fig. 8 SEM images of sintered NiAl/Al2O3 material at the final stage of the sintering process (Ts = 1400 °C, ts = 30 min, p = 30 MPa)
The presented analysis of the microstructure proves that the choice of the technological conditions of sintering determines the progress of the material's densification and, at the same time, its properties. Through control of the temperature, time, and pressure of the sintering process, it is possible to obtain a material with an acceptable level of porosity, which in turn can be almost entirely eliminated. SEM observations of the fracture surfaces indicate a brittle character of cracking; however, the path of cracking is different for the pure NiAl and the composite material. Failure in the pure NiAl phase runs characteristically in one direction through the NiAl grains, whereas the ceramic grains force the crack to wind its way across the ceramic material, which greatly elongates its path and thereby increases the strength of the composite.
As a result, the bending strength of the composite materials is also raised (Ref 19).
TEM Studies of the Ceramic-Metal Interface
In the case of multiphase materials, the interface plays a decisive role: the properties of the designed materials will mainly depend on the quality of this coupling. Properly selected sintering conditions should make the formation of a permanent coupling between the composite's components throughout its entire volume attainable. Couplings between the ceramic grains and NiAl were already created during sintering performed at 1300 °C; nevertheless, at first they were observable only at some points. With the increase in the sintering temperature, the process progressed until the composite's components were fully and permanently bound. Figure 9 depicts an exemplary NiAl-Al2O3 interface for selected sintering conditions.
Fig. 9 TEM micrographs of NiAl/Al2O3 composites: (a) Ts = 1350 °C, ts = 30 min, p = 30 MPa, (b) Ts = 1400 °C, ts = 30 min, p = 30 MPa
Figure 9 shows transmission electron micrographs in which Al2O3 can be seen where the contrast is brighter, while NiAl is observable where the contrast is darker, due to its orientation being close to symmetrical and a relatively high density of dislocations. The ceramic phase is located at the NiAl grain boundaries. The interface is relatively clean, with no additional phases that could have been formed during the sintering process. This indicates a good quality of the sintered samples, since the formation of a transition phase resulting from a reaction between the two phases would have weakened the interface. Similar observations can be made based on Fig. 10 at a slightly higher magnification. Moreover, a high density of dislocations can be seen in the NiAl phase. A few stacking faults are observed in Al2O3. There is no crystallographic relation between the two phases; the orientation relationships are rather random, and the diffraction pattern is only useful for identifying the phases. The microanalysis performed using a TEM EDS detector confirmed that along the marked line across the interface there were changes in the content of Ni, O, and Al which did not indicate the presence of any transition phases (Fig. 11). The TEM analyses proved that the bond at the NiAl/Al2O3 interface was quite strong and had an adhesive character. The contrast change at the interface also suggested that no diffusive-type interface layer was formed.
Fig. 10 TEM micrographs of NiAl/Al2O3 composites (a, b), (c) selected area diffraction pattern (SADP) from the area marked by a circle in (a), and (d) SADP from the area marked by a circle in (b). The zone axis resulting from the distances and angles between reflections in the SADPs is indicated
Fig. 11 Scanning transmission electron micrograph (a) and changes of the content of Al, O, Ni along the marked line (b)
Elastic Properties
The elastic constants of the obtained sintered samples were determined based on the measurements of ultrasonic velocities described in section 2. Table 4 presents detailed results of the measurements of the Young's modulus E and Poisson's ratio ν for the pure ceramic Al2O3, pure intermetallic NiAl, and NiAl/Al2O3 composite samples sintered under different conditions. Based on the presented values, the relations between the elastic moduli and the relative density ρ_rel for the three considered materials are illustrated in Fig. 12. In all cases, a substantial increase of the Young's modulus with the relative density can be seen.
For example, for pure ceramics, the difference between the maximum and minimum value of E equals 78%. The change of the Poisson's ratio is less pronounced; only in the case of the NiAl material can a slight increase of the Poisson's ratio be observed with the increase in density. The growth of the Young's modulus of Al2O3 and NiAl during the material's densification is similar. Additionally, in the case of the Poisson's ratio, its growth for the intermetallic material is more significant than for the ceramics.
Table 4 Evolution of the elastic mechanical properties (Young's modulus E (GPa) and Poisson's ratio ν) of hot-pressed Al2O3, NiAl, and NiAl/Al2O3 samples determined by ultrasonic measurements for the different sintering process parameters Ts (°C), ts (min), p (MPa). (The tabulated values are not reproduced here.)
Fig. 12 Experimental results of: (a) Young's modulus, and (b) Poisson's ratio of hot-pressed Al2O3, NiAl, and NiAl/Al2O3 samples as a function of relative density
The values of the elastic constants of the NiAl/Al2O3 composite apparently depend on the values of the elastic constants of its constituent phases with respect to their volume content. Due to the intermetallic and ceramic phase content in the analyzed composite material, i.e., 80% of NiAl and 20% of Al2O3, the values of the NiAl/Al2O3 elastic constants should be close to the intermetallic ones. Particularly at low relative density, ρ_rel < 0.95, this trend is confirmed regardless of the sintering parameters: the results for the composite's Young's modulus are fairly similar to the Young's modulus of NiAl (Fig. 12a). The explanation of this homogeneity of results can be found in the section devoted to the microstructure. Figure 13(a) and (b) shows the microstructure of NiAl/Al2O3 and NiAl samples with relative density close to 0.9. In the first one, we can see a skeleton with small alumina particles formed on the surface of the NiAl grains. At low densities, these small ceramic grains located on the NiAl surface have no impact on the stiffness of the composite. In comparison to the pure NiAl samples, the ceramic particles in the composite materials slightly reduce the porosity with no significant effect on the stiffness. For higher relative densities, ρ_rel > 0.95, the ceramic particles are strongly connected with the intermetallic ones in the whole volume of the composite material (Fig. 8), which is the major reason for a higher stiffness than in the case of the pure intermetallic material.
Fig. 13 Comparison of the microstructure of: (a) NiAl-Al2O3, and (b) NiAl samples at approximately the same stage of densification (ρ_rel ≈ 0.90)
The values of the elastic constants for the fully dense composite can be calculated theoretically from well-known models, such as the Voigt-Reuss or Hashin-Shtrikman limits (Ref 22, 23). From the values of the elastic moduli measured on fully dense specimens of pure NiAl and Al2O3 (see Table 4), one can calculate the theoretical limits for the Young's modulus of the NiAl/Al2O3 composite. They are, respectively, Voigt-Reuss: 202.5-227.7 GPa and Hashin-Shtrikman: 212.6-217.0 GPa. The maximum value measured on the samples of the NiAl/Al2O3 composite, 219 GPa, is well within the Voigt-Reuss limits but slightly above the Hashin-Shtrikman limits. This minor discrepancy may be caused by measurement errors as well as by some statistical fluctuations in the real phase content of the NiAl/Al2O3 specimens.
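For reference, the simpler of the two bound pairs quoted above is easy to reproduce. The sketch below (not from the paper) computes only the Voigt and Reuss bounds on the composite Young's modulus from assumed end-member moduli; the Hashin-Shtrikman limits additionally require the bulk and shear moduli of both phases and are therefore not included. The end-member values used here are illustrative assumptions, not the measured values of Table 4.

```python
def voigt_reuss_bounds(e1, e2, f1):
    """Upper (Voigt, iso-strain) and lower (Reuss, iso-stress) bounds on the
    Young's modulus of a two-phase composite, given end-member moduli e1, e2
    and the volume fraction f1 of phase 1."""
    f2 = 1.0 - f1
    e_voigt = f1 * e1 + f2 * e2
    e_reuss = 1.0 / (f1 / e1 + f2 / e2)
    return e_reuss, e_voigt

# Assumed (illustrative) fully dense moduli in GPa:
e_nial, e_al2o3 = 183.0, 405.0
lo, hi = voigt_reuss_bounds(e_nial, e_al2o3, f1=0.80)
print(f"Voigt-Reuss window for 80/20 NiAl/Al2O3: {lo:.1f} - {hi:.1f} GPa")
# roughly 205 - 227 GPa, close to the 202.5-227.7 GPa range quoted above
```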
The influence of the sintering temperature Ts, sintering time ts, and external pressure p on the elastic properties of the ceramic, intermetallic, and composite samples is illustrated in Fig. 14(a)-(c), respectively. It is rather obvious that for all three materials the elastic modulus rose as each of the three sintering parameters increased. It can be seen that samples manufactured under the theoretically least favorable sintering conditions show low and unsatisfactory values of material stiffness. In particular, the ceramic materials require better sintering parameters to avoid a situation where the Young's modulus of the samples stays at the level of 22% of the Young's modulus of fully dense alumina E0 (Ts = 1300 °C, ts = 0 min, p = 5 MPa). In general, all materials sintered at low pressure (5 MPa) have a very low Young's modulus, regardless of the other sintering parameters (temperature and time). Obviously, the main reason for such low stiffness is insufficient material densification; however, the effect of the microstructure's features should also be considered. Figures 3 and 7 present the microstructure of both ceramic and composite samples with a low value of E, where a high density of irregularly shaped pores can be observed. Sintered materials exhibit decreasing strength (and stiffness) when the pore shape becomes irregular; small spherical pores are preferable. Creating a favorable pore configuration depends on the process conditions (Ref 1). Furthermore, it should be stated that, because of the low relative density of the materials (in the presence of the second phase a higher density is not possible to obtain under these conditions), the stiffness of the composite specimens sintered at 5 MPa is lower than that of NiAl sintered under the same conditions (9 examples). Based on the results of the TEM investigations (section 3.2.4), one can observe that the addition of the ceramic phase is linked with the occurrence of an adhesive contact between the intermetallic and alumina particles. The quality of the NiAl-Al2O3 bond depends on the applied pressure. The importance of the sintering pressure as a critical parameter in the manufacturing process can particularly be seen in the case of the composite material, for which the difference between the Young's modulus of the samples sintered at 5 and 30 MPa is the most significant (Fig. 14c). The application of a higher external pressure (30 MPa) intensifies the interpenetration of the intermetallic and ceramic particles, at the same time allowing us to obtain a material with a low-porosity structure and a low number of isolated spherical pores (Fig. 8). Generally, samples manufactured at the higher pressure are characterized by a Young's modulus between 87 and 100% of the value for fully dense samples.
Fig. 14 Experimental results of the Young's modulus of: (a) pure Al2O3, (b) pure NiAl, and (c) NiAl/Al2O3 samples manufactured at different combinations of sintering temperatures and pressures as a function of the sintering time (E0: the Young's modulus of a fully dense material)
Conclusions
In the present work, a comparison was made between the sinterability of a two-phase NiAl-Al2O3 composite material and the sinterability of its separate components as a function of the parameters of the sintering process (temperature, time, and pressure). The proper choice of these parameters enables obtaining materials characterized by a density close to the theoretical one.
The observation of the structure of the sintered materials rendered the determination of the particular sintering stages possible for both the single-phase and the compound materials. It was discovered that the formation of couplings between the particular composite components (i.e., NiAl-NiAl, Al2O3-Al2O3, and NiAl-Al2O3) took place at about the same sintering time, and the quality (the Young's modulus) of these couplings improved as the sintering process progressed. The examination of the matrix-reinforcement interface proved the existence of a strong adhesive coupling. No new phases were found at the ceramic-metal phase boundary. The values of the elastic constants of NiAl/Al2O3 were close to the intermetallic ones due to the volume content of the NiAl phase. The Young's modulus and the Poisson's ratio of the analyzed composite material were similar to those of NiAl, especially at low densities, at which the small alumina particles had no impact on the composite's stiffness. The influence of an external pressure of 30 MPa seemed crucial for obtaining a satisfactory stiffness for the three kinds of studied materials, which were then characterized by a highly dense microstructure with a low number of isolated spherical pores. Based on the sintering tests performed for the particular composite components, preliminary information on the properties of two-phase materials was collected, which is likely to have a tremendous influence on the design of new materials with required characteristics.
Acknowledgments
The results presented in this paper have been obtained within the projects funded by the National Science Centre awarded by decisions number DEC-2012/05/N/ST8/03376 and DEC-2013/11/B/ST8/03287, the Operational Programme Human Capital 8.2.1 "Wsparcie przedsiębiorczości naukowców bio tech med poprzez stypendia, staże i szkolenia," as well as the "KomCerMet" project (contract no. POIG.01.03.01-14-013/08-00 with the Polish Ministry of Science and Higher Education) within the framework of the Operational Programme Innovative Economy 2007-2013.
Open Access: This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
References
1. R.M. German, Sintering—Theory and Practice, A Wiley Interscience Publications, New York, 1996
2. P. Nieroda, R. Zybala, and K.T. Wojciechowski, Development of the Method for the Preparation of Mg2Si by SPS Technique, Am. Inst. Phys. Conf. Proc., 2012, 1449, p 199–202
3. M. Chmielewski and K. Pietrzak, Processing, Microstructure and Mechanical Properties of Al2O3-Cr Nanocomposites, J. Eur. Ceram. Soc., 2007, 27, p 1273–1279
4. W. Olesińska, D. Kaliński, M. Chmielewski, R. Diduszko, and W. Włosiński, Influence of Titanium on the Formation of a "Barrier" Layer During Joining an AlN Ceramic with Copper by the CDB Technique, J. Mater. Sci.: Mater. Electron., 2006, 17, p 781–788
5. M. Barlak, J. Piekoszewski, J. Stanislawski, Z. Werner, K. Borkowska, M. Chmielewski, B. Sartowska, M. Miskiewicz, W. Starosta, L. Walis, and J. Jagielski, The Effect of Intense Plasma Pulse Pre-Treatment on Wettability in Ceramic-Copper System, Fusion Eng. Des., 2007, 82, p 2524–2530
6. K. Pietrzak, D. Kaliński, and M. Chmielewski, Interlayer of Al2O3-Cr Functionally Graded Material for Reduction of Thermal Stresses in Alumina—Heat Resisting Steel Joints, J. Eur. Ceram. Soc., 2007, 27(2-3), p 1281–1286
7. W. Włosiński and T. Chmielewski, Plasma-Hardfaced Chromium Protective Coatings—Effect of Ceramic Reinforcement on Their Wettability by Glass, Adv. Sci. Technol., 2003, 32, p 253–260
8. K. Wojciechowski, R. Zybala, and R. Mania, High Temperature CoSb3—Cu Junctions, Microelectron. Reliab., 2011, 51, p 1198–1202
9. M. Barlak, W. Olesińska, J. Piekoszewski, Z. Werner, M. Chmielewski, J. Jagielski, D. Kaliński, B. Sartowska, and K. Borkowska, Ion Beam Modification of Ceramic Component Prior to Formation of AlN-Cu Joints by Direct Bonding Process, Surf. Coat. Technol., 2007, 201, p 8317–8321
10. T. Zhang, H. Guo, S. Gong, and H. Xu, Effects of Dy on the Adherence of Al2O3/NiAl Interface: A Combined First-Principles and Experimental Studies, Corrosion Sci., 2013, 66, p 59–66
11. K. Morsi, Review: Reaction Synthesis Processing of Ni-Al Intermetallic Materials, Mater. Sci. Eng. A, 2001, 299, p 1–15
12. R. Darolia, Ductility and Fracture Toughness Issues Related to Implementation of NiAl for Gas Turbine Applications, Intermetallics, 2000, 8, p 1321–1327
13. T. Chmielewski and D.A. Golański, New Method of In-Situ Fabrication of Protective Coatings Based on Fe-Al Intermetallic Compounds, Proc. Inst. Mech. Eng. Part B, 2011, 225(B4), p 611–616
14. K. Matsuura, T. Kitamutra, and M. Kudoh, Microstructure and Mechanical Properties of NiAl Intermetallic Compound Synthesized by Reactive Sintering Under Pressure, J. Mater. Process. Technol., 1997, 63, p 293–302
15. D. Tingaud and F. Nardou, Influence of Non-Reactive Particles on the Microstructure of NiAl and NiAl-ZrO2 Process by Thermal Explosion, Intermetallics, 2008, 16, p 732–737
16. J. Cook, C.C. Evans, J.E. Gordon, and D.M. Marsh, Mechanism for Control of Crack Propagation in All-Brittle Systems, Proc. R. Soc. Lond. Ser. A, 1964, 282(1390), p 508–520
17. W.H. Tuan, Toughening Alumina with Nickel Aluminide Inclusions, J. Eur. Ceram. Soc., 2000, 20, p 895–899
18. W.H. Tuan, W.B. Chou, H.C. You, and S.T. Chang, The Effects of Microstructure on the Mechanical Properties of Al2O3-NiAl Composites, Mater. Chem. Phys., 1998, 56, p 157–162
19. D. Kaliński, M. Chmielewski, and K. Pietrzak, An Influence of Mechanical Mixing and Hot-Pressing on Properties of NiAl/Al2O3 Composite, Arch. Metall. Mater., 2012, 57(3), p 694–702
20. M. Chmielewski, D. Kaliński, K. Pietrzak, and W. Włosiński, Relationship Between Mixing Conditions and Properties of Sintered 20AlN/80Cu Composite Materials, Arch. Metall. Mater., 2010, 55(2), p 579–585
21. C.L. Hsieh, W.H. Tuan, and T.T. Wu, Elastic Behaviour of a Model Two-Phase Material, J. Eur. Ceram. Soc., 2004, 24, p 3789–3793
22. C.L. Hsieh and W.H. Tuan, Elastic Properties of Ceramic Metal Particulate Composites, Mater. Sci. Eng. A, 2005, 393, p 133–139
23. H.A. Bruck, Y.M. Shabana, B. Xu, and J. Laskis, Evolution of Elastic Mechanical Properties During Pressureless Sintering of Powder-Processed Metals and Ceramics, J. Mater. Sci., 2007, 42, p 7708–7715
24. S. Nosewicz, J. Rojek, S. Mackiewicz, M. Chmielewski, K. Pietrzak, and B. Romelczyk, The Influence of Hot Pressing Conditions on Mechanical Properties of Nickel Aluminide/Alumina Composite, J. Compos. Mater., 2014 (in press), doi: 10.1177/0021998313511652
25. L. Nicolas and A. Borzacchiello, Encyclopedia of Composites, 2nd ed., Wiley, New Jersey, 2012
26. J. Bidulská, R. Bidulský, T. Kvačkaj, and M.A. Grande, Porosity Evolution in Relation to Microstructure/Fracture of ECAPed PM Al-Mg-Si-Cu-Fe Alloy, Steel Res. Int., SI, 2012, p 1191–1194
27. E.K.H. Li and P.D. Funkenbusch, Hot Isostatic Pressing (HIP) of Powder Mixtures and Composites, Metall. Trans., 1993, 24A, p 1345–1354
28. J. Lis and R. Pampuch, The Role of Surface-Diffusion in the Sintering of Micropowders, J. Phys., 1986, 47, p 219–223
M. Chmielewski, S. Nosewicz, K. Pietrzak, J. Rojek, A. Strojny-Nędza, S. Mackiewicz, J. Dutkiewicz, https://doi.org/10.1007/s11665-014-1189-z, Journal of Materials Engineering and Performance, Issue 11/2014, Electronic ISSN: 1544-1024
Orbital effects in strong-field Rydberg state excitation of N2, Ar, O2 and Xe
Fenghao Sun,1,4 Chenxu Lu,1,4 Yongzhe Ma,1 Shengzhe Pan,1 Jiawei Wang,1 Wenbin Zhang,1 Junjie Qiang,1 Fei Chen,1 Hongcheng Ni,1,2 Hui Li,1,5 and Jian Wu1,2,3,6
Opt. Express, Vol. 29, Issue 20, pp. 31240-31248 (2021), https://doi.org/10.1364/OE.437437
1 State Key Laboratory of Precision Spectroscopy, East China Normal University, Shanghai 200241, China
2 Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006, China
3 CAS Center for Excellence in Ultra-intense Laser Science, Shanghai 201800, China
4 Fenghao Sun and Chenxu Lu contributed equally to this paper.
5 [email protected]
6 [email protected]
Original Manuscript: July 13, 2021; Revised Manuscript: September 2, 2021; Manuscript Accepted: September 2, 2021

Rather than being freed to the continuum, strong-field tunneled electrons can follow a trajectory driven by the remaining laser field and have a certain probability of being captured into high-lying Rydberg states of the parent atoms or molecules. To explore the effect of the molecular orbital on Rydberg state excitation, the ellipticity dependence of the Rydberg state yields of N2 and O2 molecules is experimentally investigated using cold target recoil ion momentum spectroscopy and compared with that of their counterpart atoms Ar and Xe, which have comparable ionization potentials. We found that the generation probability of the neutral Rydberg fragment O2* was orders of magnitude higher than that of Xe* due to the butterfly-shaped highest occupied molecular orbital of O2. Meanwhile, our experimental and simulation results reveal that it is the initial momentum distribution (determined by the detailed characteristics of the orbitals) that finally leads to the tendency that the Rydberg state yield of O2 (Ar) decreases more slowly than that obtained for Xe (N2) when the ellipticity of the excitation laser field is increased. © 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction
When atoms and molecules are exposed to an intense laser field, the bound electrons can be released to the continuum through tunneling or multiphoton ionization.
Rather than directly escaping, it was demonstrated that an initially freed electron may revisit its parent nucleus due to the combined influence of the Coulomb attraction and the laser field. This gives rise to many interesting phenomena, such as high harmonic generation (HHG) [1,2], non-sequential double ionization [3–5], laser-induced electron diffraction [6,7], etc. Moreover, the freed electron can be recaptured into high-lying Rydberg states below the ionization threshold, thus forming excited neutral particles. The Rydberg state excitation (RSE) yield has been observed using, e.g., reaction spectroscopy [8–10]. The frustrated tunneling ionization (FTI) and multiphoton excitation scenarios have been experimentally and theoretically demonstrated as the underlying mechanisms of the strong-field RSE of atoms and molecules [11–14]. From the FTI perspective, the RSE process can be simply understood by a semi-classical picture: the electrons tunnel near the maxima of the oscillating optical field and gain nearly zero drift momentum from the remaining laser field, so that they may finally remain in Rydberg orbitals owing to the Coulomb attraction of the parent ion. Since the scenario of FTI is similar to the well-known three-step model [2], the underlying relationship between them is also a fascinating topic. In the multiphoton scenario, the electron directly populates the Rydberg states via resonant multiphoton excitation; thus the process can take place under circularly polarized laser fields, which serves as the main distinct feature compared to the situation in the FTI scenario. A comparative study of a molecule and its companion atom (the atom with a comparable ionization potential (IP) value to that of the molecule) provides an important way to reveal the orbital or structural effect of a molecule in its response to the external laser field. Recent studies proved that the molecular orbitals significantly influence strong-field processes and lead to some fascinating phenomena [15–17]. In this work, we experimentally investigated the ellipticity dependence of Rydberg excitation yields for two pairs of targets, i.e., O2/Xe and N2/Ar, where the ionization potentials of the atomic and molecular species in each pair are similar. Our experimental results showed that, when excited with 25 fs laser pulses at an intensity around 1×10^14 W/cm^2, the RSE yields were maximized at linear polarization and the yields decreased gradually when the ellipticity of the excitation laser field was increased. However, the descending yield as a function of ellipticity exhibited dramatic differences for different targets, which could be attributed to the effect of the detailed molecular orbitals. A similar ellipticity dependence, with comparable descending behavior, has also been observed in HHG [18]. Our experiment can help to improve the understanding of the underlying physics of electron recombination processes, including RSE and HHG.
2. Experimental method
The measurements were carried out in an ultrahigh-vacuum reaction microscope of cold target recoil ion momentum spectroscopy (COLTRIMS) [19,20]. A combination of a quarter-wave plate (QWP) and a half-wave plate (HWP) was employed to adjust the ellipticity of the femtosecond pulses (25 fs, 790 nm, 10 kHz) from a multipass Ti:sapphire laser system. The pulses were afterwards focused onto a supersonic gas jet (1:1 mixture of Xe/O2 and N2/Ar) by a concave mirror (f = 75 mm) in the COLTRIMS.
Inside the spectrometer, the created ions and electrons, guided by a weak homogeneous dc electric field (E_s ≈ 18.5 V/cm) and a magnetic field (B ≈ 12 G), respectively, were detected by two time- and position-sensitive micro-channel plate (MCP) detectors at the opposite ends of the spectrometer. In our experiment, the laser pulses were elliptically polarized in the y-z plane with the major axis along the z axis, and the ellipticity was finely controlled by rotating the half-wave plate so that the major axis of the elliptical polarization was kept unchanged. The electric field of the laser can be written as F(t) = F_0 f(t)[cos(ωt) e_z + ε sin(ωt) e_y]/√(1 + ε^2), where ω is the frequency of the laser, ε is the ellipticity, and f(t) is the slowly varying pulse envelope. The laser intensity in the interaction region was calibrated to be 8×10^13 W/cm^2 for the Xe/O2 measurement and 1.2×10^14 W/cm^2 for the Ar/N2 measurement by tracing the intensity-dependent time-of-flight spectrum of protons from the dissociative ionization of H2 [21]. The Keldysh parameters were calculated to be about 1 for each pair of atoms and molecules at the laser intensities used in the present work. Utilizing the coincidence detection technique, not only could the electron and ion fragments be detected, but the excited neutral atoms or molecules in Rydberg states could also be identified on the detector if they were post-pulse ionized by the static electric field of the spectrometer or by black-body radiation (BBR).
3. Results and discussions
To identify the neutral atomic and molecular fragments populated in the Rydberg states, typical photoelectron-photoion-coincidence (PEPICO) spectra for O2/Xe and N2/Ar were measured, as displayed in Fig. 1. The neutral fragments excited to Rydberg states were further ionized by the spectrometer DC electric field (∼18.5 V/cm) or through the BBR process during their flight towards the detector. The corresponding signal featured several long lines at long flight times, which gradually disappear with increasing ellipticity. Here the observed signals with long times of flight (TOF > 60 ns) were interpreted as post-pulse ionization of high-lying Rydberg states generated by the recapture of the freed electrons [8,13]. In the PEPICO measurement, the obtained signal can cover a wide range of the principal quantum number (n) of the Rydberg states. Time-delayed ionization of Rydberg states has been studied in detail previously [9,22], showing that the process can be divided into two main parts, as indicated in Figs. 1 and 2(b) by the red dashed lines at a TOF of around 200 ns; this division is mainly determined by the quantum number distribution of the RSE and the strength of the DC field [22]. The yield decreased rapidly from about 60 ns to 200 ns due to DC field ionization (n > 74) [10,22], and the decrease became slower for TOF beyond 200 ns, which is attributed to BBR (n > 10) [22]. Here we counted the excited atoms from about TOF = 60 ns (the border between strong-field ionization and RSE, as illustrated in the inset of Fig. 1(a)), where clear statistics for the RSE events could be obtained. The experimental results (shown in Fig. 2(b)) showed that the high-lying Rydberg fragments obtained in our experiment are mainly detected in the first 200 ns time window. As can be seen in Figs.
1 and 2(a), the yields denoted by Y(M*) (where M* represents the neutral Rydberg particle of the species M) and the generation probability Y(M*)/Y(M+) (where M+ represents the singly ionized fragment of the species M) of N2* and Ar* were comparable, which agrees with the results reported in Ref. [15]. However, we found that the generation probability of the neutral Rydberg fragment O2* was orders of magnitude higher than that of Xe*, which is significantly different from the previous results [15]. This could be rooted in the different detection methods used in our work compared to that used in Ref. [15]. In principle, two reasons may contribute to the orders-of-magnitude-higher ratio of O2*/O2+ compared to Xe*/Xe+. Firstly, the ionization suppression in O2 leads to a relatively lower O2+ population than that of Xe+ [23–27]. Secondly, the occupation of Rydberg orbitals with higher principal quantum numbers is larger for O2 than for Xe. Due to the butterfly-shaped molecular orbital, the electrons emitted from O2 tend to have a broader initial momentum distribution and thus diffuse away from the nuclei more easily than electrons emitted from Xe [15]. It is reasonable to infer that the recaptured electrons from O2 are more likely to occupy Rydberg orbits with higher angular momentum. As mentioned above, the excited fragments in our experiments are mainly detected through DC field ionization in the spectrometer, which is associated with higher principal quantum numbers (n > 74) than those of Ref. [15] (20 < n < 30).
Fig. 1. PEPICO spectra obtained in linearly polarized femtosecond laser pulses for (a) O2/Xe at an intensity of 8×10^13 W/cm^2 and (b) N2/Ar at an intensity of 1.2×10^14 W/cm^2. The inset in Fig. 1(a) shows the enlarged spectra around TOF = 60 ns. The yellow dashed line indicates the division between strong-field ionization and RSE. The red dashed curves in both figures indicate the separation between DC electric field ionization and photoionization due to BBR of the Rydberg atoms and molecules.
Fig. 2. (a) The measured ellipticity dependence of the electron recapture probability for the four indicated species. (b) The normalized yield of N2* as a function of time of flight (TOF) for two laser ellipticities, i.e., ε = 0 and 0.14, respectively.
Interestingly, previous theoretical studies predicted that the principal quantum number distributions of the Rydberg state occupation do not show clear differences between linearly and circularly polarized laser fields [28], which has yet to be demonstrated by experimental evidence. Recently, an alternative method [22] to detect the Rydberg states was proposed and showed that the yield of post-pulse ionized Rydberg atoms or molecules as a function of the time of flight is sensitive to the external DC field strength. This is due to the fact that the ionization rate in a static field obeys the saddle-point model F = Z^3/(9n^4), which incorporates the effect of the linear Stark shift, where F is the external static field and Z represents the charge of the state [10,15]. Here, as shown in Fig. 2(b), there was almost no difference between the normalized yields of the ionized Rydberg molecules in the external DC field at two different laser ellipticities, i.e., ε = 0 (the red curve) and 0.14 (the blue curve).
This implies that the n distribution of N2* does not exhibit an apparent ellipticity dependence for laser fields of small ellipticity (similar results are found for Xe*, O2* and Ar*), since different n distributions would lead to a noticeable change of the ionization ratio by the DC field at short TOFs. Through integration of the PEPICO spectra, the yields of the high-lying Rydberg states of N2*, O2*, Ar* and Xe* from weak-DC-field and BBR-induced ionization can be obtained. Figures 3(a) and (b) show the normalized yield of the RSE for the two pairs of target species as a function of laser ellipticity. The normalization is done by dividing the RSE yield at each ellipticity by the maximum yield obtained in the linearly polarized field. The normalized yields here include the events from both the DC-field ionization and the BBR process. As can be seen in Fig. 3(a), the RSE yields of O2 (red squares) and Xe (black circles) exhibited a maximum in the linearly polarized laser field, i.e., at ε = 0, and decreased quickly with increasing ellipticity. We fit the ellipticity dependence data with a Gaussian profile, as shown by the solid curves in Fig. 3(a). The full width at half maximum (FWHM) of the Gaussian distribution of O2* is wider than that of Xe*, indicating a weaker ellipticity dependence of the RSE for O2 than for Xe. Similarly, the ellipticity dependences of N2* and Ar* are shown in Fig. 3(b). The situation for N2/Ar was different from that of O2/Xe: here the FWHM of the atomic species Ar* is larger than that of N2*. These results indicate that, besides the ionization potential of the targets, there must be other factors associated with the inherent properties of the targets that influence the RSE process.
Fig. 3. The measured ellipticity dependence of the normalized RSE yields of (a) O2* and Xe* and (b) N2* and Ar*. The scattered data are experimental results and the solid lines are fitting curves using Gaussian functions. (c) and (d) are the corresponding simulation results based on the trajectory model. (e) and (f) are the initial transverse momentum distributions for the O2/Xe and N2/Ar pairs at certain molecular orientations, i.e., ϕO2 = 0° and ϕN2 = 90° with respect to the linear polarization of the driving field along 0°.
To gain a deeper understanding of the mechanisms of the atomic and molecular RSE, a simple semi-classical model was employed. The wave functions of the highest occupied molecular orbitals (HOMOs) of N2 and O2 were described as linear superpositions of atomic orbitals [29–32],
(1) Ψ_N2 ∝ cos(p·R/2)
(2) Ψ_O2 ∝ sin(p·R/2).
Here R is the vector pointing from one atomic center to the other, and p is the momentum of the electron. The molecular orbital affects the initial conditions of the ejected electron at the tunnel exit [33,34]. As shown in Eq. (1), a cosine term appears in the wave function of N2. For a molecular orientation perpendicular to the polarization direction, the ejected electron from N2 tends to have zero initial transverse momentum along the polarization direction, i.e., cos(p·R/2) = 1 for p⊥R. But the electron from O2 prefers to have a nonzero initial transverse momentum because of the sine term in its wave function (sin(p·R/2) = 0 for p⊥R).
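As a quick numerical illustration of how these LCAO factors shape the tunneled electron's momentum, the short C sketch below tabulates the weights |cos(p·R/2)|^2 and |sin(p·R/2)|^2 as a function of the momentum component along the internuclear axis. It is not taken from the simulation code used in this work; the internuclear distance is an assumed, purely illustrative value.

```c
/* Minimal numerical sketch (not the authors' MO-QTMC code): tabulate the
 * LCAO momentum-space weights |cos(p.R/2)|^2 (N2-like, Eq. (1)) and
 * |sin(p.R/2)|^2 (O2-like, Eq. (2)) versus the momentum component p_R
 * along the internuclear axis.  The internuclear distance R = 2.1 a.u.
 * is an illustrative assumption, not a value quoted in the text. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double R = 2.1;                 /* assumed internuclear distance, a.u. */
    printf("# p_R(a.u.)  |cos(pR/2)|^2  |sin(pR/2)|^2\n");
    for (double pR = -1.5; pR <= 1.5 + 1e-9; pR += 0.25) {
        double arg = pR * R / 2.0;
        double wN2 = cos(arg) * cos(arg); /* sigma_g-like HOMO: peaks at p_R = 0 */
        double wO2 = sin(arg) * sin(arg); /* pi_g-like HOMO: vanishes at p_R = 0 */
        printf("%8.2f  %12.4f  %12.4f\n", pR, wN2, wO2);
    }
    return 0;
}
```

The cosine weight is maximal at zero momentum along the axis, whereas the sine weight vanishes there; this is the qualitative origin of the broader (narrower) initial transverse momentum distributions obtained for O2 (N2) at the orientations shown in Figs. 3(e) and 3(f).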
Recently, the molecular quantum trajectory Monte Carlo (MO-QTMC) model has been proposed under the framework of the strong-field approximation (SFA) [31,32], in which the static Ammosov-Delone-Krainov (ADK) ionization rate is extended to the molecular frame through multiplication by a molecule-dependent modification factor. This ionization rate shares with the widely used molecular orbital (MO)-ADK theory [35] the concept that the initial molecular wave function can be expressed as a linear combination of atomic wave functions. Unlike the traditional ADK rate, the rate derived from the SFA not only modulates the initial transverse momentum distribution but is also orientation dependent. Based on this, the ionization rate for molecules was given by W(t_0, p_y) = a^2 W_0(t_0) W_⊥(p_y) [31,32]. A linearly polarized laser field is applied in our simulation. The initial momentum distribution is calculated using a quasi-static formula, where the laser field ellipticity only affects the strength of the electric vector along the major axis and does not have an obvious influence on the initial momentum distribution of the electrons. Figures 3(c) and 3(d) show simulation results of the ellipticity dependence of strong-field Rydberg state excitation of N2, Ar, O2 and Xe averaged over various molecular orientations, which are in good agreement with the experimental results. Specific molecular orientations were selected in the calculation to show the main differences in the orbital-character-induced initial transverse momentum distributions of the various targets. Within the SFA [36], the direct transition amplitude from the field-free bound state to a Volkov state can be expressed as
(3) M(p_f) = −i ∫_0^{T_p} dt ⟨p_f + A(t) | r·E(t) | φ_g⟩ e^{iS(t)}
where φ_g is the ground state of the molecule or atom, T_p is the duration of the laser pulse, p_f is the asymptotic drift momentum, E(t) is the electric field, and S(t) = (1/2) ∫ dt [p_f + A(t)]^2 + I_p t is the classical action. By further assuming that the electric field is static and using the relationship p_i = p_f − A(t_r), the momentum distribution at the tunnel exit can be obtained, where p_i is the initial momentum and t_r is the real part of the saddle-point time [32]. As shown in Fig. 3(e), the calculated initial transverse momentum distribution of O2, with the molecular axis oriented at 0° with respect to the laser polarization direction, shows a wider profile than that of the Xe atom, which is rooted in the butterfly shape of the O2 HOMO [35,37]. Therefore, the electron from O2 is more likely to tunnel out with a nonzero initial transverse velocity at certain orientations with respect to the laser polarization direction. For the N2/Ar pair, where the N2 molecules were oriented at 90° with respect to the laser polarization (shown in Fig. 3(f)), the initial transverse momentum distribution of Ar was wider than that of N2. The spindle-like molecular orbital of N2 could be responsible for this effect.
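To make the frustrated tunneling ionization picture concrete, the following sketch propagates a single classical electron trajectory in an elliptically polarized pulse and checks whether it ends up bound (total energy below zero) after the pulse. This is a deliberately stripped-down illustration of the kind of trajectory simulation referenced below [38], not the code used for Figs. 3(c) and 3(d); the tunnel-exit position, the cos^2 envelope, the neglect of the Coulomb force during propagation, and all parameter values (ionization potential, intensity, ellipticity, pulse length) are simplifying assumptions.

```c
/* Minimal classical sketch of frustrated tunneling ionization (FTI) in an
 * elliptically polarized pulse, in atomic units.  This is NOT the trajectory
 * code used for Figs. 3(c)-(d); the tunnel-exit position, the cos^2 envelope,
 * the neglect of the Coulomb force during propagation, and all parameter
 * values below are simplifying assumptions made only to illustrate the
 * recapture criterion (final total energy < 0). */
#include <stdio.h>
#include <math.h>

#define OMEGA   0.0577   /* 790 nm laser frequency, a.u. (assumed) */
#define F0      0.0477   /* peak field for ~8e13 W/cm^2, a.u. (assumed) */
#define EPS     0.1      /* ellipticity (assumed) */
#define IP      0.45     /* ionization potential, a.u. (assumed, Xe-like) */
#define NCYC    6        /* pulse length in optical cycles (assumed) */

static void field(double t, double Tp, double *Ez, double *Ey)
{
    double env = (t >= 0.0 && t <= Tp) ? pow(cos(M_PI * (t - Tp / 2.0) / Tp), 2) : 0.0;
    double norm = sqrt(1.0 + EPS * EPS);
    *Ez = F0 * env * cos(OMEGA * t) / norm;
    *Ey = F0 * env * EPS * sin(OMEGA * t) / norm;
}

int main(void)
{
    double Tp = NCYC * 2.0 * M_PI / OMEGA;   /* pulse duration */
    double t0 = Tp / 2.0;                    /* birth at a field crest near the envelope peak */
    double dt = 0.05;

    printf("# v_perp0(a.u.)  final energy(a.u.)  outcome\n");
    for (double vy0 = -0.3; vy0 <= 0.3 + 1e-9; vy0 += 0.05) {
        double Ez, Ey;
        field(t0, Tp, &Ez, &Ey);
        double z = -IP / fabs(Ez), y = 0.0;   /* tunnel exit on the down-field side (simplified) */
        double vz = 0.0, vy = vy0;
        for (double t = t0; t <= Tp; t += dt) {      /* laser force only, simple Euler step */
            field(t, Tp, &Ez, &Ey);
            vz += -Ez * dt;  vy += -Ey * dt;         /* a = -E for the electron */
            z  += vz * dt;   y  += vy * dt;
        }
        double r = sqrt(z * z + y * y);
        double Etot = 0.5 * (vz * vz + vy * vy) - 1.0 / r;
        printf("%10.2f  %14.4f  %s\n", vy0, Etot,
               Etot < 0.0 ? "captured (Rydberg candidate)" : "ionized");
    }
    return 0;
}
```

Trajectories whose residual drift momentum is small enough end up with negative total energy and would be counted as Rydberg (FTI) events; increasing the ellipticity or the peak field shifts the window of favorable initial transverse velocities, which is the compensation mechanism invoked below to explain the intensity dependence shown in Fig. 4.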
Our results agree with previous theoretical studies, which predicted that the initial momentum distributions can influence the RSE process dramatically [38]. Based on our calculations, the electron initial transverse momentum distributions exhibited the minimum width for molecules with spindle-shaped orbitals. For O2 molecules with the butterfly-shaped molecular orbital, the initial momentum distributions had the maximum width. The distributions for the atoms lay in between. Moreover, we conducted a simple trajectory simulation following Landsman's work [38]. The subsequent particle dynamics was simulated in the FTI picture: electrons launched with a certain initial momentum distribution at the tunnel exit are afterwards driven by the laser field. The normalized yields of the RSE events under different laser ellipticities could thus be calculated and are presented in Figs. 3(c) and (d) for the O2/Xe and N2/Ar pairs, respectively. The simulation results are qualitatively consistent with the measured data shown in Figs. 3(a) and (b). Although the Coulomb effect and the n distribution were not considered in the present simulation, the resulting tendency already captures the main physical picture of the RSE process. With the present simulation, we show that the RSE can take place when the transverse momentum at the tunneling exit is compensated by the action of the oscillating laser field, such that the electron is decelerated and recaptured by the Coulomb field. Figure 4(a) presents the ellipticity dependence of the recapture probability of O2 at two different laser intensities, i.e., 8×10^13 W/cm^2 and 3×10^14 W/cm^2. The ellipticity dependence of the Rydberg yields at the higher intensity showed a narrower distribution than that obtained at the lower intensity. This effect agrees with the mechanism that the initial transverse momentum and the drift momentum obtained from the laser field compensate each other in the FTI process. As the intensity increases, the component of the laser electric field in the transverse direction also increases; therefore, the initial momentum is no longer large enough to cancel the drift momentum caused by the oscillating laser field, and the corresponding RSE yield decreases faster at higher ellipticities. This results in a narrower width of the ellipticity dependence of the RSE at higher laser intensity.
Fig. 4. The (a) experimentally measured (-0.3 < ε < 0.3) and (b) numerically simulated (-0.15 < ε < 0.15) ellipticity dependence of the RSE yield of O2* at two laser intensities, i.e., 0.8 and 3.0×10^14 W/cm^2.
In summary, we experimentally and theoretically investigated the ellipticity dependence of the Rydberg yields of two pairs of targets, in which the atomic and the molecular species exhibit similar ionization potentials [15]. We found that the generation probability of the neutral Rydberg fragment O2* was orders of magnitude higher than that of Xe* due to the butterfly-shaped molecular orbital of O2, for which strong-field ionization is highly suppressed. The PEPICO measurements showed distinct ellipticity dependences of the RSE yields for the different species at relatively low laser intensities. For the O2 molecule featuring the butterfly-shaped orbital, the ellipticity-dependent distribution of the RSE yield showed a larger width compared to that obtained for its counterpart atomic target Xe, while the distribution width of the ellipticity-dependent yield of N2* was somewhat smaller than that of Ar*.
These results could be explained by simple classical simulations considering the effect of molecular orbitals. It was indicated that the compensation of the initial transverse momentum of the freed electron and the drift momentum obtained from the external optical field could lead to the observed ellipticity dependence. Similar behavior has also been found in the HHG process in previous work [18]. Together with it, our experiments could serve as a powerful proof for exploring the FTI processes in the tunneling regime. Our work revealed the effect of molecular orbitals in the RSE process for various atomic and molecular species. Theoretical models involving Coulomb effects would be required to improve the simulation results. National Key Research and Development Program of China (2018YFA0306303); National Natural Science Foundation of China (11621404, 11704124, 11834004, 11904103, 92050105); the Project supported by Science and Technology Commission of Shanghai Municipality (19JC1412200, 19ZR1473900, 21ZR1420100). The authors declare that there are no conflicts of interest related to this article. The data underlying the results presented herein are not publicly available currently but can be obtained from the authors upon reasonable request. 1. A. L' Huillier and P. Balcou, "High-order harmonic generation in rare gases with a 1-ps 1053-nm laser," Phys. Rev. Lett. 70(6), 774–777 (1993). [CrossRef] 2. P. B. Corkum, "Plasma perspective on strong field multiphoton ionization," Phys. Rev. Lett. 71(13), 1994–1997 (1993). [CrossRef] 3. D. N. Fittinghoff, P. R. Bolton, B. Chang, and K. C. Kulander, "Observation of nonsequential double ionization of helium with optical tunneling," Phys. Rev. Lett. 69(18), 2642–2645 (1992). [CrossRef] 4. B. Walker, B. Sheehy, L. F. DiMauro, P. Agostini, K. J. Schafer, and K. C. Kulander, "Precision Measurement of Strong Field Double Ionization of Helium," Phys. Rev. Lett. 73(9), 1227–1230 (1994). [CrossRef] 5. F. Sun, X. Chen, W. Zhang, J. Qiang, H. Li, P. Lu, X. Gong, Q. Ji, K. Lin, H. Li, J. Tong, F. Chen, C. Ruiz, J. Wu, and F. He, "Longitudinal photon-momentum transfer in strong-field double ionization of argon atoms," Phys. Rev. A 101(2), 021402 (2020). [CrossRef] 6. T. Zuo, A. D. Bandrauk, and P. B. Corkum, "Laser-induced electron diffraction: a new tool for probing ultrafast molecular dynamics," Chem. Phys. Lett. 259(3-4), 313–320 (1996). [CrossRef] 7. C. I. Blaga, J. Xu, A. D. DiChiara, E. Sistrunk, K. Zhang, P. Agostini, T. A. Miller, L. F. DiMauro, and C. D. Lin, "Imaging ultrafast molecular dynamics with laser-induced electron diffraction," Nature (London) 483(7388), 194–197 (2012). [CrossRef] 8. T. Nubbemeyer, K. Gorling, A. Saenz, U. Eichmann, and W. Sandner, "Strong-Field Tunneling without Ionization," Phys. Rev. Lett. 101(23), 233001 (2008). [CrossRef] 9. S. Larimian, C. Lemell, V. Stummer, J. Geng, S. Roither, D. Kartashov, L. Zhang, M. Wang, Q. Gong, L. Peng, S. Yoshida, J. Burgdörfer, A. Baltuška, M. Kitzler, and X. Xie, "Localizing high-lying Rydberg wave packets with two-color laser fields," Phys. Rev. A 96(2), 021403 (2017). [CrossRef] 10. F. Sun, W. Zhang, P. Lu, Q. Song, K. Lin, Q. Ji, J. Ma, H. Li, J. Qiang, X. Gong, H. Li, and J. Wu, "Dissociative frustrated double ionization of N2Ar dimers in strong laser fields," J. Phys. B 53(3), 035601 (2020). [CrossRef] 11. W. Zhang, Z. Yu, X. Gong, J. Wang, P. Lu, H. Li, Q. Song, Q. Ji, K. Lin, J. Ma, H. Li, F. Sun, J. Qiang, H. Zeng, F. He, and J. 
Wu, "Visualizing and Steering Dissociative Frustrated Double Ionization of Hydrogen Molecules," Phys. Rev. Lett. 119(25), 253202 (2017). [CrossRef] 12. Q. Li, X.-M. Tong, T. Morishita, C. Jin, H. Wei, and C. D. Lin, "Fine structures in the intensity dependence of excitation and ionization probabilities of hydrogen atoms in intense 800-nm laser pulses," J. Phys. B 47(20), 204019 (2014). [CrossRef] 13. B. Manschwetus, T. Nubbemeyer, K. Gorling, G. Steinmeyer, U. Eichmann, H. Rottke, and W. Sandner, "Strong laser field fragmentation of H2: Coulomb explosion without double ionization," Phys. Rev. Lett. 102(11), 113002 (2009). [CrossRef] 14. A. Emmanouilidou, C. Lazarou, A. Staudte, and U. Eichmann, "Routes to formation of highly excited neutral atoms in the breakup of strongly driven H2," Phys. Rev. A 85(1), 011402 (2012). [CrossRef] 15. H. Lv, W. Zuo, L. Zhao, H. Xu, M. Jin, D. Ding, S. Hu, and J. Chen, "Comparative study on atomic and molecular Rydberg-state excitation in strong infrared laser fields," Phys. Rev. A 93(3), 033415 (2016). [CrossRef] 16. K. Lin, X. Jia, Z. Yu, F. He, J. Ma, H. Li, X. Gong, Q. Song, Q. Ji, W. Zhang, H. Li, P. Lu, H. Zeng, J. Chen, and J. Wu, "Comparison Study of Strong-Field Ionization of Molecules and Atoms by Bicircular Two-Color Femtosecond Laser Pulses," Phys. Rev. Lett. 119(20), 203202 (2017). [CrossRef] 17. X. Xie, C. Wu, H. Liu, M. Li, Y. Deng, Y. Liu, Q. Gong, and C. Wu, "Tunneling electron recaptured by an atomic ion or a molecular ion," Phys. Rev. A 88(6), 065401 (2013). [CrossRef] 18. B. Shan, S. Ghimire, and Z. Chang, "Effect of orbital symmetry on high-order harmonic generation from molecules," Phys. Rev. A 69(2), 021404 (2004). [CrossRef] 19. R. Dörner, V. Mergel, O. Jagutzki, L. Spielberger, J. Ullrich, R. Moshammer, and H. Schmidt-Böcking, "Cold Target Recoil Ion Momentum Spectroscopy: a 'momentum microscope' to view atomic collision dynamics," Phys. Rep. 330(2-3), 95–192 (2000). [CrossRef] 20. J. Ullrich, R. Moshammer, A. Dorn, R. Dörner, L. Ph, H. Schmidt, and H. Schmidt-Böcking, "Recoil-ion and electron momentum spectroscopy: reaction-microscopes," Rep. Prog. Phys. 66(9), 1463–1545 (2003). [CrossRef] 21. A. S. Alnaser, X. M. Tong, T. Osipov, S. Voss, C. M. Maharjan, B. Shan, Z. Chang, and C. L. Cocke, "Laser-peak-intensity calibration using recoil-ion momentum imaging," Phys. Rev. A 70(2), 023413 (2004). [CrossRef] 22. S. Larimian, S. Erattupuzha, C. Lemell, S. Yoshida, S. Nagele, R. Maurer, A. Baltuška, J. Burgdörfer, M. Kitzler, and X. Xie, "Coincidence spectroscopy of high-lying Rydberg states produced in strong laser fields," Phys. Rev. A 94(3), 033401 (2016). [CrossRef] 23. C. Guo, M. Li, J. P. Nibarger, and G. N. Gibson, "Single and double ionization of diatomic molecules in strong laser fields," Phys. Rev. A 58(6), R4271–R4274 (1998). [CrossRef] 24. C. Guo, "Multielectron Effects on Single-Electron Strong Field Ionization," Phys. Rev. Lett. 85(11), 2276–2279 (2000). [CrossRef] 25. X. M. Tong, Z. X. Zhao, and C. D. Lin, "Simulation of third-harmonic and supercontinuum generation for femtosecond pulses in air," Phys. Rev. A 66(3), 033402 (2002). [CrossRef] 26. Z. Y. Lin, X. Y. Jia, C. L. Wang, Z. L. Hu, H. P. Kang, W. Quan, X. Y. Lai, X. J. Liu, J. Chen, B. Zeng, W. Chu, J. P. Yao, Y. Cheng, and Z. Z. Xu, "Ionization Suppression of Diatomic Molecules in an Intense Midinfrared Laser Field," Phys. Rev. Lett. 108(22), 223001 (2012). [CrossRef] 27. J. Wu, H. Zeng, and C. 
Guo, "Comparison Study of Atomic and Molecular Single Ionization in the Multiphoton Ionization Regime," Phys. Rev. Lett. 96(24), 243002 (2006). [CrossRef] 28. L. Zhao, J. Dong, H. Lv, T. Yang, Y. Lian, M. Jin, H. Xu, D. Ding, S. Hu, and J. Chen, "Ellipticity dependence of neutral Rydberg excitation of atoms in strong laser fields," Phys. Rev. A 94(5), 053403 (2016). [CrossRef] 29. V. I. Usachenko and S.-I. Chu, "Strong-field ionization of laser-irradiated light homonuclear diatomic molecules: A generalized strong-field approximation–linear combination of atomic orbitals model," Phys. Rev. A 71(6), 063410 (2005). [CrossRef] 30. J. Muth-Böhm, A. Becker, and F. H. M. Faisal, "Suppressed Molecular Ionization for a Class of Diatomics in Intense Femtosecond Laser Fields," Phys. Rev. Lett. 85(11), 2280–2283 (2000). [CrossRef] 31. M. Liu, M. Li, C. Wu, Q. Gong, A. Staudte, and Y. Liu, "Phase Structure of Strong-Field Tunneling Wave Packets from Molecules," Phys. Rev. Lett. 116(16), 163004 (2016). [CrossRef] 32. M. Liu and Y. Liu, "Semiclassical models for strong-field tunneling ionization of molecules, "Semiclassical models for strong-field tunneling ionization of molecules," J. Phys. B 50(10), 105602 (2017). [CrossRef] 33. A. Staudte, S. Patchkovskii, D. Pavici, H. Akagi, and P. B. Corkum, "Angular Tunneling Ionization Probability of Fixed-in-Space H-2 Molecules in Intense Laser Pulses," Phys. Rev. Lett. 102(3), 033004 (2009). [CrossRef] 34. A. Trabattoni, J. Wiese, U. D. Giovannini, J. F. Olivieri, T. Mullins, J. Onvlee, S. Son, B. Frusteri, A. Rubio, S. Trippel, and J. Küpper, "Setting the photoelectron clock through molecular alignment," Nat. Commun. 11(1), 2546 (2020). [CrossRef] 35. A. S. Alnaser, S. Voss, X.-M. Tong, C. M. Maharjan, P. Ranitovic, B. Ulrich, T. Osipov, B. Shan, Z. Chang, and C. L. Cocke, "Effects of molecular structure on ion disintegration patterns in ionization of O2 and N2 by short laser pulses," Phys. Rev. Lett. 93(11), 113003 (2004). [CrossRef] 36. W. Becker, F. Grasbon, R. Kopold, D. B. Milošević, G. G. Paulus, and H. Walther, "Above-Threshold Ionization: From Classical Features to Quantum Effects," Adv. At., Mol., Opt. Phys. 48, 35–98 (2002). [CrossRef] 37. A. Sen, T. Sairam, S. R. Sahu, B. Bapat, R. Gopal, and V. Sharma, "Hindered alignment in ultrashort, intense laser-induced fragmentation of O2," J. Chem. Phys. 152(1), 014302 (2020). [CrossRef] 38. A. S. Landsman, A. N. Pfeiffer, C. Hofmann, M. Smolarski, C. Cirelli, and U. Keller, "Rydberg state creation by tunnel ionization," New J. Phys. 15(1), 013001 (2013). [CrossRef] Article Order A. L' Huillier and P. Balcou, "High-order harmonic generation in rare gases with a 1-ps 1053-nm laser," Phys. Rev. Lett. 70(6), 774–777 (1993). [Crossref] P. B. Corkum, "Plasma perspective on strong field multiphoton ionization," Phys. Rev. Lett. 71(13), 1994–1997 (1993). D. N. Fittinghoff, P. R. Bolton, B. Chang, and K. C. Kulander, "Observation of nonsequential double ionization of helium with optical tunneling," Phys. Rev. Lett. 69(18), 2642–2645 (1992). B. Walker, B. Sheehy, L. F. DiMauro, P. Agostini, K. J. Schafer, and K. C. Kulander, "Precision Measurement of Strong Field Double Ionization of Helium," Phys. Rev. Lett. 73(9), 1227–1230 (1994). F. Sun, X. Chen, W. Zhang, J. Qiang, H. Li, P. Lu, X. Gong, Q. Ji, K. Lin, H. Li, J. Tong, F. Chen, C. Ruiz, J. Wu, and F. He, "Longitudinal photon-momentum transfer in strong-field double ionization of argon atoms," Phys. Rev. A 101(2), 021402 (2020). T. Zuo, A. D. Bandrauk, and P. B. 
Corkum, "Laser-induced electron diffraction: a new tool for probing ultrafast molecular dynamics," Chem. Phys. Lett. 259(3-4), 313–320 (1996). C. I. Blaga, J. Xu, A. D. DiChiara, E. Sistrunk, K. Zhang, P. Agostini, T. A. Miller, L. F. DiMauro, and C. D. Lin, "Imaging ultrafast molecular dynamics with laser-induced electron diffraction," Nature (London) 483(7388), 194–197 (2012). T. Nubbemeyer, K. Gorling, A. Saenz, U. Eichmann, and W. Sandner, "Strong-Field Tunneling without Ionization," Phys. Rev. Lett. 101(23), 233001 (2008). S. Larimian, C. Lemell, V. Stummer, J. Geng, S. Roither, D. Kartashov, L. Zhang, M. Wang, Q. Gong, L. Peng, S. Yoshida, J. Burgdörfer, A. Baltuška, M. Kitzler, and X. Xie, "Localizing high-lying Rydberg wave packets with two-color laser fields," Phys. Rev. A 96(2), 021403 (2017). F. Sun, W. Zhang, P. Lu, Q. Song, K. Lin, Q. Ji, J. Ma, H. Li, J. Qiang, X. Gong, H. Li, and J. Wu, "Dissociative frustrated double ionization of N2Ar dimers in strong laser fields," J. Phys. B 53(3), 035601 (2020). W. Zhang, Z. Yu, X. Gong, J. Wang, P. Lu, H. Li, Q. Song, Q. Ji, K. Lin, J. Ma, H. Li, F. Sun, J. Qiang, H. Zeng, F. He, and J. Wu, "Visualizing and Steering Dissociative Frustrated Double Ionization of Hydrogen Molecules," Phys. Rev. Lett. 119(25), 253202 (2017). Q. Li, X.-M. Tong, T. Morishita, C. Jin, H. Wei, and C. D. Lin, "Fine structures in the intensity dependence of excitation and ionization probabilities of hydrogen atoms in intense 800-nm laser pulses," J. Phys. B 47(20), 204019 (2014). B. Manschwetus, T. Nubbemeyer, K. Gorling, G. Steinmeyer, U. Eichmann, H. Rottke, and W. Sandner, "Strong laser field fragmentation of H2: Coulomb explosion without double ionization," Phys. Rev. Lett. 102(11), 113002 (2009). A. Emmanouilidou, C. Lazarou, A. Staudte, and U. Eichmann, "Routes to formation of highly excited neutral atoms in the breakup of strongly driven H2," Phys. Rev. A 85(1), 011402 (2012). H. Lv, W. Zuo, L. Zhao, H. Xu, M. Jin, D. Ding, S. Hu, and J. Chen, "Comparative study on atomic and molecular Rydberg-state excitation in strong infrared laser fields," Phys. Rev. A 93(3), 033415 (2016). K. Lin, X. Jia, Z. Yu, F. He, J. Ma, H. Li, X. Gong, Q. Song, Q. Ji, W. Zhang, H. Li, P. Lu, H. Zeng, J. Chen, and J. Wu, "Comparison Study of Strong-Field Ionization of Molecules and Atoms by Bicircular Two-Color Femtosecond Laser Pulses," Phys. Rev. Lett. 119(20), 203202 (2017). X. Xie, C. Wu, H. Liu, M. Li, Y. Deng, Y. Liu, Q. Gong, and C. Wu, "Tunneling electron recaptured by an atomic ion or a molecular ion," Phys. Rev. A 88(6), 065401 (2013). B. Shan, S. Ghimire, and Z. Chang, "Effect of orbital symmetry on high-order harmonic generation from molecules," Phys. Rev. A 69(2), 021404 (2004). R. Dörner, V. Mergel, O. Jagutzki, L. Spielberger, J. Ullrich, R. Moshammer, and H. Schmidt-Böcking, "Cold Target Recoil Ion Momentum Spectroscopy: a 'momentum microscope' to view atomic collision dynamics," Phys. Rep. 330(2-3), 95–192 (2000). J. Ullrich, R. Moshammer, A. Dorn, R. Dörner, L. Ph, H. Schmidt, and H. Schmidt-Böcking, "Recoil-ion and electron momentum spectroscopy: reaction-microscopes," Rep. Prog. Phys. 66(9), 1463–1545 (2003). A. S. Alnaser, X. M. Tong, T. Osipov, S. Voss, C. M. Maharjan, B. Shan, Z. Chang, and C. L. Cocke, "Laser-peak-intensity calibration using recoil-ion momentum imaging," Phys. Rev. A 70(2), 023413 (2004). S. Larimian, S. Erattupuzha, C. Lemell, S. Yoshida, S. Nagele, R. Maurer, A. Baltuška, J. Burgdörfer, M. Kitzler, and X. 
Xie, "Coincidence spectroscopy of high-lying Rydberg states produced in strong laser fields," Phys. Rev. A 94(3), 033401 (2016). C. Guo, M. Li, J. P. Nibarger, and G. N. Gibson, "Single and double ionization of diatomic molecules in strong laser fields," Phys. Rev. A 58(6), R4271–R4274 (1998). C. Guo, "Multielectron Effects on Single-Electron Strong Field Ionization," Phys. Rev. Lett. 85(11), 2276–2279 (2000). X. M. Tong, Z. X. Zhao, and C. D. Lin, "Simulation of third-harmonic and supercontinuum generation for femtosecond pulses in air," Phys. Rev. A 66(3), 033402 (2002). Z. Y. Lin, X. Y. Jia, C. L. Wang, Z. L. Hu, H. P. Kang, W. Quan, X. Y. Lai, X. J. Liu, J. Chen, B. Zeng, W. Chu, J. P. Yao, Y. Cheng, and Z. Z. Xu, "Ionization Suppression of Diatomic Molecules in an Intense Midinfrared Laser Field," Phys. Rev. Lett. 108(22), 223001 (2012). J. Wu, H. Zeng, and C. Guo, "Comparison Study of Atomic and Molecular Single Ionization in the Multiphoton Ionization Regime," Phys. Rev. Lett. 96(24), 243002 (2006). L. Zhao, J. Dong, H. Lv, T. Yang, Y. Lian, M. Jin, H. Xu, D. Ding, S. Hu, and J. Chen, "Ellipticity dependence of neutral Rydberg excitation of atoms in strong laser fields," Phys. Rev. A 94(5), 053403 (2016). V. I. Usachenko and S.-I. Chu, "Strong-field ionization of laser-irradiated light homonuclear diatomic molecules: A generalized strong-field approximation–linear combination of atomic orbitals model," Phys. Rev. A 71(6), 063410 (2005). J. Muth-Böhm, A. Becker, and F. H. M. Faisal, "Suppressed Molecular Ionization for a Class of Diatomics in Intense Femtosecond Laser Fields," Phys. Rev. Lett. 85(11), 2280–2283 (2000). M. Liu, M. Li, C. Wu, Q. Gong, A. Staudte, and Y. Liu, "Phase Structure of Strong-Field Tunneling Wave Packets from Molecules," Phys. Rev. Lett. 116(16), 163004 (2016). M. Liu and Y. Liu, "Semiclassical models for strong-field tunneling ionization of molecules, "Semiclassical models for strong-field tunneling ionization of molecules," J. Phys. B 50(10), 105602 (2017). A. Staudte, S. Patchkovskii, D. Pavici, H. Akagi, and P. B. Corkum, "Angular Tunneling Ionization Probability of Fixed-in-Space H-2 Molecules in Intense Laser Pulses," Phys. Rev. Lett. 102(3), 033004 (2009). A. Trabattoni, J. Wiese, U. D. Giovannini, J. F. Olivieri, T. Mullins, J. Onvlee, S. Son, B. Frusteri, A. Rubio, S. Trippel, and J. Küpper, "Setting the photoelectron clock through molecular alignment," Nat. Commun. 11(1), 2546 (2020). A. S. Alnaser, S. Voss, X.-M. Tong, C. M. Maharjan, P. Ranitovic, B. Ulrich, T. Osipov, B. Shan, Z. Chang, and C. L. Cocke, "Effects of molecular structure on ion disintegration patterns in ionization of O2 and N2 by short laser pulses," Phys. Rev. Lett. 93(11), 113003 (2004). W. Becker, F. Grasbon, R. Kopold, D. B. Milošević, G. G. Paulus, and H. Walther, "Above-Threshold Ionization: From Classical Features to Quantum Effects," Adv. At., Mol., Opt. Phys. 48, 35–98 (2002). A. Sen, T. Sairam, S. R. Sahu, B. Bapat, R. Gopal, and V. Sharma, "Hindered alignment in ultrashort, intense laser-induced fragmentation of O2," J. Chem. Phys. 152(1), 014302 (2020). A. S. Landsman, A. N. Pfeiffer, C. Hofmann, M. Smolarski, C. Cirelli, and U. Keller, "Rydberg state creation by tunnel ionization," New J. Phys. 15(1), 013001 (2013). Agostini, P. Akagi, H. Alnaser, A. S. Balcou, P. Baltuška, A. Bandrauk, A. D. Bapat, B. Becker, A. Becker, W. Blaga, C. I. Bolton, P. R. Burgdörfer, J. Chang, B. Chang, Z. Chen, F. Chen, J. Chen, X. Cheng, Y. Chu, S.-I. Chu, W. Cirelli, C. Cocke, C. L. 
Research | Open | Published: 10 May 2018
Trade-off of security and performance of lightweight block ciphers in Industrial Wireless Sensor Networks
Chao Pei1,2,3, Yang Xiao4, Wei Liang1,2 & Xiaojia Han1,2,3
Lightweight block ciphers play an indispensable role in security for pervasive computing. However, the performance of resource-constrained devices can be affected dynamically by the selection of cryptographic algorithms, especially for devices in resource-constrained environments and/or wireless networks. Thus, in this paper, we study the trade-off between security and performance of several recent top-performing lightweight block ciphers for the demands of resource-constrained Industrial Wireless Sensor Networks. The software performance of these ciphers is then evaluated in terms of memory occupation, cycles per byte, throughput, and a relatively comprehensive combined metric. Moreover, the results of the avalanche effect, which indicates the ability to resist various types of attacks, are presented subsequently. Our results show that SPECK is the software-oriented lightweight cipher which achieves the best performance in various aspects, and it enjoys a healthy security margin at the same time. Furthermore, for PRESENT, which is usually used as a benchmark for newer hardware-oriented lightweight ciphers, the software performance combined with the avalanche effect is inadequate when it is implemented in software. In real applications, there is a need to better understand the resources of dedicated platforms and the security requirements, as well as the emphasis and focus of each application. Therefore, this case study can serve as a good reference for a better selection of the trade-off between performance and security in constrained environments.
Introduction
In the traditional resource-constrained environment, constrained devices such as nodes in wireless sensor networks and radio-frequency identification (RFID) tags usually have the characteristics of weak computation ability, extremely small storage space, and strictly limited power consumption [1–3]. Especially in the context of the Internet of Things (IoT), small embedded devices with poor computing capability are expected to connect to larger networks [4–7]. Although great changes and developments have been brought to our society and life, almost all of these applications inevitably face potential information security threats [8]. As increasingly sensitive information is transmitted and manipulated, cryptographic protection is required. Wearable devices, medical sensing networks, and sensor networks for military surveillance are examples in which security deserves more attention [9–12]. Especially in complex industrial environments, disturbances such as high humidity, strong vibration, variable temperature, and multi-frequency noise are always present. Hence, for devices with constrained resources that must also provide sufficient security, there is no doubt that performance in these environments is difficult to guarantee perfectly. Meanwhile, the term "lightweight" is frequently used in the literature [13], but there is no precise definition of it. Ciphers targeted at resource-constrained devices are regarded as lightweight ciphers, and either software or hardware implementations should improve the utilization of resources.
What is shown in [14] is that operations such as block sizes, key sizes, and the process of key scheduling should take into consideration. Elementary operations such as addition, AND, OR, exclusive, or shift are welcomed because simple operations can be applied to all elementary platforms. Moreover, small blocks and short-key length in some means can simplify the encryption process. Of course, resource-constrained environments between wireless sensor networks (WSNs) and RFID are quite different. As for WSNs, sensor nodes with microcontrollers are grouped with sensing units, storage units, transceiver, and other components, and the size of different hardware platforms changes in a large range. In addition, sensor nodes are almost battery powered; energy efficiency and extensive mote life span are expected. Thus, choosing a cipher that could match the resources of nodes is an important consideration [15]. As for RFID, the most transponders are passive RFID tags, and there are different kinds of devices with different requirements, prices, and usage with different capabilities [16]. The electronic product code tags, in which the ultra-high-frequency is adopted at the band of 860–960 MHz, are usually used, and the price of each tag is approximately 0.15 USD. One of the early lightweight cryptographic attempts for RFID includes the work [17], in which they claim that the hardware implementations about security portion should be under 2000 gate equivalents (GEs). An ISO/IEC standard on lightweight cryptography stated that the design requirements should be made with 1000–2000 GEs [18]. The paper [19] claims that a total number of about 1000 to 10000 gates are included in an RFID tag, and just with 200 to 2000 GEs will be available specifically for security, and the standardizing cryptographic such as Advanced Encryption Standard (AES) is not suitable. Of course, there are many other literatures to investigate the resources available on tags, and these detail information could be found in [20, 21]. Because of the requirement and the fact that ciphers are the backbone of data protection and secrecy for highly sensitive and classified data, as a result, many lightweight block ciphers have been proposed in order to allow strong security guarantees at a low cost for these resource-constrained environments [15, 22–24]. Block ciphers with limited to small GEs could have the possibility to satisfy lightweight environments and real-time applications. For security, the total GEs available should be approximately 2000–3000. Extensions to Tiny Encryption Algorithm (XTEA) and Corrected Block Tiny Encryption Algorithm (XXTEA) [25] are designed to deal with the weakness of tiny encryption algorithm (TEA), which is a tiny but fast block cipher. However, there is no much information about hardware implementation results, and XTEA is vulnerable to a related key differential attack and a related rectangle attack, while at the same time, XXTEA suffers from a chosen plaintext attack [26]. At 2007, Data Encryption Standard Lightweight (DESL) and XORed variant of DESL (DESXL), lightweight variants of Data Encryption Standard (DES), were proposed [27, 28]. It is reported that the GEs are a little bit more than 2000, and both of them can be used for passive RFIDs. But the possibility to have a collision in three adjacent S-boxes leads to the most successful differential attack based on a 2-round iteration characteristic with a probability of 1/234 [27, 28]. 
MIBS is a Feistel network cipher, and it is reported that it can satisfy the requirements of RFID security [29]. However, linear attacks, ciphertext attacks, and impossible differential attacks, while not threatening the full 32-round MIBS, significantly reduce its security margin by more than 50% [30]. KATAN and KTANTAN have similar properties, and both of them are suitable for resource-constrained devices [31]. KTANTAN48 is the version recommended for RFID tag usage, with 588 GEs. The only difference between the two families is the key scheduling; slide attacks and related-key attacks are also possible to implement, and the related-key differential attack is the only attack for which the two families of ciphers differ [32]. In the literature [24], a linear congruential generator (LCG)-based lightweight block cipher was presented; this cipher can meet the security co-existence requirements of WSNs and RFID systems for pervasive computing, but our experiments show that its avalanche effect is poor, and thus, it is likely to be vulnerable to various attacks. TWINE [33] was designed with lightweight requirements in mind, and both its hardware and software implementations show good performance. To the best of our knowledge, the most powerful attacks are the impossible differential attacks on 23-round TWINE-80 and 24-round TWINE-128 proposed by the designers, and the biclique cryptanalysis of the full cipher [34]. PICO is a substitution-permutation network [35]; its key scheduling is motivated by the SPECK cipher and does not include a nonlinear layer in its design. PICO shows good performance on both hardware and software platforms. However, because it is new, there is no further detailed analysis of its security. In this paper, in light of the demands of resource-constrained Industrial Wireless Sensor Networks, our target is to study the trade-off between security and performance of several recent top-performing lightweight block ciphers. Several software performance metrics are used to evaluate the performance of these ciphers, and the avalanche effect is adopted to assess security to some extent. The contributions of this paper are listed as follows:
The term "lightweight cipher" is seriously discussed and analyzed considering the implementation platform, and the characteristics of lightweight ciphers are introduced.
We analyze the security requirements of the Wireless Network for Industrial Automation for Factory Automation (WIA-FA) specification of industrial wireless networks, in which the requirements for speed and reliability are strict.
We examine and compare the performance of several carefully selected lightweight block ciphers on a unified platform, using software-specific performance metrics. Although there are many useful and inventive lightweight block ciphers, it is usually difficult to select a suitable cryptographic algorithm for a specific application. At the same time, the lack of comprehensive and comparative studies makes it difficult to obtain a better understanding of the security and performance trade-off.
The results of the avalanche effect, which indicates the ability to resist various types of attacks, are presented.
When designing a system, the balance between security, cost, and performance has to be taken into account [20].
Basically, more iteration rounds and longer key length contribute to a safer system, and the faster and stronger block ciphers would require more costs. However, more rounds mean slowness in algorithms. Overall, we hope that the work of this paper can be served as useful reference for the trade-off between security and performance for further implementation in resource-constrained environments. The remaining part of this paper is then structured as follows. Section 2 provides a short-related work. Section 3 briefly discusses the characteristics of lightweight block ciphers and some relevant features combined with the security requirements of Industrial Wireless Sensor Networks for Factory Automation. Meanwhile, the implementation details of the better selected lightweight ciphers are also discussed. Section 4 presents the method and dedicated platform, and then, some evaluation metrics are introduced. In Section 4, trade-off between security and performance of these ciphers is analyzed from different aspects, and the avalanche effects, which show the possibilities to resist possible attacks, are also compared. Finally, some conclusions are drawn in Section 6. There are also some of other papers in the literature that study the trade-off of security and performance [36–40]. The papers [36, 37] study the optimal network performance for stream ciphers. The paper [38, 39] studies the security overhead of aggregation in WIFI. The paper [40] studies the security trade-off of AES over IEEE 802.15.3 wireless personal area networks. In the paper [41], a multilayer authentication protocol and a secure session key generation method are proposed for both security and performance. In the paper [42], a coarse-to-fine clustering method based on a combination of global feature and local feature and PageRank are proposed to nearly eliminate duplicates for visual sensor networks. In the paper [43], optimal cluster-based mechanisms for load balancing with multiple mobile sinks are proposed under the condition of a delay-tolerant application to optimize energy consumption in sensor networks. In the paper [44], an adaptive observation matrix of compressive sensing is proposed for sparse samples for ultrasonic wave signals to reconstruct sensor response signals. In the paper [45], a coverless information hiding method is proposed based on binary numbers to locate the secret information and meet the requirements of both randomness and universality. In the paper [46], relocated mobile sensors to achieve k-barrier coverage with the minimum energy cost is proposed in sensor networks. In the paper [47], a back propagation neural network model using solar radiation to establish its relationship with air temperature error for sensor networks is proposed. In the paper [48], a multilevel pattern mining architecture to support automatic network management by discovering patterns from network monitoring data is proposed. However, all of the above works are quite different from this paper as explained in the introduction section. Requirements and studied block ciphers In this section, the characteristics of most of the existing lightweight block ciphers are simply discussed and the differences for requirement of Industrial Wireless Sensor Networks are briefly analyzed. Then, some basic features about WIA-FA are concluded, and the security requirements, which are to meet the strict requirement for speed and reliability in factory automation applications, are discussed. 
Furthermore, the studied lightweight block ciphers are presented in the following subsection. Characteristics of lightweight block ciphers Generally speaking, the security of such lightweight block ciphers for the resource-constrained environment has their own properties. Usually, security for these applications is just needed to be achieved moderately, and that is to say, the demanded ciphers for constrained environment do not require high-level security, and this is essential in the Internet. On the other hand, attackers in this cryptography environment may be lack of information that is needed to implement cryptanalysis and they themselves can be energy-constrained sometimes, causing the attackers to adopt optimized algorithms and to be smarter enough to effectively implement their attacks [49]. Note that there is no need for lightweight block ciphers to encrypt a great large number of data, and the length of these data is always delimited into short segments as it is typical in the context of Industrial Wireless Sensor Networks. Lastly, the security performance of each block cipher should be deeply analyzed when the cipher can be practically used in the real environment. Both hardware performance and software performance for lightweight ciphers must be considered, and the hardware performance for some applications, especially for RFID, is the primary consideration. For the specific application, there are some relevant metrics and criteria to measure whether cipher algorithms are good. In different applications, the security requirement of resource-constrained devices may vary based on the sensitivity of the transmitted data. As for the industry environment, there is a great deal of transmitted data needed to be encrypted in the Industrial Wireless Sensor Networks. Thus, it is essential to require higher throughput for lightweight ciphers, in a sense that sensors in wireless sensor networks usually have more resources such as computation ability, communication ability, and energy compared with RFID tags. Also, because of the software implementations of ciphers do not need additional cost of the hardware manufacturing and often are easy to maintain and upgrade, it is believed that software-oriented implementation of these lightweight ciphers are more practical and useful for sensors. Industrial Wireless Sensor Network for Factory Automation (WIA-FA) Industrial Wireless Sensor Networks, which have characteristics of low cost, easy maintenance, and easy use, are a revolutionary technology. WIA-FA has become the first international wireless technology specification for the applications of high-speed factory automation, and it is a solution by utilizing the 2.4 GHz/5 GHz frequency band to meet the strict requirement for speed and reliability in factory automation applications. Specifically, because of the influence of multifrequency noise, interference, vibration, and multipath effects, it is a problem to realize the reliable communication by utilizing the scarce channel resources. In addition, at the same time, quitting and invalidation of sensor nodes can cause the topology of networks changeable dynamically. From the point of expending, sensor nodes with lower cost usually lead to restrictions on resources of computation and storage. As for the energy consumption, careful measures should be made to guarantee the life span as long as possible. Thus, there is a need for lightweight and lower complexity protocols and algorithms. 
The WIA-FA network adopts a centralized management framework, as shown in Fig. 1, with an enhanced star topology. A host computer is the interface for operators to configure the network and display data. A gateway device is used to achieve interconnection with external networks, and the tasks of network management and security are executed by it. Access devices accept data transmitted from field devices over wireless links, and at the same time control commands from the gateway device can be forwarded to field devices through the access devices. Field devices can send field application data and alarms to the gateway device, as well as receive configuration information, management information, and control commands from the gateway device.
WIA-FA redundant star topology (legends: NM network manager, SM security manager, GW gateway device, AD access device)
Since it is an open system, there are inevitably potential security risks in a WIA-FA network. Therefore, the necessary security measures must be applied to protect the resources within the system and maintain normal production [50]. Although there is a security framework, the encryption algorithms to be employed are not specified. In addition, according to the characteristics of the WIA-FA network, the recommended security principles are ease of deployment and use, extended battery life, and maximal use of existing encryption and authentication technologies. Based on these principles and the fact that resources are constrained in practice, lightweight block ciphers are urgently needed to fulfil the security requirements. From this point of view, performance and security are implemented and analyzed in this paper.
The studied block ciphers
The main parameters of block ciphers are block size, key size, and number of iteration rounds. Table 1 shows brief collective information about the studied lightweight block ciphers, while some acronyms, background, and implementation details that affect the selection of ciphers for resource-constrained applications are introduced next.
Table 1 List of studied ciphers
KLEIN
KLEIN, which is a new family of lightweight block ciphers, is designed for highly resource-constrained devices such as RFID tags and wireless sensor networks. KLEIN was designed as a typical substitution-permutation network (SPN), just like the structure of PRESENT, which is introduced later [51]. In order to obtain a reasonable security level with asymmetric iteration, the number of rounds can be 12/16/20 for key lengths of 64/80/96 bits, respectively. Since a key length of 64 bits is a common choice, KLEIN-64 is adopted in this paper for the performance comparison, where the number 64 in the notation KLEIN-64 stands for the key length. The encryption process of KLEIN is shown in Algorithm 1 (Fig. 2) and explained as follows. There are N_R rounds in KLEIN encryption. Each round includes the following steps:
AddRoundKey: The 64-bit input state and the 64-bit ith round key are XORed with each other, where i = 1, 2, ⋯, N_R.
The key scheduling for the 64-bit key length of KLEIN-64
SubNibbles: The XORed result is divided into sixteen 4-bit nibbles, and these nibbles are then fed into 16 identical S-boxes. The 4-bit S-box is a 4×4 involution permutation, and it is shown in Table 2. In addition, the S-box satisfies the involution condition S(S(x)) = x for all x ∈ F_2^4, where F_2^4 represents the set of 4-bit words over the binary field.
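As a small illustration of how compact the SubNibbles layer is in software, the C sketch below applies a 4-bit involutive S-box to all sixteen nibbles of a 64-bit state and verifies the involution property S(S(x)) = x. The S-box values are quoted from the KLEIN specification from memory and should be checked against Table 2; this is an illustrative sketch, not the reference implementation evaluated in this paper.

```c
/* Sketch of KLEIN's SubNibbles step on a 64-bit state held in a uint64_t.
 * The S-box values below are quoted from the KLEIN specification from memory
 * and should be verified against Table 2; the state layout and nibble
 * ordering are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

static const uint8_t SBOX[16] = {
    0x7, 0x4, 0xA, 0x9, 0x1, 0xF, 0xB, 0x0,
    0xC, 0x3, 0x2, 0x6, 0x8, 0xE, 0xD, 0x5
};

/* Apply the 4-bit S-box to each of the 16 nibbles of the state. */
static uint64_t sub_nibbles(uint64_t state)
{
    uint64_t out = 0;
    for (int i = 0; i < 16; i++) {
        uint8_t nib = (state >> (4 * i)) & 0xF;
        out |= (uint64_t)SBOX[nib] << (4 * i);
    }
    return out;
}

int main(void)
{
    /* Involution check: S(S(x)) = x for every 4-bit input, so the same
     * routine serves both encryption and decryption. */
    for (int x = 0; x < 16; x++) {
        if (SBOX[SBOX[x]] != x) {
            printf("S-box is not an involution at x = %d\n", x);
            return 1;
        }
    }
    uint64_t state = 0x0123456789ABCDEFULL;       /* arbitrary test state */
    uint64_t once  = sub_nibbles(state);
    uint64_t twice = sub_nibbles(once);
    printf("state   = %016llx\n", (unsigned long long)state);
    printf("S(.)    = %016llx\n", (unsigned long long)once);
    printf("S(S(.)) = %016llx  (equals the original: %s)\n",
           (unsigned long long)twice, twice == state ? "yes" : "no");
    return 0;
}
```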
This property can be used in both the encryption procedure and the decryption procedure. The simple structure, with an involutive 4-bit S-box acting as the nonlinear layer, not only allows a good software implementation but also helps resist linear and differential cryptanalysis.

Table 2 The S-box used in KLEIN

RotateNibbles: The 16 output nibbles of the S-boxes are circularly rotated to the left by two bytes. The nibbles are then divided into two tuples, which are considered as polynomials over $F^{8}_{2}$, where $F^{8}_{2}$ represents the 8-bit words over the binary field. Each of these two polynomials is multiplied by a fixed polynomial $c(x) = 03 \cdot x^{3} + 01 \cdot x^{2} + 01 \cdot x + 02$. To keep the results of degree less than 4, the two multiplications are reduced modulo $x^{4} + 1$, a polynomial of degree 4.

MixNibbles: The process is similar to the MixColumn step in Rijndael. Because of the structure of $F^{8}_{2}$, addition corresponds to the XOR of the corresponding bytes in each word, and the whole multiplication can be described as $s'(x) = c(x) \otimes s(x)$, where $s(x) = s_{3} \cdot x^{3} + s_{2} \cdot x^{2} + s_{1} \cdot x + s_{0}$ is the four-term polynomial whose coefficients are finite field elements, $s'(x) = s_{3}' \cdot x^{3} + s_{2}' \cdot x^{2} + s_{1}' \cdot x + s_{0}'$ is the output in the same form, and ⊗ denotes the multiplication. As a result, the four bytes in a tuple are replaced as follows: $$s_{0}^{\prime}=({02}\cdot s_{0})\oplus({03}\cdot s_{1})\oplus s_{2} \oplus s_{3} $$ $$s_{1}^{\prime}=s_{0}\oplus({02}\cdot s_{1})\oplus ({03}\cdot s_{2}) \oplus s_{3} $$ $$s_{2}^{\prime}=s_{0}\oplus s_{1}\oplus ({02}\cdot s_{2}) \oplus ({03}\cdot s_{3}) $$ $$s_{3}^{\prime}=({03}\cdot s_{0})\oplus s_{1}\oplus s_{2} \oplus ({02}\cdot s_{3}) $$ In addition, $s_{0}=c^{i}_{8j+0}\parallel c^{i}_{8j+1}$, $s_{1}=c^{i}_{8j+2}\parallel c^{i}_{8j+3}$, $s_{2}=c^{i}_{8j+4}\parallel c^{i}_{8j+5}$, $s_{3}=c^{i}_{8j+6}\parallel c^{i}_{8j+7}$, where j = 0 or 1 for the two tuples, $c^{i}_{8j+k}$, k ∈ [0,7], are the 4-bit outputs of the ith RotateNibbles step, and ∥ denotes the concatenation of two 4-bit binary strings. Each of $s_{0}'$, $s_{1}'$, $s_{2}'$ and $s_{3}'$ is eight bits wide and represents the output of the corresponding equation. The output of the MixNibbles step is the intermediate result for the next round of the encryption process.
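To make the byte-level arithmetic of MixNibbles concrete, the following Python sketch applies the four equations above to one tuple of four bytes. It is an illustrative reconstruction rather than the implementation benchmarked later, and it assumes the Rijndael reduction polynomial $x^{8}+x^{4}+x^{3}+x+1$ for the GF(2^8) multiplications, in line with the comparison to the MixColumn step.

```python
def xtime(b):
    """Multiply a GF(2^8) element by x, reducing by x^8 + x^4 + x^3 + x + 1 (assumed Rijndael polynomial)."""
    b <<= 1
    if b & 0x100:
        b ^= 0x11B
    return b & 0xFF

def gf_mul(a, b):
    """Schoolbook GF(2^8) multiplication built from xtime; adequate for a sketch."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a = xtime(a)
        b >>= 1
    return result

def mix_tuple(s):
    """Apply the MixNibbles equations (coefficients 02, 03, 01, 01, rotated) to s = [s0, s1, s2, s3]."""
    s0, s1, s2, s3 = s
    return [
        gf_mul(0x02, s0) ^ gf_mul(0x03, s1) ^ s2 ^ s3,
        s0 ^ gf_mul(0x02, s1) ^ gf_mul(0x03, s2) ^ s3,
        s0 ^ s1 ^ gf_mul(0x02, s2) ^ gf_mul(0x03, s3),
        gf_mul(0x03, s0) ^ s1 ^ s2 ^ gf_mul(0x02, s3),
    ]

# Example: mixing one arbitrary tuple of bytes
print([hex(b) for b in mix_tuple([0x01, 0x23, 0x45, 0x67])])
```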
LBlock

LBlock employs a variant of the Feistel network, operates on a 64-bit plaintext, supports a key length of 80 bits, and adopts a 32-round iterative structure. The encryption process is illustrated in Fig. 3. A 64-bit plaintext can be written as X0 ∥ X1, where ∥ denotes the concatenation of the two 32-bit binary strings X0 and X1. For the 32 rounds of data processing, the 32-bit strings are obtained from $X_{i} = F(X_{i-1}, K_{i-1}) \oplus (X_{i-2} \lll 8)$, i ∈ [2, 33], where F is the round function illustrated in Fig. 4, $K_{i-1}$ is the 32-bit round subkey of each round, and ⋘ denotes an 8-bit left cyclic shift. In the round function F, to balance a sufficient security margin against efficient implementation, eight minimized 4-bit S-boxes, shown in Table 3, are used together with a 4-bit word-wise permutation. As shown in Fig. 3, only half of the data passes through the round function in each round, while the other half only undergoes a simple rotation.

The key scheduling of LBlock is designed in the manner of a stream cipher. First, the round subkey K1 consists of the leftmost 32 bits of the 80-bit master key $K = k_{79}k_{78}k_{77}k_{76}\ldots k_{1}k_{0}$. Then, for i = 1, 2, …, 31, the subkey $K_{i+1}$ is obtained as follows: (1) K ⋘ 29; (2) $[k_{79}k_{78}k_{77}k_{76}] = s_{9}[k_{79}k_{78}k_{77}k_{76}]$ and $[k_{75}k_{74}k_{73}k_{72}] = s_{8}[k_{75}k_{74}k_{73}k_{72}]$, where $s_{8}$ and $s_{9}$ are the two 4-bit S-boxes shown in Table 3; (3) $[k_{50}k_{49}k_{48}k_{47}k_{46}] \oplus [i]_{2}$, where $[i]_{2}$ is the binary form of the integer i; (4) the leftmost 32 bits of the updated K form the round subkey $K_{i+1}$. The performance evaluation shows that LBlock is not only efficient in hardware but also ultra-lightweight in software [52]. The original authors claimed that LBlock is suitable for RFID tags and sensor networks.

Encryption processing of LBlock

Round function F

Table 3 The S-boxes used in LBlock

PRESENT is an extremely hardware-efficient lightweight cipher proposed by Bogdanov et al. [53]. Both 80- and 128-bit keys can be used to encrypt a 64-bit plaintext, but the 80-bit version is usually adequate for most low-security applications. In much of the literature, PRESENT is regarded as the reference cipher against which other lightweight ciphers are designed. PRESENT is a substitution-permutation cipher with 31 round iterations. The cipher description of PRESENT is shown in Fig. 5. Each of the 31 encryption rounds consists of a nonlinear substitution layer, a linear bitwise permutation layer, and the addition of a round key $K_{i}$, 1 ≤ i ≤ 31. First, the 80-bit master key $K = k_{79}k_{78}\ldots k_{0}$ is stored in the key register, and the 64-bit subkey $K_{i}$ of each round consists of the leftmost 64 bits of the key register. Then, the contents of the key register are updated as follows: (1) the key register is rotated by 61 bits to the left, i.e., $[k_{79}k_{78}\ldots k_{1}k_{0}] = [k_{18}k_{17}\ldots k_{20}k_{19}]$; (2) the leftmost four bits are substituted by the S-box shown in Table 4, i.e., $[k_{79}k_{78}k_{77}k_{76}] = S[k_{79}k_{78}k_{77}k_{76}]$; (3) the round counter i, in binary form, is XORed with bits $k_{19}k_{18}k_{17}k_{16}k_{15}$, replacing their original value. After the 64-bit round keys are obtained, the intermediate state in each round is XORed with the round key in the AddRoundKey step; the 64-bit state is then treated as sixteen 4-bit words, each replaced by the S-box mentioned above. Finally, the 64-bit state is permuted by a specific permutation table (the pLayer in Fig. 5), and the subkey K32 is used for post-whitening through the AddRoundKey step. It is reported that hardware implementations as small as 1570 gate equivalents can be realized. However, software performance is our main concern, because sensor nodes have more abundant resources than RFID tags and software implementations are easy to update and modify on different platforms.

The cipher description of PRESENT

Table 4 The S-box used in PRESENT
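As an illustration of the key register update just described, the short Python sketch below generates the PRESENT-80 round keys from a master key. The S-box constant is the published PRESENT S-box; the snippet is a readable reconstruction for reference, not the C code measured in this paper.

```python
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def update_key_register(k, round_counter):
    """One PRESENT-80 key schedule step on an 80-bit register k (Python int)."""
    # (1) rotate the 80-bit register left by 61 bits
    k = ((k << 61) | (k >> 19)) & ((1 << 80) - 1)
    # (2) pass the leftmost nibble k79..k76 through the S-box
    top = PRESENT_SBOX[(k >> 76) & 0xF]
    k = (k & ~(0xF << 76)) | (top << 76)
    # (3) XOR the 5-bit round counter into bits k19..k15
    k ^= (round_counter & 0x1F) << 15
    return k

def round_keys_80(master_key, rounds=31):
    """Generate the 64-bit round keys K1..K32 (leftmost 64 bits of the register)."""
    k, keys = master_key, []
    for i in range(1, rounds + 1):
        keys.append(k >> 16)          # K_i = leftmost 64 bits
        k = update_key_register(k, i)
    keys.append(k >> 16)              # K_32, used for post-whitening
    return keys

print(hex(round_keys_80(0x0)[1]))  # second round key for the all-zero master key
```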
The block cipher HIGHT has a 128-bit master key and a 64-bit block length, and is based on a variant of the generalized Feistel network. HIGHT was designed by Hong et al., and 32 round iterations are performed during the encryption process [54]. The 128-bit master key MK = MK15 ∥ … ∥ MK0 is a concatenation of 16 bytes, where MK_i, i ∈ [0,15], denotes a byte. The whole encryption process of HIGHT is shown in Fig. 6 and comprises the steps of key schedule, initial transformation, round function, and final transformation. During encryption, only the 128-bit master key needs to be stored; the eight whitening keys WK_i, i ∈ [0,7], and the sub-keys SK_i, i ∈ [0,127], can be generated on the fly. The eight whitening keys are used for the initial and final transformations, and four sub-keys are used for the computation in each round. In the initial transformation, the 64-bit plaintext P = P7 ∥ … ∥ P1 ∥ P0 is transformed, using four whitening keys, into the 64-bit input X0 = X0,7 ∥ … ∥ X0,1 ∥ X0,0 of the first round as follows: $X_{0,0}\leftarrow P_{0}\boxplus WK_{0}$, X0,1 ← P1; X0,2 ← P2 ⊕ WK1, X0,3 ← P3; $X_{0,4}\leftarrow P_{4}\boxplus WK_{2}$, X0,5 ← P5; X0,6 ← P6 ⊕ WK3, and X0,7 ← P7, where $\boxplus$ denotes addition mod $2^{8}$ and ⊕ denotes the exclusive-or (XOR). In the 32 round functions, the intermediate result X_i = Xi,7 ∥ … ∥ Xi,1 ∥ Xi,0 is transformed into Xi+1 = Xi+1,7 ∥ … ∥ Xi+1,1 ∥ Xi+1,0, i = 0, 1, …, 31, as follows: Xi+1,1 ← Xi,0, Xi+1,3 ← Xi,2, Xi+1,5 ← Xi,4, Xi+1,7 ← Xi,6, $X_{i+1,0}= X_{i,7}\oplus (F_{0}(X_{i,6})\boxplus SK_{4i+3})$, $X_{i+1,2}= X_{i,1}\boxplus (F_{1}(X_{i,0})\oplus SK_{4i+2})$, $X_{i+1,4}= X_{i,3}\oplus (F_{0}(X_{i,2})\boxplus SK_{4i+1})$, $X_{i+1,6}= X_{i,5}\boxplus (F_{1}(X_{i,4})\oplus SK_{4i})$, where the functions F0(x)=(x⋘1)⊕(x⋘2)⊕(x⋘7) and F1(x)=(x⋘3)⊕(x⋘4)⊕(x⋘6), and ⋘ denotes the left rotation of an 8-bit value. Finally, in the final transformation, the ciphertext C = C7 ∥ … ∥ C1 ∥ C0 is obtained from the last round output X32 = X32,7 ∥ … ∥ X32,1 ∥ X32,0 as follows: $C_{0}\leftarrow X_{32,1}\boxplus WK_{4}$, C1 ← X32,2, C2 ← X32,3 ⊕ WK5, C3 ← X32,4, $C_{4}\leftarrow X_{32,5}\boxplus WK_{6}$, C5 ← X32,6, C6 ← X32,7 ⊕ WK7, and C7 ← X32,0. Because only simple operations such as XOR, modular addition and bit-wise rotation are used, the cipher is efficient in hardware. Furthermore, the designers of HIGHT claimed that its software implementation is faster than that of AES-128. Analyses with respect to differential cryptanalysis, linear cryptanalysis, saturation, and boomerang attacks indicate that HIGHT performs well, and its security strength is reported to be ample according to NIST statistical test results.

The encryption process of HIGHT
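Because the alternation of modular addition and XOR in the HIGHT round is easy to mistranscribe, a small Python sketch of one round transformation is given below, following the equations above. The state and sub-key values in the example are arbitrary placeholders, and the key schedule itself is not reproduced.

```python
def rotl8(x, r):
    """Left rotation of an 8-bit value."""
    return ((x << r) | (x >> (8 - r))) & 0xFF

def F0(x):
    return rotl8(x, 1) ^ rotl8(x, 2) ^ rotl8(x, 7)

def F1(x):
    return rotl8(x, 3) ^ rotl8(x, 4) ^ rotl8(x, 6)

def hight_round(X, SK4):
    """One HIGHT round: X is a list of eight bytes X[0..7], SK4 the four sub-keys SK_{4i}..SK_{4i+3}."""
    Y = [0] * 8
    Y[1], Y[3], Y[5], Y[7] = X[0], X[2], X[4], X[6]
    Y[0] = X[7] ^ ((F0(X[6]) + SK4[3]) & 0xFF)   # X_{i+1,0} = X_{i,7} xor (F0(X_{i,6}) boxplus SK_{4i+3})
    Y[2] = (X[1] + (F1(X[0]) ^ SK4[2])) & 0xFF   # X_{i+1,2} = X_{i,1} boxplus (F1(X_{i,0}) xor SK_{4i+2})
    Y[4] = X[3] ^ ((F0(X[2]) + SK4[1]) & 0xFF)   # X_{i+1,4} = X_{i,3} xor (F0(X_{i,2}) boxplus SK_{4i+1})
    Y[6] = (X[5] + (F1(X[4]) ^ SK4[0])) & 0xFF   # X_{i+1,6} = X_{i,5} boxplus (F1(X_{i,4}) xor SK_{4i})
    return Y

# Example with arbitrary placeholder state and sub-keys
state = [0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77]
print([hex(b) for b in hight_round(state, [0x01, 0x02, 0x03, 0x04])])
```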
Piccolo is a lightweight block cipher offering both high security and a compact hardware implementation. Piccolo supports a 64-bit block to fit standard applications, and 80- or 128-bit keys to achieve moderate security levels [55]. The structure of Piccolo is a variant of the generalized Feistel network, and the encryption function is depicted in Fig. 7. Here, we focus only on Piccolo 64-128, which consists of 31 rounds with a 64-bit plaintext and a 128-bit master key. The encryption process is as follows. First, the 64-bit plaintext X = X0 ∥ X1 ∥ X2 ∥ X3, together with four 16-bit whitening keys wk_i, i ∈ [0,3], and sixty-two 16-bit round keys rk_i, i ∈ [0,61], forms the input of the encryption, where each X_i, i ∈ [0,3], is 16 bits long. At the start of the encryption, X0 ← X0 ⊕ wk0 and X2 ← X2 ⊕ wk1, where ← denotes updating a value and ⊕ the XOR operation. Then, for each round i ∈ [0,29], the round function is applied as follows: X1 ← X1 ⊕ F(X0) ⊕ rk2i, X3 ← X3 ⊕ F(X2) ⊕ rk2i+1, X0 ∥ X1 ∥ X2 ∥ X3 ← RP(X0 ∥ X1 ∥ X2 ∥ X3), where the function F is shown in Fig. 8 and RP is the round permutation (x0,x1,…,x7) ← (x2,x7,x4,x1,x6,x3,x0,x5), in which each x_i, i ∈ [0,7], is an 8-bit piece of the 64-bit intermediate value. Finally, the whitening keys wk2 and wk3 are applied through X0 ← X0 ⊕ wk2 and X2 ← X2 ⊕ wk3. The function F consists of two S-box layers and a diffusion matrix M, where the 4-bit S-box is given in Table 5. The diffusion matrix M is defined as $$M = \left[ \begin{array}{cccc} 2 & 3 & 1 & 1 \\ 1 & 2 & 3 & 1 \\ 1 & 1 & 2 & 3 \\ 3 & 1 & 1 & 2 \end{array} \right]$$

The encryption process of Piccolo

F function of Piccolo

Table 5 The S-box used in Piccolo

The computation on the 16-bit data is defined as $(x_{0},x_{1},x_{2},x_{3})^{T} \leftarrow M \cdot (x_{0},x_{1},x_{2},x_{3})^{T}$, where T denotes the transpose of a vector, the x_i, i ∈ [0,3], are the 4-bit outputs of the S-boxes, and the multiplication is performed over the Galois field GF(2^4) defined by the irreducible polynomial $x^{4}+x+1$. The authors reported hardware implementation requirements of 683 GE and 758 GE for the 80-bit and 128-bit key modes, respectively, well below the generally accepted budget of 2000 gate equivalents.

SIMON and SPECK

The SIMON and SPECK families of block ciphers were released publicly by the NSA in 2013 [56]. The motivation of the design is the need for sufficiently flexible security in the emerging Internet of Things: the tiny devices in heterogeneous networks will all require adequate cryptography, while most existing block ciphers with a fixed block size lack flexibility across application platforms. Both SIMON and SPECK have multiple instantiations, supporting block sizes of 32, 48, 64, 96, and 128 bits, with up to three key sizes for each block size. The authors claimed that SPECK has the highest software throughput of any block cipher in the literature and that SIMON offers the best hardware performance. Thus, for our purpose, we focus only on the features of SPECK. In terms of design, the SPECK round function is based on a Feistel-like structure and uses no S-boxes, achieving a good balance between linear diffusion and nonlinear confusion. As shown in Fig. 9, the round function of SPECK is the map $(X_{2i+3}, X_{2i+2}) \leftarrow ((S^{-\alpha}X_{2i+1} + X_{2i}) \oplus k_{i},\ S^{\beta}X_{2i} \oplus ((S^{-\alpha}X_{2i+1} + X_{2i}) \oplus k_{i}))$, where $X_{2i}$ and $X_{2i+1}$ are the n-bit input words, $k_{i}$ is the ith round key, the parameters α and β are 8 and 3, respectively (except for SPECK 32/64, where they are 7 and 2), and $S^{j}$ denotes a circular shift to the left by j bits. As for the key scheduling of SPECK, the round function is reused, which reduces the code size, exactly what resource-constrained devices prefer. The key schedule is given by $l_{i+m-1} = (k_{i} + S^{-\alpha}l_{i}) \oplus i$ and $k_{i+1} = S^{\beta}k_{i} \oplus l_{i+m-1}$, where i is the round counter, α and β are the rotation amounts given above, m is the number of key words, and the master key can be written as $(l_{m-2}, \ldots, l_{0}, k_{0})$. These characteristics are exactly what our subsequent application needs; we focus on SPECK64-128 and SPECK128-128, because these plaintext and key lengths are the ones usually used when comparing with other lightweight ciphers.

Round function of SPECK
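Since the SPECK round function uses only rotation, modular addition and XOR, it can be written very compactly. The following Python sketch shows the round function and key expansion for the SPECK64/128 parameters used later (32-bit words, α = 8, β = 3, 27 rounds, four key words); it is a reconstruction from the published design, included for illustration rather than as the benchmarked C implementation.

```python
WORD = 32
MASK = (1 << WORD) - 1
ALPHA, BETA = 8, 3          # rotation amounts for all SPECK variants except SPECK32/64

def ror(x, r): return ((x >> r) | (x << (WORD - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (WORD - r))) & MASK

def speck_round(x, y, k):
    """One SPECK round on the word pair (x, y) with round key k."""
    x = (ror(x, ALPHA) + y) & MASK
    x ^= k
    y = rol(y, BETA) ^ x
    return x, y

def expand_key(key_words, rounds=27):
    """SPECK64/128 key schedule: key_words = (l2, l1, l0, k0), m = 4."""
    l = list(reversed(key_words[:-1]))   # [l0, l1, l2]
    k = [key_words[-1]]                  # [k0]
    for i in range(rounds - 1):
        new_l, new_k = speck_round(l[i], k[i], i)
        l.append(new_l)
        k.append(new_k)
    return k

def encrypt(pt, key_words, rounds=27):
    x, y = pt
    for rk in expand_key(key_words, rounds):
        x, y = speck_round(x, y, rk)
    return x, y

# Published SPECK64/128 test vector: key 1b1a1918 13121110 0b0a0908 03020100,
# plaintext 3b726574 7475432d, expected ciphertext 8c6fa548 454e028b
print([hex(w) for w in encrypt((0x3b726574, 0x7475432d),
                               (0x1b1a1918, 0x13121110, 0x0b0a0908, 0x03020100))])
```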
Advanced encryption standard

The Advanced Encryption Standard (AES) has had a great impact on modern cryptography and is widely used in many applications because of its favorable characteristics compared with alternatives such as stream ciphers and asymmetric cryptography. It was created to achieve good performance both in hardware and in software. AES is based on a substitution-permutation network; its block length is 128 bits, with key lengths of 128, 192, or 256 bits [40, 57–59]. Normally, AES with 128-bit keys is a sufficient choice for most purposes. Each round of the iteration includes the operations SubBytes, ShiftRows, MixColumns, and AddRoundKey. For resource-constrained devices, AES can be too expensive to use, despite the various approaches that have been proposed to reduce its hardware and software implementation costs. Here, the widely used AES serves as a reference for comparing the lightweight block ciphers.

Methods/experimental

A lightweight cipher can be defined as a cryptographic algorithm specifically designed for resource-constrained devices, in which three challenges must be balanced: minimal overhead, low power consumption, and an adequate security level. However, the term lightweight is somewhat overused, with many different definitions across the literature. A good way to address this problem, and a more objective method to compare the performance of existing lightweight block ciphers, is to use a uniform platform. From this point of view, these ciphers were implemented on a specific platform, whose details are presented below. Then, metrics to measure the performance of these lightweight block ciphers are discussed, and five specific indicators are listed in Table 6. At the end of this subsection, the different compiling modes are briefly analyzed, since they matter for the subsequent work.

Table 6 Software implementation performance metrics

The dedicated platform

An STM32F407ZGT6 is used; it is a 32-bit reduced instruction set computer (RISC) micro-controller running at up to 168 MHz with a high-performance ARM Cortex-M4 core. A floating-point unit supports all ARM single-precision data-processing instructions and data types. As for the memories, the flash memory can be up to 1 Mbyte, and the static random-access memory (SRAM) reaches 192 Kbytes. All devices offer three 12-bit analog-to-digital converters (ADCs), two digital-to-analog converters (DACs), a low-power real-time clock (RTC), and 12 general-purpose 16-bit timers. Standard and advanced communication interfaces such as the Inter-Integrated Circuit bus (I2C), Serial Peripheral Interface (SPI), Inter-IC Sound (I2S), and Universal Asynchronous Receiver Transmitter (UART) are included. Moreover, rich input/output (I/O) interfaces provide many peripheral functions. The supply voltage ranges from 1.8 to 3.6 V, and a comprehensive set of power-saving modes allows low power consumption. All of these features make the controller suitable for a wide range of applications, especially in industrial environments. To implement the ciphers on this platform, all the code was written in C with RealView MDK 5.14, using the µVision5 integrated development environment. As for the debugger, J-LINK was used to flash the programs into the micro-controller, and the options for the different compiling modes were selected to test the performance of the lightweight block ciphers.

Software platform metrics

It is well known that performance metrics play an important role when different cipher algorithms are compared.
Hence, comparing ciphers implemented in different environments within the same study is not accurate, and further inaccuracies are introduced when one metric is estimated from other metrics. Consequently, a uniform platform and consistently agreed-upon metrics are needed. The metrics for software and hardware implementations are not identical, because the implementation complexities of the cipher operations differ between software and hardware [60]. Bit permutations are expensive in software but easy to implement in hardware, while in practice large look-up tables are very easy to set up in software but can become extremely costly in hardware. The basic performance metrics for hardware designs are area, timing, and energy; additionally, there are composite hardware metrics, such as power and efficiency [61]. However, as mentioned in some recent studies, software implementations have more mature performance metrics and measurements. Usually, only a microprocessor is needed to run a software implementation. The main design goals are to reduce memory occupation and to optimize throughput and power saving. In addition, portability is obviously a major advantage compared with hardware implementations. Here, we focus only on the software platform metrics; the specific metrics we use are listed as follows [61]. Typically, the complexity of an algorithm combines space complexity and time complexity. Starting from this point, code size and random-access memory (RAM) size are used to describe the occupation of the micro-controller's memory. Cycles/byte is defined as the number of processor cycles needed to process one byte of data, and throughput is a function of the processor frequency and cycles/byte; both can be seen as time-complexity metrics. The combined metric is in a sense fairer, because code size and time consumed are both considered. However, when the lightweight ciphers are actually implemented, the performance-evaluation metrics should be chosen according to the actual situation. While most researchers focus only on encryption, because the encryption and decryption operations are quite similar, especially for involutive ciphers, we implement both the encryption and decryption architectures of these lightweight ciphers, together with the key scheduling algorithms, so that different kinds of operations are included.

Different compiling modes

During the software implementation of the different lightweight block ciphers, different compiling modes can cause large differences. The ARM Compilation Tools offer a range of options to apply when compiling the cipher code; the options -O3 and -Os represent optimization for the best execution time and for the smallest code size, respectively. Cross-module optimization has been used; it reduces code size by removing unused functions from the application and can also improve performance by allowing modules to share inline code. The combination of options applied depends on the optimization goal and the specific requirements to be met.
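Before turning to the results, the relations among the time-related metrics of Table 6 can be made concrete with a small Python helper. The formulas follow the definitions given above; the block-size unit (bytes) and the example numbers are placeholders for illustration, not measurements from this study.

```python
def software_metrics(cycles_per_block, block_size_bits, code_size_bytes, f_hz):
    """Derive the time-related metrics from a measured cycle count per encrypted block."""
    block_bytes = block_size_bits // 8
    cycles_per_byte = cycles_per_block / block_bytes
    throughput_bps = block_size_bits * f_hz / cycles_per_block   # bits per second
    combined = code_size_bytes * cycles_per_block / block_bytes  # smaller is better
    return cycles_per_byte, throughput_bps, combined

# Placeholder example: a 64-bit block cipher taking 2000 cycles/block,
# 1700 bytes of code, on a 168 MHz Cortex-M4 clock
cpb, tput, comb = software_metrics(2000, 64, 1700, 168_000_000)
print(f"{cpb:.1f} cycles/byte, {tput / 1e6:.2f} Mbit/s, combined metric {comb:.0f}")
```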
Results and discussions

In this section, using the previously defined methodology, the software implementation results of these lightweight block ciphers under the different compiling modes are presented. These ciphers were all proposed recently, and we evaluate their performance from different aspects, which helps in making good decisions for complex industrial applications and resource-constrained environments. In addition to the memory requirements, the execution time of the lightweight block ciphers is the point we are most concerned about, and this information can to a large extent be read from the throughput and cycles/byte metrics. Finally, a relatively comprehensive result on algorithm efficiency, combining code size and execution speed, is described. Within each subsection, comparisons and analyses are presented in detail.

Memory occupation

As compact implementation is one of the primary goals for resource-constrained devices, the memory sizes are compared under the different modes, among which optimization for the smallest code size is preferred. The classification quoted from [62] is as follows: ultra-lightweight implementations require at most 4 KB ROM and 256 bytes RAM, low-cost implementations at most 4 KB ROM and 8 KB RAM, and lightweight implementations at most 32 KB ROM and 8 KB RAM. These targets ensure that the ciphers can be used on a variety of platforms. RAM is used to store the stack and the variables holding intermediate calculation results, and some zero-initialized variables are also stored in RAM on the STM32F407 platform. Owing to the characteristics and distinctive memory architecture of the dedicated micro-controller, the programs are first downloaded into the flash memory, which speeds up code execution; only the stacks and the zero-initialized system variables are stored in RAM, and thus the RAM occupation of all these lightweight ciphers is the same in both of the following modes. As illustrated in Figs. 10 and 11, SPECK64_128 and SPECK128_128 have the smallest flash footprints, using less than 1700 bytes; the memory needs of HIGHT and Piccolo64_128 are almost equivalent; LBlock, KLEIN-64, and PRESENT fare relatively worse than the abovementioned ciphers. PRESENT is usually used as a benchmark for newer lightweight ciphers; its gate count is as low as 1570 GEs, and it is thus hardware-oriented. Its software footprint, however, is conversely slightly higher. KLEIN-64's higher memory use stems from the fact that its elementary operations were borrowed from AES and PRESENT. The standard cipher AES clearly occupies the most resources, needing 2932 bytes of flash memory even in the -Os mode, since a matrix of bytes is used to represent the tables for the ShiftRows and MixColumns operations. These results are exactly what low-cost devices require: the less space occupied, the wider the range of possible applications.

Memory usage of mode -O3

Memory usage of mode -Os
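The ROM/RAM targets quoted from [62] can be restated as a simple check; the following Python helper only encodes those thresholds, and the example footprint is a placeholder rather than a figure from this study.

```python
def footprint_class(rom_bytes, ram_bytes):
    """Classify an implementation against the ROM/RAM targets quoted from [62]."""
    if rom_bytes <= 4 * 1024 and ram_bytes <= 256:
        return "ultra-lightweight"
    if rom_bytes <= 4 * 1024 and ram_bytes <= 8 * 1024:
        return "low-cost"
    if rom_bytes <= 32 * 1024 and ram_bytes <= 8 * 1024:
        return "lightweight"
    return "above lightweight targets"

# e.g., a cipher using under 1700 bytes of flash and a few hundred bytes of RAM
print(footprint_class(1700, 300))
```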
Throughput and cycles/byte

Since the selected platform has no direct instruction for measuring processing speed in terms of throughput and cycles per byte, the speed is compared using the results of processing one block of plaintext together with the key scheduling; the obtained numerical values are calculated and listed as follows. The throughput describes the number of bits processed per second and depends on the processor frequency and the instruction set. In time-critical applications, delay can have serious consequences. Especially in industrial environments, the speed of data processing and transmission is an important index; slow processing may cause delays and errors in the production process. Notably, in the case of time optimization (Fig. 12), the throughputs of SPECK64_128 and SPECK128_128 are extremely high, and they are consistently the top performers, as their designers claimed. AES follows with a good speed, as has been verified on many standard platforms. KLEIN-64, LBlock, and HIGHT are slower than AES. Because PRESENT is a hardware-oriented lightweight block cipher whose tiny 4-bit S-box and overall design target a minimal and compact hardware implementation, its software throughput is relatively low. The space-optimization mode (Fig. 13) shows similar results.

Throughput in the mode -O3

Throughput in the mode -Os

Conversely, cycles/byte, which expresses the number of cycles needed to process one byte, is the inverse metric of processing speed. Figures 14 and 15 show results that conform with the above observations.

Cycles/byte in the mode -O3

Cycles/byte in the mode -Os

Summaries of the above results are presented in Tables 7 and 8.

Table 7 Performance of mode -O3

Table 8 Performance of mode -Os

Comprehensive metric

The combined metric is defined in Table 6 as Code_size × Cycle_count / Block_size. A smaller value of the comprehensive metric indicates a better lightweight cipher [22]. Figures 16 and 17 show the trade-off between code size and speed. Among these eight ciphers, the ranking from best to worst is SPECK64_128, SPECK128_128, HIGHT, LBlock, KLEIN-64, AES, Piccolo64_128, and PRESENT. SPECK64_128 is the best, followed by SPECK128_128. AES exhibits somewhat poor characteristics for both the space and the time optimization modes. HIGHT and LBlock, which are smaller than AES, present a fairly good trade-off between code size and cycle count. PRESENT is larger than AES, as shown in the figures, because it is hardware-oriented; Piccolo64_128 is also worse than AES. In summary, the ciphers SPECK64_128 and SPECK128_128 achieve the best comprehensive metrics and are the best choices for resource-constrained devices such as wireless sensor networks, especially for real-time applications. We also observe that there is ample room for a trade-off between security and cost.

Comprehensive metric of mode -O3

Comprehensive metric of mode -Os

Avalanche effect comparison

The avalanche effect is an important characteristic of block ciphers. It means that a change in a single bit of the plaintext or the key changes many bits of the corresponding ciphertext. Originally, the avalanche effect was used to measure the nonlinearity of the substitution box, a crucial component of many block ciphers; it can also be employed to assess the processing functions of the encryption as a whole. The avalanche effect reflects, to some extent, the intuitive idea of high nonlinearity. Mathematically, $$\forall x, y \in Z^{n}_{2}\mid H(x,y)=1, \quad {\text{average}}\ H(F(x),F(y))\geq \frac{n}{2}$$ where x and y are two input vectors of the encryption and H is the Hamming distance function, defined as the number of positions at which the vectors differ; equivalently, it is the number of ones of the vector z = x ⊕ y. Therefore, this formula states that if F has a good avalanche effect, the Hamming distance between the output of a random input vector and the output of the vector obtained by randomly flipping one of its bits should be at least $\frac{n}{2}$ on average [63].
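The criterion above can be estimated empirically by flipping single input bits and averaging the Hamming distance between the corresponding outputs. The Python sketch below does exactly that for an arbitrary block-cipher-like function; the toy 64-bit mixing function is only a stand-in so the snippet runs on its own, not one of the studied ciphers.

```python
import random

def hamming(a, b):
    """Number of differing bit positions, i.e. the weight of a xor b."""
    return bin(a ^ b).count("1")

def avalanche(cipher, n_bits, trials=1000, seed=0):
    """Average Hamming distance between cipher(x) and cipher(x with one random bit flipped)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = rng.getrandbits(n_bits)
        y = x ^ (1 << rng.randrange(n_bits))   # flip one bit, so H(x, y) = 1
        total += hamming(cipher(x), cipher(y))
    return total / trials   # a good avalanche effect gives roughly n_bits / 2

# Placeholder 64-bit mixing function standing in for a real cipher
def toy_mix(x):
    x = (x * 0x9E3779B97F4A7C15) & (2**64 - 1)
    x ^= x >> 29
    x = (x * 0xBF58476D1CE4E5B9) & (2**64 - 1)
    return x ^ (x >> 32)

print(avalanche(toy_mix, 64))
```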
Ciphers that possess a good avalanche effect are more likely to resist various attacks, and an attacker therefore finds it difficult to analyze the ciphertext when attacks are launched. The results in Fig. 18 reflect the avalanche effects of these lightweight block ciphers; PRESENT is observed to be relatively worse than the other algorithms. Related-key attacks and slide attacks are the most effective attacks against PRESENT [51, 53]; although its hardware implementation is competitive, its software performance in terms of memory, throughput, and the comprehensive metric shows that PRESENT is not appropriate for resource-limited devices, since neither its security nor its software performance is good. The results above show that LBlock is lightweight in terms of memory occupation and that its throughput in mode -O3 is better than that of PRESENT; however, differential cryptanalysis is one of the possible attacks against LBlock [52]. As for KLEIN-64, the RotateNibbles and MixNibbles operations help achieve a balance between the minimum number of active S-boxes and the software performance; however, an integral attack can be mounted based on the 15-round integral distinguisher [51]. HIGHT needs less memory than AES, and its comprehensive metric is better than that of AES; the boomerang attack is applicable to 11-round HIGHT. SPECK64_128 and SPECK128_128 show excellent performance in various aspects, with much smaller memory, higher throughput, and faster speed than the other ciphers, as well as very good comprehensive metrics. Their avalanche effect is good, and to date all published attacks on SPECK are of the reduced-round variety. One measure of the security of a block cipher is the number of rounds that can be attacked out of the total number of rounds; for SPECK, no published attack reaches more than 70% of the rounds for any version [64]. In other words, SPECK has relatively satisfactory security. Nevertheless, all of these lightweight block ciphers can be used in situations where security is not a major concern. In practice, one of the industry requirements is that tasks are performed in a timely manner. Block ciphers with longer keys or more rounds enhance security but correspondingly decrease real-time performance; thus, real-time performance and high security requirements conflict with each other. Lightweight ciphers should therefore be carefully selected for the specific purpose, taking into account that the target platform may be resource-constrained; this is the focus of attention in the industrial wireless environment.

Avalanche effect for different block ciphers

Given that both the security requirements and the performance of lightweight block ciphers must be carefully considered in Industrial Wireless Sensor Networks, this paper studied several recent top-performing lightweight ciphers on a specific low-cost platform. Software-oriented performance metrics are used to measure the performance of these ciphers from different aspects.
In addition, the avalanche effect, which reflects the ability to resist various types of attacks, indicates the security characteristics of these ciphers. Through the analysis and comparison of the experimental results, it is clear that the cipher SPECK is highly competitive in various aspects: the smallest memory occupation, the highest throughput, the best comprehensive metric, and good security. Moreover, although PRESENT, with its compact hardware implementation, usually serves as a benchmark for newer hardware-oriented lightweight ciphers, its software performance combined with its avalanche effect is inadequate. Thus, the balance between security and performance must be considered when a system is designed to achieve the expected results. In actual applications, the environment of Industrial Wireless Sensor Networks is extremely complex, with strict requirements on speed and reliability as preconditions for stable system operation. Thus, to select a suitable cryptographic algorithm optimized for the factory environment, the resources of the dedicated platforms and the algorithmic requirements need to be well understood. A good trade-off between security and performance will help provide good solutions for actual applications. Further research includes implementing lightweight block ciphers on WIA-FA hardware platforms and under the specific protocol requirements of factory automation.

AES: Advanced Encryption Standard
DESL: Data Encryption Standard Lightweight
DESXL: XORed variant of DESL
GEs: Gate equivalents
IoTs: Internet of Things
RFID: Radio frequency identification
WSNs: Wireless sensor networks
WIA-FA: Wireless Networks for Industrial Automation for Factory Automation
WIA: Wireless Networks for Industrial Automation
XTEA: Extensions to Tiny Encryption Algorithm
XXTEA: Corrected Block Tiny Encryption Algorithm

Y Xiao, S Yu, K Wu, Q Ni, C Janecek, J Nordstad, Radio frequency identification: technologies, applications, and research issues. Wireless Commun. Mobile Comput. 7:, 457–472 (2007). Y Xiao, X Shen, B Sun, L Cai, Security and privacy in RFID and applications in telemedicine. IEEE Commun. Mag. 44:, 64–72 (2006). HEH Mustafa, X Zhu, Q Li, G Chen, Efficient median estimation for large-scale sensor RFID systems. Int. J. Sensor Netw. 12:, 171–183 (2012). KT Nguyen, M Laurent, N Oualha, Survey on secure communication protocols for the Internet of Things. Ad Hoc Netw. 32:, 17–31 (2015). S Xiong, L Tian, X Li, L Wang, Fault-tolerant topology evolution and analysis of sensing systems in IoT based on complex networks. Int. J. Sensor Netw. 18:, 22–31 (2015). H Cheng, N Xiong, AV Vasilakos, L Yang, G Chen, Nodes organization for channel assignment with topology preservation in multi-radio wireless mesh networks. Ad Hoc Netw. 10(5), 760–773 (2012). H Cheng, Z Su, N Xiong, Y Xiao, Energy-efficient nodes scheduling algorithms for wireless sensor networks using Markov Random Field Model. Inform. Sci. 329:, 461–477 (2016). M Faisal, AA Cardenas, Incomplete clustering of electricity consumption: an empirical analysis with industrial and residential datasets. Cyber-Physical Syst. 3(1–4), 42–65 (2017). B Latré, B Braem, I Moerman, C Blondia, P Demeester, A survey on wireless body area networks. Wireless Netw. 17:, 1–18 (2011). G Anastasi, M Conti, M Di Francesco, A Passarella, Energy conservation in wireless sensor networks: a survey. Ad Hoc Netw. 7:, 537–568 (2009). J Liu, Y Xiao, Temporal accountability and anonymity in medical sensor networks. Mob.
Netw Appl.16:, 695–712 (2011). C Liu, S Ghosal, Z Jiang, S Sarkar, An unsupervised anomaly detection approach using energy-based spatiotemporal graphical modeling. Cyber-Physical Syst.3(1-4), 66–102 (2017). A Olteanu, Y Xiao, F Hu, B Sun, H Deng, A lightweight block cipher based on a multiple recursive generator for wireless sensor networks and RFID. Wireless Commun. Mobile Comput. 11:, 254–266 (2011). M Cazorla, K Marquet, Minier M, in Proceedings of the 2013 International Conference on Security and Cryptography (SECRYPT). Survey and benchmark of lightweight block ciphers for wireless sensor networks (IEEEReykjavik, 2013), pp. 1–6. B Sun, CC Li, K Wu, Y Xiao, A lightweight secure protocol for wireless sensor networks. Comput. Commun. 29:, 2556–256 (2006). G Ferrari, F Cappelletti, R Raheli, A simple performance analysis of RFID networks with binary tree collision arbitration. Int. J. Sensor Netw. 4:, 194–208 (2008). SE Sarma, SA Weis, DW Engels, in Proceedings of the 4th International Workshop on Cryptographic Hardware and Embedded Systems. RFID systems and security and privacy implications (SpringerREDWOOD SHORES, 2002), pp. 454–469. K Finkenzeller, In RFID handbook: fundamentals and applications in contactless smart cards, radio frequency identification and near-field communication, 1st Eds (John Wiley Sons Ltd, West Sussex, 2010). A Juels, Weis SA, in Proceedings of the 25th Annual International Cryptology Conference. Authenticating pervasive devices with human protocols (SpringerSanta Barbara, 2005), pp. 293–308. JH Kong, LM Ang, KP Seng, A comprehensive survey of modern symmetric cryptographic solutions for resource constrained environments. J. Netw. Comput. Appl. 49:, 15–50 (2005). J He, Z Xu, Authentication and search mechanism for diffusing RFID-sensor networks. Int. J. Sensor Netw. 14:, 211–217 (2013). T Eisenbarth, Z Gong, T Güneysu, S Heyse, S Indesteege, S Kerckhof, FX Standaert, in Proceedings of the 5th International Conference on Cryptology in Africa, Ifrane, Morocco. Compact implementation and performance evaluation of block ciphers in ATtiny devices (IfraneSpringer, 2012), pp. 172–187. B Sun, Y Xiao, CC Li, HH Chen, TA Yang, Security co-existence of wireless sensor networks and RFID for pervasive computing. Comput. Commun. 31:, 4294–4303 (2008). JP Kaps, in Proceedings of the 9th Annual International Conference on Cryptology in India. Chai-Tea, Cryptographic hardware implementations of xTEA (SpringerKharagpur, 2008), pp. 363–375. E Yarrkov, Cryptanalysis of XXTEA. International Association for Cryptologic Research (IACR) Cryptology EPrint Archive, (2010). G Leander, C Paar, A Poschmann, K Schramm, in Proceedings of the 14th International Workshop on Fast Software Encryption. New lightweight DES variants (SpringerLuxembourg, 2007), pp. 196–210. A Poschmann, G Leander, K Schramm, C Paar, in Proceedings of the IEEE International Symposiumon Circuits and Systems. New light-weight crypto algorithms for RFID (IEEENew Orleans, 2007), pp. 1843–1846. M Izadi, B Sadeghiyan, SS Sadeghian, HA Khanooki, in Proceedings of the 8th International Conference on Cryptology and Network Security. MIBS: a new lightweight block cipher (SpringerKanazawa, 2009), pp. 334–348. A Bay, J Nakahara Jr, S Vaudenay, in Proceedings of the 9th International Conference on Cryptology and Network Security. Cryptanalysis of reduced-round MIBS block cipher (SpringerKuala Lumpur, 2010), pp. 1–19. 
C Canniere De, O Dunkelman, M Knezevic, in Proceedings of the 11th International Workshop on Cryptographic Hardware and Embedded Systems. KATAN and KTANTAN–a family of small and efficient hardware-oriented block ciphers (SpringerLausanne, 2009), pp. 272–288. AA Priyanka, SK Pal, A survey of cryptanalytic attacks on lightweight block ciphers. Int. J. Comput. Sci. Inf. Technol. Secur. (IJCSITS). 2:, 472–481 (2012). T Suzaki, K Minematsu, S Morioka, Kobayashi E, in Proceedings of the ECRYPTWorkshop on Lightweight Cryptography. TWINE: a lightweight, versatile block cipher (SpringerBelgium, 2011). M Coban, F Karakoc, O Boztas, in Proceedings of the 11th International Conference on Cryptology and Network Security. Biclique cryptanalysis of TWINE (SpringerBerlin, 2012), pp. 43–55. G Bansod, N Pisharoty, A Patil, PICO: an ultra lightweight and low power encryption design for ubiquitous computing. Defence Sci. J. 66:, 259–265 (2016). Y Xiao, HH Chen, X Du, M Guizani, Stream-based cipher feedback mode in wireless error channel. IEEE Trans. Wireless Commun. 8:, 622–626 (2009). X Liang, Y Xiao, S Ozdemir, AV Vasilakos, H Deng, Cipher feedback mode under go-back-N and selective-reject protocols in error channels. Secur. Commun. Netw. 6:, 942–954 (2013). A Olteanu, Y Xiao, in Proceedings of the 2009 IEEE International Conference on Communications (ICC 2009). Fragmentation and AES encryption overhead in very high-speed wireless LANs (IEEEDresden, 2009), pp. 575–579. A Olteanu, Y Xiao, Security overhead and performance for aggregation with fragment retransmission (AFR) in very high-speed wireless 802.11 LANs. IEEE Trans. Wireless Commun. 9:, 218–226 (2010). Olteanu A, Y Xiao, Y Zhang, Optimization between AES security and performance for IEEE 802.15.3 WPAN. IEEE Trans. Wireless Commun. 9:, 6030–6037 (2009). J Shen, S Chang, J Shen, Q Liu, X Sun, A lightweight multi-layer authentication protocol for wireless body area networks. Future Generation Comput.Syst (2016). https://doi.org/10.1016/j.future.2016.11.033. Z Zhou, QMJ Wu, F Huang, X Xingming Sun, Fast and accurate near-duplicate image elimination for visual sensor networks. Int. J. Distributed Sensor Netw. 13(2) (2017). https://doi.org/10.1177/1550147717694172. J Zhang, J Tang, T Wang, F Chen, Energy-efficient data-gathering rendezvous algorithms with mobile sinks for wireless sensor networks. Int.J. Sensor Netw. 23(4), 248–257 (2017). https://doi.org/10.1504/IJSNET.2017.10004216. Y Sun, F Gu, Compressive sensing of piezoelectric sensor response signal for phased array structural health monitoring. Int. J. Sensor Netw. 23(4), 258–264 (2017). https://doi.org/10.1504/IJSNET.2017.10004214. X Chen, S Chen, Y Wu, Coverless information hiding method based on the Chinese character encoding. J. Internet Technol. 18(2), 313–320 (2017). https://doi.org/10.6138/JIT.2017.18.2.20160815. Y Zhang, X Sun, B Wang, Efficient algorithm for k-barrier coverage based on integer linear programming. China Commun. 13(7), 16–23 (2016). https://doi.org/10.1109/CC.2016.7559071. B Wang, X Gu, Ma L, S Yan, Temperature error correction based on BP neural network in meteorological WSN. Int. J. Sensor Netw. 23(4), 265–278 (2017). https://doi.org/10.1504/IJSNET.2017.083532. Z Qu, J Keeney, S Robitzsch, F Zaman, X Wang, Multilevel pattern mining architecture for automatic network monitoring in heterogeneous wireless communication networks. China Commun. 13(7), 108–116 (2016). https://doi.org/10.1109/CC.2016.7559082. 
H Zhang, Shi Cheng P L, J Chen, Optimal DoS attack scheduling in wireless networked control system. IEEE Trans. Control Syst. Technol. 24:, 843–852 (2016). IEC PAS 62948. In industrial networks—wireless communication network and communication profiles - WIA-FA, 1st Eds, 2015; Available online: http://www.iec.ch. Z Gong, S Nikova, Law YW, in Proceedings of the 7th Workshop on RFID Security and Privacy (RFIDSec). KLEIN: a new family of lightweight block ciphers (SpringerAmherst, 2011), pp. 1–18. W Wu, L Zhang, in Proceedings of the 9th International Conference on Applied Cryptography and Network Security (ACNS). LBlock: a lightweight block cipher (SpringerSPAIN, 2011), pp. 327–344. A Bogdanov, LR Knudsen, G Leander, C Paar, A Poschmann, MJB Robshaw, Y Seurin, Vikkelsoe C, in Proceedings of the 9th International Workshop on Cryptographic Hardware and Embedded Systems. PRESENT: an ultra-lightweight block cipher (SpringerVienna, 2007), pp. 450–466. D Hong, J Sung, S Hong, J Lim, S Lee, B Koo, H Kim, in Proceedings of the 8th International Workshop on Cryptographic Hardware and Embedded Systems. HIGHT: a new block cipher suitable for low-resource device (SpringerYokohama, 2006), pp. 46–59. K Shibutani, T Isobe, H Hiwatari, A Mitsuda, T Akishita, Shirai T, in Proceedings of the 13th International Workshop on Cryptographic Hardware and Embedded Systems. Piccolo: an ultra-lightweight blockcipher (SpringerNara, 2011), pp. 342–357. R Beaulieu, D Shors, J Sntith, S Treatrnan-Cark, B Weeks, L Wingers, in Proceedings of the IT Professional Conference (IT Pro). The simon and speck families of lightweight block ciphers (IEEEGaithersburg, 2014). Pub NF, In 197: advanced encryption standard (AES). Federal Inf. Process. Standards Publication. 197:, 441–0311 (2001). Y Xiao, B Sun, HH Chen, in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM 06). Performance analysis of advanced encryption standard (AES) (IEEESan Francisco, 2006), pp. 1–5. Y Xiao, HH Chen, B Sun, R Wang, S Sethi, MAC security and security overhead analysis in the IEEE 802.15.4 wireless sensor networks. EURASIP J. Wireless Commun. Netw. 1:, 1–12 (2006). S Kerckhof, F Durvaux, C Hocquet, D Bol, FX Standaert, in Proceedings of the 14th International Workshop on Cryptographic Hardware and Embedded Systems. Towards green cryptography: a comparison of lightweight ciphers from the energy viewpoint (SpringerLeuven, 2012), pp. 390–407. BJ Mohd, T Hayajneh, AV Vasilakos, A survey on lightweight block ciphers for low-resource devices: comparative study and open issues. J. Netw. Comput. Appl. 58:, 73–93 (2005). C Manifavas, G Hatzivasilis, K Fysarakis, K Rantos, in Proceedings of the 8th Data Privacy Management International Workshop (DPM). Lightweight cryptography for embedded systems–a comparative analysis (SpringerEgham, 2014), pp. 333–349. JCH Castro, JM Sierra, A Seznec, A Izquierdo, A Ribagorda, The strict avalanche criterion randomness test. Math. Comput. Simul. 68:, 1–7 (2005). R Beaulieu, D Shors, J Sntith, S Treatrnan-Cark, B Weeks, L Wingers, in Proceedings of the 52nd ACM/EDAC/IEEE Design Automation Conference (DAC). The SIMON and SPECK lightweight block ciphers (IEEENew York, 2015), pp. 1–6. The work is partially supported by the following funding: the National Natural Science Foundation of China (NSFC), no. 61374200 as well as National Natural Science Foundation of China, Sino-Korea Cooperation Project no. 
71661147005, and Ministry of Science and Technology Inter-Governmental International Scientific and Technological Innovation Cooperation Key Project YS2017YFGH000571.

Key Laboratory of Networked Control Systems, Chinese Academy of Sciences, Shenyang, 110016, China: Chao Pei, Wei Liang & Xiaojia Han
Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China
University of Chinese Academy of Sciences, Beijing, 100049, China
Department of Computer Science, The University of Alabama, Tuscaloosa, AL 35487-0290, USA

The first author conducted the experiments and wrote the first draft of the paper. The other co-authors helped to revise and polish the paper. All authors read and approved the final manuscript.

Corresponding authors: Correspondence to Yang Xiao or Wei Liang.

Algorithms and Architectures for Industrial Wireless Sensor Networks
International Journal of Industrial Chemistry
June 2019, Volume 10, Issue 2, pp 133–143

Inhibitory effect of Senecio anteuphorbium as green corrosion inhibitor for S300 steel

R. Idouhli, Y. Koumya, M. Khadiri, A. Aityoub, A. Abouelfida, A. Benyaich

The present work studies the extract of Senecio anteuphorbium (SA) as a green corrosion inhibitor. The inhibitory effect of SA extract on the corrosion of S300 steel in 1 M hydrochloric acid has been evaluated using potentiodynamic polarization and electrochemical impedance spectroscopy. According to the polarization curves, SA extract acts as a mixed-type inhibitor. The inhibition efficiency increases with the extract concentration, reaching a maximum of 91% at 30 mg/L. The adsorption of the inhibitor on the steel surface follows the Langmuir isotherm, and the values of the activation energy suggest that the adsorption of the inhibitor is of a mixed physical–chemical nature. Kinetic parameters such as the enthalpy, activation energy and entropy were determined and discussed. The surface morphology of the steel was observed before and after adding the inhibitor by Fourier transform infrared spectroscopy. The changes in contact angles identified the formation of a protective film. Scanning electron microscopy and energy-dispersive X-ray analysis revealed the adsorption of the organic compounds of the extract at the metal/solution interface.

Keywords: Corrosion, Green inhibitor, Langmuir, Steel, Adsorption

Steel alloys are widely used in acidic industrial applications such as petrochemical processing and crude oil refining. Hydrochloric and sulfuric acid are used in many industrial processes, for example industrial cleaning, acid pickling, oil well cleaning, and industrial descaling [1]. Corrosion processes are responsible for high costs and remain a great challenge for scientists. Among the various techniques to stop or prevent the degradation of metal and alloy surfaces, corrosion inhibitors are one of the best options and the most widely used in industry [2]. The environmental toxicity of organic and inorganic corrosion inhibitors has encouraged researchers to use green corrosion inhibitors, which have economic benefits as they are low cost and biodegradable [3, 4]. In addition, they are ecofriendly, ecologically acceptable and a sustainable resource [5]. A large number of green corrosion inhibitors have been studied as alternatives, and these natural products are therefore becoming the subject of a wider range of investigations. Examination of inhibitors obtained from plant extracts has shown that they are rich in tannins, alkaloids, and organic and amino acids, and that they exhibit an inhibiting action [6]. Most of the effective inhibitors used in industry contain heteroatoms such as oxygen (O), phosphorus (P), sulphur (S) or nitrogen (N), or aromatic components having multiple bonds, leading to easier adsorption on the metal surface [7, 8]. Recently, several studies have been devoted to corrosion inhibition by plant extracts [9, 10], essential oils [11, 12] and purified compounds [13], in order to study their properties and inhibition mechanisms against steel corrosion. The efficiency of these corrosion inhibitors, also called "green inhibitors", depends mainly on the part of the plant used and the geographical location [1]. Many plant extracts have been used against the degradation of materials in aggressive media, such as Sesbania grandiflora [14], Urtica dioica [15], and Zanthoxylum alatum [16].
Additionally, most of these green products used as corrosion inhibitors are extracted using methanol, ethanol or aqueous solvents. Gerengi and Sahin [17] found that Schinopsis lorentzii extract obtained by water extraction acted as a slightly cathodic inhibitor with an inhibition efficiency of 66% at 2000 ppm, and Krishnan and Shibli [14] evaluated the inhibitive action of the methanol extract of Sesbania grandiflora leaf on mild steel corrosion in an aggressive HCl medium, reporting a high inhibition efficiency of 98% at 10,000 ppm of inhibitor. A high inhibition efficiency of a green compound (Ruta chalepensis) has also been obtained against hydrogen embrittlement of the mechanical properties of pipe steel in HCl medium [18]; this significant inhibition of 99.17% explains its importance for industrial applications. Moreover, the first patented corrosion inhibitors were natural products (flour, yeast, etc.) or by-products from the food industries used to limit iron corrosion in such media [19]. The present research focuses on the effect of the ethanol extract of Senecio anteuphorbium as a corrosion inhibitor for steel corrosion in hydrochloric acid. This plant, belonging to the Kleinia family, is endemic to Morocco and the Canary Islands. The species contains pyrrolizidine alkaloids in the form of macrocyclic diesters [20]. Our study shows that the SA extract has good inhibition efficiency in hydrochloric acid; moreover, this plant has never before been studied as a corrosion inhibitor. The aim of the present work is to study the inhibition effect of Senecio anteuphorbium extract as a green corrosion inhibitor of S300 steel in hydrochloric acid. The mechanism of corrosion inhibition of steel in 1 M hydrochloric acid was studied using electrochemical impedance spectroscopy and potentiodynamic polarization. The morphology of the steel surface was examined by FTIR, contact angle measurements and SEM–EDX.

Preparation of extracts

The Senecio anteuphorbium (SA) plant was collected from Tiznit, in the south of Morocco, during April 2015. The plant was identified by the Professor of Botany A. Ouhammou at Cadi Ayyad University. Samples of the aerial part were deposited in the herbarium of the Biology department of the Faculty of Science under the reference MARK-10016. The extract was produced by maceration in ethanol. Beforehand, the whole plant (leaf and stem) was washed with water, cut into small pieces and dried in a dark and aerated place. The dried SA powder (5 g) was then added to 100 mL of ethanol and stirred for 24 h at room temperature. After filtration, the ethanolic extract was concentrated using a vacuum evaporation setup and then used as the corrosion inhibitor.

Materials and solutions

Corrosion tests were performed on S300 steel specimens with the composition (wt%): C (0.15%), Mn (1.25%), Si (0.05%), and Fe (98.55%). The 1 M hydrochloric acid (HCl) solution was prepared by dilution of analytical grade (37%) hydrochloric acid with distilled water.

Electrochemical analysis

The electrochemical experiments on the corrosion behavior of steel in hydrochloric acid were conducted using a PGZ100 potentiostat connected to a jacketed glass cell of 175 mL capacity, connected to a thermostat bath (± 1 °C). A 2 cm² platinum sheet electrode and a silver–silver chloride electrode (Ag/AgCl) were used as auxiliary and reference electrodes, respectively.
Prior to each experiment, the working electrode (0.76 cm²) was polished with various grades of sand paper (500, 1200 and 2000), washed and immersed in the acidic solution. The polarization curves were obtained in the potential range of −0.8 V to −0.2 V at a scanning rate of 1 mV/s. EIS was conducted at frequencies from 100 kHz to 10 mHz at the open-circuit potential by applying a sine-wave voltage signal of 10 mV amplitude. The data were fitted using EC-Lab software. Before all experiments, the potential was allowed to stabilize for 30 min. The operating temperatures ranged between 293 and 323 K. For reproducibility, the measurements were repeated three times for each concentration and temperature. To examine the morphology of the steel surface, the sample was immersed in 1 M HCl solution without and with inhibitor for 2 h, then washed and dried. To investigate the interaction between inhibitor and metal, Fourier transform infrared (FTIR) spectra were acquired using a Vertex 70 spectrometer with an RT-DLaTGS detector. The changes in contact angles were measured using a goniometer. The sample surfaces were analyzed by scanning electron microscopy (SEM), and the different elements on the steel surface were detected by energy-dispersive X-ray (EDX) analysis. The SEM and EDX analyses were performed with a VEGA3 LM TESCAN instrument at an accelerating voltage of 20 kV.

Electrochemical experiments

Open-circuit potential (OCP)

The open-circuit potential was recorded versus time under free corrosion conditions until a steady state was attained. Figure 1 shows the time variation of the OCP of steel immersed in 1 M HCl without and with different concentrations of inhibitor at 293 K. Without inhibitor, the potential increases and then stabilizes after 30 min, which may be related to ennoblement of the surface by the formation of a passivating film. In the presence of inhibitor, the electrode potential tends to stabilize at −430 mV versus Ag/AgCl for all inhibitor concentrations. This evolution (Fig. 1) may be due to the modification of the interface, and a steady state is reached after 30 min. Thereafter, EIS and PDP measurements were performed.

Variation of EOCP–time curves for steel without and with inhibitor in 1 M HCl solution at 293 K

Potentiodynamic polarization measurements (PDP)

Figure 2 illustrates the potentiodynamic polarization curves of steel in 1 M HCl solution without and with different concentrations of SA extract. Electrochemical parameters such as the corrosion current density (icorr), corrosion potential (Ecorr), open-circuit potential (Eocp), Tafel slopes (ba, bc) and inhibition efficiency (η%), calculated using Eq. 1, are listed in Table 1.

$$\eta \left( \% \right) = \frac{i_{\rm corr}^{'} - i_{\rm corr}}{i_{\rm corr}^{'}} \times 100,$$

where \(i_{\rm corr}^{'}\) and \(i_{\rm corr}\) are the corrosion current densities without and with inhibitor, respectively.

Table 1 Polarization parameters for S300 steel in 1 M HCl in the presence and absence of SA extract
Conc. (mg/L) Eocp (mV vs. Ag/AgCl) Ecorr (mV vs. Ag/AgCl) icorr (mA/cm2) ba (mV/dec) bc (mV/dec) η (%)
− 444 ± 0.7 0.683 ± 0.07 77 ± 1 − 142 ± 2 51 ± 0.9

Examination of Table 1 reveals that the addition of SA extract significantly decreases the corrosion current density. Both the anodic and cathodic Tafel slopes are reduced compared with those of the blank.
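Equation 1 is straightforward to evaluate numerically; the short Python snippet below computes η from a pair of corrosion current densities. The uninhibited value echoes the Table 1 fragment, while the inhibited value is a placeholder chosen only for illustration.

```python
def inhibition_efficiency(i_corr_blank, i_corr_inhibited):
    """Eq. 1: eta(%) = (i'_corr - i_corr) / i'_corr * 100, with i'_corr the uninhibited density."""
    return (i_corr_blank - i_corr_inhibited) / i_corr_blank * 100.0

# Current densities in mA/cm^2 (second value is a placeholder)
print(f"{inhibition_efficiency(0.683, 0.062):.1f} %")
```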
In addition, Fig. 2 shows that the anodic curves are clearly dependent on the extract concentration, whereas the cathodic curves appear independent of it. From these experimental results, it can be concluded that the presence of SA extract inhibits both the cathodic hydrogen evolution and the anodic dissolution of iron [21]. This trend confirms that the SA extract molecules are adsorbed at both anodic and cathodic sites, suggesting the creation of a barrier between the steel and the aggressive medium [22]. According to the Bockris mechanism, the anodic dissolution of Fe in acidic media depends primarily on the adsorbed intermediate FeOHads, as shown in the following equations [19, 23]: $${\text{Fe}} + {\text{H}}_{2}{\text{O}} \leftrightarrow {\text{FeOH}}_{\rm ads} + {\text{H}}^{+} + {\text{e}}^{-},$$ $${\text{FeOH}}_{\rm ads} \mathop{\to}\limits^{\rm rds} {\text{FeOH}}^{+} + {\text{e}}^{-},$$ $${\text{FeOH}}^{+} + {\text{H}}^{+} \leftrightarrow {\text{Fe}}^{2+} + {\text{H}}_{2}{\text{O}}.$$

Polarization curves of S300 steel in 1 M HCl with different concentrations of SA extract

The cathodic hydrogen evolution reaction may be written as follows: $${\text{Fe}} + {\text{H}}^{+} \leftrightarrow ({\text{FeH}}^{+})_{\rm ads},$$ $$({\text{FeH}}^{+})_{\rm ads} + {\text{e}}^{-} \leftrightarrow ({\text{FeH}})_{\rm ads},$$ $$({\text{FeH}})_{\rm ads} + {\text{H}}^{+} + {\text{e}}^{-} \leftrightarrow {\text{Fe}} + {\text{H}}_{2}.$$ However, the first step of the adsorption of the inhibitor on the steel surface involves the replacement of water molecules initially adsorbed on the surface [24]: $${\text{Inh}}_{(\text{sol})} + x\,{\text{H}}_{2}{\text{O}}_{(\text{ads})} \leftrightarrow {\text{Inh}}_{(\text{ads})} + x\,{\text{H}}_{2}{\text{O}}_{(\text{sol})}.$$ The second step is the release of iron ions from the steel surface and the formation of metal–inhibitor complexes: $${\text{Fe}} \to {\text{Fe}}^{2+} + 2{\text{e}}^{-},$$ $${\text{Fe}}^{2+} + {\text{Inh}}_{(\text{ads})} \to {\text{Fe(Inh)}}_{\rm ads}^{2+}.$$

Figure 2 also reveals that the corrosion potential shifts slightly in the cathodic direction for concentrations below 10 mg/L, whereas the anodic shift is more pronounced for concentrations of 20 mg/L and above. In general, an inhibitor is classified as anodic or cathodic type only if the shift of Ecorr with respect to the blank exceeds 85 mV; otherwise it is regarded as mixed-type [10, 25]. Besides, the difference in behavior between the OCP and Ecorr can be explained by the role of the inhibitor under polarization. The OCP is measured without any applied potential, the steady-state value being reached over time, while Ecorr is obtained by the Tafel extrapolation method. Under polarization, the anodic currents are dependent on the extract concentration whereas the cathodic currents appear independent; therefore, Ecorr varies more with concentration than the OCP does. Table 1 clearly shows that SA extract acts as a mixed-type inhibitor and that the inhibition efficiency reaches 91% at 30 mg/L. It is also evident that the corrosion current density decreases as the inhibitor concentration increases: the effectiveness of the SA extract depends on its concentration, and the availability of inhibitor at each concentration explains its performance [26].
Impedance measurements were used to provide a complementary analysis of the inhibitor action mechanism and to investigate the growth of the film formed on the steel surface.

Electrochemical impedance spectroscopy (EIS)

EIS is a powerful technique for understanding the adsorption mechanism, electrode kinetics and surface properties [27, 28]. Nyquist spectra obtained without and with different concentrations of inhibitor in 1 M hydrochloric acid are shown in Fig. 4. The corresponding electrochemical parameters are summarized in Table 2.

Table 2 Electrochemical impedance parameters obtained from EIS measurements for steel in 1 M HCl in absence and presence of SA extract at 293 K
Re (Ω cm2), Rct (Ω cm2), Cdl (µF/cm2), χ2, η (%)
1.29 ± 0.04 172.5 ± 0.03 6.50 × 10−5 169.3 ± 1 75.2 ± 0.05 2.1 × 10−3

The charge transfer resistance was calculated from the Nyquist plots, and the inhibition efficiency was calculated using relationship (11):

$$\eta(\%) = \frac{R_{\mathrm{ct}}^{\mathrm{inh}} - R_{\mathrm{ct}}}{R_{\mathrm{ct}}^{\mathrm{inh}}} \times 100,$$

where \(R_{\mathrm{ct}}^{\mathrm{inh}}\) and \(R_{\mathrm{ct}}\) are the charge transfer resistances with and without inhibitor, respectively. Figure 3 illustrates the impedance data of the steel in 1 M HCl solution together with the corresponding fitted plot. The impedance parameters were obtained by fitting the experimental Nyquist data using EC-Lab software. The corresponding equivalent circuit modeling the steel/solution interface without and with inhibitor is depicted in Fig. 5.

Fitting the EIS data for steel in 1 M HCl solution

Figure 4 presents the Nyquist plots obtained from alternating current (AC) impedance measurements for S300 steel in 1.0 M HCl without and with different concentrations of SA extract. All impedance plots show depressed semicircles associated with a single time constant, indicating that the corrosion of steel is controlled by charge transfer [29]. The presence of SA extract does not change the mechanism of steel corrosion. Noticeably, the addition of SA extract to the corrosive medium significantly changes the diameter of the semicircles, which increases with the SA extract concentration. Examination of Fig. 4 shows that the Nyquist plots are not perfect capacitive loops, which can be related to frequency dispersion as well as to inhomogeneities of the S300 surface [30, 31]. The Nyquist plots were analyzed by fitting the experimental data to the equivalent electrical circuit shown in Fig. 5. It consists of the electrolyte resistance (Re), the charge transfer resistance (Rct) and a constant phase element (CPE); the Chi-squared value (χ2) reflects the goodness of fit and supports the validity of the proposed equivalent circuit. The CPE is described by Qdl, the magnitude of the CPE, and α, a coefficient that accounts for physical phenomena such as surface roughness, inhibitor adsorption and porous layer formation [32]. The capacitance can thus be calculated from:

$$C_{\mathrm{dl}} = Q_{\mathrm{dl}} \times (2\pi f_{\max})^{\alpha - 1}.$$

Nyquist plots for steel in 1 M HCl solution in the absence and presence of SA extract

Equivalent circuit used to fit the EIS data for steel in 1 M HCl solution

The values of the electrolyte resistance (Re), charge transfer resistance (Rct), double layer capacitance (Cdl), Chi-squared (χ2) and inhibition efficiency (η) are summarized in Table 2. Analysis of Table 2 reveals that increasing the inhibitor concentration increases the charge transfer resistance and reduces the double layer capacitance.
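Both quantities discussed above can be computed directly from the fitted EIS parameters. The minimal Python sketch below only illustrates the conversion of the CPE magnitude into Cdl and the Rct-based efficiency of Eq. 11; the function names and numerical inputs are hypothetical and are not values from Table 2.

```python
import math

def double_layer_capacitance(q_dl, f_max, alpha):
    """Cdl = Qdl * (2*pi*f_max)**(alpha - 1), with f_max the frequency at the
    maximum of the imaginary impedance."""
    return q_dl * (2.0 * math.pi * f_max) ** (alpha - 1.0)

def eis_inhibition_efficiency(r_ct_inhibited, r_ct_blank):
    """Eq. 11: eta(%) = (Rct_inh - Rct) / Rct_inh * 100."""
    return (r_ct_inhibited - r_ct_blank) / r_ct_inhibited * 100.0

# Hypothetical inputs, for illustration only
print(double_layer_capacitance(q_dl=2.0e-4, f_max=25.0, alpha=0.85))   # F/cm^2
print(eis_inhibition_efficiency(r_ct_inhibited=170.0, r_ct_blank=17.0))  # -> 90.0 %
```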
The increase in Rct is attributed to a decrease in the local dielectric constant and/or to an increase in the thickness of the electrical double layer, which suggests the formation of a protective layer on the steel surface [32, 33]. The decrease in double layer capacitance is most likely due to the progressive replacement of water molecules by the adsorbed organic molecules [34]. The results obtained from EIS are in good agreement with those from PDP. The difference in inhibition efficiency obtained from the two methods (EIS and PDP) may be due to the nature of each technique: PDP measurements probe the real-time kinetics of the electrochemical processes (polarization over a wide potential range, during which irreversible changes may occur as a result of the measurement itself [28]), whereas EIS data are usually obtained at the OCP and provide the overall interfacial resistance at the electrode–electrolyte interface.

Kinetic parameters

To calculate the kinetic parameters of the inhibition and adsorption process, potentiodynamic polarization measurements were conducted over the temperature range 293–323 K. Table 3 details the effect of temperature on steel corrosion in hydrochloric acid containing 30 mg/L of SA extract. It shows that increasing the temperature has almost no effect on the inhibition efficiency, which might be due to chemisorption of the inhibitor molecules onto the steel surface [35].

Table 3 Effect of temperature on the steel in free acid and at 30 mg/L of SA extract
T (K), Ecorr (mV vs. Ag/AgCl)
133 ± 2

Analysis of the activation parameters in the absence and presence of SA extract gives more insight into the inhibitor adsorption mechanism. The apparent activation energy can be calculated using the well-known Arrhenius equation, which describes the temperature dependence of the corrosion current density:

$$\log(i_{\mathrm{corr}}) = \log A - \frac{E_{\mathrm{a}}}{2.303\,R\,T},$$

where icorr is the corrosion current density of the steel, Ea is the apparent activation energy, A is a constant, R is the universal gas constant (R = 8.314 J mol−1 K−1) and T is the absolute temperature. This equation can be used to calculate the Ea values of the corrosion reaction without and with SA extract. By plotting the logarithm of icorr versus 1/T, the activation energy is obtained from the slope of the straight lines, giving 34.57 kJ mol−1 and 34.94 kJ mol−1 in the absence and presence of inhibitor, respectively. The Arrhenius plots are presented in Fig. 6, and the corresponding values of the apparent activation energy are listed in Table 4.

Arrhenius plots of log (icorr) versus 1/T (K−1) in 1 M HCl without and with 30 mg/L of SA extract

Table 4 Values of the activation parameters Ea, ΔH*ads and ΔS*ads for steel in 1 M HCl in the absence and presence of 30 mg/L of SA extract
Ea (kJ/mol), ΔH*ads (kJ/mol), ΔS*ads (J/(mol K)), Ea − ΔH*ads
−139.60

Generally, an increase in activation energy in the presence of an inhibitor compared with the blank indicates physical adsorption on the metal surface, whereas an unchanged or lower value of Ea suggests chemical adsorption. On the other hand, it is accepted that mixed adsorption is characterized by little or no change in the Ea values [36, 37, 38]. From the data presented in Table 4, it is clear that the values of Ea without and with inhibitor are close.
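In practice, Ea is extracted from a linear fit of log(icorr) against 1/T. The short Python sketch below shows the arithmetic; the temperature and current-density values are invented for illustration and are not the data behind Fig. 6 or Table 4.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Hypothetical (T, icorr) pairs; the paper's own data are plotted in Fig. 6
T = np.array([293.0, 303.0, 313.0, 323.0])    # K
i_corr = np.array([0.68, 1.05, 1.58, 2.30])   # mA cm^-2 (made-up values)

# log(icorr) = log(A) - Ea / (2.303 * R * T)  ->  slope = -Ea / (2.303 * R)
slope, intercept = np.polyfit(1.0 / T, np.log10(i_corr), 1)
Ea = -slope * 2.303 * R / 1000.0              # kJ mol^-1
print(f"Ea ~ {Ea:.1f} kJ/mol")                # ~32 kJ/mol for these made-up numbers
```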
These close Ea values indicate that both chemical and physical adsorption contribute to the evolution of the protective layer with temperature. This behavior suggests that the adsorbed inhibitor reduces the available reaction area by blocking the active sites. Activation parameters such as the enthalpy and entropy of the corrosion process were also evaluated from the temperature dependence, using the transition-state form of the Arrhenius equation (Eq. 14):

$$\log\left(\frac{i_{\mathrm{corr}}}{T}\right) = \left[\log\left(\frac{R}{hN}\right) + \frac{\Delta S_{\mathrm{a}}^{*}}{2.303\,R}\right] - \frac{\Delta H_{\mathrm{a}}^{*}}{2.303\,R\,T},$$

where h is Planck's constant, N is Avogadro's number, \(\Delta S_{\mathrm{a}}^{*}\) is the entropy of activation, \(\Delta H_{\mathrm{a}}^{*}\) is the enthalpy of activation, T is the absolute temperature and R is the universal gas constant. Figure 7 shows the variation of log(icorr/T) against 1/T without and with SA extract. Straight lines are obtained with a slope of \(-\Delta H_{\mathrm{a}}^{*}/(2.303\,R)\) and an intercept of \(\log(R/hN) + \Delta S_{\mathrm{a}}^{*}/(2.303\,R)\), from which the values of \(\Delta H_{\mathrm{a}}^{*}\) and \(\Delta S_{\mathrm{a}}^{*}\) are calculated, respectively.

Transition-state plots of log(icorr/T) versus 1/T (K−1) in 1 M HCl without and with 30 mg/L of SA extract

From the data in Table 4, the negative value of ΔS*ads implies that adsorption is accompanied by a decrease in entropy [39]. Before the inhibitor is introduced, the steel/solution interface is relatively disordered; once the inhibitor adsorbs, the resulting increase in order corresponds to a decrease in entropy [40]. The positive sign of ΔH*ads reflects the endothermic nature of the steel dissolution process, indicating that metal dissolution does not occur readily [41].
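ΔH*a and ΔS*a follow from the slope and intercept of the transition-state plot in the same way as Ea above. A minimal Python sketch is given below, again with invented data; the absolute value of ΔS*a depends on the units chosen for icorr, so only the procedure is illustrated here.

```python
import numpy as np

R, h, N = 8.314, 6.626e-34, 6.022e23   # J/(mol K), J s, mol^-1

# Hypothetical (T, icorr) data, as in the Arrhenius sketch above
T = np.array([293.0, 303.0, 313.0, 323.0])    # K
i_corr = np.array([0.68, 1.05, 1.58, 2.30])   # mA cm^-2 (made-up values)

# log(icorr/T) = [log(R/(h*N)) + dS/(2.303*R)] - dH/(2.303*R*T)
slope, intercept = np.polyfit(1.0 / T, np.log10(i_corr / T), 1)
dH = -2.303 * R * slope / 1000.0                         # kJ/mol (slightly below Ea)
dS = 2.303 * R * (intercept - np.log10(R / (h * N)))     # J/(mol K), unit-dependent
print(f"dH* ~ {dH:.1f} kJ/mol, dS* ~ {dS:.1f} J/(mol K)")
```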
Adsorption isotherm

The adsorption behavior of the SA extract on the metal surface can be described in terms of adsorption isotherms, which provide important information on the interaction between the inhibitor and the metal surface. The adsorption of the inhibitor can be regarded as a quasi-substitution process between the plant extract in the aqueous phase [Org(sol)] and water molecules at the metal surface [H2O(ads)] [42, 43]:

$$\text{Org}_{(\text{sol})} + n\,\text{H}_2\text{O}_{(\text{ads})} \leftrightarrow \text{Org}_{(\text{ads})} + n\,\text{H}_2\text{O}_{(\text{sol})},$$

where Org(sol) and Org(ads) are the organic species dissolved in the aqueous solution and adsorbed onto the metallic surface, respectively, H2O(ads) denotes the water molecules adsorbed on the metallic surface, and n is the number of water molecules replaced by one inhibitor molecule (size ratio). The degree of surface coverage (θ) at each inhibitor concentration was determined from the potentiodynamic polarization measurements. To identify the isotherm that fits best, the surface coverage was calculated from the following equation:

$$\theta = \frac{i_{\mathrm{corr}} - i_{\mathrm{corr}}^{\mathrm{inh}}}{i_{\mathrm{corr}}},$$

where \(i_{\mathrm{corr}}^{\mathrm{inh}}\) and \(i_{\mathrm{corr}}\) are the corrosion current densities of steel with and without inhibitor, respectively. In the present work, several adsorption isotherms (Langmuir, Freundlich and Temkin) were tested to determine the best-fitting model. The correlation coefficients between the surface coverage (θ) and the amount of inhibitor in the corroding medium were compared. The adsorption data fitted all three models reasonably well, but the Langmuir model gave the best fit, as evidenced by the R2 values (Table 5).

Table 5 Isotherm adsorption parameters of the Langmuir model
Kads (L/mg)

The Langmuir equation was developed under the assumption that adsorption occurs only at specific homogeneous sites on the adsorbent surface with a uniform distribution of energy levels, which implies that the adsorption process is of the monolayer type [44]. The adsorption models can be written as [45]:

$$\frac{C_{\mathrm{inh}}}{\theta} = \frac{1}{K_{\mathrm{ads}}} + C_{\mathrm{inh}} \quad (\text{Langmuir isotherm}),$$

$$\exp(-2a\theta) = K \times C_{\mathrm{inh}} \quad (\text{Temkin isotherm}),$$

$$\ln\theta = \ln K + \frac{1}{n}\ln C_{\mathrm{inh}} \quad (\text{Freundlich isotherm}),$$

where \(\theta\) is the surface coverage, \(C_{\mathrm{inh}}\) is the concentration of inhibitor and \(K_{\mathrm{ads}}\) is the equilibrium adsorption constant, which is related to the standard free energy of adsorption \(\Delta G_{\mathrm{ads}}^{\circ}\) by:

$$\Delta G_{\mathrm{ads}}^{\circ} = -RT\ln\left(10^{3} \times K_{\mathrm{ads}} \times 55.5 \times M_{\mathrm{inhibitor}}\right),$$

where 55.5 is the concentration of water in the solution in mol L−1 [46], R is the universal gas constant and \(M_{\mathrm{inhibitor}}\) is the molecular weight of the inhibitor. In general, values of the free energy of adsorption (\(\Delta G_{\mathrm{ads}}^{\circ}\)) of around −20 kJ/mol or less negative correspond to physisorption, in which inhibition is due to electrostatic interaction between the inhibitor and the metal, whereas values of around −40 kJ/mol or more negative correspond to chemisorption, involving covalent bonding through the sharing or transfer of charge from the inhibitor to the metal surface [17, 26]. In the present work, the value of \(\Delta G_{\mathrm{ads}}^{\circ}\) could not be obtained, because the molecular weight of the inhibitor \(M_{\mathrm{inhibitor}}\) cannot be estimated when the whole extract is used (Fig. 8).

Plots of Langmuir adsorption isotherm of SA extract on the steel surface at 293 K

Surface investigation

SEM analysis and contact angle

Surface analysis of the steel was carried out using SEM micrographs, and the wettability of the surface was evaluated after 2 h of immersion; the results are presented in Fig. 9. The polished metal surface before immersion shows good surface quality, with a contact angle of 86.0° ± 3.90° (Fig. 9a). The surface morphology of the steel after immersion without inhibitor is very rough and strongly damaged owing to the aggressiveness of the acidic solution. The water contact angle was 81.8° ± 1.73°, confirming the hydrophilic character of the surface (Fig. 9b) and indicating that the surface of steel in hydrochloric acid media is highly porous, with oxides formed over it [47, 48]. However, with the addition of 30 mg/L of extract the surface shows much less damage than the blank, and the water contact angle of 103.1° ± 5.83° demonstrates a hydrophobic character, suggesting the formation of a protective film through adsorption of SA onto the steel surface (Fig. 9c). The morphology of the steel specimen exposed to the extract is smoother than that of the steel without inhibitor.
This may conclude that the presence of inhibitor protects the steel surface from the aggressive aqueous environment. a Polished metal surface before immersion; b steel after 2 h immersion in 1 M HCl solution without inhibitor; c metal surface after addition of 30 mg/L SA extract Energy-dispersive X-ray spectroscopy Energy-dispersive X-ray spectra (EDX) analyses were conducted to obtain the characteristic peaks of the elements on the steel sample without and with inhibitor in 1 M HCl solution [49]. Figure 10 depicts the EDX spectrum of steel. It is observed from the EDX spectrum of steel the absence of oxygen peak (Fig. 10a). However, the spectrum of steel in medium presents some peaks of oxygen (11.92%) indicating the corrosion of steel and the formation of iron oxide on the metal surface (Fig. 10b) [50]. The EDX spectra in presence of SA extract give some contents of C and O suggesting that the SA extract has covered steel surface acting consequently as barrier between metal and acidic medium (Fig. 10c). It is also clear from Table 6 that in presence of SA extract, the decrease in iron atomic percentage is due to the mild steel surface covered by inhibitor molecules. a EDX images of steel surface; b after 2 h immersion in 1 M HCl; c in the presence of 30 mg/L SA extract Atomic percentage of elements obtained from EDX spectra of steel FTIR analysis Figure 11 shows the FTIR spectra of inhibitor and steel after 24 h of immersion in hydrochloric media without and with 30 mg/L SA extract. It is observed from the spectra of adsorbed layer that the peaks obtained are similar to those of SA extract. The peaks at 3412, 2912 and 1617 cm−1 can be assigned to the superficial adsorbed water and the functional groups such as C–H, C=O and C=C, respectively. The peaks at 1736, 1414, 1257 and 1073 cm−1 are related to C=O, aromatic ring C–C, C–O and C–O, respectively [16]. This explains that the plant extract contains different organic molecules having various functional groups. FTIR spectra of a inhibitor and b adsorption layer formed on the surface of steel in 1.0 M HCl + 30 mg/L SA extract 24 h at 293 K Figure 11 shows the existence of some peaks displacement between the adsorbed inhibitor spectra and inhibitor extract, also some peaks are either lost or less prominent [51]. The peaks at 585 and 625 cm−1 arise from Fe2O3 and FeOOH that indicates the oxidation of the adsorbed protective film by O2 and H2O in air [52]. The peak shift from to 1607 to 1640 cm−1 is due to formation of the iron–inhibitor complex or salt. This shift may be also linked to the possibility of electron transfer from inhibitor to steel surface that promotes the formation of the adsorbed layer onto the steel surface [16, 53, 54, 55]. The adsorption and inhibitor effect of SA on the corrosion behavior of steel in 1 M HCl were investigated using different techniques. Experimental analysis of the corrosion inhibition properties of ethanol extract of Senecio anteuphorbium showed good inhibiting performance towards the steel corrosion in aggressive media. AC impedance plots of steel show that the inhibition efficiency rises with the increase in plant extract concentration and the corrosion process is controlled by the charge transfer. The results obtained from the PDP curves pointed showed that the SA inhibits both anodic metal dissolution and cathodic hydrogen evolution and corroborated the EIS results. Adsorption of inhibitor molecules on the steel surface was found to be endothermic. 
The values of the activation energy suggested that the adsorption mechanism is simultaneously physical–chemical adsorption. The adsorption was found to follow Langmuir isotherm. FTIR spectra allowed the detection of some functional groups containing hetero-atoms. The SEM–EDX and contact angle analysis showed that the inhibitor adsorption forms an adsorbed film on the steel surface. Ali AI, Mahrous YS (2017) Corrosion inhibition of C-steel in acidic media from fruiting bodies of Melia azedarach L. extract and a synergistic Ni 2 + additive. RSC Adv 7:23687–23698. https://doi.org/10.1039/C7RA00111H CrossRefGoogle Scholar Dariva CG, Galio AF (2014) Corrosion inhibitors—principles mechanisms and applications. Dev Corros Prot. https://doi.org/10.5772/57255 CrossRefGoogle Scholar Souli R, Triki E, Rezrazi M et al (2015) Nigella sativa: an alternative solution for the corrosion of mild steel in hydrochloric acid medium. J Mater Environ 6:2729–2735Google Scholar Halambek J, Berković K, Vorkapić-Furač J (2013) Laurus nobilis L. oil as green corrosion inhibitor for aluminium and AA5754 aluminium alloy in 3% NaCl solution. Mater Chem Phys 137:788–795. https://doi.org/10.1016/j.matchemphys.2012.09.066 CrossRefGoogle Scholar Boumhara K, Tabyaoui M, Jama C, Bentiss F (2015) Artemisia mesatlantica essential oil as green inhibitor for carbon steel corrosion in 1 M HCl solution: electrochemical and XPS investigations. J Ind Eng Chem 29:146–155. https://doi.org/10.1016/j.jiec.2015.03.028 CrossRefGoogle Scholar Rani BE, Basu BBJ (2011) Green inhibitors for corrosion protection of metals and alloys: an overview. Int J Corros. https://doi.org/10.1155/2012/380217 CrossRefGoogle Scholar Solmaz R (2010) Investigation of the inhibition effect of 5-((E)-4-phenylbuta-1,3-dienylideneamino)-1,3,4-thiadiazole-2-thiol Schiff base on mild steel corrosion in hydrochloric acid. Corros Sci 52:3321–3330. https://doi.org/10.1016/j.corsci.2010.06.001 CrossRefGoogle Scholar Singh A, Ebenso EE, Quraishi MA (2012) Corrosion inhibition of carbon steel in HCl solution by some plant extracts. Int J Corros. https://doi.org/10.1155/2012/897430 CrossRefGoogle Scholar Bothi Raja P, Sethuraman MG (2008) Inhibitive effect of black pepper extract on the sulphuric acid corrosion of mild steel. Mater Lett 62:2977–2979. https://doi.org/10.1016/j.matlet.2008.01.087 CrossRefGoogle Scholar Idouhli R, Abouelfida A, Benyaich A, Aityoub A (2016) Cuminum cyminum extract: a green corrosion inhibitor of S300 steel in 1 M HCl. Chem Process Eng Res 44:16–25Google Scholar Znini M, Majidi L, Bouyanzer A et al (2012) Essential oil of Salvia aucheri mesatlantica as a green inhibitor for the corrosion of steel in 0.5 M H 2SO 4. Arab J Chem 5:467–474. https://doi.org/10.1016/j.arabjc.2010.09.017 CrossRefGoogle Scholar Idouhli R, Oukhrib A, Koumya Y et al (2018) Inhibitory effect of Atlas cedar essential oil on the corrosion of steel in 1 m HCl. Corros Rev 36:373–384. https://doi.org/10.1515/corrrev-2017-0076 CrossRefGoogle Scholar Vermaa CB, Quraishia MA, Singh A (2015) 2-Aminobenzene-1,3-dicarbonitriles as green corrosion inhibitor for mild steel in 1 M HCl: electrochemical, thermodynamic, surface and quantum chemical investigation. J Taiwan Inst Chem Eng 49:229–239. https://doi.org/10.1016/j.jtice.2014.11.029 CrossRefGoogle Scholar Krishnan A, Shibli SMA (2018) Optimization of an efficient, economic and eco-friendly inhibitor based on Sesbania grandiflora leaf extract for the mild steel corrosion in aggressive HCl environment. 
Anti-Corros Methods Mater 65:210–216. https://doi.org/10.1108/ACMM-06-2017-1810 CrossRefGoogle Scholar Salehi E, Naderi R, Ramezanzadeh B (2017) Synthesis and characterization of an effective organic/inorganic hybrid green corrosion inhibitive complex based on zinc acetate/Urtica Dioica. Appl Surf Sci 396:1499–1514. https://doi.org/10.1016/j.apsusc.2016.11.198 CrossRefGoogle Scholar Chauhan LR, Gunasekaran G (2007) Corrosion inhibition of mild steel by plant extract in dilute HCl medium. Corros Sci 49:1143–1161. https://doi.org/10.1016/j.corsci.2006.08.012 CrossRefGoogle Scholar Gerengi H, Sahin HI (2012) Schinopsis lorentzii extract as a green corrosion inhibitor for low carbon steel in 1 M HCl solution. Ind Eng Chem Res 51:780–787. https://doi.org/10.1021/ie201776q CrossRefGoogle Scholar Soudani M, Hadj Meliani M, El-Miloudi K et al (2018) Efficiency of green inhibitors against hydrogen embrittlement on mechanical properties of pipe steel API 5L X52 in hydrochloric acid medium. J Bio- Tribo-Corros 4:36. https://doi.org/10.1007/s40735-018-0153-0 CrossRefGoogle Scholar Okafor PC, Ikpi ME, Uwah IE et al (2008) Inhibitory action of Phyllanthus amarus extracts on the corrosion of mild steel in acidic media. Corros Sci 50:2310–2317. https://doi.org/10.1016/j.corsci.2008.05.009 CrossRefGoogle Scholar Belakhdar J (1997) La pharmacopée Marocaine Traditionnelle. Ed. Ibis Press, MarocGoogle Scholar Haque J, Srivastava V, Chauhan DS et al (2018) Microwave-induced synthesis of chitosan schiff bases and their application as novel and green corrosion inhibitors: experimental and theoretical approach. ACS Omega 3:5654–5668. https://doi.org/10.1021/acsomega.8b00455 CrossRefGoogle Scholar Li L, Zhang X, Lei J et al (2012) Adsorption and corrosion inhibition of Osmanthus fragrans leaves extract on carbon steel. Corros Sci 63:82–90. https://doi.org/10.1016/j.corsci.2012.05.026 CrossRefGoogle Scholar Bockris JOM, Drazic D, Despic AR (1961) The electrode kinetics of the deposition and dissolution of iron. Electrochim Acta 4:325–361. https://doi.org/10.1016/0013-4686(61)80026-1 CrossRefGoogle Scholar Zor S, Kandemirli F, Bingul M (2009) Inhibition effects of methionine and tyrosine on corrosion of iron in HCl solution: electrochemical, FTIR, and quantum-chemical study. Prot Met Phys Chem Surf 45:46–53. https://doi.org/10.1134/S2070205109010079 CrossRefGoogle Scholar Luo X, Pan X, Yuan S et al (2017) Corrosion inhibition of mild steel in simulated seawater solution by a green eco-friendly mixture of glucomannan (GL) and bisquaternary ammonium salt (BQAS). Corros Sci 125:139–151. https://doi.org/10.1016/j.corsci.2017.06.013 CrossRefGoogle Scholar Loto RT, Loto CA, Popoola API, Fedotova T (2014) Inhibition effect of butan-1-ol on the corrosion behavior of austenitic stainless steel (Type 304) in dilute sulfuric acid. Arab J, ChemGoogle Scholar Prabakaran M, Kim SH, Hemapriya V, Chung IM (2016) Tragia plukenetii extract as an eco-friendly inhibitor for mild steel corrosion in HCl 1 M acidic medium. Res Chem Intermed 42:3703–3719. https://doi.org/10.1007/s11164-015-2240-x CrossRefGoogle Scholar Lorenz WJ, Mansfeld F (1981) Determination of corrosion rates by electrochemical DC and AC methods. Corros Sci 21:647–672. https://doi.org/10.1016/0010-938X(81)90015-9 CrossRefGoogle Scholar Singh P, Srivastava V, Quraishi MA (2016) Novel quinoline derivatives as green corrosion inhibitors for mild steel in acidic medium: electrochemical, SEM, AFM, and XPS studies. J Mol Liq 216:164–173. 
https://doi.org/10.1016/j.molliq.2015.12.086 CrossRefGoogle Scholar Gunasekaran G, Chauhan LR (2004) Eco friendly inhibitor for corrosion inhibition of mild steel in phosphoric acid medium. Electrochim Acta 49:4387–4395. https://doi.org/10.1016/j.electacta.2004.04.030 CrossRefGoogle Scholar Döner A, Solmaz R, Özcan M, Kardas G (2011) Experimental and theoretical studies of thiazoles as corrosion inhibitors for mild steel in sulphuric acid solution. Corros Sci 53:2902–2913. https://doi.org/10.1016/j.corsci.2011.05.027 CrossRefGoogle Scholar Muthukrishnan P, Prakash P, Jeyaprabha B, Shankar K (2015) Stigmasterol extracted from Ficus hispida leaves as a green inhibitor for the mild steel corrosion in 1 M HCl solution. Arab J Chem. https://doi.org/10.1016/j.arabjc.2015.09.005 CrossRefGoogle Scholar Prabakaran M, Kim SH, Kalaiselvi K et al (2016) Highly efficient Ligularia fischeri green extract for the protection against corrosion of mild steel in acidic medium: electrochemical and spectroscopic investigations. J Taiwan Inst Chem Eng 59:553–562. https://doi.org/10.1016/j.jtice.2015.08.023 CrossRefGoogle Scholar Wang S, Tao Z, He W et al (2015) Effects of cyproconazole on copper corrosion as environmentally friendly corrosion inhibitor in nitric acid solution. Asian J Chem 27:1107–1110. https://doi.org/10.14233/ajchem.2015.18346 CrossRefGoogle Scholar Muthukrishnan P, Jeyaprabha B, Prakash P (2013) Adsorption and corrosion inhibiting behavior of Lannea coromandelica leaf extract on mild steel corrosion. Arab J Chem. https://doi.org/10.1016/j.arabjc.2013.08.011 CrossRefGoogle Scholar Obot IB, Umoren SA, Obi-Egbedi NO (2011) Corrosion inhibition and adsorption behaviour for aluminum by extract of Aningeria robusta in HCl solution: synergistic effect of iodides ions. J Mater Environ Sci 2:60–71. https://doi.org/10.4161/onci.23245 CrossRefGoogle Scholar Vračar LM, Draži DM (2002) Adsorption and corrosion inhibitive properties of some organic molecules on iron electrode in sulfuric acid. Corros Sci 44:1669–1680. https://doi.org/10.1016/S0010-938X(01)00166-4 CrossRefGoogle Scholar Prabakaran M, Kim S-H, Hemapriya V et al (2016) Rhus verniciflua as a green corrosion inhibitor for mild steel in 1 MH2 SO4. RSC Adv 6:57144–57153. https://doi.org/10.1039/C6RA09637A CrossRefGoogle Scholar Li XH, Deng SD, Fu H, Mu GN (2009) Inhibition by tween-85 of the corrosion of cold rolled steel in 1.0 M hydrochloric acid solution. J Appl Electrochem 39:1125–1135. https://doi.org/10.1007/s10800-008-9770-5 CrossRefGoogle Scholar Mu G, Li X, Liu G (2005) Synergistic inhibition between tween 60 and NaCl on the corrosion of cold rolled steel in 0.5 M sulfuric acid. Corros Sci 47:1932–1952. https://doi.org/10.1016/j.corsci.2004.09.020 CrossRefGoogle Scholar Afia L, Salghi R, Bazzi E et al (2011) Testing natural compounds: Argania spinosa kernels extract and cosmetic oil as ecofriendly inhibitors for steel corrosion in 1 M HCl. Int J Electrochem Sci 6:5918–5939Google Scholar Afia L, Salghi R, Zarrouk A et al (2012) Inhibitive action of Argan press cake extract on the corrosion of steel in acidic media. Port Electrochim Acta 30:267–279. https://doi.org/10.4152/pea.201204267 CrossRefGoogle Scholar El Ouadi Y, Bouyanzer A, Majidi L et al (2015) Evaluation of Pelargonium extract and oil as eco-friendly corrosion inhibitor for steel in acidic chloride solutions and pharmacological properties. Res Chem Intermed 41:7125–7149. 
https://doi.org/10.1007/s11164-014-1802-7 CrossRefGoogle Scholar Adewuyi A, Göpfert A, Wolff T (2014) Succinyl amide gemini surfactant from Adenopus breviflorus seed oil: a potential corrosion inhibitor of mild steel in acidic medium. Ind Crops Prod 52:439–449. https://doi.org/10.1016/j.indcrop.2013.10.045 CrossRefGoogle Scholar Pinto GM, Nayak J, Shetty AN (2011) Corrosion inhibition of 6061 Al-15 vol. pct. SiC(p) composite and its base alloy in a mixture of sulphuric acid and hydrochloric acid by 4-(N,N-dimethyl amino) benzaldehyde thiosemicarbazone. Mater Chem Phys 125:628–640. https://doi.org/10.1016/j.matchemphys.2010.10.006 CrossRefGoogle Scholar Singh AK, Mohapatra S, Pani B (2016) Corrosion inhibition effect of Aloe vera gel: gravimetric and electrochemical study. J Ind Eng Chem 33:288–297. https://doi.org/10.1016/j.jiec.2015.10.014 CrossRefGoogle Scholar Kasilingam T, Thangavelu C, Palanivel V (2014) Nano analyses of adsorbed film onto carbon steel. Port Electrochim Acta 32:259–270. https://doi.org/10.4152/pea.201404259 CrossRefGoogle Scholar Idouhli R, N'Ait Ousidi A, Koumya Y et al (2018) Electrochemical studies of monoterpenic thiosemicarbazones as corrosion inhibitor for steel in 1 M HCl. Int J Corros 2018:1–15. https://doi.org/10.1155/2018/9212705 CrossRefGoogle Scholar Yadav DK, Maiti B, Quraishi MA (2010) Electrochemical and quantum chemical studies of 3,4-dihydropyrimidin-2(1H)-ones as corrosion inhibitors for mild steel in hydrochloric acid solution. Corros Sci 52:3586–3598. https://doi.org/10.1016/j.corsci.2010.06.030 CrossRefGoogle Scholar Preethi Kumari P, Shetty P, Rao SA (2017) Electrochemical measurements for the corrosion inhibition of mild steel in 1 M hydrochloric acid by using an aromatic hydrazide derivative. Arab J Chem 10:653–663. https://doi.org/10.1016/j.arabjc.2014.09.005 CrossRefGoogle Scholar Ituen E, Akaranta O, James A (2016) Green anticorrosive oilfield chemicals from 5-hydroxytryptophan and synergistic additives for X80 steel surface protection in acidic well treatment fluids. J Mol Liq 224:408–419. https://doi.org/10.1016/j.molliq.2016.10.024 CrossRefGoogle Scholar Li X, Deng S, Fu H, Li T (2009) Adsorption and inhibition effect of 6-benzylaminopurine on cold rolled steel in 1.0 M HCl. Electrochim Acta 54:4089–4098. https://doi.org/10.1016/j.electacta.2009.02.084 CrossRefGoogle Scholar Raja PB, Fadaeinasab M, Qureshi AK et al (2013) Evaluation of green corrosion inhibition by alkaloid extracts of Ochrosia oppositifolia and isoreserpiline against mild steel in 1 M HCl medium. Ind Eng Chem Res 52:10582–10593. https://doi.org/10.1021/ie401387s CrossRefGoogle Scholar Hamdy A, El-Gendy NS (2013) Thermodynamic, adsorption and electrochemical studies for corrosion inhibition of carbon steel by henna extract in acid medium. Egypt J Pet 22:17–25. https://doi.org/10.1016/j.ejpe.2012.06.002 CrossRefGoogle Scholar Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 1.Laboratory of Physical Chemistry of Materials and Environment, Faculty of Science SemlaliaUniversity Cadi AyyadMarrakechMorocco Idouhli, R., Koumya, Y., Khadiri, M. et al. Int J Ind Chem (2019) 10: 133. https://doi.org/10.1007/s40090-019-0179-2
CommonCrawl
How is it possible for a substance to have a high heat of vaporization but a low boiling point? The final paragraph of Dissenter's question here is worthy of standing alone: [H]ow does one square a high heat of vaporization with a low boiling point? If it takes a lot of energy to vaporize something, then how can that something have a low boiling point? (It's similar to this old question by Friend of Kim, but different in that it emphasizes an apparent contradiction.) physical-chemistry thermodynamics enthalpy boiling-point hBy2Py
This makes sense in the case of ethanol versus water in Dissenter's question: ethanol is bulkier than water, so even though they both have similar values of $\Delta H_{\mathrm{vap}}$, ethanol gains more 'freedom to wiggle' in the gas phase and thus has a lower $T_b$. This also helps to provide a qualitative explanation for why $T_b$ is proportional to pressure: as the pressure of the gas phase increases, the 'freedom to wiggle' gained by entry to the gas phase decreases, because higher $P$ means more collisions with gas-phase molecules. Thus, one can expect $\Delta S_{\mathrm{vap}}$ to decrease with increasing pressure ($S_{\mathrm{liq}}$ isn't affected nearly as much by pressure changes as is $S_{\mathrm{gas}}$), which translates directly to the experimentally observed increase in $T_b$. More details on the relationship between boiling point and pressure can be found in the Wikipedia articles on the saturation temperature and the Clausius-Clapeyron relation. More information on equilibrium and Gibbs free energy can be found on www.chem1.com. $^*$ In this model system, the "knob" one has available (see Gibbs' phase rule) to change the equilibrium pressure and temperature is to add to or remove from the system some number of moles of the species of interest. $^{**}$ As orthocresol rightly notes in a comment, the equilibrium must be expressed in terms of the chemical potentials of each species present. However, in this case there is only one species present and thus examining the total molar Gibbs free energy is sufficient. $\begingroup$ Brian, I disagree with your definition of boiling point as "the temperature at which the liquid and the vapor of the substance are in equilibrium". In fact, in those conditions the liquid and vapor of a substance will always reach an equilibrium at any temperature: this is governed by the vapor pressure of that substance and has nothing to do with being at the boiling point. Fortunately this does not affect the rest of the answer, i.e. your "boiling point equilibrium condition" happens to be correct for any temperature, as long as both phases are present, including at the boiling point. $\endgroup$ – MarcoB Jun 6 '15 at 18:13 $\begingroup$ To be pedantic, the condition for equilibrium is that the chemical potentials (which is, in this case, equal to the molar Gibbs free energies) of both phases are equal. Most of the equations should be written in terms of molar quantities, actually. $\endgroup$ – orthocresol♦ Jun 6 '15 at 19:44 $\begingroup$ @orthocresol Pedantry appreciated; edit made. While $\Delta H_{\mathrm{vap}}$ was described obliquely as being intensive, it was definitely a point worth making more explicit throughout the explanation. $\endgroup$ – hBy2Py Jun 6 '15 at 21:57 $\begingroup$ @MarcoB This was something that bugged me when initially writing up the answer, actually. I have revised the answer somewhat to indicate the system considered in the analysis to be closed and composed solely of the chemical species of interest. I believe this makes the argument accurate, though I welcome critique as errors may well remain -- thermo is not my strongest subject. $\endgroup$ – hBy2Py Jun 6 '15 at 21:59 $\begingroup$ Am accepting as answer since dissent tapered off. Will gladly revisit if critique resumes in the future. $\endgroup$ – hBy2Py Jun 10 '15 at 14:50 Not the answer you're looking for? Browse other questions tagged physical-chemistry thermodynamics enthalpy boiling-point or ask your own question. 
CommonCrawl
Melanin-embedded materials effectively remove hexavalent chromium (CrVI) from aqueous solution An Manh Cuong1, Nguyen Thi Le Na1, Pham Nhat Thang2, Trinh Ngoc Diep2, Ly Bich Thuy3, Nguyen Lai Thanh1 & Nguyen Dinh Thang1,4 Environmental Health and Preventive Medicine volume 23, Article number: 9 (2018)

Currently, it is recognized that water polluted with toxic heavy metal ions may cause serious effects on human health. Therefore, the development of new materials for effective removal of heavy metal ions from water remains an important area of research. Melanin is being considered as a potential material for removal of heavy metals from water. In this study, we synthesized two melanin-embedded beads from two different melanin powder sources and named them IMB (Isolated Melanin Bead, originating from squid ink sacs) and CMB (Commercial Melanin Bead, originating from sesame seeds). These beads were of globular shape and 2–3 mm in diameter. We investigated and compared the sorption abilities of these two bead materials toward hexavalent chromium (CrVI) in water. The isotherm sorption curves were established using the Langmuir and Freundlich models under optimized conditions of pH, sorption time, solid/liquid ratio, and initial concentration of CrVI. FTIR analysis was also carried out to show the differences in surface properties of these two beads. The optimized conditions for isotherm sorption of CrVI on IMB/CMB were set at pH values of 2/2, sorption times of 90/300 min, and solid-liquid ratios of 10/20 mg/mL. The maximum sorption capacities calculated based on the Langmuir model were 19.60 and 6.24 mg/g for IMB and CMB, respectively. However, the adsorption of CrVI on the beads was better described by the Freundlich model, with R2 values of 0.992 for IMB and 0.989 for CMB. The deduced Freundlich constant, 1/n, lying in the range 0.2–0.8, indicated that these beads are good adsorption materials. In addition, structural analysis revealed great differences in physical and chemical properties between IMB and CMB. Interestingly, FTIR analysis showed strong signals of –OH (3295.35 cm−1) and –C=O (1608.63 cm−1) groups on the IMB but not on the CMB. Moreover, loading of CrVI on the IMB caused a shift of the broad peaks from 3295.35 cm−1 and 1608.63 cm−1 to 3354.21 cm−1 and 1597.06 cm−1, respectively, due to –OH and –C=O stretching. Taken together, our study suggests that IMB has great potential as a bead material for the elimination of CrVI from aqueous solutions and may be highly useful for water treatment applications.

Currently, environmental pollution caused by rapid industrialization and technological advances is a worldwide problem. It is recognized that water polluted with toxic heavy metals can have serious effects on human health [1, 2]. Many types of materials have been used to remove heavy metals from aqueous effluents, including activated carbon, plant-leaf materials, chitosan gel, and hydrotalcite [3, 4]. However, these materials are neither fully effective nor cost-efficient. Chromium is considered to be one of the key contaminants in the wastewaters of many industries, such as plating-electroplating, dyeing-pigmenting, film-photography, leather processing and mining. Although both hexavalent chromium (CrVI) and trivalent chromium (CrIII) are predominant species in industrial effluents, CrVI is more toxic than CrIII. More seriously, CrVI is considered a mutagenic agent, which may cause adverse public health problems [1, 5].
Melanin is synthesized in humans, animals, invertebrates, bacteria, and fungi by oxidation of phenol or indole compounds [6,7,8,9]. Besides its role in pigmentation, melanin has many other important biological functions; it serves as an electron transporter, ion balancer, free radical acceptor, and antioxidant, antibacterial, antiviral, and anticancer agent [6, 9]. Thus, melanin has been widely considered a potential material for use in various industries including agriculture, pharmacy, medicine, and cosmetics [6,7,8,9]. Recently, melanin powder (but not melanin bead) has also been examined for its ability to eliminate heavy metal ions (e.g., lead, cadmium, copper, and ferrous) from aqueous solutions [10,11,12]. Generally, a material in powder form may show a very high sorption capacity for heavy metal ions; however, there is no guarantee that the same material retains a high sorption capacity in bead form [3, 4]. Under practical conditions, such as in drinking water treatment, the bead form (rather than the powder form) of a material is the most popular and suitable, because it avoids clogging when water flows through the material column. To date, there has been no study evaluating the use of melanin originating from squid ink sacs for removal of chromium ions, although a previous report showed that melanin secreted from Aureobacidium pullulans could also adsorb CrVI from waste water [13]. However, melanin from different sources may differ considerably in its capacity to remove CrVI ions. In this study, we used two different melanin sources: one isolated from squid ink sacs, which are considered a waste material of seafood processing companies (named IMB, Isolated Melanin Bead), and the other derived from sesame seeds purchased from Xi'an Green Spring Technology Co., LTD, China (named CMB, Commercial Melanin Bead). These two melanin powders were used to make melanin-embedded beads in order to investigate their abilities to remove hexavalent chromium ions (CrVI). This study also aimed to compare the capacity of CrVI uptake by the two melanin-embedded beads, by examining differences in their physical and chemical properties arising from their different origins.

Melanin isolation from squid ink sacs

The method used for isolating melanin has been described previously [14]. Briefly, squid ink sacs collected from the seafood company were broken down to collect the ink liquid. This liquid (50 g) was dissolved in 200 mL of 0.5 M HCl. The mixture was sonicated for 15 min and then stirred for 30 min. The mixture was then incubated at 4 °C for 48 h before centrifuging at 10,000 rpm at 5 °C for 15 min to collect the pellet. The pellet was washed three times with acetone and then three times with distilled water. The melanin pellet was dried at 60 °C, ground, sieved through a 150-μm sieve, and then stored at room temperature.

Method for making spherical melanin-embedded beads

Melanin beads were made according to a previously published protocol [15]. Briefly, melanin powder was embedded using sodium alginate as a binding agent. Sodium alginate was dissolved in 20 mL of distilled water and incubated in a water bath at 70 °C until completely dissolved, before 5 g of melanin powder was added with continuous stirring. The mixture was drawn into a syringe and added drop by drop into 5% CaCl2 solution to create spherical beads 2–3 mm in diameter.
Next, the melanin beads were dried and dipped into 5% CaCl2 solution for 24 h, before being washed three times with distilled water and dried to constant weight.

Fourier-transform infrared analysis

Infrared spectra of the material beads were obtained using a Fourier-transform infrared spectrometer (FTIR Affinity - 1S, SHIMADZU, Kyoto, Japan) [16].

Microscope analysis

The morphology and purity of the melanin powder isolated from squid ink sacs were investigated by scanning electron microscopy (model NANOSEM450, Netherlands), and the surface properties of the melanin beads were examined under a Carl Zeiss stereo-microscope (model Stemi SV2000, Germany).

Sorption experiments and CrVI analytical methods

Experiments were conducted at room temperature. Batch equilibrium sorption experiments were carried out in 250 mL Erlenmeyer flasks containing potassium dichromate (K2Cr2O7) solutions (100 mL) of known concentrations (varying from 5 to 200 mg/L). Melanin was added to the K2Cr2O7 solution at various solid/liquid ratios, and the flasks were placed on a shaker at 200 rpm for various contact times. The solution was then centrifuged at 10,000 rpm for 10 min. In the acidified medium, CrVI reacted with diphenyl carbazide to form a purple-violet colored complex. The concentration of CrVI in the supernatant was determined colorimetrically using a spectrophotometer (Shimadzu), with the absorbance measured at a wavelength (λ) of 540 nm [17]. Standard curves were generated and are depicted in Fig. 1. Adsorption efficiencies were calculated using the following formula:

$$H = \frac{C_{\mathrm{o}} - C_{\mathrm{e}}}{C_{\mathrm{o}}} \times 100\ (\%)$$

Purple-violet colored complex of CrVI–diphenyl carbazide at different concentrations (a) and standard curve for CrVI analysis measured by spectrophotometer (b)

H: Adsorption efficiency (%)
Co: Initial concentration (mg/L)
Ce: Equilibrium concentration (mg/L)

Method for determining the isotherm adsorption equations

Freundlich adsorption model: The Freundlich model is used to describe adsorption from liquids and can be expressed as the following equation [17]:

$$\ln q_{\mathrm{e}} = \ln K_{\mathrm{F}} + \frac{1}{n} \times \ln C_{\mathrm{e}}$$

Langmuir adsorption model: The Langmuir model, which is mainly used to determine the maximum adsorption capacity, is expressed as the following equation [17]:

$$\frac{C_{\mathrm{e}}}{q_{\mathrm{e}}} = \frac{1}{q_{\mathrm{max}}} \times C_{\mathrm{e}} + \frac{1}{q_{\mathrm{max}} \times K_{\mathrm{L}}}$$

Ce: concentration at the equilibrium stage (mg/L)
qe: adsorption capacity at the equilibrium stage (mg/g)
qmax: maximum adsorption capacity (mg/g)
KL: adsorption constant for the Langmuir model (L/mg)
KF, 1/n: adsorption constants for the Freundlich model (L/mg)

In this study, all experiments were repeated three times, and the collected data were analyzed with the appropriate statistical tests. To compare two groups, the Mann-Whitney U test (for non-parametric comparisons) or Student's t test (for parametric comparisons) was used. Significance was set at three levels (p < 0.05, p < 0.01, and p < 0.001) [1, 2].

Synthesis of spherical melanin beads

After purification, the isolated melanin pellet was lyophilized to obtain intact natural squid melanin. The melanin sample was then examined by scanning electron microscopy (SEM), which showed high purity without contamination by cellular components (Fig. 2). The purity of this melanin was more than sufficient for use in heavy metal adsorption experiments [18].
Morphology and purification of melanin isolated from squid ink sacs were examined under scanning electron microscope (SEM) at × 10,000 (left image) and × 50,000 (right image) To produce the spherical melanin-embedded beads, melanin powder was added into the binding agent solution containing alginate at different percentages, which varied from 3 to 15%, to form a mixture before dropping into the CaCl2 solution to form beads (Fig. 3). The results showed that at low percentages of alginate (3 and 4%), the formed melanin beads were not stable and were easily broken since the concentration of the binding agent was insufficient. At the high percentages of alginate (12 and 15%), the formed melanin beads did not have spherical shape because the viscosity of the mixture was too high. Percentages of alginate in the range of 5–10% were optimal to form stable melanin beads with spherical shape (Fig. 3). Melanin-embedded beads with a 3% alginate, b 4% alginate, c 5% alginate, d 7% alginate, e 10% alginate, and f 15% alginate as binding agent In addition, neither alginate content (in the range of 3–15%) nor the drying method (un-drying, low-temperature drying, high-temperature drying, or freezing drying) had any significant effect on the sorption capacities of the beads (Fig. 4). However, alginate at 5% was chosen because it yielded the highest productivity and uniformity of the melanin beads. Effect of drying method (a) and alginate content (b) on sorption efficiency of CrVI on IMB and CMB. (UD: undrying; LTD: low temperature drying; HTD: high temperature drying; FD: freezing drying). Three asterisks indicate significant difference (p < 0.001) between IMB and CMB by the Student's t test Effect of pH on CrVI sorption by melanin-embedded beads The effect of pH on the efficiency of CrVI removal by melanin beads was evaluated for the following set conditions: shaking rate of 200 rpm at 30 °C, solid/liquid ratio of 10 g/L, shaking time of 1 h, and CrVI initial concentration of 200 mg/L. The results are shown in Fig. 5. The removal efficiencies of CrVI by IMB or CMB were better at lower pH values and reached the maximum at pH 1–2 (Fig. 5a). However, IMB had a much higher sorption capacity compared to that of CMB at any pH value. In particular, at the optimized pH (1–2), the sorption capacity of IMB was almost threefold higher than that of CMB. Effect of pH (a), sorption time (b), solid/liquid (c), and CrVI initial concentration (d) on sorption efficiency of CrVI on CMB and IMB. *, **, and *** Significant difference (p < 0.05, 0.01 and 0.001, respectively) between IMB and CMB by the Student's t test Effect of sorption time on CrVI removal efficiency The effect of sorption time on the efficiency of CrVI removal by melanin beads was also evaluated for the following set conditions: pH of 2, shaking rate of 200 rpm at 30 °C, initial CrVI concentration of 200 mg/L, and the solid/liquid ratios of 20 g/L. The results indicated that the longer the sorption time, the higher the removal efficiency. However, the removal efficiency quickly increased during the first hour then slowly increased and reached the highest values around 96% at 2 h for IMB and 67% at 6 h for CMB (Fig. 5b). In general, at any sorption time, IMB was more effective than CMB at removing CrVI. In particular, at the same sorption time of 2 h, the sorption capacity of IMB was 2.8-fold higher than that of CMB. 
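For readers who want to reproduce this kind of batch calculation, the Python sketch below shows how the removal efficiency defined in the Methods and the corresponding sorption capacity are computed. The numerical batch values are hypothetical, chosen only to match the scale of the experiments (100 mL of solution at a 20 g/L solid/liquid ratio), and the mass-balance expression for qe is the standard batch relation rather than one quoted explicitly in the text.

```python
def removal_efficiency(c0, ce):
    """H(%) = (Co - Ce) / Co * 100."""
    return (c0 - ce) / c0 * 100.0

def sorption_capacity(c0, ce, volume_l, mass_g):
    """qe (mg/g) = (Co - Ce) * V / m; standard batch mass balance (assumption, not stated in the text)."""
    return (c0 - ce) * volume_l / mass_g

# Hypothetical batch: 100 mL of 200 mg/L CrVI with 2 g of beads (20 g/L)
print(removal_efficiency(200.0, 10.0))            # -> 95.0 %
print(sorption_capacity(200.0, 10.0, 0.1, 2.0))   # -> 9.5 mg/g
```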
Effect of solid/liquid ratios on sorption efficiency To investigate the effect of solid/liquid ratios on the efficiency of CrVI adsorption, we tested solid/liquid ratios in the range of 1–30 g/L with the following set conditions: pH of 2, shaking rate of 200 rpm at 30 °C, shaking time of 1 h, and CrVI initial concentration of 200 mg/L. The results showed that the removal efficiency increased rapidly as the solid/liquid ratio increased from 1 to 20 g/L and increased only slightly from 20 to 30 g/L. The maximum removal efficiencies reached 95 and 35% for IMB and CMB, respectively, at the solid/liquid ratio of 30 g/L (Fig. 5c). At the same solid/liquid ratio, IMB was much more effective than CMB at eliminating CrVI. Effect of initial concentration of CrVI on sorption efficiency The effect of the initial concentration of CrVI on the efficiency of CrVI removal by melanin beads was evaluated for the following set conditions: pH of 2, shaking rate of 200 rpm at 30 °C, solid/liquid ratio of 20 g/L, and sorption time of 2 h for IMB or 4 h for CMB. The initial concentrations of CrVI were in the range of 5–200 mg/L. The CrVI removal efficiencies and the sorption capacities of CMB and IMB are shown in Fig. 5d and presented in Tables 1 and 2. The maximum capacities for IMB and CMB were 19.6 and 6.24, respectively. These results indicate that while CMB is not that efficient at eliminating CrVI, IMB is efficient and serves as a promising material for CrVI removal due to its high sorption capacity, especially as bead form. Previous studies have tested numerous materials (e.g., activated carbon, sludge, plant-leaf materials, and chitosan gel) for CrVI removal from aqueous solution and have shown that these materials as powder form had sorption capacities of wide range from 6 mg/g to 50 mg/g [3, 4]. Our study shows that IMB (in bead form) is a highly effective material for removing CrVI in water. Table 1 CrVI removal efficiencies and sorption capacities of CMB Table 2 CrVI removal efficiencies and sorption capacities of IMB CrVI sorption kinetics The results of this study indicate that an increase of initial concentration can lead to a decrease of CrVI removal efficiency and increase of the sorption capacity. From the isotherm adsorption results of CrVI at different concentrations on IMB and CMB at optimized conditions, we then examined the suitable isotherm adsorption model for adsorption of CrVI on IMB and CMB using the two common models of Langmuir and Freundlich. The results are shown in Fig. 6. The isotherm equations deduced from Langmuir and Freundlich models were presented as follows: Langmuir model (a and b) and Freundlich model (c and d) for isotherm sorption mechanisms of CrVI on CMB (a and c) and IMB (b and d) Equation of the Langmuir model for CMB (Fig. 6a): \( \frac{C_e}{q_e} \) = 0.153 Ce + 1.450; R2 = 0.955 Equation of the Langmuir model for IMB (Fig. 6b): \( \frac{C_{\mathrm{e}}}{q_{\mathrm{e}}} \) = 0.051 Ce + 0.492; R2 = 0.885 Equation of the Freundlich model for CMB (Fig. 6c): ln qe = 0.491lnCe − 0.237; R2 = 0.989. Equation of the Freundlich model for IMB (Fig. 6d): ln qe = 0.601lnCe + 0.655; R2 = 0.992 Parameters for isotherm adsorption of CMB and IMB are summarized in Table 3. The results suggest that the Freundlich model is more suitable than the Langmuir model to describe the sorption mechanism of CrVI on melanin bead since the R2—coefficient value of the Freundlich model—was higher than that of the Langmuir model. 
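The parameters summarized in Table 3 follow directly from the slopes and intercepts of the linear equations quoted above. A short Python check is given below; small differences from the tabulated values can arise from rounding of the quoted coefficients.

```python
import math

# Langmuir:   Ce/qe = (1/qmax)*Ce + 1/(qmax*KL)  ->  qmax = 1/slope, KL = slope/intercept
# Freundlich: ln(qe) = (1/n)*ln(Ce) + ln(KF)     ->  1/n = slope,    KF = exp(intercept)
def langmuir_params(slope, intercept):
    return 1.0 / slope, slope / intercept          # (qmax in mg/g, KL in L/mg)

def freundlich_params(slope, intercept):
    return slope, math.exp(intercept)              # (1/n, KF)

print(langmuir_params(0.051, 0.492))     # IMB: qmax ~ 19.6 mg/g, KL ~ 0.10 L/mg
print(langmuir_params(0.153, 1.450))     # CMB: qmax ~ 6.5 mg/g,  KL ~ 0.11 L/mg
print(freundlich_params(0.601, 0.655))   # IMB: 1/n ~ 0.60, KF ~ 1.9
print(freundlich_params(0.491, -0.237))  # CMB: 1/n ~ 0.49, KF ~ 0.79
```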
The data also indicate that the surfaces of IMB and CMB are not uniform, and therefore the distribution of reaction centers on the surface of the materials probably follows an exponential rather than a linear law. In the Freundlich model, the mechanism and rate of adsorption are functions of the constants 1/n and KF. For a good adsorbent, the 1/n value should lie in the range 0.2 < 1/n < 0.8, and a smaller value of 1/n indicates better adsorption and the formation of strong bonds between the adsorbate and the adsorbent [19, 20]. In this study, the 1/n values of 0.49 and 0.6 for CMB and IMB, respectively, demonstrate that both IMB and CMB are good materials for adsorption of CrVI; however, IMB has a much better adsorption capacity than CMB.

Table 3 Parameters for isotherm sorption of CMB and IMB materials

Fourier transform infrared analysis

In general, the surface of a melanin material carries many chemical groups, including hydroxyl, carboxyl, and ether groups, which have been proposed to be responsible for the sorption of metal ions through the formation of chemical bonds. The chemisorption ability of the material depends on factors such as the number of active centers, their accessibility, and the affinity between the active centers and the metal ions [21]. The surfaces of IMB and CMB were observed under a stereo-microscope and are presented in Fig. 7. The differences in surface structure between IMB and CMB are clearly distinguishable. The peaks of the hydroxyl, carboxyl, and ether groups of IMB were very clear and sharp, while the corresponding peaks of CMB were much less distinct, especially for the hydroxyl group. This result indicates that the distribution of chemical groups on the surface of IMB may be denser than on the surface of CMB, which could lead to a difference in the number of chemical linkages formed between melanin and CrVI ions.

Surface structures of IMB (a and b) and CMB (c and d) at × 10 magnifications

In addition, FTIR analysis was used to analyze the functional groups on the surfaces of the native and CrVI-bound IMB and CMB; the results are shown in Figs. 8 and 9. IMB and CMB showed completely different FTIR spectra. While IMB had broad absorption peaks at 3296 cm−1 and 1608 cm−1 due to the presence of the –OH and –C=O groups, respectively [22, 23], there were almost no peaks at these positions on the surface of CMB (Fig. 8a and Fig. 9a). Although many other absorption peaks were observed, not all of them could be assigned. After CrVI loading, the FTIR spectra of CrVI-bound IMB and CrVI-bound CMB are presented in Fig. 8b and Fig. 9b, respectively. The results indicate that the adsorption of CrVI on the surface of IMB may have caused a shift of the broad peaks at 3296 cm−1 and 1608 cm−1 to 3354 cm−1 and 1597 cm−1, respectively, due to –OH and –C=O stretching (Fig. 8b).

FTIR spectra of native IMB (a) and CrVI-loaded IMB (b)

FTIR spectra of native CMB (a) and CrVI-loaded CMB (b)

Chromium pollution originating from the plating and electroplating industries, the iron and steel industries, and inorganic-chemical production represents a huge problem for environmental health [24]. Exposure to chromium ions, especially CrVI, may cause diseases of the digestive system and lung; such complications can include epigastric pain, nausea, diarrhea, hemorrhage, and cancer [25]. Thus, it is essential to eliminate CrVI from wastewater before disposal.
There are many methods that can be applied to remove CrVI from aqueous solutions; these include ion exchange [26], chemical precipitation [27], electrochemical precipitation [28], reduction [29], solvent extraction [30], adsorption [31], membrane separation [32], and reverse osmosis and biosorption [33]. However, these methods have various disadvantages, such as low removal efficiency, expensive equipment, high operating cost, and high energy requirements [34]. In this study, we investigated the ability of melanin (as a material in bead form) to remove CrVI from aqueous solution. Two natural melanin sources, one originating from plants (the commercial product) and the other extracted from squid ink sacs (the isolated product), were used to make melanin-embedded beads; the beads were called CMB and IMB, respectively.

In many Asian countries, the seafood industry is one of the most important industries and provides great economic benefit. Squid and octopus are processed in many seafood processing companies for export; nevertheless, the ink sacs of squid and octopus are treated as waste by these companies. More importantly, melanin accounts for about 16–18% of the total weight of the sac [14]. Thus, utilization of this waste for melanin production could have great impact, since melanin has been considered a potential material not only for heavy metal removal but also for many other applications, such as medicine and cosmetics [6,7,8,9].

To examine the ability of IMB and CMB to remove CrVI, experiments on the effects of various parameters, such as pH, sorption time, and solid/liquid ratio, on CrVI sorption were conducted, and isotherm models including the Freundlich and Langmuir models were applied to fit the experimental data. In accordance with previous studies [19, 35,36,37,38,39], the data showed that IMB and CMB both had their highest sorption capacities at pH 1–2 and that the Freundlich model was the best model to represent the sorption of CrVI on IMB and/or CMB.

Many materials have been used to remove chromium ions from the effluents of various industries. The removal capacities of these materials vary from 0.2 to 200 mg/g. In general, the sorption capacities of materials differ according to their origins, for example: plant-derived materials (0.5–10 mg/g), activated carbon materials (2–30 mg/g), coal (6.68 mg/g), hydrous titanium oxide (5 mg/g), maghemite nanoparticles (1.5 mg/g), and tannin gel (200 mg/g) [19, 35,36,37,38,39]. In addition, almost all materials used in previous studies were in powder form, and therefore their CrVI removal capacities would be significantly decreased if they were made into bead form.

Previous studies demonstrated that acidic conditions at pH 1 or 2 were good for the removal of CrVI from water [35,36,37,38,39]. This study produced a similar result. In practice, CrVI pollution mostly comes from the mining and plating industries, whose effluents normally have low pH values. This means that IMB should be a suitable material for the treatment of CrVI in industrial effluent. In some cases, CrVI pollution may also occur in natural water, which has a pH of 5–7. However, the concentration of CrVI in natural water is below 1 μg/L [40], while this study showed that the adsorption capacity of IMB for CrVI was about 7–8 mg/g at pH 6–7 (one third of that at pH 1–2). This means that IMB is also good enough for removing CrVI from natural water.
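For reference, the removal efficiencies and sorption capacities quoted throughout this section follow from a simple mass balance on the batch experiments. The snippet below is a minimal sketch of that arithmetic; the concentrations and solid/liquid ratio used are hypothetical and are not data from this study.

```python
# Mass-balance relations behind the removal-efficiency and capacity figures
# (a sketch with hypothetical numbers, not measurements from this study).
def removal_efficiency(c0, ce):
    """Percentage of CrVI removed, given initial and equilibrium conc. (mg/L)."""
    return 100.0 * (c0 - ce) / c0

def sorption_capacity(c0, ce, solid_liquid_ratio):
    """Uptake qe in mg of CrVI per g of beads; ratio is in g of beads per L."""
    return (c0 - ce) / solid_liquid_ratio

c0, ce, ratio = 200.0, 50.0, 20.0      # hypothetical: mg/L, mg/L, g/L
print(f"removal  = {removal_efficiency(c0, ce):.1f} %")           # 75.0 %
print(f"capacity = {sorption_capacity(c0, ce, ratio):.2f} mg/g")  # 7.50 mg/g
```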
In this study, our results demonstrated that melanin materials have potential for the removal of CrVI from water. However, melanin from different sources has different physical and chemical properties. In particular, the properties of IMB (melanin extracted from squid ink sacs) were significantly different from those of CMB (plant-derived melanin). These differences led to a difference in the ability of the two melanin materials to eliminate CrVI from aqueous solution: CMB had a CrVI sorption capacity of 6.24 mg/g, while IMB had a CrVI sorption capacity of 19.8 mg/g. In summary, our study suggests that melanin isolated from squid ink sacs (which are considered waste by seafood processing companies) can be used to synthesize melanin beads and applied in water treatment to effectively remove CrVI ions.

Abbreviations: CMB: commercial melanin bead; CrVI: hexavalent chromium; FTIR: Fourier transform infrared; IMB: isolated melanin bead

References

Ohgami N, Yamanoshita O, Thang ND, Yajima I, Nakano C, Wenting W, Ohnuma S, Kato M. Carcinogenic risk of chromium, copper and arsenic in CCA-treated wood. Environ Pollut. 2015;206:456–60. Thang ND, Yajima I, Kumasaka M, Kato M. Bidirectional functions of arsenic as a carcinogen and an anticancer agent in human squamous cell carcinoma. PLoS One. 2014; https://doi.org/10.1371/journal.pone.0096945. Singha B, Naiya TK, Bhattacharya AK, Das SK. Cr(VI) ions removal from aqueous solutions using natural adsorbents—FTIR studies. J Environ Protection. 2011;2:729–35. Mohan D, Pittman CU Jr. Activated carbons and low cost adsorbents for remediation of tri- and hexavalent chromium from water. J Hazard Mater. 2006;137:762–811. EPA, Environmental Protection Agency. Environmental Pollution Control Alternatives. EPA/625/5-90/025, EPA/625/4-89/023. Cincinnati, US; 1990. Manivasagan P, Venkatesan J, Senthilkumar K, Sivakumar K, Kim SK. Isolation and characterization of biologically active melanin from Actinoalloteichus sp. MA-32. Int J Biol Macromol. 2013;58:263–74. Mbonyiryivuze A, Nuru ZY, Ngom BD, Mwakikunga B, Dhlamini SM, Park E, Maaza M. Morphological and chemical composition characterization of commercial sepia melanin. American Journal of Nanomaterials. 2015;3(1):22–7. Nosanchuk JD, Casadevall A. Impact of melanin on microbial virulence and clinical resistance to antimicrobial compounds. Antimicrob Agents Chemother. 2006;6:3519–28. Tarangini K, Mishra S. Production, characterization and analysis of melanin from isolated marine Pseudomonas sp. using vegetable waste. Res J Engineering Sci. 2013;2(5):40–6. Hong L, Simon JD. Current understanding of the binding sites, capacity, affinity, and biological significance of metals in melanin. J Phys Chem B. 2007;111(28):7938–47. Hong L, Liu Y, Simon JD. Binding of metal ions to melanin and their effects on the aerobic reactivity. Photochem Photobiol. 2004;80:477–81. Szpoganicz B, Gidanian S, Kong P, Farmer P. Metal binding by melanins: studies of colloidal dihydroxyindole-melanin, and its complexation by Cu(II) and Zn(II) ions. J Inorg Biochem. 2002;89:45–53. Yu XH, Gu GX, Shao R, Chen RX, Wu XJ, Xu W. Study on adsorbing chromium (VI) ions in wastewater by Aureobasidium pullulans secretion of melanin. Adv Mater Res. 2011;156-157:1378–84. Magarelli M, Passamonti P, Renieri C. Purification, characterization and analysis of sepia melanin from commercial sepia ink (Sepia officinalis). Rev CES Med Vet Zootec. 2010;5(2):18–28. Kato M, Azimi MD, Fayaz SH, Shah MD, Hoque MZ, Hamajima N, et al.
Uranium in well drinking water of Kabul, Afghanistan and its effective, low-cost depuration using Mg-Fe based hydrotalcite-like compounds. Chemosphere. 2016;165:27–32. Bansal M, Singh D, Garg VK. A comparative study for the removal of hexavalent chromium from aqueous solution by agriculture wastes' carbons. J Hazard Mater. 2009;171(1-3):83–92. Gupta S, Babu BV. Removal of toxic metal Cr(VI) from aqueous solutions using sawdust as adsorbent: equilibrium, kinetics and regeneration studies. Chem Eng J. 2009;150:352–65. Aoyama M, Sugiyama T, Doi S, Cho NS, Kim HE. Removal of hexavalent chromium from dilute aqueous solution by coniferous leaves. Holzforschung. 1999;53:365–8. Dakiky M, Khamis M, Manassra A, Mereb M. Selective adsorption of chromium(VI) in industrial wastewater using low-cost abundantly available adsorbents. Adv Environ Res. 2002;6(4):533–40. Aksu Z, Acikel U, Kabasakal E, Tezer S. Equilibrium modelling of individual and simultaneous biosorption of chromium(VI) and nickel(II) onto dried activated sludge. Water Res. 2002;36:3063–73. Garg UK, Kaur MP, Garg VK, Sud D. Removal of nickel (II) from aqueous solution by adsorption on agricultural waste biomass using a response surface methodological approach. Bioresour Technol. 2008;99(5):1325–31. Liu Y, Simon JD. Metal-ion interactions and the structural organization of Sepia eumelanin. Pigment Cell Res. 2005;18(1):42–8. Ho YS, Chiang CC, Hsu YC. Sorption kinetics for dye removal from aqueous solution using activated clay. Sep Sci Technol. 2001;36(11):2473–88. Wang YT, Xiao C. Factors affecting hexavalent chromium reduction in pure cultures of bacteria. Water Res. 1995;29:2467–74. Mohanty K, Jha M, Meikap BC, Biswas MN. Removal of chromium(VI) from dilute aqueous solutions by activated carbon developed from Terminalia Arjuna nuts activated with zinc chloride. Chem Eng Sci. 2005;60:3049–59. Tiravanti G, Petruzzelli D, Passiono R. Pretreatment of tannery wastewaters by an ion exchange process for Cr(III) removal and recovery. Water Sci Technol. 1997;36:197–207. Zhou X, Korenaga T, Takahashi T, Moriwake T, Shinoda S. A process monitoring/controlling system for the treatment of wastewater containing chromium(VI). Water Res. 1993;27:1049–54. Kongsricharoern N, Polprasert C. Chromium removal by a bipolar electrochemical precipitation process. Water Sci Technol. 1996;34:109–16. Seaman JC, Bertsch BM, Schwallie L. In situ Cr(VI) reduction within coarsetextured, oxide-coated soil and aquifer systems using Fe(II) solutions. Environ Sci Technol. 1999;33:938–44. Calace N, Muro DA, Nardi E, Petronio BM, Pietroletti M. Adsorption isotherms for describing heavy metal retention in paper mill sludges. Ind Eng Chem Res. 2002;41:5491–7. Pagilla K, Canter LW. Laboratory studies on remediation of chromium contaminated soils. J Environ Eng. 1999;125:243–8. Chakravarti AK, Chowdhury SB, Chakrabarty S, Chakrabarty T, Mukherjee DC. Liquid membrane multiple emulsion process of chromium(VI) separation from wastewaters. Colloids Surf A Physicochem Eng Asp. 1995;103:59–71. Aksu Z, Ozer D, Ekiz H, Kutsal T, Calar A. Investigation of biosorption of chromium(VI) on C. crispate in two staged batch reactor. Environ Technol. 1996;17:215–20. Aksu Z, Gonen F, Demircan Z. Biosorption of chromium(VI) ions by Mowital B3OH resin immobilized activated sludge in a packed bed: comparison with granular activated carbon. Process Biochem. 2002;38:175–86. Aoyama M. Removal of Cr(VI) from aqueous solution by London plane leaves. J Chem Technol Biotechnol. 2003;78:601–4. 
Aoyama M, Kishino M, Jo TS. Biosorption of Cr(VI) on Japanese cedar bark. Sep Sci Technol. 2004;39(5):1149–62. Aoyama M, Tsuda M, Seki K, Doi S, Kurimoto Y, Tamura Y. Adsorption of Cr(VI) from dichromate solutions onto black locust leaves. Holzforschung. 2000;54:340–2. Mohan D, Singh KP, Singh VK. Removal of hexavalent chromium from aqueous solution using low-cost activated carbons derived from agricultural waste materials and activated carbon fabric cloth. Ind Eng Chem Res. 2005;44:1027–42. Mohan D, Singh KP, Singh VK. Trivalent chromium removal from wastewater using low cost activated carbon derived from agricultural waste material and activated carbon fabric cloth. J Hazard Mater. 2006;135:280–95. WHO. Guidelines for drinking-water quality. 2nd ed. vol. 2. Geneva: World Health Organization; 1996.

Author information: Department of Biochemistry and Molecular Biology, Faculty of Biology, VNU University of Science, Vietnam National University, 334 Nguyen Trai St., Thanh Xuan Dist, Hanoi, Vietnam: An Manh Cuong, Nguyen Thi Le Na, Nguyen Lai Thanh & Nguyen Dinh Thang. High School for Gifted Students, VNU University of Science, Hanoi, Vietnam: Pham Nhat Thang & Trinh Ngoc Diep. Institute for Environmental Science and Technology, Hanoi University of Science and Technology, Hanoi, Vietnam: Ly Bich Thuy. Key Laboratory of Enzyme and Protein Technology, VNU University of Science, Hanoi, Vietnam: Nguyen Dinh Thang.

Contributions: AMC, NTLN, PNT, and TND carried out the melanin bead synthesis and adsorption experiments. NLT carried out the FTIR analysis. LBT performed the statistical analysis. NDT conceived and designed the study and drafted the manuscript. All authors read and approved the final manuscript. Correspondence to Nguyen Dinh Thang.

Citation: Cuong, A.M., Le Na, N.T., Thang, P.N. et al. Melanin-embedded materials effectively remove hexavalent chromium (CrVI) from aqueous solution. Environ Health Prev Med 23, 9 (2018). https://doi.org/10.1186/s12199-018-0699-y
Computability of the Julia set. Nonrecurrent critical orbits

Artem Dudko, Institute for Mathematical Sciences, Stony Brook University, Stony Brook, NY 11794-3660, United States

Received June 2012; Revised September 2013; Published December 2013

Abstract: We prove that the Julia set of a rational function $f$ is computable in polynomial time, assuming that the postcritical set of $f$ does not contain any critical points or parabolic periodic orbits.

Keywords: computability, computational complexity, Julia set.

Mathematics Subject Classification: Primary: 37F50, 37F10; Secondary: 03F6.

Citation: Artem Dudko. Computability of the Julia set. Nonrecurrent critical orbits. Discrete & Continuous Dynamical Systems - A, 2014, 34 (7): 2751-2778. doi: 10.3934/dcds.2014.34.2751
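The theorem above concerns rigorous polynomial-time algorithms, but it may help to recall what a naive, non-rigorous computation of a Julia set looks like. The sketch below approximates the filled Julia set of a quadratic polynomial z² + c by the standard escape-time test; it is only an illustration of the computational problem, not the certified algorithm of the paper (rigorous methods control round-off and typically use distance estimates). The parameter c (chosen close to the "Douady rabbit" value), the grid size, the iteration bound, and the escape radius are all arbitrary choices for this example.

```python
# Naive escape-time approximation of the filled Julia set of p(z) = z^2 + c.
# Illustrative only: parameters are arbitrary and no rigorous error control
# is attempted.
import numpy as np

def filled_julia(c, size=400, bound=1.6, max_iter=200, radius=2.0):
    """Return a boolean grid: True where the orbit of z appears to stay bounded."""
    xs = np.linspace(-bound, bound, size)
    ys = np.linspace(-bound, bound, size)
    z = xs[None, :] + 1j * ys[:, None]        # grid of starting points
    alive = np.ones(z.shape, dtype=bool)       # points that have not escaped yet
    for _ in range(max_iter):
        z[alive] = z[alive] ** 2 + c           # iterate only the surviving points
        alive &= np.abs(z) <= radius           # mark points that escape the disk
    return alive

if __name__ == "__main__":
    grid = filled_julia(c=-0.123 + 0.745j)     # near the "Douady rabbit" parameter
    # Crude text rendering of a downsampled grid.
    for row in grid[::20, ::10]:
        print("".join("#" if cell else "." for cell in row))
```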
Representative Jury Pools

Justice Breyer is a Stanford Alum

In the Supreme Court case Berghuis v. Smith, the Supreme Court (of the United States) discussed the question: "If a group is underrepresented in a jury pool, how do you tell?" Justice Breyer (an alumnus of Stanford) opened the questioning by invoking the binomial theorem. He hypothesized a scenario involving "an urn with a thousand balls, and sixty are red, and nine hundred forty are black, and then you select them at random… twelve at a time." According to Justice Breyer and the binomial theorem, if the red balls were black jurors then "you would expect… something like a third to a half of juries would have at least one black person" on them.

Note: What is missing in this conversation is the power of diverse backgrounds when making difficult decisions.

Simulation: Technically, since jurors are selected without replacement, you should represent the number of jurors drawn from the underrepresented group as a Hypergeometric random variable (a random variable we don't look at explicitly in CS109) such that \begin{align*}X \sim \text{HypGeo}(n=12, N = 1000, m = 60)\end{align*} \begin{align*} P(X \geq 1) &= 1 - P(X = 0) \\ &= 1 - \frac{ {60 \choose 0}{940 \choose 12} }{1000 \choose 12} \\ &\approx 0.5261 \end{align*} However, Justice Breyer made his case by citing a Binomial distribution. This isn't a perfect use of the binomial, because the binomial assumes that each experiment has equal likelihood ($p$) of success. Because the jurors are selected without replacement, the probability of getting a minority juror changes slightly after each selection (depending on what the previous selections were). However, as we will see, because the probabilities don't change too much the binomial distribution is not too far off. \begin{align*} X \sim \text{Binomial}(n=12, p = 60/1000) \end{align*} \begin{align*} P(X \geq 1) &= 1 - P(X = 0) \\ &= 1 - {12 \choose 0}(0.06)^{0}(1- 0.06)^{12} \\ &\approx 0.5241 \end{align*}
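Since the course materials also point to a "Probability in Python" reference, here is a small sketch that checks both numbers with scipy.stats. The only assumption is that scipy is available; the parameter names follow scipy's hypergeom convention (M = population size, n = number of tagged items, N = number of draws).

```python
# Compare the exact hypergeometric answer with Justice Breyer's binomial
# approximation for P(at least one juror from the 60-person minority group).
from scipy.stats import hypergeom, binom

M, tagged, draws = 1000, 60, 12            # population, minority members, jury size

p_hyper = 1 - hypergeom.pmf(0, M, tagged, draws)   # without replacement (exact)
p_binom = 1 - binom.pmf(0, draws, tagged / M)      # with replacement (approximation)

print(f"Hypergeometric: P(X >= 1) = {p_hyper:.4f}")   # ~0.5261
print(f"Binomial:       P(X >= 1) = {p_binom:.4f}")   # ~0.5241
```

Both values match the hand calculations above, and the small gap between them (about 0.002) is exactly the sense in which the binomial approximation is "not too far off."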
Events matching "+Differential +Geometry +Seminar" Direct "delay" reductions of the Toda equation 13:10 Fri 23 Jan, 2009 :: School Board Room :: Prof Nalini Joshi :: University of Sydney A new direct method of obtaining reductions of the Toda equation is described. We find a canonical and complete class of all possible reductions under certain assumptions. The resulting equations are ordinary differential-difference equations, sometimes referred to as delay-differential equations. The representative equation of this class is hypothesized to be a new version of one of the classical Painleve equations. The Lax pair associated to this equation is obtained, also by reduction. Noncommutative geometry of odd-dimensional quantum spheres 13:10 Fri 27 Feb, 2009 :: School Board Room :: Dr Partha Chakraborty :: University of Adelaide We will report on our attempts to understand noncommutative geometry in the light of the example of quantum spheres. We will see how to produce an equivariant fundamental class and also indicate some of the limitations of isospectral deformations. Bibundles 13:10 Fri 6 Mar, 2009 :: School Board Room :: Prof Michael Murray :: University of Adelaide The index theorem for projective families of elliptic operators 13:10 Fri 13 Mar, 2009 :: School Board Room :: Prof Mathai Varghese :: University of Adelaide Geometric analysis on the noncommutative torus 13:10 Fri 20 Mar, 2009 :: School Board Room :: Prof Jonathan Rosenberg :: University of Maryland Noncommutative geometry (in the sense of Alain Connes) involves replacing a conventional space by a "space" in which the algebra of functions is noncommutative. The simplest truly non-trivial noncommutative manifold is the noncommutative 2-torus, whose algebra of functions is also called the irrational rotation algebra. I will discuss a number of recent results on geometric analysis on the noncommutative torus, including the study of nonlinear noncommutative elliptic PDEs (such as the noncommutative harmonic map equation) and noncommutative complex analysis (with noncommutative elliptic functions). Classification and compact complex manifolds I 13:10 Fri 17 Apr, 2009 :: School Board Room :: A/Prof Nicholas Buchdahl :: University of Adelaide Classification and compact complex manifolds II String structures and characteristic classes for loop group bundles 13:10 Fri 1 May, 2009 :: School Board Room :: Mr Raymond Vozzo :: University of Adelaide The Chern-Weil homomorphism gives a geometric method for calculating characteristic classes for principal bundles. In infinite dimensions, however, the standard theory fails due to analytical problems. In this talk I shall give a geometric method for calculating characteristic classes for a principal bundle with structure group the loop group of a compact group which side-steps these complications. This theory is inspired in some sense by results on the string class (a certain cohomology class on the base of a loop group bundle) which I shall outline. 
Four classes of complex manifolds 13:10 Fri 8 May, 2009 :: School Board Room :: A/Prof Finnur Larusson :: University of Adelaide We introduce the four classes of complex manifolds defined by having few or many holomorphic maps to or from the complex plane. Two of these classes have played an important role in complex geometry for a long time. A third turns out to be too large to be of much interest. The fourth class has only recently emerged from work of Abel Prize winner Mikhail Gromov. Lagrangian fibrations on holomorphic symplectic manifolds I: Holomorphic Lagrangian fibrations 13:10 Fri 5 Jun, 2009 :: School Board Room :: Dr Justin Sawon :: Colorado State University A compact K{\"a}hler manifold $X$ is a holomorphic symplectic manifold if it admits a non-degenerate holomorphic two-form $\sigma$. According to a theorem of Matsushita, fibrations on $X$ must be of a very restricted type: the fibres must be Lagrangian with respect to $\sigma$ and the generic fibre must be a complex torus. Moreover, it is expected that the base of the fibration must be complex projective space, and this has been proved by Hwang when $X$ is projective. The simplest example of these {\em Lagrangian fibrations\/} are elliptic K3 surfaces. In this talk we will explain the role of elliptic K3s in the classification of K3 surfaces, and the (conjectural) generalization to higher dimensions. Chern-Simons classes on loop spaces and diffeomorphism groups 13:10 Fri 12 Jun, 2009 :: School Board Room :: Prof Steve Rosenberg :: Boston University The loop space LM of a Riemannian manifold M comes with a family of Riemannian metrics indexed by a Sobolev parameter. We can construct characteristic classes for LM using the Wodzicki residue instead of the usual matrix trace. The Pontrjagin classes of LM vanish, but the secondary or Chern-Simons classes may be nonzero and may distinguish circle actions on M. There are similar results for diffeomorphism groups of manifolds. Lagrangian fibrations on holomorphic symplectic manifolds II: Existence of Lagrangian fibrations 13:10 Fri 19 Jun, 2009 :: School Board Room :: Dr Justin Sawon :: Colorado State University The Hilbert scheme ${\mathrm Hilb}^nS$ of points on a K3 surface $S$ is a well-known holomorphic symplectic manifold. When does ${\mathrm Hilb}^nS$ admit a Lagrangian fibration? The existence of a Lagrangian fibration places some conditions on the Hodge structure, since the pull back of a hyperplane from the base gives a special divisor on ${\mathrm Hilb}^nS$, and in turn a special divisor on $S$. The converse is more difficult, but using Fourier-Mukai transforms we will show that if $S$ admits a divisor of a certain degree then ${\mathrm Hilb}^nS$ admits a Lagrangian fibration. Lagrangian fibrations on holomorphic symplectic manifolds III: Holomorphic coisotropic reduction Given a certain kind of submanifold $Y$ of a symplectic manifold $(X,\omega)$ we can form its coisotropic reduction as follows. The null directions of $\omega|_Y$ define the characteristic foliation $F$ on $Y$. The space of leaves $Y/F$ then admits a symplectic form, descended from $\omega|_Y$. Locally, the coisotropic reduction $Y/F$ looks just like a symplectic quotient. This construction also work for holomorphic symplectic manifolds, though one of the main difficulties in practice is ensuring that the leaves of the foliation are compact. We will describe a criterion for compactness, and apply coisotropic reduction to produce a classification result for Lagrangian fibrations by Jacobians. 
Another proof of Gaboriau-Popa 13:10 Fri 3 Jul, 2009 :: School Board Room :: Prof Greg Hjorth :: University of Melbourne Gaboriau and Popa showed that a non-abelian free group on finitely many generators has continuum many measure preserving, free, ergodic, actions on standard Borel probability spaces. The original proof used the notion of property (T). I will sketch how this can be replaced by an elementary, and apparently new, dynamical property. Generalizations of the Stein-Tomas restriction theorem 13:10 Fri 7 Aug, 2009 :: School Board Room :: Prof Andrew Hassell :: Australian National University The Stein-Tomas restriction theorem says that the Fourier transform of a function in L^p(R^n) restricts to an L^2 function on the unit sphere, for p in some range [1, 2(n+1)/(n+3)]. I will discuss geometric generalizations of this result, by interpreting it as a property of the spectral measure of the Laplace operator on R^n, and then generalizing to the Laplace-Beltrami operator on certain complete Riemannian manifolds. It turns out that dynamical properties of the geodesic flow play a crucial role in determining whether a restriction-type theorem holds for these manifolds. Asymmetric Cantor measures and sumsets 13:10 Fri 14 Aug, 2009 :: School Board Room :: Prof Gavin Brown :: Royal Institution of Australia and University of Adelaide Weak Hopf algebras and Frobenius algebras 13:10 Fri 21 Aug, 2009 :: School Board Room :: Prof Ross Street :: Macquarie University A basic example of a Hopf algebra is a group algebra: it is the vector space having the group as basis and having multiplication linearly extending that of the group. We can start with a category instead of a group, form the free vector space on the set of its morphisms, and define multiplication to be composition when possible and zero when not. The multiplication has an identity if the category has finitely many objects; this is a basic example of a weak bialgebra. It is a weak Hopf algebra when the category is a groupoid. Group algebras are also Frobenius algebras. We shall generalize weak bialgebras and Frobenius algebras to the context of monoidal categories and describe some of their theory using the geometry of string diagrams. Moduli spaces of stable holomorphic vector bundles 13:10 Fri 28 Aug, 2009 :: School Board Room :: Dr Nicholas Buchdahl :: University of Adelaide Defect formulae for integrals of pseudodifferential symbols: applications to dimensional regularisation and index theory 13:10 Fri 4 Sep, 2009 :: School Board Room :: Prof Sylvie Paycha :: Universite Blaise Pascal, Clermont-Ferrand, France The ordinary integral on L^1 functions on R^d unfortunately does not extend to a translation invariant linear form on the whole algebra of pseudodifferential symbols on R^d, forcing to work with ordinary linear extensions which fail to be translation invariant. Defect formulae which express the difference between various linear extensions, show that they differ by local terms involving the noncommutative residue. In particular, we shall show how integrals regularised by a "dimensional regularisation" procedure familiar to physicists differ from Hadamard finite part (or "cut-off" regularised) integrals by a residue. When extended to pseudodifferential operators on closed manifolds, these defect formulae express the zeta regularised traces of a differential operator in terms of a residue of its logarithm. 
In particular, we shall express the index of a Dirac type operator on a closed manifold in terms of a logarithm of a generalized Laplacian, thus giving an a priori local description of the index and shall discuss further applications. Covering spaces and algebra bundles 13:10 Fri 11 Sep, 2009 :: School Board Room :: Prof Keith Hannabuss :: University of Oxford Bundles of C*-algebras over a topological space M can be classified by a Dixmier-Douady obstruction in H^3(M,Z). This talk will describe some recent work with Mathai investigating the relationship between algebra bundles on M and on its covering space, where there can be no obstruction, particularly when there is a group acting on M. Understanding hypersurfaces through tropical geometry 12:10 Fri 25 Sep, 2009 :: Napier 102 :: Dr Mohammed Abouzaid :: Massachusetts Institute of Technology Given a polynomial in two or more variables, one may study the zero locus from the point of view of different mathematical subjects (number theory, algebraic geometry, ...). I will explain how tropical geometry allows to encode all topological aspects by elementary combinatorial objects called "tropical varieties." Mohammed Abouzaid received a B.S. in 2002 from the University of Richmond, and a Ph.D. in 2007 from the University of Chicago under the supervision of Paul Seidel. He is interested in symplectic topology and its interactions with algebraic geometry and differential topology, in particular the homological mirror symmetry conjecture. Since 2007 he has been a postdoctoral fellow at MIT, and a Clay Mathematics Institute Research Fellow. Stable commutator length 13:40 Fri 25 Sep, 2009 :: Napier 102 :: Prof Danny Calegari :: California Institute of Technology Stable commutator length answers the question: "what is the simplest surface in a given space with prescribed boundary?" where "simplest" is interpreted in topological terms. This topological definition is complemented by several equivalent definitions - in group theory, as a measure of non-commutativity of a group; and in linear programming, as the solution of a certain linear optimization problem. On the topological side, scl is concerned with questions such as computing the genus of a knot, or finding the simplest 4-manifold that bounds a given 3-manifold. On the linear programming side, scl is measured in terms of certain functions called quasimorphisms, which arise from hyperbolic geometry (negative curvature) and symplectic geometry (causal structures). In these talks we will discuss how scl in free and surface groups is connected to such diverse phenomena as the existence of closed surface subgroups in graphs of groups, rigidity and discreteness of symplectic representations, bounding immersed curves on a surface by immersed subsurfaces, and the theory of multi- dimensional continued fractions and Klein polyhedra. Danny Calegari is the Richard Merkin Professor of Mathematics at the California Institute of Technology, and is one of the recipients of the 2009 Clay Research Award for his work in geometric topology and geometric group theory. He received a B.A. in 1994 from the University of Melbourne, and a Ph.D. in 2000 from the University of California, Berkeley under the joint supervision of Andrew Casson and William Thurston. From 2000 to 2002 he was Benjamin Peirce Assistant Professor at Harvard University, after which he joined the Caltech faculty; he became Richard Merkin Professor in 2007. 
A Fourier-Mukai transform for invariant differential cohomology 13:10 Fri 9 Oct, 2009 :: School Board Room :: Mr Richard Green :: University of Adelaide Fourier-Mukai transforms are a geometric analogue of integral transforms playing an important role in algebraic geometry. Their name derives from the construction of Mukai involving the Poincare line bundle associated to an abelian variety. In this talk I will discuss recent work looking at an analogue of this original Fourier-Mukai transform in the context of differential geometry, which gives an isomorphism between the invariant differential cohomology of a real torus and its dual. Irreducible subgroups of SO(2,n) 13:10 Fri 16 Oct, 2009 :: School Board Room :: Dr Thomas Leistner :: University of Adelaide Berger's classification of irreducibly represented Lie groups that can occur as holonomy groups of semi-Riemannian manifolds is a remarkable result of modern differential geometry. What is remarkable about it is that it is so short and that only so few types of geometry can occur. In Riemannian signature this is even more remarkable, taking into account that any representation of a compact Lie group admits a positive definite invariant scalar product. Hence, for any not too small n there is an abundance of irreducible subgroups of SO(n). We show that in other signatures the situation is quite different with, for example, SO(1,n) having no proper irreducible subgroups. We will show how this and the corresponding result about irreducible subgroups of SO(2,n) follows from the Karpelevich-Mostov theorem. (This is joint work with Antonio J. Di Scala, Politecnico di Torino.) Building centralisers in ~A_2 groups 13:10 Fri 23 Oct, 2009 :: School Board Room :: Prof Guyan Robertson :: University of Newcastle, UK Analytic torsion for twisted de Rham complexes 13:10 Fri 30 Oct, 2009 :: School Board Room :: Prof Mathai Varghese :: University of Adelaide We define analytic torsion for the twisted de Rham complex, consisting of differential forms on a compact Riemannian manifold X with coefficients in a flat vector bundle E, with a differential given by a flat connection on E plus a closed odd degree differential form on X. The definition in our case is more complicated than in the case discussed by Ray-Singer, as it uses pseudodifferential operators. We show that this analytic torsion is independent of the choice of metrics on X and E, establish some basic functorial properties, and compute it in many examples. We also establish the relationship of an invariant version of analytic torsion for T-dual circle bundles with closed 3-form flux. This is joint work with Siye Wu. Upper bounds for the essential dimension of the moduli stack of SL_n-bundles over a curve 11:10 Mon 14 Dec, 2009 :: School Board Room :: Dr Nicole Lemire :: University of Western Ontario, Canada In joint work with Ajneet Dhillon, we find upper bounds for the essential dimension of various moduli stacks of SL_n-bundles over a curve. When n is a prime power, our calculation computes the essential dimension of the moduli stack of stable bundles exactly and the essential dimension is not equal to the dimension in this case. Critical sets of products of linear forms 13:10 Mon 14 Dec, 2009 :: School Board Room :: Dr Graham Denham :: University of Western Ontario, Canada Suppose $f_1,f_2,\ldots,f_n$ are linear polynomials in $\ell$ variables and $\lambda_1,\lambda_2,\ldots,\lambda_n$ are nonzero complex numbers. 
The product $$ \Phi_\lambda=\prod_{i=1}^n f_i^{\lambda_i}, $$ called a master function, defines a (multivalued) function on $\ell$-dimensional complex space, or more precisely, on the complement of a set of hyperplanes. Then it is easy to ask (but harder to answer) what the set of critical points of a master function looks like, in terms of some properties of the input polynomials and $\lambda_i$'s. In my talk I will describe the motivation for considering such a question. Then I will indicate how the geometry and combinatorics of hyperplane arrangements can be used to provide at least a partial answer. Hartogs-type holomorphic extensions 13:10 Tue 15 Dec, 2009 :: School Board Room :: Prof Roman Dwilewicz :: Missouri University of Science and Technology We will review holomorphic extension problems starting with the famous Hartogs extension theorem (1906), via Severi-Kneser-Fichera-Martinelli theorems, up to some recent (partial) results of Al Boggess (Texas A&M Univ.), Zbigniew Slodkowski (Univ. Illinois at Chicago), and the speaker. The holomorphic extension problems for holomorphic or Cauchy-Riemann functions are fundamental problems in complex analysis of several variables. The talk will be very elementary, with many figures, and accessible to graduate and even advanced undergraduate students. Group actions in complex geometry, I and II 13:10 Fri 8 Jan, 2010 :: School Board Room :: Prof Frank Kutzschebauch, IGA Lecturer :: University of Berne Group actions in complex geometry, III and IV 10:10 Fri 15 Jan, 2010 :: School Board Room :: Prof Frank Kutzschebauch, IGA Lecturer :: University of Berne Group actions in complex geometry, V and VI Group actions in complex geometry, VII and VIII 10:10 Fri 29 Jan, 2010 :: Napier LG 23 :: Prof Frank Kutzschebauch, IGA Lecturer :: University of Berne Oka manifolds and Oka maps 13:10 Fri 29 Jan, 2010 :: Napier LG 23 :: Prof Franc Forstneric :: University of Ljubljana In this survey lecture I will discuss a new class of complex manifolds and of holomorphic maps between them which I introduced in 2009 (F. Forstneric, Oka Manifolds, C. R. Acad. Sci. Paris, Ser. I, 347 (2009) 1017-1020). Roughly speaking, a complex manifold Y is said to be an Oka manifold if Y admits plenty of holomorphic maps from any Stein manifold (or Stein space) X to Y, in a certain precise sense. In particular, the inclusion of the space of holomorphic maps of X to Y into the space of continuous maps must be a weak homotopy equivalence. One of the main results is that this class of manifolds can be characterized by a simple Runge approximation property for holomorphic maps from complex Euclidean spaces C^n to Y, with approximation on compact convex subsets of C^n. This answers in the affirmative a question posed by M. Gromov in 1989. I will also discuss the Oka properties of holomorphic maps and their characterization by approximation properties. Proper holomorphic maps from strongly pseudoconvex domains to q-convex manifolds 13:10 Fri 5 Feb, 2010 :: School Board Room :: Prof Franc Forstneric :: University of Ljubljana (Joint work with B. Drinovec Drnovsek, Amer. J. Math., in press.) I will discuss the existence of closed complex subvarieties of a complex manifold X that are proper holomorphic images of strongly pseudoconvex Stein domains. The main sufficient condition is expressed in terms of the Morse indices and of the number of positive Levi eigenvalues of an exhaustion function on X. Examples show that our condition cannot be weakened in general. 
I will describe optimal results for subvarieties of this type in complements of compact complex submanifolds with Griffiths positive normal bundle; in the projective case these generalize classical theorems of Remmert, Bishop and Narasimhan concerning proper holomorphic maps and embeddings to complex Euclidean spaces. Conformal geometry of differential equations 13:10 Fri 12 Feb, 2010 :: School Board Room :: Dr Pawel Nurowski :: University of Warsaw Integrable systems: noncommutative versus commutative 14:10 Thu 4 Mar, 2010 :: School Board Room :: Dr Cornelia Schiebold :: Mid Sweden University After a general introduction to integrable systems, we will explain an approach to their solution theory, which is based on Banach space theory. The main point is first to shift attention to noncommutative integrable systems and then to extract information about the original setting via projection techniques. The resulting solution formulas turn out to be particularly well-suited to the qualitative study of certain solution classes. We will show how one can obtain a complete asymptotic description of the so called multiple pole solutions, a problem that was only treated for special cases before. Convolution equations in A^{-\infty} for convex domains 13:10 Fri 5 Mar, 2010 :: School Board Room :: Dr Le Hai Khoi :: Nanyang Technological University, Singapore Holomorphic extension on complex spaces 14:10 Fri 5 Mar, 2010 :: School Board Room :: Prof Egmont Porten :: Mid Sweden University Conformal structures with G_2 ambient metrics 13:10 Fri 19 Mar, 2010 :: School Board Room :: Dr Thomas Leistner :: University of Adelaide The n-sphere considered as a conformal manifold can be viewed as the projectivisation of the light cone in n+2 Minkowski space. A construction that generalises this picture to arbitrary conformal classes is the ambient metric introduced by C. Fefferman and R. Graham. In the talk, I will explain the Fefferman-Graham ambient metric construction and how it detects the existence of certain metrics in the conformal class. Then I will present conformal classes of signature (3,2) for which the 7-dimensional ambient metric has the noncompact exceptional Lie group G_2 as its holonomy. This is joint work with P. Nurowski, Warsaw University. Random walk integrals 13:10 Fri 16 Apr, 2010 :: School Board Room :: Prof Jonathan Borwein :: University of Newcastle Following Pearson in 1905, we study the expected distance of a two-dimensional walk in the plane with unit steps in random directions---what Pearson called a "ramble". A series evaluation and recursions are obtained making it possible to explicitly determine this distance for small number of steps. Closed form expressions for all the moments of a 2-step and a 3-step walk are given, and a formula is conjectured for the 4-step walk. Heavy use is made of the analytic continuation of the underlying integral. Loop groups and characteristic classes 13:10 Fri 23 Apr, 2010 :: School Board Room :: Dr Raymond Vozzo :: University of Adelaide Suppose $G$ is a compact Lie group, $LG$ its (free) loop group and $\Omega G \subseteq LG$ its based loop group. Let $P \to M$ be a principal bundle with structure group one of these loop groups. In general, differential form representatives of characteristic classes for principal bundles can be easily obtained using the Chern-Weil homomorphism, however for infinite-dimensional bundles such as $P$ this runs into analytical problems and classes are more difficult to construct. 
In this talk I will explain some new results on characteristic classes for loop group bundles which demonstrate how to construct certain classes---which we call string classes---for such bundles. These are obtained by making heavy use of a certain $G$-bundle associated to any loop group bundle (which allows us to avoid the problems of dealing with infinite-dimensional bundles). We shall see that the free loop group case naturally involves equivariant cohomology. Moduli spaces of stable holomorphic vector bundles II In this talk, I shall briefly review the notion of stability for holomorphic vector bundles on compact complex manifolds as discussed in the first part of this talk (28 August 2009). Then I shall attempt to compute some explicit examples in simple situations, illustrating the use of basic algebraic-geometric tools. The level of the talk will be appropriate for graduate students, particularly those who have been taking part in the algebraic geometry reading group meetings. The caloron transform 13:10 Fri 7 May, 2010 :: School Board Room :: Prof Michael Murray :: University of Adelaide The caloron transform is a `fake' dimensional reduction which transforms a G-bundle over certain manifolds to a loop group of G bundle over a manifold of one lower dimension. This talk will review the caloron transform and show how it can be best understood using the language of pseudo-isomorphisms from category theory as well as considering its application to Bogomolny monopoles and string structures. Moduli spaces of stable holomorphic vector bundles III 13:10 Fri 14 May, 2010 :: School Board Room :: A/Prof Nicholas Buchdahl :: University of Adelaide This talk is a continuation of the talk on 30 April. The same abstract applies: In this talk, I shall briefly review the notion of stability for holomorphic vector bundles on compact complex manifolds as discussed in the first part of this talk (28 August 2009). Then I shall attempt to compute some explicit examples in simple situations, illustrating the use of basic algebraic-geometric tools. The level of the talk will be appropriate for graduate students, particularly those who have been taking part in the algebraic geometry reading group meetings. Functorial 2-connected covers 13:10 Fri 21 May, 2010 :: School Board Room :: David Roberts :: University of Adelaide The Whitehead tower of a topological space seeks to resolve that space by successively removing homotopy groups from the 'bottom up'. For a path-connected space with no 1-dimensional local pathologies the first stage in the tower can be chosen to be the universal (=1-connected) covering space. This construction also works in the category Diff of manifolds. However, further stages in the two known constructions of the Whitehead tower do not work in Diff, being purely topological - and one of these is non-functorial, depending on a large number of choices. This talk will survey results from my thesis which constructs a new, functorial model for the 2-connected cover which will lift to a generalised (2-)category of smooth objects. This talk contains joint work with Andrew Stacey of the Norwegian University of Science and Technology. On the uniqueness of almost-Kahler structures 13:10 Fri 28 May, 2010 :: School Board Room :: Dr Paul-Andi Nagy :: University of Auckland We show uniqueness up to sign of positive, orthogonal almost-Kahler structures on any non-scalar flat Kahler-Einstein surface. This is joint work with A. J. di Scala. 
Vertex algebras and variational calculus I 13:10 Fri 4 Jun, 2010 :: School Board Room :: Dr Pedram Hekmati :: University of Adelaide A basic operation in calculus of variations is the Euler-Lagrange variational derivative, whose kernel determines the extremals of functionals. There exists a natural resolution of this operator, called the variational complex. In this talk, I shall explain how to use tools from the theory of vertex algebras to explicitly construct the variational complex. This also provides a very convenient language for classifying and constructing integrable Hamiltonian evolution equations. Vertex algebras and variational calculus II 13:10 Fri 11 Jun, 2010 :: School Board Room :: Dr Pedram Hekmati :: University of Adelaide Last time I introduced the variational complex of an algebra of differential functions and gave a sketchy definition of a vertex algebra. This week I will make this notion more precise and explain how to apply it to the calculus of variations. On affine BMW algebras 13:10 Fri 25 Jun, 2010 :: Napier 208 :: Prof Arun Ram :: University of Melbourne I will describe a family of algebras of tangles (which give rise to link invariants following the methods of Reshetikhin-Turaev and Jones) and describe some aspects of their structure and their representation theory. The main goal will be to explain how to use universal Verma modules for the symplectic group to compute the representation theory of affine BMW (Birman-Murakami-Wenzl) algebras. Introduction to mirror symmetry and the Fukaya category I 13:10 Thu 15 Jul, 2010 :: Napier G04 :: Dr Mohammed Abouzaid, IGA Lecturer :: Clay Research Fellow, MIT I shall give an overview of recent progress in homological mirror symmetry, both in clarifying our conceptual understanding of how the sign of the canonical bundle affects the behaviour of the mirror, and in obtaining concrete examples where the mirror conjecture has now been verified. (This is a two-hour talk.) Introduction to mirror symmetry and the Fukaya category II 13:10 Fri 16 Jul, 2010 :: Napier G04 :: Dr Mohammed Abouzaid, IGA Lecturer :: Clay Research Fellow, MIT Introduction to mirror symmetry and the Fukaya category III 13:10 Mon 19 Jul, 2010 :: Napier G04 :: Dr Mohammed Abouzaid, IGA Lecturer :: Clay Research Fellow, MIT Introduction to mirror symmetry and the Fukaya category IV 13:10 Tue 20 Jul, 2010 :: Napier G04 :: Dr Mohammed Abouzaid, IGA Lecturer :: Clay Research Fellow, MIT Introduction to mirror symmetry and the Fukaya category V 13:10 Wed 21 Jul, 2010 :: Napier G04 :: Dr Mohammed Abouzaid, IGA Lecturer :: Clay Research Fellow, MIT Higher nonunital Quillen K'-theory 13:10 Fri 23 Jul, 2010 :: Engineering-Maths G06 :: Dr Snigdhayan Mahanta :: University of Adelaide Quillen introduced a $K'_0$-theory for possibly nonunital rings and showed that it agrees with the usual algebraic $K_0$-theory if the ring is unital. We shall introduce higher $K'$-groups for $k$-algebras, where $k$ is a field, and discuss some elementary properties of this theory. We shall also show that for stable $C*$-algebras the higher $K'$-theory agrees with the topological $K$-theory. If time permits we shall explain how this provides a formalism to treat topological $\mathbb{T}$-dualities via Kasparov's bivariant $K$-theory. 
Eynard-Orantin invariants and enumerative geometry 13:10 Fri 6 Aug, 2010 :: Ingkarni Wardli B20 (Suite 4) :: Dr Paul Norbury :: University of Melbourne As a tool for studying enumerative problems in geometry Eynard and Orantin associate multilinear differentials to any plane curve. Their work comes from matrix models but does not require matrix models (for understanding or calculations). In some sense they describe deformations of complex structures of a curve and conjectural relationships to deformations of Kahler structures of an associated object. I will give an introduction to their invariants via explicit examples, mainly to do with the moduli space of Riemann surfaces, in which the plane curve has genus zero. Index theory in the noncommutative world 13:10 Fri 20 Aug, 2010 :: Ingkarni Wardli B20 (Suite 4) :: Prof Alan Carey :: Australian National University The aim of the talk is to give an overview of the noncommutative geometry approach to index theory. A classical construction for simplicial sets revisited 13:10 Fri 27 Aug, 2010 :: Ingkarni Wardli B20 (Suite 4) :: Dr Danny Stevenson :: University of Glasgow Simplicial sets became popular in the 1950s as a combinatorial way to study the homotopy theory of topological spaces. They are more robust than the older notion of simplicial complexes, which were introduced for the same purpose. In this talk, which will be as introductory as possible, we will review some classical functors arising in the theory of simplicial sets, some well-known, some not-so-well-known. We will re-examine the proof of an old theorem of Kan in light of these functors. We will try to keep all jargon to a minimum. On some applications of higher Quillen K'-theory 13:10 Fri 3 Sep, 2010 :: Ingkarni Wardli B20 (Suite 4) :: Dr Snigdhayan Mahanta :: University of Adelaide In my previous talk I introduced a functor from the category of k-algebras (k field) to abelian groups, called KQ-theory. In this talk I will explain its relationship with topological (homological) T-dualities and twisted K-theory. Contraction subgroups in locally compact groups 13:10 Fri 17 Sep, 2010 :: Ingkarni Wardli B20 (Suite 4) :: Prof George Willis :: University of Newcastle For each automorphism, $\alpha$, of the locally compact group $G$ there is a corresponding {\sl contraction subgroup\/}, $\hbox{con}(\alpha)$, which is the set of $x\in G$ such that $\alpha^n(x)$ converges to the identity as $n\to \infty$. Contractions subgroups are important in representation theory, through the Mautner phenomenon, and in the study of convolution semigroups. If $G$ is a Lie group, then $\hbox{con}(\alpha)$ is automatically closed, can be described in terms of eigenvalues of $\hbox{ad}(\alpha)$, and is nilpotent. Since any connected group may be approximated by Lie groups, contraction subgroups of connected groups are thus well understood. Following a general introduction, the talk will focus on contraction subgroups of totally disconnected groups. A criterion for non-triviality of $\hbox{con}(\alpha)$ will be described (joint work with U.~Baumgartner) and a structure theorem for $\hbox{con}(\alpha)$ when it is closed will be presented (joint with H.~Gl\"oeckner). Some algebras associated with quantum gauge theories 13:10 Fri 15 Oct, 2010 :: Ingkarni Wardli B20 (Suite 4) :: Dr Keith Hannabuss :: Balliol College, Oxford Classical gauge theories study sections of vector bundles and associated connections and curvature. 
The corresponding quantum gauge theories are normally written algebraically but can be understood as noncommutative geometries. This talk will describe one approach to the quantum gauge theories which uses braided categories. IGA-AMSI Workshop: Dirac operators in geometry, topology, representation theory, and physics 10:00 Mon 18 Oct, 2010 :: 7.15 Ingkarni Wardli :: Prof Dan Freed :: University of Texas, Austin Lecture Series by Dan Freed (University of Texas, Austin). Dirac introduced his eponymous operator to describe electrons in quantum theory. It was rediscovered by Atiyah and Singer in their study of the index problem on manifolds. In these lectures we explore new theorems and applications. Several of these also involve K-theory in its recent twisted and differential variations. These lectures will be supplemented by additional talks by invited speakers. For more details, please see the conference webpage: http://www.iga.adelaide.edu.au/workshops/WorkshopOct2010/ Higher stacks and homotopy theory II: the motivic context 13:10 Thu 16 Dec, 2010 :: Ingkarni Wardli B21 :: Mr James Wallbridge :: University of Adelaide and Institut de mathematiques de Toulouse In part I of this talk (JC seminar May 2008) we presented motivation and the basic definitions for building homotopy theory into an arbitrary category by introducing the notion of (higher) stacks. In part II we consider a specific example on the category of schemes to illustrate how the machinery works in practice. It will lead us into motivic territory (if we like it or not). Complete quaternionic Kahler manifolds associated to cubic polynomials 13:10 Fri 11 Feb, 2011 :: Ingkarni Wardli B18 :: Prof Vicente Cortes :: University of Hamburg We prove that the supergravity r- and c-maps preserve completeness. As a consequence, any component H of a hypersurface {h = 1} defined by a homogeneous cubic polynomial h such that -\partial^2 h is a complete Riemannian metric on H defines a complete projective special Kahler manifold and any complete projective special Kahler manifold defines a complete quaternionic Kahler manifold of negative scalar curvature. We classify all complete quaternionic Kahler manifolds of dimension less or equal to 12 which are obtained in this way and describe some complete examples in 16 dimensions. Real analytic sets in complex manifolds I: holomorphic closure dimension 13:10 Fri 4 Mar, 2011 :: Mawson 208 :: Dr Rasul Shafikov :: University of Western Ontario After a quick introduction to real and complex analytic sets, I will discuss possible notions of complex dimension of real sets, and then discuss a structure theorem for the holomorphic closure dimension which is defined as the dimension of the smallest complex analytic germ containing the real germ. Real analytic sets in complex manifolds II: complex dimension 13:10 Fri 11 Mar, 2011 :: Mawson 208 :: Dr Rasul Shafikov :: University of Western Ontario Given a real analytic set R, denote by A the subset of R of points through which there is a nontrivial complex variety contained in R, i.e., A consists of points in R of positive complex dimension. I will discuss the structure of the set A. Surface quotients of hyperbolic buildings 13:10 Fri 18 Mar, 2011 :: Mawson 208 :: Dr Anne Thomas :: University of Sydney Let I(p,v) be Bourdon's building, the unique simply-connected 2-complex such that all 2-cells are regular right-angled hyperbolic p-gons, and the link at each vertex is the complete bipartite graph K_{v,v}. 
We investigate and mostly determine the set of triples (p,v,g) for which there is a discrete group acting on I(p,v) so that the quotient is a compact orientable surface of genus g. Surprisingly, the existence of such a quotient depends upon the value of v. The remaining cases lead to open questions in tessellations of surfaces and in number theory. We use elementary group theory, combinatorics, algebraic topology and number theory. This is joint work with David Futer. Lorentzian manifolds with special holonomy 13:10 Fri 25 Mar, 2011 :: Mawson 208 :: Mr Kordian Laerz :: Humboldt University, Berlin A parallel lightlike vector field on a Lorentzian manifold X naturally defines a foliation of codimension 1 on X and a 1-dimensional subfoliation. In the first part we introduce Lorentzian metrics on the total space of certain circle bundles in order to construct weakly irreducible Lorentzian manifolds admitting a parallel lightlike vector field such that all leaves of the foliations are compact. Then we study which holonomy representations can be realized in this way. Finally, we consider the structure of arbitrary Lorentzian manifolds for which the leaves of the foliations are compact. Operator algebra quantum groups 13:10 Fri 1 Apr, 2011 :: Mawson 208 :: Dr Snigdhayan Mahanta :: University of Adelaide Woronowicz initiated the study of quantum groups using C*-algebras. His framework enabled him to deal with compact (linear) quantum groups. In this talk we shall introduce a notion of quantum groups that can handle infinite dimensional examples like SU(\infty). We shall also study some quantum homogeneous spaces associated to this group and compute their K-theory groups. This is joint work with V. Mathai. Spherical tube hypersurfaces 13:10 Fri 8 Apr, 2011 :: Mawson 208 :: Prof Alexander Isaev :: Australian National University We consider smooth real hypersurfaces in a complex vector space. Specifically, we are interested in tube hypersurfaces, i.e., hypersurfaces represented as the direct product of the imaginary part of the space and hypersurfaces lying in its real part. Tube hypersurfaces arise, for instance, as the boundaries of tube domains. The study of tube domains is a classical subject in several complex variables and complex geometry, which goes back to the beginning of the 20th century. Indeed, already Siegel found it convenient to realise certain symmetric domains as tubes. One can endow a tube hypersurface with a so-called CR-structure, which is the remnant of the complex structure on the ambient vector space. We impose on the CR-structure the condition of sphericity. One way to state this condition is to require a certain curvature (called the CR-curvature of the hypersurface) to vanish identically. Spherical tube hypersurfaces possess remarkable properties and are of interest from both the complex-geometric and affine-geometric points of view. In my talk I will give an overview of the theory of such hypersurfaces. In particular, I will mention an algebraic construction arising from this theory that has applications in abstract commutative algebra and singularity theory. I will speak about these applications in detail in my colloquium talk later today. 
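A note on notation for the tube hypersurfaces in the abstract above (this is the standard set-up and not taken from the talk itself): in the convention used there, a tube hypersurface in $\mathbb{C}^n = \mathbb{R}^n + i\mathbb{R}^n$ is a set of the form $M = S + i\mathbb{R}^n$, where $S$ is a hypersurface in the real part $\mathbb{R}^n$. For example, if $D \subset \mathbb{R}^n$ is a domain with smooth boundary $S = \partial D$, then the tube domain $D + i\mathbb{R}^n$ has the tube hypersurface $S + i\mathbb{R}^n$ as its boundary.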
Centres of cyclotomic Hecke algebras 13:10 Fri 15 Apr, 2011 :: Mawson 208 :: A/Prof Andrew Francis :: University of Western Sydney The cyclotomic Hecke algebras, or Ariki-Koike algebras $H(R,q)$, are deformations of the group algebras of certain complex reflection groups $G(r,1,n)$, and also are quotients of the ubiquitous affine Hecke algebra. The centre of the affine Hecke algebra has been understood since Bernstein in terms of the symmetric group action on the weight lattice. In this talk I will discuss the proof that over an arbitrary unital commutative ring $R$, the centre of the affine Hecke algebra maps \emph{onto} the centre of the cyclotomic Hecke algebra when $q-1$ is invertible in $R$. This is the analogue of the fact that the centre of the Hecke algebra of type $A$ is the set of symmetric polynomials in Jucys-Murphy elements (formerly known as the Dipper-James conjecture). Key components of the proof include the relationship between the trace functions on the affine Hecke algebra and on the cyclotomic Hecke algebra, and the link to the affine braid group. This is joint work with John Graham and Lenny Jones. A strong Oka principle for embeddings of some planar domains into CxC*, I 13:10 Fri 6 May, 2011 :: Mawson 208 :: Mr Tyson Ritter :: University of Adelaide The Oka principle refers to a collection of results in complex analysis which state that there are only topological obstructions to solving certain holomorphically defined problems involving Stein manifolds. For example, a basic version of Gromov's Oka principle states that every continuous map from a Stein manifold into an elliptic complex manifold is homotopic to a holomorphic map. In these two talks I will discuss a new result showing that if we restrict the class of source manifolds to circular domains and fix the target as CxC* we can obtain a much stronger Oka principle: every continuous map from a circular domain S into CxC* is homotopic to a proper holomorphic embedding. This result has close links with the long-standing and difficult problem of finding proper holomorphic embeddings of Riemann surfaces into C^2, with additional motivation from other sources. A strong Oka principle for embeddings of some planar domains into CxC*, II 13:10 Fri 13 May, 2011 :: Mawson 208 :: Mr Tyson Ritter :: University of Adelaide Knots, posets and sheaves 13:10 Fri 20 May, 2011 :: Mawson 208 :: Dr Brent Everitt :: University of York The Euler characteristic is a nice simple integer invariant that one can attach to a space. Unfortunately, it is not natural: maps between spaces do not induce maps between their Euler characteristics, because it makes no sense to talk of a map between integers. This shortcoming is fixed by homology. Maps between spaces induce maps between their homologies, with the Euler characteristic encoded inside the homology. Recently it has become possible to play the same game with knots and the Jones polynomial: the Khovanov homology of a knot both encodes the Jones polynomial and is a natural invariant of the knot. After saying what all this means, this talk will observe that Khovanov homology is just a special case of sheaf homology on a poset, and we will explore some of the ramifications of this observation. This is joint work with Paul Turner (Geneva/Fribourg). 
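For readers unfamiliar with the relationship alluded to in the last abstract, the Euler characteristic is recovered from homology as an alternating sum of Betti numbers: for a space $X$ with finitely generated homology, $\chi(X) = \sum_i (-1)^i \dim H_i(X;\mathbb{Q})$. For the circle, for instance, $\dim H_0 = \dim H_1 = 1$, so $\chi(S^1) = 0$; this is the sense in which the Euler characteristic is encoded inside the homology, while homology itself carries the extra naturality.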
Lifting principal bundles and abelian extensions 13:10 Fri 27 May, 2011 :: Mawson 208 :: Prof Michael Murray :: School of Mathematical Sciences I will review what it means to lift the structure group of a principal bundle and the topological obstruction to this in the case of a central extension. I will then discuss some new results in the case of abelian extensions. Natural operations on the Hochschild cochain complex 13:10 Fri 3 Jun, 2011 :: Mawson 208 :: Dr Michael Batanin :: Macquarie University The Hochschild cochain complex of an associative algebra provides an important bridge between algebra and geometry. Algebraically, this is the derived center of the algebra. Geometrically, the Hochschild cohomology of the algebra of smooth functions on a manifold is isomorphic to the graded space of polyvector fields on this manifold. There are many important operations acting on the Hochschild complex. It is, however, a tricky question to ask which operations are natural because the Hochschild complex is not a functor. In my talk I will explain how we can overcome this obstacle and compute all possible natural operations on the Hochschild complex. The result leads immediately to a proof of the Deligne conjecture on Hochschild cochains. What is... a tensor? 12:10 Mon 25 Jul, 2011 :: 5.57 Ingkarni Wardli :: Mr Michael Albanese :: School of Mathematical Sciences Tensors are important objects that are frequently used in a variety of fields including continuum mechanics, general relativity and differential geometry. Despite their importance, they are often defined poorly (if at all), which contributes to a lack of understanding. In this talk, I will give a concrete definition of a tensor and provide some familiar examples. For the remainder of the talk, I will discuss some applications—here I mean applications in the pure maths sense (i.e. more abstract nonsense, but hopefully still interesting). The (dual) local cyclic homology valued Chern-Connes character for some infinite dimensional spaces 13:10 Fri 29 Jul, 2011 :: B.19 Ingkarni Wardli :: Dr Snigdhayan Mahanta :: School of Mathematical Sciences I will explain how to construct a bivariant Chern-Connes character on the category of sigma-C*-algebras taking values in Puschnigg's local cyclic homology. Roughly, setting the first (resp. the second) variable to complex numbers one obtains the K-theoretic (resp. dual K-homological) Chern-Connes character in one variable. We shall focus on the dual K-homological Chern-Connes character and investigate it in the example of SU(infty). Towards Rogers-Ramanujan identities for the Lie algebra A_n 13:10 Fri 5 Aug, 2011 :: B.19 Ingkarni Wardli :: Prof Ole Warnaar :: University of Queensland The Rogers-Ramanujan identities are a pair of q-series identities proved by Leonard Rogers in 1894 which became famous two decades later as conjectures of Srinivasa Ramanujan. Since the 1980s it is known that the Rogers-Ramanujan identities are in fact identities for characters of certain modules for the affine Lie algebra A_1. This poses the obvious question as to whether there exist Rogers-Ramanujan identities for higher rank affine Lie algebras. In this talk I will describe some recent progress on this problem. I will also discuss a seemingly mysterious connection with the representation theory of quivers over finite fields. 
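For reference, the two Rogers-Ramanujan identities mentioned in the abstract above read as follows, in standard q-series notation with $(q;q)_n = (1-q)(1-q^2)\cdots(1-q^n)$: $\sum_{n\ge 0} \frac{q^{n^2}}{(q;q)_n} = \prod_{n\ge 0} \frac{1}{(1-q^{5n+1})(1-q^{5n+4})}$ and $\sum_{n\ge 0} \frac{q^{n^2+n}}{(q;q)_n} = \prod_{n\ge 0} \frac{1}{(1-q^{5n+2})(1-q^{5n+3})}$.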
Horocycle flows at prime times 13:10 Wed 10 Aug, 2011 :: B.19 Ingkarni Wardli :: Prof Peter Sarnak :: Institute for Advanced Study, Princeton The distribution of individual orbits of unipotent flows in homogeneous spaces is well understood thanks to the work of Marina Ratner. It is conjectured that this property is preserved on restricting the times from the integers to primes, this being important in the study of prime numbers as well as in such dynamics. We review progress in understanding this conjecture, starting with Dirichlet (a finite system), Vinogradov (rotation of a circle or torus), Green and Tao (translation on a nilmanifold) and Ubis and Sarnak (horocycle flows in the semisimple case). K3 surfaces: a crash course 13:10 Fri 12 Aug, 2011 :: B.19 Ingkarni Wardli :: A/Prof Nicholas Buchdahl :: University of Adelaide Everything you have ever wanted to know about K3 surfaces! Two talks: 1:10 pm to 3:00 pm. There are no magnetically charged particle-like solutions of the Einstein-Yang-Mills equations for models with Abelian residual groups 13:10 Fri 19 Aug, 2011 :: B.19 Ingkarni Wardli :: Dr Todd Oliynyk :: Monash University According to a conjecture from the 90's, globally regular, static, spherically symmetric (i.e. particle-like) solutions with nonzero total magnetic charge are not expected to exist in Einstein-Yang-Mills theory. In this talk, I will describe recent work done in collaboration with M. Fisher where we establish the validity of this conjecture under certain restrictions on the residual gauge group. Of particular interest is that our non-existence results apply to the most widely studied models with Abelian residual groups. Deformations of Oka manifolds 13:10 Fri 26 Aug, 2011 :: B.19 Ingkarni Wardli :: A/Prof Finnur Larusson :: University of Adelaide We discuss the behaviour of the Oka property with respect to deformations of compact complex manifolds. We have recently proved that in a family of compact complex manifolds, the set of Oka fibres corresponds to a G_delta subset of the base. We have also found a necessary and sufficient condition for the limit fibre of a sequence of Oka fibres to be Oka in terms of a new uniform Oka property. The special case when the fibres are tori will be considered, as well as the general case of holomorphic submersions with noncompact fibres. Oka properties of some hypersurface complements 13:10 Fri 2 Sep, 2011 :: B.19 Ingkarni Wardli :: Mr Alexander Hanysz :: University of Adelaide Oka manifolds can be viewed as the "opposite" of Kobayashi hyperbolic manifolds. Kobayashi conjectured that the complement of a generic algebraic hypersurface of sufficiently high degree is hyperbolic. Therefore it is natural to ask whether the complement is Oka for the case of low degree or non-algebraic hypersurfaces. We provide a complete answer to this question for complements of hyperplane arrangements, and some results for graphs of meromorphic functions. Twisted Morava K-theory 13:10 Fri 9 Sep, 2011 :: 7.15 Ingkarni Wardli :: Dr Craig Westerland :: University of Melbourne Morava's extraordinary K-theories K(n) are a family of generalized cohomology theories which behave in some ways like K-theory (indeed, K(1) is mod 2 K-theory). Their construction exploits Quillen's description of cobordism in terms of formal group laws and Lubin-Tate's methods in class field theory for constructing abelian extensions of number fields. 
Constructed from homotopy-theoretic methods, they do not admit a geometric description (like de Rham cohomology, K-theory, or cobordism), but are nonetheless subtle, computable invariants of topological spaces. In this talk, I will give an introduction to these theories, and explain how it is possible to define an analogue of twisted K-theory in this setting. Traditionally, K-theory is twisted by a three-dimensional cohomology class; in this case, K(n) admits twists by (n+2)-dimensional classes. This work is joint with Hisham Sati. Cohomology of higher-rank graphs and twisted C*-algebras 13:10 Fri 16 Sep, 2011 :: B.19 Ingkarni Wardli :: Dr Aidan Sims :: University of Wollongong Higher-rank graphs and their $C^*$-algebras were introduced by Kumjian and Pask in 2000. They have provided a rich source of tractable examples of $C^*$-algebras, the most elementary of which are the commutative algebras $C(\mathbb{T}^k)$ of continuous functions on $k$-tori. In this talk we shall describe how to define the homology and cohomology of a higher-rank graph, and how to associate to each higher-rank graph $\Lambda$ and $\mathbb{T}$-valued cocycle on $\Lambda$ a twisted higher-rank graph $C^*$-algebra. As elementary examples, we obtain all noncommutative tori. This is a preliminary report on ongoing joint work with Alex Kumjian and David Pask. T-duality via bundle gerbes I 13:10 Fri 23 Sep, 2011 :: B.19 Ingkarni Wardli :: Dr Raymond Vozzo :: University of Adelaide In physics T-duality is a phenomenon which relates certain types of string theories to one another. From a topological point of view, one can view string theory as a duality between line bundles carrying a degree three cohomology class (the H-flux). In this talk we will use bundle gerbes to give a geometric realisation of the H-flux and explain how to construct the T-dual of a line bundle together with its T-dual bundle gerbe. T-duality via bundle gerbes II 13:10 Fri 21 Oct, 2011 :: B.19 Ingkarni Wardli :: Dr Raymond Vozzo :: University of Adelaide Dirac operators on classifying spaces 13:10 Fri 28 Oct, 2011 :: B.19 Ingkarni Wardli :: Dr Pedram Hekmati :: University of Adelaide The Dirac operator was introduced by Paul Dirac in 1928 as the formal square root of the D'Alembert operator. Thirty years later it was rediscovered in Euclidean signature by Atiyah and Singer in their seminal work on index theory. In this talk I will describe efforts to construct a Dirac type operator on the classifying space for odd complex K-theory. Ultimately the aim is to produce a projective family of Fredholm operators realising elements in twisted K-theory of a certain moduli stack. Staircase to heaven 13:10 Fri 4 Nov, 2011 :: B.19 Ingkarni Wardli :: Dr Burkard Polster :: Monash University How much of an overhang can we produce by stacking identical rectangular blocks at the edge of a table? It has been known for at least 100 years that the overhang can be as large as desired: we arrange the blocks in the form of a staircase. With $n$ blocks of length 2 the overhang can be made to sum to $1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots+\frac{1}{n}$. Since the harmonic series diverges, it follows that the overhang can be arranged to be as large as desired, simply by using a suitably large number of blocks. Recently, a number of interesting twists have been added to this paradoxical staircase. 
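As a quick numerical illustration of the harmonic overhang formula quoted in the staircase abstract above, here is a minimal Python sketch; the helper names are hypothetical and nothing below is taken from the talk itself.

def overhang(n):
    # Total overhang of the classical harmonic stack built from n blocks of length 2:
    # the k-th block from the top protrudes by 1/k beyond the one below it.
    return sum(1.0 / k for k in range(1, n + 1))

def blocks_needed(target):
    # Smallest n whose harmonic overhang exceeds the given target;
    # such an n exists for every target because the harmonic series diverges.
    total, n = 0.0, 0
    while total <= target:
        n += 1
        total += 1.0 / n
    return n

print(overhang(4))         # 1 + 1/2 + 1/3 + 1/4 = 2.0833...
print(blocks_needed(3.0))  # 11 blocks already give an overhang greater than 3

The divergence is very slow: pushing the overhang beyond 10 in the same units already requires more than 12000 blocks in this classical scheme.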
I'll be talking about some of these new developments and in particular about a continuous counterpart of the staircase that I've been pondering together with my colleagues David Treeby and Marty Ross. Metric geometry in data analysis 13:10 Fri 11 Nov, 2011 :: B.19 Ingkarni Wardli :: Dr Facundo Memoli :: University of Adelaide The problem of object matching under invariances can be studied using certain tools from metric geometry. The central idea is to regard objects as metric spaces (or metric measure spaces). The type of invariance that one wishes to have in the matching is encoded by the choice of the metrics with which one endows the objects. The standard example is matching objects in Euclidean space under rigid isometries: in this situation one would endow the objects with the Euclidean metric. More general scenarios are possible in which the desired invariance cannot be reflected by the preservation of an ambient space metric. Several ideas due to M. Gromov are useful for approaching this problem. The Gromov-Hausdorff distance is a natural candidate for doing this. However, this metric leads to very hard combinatorial optimization problems and it is difficult to relate to previously reported practical approaches to the problem of object matching. I will discuss different variations of these ideas, and in particular will show a construction of an L^p version of the Gromov-Hausdorff metric, called the Gromov-Wasserstein distance, which is based on mass transportation ideas. This new metric directly leads to quadratic optimization problems on continuous variables with linear constraints. As a consequence of establishing several lower bounds, several invariants of metric measure spaces turn out to be quantitatively stable in the GW sense. These invariants provide practical tools for the discrimination of shapes and connect the GW ideas to a number of pre-existing approaches. Oka theory of blow-ups 13:10 Fri 18 Nov, 2011 :: B.19 Ingkarni Wardli :: A/Prof Finnur Larusson :: University of Adelaide This talk is a continuation of my talk last August. I will discuss the recently-obtained answers to the open questions I described then. Applications of tropical geometry to groups and manifolds 13:10 Mon 21 Nov, 2011 :: B.19 Ingkarni Wardli :: Dr Stephan Tillmann :: University of Queensland Tropical geometry is a young field with multiple origins. These include the work of Bergman on logarithmic limit sets of algebraic varieties; the work of the Brazilian computer scientist Simon on discrete mathematics; the work of Bieri, Neumann and Strebel on geometric invariants of groups; and, of course, the work of Newton on polynomials. Even though there is still need for a unified foundation of the field, there is an abundance of applications of tropical geometry in group theory, combinatorics, computational algebra and algebraic geometry. In this talk I will give an overview of (what I understand to be) tropical geometry with a bias towards applications to group theory and low-dimensional topology. Space of 2D shapes and the Weil-Petersson metric: shapes, ideal fluid and Alzheimer's disease 13:10 Fri 25 Nov, 2011 :: B.19 Ingkarni Wardli :: Dr Sergey Kushnarev :: National University of Singapore The Weil-Petersson metric is an exciting metric on a space of simple plane curves. In this talk the speaker will introduce the shape space and demonstrate the connection with the Euler-Poincare equations on the group of diffeomorphisms (EPDiff). 
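For orientation, one common formulation of the Gromov-Wasserstein distance mentioned in the metric geometry abstract above (the normalisation may differ from the one used in the talk) is the following: for metric measure spaces $(X,d_X,\mu_X)$ and $(Y,d_Y,\mu_Y)$, $d_{GW,p}(X,Y) = \frac{1}{2}\inf_{\mu}\left(\int\int |d_X(x,x')-d_Y(y,y')|^p \, d\mu(x,y)\, d\mu(x',y')\right)^{1/p}$, where the infimum runs over all couplings $\mu$ of $\mu_X$ and $\mu_Y$. In the discrete case the coupling is a matrix with prescribed row and column sums (linear constraints) and the objective is quadratic in its entries, which is the quadratic optimization problem referred to in that abstract.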
A numerical method for finding geodesics between two shapes will be demonstrated and applied to the surface of the hippocampus to study the effects of Alzheimer's disease. As another application the speaker will discuss how to do statistics on the shape space and what should be done to improve it. Noncritical holomorphic functions of finite growth on algebraic Riemann surfaces 13:10 Fri 3 Feb, 2012 :: B.20 Ingkarni Wardli :: Prof Franc Forstneric :: University of Ljubljana Given a compact Riemann surface X and a point p in X, we construct a holomorphic function without critical points on the punctured (algebraic) Riemann surface R=X-p which is of finite order at the point p. In the case at hand this improves the 1967 theorem of Gunning and Rossi to the effect that every open Riemann surface admits a noncritical holomorphic function, but without any particular growth condition. (Joint work with Takeo Ohsawa.) Embedding circle domains into the affine plane C^2 13:10 Fri 10 Feb, 2012 :: B.20 Ingkarni Wardli :: Prof Franc Forstneric :: University of Ljubljana We prove that every circle domain in the Riemann sphere admits a proper holomorphic embedding into the affine plane C^2. By a circle domain we mean a domain obtained by removing from the Riemann sphere a finite or countable family of pairwise disjoint closed round discs. Our proof also applies to some circle domains with punctures. The uniformization theorem of He and Schramm (1996) says that every domain in the Riemann sphere with at most countably many boundary components is conformally equivalent to a circle domain, so our theorem embeds all such domains properly holomorphically in C^2. (Joint work with Erlend F. Wold.) Plurisubharmonic subextensions as envelopes of disc functionals 13:10 Fri 2 Mar, 2012 :: B.20 Ingkarni Wardli :: A/Prof Finnur Larusson :: University of Adelaide I will describe new joint work with Evgeny Poletsky. We prove a disc formula for the largest plurisubharmonic subextension of an upper semicontinuous function on a domain $W$ in a Stein manifold to a larger domain $X$ under suitable conditions on $W$ and $X$. We introduce a related equivalence relation on the space of analytic discs in $X$ with boundary in $W$. The quotient is a complex manifold with a local biholomorphism to $X$, except it need not be Hausdorff. We use our disc formula to generalise Kiselman's minimum principle. We show that his infimum function is an example of a plurisubharmonic subextension. IGA Workshop: The mathematical implications of gauge-string dualities 09:30 Mon 5 Mar, 2012 :: 7.15 Ingkarni Wardli :: Prof Rajesh Gopakumar :: Harish-Chandra Research Institute Lecture series by Rajesh Gopakumar (Harish-Chandra Research Institute). The lectures will be supplemented by talks by other invited speakers. The Lorentzian conformal analogue of Calabi-Yau manifolds 13:10 Fri 16 Mar, 2012 :: B.20 Ingkarni Wardli :: Prof Helga Baum :: Humboldt University Calabi-Yau manifolds are Riemannian manifolds with holonomy group SU(m). They are Ricci-flat and Kahler and admit a 2-parameter family of parallel spinors. In the talk we will discuss the Lorentzian conformal analogue of this situation. If on a manifold a class of conformally equivalent metrics [g] is given, then one can consider the holonomy group of the conformal manifold (M,[g]), which is a subgroup of O(p+1,q+1) if the metric g has signature (p,q). 
There is a close relation between algebraic properties of the conformal holonomy group and the existence of Einstein metrics in the conformal class as well as the existence of conformal Killing spinors. In the talk I will explain classification results for conformal holonomy groups of Lorentzian manifolds. In particular, I will describe Lorentzian manifolds (M,g) with conformal holonomy group SU(1,m), which can be viewed as the conformal analogue of Calabi-Yau manifolds. Such Lorentzian metrics g, known as Fefferman metrics, appear on S^1-bundles over strictly pseudoconvex CR spin manifolds and admit a 2-parameter family of conformal Killing spinors. IGA Workshop: Dualities in field theories and the role of K-theory 09:30 Mon 19 Mar, 2012 :: 7.15 Ingkarni Wardli :: Prof Jonathan Rosenberg :: University of Maryland Lecture series by Jonathan Rosenberg (University of Maryland). There will be additional talks by other invited speakers. The de Rham Complex 12:10 Mon 19 Mar, 2012 :: 5.57 Ingkarni Wardli :: Mr Michael Albanese :: University of Adelaide The de Rham complex is of fundamental importance in differential geometry. After first introducing differential forms (in the familiar setting of Euclidean space), I will demonstrate how the de Rham complex elegantly encodes one half (in a sense which will become apparent) of the results from vector calculus. If there is time, I will indicate how results from the remaining half of the theory can be concisely expressed by a single, far more general theorem. Bundle gerbes and the Faddeev-Mickelsson-Shatashvili anomaly 13:10 Fri 30 Mar, 2012 :: B.20 Ingkarni Wardli :: Dr Raymond Vozzo :: University of Adelaide The Faddeev-Mickelsson-Shatashvili anomaly arises in the quantisation of fermions interacting with external gauge potentials. Mathematically, it can be described as a certain lifting problem for an extension of groups. The theory of bundle gerbes is very useful for studying lifting problems; however, it only applies in the case of a central extension, whereas in the study of the FMS anomaly the relevant extension is non-central. In this talk I will explain how to describe this anomaly indirectly using bundle gerbes and how to use a generalisation of bundle gerbes to describe the (non-central) lifting problem directly. This is joint work with Pedram Hekmati, Michael Murray and Danny Stevenson. New examples of totally disconnected, locally compact groups 13:10 Fri 20 Apr, 2012 :: B.20 Ingkarni Wardli :: Dr Murray Elder :: University of Newcastle I will attempt to explain what a totally disconnected, locally compact group is, and then describe some new work with George Willis on an attempt to create new examples based on Baumslag-Solitar groups, which are well known, tried and tested examples/counterexamples in geometric/combinatorial group theory. I will describe how to compute invariants of scale and flat rank for these groups. A Problem of Siegel 13:10 Fri 27 Apr, 2012 :: B.20 Ingkarni Wardli :: Dr Brent Everitt :: University of York The first explicit examples of orientable hyperbolic 3-manifolds were constructed by Weber, Seifert, and Lobell in the early 1930's. In the subsequent decades the world of hyperbolic n-manifolds has grown into an extraordinarily rich one. Its sociology is best understood through the eyes of invariants, and for hyperbolic manifolds the most important invariant is volume. 
Viewed this way the n-dimensional hyperbolic manifolds, for fixed n, look like a well-ordered subset of the reals (a discrete set even, when n is not 3). So we are naturally led to the (manifold) Siegel problem: for a given n, determine the minimum possible volume obtained by an orientable hyperbolic n-manifold. It is a problem with a long and venerable history. In this talk I will describe a unified solution to the problem in low even dimensions, one of which at least is new. Joint work with John Ratcliffe and Steve Tschantz (Vanderbilt). Acyclic embeddings of open Riemann surfaces into new examples of elliptic manifolds 13:10 Fri 4 May, 2012 :: Napier LG28 :: Dr Tyson Ritter :: University of Adelaide In complex geometry a manifold is Stein if there are, in a certain sense, "many" holomorphic maps from the manifold into C^n. While this has long been well understood, a fruitful definition of the dual notion has until recently been elusive. In Oka theory, a manifold is Oka if it satisfies several equivalent definitions, each stating that the manifold has "many" holomorphic maps into it from C^n. Related to this is the geometric condition of ellipticity due to Gromov, who showed that it implies a complex manifold is Oka. We present recent contributions to three open questions involving elliptic and Oka manifolds. We show that affine quotients of C^n are elliptic, and combine this with an example of Margulis to construct new elliptic manifolds of interesting homotopy types. It follows that every open Riemann surface properly acyclically embeds into an elliptic manifold, extending an existing result for open Riemann surfaces with abelian fundamental group. Index type invariants for twisted signature complexes 13:10 Fri 11 May, 2012 :: Napier LG28 :: Prof Mathai Varghese :: University of Adelaide Atiyah-Patodi-Singer proved an index theorem for non-local boundary conditions in the 1970's that has been widely used in mathematics and mathematical physics. A key application of their theory gives the index theorem for signature operators on oriented manifolds with boundary. As a consequence, they defined certain secondary invariants that were metric independent. I will discuss some recent work with Benameur where we extend the APS theory to signature operators twisted by an odd degree closed differential form, and study the corresponding secondary invariants. Computational complexity, taut structures and triangulations 13:10 Fri 18 May, 2012 :: Napier LG28 :: Dr Benjamin Burton :: University of Queensland There are many interesting and difficult algorithmic problems in low-dimensional topology. Here we study the problem of finding a taut structure on a 3-manifold triangulation, whose existence has implications for both the geometry and combinatorics of the triangulation. We prove that detecting taut structures is "hard", in the sense that it is NP-complete. We also prove that detecting taut structures is "not too hard", by showing it to be fixed-parameter tractable. This is joint work with Jonathan Spreer. On the full holonomy group of special Lorentzian manifolds 13:10 Fri 25 May, 2012 :: Napier LG28 :: Dr Thomas Leistner :: University of Adelaide The holonomy group of a semi-Riemannian manifold is defined as the group of parallel transports along loops based at a point. Its connected component, the `restricted holonomy group', is given by restricting in this definition to contractible loops. 
The restricted holonomy can essentially be described by its Lie algebra and many classification results are obtained in this way. In contrast, the `full' holonomy group is a more global object and classification results are out of reach. In the talk I will describe recent results with H. Baum and K. Laerz (both HU Berlin) about the full holonomy group of so-called `indecomposable' Lorentzian manifolds. I will explain a construction method that arises from analysing the effects on holonomy when dividing the manifold by the action of a properly discontinuous group of isometries and present several examples of Lorentzian manifolds with disconnected holonomy groups. Geometric modular representation theory 13:10 Fri 1 Jun, 2012 :: Napier LG28 :: Dr Anthony Henderson :: University of Sydney Representation theory is one of the oldest areas of algebra, but many basic questions in it are still unanswered. This is especially true in the modular case, where one considers vector spaces over a field F of positive characteristic; typically, complications arise for particular small values of the characteristic. For example, from a vector space V one can construct the symmetric square S^2(V), which is one easy example of a representation of the group GL(V). One would like to say that this representation is irreducible, but that statement is not always true: if F has characteristic 2, there is a nontrivial invariant subspace. Even for GL(V), we do not know the dimensions of all irreducible representations in all characteristics. In this talk, I will introduce some of the main ideas of geometric modular representation theory, a more recent approach which is making progress on some of these old problems. Essentially, the strategy is to re-formulate everything in terms of homology of various topological spaces, where F appears only as the field of coefficients and the spaces themselves are independent of F; thus, the modular anomalies in representation theory arise because homology with modular coefficients is detecting something about the topology that rational coefficients do not. In practice, the spaces are usually varieties over the complex numbers, and homology is replaced by intersection cohomology to take into account the singularities of these varieties. IGA Workshop: Dendroidal sets 14:00 Tue 12 Jun, 2012 :: Ingkarni Wardli B17 :: Dr Ittay Weiss :: University of the South Pacific A series of four 2-hour lectures by Dr. Ittay Weiss. The theory of dendroidal sets was introduced by Moerdijk and Weiss in 2007 in the study of homotopy operads in algebraic topology. In the five years that have passed since then several fundamental and highly non-trivial results were established. For instance, it was established that dendroidal sets provide models for homotopy operads in a way that extends the Joyal-Lurie approach to homotopy categories. It can be shown that dendroidal sets provide new models in the study of n-fold loop spaces. It has also very recently been shown that dendroidal sets model all connective spectra in a way that extends the modeling of certain spectra by Picard groupoids. The aim of the lecture series will be to introduce the concepts mentioned above, present the elementary theory, and understand the scope of the results mentioned as well as discuss the potential for further applications. 
Sources for the course will include the article "From Operads to Dendroidal Sets" (in the AMS volume on mathematical foundations of quantum field theory (also on the arXiv)) and the lecture notes by Ieke Moerdijk "simplicial methods for operads and algebraic geometry" which resulted from an advanced course given in Barcelona 3 years ago. No prior knowledge of operads will be assumed nor any knowledge of homotopy theory that is more advanced than what is required for the definition of the fundamental group. The basics of the language of presheaf categories will be recalled quickly and used freely. Introduction to quantales via axiomatic analysis 13:10 Fri 15 Jun, 2012 :: Napier LG28 :: Dr Ittay Weiss :: University of the South Pacific Quantales were introduced by Mulvey in 1986 in the context of non-commutative topology with the aim of providing a concrete non-commutative framework for the foundations of quantum mechanics. Since then quantales have found applications in other areas as well, among others in the work of Flagg. Flagg considers certain special quantales, called value quantales, that are designed to capture the essential properties of ([0,\infty],\le,+) that are relevant for analysis. The result is a well behaved theory of value quantale enriched metric spaces. I will introduce the notion of quantales as if they were designed for just this purpose, review most of the known results (since there are not too many), and address some new results, conjectures, and questions. K-theory and unbounded Fredholm operators 13:10 Mon 9 Jul, 2012 :: Ingkarni Wardli B19 :: Dr Jerry Kaminker :: University of California, Davis There are several ways of viewing elements of K^1(X). One of these is via families of unbounded self-adjoint Fredholm operators on X. Each operator will have discrete spectrum, with infinitely many positive and negative eigenvalues of finite multiplicity. One can associate to such a family a geometric object, its graph, and the Chern character and other invariants of the family can be studied from this perspective. By restricting the dimension of the eigenspaces one may sometimes use algebraic topology to completely determine the family up to equivalence. This talk will describe the general framework and some applications to families on low-dimensional manifolds where the methods work well. Various notions related to spectral flow, the index gerbe and Berry phase play roles which will be discussed. This is joint work with Ron Douglas. Complex geometry and operator theory 14:10 Mon 9 Jul, 2012 :: Ingkarni Wardli B19 :: Prof Ron Douglas :: Texas A&M University In the study of bounded operators on Hilbert spaces of holomorphic functions, concepts and techniques from complex geometry are important. An anti-holomorphic bundle exists on which one can define the Chern connection. Its curvature turns out to be a complete invariant and various operator notions can be reframed in terms of geometrical ones, which leads to the solution of some problems. We will discuss this approach with an emphasis on natural examples in the one and multivariable case. Inquiry-based learning: yesterday and today The speaker will report on a project to develop and promote approaches to mathematics instruction closely related to the Moore method -- methods which are called inquiry-based learning -- as well as on his personal experience of the Moore method. For background, see the speaker's article in the May 2012 issue of the Notices of the American Mathematical Society. 
The motivic logarithm and its realisations 13:10 Fri 3 Aug, 2012 :: Engineering North 218 :: Dr James Borger :: Australian National University When a complex manifold is defined by polynomial equations, its cohomology groups inherit extra structure. This was discovered by Hodge in the 1920s and 30s. When the defining polynomials have rational coefficients, there is some additional, arithmetic structure on the cohomology. This was discovered by Grothendieck and others in the 1960s. But here the situation is still quite mysterious because each cohomology group has infinitely many different arithmetic structures and while they are not directly comparable, they share many properties---with each other and with the Hodge structure. All written accounts of this that I'm aware of treat arbitrary varieties. They are beautifully abstract and non-explicit. In this talk, I'll take the opposite approach and try to give a flavour of the subject by working out perhaps the simplest nontrivial example, the cohomology of C* relative to a subset of two points, in beautifully concrete and explicit detail. Here the common motif is the logarithm. In Hodge theory, it is realised as the complex logarithm; in the crystalline theory, as the p-adic logarithm; and in the etale theory, via Kummer theory. I'll assume you have some familiarity with usual, singular cohomology of topological spaces, but I won't assume that you know anything about these non-topological cohomology theories. Hodge numbers and cohomology of complex algebraic varieties 13:10 Fri 10 Aug, 2012 :: Engineering North 218 :: Prof Gus Lehrer :: University of Sydney Let $X$ be a complex algebraic variety defined over the ring $\mathfrak{O}$ of integers in a number field $K$ and let $\Gamma$ be a group of $\mathfrak{O}$-automorphisms of $X$. I shall discuss how the counting of rational points over reductions mod $p$ of $X$, and an analysis of the Hodge structure of the cohomology of $X$, may be used to determine the cohomology as a $\Gamma$-module. This will include some joint work with Alex Dimca and with Mark Kisin, and some classical unsolved problems. Differential topology 101 13:10 Fri 17 Aug, 2012 :: Engineering North 218 :: Dr Nicholas Buchdahl :: University of Adelaide Much of my recent research has been directed at a problem in the theory of compact complex surfaces---trying to fill in a gap in the Enriques-Kodaira classification. Attempting to classify some collection of mathematical objects is a very common activity for pure mathematicians, and there are many well-known examples of successful classification schemes; for example, the classification of finite simple groups, and the classification of simply connected topological 4-manifolds. The aim of this talk will be to illustrate how techniques from differential geometry can be used to classify compact surfaces. The level of the talk will be very elementary, and the material is all very well known, but it is sometimes instructive to look back over simple cases of a general problem with the benefit of experience to gain greater insight into the more general and difficult cases. Noncommutative geometry and conformal geometry 13:10 Fri 24 Aug, 2012 :: Engineering North 218 :: Dr Hang Wang :: Tsinghua University In this talk, we shall use noncommutative geometry to obtain an index theorem in conformal geometry. 
This index theorem follows from an explicit and geometric computation of the Connes-Chern character of the spectral triple in conformal geometry, which was introduced recently by Connes and Moscovici. This (twisted) spectral triple encodes the geometry of the group of conformal diffeomorphisms on a spin manifold. The crux of this construction is the conformal invariance of the Dirac operator. As a result, the Connes-Chern character is intimately related to the CM cocycle of an equivariant Dirac spectral triple. We compute this equivariant CM cocycle by heat kernel techniques. On the way we obtain a new heat kernel proof of the equivariant index theorem for Dirac operators. (Joint work with Raphael Ponge.) Holomorphic flexibility properties of compact complex surfaces 13:10 Fri 31 Aug, 2012 :: Engineering North 218 :: A/Prof Finnur Larusson :: University of Adelaide I will describe recent joint work with Franc Forstneric (arXiv, July 2012). We introduce a new property, called the stratified Oka property, which fits into a hierarchy of anti-hyperbolicity properties that includes the Oka property. We show that stratified Oka manifolds are strongly dominable by affine spaces. It follows that Kummer surfaces are strongly dominable. We determine which minimal surfaces of class VII are Oka (assuming the global spherical shell conjecture). We deduce that the Oka property and several other anti-hyperbolicity properties are in general not closed in families of compact complex manifolds. I will summarise what is known about how the Oka property fits into the Enriques-Kodaira classification of surfaces. Classification of a family of symmetric graphs with complete quotients 13:10 Fri 7 Sep, 2012 :: Engineering North 218 :: A/Prof Sanming Zhou :: University of Melbourne A finite graph is called symmetric if its automorphism group is transitive on the set of arcs (ordered pairs of adjacent vertices) of the graph. This is to say that all arcs have the same status in the graph. I will talk about recent results on the classification of a family of symmetric graphs with complete quotients. The most interesting graphs arising from this classification are defined in terms of Hermitian unitals (which are specific block designs), and they admit unitary groups as groups of automorphisms. I will also talk about applications of our results in constructing large symmetric graphs of given degree and diameter. This talk contains joint work with M. Giulietti, S. Marcugini and F. Pambianco. Geometric quantisation in the noncompact setting 13:10 Fri 14 Sep, 2012 :: Engineering North 218 :: Dr Peter Hochs :: Leibniz University, Hannover Traditionally, the geometric quantisation of an action by a compact Lie group on a compact symplectic manifold is defined as the equivariant index of a certain Dirac operator. This index is a well-defined formal difference of finite-dimensional representations, since the Dirac operator is elliptic and the manifold and the group in question are compact. From a mathematical and physical point of view however, it is very desirable to extend geometric quantisation to noncompact groups and manifolds. Defining a suitable index is much harder in the noncompact setting, but several interesting results in this direction have been obtained. I will review the difficulties connected to noncompact geometric quantisation, and some of the solutions that have been proposed so far, mainly in connection to the "quantisation commutes with reduction" principle. 
(An introduction to this principle will be given in my talk at the Colloquium on the same day.) Introduction to pairings in cryptography 13:10 Fri 21 Sep, 2012 :: Napier 209 :: Dr Naomi Benger :: University of Adelaide From cryptanalysis to a powerful tool which made identity based cryptography possible, pairings have a range of applications in cryptography. I will present basic background (algebraic geometry) needed to understand pairings, hard problems associated with pairings and protocols which use pairings. Supermanifolds and the moduli space of instantons 13:10 Fri 19 Oct, 2012 :: Engineering North 218 :: Prof Ugo Bruzzo :: International School for Advanced Studies (SISSA), Trieste I will give an example of an application of supermanifold theory to physics, i.e., how to "superize" the moduli space of instantons on a 4-fold and use it to give a description of the BRST transformations, to compute the "supermeasure" of the moduli space, and the Nekrasov partition function. The space of cubic rational maps 13:10 Fri 26 Oct, 2012 :: Engineering North 218 :: Mr Alexander Hanysz :: University of Adelaide For each natural number d, the space of rational maps of degree d on the Riemann sphere has the structure of a complex manifold. The topology of these manifolds has been extensively studied. The recent development of Oka theory raises some new and interesting questions about their complex structure. We apply geometric invariant theory to the degree 3 case, studying a double action of the Mobius group on the space of cubic rational maps. We show that the categorical quotient is C, and that the space of cubic rational maps enjoys the holomorphic flexibility properties of strong dominability and C-connectedness. Twisted analytic torsion and adiabatic limits 13:10 Wed 5 Dec, 2012 :: Ingkarni Wardli B17 :: Mr Ryan Mickler :: University of Adelaide We review Mathai-Wu's recent extension of Ray-Singer analytic torsion to supercomplexes. We explore some new results relating these two torsions, and how we can apply the adiabatic spectral sequence due to Forman and Farber's analytic deformation theory to compute some spectral invariants of the complexes involved, answering some questions that were posed in Mathai-Wu's paper. Variation of Hodge structure for generalized complex manifolds 13:10 Fri 7 Dec, 2012 :: Ingkarni Wardli B20 :: Dr David Baraglia :: University of Adelaide Generalized complex geometry combines complex and symplectic geometry into a single framework, incorporating also holomorphic Poisson and bi-Hermitian structures. The Dolbeault complex naturally extends to the generalized complex setting giving rise to Hodge structures in twisted cohomology. We consider the variations of Hodge structure and period mappings that arise from families of generalized complex manifolds. As an application we prove a local Torelli theorem for generalized Calabi-Yau manifolds. Hyperplane arrangements and tropicalization of linear spaces 10:10 Mon 17 Dec, 2012 :: Ingkarni Wardli B17 :: Dr Graham Denham :: University of Western Ontario I will give an introduction to a sequence of ideas in tropical geometry, the tropicalization of linear spaces. In the beginning, a construction due to De Concini and Procesi (wonderful models, 1995) gave a combinatorially explicit description of various iterated blowups of projective spaces along (proper transforms of) linear subspaces. 
A decade later, Tevelev's notion of tropical compactifications led to, in particular, a new view of the wonderful models and their intersection theory in terms of the theory of toric varieties (via work of Feichtner-Sturmfels, Feichtner-Yuzvinsky, Ardila-Klivans, and others). Recently, these ideas have played a role in Huh and Katz's proof of a long-standing conjecture in combinatorics. Stably Cayley groups over fields of characteristic 0 11:10 Mon 17 Dec, 2012 :: Ingkarni Wardli B17 :: Dr Nicole Lemire :: University of Western Ontario A linear algebraic group is called a Cayley group if it is equivariantly birationally isomorphic to its Lie algebra. It is stably Cayley if the product of the group and some torus is Cayley. Cayley gave the first examples of Cayley groups with his Cayley map back in 1846. Over an algebraically closed field of characteristic 0, Cayley and stably Cayley simple groups were classified by Lemire, Popov and Reichstein in 2006. In recent joint work with Blunk, Borovoi, Kunyavskii and Reichstein, we classify the simple stably Cayley groups over an arbitrary field of characteristic 0. Recent results on holomorphic extension of functions on unbounded domains in C^n 11:10 Fri 21 Dec, 2012 :: Ingkarni Wardli B19 :: Prof Roman Dwilewicz :: Missouri University of Science and Technology The talk will give a short review of holomorphic extension problems, starting with the famous Hartogs theorem (1906) up to recent results on global holomorphic extensions for unbounded domains, obtained together with Al Boggess (Arizona State Univ.) and Zbigniew Slodkowski (Univ. Illinois at Chicago). There is an interesting geometry behind the extension problem for unbounded domains, namely (in some cases) it depends on the position of a complex variety in the closure of the domain. The extension problem has turned out to be non-trivial, and the work is in progress. However, the talk will be illustrated by many figures and pictures and should also be accessible to graduate students. Conformally Fedosov manifolds 12:10 Fri 8 Mar, 2013 :: Ingkarni Wardli B19 :: Prof Michael Eastwood :: Australian National University Symplectic and projective structures may be compatibly combined. The resulting structure closely resembles conformal geometry and a manifold endowed with such a structure is called conformally Fedosov. This talk will present the basic theory of conformally Fedosov geometry and, in particular, construct a Cartan connection for them. This is joint work with Jan Slovak. Twistor space for rolling bodies 12:10 Fri 15 Mar, 2013 :: Ingkarni Wardli B19 :: Prof Pawel Nurowski :: University of Warsaw We consider a configuration space of two solids rolling on each other without slipping or twisting, and identify it with an open subset U of R^5, equipped with a generic distribution D of 2-planes. We will discuss symmetry properties of the pair (U,D) and will mention that, in the case of the two solids being balls, when changing the ratio of their radii, the dimension of the group of local symmetries unexpectedly jumps from 6 to 14. This occurs for only one such ratio, and in this case the local group of symmetries of the pair (U,D) is maximal. It is maximal not only among the balls with various radii, but more generally among all (U,D)s corresponding to configuration spaces of two solids rolling on each other without slipping or twisting. This maximal group is isomorphic to the split real form of the exceptional Lie group G2. 
In the remaining part of the talk we explain how to identify the space U from the pair (U,D) defined above with the bundle T of totally null real 2-planes over a 4-manifold equipped with a split signature metric. We call T the twistor bundle for rolling bodies. We show that the rolling distribution D can be naturally identified with an appropriately defined twistor distribution on T. We use this formulation of the rolling system to find more surfaces which, when rigidly rolling on each other without slipping or twisting, have the local group of symmetries isomorphic to the exceptional group G2. On the chromatic number of a random hypergraph 13:10 Fri 22 Mar, 2013 :: Ingkarni Wardli B21 :: Dr Catherine Greenhill :: University of New South Wales A hypergraph is a set of vertices and a set of hyperedges, where each hyperedge is a subset of vertices. A hypergraph is r-uniform if every hyperedge contains r vertices. A colouring of a hypergraph is an assignment of colours to vertices such that no hyperedge is monochromatic. When the colours are drawn from the set {1,..,k}, this defines a k-colouring. We consider the problem of k-colouring a random r-uniform hypergraph with n vertices and cn edges, where k, r and c are constants and n tends to infinity. In this setting, Achlioptas and Naor showed that for the case of r = 2, the chromatic number of a random graph must have one of two easily computable values as n tends to infinity. I will describe some joint work with Martin Dyer (Leeds) and Alan Frieze (Carnegie Mellon), in which we generalised this result to random uniform hypergraphs. The argument uses the second moment method, and applies a general theorem for performing Laplace summation over a lattice. So the proof contains something for everyone, with elements from combinatorics, analysis and algebra. Gauge groupoid cocycles and Cheeger-Simons differential characters 13:10 Fri 5 Apr, 2013 :: Ingkarni Wardli B20 :: Prof Jouko Mickelsson :: Royal Institute of Technology, Stockholm Groups of gauge transformations in quantum field theory are typically extended by a 2-cocycle with values in a certain abelian group due to chiral symmetry breaking. For these extensions an explicit global construction has existed since the 1980's. I shall study the higher group cocycles following a recent paper by F. Wagemann and C. Wockel, but extending to the transformation groupoid setting (motivated by QFT) and discussing potential obstructions in the construction due to a nonvanishing of low dimensional homology groups of the gauge group. The resolution of the obstruction is obtained by an application of the Cheeger-Simons differential characters. M-theory and higher gauge theory 13:10 Fri 12 Apr, 2013 :: Ingkarni Wardli B20 :: Dr Christian Saemann :: Heriot-Watt University I will review my recent work on integrability of M-brane configurations and the description of M-brane models in higher gauge theory. In particular, I will discuss categorified analogues of instantons and present superconformal equations of motion for the non-abelian tensor multiplet in six dimensions. The latter are derived from considering non-abelian gerbes on certain twistor spaces. Conformal Killing spinors in Riemannian and Lorentzian geometry 12:10 Fri 19 Apr, 2013 :: Ingkarni Wardli B19 :: Prof Helga Baum :: Humboldt University Conformal Killing spinors are the solutions of the conformally covariant twistor equation on spinors. 
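To make the definitions in the hypergraph colouring abstract above concrete, here is a minimal Python sketch; the helper names are hypothetical and the code illustrates only the definitions, not the methods of the talk. It builds a random r-uniform hypergraph with n vertices and roughly cn hyperedges and tests whether a given k-colouring is proper, i.e. leaves no hyperedge monochromatic.

import random

def random_uniform_hypergraph(n, r, m):
    # m hyperedges, each a uniformly random r-subset of the vertex set {0, ..., n-1}.
    return [tuple(random.sample(range(n), r)) for _ in range(m)]

def is_proper(colouring, hyperedges):
    # A colouring is proper when no hyperedge is monochromatic,
    # i.e. every hyperedge sees at least two distinct colours.
    return all(len({colouring[v] for v in e}) > 1 for e in hyperedges)

n, r, c, k = 100, 3, 1.5, 2
hyperedges = random_uniform_hypergraph(n, r, int(c * n))
colouring = [random.randrange(k) for _ in range(n)]
print(is_proper(colouring, hyperedges))

The chromatic number is then the least k for which some k-colouring is proper; the abstract concerns its typical value as n tends to infinity with r, c and k fixed.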
Special cases are parallel and Killing spinors; the latter appear as eigenspinors of the Dirac operator on compact Riemannian manifolds of positive scalar curvature for the smallest possible positive eigenvalue. In the talk I will discuss geometric properties of manifolds admitting (conformal) Killing spinors. In particular, I will explain a local classification of the special geometric structures admitting conformal Killing spinors without zeros in the Riemannian as well as in the Lorentzian setting. An Oka principle for equivariant isomorphisms 12:10 Fri 3 May, 2013 :: Ingkarni Wardli B19 :: A/Prof Finnur Larusson :: University of Adelaide I will discuss new joint work with Frank Kutzschebauch (Bern) and Gerald Schwarz (Brandeis). Let $G$ be a reductive complex Lie group acting holomorphically on Stein manifolds $X$ and $Y$, which are locally $G$-biholomorphic over a common categorical quotient $Q$. When is there a global $G$-biholomorphism $X\to Y$? In a situation that we describe, with some justification, as generic, we prove that the obstruction to solving this local-to-global problem is topological and provide sufficient conditions for it to vanish. Our main tool is the equivariant version of Grauert's Oka principle due to Heinzner and Kutzschebauch. We prove that $X$ and $Y$ are $G$-biholomorphic if $X$ is $K$-contractible, where $K$ is a maximal compact subgroup of $G$, or if there is a $G$-diffeomorphism $X\to Y$ over $Q$, which is holomorphic when restricted to each fibre of the quotient map $X\to Q$. When $G$ is abelian, we obtain stronger theorems. Our results can be interpreted as instances of the Oka principle for sections of the sheaf of $G$-biholomorphisms from $X$ to $Y$ over $Q$. This sheaf can be badly singular, even in simply defined examples. Our work is in part motivated by the linearisation problem for actions on $\mathbb{C}^n$. It follows from one of our main results that a holomorphic $G$-action on $\mathbb{C}^n$, which is locally $G$-biholomorphic over a common quotient to a generic linear action, is linearisable. Diffeological spaces and differentiable stacks 12:10 Fri 10 May, 2013 :: Ingkarni Wardli B19 :: Dr David Roberts :: University of Adelaide The category of finite-dimensional smooth manifolds gives rise to interesting structures outside of itself, two examples being mapping spaces and classifying spaces. Diffeological spaces are a notion of generalised smooth space which form a cartesian closed category, so all fibre products and all mapping spaces of smooth manifolds exist as diffeological spaces. Differentiable stacks are a further generalisation that can also deal with moduli spaces (including classifying spaces) for objects with automorphisms. This talk will give an introduction to this circle of ideas. Crystallographic groups I: the classical theory 12:10 Fri 17 May, 2013 :: Ingkarni Wardli B19 :: Dr Wolfgang Globke :: University of Adelaide A discrete isometry group acting properly discontinuously on the n-dimensional Euclidean space with compact quotient is called a crystallographic group. This name reflects the fact that in dimension n=3 their compact fundamental domains resemble a space-filling crystal pattern. For higher dimensions, Hilbert posed his famous 18th problem: "Is there in n-dimensional Euclidean space only a finite number of essentially different kinds of groups of motions with a [compact] fundamental region?" 
This problem was solved by Bieberbach when he proved that in every dimension n there are only finitely many crystallographic groups up to isomorphism, and he also gave a description of these groups. From the perspective of differential geometry these results are of major importance, as crystallographic groups are precisely the fundamental groups of compact flat Riemannian orbifolds. The quotient is even a manifold if the fundamental group is required to be torsion-free, in which case it is called a Bieberbach group. Moreover, for a flat manifold the fundamental group completely determines the holonomy group. In this talk I will discuss the properties of crystallographic groups, study examples in dimension n=2 and n=3, and present the three Bieberbach theorems on the structure of crystallographic groups. Crystallographic groups II: generalisations The theory of crystallographic groups acting cocompactly on Euclidean space can be extended and generalised in many different ways. For example, instead of studying discrete groups of Euclidean isometries, one can consider groups of isometries for indefinite inner products. These are the fundamental groups of compact flat pseudo-Riemannian manifolds. Still more generally, one might study groups of affine transformations on n-space that are not required to preserve any bilinear form. Also, the condition of cocompactness can be dropped. In this talk, I will present some of the results obtained for these generalisations, and also discuss some of my own work on flat homogeneous pseudo-Riemannian spaces. A strong Oka principle for proper immersions of finitely connected planar domains into CxC* 12:10 Fri 31 May, 2013 :: Ingkarni Wardli B19 :: Dr Tyson Ritter :: University of Adelaide Gromov, in his seminal 1989 paper on the Oka principle, proved that every continuous map from a Stein manifold into an elliptic manifold is homotopic to a holomorphic map. In previous work we showed that, given a continuous map from X to the elliptic manifold CxC*, where X is a finitely connected planar domain without isolated boundary points, a stronger Oka property holds whereby the map is homotopic to a proper holomorphic embedding. If the planar domain is additionally permitted to have isolated boundary points the problem becomes more difficult, and it is not yet clear whether a strong Oka property for embeddings into CxC* continues to hold. We will discuss recent results showing that every continuous map from a finitely connected planar domain into CxC* is homotopic to a proper immersion that, in most cases, identifies at most finitely many pairs of distinct points. This is joint work with Finnur Larusson. A new approach to pointwise heat kernel upper bounds on doubling metric measure spaces 12:10 Fri 7 Jun, 2013 :: Ingkarni Wardli B19 :: Prof Thierry Coulhon :: Australian National University On doubling metric measure spaces endowed with a Dirichlet form and satisfying the Davies-Gaffney estimate, we show some characterisations of pointwise upper bounds of the heat kernel in terms of one-parameter weighted inequalities which correspond respectively to the Nash inequality and to a Gagliardo-Nirenberg type inequality when the volume growth is polynomial. This yields a new and simpler proof of the well-known equivalence between classical heat kernel upper bounds and the relative Faber-Krahn inequalities. We are also able to treat more general pointwise estimates where the heat kernel rate of decay is not necessarily governed by the volume growth.
This is a joint work with Salahaddine Boutayeb and Adam Sikora. Birational geometry of M_g 12:10 Fri 21 Jun, 2013 :: Ingkarni Wardli B19 :: Dr Jarod Alper :: Australian National University In 1969, Deligne and Mumford introduced a beautiful compactification of the moduli space of smooth curves which has proved extremely influential in geometry, topology and physics. Using recent advances in higher dimensional geometry and the minimal model program, we study the birational geometry of M_g. In particular, in an effort to understand the canonical model of M_g, we study the log canonical models as well as the associated divisorial contractions and flips by interpreting these models as moduli spaces of particular singular curves. IGA/AMSI Workshop: Representation theory and operator algebras 10:00 Mon 1 Jul, 2013 :: 7.15 Ingkarni Wardli :: Prof Nigel Higson :: Pennsylvania State University This interdisciplinary workshop will be about aspects of representation theory (in the sense of Harish-Chandra), aspects of noncommutative geometry (in the sense of Alain Connes) and aspects of operator K-theory (in the sense of Gennadi Kasparov). It features the renowned speaker, Professor Nigel Higson (Penn State University) http://www.iga.adelaide.edu.au/workshops/WorkshopJuly2013/ All are welcome. K-homology and the quantization commutes with reduction problem 12:10 Fri 5 Jul, 2013 :: 7.15 Ingkarni Wardli :: Prof Nigel Higson :: Pennsylvania State University The quantization commutes with reduction problem for Hamiltonian actions of compact Lie groups was solved by Meinrenken in the mid-1990s using geometric techniques, and solved again shortly afterwards by Tian and Zhang using analytic methods. In this talk I shall outline some of the close links that exist between the problem, the two solutions, and the geometric and analytic versions of K-homology theory that are studied in noncommutative geometry. I shall try to make the case for K-homology as a useful conceptual framework for the solutions and (at least some of) their various generalizations. The search for the exotic - subfactors and conformal field theory 13:10 Fri 26 Jul, 2013 :: Engineering-Maths 212 :: Prof David E. Evans :: Cardiff University Subfactor theory provides a framework for studying modular invariant partition functions in conformal field theory, and candidates for exotic modular tensor categories. I will describe work with Terry Gannon on the search for exotic theories beyond those from symmetries based on loop groups, Wess-Zumino-Witten models and finite groups. Subfactors and twisted equivariant K-theory 12:10 Fri 2 Aug, 2013 :: Ingkarni Wardli B19 :: Prof David E. Evans :: Cardiff University The most basic structure of chiral conformal field theory (CFT) is the Verlinde ring. Freed-Hopkins-Teleman have expressed the Verlinde ring for the CFTs associated to loop groups as twisted equivariant K-theory. In joint work with Terry Gannon, we build on their work to express K-theoretically the structures of full CFT. In particular, the modular invariant partition functions (which essentially parametrise the possible full CFTs) have a rich interpretation within von Neumann algebras (subfactors), which has led to the developments of structures of full CFT such as the full system (fusion ring of defect lines), nimrep (cylindrical partition function), alpha-induction etc. 
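To give a concrete feel for the Verlinde ring mentioned in the abstract above, here is a minimal sketch (the standard SU(2) Wess-Zumino-Witten fusion rules at level k, textbook background rather than part of the speaker's results; the function name and labelling are my own) computing the fusion channels.

def su2_fusion(a1, a2, k):
    # Fusion channels of the SU(2) WZW (Verlinde) ring at level k.
    # Representations are labelled by doubled spins a = 2j with 0 <= a <= k.
    # Returns the list of admissible a3 = 2*j3 with fusion coefficient 1.
    lo = abs(a1 - a2)
    hi = min(a1 + a2, 2 * k - a1 - a2)
    # step 2 keeps a1 + a2 + a3 even, i.e. j1 + j2 + j3 an integer
    return [a3 for a3 in range(lo, hi + 1, 2)]

# Example: at level 2, spin 1/2 x spin 1/2 = spin 0 + spin 1.
print(su2_fusion(1, 1, 2))  # [0, 2]
# At level 1 the spin 1 channel is truncated away: spin 1/2 x spin 1/2 = spin 0.
print(su2_fusion(1, 1, 1))  # [0]

Roughly speaking, the modular invariant partition functions referred to in the abstract are consistent ways of pairing left- and right-moving sectors built from this ring.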
Symplectic Lie groups 12:10 Fri 9 Aug, 2013 :: Ingkarni Wardli B19 :: Dr Wolfgang Globke :: University of Adelaide A "symplectic Lie group" is a Lie group G with a symplectic form such that G acts by symplectic transformations on itself. Such a G cannot be semisimple, so the research focuses on solvable symplectic Lie groups. In the compact case, a classification of these groups is known. In many cases, a solvable symplectic Lie group G is a cotangent bundle of a flat Lie group H. Then H is a Lagrange subgroup of G, meaning its Lie algebra h is isotropic in the Lie algebra g of G. The existence of Lagrange subalgebras or ideals in g is an important question which relates to many problems in the general structure theory of symplectic Lie groups. In my talk, I will give a brief overview of the known results in this field, ranging from the 1970s to a very recent structure theory. A survey of non-abelian cohomology 12:10 Fri 16 Aug, 2013 :: Ingkarni Wardli B19 :: Dr Danny Stevenson :: University of Adelaide If G is a topological group, not necessarily abelian, then the set H^1(M,G) has a natural interpretation in terms of principal G-bundles on the space M. In this talk I will describe higher degree analogs of both the set H^1(M,G) and the notion of a principal bundle (the latter is closely connected to the subject of bundle gerbes). I will explain, following work of Joyal, Jardine and many others, how the language of abstract homotopy theory gives a very convenient framework for discussing these ideas. The Einstein equations with torsion, reduction and duality 12:10 Fri 23 Aug, 2013 :: Ingkarni Wardli B19 :: Dr David Baraglia :: University of Adelaide We consider the Einstein equations for connections with skew torsion. After some general remarks we look at these equations on principal G-bundles, making contact with string structures and heterotic string theory in the process. When G is a torus the equations are shown to possess a symmetry not shared by the usual Einstein equations - T-duality. This is joint work with Pedram Hekmati. Geometry of moduli spaces 12:10 Fri 30 Aug, 2013 :: Ingkarni Wardli B19 :: Prof Georg Schumacher :: University of Marburg We discuss the concept of moduli spaces in complex geometry. The main examples are moduli of compact Riemann surfaces, moduli of compact projective varieties and moduli of holomorphic vector bundles, whose points correspond to isomorphism classes of the given objects. Moduli spaces carry a natural topology, whereas a complex structure that reflects the variation of the structure in a family exists in general only under extra conditions. In a similar way, a natural hermitian metric (Weil-Petersson metric) on moduli spaces that induces a symplectic structure can be constructed from the variation of distinguished metrics on the fibers. In this way, various questions concerning the underlying symplectic structure, the curvature of the Weil-Petersson metric, hyperbolicity of moduli spaces, and construction of positive/ample line bundles on compactified moduli spaces can be answered. What are fusion categories? 12:10 Fri 6 Sep, 2013 :: Ingkarni Wardli B19 :: Dr Scott Morrison :: Australian National University Fusion categories are a common generalization of finite groups and quantum groups at roots of unity. I'll explain a little of their structure, mention their applications (to topological field theory and quantum computing), and then explore the ways in which they are in general similar to, or different from, the 'classical' cases. 
We've only just started exploring, and don't yet know what the exotic examples we've discovered signify about the landscape ahead. K-theory and solid state physics 12:10 Fri 13 Sep, 2013 :: Ingkarni Wardli B19 :: Dr Keith Hannabuss :: Balliol College, Oxford More than 50 years ago Dyson showed that there is a nine-fold classification of random matrix models, the classes of which are each associated with Riemannian symmetric spaces. More recently it was realised that a related argument enables one to classify the insulating properties of fermionic systems (with the addition of an extra class to give 10 in all), and that this classification can be described using K-theory. In this talk I shall give a survey of the ideas, and a brief outline of work with Guo Chuan Thiang. The logarithmic singularities of the Green functions of the conformal powers of the Laplacian 11:10 Mon 16 Sep, 2013 :: Ingkarni Wardli B20 :: Prof Raphael Ponge :: Seoul National University Green functions play an important role in conformal geometry. In this talk, we shall explain how to compute explicitly the logarithmic singularities of the Green functions of the conformal powers of the Laplacian. These operators are the Yamabe and Paneitz operators, as well as the conformal fractional powers of the Laplacian arising from scattering theory for Poincare-Einstein metrics. The results are formulated in terms of Weyl conformal invariants defined via the ambient metric of Fefferman-Graham. In this talk we shall report on a program of using the recent framework of twisted spectral triples to study conformal geometry from a noncommutative geometric perspective. One result is a local index formula in conformal geometry taking into account the action of the group of conformal diffeomorphisms. Another result is a version of Vafa-Witten's inequality for twisted spectral triples. Geometric applications include a version of Vafa-Witten's inequality in conformal geometry. There are also noncommutative versions for spectral triples over noncommutative tori and duals of discrete cocompact subgroups of semisimple Lie groups satisfying the Baum-Connes conjecture. (This is joint work with Hang Wang.) Conformal geometry in four variables and a special geometry in five 12:10 Fri 20 Sep, 2013 :: Ingkarni Wardli B19 :: Dr Dennis The :: Australian National University Starting with a split signature 4-dimensional conformal manifold, one can build a 5-dimensional bundle over it equipped with a 2-plane distribution. Generically, this is a (2,3,5)-distribution in the sense of Cartan's five variables paper, an aspect that was recently pursued by Daniel An and Pawel Nurowski (finding new examples concerning the geometry of rolling bodies where the (2,3,5)-distribution has G2-symmetry). I shall explain how to understand some elementary aspects of this "twistor construction" from the perspective of parabolic geometry. This is joint work with Michael Eastwood and Katja Sagerschnig. The irrational line on the torus 12:35 Mon 23 Sep, 2013 :: B.19 Ingkarni Wardli :: Kelli Francis-Staite :: University of Adelaide The torus is a very common example of a surface in R^3, but it's a lot more interesting than just a donut! I will introduce some standard mathematical descriptions of the torus, a bit of number theory, and finally what the irrational line on the torus is. Why is this interesting? Well, despite donuts being yummy to eat, the irrational line on the torus gives a range of pathological counter-examples.
In Differential Geometry, it is an example of a manifold that is a subset of another manifold, but not a submanifold. In Lie theory, it is an example of a subgroup of a Lie group which is not a Lie subgroup. If that wasn't enough of a mouthful, I may also provide some sweet incentives to come along! Does anyone know the location of a good donut store? Exact Fefferman-Graham metrics 12:10 Fri 11 Oct, 2013 :: Ingkarni Wardli B19 :: Prof Pawel Nurowski :: University of Warsaw Geodesic completeness of compact pp-waves 12:10 Fri 18 Oct, 2013 :: Ingkarni Wardli B19 :: Dr Thomas Leistner :: University of Adelaide A semi-Riemannian manifold is geodesically complete (or for short, complete) if all its maximal geodesics are defined on the real line. Whereas for Riemannian metrics the compactness of the manifold implies completeness, there are compact Lorentzian manifolds that are not complete (e.g. the Clifton-Pohl torus). Several rather strong conditions have been found in the literature under which a compact Lorentzian manifold is complete, including being homogeneous (Marsden) or of constant curvature (Carriere, Klingler), or admitting a timelike Killing vector field (Romero, Sanchez). We will consider pp-waves, which are Lorentzian manifolds with a parallel null vector field and a highly degenerate curvature tensor, but which do not satisfy any of the above conditions. We will show that a compact pp-wave is universally covered by a vector space, determine the metric on the universal cover and consequently show that they are geodesically complete. Localised index and L^2-Lefschetz fixed point formula 12:10 Fri 25 Oct, 2013 :: Ingkarni Wardli B19 :: Dr Hang Wang :: University of Adelaide In this talk we introduce a class of localised indices for Dirac type operators on a complete Riemannian manifold, where a discrete group acts properly, co-compactly and isometrically. These localised indices, generalising the L^2-index of Atiyah, are obtained by taking Hattori-Stallings traces of the higher index for the Dirac type operators. We shall talk about some motivation and applications for working on localised indices. The talk is related to joint work with Bai-Ling Wang. IGA Lectures on Finsler geometry 13:30 Thu 31 Oct, 2013 :: Ingkarni Wardli 7.15 :: Prof Robert Bryant :: Duke University 13:30 Refreshments. 14:00 Lecture 1: The origins of Finsler geometry in the calculus of variations. 15:00 Lecture 2: Finsler manifolds of constant flag curvature. Recent developments in special holonomy manifolds 12:10 Fri 1 Nov, 2013 :: Ingkarni Wardli 7.15 :: Prof Robert Bryant :: Duke University One of the big classification results in differential geometry from the past century has been the classification of the possible holonomies of affine manifolds, with the major first step having been taken by Marcel Berger in his 1954 thesis. However, Berger's classification was only partial, and, in the past 20 years, an extensive research effort has been expended to complete this classification and extend it in a number of ways. In this talk, after recounting the major parts of the history of the subject, I will discuss some of the recent results and surprising new examples discovered as a by-product of research into Finsler geometry. If time permits, I will also discuss some of the open problems in the subject.
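For readers who want the statement behind Berger's classification mentioned in the last abstract, the standard list of possible irreducible, non-symmetric Riemannian holonomy groups (textbook background, not a summary of the talk) is: SO(n) for generic metrics; U(m) with n = 2m (Kaehler); SU(m) with n = 2m (Calabi-Yau); Sp(m) with n = 4m (hyperkaehler); Sp(m)Sp(1) with n = 4m (quaternionic Kaehler); G_2 with n = 7; and Spin(7) with n = 8.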
Braids and entropy 10:10 Fri 8 Nov, 2013 :: Ingkarni Wardli B19 :: Prof Burglind Joricke :: Australian National University This talk will be a brief introduction to some aspects of braid theory and to entropy, to provide background for the speaker's talk at 12:10 pm the same day. Braids, conformal module and entropy I will discuss two invariants of conjugacy classes of braids. The first invariant is the conformal module which implicitly occurred already in a paper of Gorin and Lin in connection with their interest in Hilbert's 13th problem. The second is a popular dynamical invariant, the entropy. It appeared in connection with Thurston's theory of surface homeomorphisms. It turns out that these invariants are related: They are inversely proportional. In a preparatory talk (at 10:10 am) I will give a brief introduction to some aspects of braid theory and to entropy. Euler and Lagrange solutions of the three-body problem and beyond 12:10 Fri 15 Nov, 2013 :: Ingkarni Wardli B19 :: Prof Pawel Nurowski :: Centre for Theoretical Physics, Polish Academy of Sciences Reductive group actions and some problems concerning their quotients 12:10 Fri 17 Jan, 2014 :: Ingkarni Wardli B20 :: Prof Gerald Schwarz :: Brandeis University We will gently introduce the concept of a complex reductive group and the notion of the quotient Z of a complex vector space V on which our complex reductive group G acts linearly. There is the quotient mapping p from V to Z. The quotient is an affine variety with a stratification coming from the group action. Let f be an automorphism of Z. We consider the following questions (and give some answers). 1) Does f preserve the stratification of Z, i.e., does it permute the strata? 2) Is there a lift F of f? This means that F maps V to V and p(F(v))=f(p(v)) for all v in V. 3) Can we arrange that F is equivariant? We show that 1) is almost always true, that 2) is true in a lot of cases and that a twisted version of 3) then holds. The density property for complex manifolds: a strong form of holomorphic flexibility 12:10 Fri 24 Jan, 2014 :: Ingkarni Wardli B20 :: Prof Frank Kutzschebauch :: University of Bern Compared with the real differentiable case, complex manifolds in general are more rigid: their groups of holomorphic diffeomorphisms are rather small (in general trivial). A long-known exception to this behavior is affine n-space C^n for n at least 2. Its group of holomorphic diffeomorphisms is infinite dimensional. In the late 1980s Andersen and Lempert proved a remarkable theorem which, in its generalized version due to Forstneric and Rosay, states that any local holomorphic phase flow given on a Runge subset of C^n can be locally uniformly approximated by a global holomorphic diffeomorphism. The main ingredient in the proof was formalized by Varolin and called the density property: The Lie algebra generated by complete holomorphic vector fields is dense in the Lie algebra of all holomorphic vector fields. In these manifolds a similar local-to-global approximation of Andersen-Lempert type holds. It is a precise way of saying that the group of holomorphic diffeomorphisms is large. In the talk we will explain how this notion is related to other more recent flexibility notions in complex geometry, in particular to the notion of an Oka-Forstneric manifold. We will give examples of manifolds with the density property and sketch applications of the density property. If time permits we will explain criteria for the density property developed by Kaliman and the speaker.
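As a concrete illustration of the complete holomorphic vector fields underlying the density property (standard Andersen-Lempert background, not a claim from the abstract): on C^2 the shear field V = p(z_2) d/dz_1, with p an entire function, is complete, because its flow exists for all complex times t and is explicit, phi_t(z_1, z_2) = (z_1 + t p(z_2), z_2). Shears of this kind, together with the analogous overshears, are the basic complete fields used to verify the density property of C^n.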
Holomorphic null curves and the conformal Calabi-Yau problem 12:10 Tue 28 Jan, 2014 :: Ingkarni Wardli B20 :: Prof Franc Forstneric :: University of Ljubljana I shall describe how methods of complex analysis can be used to give new results on the conformal Calabi-Yau problem concerning the existence of bounded metrically complete minimal surfaces in real Euclidean 3-space R^3. We shall see in particular that every bordered Riemann surface admits a proper complete holomorphic immersion into the ball of C^2, and a proper complete embedding as a holomorphic null curve into the ball of C^3. Since the real and the imaginary parts of a holomorphic null curve in C^3 are conformally immersed minimal surfaces in R^3, we obtain a bounded complete conformal minimal immersion of any bordered Riemann surface into R^3. The main advantage of our methods, when compared to the existing ones in the literature, is that we do not need to change the conformal type of the Riemann surface. (Joint work with A. Alarcon, University of Granada.) Integrability of infinite-dimensional Lie algebras and Lie algebroids 12:10 Fri 7 Feb, 2014 :: Ingkarni Wardli B20 :: Christoph Wockel :: Hamburg University Lie's Third Theorem states that each finite-dimensional Lie algebra is the Lie algebra of a Lie group (we also say "integrates to a Lie group"). The corresponding statement for infinite-dimensional Lie algebras or Lie algebroids is false and we will explain geometrically why this is the case. The underlying pattern is that of integration of central extensions of Lie algebras and Lie algebroids. This also occurs in other contexts, and we will explain some aspects of string group models in these terms. In the end we will sketch how the non-integrability of Lie algebras and Lie algebroids can be overcome by passing to higher categorical objects (such as smooth stacks) and give a panoramic (but still conjectural) perspective on the precise relation of the various integrability problems. Hormander's estimate, some generalizations and new applications 12:10 Mon 17 Feb, 2014 :: Ingkarni Wardli B20 :: Prof Zbigniew Blocki :: Jagiellonian University Lars Hormander proved his estimate for the d-bar equation in 1965. It is one of the most important results in several complex variables (SCV). New applications have emerged recently, outside of SCV. We will present three of them: the Ohsawa-Takegoshi extension theorem with optimal constant, the one-dimensional Suita Conjecture, and Nazarov's approach to the Bourgain-Milman inequality from convex analysis. 12:10 Fri 7 Mar, 2014 :: Ingkarni Wardli B20 :: Peter Hochs :: University of Adelaide Geometric quantisation is a way to construct quantum mechanical phase spaces (Hilbert spaces) from classical mechanical phase spaces (symplectic manifolds). In the presence of a group action, the quantisation commutes with reduction principle states that geometric quantisation should be compatible with the ways the group action can be used to simplify (reduce) the classical and quantum phase spaces. This has deep consequences for the link between symplectic geometry and representation theory. The quantisation commutes with reduction principle has been given explicit meaning, and been proved, in cases where the symplectic manifold and the group acting on it are compact. There have also been results where just the group, or the orbit space of the action, is assumed to be compact.
These are important and difficult, but it is somewhat frustrating that they do not even apply to the simplest example from the physics point of view: a free particle in R^n. This talk is about a joint result with Mathai Varghese where the group, manifold and orbit space may all be noncompact. The phase of the scattering operator from the geometry of certain infinite dimensional Lie groups 12:10 Fri 14 Mar, 2014 :: Ingkarni Wardli B20 :: Jouko Mickelsson :: University of Helsinki This talk is about some work on the phase of the time evolution operator in QED and QCD, related to the geometry of certain infinite-dimensional groups (essentially modelled by PSDO's). Moduli spaces of contact instantons 12:10 Fri 28 Mar, 2014 :: Ingkarni Wardli B20 :: David Baraglia :: University of Adelaide In dimensions greater than four there are several notions of higher Yang-Mills instantons. This talk concerns one such case, contact instantons, defined for 5-dimensional contact manifolds. The geometry transverse to the Reeb foliation turns out to be important in understanding the moduli space. For example, we show the dimension of the moduli space is the index of a transverse elliptic complex. This is joint work with Pedram Hekmati. Scattering theory and noncommutative geometry 01:10 Mon 31 Mar, 2014 :: Ingkarni Wardli B20 :: Alan Carey :: Australian National University Semiclassical restriction estimates 12:10 Fri 4 Apr, 2014 :: Ingkarni Wardli B20 :: Melissa Tacy :: University of Adelaide Eigenfunctions of Hamiltonians arise naturally in the theory of quantum mechanics as stationary states of quantum systems. Their eigenvalues have an interpretation as the square root of E, where E is the energy of the system. We wish to better understand the high energy limit which defines the boundary between quantum and classical mechanics. In this talk I will focus on results regarding the restriction of eigenfunctions to lower dimensional subspaces, in particular to hypersurfaces. A convenient way to study such problems is to reframe them as problems in semiclassical analysis. T-Duality and its Generalizations 12:10 Fri 11 Apr, 2014 :: Ingkarni Wardli B20 :: Jarah Evslin :: Theoretical Physics Center for Science Facilities, CAS Given a manifold M with a torus action and a choice of integral 3-cocycle H, T-duality yields another manifold with a torus action and integral 3-cocycle. It induces a number of surprising automorphisms between structures on these manifolds. In this talk I will review T-duality and describe some work on two generalizations which are realized in string theory: NS5-branes and heterotic strings. These respectively correspond to non-closed 3-classes H and to principal bundles fibered over M. A generalised Kac-Peterson cocycle 11:10 Thu 17 Apr, 2014 :: Ingkarni Wardli B20 :: Pedram Hekmati :: University of Adelaide The Kac-Peterson cocycle appears in the study of highest weight modules of infinite dimensional Lie algebras and determines a central extension. The vanishing of its cohomology class is tied to the existence of a cubic Dirac operator whose square is a quadratic Casimir element. I will introduce a closely related Lie algebra cocycle that comes about when constructing spin representations and gives rise to a Banach Lie group with a highly nontrivial topology. I will also explain how to make sense of the cubic Dirac operator in this setting and discuss its relation to twisted K-theory. This is joint work with Jouko Mickelsson.
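For orientation on the central extension appearing in the Kac-Peterson abstract above, the Lie-algebra-level 2-cocycle on a loop algebra Lg = g (x) C[t, t^{-1}] is, up to normalisation and sign conventions, the standard affine cocycle (background, not a formula from the talk): omega(X (x) t^m, Y (x) t^n) = m delta_{m+n,0} <X, Y>, where <,> is an invariant bilinear form on g. The corresponding central extension Lg + Cc has bracket [X (x) t^m, Y (x) t^n] = [X, Y] (x) t^{m+n} + m delta_{m+n,0} <X, Y> c. The Kac-Peterson cocycle is a group-level 2-cocycle on the loop group integrating a cocycle of this type.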
Lefschetz fixed point theorem and beyond 12:10 Fri 2 May, 2014 :: Ingkarni Wardli B20 :: Hang Wang :: University of Adelaide A Lefschetz number associated to a continuous map on a closed manifold is a topological invariant determined by the geometric information in a neighbourhood of the fixed point set of the map. After an introduction of the Lefschetz fixed point theorem, we shall use the Dirac-dual Dirac method to derive the Lefschetz number at the level of K-theory. The method concerns the comparison of the Dirac operator on the manifold and the Dirac operator on some submanifold. This method can be generalised to several interesting situations when the manifold is not necessarily compact. A geometric model for odd differential K-theory 12:10 Fri 9 May, 2014 :: Ingkarni Wardli B20 :: Raymond Vozzo :: University of Adelaide Odd K-theory has the interesting property that, unlike even K-theory, it admits an infinite number of inequivalent differential refinements. In this talk I will give a description of odd differential K-theory using infinite rank bundles and explain why it is the correct differential refinement. This is joint work with Michael Murray, Pedram Hekmati and Vincent Schlegel. Oka properties of groups of holomorphic and algebraic automorphisms of complex affine space 12:10 Fri 6 Jun, 2014 :: Ingkarni Wardli B20 :: Finnur Larusson :: University of Adelaide I will discuss new joint work with Franc Forstneric. The group of holomorphic automorphisms of complex affine space C^n, n>1, is huge. It is not an infinite-dimensional manifold in any recognised sense. Still, our work shows that in some ways it behaves like a finite-dimensional Oka manifold. The p-Minkowski problem 12:10 Fri 13 Jun, 2014 :: Ingkarni Wardli B20 :: Xu-Jia Wang :: Australian National University The p-Minkowski problem is an extension of the classical Minkowski problem. It concerns the existence, uniqueness, and regularity of closed convex hypersurfaces with prescribed Gauss curvature. The Minkowski problem has been studied by many people in the last century and has been completely resolved. The p-Minkowski problem has further applications. In this talk we will review the development of the study of the p-Minkowski problem and discuss some recent works on the problem. The Bismut-Chern character as dimension reduction functor and its twisting 12:10 Fri 4 Jul, 2014 :: Ingkarni Wardli B20 :: Fei Han :: National University of Singapore The Bismut-Chern character is a loop space refinement of the Chern character. It plays an essential role in the interpretation of the Atiyah-Singer index theorem from the point of view of loop space. In this talk, I will first briefly review the construction of the Bismut-Chern character and show how it can be viewed as a dimension reduction functor in the Stolz-Teichner program on supersymmetric quantum field theories. I will then introduce the construction of the twisted Bismut-Chern character, which represents our joint work with Varghese Mathai. Estimates for eigenfunctions of the Laplacian on compact Riemannian manifolds 12:10 Fri 1 Aug, 2014 :: Ingkarni Wardli B20 :: Andrew Hassell :: Australian National University I am interested in estimates on eigenfunctions, accurate in the high-eigenvalue limit. I will discuss estimates on the size (as measured by L^p norms) of eigenfunctions, on the whole Riemannian manifold, at the boundary, or at an interior hypersurface. The link between high-eigenvalue estimates, geometry, and the dynamics of geodesic flow will be emphasized.
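As background for the kind of L^p bounds referred to in the last abstract, the classical estimates of Sogge on a compact Riemannian manifold of dimension n (standard results, not a statement of the speaker's new work) read: if -\Delta u = \lambda^2 u and ||u||_{L^2} = 1, then ||u||_{L^p} \leq C \lambda^{\delta(p)}, with \delta(p) = ((n-1)/2)(1/2 - 1/p) for 2 \leq p \leq 2(n+1)/(n-1), and \delta(p) = n(1/2 - 1/p) - 1/2 for 2(n+1)/(n-1) \leq p \leq \infty. The talk concerns refinements of bounds of this type, including restrictions to hypersurfaces.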
The Dirichlet problem for the prescribed Ricci curvature equation 12:10 Fri 15 Aug, 2014 :: Ingkarni Wardli B20 :: Artem Pulemotov :: University of Queensland We will discuss the following question: is it possible to find a Riemannian metric whose Ricci curvature is equal to a given tensor on a manifold M? To answer this question, one must analyze a weakly elliptic second-order geometric PDE. In the first part of the talk, we will review the history of the subject and state several classical theorems. After that, our focus will be on new results concerning the case where M has nonempty boundary. Quasimodes that do not Equidistribute 13:10 Tue 19 Aug, 2014 :: Ingkarni Wardli B17 :: Shimon Brooks :: Bar-Ilan University The QUE Conjecture of Rudnick-Sarnak asserts that eigenfunctions of the Laplacian on Riemannian manifolds of negative curvature should equidistribute in the large eigenvalue limit. For a number of reasons, it is expected that this property may be related to the (conjectured) small multiplicities in the spectrum. One way to study this relationship is to ask about equidistribution for "quasimodes", or approximate eigenfunctions, in place of highly degenerate eigenspaces. We will discuss the case of surfaces of constant negative curvature; in particular, we will explain how to construct some examples of sufficiently weak quasimodes that do not satisfy QUE, and show how they fit into the larger theory. T-duality and the chiral de Rham complex 12:10 Fri 22 Aug, 2014 :: Ingkarni Wardli B20 :: Andrew Linshaw :: University of Denver The chiral de Rham complex of Malikov, Schechtman, and Vaintrob is a sheaf of vertex algebras that exists on any smooth manifold M. It has a square-zero differential D, and contains the algebra of differential forms on M as a subcomplex. In this talk, I'll give an introduction to vertex algebras and sketch this construction. Finally, I'll discuss a notion of T-duality in this setting. This is based on joint work in progress with V. Mathai. Spherical T-duality 01:10 Mon 25 Aug, 2014 :: Ingkarni Wardli B18 :: Mathai Varghese :: University of Adelaide I will talk about a new variant of T-duality, called spherical T-duality, which relates pairs of the form (P,H) consisting of a principal SU(2)-bundle P --> M and a 7-cocycle H on P. Intuitively, spherical T-duality exchanges H with the second Chern class c_2(P). This is precisely true when M is compact oriented and dim(M) is at most 4. When M is higher dimensional, not all pairs (P,H) admit spherical T-duals and even when they exist, the spherical T-duals are not always unique. We will try and explain this phenomenon. Nonetheless, we prove that all spherical T-dualities induce a degree-shifting isomorphism on the 7-twisted cohomologies of the bundles and, when dim(M) is at most 7, also their integral twisted cohomologies and, when dim(M) is at most 4, even their 7-twisted K-theories. While the complete physical relevance of spherical T-duality is still being explored, it does provide an identification between conserved charges in certain distinct IIB supergravity and string compactifications. This is joint work with Peter Bouwknegt and Jarah Evslin. Ideal membership on singular varieties by means of residue currents 12:10 Fri 29 Aug, 2014 :: Ingkarni Wardli B20 :: Richard Larkang :: University of Adelaide On a complex manifold X, one can consider the following ideal membership problem: Does a holomorphic function on X belong to a given ideal of holomorphic functions on X?
Residue currents give a way of expressing this essentially algebraic problem analytically. I will discuss some basic cases of this, why such an analytic description might be useful, and finish by discussing a generalization of this to singular varieties. The FKMM invariant in low dimension 12:10 Fri 12 Sep, 2014 :: Ingkarni Wardli B20 :: Kiyonori Gomi :: Shinshu University On a space with an involutive action, the natural notion of vector bundles is equivariant vector bundles. But there is an important variant called `Real' vector bundles in the sense of Atiyah, and its cousin, `symplectic' or `Quaternionic' vector bundles in the sense of Dupont. The FKMM invariant is an invariant of `symplectic' vector bundles originally introduced by Furuta, Kametani, Matsue and Minami. The subject of my talk is the recent development of this invariant in my joint work with Giuseppe De Nittis: the classifications of `symplectic' vector bundles in low dimension and the descriptions of some Z/2-invariants by using the FKMM invariant. Translating solitons for mean curvature flow 12:10 Fri 19 Sep, 2014 :: Ingkarni Wardli B20 :: Julie Clutterbuck :: Monash University Mean curvature flow gives a deformation of a submanifold in the direction of its mean curvature vector. Singularities may arise, and can be modelled by special solutions of the flow. I will describe the special solutions that move by only a translation under the flow, and give some explicit constructions of such surfaces (a basic example is recalled below, after this group of abstracts). This is based on joint work with Oliver Schnuerer and Felix Schulze. Spectral asymptotics on random Sierpinski gaskets 12:10 Fri 26 Sep, 2014 :: Ingkarni Wardli B20 :: Uta Freiberg :: Universitaet Stuttgart Self similar fractals are often used in modeling porous media. Hence, defining a Laplacian and a Brownian motion on such sets describes transport through such materials. However, the assumption of strict self similarity can be too restrictive. So, we present several models of random fractals which could be used instead. After recalling the classical approaches of random homogeneous and recursive random fractals, we show how to interpolate between these two model classes with the help of so-called V-variable fractals. This concept (developed by Barnsley, Hutchinson & Stenflo) allows the definition of new families of random fractals, whereby the parameter V describes the degree of `variability' of the realizations. We discuss how the degree of variability influences the geometric, analytic and stochastic properties of these sets. - These results have been obtained with Ben Hambly (University of Oxford) and John Hutchinson (ANU Canberra). Topology, geometry, and moduli spaces 12:10 Fri 10 Oct, 2014 :: Ingkarni Wardli B20 :: Nick Buchdahl :: University of Adelaide In recent years, moduli spaces of one kind or another have been shown to be of great utility, this quite apart from their inherent interest. Many of their applications involve their topology, but as we all know, understanding of topological structures is often facilitated through the use of geometric methods, and some of these moduli spaces carry geometric structures that are of considerable interest in their own right. In this talk, I will describe some of the background and the ideas in this general context, focusing on questions that I have been considering lately together with my colleague Georg Schumacher from Marburg in Germany, who was visiting us recently.
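The following is a minimal worked example for the translating solitons abstract above (standard background recalled here, not one of the speaker's constructions). A hypersurface translating with constant velocity V under mean curvature flow satisfies H = <V, nu>, where nu is the unit normal. One dimension lower, for curve shortening flow of a graph y = u(x), the flow is u_t = u_xx / (1 + u_x^2), and the grim reaper curve u(x) = -log(cos x), |x| < pi/2, translates vertically with unit speed: u_x = tan x, u_xx = sec^2 x, so u_xx / (1 + u_x^2) = 1. Taking the product of the grim reaper with a line gives the simplest translating surface in R^3.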
Compact pseudo-Riemannian solvmanifolds 12:10 Fri 17 Oct, 2014 :: Ingkarni Wardli B20 :: Wolfgang Globke :: University of Adelaide A compact solvmanifold M is a quotient of a solvable Lie group G by a cocompact closed subgroup H. A pseudo-Riemannian metric on M is induced by an H-invariant symmetric 2-tensor on G. In this talk I will describe some foundations and results of my ongoing work with Oliver Baues on the nature of this 2-tensor and what it can imply for the subgroup H. The Serre-Grothendieck theorem by geometric means 12:10 Fri 24 Oct, 2014 :: Ingkarni Wardli B20 :: David Roberts :: University of Adelaide The Serre-Grothendieck theorem implies that every torsion integral 3rd cohomology class on a finite CW-complex is the invariant of some projective bundle. It was originally proved in a letter by Serre, using homotopical methods, most notably a Postnikov decomposition of a certain classifying space with divisible homotopy groups. In this talk I will outline, using work of the algebraic geometer Offer Gabber, a proof for compact smooth manifolds using geometric means and a little K-theory. Extending holomorphic maps from Stein manifolds into affine toric varieties 12:10 Fri 14 Nov, 2014 :: Ingkarni Wardli B20 :: Richard Larkang :: University of Adelaide One way of defining so-called Oka manifolds is by saying that they satisfy the following interpolation property (IP): Y satisfies the IP if any holomorphic map from a closed submanifold S of a Stein manifold X into Y which has a continuous extension to X also has a holomorphic extension. An ostensibly weaker property is the convex interpolation property (CIP), where S is assumed to be a contractible submanifold of X = C^n. By a deep theorem of Forstneric, these (and several other) properties are in fact equivalent. I will discuss joint work with Finnur Larusson, where we consider the interpolation property when the target Y is a singular affine toric variety. We show that all affine toric varieties satisfy an interpolation property stronger than CIP, but that only in very special situations do they satisfy the full IP. Fractal substitution tilings 11:10 Wed 17 Dec, 2014 :: Ingkarni Wardli B17 :: Mike Whittaker :: University of Wollongong Starting with a substitution tiling, I will demonstrate a method for constructing infinitely many new substitution tilings. Each of these new tilings is derived from a graph iterated function system and the tiles typically have fractal boundary. As an application, we construct an odd spectral triple on a C*-algebra associated with an aperiodic substitution tiling. No knowledge of tilings, C*-algebras, or spectral triples will be assumed. This is joint work with Natalie Frank, Michael Mampusti, and Sam Webster. Factorisations of Distributive Laws 12:10 Fri 19 Dec, 2014 :: Ingkarni Wardli B20 :: Paul Slevin :: University of Glasgow Recently, distributive laws have been used by Boehm and Stefan to construct new examples of duplicial (paracyclic) objects, and hence cyclic homology theories. The paradigmatic example of such a theory is the cyclic homology HC(A) of an associative algebra A. It was observed by Kustermans, Murphy, and Tuset that the functor HC can be twisted by automorphisms of A. It turns out that this twisting procedure can be applied to any duplicial object defined by a distributive law. I will begin by defining duplicial objects and cyclic homology, as well as discussing some categorical concepts, then describe the construction of Boehm and Stefan.
I will then define the category of factorisations of a distributive law and explain how this acts on their construction, and give some examples, making explicit how the action of this category generalises the twisting of an associative algebra. Nonlinear analysis over infinite dimensional spaces and its applications 12:10 Fri 6 Feb, 2015 :: Ingkarni Wardli B20 :: Tsuyoshi Kato :: Kyoto University In this talk we develop moduli theory of holomorphic curves over infinite dimensional manifolds consisting of sequences of almost Kaehler manifolds. Under the assumption of high symmetry, we verify that many mechanisms of the standard moduli theory over closed symplectic manifolds also work over these infinite dimensional spaces. As an application, we study deformation theory of discrete groups acting on trees. There is a canonical way, up to conjugacy, to embed such groups into the automorphism group over the infinite projective space. We verify that for some class of Hamiltonian functions, the deformed groups must always be asymptotically infinite. Boundary behaviour of Hitchin and hypo flows with left-invariant initial data 12:10 Fri 27 Feb, 2015 :: Ingkarni Wardli B20 :: Vicente Cortes :: University of Hamburg Hitchin and hypo flows constitute a system of first-order PDEs for the construction of Ricci-flat Riemannian metrics of special holonomy in dimensions 6, 7 and 8. Assuming that the initial geometric structure is left-invariant, we study whether the resulting Ricci-flat manifolds can be extended in a natural way to complete Ricci-flat manifolds. This talk is based on joint work with Florin Belgun, Marco Freibert and Oliver Goertsches, see arXiv:1405.1866 (math.DG). Tannaka duality for stacks 12:10 Fri 6 Mar, 2015 :: Ingkarni Wardli B20 :: Jack Hall :: Australian National University Traditionally, Tannaka duality is used to reconstruct a group from its representations. I will describe a reformulation of this duality for stacks, which is due to Lurie, and briefly touch on some applications. On the analyticity of CR-diffeomorphisms 12:10 Fri 13 Mar, 2015 :: Engineering North N132 :: Ilya Kossivskiy :: University of Vienna Among the fundamental objects in several complex variables are CR-mappings. CR-mappings naturally occur in complex analysis as boundary values of mappings between domains, and as restrictions of holomorphic mappings to real submanifolds. It was already observed by Cartan that smooth CR-diffeomorphisms between CR-submanifolds in C^N tend to be very regular, i.e., they are restrictions of holomorphic maps. However, in general smooth CR-mappings form a more restrictive class of mappings. Thus, since the inception of CR-geometry, the following general question has been of fundamental importance for the field: Are CR-equivalent real-analytic CR-structures also equivalent holomorphically? In joint work with Lamel, we answer this question in the negative, in any positive CR-dimension and CR-codimension. Our construction is based on a recent dynamical technique in CR-geometry, developed in my earlier work with Shafikov. Singular Pfaffian systems in dimension 6 12:10 Fri 20 Mar, 2015 :: Napier 144 :: Pawel Nurowski :: Center for Theoretical Physics, Polish Academy of Sciences We consider a pair of rank 3 distributions in dimension 6 with some remarkable properties. They define an analog of the celebrated nearly-Kahler structure on the 6-sphere, with the exceptional simple Lie group G2 as a group of symmetries.
In our case the metric associated with the structure is pseudo-Riemannian, of split signature. The 6-manifold has a 5-dimensional boundary with interesting induced geometry. This structure on the boundary has no analog in the Riemannian case. Higher homogeneous bundles 12:10 Fri 27 Mar, 2015 :: Napier 144 :: David Roberts :: University of Adelaide Historically, homogeneous bundles were among the first examples of principal bundles. This talk will cover a general method that gives rise to many homogeneous principal 2-bundles. Topological matter and its K-theory 11:10 Thu 2 Apr, 2015 :: Ingkarni Wardli B18 :: Guo Chuan Thiang :: University of Adelaide The notion of fundamental particles, as well as phases of condensed matter, evolves as new mathematical tools become available to the physicist. I will explain how K-theory provides a powerful language for describing quantum mechanical symmetries, homotopies of their realisations, and topological insulators. Real K-theory is crucial in this framework, and its rich structure is still being explored both physically and mathematically. Higher rank discrete Nahm equations for SU(N) monopoles in hyperbolic space 11:10 Wed 8 Apr, 2015 :: Engineering & Maths EM213 :: Joseph Chan :: University of Melbourne In 1990, Braam and Austin proved that SU(2) magnetic monopoles in hyperbolic space H^3 are the same as solutions of the discrete Nahm equations. I apply equivariant K-theory to the ADHM construction of instantons/holomorphic bundles to extend the Braam-Austin result from SU(2) to SU(N). During its evolution, the matrices of the higher rank discrete Nahm equations jump in dimension, and this behaviour has not been observed in discrete evolution equations before. A secondary result is that the monopole field at the boundary of H^3 determines the monopole. Groups acting on trees 12:10 Fri 10 Apr, 2015 :: Napier 144 :: Anitha Thillaisundaram :: Heinrich Heine University of Duesseldorf From a geometric point of view, branch groups are groups acting spherically transitively on a spherically homogeneous rooted tree. The applications of branch groups reach out to analysis, geometry, combinatorics, and probability. The earliest constructions of branch groups were the Grigorchuk group and the Gupta-Sidki p-groups. Among its many claims to fame, the Grigorchuk group was the first example of a group of intermediate growth (i.e. neither polynomial nor exponential). Here we consider a generalisation of the family of Grigorchuk-Gupta-Sidki groups, and we examine the restricted occurrence of their maximal subgroups. IGA Workshop on Symmetries and Spinors: Interactions Between Geometry and Physics 09:30 Mon 13 Apr, 2015 :: Conference Room 7.15 on Level 7 of the Ingkarni Wardli building :: J. Figueroa-O'Farrill (University of Edinburgh), M. Zabzine (Uppsala University), et al The interplay between physics and geometry has led to stunning advances and enriched the internal structure of each field. This is vividly exemplified in the theory of supergravity, which is a supersymmetric extension of Einstein's relativity theory to the small scales governed by the laws of quantum physics. Sophisticated mathematics is being employed for finding solutions to the generalised Einstein equations and, in return, these solutions provide a rich source of new exotic geometries. This workshop brings together world-leading scientists from both geometry and mathematical physics, as well as young researchers and students, to meet and learn about each other's work.
Spherical T-duality: the non-principal case 12:10 Fri 1 May, 2015 :: Napier 144 :: Mathai Varghese :: University of Adelaide Spherical T-duality is related to M-theory and was introduced in recent joint work with Bouwknegt and Evslin. I will begin by briefly reviewing the case of principal SU(2)-bundles with degree 7 flux, and then focus on the non-principal case for most of the talk, ending with the relation to SUGRA/M-theory. Indefinite spectral triples and foliations of spacetime 12:10 Fri 8 May, 2015 :: Napier 144 :: Koen van den Dungen :: Australian National University Motivated by Dirac operators on Lorentzian manifolds, we propose a new framework to deal with non-symmetric and non-elliptic operators in noncommutative geometry. We provide a definition for indefinite spectral triples, which correspond bijectively with certain pairs of spectral triples. Next, we will show how a special case of indefinite spectral triples can be constructed from a family of spectral triples. In particular, this construction provides a convenient setting to study the Dirac operator on a spacetime with a foliation by spacelike hypersurfaces. This talk is based on joint work with Adam Rennie (arXiv:1503.06916). The twistor equation on Lorentzian Spin^c manifolds 12:10 Fri 15 May, 2015 :: Napier 144 :: Andree Lischewski :: University of Adelaide In this talk I consider a conformally covariant spinor field equation, called the twistor equation, which can be formulated on any Lorentzian Spin^c manifold. Its solutions have become of importance in the study of supersymmetric field theories in recent years and were named "charged conformal Killing spinors". After a short review of conformal Spin^c geometry in Lorentzian signature, I will briefly discuss the emergence of charged conformal Killing spinors in supergravity. I will then focus on special geometric structures related to the twistor equation and use charged conformal Killing spinors in order to establish a link between conformal and CR geometry. Monodromy of the Hitchin system and components of representation varieties 12:10 Fri 29 May, 2015 :: Napier 144 :: David Baraglia :: University of Adelaide Representations of the fundamental group of a compact Riemann surface into a reductive Lie group form a moduli space, called a representation variety. An outstanding problem in topology is to determine the number of components of these varieties. Through a deep result known as non-abelian Hodge theory, representation varieties are homeomorphic to moduli spaces of certain holomorphic objects called Higgs bundles. In this talk I will describe recent joint work with L. Schaposnik computing the monodromy of the Hitchin fibration for Higgs bundle moduli spaces. Our results give a new unified proof of the number of components of several representation varieties. Some approaches toward a stronger Jacobian conjecture 12:10 Fri 5 Jun, 2015 :: Napier 144 :: Tuyen Truong :: University of Adelaide The Jacobian conjecture states that if a polynomial self-map of C^n has invertible Jacobian, then the map has a polynomial inverse. Is it true, false or simply undecidable? In this talk I will propose a conjecture concerning general square matrices with complex coefficients, whose validity implies the Jacobian conjecture. The conjecture is checked in various cases, in particular it is true for generic matrices. Also, a heuristic argument is provided explaining why the conjecture (and thus, also the Jacobian conjecture) should be true. 
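To make the hypothesis and conclusion of the Jacobian conjecture in the last abstract concrete, here is a minimal sketch (my illustration in Python/SymPy, unrelated to the speaker's proposed matrix conjecture) checking an elementary triangular polynomial self-map of C^2 with constant Jacobian determinant and an explicit polynomial inverse.

import sympy as sp

x, y = sp.symbols('x y')

# An elementary (triangular) polynomial self-map F(x, y) = (x + y^2, y).
F = sp.Matrix([x + y**2, y])

# Its Jacobian determinant is identically 1, so F satisfies the hypothesis
# of the Jacobian conjecture.
print(F.jacobian([x, y]).det())  # 1

# Its inverse G(x, y) = (x - y^2, y) is again polynomial, as the conjecture predicts.
G = sp.Matrix([x - y**2, y])
print(sp.simplify(G.subs({x: F[0], y: F[1]}, simultaneous=True)))  # Matrix([[x], [y]])

Maps built by composing such triangular pieces with affine maps are the so-called tame automorphisms; the open question is whether every polynomial map with nonvanishing constant Jacobian determinant is invertible by a polynomial map at all.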
Instantons and Geometric Representation Theory 12:10 Thu 23 Jul, 2015 :: Engineering and Maths EM212 :: Professor Richard Szabo :: Heriot-Watt University We give an overview of the various approaches to studying supersymmetric quiver gauge theories on ALE spaces, and their conjectural connections to two-dimensional conformal field theory via AGT-type dualities. From a mathematical perspective, this is formulated as a relationship between the equivariant cohomology of certain moduli spaces of sheaves on stacks and the representation theory of infinite-dimensional Lie algebras. We introduce an orbifold compactification of the minimal resolution of the A-type toric singularity in four dimensions, and then construct a moduli space of framed sheaves which is conjecturally isomorphic to a Nakajima quiver variety. We apply this construction to derive relations between the equivariant cohomology of these moduli spaces and the representation theory of the affine Lie algebra of type A. Dirac operators and Hamiltonian loop group action 12:10 Fri 24 Jul, 2015 :: Engineering and Maths EM212 :: Yanli Song :: University of Toronto A definition of geometric quantization for compact Hamiltonian G-spaces was given by Bott, as the index of the Spin-c Dirac operator on the manifold. In this talk, I will explain how to generalize this idea to Hamiltonian LG-spaces. Instead of quantizing infinite-dimensional manifolds directly, we use their equivalent finite-dimensional models, the quasi-Hamiltonian G-spaces. By constructing a twisted spinor bundle and a twisted pre-quantum bundle on the quasi-Hamiltonian G-space, we define a Dirac operator whose index is given by positive energy representations of loop groups. A key role in the construction will be played by the algebraic cubic Dirac operator for the loop algebra. If time permits, I will also explain how to prove the quantization commutes with reduction theorem for Hamiltonian LG-spaces under this framework. Workshop on Geometric Quantisation 10:10 Mon 27 Jul, 2015 :: Level 7 conference room Ingkarni Wardli :: Michele Vergne, Weiping Zhang, Eckhard Meinrenken, Nigel Higson and many others Geometric quantisation has been an increasingly active area since before the 1980s, with links to physics, symplectic geometry, representation theory, index theory, and differential geometry and geometric analysis in general. In addition to its relevance as a field on its own, it acts as a focal point for the interaction between all of these areas, which has yielded far-reaching and powerful results. This workshop features a large number of international speakers, who are all well-known for their work in (differential) geometry, representation theory and/or geometric analysis. This is a great opportunity for anyone interested in these areas to meet and learn from some of the top mathematicians in the world. Students are especially welcome. Registration is free. Quantising proper actions on Spin-c manifolds 11:00 Fri 31 Jul, 2015 :: Ingkarni Wardli Level 7 Room 7.15 :: Peter Hochs :: The University of Adelaide For a proper action by a Lie group on a Spin-c manifold (both of which may be noncompact), we study an index of deformations of the Spin-c Dirac operator, acting on the space of spinors invariant under the group action. When applied to spinors that are square integrable transversally to orbits in a suitable sense, the kernel of this operator turns out to be finite-dimensional, under certain hypotheses on the deformation.
This also allows one to show that the index has the quantisation commutes with reduction property (as proved by Meinrenken in the compact symplectic case, and by Paradan-Vergne in the compact Spin-c case), for sufficiently large powers of the determinant line bundle. Furthermore, this result extends to Spin-c Dirac operators twisted by vector bundles. A key ingredient of the arguments is the use of a family of inner products on the Lie algebra, depending on a point in the manifold. This is joint work with Mathai Varghese. Gromov's method of convex integration and applications to minimal surfaces 12:10 Fri 7 Aug, 2015 :: Ingkarni Wardli B17 :: Finnur Larusson :: The University of Adelaide We start by considering an applied problem. You are interested in buying a used car. The price is tempting, but the car has a curious defect, so it is not clear whether you can even take it for a test drive. This problem illustrates the key idea of Gromov's method of convex integration. We introduce the method and some of its many applications, including new applications in the theory of minimal surfaces, and end with a sketch of ongoing joint work with Franc Forstneric. Bilinear L^p estimates for quasimodes 12:10 Fri 14 Aug, 2015 :: Ingkarni Wardli B17 :: Melissa Tacy :: The University of Adelaide Understanding the growth of the product $u\cdot v$ of eigenfunctions, where $\Delta u=-\lambda^{2}u$ and $\Delta v=-\mu^{2}v$, is vital to understanding the regularity properties of non-linear PDE such as the non-linear Schr\"{o}dinger equation. In this talk I will discuss some recent results that I have obtained in collaboration with Zihua Guo and Xiaolong Han which provide a full range of estimates of the form $$||uv||_{L^{p}}\leq{}G(\lambda,\mu)||u||_{L^{2}}||v||_{L^{2}}$$ where $u$ and $v$ are approximate eigenfunctions of the Laplacian. We obtain these results by recasting the problem as a more general related semiclassical problem. Equivariant bundle gerbes 12:10 Fri 21 Aug, 2015 :: Ingkarni Wardli B17 :: Michael Murray :: The University of Adelaide I will present the definitions of strong and weak group actions on a bundle gerbe and calculate the strongly equivariant class of the basic bundle gerbe on a unitary group. This is joint work with David Roberts, Danny Stevenson and Raymond Vozzo and forms part of arXiv:1506.07931. Vanishing lattices and moduli spaces 12:10 Fri 28 Aug, 2015 :: Ingkarni Wardli B17 :: David Baraglia :: The University of Adelaide Vanishing lattices are symplectic analogues of root systems. As with root systems, they admit a classification in terms of certain Dynkin diagrams (not the usual ones from Lie theory). In this talk I will discuss this classification and if there is time I will outline my work (in progress) showing that the monodromy of the SL(n,C) Hitchin fibration is essentially a vanishing lattice. Integrability conditions for the Grushin operators 12:10 Fri 4 Sep, 2015 :: Ingkarni Wardli B17 :: Michael Eastwood :: The University of Adelaide Fix a non-negative integer k and consider the vector fields in the plane X = d/dx and Y = x^k d/dy. A smooth function f(x,y) is locally constant if and only if it is annihilated by the k^th Grushin operator f\mapsto(Xf,Yf). What about the range of this operator?
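As a small worked illustration of the range question at the end of the last abstract (my computation, standard calculus rather than the speaker's results): if g = Xf = f_x and h = Yf = x^k f_y for some smooth f, then Xh - Yg = d/dx(x^k f_y) - x^k d/dy(f_x) = k x^{k-1} f_y, so any pair (g, h) in the range of the k-th Grushin operator satisfies the compatibility condition x (Xh - Yg) = k h. For k = 0 this reduces to the familiar condition Xh = Yg for (g, h) to be locally a gradient.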
T-duality and bulk-boundary correspondence 12:10 Fri 11 Sep, 2015 :: Ingkarni Wardli B17 :: Guo Chuan Thiang :: The University of Adelaide Bulk-boundary correspondences in physics can be modelled as topological boundary homomorphisms in K-theory, associated to an extension of a "bulk algebra" by a "boundary algebra". In joint work with V. Mathai, such bulk-boundary maps are shown to T-dualize into simple restriction maps in a large number of cases, generalizing what the Fourier transform does for ordinary functions. I will give examples, involving both complex and real K-theory, and explain how these results may be used to study topological phases of matter and D-brane charges in string theory. Base change and K-theory 12:10 Fri 18 Sep, 2015 :: Ingkarni Wardli B17 :: Hang Wang :: The University of Adelaide Tempered representations of an algebraic group can be classified by K-theory of the corresponding group C^*-algebra. We use Archimedean base change between Langlands parameters of real and complex algebraic groups to compare K-theory of the corresponding C^*-algebras of groups over different number fields. This is work in progress with K.F. Chao. T-dual noncommutative principal torus bundles 12:10 Fri 25 Sep, 2015 :: Engineering Maths Building EMG07 :: Keith Hannabuss :: University of Oxford Since the work of Mathai and Rosenberg it is known that the T-dual of a principal torus bundle can be described as a noncommutative torus bundle. This talk will look at a simple example of two T-dual bundles both of which are noncommutative. Then it will discuss a strategy for extending this to more general noncommutative bundles. Analytic complexity of bivariate holomorphic functions and cluster trees 12:10 Fri 2 Oct, 2015 :: Ingkarni Wardli B17 :: Timur Sadykov :: Plekhanov University, Moscow The Kolmogorov-Arnold theorem yields a representation of a multivariate continuous function in terms of a composition of functions which depend on at most two variables. In the analytic case, understanding the complexity of such a representation naturally leads to the notion of the analytic complexity of (a germ of) a bivariate multi-valued analytic function. According to Beloshapka's local definition, the order of complexity of any univariate function is equal to zero while the n-th complexity class is defined recursively to consist of functions of the form a(b(x,y)+c(x,y)), where a is a univariate analytic function and b and c belong to the (n-1)-th complexity class. Such a representation is meant to be valid for suitable germs of multi-valued holomorphic functions. A randomly chosen bivariate analytic function will most likely have infinite analytic complexity. However, for a number of important families of special functions of mathematical physics their complexity is finite and can be computed or estimated. Using this, we introduce the notion of the analytic complexity of a binary tree, in particular, a cluster tree, and investigate its properties. Real Lie Groups and Complex Flag Manifolds 12:10 Fri 9 Oct, 2015 :: Ingkarni Wardli B17 :: Joseph A. Wolf :: University of California, Berkeley Let G be a complex simple direct limit group. Let G_R be a real form of G that corresponds to an hermitian symmetric space. I'll describe the corresponding bounded symmetric domain in the context of the Borel embedding, Cayley transforms, and the Bergman-Shilov boundary. Let Q be a parabolic subgroup of G.
In finite dimensions this means that G/Q is a complex projective variety, or equivalently has a Kaehler metric invariant under a maximal compact subgroup of G. Then I'll show just how the bounded symmetric domains describe cycle spaces for open G_R orbits on G/Q. These cycle spaces include the complex bounded symmetric domains. In finite dimensions they are tightly related to moduli spaces for compact Kaehler manifolds and to representations of semisimple Lie groups; in infinite dimensions there are more problems than answers. Finally, time permitting, I'll indicate how some of this goes over to real and to quaternionic bounded symmetric domains. 12:10 Fri 16 Oct, 2015 :: Ingkarni Wardli B17 :: Steve Rosenberg :: Boston University Not much is known about the topology of the diffeomorphism group Diff(M) of manifolds M of dimension four and higher. We'll show that for a class of manifolds of dimension 4k+1, Diff(M) has infinite fundamental group. This is proved by translating the problem into a question about Chern-Simons classes on the tangent bundle to the loop space LM. To build the CS classes, we use a family of metrics on LM associated to a Riemannian metric on M. The curvature of these metrics takes values in an algebra of pseudodifferential operators. The main technical step in the CS construction is to replace the ordinary matrix trace in finite dimensions with the Wodzicki residue, the unique trace on this algebra. The moral is that some techniques in finite dimensional Riemannian geometry can be extended to some examples in infinite dimensional geometry. Quasi-isometry classification of certain hyperbolic Coxeter groups 11:00 Fri 23 Oct, 2015 :: Ingkarni Wardli Conference Room 7.15 (Level 7) :: Anne Thomas :: University of Sydney Let Gamma be a finite simple graph with vertex set S. The associated right-angled Coxeter group W is the group with generating set S, so that s^2 = 1 for all s in S and st = ts if and only if s and t are adjacent vertices in Gamma. Moussong proved that the group W is hyperbolic in the sense of Gromov if and only if Gamma has no "empty squares". We consider the quasi-isometry classification of such Coxeter groups using the local cut point structure of their visual boundaries. In particular, we find an algorithm for computing Bowditch's JSJ tree for a class of these groups, and prove that two such groups are quasi-isometric if and only if their JSJ trees are the same. This is joint work with Pallavi Dani (Louisiana State University). Covariant model structures and simplicial localization 12:10 Fri 30 Oct, 2015 :: Ingkarni Wardli B17 :: Danny Stevenson :: The University of Adelaide This talk will describe some aspects of the theory of quasi-categories, in particular the notion of left fibration and the allied covariant model structure. If B is a simplicial set, then I will describe some Quillen equivalences relating the covariant model structure on simplicial sets over B to a certain localization of simplicial presheaves on the simplex category of B. I will show how this leads to a new description of Lurie's simplicial rigidification functor as a hammock localization and describe some applications to Lurie's theory of straightening and unstraightening functors. Locally homogeneous pp-waves 12:10 Fri 6 Nov, 2015 :: Ingkarni Wardli B17 :: Thomas Leistner :: The University of Adelaide For a certain type of Lorentzian manifolds, the so-called pp-waves, we study the conditions implied on the curvature by local homogeneity of the metric.
We show that under some mild genericity assumptions, these conditions are quite strong, forcing the pp-wave to be a plane wave, and yielding a classification of homogeneous pp-waves. This also leads to a generalisation of a classical result by Jordan, Ehlers and Kundt about vacuum pp-waves in dimension 4 to arbitrary dimensions. Several examples show that our genericity assumptions are essential. This is joint work with W. Globke. Weak globularity in homotopy theory and higher category theory 12:10 Thu 12 Nov, 2015 :: Ingkarni Wardli B19 :: Simona Paoli :: University of Leicester Spaces and homotopy theories are fundamental objects of study of algebraic topology. One way to study these objects is to break them into smaller components with the Postnikov decomposition. To describe such decomposition purely algebraically we need higher categorical structures. We describe one approach to modelling these structures based on a new paradigm to build weak higher categories, which is the notion of weak globularity. We describe some of their connections to both homotopy theory and higher category theory. Oka principles and the linearization problem 12:10 Fri 8 Jan, 2016 :: Engineering North N132 :: Gerald Schwarz :: Brandeis University Let G be a reductive complex Lie group (e.g., SL(n,C)) and let X and Y be Stein manifolds (closed complex submanifolds of some C^n). Suppose that G acts freely on X and Y. Then there are quotient Stein manifolds X/G and Y/G and quotient mappings p_X:X-> X/G and p_Y: Y-> Y/G such that X and Y are principal G-bundles over X/G and Y/G. Let us suppose that Q=X/G ~= Y/G so that X and Y have the same quotient Q. A map Phi: X\to Y of principal bundles (over Q) is simply an equivariant continuous map commuting with the projections. That is, Phi(gx)=g Phi(x) for all g in G and x in X, and p_X=p_Y o Phi. The famous Oka Principle of Grauert says that any Phi as above embeds in a continuous family Phi_t: X -> Y, t in [0,1], where Phi_0=Phi, all the Phi_t satisfy the same conditions as Phi does and Phi_1 is holomorphic. This is rather amazing. We consider the case where G does not necessarily act freely on X and Y. There is still a notion of quotient and quotient mappings p_X: X-> X//G and p_Y: Y-> Y//G where X//G and Y//G are now Stein spaces and parameterize the closed G-orbits in X and Y. We assume that Q~= X//G~= Y//G and that we have a continuous equivariant Phi such that p_X=p_Y o Phi. We find conditions under which Phi embeds into a continuous family Phi_t such that Phi_1 is holomorphic. We give an application to the Linearization Problem. Let G act holomorphically on C^n. When is there a biholomorphic map Phi:C^n -> C^n such that Phi^{-1} o g o Phi in GL(n,C) for all g in G? We find a condition which is necessary and sufficient for "most" G-actions. This is joint work with F. Kutzschebauch and F. Larusson. A fibered density property and the automorphism group of the spectral ball 12:10 Fri 15 Jan, 2016 :: Engineering North N132 :: Frank Kutzschebauch :: University of Bern The spectral ball is defined as the set of complex n by n matrices whose eigenvalues are all less than 1 in absolute value. Its group of holomorphic automorphisms has been studied over many decades in several papers and a precise conjecture about its structure has been formulated. In dimension 2 this conjecture was recently disproved by Kosinski. We not only disprove the conjecture in all dimensions but also give the best possible description of the automorphism group. 
Namely we explain how the invariant-theoretic quotient map divides the automorphism group of the spectral ball into a finite dimensional part of symmetries which lift from the quotient and an infinite dimensional part which leaves the fibration invariant. We prove a precise statement as to how hopelessly huge this latter part is. This is joint work with R. Andrist. Quantisation of Hitchin's moduli space 12:10 Fri 22 Jan, 2016 :: Engineering North N132 :: Siye Wu :: National Tsing Hua University In this talk, I construct prequantum line bundles on Hitchin's moduli spaces of orientable and non-orientable surfaces and study the geometric quantisation and quantisation via branes by complexification of the moduli spaces. A long C^2 without holomorphic functions 12:10 Fri 29 Jan, 2016 :: Engineering North N132 :: Franc Forstneric :: University of Ljubljana For every integer n>1 we construct a complex manifold of dimension n which is exhausted by an increasing sequence of biholomorphic images of C^n (i.e., a long C^n), but does not admit any nonconstant holomorphic functions. We also introduce new biholomorphic invariants of a complex manifold, the stable core and the strongly stable core, and we prove that every compact strongly pseudoconvex and polynomially convex domain B in C^n is the strongly stable core of a long C^n; in particular, non-equivalent domains give rise to non-equivalent long C^n's. Thus, for any n>1 there exist uncountably many pairwise non-equivalent long C^n's. These results answer several long-standing open questions. (Joint work with Luka Boc Thaler.) A fixed point theorem on noncompact manifolds 12:10 Fri 12 Feb, 2016 :: Ingkarni Wardli B21 :: Peter Hochs :: University of Adelaide / Radboud University For an elliptic operator on a compact manifold acted on by a compact Lie group, the Atiyah-Segal-Singer fixed point formula expresses its equivariant index in terms of data on fixed point sets of group elements. This can for example be used to prove Weyl's character formula. We extend the definition of the equivariant index to noncompact manifolds, and prove a generalisation of the Atiyah-Segal-Singer formula, for group elements with compact fixed point sets. In one example, this leads to a relation with characters of discrete series representations of semisimple Lie groups. (This is joint work with Hang Wang.) T-duality for elliptic curve orientifolds 12:10 Fri 4 Mar, 2016 :: Ingkarni Wardli B17 :: Jonathan Rosenberg :: University of Maryland Orientifold string theories are quantum field theories based on the geometry of a space with an involution. T-dualities are certain relationships between such theories that look different on the surface but give rise to the same observable physics. In this talk I will not assume any knowledge of physics but will concentrate on the associated geometry, in the case where the underlying space is a (complex) elliptic curve and the involution is either holomorphic or anti-holomorphic. The results blend algebraic topology and algebraic geometry. This is mostly joint work with Chuck Doran and Stefan Mendez-Diez. The parametric h-principle for minimal surfaces in R^n and null curves in C^n 12:10 Fri 11 Mar, 2016 :: Ingkarni Wardli B17 :: Finnur Larusson :: University of Adelaide I will describe new joint work with Franc Forstneric (arXiv:1602.01529).
This work brings together four diverse topics from differential geometry, holomorphic geometry, and topology; namely the theory of minimal surfaces, Oka theory, convex integration theory, and the theory of absolute neighborhood retracts. Our goal is to determine the rough shape of several infinite-dimensional spaces of maps of geometric interest. It turns out that they all have the same rough shape. Expanding maps 12:10 Fri 18 Mar, 2016 :: Eng & Maths EM205 :: Andy Hammerlindl :: Monash University Consider a function from the circle to itself such that the derivative is greater than one at every point. Examples are maps of the form f(x) = mx for integers m > 1. In some sense, these are the only possible examples. This fact and the corresponding question for maps on higher dimensional manifolds was a major motivation for Gromov to develop pioneering results in the field of geometric group theory. In this talk, I'll give an overview of this and other results relating dynamical systems to the geometry of the manifolds on which they act and (time permitting) talk about my own work in the area. Counting periodic points of plane Cremona maps 12:10 Fri 1 Apr, 2016 :: Eng & Maths EM205 :: Tuyen Truong :: University of Adelaide In this talk, I will present recent results, joint with Tien-Cuong Dinh and Viet-Anh Nguyen, on counting periodic points of plane Cremona maps (i.e. birational maps of P^2). The tools used include a Lefschetz fixed point formula of Saito, Iwasaki and Uehara for birational maps of surfaces whose fixed point set may contain curves; a bound on the arithmetic genus of curves of periodic points by Diller, Jackson and Sommese; a result by Diller, Dujardin and Guedj on invariant (1,1) currents of meromorphic maps of compact Kahler surfaces; and a theory developed recently by Dinh and Sibony for non-proper intersections of varieties. Among new results in the paper, we give a complete characterisation of when two positive closed (1,1) currents on a compact Kahler surface behave nicely from the viewpoint of Dinh and Sibony's theory, even if their wedge intersection may not be well-defined with respect to the classical pluripotential theory. Time allowing, I will present some generalisations to meromorphic maps (including an upper bound for the number of isolated periodic points which is sometimes overlooked in the literature) and open questions. Geometric analysis of gap-labelling 12:10 Fri 8 Apr, 2016 :: Eng & Maths EM205 :: Mathai Varghese :: University of Adelaide Using an earlier result, joint with Quillen, I will formulate a gap labelling conjecture for magnetic Schrodinger operators with smooth aperiodic potentials on Euclidean space. Results in low dimensions will be given, and the formulation of the same problem for certain non-Euclidean spaces will be given if time permits. This is ongoing joint work with Moulay Benameur. Sard Theorem for the endpoint map in sub-Riemannian manifolds 12:10 Fri 29 Apr, 2016 :: Eng & Maths EM205 :: Alessandro Ottazzi :: University of New South Wales Sub-Riemannian geometries occur in several areas of pure and applied mathematics, including harmonic analysis, PDEs, control theory, metric geometry, geometric group theory, and neurobiology. We introduce sub-Riemannian manifolds and give some examples. We then discuss some of the open problems, and in particular we focus on the Sard Theorem for the endpoint map, which is related to the study of length minimizers. Finally, we consider some recent results obtained in collaboration with E. Le Donne, R.
Montgomery, P. Pansu and D. Vittone. How to count Betti numbers 12:10 Fri 6 May, 2016 :: Eng & Maths EM205 :: David Baraglia :: University of Adelaide I will begin this talk by showing how to obtain the Betti numbers of certain smooth complex projective varieties by counting points over a finite field. For singular or non-compact varieties this motivates us to consider the "virtual Hodge numbers" encoded by the "Hodge-Deligne polynomial", a refinement of the topological Euler characteristic. I will then discuss the computation of Hodge-Deligne polynomials for certain singular character varieties (i.e. moduli spaces of flat connections). Harmonic analysis of Hodge-Dirac operators 12:10 Fri 13 May, 2016 :: Eng & Maths EM205 :: Pierre Portal :: Australian National University When the metric on a Riemannian manifold is perturbed in a rough (merely bounded and measurable) manner, do basic estimates involving the Hodge Dirac operator $D = d+d^*$ remain valid? Even in the model case of a perturbation of the euclidean metric on $\mathbb{R}^n$, this is a difficult question. For instance, the fact that the $L^2$ estimate $\|Du\|_2 \sim \|\sqrt{D^{2}}u\|_2$ remains valid for perturbed versions of $D$ was a famous conjecture made by Kato in 1961 and solved, positively, in a ground breaking paper of Auscher, Hofmann, Lacey, McIntosh and Tchamitchian in 2002. In the past fifteen years, a theory has emerged from the solution of this conjecture, making rough perturbation problems much more tractable. In this talk, I will give a general introduction to this theory, and present one of its latest results: a flexible approach to $L^p$ estimates for the holomorphic functional calculus of $D$. This is joint work with D. Frey (Delft) and A. McIntosh (ANU). Smooth mapping orbifolds 12:10 Fri 20 May, 2016 :: Eng & Maths EM205 :: David Roberts :: University of Adelaide It is well-known that orbifolds can be represented by a special kind of Lie groupoid, namely those that are étale and proper. Lie groupoids themselves are one way of presenting certain nice differentiable stacks. In joint work with Ray Vozzo we have constructed a presentation of the mapping stack Hom(disc(M),X), for M a compact manifold and X a differentiable stack, by a Fréchet-Lie groupoid. This uses an apparently new result in global analysis about the map C^\infty(K_1,Y) \to C^\infty(K_2,Y) induced by restriction along the inclusion K_2 \to K_1, for certain compact K_1,K_2. We apply this to the case of X being an orbifold to show that the mapping stack is an infinite-dimensional orbifold groupoid. We also present results about mapping groupoids for bundle gerbes. Some free boundary value problems in mean curvature flow and fully nonlinear curvature flows 12:10 Fri 27 May, 2016 :: Eng & Maths EM205 :: Valentina Wheeler :: University of Wollongong In this talk we present an overview of the current research in mean curvature flow and fully nonlinear curvature flows with free boundaries, with particular focus on our own results. Firstly we consider the scenario of a mean curvature flow solution with a ninety-degree angle condition on a fixed hypersurface in Euclidean space, that we call the contact hypersurface. We prove that under restrictions on either the initial hypersurface (such as rotational symmetry) or restrictions on the contact hypersurface the flow exists for all times and converges to a self-similar solution. We also discuss the possibility of a curvature singularity appearing on the free boundary contained in the contact hypersurface. 
We extend some of these results to the setting of a hypersurface evolving in its normal direction with speed given by a fully nonlinear functional of the principal curvatures. On the Strong Novikov Conjecture for Locally Compact Groups in Low Degree Cohomology Classes 12:10 Fri 3 Jun, 2016 :: Eng & Maths EM205 :: Yoshiyasu Fukumoto :: Kyoto University The main result I will discuss is non-vanishing of the image of the index map from the G-equivariant K-homology of a G-manifold X to the K-theory of the C*-algebra of the group G. The action of G on X is assumed to be proper and cocompact. Under the assumption that the Kronecker pairing of a K-homology class with a low-dimensional cohomology class is non-zero, we prove that the image of this class under the index map is non-zero. Neither discreteness of the locally compact group G nor freeness of the action of G on X is required. The case of free actions of discrete groups was considered earlier by B. Hanke and T. Schick. Algebraic structures associated to Brownian motion on Lie groups 13:10 Thu 16 Jun, 2016 :: Ingkarni Wardli B17 :: Steve Rosenberg :: University of Adelaide / Boston University In (1+1)-d TQFT, products and coproducts are associated to pair-of-pants decompositions of Riemann surfaces. We consider a toy model in dimension (0+1) consisting of specific broken paths in a Lie group. The products and coproducts are constructed by a Brownian motion average of holonomy along these paths with respect to a connection on an auxiliary bundle. In the trivial case over the torus, we (seem to) recover the Hopf algebra structure on the symmetric algebra. In the general case, we (seem to) get deformations of this Hopf algebra. This is a preliminary report on joint work with Michael Murray and Raymond Vozzo. Chern-Simons invariants of Seifert manifolds via Loop spaces 14:10 Tue 28 Jun, 2016 :: Ingkarni Wardli B17 :: Ryan Mickler :: Northeastern University Over the past 30 years the Chern-Simons functional for connections on G-bundles over three-manifolds has led to a deep understanding of the geometry of three-manifolds, as well as knot invariants such as the Jones polynomial. Here we study this functional for three-manifolds that are topologically given as the total space of a principal circle bundle over a compact Riemann surface base, which are known as Seifert manifolds. We show that on such manifolds the Chern-Simons functional reduces to a particular gauge-theoretic functional on the 2d base, that describes a gauge theory of connections on an infinite dimensional bundle over this base with structure group given by the level-k affine central extension of the loop group LG. We show that this formulation gives a new understanding of results of Beasley-Witten on the computability of quantum Chern-Simons invariants of these manifolds as well as knot invariants for knots that wrap a single fiber of the circle bundle. A central tool in our analysis is the Caloron correspondence of Murray-Stevenson-Vozzo. Twists over etale groupoids and twisted vector bundles 12:10 Fri 22 Jul, 2016 :: Ingkarni Wardli B18 :: Elizabeth Gillaspy :: University of Colorado, Boulder Given a twist over an etale groupoid, one can construct an associated C*-algebra which carries a good deal of geometric and physical meaning; for example, the K-theory group of this C*-algebra classifies D-brane charges in string theory. Twisted vector bundles, when they exist, give rise to particularly important elements in this K-theory group.
In this talk, we will explain how to use the classifying space of the etale groupoid to construct twisted vector bundles, under some mild hypotheses on the twist and the classifying space. My hope is that this talk will be accessible to a broad audience; in particular, no prior familiarity with groupoids, their twists, or the associated C*-algebras will be assumed. This is joint work with Carla Farsi. Holomorphic Flexibility Properties of Spaces of Elliptic Functions 12:10 Fri 29 Jul, 2016 :: Ingkarni Wardli B18 :: David Bowman :: University of Adelaide The set of meromorphic functions on an elliptic curve naturally possesses the structure of a complex manifold. The component of degree 3 functions is 6-dimensional and enjoys several interesting complex-analytic properties that make it, loosely speaking, the opposite of a hyperbolic manifold. Our main result is that this component has a 54-sheeted branched covering space that is an Oka manifold. Etale ideas in topological and algebraic dynamical systems 12:10 Fri 5 Aug, 2016 :: Ingkarni Wardli B18 :: Tuyen Truong :: University of Adelaide In etale topology, instead of considering open subsets of a space, we consider etale neighbourhoods lying over these open subsets. In this talk, I define an etale analog of dynamical systems: to understand a dynamical system f:(X,\Omega )->(X,\Omega ), we consider other dynamical systems lying over it. I then propose to use this to resolve the following two questions: Question 1: What should be the topological entropy of a dynamical system (f,X,\Omega ) when (X,\Omega ) is not a compact space? Question 2: What is the relation between topological entropy of a rational map or correspondence (over a field of arbitrary characteristic) to the pullback on cohomology groups and algebraic cycles? Calculus on symplectic manifolds 12:10 Fri 12 Aug, 2016 :: Ingkarni Wardli B18 :: Mike Eastwood :: University of Adelaide One can use the symplectic form to construct an elliptic complex replacing the de Rham complex. Then, under suitable curvature conditions, one can form coupled versions of this complex. Finally, on complex projective space, these constructions give rise to a series of elliptic complexes with geometric consequences for the Fubini-Study metric and its X-ray transform. This talk, which will start from scratch, is based on the work of many authors but, especially, current joint work with Jan Slovak. Product Hardy spaces associated to operators with heat kernel bounds on spaces of homogeneous type 12:10 Fri 19 Aug, 2016 :: Ingkarni Wardli B18 :: Lesley Ward :: University of South Australia Much effort has been devoted to generalizing the Calderon-Zygmund theory in harmonic analysis from Euclidean spaces to metric measure spaces, or spaces of homogeneous type. Here the underlying space R^n with Euclidean metric and Lebesgue measure is replaced by a set X with general metric or quasi-metric and a doubling measure. Further, one can replace the Laplacian operator that underpins the Calderon-Zygmund theory by more general operators L satisfying heat kernel estimates. I will present recent joint work with P. Chen, X.T. Duong, J. Li and L.X. Yan along these lines. We develop the theory of product Hardy spaces H^p_{L_1,L_2}(X_1 x X_2). Singular vector bundles and topological semi-metals 12:10 Fri 2 Sep, 2016 :: Ingkarni Wardli B18 :: Guo Chuan Thiang :: University of Adelaide The elusive Weyl fermion was recently realised as a quasiparticle excitation of a topological semimetal.
I will explain what a semi-metal is, and the precise mathematical sense in which it can be "topological", in the sense of the general theory of topological insulators. This involves understanding vector bundles with singularities, with the aid of Mayer-Vietoris principles, gerbes, and generalised degree theory. Geometry of pseudodifferential algebra bundles 12:10 Fri 16 Sep, 2016 :: Ingkarni Wardli B18 :: Mathai Varghese :: University of Adelaide I will motivate the construction of pseudodifferential algebra bundles arising in index theory, and also outline the construction of general pseudodifferential algebra bundles (and the associated sphere bundles), showing that there are many that are purely infinite dimensional that do not come from usual constructions in index theory. I will also discuss characteristic classes of such bundles. This is joint work with Richard Melrose. Hilbert schemes of points of some surfaces and quiver representations 12:10 Fri 23 Sep, 2016 :: Ingkarni Wardli B17 :: Ugo Bruzzo :: International School for Advanced Studies, Trieste Hilbert schemes of points on the total spaces of the line bundles O(-n) on P1 (desingularizations of toric singularities of type (1/n)(1,1)) can be given an ADHM description, and as a result, they can be realized as varieties of quiver representations. Energy quantisation for the Willmore functional 11:10 Fri 7 Oct, 2016 :: Ligertwood 314 Flinders Room :: Yann Bernard :: Monash University We prove a bubble-neck decomposition and an energy quantisation result for sequences of Willmore surfaces immersed into R^(m>=3) with uniformly bounded energy and non-degenerating conformal structure. We deduce the strong compactness (modulo the action of the Moebius group) of closed Willmore surfaces of a given genus below some energy threshold. This is joint work with Tristan Riviere (ETH Zuerich). Character Formula for Discrete Series 12:10 Fri 14 Oct, 2016 :: Ingkarni Wardli B18 :: Hang Wang :: University of Adelaide The Weyl character formula describes characters of irreducible representations of compact Lie groups. This formula can be obtained using geometric methods, for example, from the Atiyah-Bott fixed point theorem or the Atiyah-Segal-Singer index theorem. The Harish-Chandra character formula, the noncompact analogue of the Weyl character formula, can also be studied from the point of view of index theory. We apply orbital integrals on the K-theory of the Harish-Chandra Schwartz algebra of a semisimple Lie group G, and then use geometric methods to deduce Harish-Chandra character formulas for discrete series representations of G. This is work in progress with Peter Hochs. Parahoric bundles, invariant theory and the Kazhdan-Lusztig map 12:10 Fri 21 Oct, 2016 :: Ingkarni Wardli B18 :: David Baraglia :: University of Adelaide In this talk I will introduce the notion of parahoric groups, a loop group analogue of parabolic subgroups. I will also discuss a global version of this, namely parahoric bundles on a complex curve. This leads us to a problem concerning the behaviour of invariant polynomials on the dual of the Lie algebra, a kind of "parahoric invariant theory". The key to solving this problem turns out to be the Kazhdan-Lusztig map, which assigns to each nilpotent orbit in a semisimple Lie algebra a conjugacy class in the Weyl group. Based on joint work with Masoud Kamgarpour and Rohith Varma.
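To illustrate the objects entering the Kazhdan-Lusztig map, here is a standard type A example; these are textbook facts and not the construction discussed in the talk. For the Lie algebra sl_n, the invariant polynomials are generated by the coefficients of the characteristic polynomial, $$\det(tI-x)=t^{n}+p_{2}(x)\,t^{n-2}+\cdots+p_{n}(x),$$ nilpotent orbits are parametrised by partitions of n (the Jordan type), and conjugacy classes in the Weyl group S_n are also parametrised by partitions of n (the cycle type). So in type A both sides of the map are indexed by the same combinatorial data; the precise correspondence, and its role in parahoric invariant theory, is what the talk describes.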
Toroidal Soap Bubbles: Constant Mean Curvature Tori in S^3 and R^3 12:10 Fri 28 Oct, 2016 :: Ingkarni Wardli B18 :: Emma Carberry :: University of Sydney Constant mean curvature (CMC) tori in S^3, R^3 or H^3 are in bijective correspondence with spectral curve data, consisting of a hyperelliptic curve, a line bundle on this curve and some additional data, which in particular determines the relevant space form. This point of view is particularly relevant for considering moduli-space questions, such as the prevalence of tori amongst CMC planes and whether tori can be deformed. I will address these questions for the spherical and Euclidean cases, using Whitham deformations. Introduction to Lorentz Geometry: Riemann vs Lorentz 12:10 Fri 18 Nov, 2016 :: Engineering North N132 :: Abdelghani Zeghib :: Ecole Normale Superieure de Lyon The goal is to compare Riemannian and Lorentzian geometries and see what one loses and wins when going from Riemann to Lorentz. Essentially, one loses compactness and ellipticity, but wins causality structure and mathematical and physical situations when natural Lorentzian metrics emerge. Leavitt path algebras 12:10 Fri 2 Dec, 2016 :: Engineering & Math EM213 :: Roozbeh Hazrat :: Western Sydney University From a directed graph one can generate an algebra which captures the movements along the graph. One such class of algebras is that of Leavitt path algebras. Despite being introduced only 10 years ago, Leavitt path algebras have arisen in a variety of different contexts as diverse as analysis, symbolic dynamics, noncommutative geometry and representation theory. In fact, Leavitt path algebras are the algebraic counterpart of graph C*-algebras, a theory which has become an area of intensive research globally. There are striking parallels between these two theories. Even more surprisingly, one cannot (yet) obtain the results in one theory as a consequence of the other; the statements look the same, but the techniques used to prove them are quite different (as the names suggest, one uses Algebra and the other Analysis). All of this suggests that there might be a bridge between Algebra and Analysis yet to be uncovered. In this talk, we introduce Leavitt path algebras and try to classify them by means of (graded) Grothendieck groups. We will ask nice questions! An equivariant parametric Oka principle for bundles of homogeneous spaces 12:10 Fri 3 Mar, 2017 :: Napier 209 :: Finnur Larusson :: University of Adelaide I will report on new joint work with Frank Kutzschebauch and Gerald Schwarz (arXiv:1612.07372). Under certain conditions, every continuous section of a holomorphic fibre bundle can be deformed to a holomorphic section. In fact, the inclusion of the space of holomorphic sections into the space of continuous sections is a weak homotopy equivalence. What if a complex Lie group acts on the bundle and its sections? We have proved an analogous result for equivariant sections. The result has a wide scope. If time permits, I will describe some interesting special cases and mention two applications. Diffeomorphisms of discs, harmonic spinors and positive scalar curvature 11:10 Fri 17 Mar, 2017 :: Engineering Nth N218 :: Diarmuid Crowley :: University of Melbourne Let Diff(D^k) be the space of diffeomorphisms of the k-disc fixing the boundary pointwise. In this talk I will show, for k > 5, that the homotopy groups \pi_*Diff(D^k) have non-zero 8-periodic 2-torsion detected in real K-theory.
I will then discuss applications for spin manifolds M of dimension 6 or greater: 1) Our results feed into arguments of Hitchin which now show that M admits a metric with a harmonic spinor. 2) If non-empty, the space of positive scalar curvature metrics on M has non-zero 8-periodic 2-torsion in its homotopy groups which is detected in real K-theory. This is part of joint work with Thomas Schick and Wolfgang Steimle. What is index theory? 12:10 Tue 21 Mar, 2017 :: Ingkarni Wardli 5.57 :: Dr Peter Hochs :: School of Mathematical Sciences Index theory is a link between topology, geometry and analysis. A typical theorem in index theory says that two numbers are equal: an analytic index and a topological index. The first theorem of this kind was the index theorem of Atiyah and Singer, which they proved in 1963. Index theorems have many applications in maths and physics. For example, they can be used to prove that a differential equation must have a solution. Also, they imply that the topology of a space like a sphere or a torus determines in what ways it can be curved. Topology is the study of geometric properties that do not change if we stretch or compress a shape without cutting or glueing. Curvature does change when we stretch something out, so it is surprising that topology can say anything about curvature. Index theory has many surprising consequences like this. Minimal surfaces and complex analysis 12:10 Fri 24 Mar, 2017 :: Napier 209 :: Antonio Alarcon :: University of Granada A surface in the Euclidean space R^3 is said to be minimal if it is locally area-minimizing, meaning that every point in the surface admits a compact neighborhood with the least area among all the surfaces with the same boundary. Although the origin of minimal surfaces is in physics, since they can be realized locally as soap films, this family of surfaces lies in the intersection of many fields of mathematics. In particular, complex analysis in one and several variables plays a fundamental role in the theory. In this lecture we will discuss the influence of complex analysis in the study of minimal surfaces. Geometric structures on moduli spaces 12:10 Fri 31 Mar, 2017 :: Napier 209 :: Nicholas Buchdahl :: University of Adelaide Moduli spaces are used to classify various kinds of objects, often arising from solutions of certain differential equations on manifolds; for example, the complex structures on a compact surface or the anti-self-dual Yang-Mills equations on an oriented smooth 4-manifold. Sometimes these moduli spaces carry important information about the underlying manifold, manifested most clearly in the results of Donaldson and others on the topology of smooth 4-manifolds. It is also the case that these moduli spaces themselves carry interesting geometric structures; for example, the Weil-Petersson metric on moduli spaces of compact Riemann surfaces, exploited to great effect by Maryam Mirzakhani. In this talk, I shall elaborate on the theme of geometric structures on moduli spaces, with particular focus on some recent-ish work done in conjunction with Georg Schumacher. K-types of tempered representations 12:10 Fri 7 Apr, 2017 :: Napier 209 :: Peter Hochs :: University of Adelaide Tempered representations of a reductive Lie group G are the irreducible unitary representations one needs in the Plancherel decomposition of L^2(G). They are relevant to harmonic analysis because of this, and also occur in the Langlands classification of the larger class of admissible representations.
If K in G is a maximal compact subgroup, then there is a considerable amount of information in the restriction of a tempered representation to K. In joint work with Yanli Song and Shilin Yu, we give a geometric expression for the decomposition of such a restriction into irreducibles. The multiplicities of these irreducibles are expressed as indices of Dirac operators on reduced spaces of a coadjoint orbit of G corresponding to the representation. These reduced spaces are Spin-c analogues of reduced spaces in symplectic geometry, defined in terms of moment maps that represent conserved quantities. This result involves a Spin-c version of the quantisation commutes with reduction principle for noncompact manifolds. For discrete series representations, this was done by Paradan in 2003. Poisson-Lie T-duality and integrability 11:10 Thu 13 Apr, 2017 :: Engineering & Math EM213 :: Ctirad Klimcik :: Aix-Marseille University, Marseille The Poisson-Lie T-duality relates sigma-models with target spaces symmetric with respect to mutually dual Poisson-Lie groups. In the special case when the Poisson-Lie symmetry reduces to the standard non-Abelian symmetry, one of the corresponding mutually dual sigma-models is the standard principal chiral model, which is known to enjoy the property of integrability. The natural question of whether this non-Abelian integrability can be lifted to integrability of sigma-models dualizable with respect to the general Poisson-Lie symmetry was answered in the affirmative by myself in 2008. The corresponding Poisson-Lie symmetric and integrable model is a one-parameter deformation of the principal chiral model and features a remarkable explicit appearance of the standard Yang-Baxter operator in the target space geometry. Several distinct integrable deformations of the Yang-Baxter sigma model have subsequently been uncovered, which turn out to be related by the Poisson-Lie T-duality to the so-called lambda-deformed sigma models. My talk gives a review of these developments, some of which have found applications in string theory in the framework of the AdS/CFT correspondence. Geometric limits of knot complements 12:10 Fri 28 Apr, 2017 :: Napier 209 :: Jessica Purcell :: Monash University The complement of a knot often admits a hyperbolic metric: a metric with constant curvature -1. In this talk, we will investigate sequences of hyperbolic knots, and the possible spaces they converge to as a geometric limit. In particular, we show that there exist hyperbolic knots in the 3-sphere such that the set of points of large injectivity radius in the complement takes up the bulk of the volume. This is joint work with Autumn Kent. Hodge theory on the moduli space of Riemann surfaces 12:10 Fri 5 May, 2017 :: Napier 209 :: Jesse Gell-Redman :: University of Melbourne The Hodge theorem on a closed Riemannian manifold identifies the de Rham cohomology with the space of harmonic differential forms. Although there are various extensions of the Hodge theorem to singular or complete but non-compact spaces, when there is an identification of L^2 harmonic forms with a topological feature of the underlying space, it is highly dependent on the nature of infinity (in the non-compact case) or the locus of incompleteness; no unifying theorem treats all cases. We will discuss work toward extending the Hodge theorem to singular Riemannian manifolds where the singular locus is an incomplete cusp edge.
These can be pictured locally as a bundle of horns, and they provide a model for the behavior of the Weil-Petersson metric on the compactified Riemann moduli space near the interior of a divisor. Joint with J. Swoboda and R. Melrose. Graded K-theory and C*-algebras 11:10 Fri 12 May, 2017 :: Engineering North 218 :: Aidan Sims :: University of Wollongong C*-algebras can be regarded, in a very natural way, as noncommutative algebras of continuous functions on topological spaces. The analogy is strong enough that topological K-theory in terms of formal differences of vector bundles has a direct analogue for C*-algebras. There is by now a substantial array of tools out there for computing C*-algebraic K-theory. However, when we want to model physical phenomena, like topological phases of matter, we need to take into account various physical symmetries, some of which are encoded by gradings of C*-algebras by the two-element group. Even the definition of graded C*-algebraic K-theory is not entirely settled, and there are relatively few computational tools out there. I will try to outline what a C*-algebra (and a graded C*-algebra) is, indicate what graded K-theory ought to look like, and discuss recent work with Alex Kumjian and David Pask linking this with the deep and powerful work of Kasparov, and using this to develop computational tools. Real bundle gerbes 12:10 Fri 19 May, 2017 :: Napier 209 :: Michael Murray :: University of Adelaide Bundle gerbe modules, via the notion of bundle gerbe K-theory, provide a realisation of twisted K-theory. I will discuss the existence of Real bundle gerbes which are the corresponding objects required to construct Real twisted K-theory in the sense of Atiyah. This is joint work with Richard Szabo (Heriot-Watt), Pedram Hekmati (Auckland) and Raymond Vozzo which appeared in arXiv:1608.06466. Schubert Calculus on Lagrangian Grassmannians 12:10 Tue 23 May, 2017 :: EM 213 :: Hiep Tuan Dang :: National centre for theoretical sciences, Taiwan The Lagrangian Grassmannian $LG = LG(n,2n)$ is the projective complex manifold which parametrizes Lagrangian (i.e. maximal isotropic) subspaces in a symplectic vector space of dimension $2n$. This talk is mainly devoted to Schubert calculus on $LG$. We first recall the definition of Schubert classes in this context. Then we present basic results which are similar to the classical formulas due to Pieri and Giambelli. These lead to a presentation of the cohomology ring of $LG$. Finally, we will discuss recent results related to the Schubert structure constants and Gromov-Witten invariants of $LG$. Holomorphic Legendrian curves 12:10 Fri 26 May, 2017 :: Napier 209 :: Franc Forstneric :: University of Ljubljana, Slovenia I will present recent results on the existence and behaviour of noncompact holomorphic Legendrian curves in complex contact manifolds. We show that these curves are ubiquitous in \C^{2n+1} with the standard holomorphic contact form \alpha=dz+\sum_{j=1}^n x_jdy_j; in particular, every open Riemann surface embeds into \C^3 as a proper holomorphic Legendrian curve. On the other hand, for any integer n>= 1 there exist Kobayashi hyperbolic complex contact structures on \C^{2n+1} which do not admit any nonconstant Legendrian complex lines. Furthermore, we construct a holomorphic Darboux chart around any noncompact holomorphic Legendrian curve in an arbitrary complex contact manifold.
As an application, we show that every bordered holomorphic Legendrian curve can be uniformly approximated by complete bounded Legendrian curves. Constructing differential string structures 14:10 Wed 7 Jun, 2017 :: EM213 :: David Roberts :: University of Adelaide String structures on a manifold are analogous to spin structures, except instead of lifting the structure group through the extension Spin(n)\to SO(n) of Lie groups, we need to lift through the extension String(n)\to Spin(n) of Lie *2-groups*. Such a thing exists if the first fractional Pontryagin class (1/2)p_1 vanishes in cohomology. A differential string structure also lifts connection data, but this is rather complicated, involving a number of locally defined differential forms satisfying cocycle-like conditions. This is an expansion of the geometric string structures of Stolz and Redden, which is, for a given connection A, merely a 3-form R on the frame bundle such that dR = tr(F^2) for F the curvature of A; in other words a trivialisation of the de Rham class of (1/2)p_1. I will present work in progress on a framework (and specific results) that allows explicit calculation of the differential string structure for a large class of homogeneous spaces, which also yields formulas for the Stolz-Redden form. I will comment on the application to verifying the refined Stolz conjecture for our particular class of homogeneous spaces. Joint work with Ray Vozzo. Quaternionic Kaehler manifolds of co-homogeneity one 12:10 Fri 16 Jun, 2017 :: Ligertwood 231 :: Vicente Cortes :: Universitat Hamburg Quaternionic Kaehler manifolds form an important class of Riemannian manifolds of special holonomy. They provide examples of Einstein manifolds of non-zero scalar curvature. I will show how to construct explicit examples of complete quaternionic Kaehler manifolds of negative scalar curvature beyond homogeneous spaces. In particular, I will present a series of examples of co-homogeneity one, based on arXiv:1701.07882. Complex methods in real integral geometry 12:10 Fri 28 Jul, 2017 :: Engineering Sth S111 :: Mike Eastwood :: University of Adelaide There are well-known analogies between holomorphic integral transforms such as the Penrose transform and real integral transforms such as the Radon, Funk, and John transforms. In fact, one can make a precise connection between them and hence use complex methods to establish results in the real setting. This talk will introduce some simple integral transforms and indicate how complex analysis may be applied. Curvature contraction of axially symmetric hypersurfaces in the sphere 12:10 Fri 4 Aug, 2017 :: Engineering Sth S111 :: James McCoy :: University of Wollongong We show that convex surfaces in an ambient three-sphere contract to round points in finite time under fully nonlinear, degree one homogeneous curvature flows, with no concavity condition on the speed. The result extends to convex axially symmetric hypersurfaces of S^{n+1}. Using a different pinching function we also obtain the analogous results for contraction by Gauss curvature. Weil's Riemann hypothesis (RH) and dynamical systems 12:10 Fri 11 Aug, 2017 :: Engineering Sth S111 :: Tuyen Truong :: University of Adelaide Weil proposed an analogue of the RH in finite fields, aiming at counting asymptotically the number of solutions to a given system of polynomial equations (with coefficients in a finite field) in finite field extensions of the base field. 
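A minimal worked instance of the kind of count just described (standard background, not the speaker's results): over the extensions $\mathbb{F}_{q^n}$ of a finite field $\mathbb{F}_q$, the projective line has $$N_n=\#\mathbb{P}^1(\mathbb{F}_{q^n})=q^{n}+1$$ points, and packaging these counts into the zeta function gives $$Z(\mathbb{P}^1,t)=\exp\Big(\sum_{n\geq 1}N_n\frac{t^{n}}{n}\Big)=\frac{1}{(1-t)(1-qt)},$$ a rational function whose zeros and poles control the asymptotics of the $N_n$. Weil's RH concerns the absolute values of the analogous zeros for general smooth projective varieties.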
This conjecture has influenced the development of Algebraic Geometry since the 1950's; the most important achievements include Grothendieck et al.'s etale cohomology, and Bombieri and Grothendieck's standard conjectures on algebraic cycles (inspired by a Kahlerian analogue of a generalisation of Weil's RH by Serre). Weil's RH was solved by Deligne in the 70's, but the finite field analogue of Serre's result is still open (even in dimension 2). This talk presents my recent work proposing a generalisation of Weil's RH by relating it to standard conjectures and a relatively new notion in complex dynamical systems called dynamical degrees. In the course of the talk, I will present the resolution of a question posed by Esnault and Srinivas (which is related to a result by Gromov and Yomdin on entropy of complex dynamical systems), which gives support to the finite field analogue of Serre's result. Compact pseudo-Riemannian homogeneous spaces 12:10 Fri 18 Aug, 2017 :: Engineering Sth S111 :: Wolfgang Globke :: University of Adelaide A pseudo-Riemannian homogeneous space $M$ of finite volume can be presented as $M=G/H$, where $G$ is a Lie group acting transitively and isometrically on $M$, and $H$ is a closed subgroup of $G$. The condition that $G$ acts isometrically and thus preserves a finite measure on $M$ leads to strong algebraic restrictions on $G$. In the special case where $G$ has no compact semisimple normal subgroups, it turns out that the isotropy subgroup $H$ is a lattice, and that the metric on $M$ comes from a bi-invariant metric on $G$. This result allows us to recover Zeghib's classification of Lorentzian compact homogeneous spaces, and to move towards a classification for metric index 2. As an application we can investigate which pseudo-Riemannian homogeneous spaces of finite volume are Einstein spaces. Through the existence questions for lattice subgroups, this leads to an interesting connection with the theory of transcendental numbers, which allows us to characterize the Einstein cases in low dimensions. This talk is based on joint works with Oliver Baues, Yuri Nikolayevsky and Abdelghani Zeghib. Time-reversal symmetric topology from physics 12:10 Fri 25 Aug, 2017 :: Engineering Sth S111 :: Guo Chuan Thiang :: University of Adelaide Time-reversal plays a crucial role in experimentally discovered topological insulators (2008) and semimetals (2015). This is mathematically interesting because one is forced to use "Quaternionic" characteristic classes and differential topology --- a previously ill-motivated generalisation. Guided by physical intuition, an equivariant Poincare-Lefschetz duality, Euler structures, and a new type of monopole with torsion charge, will be introduced. Dynamics of transcendental Hénon maps 11:10 Wed 20 Sep, 2017 :: Engineering & Math EM212 :: Leandro Arosio :: University of Rome The dynamics of a polynomial in the complex plane is a classical topic studied already at the beginning of the 20th century by Fatou and Julia. The complex plane is partitioned into two natural invariant sets: a compact set called the Julia set, with (usually) fractal structure and chaotic behaviour, and the Fatou set, where the dynamics has no sensitive dependence on initial conditions. The dynamics of a transcendental map was first studied by Baker fifty years ago, and shows striking differences with the polynomial case: for example, there are wandering Fatou components.
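A standard one-variable example of the dichotomy just described (textbook material, not part of the talk): for $f(z)=z^{2}$, orbits starting with $|z|<1$ tend to $0$ and orbits with $|z|>1$ tend to infinity, so both regions belong to the Fatou set, while on the unit circle the dynamics is chaotic; hence $$J(f)=\{\,|z|=1\,\},\qquad F(f)=\mathbb{C}\setminus J(f).$$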
Moving to C^2, an analogue of polynomial dynamics is given by Hénon maps, polynomial automorphisms with interesting dynamics. In this talk I will introduce a natural generalisation of transcendental dynamics to C^2, and show how to construct wandering domains for such maps. An action of the Grothendieck-Teichmuller group on stable curves of genus zero 11:10 Fri 22 Sep, 2017 :: Engineering South S111 :: Marcy Robertson :: University of Melbourne In this talk, we show that the group of homotopy automorphisms of the profinite completion of the framed little 2-discs operad is isomorphic to the (profinite) Grothendieck-Teichmuller group. We deduce that the Grothendieck-Teichmuller group acts nontrivially on an operadic model of the genus zero Teichmuller tower. This talk will be aimed at a general audience and will not assume previous knowledge of the Grothendieck-Teichmuller group or operads. This is joint work with Pedro Boavida and Geoffroy Horel. On directions and operators 11:10 Wed 27 Sep, 2017 :: Engineering & Math EM213 :: Malabika Pramanik :: University of British Columbia Many fundamental operators arising in harmonic analysis are governed by sets of directions that they are naturally associated with. This talk will survey a few representative results in this area, and report on some new developments. Equivariant formality of homogeneous spaces 12:10 Fri 29 Sep, 2017 :: Engineering Sth S111 :: Alex Chi-Kwong Fok :: University of Adelaide Equivariant formality, a notion in equivariant topology introduced by Goresky-Kottwitz-MacPherson, is a desirable property of spaces with group actions, which allows the application of the localisation formula to evaluate integrals of any top-degree closed form and enables one to compute the equivariant cohomology easily. Broad classes of spaces of especial interest are well-known to be equivariantly formal, e.g., compact symplectic manifolds equipped with Hamiltonian compact Lie group actions and projective varieties equipped with linear algebraic torus actions, of which flag varieties are examples. Less is known about compact homogeneous spaces G/K equipped with the isotropy action of K, which is not necessarily of maximal rank. In this talk we will review previous attempts at characterizing equivariant formality of G/K, and present our recent results on this problem using an analogue of equivariant formality in K-theory. Part of the work presented in this talk is joint with Jeffrey Carlson. Operator algebras in rigid C*-tensor categories 12:10 Fri 6 Oct, 2017 :: Engineering Sth S111 :: Corey Jones :: Australian National University In noncommutative geometry, operator algebras are often regarded as the algebras of functions on noncommutative spaces. Rigid C*-tensor categories are algebraic structures that appear in the study of quantum field theories, subfactors, and compact quantum groups. We will explain how they can be thought of as ``noncommutative'' versions of the tensor category of Hilbert spaces. Combining these two viewpoints, we describe a notion of operator algebras internal to a rigid C*-tensor category, and discuss applications to the theory of subfactors. End-periodic K-homology and spin bordism 12:10 Fri 20 Oct, 2017 :: Engineering Sth S111 :: Michael Hallam :: University of Adelaide This talk introduces new "end-periodic" variants of geometric K-homology and spin bordism theories that are tailored to a recent index theorem for even-dimensional manifolds with periodic ends.
This index theorem, due to Mrowka, Ruberman and Saveliev, is a generalisation of the Atiyah-Patodi-Singer index theorem for manifolds with odd-dimensional boundary. As in the APS index theorem, there is an (end-periodic) eta invariant that appears as a correction term for the periodic end. Invariance properties of the standard relative eta invariants are elegantly expressed using K-homology and spin bordism, and this continues to hold in the end-periodic case. In fact, there are natural isomorphisms between the standard K-homology/bordism theories and their end-periodic versions, and moreover these isomorphisms preserve relative eta invariants. The study is motivated by results on positive scalar curvature, namely obstructions and distinct path components of the moduli space of PSC metrics. Our isomorphisms provide a systematic method for transferring certain results on PSC from the odd-dimensional case to the even-dimensional case. This work is joint with Mathai Varghese. Springer correspondence for symmetric spaces 12:10 Fri 17 Nov, 2017 :: Engineering Sth S111 :: Ting Xue :: University of Melbourne The Springer theory for reductive algebraic groups plays an important role in representation theory. It relates nilpotent orbits in the Lie algebra to irreducible representations of the Weyl group. We develop a Springer theory in the case of symmetric spaces using Fourier transform, which relates nilpotent orbits in this setting to irreducible representations of Hecke algebras of various Coxeter groups with specified parameters. This in turn gives rise to character sheaves on symmetric spaces, which we describe explicitly in the case of classical symmetric spaces. A key ingredient in the construction is the nearby cycle sheaves associated to the adjoint quotient map. The talk is based on joint work with Kari Vilonen and partly based on joint work with Misha Grinberg and Kari Vilonen. A Hecke module structure on the KK-theory of arithmetic groups 13:10 Fri 2 Mar, 2018 :: Barr Smith South Polygon Lecture theatre :: Bram Mesland :: University of Bonn Let $G$ be a locally compact group, $\Gamma$ a discrete subgroup and $C_{G}(\Gamma)$ the commensurator of $\Gamma$ in $G$. The cohomology of $\Gamma$ is a module over the Shimura Hecke ring of the pair $(\Gamma,C_G(\Gamma))$. This construction recovers the action of the Hecke operators on modular forms for $SL(2,\mathbb{Z})$ as a particular case. In this talk I will discuss how the Shimura Hecke ring of a pair $(\Gamma, C_{G}(\Gamma))$ maps into the $KK$-ring associated to an arbitrary $\Gamma$-C*-algebra. From this we obtain a variety of $K$-theoretic Hecke modules. In the case of manifolds the Chern character provides a Hecke equivariant transformation into cohomology, which is an isomorphism in low dimensions. We discuss Hecke equivariant exact sequences arising from possibly noncommutative compactifications of $\Gamma$-spaces. Examples include the Borel-Serre and geodesic compactifications of the universal cover of an arithmetic manifold, and the totally disconnected boundary of the Bruhat-Tits tree of $SL(2,\mathbb{Z})$. This is joint work with M.H. Sengun (Sheffield). Radial Toeplitz operators on bounded symmetric domains 11:10 Fri 9 Mar, 2018 :: Lower Napier LG11 :: Raul Quiroga-Barranco :: CIMAT, Guanajuato, Mexico The Bergman spaces on a complex domain are defined as the space of holomorphic square-integrable functions on the domain. 
These carry interesting structures both for analysis and representation theory in the case of bounded symmetric domains. On the other hand, these spaces have some bounded operators obtained as the composition of a multiplier operator and a projection. These operators are highly noncommuting between each other. However, there exist large commutative C*-algebras generated by some of these Toeplitz operators very much related to Lie groups. I will construct an example of such C*-algebras and provide a fairly explicit simultaneous diagonalization of the generating Toeplitz operators. Quantum Airy structures and topological recursion 13:10 Wed 14 Mar, 2018 :: Ingkarni Wardli B17 :: Gaetan Borot :: MPI Bonn Quantum Airy structures are Lie algebras of quadratic differential operators -- their classical limit describes Lagrangian subvarieties in symplectic vector spaces which are tangent to the zero section and cut out by quadratic equations. Their partition function -- which is the function annihilated by the collection of differential operators -- can be computed by the topological recursion. I will explain how to obtain quantum Airy structures from spectral curves, and explain how we can retrieve from them correlation functions of semi-simple cohomological field theories, by exploiting the symmetries. This is based on joint work with Andersen, Chekhov and Orantin. Family gauge theory and characteristic classes of bundles of 4-manifolds 13:10 Fri 16 Mar, 2018 :: Barr Smith South Polygon Lecture theatre :: Hokuto Konno :: University of Tokyo I will define a non-trivial characteristic class of bundles of 4-manifolds using families of Seiberg-Witten equations. The basic idea of the construction is to consider an infinite dimensional analogue of the Euler class used in the usual theory of characteristic classes. I will also explain how to prove the non-triviality of this characteristic class. If time permits, I will mention a relation between our characteristic class and positive scalar curvature metrics. Computing trisections of 4-manifolds 13:10 Fri 23 Mar, 2018 :: Barr Smith South Polygon Lecture theatre :: Stephen Tillmann :: University of Sydney Gay and Kirby recently generalised Heegaard splittings of 3-manifolds to trisections of 4-manifolds. A trisection describes a 4–dimensional manifold as a union of three 4–dimensional handlebodies. The complexity of the 4–manifold is captured in a collection of curves on a surface, which guide the gluing of the handelbodies. The minimal genus of such a surface is the trisection genus of the 4-manifold. After defining trisections and giving key examples and applications, I will describe an algorithm to compute trisections of 4–manifolds using arbitrary triangulations as input. This results in the first explicit complexity bounds for the trisection genus of a 4–manifold in terms of the number of pentachora (4–simplices) in a triangulation. This is joint work with Mark Bell, Joel Hass and Hyam Rubinstein. I will also describe joint work with Jonathan Spreer that determines the trisection genus for each of the standard simply connected PL 4-manifolds. Chaos in higher-dimensional complex dynamics 13:10 Fri 20 Apr, 2018 :: Barr Smith South Polygon Lecture theatre :: Finnur Larusson :: University of Adelaide I will report on new joint work with Leandro Arosio (University of Rome, Tor Vergata). Complex manifolds can be thought of as laid out across a spectrum characterised by rigidity at one end and flexibility at the other. 
On the rigid side, Kobayashi-hyperbolic manifolds have at most a finite-dimensional group of symmetries. On the flexible side, there are manifolds with an extremely large group of holomorphic automorphisms, the prototypes being the affine spaces $\mathbb C^n$ for $n \geq 2$. From a dynamical point of view, hyperbolicity does not permit chaos. An endomorphism of a Kobayashi-hyperbolic manifold is non-expansive with respect to the Kobayashi distance, so every family of endomorphisms is equicontinuous. We show that not only does flexibility allow chaos: under a strong anti-hyperbolicity assumption, chaotic automorphisms are generic. A special case of our main result is that if $G$ is a connected complex linear algebraic group of dimension at least 2, not semisimple, then chaotic automorphisms are generic among all holomorphic automorphisms of $G$ that preserve a left- or right-invariant Haar form. For $G=\mathbb C^n$, this result was proved (although not explicitly stated) some 20 years ago by Fornaess and Sibony. Our generalisation follows their approach. I will give plenty of context and background, as well as some details of the proof of the main result. Index of Equivariant Callias-Type Operators 13:10 Fri 27 Apr, 2018 :: Barr Smith South Polygon Lecture theatre :: Hao Guo :: University of Adelaide Suppose M is a smooth Riemannian manifold on which a Lie group G acts properly and isometrically. In this talk I will explore properties of a particular class of G-invariant operators on M, called G-Callias-type operators. These are Dirac operators that have been given an additional Z_2-grading and a perturbation so as to be "invertible outside of a cocompact set in M". It turns out that G-Callias-type operators are equivariantly Fredholm and so have an index in the K-theory of the maximal group C*-algebra of G. This index can be expressed as a KK-product of a class in K-homology and a class in the K-theory of the Higson G-corona. In fact, one can show that the K-theory of the Higson G-corona is highly non-trivial, and thus the index theory of G-Callias-type operators is not obviously trivial. As an application of the index theory of G-Callias-type operators, I will mention an obstruction to the existence of G-invariant metrics of positive scalar curvature on M. Braid groups and higher representation theory 13:10 Fri 4 May, 2018 :: Barr Smith South Polygon Lecture theatre :: Tony Licata :: Australian National University The Artin braid group arise in a number of different parts of mathematics. The goal of this talk will be to explain how basic group-theoretic questions about the Artin braid group can be answered using some modern tools of linear and homological algebra, with an eye toward proving some open conjectures about other groups. Cobordism maps on PFH induced by Lefschetz fibration over higher genus base 13:10 Fri 11 May, 2018 :: Barr Smith South Polygon Lecture theatre :: Guan Heng Chen :: University of Adelaide In this talk, we will discuss the cobordism maps on periodic Floer homology(PFH) induced by Lefschetz fibration. Periodic Floer homology is a Gromov types invariant for three dimensional mapping torus and it is isomorphic to a version of Seiberg Witten Floer cohomology(SWF). Our result is to define the cobordism maps on PFH induced by certain types of Lefschetz fibration via using holomorphic curves method. Also, we show that the cobordism maps is equivalent to the cobordism maps on Seiberg Witten cohomology under the isomorphism PFH=SWF. 
Obstructions to smooth group actions on 4-manifolds from families Seiberg-Witten theory 13:10 Fri 25 May, 2018 :: Barr Smith South Polygon Lecture theatre :: David Baraglia :: University of Adelaide Let X be a smooth, compact, oriented 4-manifold and consider the following problem. Let G be a group which acts on the second cohomology of X preserving the intersection form. Can this action of G on H^2(X) be lifted to an action of G on X by diffeomorphisms? We study a parametrised version of Seiberg-Witten theory for smooth families of 4-manifolds and obtain obstructions to the existence of such lifts. For example, we construct compact simply-connected 4-manifolds X and involutions on H^2(X) that can be realised by a continuous involution on X, or by a diffeomorphism, but not by an involutive diffeomorphism for any smooth structure on X. The mass of Riemannian manifolds 13:10 Fri 1 Jun, 2018 :: Barr Smith South Polygon Lecture theatre :: Matthias Ludewig :: MPIM Bonn We will define the mass of differential operators L on compact Riemannian manifolds. In odd dimensions, if L is a conformally covariant differential operator, then its mass is also conformally covariant, while in even dimensions, one has a more complicated transformation rule. In the special case that L is the Yamabe operator, its mass is related to the ADM mass of an associated asymptotically flat spacetime. In particular, one expects positive mass theorems in various settings. Here we highlight some recent results. Hitchin's Projectively Flat Connection for the Moduli Space of Higgs Bundles 13:10 Fri 15 Jun, 2018 :: Barr Smith South Polygon Lecture theatre :: John McCarthy :: University of Adelaide In this talk I will discuss the problem of geometrically quantizing the moduli space of Higgs bundles on a compact Riemann surface using Kahler polarisations. I will begin by introducing geometric quantization via Kahler polarisations for compact manifolds, leading up to the definition of a Hitchin connection as stated by Andersen. I will then describe the moduli spaces of stable bundles and Higgs bundles over a compact Riemann surface, and discuss their properties. The problem of geometrically quantizing the moduli space of stables bundles, a compact space, was solved independently by Hitchin and Axelrod, Del PIetra, and Witten. The Higgs moduli space is non-compact and therefore the techniques used do not apply, but carries an action of C*. I will finish the talk by discussing the problem of finding a Hitchin connection that preserves this C* action. Such a connection exists in the case of Higgs line bundles, and I will comment on the difficulties in higher rank. Comparison Theorems under Weak Assumptions 11:10 Fri 29 Jun, 2018 :: EMG06 :: Kwok Kun Kwong :: National Cheng Kung University The topology and geometry of spaces of Yang-Mills-Higgs flow lines 11:10 Fri 27 Jul, 2018 :: Barr Smith South Polygon Lecture theatre :: Graeme Wilkin :: National University of Singapore Given a smooth complex vector bundle over a compact Riemann surface, one can define the space of Higgs bundles and an energy functional on this space: the Yang-Mills-Higgs functional. The gradient flow of this functional resembles a nonlinear heat equation, and the limit of the flow detects information about the algebraic structure of the initial Higgs bundle (e.g. whether or not it is semistable). 
In this talk I will explain my work to classify ancient solutions of the Yang-Mills-Higgs flow in terms of their algebraic structure, which leads to an algebro-geometric classification of Yang-Mills-Higgs flow lines. Critical points connected by flow lines can then be interpreted in terms of the Hecke correspondence, which appears in Witten's recent work on Geometric Langlands. This classification also gives a geometric description of spaces of unbroken flow lines in terms of secant varieties of the underlying Riemann surface, and in the remaining time I will describe work in progress to relate the (analytic) Morse compactification of these spaces by broken flow lines to an algebro-geometric compactification by iterated blowups of secant varieties. Carleman approximation of maps into Oka manifolds. 11:10 Fri 3 Aug, 2018 :: Barr Smith South Polygon Lecture theatre :: Brett Chenoweth :: University of Ljubljana In 1927 Torsten Carleman proved a remarkable extension of the Stone-Weierstrass theorem. Carleman's theorem is ostensibly the first result concerning the approximation of functions on unbounded closed subsets of C by entire functions. In this talk we introduce Carleman's theorem and several of its recent generalisations including the titled generalisation which was proved by the speaker in arXiv:1804.10680. Equivariant Index, Traces and Representation Theory 11:10 Fri 10 Aug, 2018 :: Barr Smith South Polygon Lecture theatre :: Hang Wang :: University of Adelaide K-theory of C*-algebras associated to a semisimple Lie group can be understood both from the geometric point of view via Baum-Connes assembly map and from the representation theoretic point of view via harmonic analysis of Lie groups. A K-theory generator can be viewed as the equivariant index of some Dirac operator, but also interpreted as a (family of) representation(s) parametrised by the noncompact abelian part in the Levi component of a cuspidal parabolic subgroup. Applying orbital traces to the K-theory group, we obtain the equivariant index as a fixed point formula which, for each K-theory generators for (limit of) discrete series, recovers Harish-Chandra's character formula on the representation theory side. This is a noncompact analogue of Atiyah-Segal-Singer fixed point theorem in relation to the Weyl character formula. This is joint work with Peter Hochs. Min-max theory for hypersurfaces of prescribed mean curvature 11:10 Fri 17 Aug, 2018 :: Barr Smith South Polygon Lecture theatre :: Jonathan Zhu :: Harvard University We describe the construction of closed prescribed mean curvature (PMC) hypersurfaces using min-max methods. Our theory allows us to show the existence of closed PMC hypersurfaces in a given closed Riemannian manifold for a generic set of ambient prescription functions. This set includes, in particular, all constant functions as well as analytic functions if the manifold is real analytic. The described work is joint with Xin Zhou. Discrete fluxes and duality in gauge theory 11:10 Fri 24 Aug, 2018 :: Barr Smith South Polygon Lecture theatre :: Siye Wu :: National Tsinghua University We explore the notions of discrete electric and magnetic fluxes introduced by 't Hooft in the late 1970s. After explaining their physics origin, we consider the description in mathematical terminology. We finally study their role in duality. 
Geometry and Topology of Crystals 11:10 Fri 31 Aug, 2018 :: Barr Smith South Polygon Lecture theatre :: Vanessa Robins :: Australian National University This talk will cover some highlights of the mathematical description of crystal structure from the platonic polyhedra of ancient Greece to the current picture of crystallographic groups as orbifolds. Modern materials synthesis raises fascinating questions about the enumeration and classification of periodic interwoven or entangled frameworks, that might be addressed by techniques from 3-manifold topology and knot theory. Noncommutative principal G-bundles 11:10 Fri 14 Sep, 2018 :: Barr Smith South Polygon Lecture theatre :: Keith Hannabuss :: University of Oxford Noncommutative geometry provides greater flexibility for studying some problems. This seminar will survey some work on noncommutative principal G-bundles. These were classified for abelian groups some years ago, but nonabelian groups require a different approach, using tools developed for a totally different reason in the 1980s. This uncovers links with ergodic theory, quantum groups and the Yang-Baxter equation. Exceptional quantum symmetries 11:10 Fri 5 Oct, 2018 :: Barr Smith South Polygon Lecture theatre :: Scott Morrison :: Australian National University I will survey our current understanding of "quantum symmetries", the mathematical models of topological order, in particular through the formalism of fusion categories. Our very limited classification results to date point to nearly all examples being built out of data coming from finite groups, quantum groups at roots of unity, and cohomological data. However, there are a small number of "exceptional" quantum symmetries that so far appear to be disconnected from the world of classical symmetries as studied in representation theory and group theory. I'll give an update on recent progress understanding these examples. Twisted K-theory of compact Lie groups and extended Verlinde algebras 11:10 Fri 12 Oct, 2018 :: Barr Smith South Polygon Lecture theatre :: Chi-Kwong Fok :: University of Adelaide In a series of recent papers, Freed, Hopkins and Teleman put forth a deep result which identifies the twisted K -theory of a compact Lie group G with the representation theory of its loop group LG. Under suitable conditions, both objects can be enhanced to the Verlinde algebra, which appears in mathematical physics as the Frobenius algebra of a certain topological quantum field theory, and in algebraic geometry as the algebra encoding information of moduli spaces of G-bundles over Riemann surfaces. The Verlinde algebra for G with nice connectedness properties have been well-known. However, explicit descriptions of such for disconnected G are lacking. In this talk, I will discuss the various aspects of the Freed-Hopkins-Teleman Theorem and partial results on an extension of the Verlinde algebra arising from a disconnected G. The talk is based on work in progress joint with David Baraglia and Varghese Mathai. An Introduction to Ricci Flow 11:10 Fri 19 Oct, 2018 :: Barr Smith South Polygon Lecture theatre :: Miles Simon :: University of Magdeburg In these three talks we give an introduction to Ricci flow and present some applications thereof. After introducing the Ricci flow we present some theorems and arguments from the theory of linear and non-linear parabolic equations. 
We explain why this theory guarantees that there is always a solution to the Ricci flow for a short time for any given smooth initial metric on a compact manifold without boundary. We calculate evolution equations for certain geometric quantities, and present some examples of maximum principle type arguments. In the last lecture we present some geometric results which are derived with the help of the Ricci flow. Local Ricci flow and limits of non-collapsed regions whose Ricci curvature is bounded from below We use a local Ricci flow to obtain a bi-Holder correspondence between non-collapsed (possibly non-complete) 3-manifolds with Ricci curvature bounded from below and Gromov-Hausdorff limits of sequences thereof. This is joint work with Peter Topping and the proofs build on results and ideas from recent papers of Hochard and Topping+Simon.
EURASIP Journal on Wireless Communications and Networking

A second-order dynamic and static ship path planning model based on reinforcement learning and heuristic search algorithms

Junfeng Yuan1,2, Jian Wan1,3, Xin Zhang ORCID: orcid.org/0000-0003-3416-839X1,2, Yang Xu1,2, Yan Zeng1,2 & Yongjian Ren1,2

EURASIP Journal on Wireless Communications and Networking volume 2022, Article number: 128 (2022)

Ship path planning plays an important role in the intelligent decision-making system, which can provide important navigation information for a ship and coordinate with other ships via wireless networks. However, existing methods still suffer from slow path planning and low safety. In this paper, we propose a second-order ship path planning model, which consists of two main steps, i.e., first-order static global path planning and second-order dynamic local path planning. Specifically, we first create a raster map using ArcGIS. Second, the global path planning is performed on the raster map based on the Dyna-Sarsa(\(\lambda\)) model, which integrates the eligibility trace and the Dyna framework on the Sarsa algorithm. Particularly, the eligibility trace has a short-term memory of the trajectory, which can improve the convergence speed of the model. Meanwhile, the Dyna framework obtains simulation experience through simulation training, which can further improve the convergence speed of the model. Then, an improved ship trajectory prediction model based on stacked bidirectional gated recurrent units is used to identify the risk of ship collision and switch the path planning from the first order to the second order. Finally, the second-order dynamic local path planning is presented based on the FCC-A* algorithm, where the cost function of the traditional A* path planning algorithm is rewritten using the fuzzy collision cost membership function (fuzzy collision cost, FCC) to reduce the collision risk of ships. The proposed model is evaluated on the Baltic Sea geographic information and ship trajectory datasets. The experimental results show that the eligibility trace and the Dyna learning framework in the proposed model can effectively improve the planning efficiency of the ship's global path planning, and the collision risk membership function can effectively reduce the number of collisions in A* local path planning and thus improve the navigation safety of encountering ships.

In recent years, economic globalization has been developing rapidly. As the backbone of international trade and the global economy, maritime transport carries over 80% of the volume of international trade in goods, and the percentage is even higher for most developing countries. Therefore, shipping has become more and more important, particularly where prosperity depends primarily on international trade. However, the rapid development of international shipping makes traffic conditions increasingly complicated, and the world's main shipping routes and ports have formed complex networks, which are more prone to marine traffic accidents [1] and pose a major threat to the safety of life and property at sea. Relevant statistics show that about 80%\(\sim\)85% of marine accidents are caused by human factors, for example, ship drivers not operating in accordance with regulations [2].
Although the International Maritime Organization has formulated the International Regulations for Preventing Collisions at Sea, providing navigation methods and rules for ships at sea [3] and minimizing collisions between ships, it is still difficult to effectively reduce the probability of collisions by relying only on the experience of the crew. As the level of marine technology has improved, ships have become increasingly large-scale, specialized, and intelligent. Meanwhile, researchers have paid more attention to the research and development of intelligent decision-making systems for ship navigation. In particular, automated and intelligent driving systems can provide navigation for ships and coordinate with other ships via wireless networks, which can effectively reduce the occurrence of marine accidents. Therefore, in order to ensure the safety of ships, drivers and the marine environment, it is extremely important to study the related technologies of ship navigation intelligent decision-making systems, and ship navigation path planning is one of the core components of such a system [4]. Particularly, the ship navigation intelligent decision-making system can provide the crew with important maneuvering suggestions for complex situations, and the ship's path planning is an important prerequisite for the motion control and the output information of the intelligent decision-making system. However, existing research on ship path planning still suffers from challenges in the following two aspects:

Scenario modeling of ship path planning and collision domain modeling. In much of the existing ship path planning research in discrete scenarios, the simulation environment used for training is a simulated environment where obstacles are randomly generated. Note that the simulated environment is quite different from the real sea environment, and it is difficult to reflect the performance of the path planning model in the actual sea environment. At the same time, in related works on local ship encounters, the ships appear in the form of simple geometry such as points and circles in most cases, and a ship domain model that can reflect the collision distance is rarely used. Therefore, in scenarios where the collision distance of the ship needs to be considered, existing path planning models suffer from poor collision avoidance performance.

The efficiency and safety of ship path planning. Current related research works usually do not distinguish between long-distance navigation with few obstacles and emergency ship encounters. In other words, the same model is used to deal with these two different navigation scenarios, which limits their performance in ship path planning tasks. At the same time, existing path planning models based on traditional reinforcement learning algorithms have the problems of slow planning speed and frequent collisions in the process of ship path planning. Besides, heuristic search-based models have difficulties in avoiding collisions when encountering ships dynamically and cannot ensure the safety of ships.

In this work, we focus on the ship path planning of intelligent decision-making systems, and propose a second-order ship path planning model. Specifically, the proposed model consists of two main steps, i.e., first-order static global path planning and second-order dynamic local path planning.
Firstly, we create a raster map using ArcGIS, and the global path planning is performed on the raster map based on the Dyna-Sarsa(\(\lambda\)) model, which integrates the eligibility trace and the Dyna framework on the Sarsa algorithm. Particularly, the eligibility trace has a short-term memory for the trajectory, which can improve the convergence speed of the model. Meanwhile, the Dyna framework obtains simulation experience through simulation training, which can further improve the convergence speed of the model. Then, the improved ship trajectory prediction model based on stacked bidirectional gated recurrent unit is used to identify the risk of ship collision and switch the path planning from the first order to the second order. Finally, the second-order path planning is implemented based on the FCC-A* algorithm, where the cost function of the traditional path planning A* algorithm is rewritten using the fuzzy collision cost membership function (fuzzy collision cost, FCC) to reduce the collision risk of ships. The proposed model is evaluated on the Baltic Sea geographic information and ship trajectory datasets. Extensive experiments are conducted, and the results show that the proposed model can effectively improve the planning efficiency of the ship's global path planning, and the number of collisions is effectively reduced. The main contributions are summarized as follows: We propose a second-order ship path planning model based on Dyna-Sarsa(\(\lambda\)) for addressing the problem of slow marine scene modeling and the low safety of ship path planning. An ship trajectory prediction model based on Stacked-BiGRUs and a Fuzzy Collision Cost A* (FCC-A*)-based dynamic local path planning algorithm are designed and integrated together to identify the risk of ship collision and improve the safety of path planning model in the case of ship encounters. The proposed model is evaluated on the Baltic Sea geographic information and ship trajectory datasets, and the results demonstrate the proposed model's effectiveness in terms of path planning efficiency and navigation safety. The rest of this paper is organized as follows. The related works are reviewed in Sect. 2. The proposed method is introduced in Sect. 3, and extensive experimental results and discussion are given in Sect. 4. Section 5 concludes the main contributions and future works. The path planning technologies for agent [5] are widely used in the field of automation, including autonomous collision avoidance of service robots, formation of drones, autonomous vehicle navigation, and so on. All path planning methods can solve most of the planning problems of point-line networks. According to the classification in common fields, path planning algorithms can be roughly divided into three categories: traditional path planning algorithms, machine learning algorithms, and heuristic search algorithms. Specifically, the advantages and disadvantages of three kinds of path planning algorithms are listed in Table 1. Table 1 Summary and comparison of three types of path planning methods Traditional path planning algorithms As one traditional path planning method, simulated annealing (SA) [6] can solve the problem of finding the optimal solution in a limited range. Xu et al. [7] investigate the transportation efficiency and sales cost of the aquatic product market in Haikou of China, and use the SA algorithm to improve the aquatic product transportation route planning model, so that the model can find a low-cost transportation route. Xiao et al. 
[8] propose a coverage path planning method for UAVs to achieve full coverage of a target area and to collect high-resolution images while considering the overlap ratio of the collected images and energy consumption of clustered UAVs. However, the SA algorithm, which is a popular evolutionary algorithm widely used in dynamic path planning, suffers from high computation complexity problem. Therefore, Miao et al. [9] develop an enhanced SA approach by combining two additional mathematical operators and initial path selection heuristics into the standard SA. Particularly, the proposed model can perform robot path planning in dynamic environments with both static and dynamic obstacles, the computing performance of the standard SA is significantly improved while the generated solution is optimal or near-optimal. The improvement makes the proposed model being able to be applied in many real-time and online applications. Bedsides, Wang et al. [10] model the certain climbing ability and crossing ditch capability of the ground robot. Specifically, the authors proposed a model to search the shortest path from the start point to the end point, with reliable obstacle avoidance in the three-dimensional environments. Particularly, ant colony algorithm and genetic algorithm are integrated into the proposed model for improving the performance. Moreover, the artificial potential field (APF) [11] method can construct virtual gravitational and repulsive forces. Specifically, the force between the end point and the object is the gravitational force, and the force between the object and the obstacle is the repulsive force. Therefore, we can set the force as a function for path optimization. Zhu et al. [12] propose a novel collision avoidance (CA) model by devising the APF method, and the proposed model is used to implement a practical ship automatic CA system. Particularly, in the proposed multi-ship CA model, the repulsive force model of APF is devised to incorporate the International Regulations for Preventing Collisions at Sea and the motion characteristics of the ship. Besides, inspired by navigation practice, the distance between the closest point of approach time and approach criterion is used as the unique changeable parameter. Feng et al. [13] propose a new collision avoidance algorithm consisting of two main components, i.e., the path planning and the tracking controller. Specifically, a lateral lane-changing spacing model and the longitudinal braking distance model are designed to model the real vehicle's dynamic scenarios. Next, the authors incorporate the safety distance in a simulated traffic scene into the APF algorithm. Besides, the repulsion in the proposed model includes the force of the position repulsion and the speed repulsion, which are divided according to the threat level. At last, a predictive control model is designed to track the lateral motion through steering angle. Besides, the author present a Fuzzy-PID control to track the longitudinal speed, and the planned path is converted into an actual trajectory with stable vehicle dynamics. Vagale et al. [14] review guidance, and more specifically, path planning algorithms of autonomous surface vehicles and their classification, and provided potential need for new regulations for autonomous surface vehicles. The idea of ant colony algorithm (ACA) [15, 16] draws on the foraging behavior of ants. Specifically, all ants smear their own pheromones on the roads they pass through in the process of searching for food. 
The road with food will be smeared with pheromone by multiple ants in a short time, so the concentration of pheromone will increase in a short time. The ants will choose the path according to the concentration of pheromone, and finally find the shortest path. In the online logistics scenario, the use of the responsive ant colony-based optimization algorithm has a good effect on the path planning problem of dense vehicles [17]. Particularly, the vehicle response speed can be improved by generating a diverse pheromone matrix. At the same time, the incorporation of simplified pheromone diffusion model, unequal distribution pheromone initialization strategy, and adaptive pheromone update mechanism into the ant colony algorithm can significantly enhance the computational speed and path quality of the classical ant colony algorithm [18]. Genetic algorithms (GAs) [19, 20] can simulate biological evolution, and is also an iterative search algorithm based on the principle of genetic genetics. Pehlivanoglu et al. [21] propose initial population enhancement methods in GA, and thus accelerate convergence process in the path planning problem of autonomous UAV. Nadia et al. [22] use a modified selection operator instead of using mutation operators, an adaptive population size and a modified procedure to perform a genetic algorithm, which outperformed other models in terms of distance minimization.GA is also widely used in multi-vessel collision avoidance scenarios [23, 24]. Particularly, GA-based model can meet the requirements of "early," "large," "wide" and "clear" for multi-vessel collision avoidance by incorporating ship navigation rules into genetic algorithms. Reinforcement learning (RL) [25, 26] algorithm is a machine learning method in which the experimental target learns in the surrounding environment in a constantly trying way, and selects the next action according to the reward obtained by interacting with the environment. Therefore, the experimental target can obtain the maximum reward. In traditional path planning problems, reinforcement learning-based models use reward and punishment strategies to obtain optimal routes by continuously interacting with obstacles and passable areas. As for the problem of ship collision avoidance, Shen et al. [27] use the Bumper model in ship domain to incorporate avoidance experience into deep Q-learning based on maritime traffic rules, and rewrite the reward function part of the reinforcement learning algorithm. Therefore, the final collision avoidance model is in line with the actual ship motion and achieves good results in real ship collision avoidance experiments. Li et al. [28] investigate the path planning problem of USVs in uncertain environments, and proposed a path planning strategy unified with a collision avoidance function based on deep reinforcement learning (DRL). Autonomous mobile robots usually move in dynamic unknown scenes, and can only plan paths through local information obtained from feedback, and their control quantities are continuous quantities. The gradient strategy algorithm A3C in reinforcement learning can handle the navigation problem in the continuous action space. However, the training time of A3C is quite long. Gao et al. [29] propose a new deep reinforcement learning (DRL)-based path planning model with incremental training for robot. 
Particularly, in order to deal with the complexity of real world applications, the authors combine twin-delayed deep deterministic policy gradients are with the traditional global path planning algorithm Probabilistic Roadmap to enhance the generalization ability of the proposed methods. Heuristic search algorithms The A* algorithm [30, 31] is widely used in various autonomous mobile robots and intelligent car navigation systems. Specifically, the algorithm can calculate the cost of each expansion node around it by selecting the corresponding heuristic function. Then, the position with the lower cost is selected as the next step by comparing the cost, until the target node position is found. Unmanned Surface Vehicles (USV) are widely used in modern cruises on the surface of water. In the study of intelligent navigation systems for unmanned vehicles, Song et al. [32] propose an improved A* algorithm that combines three path smoothing components, which reduces the path aliasing caused by the traditional A* algorithm. Experimental results show that the proposed algorithm achieves better performance than the traditional algorithm in both sparse and cluttered environments with uniform rasterization. The algorithm has been applied to the Springer USV navigation system. Guo et al. [33] propose a complete coverage path planning algorithm based on the improved A* algorithm to improve the efficiency and energy consumption of unmanned ships traversing the entire area. Singh et al. [34] present an A* approach for USV path planning in a maritime environment. Besides, the proposed approach is extended to deal with the complex environments that are cluttered with static and moving obstacles and different current intensities. In this section, we introduce the proposed second-order ship path planning model in detail. Specifically, the problems that need to be solved in ship path planning are introduced first. Second, we present the modeling method of sea area scene, including the rasterization method and the storage format of geographic information. Then, the static global path planning algorithm based on Dyna-Sarsa(\(\lambda\)) is introduced, including the eligibility trace and the optimization process of Sarsa algorithm by Dyna framework. Finally, the dynamic local path planning algorithm based on Fuzzy Collision Cost A* (FCC-A*) is introduced, including the identification of collision risk, the construction of ship domain and the optimization process of collision risk membership function to A* algorithm. Problem description The proposed second-order ship path planning model needs to solve two problems, i.e., static global path planning and dynamic local path planning. Figure 1 shows a schematic diagram of the proposed path planning model. On the macro level, the ship is in a long-distance sea area with few obstacles. As shown by the purple trajectory in Fig. 1, the ship will navigate in a global path planning manner, when there is no local dynamic ship collision risk. This proposed planning method gives more priority to the path planning speed and path length. Microscopically, the ship trajectory prediction method based on Stacked-BiGRUs [35, 36] continuously detects the collision risk between the ship and other ships. The model will switch states and navigate in a local path planning manner when the ship collision risk index exceeds the rated threshold. As shown in Fig. 1, in the local path planning frame, the red ship should try to avoid collision with the blue ship that is sailing straightly. 
This planning method prioritizes the safety of the planned path and needs to avoid collisions with static obstacles and dynamic ships at the same time. Schematic diagram of second-order ship path planning. In the local path planning frame, the red ship should try to avoid collision with the blue ship that is sailing straightly. This planning method prioritizes the safety of the planned path and needs to avoid collisions with static obstacles and dynamic ships at the same time The framework of the proposed second-order ship path planning model is shown in Fig. 2, and it mainly consists of two components, i.e., global path planning and local path planning. Global path planning. First, the ship path planning based on Sarsa reinforcement learning algorithm can effectively carry out path planning but the convergence speed is slow. Then, the eligibility trace and decay value mechanism are incorporated, and a global path planning algorithm based on the Sarsa(\(\lambda\)) [37] learning model is proposed. Finally, the reinforcement learning algorithm framework Dyna is presented. In particular, the global path planning speed is further accelerated by combining the Dyna framework and the Sarsa(\(\lambda\)) learning model into a Dyna-Sarsa(\(\lambda\)) learning model. Local path planning. First, the ship collision risk identification is introduced. Specifically, the future route of the ship agent may inevitably collide with other dynamic ships when the ship is sailing on the globally planned path. At this time, the system should identify the collision risk and carry out dynamic local path planning to further ensure the navigation safety of the ship. This work focuses on the encounter situation of two ships, and the proposed model uses the ship trajectory prediction model to predict the future trajectories of the two ships in a period of time, and calculates the collision risk index (CRI) for each moment in this period of time. If the CRI index exceeds the threshold, the ship's path planning is switched to second order from the first order. Then, the traditional A* algorithm and its shortcomings that ignores the ship collision domain when applied to the ship trajectory planning problem is discussed. Finally, we introduce the GOODWIN ship domain model. The heuristic estimation cost of the A* algorithm is modified via the membership function, and the collision risk of dynamic obstacles is combined with the A* algorithm, and the FCC-A* path planning model is proposed to effectively reducing the collision risk of ships in local path planning. Framework of second-order ship path planning model Marine scene modeling There are various methods for modeling geographic information of the marine scene, most of which are related to converting the surrounding environment into the problem of graph theory. The environmental map conversion methods in two-dimensional marine can be divided into vector data method and rasterization method, and their characteristics can be summarized as follows. Vector data have the advantages of standardized structure and low redundancy. Particularly, the data retrieval speed is fast, and the image resolution is high. However, the data structure is relatively complex, and it is difficult to process irregular graphics. Raster data have simpler data structure than that of vector data, and it is less difficult in spatial analysis or surface simulation. 
Besides, the integration or splicing of irregular graphics is more convenient, and it is easy to carry out various spatial analysis and mathematical simulation. The disadvantage is that the geographic information conversion becomes more difficult with data scale increasing. For the geographic features in large-scale ocean scenes shown in Fig. 3a, the rasterization method can represent geographic entities more effectively than the vector data method. The accuracy is determined by the grid side length. Visualization map and raster map of the Baltic Sea geographic information.The rasterization method can represent geographic entities more effectively than the vector data method In the sea area with complex weather and geographical environment, the ships may encounter many obstacles during the entire navigation process, and the obstacles include man-made marine structures, glaciers, reefs, etc. Such topographic data are generally stored in electronic charts. Therefore, it is necessary to convert the electronic chart into a scene data model that the algorithm can recognize to realize the path planning on the simulated electronic chart. The data source used in this paper is the shapefile format data based on ArcGIS, and the raster method is used to establish a static scene model with the vector data rasterization tool provided in ArcMap. This rasterization method belongs to an interpolation method, which is specially used to create a digital elevation model (DEM) that conforms to the real surface. The main principle of interpolation is to restore the real terrain by using traditional input data structures and known surface features. Water is the primary erosive force that determines the general shape of most terrains. Therefore, most terrains contain many local maxima such as peaks, but few local minima, resulting in a discontinuous terrain state. Terrain to raster can constrain the interpolation process with surface-related constraints, generating a continuous terrain structure and an accurate representation of mountains and rivers. This type of function-constrained method can generate more accurate topographic maps with less input data. The scale of information will be smaller than the information required to describe geographic information with digital contours, further reducing the cost of obtaining accurate DEM. This rasterization method is fully computed when removing sinks, and does not impose functional constraints where it might conflict with the input elevation data. Such conflicts are usually saved in log files in the form of sinks. These data can be used to correct geographic information, which is especially suitable for processing large and informative datasets such as marine environment. The rasterized Baltic Sea area is shown in Fig. 3b. Finally, the result data are saved in Shapefile format. In particular, in the dynamic local ship path planning task, the extracted Shapefile data are used to model the local navigation chart, where the collision avoidance rules, navigation experience, ship operation characteristics and the size of the navigation chart should be fully considered. Let \((x_s, y_s)\) be the starting position of the ship, and \((x_d, y_d)\) be the target position at the end of the planning. Then, the center coordinate of the navigation chart \({\text{point}}_{\textrm{ce}}\) is formally set as: $$\begin{aligned} {\text{point}}{_{\textrm{ce}}} = \left( {\frac{{{x_s} + {x_d}}}{2},\frac{{{y_s} + {y_d}}}{2}} \right) . 
\end{aligned}$$

The meridian length \(l_{\textrm{lon}}\) and latitude length \(l_{\textrm{lat}}\) of the navigation chart are set as:

$$\begin{aligned} {l_{\textrm{lon}}}&= |{{y_s} - {y_d}} |, \end{aligned}$$

$$\begin{aligned} {l_{\textrm{lat}}}&= |{{x_s} - {x_d}} |. \end{aligned}$$

The grain size of rasterization determines the fineness of path planning, and it is necessary to balance the execution time of the algorithm against the planning quality.

Static global path planning algorithm based on Dyna-Sarsa(\(\lambda\))

In this section, we introduce the static global path planning algorithm based on Dyna-Sarsa(\(\lambda\)) in the proposed model. The main feature of the Sarsa algorithm is that it performs single-step updates. The value function is updated immediately after each step in the environment, which can quickly respond to environmental information. Therefore, the traditional Sarsa algorithm is denoted Sarsa(0). However, in the single-step update method, only the final step that reaches the goal is related to the goal, and all actions before that become unrelated. In particular, this situation will slow down the convergence speed of the algorithm. Generally, a sequence of consecutive steps can be treated as one round by extending the number of steps to update, and a complete update is conducted at the end of each round. This memory of the consecutive steps is called the eligibility trace (ET). ET is an important concept in reinforcement learning. Sutton and Barto [38] pointed out that an ET is an additional memory variable associated with each state that records how frequently the state is visited. There are three different expressions of ET: accumulating trace (AT), replacing trace (RT), and true online trace (TOT). The cumulative eligibility traces of state-action pairs are calculated as follows:

$$\begin{aligned} {e_{t + 1}}\left( {s,a} \right) = \left\{ \begin{array}{ll} \gamma \lambda {e_t}\left( {s,a} \right) + 1 &{} \quad \text {if } s = {s_t}, a = {a_t}\\ \gamma \lambda {e_t}\left( {s,a} \right) &{} \quad \text {otherwise} \end{array} \right. , \end{aligned}$$

where \(\gamma\) represents the discount factor and \(\lambda \in [0,1]\) is the decay coefficient of the trace, which defines how much the information of a past selection should be attenuated. In many cases, related studies have found that eligibility traces can speed up the convergence rate [39]. We obtain the Sarsa(\(\lambda\)) algorithm by using the eligibility trace to modify the Sarsa algorithm, and Sarsa(\(\lambda\)) is shown in Algorithm 1. In Algorithm 1, \(\delta\) represents the temporal difference learning error (TD-error). At each moment, the current \(\delta\) is assigned to each state according to its eligibility trace. The use of the eligibility trace allows the Sarsa(\(\lambda\)) algorithm to converge to the global optimum faster than the traditional Sarsa(0) algorithm. However, this acceleration is based on the preservation of past visits, and it consumes additional memory space. With sufficient computing resources, choosing the Sarsa(\(\lambda\)) algorithm can quickly obtain a safer navigation planning path. The state-action trajectory diagram of the algorithm is shown in Fig. 4, where T is the total number of iterations.

Action trajectory diagram of Sarsa(\(\lambda\)). The state-action trajectory diagram of the Sarsa(\(\lambda\)) algorithm.
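To make the accumulating-trace update above concrete, the following is a minimal sketch (not the authors' implementation) of one tabular Sarsa(\(\lambda\)) step on a rasterized chart; the grid size, reward handling and hyperparameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes: a 10 x 10 raster chart with 4 headings (up, down, left, right).
N_STATES, N_ACTIONS = 100, 4
ALPHA, GAMMA, LAMBDA, EPSILON = 0.1, 0.9, 0.8, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))   # action-value table
E = np.zeros((N_STATES, N_ACTIONS))   # eligibility traces e(s, a)

def epsilon_greedy(state: int) -> int:
    """Behaviour policy used by Sarsa (on-policy): mostly greedy, sometimes random."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[state]))

def sarsa_lambda_step(s: int, a: int, r: float, s_next: int, a_next: int, done: bool) -> None:
    """One Sarsa(lambda) update with accumulating traces."""
    target = r if done else r + GAMMA * Q[s_next, a_next]
    delta = target - Q[s, a]           # TD-error
    E[s, a] += 1.0                     # accumulate the trace of the visited pair
    Q[:] += ALPHA * delta * E          # spread the credit along the recent trajectory
    E[:] *= GAMMA * LAMBDA             # decay every trace
    if done:
        E[:] = 0.0                     # reset traces at the end of an episode
```

Compared with Sarsa(0), the only extra state is the trace table E, which is what allows a single reward at the goal to be propagated back along the whole recently visited trajectory instead of only the last step.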
The use of the cumulative eligibility trace and decay coefficient \(\lambda\) in the optimization of the Sarsa algorithm can improve the convergence speed of the algorithm. However, the Sarsa(\(\lambda\)) algorithm still belongs to the category of model-free reinforcement learning algorithms. In particular, the ships directly use the experience learned from the marine environment to generate their policy, and the learning efficiency of this method is relatively limited. In model-based reinforcement learning algorithms, ships use the experience generated in the simulated environment to select new strategies by continuously refining the model. During the training process with the Dyna learning framework, the ship first interacts directly with the environment to obtain real experience and to build a preliminary model, and at the same time obtains simulated experience by interacting with the scene inside this model. Besides, the real experience and simulated experience are integrated to train the ship, helping the ship plan and judge the optimal path. The core idea of the Dyna learning framework is to trade computing resources for high sampling efficiency. Particularly, more environmental interaction experience can be obtained, which improves the efficiency of the algorithm per unit time at the cost of additional computation. At the same time, in the stage of obtaining simulated experience in the Dyna model, the update method of the Q-learning algorithm is used. This method has the ability to learn the global optimum and can help the ship to avoid local optima. The steps of the Dyna-Sarsa(\(\lambda\)) algorithm obtained by combining the Dyna learning framework with Sarsa(\(\lambda\)) are shown in Algorithm 2. By integrating the Dyna framework with Sarsa(\(\lambda\)), the ship can not only obtain experience from the simulation training of the Dyna framework, but also learn from direct interaction with the marine environment. The fusion of the two kinds of experience can provide guidance for ship path planning, which can greatly improve the efficiency of ship static global path planning.
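As a rough illustration of how the Dyna planning loop wraps around the updates above (a sketch assuming a deterministic learned model, not the authors' code), each real environment step is followed by a fixed number of simulated updates drawn from the stored model; in the full Dyna-Sarsa(\(\lambda\)) model the real-experience update would be the Sarsa(\(\lambda\)) step sketched earlier, while the planning updates use the Q-learning rule described in the text.

```python
import random
from collections import defaultdict

# Illustrative hyperparameters; the 40 planning steps mirror the 40 simulation
# rounds used later in the experiments, but the exact value is an assumption here.
ALPHA, GAMMA, N_PLANNING = 0.1, 0.9, 40

Q = defaultdict(float)   # Q[(state, action)]
model = {}               # learned model: (state, action) -> (reward, next_state)

def q_update(s, a, r, s_next, actions):
    """Q-learning style update, used for the simulated (planning) experience."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def dyna_step(s, a, r, s_next, actions):
    """One Dyna iteration: learn from real experience, then plan from the model."""
    q_update(s, a, r, s_next, actions)     # direct reinforcement learning
    model[(s, a)] = (r, s_next)            # model learning (deterministic assumption)
    for _ in range(N_PLANNING):            # planning with simulated experience
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        q_update(ps, pa, pr, ps_next, actions)
```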
Dynamic local path planning algorithm based on FCC-A*

It is necessary to switch from global path planning to local path planning for collision avoidance operations when a ship faces a collision risk. Therefore, this section first introduces the method for identifying the collision risk of encountering ships. In the traditional autonomous ship collision avoidance system, the collision risk index (CRI) is usually used as an index to measure the collision risk of ships. The minimum value of CRI is 0 and the maximum value is 1. The minimum encounter distance (distance to closest point of approach, DCPA) and the minimum encounter time (time to closest point of approach, TCPA) are important factors for evaluating the CRI between encountering ships in actual scenarios. As the value range of CRI has a nonlinear negative correlation with DCPA and TCPA, we use DCPA and TCPA to quantify CRI. For the calculation of the collision risk of two ships, it is assumed that the status of the two ships at a certain moment is \({V_0}\left( {Lo{n_0},La{t_0},So{g_0},Co{g_0}} \right)\) and \({V_1}\left( {Lo{n_1},La{t_1},So{g_1},Co{g_1}} \right)\), where Lon, Lat, Sog, and Cog represent the longitude, latitude, speed over ground, and course over ground of the ship, respectively. Therefore, the relative speed \(S_r\) and relative angle \(C_r\) of the two ships at this moment can be calculated as:

$$\begin{aligned} {S_r}&= \sqrt{Sog_0^2 + Sog_t^2 + 2So{g_0}So{g_t}\cos \left( {Co{g_t} - Co{g_0}} \right) }, \end{aligned}$$

$$\begin{aligned} {C_r} &= \left\{ \begin{array}{ll} Co{g_0} - \arccos \left( {\frac{{S_r^2 + Sog_0^2 - Sog_t^2}}{{2{S_r}So{g_0}}}} \right) &{} \quad Co{g_0} < Co{g_t}\\ Co{g_0} + \arccos \left( {\frac{{S_r^2 + Sog_0^2 - Sog_t^2}}{{2{S_r}So{g_0}}}} \right) &{} \quad Co{g_0} \ge Co{g_t} \end{array} \right. . \end{aligned}$$

Besides, DCPA and TCPA are defined as:

$$\begin{aligned} {\text{DCPA}}= dist*\left( {\sin \left( {{C_r} - Co{g_0} - {\text{Bearing}} - \pi } \right) } \right) , \end{aligned}$$

$$\begin{aligned} {\text{TCPA}}= dist*\left( {{{\cos \left( {{C_r} - Co{g_0} - {\text{Bearing}} - \pi } \right) } / {{S_r}}}} \right) , \end{aligned}$$

where dist is the distance between the two ships at sea, and Bearing is the angle of ship \(V_1\) relative to \(V_0\) when ship \(V_0\) is taken as the coordinate origin. The unit of DCPA is nautical miles, and the unit of TCPA is minutes. The relationships between CRI and DCPA or TCPA are defined as:

$$\begin{aligned} CR{I_d}= & {} {a_d}\exp \left( {{b_d}{\text{DCPA}}} \right) , \end{aligned}$$

$$\begin{aligned} CR{I_t}= & {} {a_t}\exp \left( {{b_t}{\text{TCPA}}} \right) , \end{aligned}$$

where the parameters a and b are adjustment coefficients estimated according to the opinions of ship experts and the watchkeepers in the ship transportation system. In this work, the parameters are set as \((a_d, b_d, a_t, b_t ) = (1.0529 , -1.5694, 1.3971, -0.0879)\) according to the movement of the objects on the sea [40]. CRI is calculated as the weighted sum of \(CRI_d\) and \(CRI_t\):

$$\begin{aligned} CRI = \alpha CR{I_d} + \beta CR{I_t}, \end{aligned}$$

where the parameters \(\alpha\) and \(\beta\) are the weights of \({\text{CRI}}_d\) and \({\text{CRI}}_t\), respectively. The sum of \(\alpha\) and \(\beta\) is 1, and their values can be set according to the specific characteristics of marine traffic applications. Whenever the state of the two ships at the next moment is predicted, the collision risk index CRI is calculated. If the CRI exceeds the collision threshold, the ship changes from the global path planning state to the local path planning state.
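The following is a small, self-contained sketch of the CRI computation described by the equations above (illustrative only: the weights \(\alpha\), \(\beta\), the collision threshold, and the use of absolute values of DCPA/TCPA are assumptions; angles are handled in radians).

```python
import math

# Coefficients (a_d, b_d, a_t, b_t) from the text; the weights and threshold are assumed here.
A_D, B_D, A_T, B_T = 1.0529, -1.5694, 1.3971, -0.0879
ALPHA_W, BETA_W, CRI_THRESHOLD = 0.5, 0.5, 0.6

def collision_risk_index(sog0, cog0, sog1, cog1, dist, bearing):
    """Compute CRI from the own ship (0) and target ship (1) kinematics.

    sog: speed over ground; cog/bearing: radians; dist: nautical miles.
    """
    # Relative speed of the two ships
    s_r = math.sqrt(sog0**2 + sog1**2 + 2 * sog0 * sog1 * math.cos(cog1 - cog0))
    s_r = max(s_r, 1e-9)                      # guard against a zero relative speed
    # Relative course (arccos argument clamped for numerical safety)
    c = math.acos(max(-1.0, min(1.0, (s_r**2 + sog0**2 - sog1**2) / (2 * s_r * sog0))))
    c_r = cog0 - c if cog0 < cog1 else cog0 + c
    # Distance and time to the closest point of approach
    dcpa = dist * math.sin(c_r - cog0 - bearing - math.pi)
    tcpa = dist * math.cos(c_r - cog0 - bearing - math.pi) / s_r
    # Risk contributions and the weighted CRI; abs() is an assumed sign convention
    cri_d = A_D * math.exp(B_D * abs(dcpa))
    cri_t = A_T * math.exp(B_T * abs(tcpa))
    return ALPHA_W * cri_d + BETA_W * cri_t

# Example: switch to second-order (local) planning when the risk is too high.
if collision_risk_index(12.0, 0.3, 10.0, 3.2, 1.5, 1.0) > CRI_THRESHOLD:
    print("collision risk detected: switch to FCC-A* local planning")
```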
In the local path planning stage, the collision model of the ship itself becomes a factor that cannot be ignored. The basic structure of the ship is shown in Fig. 5. Experts and scholars have conducted related research and proposed ship domain models suitable for different scenarios. In this work, we will first introduce the GOODWIN ship domain model.

Basic structure of the ship.

Japanese ship expert FUJII first proposed the concept of ship domain in the 1960s. FUJII used sensing equipment to collect and organize ship encounter behaviors in coastal waterways and crowded areas. Then, the ship collision avoidance trajectory data were filtered and analyzed, and finally an elliptical ship domain was obtained. The ship is located at the intersection of the long and short axes. Specifically, the long axis is 8 times the length of the deck, and the short axis is 3.2 times the length of the deck. The schematic diagram of the FUJII ship domain model is shown in Fig. 6.

FUJII ship domain model. The schematic diagram of the FUJII ship domain model proposed by Japanese ship expert FUJII in the 1960s.

Then, GOODWIN improved the FUJII model into an asymmetrical shape via marine traffic surveys and a large number of collision avoidance experiments conducted on radar simulators using crew training machines, taking into account the International Regulations for Preventing Collisions at Sea. The GOODWIN ship domain model has an asymmetric shape based on the FUJII model and consists of three sectors with different radii spliced together. The sector areas are distributed according to the range of the ship's lights. Its fan-shaped radii are 0.7 nautical miles, 0.85 nautical miles and 0.45 nautical miles, respectively. The schematic diagram of the GOODWIN ship domain model is shown in Fig. 7.

GOODWIN ship domain model. The schematic diagram of the GOODWIN ship domain model, which is an improved version of the FUJII ship domain model.

The GOODWIN model is considered to be suitable for collision avoidance of ships at sea [40]. Particularly, the GOODWIN model is safer than the COLDWELL model and the FUJII model in practical use, so the GOODWIN model is selected as the collision domain model in this study. The GOODWIN model calculates the ship domain according to the angle relationship between ships. Formally, GOODWIN is defined as:

$$\begin{aligned} {\text{GOODWIN}} = \left\{ {\begin{array}{*{20}{c}} {0.85\quad {0^ \circ } \le \theta< {{112.5}^ \circ }\quad \;\;}\\ {0.45\quad {{112.5}^ \circ } \le \theta \le {{247.5}^ \circ }}\\ {0.7\quad \;{{247.5}^ \circ }< \theta < {{360}^ \circ }} \end{array}} \right. . \end{aligned}$$

In this work, the A* algorithm will be used to obtain the locally optimal planned path of the ship. The cost function f(k) of the A* algorithm in this scenario should be expressed as the sum of the navigation distance cost and the collision cost:

$$\begin{aligned} f\left( k \right) = g\left( k \right) + h\left( k \right) , \end{aligned}$$

where \(g\left( k \right)\) is the cost of the ship's distance from the starting point, and its initial value is 0. Besides, the heuristic cost function \(h\left( k \right)\) can use a variety of methods to calculate the distance, such as the Manhattan distance, Euclidean distance, and Chebyshev distance. Considering the underactuated characteristics of the ship (the degree of freedom of the ship's control is less than the degree of freedom of its motion in the marine environment) [27], we use the sum of the Chebyshev distance and the collision cost (fuzzy collision cost, FCC) at this point as the heuristic estimated cost \(h\left( k \right)\). This gives the ship a guiding direction, and the specific expression of \(h\left( k \right)\) is:

$$\begin{aligned} h\left( k \right) = \max \left( {|{x_k} - {x_t} |, |{y_k} - {y_t} |} \right) + FCC\left( dist, \theta _1, \theta _2 \right) , \end{aligned}$$

where \((x_t, y_t)\) is the position coordinate of the target waypoint, and \((x_k, y_k)\) is the current position coordinate of the ship. Besides, angles are measured clockwise from true north over \(0^{\circ }\) to \(360^{\circ }\), and the ship direction angle is the angle between the ship's bow and the true north direction. FCC is the fuzzy collision cost based on the GOODWIN ship domain model. Next, the collision cost FCC based on the fuzzy model is introduced.

The basic operation in traditional Boolean logic is "and, or, not," which is suitable for scenarios with clear logic.
However, there is no particularly clear threshold when actually judging the distance and angle of two ships. In fuzzy logic, there are no strict boundaries between distances and angles, and the classification of different orientations is measured by the degree of membership. Specifically, the degree of membership refers to the quantitative analysis of a fuzzy research object through membership functions, and the process of transforming logical input values into membership degrees of each set is called fuzzification. The calculation of the collision risk membership function FCC can be expressed as: $$\begin{aligned} {\text{FCC}} = \frac{1}{2}{U_\theta } + \frac{1}{2}{U_{\textrm{dist}}}, \end{aligned}$$ where \(U_\theta\) is the membership function of the azimuth angle \(\theta\) between the current ship and the target ship, and \(U_{\textrm{dist}}\) is the membership function of the distance dist between the current ship and the target ship. The collision risk index encountered by the ship will change with the relative angle of the two ships. \(U_\theta\) is a function of the included angle between the two ships. According to the ship collision avoidance rule [41], the membership degree of the azimuth angle \(\theta\) between the current ship and the target ship is defined as: $$\begin{aligned} {U_\theta } = \frac{{17}}{{44}}\left[ {\cos \left( {abs\left( {{\theta _1} - {\theta _2}} \right) - {{19}^ \circ }} \right) + \sqrt{\frac{{440}}{{289}} + {{\cos }^2}\left( {abs\left( {{\theta _1} - {\theta _2}} \right) - {{19}^ \circ }} \right) } } \right] . \end{aligned}$$ Moreover, dist, the distance between the ship and the target ship, will also cause the change of the collision risk index. Combined with the GOODWIN ship domain model, the surrounding of the ship is divided into three areas, and the collision risk \(U_dist\) is calculated for each area separately, which is shown in Algorithm 3. Besides, we use the collision risk membership function FCC to modify the heuristic estimation cost function of the traditional A* algorithm, and the process of the FCC-A* algorithm is shown in Algorithm 4. In this section, we conduct extensive experiments to evaluate the proposed ship path planning model in details. Specifically, we first analyze the global path planning performance of the proposed model based on Dyna-Sarsa(\(\lambda\)). Then, the local path planning based on FCC-A* is evaluated. Analysis of global path planning based on Dyna-Sarsa(\(\lambda\)) The simulation experimental chart by rasterizing the shapefile data model of part of the Baltic Sea is shown in Fig. 8. We can observe that the experimental chart basically simulates the static obstacles in the sea area, which reflects the proposed model's ability of the sea scene modeling. Nautical chart of simulation experiments. This figure shows the simulation experimental chart by rasterizing the shapefile data model of part of the Baltic Sea, and we can observe that the experimental chart basically simulates the static obstacles in the sea area, which reflects the ability of the sea scene modeling The Q-learning algorithm, Sarsa algorithm, Sarsa(\(\lambda\)) algorithm and Dyna-Sarsa(\(\lambda\)) algorithm are introduced into the simulation chart for evaluation, and each algorithm was trained for 2000 rounds. 
In the main test of the Dyna-Sarsa(\(\lambda\)) learning algorithm, 40 rounds of simulations are performed using the Dyna learning framework, which means that the ship interacts with the simulated environment for 40 rounds to obtain simulation experiences. The experiment of each algorithm repeats 6 times, and the average value of the corresponding evaluation index is used as the final experimental result. The evaluation indicators of algorithm performance are the reward value of each iteration, the number of collisions per iteration, and the convergence speed of the algorithm. Then, we will evaluate the performance of the Dyna-Sarsa(\(\lambda\)) model. The baseline models are listed as follows: Q-learning learning model. The Q-learning model is used as the benchmark reference model for the ablation experiments in this section. Sarsa learning model. The Sarsa learning model is an online improvement in the Q-learning model, and it is more cautious in exploration than Q-learning. Sarsa(\(\lambda\)) learning model. Sarsa(\(\lambda\)) is a learning model obtained by improving the round update method of Sarsa with eligibility traces. Dyna-Sarsa(\(\lambda\)) learning model. The proposed model that uses the Dyna learning framework to enable the Sarsa(\(\lambda\)) learning model to gain simulation experience. Figures 9 and 10 show the performance of different models on the simulation experiment charts, and the visualization results of path planning are shown in Fig. 11. Comparison of ship reward value for each iteration. The performance comparison of ship reward value achieved by the proposed model and baselines for each iteration Comparison of the number of ship collisions in each iteration. The performance comparison of the number of ship collisions between the proposed model and baselines in each iteration Path planning simulation of four types of learning model. The visualization results of path planning simulation achieved by the proposed model and baselines Specifically, we can observe that: (1) the convergence speed and average reward of the Sarsa learning model and the Q-learning model are quite similar. However, the number of collisions per round of the Sarsa model is less than that of the Q-learning model with an average decrease of 9.8%, which indicates safer navigation. The main reason is that Sarsa is sensitive to the penalty value brought by the collision and adopts a more cautious strategy. Therefore, the safety of the Sarsa-based model is higher than that of the Q-learning model in the application of ship path planning. (2) The eligibility trace can effectively improve the convergence speed of the Sarsa model. The Sarsa(\(\lambda\)) model converges in about 900 rounds, and the corresponding Sarsa model and Q-learning model basically reach the convergence after 2000 rounds. The reason is that the eligibility trace can mark the value of the positions at different distances from the target point, which can guide the state transition selection in the subsequent rounds and help the ship to find the optimal solution faster. (3) The Dyna learning framework can improve the convergence speed of the Sarsa(\(\lambda\)) model. Specifically, compared with the Sarsa(\(\lambda\)) model, which converges in about 900 rounds, the Dyna-Sarsa(\(\lambda\)) learning model reaches the convergence state in about 500 rounds. Besides, the average number of collisions per round decreased by 73.3% compared with the Sarsa model. 
This is because the Dyna learning framework can help the Sarsa(\(\lambda\)) learning model to gain experience from the simulated environment, and the simulated experience can guide the ship to choose the optimal path. (4) As shown in Fig. 10, in the trajectory planning diagram of the four types of learning models, the Q-learning model tends to perform path planning through the right channel closer to the target point, while Sarsa and Sarsa(\(\lambda\)) are more likely to perform path planning through the wide left waterway. The results reflect that the Sarsa-related model is sensitive to collision risk and will abandon closer paths to avoid obstacles. The Dyna-Sarsa(\(\lambda\)) learning model uses the Q-learning algorithm in the stage of acquiring simulation experience, so it can plan shorter paths under the premise of ensuring safety. In conclusion, the experimental results show that the learning model based on Sarsa has higher navigation safety, and both the eligibility trace and the Dyna learning framework can effectively improve the convergence speed in the experiments. Analysis of local path planning based on FCC-A* In order to provide a local path planning scheme when the ship encounters the danger of collision, we propose a dynamic local path planning model based on the FCC-A* algorithm with trajectory prediction for a collision risk identification method. In this section, we first evaluate the method for calculating the collision risk of encountering ships. Then, the FCC-A* algorithm is evaluated with the path planning time and the number of path collisions. Specifically, we select the trajectory data of two encountering ships in the Baltic Sea summer ship trajectory data for evaluations. Figure 12 shows the visualization of ship trajectories in part of the Baltic Sea. The dashed box is the port of Helsingborg, which has the characteristics of dense ships, complex historical trajectories, and high possibility of ship collision events. Therefore, we select the ship trajectory data that have performed the collision avoidance operation in this port as the experimental dataset. Visualization of summer ship trajectories in the Baltic Sea. This figure shows the visualization of ship trajectories in part of the Baltic Sea. The dashed box is the port of Helsingborg, which has the characteristics of dense ships, complex historical trajectories, and high possibility of ship collision events Firstly, we train a Stacked-BiGRUs trained on the Baltic Sea summer ship trajectory dataset, and use the trained model to predict the trajectories of the two encountering ships. The prediction results are shown in Fig. 13. The blue trajectory in the figure is the historical trajectory of the blue ship, and its route direction is the direction of the blue arrow. Besides, the red line is the red ship's historical trajectory, and the sailing direction is the along the red arrow. The green trajectory is the predicted trajectory of the blue ship from a certain moment. Visualization of improved ship trajectory prediction (green trajectory) based on Stacked-BiGRUs. Firstly, we train a Stacked-BiGRUs trained on the Baltic Sea summer ship trajectory dataset and use the trained model to predict the trajectories of the two encountering ships. The prediction results of the two encountering ships by the pretrained model are shown in this figure. The blue trajectory in the figure is the historical trajectory of the blue ship, and its route direction is the direction of the blue arrow. 
Besides, the red line is the red ship's historical trajectory, and the sailing direction is the along the red arrow. The green trajectory is the predicted trajectory of the blue ship from a certain moment Since ship collision avoidance is an emergency event, the relevant trajectory data account for a very low proportion in the ship trajectory training set. Therefore, the trajectory predicted by the ship trajectory prediction model is the usual maneuvering behavior of the ship. Generally, the crew is not aware of the danger of collision during the actual navigation, and proceed according to the original route. The red ships in Fig. 13 sail directly, and the blue ships should give way. If the blue ship does not perform the collision avoidance operation according to the blue trajectory in the figure, but continues to sail according to the predicted green trajectory, it will collide with the red ship at the yellow lightning mark and cause a marine traffic accident. In this situation, it is necessary to establish a collision risk identification mechanism first. Specifically, the collision risk index of the ship should be calculated in combination with the collision risk index by predicting the ship's future route. If the collision risk index of the ship exceeds the set threshold, it is determined that the ship has an accident risk, and collision avoidance operations need to be performed in this area. Therefore, the ship collision risk should be firstly identified based on the improved ship trajectory prediction model. Since the closest encounter distance of the ship needs to be more than one nautical mile, the collision risk index (CRI) between the next six predicted positions of the direct ship and the give-way ship is calculated to identify collision risks in time and carry out local path planning. Table 2 shows the changes of DCPA, TCPA and CRI of the six predicted positions of the two ships. It can be seen that the CRI value of Step 3 corresponding to the yellow mark has reached about 0.8. Note that the value range of CRI is 0 to 1, and the larger the value, the higher the collision risk. Particularly, if the value exceeds 0.5, the ship has a collision risk [42]. Therefore, it can be seen from the table that at Step 2, the static global path of the ship needs to be switched to local dynamic path planning. Table 2 Collision risk correlation index change Before evaluating the FCC-A* algorithm, we first set the related parameters. The search range of the traditional A* algorithm is eight grids around a grid. However, the search direction is simplified, and only the three directions, i.e., front, left and right, are searched due to the under-driven property of marine ships. At the same time, the ship cannot turn in the direction that is a boundary or a marine obstacle. These constraints narrow the search scope of the algorithm, thereby reducing the algorithm execution time and ensuring that the planned path meets the navigation requirements of the ship. As shown in Fig. 14, the collision avoidance scene is rasterized according to the sea area scene modeling method. The black squares in the figure represent the terrain of the sea area, which are static obstacles. Besides, the red squares are the ship sailing directly, and the blue squares are the ship that should make way. 
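The restricted expansion just described, which searches only ahead, to port, and to starboard of the current heading and never into a boundary or obstacle cell, can be sketched as a neighbour generator for the grid search. The heading encoding and the obstacle test below are illustrative assumptions rather than the authors' implementation.

```python
# Eight grid headings, indexed clockwise starting from north.
HEADINGS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def neighbours(pos, heading_idx, is_obstacle):
    """Candidate next cells: straight ahead, 45 degrees to port, and
    45 degrees to starboard of the current heading, skipping any cell
    that is a boundary or a marine obstacle."""
    x, y = pos
    candidates = []
    for turn in (-1, 0, 1):                 # port, ahead, starboard
        idx = (heading_idx + turn) % 8
        dx, dy = HEADINGS[idx]
        nxt = (x + dx, y + dy)
        if not is_obstacle(nxt):
            candidates.append((nxt, idx))
    return candidates
```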
In this experiment, the direct-sailing ship simulates the navigation process by printing its historical trajectory in real time, and the goal of the avoidance ship is to reach the green star without colliding with the direct-sailing ship and obstacles. Local dynamic path planning scenario. The collision avoidance scene is rasterized according to the sea area scene modeling method. The black squares in the figure represent the terrain of the sea area, which are static obstacles. Besides, the red squares are the ship sailing directly, and the blue squares are the ship that should make way The experimental results of the traditional A* algorithm are shown in Fig. 15, and the planning results of the FCC-A* algorithm are shown in Fig. 16. We can observe that the traditional A* algorithm can avoid all static obstacles well, and the path length is also optimal. However, it is difficult for traditional A* algorithm to effectively avoid the straight ships whose position changes dynamically in each round, resulting in the collision between the give-way ship and the direct-sailing ship at the yellow sign. Besides, the proposed FCC-A* algorithm considers the collision field between the give-way ship and the direct-sailing ship, and the collision risk is incorporated into the cost function of the A* algorithm as a membership function. Therefore, the give-way ship will consider the cost of collision with the direct-sailing ship in every step, and FCC-A* algorithm can help to avoid the risk of collision. Path planning results of traditional A* algorithm. The experimental results of the traditional A* algorithm are shown in this figure. We can observe that the traditional A* algorithm can avoid all static obstacles well, and the path length is optimal. However, it is difficult for traditional A* algorithm to effectively avoid the straight ships whose position changes dynamically in each round, resulting in the collision between the give-way ship and the direct-sailing ship at the yellow sign Path planning results of FCC-A*. The planning results of the FCC-A* algorithm are shown in this figure. The FCC-A* algorithm considers the collision field between the give-way ship and the direct-sailing ship, and the collision risk is incorporated into the cost function of the A* algorithm as a membership function. Therefore, the give-way ship will consider the cost of collision with the direct-sailing ship in every step, thereby avoiding the risk of collision Furthermore, we compare the traditional A* algorithm and the FCC-A* algorithm in the case of two ships meeting on 6 local path planning experiments with different scales. The experimental results of the planning time are given in Table 3. It can be observed that the calculation time of the FCC-A* algorithm is about 30% higher than that of the traditional A* algorithm due to the component of the fuzzy model in FCC-A*. However, since the search speed of the A* algorithm is quite fast, the delay of the FCC-A* algorithm is acceptable. Table 3 Comparison of planning time of two algorithms (unit: seconds) Table 4 presents the comparison of the number of collisions between the two algorithms at different scales. In particular, the collision refers to the number of times the grid positions of the give-way ship and the direct-sailing ship overlap at the same time. 
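Under that definition, the collision count reported in Table 4 reduces to counting the time steps at which the two ships occupy the same grid cell. A minimal sketch, assuming both trajectories are lists of grid coordinates sampled at the same time steps:

```python
def count_collisions(giveway_path, direct_path):
    """Number of time steps at which the give-way ship and the
    direct-sailing ship occupy the same grid cell simultaneously."""
    return sum(1 for p, q in zip(giveway_path, direct_path) if p == q)

# Example: two 5-step trajectories that overlap once, at step 3.
giveway = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
direct  = [(6, 0), (5, 1), (4, 2), (3, 3), (2, 4)]
print(count_collisions(giveway, direct))  # -> 1
```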
We can observe that the path of the give-way ship planned by the traditional A* algorithm has 3 to 4 collisions with the direct ship on average, while the planned path of the FCC-A* algorithm basically has no collision. This is because the FCC-A* algorithm uses the membership function calculation to quantify the collision risk of the two ships and incorporate it as part of the cost calculation. Therefore, the dynamic path planning process considers the collision risk at each moment, which greatly reduces the collision risk between the give-way ship and the direct ship. Table 4 Comparison of collision times between two algorithms (unit: times) In conclusion, the experimental results show that the dynamic path local planning model based on FCC-A* has a slight loss in planning speed compared with the traditional algorithm, but its planned path is safer than that of the traditional algorithm. In this work, a second-order ship path planning model is proposed to address the problem of sea area scene modeling and the slow speed and low safety of ship path planning. Specifically, we first create a raster map with ArcGIS, and the global path planning is performed on the raster map based on the Dyna-Sarsa(\(\lambda\)) model, which integrates the eligibility trace and the Dyna framework on the Sarsa algorithm. Particularly, the eligibility trace is adopted to improve the convergence speed of the model. Meanwhile, the Dyna framework obtains simulation experience through simulation training, which can further improve the convergence speed of the model. Then, the improved ship trajectory prediction model is used to identify the risk of ship collision and switch the path planning from the first order to the second order. Finally, the second-order dynamic local path planning is implemented based on the FCC-A* algorithm, where the cost function of the traditional path planning A* algorithm is rewritten using the fuzzy collision cost membership function to reduce the collision risk of ships. The proposed model is evaluated on the Baltic Sea geographic information and ship trajectory datasets, and the experimental results show the effectiveness of the proposed model. In the future, we plan to adopt federated learning model [43] for privacy protection of each ship without influencing the path planning performance. Besides, collaborative learning of local and global features [44, 45] can be used to guide each ship to plan a safe path through the collaborative collision avoidance of multiple ships. Moreover, edge computing techniques [46] can be also applied in the field of ship path planning to further improve the perception ability and decision-making ability and improve the efficiency and safety of ship path planning. The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request. https://unctad.org/webflyer/review-maritime-transport-2021. Fuzzy collision cost BiGRU: Bidirectional gated recurrent unit Simulated annealing APF: Artificial potential field CA: ACA: Ant colony algorithm Genetic algorithm RL: DRL: USV: Unmanned surface vehicle Collision risk index DEM: Digital elevation model Eligibility trace Accumulating trace RT: Replacing trace True online trace TD-error: Temporal difference learning error DCPA: Distance to closest point of approach TCPA: Time to closest point of approach X. Wu, L. Zhang, M. Luo, Current strategic planning for sustainability in international shipping. Environ. Dev. Sustain. 22(3), 1729–1747 (2020) C. Baker, D. 
McCafferty, Accident database review of human element concerns: what do the results mean for classification, in Proceedings of International Conference on Human Factors in Ship Design and Operation, RINA (2005). Citeseer M.R. Benjamin, J.A. Curcio, Colregs-based navigation of autonomous marine vehicles, in 2004 IEEE/OES Autonomous Underwater Vehicles (IEEE Cat. No. 04CH37578), pp. 32–39 (2004). IEEE B. Wu, T. Cheng, T.L. Yip, Y. Wang, Fuzzy logic based dynamic decision-making system for intelligent navigation strategy within inland traffic separation schemes. Ocean Eng. 197, 106909 (2020) H.-y. Zhang, W.-m. Lin, A.-x. Chen, Path planning for the mobile robot: a review. Symmetry 10(10), 450 (2018). https://doi.org/10.3390/sym10100450 R. Zeng, Y. Wang, A chaotic simulated annealing and particle swarm improved artificial immune algorithm for flexible job shop scheduling problem. EURASIP J. Wirel. Commun. Netw. 2018(1), 1–10 (2018) Q. Xu, J. Wang et al., Study on optimization of aquatic product transportation route in Haikou area based on simulated annealing algorithm. Adv. Comput. Signals Syst. 5(1), 71–74 (2021) S. Xiao, X. Tan, J. Wang, A simulated annealing algorithm and grid map-based UAV coverage path planning method for 3D reconstruction. Electronics 10(7), 853 (2021) H. Miao, Y.-C. Tian, Dynamic robot path planning using an enhanced simulated annealing approach. Appl. Math. Comput. 222, 420–437 (2013) MATH Google Scholar L. Wang, J. Guo, Q. Wang, J. Kan, Ground robot path planning based on simulated annealing genetic algorithm, in 2018 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), pp. 417–4177 (2018). IEEE U. Orozco-Rosas, O. Montiel, R. Sepúlveda, Mobile robot path planning using membrane evolutionary artificial potential field. Appl. Soft Comput. 77, 236–251 (2019) Z. Zhu, H. Lyu, J. Zhang, Y. Yin, An efficient ship automatic collision avoidance method based on modified artificial potential field. J. Mar. Sci. Eng. 10(1), 3 (2021) S. Feng, Y. Qian, Y. Wang, Collision avoidance method of autonomous vehicle based on improved artificial potential field algorithm. Proc. Inst. Mech. Eng. Part D J. Automobile Eng. 235(14), 3416–3430 (2021) A. Vagale, R. Oucheikh, R.T. Bye, O.L. Osen, T.I. Fossen, Path planning and collision avoidance for autonomous surface vehicles I: a review. J. Mar. Sci. Technol. 1–15 (2021) W. Deng, J. Xu, H. Zhao, An improved ant colony optimization algorithm based on hybrid strategies for scheduling problem. IEEE Access 7, 20281–20292 (2019) L. Yue, H. Chen, Unmanned vehicle path planning using a novel ant colony algorithm. EURASIP J. Wirel. Commun. Netw. 2019(1), 1–9 (2019) Y. Su, J. Liu, X. Xiang, X. Zhang, A responsive ant colony optimization for large-scale dynamic vehicle routing problems via pheromone diversity enhancement. Complex Intell. Syst. 7(5), 2543–2558 (2021) S. Zhang, J. Pu, Y. Si, L. Sun, Path planning for mobile robot using an enhanced ant colony optimization and path geometric optimization. Int. J. Adv. Robot. Syst. 18(3), 17298814211019222 (2021) S. Katoch, S.S. Chauhan, V. Kumar, A review on genetic algorithm: past, present, and future. Multimedia Tools Appl. 80(5), 8091–8126 (2021) X. Sui, D. Liu, L. Li, H. Wang, H. Yang, Virtual machine scheduling strategy based on machine learning algorithms for load balancing. EURASIP J. Wirel. Commun. Netw. 2019(1), 1–16 (2019) Y.V. Pehlivanoglu, P. Pehlivanoglu, An enhanced genetic algorithm for path planning of autonomous UAV in target coverage problems. 
Appl. Soft Comput. 112, 107796 (2021) N.A. Shiltagh, K.S. Ismail, Z.Q. Habeeb, A modified genetic algorithm path planning for intelligent autonomous mobile robot. Invent. Rapid Algorithm (2012) C. Li, W. Li, J. Ning, Calculation of ship collision risk index based on adaptive fuzzy neural network, in Proceddings of the 2018 3rd International Conference on Modeling, Simulation and Applied Mathematics (MSAM 2018), vol. 160, pp. 223–227 (2018) J. Ning, H. Chen, T. Li, W. Li, C. Li, Colregs-compliant unmanned surface vehicles collision avoidance based on multi-objective genetic algorithm. IEEE Access 8, 190367–190377 (2020) V. François-Lavet, P. Henderson, R. Islam, M.G. Bellemare, J. Pineau et al., An introduction to deep reinforcement learning. Found. Trends® Mach. Learn. 11(3–4), 219–354 (2018) Article MATH Google Scholar Z. Chen, X. Wang, Decentralized computation offloading for multi-user mobile edge computing: a deep reinforcement learning approach. EURASIP J. Wirel. Commun. Netw. 2020(1), 1–21 (2020) H. Shen, H. Hashimoto, A. Matsuda, Y. Taniguchi, D. Terada, Automatic collision avoidance of ships in congested area based on deep reinforcement learning, in Conference Proceedings, the Japan Society of Naval Architects and Ocean Engineers, pp. 651–656 (2017) L. Li, D. Wu, Y. Huang, Z.-M. Yuan, A path planning strategy unified with a colregs collision avoidance function based on deep reinforcement learning and artificial potential field. Appl. Ocean Res. 113, 102759 (2021) J. Gao, W. Ye, J. Guo, Z. Li, Deep reinforcement learning for indoor mobile robot path planning. Sensors 20(19), 5493 (2020) F. Duchoň, A. Babinec, M. Kajan, P. Beňo, M. Florek, T. Fico, L. Jurišica, Path planning with modified a star algorithm for a mobile robot. Procedia Eng. 96, 59–69 (2014) G. Tang, C. Tang, C. Claramunt, X. Hu, P. Zhou, Geometric a-star algorithm: an improved a-star algorithm for AGV path planning in a port environment. IEEE Access 9, 59196–59210 (2021) R. Song, Y. Liu, R. Bucknall, Smoothed a* algorithm for practical unmanned surface vehicle path planning. Appl. Ocean Res. 83, 9–20 (2019) B. Guo, Z. Kuang, J. Guan, M. Hu, L. Rao, X. Sun, An improved a-star algorithm for complete coverage path planning of unmanned ships. Int. J. Pattern Recognit. Artif. Intell. 36(03), 2259009 (2022) Y. Singh, S. Sharma, R. Sutton, D. Hatton, A. Khan, A constrained a* approach towards optimal path planning for an unmanned surface vehicle in a maritime environment containing dynamic obstacles and ocean currents. Ocean Eng. 169, 187–201 (2018) Y. Xu, J. Zhang, Y. Ren, Y. Zeng, J. Yuan, Z. Liu, L. Wang, D. Ou, Improved vessel trajectory prediction model based on stacked-bigrus. Secur. Commun. Netw. 2022 (2022) S. Ahuja, N.A. Shelke, P.K. Singh, A deep learning framework using CNN and stacked bi-GRU for covid-19 predictions in India. SIViP 16(3), 579–586 (2022) T. Alfakih, M.M. Hassan, A. Gumaei, C. Savaglio, G. Fortino, Task offloading and resource allocation for mobile edge computing by deep reinforcement learning based on sarsa. IEEE Access 8, 54074–54084 (2020) R.S. Sutton, A.G. Barto, Reinforcement Learning: An Introduction (MIT Press, Cambridge, 2018) B. Li, F.-W. Pang, An approach of vessel collision risk assessment based on the d-s evidence theory. Ocean Eng. 74, 16–21 (2013) E.M. Goodwin, A statistical study of ship domains. J. Navig. 28(3), 328–344 (1975) Y. Huang, L. Chen, P. Chen, R.R. Negenborn, P. Van Gelder, Ship collision avoidance methods: State-of-the-art. Saf. Sci. 121, 451–473 (2020) R. Zhen, M. 
Riveiro, Y. Jin, A novel analytic framework of real-time multi-vessel collision risk assessment for maritime traffic surveillance. Ocean Eng. 145, 492–501 (2017) Y. Yin, Y. Li, H. Gao, T. Liang, Q. Pan, FGC: GCN based federated learning approach for trust industrial service recommendation. IEEE Trans. Ind. Inform. (2022) H. Gao, X. Qin, R.J.D. Barroso, W. Hussain, Y. Xu, Y. Yin, Collaborative learning-based industrial iot api recommendation for software-defined devices: the implicit knowledge discovery perspective. IEEE Trans. Emerg. Top. Comput. Intell. (2020) H. Gao, K. Xu, M. Cao, J. Xiao, Q. Xu, Y. Yin, The deep features and attention mechanism-based method to dish healthcare under social IoT systems: an empirical study with a hand-deep local-global net. IEEE Trans. Comput. Soc. Syst. 9(1), 336–347 (2021) Y. Yin, Z. Cao, Y. Xu, H. Gao, R. Li, Z. Mai, QoS prediction for service recommendation with features learning in mobile edge computing environment. IEEE Trans. Cognit. Commun. Netw. 6(4), 1136–1145 (2020) This study was supported in part by the Zhejiang Key Research and Development Program under grants 2021C03187, the Open Research Project Fund of Key Laboratory of Marine Ecosystem Dynamics, Ministry of Natural Resources under Grants MED202202, and the National Natural Science Foundation of China under Grants J2024009 and 62072146. School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China Junfeng Yuan, Jian Wan, Xin Zhang, Yang Xu, Yan Zeng & Yongjian Ren Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Beijing, China Junfeng Yuan, Xin Zhang, Yang Xu, Yan Zeng & Yongjian Ren School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou, 310023, China Jian Wan Junfeng Yuan Xin Zhang Yang Xu Yan Zeng Yongjian Ren JY, JW, and YX proposed the path planning model and designed the system. XZ and YX implemented the simulation. JY, XZ, and YZ wrote the paper. JW, XZ, YZ, and YR revised the manuscript. All authors read and approved the final manuscript. Correspondence to Xin Zhang or Yongjian Ren. Yuan, J., Wan, J., Zhang, X. et al. A second-order dynamic and static ship path planning model based on reinforcement learning and heuristic search algorithms. J Wireless Com Network 2022, 128 (2022). https://doi.org/10.1186/s13638-022-02205-4 Ship path planning Ship navigation Sailing safety Heuristic search Advances Industrial Mobile Communications Over Edge Computing: Challenges and Emerging Applications
What is an Adjunction? Part 1 (Motivation) Some time ago, I started a "What is...?" series introducing the basics of category theory: "What is a category?" "What is a functor?" Part 1 and Part 2 "What is a natural transformation?" Part 1 and Part 2 Today, we'll add adjunctions to the list. An adjunction is a pair of functors that interact in a particularly nice way. There's more to it, of course, so I'd like to share some motivation first. And rather than squeezing the motivation, the formal definition, and some examples into a single post, it will be good to take our time: Today, the motivation. Next time, the formal definition. Afterwards, I'll share examples. Indeed, I will make the admittedly provocative claim that adjointness is a concept of fundamental logical and mathematical importance that is not captured elsewhere in mathematics. - Steve Awodey (in Category Theory, Oxford Logic Guides) So, what is an adjunction? Mathematics is often concerned with pinning down an appropriate notion of "sameness" and asking the question, "When are two things the same?" (I am reminded of Jim Propp's excellent essay "Who Knows 2?") Category theory, in particular, shines brightly in this arena. It provides us with better words with which to both ask and answer this question and leads us to the notion of isomorphism: Two objects $X$ and $Y$ in a category are isomorphic if there is a morphism from one to the other that has both a left and a right inverse. An isomorphism, then, is like a process $X\to Y$ that can be completely reversed. When such a process exists, the objects are isomorphic. Sometimes, however, isomorphism is not the notion you want to work with. What if a process isn't exactly reversible on the nose, yet—given your goals—you'd prefer not to distinguish between the original and final states? This arises when we wish to compare two categories: When are categories $\mathsf{C}$ and $\mathsf{D}$ isomorphic? Given a functor $F\colon \mathsf{C}\to\mathsf{D}$, can I find a functor $G\colon \mathsf{D}\to\mathsf{C}$ so that $$FG=\text{id}_{\mathsf{D}} \qquad\text{and} \qquad GF=\text{id}_\mathsf{C} \quad ?$$ (Here, $\text{id}_\mathsf{C}\colon \mathsf{C}\to\mathsf{C}$ is the identity functor on $\mathsf{C}$, and similarly for $\text{id}_\mathsf{D}$.) Oftentimes, the answer will be "no." That is, equality is often too demanding to be useful in mathematics. Here's an easy illustration: if $X$ and $Y$ are sets then $X\times Y$ is not equal to $Y\times X$. Why? Simply because sticking an $x$ in the first slot is not the same as sticking an $x$ in the second slot: $(x,y)\neq (y,x)$ for elements $x\in X$ and $y\in Y$. But if you simply want "the set of pairs of elements in $X$ and $Y$" then you'll be satisfied knowing that although $X\times Y$ and $Y\times X$ are not equal, they are isomorphic. That is, replacing equalities with isomorphisms provides us with desired flexibility. Isomorphisms rather than equalities are thus the tool of choice in category theory. With that in mind, let's revisit the equations above: $FG=\text{id}_{\mathsf{D}}$ and $GF=\text{id}_\mathsf{C}.$ When we replace these equalities with natural isomorphisms $$FG\cong \text{id}_{\mathsf{D}} \qquad\text{and} \qquad GF\cong \text{id}_\mathsf{C}$$ then $\mathsf{C}$ and $\mathsf{D}$ are called equivalent categories. Equivalence, then, is a better notion of "sameness" when comparing categories. Let's take this one step further. 
We've just exchanged equalities $=$ for isomorphisms $\cong$, so what if we take this a step further and exchange the isomorphisms for regular morphisms? If we replace the natural isomorphisms with natural transformations $$FG \to \text{id}_{\mathsf{D}} \qquad\text{and} \qquad \text{id}_\mathsf{C}\to GF$$ then the categories may no longer be equivalent. But this setup is still of great interest. It is called an adjunction, and $F$ and $G$ are called adjoint functors. Well, almost. We also ask that the two natural transformations relate to each other in a nice way. But we'll get to that next time. Amazingly, there is much that follows from this simple adjustment. That is, by simply replacing equality $=$ with a (not necessarily invertible arrow) $\to$ we've opened up a vast world of mathematical possibilities. By way of analogy... This happens elsewhere in mathematics, too. By "this" I mean the act of finding something interesting after loosening up a strict notion of sameness. In topology, for example, one is interested in distinguishing topological spaces. This amounts to asking if there is a homeomorphism—an isomorphism in the category of topological spaces—between them. Homeomorphisms are very nice, but there is a relaxed version known as a homotopy equivalence. Homotopy equivalence is a weaker notion than homeomorphism, so you might think it's no good. Au contraire. These weak equivalences pave the way for the deeply rich field of homotopy theory. So I like to have this analogy in mind: I'm reminded of this idea in linear algebra, as well, though in a tangential sort of way. Suppose $U$ is an orthogonal $n\times n$ matrix. It represents an invertible linear map $\mathbb{R}^n\to\mathbb{R}^n$ whose inverse is precisely the adjoint $U^*$ of $U$. That is, $UU^*=I=U^*U$. Now imagine omitting some of the columns of $U$ so that it's an $n\times k$ rectangular matrix with $k<n$. This is not an outrageous request. Perhaps $U$ is the matrix obtained from a singular value decomposition, say $M=UDV^*$, whose smaller singular values you wish to disregard for some data compression task. This truncated $U$ is then a map from a smaller space $\mathbb{R}^k$ into a larger space $\mathbb{R}^n$. The remaining $n-k$ columns are still orthogonal, so $U^*U=I_k$, where $I_k$ is the identity on $\mathbb{R}^k$. Intuitively, this says that if you inject $\mathbb{R}^k$ as a subspace into $\mathbb{R}^n$, then project back onto it, you've not done anything at all. On the other hand, $UU^*$ is no longer the identity on $\mathbb{R}^n$. Intuitively, you can't squish all of $\mathbb{R}^n$ onto $\mathbb{R}^k$ and hope to undo the distortion damage. So $U^*$ is no longer the inverse of $U$, yet both matrices still encode valuable information about the data you're interested in. Speaking of linear maps and their adjoints, you might recall a special equality that relates them. If $f\colon V\to W$ is a linear map of Hilbert spaces, then its adjoint $f^*\colon W\to V$ satisfies the inner product equation $$\langle f\mathbf{v},\mathbf{w}\rangle = \langle \mathbf{v},f^*\mathbf{w}\rangle \qquad \text{for all $\mathbf{v}\in V$ and $\mathbf{w}\in W$}$$ As we'll see next time, an adjunction consists of a pair of functors that satisfy a nearly identical equation. For this reason, the functors participating in an adjunction are called adjoint functors. In summary, relaxing a notion of "sameness" gives us extra currency with which to explore new phenomena. So don't think of adjunctions as mere equivalence-wannabes. 
Instead, think of them as top notch, high class citizens in the categorical landscape. As Awodey shares in the same text quoted above, [The notion of adjoint functor] captures an important mathematical phenomenon that is invisible without the lens of category theory. Next time, we'll unwind the definition a bit more and, with the lens of category theory, be able to spot several examples in mathematics.
A dynamical systems model of progesterone receptor interactions with inflammation in human parturition Douglas Brubaker1, Alethea Barbaro2, Mark R. Chance1 & Sam Mesiano3 Progesterone promotes uterine relaxation and is essential for the maintenance of pregnancy. Withdrawal of progesterone activity and increased inflammation within the uterine tissues are key triggers for parturition. Progesterone actions in myometrial cells are mediated by two progesterone receptor (PR) isoforms, PR-A and PR-B, that function as ligand-activated transcription factors. PR-B mediates relaxatory actions of progesterone, in part, by decreasing myometrial cell responsiveness to pro-inflammatory stimuli. These same pro-inflammatory stimuli promote the expression of PR-A which inhibits the anti-inflammatory activity of PR-B. Competitive interaction between the progesterone receptors then augments myometrial responsiveness to pro-inflammatory stimuli. The interaction between PR-B transcriptional activity and inflammation in the pregnancy myometrium is examined using a dynamical systems model in which quiescence and labor are represented as phase-space equilibrium points. Our model shows that PR-B transcriptional activity and the inflammatory load determine the stability of the quiescent and laboring phenotypes. The model is tested using published transcriptome datasets describing the mRNA abundances in the myometrium before and after the onset of labor at term. Surrogate transcripts were selected to reflect PR-B transcriptional activity and inflammation status. The model coupling PR-B activity and inflammation predicts contractile status (i.e., laboring or quiescent) with high precision and recall and outperforms uncoupled single and two-gene classifiers. Linear stability analysis shows that phase space bifurcations exist in our model that may reflect the phenotypic states of the pregnancy uterus. The model describes a possible tipping point for the transition of the quiescent to the contractile laboring phenotype. Our model describes the functional interaction between the PR-A:PR-B hypothesis and tissue level inflammation in the pregnancy uterus and is a first step in more sophisticated dynamical systems modeling of human partition. The model explains observed biochemical dynamics and as such will be useful for the development of a range of systems-based models using emerging data to predict preterm birth and identify strategies for its prevention. Preterm birth (PTB) causes the majority of neonatal mortality and morbidity and is a major public health and socioeconomic problem worldwide [1, 2]. To prevent PTB, a clear understanding is needed of the hormonal interactions and signaling pathways that control the uterine contractile state. For most of pregnancy the myometrium (uterine muscle) is maintained in a relaxed and quiescent state to accommodate the growing conceptus. Parturition is initiated by a dramatic phenotypic transformation of the myometrium to the laboring state wherein it becomes the rhythmically contracting engine for birth. It is generally considered that the contractile state of the myometrium is controlled by the balance between the relaxatory influences of the steroid hormone progesterone and pro-labor stimuli, especially tissue-level inflammatory stimuli within the myometrium [6]. Progesterone is essential for the establishment and maintenance of pregnancy and its withdrawal is the principle trigger for parturition [3–7]. 
Multiple studies support the concept that parturition is associated with increased tissue-level inflammation within the myometrium, decidua, and cervix [8–10]. Actions of progesterone in myometrial cells are mediated by two progesterone receptor (PR) isoforms, designated PR-A and PR-B, that function as ligand activated transcription factors with PR-B exhibiting stronger transcriptional activity than PR-A. In vitro studies show that PR-A acts as a repressor of progesterone responsiveness by inhibiting the transcriptional activity of PR-B at certain promoters [11–13]. In most species progesterone withdrawal occurs by a decrease in circulating progesterone levels [14–18]. Human parturition occurs without systemic progesterone withdrawal, and instead is thought to involve decreased responsiveness of the myometrial cell to PR mediated progesterone actions resulting in a functional progesterone withdrawal [8, 19]. Previous studies have shown that most of human pregnancy, progesterone via PR-B promotes uterine quiescence, in part by inhibiting the responsiveness of myometrial cells to pro-inflammatory stimuli and preventing tissue level inflammation, and that functional progesterone withdrawal at parturition is caused by increased PR-A-mediated trans-repression of PR-B [8, 19, 20]. As pregnancy advances, the capacity for PR-B to mediate relaxatory and anti-inflammatory actions of progesterone on the pregnancy myometrium decreases due to increased repression by PR-A [20]. Interestingly, the amount and transrepressive activity of PR-A in myometrial cells is increased by pro-inflammatory stimuli suggesting a causal link between inflammation and PR-A-mediated functional progesterone withdrawal [21]. Thus, our working model for functional progesterone withdrawal in the control of human parturition posits that PR-B-mediated progesterone actions in the myometrium gradually decreases with advancing gestation in response to gradual increases in PR-A in response to increased inflammatory load. This mechanism is referred to as the PR-A:PR-B hypothesis for functional progesterone withdrawal [8, 20, 22]. Dynamical systems modeling uses fixed rules to describe the behavior of a system as its interacting components change with time. This framework has been used to examine the temporal activity of multiple biological systems including epidemics [23], predator-prey population interactions [24], chemical kinetics, protein phosphorylation, and cell signaling pathways [25, 26]. When the mechanism underlying the dynamics of a system is not well understood, a dynamical systems model can be useful for determining whether a particular a particular set of hypotheses that underly the model constitute a plausible mechanism by examining if the predictions of that model are borne out by the data. For all these reasons, dynamical systems are well suited for modeling the process of parturition where the myometrium undergoes a dramatic phenotypic bifurcation as it changes from the quiescent to laboring phenotype and the precise mechanism for this transformation is not yet known. Herein we present a dynamical systems model consistent with the PR-A:PR-B hypothesis that links PR-B activity and inflammatory status in the myometrium at term. PR-B and inflammation were each modeled with a differential equation describing their activation and generation rates, their limiting behavior, and how they interact in association with the onset of labor. 
The model was robust when tested using published transcriptome datasets from quiescent and laboring myometrium and predicted contractile status (i.e, laboring or quiescent) with high precision using a novel classifier developed from the model. This simple model is a first step in producing patient specific pregnancy trajectories to predict the onset of labor and provides a framework to clinically assess women at risk of preterm birth. Model definition A host of experimental data has been collected which links PR-A, PR-B, and inflammatory drivers in the pregnancy uterus [7, 8, 20–22, 27, 32]. We translated the principles of the PR-A:PR-B hypothesis into equations which could be used to mathematically explore the dynamics and consistency of the biological hypothesis of how the progesterone receptors interact with inflammation during pregnancy. In essence, the PR-A:PR-B hypothesis describes a standard competitive interaction between the pro-pregnancy actions of PR-B and the pro-labor actions of PR-A where the activity of PR-A is related to the level of inflammation in the myometrium. As such, we chose to consider only PR-B and inflammation and incorporated the effects of PR-A into the inflammatory terms of the model (Fig. 1 a). Model Definition and Properties. a Setup of the competitive interaction model between PR-B and inflammation where each variable has a growth term and acts to inhibit and deplete the other. b A setting of the phase space when k>i and b=0.5 where probability of labor is equal to 0.5 indicated by the shaded region, the basin of attraction, about the laboring equilibrium point. c A setting of the phase space when k=i where regardless of the value of b the probability of labor is equal to 1. The blue and orange lines are the null clines and correspond to the lines produced when we set \(\frac {dB}{d\tau }=0\) and \(\frac {dI}{d\tau }\)=0. d The dependence of the probability of labor upon the parameter values b and i for a k fixed at 1 and the model used to make predictions Two coupled differential equations were used to model the change in transcriptionally active PR-B over time, \(\frac {d\hat {B}}{dt}\), and the change of inflammation over time, \(\frac {d\hat {I}}{dt}\), as a function of the growth and depletion of each parameter and interactions between parameters. The equations we propose to model the transcriptional activity of PR-B and inflammation status are $$\begin{array}{@{}rcl@{}} \frac{d\hat{B}}{dt} &=& \hat{b}\hat{B}\left(1-\frac{\hat{B}}{B_{c}}\right)-k_{1}\hat{B}\hat{I},\\ \frac{d\hat{I}}{dt} &=& \hat{i}\hat{I}\left(1-\frac{\hat{I}}{I_{c}}\right)-k_{2}\hat{B}\hat{I}. \end{array} $$ The growth of PR-B and inflammation is modeled by the terms \(\hat {b}\hat {B}(1-\frac {\hat {B}}{B_{c}})\) and \(\hat {i}\hat {I}(1-\frac {\hat {I}}{I_{c}})\) respectively. These terms imply that the levels of PR-B and inflammation increase in the presence of PR-B, \(\hat {B}\), and inflammation, \(\hat {I}\), at rates \(\hat {B}\) and \(\hat {I}\) respectively. The terms \((1-\frac {\hat {B}}{B_{c}})\) and \((1-\frac {\hat {I}}{I_{c}})\), impose a maximum or critical value on the level of PR-B and inflammation. At any given time, there is a finite level of PR-B induced activation of the transcriptional machinery. This critical level of PR-B is represented by the parameter B c . Analogously, there is some saturable level of inflammatory drivers active in a myometrial cell, represented by the parameter I c . 
The growth terms \(\hat {b}\hat {B}\) and \(\hat {i}\hat {I}\) in the equations for PR-B and inflammation will themselves increase in size as the amount of PR-B and inflammation increase, but in a way that is limited by the critical values for PR-B and inflammation. If \(\hat {B} =B_{c}\) or \(\hat {I} = I_{c}\), the limiting terms in parenthesis equal zero which causes the growth term to equal zero. The depletion of PR-B and inflammation is modeled by the terms after the negative sign, namely \(k_{1}\hat {B}\hat {I}\) and \(k_{2}\hat {B}\hat {I}\). Qualitatively, this means that the rate of depletion of PR-B is the product of \(\hat {B}\), \(\hat {I}\), and a rate constant k 1. The depletion of inflammation follows the same behavior with a different rate constant k 2. The value of k 1 accounts for the relative amount of PR-B to repress inflammation and k 2 accounts for the relative impact of inflammation on PR-B activity. While we know that the phenomenon of PR-B repression of inflammation occurs, the exact mechanism for this repressive activity is not well understood. By allowing for k 1 and k 2 to take on different values relative to one another, we are able to explore multiple possible models for PR-B repression of inflammation. Model nondimensionalization and simplification Nondimentionalization is a tool for simplifying our model whereby the six parameters in our model, \(k_{1}, k_{2}, B_{c}, I_{c}, \hat {i},\) and \(\hat {B}\), are replaced with three dimensionless constants (For full derivation of the dimensionless model see Appendix of derrivations). While this can make it difficult to pinpoint the influence of individual parameters on the system's behavior, since we only have six parameters in our model, unpacking the influence of particular dimensionless constants is straightforward. The units for \(\hat {I}\), \(\hat {B}\), B c , and I c are the amount of PR-B or inflammation present, similar to a concentration. Time t is given in weeks. The rate constants \(\hat {I}\) and \(\hat {B}\) are in units \(\frac {1}{\text {weeks}}\) while the rate constants k 1 and k 2 are in units \(\frac {1}{\text {concentration} * \text {weeks}}\). We define three dimensionless variables for our model, \(B = \frac {\hat {B}}{B_{c}}\), \(I = \frac {\hat {I}}{I_{c}}\), and τ=t k 1 I c . Substituting these for \(\hat {B}\), \(\hat {I}\), and t yields the model, $$\begin{array}{@{}rcl@{}} \frac{dB}{d\tau} = bB(1-B)-BI, \quad \quad \frac{dI}{d\tau} = iI(1-I)-kBI, \end{array} $$ where \(b = \frac {\hat {b}}{I_{c}k_{1}}, i = \frac {\hat {i}}{I_{c}k_{1}},\) and \(k = \frac {B_{c}k_{2}}{I_{c}k_{1}}.\) Determining parameter values from transcriptome data We obtained data from two published studies which examined transcriptional changes in the myometrium (obtained at the time of cesarean section delivery) of women who were not in labor (NIL: closed and rigid cervix and no indication of uterine contractions) and in labor (IL: cervix dilated >4 cm and rhythmic contractions). One study [28] in which transcriptome analysis was performed by microarray technology (henceforth referred to as the microarray dataset or microarray data) comprised 3 myometrial samples from NIL women at term (>37 weeks gestation), 3 samples from IL women at term, and 3 samples from IL women undergoing preterm (<37 weeks gestation) cesarean section delivery. Some confusion has emerged since publication about whether one of the term IL samples was truly in labor. 
We excluded this sample for our analysis of this dataset and combined the preterm and term IL samples into one IL group. The other study [29] used RNA sequencing (henceforth referred to as the RNAseq dataset or RNAseq data) of myometrium from 5 NIL women and 5 IL women at term. Both datasets collected independent, non-paired samples, comprising a total cohort of 18 unique samples (10 IL, 8 NIL). We infer the activity of PR-B and pro-inflammatory drivers with two PR-B responsive genes to serve as surrogates for PR-B, FOXO1A and FKBP5 [20, 30], and three pro-inflammatory genes to serve as surrogates for inflammation, IL- 1β, IL-6 and IL-8 [31]. Although there may be more genes associated with PR-B and inflammatory response in the myometrium, a fully curated gene set does not exist for inferring the full set of transcriptome changes induced by PR-B and inflammation. Therefore, we focussed on a targeted set of well studied myometrium specific genes for inferring the activity of PR-B and inflammation in the model. The normalized values of the data for these genes N j , are calculated using the equation, \(N_{j} = \frac {G_{j}- m}{M-m},\) where G j is the value of the gene for patient j, M is the maximum expression value for that gene across patients in the dataset, and m is the minimum value of that gene across patients in the dataset. One consequence of this normalization procedure is that the transcriptome data has been nondimensionalized enabling integration of the dimensionless trascriptome data with the dimensionless constants. In this formulation the values of the surrogate genes parameterize the dimensionless model for each patient. The normalization equation bounds the values for the PR-B and inflammation surrogates from 0 to 1 and makes the natural choice of values for critical levels of PR-B and inflammation B c =I c =1. We then assign the values of b and i using the normalized dimensionless values for the PR-B and inflammatory surrogate genes respectively and the value of k corresponds to the strength of PR-B's anti-inflammatory actions. The values of b and i are determined by the normalized intensity of one of the PR-B and one of the inflammatory surrogate genes from the mRNA expression studies [28, 29] and there are six possible combinations of values for b and i depending upon which inflammatory and PR-B surrogate genes are chosen. Thus, for a particular patient, the values of b and i are determined by the normalized expression intensity from the RNA-seq or microarray study for one pair PR-B and inflammatory surrogate genes. Calculating the probability of labor for each patient Next, we quantify the behavior of k in order to apply our model to patient data. To do this, we have to derive the steady state solutions for our model. These solutions are the values of B and I which cause \(\frac {dB}{d\tau }=\frac {dI}{d\tau }=0\) and correspond to a state where the system undergoes no change. There are four steady states, also known as equilibrium points. These occur when the ordered pair for PR-B and inflammation, (B ∗,I ∗), is equal to, (0, 0), (1, 0), (0, 1), and \(\left (\frac {i(1-b)}{k-ib},\frac {b(k-i)}{k-ib}\right)=\left (\frac {i^{2}b(1-k)-ik(1-b)}{k(bi-k)}, \frac {kb(i-1)}{bi-k}\right)\). Of these, (0,0) is the trivial equilibrium point where neither PR-B nor inflammation is present, (1,0) is the quiescent equilibrium where PR-B is maximal and there is no inflammation, and (0,1) is the laboring equilibrium where there is no PR-B and inflammation is maximal. 
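These steady states can be checked symbolically. The snippet below is only an illustrative verification of the fixed points of the dimensionless system; it is not part of the original analysis.

```python
import sympy as sp

B, I, b, i, k = sp.symbols("B I b i k")

# dB/dtau = 0 and dI/dtau = 0 for the dimensionless system.
steady_states = sp.solve(
    [b*B*(1 - B) - B*I, i*I*(1 - I) - k*B*I],
    [B, I],
    dict=True,
)
for s in steady_states:
    print(sp.simplify(s[B]), sp.simplify(s[I]))
# Expected: (0, 0), (0, 1), (1, 0) and the intermediate equilibrium above.
```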
The quiescent equilibrium corresponds to a PR-B dominant state and the laboring equilibrium corresponds to an inflammatory dominant state. The fourth equilibrium point, the intermediate equilibrium, exists only for certain values of b and i between the quiescent and laboring equilibrium (For full derivation of equilibrium points see Appendix of derivations). Since the values of b and i are determined by PR-B and inflammatory surrogate genes scaled from 0 to 1, these terms are bounded to that interval. Furthermore, since B and I are bounded by \(B_c=1\) and \(I_c=1\), \(B^*\) and \(I^*\) are bounded to the square domain with vertices (0,0), (0,1), (1,1), and (1,0) and area 1. So, the intermediate equilibrium point, in both forms, should satisfy the constraints \(0\le B^*\le 1\) and \(0\le I^*\le 1\). By considering how this constraint impacts both forms of the intermediate equilibrium we can derive a set of constraints for the values of k. In order to allow for the full range of values of i, we find that i and k satisfy \(0\le i<k\le 1\). In the limiting case where i=k, the intermediate equilibrium point equals the quiescent equilibrium point (1,0). If we visualize this state in phase space (Fig. 1 b) we see that all the vectors point away from the quiescent equilibrium point toward the laboring equilibrium. In phase space, these vectors define trajectories that indicate how the system would evolve in time given a certain starting point. The set of vectors pointing toward the laboring equilibrium point is known as the basin of attraction for the laboring equilibrium point. We compute a probability of labor equal to the area of the laboring equilibrium point's basin of attraction divided by the area of the domain, which in our case is 1. The area of the basin changes as k, i, and b change. This probabilistic interpretation of a phase space is reasonable under the assumption that all possible pairs of values of B and I in the domain occur with equal likelihood. While it is clear that in a physiological context there are values of B and I that are more likely at different time points in pregnancy, since this information is not readily available our assumption of uniformity enables us to compute the probabilities in the most agnostic way possible. Given more information about the distribution of B and I over the course of pregnancy, it could be possible to get a more precise estimate of the probability of labor, but such an extension is not possible at this time. For example, in the case where k=i and b is fixed at 0.5, the probability of labor is equal to 1: the entire domain is the basin of attraction for the laboring equilibrium, which means that quiescence is impossible (Fig. 1 c). In order to ensure that quiescence is a possibility, we set k=1 so that only one value of i, i=1, results in a probability of labor equal to 1, enabling us to explore the full range of values for b and i. With k fixed, we can plot the dependence of the probability of labor upon the values of b and i as a surface upon which all patients must fall given a value for PR-B and inflammatory activity. Thus, the model we apply to patient data is $$\begin{aligned} \frac{dB}{d\tau} = bB(1-B)-BI, \quad \quad \frac{dI}{d\tau} = iI(1-I)-BI. \end{aligned}$$ Each patient in the microarray and RNA-seq datasets has an expression value for each of the surrogate genes, FOXO1A, FKBP5, IL-1β, IL-6 and IL-8.
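Numerically, the probability of labor just described can be approximated by integrating the k = 1 system from a grid of initial conditions over the unit square and recording the fraction of trajectories that approach the laboring equilibrium (0, 1). The sketch below is an illustration of this construction rather than the authors' code; the grid resolution, integration horizon, and the simple "inflammation-dominant at the end" test are all arbitrary choices.

```python
import numpy as np
from scipy.integrate import odeint

def model(state, tau, b, i):
    """Dimensionless PR-B/inflammation system with k = 1."""
    B, I = state
    dB = b * B * (1.0 - B) - B * I
    dI = i * I * (1.0 - I) - B * I
    return [dB, dI]

def probability_of_labor(b, i, n_grid=40, t_end=200.0):
    """Fraction of the unit square whose trajectories end up
    inflammation-dominant, approximating the basin of attraction
    of the laboring equilibrium (0, 1)."""
    tau = np.linspace(0.0, t_end, 400)
    grid = np.linspace(0.01, 0.99, n_grid)      # stay off the axes
    labor = 0
    for B0 in grid:
        for I0 in grid:
            Bf, If = odeint(model, [B0, I0], tau, args=(b, i))[-1]
            labor += If > Bf
    return labor / n_grid**2

# Two illustrative settings of the normalized surrogate-gene values (b, i).
print(probability_of_labor(b=0.8, i=0.3))
print(probability_of_labor(b=0.3, i=0.8))
```

Values of b and i for a given patient would come from the min-max-normalized PR-B and inflammatory surrogate transcripts described above.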
In the absence of proteomic data precisely quantifying the protein-level activity of these genes in vivo, the mRNA expression levels can be combined with the framework of our mathematical model to approximate the functional activity of these genes at the time of labor, i.e. using FOXO1A or FKBP5 for PR-B and IL-1β, IL-6, or IL-8 for inflammation. We calculated a probability of labor for each patient in each dataset using all six possible combinations of surrogate genes (FOXO1A, IL-1β), (FKBP5, IL-1β), (FOXO1A, IL-6), (FKBP5, IL-6), (FOXO1A, IL-8), and (FKBP5, IL-8), where each surrogate was used to set the parameters b and i in our model. We will hereafter refer to a pair of surrogate genes as a predictor. A probability of labor was computed for each patient, which corresponds to the size of the basin of attraction for the laboring equilibrium point given a predictor pair of surrogate genes for b and i.

Classifier construction and assessment

After normalization within each platform cohort, half of the IL and NIL samples from the total cohort of 18 were randomly selected as a training dataset to construct a classifier from five IL and four NIL samples. Probabilities of labor were calculated for each of the samples using a pair of predictor genes, one PR-B responsive and one inflammatory responsive, to set the values of the parameters b and i in the model. Two nonparametric 95 % confidence intervals were computed for the probabilities of labor for the NIL and IL samples. These intervals about the medians of the NIL and IL samples constituted the NIL and IL classifiers. If the intervals did not separate, then we discarded that classifier. We assessed performance of successful classifiers by computing the probabilities of labor for the remaining nine samples using the same predictor genes. The nine samples in this test set of data were classified as IL, NIL, or a no-call depending on whether a sample's probability fell into the training set confidence interval for IL, NIL, or somewhere in between. Precision and recall metrics were used to assess the classifiers, defined as $$\begin{array}{@{}rcl@{}} \text{precision} &=& \frac{\text{correctly classified samples}}{\text{total classified samples}}\\ \text{recall} &=& \frac{\text{classified samples}}{\text{total samples}}. \end{array} $$ This procedure of creating classifiers with training samples and predicting phenotypes for test samples was repeated for all 17,640 possible combinations of samples from the microarray and RNA-seq datasets. For each combination, classifier performance was assessed for probabilities of labor calculated from each of the six combinations of PR-B (FOXO1A, FKBP5) and inflammatory (IL-1β, IL-6, and IL-8) responsive genes. Precision and recall metrics for each classifier were aggregated into an average F-score, and the proportion of successful classifiers out of 17,640 possible classifiers was computed. These metrics assessed how the selection of samples for the training set influenced i) classifier performance sensitivity and ii) classifier construction sensitivity. The equations for F-score and classifier success rate (CSR) are $$\begin{array}{@{}rcl@{}} \text{F-score} &=& \frac{2*\text{precision}*\text{recall}}{\text{precision}+\text{recall}}\\ \text{CSR} &=& \frac{\text{constructed classifiers}}{17,640}. \end{array} $$ These 105,840 model classifiers (17,640 training set combinations × 6 combinations of model predictor genes) were compared to two types of null classifiers.
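A brief sketch of how the performance metrics defined above can be computed follows; the tallies used in the example are hypothetical and only illustrate the definitions of precision, recall, F-score, and CSR given in the text.

```python
# Minimal sketch using the definitions above:
#   precision = correctly classified samples / total classified samples (no-calls excluded)
#   recall    = classified samples / total samples
#   F-score   = 2 * precision * recall / (precision + recall)
#   CSR       = constructed classifiers / 17,640 training-set combinations
def precision_recall(correct, classified, total):
    precision = correct / classified if classified else 0.0
    recall = classified / total
    return precision, recall

def f_score(precision, recall):
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

def classifier_success_rate(constructed, combinations=17_640):
    return constructed / combinations

# Hypothetical test-set tallies: 9 test samples, 8 classified (1 no-call), 7 of those correct.
p, r = precision_recall(correct=7, classified=8, total=9)
print(round(f_score(p, r), 3), classifier_success_rate(constructed=10_820))
```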
We constructed single and two-gene null classifiers for each of the 17,640 combinations of samples using the normalized expression values of the raw datasets. The single gene null classifiers were built by constructing 95 % nonparametric confidence intervals for the IL and NIL training samples on the normalized expression values for the individual genes. The two-gene null classifiers were constructed by defining a two-dimensional confidence region for the two predictor genes for the NIL and IL training samples. Precision, recall, F-score, and CSR metrics were calculated for all 264,600 null classifiers (17,640 training set combinations × 15 one- or two-gene null classifiers) and compared to the model classifiers to assess i) the relative performance of the single and two-gene null classifiers and ii) the performance of our model's two-gene classifiers relative to the null two-gene classifiers. A full workflow of this approach can be found in Fig. 2.

Fig. 2 Classifier Construction and Assessment Workflow. A training set of half the IL and NIL samples was randomly sampled from our cohort of 18 myometrium samples. Probabilities of labor were computed for six combinations of predictor genes. 21 total classifiers were constructed for a particular combination of patients, including five single gene null classifiers, 10 two-gene null classifiers, and six model classifiers. Performance of all classifiers was assessed by precision and recall metrics. All possible combinations of patient samples were assessed for classifier construction, and overall performance metrics of the 21 classifiers were aggregated into average F-scores and classifier success rate (CSR).

Model results

The model has four equilibrium solutions, i.e. values for (B,I) pairs such that \(\frac {dB}{d\tau } = \frac {dI}{d\tau } = 0\). At these values of B and I, namely (0, 0), (1, 0), (0, 1), and \(\left (\frac {i(1-b)}{k-ib},\frac {b(k-i)}{k-ib}\right)\), the levels of PR-B and inflammation will remain constant. Each of these solutions corresponds to a physiological condition in the myometrium, and the solutions can be stable, where the trajectories in the phase space in the neighborhood of the equilibrium point are directed toward it, or unstable, where the trajectories in the neighborhood of the equilibrium point are directed away from it. The values of the parameters in the model influence the size of the basin of attraction for the laboring equilibrium point and alter the stability of the equilibrium points. A tipping point exists in our model, where the quiescent equilibrium point transitions from stable to unstable and the phase space becomes completely biased toward the laboring equilibrium point. This change in stability is called a bifurcation, and linear stability analysis allows us to compute the exact parameter values which will cause the stability to change. The end result of this analysis is a quantitative prediction for the tipping point of labor, the conditions under which the myometrial cell is permanently in the laboring phenotype leading to uterine emptying. The point (0,0) corresponds to a myometrial cell that is not expressing any PR-B and has no inflammation. The solution (1,0) is the quiescent equilibrium corresponding to a physiological state where PR-B is at its maximal level with no inflammation. The solution (0,1) is the laboring equilibrium where inflammation is maximized and no PR-B is present. The last solution corresponds to an intermediate point between quiescent and laboring where the myometrial cell can pass into either phenotype.
In order to characterize the stability of these equilibrium solutions, we need to solve for the eigenvalues, λ1 and λ2, of the model at each equilibrium point. The sign of the eigenvalues, positive or negative, determines the stability, and the formula for each eigenvalue tells us whether the eigenvalues can ever change sign. The eigenvalues are found by solving the characteristic polynomial equation of the Jacobian matrix, \(\mathcal J\). By applying the quadratic formula, we can obtain an expression for both λ1 and λ2 $$\begin{array}{@{}rcl@{}} \lambda_{1} = \frac{-\beta + \sqrt{\left(\beta^{2} - 4\gamma\right)}}{2}\quad\lambda_{2} = \frac{-\beta - \sqrt{\left(\beta^{2} - 4\gamma\right)}}{2} \end{array} $$ where \(\beta = 2(bB^{*}+iI^{*})+kB^{*}+I^{*}-(i+b)\) and \(\gamma = ib(1-2I^{*}-2B^{*}+4B^{*}I^{*})+bB^{*}(-k+2kB^{*})+iI^{*}(2I^{*}-1)+I^{*}B^{*}(k-1)\). More details on the derivation of the eigenvalues can be found in the Appendix of derivations. The signs of λ1 and λ2 determine the type and stability of each equilibrium point (Table 1). The trivial equilibrium, (0,0), has two positive, real eigenvalues for all values of b and i, indicating it is always unstable and can be classified as a source node. Physiologically, a myometrial cell in this state is not exposed to inflammatory stimuli and is not expressing PR-B. This state cannot endure long and, like the equilibrium point, is unstable. The quiescent and laboring equilibria have two real, negative eigenvalues each and are thus stable sink nodes. Both of these are stable so long as both b<1 and i<1. The intermediate equilibrium point allows us to identify the tipping point as i and b change. This equilibrium has two real eigenvalues, one positive and one negative, thus the intermediate equilibrium is a semi-stable saddle node.

Table 1 Equilibrium solution stability conditions

The formula for the eigenvalues of the intermediate equilibrium indicates that three bifurcations are possible as i and b change. Firstly, if i=1 and b≤1, or if k≤i, then the quiescent equilibrium has one negative and one zero eigenvalue. In this case, the intermediate equilibrium point has moved through the phase space to collide with the quiescent equilibrium (Fig. 3). When these two equilibrium points combine, the quiescent equilibrium point changes from stable, with all temporal trajectories in the neighborhood of the equilibrium pointing toward it, to unstable, with all the trajectories pointing away from the equilibrium point. This first bifurcation, the collision of the intermediate equilibrium with the quiescent equilibrium, corresponds to the physiological condition when the myometrium moves from quiescent to laboring. This condition corresponds to a probability of labor equal to one.

Fig. 3 Phase Space Bifurcation Dynamics. Simulations of the three possible bifurcations in the PR-B/inflammation model. The pro-labor bifurcation occurs as i approaches 1, or b approaches 0, or k approaches 0. The non-pregnant to pregnant bifurcation occurs as b and i simultaneously approach 0. The pro-pregnancy bifurcation occurs as i approaches 0 or b approaches 1.

The converse of this is the second bifurcation that occurs when the intermediate equilibrium point collides with the laboring equilibrium point and the probability of labor is zero. This occurs when b=1 and i≤1, and results in a bifurcation where the laboring equilibrium point changes from stable to unstable. Physiologically this could correspond to a therapeutic intervention that preserves quiescence and prevents labor.
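The stability classification summarized in Table 1 can be reproduced numerically. The sketch below builds the Jacobian directly from the right-hand sides of the dimensionless model (so the general-k entries appear) and inspects the signs of its eigenvalues at the trivial, quiescent, and laboring equilibria; the parameter values are illustrative.

```python
import numpy as np

# Minimal sketch: Jacobian of  dB/dt = b*B*(1-B) - B*I,  dI/dt = i*I*(1-I) - k*B*I,
# evaluated at a point (B, I); eigenvalue signs classify the point as a
# source (both positive), sink (both negative), or saddle (mixed signs).
def jacobian(B, I, b, i, k):
    return np.array([[b - 2 * b * B - I, -B],
                     [-k * I, i - 2 * i * I - k * B]])

def classify(B, I, b, i, k):
    lam = np.linalg.eigvals(jacobian(B, I, b, i, k))
    if np.all(lam.real > 0):
        kind = "unstable source"
    elif np.all(lam.real < 0):
        kind = "stable sink"
    else:
        kind = "saddle"
    return lam, kind

b, i, k = 0.5, 0.7, 1.0                      # illustrative parameter values
for eq in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]:
    print(eq, classify(*eq, b, i, k))
```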
Simulating therapeutic modulation of biomarkers that determine b and i could provide insight into what genes to modulate and what effect size is required to preserve quiescence. When k = bi, the intermediate equilibrium becomes a singularity and is non-physiological. The third bifurcation occurs when b and i are both zero and the intermediate equilibrium collides with the trivial equilibrium point (Fig. 3). This third bifurcation may have physiological significance in the transition of the myometrial cell from non-pregnant to pregnant as inflammation and PR-B transition from inactive to active in the pregnancy uterus. However, since our model is based upon the activity of PR-B and inflammatory drivers during pregnancy, this bifurcation, though interesting, is beyond the scope of the present investigation. The dynamical systems model was designed to explore the functional interaction between the anti-inflammatory actions of progesterone mediated by PR-B and the effect of inflammatory load on the contractile state of the human pregnancy uterus. In addition, we sought to identify the conditions that induce a bifurcation in the model similar to the one that occurs when the uterus transitions to the laboring state. The dimensionless version of our model simplifies this task by enabling us to identify how changes in the three dimensionless parameters, b, i, and k, influence the trajectory of a hypothetical pregnancy phase space. The underlying rationale was that bifurcations correspond to physiologically important events in the timeline of pregnancy representing uterine quiescence and its transition to the laboring state. The interaction between the dimensionless model parameters i and k appears to be the most significant for initiating the labor bifurcation. The meaning of this interaction is that as long as the repressive capacity of PR-B, k, is greater than the activation rate of inflammation, i, quiescence will be maintained. This finding supports the PR-A:PR-B hypothesis since it recapitulates the important role of PR-B-mediated anti-inflammatory activity and shows how interference with this function, possibly by PR-A, destabilizes quiescence in favor of labor. Though we fixed k in the present analysis, we hypothesize that the level of k relative to i and b may be reflective of the trans-repressive activity of PR-A on PR-B. The phase space dynamics of modulating k seem to support this, with higher values of k producing lower probabilities of labor, and may be an interesting avenue for further investigation.

Predictive modeling of parturition datasets

We next applied the dimensionless version of our model with k=1 in Eq. 3 to predict the onset of labor in a cohort of 18 patient samples from myometrial transcriptome studies. Given a value of (b,i) for a particular patient, the model's phase space reflects the state of the pregnancy myometrium for that particular patient. Interpreting the phase space as a probability means that we have a metric for predicting the likelihood of the patient going into labor. It is now possible to test the predictive power of particular sets of biomarkers for predicting the laboring phenotype by assigning the values of b and i based on the values of molecular markers of PR-B activity and inflammation. We can also assess the gain in robustness and predictive power of labor classifiers by comparing the performance of our model's 105,840 possible classifiers against the 264,600 possible one- and two-gene null classifiers built with the normalized gene expression data.
In the case of the single gene null classifiers, the inflammatory surrogates all performed very well with average F-scores above 0.70 (Table 2). However, in the case of IL-6 only 61 % of the combinations of training samples could successfully build a classifier, and the best performing classifier, IL-1β, only had a 30 % success rate for classifier construction. The two-gene null classifiers had much higher success rates for constructing classifiers, but rarely performed better than random chance since most F-scores were around 0.50 (Table 2). In the case of building classifiers from gene expression data alone, the single gene models generally outperform the two-gene models, but the two-gene models are more robust against the choice of training samples for building a classifier.

Table 2 Performance of the null classifiers

We see stronger classifier performance and success rates when the gene expression data is considered in the context of our dynamical systems model using the probability of labor. All classifiers built with the inflammatory surrogate genes IL-6 and IL-1β had a 100 % success rate in classifier construction (Table 3). Further, these classifiers all had average F-scores above 0.77, with the combination of FKBP5 and IL-1β being the strongest. The model's IL-6 and IL-1β classifiers outperformed all two-gene null classifiers in both average F-score and CSR. Though the IL-8 and IL-1β single gene null classifiers have higher average F-scores than our model classifiers, the low proportion of successful classifiers for both genes shows that these classifiers are extremely sensitive to changes in the training sample set. As such, the single gene classifiers are unreliable as biomarkers for labor.

Table 3 Performance of the model classifiers

Identifying the specific inflammatory drivers that induce labor is an important step in identifying upstream and downstream therapeutic targets to delay the onset of premature labor. Interestingly, each of the inflammatory surrogate genes we used to construct our model's classifiers has a subtly different biological function in the pregnancy uterus. IL-8 functions as a chemokine, drawing neutrophils and macrophages to tissues where it is expressed [32]. The lack of phenotypic predictability by IL-8, even when paired with a PR-B surrogate, may suggest that IL-8 does not play an important role in the inflammatory process of labor. In contrast, IL-1β and IL-6 [32] performed well as classifiers in our model when paired with PR-B surrogates, suggesting that the inflammatory processes associated with these genes are more important for labor onset than those of IL-8. In particular, IL-6 is a cytokine that plays an important role in both the canonical and non-canonical JAK-STAT signaling pathway [33, 34], a pathway that may integrate effects of circulating and local myometrial cytokines. Recent work examining IL-6 as a blood-based biomarker for labor [35] provides further evidence that understanding and modeling this cytokine in the myometrium could be key to elucidating the driver pathways of labor. The small sample sizes in the transcriptome datasets (N=18) cause us to exercise caution in the data analysis. The nonparametric methods of constructing confidence intervals and testing the separation of the NIL and IL groups are less powerful than comparable parametric testing, but were ultimately more appropriate due to their resistance to outliers, the non-normality of the data, and the small sample size.
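One simple way to realize the nonparametric 95 % confidence intervals about the medians discussed here is a percentile bootstrap; the sketch below is an assumed construction for illustration, and the probability-of-labor values shown are hypothetical rather than taken from the cohort.

```python
import numpy as np

# Minimal sketch (assumption: a percentile bootstrap about the median; the resample
# count and seed are illustrative choices).
def bootstrap_median_ci(values, n_boot=10_000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    medians = np.array([np.median(rng.choice(values, size=values.size, replace=True))
                        for _ in range(n_boot)])
    return np.quantile(medians, [alpha / 2, 1 - alpha / 2])

il_probs = [0.81, 0.92, 0.76, 0.88, 0.95]    # hypothetical IL training probabilities of labor
nil_probs = [0.12, 0.24, 0.08, 0.31]         # hypothetical NIL training probabilities of labor
print(bootstrap_median_ci(il_probs), bootstrap_median_ci(nil_probs))
# A classifier is retained only if the IL and NIL intervals do not overlap.
```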
Our approach of bootstrapping a distribution of classifiers and performance metrics allowed us to overcome the limitations of our cohort size, systematically test 370,440 classifiers, and assess the performance of our model in a variety of contexts. The present investigation of the predictive power of our model was limited by the availability of in vivo temporal data for the activity of PR-B and inflammation in the context of the pregnancy myometrium. If there were bloodborne biomarkers that could be used to infer the activity of these quantities, then it would be possible to test the application of our model in a clinical context, modeling and predicting the pregnancy trajectories of actual women by measuring two biomarkers, one for PR-B and one for inflammation. This work is the first step in that direction by using transcriptome data from the myometrium to train a predictive model and demonstrate that our model provides the mechanistic hypothesis and framework to increase the predictive power of gene expression data. This mathematical model of the PR-A:PR-B hypothesis of human parturition produces qualitative dynamics which mimic those observed in vitro and in vivo. A novel interpretation of the phase space of a dynamical system as a probability space enables predictive modeling of all possible phenotypic states of the pregnancy myometrium in a patient-specific manner. Predictive modeling of patient datasets shows that our model makes accurate predictions of the laboring phenotype in patients, performing best when the PR-B surrogate FKBP5 and inflammatory surrogate IL-1β are used to fit the dimensionless model. Linear stability analysis shows that three phenotypically interesting phase space bifurcations exist in our model and provides a quantitative tipping point for the myometrium transitioning to the contractile phenotype given our model. This dynamical systems model of progesterone receptor interactions in the pregnancy myometrium provides a plausible explanation for the observed biochemical dynamics in the literature and is a first step toward more sophisticated modeling of human parturition with dynamical systems models. Our model provides a framework where, if a woman's PR-B and inflammatory activity can be determined from blood-based biomarkers, then we can produce patient-specific trajectories characterizing a woman's likelihood of labor and the variables to modulate to prevent PTB.

Appendix of derivations

There are six parameters in our model \({k_{1}, k_{2}, B_{c}, I_{c}, \hat {i}, \hat {b}}\) with various units. Nondimensionalization is a tool for simplifying our model whereby these parameters are replaced with dimensionless constants. While nondimensionalization can make it difficult to pinpoint the influence of individual parameters on the system's behavior, this concern is minimal since we only have six parameters in our model. Unpacking the influence of particular dimensionless constants and the parameters that constitute those constants is thus straightforward for our model. The units of \(\hat {I}\), \(\hat {B}\), B_c, and I_c are the amount of PR-B or inflammation present, similar to a concentration. Time t is given in weeks. The rate constants \(\hat {i}\) and \(\hat {b}\) are in units \(\frac {1}{\text {weeks}}\) while the rate constants k_1 and k_2 are in units \(\frac {1}{\text {concentration} * \text {weeks}}\).
We define three dimensionless variables for our model, $$\begin{array}{@{}rcl@{}} B = \frac{\hat{B}}{B_{c}} \quad \quad I = \frac{\hat{I}}{I_{c}} \quad \quad \tau = tk_{1}I_{c} \end{array} $$ Substituting these for \(\hat {B}\), \(\hat {I}\), and t makes the PR-B equation, $$\begin{array}{@{}rcl@{}} B_{c}I_{c}k_{1}\frac{dB}{d\tau} = \hat{b}B_{c}B(1-B)-k_{1}B_{c}I_{c}BI, \end{array} $$ and the inflammation equation, $$\begin{array}{@{}rcl@{}} {I_{c}^{2}}k_{1}\frac{dI}{d\tau} = \hat{i}I_{c}I(1-I)-k_{2}B_{c}I_{c}BI. \end{array} $$ We simplify the equations by dividing by \(B_{c}I_{c}k_{1}\) in the equation for \(\frac {dB}{d\tau }\) and by \({I_{c}^{2}}k_{1}\) in the equation for \(\frac {dI}{d\tau }\). The result is the dimensionless model, $$\begin{array}{@{}rcl@{}} \frac{dB}{d\tau} &=& \frac{\hat{b}}{k_{1}I_{c}}B(1-B)-BI, \\ \frac{dI}{d\tau} &=& \frac{\hat{i}}{k_{1}I_{c}}I(1-I)-\frac{k_{2}B_{c}}{k_{1}I_{c}}BI. \end{array} $$ We can now define three dimensionless constants, \(R_{1} = \frac {\hat {b}}{k_{1}I_{c}}\), \(R_{2} = \frac {\hat {i}}{k_{1}I_{c}}\), and \(R_{3} = \frac {k_{2}B_{c}}{k_{1}I_{c}}\). Substituting these yields the final version of the dimensionless model, $$\begin{array}{@{}rcl@{}} \frac{dB}{d\tau} = R_{1}B(1-B)-BI,\\ \frac{dI}{d\tau} = R_{2}I(1-I)-R_{3}BI. \end{array} $$ We infer the activity of PR-B and pro-inflammatory drivers with two PR-B responsive genes to serve as surrogates for PR-B, FOXO1A and FKBP5, and three pro-inflammatory genes to serve as surrogates for inflammation, IL-1β, IL-6 and IL-8. The normalized value of the data for these genes, N_j, is calculated using the equation $$ N_{j} = \frac{G_{j}- m}{M-m} $$ where G_j is the value of the gene for patient j, M is the maximum expression value for that gene across patients in the dataset, and m is the minimum value of that gene across patients in the dataset. One consequence of this normalization procedure is that the transcriptome data has been nondimensionalized. Therefore, the dimensionless constants and dimensionless transcriptome data can be seamlessly combined so that the surrogate genes parameterize the dimensionless model for each patient. This equation bounds the values for the PR-B and inflammation surrogates from 0 to 1 and makes the natural choice of values for critical levels of PR-B and inflammation B_c = I_c = 1. The dimensionless parameters then become, $$\begin{array}{@{}rcl@{}} R_{1} = \frac{\hat{b}}{k_{1}} = b \quad \quad R_{2} = \frac{\hat{i}}{k_{1}} = i \quad \quad R_{3} = \frac{k_{2}}{k_{1}} = k. \end{array} $$ Now our model can be rewritten as $$\begin{array}{@{}rcl@{}} \frac{dB}{d\tau} = bB(1-B)-BI,\\ \frac{dI}{d\tau} = iI(1-I)-kBI, \end{array} $$ where the values of b and i are determined by the normalized dimensionless values for the PR-B and inflammatory surrogate genes respectively and the value of k corresponds to the strength of PR-B's anti-inflammatory actions. Next we quantify the behavior of k in order to apply our model to patient data. To do this, we have to derive the steady state solutions for our model. These solutions are the values of B and I which cause \(\frac {dB}{d\tau }=\frac {dI}{d\tau }=0\) and correspond to a state where the system undergoes no change. There are three steady states, equilibrium points, which are easy to derive.
These occur when the ordered pair for PR-B and inflammation, (B,I), is equal to $$\begin{array}{@{}rcl@{}} (0, 0) \quad \quad \quad \quad (1, 0) \quad \quad \quad \quad (0, 1) \end{array} $$ where (0,0) is the trivial equilibrium point where neither PR-B nor inflammation is present, (1,0) is the quiescent equilibrium where PR-B is maximal and there is no inflammation, and (0,1) is the laboring equilibrium where there is no PR-B and inflammation is maximal. The quiescent equilibrium corresponds to a PR-B dominant state and the laboring equilibrium corresponds to an inflammatory dominant state. There is a fourth equilibrium point which exists for some values of b and i between the quiescent and laboring equilibria, which we will designate as the intermediate equilibrium, (B*, I*). We obtain this equilibrium point by first setting our model equations equal to zero, $$\begin{array}{@{}rcl@{}} \frac{dB}{d\tau} =0= bB^{*}(1-B^{*})-B^{*}I^{*},\\ \frac{dI}{d\tau} =0= iI^{*}(1-I^{*})-kB^{*}I^{*}. \end{array} $$ Simplifying, this becomes $$\begin{array}{@{}rcl@{}} 0= b(1-B^{*})-I^{*}, \quad \quad 0= i(1-I^{*})-kB^{*}. \end{array} $$ This results in two equations, one for B* and one for I*, $$\begin{array}{@{}rcl@{}} I^{*} = b-bB^{*}, \quad \quad B^{*} = \frac{i}{k}(1-I^{*}). \end{array} $$ Depending on how we choose to perform the substitution, B* into the equation for I* or I* into the equation for B*, we can derive two forms of the same intermediate equilibrium point. These are $$\begin{array}{@{}rcl@{}} (B^{*},I^{*}) &=& \left(\frac{i(1-b)}{k-ib},\frac{b(k-i)}{k-ib}\right)\\&=&\left(\frac{i^{2}b(1-k)-ik(1-b)}{k(bi-k)}, \frac{kb(i-1)}{bi-k}\right). \end{array} $$ In the limiting case where i=k, the intermediate equilibrium point equals the quiescent equilibrium point (1,0). If we visualize this state in phase space we see that all the vectors point away from the quiescent equilibrium point toward the laboring equilibrium. In phase space, these vectors define trajectories that indicate how the system would evolve in time given a certain starting point. The set of vectors pointing toward the laboring equilibrium point is known as the basin of attraction for the laboring equilibrium point. We compute a probability of labor equal to the area of the laboring equilibrium point's basin of attraction divided by the area of the domain, which in our case is 1. The area of the basin changes as k, i, and b change. This probabilistic interpretation of a phase space is reasonable under the assumption that all possible pairs of values of B and I in the domain occur with equal likelihood. For example, in the case where k=i and b is fixed at 0.5, the probability of labor is equal to 1: the entire domain is the basin of attraction for the laboring equilibrium, which means that quiescence is impossible. In order to ensure that quiescence is a possibility, we set k=1 so that only one value of i, i=1, results in a probability of labor equal to 1, enabling us to explore the full range of values for b and i.
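The steady states listed above can also be checked symbolically; the short sketch below solves the two right-hand sides of the dimensionless model for B and I and recovers the trivial, quiescent, laboring, and intermediate equilibria. It is a verification aid only, not part of the original derivation.

```python
import sympy as sp

# Minimal sketch: solve dB/dtau = 0 and dI/dtau = 0 symbolically for the
# dimensionless model with parameters b, i, k.
B, I, b, i, k = sp.symbols('B I b i k')

dB = b * B * (1 - B) - B * I
dI = i * I * (1 - I) - k * B * I

steady_states = sp.solve([dB, dI], [B, I], dict=True)
for s in steady_states:
    print(sp.simplify(s[B]), sp.simplify(s[I]))
```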
Thus, the model we apply to patient data is $$\begin{array}{@{}rcl@{}} \frac{dB}{d\tau} = bB(1-B)-BI, \quad \quad \frac{dI}{d\tau} = iI(1-I)-BI. \end{array} $$

Characterizing the stability of the quiescent and laboring equilibria

We analyzed the dimensionless form of our model and computed the eigenvalues for the four equilibrium points, $$\begin{array}{@{}rcl@{}} (0, 0) \quad (1, 0) \quad (0, 1) \quad \left(\frac{i(1-b)}{k-ib},\frac{b(k-i)}{k-ib}\right) \end{array} $$ where (0,0) corresponds to a myometrial cell that is not expressing any PR-B and has no inflammation. The solution (1,0) is the quiescent equilibrium corresponding to a physiological state where PR-B is at its maximal level with no inflammation. The solution (0,1) is the laboring equilibrium where inflammation is maximized and no PR-B is present. The last solution corresponds to an intermediate point between quiescent and laboring where the myometrial cell can pass into either phenotype. In order to characterize the stability of these solutions, we need to solve for the eigenvalues of the model, λ1 and λ2, at each equilibrium point from (5). The sign of the eigenvalues, positive or negative, determines the stability, and the formula for each eigenvalue tells us whether the eigenvalues can ever change sign. This is done by solving the characteristic polynomial equation of the Jacobian matrix, \(\mathcal {J}\). We begin by computing \(\mathcal {J}\) for our model at a general equilibrium point (B*, I*), $$\begin{array}{@{}rcl@{}} \mathcal{J} = \left(\begin{array}{cc} \frac{dB'}{dB} & \frac{dB'}{dI} \\ \frac{dI'}{dB} & \frac{dI'}{dI}\\ \end{array}\right) = \left(\begin{array}{cc} b - 2bB^{*}-I^{*} & -B^{*}\\ -I^{*} & i-2iI^{*} - kB^{*}\\ \end{array}\right). \end{array} $$ The characteristic polynomial can be obtained by taking the determinant of the matrix, $$\begin{array}{@{}rcl@{}} \left(\lambda I - {\mathcal{J}}\right)= \left(\begin{array}{cc} \lambda - b + 2bB^{*}+I^{*} & -B^{*}\\ -I^{*} & \lambda - i+2iI^{*} + kB^{*}\\ \end{array}\right). \end{array} $$ Setting this determinant to zero gives us the eigenvalues, the roots of the characteristic polynomial whose signs determine the stability of the equilibrium solutions: $$ 0 = \lambda^{2} + \lambda\beta+ \gamma, $$ $$\begin{array}{@{}rcl@{}} \beta = 2\left(bB^{*} + iI^{*}\right) + kB^{*} + I^{*} -(i +b), \end{array} $$ $$ \begin{aligned} \gamma =&\; ib\left(1-2I^{*}-2B^{*}+4B^{*}I^{*}\right)+bB^{*}\left(-k+2kB^{*}\right)\\ &+iI^{*}\left(2I^{*}-1\right) + I^{*}B^{*}(k-1). \end{aligned} $$ By applying the quadratic formula, we can obtain an expression for both λ1 $$\begin{array}{@{}rcl@{}} \lambda_{1} = \frac{-\beta + \sqrt{(\beta^{2} - 4\gamma)}}{2} \end{array} $$ and λ2 $$\begin{array}{@{}rcl@{}} \lambda_{2} = \frac{-\beta - \sqrt{(\beta^{2} - 4\gamma)}}{2}. \end{array} $$ The signs of λ1 and λ2 determine the type and stability of each equilibrium point. The trivial equilibrium, (0,0), has two positive, real eigenvalues for all values of b and i, indicating it is always unstable and can be classified as a source node. Physiologically, a myometrial cell in this state is not exposed to inflammatory stimuli and is not expressing PR-B. This state cannot endure long and, like the equilibrium point, is unstable; all trajectories point away from the equilibrium. The quiescent and laboring equilibria have two real, negative eigenvalues each and are thus stable sink nodes. Both of these are stable so long as both b<1 and i<1.
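The characteristic polynomial coefficients can also be generated directly from the model rather than by hand. The sketch below computes the Jacobian symbolically and reads the coefficient of λ and the constant term off as the negative trace and the determinant, under the assumption that the characteristic polynomial has the form λ² + βλ + γ; it is offered as a cross-check, not as the authors' derivation.

```python
import sympy as sp

# Minimal sketch: build the Jacobian of the dimensionless model at a general point
# (Bs, Is) and extract the characteristic polynomial lambda^2 + beta*lambda + gamma.
Bs, Is, b, i, k, lam = sp.symbols('Bs Is b i k lam')

dB = b * Bs * (1 - Bs) - Bs * Is
dI = i * Is * (1 - Is) - k * Bs * Is

J = sp.Matrix([[sp.diff(dB, Bs), sp.diff(dB, Is)],
               [sp.diff(dI, Bs), sp.diff(dI, Is)]])

char_poly = sp.expand((lam * sp.eye(2) - J).det())
beta = sp.simplify(-J.trace())      # coefficient of lambda
gamma = sp.simplify(J.det())        # constant term
print(sp.collect(char_poly, lam))
print(beta)
print(gamma)
```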
The intermediate equilibrium point allows us to identify the tipping point as i and b change. This equilibrium has two real eigenvalues, one positive and one negative, thus the intermediate equilibrium is a semi-stable saddle node. The formula for the eigenvalues of the intermediate equilibrium indicates that three bifurcations are possible as i and b change. Firstly, if i=1 and b≤1, or if k≤i, then the quiescent equilibrium has one negative and one zero eigenvalue. In this case, the intermediate equilibrium point has moved through the phase space to collide with the quiescent equilibrium. When these two equilibrium points combine, the quiescent equilibrium point changes stability from stable, with all temporal trajectories in the neighborhood of the equilibrium pointing toward it, to unstable, with all the trajectories pointing away from the equilibrium point. Similarly, if b=1 and i≤1 the intermediate equilibrium collides with the laboring equilibrium, resulting in a bifurcation where the laboring equilibrium point changes from stable to unstable. When k=bi the intermediate equilibrium becomes a singularity and is non-physiological. The third bifurcation occurs when b and i equal 0 and the intermediate equilibrium collides with the trivial equilibrium point.

Ananth CV, et al. Epidemiology of preterm birth and its clinical subtypes. J Mat Fet Neonat Med. 2006; 19(12):773–82. Beck S, Wojdyla D, Say L, et al. The worldwide incidence of preterm birth: a systematic review of maternal mortality and morbidity. Bull World Health Organ. 2010; 88(1):31–38. PMC2802437. Csapo A. Progesterone block. Am J Anat. 1956; 98(2):273–91. Corner GW. The hormones in human reproduction: Princeton University Press; 1946. Corner GW, Csapo A. Action of the ovarian hormones on uterine muscle. Br Med J. 1953; 1(4812):687–93. Romero R, Espinoza J, Goncalves LF, Kusanovic JP, Friel LA, Nien JK. Inflammation in preterm and term labour and delivery. Semin Fetal Neonatal Med. 2006; 11(5):317–326. Norman JE, et al. Inflammatory pathways in the mechanism of parturition. BMC Pregnancy Childbirth. 2007; 7(1):1. Mesiano S, et al. Progesterone withdrawal and estrogen activation in human parturition are coordinated by progesterone receptor A expression in the myometrium. J Clin Endocrinol Metab. 2002; 87:2924–30. Kim J, et al. Transcriptome landscape of the human placenta. BMC Genomics. 2012; 13:115. Larsen B, et al. Progesterone interactions with the cervix: translational implications for term and preterm birth; 2011. Kastner P, Krust A, Turcotte B, Stropp U, Tora L, Gronemeyer H, et al. Two distinct estrogen-regulated promoters generate transcripts encoding the two functionally different human progesterone receptor forms A and B. EMBO J. 1990; 9(5):1603–14. Giangrande PH, Kimbrel EA, Edwards DP, McDonnell DP. The opposing transcriptional activities of the two isoforms of the human progesterone receptor are due to differential cofactor binding. Mol Cell Biol. 2000; 20(9):3102–15. Vegeto E, Shahbaz MM, Wen DX, Goldman ME, O'Malley BW, McDonnell DP. Human progesterone receptor A form is a cell- and promoter-specific repressor of human progesterone receptor B function. Mol Endocrinol. 1993; 7(10):1244–55. Tulchinsky D, Hobel CJ, Yeager E, Marshall JR. Plasma estradiol, estriol, and progesterone in human pregnancy. II. Clinical applications in Rh-isoimmunization disease. Am J Obstet Gynecol. 1972; 113(6):766–770. Tulchinsky D, Hobel CJ, Yeager E, Marshall JR.
Plasma estrone, estradiol, estriol, progesterone, and 17-hydroxyprogesterone in human pregnancy. I. Normal pregnancy. Am J Obstet Gynecol. 1972; 112(8):1095–1100. Tulchinsky D, Okada D. Hormones in human pregnancy: IV. Plasma progesterone. Am J Obstet Gynecol. 1975; 121:293–299. Walsh S, Kittinger G, Novy M. Maternal peripheral concentrations of estradiol, estrone, cortisol, and progesterone during late pregnancy in rhesus monkeys (Macaca mulatta) and after experimental fetal anencephaly and fetal death. Am J Obstet Gynecol. 1979; 135:37–42. Boroditsky RS, Reyes FI, Winter JS, Faiman C. Maternal serum estrogen and progesterone concentrations preceding normal labor. Obstet Gynecol. 1978; 51(6):686–691. Merlino AA, et al. Nuclear progesterone receptors in the human pregnancy myometrium: evidence that parturition involves functional progesterone withdrawal mediated by increased expression of progesterone receptor-A. J Clin Endocrinol Metab. 2007; 92(5):1927–33. Tan H, et al. Progesterone receptor-A and -B have opposite effects on pro-inflammatory gene expression in human myometrial cells: implications for progesterone actions in human pregnancy and parturition. J Clin Endocrinol Metab. 2012; 97(5):19–30. Madsen G, et al. Prostaglandins differentially modulate progesterone receptor-A and -B expression in human myometrial cells: evidence for prostaglandin-induced functional progesterone withdrawal. J Clin Endocrinol Metab. 2004; 89(2):1010–3. Hardy DB, et al. Progesterone receptor plays a major antiinflammatory role in human myometrial cells by antagonism of nuclear factor-kappaB activation of cyclooxygenase 2 expression. Mol Endocrinol. 2006; 20(11):2724–33. Britton NF. Essential Mathematical Biology: Springer; 2005. Calvetti D, Erkki S. Computational Mathematical Modeling: SIAM; 2013. Iber D, Fengos G. Predictive models for cellular signaling networks: Springer; 2012. Chapter 1. Rangamani P, Iyengar R. Modelling cellular signalling systems. Essays Biochem. 2008; 45:83–94. Mesiano S, et al. Progesterone receptors in the human pregnancy uterus: do they hold the key to birth timing? Reprod Sci. 2011; 18(1):6–19. Bethin KE, et al. Microarray analysis of uterine gene expression in mouse and human pregnancy. J Mol Endocrinol. 2003; 17(8):1454–69. Chan YW, et al. Assessment of myometrial transcriptome changes associated with spontaneous human labour by high-throughput RNA-seq. J Exp Physiol. 2014; 99(3):510–24. Brosens JJ, et al. Progesterone and FOXO1 signaling. Cell Cycle. 2013; 12:1660. Golightly E, et al. Endocrine immune interactions in human parturition. Mol Cell Endocrinol. 2011; 335:52–9. Gotkin JL, et al. Progesterone reduces lipopolysaccharide induced interleukin-6 secretion in fetoplacental chorionic arteries, fractionated cord blood, and maternal mononuclear cells. Am J Obstet Gynecol. 2006; 195(4):1015–9. Li W. Canonical and non-canonical JAK-STAT signaling. Trends Cell Biol. 2008; 18(11):545–551. Shuai K, Liu B. Regulation of JAK-STAT Signaling in the Immune System. Nat Rev. 2003; 3:900–911. Neal J, et al. Differences in inflammatory markers between nulliparous women admitted to hospitals in preactive vs active labor. Am J Obstet Gynecol. 2015; 212:68.e1–8. The authors wish to thank Drs Jill Barnholtz-Sloan and Jenny Brynjarsdottir for their guidance in implementing nonparametric statistical testing for the predictive modeling. We also would like to thank Mr. Gavin Brown for his helpful discussion of interpreting a dynamical system phase space as a probability space for predictive modeling.
This work was supported by NIH Grant T32HL007567, the Clinical and Translational Science Collaborative in the Case Western Reserve School of Medicine, and the March of Dimes Ohio Prematurity Research Collaborative, The Global Alliance to Prevent Prematurity and Stillbirth and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (HD069819). Phase space plots were generated using Mathematica. DB conceived of, analyzed, and implemented the mathematical model for predictive testing. He performed data collection and analyses and led much of the project. He authored the manuscript both managing and implementing revisions from the co-authors. AB is an expert in dynamical systems modeling and guided Mr. Brubaker as he designed, analyzed, and implemented the model. Her expertise ensured that the linear stability analysis and phase space interpretation were correctly performed. She was instrumental in ensuring that the mathematical form of the model was justified by the existing biological data and was extensively involved in writing and revising the manuscript. MC oversaw the design of the predictive modeling and ensured that the models predictions were compared to other comparable predictors. He guided the overall design and implementation of the predictive modeling, ensured biological justification of the model, and provided important feedback in the manuscript writing and revision process. SM is an expert in myometrial physiology who studies the function of the progesterone receptor isoforms in parturition. His expertise ensured that the dynamics of the progesterone receptor interactions with inflammation were accurately described by the equations and his work provided much of the basis for the model itself. His contributions to the manuscript writing and revision process ensured that it faithfully reflected the biological reality of the progesterone receptors in human parturition. All authors read and approved the final manuscript. Center for Proteomics and Bioinformatics, Case Western Reserve University, 11900 Euclid Avenue, Cleveland, 44106, OH, USA Douglas Brubaker & Mark R. Chance 11900 Euclid Avenue, Cleveland, 44106, OH, USA Alethea Barbaro Department of Reproductive Biology, Case Western Reserve University, 11900 Euclid Avenue, Cleveland, 44106, OH, USA Sam Mesiano Douglas Brubaker Mark R. Chance Correspondence to Sam Mesiano. Brubaker, D., Barbaro, A., R. Chance, M. et al. A dynamical systems model of progesterone receptor interactions with inflammation in human parturition. BMC Syst Biol 10, 79 (2016). https://doi.org/10.1186/s12918-016-0320-1 Myometrium
Casson rheological flow model in an inclined stenosed artery with non-Darcian porous medium and quadratic thermal convection J. U. Abubakar ORCID: orcid.org/0000-0002-2079-57311, Q. A. Omolesho1, K. A. Bello1 & A. M. Basambo1 The current study investigates the combined response of the Darcy–Brinkman–Forchheimer and nonlinear thermal convection influence among other fluid parameters on Casson rheology (blood) flow through an inclined tapered stenosed artery with magnetic effect. Considering the remarkable importance of mathematical models to the physical behavior of fluid flow in human systems for scientific, biological, and industrial use, the present model predicts the motion and heat transfer of blood flow through tapered stenosed arteries under some underlying conditions. The momentum and energy equations for the model were obtained and solved using the collocation method with the Legendre polynomial basis function. The expressions obtained for the velocity and temperature were graphed to show the effects of the Darcy–Brinkman–Forchheimer term, Casson parameters, and nonlinear thermal convection term among others. The results show that a higher Darcy–Brinkman number slows down the blood temperature, while a continued increase of the Casson parameter decreases both the velocity and temperature distributions. Stenosis refers to an abnormal narrowing in a blood vessel or other tubular organs such as the foramina and canal. It is also known as urethral stricture. Atherosclerosis is the major cause of stenosis, a form of the disease in which the wall of the artery develops lesions (abnormalities that can eventually lead to narrowing due to deposits of fats). Blood is a non-Newtonian fluid, that is, the viscosity varies with the shear rate, which makes it a shear-thinning fluid. The study of mathematical biology and computational fluid mechanics has enhanced the ability of researchers to inspect the mathematical and physical behavior of blood flow for use in medicine and other industrial applications. Among the novel investigations in the area of tapered stenosed arteries are the work of Abubakar and Adeoye [2] and Abubakar et al. [1] on steady blood flow through stenosis under the influence of a magnetic field. The influence of MHD blood flow and heat transfer through an inclined porous stenosed artery with variable viscosity is presented by Tripathi and Kumar [19]. Chaturani and Samy [7] discussed the pulsatile flow of Casson fluid through stenosed arteries with application to blood. Bio-inspired peristaltic propulsion of hybrid nanofluid flow with hybrid nanoparticle aggregation was discussed by Bhatti et al. [6]. Recently, Sharma et al. [18] investigated the flow of blood through a multi-stenosed tapered artery; the study was centered on the slip flow and thermal radiation influence with the inclusion of hybrid nanoparticles (Au-Al2O3/Blood) and second law analysis; thus, the impact of Au and slip velocity is fully remarked upon. Poonam et al. [17] utilized the finite difference (C-N) scheme to examine the heat and mass transfer flow of pulsatile blood through a curved artery subject to hybrid nanoparticle (Au-Al2O3/blood) aggregation, and Ikbar et al. [11] presented their model for the non-Newtonian flow of blood through a stenosed artery in the presence of a transverse magnetic field.
Blood is treated as a third-grade non-Newtonian fluid in the work presented by Akbarzedeh [3]; the study revealed that the mean value of the velocity increases, and the amplitude of the velocity remains constant, as the pressure gradient rises. Mandal et al. [15] developed and discussed a two-dimensional mathematical model for studying the effect of external body acceleration on non-Newtonian blood flow in a stenosed artery, with the blood characterized by the generalized power-law model. Hayat et al. [10] considered Darcy–Brinkman–Forchheimer flow with Cattaneo–Christov heat flux and homogeneous–heterogeneous reactions; therein, MHD effects are considered on the flow of blood in the stenosed artery. Bio-inspired peristaltic propulsion of hybrid nanofluid flow subject to magnetic effects was carried out by Bhatti and Abdelsalam [6]; their work demonstrated how Ta-NPs can be employed for the removal of unwanted reactive oxygen species in both small and large animals as well as in biomedical systems. Krishna [12, 13] studied the effect of heat and mass flux conditions on the magnetohydrodynamic flow of Casson fluid over a curved stretching surface. Mustafa [16] investigated the pipe flow of Eyring–Powell fluid, enumerating its impact on flow and heat transfer. Mustafa [20] likewise studied second law phenomena in thermal transport through a metallic porous channel; in that study, the impact of the Brinkman–Darcy model is enumerated, to mention but a few among the numerous investigations. Hemodynamic characteristics of gold nanoparticle blood flow through a tapered stenosed vessel with variable nanofluid viscosity were discussed by Elnaqeeb et al. [8]. Beckermann et al. [4] presented a numerical study of a porous enclosure with non-Darcian natural convection influence. The work showed that Forchheimer's extension must be included for Prandtl numbers less than one. In related work, Bhargava et al. [5] explored the finite element analysis of drug diffusion and transient pulsatile magneto-hemodynamic non-Newtonian flow in a porous channel. Numerical methods have been the simplest and most approachable way of obtaining approximate solutions to systems of highly nonlinear equations, of which the collocation method is one. The current investigation utilizes the Legendre polynomials, a family of orthogonal polynomials, combined with the collocation method. Among the studies that have used this approach is the work of Mallawi et al. [14], in which a nonlinear differential equation was solved computationally by means of Legendre collocation points. Guner and Yalcinbas [9] worked on the Legendre collocation method for solving a nonlinear differential equation, to mention just a few. Motivated by all the above-mentioned research, this article presents the effects of the non-Darcian porous medium and quadratic thermal convection behavior on the equations governing blood flow through an inclined tapered stenosed artery. The model's nonlinear equations have been solved numerically using the Legendre collocation method with the aid of Wolfram Mathematica 11.3 under the defined boundary conditions. Codes generated in MAPLE 18 are used to graphically show the effects of the Darcy–Brinkman–Forchheimer term, Casson parameter, nonlinear thermal convection term, and variation in the inclination angle on the blood flow in the inclined stenosed artery.
Problem formulation The flow of blood is taken to be flowing in a cylindrical form of the narrow artery, in an axial direction, as shown in Fig. 1. Let (\(r\), \(\theta\), and \(z\)) be the polar coordinate system (cylindrical), and let \(\tilde{u}\), \(\tilde{v}\) and \(\tilde{w}\) be the velocity components in the \(r\), \(\theta\), and \(z\) directions. We consider magnetohydrodynamics (MHD) Newtonian fluid of density \(\rho\) and variable viscosity \(\mu\) flowing through a porous material in a tube having a finite length \(L\). The stenosed artery is inclined at the angle \(\gamma\) from the vertical axis with outside applied radiation \(q_{r}\) and magnetic field \(M\). Geometry of the inclined stenosed artery The governing equations for the model are as follows: Continuity equation $$\frac{{\partial \tilde{u}}}{{\partial \tilde{r}}} + \;\frac{{\partial \tilde{v}}}{{\partial \tilde{z}}} + \frac{{\tilde{u}}}{{\tilde{r}}} = 0$$ Momentum equation (r-direction) $$\rho \left[ {\tilde{u}\frac{{\partial \tilde{u}}}{{\partial \tilde{r}}} + \tilde{v}\frac{{\partial \tilde{u}}}{{\partial \tilde{z}}}} \right] = \; - \frac{{\partial \tilde{P}}}{{\partial \tilde{r}}} + \frac{\partial }{{\partial \tilde{r}}}\left[ {2\mu \frac{{\partial \tilde{u}}}{{\partial \tilde{r}}}} \right] + \frac{2\mu }{{\tilde{r}}}\left[ {\frac{{\partial \tilde{u}}}{{\partial \tilde{r}}} - \frac{{\tilde{u}}}{{\tilde{r}}}} \right] + \frac{\partial }{{\partial \tilde{z}}}\left[ {\mu \left( {\frac{{\partial \tilde{u}}}{{\partial \tilde{z}}} + \frac{{\partial \tilde{v}}}{{\partial \tilde{r}}}} \right)} \right]$$ Momentum equation (z-direction) $$\begin{aligned} \rho \left[ {\tilde{u}\frac{{\partial \tilde{v}}}{{\partial \tilde{r}}} + \tilde{v}\frac{{\partial \tilde{v}}}{{\partial \tilde{z}}}} \right] & = - \frac{{\partial \tilde{P}}}{{\partial \tilde{z}}} + \left( {1 + \frac{1}{\beta }} \right)\left\{ {\frac{\partial }{{\partial \tilde{z}}}\left[ {\left( {2\mu \frac{{\partial \tilde{v}}}{{\partial \tilde{z}}}} \right)} \right] + \frac{1}{{\tilde{r}}}\frac{\partial }{{\partial \tilde{r}}}\left[ {\mu \left( {\frac{{\partial \tilde{u}}}{{\partial \tilde{z}}} + \frac{{\partial \tilde{v}}}{{\partial \tilde{r}}}} \right)} \right]} \right\} - \sigma_{1} \mu_{m}^{2} H_{0}^{2} \tilde{v} \\ & \quad + \rho g[\alpha_{1} \left( {T - T_{0} } \right) + \alpha_{2} \left( {T - T_{0} } \right)^{2} ]{\text{cos}}\gamma - \;\left( {1 + \frac{1}{\beta }} \right)\frac{{\mu \tilde{V}}}{{K_{1} }} - \frac{{b^{*} v^{2} }}{{k_{1} }} \\ \end{aligned}$$ Energy equation $$\rho c_{p} \left[ {\tilde{u}\frac{\partial T}{{\partial \tilde{r}}} + \tilde{v}\frac{\partial T}{{\partial \tilde{r}}}} \right] = \frac{k}{{\tilde{r}}}\frac{\partial }{{\partial \tilde{r}}}\left[ {\tilde{r}\frac{\partial T}{{\partial \tilde{r}}}} \right] + k\frac{{\partial^{2} T}}{{\partial \tilde{z}^{2} }} + 2\mu \left( {1 + \frac{1}{\beta }} \right)\, + \left\{ {\left[ {\left( {\frac{{\partial \tilde{u}}}{{\partial \tilde{z}}}} \right)^{2} + \left( {\frac{{\tilde{u}}}{{\tilde{r}}}} \right)^{2} + \left( {\frac{{\partial \tilde{v}}}{{\partial \tilde{z}}}} \right)^{2} } \right] + \mu \left( {\frac{{\partial \tilde{u}}}{{\partial \tilde{z}}} + \frac{{\partial \tilde{v}}}{{\partial \tilde{r}}}} \right)^{2} } \right\} - \frac{{\partial q_{r} }}{{\partial \tilde{r}}}$$ where \(\frac{{b^{*} v^{2} }}{{k_{1} }}\) is the Darcy–Forchheimer's term, \(\tilde{u}\), \(\tilde{v}\), and \(\tilde{w}\) are the velocity components in the radial and axial directions, respectively. 
\(\sigma_{1}\) is the electrical conductivity, \(k\) is the thermal conductivity, and \(C_{{\text{p}}}\) is the specific heat at constant pressure. The differential equation for the radiative flux \(q_{{\text{r}}}\) is given in the following equation: $$\frac{\partial^{2} q_{\text{r}} }{\partial \tilde{r}^{2} } - 3\alpha_{v}^{2} q_{\text{r}} - 16\alpha_{v} \sigma T^{3} \frac{\partial T}{\partial \tilde{r} } = 0,$$ where \(\sigma\) is the Stefan–Boltzmann constant. With the assumption of optically thin blood, \(\alpha_{v} \ll 1\). Then, with \(T_{0}\) the blood temperature at the stenosed region and T the local temperature of the blood, (5) can be solved to give $$\frac{\partial q_{\text{r}} }{\partial \tilde{r}} = 4\alpha_{v}^{2} \left( T - T_{0} \right),$$ The variable viscosity of the flow of blood is expressed by the formula: $$\mu \left( \tilde{r} \right) = \mu_{0} (\lambda h\left( \tilde{r} \right) + 1),$$ where \(h\left( \tilde{r} \right) = H\left[ 1 - \left( \frac{\tilde{r}}{d_{0} } \right)^{m} \right]\) and \(H_{r} = \lambda H\), in which \(\lambda\) has a numerical value of 2.5, H is the maximum hematocrit at the center of the artery, \(m\) is the parameter that decides the exact shape of the blood velocity profile, and \(H_{r}\) is the hematocrit parameter. The geometry of the stenosis, located at the point \(z\) with maximum height \(\delta\), is defined by the following formula: $$h\left( \tilde{z} \right) = \left[ 1 - \eta \left( b^{n - 1} \left( \tilde{z} - a \right) - \left( \tilde{z} - a \right)^{n} \right) \right]d\left( \tilde{z} \right),\quad {\text{where}}\;a \le \tilde{z} \le a + b,\quad h\left( \tilde{z} \right) = d\left( \tilde{z} \right),\,\,{\text{otherwise}},$$ where \(d\left( \tilde{z} \right)\) is the radius of the narrow artery in the stenotic region with \(d\left( \tilde{z} \right) = d_{0} + \xi \tilde{z}.\) In (8), n is the shape parameter which determines the shape of the constriction profile. The value \(n = 2\) results in symmetrically shaped stenosis, and for the nonsymmetric stenosis case \(n\) takes values \(n \ge 2\). \(\xi\) is the narrowing parameter defined by \(\xi = {\text{tan}}\varphi\), where \(\varphi\) is the tapering angle of the narrowed artery (the converging case is considered here), and \(d_{0}\) is the radius of the non-narrowed artery.
The parameter \(\eta\) is defined as $$\eta = \frac{{\delta^{*} n^{{\frac{n}{n - 1}}} }}{{d_{0} b^{n} \left( {n - 1} \right)}}$$ where \(\delta\) is the maximum height of the stenosis located at $$\tilde{z} = a + \frac{b}{{n^{{\frac{n}{n - 1}}} }}.$$ Method of solution To non-dimensionalize the obtained governing equations, we introduce the non-dimensional variables as follows: $$\begin{gathered} \;\tilde{u} = \frac{{uu_{0} \delta }}{b},\;\;\tilde{r} = rd_{0\;} ,\;\;\tilde{z} = zb\;,\;\;\tilde{v} = wu_{0\;} ,\;\;\tilde{h} = hd_{0} ,\;\;\tilde{P} = \frac{{u_{0} b\mu_{0} p}}{{d_{0}^{2} }}\;, \hfill \\ {\text{Re}} = \;\frac{{\rho bu_{0} }}{{\mu_{0} }},\,\,\theta = \frac{{T - T_{0} }}{{T_{0} }},\,\,\;\Pr = \frac{{\mu c_{p} }}{k},\;\,\,{\text{Ec}} = \frac{{\mu_{0}^{2} }}{{c_{{pT_{0} }} }},\;\,\,Z = \frac{{k_{1} }}{{d_{0}^{2} }}\;, \hfill \\ M = \frac{{\sigma_{1} H_{0}^{2} d_{0}^{2} }}{{\mu_{0} }}\;,\;\,\,Q = A\frac{{d_{0}^{2} }}{K}\;,\;\,G_{r} = \frac{{g\alpha d_{0}^{3} T_{0} }}{{v^{2} }},\,\;N^{2} = \frac{{4d_{0}^{2} \alpha_{v}^{2} }}{k}\;,\,\,G_{{\text{N}}} = \frac{{\alpha_{2} }}{{\alpha_{1} }}T_{0} . \hfill \\ \end{gathered}$$ where \(\Pr ,\;Z\;,\;N\;,\;{\text{Re}} \;,\;\theta ,\;Z\;\;{\text{Gr}}{\kern 1pt} \;M\;,\;{\text{Ec}}\;{\text{and}}\;\;G_{{\text{N}}},\) respectively, represent the Prandtl number, porosity parameter, radiation absorption parameter, Reynolds number, temperature parameter, Grashof number, magnetic field parameter, Eckert number, and nonlinear thermal convection. In the case of aortic stenosis \(\frac{\delta }{{d_{0} }}{ \ll }1\) and the other additional conditions, $${\text{Re}} \frac{{\delta^{*} n^{{\frac{1}{n - 1}}} }}{b} \ll 1,$$ assuming the following approximation: $$\frac{{d_{0\;} n^{{\frac{1}{n - 1}}} }}{b}\; \sim \;O\left( 1 \right),$$ To non-dimensionalize the continuity equation, we substitute the non-dimensional quantities in (11) into (1) to obtain: $$\frac{{u_{0} \delta }}{{bd_{0} }}\frac{\partial u}{{\partial r}} + \frac{{uu_{0} \delta }}{{rbd_{0} }} + \frac{{u_{0} }}{b}\frac{\partial w}{{\partial z}} = 0,$$ Since \(\frac{\delta }{{d_{0} }}{ \ll }1\), $$\frac{{u_{0} }}{b}\frac{\partial w}{{\partial z}} = 0,$$ $$\frac{\partial w}{{\partial z}} = 0.$$ To non-dimensionalize the momentum equation (\(r\)-direction), substitute (11) into (2) to obtain: Also, since \(\frac{\delta }{{d_{0} }}{ \ll }1\), \(\frac{\partial w}{{\partial z}} = 0\), $$- \frac{{u_{0} b\mu_{0} }}{{d_{0}^{3} }}\frac{\partial p}{{\partial z}} = 0,$$ $$\frac{\partial p}{{\partial z}} = 0.$$ Also, substituting the non-dimensional variables in (11) and (7) into the momentum equation (z-direction): $$\begin{aligned} \frac{\partial p}{{\partial z}} & = \left( {1 + \frac{1}{\beta }} \right)\left[ {H_{r} \left( {\frac{1}{r} - r^{m - 1} \left( {m + 1} \right)} \right)} \right]\frac{\partial w}{{\partial r}} + \left( {1 + \frac{1}{\beta }} \right)\left[ {1 + H_{r} \left( {1 - r^{m} } \right)} \right]\frac{{\partial^{2} w}}{{\partial r^{2} }}wM^{2} \\ & \quad + G_{r} \left[ {\theta + G_{{\text{N}}} \theta^{2} } \right]\cos \gamma \; - w\left( {1 + \frac{1}{\beta }} \right)\frac{{H_{r} }}{Z}\left( {1 - r^{m} } \right) - \frac{{b^{*} w^{2} d_{0}^{2} }}{{k_{1} }}, \\ \end{aligned}$$ where \(G_{N} = \frac{{\alpha_{2} T_{0} }}{{\alpha_{1} }}\) is the nonlinear thermal convection. Also, using the non-dimensional variables in Eq. 
(11), the energy equation becomes $$\frac{1}{r}\frac{\partial }{\partial r}\left( {r\frac{\partial \theta }{{\partial r}}} \right) + \;{\text{Ec}}\Pr \left( {1 + \frac{1}{\beta }} \right)\left[ {1 + H_{r} \left( {1 - r^{m} } \right)} \right]\left( {\frac{\partial w}{{\partial r}}} \right)^{2} - N^{2} \theta = 0,$$ where \(B_{r} = {\text{Ec}}\Pr\) is the Brinkman number, a dimensionless number used to study viscous flow. The corresponding boundary conditions are $$\;\frac{\partial \theta }{{\partial r}} = 0,\;\,\,\,\frac{\partial w}{{\partial r}} = 0,\;\;\;\;{\text{at}}\;\;r = 0,$$ and the no-slip boundary conditions (assuming that at a solid boundary, the fluid will have zero velocity relative to the boundary) at the artery wall $$\;w = 0,\,\,\,\theta = 0,\quad {\text{at}}\;\;r = h\left( z \right),$$ where h(z) is defined by $$\begin{gathered} h(z) = \left( {1 + \xi^{\prime } z} \right)\left[ {1 - \eta_{1} \left( {(z - a^{\prime } ) - (z - a^{\prime } )^{n} } \right)} \right], \hfill \\ \;\;{\text{where}}\;a^{\prime } \le z \le a^{\prime } + 1 \hfill \\ \end{gathered}.$$ With the use of the Legendre collocation method, we have to define some functions. Let \(P_{n} (x)\) be the Legendre polynomial function of degree \(n\). We recall that \(P_{n}(x)\) is the solution (eigenfunction) of the Sturm–Liouville problem as follows: $$\begin{aligned} & \left[ {\left( {1 - x^{2} } \right)P_{n}^{^{\prime}} \left( x \right)} \right]^{^{\prime}} + n\left( {n + 1} \right)P_{n} \left( x \right) = 0, \\ & x \in \left[ { - 1,1} \right],\;n = 0,1,2,3, \ldots \\ \end{aligned}.$$ Its solutions satisfy the recursive relations: $$P_{0} \left( x \right) = 1\;,\;P_{1} \left( x \right) = x\;,\;P_{2} \left( x \right) = \frac{1}{2}\left( {3x^{2} - 1} \right),$$ $$P_{n} \left( x \right) = \;\frac{2n - 1}{n}\;x\;P_{n - 1} \left( x \right) - \;\frac{n - 1}{n}P_{n - 2} \left( x \right)\;\;;n \ge 1,$$ The set of Legendre polynomials forms an orthogonal set on \([-1, 1]\): $$\int_{ - 1}^{1} P_{n} \left( x \right)P_{m} \left( x \right)\;w\left( x \right){\text{d}}x = \;\frac{2}{2n + 1}\delta_{m,n} ,$$ where \(\delta_{m,n}\) is the Kronecker delta function. To apply the Legendre polynomial to the problem with a semi-infinite domain, we introduce the algebraic mapping $$x = \frac{2\varsigma }{h} - 1,\quad [ - 1,1] \to [0,h],$$ so the boundary value problem is solved within the region [0, \(h\)] in place of [0,\(\infty\)), where the scaling parameter is taken to be sufficiently large to capture the thickness of the boundary layer.
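A small numerical sketch of the recursion in Eq. (26), together with the algebraic map x = 2ς/h − 1 used to shift the polynomials onto [0, h], is given below; the value of h and the sample points are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: evaluate P_n(x) via the recursion
#   P_n(x) = ((2n - 1)/n) * x * P_{n-1}(x) - ((n - 1)/n) * P_{n-2}(x),
# then shift from [-1, 1] to [0, h] using x = 2*r/h - 1.
def legendre(n, x):
    x = np.asarray(x, dtype=float)
    if n == 0:
        return np.ones_like(x)
    if n == 1:
        return x
    p_prev, p_curr = np.ones_like(x), x
    for m in range(2, n + 1):
        p_prev, p_curr = p_curr, ((2 * m - 1) * x * p_curr - (m - 1) * p_prev) / m
    return p_curr

def shifted_legendre(n, r, h=0.92):
    return legendre(n, 2.0 * r / h - 1.0)

r = np.linspace(0.0, 0.92, 5)          # illustrative sample points in [0, h]
print(shifted_legendre(2, r))          # shifted P_2 evaluated at the sample points
```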
Therefore, the real solutions \(f\left( \varsigma \right)\) and \(\theta \left( \varsigma \right)\) are expressed as the basis of the Legendre polynomial function as $$f\left( \varsigma \right) = \sum\limits_{j = 0}^{N} a_{j} P_{j} \left( \varsigma \right),\;\;\,\,\theta \left( \varsigma \right) = \sum\limits_{j = 0}^{N} b_{j} P_{j} \left( \varsigma \right),\,\,\,{\text{for}}\quad j = 0,1,2,3, \ldots ,$$ $$\begin{gathered} f\left( \varsigma \right) = a_{0} P_{0} \left( \varsigma \right) + a_{1} P_{1} \left( \varsigma \right) + a_{2} P_{2} \left( \varsigma \right) + \cdots , \hfill \\ \theta \left( \varsigma \right) = b_{0} P_{0} \left( \varsigma \right) + b_{1} P_{1} \left( \varsigma \right) + b_{2} P_{2} \left( \varsigma \right) + \cdots , \hfill \\ \end{gathered}$$ where \(P_{0} \left( \varsigma \right)\), \(P_{1} \left( \varsigma \right)\), \(P_{2} \left( \varsigma \right)\),…,\(P_{n} \left( \varsigma \right)\) are generated from recursive relation in (26) and $$P_{0} \left( \varsigma \right) = 1\;,\;P_{1} \left( \varsigma \right) = \varsigma \;,\;P_{2} \left( \varsigma \right) = \frac{1}{2}\left( {3\varsigma^{2} - 1} \right), \ldots$$ Hence, substituting Eq. (31) into (30) $$f\left( \varsigma \right) = a_{0} + a_{1} \left( {\frac{2\varsigma }{h} - 1} \right) + \frac{{a_{2} }}{2}\left[ {3\left( {\frac{2\varsigma }{h} - 1\;} \right)^{2} - 1} \right] + \cdots ,$$ $$\theta \left( \varsigma \right) = b_{0} + b_{1} \left( {\frac{2\varsigma }{h} - 1} \right) + \frac{{b_{2} }}{2}\left[ {3\left( {\frac{2\varsigma }{h} - 1\;} \right)^{2} - 1} \right] + \cdots ,$$ for \(h = 6\) and \(N = 6\), Eqs. (32–33) become $$f\left( \varsigma \right) = a_{0} + \frac{{a_{1} }}{6}\left( { - 6 + 2\varsigma } \right) + \frac{{a_{2} }}{6}\left[ {6 - 6\varsigma + \varsigma^{2} } \right] + \cdots$$ $$\theta \left( \varsigma \right) = b_{0} + \frac{{b_{1} }}{6}\left( { - 6 + 2\varsigma } \right) + \frac{{b_{2} }}{6}\left[ {6 - 6\varsigma + \varsigma^{2} } \right] + \cdots$$ We assumed that \(w(r)\) and \(\theta (r)\) are the Legendre base trial functions, defined by $$w(z) = \sum\limits_{j = 0}^{N} a_{j} P_{j} \left( {\frac{2z}{h} - 1\;} \right),\;\theta (r) = \sum\limits_{j = 0}^{N} b_{j} P_{j} \left( {\frac{2r}{h} - 1\;} \right),$$ where \(a_{j}\) and \(b_{j}\) are constants to be determined and \(P_{j} \left( {\frac{2r}{h} - 1} \right)\) is the shifted Legendre function from \([ - 1,1]\) to \([0,h]\). Substituting (36) into the boundary conditions in (21) and (22), respectively, we have $$\begin{array}{*{20}l} {\left[ {\frac{{\text{d}}}{{{\text{d}}r}}\sum\limits_{j = 0}^{N} a_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right]_{r = 0} = 0} \hfill \\ \end{array} ,\;\begin{array}{*{20}l} {\left[ {\frac{{\text{d}}}{{{\text{d}}r}}\sum\limits_{j = 0}^{N} b_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right]_{r = 0} = 0} \hfill \\ \end{array} ,$$ $$\begin{array}{*{20}l} {\left[ {\sum\limits_{j = 0}^{N} a_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right]_{r = h\left( z \right)} = 0} \hfill \\ \end{array} ,\;\begin{array}{*{20}l} {\left[ {\sum\limits_{j = 0}^{N} b_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right]_{r = h\left( z \right)} = 0} \hfill \\ \end{array} ,$$ Residues \(D_{w} \left( {r,a_{j} ,b_{j} } \right)\) and \(D_{\theta } \left( {r,a_{j} ,b_{j} } \right)\) are derived from the above (39) and (40) accordingly. 
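Putting the pieces together, the sketch below shows the whole Legendre-collocation workflow in Python (an illustration only, not the authors' Mathematica code): the trial function is a truncated series in the shifted polynomials P_j(2r/h − 1) exactly as in the expansions above, the two boundary conditions at r = 0 and r = h are imposed, and the interior residual is collocated as described in the next paragraph. To keep the example self-contained it is applied to a simple stand-in problem u''(r) + 1 = 0 with a known quadratic solution, rather than to the coupled residuals D_w and D_θ of the paper; the collocation points are an arbitrary illustrative choice.

```python
import numpy as np
from numpy.polynomial import legendre as L
from scipy.optimize import fsolve

h, N = 6.0, 6                       # domain [0, h] and truncation order, as in the text

def trial(r, c):
    """u(r) = sum_j c_j P_j(2r/h - 1): trial function in shifted Legendre polynomials."""
    return L.legval(2.0 * np.asarray(r) / h - 1.0, c)

def d_trial(r, c, order=1):
    """order-th r-derivative of the trial function (chain-rule factor 2/h per derivative)."""
    return L.legval(2.0 * np.asarray(r) / h - 1.0, L.legder(c, m=order)) * (2.0 / h) ** order

def residuals(c):
    # Stand-in problem: u''(r) + 1 = 0,  u'(0) = 0,  u(h) = 0  (exact u = (h**2 - r**2)/2).
    r_col = np.linspace(0.5, h - 0.5, N - 1)        # interior collocation points (illustrative)
    eqs = [d_trial(r, c, 2) + 1.0 for r in r_col]   # residual forced to zero at each r_j
    eqs.append(d_trial(0.0, c, 1))                  # symmetry-type condition at r = 0
    eqs.append(trial(h, c))                         # no-slip-type condition at r = h
    return eqs

coeffs = fsolve(residuals, np.zeros(N + 1))         # N + 1 unknown coefficients a_0, ..., a_N
print(np.round(trial([0.0, 3.0, 6.0], coeffs), 4))  # ~ [18.  13.5  0. ], i.e. (h**2 - r**2)/2
```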
The residues are minimized close to zero using the collocation method as follows: $$\begin{array}{*{20}l} {{\text{for}}\;\delta \left( {r - r_{j} } \right) = \left\{ {\begin{array}{*{20}c} {1,} & {t = t_{j} } \\ {0,} & {{\text{otherwise}},} \\ \end{array} } \right.} \hfill \\ \end{array}$$ $$\begin{array}{*{20}l} {\int_{0}^{1} D_{w} \delta \left( {r - r_{j} } \right){\text{d}}r = D_{f} \left( {r_{j} ,a_{k} ,b_{k} } \right) = 0,\;\;\;{\text{for}}\;j = 1,\;2, \ldots N - 1} \hfill \\ \end{array}$$ $$\begin{array}{*{20}l} {\int_{0}^{1} D_{\theta } \delta \left( {r - r_{j} } \right)dr = D_{\theta } \left( {r_{j} ,a_{k} ,b_{k} } \right) = 0,\;\;\;{\text{for}}\;j = 1,\;2,..N - 1} \hfill \\ \end{array}$$ The above procedure sought the unknown constant coefficients \(a_{j} ,\) and \(b_{j}\) which are then substituted in Eq. (36) as the required solution. Mathematica 11.3 is used to obtain the numerical results for the temperature and velocity variation. The parameters used include the inclination of the angle \(\left( \gamma \right)\) of the artery, Casson parameter \(\left( \beta \right)\), porosity parameter \(\left( Z \right)\), the height of the stenosis \(\left( \delta \right)\), Darcy–Brinkman–Forchheimer term \(\left( {F_{s} } \right)\), and nonlinear thermal convection term \(\left( {G_{{\text{n}}} } \right)\). The following various parameters were used in the plotting of the graphs.\(z = 0.5,\delta = 0.1,N = 1.5,\)\(\gamma = \frac{\pi }{3},a = 0.25,b = 1,\xi = 0.002,{\text{Ec}} = 1,P_{r} = 2,{\text{Gr}} = 2,\)\(n = 2,h = 0.92,\frac{\partial P}{{\partial z}} = 3,\)\(H_{r} = 1 {\text{and}} d_{0} = 1\). Figure 2a shows the effects of variation of inclination angle (\(\gamma\)) parameters on the velocity profile. There is an increase in the velocity of the blood flow in the artery as the angle of inclination (\(\gamma\)) values increase. Influence of Inclination angle \(\left( \gamma \right)\), nonlinear thermal convection term (\(G_{N}\)), and Casson parameter on the velocity profile Figure 2b displays the graphical features of the introduced nonlinear thermal convection parameter (\(G_{N}\)) on the velocity profile. It is seen from Fig. 2b that the velocity profile decreases with the increasing values of the nonlinear thermal convection parameter (\(G_{N}\)). Figure 2c depicts the effect of the scale of the Casson parameter (β); it shows that as it increases from 0.2 to 1.0, velocity increases at the arterial wall. Figure 3a shows the effects of variation of the inclination angle parameter (γ), on the temperature profile. There is an increase in the temperature of the blood flow in the artery as the inclination angle (\(\gamma\)) values increase. Influence of a inclination angle \(\left( \gamma \right)\), b nonlinear thermal convection (\(G_{N}\)), c Casson parameter, and d Darcy–Brinkman–Forchheimer on the temperature profile From Fig. 3b, it is clear that as the value of the nonlinear thermal convection parameter increases, the temperature profile decreases, respectively. It is observed through these figures that velocity and temperature achieve their maximum value at the wall of the artery and attain the minimum value at the middle of the artery for the nonlinear thermal convection parameter. From Fig. 3c, it is seen that as the value of the Casson parameter (β) increases, the temperature profile decreases at the arterial wall and this takes place maybe because of the viscous nature of the fluid which decreases with increasing values of temperature. 
In this paper, we studied the Casson rheological flow of blood in an inclined stenosed artery with a non-Darcian porous medium and quadratic thermal convection. The collocation method with Legendre polynomial basis functions was used to solve the nonlinear governing equations. From the velocity and temperature profiles, it concluded that: (i) as the angle of inclination parameter (γ) increases both the blood flow velocity and temperature increase, (ii) with an increase in the value of nonlinear thermal convection parameter (Gn) the velocity and the temperature of the blood flow also increase, and (iii) the increase in Casson parameter (β) gives a decrease on both velocity and temperature of the blood flow. \(\tilde{u}\), \(\tilde{v}\) and \(\tilde{w}\) : Velocity components in the \(r\), \(\theta\), and \(z\) directions \(\rho\) : Variable viscosity \(\gamma\) : Inclined at the angle \(\mu\) : \(L\) : \(q_{{\text{r}}}\) : Applied radiation \(M\) : \(\sigma_{1}\) : Electrical conductivity \(k\) : \(C_{{\text{p}}}\) : Specific heat at constant pressure \(T_{0}\) : Blood temperature at the stenosed region T : Local temperature of the blood H : Maximum hematocrit at the center of the artery \(H_{r}\) : Hematocrit parameter \(d\left( {\tilde{z}} \right)\) : Radius of the narrow artery in the stenotic region \(d_{0}\) : Radius of the non-narrowed artery \(\delta\) : Maximum height of the stenosis \(\Pr\) : Prandtl number \(Z\;\) : Porosity parameter \(N\) : Radiation absorption parameter \({\text{Re}}\) : Reynolds number \(\theta\) : Dimensionless temperature parameter \({\text{Gr}}\) : Grashof number \({\text{Ec}}\) : Eckert number \(G_{{\text{N}}}\) : Nonlinear thermal convection parameter \(f\) : Dimensionless fluid velocity Abubakar, J.U., Gbedeyan, J.A., Ojo, J.B.: Steady blood flow through vascular stenosis under the influence of magnetic field. Centrepoint J. (Sci. Ed.) 25(1), 61–82 (2019) Abubakar, J.U., Adeoye, A.D.: Effects of radiative heat and magnetic field on blood flow in an inclined tapered stenosed porous artery. J. Taibah Univ. Sci. 14(1), 77–86 (2020). https://doi.org/10.1080/16583655.2019.1701397 Akbarzedeh, P.: Pulsatile Magneto-hydrodynamic blood flows through porous blood vessels using a third grade non-newtonian fluids model. Comput. Methods Progr. Biomed. 126(3–19), 2016 (2016) Beckermann, C., Viskanta, R., Ramadhyani, S.: A numerical study of non-Darcian natural convection in a vertical enclosure filled with a porous medium. Numer. Heat Transf. 10, 557–570 (1986) Bhargava, R., Anwar, O., Rawat, S., Beg, T.A., Triphati, D.: Finite element study of transient pulsatile magneto-hemodynamic non-Newtonian flow and drug diffusion in a porous medium channel. J. Mech. Med. Biol. 12(14), 1250081 (2012) Bhatti, M.M., Abdelsalam, S.I.: Bio-inspired peristaltic propulsion of hybrid nanofluid flow with Tantalum (Ta) and Gold (Au) nanoparticles under magnetic effects. Waves Random Complex Med. (2021). https://doi.org/10.1080/17455030.2021.1998728 Chaturani, P., Samy, R.P.: Pulsative flow of Casson's fluid through stenosed arteries with applications to blood flow. Biorhelogy 23(5), 499–511 (1986) Elnaqeeb, T., Sheh, N.A., Mekheimer, K.S.: Hemodynamics characteristics of gold nanoparticle blood flow through a tapered stenosed vessel with variable nanofluid viscosity. BioNanoScience 9(2), 245–255 (2019) Guner, A., Yalcinbas, S.: Legendre collocation method for solving nonlinear differential equations. Math. Comput. Appl. 
18(3), 521–530 (2013) Hayat, T., Haider, F., Muhammad, T., Alsaedi, A.: Darcy–Forchheimer flow with Cattaneo–Christov homogeneous-heterogeneous. PLoS ONE 12(2017), 1–18 (2017) Ikbar, M.A., Chakravarty, S., Kelvin, K.L., Wong, M.J., Mandal, P.K.: Unsteady response of non-newtonian blood flow through a stenosed artery in magnetic field. J. Comput. Appl. Math. 230(1), 243–259 (2009) Krishna, M.M.: Effect of heat and mass flux conditions on Magnetohydrodynamics flow of Casson fluid over a curved stretching surface (2019). https://doi.org/10.4028/www.scientific.net/DDF.392.29 Krishna, M.M.: Numerical investigation on magnetohydrodynamics flow of Casson fluid over a deformable porous layer with slip conditions. Indian J. Phys. (2019). https://doi.org/10.1007/s12648-019-01668-4 Mallawi, F., Alzaidy, J.F., Hafez, R.M.: Application of a Legendre collocation method to the space-time variable fractional-order advection-dispersion equation. J. Taibah Univ. Sci. 13, 2019 (2018) Mandal, P.K., Chakravarthy, S., Mandal, A., Amin, N.: Effect of the body acceleration on unsteady pulsatile flow of non-Newtonian fluid through a stenosed artery. Appl. Math. Comput. 189, 766–779 (2007) Mustafa, T.: Eyring-Powell fluid flow through a circular pipe and heat transfer: full solutions. Int. J. Numer. Meth. Heat Fluid Flow 30(11), 4765–4774 (2020). https://doi.org/10.1108/hff-12-2019-0925 Poonam, Sharma, B.K., Kumawat, C., Vafai, K.: Computational biomedical simulations of hybrid nanoparticles (blood-mediated) transport in a stenosed and aneurysmal curved artery with heat and mass transfer: hematocrit dependent viscosity approach. Chem. Phys. Lett. 800, 139666 (2022) Sharma, B.K., Rishu Gandhi, M.M.: Bhatti, Entropy analysis of thermally radiating MHD slip flow of hybrid nanoparticles (Au–Al2O3/Blood) through a tapered multi-stenosed artery. Chem. Phys. Lett. 790, 139348 (2022). https://doi.org/10.1016/j.cplett.2022.139348 Tripathi, B., Kumar, B.S.: MHD blood flow and heat transfer through an inclined porous stenosed artery with variable viscosity. J. Taibah Univ. Sci. 14(1), 77–86 (2019). https://doi.org/10.1080/16583655.2019.1701397 Turkyilmazoglu, M.: Velocity slip and entropy generation phenomena in thermal transport through metallic porous channel. J. Non-Equilib. Thermodyn (2020). https://doi.org/10.1515/jnet-2019-0097 It was funded by authors. Department of Mathematics, University of Ilorin, Ilorin, Nigeria J. U. Abubakar, Q. A. Omolesho, K. A. Bello & A. M. Basambo J. U. Abubakar Q. A. Omolesho K. A. Bello A. M. Basambo JUA and QAO formulate the model and JUA, QAO, KAB, and AMB solved the model, drew the graphs presented, discussed the results, and presented the conclusions. All authors read and approved the final manuscript. Correspondence to J. U. Abubakar. We, the authors, declare that there are no competing interests. Also, on substituting (38) into the governing Eq. 
(20), we obtain $$\begin{aligned} D_{w} : & = \left( {1 + \frac{1}{\beta }} \right)\left[ {H_{r} \left( {\frac{1}{r} - \left( {m + 1} \right)r^{m - 1} } \right)} \right]\left[ {\frac{{\text{d}}}{{{\text{d}}r}}\sum\limits_{j = 0}^{N} b_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right] + \left( {1 + \frac{1}{\beta }} \right)\left[ {1 + H_{r} \left( {1 - r^{m} } \right)} \right]\frac{{{\text{d}}^{2} }}{{{\text{d}}r^{2} }}\left[ {\sum\limits_{j = 0}^{N} b_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right] \\ & \quad - \left( {\sum\limits_{j = 0}^{N} b_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right)M^{2} + G_{r} \left[ {\sum\limits_{j = 0}^{N} c_{j} P_{j} \left( {\frac{2r}{h} - 1} \right) + G_{N} \left( {\sum\limits_{j = 0}^{N} c_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right)^{2} } \right]\cos \gamma \; - \left( {\sum\limits_{j = 0}^{N} b_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right) \\ & \quad + \left( {1 + \frac{1}{\beta }} \right)\frac{{H_{r} }}{Z}\left( {1 - r^{m} } \right) - \frac{{b^{*} \left( {\sum\limits_{j = 0}^{N} b_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right)^{2} d_{0}^{2} }}{{k_{1} }} - \frac{{\text{d}}}{{{\text{d}}z}}\sum\limits_{j = 0}^{N} a_{j} P_{j} \left( {\frac{2z}{h} - 1} \right), \\ \end{aligned}$$ $$\begin{aligned} D_{\theta } : & = \frac{1}{r}\frac{{\text{d}}}{{{\text{d}}r}}\left( {r\frac{{\text{d}}}{{{\text{d}}r}}\sum\limits_{j = 0}^{N} c_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right) + \;E_{c} P_{r} \left( {1 + \frac{1}{\beta }} \right)\left[ {1 + H_{r} \left( {1 - r^{m} } \right)} \right] \\ & \quad \times \left( {\frac{{\text{d}}}{{{\text{d}}r}}\sum\limits_{j = 0}^{N} c_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right)^{2} - N^{2} \left( {\sum\limits_{j = 0}^{N} c_{j} P_{j} \left( {\frac{2r}{h} - 1} \right)} \right). \\ \end{aligned}$$ Abubakar, J.U., Omolesho, Q.A., Bello, K.A. et al. Casson rheological flow model in an inclined stenosed artery with non-Darcian porous medium and quadratic thermal convection. J Egypt Math Soc 30, 23 (2022). https://doi.org/10.1186/s42787-022-00157-8 Casson fluid Inclined stenosed artery Magnetohdyrodyanamics (MHD) fluid Collocation method Darcy–Brinkman–Forchheimer 76Z05
Ahmad Faizan Definitions (A)

Average Value: The sum of the instantaneous voltages in a half cycle wave shape divided by the number of instantaneous voltages. In a sine wave, it is equal to 0.637 times the peak voltage.

Analog Meters: Analog meters are multifunctional multimeters that operate based on electromechanical movement.

Autotransformer: A transformer with a single electric winding, which can be used as a step-down or step-up device; a transformer characterized by no electrical isolation between primary and secondary.

Admittance: The reciprocal of a circuit's impedance, measured in siemens.

Atom: The atom is made up of subatomic particles, the most important of which are the electrons and protons.

Ammeter: An ammeter is a device used for the measurement of current in amperes (or even in smaller units like mA or μA).

American Wire Gauge (AWG): The system of notation for measuring the size of conductors or wires.

Alternating Current (AC): An electric current that periodically changes its direction, usually many times per second.

Air-Gap Power: The power transferred from the machine stator to the rotor across the air gap.

Aluminum Conductor Steel Reinforced (ACSR): ACSR is a composite conductor made up of aluminum wires surrounding a steel core.

Ampere's Law: The magnetic field strength, at any point in the neighborhood of a circuit in which there is a current i, is equal to the vector sum of the contributions from all the differential elements of the circuit. The contribution dH, caused by a current i in an element ds at a distance r from a point P, is given by $d\mathbf{H}=\frac{i\,[d\mathbf{s}\times \hat{\mathbf{r}}]}{r^{2}}$.

An-bn-cn or abc Sequence: The order in which the successive members of the set reach their positive maximum values. Phase a is followed by phase b and then by phase c.

Apparent Power: The product of the root-mean-square voltage and the root-mean-square current.

Armature Winding: The winding in which AC voltage is produced by relative motion with respect to a magnetic field.

Average Power: The time average of the instantaneous power, the average being taken over one period.

Analog Signal: A signal with a varying voltage or current; a continuous set of values.

Auto Ranging: A term used to describe test equipment that automatically adjusts the range to display the correct readout. For example, if you are measuring a voltage under 2 volts, you would select the voltage function and the meter would automatically read the voltage and display the value.
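As a small numerical check of two of the definitions above (the average value of a sine wave and apparent power), the following Python snippet can be used; the 170 V peak and 5 A RMS figures are arbitrary illustrative values.

```python
import numpy as np

V_peak = 170.0                                 # volts (illustrative)
t = np.linspace(0.0, 1.0 / 120.0, 10_000)      # one half cycle of a 60 Hz sine wave
v = V_peak * np.sin(2 * np.pi * 60.0 * t)

average_value = v.mean()                       # ~ 0.637 * V_peak over the half cycle
rms_value = np.sqrt((v ** 2).mean())           # ~ V_peak / sqrt(2)

I_rms = 5.0                                    # assumed RMS load current, amperes
apparent_power = rms_value * I_rms             # volt-amperes, per the definition above

print(round(average_value, 1), round(0.637 * V_peak, 1), round(apparent_power, 1))
```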
December 2017's Entries

An M5-Brane Model
The string Lie 2-algebra may be relevant to M5-branes.

Arithmetic Gauge Theory

On Writing Short Papers
How and why to write short papers.

SageMath and 3D Models in Webpages
Embed 3d models of surfaces into your websites using SageMath.

Entropy Modulo a Prime (Continued)
The right definition of the entropy of a probability distribution whose "probabilities" are integers mod p.

Entropy Modulo a Prime
The analogue of Shannon entropy in the field of p elements (after Kontsevich).

From the Icosahedron to E8
Here are two ways to get $\mathrm{E}_8$ from the icosahedron. How are they related?

The 2-Dialectica Construction: A Definition in Search of Examples
Generalized multivariable adjunctions and the Dialectica-Chu construction.

Spectral Triples and Graph Field Theory
Yan Soibelman is thinking about spectral stringy geometry.

Eli Hawkins on Geometric Quantization, II
Eli Hawkins explains his method of getting a quantum algebra from the convolution algebra of sections on a symplectic groupoid.

More on Tangent Categories
More comments on the nature of tangent categories and their relation to the notion of shifted tangent bundles to differential graded spaces.

Mathematical Images
A collection of (loosely) mathematical images.

Castles in the Air
Some thoughts on the value of nonrigorous mathematics.

When you try to quantize 10-dimensional supergravity theories, you are led to some theories involving strings. These are fairly well understood, because the worldsheet of a string is 2-dimensional, so string theories can be studied using 2-dimensional conformal quantum field theories, which are mathematically tractable. When you try to quantize 11-dimensional supergravity, you are led to a theory involving 2-branes and 5-branes. People call it M-theory, because while it seems to have magical properties, our understanding of it is still murky — because it involves these higher-dimensional membranes. They have 3- and 6-dimensional worldsheets, respectively. So, precisely formulating M-theory seems to require understanding certain quantum field theories in 3 and 6 dimensions. These are bound to be tougher than 2d quantum field theories… tougher to make mathematically rigorous, for example… but even worse, until recently people didn't know what either of these theories were! In 2008, Aharony, Bergman, Jafferis and Maldacena figured out the 3-dimensional theory: it's a supersymmetric Chern–Simons theory coupled to matter in a way that makes it no longer a topological quantum field theory, but still conformally invariant. It's now called the ABJM theory. This discovery led to the 'M2-brane mini-revolution', as various puzzles about M-theory got solved. The 6-dimensional theory has been much more elusive. It's called the (0,2) theory. It should be a 6-dimensional conformal quantum field theory. But its curious properties got people thinking that it couldn't arise from any Lagrangian — a serious roadblock, given how physicists normally like to study quantum field theories. But people have continued avidly seeking it, and not just for its role in a potential 'theory of everything'. Witten and others have shown that if it existed, it would shed new light on Khovanov homology and the geometric Langlands correspondence!
Continue reading "An M5-Brane Model" … Around 2008-9 we had several exchanges with Minhyong Kim here at the Café, in particular on his views of approaching number theory from a homotopic perspective, in particular in the post Kim on Fundamental Groups in Number Theory. (See also the threads Afternoon Fishing and The Elusive Proteus.) I even recall proposing a polymath project based on his ideas in Galois Theory in Two Variables. Something physics-like was in the air, and this seemed a good location with two mathematical physicists as hosts, John having extensively written on number theory in This Week's Finds. Nothing came of that, but it's interesting to see Minhyong is very much in the news these days, including in a popular article in Quanta magazine, Secret Link Uncovered Between Pure Math and Physics. The Quanta article has Minhyong saying: "I was hiding it because for many years I was somewhat embarrassed by the physics connection," he said. "Number theorists are a pretty tough-minded group of people, and influences from physics sometimes make them more skeptical of the mathematics." Café readers had an earlier alert from an interview I conducted with Minhyong, reported in Minhyong Kim in The Reasoner. There he was prepared to announce The work that occupies me most right now, arithmetic homotopy theory, concerns itself very much with arithmetic moduli spaces that are similar in nature and construction to moduli spaces of solutions to the Yang-Mills equation. Now his articles are appearing bearing explicit names such as 'Arithmetic Chern-Simons theory' (I and II), and today, we have Arithmetic Gauge Theory: A Brief Introduction. Continue reading "Arithmetic Gauge Theory" … Posted by Mike Shulman In the old days, when mathematics journals were all published on paper, there were hard budgetary constraints on the number of pages available in any issue, so long papers were naturally a much harder sell than short ones. But now that the primary means of dissemination of papers is electronic, this should no longer be the case. So journals that still impose draconian page constraints (I'm looking at you, CS conference proceedings), or reject papers because they are too long, are just a holdover from the past, an annoyance to be put up with until they die out. At least, that's what I used to believe. Continue reading "On Writing Short Papers" … Posted by Simon Willerton I want to write a few posts (which means at least one!) on things I've done around SageMath, not necessarily about SageMath, but using that as a springboard. Here I'll say how I used it to help visualization – for both the students and me! – in the differential geometry course I've been teaching this semester. SageMath (formerly SAGE) is a computer algebra system like Mathematica, Maple and MATLAB. However, unlike those other systems, it doesn't start with an 'M'. More importantly though, it is an open source project which, amongst other things, provides a unified 'front-end' for many other pieces of open source mathematical software such as Maxima, PARI and GAP. Having been using Maple since I was a PhD student, I started to make the switch to SageMath a couple of years ago, which was not that easy as the biggest problem with SageMath is the lack of introductory material and documentation, although this is now definitely improving, see for instance the book Mathematical Computation with SageMath, originally available in French. Now onto visualization, here is a static picture of a catenoid surface. 
Beneath the fold I'll explain two ways in which you can use SageMath to embed a rotatable model of the surface in a webpage. All being well, in the main body of the post you should be able to play with the catenoid yourself. This is, in some sense, a follow-on from one of the first posts I wrote here on using blender for creating 3d models of surface diagrams, nearly eight years ago. Continue reading "SageMath and 3D Models in Webpages" …

In the comments last time, a conversation got going about p-adic entropy. But here I'll return to the original subject: entropy modulo p. I'll answer the question: Given a "probability distribution" mod p, that is, a tuple $\pi = (\pi_1, \ldots, \pi_n) \in (\mathbb{Z}/p\mathbb{Z})^n$ summing to 1, what is the right definition of its entropy $H_p(\pi) \in \mathbb{Z}/p\mathbb{Z}$? Continue reading "Entropy Modulo a Prime (Continued)" …

In 1995, the German geometer Friedrich Hirzebruch retired, and a private booklet was put together to mark the occasion. That booklet included a short note by Maxim Kontsevich entitled "The $1\tfrac{1}{2}$-logarithm". Kontsevich's note didn't become publicly available until five years later, when it was included as an appendix to a paper on polylogarithms by Philippe Elbaz-Vincent and Herbert Gangl. Towards the end, it contains the following provocative words: Conclusion: If we have a random variable $\xi$ which takes finitely many values with all probabilities in $\mathbb{Q}$ then we can define not only the transcendental number $H(\xi)$ but also its "residues modulo $p$" for almost all primes $p$! Kontsevich's note was very short and omitted many details. I'll put some flesh on those bones, showing how to make sense of the sentence above, and much more. Continue reading "Entropy Modulo a Prime" …

Here's a draft of a little thing I'm writing for the Newsletter of the London Mathematical Society. The regular icosahedron is connected to many 'exceptional objects' in mathematics, and here I describe two ways of using it to construct $\mathrm{E}_8$. One uses a subring of the quaternions called the 'icosians', while the other uses Du Val's work on the resolution of Kleinian singularities. I leave it as a challenge to find the connection between these two constructions! (Dedicated readers of this blog may recall that I was struggling with the second construction in July. David Speyer helped me a lot, but I got distracted by other work and the discussion fizzled. Now I've made more progress… but I've realized that the details would never fit in the Newsletter, so I'm afraid anyone interested will have to wait a bit longer.) You can get a PDF version here: • From the icosahedron to E8. But blogs are more fun. Continue reading "From the Icosahedron to E8" …

An adjunction is a pair of functors $f:A\to B$ and $g:B\to A$ along with a natural isomorphism $A(a,g b) \cong B(f a,b)$. Question 1: Do we get any interesting things if we replace "isomorphism" in this definition by something else? If we replace it by "function", then the Yoneda lemma tells us we get just a natural transformation $f g \to 1_B$. If we replace it by "retraction" then we get a unit and counit, as in an adjunction, satisfying one triangle identity but not the other. If $A$ and $B$ are 2-categories and we replace it by "equivalence", we get a biadjunction. If $A$ and $B$ are 2-categories and we replace it by "adjunction", we get a sort of lax 2-adjunction (a.k.a. "local adjunction"). Are there other examples?
Question 2: What if we do the same thing for multivariable adjunctions? A two-variable adjunction is a triple of functors $f:A\times B\to C$ and $g:A^{op}\times C\to B$ and $h:B^{op}\times C\to A$ along with natural isomorphisms $C(f(a,b),c) \cong B(b,g(a,c)) \cong A(a,h(b,c))$. What does it mean to "replace 'isomorphism' by something else" here? It could mean different things, but one thing it might mean is to ask instead for a function $A(a,h(b,c)) \times B(b,g(a,c)) \to C(f(a,b),c)$. Even more intriguingly, if $A$, $B$, $C$ are 2-categories, we could ask for an ordinary two-variable adjunction between these three hom-categories; this would give a certain notion of "lax two-variable 2-adjunction". Question 2 is, are notions like this good for anything? Are there any natural examples? Now, you may, instead, be wondering about Question 3: In what sense is a function $A(a,h(b,c)) \times B(b,g(a,c)) \to C(f(a,b),c)$ a "replacement" for isomorphisms $C(f(a,b),c) \cong B(b,g(a,c)) \cong A(a,h(b,c))$? But that question, I can answer; it has to do with comparing the Chu construction and the Dialectica construction. Continue reading "The 2-Dialectica Construction: A Definition in Search of Examples" …
Analysis and performance evaluation of resource management mechanisms in heterogeneous traffic cognitive radio networks S. Lirio Castellanos-Lopez1, Felipe A. Cruz-Pérez2, Genaro Hernandez-Valdez1 & Mario E. Rivero-Angeles3 In this paper, the Erlang capacity achieved by the separate or joint use of several resource management mechanisms commonly considered in the literature (spectrum aggregation, spectrum adaptation, call buffering, channel reservation, selective interruption, and preemptive prioritization) to mitigate the effects of secondary call interruptions in cognitive radio networks (CRNs) is evaluated and compared. Heterogeneous traffic is considered, and service differentiation between real-time and elastic (data) traffic is done in terms of their different delay tolerance characteristics. The aim of our investigation is to identify the most relevant resource management mechanisms to improve the performance of the considered networks. As such, both the individual and joint effect of each resource management mechanisms on system performance are evaluated with the objective of comparing the gains in capacity achieved by each resource management mechanism studied in this work. For this purpose, the different resource management mechanisms studied are carefully combined and, for each resulting strategy, optimization of its configuration is presented to maximize the achievable Erlang capacity. For the performance evaluation of the considered strategies in heterogeneous traffic CRNs, a general teletraffic analysis is developed. Numerical results show that spectrum adaptation and call buffering are the mechanisms that best exploit the elasticity of delay-tolerant traffic in heterogeneous traffic CRNs and, therefore, most significantly improve system performance. Cognitive radio technology has been proposed for performing dynamic spectrum sharing such that the scarce spectrum in wireless communication systems is utilized more efficiently [1]. Two types of users are defined in cognitive radio networks (CRNs): the licensed or primary users (PUs) who own the license of spectrum usage and the unlicensed or cognitive or secondary users (SUs) having opportunistic access to the licensed spectrum when the PUs are not occupying primary channels (a.k.a., white spaces or spectrum holes). However, when a PU decides to access a spectrum hole, all SUs using this primary channel must relinquish their transmission immediately. These unfinished secondary sessions may be either simply blocked or switched to another white space to continue their transmissions (this process is called spectrum handoff.) If no white spaces are available, the secondary session is dropped (forced to terminate.) Due to this unpredictable nature of spectrum hole availability in a secondary network, in the early years, CRNs were best suited for best effort services without any quality-of-service (QoS) guarantees. Nonetheless, with the ever increasing popularity of cost-efficient voice over the Internet protocol-based applications, mobile internet, social networks, and internet of things, CRNs are required to support real-time and interactive traffic (with most stringent QoS requirements) in addition to best effort (elastic) type of traffic. In this work, it is assumed that secondary data users have a type of service where data integrity is much more important than delay. Hence, all the information of such users has to be relayed successfully to the destination. 
Then, these users cannot experience a forced termination since in this case data would be lost. Also, these users can experience long queuing delay without degrading their QoS. Services with these characteristics are for instance e-mail, SMS, ftp, ATM-related data, among others. For these reasons, it is of paramount importance to employ and develop spectrum management strategies for QoS provisioning in CRNs. In this research direction, the most effective mechanism for guarantee QoS for real-time and interactive applications in CRNs is spectrum partitioning (a.k.a. coordinated CRNs) [2,3,4,5,6,7,8]. Spectrum partitioning means that the total spectrum band is divided into normal spectrum band (in which PUs can preempt SUs) and reserved spectrum band (dynamically leased from the primary network) for exclusive use of the secondary network [3, 5,6,7,8]. However, spectrum partitioning implies that the secondary network has to give some (may monetary) incentive to the primary network for the exclusive use of its licensed spectrum [6,7,8]. CRNs that perform spectrum partitioning are called coordinated CRNs (CCRNs) [6,7,8,9]. The use of spectrum partitioning is not considered in this work; however, for the interested reader, in our early work [7, 8], the performance of dynamic spectrum leasing strategies in CCRNs is investigated. Furthermore, the extension of the considered strategies in the paper for CCRNs is straightforward and subject of our current research work. On the other hand, in CRNs with heterogeneous traffic, spectrum adaptation is perhaps the most important mechanism to provide QoS provisioning among the different classes of traffic [10]. Spectrum adaptation means that an ongoing secondary elastic session can adaptively adjust the number of assembled channels according to both availability of white spaces and activities of secondary users (i.e., arrivals, departures, interruptions.) In this context, channel aggregation (also known as channel assembling) means that two or more idle channels are combined together as one channel to provide higher data rate and, thus, reduce the total transmission time of secondary elastic traffic calls [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. On the contrary, elastic traffic may flexibly adjust downward the number of assembled channels to provide, for instance, immediate access to secondary real-time traffic when needed [10,11,12,13,14,15,16,17,18,19,20,21,22,23].Footnote 1 Additionally, other relevant mechanisms are identified in the literature to overcome the impact of service interruption of SUs in CRNs [25,26,27]. Among these mechanisms are the following. Spectrum handoff process that, as explained above, allows the interrupted secondary calls to be switched to an idle channel, if one is available, to continue its service. Call buffering strategy that can be used to reduce the forced termination probability of preempted secondary calls. That is, if spectrum handoff is not possible (i.e., no white spaces are available), preempted secondary sessions may be queued into a buffer to wait for a spectrum hole [25, 28]. In this case, when a queued SU finds a new spectrum hole, it is allowed to continue transmitting its information. Although not very common employed in the related literature, a buffer may be used to queue new secondary sessions to improve blocking probability [29]. Preemptive priority which is an effective strategy to provide QoS provisioning among the different classes of secondary traffic [30]. 
For instance, real-time (delay-sensitive) traffic calls can preempt delay-tolerant (elastic) traffic calls. Of course, those preempted elastic traffic calls may be queued into a buffer to reduce forced termination probability at the expense of increasing transmission delay. In this sense, and according to realistic situations for other types of applications (with limited tolerance to the transmission delay), some expiration mechanisms (reneging due to impatience or thrown away by the system after a certain time) of preempted sessions in the queue or in the system must be considered. However, in this paper, only elastic traffic with unlimited transmission delay tolerance is considered for secondary data users. Double preemptive priority refers to the strategy in which both primary and secondary voice users can interrupt a data session in case all resources are busy. Channel reservation is another prioritization mechanism proposed in the literature for QoS provisioning in CRNs [25,26,27]. In [26], for instance, it is proposed a channel reservation mechanism in which a certain number of white spaces are precluded to be used by new arriving secondary sessions (that is, those white spaces are reserved for spectrum handoff). Blocking access to new secondary sessions, even if there are enough white spaces, lessen the number of secondary sessions to be forcedly terminated. Thus, channel reservation allows the tradeoff between forced call termination and new call blocking probabilities according to the QoS requirements of the secondary traffic. To finely control the performance metrics, the number of reserved channels can take a real value. When a real number of channels is reserved, this strategy is known as fractional channel reservation [31] and works as follows. Selective interruption means that upon the arrival of a PU, if a secondary call must be interrupted, the flexible resource allocation (FRA) strategy firstly chooses a data secondary call for being interrupted. From the arguments just exposed, it is evident that in preemptive priority CRNs with spectrum adaptation and call buffering, transmission delay (defined as the total time that a SU session, that is preempted at least once, spends in the queue during its lifetime in the system) is one of the most important performance metrics for elastic traffic classes (this performance metric is also referred in the literature as file transfer delay [32]). Surprisingly, transmission delay has not been previously considered as performance metric [10,11,12,13,14,15,16,17,18,19,20,21,22,23, 28]. Moreover, previous studies that address CRNs with heterogeneous traffic and spectrum adaptation [10,11,12,13,14,15,16,17,18,19,20,21,22,23] have not considered the use of call buffering, which is a relevant mechanism to exploit the elasticity of best effort traffic in benefit of both real-time and elastic secondary sessions. Exception of this are [28, 33, 34]; however, in these works, neither Erlang capacity nor transmission delay is considered. In this paper, the Erlang capacity achieved by different resource management mechanisms typically employed in the literature to mitigate the effects of call interruptions (i.e., spectrum adaptation, buffering for new or interrupted secondary data calls, channel reservation, selective interruption, and preemptive prioritization) is evaluated and compared when they are employed either separately or jointly in heterogeneous traffic CRNs. 
The evaluated Erlang capacity of the cognitive radio system refers to the maximum offered traffic load of secondary users for which the QoS requirements are guaranteed. In our previous work [35], the Erlang capacity of a CRN was analyzed but only VoIP traffic was considered. In [35], neither different service types nor the individual and joint effect of each strategy on system performance was studied. In the present paper, service differentiation between real-time and elastic traffic is considered according to their different delay tolerance characteristics. Also, several strategies that jointly employ different resource management techniques to take advantage of the flexibility and delay tolerance features of elastic traffic are analyzed and evaluated. The studied strategies include as special cases other adaptive assembling strategies that have been recently reported in the open literature [10, 14]. Additionally, a general teletraffic analysis for the performance analysis of the considered strategies in heterogeneous traffic CRNs is developed. A major characteristic of the proposed mathematical analysis requires less state variables than the one proposed in [10]. Hence, the proposed analysis methodology can be used in the context of future works in CRNs where the complexity is considerably high. Contrary to the previously published related works, non-homogenous bandwidth among primary and secondary channels is captured by our mathematical model, that is, the possibility that multiple secondary calls be preempted by the arrival of a PU is now considered. Finally, it is important to note that a similar work was developed in [36]; however, the main contribution of [36] is the formulation of two different spectrum leasing strategies for the performance improvement of CRNs in terms of Erlang capacity and cost per Erlang of capacity. In particular, the Erlang capacity as function of the maximum allowable number of simultaneously rented resources and the fraction of time that resources are rented to achieve such capacity were evaluated. For the strategies developed in [36], spectrum adaptation and spectrum leasing were used jointly with spectrum handoff, call buffering, and preemptive priority mechanism. As in this paper, two service types (i.e., real-time and elastic traffic differentiated according to their different delay tolerance characteristics) were considered in [36]. Contrary to [36], the aim of our investigation in this paper is to identify the most relevant resource management mechanisms (among those commonly considered in the literature: spectrum aggregation, spectrum adaptation, call buffering, channel reservation, selective interruption, and preemptive prioritization) to improve the performance of CRNs. To this end, we develop a new mathematical model to capture the (individual or joint) effect of these resource management mechanisms on system performance. System performance is evaluated in term of Erlang capacity, mean transmission delay of secondary elastic traffic, and forced termination probability of secondary elastic traffic. Building on this, we evaluate and quantify the impact of each individual resource management mechanism and also the impact of jointly employing these mechanisms on system performance. Hence, new numerical results and insights are obtained that were neither presented nor discussed in [36] The rest of the paper is organized as follows. System model and general assumptions are presented in Section 2. 
This is followed, in Section 3, by a description of the different strategies studied in this paper. In Section 4, teletraffic analysis is developed for the performance evaluation of the considered spectrum allocation strategies. Numerical results are analyzed and discussed in Section 5, before conclusions are presented in Section 6. The considered opportunistic spectrum sharing wireless system consists of two types of radio users, the PUs of the spectrum and the SUs that opportunistically share the spectrum resources with the PUs in a coverage area. PUs are unaffected by any resource management and admission policies used in the cognitive network, as SUs are transparent to them. The system consists of M primary identical bands, and each primary band is divided into N identical sub-bands (channels). This asymmetry of the primary and secondary channels is based on the fact that PUs and SUs can support different applications. However, the mathematical analysis developed in Section 4 is valid for any integer value of N ≥ 1. Thus, there are NM channels that are dynamically shared by the primary and secondary users. Two heterogeneous secondary traffic types are considered: real-time (voice) and elastic (data) traffic. Secondary voice users (SvUs) have absolute priority over secondary data users (SdUs). Non-homogeneity between the number of channels used by primary and secondary calls is considered. Each PU service occupies a fixed number of N channels. On the other hand, SvUs occupy one channel, while all the secondary data calls have the same minimum (b min) and maximum (b max) number of channel requirements. In order to exploit elasticity of secondary data traffic, the number of assembled channels can be adjusted upward or downward in benefit of secondary user satisfaction. That is, data sessions can flexibly adjust upward the number of assembled channels (as long as they are not being used by primary nor secondary voice sessions) to achieve higher data rate and, thus, reduce its total transmission time. (Notice that, for real-time traffic, sufficient QoS is provided given a fixed number of radio resources, and its service time is not modified even if more channels are assembled, but may be improved from the final user point of view when more channels are assembled.) On the other hand, data sessions can also flexibly adjust downward the number of assembled channels to provide immediate service (when needed) to either new secondary (voice or data) calls or ongoing (just preempted) secondary voice calls. On the other hand, it is considered that PUs, SvUs, and SdUs arrive to the system according to a Poisson process with rates λ (P), λ v (S), and λ d (S), respectively. Also, the unencumbered service times for PUs, SvUs, and SdUs (when SdUs employ one channel), (represented, respectively, by the random variables X (P), X v (S), and X d (S)) are considered independent and identically negative-exponentially distributed with mean values 1/μ (P), \( 1/{\mu}_v^{\left(\mathrm{S}\right)} \), and \( 1/{\mu}_{\mathrm{d}}^{\left(\mathrm{S}\right)} \), respectively. Note that the transmission time and departure rate of secondary data users depend on the number of channels that they employ. It is assumed that as secondary offered traffic load increases, the same proportion of service requests is maintained for each service considered. Thus, as the considered offered traffic changes, there is a dependence of the arrival rates of the SU voice and data flows; this dependence is capture through the parameter f v. 
Parameter f v represents the proportion of the total secondary offered traffic that corresponds to secondary voice users (delay-sensitive users). In Table 1, a list of the variables and acronyms used in the manuscript is provided. Table 1 List of variables and acronyms Considered adaptive spectrum allocation strategies In this section, the considered and/or studied adaptive allocation strategies are described. They are called strategies E1, E2, E3, E4, E5, E6, and E7. Table 2 presents the characteristics of each of the FRA strategies studied in this section. Strategy E1 does not employ any of the studied resource management mechanisms. Strategy E2 only employs spectrum adaptation. Strategy E3 employs all the studied resource management mechanisms but channel reservation.Footnote 2 Strategy E4 employs all the studied resource management mechanisms but data call buffering. Strategy E5 employs spectrum adaptation, selective interruption, and data call buffering. Strategy E6 only employs selective interruption. Strategy E7 employs spectrum adaptation and selective interruption. Note that strategies E1, E2, E5, E6, and E7 are special cases of strategy E3. Figure 1 shows the block diagram of operation of strategies E1, E2, E6, and E7. The block diagrams of operation of strategies E3, E4, and E5 are shown in Figs. 2, 3 and 4, respectively. Table 2 Characteristics of the analyzed strategies Block diagram of strategies E1, E2, E6, and E7 Block diagram of strategy E3 Strategies E1 to E7 are differentiated according to the different mechanisms used to mitigate the adverse effects of secondary call interruptions as well as to provide QoS in CRNs (i.e., spectrum handoff, data call buffering, spectrum adaptation, double preemptive priority, channel reservation, and selective interruption mechanisms). The study carried out in this work allows us to investigate the performance improvement due to the use of these resource management mechanisms commonly separately employed in the literature. All of the strategies studied in this work consider the use of spectrum handoff. Strategy E1 is the most basic one; this strategy is studied in [10] and, in that work, it is known as the non-assembling (NA) strategy. Strategy E2 is also proposed in [10] to improve system performance. Strategy E2 is characterized by using spectrum adaptation. Also, under strategy E2, interrupted secondary calls are selected randomly among data and voice calls (that is, the selective interruption mechanism is not considered), no buffer is used for exploiting the elasticity of the data traffic, and neither channel reservation nor double preemptive mechanisms are considered. Strategy E3 represents the reference strategy, and it is also the most complete strategy in the sense that it employs most of the mechanisms to improve the QoS in CR networks. Strategies E4 and E5 represent two alternatives of our reference E3 strategy (for instance, from Table 2, it is observed that strategy E4 considers channel reservation for ongoing secondary calls and does not consider the use of a buffer for new data calls). To finely control the different performance metrics, strategy E4 considers fractional channel reservation. Thus, to reserve, on average, a real number r v of channels, ⌊r v⌋ + 1 channels has to be reserved with probability p v = r v − ⌊r v⌋ and ⌊r v⌋ channels with the complementary probability [31]. 
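The fractional reservation rule just described is straightforward to implement. The sketch below is illustrative only (r_v = 1.4 is an arbitrary example): at each admission decision it draws the number of channels to reserve so that the long-run average equals r_v.

```python
import random

def reserved_channels(r_v, rng=random):
    """Fractional channel reservation: reserve floor(r_v) + 1 channels with
    probability p_v = r_v - floor(r_v), and floor(r_v) channels otherwise,
    so that the time-average number of reserved channels equals r_v."""
    base = int(r_v)               # floor(r_v) for non-negative r_v
    p_v = r_v - base
    return base + 1 if rng.random() < p_v else base

# The empirical average over many admission decisions approaches r_v (here 1.4).
samples = [reserved_channels(1.4) for _ in range(100_000)]
print(round(sum(samples) / len(samples), 2))
```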
Finally, strategies E6 and E7 are variants of strategies E1 and E2, respectively (specifically, as it is shown in Table 2, these strategies include the use of selective interruption mechanism). The aim of the considered resource management strategies is to take advantage of the flexibility and delay tolerance of elastic traffic. In order to protect ongoing SU (both voice and data) calls, the reference E3 strategy jointly employs spectrum handoff, call buffering, double preemptive priority for PUs and voice calls, and degradation/compensation mechanisms. For the sake of space, only detailed description of operation of the reference strategy E3 (shown in Fig. 2) is provided. Nevertheless, with the provided description of strategy E3 and the block diagrams shown in Figs. 1, 3, and 4, the understanding of the operation of the other strategies should be straightforward. In strategy E3, a common buffer is used to queue both new and interrupted SdU sessions. As explained below, the number of resources assigned to data sessions is dynamically adjusted. In order to protect interrupted SdUs by the preemptive priority mechanism due to the arrival of either PU or SvU calls, a victim buffer is employed. In this way, the elasticity of data call traffic is exploited in benefit of QoS satisfaction of SvUs at the price of increasing mean transmission delay of SdU calls. Additionally, new data sessions for SUs can be buffered only when there are not enough resources in the system to be served and the number of queued data sessions is less than the threshold Q thr. To finely control de performance metrics, this threshold is considered to have a real value [37]. The required buffer size to queue all interrupted data sessions is of finite number ⌊MN/b min⌋ + ⌈Q thr⌉ of locations.Footnote 3 Also, data sessions from SUs remain in the system until service termination. For all the evaluated strategies, resume retransmission is considered and queued sessions are served in the order of arrival [38]. Notice that, because of non-homogenous bandwidth among primary and secondary channels, there is the possibility that multiple secondary calls be simultaneously preempted by the arrival of a PU. Upon the arrival of a SdU (SvU) call, if the number A c of available channels is larger than or equal to b min (one), the call is accepted and the number of assembled channels assigned to it is given by min{b max, A c } (one). If not enough idle channels exist for a newly arrived SU call, instead of blocking it, a degradation procedure for ongoing SdU calls take place to try allowing the newly arrived SU call to join the network. The degradation procedure applied for the data traffic calls is based on the equal resource sharing allocation (ERSA) strategy [39]. The degradation procedure works as follows: When it is necessary to degrade ongoing data calls, a cyclic process exists in which ongoing data calls with the largest amount of assembled channels are degraded first and calls with the least number of assembled channels are degraded last. Calls are degraded one channel at a time. This helps with distributing the resources within data traffic calls as equally as possible. The degradation process for an engaged call ends when the number of channels allocated to it decreases up to the minimum required (b min). 
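A compact Python sketch of this cyclic degradation rule, together with the mirror-image compensation rule described in the following paragraphs, is given below. It is my own paraphrase of the ERSA bookkeeping; the function names and the example allocation are illustrative, not the authors' code.

```python
def degrade_once(allocations, b_min):
    """One step of the degradation cycle: take one channel from the data
    session that currently holds the most channels, provided it stays at or
    above b_min.  Returns the index of the degraded session, or None."""
    idx = max(range(len(allocations)), key=lambda i: allocations[i], default=None)
    if idx is None or allocations[idx] <= b_min:
        return None
    allocations[idx] -= 1
    return idx

def compensate(allocations, idle, b_max):
    """Distribute idle channels one at a time, always to the data session
    with the fewest assembled channels, until no channel is idle or every
    session has reached b_max."""
    while idle > 0 and any(a < b_max for a in allocations):
        idx = min(range(len(allocations)), key=lambda i: allocations[i])
        allocations[idx] += 1
        idle -= 1
    return allocations, idle

# Example: three ongoing data sessions with b_min = 1 and b_max = 4.
alloc = [4, 2, 3]
degrade_once(alloc, b_min=1)                 # frees one channel from the largest session
print(alloc)                                 # [3, 2, 3]
print(compensate(alloc, idle=3, b_max=4))    # idle channels go to the smallest sessions first
```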
When it is not possible to perform further spectrum degradation and the number of available channels plus the ones to be released by all ongoing SdU calls is still not enough to attend the newly arrived SU call, it is queued in the buffer if the admission threshold (Q thr) at the buffer is not exceeded; otherwise, it is blocked. On the other hand, if the new service request is for a voice call, a preemption priority mechanism is triggered. The preemption priority mechanism is used to further protect the access of SvU call traffic. Under this mechanism, an ongoing data call is interrupted to allow the newly arrived SvU call to join the network. As such, a newly arrived SvU call is blocked only if the number of available channels is smaller than one and no ongoing SdUs exist in the system. Under this spectrum adaptation strategy, upon the departure of either PUs or SUs, the available channels are assigned to users at the queue. If after this process there remain vacant channels, a spectrum compensation procedure for secondary data calls is triggered. The compensation procedure works as follows. When it is necessary to compensate ongoing data calls, a cyclic process exists in which ongoing data calls with the smaller amount of assembled channels are compensated first and calls with the larger number of assembled channels are compensated last. Calls are compensated one channel at a time. This helps to achieve the closer to a uniform distribution of channels among data users. The compensation process ends until either all the vacant channels are assigned or when all the existing SdU calls have assembled b max channels. By using the compensation mechanism, the transmission rate of SdUs is speeded up when channels become vacant. As such, the channel occupancy times by delay-tolerant traffic calls is reduced. On the other hand, the spectrum degradation procedure described above is also triggered by the arrival of a PU to mitigate forced termination of interrupted SU calls. Again, it is taken advantage of the flexibility and delay tolerance of elastic traffic in benefit of QoS satisfaction of SvUs at the price of transmission delay for SdUs. In this case, if the degradation procedure (as well as the preemptive priority mechanism used to protect SvU calls) fails, the interrupted secondary call is forcedly terminated (that is, no victim buffer is considered for voice sessions). Note that the previous description applies to strategies E3, E4, and E5 under the following considerations: (i) When no new data sessions are allowed to be queued in the buffer, as in strategy E4, the value of the threshold Q thr is set to 0 (see block diagram of Fig. 3 and compare with block diagram of Fig. 2); (ii) When neither channel reservation nor the new data sessions are buffered, as in strategy E5, both r v and Q thr are set to 0 (see block diagram of Fig. 4 and compare with block diagram of Fig. 2). Additionally, it is important to note that strategy E5 does not consider the double preemptive mechanism where real-time sessions interrupt ongoing elastic traffic sessions, as it is done in strategies E3 and E4. As such, in strategy E5, whenever a secondary voice user arrives to the system and finds no idle resources to be served, the preemption priority mechanism is not triggered and the new SvU session request is simply blocked. Teletraffic analysis In this section, the teletraffic analysis for the performance evaluation of the studied FRA strategies is developed. 
As strategies E1, E2, E4, E5, E6, and E7 are special cases of strategy E3, a mathematical analysis for strategy E3 is developed and, then, it is explained how it must be modified to evaluate the performance of the other strategies. According to the considered spectrum adaptation strategy, resources are assigned in such a way as to achieve as close as possible equal resource sharing among secondary data users. The state variable of the system is represented by the vector k = [k 0, k 1, k 2], where k 0 is the number of PUs in the system, k 1 is the number of SvUs being served in the secondary system, and k 2 is the number of SdUs (both buffered and being served) in the secondary system. Building on this, a continuous time Markov chain (CTMC) is built in order to perform the teletraffic analysis for the strategy E3. In Fig. 5, the state transition diagram of this system is shown. Let us represent by vector e i of size 3 whose all entries are zero except the entry at the ith position which is 1 (i = 0, 1, 2). Let Ω be the set of feasible states as $$ \Omega =\left\{\begin{array}{l}\mathbf{k}=\left({k}_0,{k}_1,{k}_2\right):\kern0.36em 0\le {k}_0\le M\cap 0\le {k}_1\le MN\;\\ {}\cap \kern0.36em 0\le {k}_2\le \left\lfloor MN/{b}_{min}\right\rfloor +\left\lceil {Q}_{thr}\right\rceil \kern0.6em \cap \kern0.6em 0\le {k}_1+b\left({k}_2\right)\le MN\kern0.24em \end{array}\right\} $$ where b(k 2) represents the number of resources occupied by SdUs. Note that in order to have at least one SdU being served, the system must satisfy the following conditions: MN − Nk 0 − k 1 ≥ b min ∩ k 2 > 0. From this observation, it is straightforward to see that the number of resources occupied by SdUs is given by: $$ b\left({k}_2\right)=\left\{\begin{array}{l}\min \left( MN-{Nk}_0-{k}_1,{b}_{\mathrm{max}}{k}_2\right)\kern0.72em ;{Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2< MN\\ {} MN-{Nk}_0-{k}_1\kern4.559998em ;{Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2\ge MN\kern0.36em \cap MN-{Nk}_0-{k}_1\ge {b}_{\mathrm{min}}\\ {}0\kern9.799997em ;\mathrm{otherwise}\end{array}\right. $$ State transition diagram Note that even if variable k 2 represents only the total number of SdUs both active and queued in the buffer, it can be used to calculate the number of resources used by an active SdU. For a particular state k, the maximum number of active SdUs is b(k 2)/b min. Hence, if k 2 is higher than b(k 2)/b min, it means that there are SdUs waiting to be served in the buffer. Building on this, let κ a represent the maximum number of active data sessions in the system. Then, $$ {\kappa}_{\mathrm{a}}=\min \left({k}_2,\left\lfloor \frac{b\left({k}_2\right)}{b_{\mathrm{min}}}\right\rfloor \right) $$ and the average number of resources used by active SdUs is: $$ {b}_{\mathrm{a}\mathrm{vg}}=\min \left(\left\lfloor \frac{MN-{Nk}_0-{k}_1}{\kappa_{\mathrm{a}}}\right\rfloor, {b}_{\mathrm{max}}\right) $$ As such, the number of data sessions using b avg + 1 resources is calculated as κ = mod(b(k 2), b avg κ a) and the number of data sessions using b avg resources is given by κ a − κ. The remaining k 2 − κ a SdUs are buffered in the queue. Transition rates due to arrivals and service terminations of PUs or SUs, as well as departure of SUs from the queue are summarized in Table 3, where: $$ {a}_0\left(\mathbf{k}\right)=\left\{\begin{array}{l}{\lambda}^{\left(\mathrm{P}\right)}\kern1em ;{k}_0\ge 0\cap {Nk}_0+{k}_1+\gamma {b}_{\mathrm{min}}{k}_2\le \left(M-1\right)N\\ {}0\kern2.5em ;\mathrm{otherwise}\end{array}\right. 
$$ where γ = 1 for strategies E1 and E2 because these do not employ the selective interruption mechanism, and γ = 0 for the other strategies. Table 3 Transitions from a reference state k = [k 0 ,…, k 2 ] In this transition, the number of resources occupied by PUs and SvUs is lower or equal than (M − 1)N and no SU is interrupted. It is worth mentioning the importance of representing by k 2 the total number of data sessions (both active and queued): the value of k 2 is not modified when secondary data sessions are interrupted due to the fact that these interrupted sessions go from active state to buffered state. As such, the complexity of the analysis is reduced. In order to also employ this expression of a 0(k) as incoming transition rate, the number of PUs has to be at least zero. In fact, for the arrival rate functions a i (k) (i = 0, 1, 2), it is necessary to specify that the corresponding number of sessions has to be equal or larger than zero to use it as incoming transition rate to the reference state k. To reserve, on average, a real number r v of channels, ⌊r v⌋ + 1 channels has to be reserved with probability p v = r v − ⌊r v⌋ and ⌊r v⌋ channels with the complementary probability [31]. $$ {a}_1\left(\mathbf{k}\right)=\left\{\begin{array}{l}{\lambda}_{\mathrm{v}}^{\left(\mathrm{S}\right)}\kern1em ;{Nk}_0+{k}_1< MN-\left\lfloor {r}_{\mathrm{v}}\right\rfloor -1\cap {k}_1\ge 0;\\ {}{\lambda}_{\mathrm{v}}^{\left(\mathrm{S}\right)}\left(1-{p}_{\mathrm{v}}\right)\kern1em ;{Nk}_0+{k}_1= MN-\left\lfloor {r}_{\mathrm{v}}\right\rfloor -1\cap {k}_1\ge 0;\\ {}0\kern2.5em ;\mathrm{otherwise}\end{array}\right. $$ This transition rate indicates that new sessions of SvUs are accepted into the system whenever the number of occupied resources by PUs and SvUs is either lower than MN − ⌊r v⌋ − 1 or with probability (1 − p v) when the number of occupied resources by PUs and SvUs equals MN − ⌊r v⌋ − 1. Note that the number of resources occupied by SdUs is not considered in this point due to the double preemptive scheme and the use of k 2 for active and queued SdUs. In this case, the number of SvUs has to be higher or equal to zero. $$ {a}_2\left(\mathbf{k}\right)=\left\{\begin{array}{l}{\lambda}_{\mathrm{d}}^{\left(\mathrm{S}\right)};\kern0.96em {Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2\le MN-{b}_{\mathrm{min}}\cap {k}_2\ge 0\\ {}{\lambda}_{\mathrm{d}}^{\left(\mathrm{S}\right)};\kern0.96em {k}_2-{\kappa}_a<\left\lfloor {Q}_{\mathrm{thr}}\right\rfloor \cap {Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2> MN-{b}_{\mathrm{min}}\cap {k}_2\ge 0\\ {}{\lambda}_{\mathrm{d}}^{\left(\mathrm{S}\right)}{p}_{\mathrm{q}};\kern0.36em {k}_2-{\kappa}_a=\left\lfloor {Q}_{\mathrm{thr}}\right\rfloor \cap {Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2> MN-{b}_{\mathrm{min}}\cap {k}_2\ge 0\\ {}0;\kern1.68em \mathrm{otherwise}\end{array}\right., $$ where p q = Q thr − ⌊Q thr⌋. For this state transition, new sessions of SdUs can be accepted if at least b min resources can be obtained by the system as a consequence of the service degradation scheme if they are not being used by PUs or SvUs. If this degradation cannot be performed, the new sessions can be queued into the buffer if the occupation of the buffer is lower than ⌊Q thr⌋ or equal to ⌊Q thr⌋ with probability p q or blocked with probability (1 − p q). When the buffer occupation is higher than the threshold ⌊Q thr⌋, new data secondary sessions are blocked. In this case, the number of SdUs (active or waiting in the buffer) has to be higher or equal to zero. 
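The state-dependent occupancy quantities and arrival rates defined above map almost one-to-one onto code. The following Python sketch is purely illustrative (it is not the authors' implementation and the names are invented); it follows the piecewise definitions of b(k 2), κ a, and a i(k), including the fractional reservation r v and the fractional admission threshold Q thr.

```python
from math import floor

def b_occupied(k, M, N, b_min, b_max):
    """Channels b(k2) held by SdUs in state k = (k0, k1, k2)."""
    k0, k1, k2 = k
    free_for_data = M * N - N * k0 - k1          # channels not used by PUs/SvUs
    if N * k0 + k1 + b_min * k2 < M * N:
        return min(free_for_data, b_max * k2)
    if free_for_data >= b_min:
        return free_for_data
    return 0

def kappa_active(k, M, N, b_min, b_max):
    """kappa_a: number of SdU sessions actually in service (the rest are queued)."""
    k2 = k[2]
    return min(k2, b_occupied(k, M, N, b_min, b_max) // b_min)

def a0(k, M, N, b_min, lam_P, gamma):
    """PU arrival rate; gamma = 1 for E1/E2 (no selective interruption), 0 otherwise."""
    k0, k1, k2 = k
    if k0 >= 0 and N * k0 + k1 + gamma * b_min * k2 <= (M - 1) * N:
        return lam_P
    return 0.0

def a1(k, M, N, lam_Sv, r_v):
    """SvU arrival rate under fractional channel reservation r_v (strategy E3)."""
    k0, k1, _ = k
    p_v = r_v - floor(r_v)
    occ = N * k0 + k1            # SdU occupancy ignored because of double preemption
    if k1 >= 0 and occ < M * N - floor(r_v) - 1:
        return lam_Sv
    if k1 >= 0 and occ == M * N - floor(r_v) - 1:
        return lam_Sv * (1.0 - p_v)
    return 0.0

def a2(k, M, N, b_min, b_max, lam_Sd, Q_thr):
    """SdU arrival rate under the fractional buffer admission threshold Q_thr."""
    k0, k1, k2 = k
    if k2 < 0:
        return 0.0
    p_q = Q_thr - floor(Q_thr)
    queued = k2 - kappa_active(k, M, N, b_min, b_max)
    if N * k0 + k1 + b_min * k2 <= M * N - b_min:
        return lam_Sd            # can be served, possibly after degradation
    if queued < floor(Q_thr):
        return lam_Sd            # admitted to the buffer
    if queued == floor(Q_thr):
        return lam_Sd * p_q      # admitted with probability p_q
    return 0.0
```

A full model would evaluate these functions for every feasible state in Ω when building the transition-rate matrix.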
\( {b}_0\left(\mathbf{k}\right)=\left\{\begin{array}{l}{k}_0{\mu}^{\left(\mathrm{P}\right)}\kern1em ;{k}_0\le M\\ {}0\kern2.5em ;\mathrm{otherwise}\end{array}\right. \). This transition occurs when a PU completes its service. Hence, the departure rate is proportional to the number of active PUs. When this departure happens, there can be a resource reassignment process but, as already mentioned, the value of k 2 is not altered. In this case, to employ this departure rate as incoming rate to the reference state k, the number of sub-channels used has to be lower or equal to the total number of sub-channels in the network. \( {b}_1\left(\mathbf{k}\right)=\left\{\begin{array}{l}{k}_1{\mu}_{\mathrm{v}}^{\left(\mathrm{S}\right)}\kern1em ;{Nk}_0+{k}_1+b\left({k}_2\right)\le MN\\ {}0\kern3em ;\mathrm{otherwise}\end{array}\right. \). This transition rate indicates that the departure rate of SvUs is also proportional to the number of active SvUs in the system. Now, the number of occupied sub-channels either by SUs or PUs has to be lower or equal to the total number of sub-channels in the system. $$ {b}_2\left(\mathbf{k}\right)=\left\{\begin{array}{l}b\left({k}_2\right){\mu}_{\mathrm{d}}^{\left(\mathrm{S}\right)};{Nk}_0+{k}_1+b\left({k}_2\right)\le MN\kern0.36em \cap \kern0.36em {b}_{\mathrm{min}}{k}_2\le MN+{b}_{\mathrm{min}}\left\lceil {Q}_{\mathrm{thr}}\right\rceil \\ {}0\kern3.6em ;\mathrm{otherwise}\end{array}\right. $$ b (k 2) In this transition, the departure rate of SdUs is proportional to the number of resources occupied by the SdUs and the inverse of the mean service time of these users when they employ only one channel. Note that in order to calculate the death rate of secondary data calls, it is not necessary to know the specific distribution of resources between the different data users. The rationale behind this is that the death rate is given by the sum of the individual death rates of each secondary user which are in turn proportional to the number of assigned resources to each one. Hence, the total death rate of SdUs is the product of the death rate of SdUs when only one channel is being used multiplied by the total number of resources that are assigned to the SdUs. This is the reason why our analysis requires less state variables than that presented in [10]. In this case, to employ this departure rate as incoming rate to the reference state k, the total number of sub-channels occupied by SUs and PUs has to be lower or equal to the total number of sub-channels in the system. Also, the total number of active SdUs and waiting in the buffer has to be lower or equal to the maximum number of users that can be accepted in the network, \( \left\lfloor \raisebox{1ex}{$ MN$}\!\left/ \!\raisebox{-1ex}{${b}_{\mathrm{min}}$}\right.\right\rfloor +\left\lceil {Q}_{\mathrm{thr}}\right\rceil \). \( c\left(\mathbf{k}\right)=\left\{\begin{array}{l}{\lambda}^{\left(\mathrm{P}\right)}\kern1em ;{k}_0<M\cap {Nk}_0+{k}_1>\left(M-1\right)N\\ {}0\kern2.5em ;\mathrm{otherwise}\end{array}\right. \). This transition rate indicates that new PU sessions are accepted with rate λ (P) when the number of occupied resources by PUs is lower than M and the total number of sub-channels used by primary and secondary users is higher than (M − 1)N. The corresponding incoming transition rate to the reference state k in this case is $$ d\left(\mathbf{k}\right)=\left\{\begin{array}{l}{\lambda}^{\left(\mathrm{P}\right)}\kern1em ;0<{k}_0\le M\cap {Nk}_0+{k}_1>\left(M-1\right)N\\ {}0\kern2.5em ;\mathrm{otherwise}\end{array}\right. 
$$ Given feasible states and their transitions in the CTMC, the global set of balance equations can be constructed as follows $$ {\displaystyle \begin{array}{l}\left[\sum \limits_{i=0}^2{a}_i\left(\mathbf{k}\right)+\sum \limits_{i=0}^2{b}_i\left(\mathbf{k}\right)+c\left(\mathbf{k}\right)\right]P\left(\mathbf{k}\right)=\sum \limits_{i=0}^2{a}_i\left(\mathbf{k}-{\mathbf{e}}_i\right)P\left(\mathbf{k}-{\mathbf{e}}_i\right)+\\ {}\sum \limits_{i=0}^2{b}_i\left(\mathbf{k}+{\mathbf{e}}_i\right)P\left(\mathbf{k}+{\mathbf{e}}_i\right)+\sum \limits_{j=1}^Nd\left(\mathbf{k}-{\mathbf{e}}_0+j{\mathbf{e}}_1\right)P\left(\mathbf{k}-{\mathbf{e}}_0+j{\mathbf{e}}_1\right)\end{array}} $$ Based on these equations and the normalization equation, the state probabilities P(k) are obtained. The performance metrics of interest in this CRN when the spectrum adaptation strategy is enabled are as follows. Let \( {P}_{\mathrm{b}}^{\left(\mathrm{P}\right)} \), \( {P}_{\mathrm{b}\_\mathrm{v}}^{\left(\mathrm{S}\right)} \), and \( {P}_{\mathrm{b}\_\mathrm{d}}^{\left(\mathrm{S}\right)} \) represent, respectively, the new call blocking probabilities of PUs, SvUs, and SdUs, while \( {P}_{\mathrm{ft}\_\mathrm{v}}^{\left(\mathrm{S}\right)} \) represents the forced call termination probability of SvUs. Blocking probability for PUs, which occurs when the total number of channels of the system are being used by PUs: $$ {P}_{\mathrm{b}}^{\left(\mathrm{P}\right)}=\sum \limits_{\Omega \left|{k}_0=M\right.}P\left(\mathbf{k}\right) $$ Blocking probability for secondary voice users depends on whether it is used simple or double preemption priority. Then, two different expressions for the new call blocking probability of secondary voice users are derived. Simple preemption priority. For the strategies that employ the simple preemption priority mechanism, blocking of secondary voice user requests occurs when the total number of idle sub-channels is lower than ⌊r v⌋ + 1 or with probability p v, when the total number of idle sub-channels equals ⌊r v⌋ + 1. Then, the blocking probability for secondary voice users is given by $$ {P}_{b\_v}^{\left(\mathrm{S}\right)}=\sum \limits_{\Omega \left|{Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2> MN-\left\lfloor {r}_v\right\rfloor -1\right.}P\left(\mathbf{k}\right)+\sum \limits_{\Omega \left|{Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2= MN-\left\lfloor {r}_v\right\rfloor -1\right.}{p}_{\mathrm{v}}P\left(\mathbf{k}\right) $$ (7a) Double preemption priority. For the strategies that employ double preemption priority mechanism, blocking of secondary voice user requests occurs when the total number of sub-channels used by both PUs and SvUs is greater than MN − ⌊r v⌋ − 1 or with probability p v, when the total number of sub-channels used by both PUs and SvUs equals MN − ⌊r v⌋ − 1. Then, the blocking probability for secondary voice users is given by $$ {P}_{b\_v}^{\left(\mathrm{S}\right)}=\sum \limits_{\left\{\mathbf{k}\in \Omega \left|{Nk}_0+{k}_1> MN-\left\lfloor {r}_{\mathrm{v}}\right\rfloor -1\right.\right\}}P\left(\mathbf{k}\right)+\sum \limits_{\left\{\mathbf{k}\in \Omega \left|{Nk}_0+{k}_1= MN-\left\lfloor {r}_{\mathrm{v}}\right\rfloor -1\right.\right\}}{p}_{\mathrm{v}}P\left(\mathbf{k}\right) $$ (7b) Forced termination probability for secondary voice users depends on whether prioritized interruption is used or not. Then, two different expressions for the forced termination probability of secondary voice users are derived. Without prioritized interruption. 
For the strategies that do not employ the prioritized interruption mechanism, secondary voice sessions are interrupted by PU users. This occurs with the probability that the PU is assigned the same channels being used by secondary voice users when the number of occupied channels is higher than (M − 1)N. Voice sessions are interrupted with the same rate that PU find more than (M − 1)N occupied channels multiplied by the probability that a SvU is using the same channel assigned to a new PU given as k 1/(k 1 + b min k 2) and divided by the rate that voice calls are accepted into the system given by \( {\lambda}_{\mathrm{v}}^{\left(\mathrm{S}\right)}\left(1-{P}_{\mathrm{b}\_\mathrm{v}}^{\left(\mathrm{S}\right)}\right) \). Then, the forced termination probability of secondary voice users is given by $$ {P}_{\mathrm{ft}\_\mathrm{v}}^{\left(\mathrm{S}\right)}=\frac{\lambda^{\left(\mathrm{P}\right)}}{\lambda_{\mathrm{v}}^{\left(\mathrm{S}\right)}\left(1-{P}_{\mathrm{b}\_\mathrm{v}}^{\left(\mathrm{S}\right)}\right)}\sum \limits_{\left\{\mathbf{k}\in \Omega \left|\begin{array}{l}{k}_0<M\cap {k}_1>0\\ {}\cap {Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2>\left(M-1\right)N\end{array}\right.\right\}}\frac{k_1}{k_1+{b}_{\mathrm{min}}{k}_2}P\left(\mathbf{k}\right) $$ With prioritized interruption. For the strategies that employ the prioritized interruption mechanism, forced termination of secondary voice users occurs when, upon the arrival of PUs, the number of occupied resources is higher than (M − 1)N and there are no active data sessions that can be interrupted. Then, the forced termination probability of secondary voice users is given by $$ {P}_{\mathrm{ft}\_\mathrm{v}}^{\left(\mathrm{S}\right)}=\frac{\lambda^{\left(\mathrm{P}\right)}}{\lambda_{\mathrm{v}}^{\left(\mathrm{S}\right)}\left(1-{P}_{\mathrm{b}\_\mathrm{v}}^{\left(\mathrm{S}\right)}\right)}\sum \limits_{\Omega \left|\begin{array}{l}{k}_0<M\cap {k}_1>0;b\left({k}_2\right)=0\\ {}\cap {Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2>\left(M-1\right)N\end{array}\right.}P\left(\mathbf{k}\right) $$ Blocking probability for secondary data users which occurs when the total number of idle sub-channels is lower than b min and the buffer is no longer accepting new sessions of SdUs: $$ {P}_{\mathrm{b}\_\mathrm{d}}^{\left(\mathrm{S}\right)}=\sum \limits_{\Omega \left|\begin{array}{l}{Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2> MN-{b}_{\mathrm{min}}\\ {}\cap {k}_2-{\kappa}_{\mathrm{a}}>\left\lfloor {Q}_{\mathrm{thr}}\right\rfloor \end{array}\right.}P\left(\mathbf{k}\right)+\sum \limits_{\Omega \left|\begin{array}{l}{Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2> MN-{b}_{\mathrm{min}}\\ {}\cap {k}_2-{\kappa}_{\mathrm{a}}=\left\lfloor {Q}_{\mathrm{thr}}\right\rfloor \end{array}\right.}\left(1-{p}_{\mathrm{q}}\right)P\left(\mathbf{k}\right) $$ Average queue length which is the sum of SdUs waiting in the queue in each state multiplied by the probability of being in that state: $$ E\left\{L\right\}=\sum \limits_{\Omega}\left({k}_2-{\kappa}_{\mathrm{a}}\right)P\left(\mathbf{k}\right) $$ where (k 2 − κ a) is the total number of queued secondary sessions (of both data and voice service sessions). 
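Putting the pieces together, the stationary probabilities P(k) are obtained by solving the balance equations jointly with the normalization condition, and the performance metrics above are then simple sums over the state space. The following sketch shows the generic pattern only (it is not the authors' code): the feasible-state enumeration and a rate(src, dst) function are assumed to be built from the expressions given earlier, and numpy is used for the linear solve. The remaining blocking probabilities follow the same summation pattern.

```python
import numpy as np

def stationary_distribution(states, rate):
    """Solve pi*Q = 0 with sum(pi) = 1 for a finite CTMC.

    states : list of feasible state tuples k = (k0, k1, k2)
    rate   : rate(src, dst) -> transition rate from state src to state dst
    """
    idx = {k: i for i, k in enumerate(states)}
    n = len(states)
    Q = np.zeros((n, n))
    for src in states:
        i = idx[src]
        for dst in states:
            if src != dst:
                Q[i, idx[dst]] = rate(src, dst)
        Q[i, i] = -Q[i].sum()                    # diagonal: minus total outgoing rate
    # replace one balance equation by the normalization equation
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return {k: float(pi[idx[k]]) for k in states}

def pu_blocking(pi, M):
    """P_b^(P): probability that all M primary bands are occupied by PUs."""
    return sum(p for (k0, k1, k2), p in pi.items() if k0 == M)

def forced_termination_voice(pi, M, N, b_min, lam_P, lam_Sv, P_bv):
    """P_ft_v^(S) without prioritized interruption, following the sum above."""
    acc = 0.0
    for (k0, k1, k2), p in pi.items():
        if (k0 < M and k1 > 0
                and N * k0 + k1 + b_min * k2 > (M - 1) * N):
            acc += k1 / (k1 + b_min * k2) * p
    return lam_P / (lam_Sv * (1.0 - P_bv)) * acc
```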
Average number of resources used by each data session $$ {B}_{\mathrm{a}\mathrm{vg}}=\frac{\sum \limits_{\Omega \left|{\kappa}_{\mathrm{a}}>0\right.}\frac{b\left({k}_2\right)}{\kappa_{\mathrm{a}}}P\left(\mathbf{k}\right)}{\sum \limits_{\Omega \left|{\kappa}_{\mathrm{a}}>0\right.}P\left(\mathbf{k}\right)} $$ All numerical results obtained for the different performance metrics are obtained analytically but for the average normalized transmission delay. The average normalized transmission delay is evaluated through simulations and is calculated as: $$ \frac{E\left\{W\right\}}{E\left\{{X}_d^{(S)}\right\}}=\frac{\sum \limits_{i=1}^n{x}_{death}^i-{x}_{born}^i-{x}_{d\_ ideal}^{\left(S,i\right)}}{nE\left\{{X}_d^{(S)}\right\}} $$ where \( {x}_{\mathrm{death}}^i \) and \( {x}_{\mathrm{born}}^i \) are the instant when the ith SdU successfully completes its service and arrives to the system respectively. On the other hand, \( {x}_{\mathrm{d}\_\mathrm{ideal}}^{\left(\mathrm{S},\mathrm{i}\right)} \) is the service time when the maximum amount of resources b max is used by a given SdU, and n is the number of data calls successfully completed. As shown in Appendix 1 of [40], the call interruption process of secondary calls due to the arrival of primary users in cognitive radio networks is not a Poissonian one. As such, the interruption probability of secondary users cannot be obtained in a straightforward manner. This is the reason why the average transmission delay of SU elastic flows was not been computed by Little's Law. For resource management strategies that allow neither new data sessions to be queued in the buffer nor the use of channel reservation for new secondary voice session requests, parameters Q thr and r v are set to zero; this is the case of strategies E1, E2, E6, and E7. Additionally, due to the fact that strategies E1 and E6 do not consider the use of spectrum adaptation b min = b max = 1. The set of feasible states Ω for the strategies E1, E2, E6, and E7 should include the condition Mk 0 + k 1 + b(k 2) ≤ MN because they do not buffer interrupted data sessions. Similarly, due to the fact that strategies E1, E2, E5, E6, and E7 do not employ the double preemptive mechanism, the transition from state k to state (k + e 1) is only possible when the total number of busy resource is less or equal to MN − 1. Thus, for strategies E1, E2, E5, E6, and E7, the arrival rate a 1(k) must be rewritten as follows: $$ {a}_1\left(\mathbf{k}\right)=\left\{\begin{array}{l}{\lambda}_{\mathrm{v}}^{\left(\mathrm{S}\right)}\kern1em ;{Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2< MN-\left\lfloor {r}_{\mathrm{v}}\right\rfloor -1\cap \kern0.48em {k}_1\ge 0\\ {}{\lambda}_{\mathrm{v}}^{\left(\mathrm{S}\right)}\left(1-{p}_{\mathrm{v}}\right)\kern0.84em ;{Nk}_0+{k}_1+{b}_{\mathrm{min}}{k}_2= MN-\left\lfloor {r}_{\mathrm{v}}\right\rfloor -1\cap \kern0.36em {k}_1\ge 0\\ {}0\kern2.5em ;\mathrm{otherwise}\end{array}\right.. $$ For strategies that do not consider either selective interruption (E1 and E2) or buffer for interrupted data sessions (E1, E2, E6, and E7), when a new PU arrival occurs and the total number of white spaces is less than N, it is imperative to interrupt enough ongoing secondary sessions (voice and/or data) in order to have at least N available channels to attend this primary arrival call. The number of channels occupied by secondary sessions that needs to be released to complete the N available resources can be computed as n = Nk 0 + k 1 + b min k 2 − (M − 1)N. 
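Returning to the delay metric defined above, its evaluation in simulation reduces to an average over successfully completed data calls. The fragment below is only a sketch (the sample format and names are invented, and this is not the authors' simulator).

```python
def normalized_delay(samples, mean_ideal_service):
    """E{W}/E{X_d^(S)}: average extra time per completed SdU call, normalized by
    the mean service time the call would need if it always held b_max channels.

    samples : iterable of (x_born, x_death, x_ideal) per successfully completed
              SdU call, as defined above.
    """
    total_extra = 0.0
    n = 0
    for x_born, x_death, x_ideal in samples:
        total_extra += (x_death - x_born) - x_ideal
        n += 1
    return total_extra / (n * mean_ideal_service)
```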
In these situations, there exists different ways of interrupting ongoing secondary (voice and/or data) sessions to obtain the n required available resources. In this case, the primary arrival event triggers the transition from state k to the state k + e 0 − x e 1 − y e 2, where x = 0, 1, …, min(n, k 1) represents the number of interrupted voice sessions while y = n − x represents the number of interrupted secondary data calls. Notice that for the incoming rate to the state k in strategies E1 and E2, x and y must be modeled as random variables due to the fact that there exist more than one possible state from which the state k can be reached. Thus, for every state under which occurs a primary session request and the number of available resources is greater than (M − 1)N, the probability that x ongoing voice secondary sessions and y ongoing data secondary sessions be interrupted is given by the following hyper-geometric distribution $$ P\left(\mathbf{k},\mathbf{X}=x,\mathbf{Y}=y\right)=\frac{\left(\begin{array}{c}{k}_1\\ {}x\end{array}\right)\left(\begin{array}{c}b\left({k}_2\right)\\ {}y\end{array}\right)}{\left(\begin{array}{c}{k}_1+b\left({k}_2\right)\\ {}n\end{array}\right)} $$ for x = 0, 1, …, min(n, k 1) and y = n − x. Thus, in order to analyze strategies E1 and E2, the transition rate c(k) involved with the transition from state k to state k + e 0 − x e 1 − y e 2 must be computed as follows. \( c\left(\mathbf{k},x,y\right)=\left\{\begin{array}{l}P\left(\mathbf{k},\mathbf{X}=x,\mathbf{Y}=y\right){\lambda}^{\left(\mathrm{P}\right)}\kern0.6em ;{k}_0<M\cap {k}_1\ge 0\cap {k}_2\ge 0\\ {}\kern10.44em \cap \left(M-1\right)N<{Nk}_0+{k}_1+{b}_{min}{k}_2\le MN\\ {}\\ {}0\kern2.5em ;\mathrm{otherwise}\end{array}\right. \). The corresponding incoming transition rate to the reference state k in this case is \( d\left(\mathbf{k},x,y\right)=\left\{\begin{array}{l}P\left(\mathbf{k},\mathbf{X}=x,\mathbf{Y}=y\right){\lambda}^{\left(\mathrm{P}\right)}\kern0.6em ;0<{k}_0\le M\cap {k}_1\ge 0\cap {k}_2\ge 0\\ {}\kern10.44em \cap \left(M-1\right)N<{Nk}_0+{k}_1+{b}_{min}{k}_2\le MN\\ {}\\ {}0\kern2.5em ;\mathrm{otherwise}\end{array}\right. \). On the other hand, due to the fact that strategies E6 and E7 employ selective interruption, the number of interrupted voice and data sessions (x and y, respectively) is deterministic quantities. Thus, in order to analyze strategies E6 and E7, P(k, X = x, Y = y) = 1 and the transition rates c(k, x, y) and d(k, x, y) equals λ (P) only when (x > 0 ∩ k 2 = y) ∪ (x = 0 ∩ k 2 = n) . 
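For the non-selective case, the hypergeometric split above can be evaluated directly. The following lines are a sketch only (the function name is invented); they mirror the formula and check that the split probabilities over all feasible x sum to one.

```python
from math import comb

def interruption_split(k1, b_k2, n, x):
    """P(X = x, Y = n - x) for a PU arrival that must reclaim n channels when
    k1 voice sessions and b(k2) data-session channels are candidates (E1/E2)."""
    y = n - x
    if x < 0 or x > min(n, k1) or y < 0 or y > b_k2:
        return 0.0
    return comb(k1, x) * comb(b_k2, y) / comb(k1 + b_k2, n)

# Sanity check: the split probabilities sum to one (here k1 = 3, b(k2) = 5, n = 4).
print(sum(interruption_split(3, 5, 4, x) for x in range(5)))
```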
Finally, the global set of balance equations for strategies E1, E2, E6, and E7 can be constructed as follows: $$ {\displaystyle \begin{array}{l}\left[\sum \limits_{i=0}^2{a}_i\left(\mathbf{k}\right)+\sum \limits_{i=0}^2{b}_i\left(\mathbf{k}\right)+\sum \limits_{x=0}^{\min \left(n,{k}_1\right)}c\left(\mathbf{k},x,y\right)\right]P\left(\mathbf{k}\right)=\sum \limits_{i=0}^2{a}_i\left(\mathbf{k}-{\mathbf{e}}_i\right)P\left(\mathbf{k}-{\mathbf{e}}_i\right)+\\ {}\sum \limits_{i=0}^2{b}_i\left(\mathbf{k}+{\mathbf{e}}_i\right)P\left(\mathbf{k}+{\mathbf{e}}_i\right)+\sum \limits_{x=1}^Nd\left(\mathbf{k}-{\mathbf{e}}_0+x{\mathbf{e}}_1+\left(n-x\right){\mathbf{e}}_2,x,y\right)P\left(\mathbf{k}-{\mathbf{e}}_0+x{\mathbf{e}}_1+\left(n-x\right){\mathbf{e}}_2\right).\\ {}\end{array}} $$ Numerical results The goal of the numerical evaluations presented in this section is to verify the applicability as well as the accuracy and robustness of the mathematical model developed in Section 4 to investigate the performance of spectrum adaptation strategies for CRN with heterogeneous traffic described in previous sections. The performance of the studied flexible resource allocation (FRA) strategies is evaluated in terms of the maximum Erlang capacity of the CRN (defined as the maximum offered traffic load of secondary users for which QoS requirements are guaranteed in terms of blocking probability of new voice and data calls and forced termination probability of secondary voice calls) [35]. In order to calculate the Erlang capacity of the CR system, the maximum acceptable value of the new call blocking probability for voice and data secondary traffic and forced termination probability of voice secondary calls equals 2%. In this work, a bound on the transmission delay of SU elastic flows to determine the Erlang capacity is not established since it is assumed that secondary data users have a type of service where data integrity is much more important than delay. These users can experience long queuing delay without degrading their QoS. Additionally, in this work we compare the performance of strategies where secondary data sessions experience forced termination (i.e., E1, E2, E6, and E7) against that of strategies where secondary data sessions do not experience forced termination (i.e., E3, E4, and E5). As such, strategies E3, E4, and E5 entail higher transmission delays due to the fact that no data sessions of secondary users are forced to terminate. Then, for these reasons, establishing a bound on the transmission delay of SU elastic flows to determine the Erlang capacity would yield both not suitable performance evaluation of the system (because secondary data sessions do not have delay requirement) and unfair performance comparison among the different resource allocation strategies (because some strategies guarantee the reliability of the delivery of information at the expense of transmission delay and others not). It is important to note that the forced termination probability for SdUs under the strategies E3, E4, and E5 is equal to zero due to the fact that the interrupted data calls are not forced to terminate their transmissions, instead preempted data calls are queued into the buffer. For strategies E3 and E4, maximum Erlang capacity is achieved by optimizing the control parameters Q thr and r v, respectively. Maximum Erlang capacity for E4 is obtained by optimizing the number of reserved channels to prioritize new voice call attempts over data call requests. 
The optimal value of the number of reserved channels r v is systematically searched by using the fact that the new call blocking (forced termination) probability for SvUs is a monotonically increasing (decreasing) function of the number of reserved channels. Once a value of the number of reserved channels is found for which the new call blocking probability for SvUs and/or SdUs and/or the forced termination probability for SvUs reaches its maximum acceptable value, another offered traffic load is tested. The capacity maximization procedure ends when the new call blocking probabilities for SvUs and SdUs and the forced call termination probability for SvUs reach their maximum acceptable values. A similar procedure is followed for the calculation of the threshold Q thr under strategy E3. The threshold Q thr is systematically searched by using the fact that the new call blocking probability for SdUs is a monotonically decreasing function of the threshold Q thr. For strategies E1, E2, E6, and E7, the maximum value of the traffic load is systematically searched by using the fact that, for these strategies, the new call blocking and forced termination probabilities for SvUs and SdUs are monotonically increasing functions of the offered traffic. Thus, different values of the traffic load are tested (using, for example, the well-known bisection algorithm). The maximum traffic load (Erlang capacity) is found when at least one of the performance metrics reaches its maximum acceptable value and the others are below their respective maximum acceptable values. Unless otherwise specified, the following system and teletraffic parameters are employed in this section: total number of primary bands M = 6; number of channels per primary band N = 3; minimum and maximum number of channels required for SdU calls (elastic traffic) b min = 1 and b max = 3; the number of channels required for SvU calls (real-time traffic) is fixed to 1; of course, a fixed number of N channels is required by any primary call. We use as reference the evaluation scenario considered in [10], where the mean service times for PUs, SvUs, and SdUs are 1/μ (P) = 1/0.5, 1/μ v (S) = 1/0.6, and 1/μ d (S) = 1/0.82, respectively, and the proportion of voice traffic is f v = 0.4767. Figures 6 and 7, respectively, show the maximum Erlang capacity and the normalized mean transmission delay as a function of the utilization factor of primary channels (defined as the ratio between the primary carried load and the total number of primary channels). From Fig. 6, it is observed that, for all the analyzed spectrum adaptation strategies, as the utilization factor of primary channels (hereafter denoted by ρ) increases, the Erlang capacity decreases. This is an expected result: as the primary traffic load increases, more secondary calls are interrupted, to the detriment of system performance. Figure 6 shows that, for values of ρ less than about 0.2, our reference strategy (E3) diminishes this effect compared to the other strategies. On the other hand, the improvement due to the use of spectrum adaptation can be quantitatively and qualitatively assessed from Fig. 6 by comparing strategy E2 against strategy E1. For instance, for a value of ρ = 0.2, the Erlang capacity of strategy E2 is 30.5% higher than that achieved by strategy E1.
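The search over the offered load described above can be written compactly. The sketch below is illustrative only (not the authors' code): it assumes a metrics_at(load) wrapper around the analytical model that returns the relevant probabilities, and it applies bisection under the monotonicity property stated for strategies E1, E2, E6, and E7.

```python
def erlang_capacity(metrics_at, qos_max=0.02, lo=0.0, hi=50.0, tol=1e-3):
    """Largest offered secondary load (Erlangs) whose QoS metrics all stay
    at or below qos_max; assumes every metric is non-decreasing in the load.

    metrics_at : callable mapping an offered load to the tuple of relevant
                 probabilities (SvU/SdU new-call blocking, SvU forced termination).
    """
    def feasible(load):
        return all(m <= qos_max for m in metrics_at(load))

    if not feasible(lo):
        return lo                      # even the lowest tested load violates QoS
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For strategies E3 and E4, the same feasibility test is embedded in an outer search over Q thr or r v, respectively, exploiting the monotonicity properties stated above.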
Also, the capacity improvement due to the joint use of new data call buffering and the double preemptive mechanism can be obtained by comparing strategy E3 against strategy E5. For instance, for a value of ρ = 0.2, the Erlang capacity of strategy E3 is 70.2% higher than that achieved by strategy E5. In general, Fig. 6 shows that, for values of ρ smaller than about 0.2, our reference E3 strategy considerably outperforms all the other strategies. For instance, for a value of ρ = 0.2, the Erlang capacity of strategy E3 is 60% (72%) higher than that achieved by strategy E2 (E4). This is mainly due to the use of a buffer for new secondary data calls; this buffer allows strategy E3 (compared against strategies E2 and E4) to exploit the elasticity of data traffic more efficiently for the benefit of system capacity. Figure 7 shows that this capacity gain is achieved at the expense of increasing the mean transmission delay of data (elastic) traffic. Additionally, under strategy E3 (and its variants, strategies E4 and E5), the forced termination probability of data calls is equal to zero (i.e., the integrity of accepted data traffic is guaranteed) due to the use of a buffer for interrupted secondary data calls. This feature is especially useful in heterogeneous CRNs where the data traffic corresponds to background services or interactive applications with relaxed delay constraints. Notice from Fig. 7 that, under strategy E3, the normalized mean transmission delay is always lower than 10. Notice also from Fig. 7 that the mean transmission delay of SUs decreases with the primary traffic load; this behavior may be perceived as counterintuitive, since the opposite effect would be expected; however, it can be explained as follows. First of all, the behavior observed in Fig. 7 has to be interpreted in conjunction with the results presented in Fig. 6. Indeed, Fig. 6 shows that the Erlang capacity of the cognitive radio system decreases as the PU traffic load increases. Recall that this Erlang capacity corresponds to the maximum offered traffic load of secondary users for which the QoS requirements are guaranteed in terms of the blocking probability of new voice and data calls and the forced termination probability of secondary voice calls. Hence, as the PU traffic load increases, fewer new SUs are admitted to the network, which in turn decreases the resource occupation. As such, the transmission delay is reduced as the PU traffic load increases. Note that Fig. 7 shows numerical results obtained when the system operates under the traffic load conditions of Fig. 6. Figure 8 shows the forced termination probability of data traffic as a function of ρ for strategies E1 and E2 and their variants E6 and E7. Notice that, due to the use of selective interruption, strategy E6 (E7) achieves a higher forced termination probability than strategy E1 (E2).
Fig. 6 Maximum Erlang capacity as a function of ρ, for f v = 0.4767
Fig. 7 Normalized mean transmission delay as a function of ρ, for f v = 0.4767
Fig. 8 Forced termination probability of data traffic as a function of ρ, for f v = 0.4767
On the other hand, for values of ρ greater than 0.2, Fig. 6 shows that strategy E3 achieves less Erlang capacity than most of the analyzed strategies; moreover, strategies E4 and E6 are the ones with the slowest decreasing rates of Erlang capacity as the utilization factor of primary channels increases.
This positive behavior of strategy E4 is achieved by the use of channel reservation and by precluding the use of a buffer for new data calls (that is, strategy E4 interchange capacity by blocking probability of new data calls). Similarly, as Fig. 8 shows, the beneficial behavior of strategy E6 regarding Erlang capacity is paid with forced termination probability of data calls (due to the use of selective interruption). Thus, channel reservation mechanism and avoiding the use of queue for buffering new data calls are relevant aspects for improving system performance in scenarios with high primary traffic load. By comparing in Fig. 6 the plots that correspond to the strategies E4 and E5, it is observed that the performance of both strategies is practically the same for low to moderate primary traffic load (ρ < 0.25). This behavior indicates that, in these scenarios, the double preemptive and channel reservation mechanisms do not contribute to improve system Erlang capacity. On the other hand, and for scenarios where ρ < 0.2, by comparing strategy E4 against strategy E3, it is observed from Fig. 6 that strategy E3 considerably improves system Erlang capacity. This behavior indicates that buffering new (blocked) secondary calls is an important mechanism to improve system capacity under low primary traffic load. Similarly, for scenarios where ρ < 0.2, by comparing strategies E1 against E6 (and E2 against E7), it is observed from Fig. 6 that their performance are practically the same. This behavior indicates that the capacity improvement due to the use of selective interruption is negligible. Finally, Table 4 shows the maximum percentage on Erlang capacity gain given for every single considered resource management mechanism when M = 6, N = 3, f v = 0.4767, b max = 3, and a maximum value of ρ = 0.23 is considered. From Table 4, and the previous discussion of this section, it is concluded that the most relevant mechanisms that improve system performance in CRNs with heterogeneous traffic are spectrum adaptation and buffering new data calls. Table 4 Maximum percentage on Erlang capacity gain given for every single considered resource management mechanism when M = 6, N = 3, f v = 0.4767, b max = 3 and, a maximum value of ρ = 0.23 Figures 9, 10 and 11, respectively, show numerical results for new call blocking probability of voice SUs, new call blocking probability of data SUs, and forced termination probability of voice SUs as function of the utilization factor of primary channels. For every value of the utilization factor of primary channels considered in Figs. 9, 10, 11, the performance of each resource management strategy is evaluated under its optimal operating configuration (that is, the configuration that maximizes Erlang capacity). In this sense, for every value of the utilization factor of primary channels shown in Figs. 9, 10, 11, the corresponding value of the secondary traffic load shown in Fig. 7 is considered. In Figs. 9, 10 and 11, label "A" ("S") stands for analytical (simulation) results. From Figs. 9, 10 and 11, perfect agreement between analytical and simulation results is observed; this verifies the accuracy of the mathematical model developed in Section 4. From Fig. 9 (Fig. 10), it is observed that, for values of the utilization factor of primary channels smaller than 0.2, the secondary blocking probability for voice (data) users attained for strategies E4 and E5 (strategy E3) reaches its maximum allowable value. 
This means that, for values of the utilization factor of primary channels smaller than 0.2, the secondary blocking probability is the metric that limits the Erlang capacity. Similarly, from Fig. 11, it is observed that the secondary forced termination probability for voice users attained by the different resource management strategies reaches its threshold for values of the utilization factor of primary channels greater than 0.2. This means that, under these scenarios, the secondary forced termination probability is the metric that limits the Erlang capacity.
Fig. 9 New call blocking probability of secondary voice users for the corresponding optimal configuration of each strategy as a function of the primary channel utilization factor
Fig. 10 New call blocking probability of secondary data users for the corresponding optimal configuration of each strategy as a function of the primary channel utilization factor
Fig. 11 Forced termination probability of secondary voice users for the corresponding optimal configuration of each strategy as a function of the primary channel utilization factor
Conclusions
The Erlang capacity achieved by the separate or joint use of different resource management mechanisms commonly considered in the literature to mitigate the effects of secondary call interruptions in cognitive radio networks (CRNs) with heterogeneous traffic was evaluated and compared. Novel adaptive spectrum allocation strategies that jointly employ different resource management techniques to take advantage of the flexibility and delay tolerance of elastic traffic in the considered networks were analyzed and evaluated. To this end, a general teletraffic analysis was developed. The mathematical model considers spectrum handoff, channel reservation, elastic-traffic buffering, spectrum adaptation, and non-homogeneous channels. Because the proposed analysis reduces the number of state variables employed in [10], this is a major contribution of this work. The performance of the considered system was evaluated in terms of the maximum Erlang capacity. Numerical results show that call buffering and spectrum adaptation are the mechanisms that best exploit the elasticity of delay-tolerant traffic in heterogeneous traffic CRNs and, therefore, most significantly improve their performance. It is also concluded that channel reservation for ongoing secondary calls allows a slower decreasing rate of Erlang capacity as the utilization factor of primary channels increases. Finally, our numerical results show that the Erlang capacity of delay-sensitive secondary users is very low even for reduced primary traffic loads (such that the blocking probability is lower than 1%). As previously commented, the most effective mechanism to guarantee QoS for real-time and interactive applications in CRNs is to reserve a spectrum band for the exclusive use of the secondary network (CCRNs) [3, 5,6,7,8]. The use of spectrum partitioning is not considered in this work; however, the extension of the resource management mechanisms considered in this paper to CCRNs is straightforward and is a subject of future research work. In the context of cellular systems, spectrum adaptation is equivalent to the Flexible Resource Allocation (FRA) concept, where resource compensation and degradation mechanisms are employed [39, 41, 42]. In the context of the LTE-Advanced standard, spectrum adaptation is analogous to the concept of (intra-band) spectrum aggregation [24]. It is important to comment that, initially, we did include channel reservation in this reference strategy.
However, we noticed that the simultaneous use of channel reservation and limiting the number of secondary data users to the system has a redundant effect. Note that the value of Q thr ranges from 0 up to ⌊MN/b min⌋. Then, the assumption that the buffer size is long enough to guarantee that any interrupted secondary data call can be accommodated into the buffer is practical in reality. YC Liang, KC Chen, GY Li, P Mähönen, Cognitive radio networking and communications: an overview. IEEE Trans. Veh. Technol. 60(7), 3386–3407 (2011) M. M. Buddhikot, Cognitive radio, DSA and Self-X: toward next transformation in cellular networks, in Proc. IEEE DySPAN 2010, Singapore, 6–9 Apr. 2010. X. Mao, H. Ji, V. C. M. Leung, M. Li, Performance enhancement for unlicensed users in coordinated cognitive radio networks via channel reservation, in Proc. IEEE GLOBECOM'2010, Miami, FL., 6–10 Dec. 2010. J. Sachs, I. Maric, and A. Goldsmith, Cognitive cellular systems within the TV spectrum, Proc. IEEE Symp. New Frontiers in Dynamic Spectrum (DySPAN'2010), Singapore, 6–9 Apr. 2010. G. Liu. X. Zhu, and, L. Hanzo, Dynamic spectrum sharing models for cognitive radio aided ad hoc networks and their performance analysis, Proc. IEEE GLOBECOM'2011, Houston, Texas, USA, 5–9 Dec. 2011. SK Jayaweera, G Vazquez-Vilar, C Mosquera, Dynamic spectrum leasing: a new paradigm for spectrum sharing in cognitive radio networks. IEEE Trans. on Veh. Technol. 59(5), 2328–2339 (2010) M. A. Ramirez-Reyna, F. A. Cruz-Perez, M. E. Rivero-Angeles, and G. Hernandez-Valdez, Dynamic Spectrum Leasing Strategies for Coordinated Cognitive Radio Networks with Delay-Tolerant Traffic, IEEE 25th Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2014), Washington, DC USA, 2–5 Sept. 2014. M. A. Ramirez-Reyna, F. A. Cruz-Perez, M. E. Rivero-Angeles, and G. Hernandez-Valdez, Performance Analysis of Dynamic Spectrum Leasing Strategies in Overlay Cognitive Radio Networks, Proc. 80th IEEE Vehicular Technology Conference (VTC2014-Fall), Vancouver, Canada, 14–17 Sept. 2014. SL Castellanos-López, FA Cruz-Pérez, ME Rivero-Ángeles, G Hernández-Valdez, Performance analysis of coordinated cognitive radio networks under fixed-rate traffic with hard delay constraints. J. Commun. Netw. Spec Issue Cogn Networking 16(2), 130–139 (2014) L Jiao, FY Li, V Pla, Modeling and performance analysis of channel assembling in multichannel cognitive radio networks with spectrum adaptation. IEEE Trans. on Veh. Technol. 61(6), 2686–2697 (2012) T Jiang, H Wang, S Leng, Channel allocation and reallocation for cognitive radio networks. Wirel. Commun., Mob. Comput. 13(12), 1073–1081 (2013) J. Martínez-Bauset, A. Popescu, V. Pla, and A. Popescu, Cognitive radio networks with elastic traffic, in Proc. 8th Euro-NF Conference on Next Generation Internet (NGI), Karlskrona, Sweden, 25–27 June 2012. L. Jiao, F. Y. Li, V. Pla, Greedy versus Dynamic Channel Aggregation Strategy in CRNs: Markov Models and Performance Evaluation. In: Casares-Giner V., Manzoni P., Pont A. (eds) NETWORKING 2011 Workshops. NETWORKING 2011. Lecture Notes in Computer Science, vol 6827. Springer, Berlin, Heidelberg. L. Yu, T. Jiang, P. Guo, Y. Cao, D. Qu, and P. Gao, Improving achievable traffic load of secondary users under GoS constraints in cognitive wireless networks, in Proc. IEEE Globecom'2011, Houston, Texas, USA, 5–9 Dec. 2011. J Mar, HC Nien, J-C Cheng, Intelligent data control in cognitive mobile heterogeneous networks. IEICE Trans. 
Commun E95-B(4), 1161–1169 (2012) Y Konishi, H Masuyama, S Kasahara, Y Takahashi, Performance analysis of dynamic spectrum handoff scheme with variable bandwidth demand of secondary users for cognitive radio networks. J Wirel. Netw 19(5), 607–617 (2013) J Lee, J So, Channel aggregation schemes for cognitive radio networks. IEICE Trans. Commun. E95-B(5), 1802–1809 (2012) L. Li, S. Zhang, K. Wang, and W. Zhou, Combined channel aggregation and fragmentation strategy in cognitive radio networks, CoRR abs/1203.4913: (2012). T. M. N. Ngatched, S. Dong, A. S. Alfa, and J. Cai, Performance analysis of cognitive radio networks with channel assembling and imperfect sensing, in Proc. IEEE ICC'2012, Ottawa, Canada, 10–15 June 2012 S. M. Kannappa and M. Saquid, Performance analysis of a cognitive network with dynamic spectrum assignment to secondary users, in Proc. IEEE ICC'2010, Cape Town, South Africa, 23–27 May 2010. L. Jiao, F. Y. Li, and V. Pla, Dynamic channel aggregation strategies in cognitive radio networks with spectrum adaptation, in Proc. IEEE GLOBECOM'2011, Houston, Texas, USA, 5–9 Dec. 2011. L. Jiao, V. Pla, and F. Y. Li, Analysis on channel bonding/aggregation for multi-channel cognitive radio networks, in Proc. European Wireless Conference (EW'2010), Tuscany, Italy, 12–15 Apr. 2010. J. Lee and J. So, Analysis of cognitive radio networks with channel aggregation, in Proc. IEEE WCNC'2010, Sydney, Australia, 18-210April 2010. H Lee, S Vahid, K Moessner, A survey of radio resource management for spectrum aggregation in LTE-advanced. IEEE Commun. Surv. Tutorials 16(2), 745–760 (2014) S. L. Castellanos-Lopez, F. A. Cruz-Perez, and G. Hernandez-Valdez, Channel reservation in cognitive radio networks with the RESTART retransmission strategy, in Proc. CROWNCOM 2012, Stockholm, Sweden, 18–20 June 2012. X Zhu, L Shen, TSP Yum, Analysis of cognitive radio spectrum access with optimal channel reservation. IEEE Commun. Lett 11(4), 304–306 (2007) J. Lai, R. P. Liu, E. Dutkiewicz, and R. Vesilo, Optimal channel reservation in cooperative cognitive radio networks, in Proc. IEEE 73rd VTC'2011-Spring, Budapest, Hungary, 15–18 May 2011. H Ahmed, SA AlQahtani, Performance evaluation of joint admission and eviction controls of secondary users in cognitive radio networks. Arab. J. Sci. Eng. 40(12), 3469–3481 (2015) Y. Zhang, Dynamic spectrum access in cognitive radio wireless networks, in Proc. IEEE ICC'2008, Beijing, China, 19–23 May 2008. J Zhou, CC Beard, A controlled preemption scheme for emergency applications in cellular networks. IEEE Trans. on Veh. Technol. 58(7), 3753–3764 (2009) JL Vázquez-Avila, FA Cruz-Pérez, L Ortigoza-Guerrero, Performance analysis of fractional guard channel policies in mobile cellular networks. IEEE Trans. on Wireless Commun. 5(2), 301–305 (2006) H Wang, NB Mandayam, Opportunistic file transfer over a fading channel under energy and delay constraints. IEEE Trans. on Commun. 53(4), 632–644 (2005) IAM Balapuwaduge, L Jiao, V Pla, FY Li, Channel assembling with priority-based queues in cognitive radio networks: strategies and performance evaluation. IEEE Tran. Wireless Commun 13(2), 630–645 (2014) S.M. Kannappa, T. Ali, and M. Saquib., Analysis of a buffered cognitive wireless network with dynamic spectrum assignment, in Proc. IEEE International Conference on Communications (ICC'2014), Sydney, Australia, 10–14 June 2014. 
SL Castellanos-López, FA Cruz-Pérez, ME Rivero-Ángeles, G Hernández-Valdez, Joint connection level and packet level analysis of cognitive radio networks with VoIP traffic. IEEE J. Sel. Areas Commun. 32(3), 601–614 (2014) M.A. Ramírez-Reyna, F.A. Cruz-Pérez, S.L. Castellanos-López, G. Hernández-Valdez, and M.E. Rivero-Angeles, Analysis of spectrum adaptation and spectrum leasing in heterogeneous traffic cognitive radio networks, 12th International IEEE Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2016), New York, NY, 17–19 October, 2016. RP Murillo-Perez, CB Rodriguez-Estrello, FA Cruz-Perez, Call admission control with fractional buffer size. IEICE Trans. on Commun. 95(9), 2972–2975 (2012) S. L. Castellanos-Lopez, F. A. Cruz-Perez, and G. Hernandez-Valdez, Performance of Cognitive Radio Networks under Resume and Restart Retransmission Strategies, 7th IEEE International Conference on Wireless and Mobile Computing Networking, and Communications (WiMob 2011), Shanghai, China, 10–12 October 2011. FA Cruz-Pérez, L Ortigoza-Guerrero, Flexible resource allocation strategies for class-based QoS provisioning in mobile networks. IEEE Trans. Veh. Technol. 53(3), 805–819 (2004) A.L.E. Corral-Ruiz, F.A. Cruz-Perez, S.L. Castellanos-Lopez, G. Hernandez-Valdez, and M.E. Rivero-Angeles, Modeling and performance analysis for mobile cognitive radio cellular networks, J Wireless Com Network, 2017: 159. https://doi.org/10.1186/s13638-017-0940-1. L Ortigoza-Guerrero, FA Cruz-Pérez, H Heredia-Ureta, Call level performance analysis for multi-services wireless cellular networks with adaptive resource allocation strategies. IEEE Trans. Veh. Technol. 54(4), 1455–1472 (2005) S Tang, W Li, An adaptive bandwidth allocation scheme with preemptive priority for integrated voice/data mobile networks. IEEE Trans. Wireless Commun. 5(10), 2874–2886 (2006) PRODEP Contract No.916040 provides us resources to acquire the computing equipment for the numerical evaluation of the studied systems. Electronics Department, UAM-A, Mexico City, Mexico S. Lirio Castellanos-Lopez & Genaro Hernandez-Valdez Electrical Engineering Department, CINVESTAV-IPN, Mexico City, Mexico Felipe A. Cruz-Pérez Communication Networks Laboratory, CIC-Instituto Politécnico Nacional, Mexico City, Mexico Mario E. Rivero-Angeles Search for S. Lirio Castellanos-Lopez in: Search for Felipe A. Cruz-Pérez in: Search for Genaro Hernandez-Valdez in: Search for Mario E. Rivero-Angeles in: SLCL developed and proposed the adaptive spectrum allocation strategies to mitigate the effects of secondary call interruptions in cognitive radio networks (CRNs). She also participated in the mathematical analysis derived in this work and the numerical results obtained to evaluate the performance of the system. FACP developed the mathematical model and Markov chains used to evaluate the proposed mechanism. Also, he identified the most relevant resource management mechanisms to improve the performance of the considered networks and proposed, analyzed, and modeled the optimization of procedures to maximize the achievable Erlang capacity. GHV was involved in the development of the mathematical model as well as identifying, analyzing, and studying the main resource management mechanisms commonly considered in the literature (spectrum aggregation, spectrum adaptation, call buffering, channel reservation, selective interruption, and preemptive prioritization) to mitigate the effects of secondary call interruptions in cognitive radio networks (CRNs). 
MERA was involved in the mathematical analysis as well as the evaluation of the studied strategies in heterogeneous traffic CRNs. All authors read and approved the final manuscript. Correspondence to S. Lirio Castellanos-Lopez. Castellanos-Lopez, S.L., Cruz-Pérez, F.A., Hernandez-Valdez, G. et al. Analysis and performance evaluation of resource management mechanisms in heterogeneous traffic cognitive radio networks. J Wireless Com Network 2017, 218 (2017) doi:10.1186/s13638-017-1003-3 Spectrum aggregation Spectrum adaptation Preemptive priority Heterogeneous traffic Elastic traffic Call buffering Transmission delay
Distributed Computing, Refrigerator Art, Overclockers, and the wonderful BOINC explosion all over Planet Earth! Why yes, click, certainly. What follows is about a quite remarkable quantum explosion in Distributed Computing B erkeley [University of California @ Berkeley] O pen I nfrastructure for N etwork C omputing which is a system, and general-purpose software, for rapidly distributing all sorts of computation-intensive problems (in physics, meteorology, pure mathematics, chemistry, and many other scientific fields) to tens of thousands of ordinary Personal Computers all over the world, automatically linked via the Internet. SETI@Home -- a computerized sifting through vast volumes of received radio signals from outer space to identify the first message sent to Us from Extraterrestrial Intelligence -- was the first problem to use the BOINC system and software, but BOINC quickly "opened up" to dozens of fascinating and scientifically important new projects, and huge numbers of PC users have enthusiastically volunteered to donate unused CPU (Central Processing Unit -- the computer's Brain Chip) time to solving these problems. So anyway, the BOINC site was asking for a new Logo and other images, so I cooked up this one, and sent it to BOINC's Big Cheese at U-Cal Berkeley. Here's some stuff about the Intersection of Science, Art, and an amazing new way to cure Teenagers of their Addiction to meaningless mega-violent Computer Games -- while finding a cure for Multiple Sclerosis. Distributed Computing is one reason I don't completely chuck Planet Earth and move to Planet Vleeptron permanently. For a real dose of the Weltschmertz, compare what Earth People are doing in Iran and Iraq and Afghanistan and Guantanamo, with what thousands of other Earth People are doing on the Internet with Distributed Computing and BOINC. There are two reasons why this is in the style of Refrigerator Art. The first of course is that I am a crappy artist and stick figures is about as close to Life Study as I get. (A very undernourished young woman comes over once a week to pose for me, and when we're done, I make her eat a big pastrami sandwich.) But as I screwed around with it, it dawned on me that when very little children see it on some adult's wall, they'll wonder why adults made and display Refrigerator Art, and they'll wonder what the image is trying to tell children and adults. Here I should mention a very odd thing about this BOINC / Distributed Computing thing. All over the world, the hundreds of thousands of computer users who have linked up their unused computing power to solve dozens of important problems in medicine, chemistry, physics, pure math, are roughly split into two very different kinds of people. The first are those drawn to Distributed Computing by scientific curiosity (even though you don't have to know squat about Science or Math to link up, you just have to know how to link up, which is about as difficult as clicking to get this e-mail). But the other kind of D.C. enthusiast, these are technically called "overclockers," and they're in it for the Computer Game challenge, exactly like TCJ has the all-time high score on the Video Game at the pizzeria. Overclockers spend all their money or max out the credit card on super-fancy computer Hardware, a gazillion gigs of RAM, the hottest CPU chip on the market, and tweak their computer's guts to make the system clock run much faster than the manufacturer recommends -- they are the Hot Rodders and Drag Racers of the Information Superhighway. 
All these arcane scientific D.P. projects reward their volunteers with Stats -- points for all the hours of computation a volunteer's computer has contributed to the project. And the Overclockers, who couldn't give a rat's ass whether or not the project cures Alzheimer's or solves the Extended Riemann Hypothesis, all they want is Stats Stats and bragging rights to the massive amounts of Stats they've accumulated. NEWS FLASH: This month Sony PlayStation 3 announced that it will automatically display the Folding@Home icon on player screens, and make it click-click easy for every PS3 owner to contribute to the Folding project (and amass mega-Stats). Finding the Cure for Cancer just took a Big Worldwide Power Leap. All over the Web there are thousands of pages which barely mention the Science behind these projects (one may cure or eradicate malaria) or the Scientific aims of these projects, but are just long, long lists of Overclockers -- ordered High to Low -- and their various Stats. There are ferociously competitive national teams -- the Dutch Power Cows, the Scottish BOINC Team, BOINC.FR.Net, etc. The Quest to hose up mega-Stats is so fierce that every Project now lists as its First Commandment: Run this software on your own computer only, or obtain written permission to run this software on any computer you do not own. ... because they're always catching college students or insurance company computer geeks stealing Big Time from the boss's or the university's Big Mainframe to hose up the mega-Stats. It's an authentic Variety of Theft, and sometimes the kids wind up (no photos available) in embarrassing little crime stories on Page 16. They never actually Go To Jail, but if they did, and their new cellmate asked, "What're you in for?", they'd have to say, "I stole 9000 hours of CPU time from my boss's mainframe to accumulate Stats for a Distributed Computing competition ..." But Overclocker Culture means that lots of these freakazoids get addicted to Distributed Computing at the same age kids get addicted to ordinary Computer Games -- frighteningly young ages. But unlike "Saturn Marauders vs. Mechazoidz III" these BOINC addicts can actually boast that they are indeed finding the cures for diseases and solving profoundly significant questions in the sciences. (The Dutch Power Cows have won an important national award for their efforts. BOINC itself, at UC-Berkeley, runs on National Science Foundation grants, and a million volunteers.) So the Refrigerator Art is intended to make a nine-year-old gamester ask questions and learn about a whole new world of Games -- just as challenging, just as competitive -- but which the Scientific World considers not as games at all, but as fundamental scientific research. [In] V.1 all the D.C. stick people were Caucasian, and all were sitting Rightside-Up as we naturally do in the Northern Hemisphere, so in V.2 I have made some of the people Green and Purple and Non-White, and turned half the people Upside-Down, the way people compute in the Southern Hemisphere. BOINC and D.C. are Worldwide Endeavors. (The Folding@Home map shows one Folder in the holy city of Mashhad and a handful of Folders in Tehran in Iran. (See map at top.) They help find a cure for genetic diseases, the West returns the favor by contemplating dropping The Big One on their evil heads.) Well, I know my Weaknesses -- grievous and multiple -- as an Artist, and so I play to what pathetic strengths I possess. 
This is probably the first piece I've done in which Image and Visual Information outweigh Text and Typography. It pleases me deeply that there are Clever People -- and even insane teenage Overclockers and insurance clerks who steal their boss's CPU time the way I used to steal stationery and office supplies -- who, in This World 2007, same Planet, right Here, right Now, are having So Much Fucking Fun doing So Much Good. Who ever imagined Doing Good could be So Much Fun? Certainly not me. I've had the Fun, but very little of it cured cancer. Dr. David P. Anderson U.C. Berkeley Space Sciences Laboratory Dear Dr. Anderson, BOINC's "Logos and Graphics" page wasn't clear about whom to send submissions to, so please excuse this inbox invasion. Nevertheless I hope this bit of Refrigerator Art gives you some pleasure, and might be of use to BOINC. I've been a distributed computing volunteer since soon after G.I.M.P.S. started, and now host Folding@Home. I've been amazed and fascinated at the ways such a magnificent idea has erupted and blossomed throughout the world. Two years ago I noticed two clusters of host volunteers on the Folding@Home map of Iran, one in Tehran and the other in the big university city in the northeast. As already bad relations between the West and Iran have degenerated to frightening depths, distributed computing invites thoughtful women, boys, men, girls everywhere to ignore the blunders of governments and discover and expand knowledge to help Earth and enrich all who sail through space on her. BOINC is something new under the Sun, and it rocks. I'm going to wear its t-shirt, and I wish BOINC and all who make it work the huge success it deserves. Bob Merkin Northampton Massachusetts USA Posted by Vleeptron Dude at 5:17 AM 3 comments: Links to this post PIZZAQ: "Mark and Beverly Escaping from the Shopping Mall" / Cincinnati Museum of Forgotten Cliches Clicking should do good things. Okay, here's a big, clear shot of the painting on the wall at my grand-niece (?) Anya Rose's birthday party. For the Pizza, ya need to figure out the Artist and the Name of the Painting. When I saw it on the party photo, I realized the painting was so familiar/famous that I must have seen it 5000 times. But I had no idea who painted it or what it was. It's not Andy Warhol. Posted by Vleeptron Dude at 3:15 PM 5 comments: Links to this post 1st Day Issue / Tierra de los Suen~os / TdSPosta / en famille avec des amies @ Anya Rose's 1st Birthday Of course you should click. For a very important date! No time to say Hello I'm late! -- the White Rabbit Disney's "Alice in Wonderland" Cooking meatless spaghetti sauce (somebody else is making meatballs) for 24! (I don't mean 24-factorial, I'm just excited.) A rare photographic sighting of Bob, Bob's sister Margie, Cynthia ( = S.W.M.B.O.) and a new friend Amy, all at my sister's son's and daughter-in-law's daughter Anya Rose's 1st birthday last Saturday in Vermont USA. Because he was one of 3 brothers, my nephew (not shown) began life (in my head) as either Huey, Louie or Dewey, but later became (in my head) Utopia Guy (with his brothers Ice Cube and Mush Boy). His RL name is Jonathan and his lovely and witty wife is Wendy. WHAT THE HECK IS THE PAINTING ON THE WALL??? ----- 3 slices PIZZAQ: my neighbor measures the time it takes for his radar to reach la Lune and return to his backyard clicking makes it prettier but not more helpful 4 slices with pepperoni, wild mushrooms and shallots: My amateur radio neighbor aims his radar dish at the Moon. 
His radar signal bounces off the Moon, and his parabolic dish receives the echo 2.6367241033 seconds later. At that moment, how far is my neighbor's backyard from the surface of the Moon? Please give your Answer in kilometers. VLEEPTRON UPDATE: Progress on the Travelling Easter Bunny Problem (a 13-Node Euclidean/Planar Traveling Salesperson Problem) Click for larger, clearer. But what are all such gaieties to me Whose thoughts are full of indices and surds? x² + 7x + 53 = 11/3 -- Lewis Carroll Like Yo -- (I have to stop hanging on IRC, chatting with Youth is degenerating my language skills.) I'd already solved the 7-Node Santa Problem when the mysterious mathematical mystic ramanuJohn submitted his totally unexpected and awesomely correct answer. The Travelling Santa Problem was a go-to-kitchen-get-a-cup-of-coffee run. The Travelling Easter Bunny Problem is a little less speedy. I have a very robust core program ("very robust" means it hasn't blown up so far), and attached please find a whilst-sleeping run. Very roughly, 7 hours = 1.05 % of all 13! paths so even more roughly, the whole job should take an eerily Satanic 666 hours, or about 28 days. In the New Yorker article about the Chudnovsky Brothers' world-record-breaking pi expansion (billion+ digits) with their homebrew mail-order supercomputer (cooled by hardware-store fans purchased in winter, at their cheapest) in their Spanish Harlem slum down the block from the sidewalk corpse, one top-tier professor told the reporter that all mathematics is essentially about aesthetic preferences. Expanding pi to world-record distances races the brothers' pulses and gives the brothers stiffies, while other equally accomplished mathematicians look on this task as though it were an above-ground swimming pool full of swine vomit and elimination. When told about the Chudnovskys' pi computations, one distinguished number theorist authentically recoiled in horror and yelled, "What for?" Well, of course the purpose of this kind of massive computation is clear. Supercomputer manufacturers traditionally enjoy testing new machines by stretching pi, and most of the recent world records are held by a Japanese guy and his latest-model Hitachi xTremeSoroban. The Chudnovskys showed the reporter lots of amazing stats about these unimaginable distances of the pi expansion, and enthusiastically proclaimed that the farther you compute the expansion, the less you know or understand about pi. (The reporter asked where they kept pi, and they handed him a hard disk. "It's in here," one brother explained.) A retired telephone company guy has an antenna and dish farm at the corner of F******* Road and B**** P** Road. His ham specialty: He bounces radar signals off the Moon. I told a friend this, and she asked: "Why?" I was at a loss to answer her, but I know Why. Actually, I don't know Why, but I'm just totally kookoo for cocoapuffs about massive computation, and the tiny morsels of it I can cram into whatever outdated wheezing PC I've got plugged in under the desk [more about which later]. I guess at least half the proggies I write are Really Silly Things to murder mosquitos with a hydrogen bomb -- or more precisely, to murder an elephant by poking it with a sewing needle 4.55 x 10^40 times, while S.W.M.B.O. and I go out to dinner. 
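For the curious, here is roughly what that whilst-sleeping run is doing, sketched in Python (the actual program isn't shown in this post, so this is a minimal stand-in, and the 13 node coordinates below are made-up placeholders, not the real Bunny nodes). It scores open paths, with no return leg home; add the closing leg if the Bunny has to end where it started.

# A minimal sketch of the exhaustive 13-node Easter Bunny search.
# Hypothetical coordinates; the real Bunny nodes aren't given in the post.
import itertools
import math

nodes = dict(zip("ABCDEFGHIJKLM",
                 [(0, 0), (3, 1), (5, 4), (2, 7), (8, 2), (9, 9), (1, 5),
                  (6, 6), (4, 3), (7, 8), (2, 2), (8, 5), (5, 9)]))

def path_length(order):
    """Length of an open path visiting the nodes in this order (no return leg)."""
    return sum(math.dist(nodes[a], nodes[b]) for a, b in zip(order, order[1:]))

best_len, best_path = float("inf"), None
for order in itertools.permutations("ABCDEFGHIJKLM"):   # all 13! = 6,227,020,800 paths
    length = path_length(order)
    if length < best_len:                                # keep the shortest seen so far
        best_len, best_path = length, "".join(order)

print("PATH WORD:", best_path, "PATH LENGTH:", round(best_len, 6))

The 28-day estimate is plain extrapolation: if 7 hours covers 1.05 percent of the permutations, the full run needs roughly 7 / 0.0105, or about 667 hours, which is where the eerily Satanic 666-hour figure comes from.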
I think a lot of it is that I'm a really shitty mathematician and a worse math student, and my grasp on actual sophistication is so thready that I started substituting massive computation for the textbook elegance and straightforwardness I don't have and probably never will. (When I can't find an antidifferential, I love to carve an area into a gazillion skinny rectangles and sum their areas; I go to the kitchen for coffee, I come back, the answer awaits. You have a problem with this?) I took a first-gen Toshiba laptop with me to Europe once, and when I got to Communist Prague -- and even attended their lollapalooza Soviet Bloc Komputer Expo '88 -- it dawned on me that I was schlepping around the 2nd or 3rd most powerful computer in Czechoslovakia. And a computer you buy for your kid at Wal-Mart has more punch in it than all the digital computers of all the combatants during World War II. Turing would have begged to borrow my Toshiba laptop, or would have had MI6 whack me for it. The computers the Poles first devised to crack the Germans' Enigma Code were electromechanical and went tik-tik-tik-tik, so they called them bombes. But they cracked the early Enigma codes. ENIAC, also largely electromechanical, reminded one visitor of a room full of knitting grandmothers. The more I futz around with it, this Easter Bunny Problem turns out to be sincerely interesting (to me). Like Tantulus, I can see delicious grapes -- The Answer -- just out of my reach. But just a little bit out of reach, not an infinite, unimaginable out of reach. And it's just out of my pathetic programming reach so far, but the Answer is certainly easily within my PC's reach if I just keep running the dumb proggie for 28 days. There are, apparently, well-known techniques to furp back excellently short paths within a very short time. But only an exhaustive search of all paths can guarantee finding The Shortest Path. The overnight runs furp back PATH WORD and PATH LENGTH REDACTED & CLASSIFIED by order of VLEEPTRON MINISTRY OF SECRETS If you wish to know this information, in about 2 hours, and then can't find a shorter path for the rest of the night; after this one, then the serious 28-day thermonuclear mosquito hunt begins. One particularly appealing aspect of TSP is that nobody's (yet) much smarter about TSP than I am, at least insofar as any shortcuts to actually find the shortest path. There are no shortcuts yet. Bob's Thinking into this problem is about as advanced as the Cal Tech Combinatorics professor's thinking. TSP is officially classified as NP-Hard. [see Wikipedia thing below] More than that, there's a buzz that there's a proof of This: IF a shortcut to TSP can be found, THEN such a shortcut will apply to ALL NP-Hard problems. So the humble, low-rent, trailer-park TSP is or can be the key to simplifying/shortening all NP-Hard problems. The Near Future of the Easter Bunny Problem 1. As hinted above, I'm about to buy a new Dell, probably with an Intel Core 2 Duo, whatever the fudge that is. So from a HARDWARE standpoint, everything's coming up roses with the Easter Bunny, it's all Win-Win and Optimism and The Sun'll Come Out Tomorrow. I can see the Light At The End of the Tunnel. The Boys'll Be Home By Christmas. 2. Not so rosey with the SOFTWARE, me being Mister QuickBASIC FOR NEXT IF GOTO Guy. But I got me this Automatic Repeating Hammer and the sucker actually works. I suspect I'm a cinch to set the Guinness World Record for Slowest Solution ever found for a 13-Node [Euclidean / Planar] TSP. 
Maybe the Core 2 Duo thingie will cut it down to 24 days.

3. I have these 2.5 brainstorms:

A. Add a Yank-Off-Save-and-Resume (YOSaR) routine, so BUNNY doesn't really have to run continuously for a month on a system where the cat likes to play with the red button on the power strip. Maybe I won't get the answer in time for Easter Sunday, but I would find it deeply pleasing to know that every time I power up or re-start the PC, BUNNY chugs toward the Summit for another 16 hours. YOSaR would guard against a total-loss catastrophe striking after the program had already analyzed 83 percent of all possible paths.

B. The Distributed Computing Thang. 13! = 6,227,020,800 but 13!/13 = 12! = a mere 479,001,600, so I could carve the Bunny Problem into 13 13ths, and e-mail BUNNY and a bloc of 1/13th of all paths to 13 different PC users. This suggests each PC would only have to run BUNNY for about 2.2 days. Each bloc of sequential paths would e-mail me its local minimum path, and the winner would be the shortest of the 13. This way, if I could manipulate or finesse 12 other PC-owning suckers, I could actually get the damn thing solved before Easter, mirabile visu. Setting up the START path of each bloc, I am unhappy to report, exceeds my current Understanding of Combinatorics and the crappy IF-based method I'm using to generate all paths. It apparently won't be as falling-off-a-log simple as

Bloc  Start Path
01  ABCDEFGHIJKLM
02  B............
03  C............
04  D............
05  E............
06  F............
...
11  K............
12  L............
13  M............

more's the pity. But maybe a Flash Of Insight might come to me and I'll figure out a scheme to partition the Bunny into 13 distributed parts. [UPDATE: I got the Flash Of Insight, I can partition the Bunny into 13 equal parts now!] (A sketch of the partition appears a few paragraphs below.)

C. Much Smarter Combinatorics. You saw the Horror of my path-generator. Surely there's a path-generator which mechanically chugs out all paths, new path issuing mechanically from old path, with no IF statements. Just eliminating the IF delays in generating all paths would be a tremendous time-saver. I've been playing around with generating few-node path sets like ABCDE. I haven't got very far yet (and I've misplaced my Schaum's Combinatorics), but my early thinking uses the model of Refrigerator Magnets. [see Figure 1.] So then the trick is to find an algorithm that uses these primitive magnet-manipulating functions to generate all possible paths. I had to cook up something very much like this to automate the Towers of Hanoi with large numbers of disks. (Also not Elegant, but if my PC isn't busy for a day or two, it can shift a stack of 64 disks from one pole to another one disk at a time.)

Okay, all this thinking is exhausting me, and I'm not very smart. I would be very appreciative of ramanuJohn's thoughts on these profound matters. Maybe you'll be stuck now and then in Wait Mode, and the Bunny might save you from having to read about Anna Nicole Smith in People magazine. Please carry a pad of graph paper and a calculator with you at all times.

Euclidean TSP

Euclidean TSP, or planar TSP, is the TSP with the distance being the ordinary Euclidean distance. Although the problem still remains NP-hard, it is known that there exists a subexponential time algorithm for it. Moreover, many heuristics work better. Euclidean TSP is a particular case of TSP with triangle inequality, since distances in the plane obey the triangle inequality. However, it seems to be easier than general TSP with triangle inequality.
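A quick aside before the rest of that Wikipedia background: in most languages, brainstorms B and C collapse into the same trick. An off-the-shelf, purely mechanical path generator already exists, and handing it a fixed first stop carves the 13! paths into exactly the 13 blocs tabulated above. Here is a hypothetical Python sketch, reusing the made-up node table and path_length from the earlier snippet; it is an illustration of the idea, not the real program.

# Brainstorm B as code: one "bloc" is every path that starts at a given
# node, so each bloc scans 12! = 479,001,600 orderings, and 13 volunteer
# PCs could each take one bloc.
import itertools

def best_in_bloc(first_stop):
    """Exhaustively search the bloc of paths that begin at first_stop."""
    rest = [name for name in nodes if name != first_stop]
    best = (float("inf"), None)
    # itertools.permutations is the "no IF statements" mechanical generator
    # of brainstorm C: each new ordering issues from the previous one
    # internally, with no branching in our own path-generation code.
    for tail in itertools.permutations(rest):            # 12! orderings per bloc
        order = (first_stop,) + tail
        best = min(best, (path_length(order), "".join(order)))
    return best

# Each volunteer runs one call; the shortest of the 13 local minima wins.
# winner = min(best_in_bloc(stop) for stop in "ABCDEFGHIJKLM")

Mailing 13 blocs around and merging the 13 local minima is the Distributed Computing Thang in miniature; BOINC just industrializes the same partition-and-merge idea.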
For example, the minimum spanning tree of the graph associated with an instance of Euclidean TSP is a Euclidean minimum spanning tree, and so can be computed in expected O(n log n) time for n points (considerably less than the number of edges). This enables the simple 2-approximation algorithm for TSP with triangle inequality above to operate more quickly. In general, for any c > 0, there is a polynomial-time algorithm that finds a tour of length at most (1 + 1/c) times the optimal for geometric instances of TSP (Arora); this is called a polynomial-time approximation scheme. This result is an important theoretical algorithm but is not likely to be practical. Instead, heuristics with weaker guarantees are often used, but they also perform better on instances of Euclidean TSP than on general instances.

NP-Hard

NP-hard (Nondeterministic Polynomial-time hard), in computational complexity theory, is a class of problems informally "at least as hard as problems in NP." A problem H is NP-hard if and only if there is an NP-complete problem L that is polynomial-time Turing-reducible to H, i.e. L ≤_T H. In other words, L can be solved in polynomial time by an oracle machine with an oracle for H. Informally, we can think of an algorithm that can call such an oracle machine as a subroutine for solving H, and that solves L in polynomial time if the subroutine call takes only one step to compute. NP-hard problems may be of any type: decision problems, search problems, optimization problems. As consequences of this definition, we have (note that these are claims, not definitions):

* problem H is at least as hard as L, because H can be used to solve L;
* since L is NP-complete, and hence the hardest in class NP, problem H is also at least as hard as NP, but H does not have to be in NP and hence does not have to be a decision problem;
* since NP-complete problems transform to each other by polynomial-time many-one reduction (also called polynomial transformation), all NP-complete problems can be solved in polynomial time by a reduction to H, and thus all problems in NP reduce to H; note, however, that this involves combining two different transformations: from NP-complete decision problems to the NP-complete problem L by polynomial transformation, and from L to H by polynomial Turing reduction;
* if there is a polynomial algorithm for any NP-hard problem, then there are polynomial algorithms for all problems in NP, and hence P = NP;
* if P ≠ NP, then NP-hard problems have no solutions in polynomial time, while P = NP does not resolve whether the NP-hard problems can be solved in polynomial time;
* if an optimization problem H has an NP-complete decision version L, then H is NP-hard;
* if H is in NP, then H is also NP-complete, because in this case the existing polynomial Turing reduction fulfills the requirements of a polynomial-time transformation.

A common mistake is to think that the "NP" in "NP-hard" stands for "non-polynomial". Although it is widely suspected that there are no polynomial-time algorithms for these problems, this has never been proven. An example of an NP-hard problem is the decision problem SUBSET-SUM, which is this: given a set of integers, does any non-empty subset of them add up to zero? That is a yes/no question, and happens to be NP-complete. Another example of an NP-hard problem is the optimization problem of finding the least-cost route through all nodes of a weighted graph. This is commonly known as the Traveling Salesman Problem.
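Since the excerpt leans on "the simple 2-approximation algorithm for TSP with triangle inequality," here is a minimal sketch of that idea, the so-called double-tree trick: build a minimum spanning tree, walk it in preorder, and skip repeats. This is hypothetical Python illustration code, not anything from the Bunny program; it cannot certify the true optimum, but on Euclidean points it returns a closed tour at most twice the optimal length almost instantly, which is the "excellently short paths within a very short time" category.

# MST-based 2-approximation for metric TSP (Euclidean distances obey the
# triangle inequality, so the 2x guarantee applies to closed tours here).
import math
from collections import defaultdict

def double_tree_tour(points):
    """points: {name: (x, y)}. Returns a closed-tour ordering of the names."""
    names = list(points)
    dist = lambda a, b: math.dist(points[a], points[b])

    # Prim's algorithm: grow a minimum spanning tree one cheapest edge at a time.
    in_tree, mst = {names[0]}, defaultdict(list)
    while len(in_tree) < len(names):
        u, v = min(((a, b) for a in in_tree for b in names if b not in in_tree),
                   key=lambda edge: dist(*edge))
        mst[u].append(v)
        mst[v].append(u)
        in_tree.add(v)

    # Preorder (depth-first) walk of the tree, skipping nodes already visited.
    tour, seen, stack = [], set(), [names[0]]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            tour.append(node)
            stack.extend(reversed(mst[node]))
    return tour

def closed_tour_length(points, tour):
    legs = zip(tour, tour[1:] + tour[:1])        # include the leg back to the start
    return sum(math.dist(points[a], points[b]) for a, b in legs)

# e.g. with the hypothetical node table from the earlier snippets:
# tour = double_tree_tour(nodes); print(tour, closed_tour_length(nodes, tour))

Only the exhaustive 28-day grind certifies the absolute shortest path; this just furps back a provably-not-terrible tour while you wait.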
There are also decision problems that are NP-hard but not NP-complete, for example the halting problem. This is the problem "given a program and its input, will it run forever?" That's a yes/no question, so this is a decision problem. It is easy to prove that the halting problem is NP-hard but not NP-complete. For example, the Boolean satisfiability problem can be reduced to the halting problem by transforming it to the description of a Turing machine that tries all truth value assignments and, when it finds one that satisfies the formula, halts; otherwise it goes into an infinite loop. It is also easy to see that the halting problem is not in NP, since all problems in NP are decidable in a finite number of operations, while the halting problem, in general, is not.

Alternative definitions

An alternative definition of NP-hard that is often used restricts NP-hard to decision problems and then uses polynomial-time many-one reduction instead of Turing reduction. So, formally, a language L is NP-hard if, for every L′ in NP, L′ ≤_p L. If it is also the case that L is in NP, then L is called NP-complete.

NP-naming convention

The NP-family naming system is confusing: NP-hard problems are not all in NP, despite having 'NP' as the prefix of their class name! However, the names are now entrenched and unlikely to change. On the other hand, the NP- naming system has some deeper sense, because the NP- family is defined in relation to the class NP:

NP-complete - means problems that are 'complete' in NP, i.e. the most difficult to solve in NP;
NP-hard - stands for 'at least' as hard as NP (but not necessarily in NP);
NP-easy - stands for 'at most' as hard as NP (but not necessarily in NP);
NP-equivalent - means equally difficult as NP (but not necessarily in NP).

Posted by Vleeptron Dude at 5:09 PM No comments: Links to this post

wtf??? ... dude's talking about the WEATHER and winning OSCARS and setting the planet on fire!!!

The Deluge, by Paul Gustave Doré (French, 1832 - 1883) The Vleeptron High Non-Junk Science Council will skip its own long rant about Global Warming and Planetary Climate Change. Unfortunately, science has yet to find a way to harness Global Warming Rants as a source of environmentally clean energy. More's the pity. The controversy on this topic is HUGE, the volume of violently angry disputes is ENORMOUS and growing every day. If someone could find a way to squeeze electricity out of it, coal, oil and nuclear would go out of business overnight. About three months ago, I answered an e-mail from the US political action group moveon.org to go to the home of a neighbor (a Smith College professor) for an intimate little screening of Al Gore's documentary "An Inconvenient Truth." (I was asked to bring Postage Stamps and Snacks, and after the movie, wrote some letters to some politicians.) The hell with the Science, the hell with the Truth about GW. I mean -- are you asking me? Are you waiting for Vleeptron to tell you whether this is all crap, or whether you should move to higher ground and buy a canoe and a new pair of galoshes/wellies from LL Bean? Go ahead. Ask. Leave A Comment. If you really want to know What Bob Thinks About Global Warming, I'll tell you. Bob is Not a Climate Scientist. Bob has a Nephew who's a Glaciologist, but he's keeping his mouth fairly shut about the GW thing. In fact Nephew keeps his mouth fairly shut about just about Everything, so that doesn't prove anything.
But Bob does know a thing or two about News, about how some things just bore the crap out of everyone, while other things spontaneously burst into flames in the public and political imagination. That's all you get from Vleeptron in this post. I am in total awe of Al Gore. In the past couple of years, this rather boring unemployed politician has rapidly positioned himself to be the Mother Teresa / Darth Vader / Winston Churchill / Joseph Stalin / Albert Einstein / Abraham Lincoln / Helen Keller / Stephen Hawking / Bertrand Russell / Savonarola / Rasputin / Carl Sagan of this issue. After he lost (or maybe won) the 2000 presidential election, Al Gore's destiny was clear: He was supposed to vanish from American political life, get a quiet chair in political science at Vanderbilt University in Tennessee, and make way for a new generation of political players. He was supposed to shut up and go away. We were never supposed to hear from him again -- except maybe to make a dignified speech now and then, but not in a prime-time hour, at Democratic Party conventions. So much for his cooperation with his destiny. Dude just won an Academy Award, his documentary's theme song just won another Oscar, and on TV, on the news, on C-Span, on Oprah, on Letterman, it's Al Gore, Al Gore, Al Gore, Al Gore, Al Gore, Al Gore, Al Gore, Al Gore, Al Gore, Al Gore, Al Gore, Al Gore, Al Gore. Okay, no big deal, dozens of people win an Oscar every year. But Al Gore? And suddenly people CARE about the Oscar for Best Documentary? How many documentaries have you ever gone to the movies and paid money to see? Check one: [ ] None [ ] One Where are the tits? Where are the car chases? Where are the machine guns? Where are the gay cowboys? Where are the 300 pumped Spartans and the FX? Where's George Clooney? This guy made a DOCUMENTARY about the WEATHER, for Christ's sake, and suddenly the entire population of Planet Earth is staring fixedly at Al Gore, and waiting for his next utterance about The Weather. Al Gore and Global Warming have practically pushed the Iraq War and Anna Nicole Smith off the front page. Color Vleeptron Impressed! And as many people hate Al Gore and his Weather as are worshipping him. Rich and powerful people want to assassinate Al Gore because of what he says about The Weather. Only one thing could possibly explain all this: Al Gore sold his Soul to the Devil. Nothing else could possibly explain how Al Gore and his Weather PowerPoint Lecture Documentary could Rock Planet Earth this way. Dude sold his Soul to Satan. I mean, Check This Out. Check out how Al Gore just went up to Congress to testify about The Weather, and it was like Moses parting the Red Sea. And check out the ANGER! This isn't just Political Posing Anger. These guys are sincerely FURIOUS at Al Gore. While others are waiting in long lines hoping to touch the hem of The Great Weather Prophet's garment. Who is Al Gore's Press Agent? Who turned this cross-eyed smoked whitefish into Jesus Walking On The Water? Who tossed Al Gore into a big brown grocery bag and then reached in and pulled out Jimi Hendrix and Jim Morrison? I want that Press Agent. He can have 19 Percent. I don't care. I want that Press Agent.
BLOG: SCIAM OBSERVATIONS
Opinions, arguments and analyses from the editors of Scientific American
06:23:43 pm, Categories: Global Warming and Climate Change, Politics and Science, 1344 words

Gore Returns to Senate to Butt Heads With Climate Change Skeptics, Propose Real Solutions
by Christopher Mims

As soon as the Democrats took both houses of Congress, one thing became inevitable: Gore was coming back to the Senate, if only to address his all-consuming passion, climate change. Today at 2:30 EST, at the behest of Barbara Boxer (D-California), the chairman of the Environment and Public Works Committee, Gore got 30 minutes to speak before a packed house. Immediately after, noted climate change skeptic Sen. James Inhofe (R-Oklahoma), who famously declared that global warming "is the greatest hoax ever perpetrated on the American public," got a chance to lay into the former vice president, at one point even attempting to ambush him by embarrassing him into signing a pledge that he reduce his emissions to those of a typical American household. The gloves were off: it was political theater at its finest. Unfortunately, that meant that, save Mr. Gore and, in his better moments, Sen. Inhofe, few of those present addressed the science of climate change in a way that made it sound like they'd done their homework. To wit:

* When Senator Kit Bond (R-Missouri) declared that sunspots were just as likely a cause of global warming as human emissions of CO2, I just about fell out of my chair. So did Mr. Gore, apparently, because he focused so much on answering this claim that he almost missed the bit of political stagecraft that preceded it, when Sen. Bond unveiled a giant poster of a little girl whose family is so poor they can't afford to heat their home in the winter, then asked how Gore could conscionably ask that folks like this pay more for energy (since clean coal and renewables are both more expensive than plain old dirty coal). It's a measure of how into the science Gore has become that he answered the sunspot question first and basically missed his opponent's attempt to pull the heartstrings of anyone too unimaginative to realize that energy efficiency would make it *less* likely that anyone would be cold in the winter rather than more likely.

* Sen. Inhofe declared that the Antarctic is gaining ice, not losing it. This makes a nice sound-bite (gee, if the coldest place on earth is growing, not shrinking, doesn't that mean the earth is cooling and not warming, or something?) until you realize that the climate models actually predict increased snowfall over Antarctica, mitigating to some extent the sea-level rise that will come about as a result of global warming. It's also worth noting that this data is patchy, at best, and only goes back a few decades.

* Inhofe also whipped out a poster with "over a thousand names" on it of scientists who don't agree with the consensus on global warming. This was a nice touch, but Gore responded appropriately: the IPCC just declared the evidence for anthropogenic climate change to be unequivocal. The National Academies of Science of the 16 most developed countries all concur. In other words, for every name on that poster, there are a dozen, maybe a hundred scientists, maybe more, who don't dispute the basics of anthropogenic climate change.
(It was also nice to hear Gore cite the September 2006 single-topic special issue of Scientific American on the future of energy, even if it was only to note that in it the editor in chief declared that the debate on anthropogenic global warming is over.) To me, Inhofe's poster o' climate change skeptics is the equivalent of trotting out a bus full of young-earth creationists--sure, there are people on this Earth who think that dinosaurs and humans co-existed, but that doesn't make it so, nor does it mean that there is any real debate about whether or not our planet is 6,000 years old. To his credit, Inhofe did bring up one point where Gore may have exaggerated in his film: the link between global warming and an increased number of hurricanes. Certainly scientists believe a warmer earth will cause more intense hurricanes. But more hurricanes overall? The jury's still out. (Chris Mooney, who is about to come out with a book on just this subject, has more at his blog The Intersection.) Some folks may still think this is a political issue, but the many Republican Senators on the Senate Environment committee who were more interested in talking about solutions than debating the science would disagree with those folks. It was gratifying to finally see this becoming a bipartisan issue. Here is Gore's 9-point plan for dealing with climate change, starting today, directly from his speech: 1) I think we ought to have an immediate freeze on CO2 emissions and start from there. 2) We should use the tax code. What I'm about to propose I know is very much outside the range of what is now politically feasible. I think we ought to cut taxes on employment and make up the difference with pollution taxes - principally CO2 taxes. Some countries are talking about it seriously. In the developed world our big disadvantage is that these developing countries have access to tech and container shipping. We don't want to lower our wages - but we don't want to pile on top of those wages these taxes. We ought to use some of the revenue [from carbon taxes] to help the poor with the adjustments that are coming forward. 3) I'm in favor of cap and trade and I supported Kyoto, but I understand the realities of the situation. I think the new president should take office at a time when our country has a commitment to de facto compliance with Kyoto. And I think we should move the start of the new treaty period from 2012 to 2010. We need a tougher treaty that starts in 2010. And we need to find a creative way to get China and India involved sooner rather than later. That's important not least because China's emissions will exceed ours in the next couple of years. We need to ratify a cap and trade system so the market will work for us rather than against us. 4) We should have a moratorium on new coal plants that are not fitted with carbon capture and sequestration technology. 5) I think our Congress should fix a date beyond which incandescent lightbulbs are banned. [aside: Australia is about to do this.] ... It's like Wal-Mart. It's not taking on the climate crisis simply out of the goodness of their heart. They care about it but they're making money at it. 6) The creative power of the information revolution was unlocked by the Internet. When the science and engineering pioneers came up with ARPANET and this Senate empowered them with a legislative framework and money for R&D, that came together. We ought to have [an analogous] electro-net and we ought to encourage widely distributed power generation.
We ought to take off the caps and let individuals sell back as much as they want on the grid. Know that the opposite of a monopoly is a monopsony - a single buyer who dictates prices, so we need to have an open market to deal with that [so it's not just the utility company dictating the value of electricity sold back to the grid]. You give individuals the ability to do that and you watch - families, small business will go to town on this. 7) I think we ought to raise the CAFE standards. Don't single out autos, but as part of it. 8) Pass a carbon-neutral mortgage association. Here's why: buyers of new homes and buyers and sellers all focus on purchase prices. But the expenditures that go into more insulation and window treatment and those that don't pay back immediately but pay back over 2-3 years, those don't get counted as savings. Put those in a separate instrument - then have a Connie May [like the government's Fannie Mae, which handles mortgages] which can create a separate instrument. So that people can save and reduce co2 at the same time. 9) Require corporate disclosure of carbon emissions. Investors have a right to know about material risks that could affect the value of their stocks in the future. Posted by Christopher Mims Comments, Pingbacks: Comment from: monocrater [Visitor] Global Warming or Climate Change is not a political issue, it can be substantiated with empirical evidence. Anthropogenic Global Warming IS political and it is largely a theoritical soft-science because it is primarily focused on future predictions based on limited and selective present knowledge. While well-intentioned, Mr. Gore, the IPCC, environmental organizations, and the liberal left have latched onto AGW and promoted it as a catastrophic fact, creating a new fear to shift political and social power. AGW has been hyped, overblown, exaggerated, and promoted as unequivocal fact without any separation of reality from myth. Now, every storm, heat wave, drought, migration, flood, cold spell, disease, sleepless night, and a host of other maladies are being attributed to AGW as a result. Follow the money trail for research into AGW - largely the the IPCC and government agencies are funding this research - and all come with deep pockets. Climate researchers who's hypothesis inetend to show alternative causes to GW are not funded to the same degree as those whose hypothesis aim to show a human cause. The reason? Human cuases are scarier and "correctable" and "legislateable" - something the UN and government agencies excel at. Pro AGW hypothesis are going to get funded and as a proverb in economcs goes "when you fund something, you'll get more of it". Environmentalists sit back and enjoy the show as their exaggerated AGW scare has gotten these agencies to fund their cause in a "take no chances" panic. Many scientists in the AGW camp are by nature sympathetic to environmental causes and are generally not friendly to industry. In other words, imagine a poet being a neo conservative? An artist who supports the NRA? A biologist who supports the death penalty? It is a positive sympathetic feedback system in the AGW science camp. Peer reviewed? By sympathetic peers who are not held accountable by the funding agencies? What are the official peer review standards anyway in IPCC AGW science? How are these researchers held accountable for review? While I remain skeptical of AGW claims, and largely because history has shown "flavor of the month science", I am open to the further study of this issue. 
However, I am not supportive of sweeping and costly political legislations at this point. There are plenty of other natural explanations that have yet to be fully funded and ruled out. Funding is the key here, and who is doing the funding will often dictate the results. If you agree with this principle (since it seems to apply when industry funds) then it must be decided on the science. And since science has shown radical and rapid climate change in recent geologic times, how can the AGW science we are being told is a "consensus" be explained as the absolute truth when funding is taken into account? This is fundamental skepticism from a critical thinking perspective. And please spare the holier-than-thou "climate contrarian" and "climate denier" labels. They are further evidence of politics manifesting itself in climate change debate. March 21, 2007 @ 19:48 Comment from: Wall Street Journal [Visitor] · http://online.wsj.com/article/SB115457177198425388.html Wall Street Journal article by Antonio Regalado and Dionne Searcey about Al Gore's Penguin army comes to mind. http://online.wsj.com/article/SB115457177198425388.html Comment from: Eco Author Chris Eldridge [Visitor] · http://www.trafford.com/04-2708 With how important his message is, I have to wonder why some people are so hateful and distrustful of it. Isn't environmentalism the right thing to do regardless? Basically, you really don't even really need to believe in global warming to want to live more efficiently, right? I mean, it's the right thing to do for many critical reasons. Apart from improving our health: Living more efficiently SAVES MONEY. Yeah, like you have to twist my arm for that... To think how companies can save millions in just the efficient design of their office buildings. Economy cars can also save you like $16,000 in gas over the life of the car when figured at just $2.00 per gallon. Isn't that worth it right there? It improves our national security. Oil could spike to $5.00 a gallon tomorrow if something happened in the Middle East or a hurricane hit Huston. If that happened, the economy would be greatly weakened, Duh! Burning less fuel creates less smog, less air pollution, and less soil contamination. Go figure ... Living more efficiently ultimately lowers our impact on wildlife and forest areas in the form of less acid rain, fewer catastrophic oil spills, and less strip mining. Finally, if we can become more energy self-sufficient on a very local level we become that much less vulnerable to region-wide disasters like Hurricanes or mass blackouts. In this regard, renewable energy, isn't just good for the environment. It's also the key to keeping the power on when everybody else is sitting in the dark. Overall, isn't wastefulness and carelessness "morally" wrong? We have to expand our thinking to find solutions that address the broadest possible array of problems. Being able to work productively from home or in our own communities would be the most 'cut-to-the-chase solution' of them all as it would eliminate the need for a daily commute in the first place while giving us five more hours of free time! It's what LA is trying to do to curb the extraordinary amount of traffic they have: create consolidated communities where people live, work, and have recreational facilities nearby! Got to think that's smart at some level, right? Comment from: Keira [Visitor] [about 20 porn site URLs] Comment from: Truman Witherspoon [Visitor] Mr. 
Eldridge, In response to your question I would offer the following; while no response should be hateful, I can fully understand the distrust. Absolutely environmentalism is a worthy endeavor. However, to couple the cause with fear tactics intended for political and financial gain is quite troubling. Further, the omission of critical facts from the analysis to make the "climate crisis" story more compelling is akin to the claims of WMD and the linking of al Qaeda to Iraq as a justification for invasion. These are highly charged issues with millions of dollars and elite cabals of power pulling the strings. These groups have historically used tactics of this type to incite advocates on both sides. Unfortunately, we [the people] lose when political and social leaders use these tactics. A nation misguided and divided ensures that a corrupt group of elitists can remain in power. Also, while the notions you suggest regarding fuel-efficient cars, green homes and consolidated communities are excellent goals to strive for, I trust that you don't expect these changes to occur within Mr. Gore's timeframes. Saving $16,000 in fuel costs is not that compelling to the average-income family who would have to spend $30,000+ to buy the vehicle in the first place. To upgrade the average home in the U.S. to a "green" home would put the homeowner in debt for 14 years. Although this is clearly the direction society needs to move in, there is just not enough economic capacity to move in this direction quickly. Mr. Truman Witherspoon

Comment from: Christopher Mims [Member] Chris, I have to agree. Let's say you don't believe in AGW at all. Isn't it still a good idea to save yourself money by increasing the efficiency of your car, your home, etc.? Maybe you don't believe CO2 is an issue, fine... there are plenty of other pollutants our actions produce (Sulfur Dioxide and acid rain, anyone?) that should make us want to use less and use what we've got more efficiently. Let's extend it further--isn't investing in R&D on new energy sources a good way to get us off the foreign oil teat? How about the idea that we might someday produce solar power for less than we now pay to make that energy with coal. Wouldn't that be nice? I personally get so many peripheral (mainly psychological) benefits from taking steps to reduce my own impact (saving money isn't the least of them, I can tell you) that I wonder why there is so much harsh rhetoric... would it kill us all to buy locally and strive for energy independence?

Posted by Vleeptron Dude at 9:14 AM 1 comment: Links to this post

I have an alibi

I wish I could find the guy who carried this sign at the anti-war protest in Washington DC this past weekend. If I didn't have an air-tight alibi, I'd be certain he was me. My Army buddy R.B., who lives in the USA state shaped like the palm and fingers of the right hand, sent me this picture, which a photographer friend took at the very large demonstration. Our e-mails prompted the poem of the previous post. The anti-war demonstrations and protests and vigils -- in more cities than Washington DC -- are a big Flashback for us sexagenarian-somethings. It's one more indisputable proof that the Iraq War is absolutely, thoroughly, completely different from the Vietnam War. See? In the Vietnam protests, nobody had an iPod or a cell phone.
Other than that, the fear, the dread, the dead bodies, the amputations, the destroyed lives, the Post Traumatic Stress Disorder, the escalation ("troop surge") and the ferocious anger that Americans have grown to feel toward the psychopaths who designed the Iraq War and lied their asses off to get it started ... it's Flashback City for me, and obviously for the old sign-carrying guy. In case anyone's in any doubt, I vote for Cut and Run, immediately. Bring the Boys and the Girls Home immediately. Over on the Fox News Channel, anyone who's for Cut & Run is not only a traitor and a girrlyman and a left-wing radical who hates America, but ending the Iraq War would cause a bloodbath in Iraq, quickly followed by emboldened Jihadist terrorist attacks on the US Homeland. That's why we're fighting a war in Iraq -- so we won't be attacked in the Homeland. Well -- I'd confess to being a traitor and a girrlyman and a left-wing radical who hates America, but this damn certificate of my US Army service, and the medals, and the letter from my Commander-in-Chief, Richard M. Nixon, thanking me for my patriotic wartime service keep confusing the indictment. You get confused about simple things when you're my age. I thought Vietnam meant we'd never again do something that fucking stupid. Obviously it meant nothing of the sort. It's about time to start calling for designs for the Iraq War Memorial, not far from the Vietnam War Memorial, in Washington DC. If you'd like to Leave A Comment with your sketch or your design ideas, we might as well start it now, we've amassed enough dead soldiers and Marines, and all we've achieved is the same outrage among Americans that Americans felt against Lyndon Johnson and Richard Nixon. If you'd like to Leave A Comment to explain why the Iraq War is a great American achievement, and why we Must Stay the Course, and just give the new Free and Democratic government in Iraq just a little time to get its shit together and become a Western-friendly oil-producing dependable ally, and why Victory is Just Around the Corner, knock your socks off. 
Please address all military-solution hallucinations to Bob the Old Girrlyman Traitor who Hates America SP5 US Army 1969-1971 Posted by Vleeptron Dude at 8:35 PM 1 comment: Links to this post old guy's anniversary poem If this long after, I got my motorcycle back and the hippie communes and the parties all weekend long came back and really great music started coming over the radio again that would be just dandy If this long after, the drug dealer was packing an acoustic guitar and forced me to harmonize with him while he played folksongs that would be really cool If this long after, the President had to vamoose on the helicopter and go away forever the Attorney General went to federal prison and 20 assorted White House dickheads went to federal prison that would be totally awesome If this long after, LSD came back and sitar music while I stumbled around and Time got all stretchy like a rubber band and insights into What It's All About flashed like the brilliance of a thousand suns inside my head and I routinely forgot them all the next morning If this long after, embroidered bellbottoms came back and paisley came back and tie-dye came back and long long hair came back and smiles all day long came back and laughing all night long until the sun came up came back and bras went away If this long after, the city was as much fun as the country and the woods and the mountains and the seashore again that would be rad phat bitchin But it's this long after, and all that's back is this war 20 March 2007 / 4th anniversary of start of Iraq War Copyright (c) 2007 by Robert Merkin, All Rights Reserved fucking up a wet dream: privatizing Walter Reed Army Medical Center in the midst of two wars This is a very rich story, just overflowing with fascinating details about how a partnership between the Army and the Defense Department, and a division of the Halliburton Corporation, managed to sicken, weaken and spread dysfunction throughout Walter Reed Army Medical Center -- for a century the crown jewel of Army medicine -- just as the Iraq and Afghanistan Wars began and large numbers of combat wounded were being flown back to Walter Reed for treatment. Perhaps some things shouldn't be privatized and outsourced. In matters where we have very high expectations of excellence -- for example, medical treatment and associated housing and administrative services for wounded combat troops -- the only way to guarantee that we can meet very high expectations is to have the agency ultimately responsible -- the Army -- do nearly all the work itself. Particularly since, in this case, the Army hospital has a century's worth of experience performing all the necessary tasks involved in treating, housing, rehabilitating and administering combat casualties. Halliburton is a huge private corporation which does a huge number of things, and all for profit and return to its investors. Walter Reed Army Medical Center does only one thing, and has been doing it, and even inventing pioneering ways of doing it, since 1904. It provides top-quality medical care to members of the U.S. military and their family members. Making a profit, or a return on investment, is not part of Walter Reed's mission or structure. At the end of each quarter, Halliburton must justify all its activities by analyzing the profit it made. At the end of each quarter, Walter Reed must justify all its activities by analyzing how many patients it treated, and how well. 
These are fundamentally different aims, and are destined or doomed to achieve fundamentally different results. Congress last week made its first loud peep to keep Walter Reed Army Medical Center open after all, and not close it as planned by 2011. With a vile scandal placed in front of them like a plate of puke, the Einsteins of Congress have begun, slowly, to conclude that perhaps closing the jewel of Army medicine might not be a really great idea in the midst of two long and very nasty wars (while the Bush administration rattles sabers for more possible wars). To really rub it in, The Washington Post broke the story of substandard care for wounded combat troops at Walter Reed precisely at the moment when the Bush "surge" of 20,000 more U.S. troops was deployed to Iraq. It's the heightened expectations that are causing so much noisy trouble in this story. Americans are increasingly making the Connection between agreeing to wars, and insisting that those Americans wounded in the wars get the best medical treatment America has to offer, at any price. Americans are making the Connection. The way the Defense Department and the Department of Veterans Affairs assemble and maintain the machine to provide medical treatment to our soldiers, marines, sailors, air force personnel, etc. may reflect Congress's legislative will. But Jeez -- they way they go about trying to accomplish this mission -- these people could fuck up a wet dream. And they're the people the wounded troops and their families depend on to do the job the way the American people expect them to. pickup in Seattle Post-Intelligencer (Washington state USA) Walter Reed deal hindered by disputes by Donna Borak, AP Business Writer [image] A section of wallpaper that was pulled back to reveal mold is seen as Army Master Sgt. Gary Rhett, right, building manager, looks on in room 416 of Building 18 of the Walter Reed Army Medical Center, which was used to house recovering wounded soldiers, in Washington,Thursday, March 15, 2007. An Army contract to privatize maintenance at Walter Reed Medical Center was delayed more than three years amid bureaucratic bickering and legal squabbles that led to staff shortages and a hospital in disarray just as the number of severely wounded soldiers from Iraq and Afghanistan was rising rapidly. (AP Photo/Charles Dharapak) WASHINGTON -- An Army contract to privatize maintenance at Walter Reed Medical Center was delayed more than three years amid bureaucratic bickering and legal squabbles that led to staff shortages and a hospital in disarray just as the number of severely wounded soldiers from Iraq and Afghanistan was rising rapidly. Documents from the investigative and auditing arm of Congress map a trail of bid, rebid, protests and appeals between 2003, when Walter Reed was first selected for outsourcing, and 2006, when a five-year, $120 million contract was finally awarded. The disputes involved hospital management, the Pentagon, Congress and IAP Worldwide Services Inc., a company with powerful political connections and the only private bidder to handle maintenance, security, public works and management of military personnel. While medical care was not directly affected, needed repairs went undone as the non-medical staff shrank from almost 300 to less than 50 in the last year and hospital officials were unable to find enough skilled replacements. 
An investigative series by The Washington Post last month sparked a furor on Capitol Hill after it detailed subpar conditions at the 98-year-old hospital in northwest Washington and substandard services for patients. Three top-ranking military officials, including the secretary of the Army, were ousted in part for what critics said was the Pentagon's mismanaged effort to reduce costs and improve efficiency at the Army's premier military hospital while the nation was at war. IAP is owned by a New York hedge fund whose board is chaired by former Treasury Secretary John Snow, and it is led by former executives of Kellogg, Brown and Root, the subsidiary spun off by Texas-based Halliburton Inc., the oil services firm once run by Vice President Dick Cheney. IAP finally got the job in November 2006, but further delays caused by the Army and Congress delayed work until Feb. 4, two weeks before the Post series and two years after the number of patients at the hospital hit a record 900. "The Army unfortunately did not devote sufficient resources to the upfront planning part of this, and when you do that, you suffer every step of the way," said Paul Denett, administrator for federal procurement policy at the Office of Management and Budget, the White House unit that prepares the president's budget and oversees government contracts. The contract includes management of Building 18, which houses soldiers with minor injuries and was highlighted in the Post series as symptomatic of substandard conditions: black mold on the walls of patient rooms, rodent and cockroach infestation, and shoddy mattresses. Those 54 rooms are now vacant. Interior work cannot be started until a badly damaged roof is repaired, and that will need another contract because it's not covered in the IAP contract, Walter Reed officials said. "These rooms are exactly as they were left," Sgt. Gary Rhett, manager of Building 18, said Thursday. "No changes have been made." The Army has confirmed the timing of the contract delays but declined several requests for comment on why the protest and appeal process took so long, even as more and more injured soldiers were arriving. The trail goes back to the end of the Clinton administration. The Army began studying the cost benefits of privatization in 2000. When President Bush took office, he mandated the competitive outsourcing of 425,000 federal jobs. At the time, the Pentagon was aggressively pushing for increased outsourcing, and in June 2003, then-Defense Secretary Donald Rumsfeld told a Senate committee he was considering outsourcing up to 320,000 nonmilitary support jobs. That's the same year that the Army asked for bids on Walter Reed and, coincidentally, the same year the United States invaded Iraq. One company responded: Johnson Controls World Services Inc., which would be acquired by IAP in March 2005. It initially bid $132 million, but it and Walter Reed's then-management agreed that the Army was underestimating the cost. By September 2004, the Army had decided it would be cheaper to continue with current management, which said it could do the work for $124.5 million. Johnson Controls filed a protest with the Government Accountability Office. The protest was dismissed in June 2005, but the Army agreed to reopen bidding three months later to include additional costs for services. In January 2006, after two rounds of protests by IAP and two appeals by Walter Reed employees to the U.S. Army Medical Command, IAP was named the winner, according to Steve Sanderson, a Walter Reed spokesman. 
Instead, in an unusual turn of events, the contract wasn't awarded for another 11 months, the GAO said. Walter Reed officials blame several factors, including an additional protest to the GAO filed by Deputy Garrison Commander Alan D. King, a separate appeal to the U.S. Army Medical Command by Walter Reed's public works director, at least one intervention by Congress, and delays on required congressional notifications about government employee dismissals. IAP spokeswoman Arlene Mellinger said "it was up to the Army to decide when to begin that contract." The company was ready to start at any time, she added. In August 2006, led by Sen. Barbara Mikulski, D-Md., lawmakers asked then-Army Secretary Francis J. Harvey to hold off on the contract until Congress finished work on the fiscal 2007 defense appropriations bill. Congress approved that bill Sept. 29. The Army's plan then was to eliminate 360 federal jobs at Walter Reed in November and turn the work over to IAP, according to the American Federation of Government Employees, a federal workers' trade union. But the Army failed to notify Congress 45 days in advance, as required by law, so the turnover was delayed until early this year. Then it was IAP's turn to have problems. When work finally began at the hospital, IAP made an immediate request, which the Army approved, to hire 87 temporary skilled workers for up to four months "to ease the turbulence caused by employees being placed into positions or other installations and otherwise finding new jobs early," said Sanderson, the Walter Reed official. However, a "tight" job market in the Washington area meant that only 10 qualified temporary employees were found, he added. Meanwhile, injured soldiers continue to arrive weekly to a short-handed, deteriorated hospital, which the Army still plans to close in 2011. LAHAR! volcano disaster in New Zealand -- but lots of warning, nobody hurt TOP: Depiction of an eruption-generated lahar flowing from the summit of Mount Ruapehu in New Zealand. (The Australasian Journal of Disaster and Trauma Studies) BOTTOM: Recent chart about the expected impending Ruapehu lahar in The Dominion Post newspaper, New Zealand. A lahar is a huge mudflow or mudslide triggered by a volcanic eruption or by the unstable geology at the top of an active volcano. The damage and deaths are caused not by fire, lava or hot toxic gas, but by huge volumes of fast-descending mud. Lahars are one of the most common ways that volcanos kill people. In places with an advanced scientific and civil-defense infrastructure, potential lahars can be predicted, and effective and automatic early warning systems can be established. Ruapehu went lahar in 1953, wiped out a railroad bridge, swept a passenger train into a lake, and killed 151 people. This time the lahar killed nobody. Lahar sweeps down New Zealand mountain Sunday 18 March 2007 4:59 GMT (1st Lead) Wellington, New Zealand -- A massive lahar, or volcanic mudflow, swept down New Zealand's 2,797-metre high Mount Ruapehu on Sunday after its steaming crater lake burst its banks releasing thousands of tonnes of rock-filled water. It had long been expected and police and civil defence officials said alarms and safety system installed after a similar lahar 54 years ago which killed 151 people on a train when a rail bridge was swept away, had worked perfectly. 
They said the lahar, confined by a new stopbank, had kept to its expected course down the mountain into the Whangaehu River valley and past the village of Tangiwai, near the rail bridge, without incident. Police, alerted by a series of automatic alarms monitoring the crater lake's temperature and level, closed all roads in the area, including the highway between the capital Wellington and the country's biggest city Auckland, and stopped trains on the main trunk line. Hundreds of motorists and train passengers were stranded but officials said the lahar had not reached the road, nobody was hurt and no settlements had been affected. The lahar kept to its predicted path eventually moving out to the sea. Civil Defence Minister Rick Barker said bad weather over the weekend had fortuitously kept hikers and climbers off the mountain, an active volcano that is the North Island's highest peak. Scientists had been closely monitoring the 17-hectare crater lake, which sits about 250 metres below the summit of Mount Ruapehu, since January when seeping water threatened to sweep away the rim. Weathermen said extremely heavy rain had fallen on the mountain for more than three hours which probably accounted for the rising lake level. The Department of Conservation had earlier predicted that a lahar would travel at about 21 kilometres an hour down the mountain and a spokesman described Sunday's event as 'moderate.' The National Crisis Centre in Wellington was activated and officials said the lahar emergency response plan had worked as expected. © 2007 dpa - Deutsche Presse-Agentur The Dominion Post (daily newspaper, New Zealand) Tuesday 2 January 2007 Dam on the brink of bursting by Emily Watt Mt Ruapehu's crater lake is at a record high level, and the dam wall is beginning to erode. Scientists say if the lake keeps rising as predicted, the dam could blow by March. They say it is a case of "not if, but when" for the lahar, which is expected to burst the dam and flow down the Whangaehu River on the eastern side of the mountain to Tangiwai and out to the coast. Unlike the 1953 lahar that led to the Tangiwai rail disaster that killed 151 people, this one is unlikely to threaten residential areas but roads, bridges and rail lines could be affected. When a multimillion-dollar early warning system sounds, police will have 20 minutes to cordon off key roads, including State Highway 1. The system also triggers automatic road and rail barriers. A 300-metre concrete barrier has been built to prevent the flow entering the Waikato Stream and Tongariro River; the road bridge on State Highway 49 has been raised and strengthened; and riverbanks have been cleared of pine trees to lessen the risk of their being wedged against the bridge. Last Friday, Conservation Department staff measured the lake at a record high of 2.8 metres below the top of the dam, and there is evidence of the dam seeping at up to 10 litres a second. Department earth scientist Harry Keys said that marked the beginning of the dam's eroding. The lower the lake levels when this happens, the less water that escapes. "The sooner it happens the better," Dr Keys said. "This needs to be resolved. It's an issue hanging over the local community." The lahar warning level remains at two, with a 1 to 2 per cent chance of an immediate lahar. There is also a wave hazard as chunks of the ice cliff fall into the lake, causing water to slosh on to the top of the dam. A large wave could spill over the dam and flow down the mountain. 
The lahar threat becomes more significant when the lake rises another 0.8 metres, which could be in two weeks, but more likely by February. © Fairfax New Zealand Limited 2007. All rights reserved. leave Earth, ride a sightseeing helicopter above the surface of Mars Click and good things will probably happen. A still photograph of the Iani Chaos region of Mars taken in October 2004 by the High Resolution Stereo Camera aboard the European Space Agency's orbital probe Mars Express. Do you have just a few minutes? I'd love to take you completely off Planet Earth and show you an incredible movie -- you'll float above the surface of Another Planet, as if you were in a sightseeing helicopter over the Grand Canyon or Niagara Falls. Drift above the Iani Chaos and Ares Vallis regions of our neighbor planet Mars. This movie was produced using images from the High Resolution Stereo Camera (HRSC) on board [the European Space Agency's] Mars Express spacecraft. Its first part shows a simulated flight over the upper reaches of Ares Vallis, a large outflow channel on Mars, and parts of its source region, Iani Chaos. I think I've seen movies like this on other blogs. Can I filch a movie like this and put it on Vleeptron? How do I do it? If you know how, Leave A Comment. I think it's another of my many PEBKAC problems.
CommonCrawl
Predictors of wealth-related inequality in institutional delivery: a decomposition analysis using Nepal multiple Indicator cluster survey (MICS) 2019 Umesh Prasad Bhusal ORCID: orcid.org/0000-0001-9331-60281,2 Inequality in maternal healthcare use is a major concern for low-and middle-income countries (LMICs). Maternal health indicators at the national level have markedly improved in the last couple of decades in Nepal. However, the progress is not uniform across different population sub-groups. This study aims to identify the determinants of institutional delivery, measure wealth-related inequality, and examine the key components that explain the inequality. Most recent nationally representative Multiple Indicator Cluster Survey (MICS) 2019 was used to extract data about married women (15-49 years) with a live birth within two years preceding the survey. Logistic regression models were employed to assess the association of independent variables with the institutional delivery. The concentration curve (CC) and concentration index (CIX) were used to analyze the inequality in institutional delivery. Wealth index scores were used as a socio-economic variable to rank households. Decomposition was performed to identify the determinants that explain socio-economic inequality. The socio-economic status of households to which women belong was a significant predictor of institutional delivery, along with age, parity, four or more ANC visits, education status of women, area of residence, sex of household head, religious belief, and province. The concentration curve was below the line of equality and the relative concentration index (CIX) was 0.097 (p < 0.001), meaning the institutional delivery was disproportionately higher among women from wealthy groups. The decomposition analysis showed the following variables as the most significant contributor to the inequality: wealth status of women (53.20%), education of women (17.02%), residence (8.64%) and ANC visit (6.84%). To reduce the existing socio-economic inequality in institutional delivery, health policies and strategies should focus more on poorest and poor quintiles of the population. The strategies should also focus on raising the education level of women especially from the rural and relatively backward province (Province 2). Increasing antenatal care (ANC) coverage through outreach campaigns is likely to increase facility-based delivery and decrease inequality. Monitoring of healthcare indicators at different sub-population levels (for example wealth, residence, province) is key to ensure equitable improvement in health status and achieve universal health coverage (UHC). Maternal health is a priority public health issue, well reflected in global development agendas. Reducing preventable maternal deaths was one of the key targets of millennium development goals (MDGs). It continues to be a target of Sustainable Development Goals (SDGs). As per a global estimation done in 2017, maternal mortality ratio (MMR) was 211 per 100,000 live births [1]. By 2030, the target of SDGs is to reduce it to less than 70 per 100,000 live births [2]. Despite about a 38% reduction in global MMR since 2000, high maternal deaths is still a concern, particularly in low-and middle-income countries (LMICs), signalling the inequality in progress towards maternal health [1]. MMR is 40 times higher in the low-income countries compared to that of Europe and 60 times higher than that of Australia and New Zealand [1]. 
South Asia accounts for nearly one in every five global maternal deaths [1]. Low access to and utilization of health services during pregnancy and childbirth such as antenatal care (ANC), institutional delivery and skilled birth attendants (SBAs) are key factors responsible for a higher number of maternal deaths in this region [3]. These deaths, to a larger extent, result from complications during delivery such as haemorrhage, sepsis, unsafe abortion, obstructed labour, and hypertensive disorders that could be prevented by switching from home to institutional delivery [1, 3,4,5,6]. Skilled attendance at delivery in a hygienic environment and timely access to emergency care abate the risk of mortality or serious complications for both mother and newborn [7]. Equity in access to and utilization of healthcare services has received increased attention lately. It is one of the health system goals highlighted in the health system framework proposed by the World Health Organization (WHO) in 2007 [8, 9], and a crucial element of universal health coverage (UHC) embodied in the SDGs [2, 10]. However, inequality in the distribution of access to and utilization of maternal health services both between and within countries continues to be a major concern for LMICs [7, 11, 12]. Studies from different countries have shown that maternal health service utilization is disproportionately higher among women from wealthier households compared to their poorer counterparts; those living in an urban area compared to rural counterparts; those with higher education compared to non-educated counterparts; and those belonging to accessible geographical areas compared to remote counterparts [3, 6, 7, 13,14,15,16,17,18]. Different socio-economic and demographic factors interact with each other and aggravate the situation of inequality. The maternal health indicators at the national level have markedly improved in the last couple of decades in Nepal. The percentage of women aged 15-49 years who delivered in health institutions increased from 8 in 1996 to 57 in 2016 [19]. Similarly, the percentage of women receiving at least four antenatal care (ANC) visits increased from 14 in 2001 to 69 in 2016 [19]. The maternal mortality ratio (MMR) decreased from 539 maternal deaths per 100,000 live births to 239 between 1996 and 2016 [19]. This progress is attributed to a mix of supply- and demand-side financing strategies introduced by the Government of Nepal (GoN) since the 1990s [20,21,22,23,24]. Establishment of birthing centres (BCs) and basic/comprehensive emergency obstetric centres (BEOCs/CEOCs) in remote and rural areas; skilled birth attendant training for nursing staff and doctors of BCs and EOCs; expansion of blood transfusion services; and strengthening of referral services are key examples of supply-side financing. Likewise, the maternity incentive scheme introduced in 2005 (transport incentive to mothers who deliver in health facilities) and revised subsequently in 2006 (user fees removed in all facilities in 25 districts with a low human development index), 2009 (user fees removed nationwide and the scheme renamed the Aama program), and 2012 (the incentive for completing the recommended four ANC visits followed by institutional delivery, introduced in 2009, merged with the Aama program) is an example of demand-side financing. However, the progress in maternal health in Nepal has not been uniform across population sub-groups defined by geography and socio-economic status.
The evidence shows that the investment made in maternal health disproportionately favours women belonging to: an urban area, educated group, wealthy households, privileged ethnic group [25]. Despite the focus of GoN towards enhancing equitable distribution and utilization of health services through policies and sector strategies, progress in narrowing the socio-economic inequality was not uniform across seven provinces in Nepal as demonstrated by the further analyses of nationally representative household surveys conducted before 2016 [4, 20, 24, 26]. GoN is committed to UHC and SDGs to be achieved by 2030. Hence, measuring health service utilization from the equity perspective using the most recent survey data is essential to ensure fair progress across population sub-groups. It is also of paramount importance to analyze the determinants of inequality so that evidence-based policy intervention could be taken by the policymakers. There is a dearth of studies that examine the determinants of inequality in maternal health in Nepal. The objective of this paper is three-fold: (i) to analyze the determinants of institutional delivery in Nepal using the most recent nationally representative household survey; (ii) to measure the socio-economic inequality in the use of institutional delivery services; (iii) to identify the main components that explain socio-economic inequality in institutional delivery through decomposition analysis. Study design and setting This study analyzed the Multiple Indicator Cluster Survey (MICS) 2019 data set. The survey was conducted in Nepal by the Central Bureau of Statistics (CBS) in technical and financial support from UNICEF. MICS is a nationally representative cross-sectional survey that aims to monitor the situation of women and children by capturing the information on health, education, social protection, environment, domestic violence along with the socio-economic, demographic and geographic characteristics at the individual and household level. The sampling frame of the Nepal MICS 2019 was based on the National Population and Housing Census 2011. The frame consisted of a complete list of all census wards created in 2011 and updated in 2018 to account for the current administrative structure of Nepal. The survey employed a multistage, stratified cluster probability sampling design to establish a representative sample of households at the national and province level. Within each province, the urban and rural areas were defined as the main sampling strata. The sample of households was selected in the following stages: (i) within each stratum, a specified number of census enumeration areas (EAs) or clusters were selected systematically with probability proportional to size (then listing of the household was done for the selected EAs) (ii) the sample of households was selected from the sampled EAs. In total, 25 households were selected from each sampled EA through a systematic random sampling method. For this round of survey, a total of 512 EAs and 12,800 households were selected. Out of which; 12,655 households, 14,805 women (15-49 years), and 5501 men (15-49 years) were successfully interviewed. Details of the MICS design and methodology are described elsewhere [27]. This study used data from married women (15-49 years) who had a live birth within the two years preceding the survey. In case of multiple births by the selected women within the two years, the data analysis was conducted for their most recent live birth. 
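To make the two-stage selection described above more concrete (systematic probability-proportional-to-size selection of census enumeration areas, followed by a systematic sample of 25 households from each selected EA's updated listing), here is a rough Python sketch. This is not the CBS sampling code; the frame size, EA sizes and random seed are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2019)

# Hypothetical sampling frame: household counts for 2,000 enumeration areas (EAs).
ea_sizes = rng.integers(80, 400, size=2000)

def systematic_pps(sizes, n_select, rng):
    """Systematic selection of EAs with probability proportional to size."""
    cum = np.cumsum(sizes)
    step = cum[-1] / n_select
    points = rng.uniform(0, step) + step * np.arange(n_select)
    return np.searchsorted(cum, points)          # indices of the selected EAs

def systematic_households(n_listed, n_select, rng):
    """Systematic random sample of households from one EA's listing."""
    step = n_listed / n_select
    return np.floor(rng.uniform(0, step) + step * np.arange(n_select)).astype(int)

selected_eas = systematic_pps(ea_sizes, n_select=512, rng=rng)   # 512 EAs, as in MICS 2019
sample = {int(ea): systematic_households(ea_sizes[ea], 25, rng) for ea in selected_eas}
```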
A total of 1936 women (unweighted count = 2500) were eligible to be included in this study. Selection of the study population is summarized in a flow chart (Fig. 1). Fig. 1 Flow chart showing selection of the study population (Nepal MICS 2019). Study variables The outcome variable used in this study was institutional delivery. It was categorized as "1" if women delivered their baby at a health facility (either public or private) and "0" if women delivered their baby at home. The independent or explanatory variables included in this study are: age of women at the time of survey, number of births (parity), education status of women, exposure to media, four or more ANC visits for the most recent live birth, education status of household head, sex of household head, religion, ethnicity, area (rural and urban), province and wealth index quintile. Age of women was categorized into four groups (1: 15-19 years; 2: 20-29 years; 3: 30-39 years; 4: 40-49 years). Religion was broadly categorized as Hindu and non-Hindu. More than 100 castes recorded during the survey were re-classified into four caste/ethnic groups based on the Population Monograph of Nepal 2014 [28] (1: Brahmin, Chhetri and Madhesi; 2: Janajati from mountain, hill, and terai (plain southeast belt), and Newar; 3: Dalits from mountain, hill and terai, and Muslims; 4: the rest (Marwadi, Bangali, etc.) as others). Education status of women and of the household head was categorized into four groups (0: without formal education; 1: primary education (grade one to five); 2: secondary education (grade six to ten); 3: higher secondary education and above (grade 11, 12 and above)). Exposure to media was classified based on whether women were exposed to at least one source of mass media (0: no exposure; 1: limited exposure, where women either listen to radio or watch TV or read a magazine/newspaper less than once a week; 2: exposure, where women either listen to radio or watch TV or read a magazine/newspaper at least once a week to almost every day). The wealth index is a composite indicator of wealth and is commonly used as a proxy measure of the socio-economic status of a household. We used the wealth index available in the Nepal MICS 2019 dataset. To construct the wealth index, principal components analysis was performed using information on the ownership of consumer goods, dwelling characteristics, water and sanitation, and other assets and durables that are related to the household's wealth [27]. Since the wealth index provides an ordinal interpretation, it was used as a ranking variable for households. Descriptive and regression analyses The characteristics of the women included in this study were described using frequency and percentage. Bivariate and multivariate logistic regression models were used to assess the association of independent variables with institutional delivery. Stata 16 was used for the statistical analysis. The complex survey design was declared by using the svyset command to account for sampling weights, clustering and stratification in the sampling design. Variance Inflation Factors (VIFs) were used to examine multicollinearity among covariates before building regression models. All covariates had VIFs less than 3.5 (maximum: 3.37; minimum: 1.06; average: 1.82). The details of the VIFs are presented in Supplementary Table 1. A test for specification error [29] was done to confirm the assumption that the logit of the outcome variable is a linear combination of the independent variables.
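The published analysis was run in Stata 16 with the svy machinery; as a rough, language-agnostic illustration of the weighted model fit and the VIF screen described above, a minimal Python sketch follows. All column names and data are synthetic, and treating the survey weights as frequency weights reproduces weighted point estimates only, not the design-based (clustered, stratified) standard errors that svyset provides.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic stand-in for the analysis file: one row per woman, with the binary
# outcome, a few covariates, and a sampling weight. Everything here is made up.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "urban": rng.integers(0, 2, n),
    "four_plus_anc": rng.integers(0, 2, n),
    "wealth_score": rng.normal(size=n),
    "sample_weight": rng.uniform(0.5, 2.0, n),
})
logit = -0.5 + 0.6 * df["urban"] + 0.9 * df["four_plus_anc"] + 0.8 * df["wealth_score"]
df["institutional_delivery"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

covariates = ["urban", "four_plus_anc", "wealth_score"]
X = sm.add_constant(df[covariates].astype(float))

# Multicollinearity screen: VIF of each covariate (constant excluded).
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=covariates,
)

# Weighted logit: point estimates only; not a design-based (svy-style) analysis.
fit = sm.GLM(df["institutional_delivery"], X,
             family=sm.families.Binomial(),
             freq_weights=df["sample_weight"]).fit()
ci = fit.conf_int()
or_table = pd.DataFrame({"adjusted_OR": np.exp(fit.params),
                         "ci_low": np.exp(ci[0]),
                         "ci_high": np.exp(ci[1])})
print(vif.round(2))
print(or_table.round(2))
```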
The specification error test involved two steps: (i) whether logit is the correct function to use in this regression model; and (ii) whether all the relevant (but no extraneous) variables are included. The Stata command linktest was employed to test for specification error. The output of the linktest command is provided in Supplementary Table 2. A goodness-of-fit test accounting for the survey design [30] was used to assess the fit of the multivariate logistic regression model. There was no evidence of lack of fit (F-adjusted test statistic: 1.5; p-value: 0.144). All analyses were weighted. Inequality measurement The concentration curve (CC) and concentration index (CIX) in their relative formulation (with no correction) were used to analyze the inequality in the use of health services (institutional delivery) across socio-economic characteristics of the population (women) [31]. The CIX presented in this paper corresponds to horizontal inequity, since every woman in the study was assumed to have an equal need for institutional delivery. While producing the CC, the cumulative proportion of women ranked by wealth index score (poorest first) was plotted on the x-axis against the cumulative proportion of institutional delivery on the y-axis. The 45-degree line from the origin represents perfect equality. If the CC overlaps with the line of equality, use of institutional delivery is equal among women. However, if the CC lies below (above) the line of equality, then inequality in the use of institutional delivery exists and use is concentrated among women of high (low) socio-economic status. The further the CC lies from the line of equality, the greater the degree of inequality. To quantify the magnitude of wealth-related inequality, the CIX was calculated. The CIX is defined as twice the area between the line of equality and the CC [31]. The following are the advantages of using the CIX as a measure of inequality in healthcare: it takes the socio-economic dimension of healthcare inequalities into account, since individuals are classified according to their socio-economic status instead of their health status; it captures the experience of the whole population; and it is sensitive to changes in the population distribution across socio-economic groups [15]. The CIX takes a value between −1 and +1. When institutional delivery is equally distributed across socio-economic groups, the CIX takes the value of 0. A positive value of the CIX implies that the use of institutional delivery is concentrated among the higher socio-economic groups (pro-rich). Conversely, a negative value of the CIX suggests that the use of institutional delivery is concentrated among the lower socio-economic groups (pro-poor). The calculation of the CIX was done by using the "convenient covariance" formula described by O'Donnell et al. [31], as shown in eq. 1 below. $$CIX=\frac{2}{\mu}\operatorname{cov}\left(h,r\right) \tag{1}$$ Here h is the health variable of interest, μ is its mean, and r = i/N is the fractional rank of individual i in the living standards distribution, with i = 1 for the poorest and i = N for the richest. The user-written Stata commands lorenz [32] and conindex [33] were used to produce the CC and measure the CIX, respectively (an illustrative re-implementation of eq. 1 is sketched below). Decomposition of CIX The decomposition of the relative CIX was performed to calculate the portion of inequality that is due to inequality in the underlying determinants. The technique explained by Wagstaff et al. [34] and O'Donnell et al. [31] was followed for the analysis and interpretation of results.
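As a concrete illustration of eq. (1), the following minimal Python sketch computes the relative CIX with the convenient-covariance formula and the coordinates of the concentration curve. It ignores the survey weights and is not the lorenz/conindex implementation used in the paper; the data in the quick check are made up.

```python
import numpy as np

def concentration_index(h, ses_score):
    """Relative CIX via eq. (1): CIX = (2 / mu) * cov(h, r), where r = i/N is the
    fractional rank in the wealth distribution, poorest first. Unweighted sketch;
    the published estimate also applies the survey weights."""
    h = np.asarray(h, dtype=float)[np.argsort(ses_score)]
    r = np.arange(1, h.size + 1) / h.size
    return 2.0 * np.cov(h, r, bias=True)[0, 1] / h.mean()

def concentration_curve(h, ses_score):
    """CC coordinates: cumulative share of women (poorest first) on x,
    cumulative share of institutional deliveries on y."""
    h = np.asarray(h, dtype=float)[np.argsort(ses_score)]
    x = np.arange(1, h.size + 1) / h.size
    y = np.cumsum(h) / h.sum()
    return x, y

# Quick check on made-up data: delivery correlated with wealth gives CIX > 0.
rng = np.random.default_rng(1)
wealth = rng.normal(size=5000)
delivery = (wealth + rng.normal(size=5000) > 0).astype(float)
print(round(concentration_index(delivery, wealth), 3))   # positive => pro-rich
```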
The contribution of each individual determinant of institutional delivery to the overall wealth-related inequality was calculated as the product of the sensitivity of institutional delivery with respect to the determinant (elasticity) and the degree of wealth-related inequality in that determinant (the CIX of the determinant). The part of the CIX not explained by the determinants was presented as the residual. The ADePT (version 6) software platform for automated economic analysis, developed by the World Bank, was used for the required calculations. Since institutional delivery is a binary outcome variable, the non-linear model was specified while conducting the analysis. Selection of variables for the decomposition of the CIX was based on the results of the multivariate logistic regression (statistical significance), policy relevance and a literature review of empirical studies [15, 17, 35]. Descriptive summary Table 1 shows the descriptive statistics for socio-economic and demographic characteristics of women aged 15-49 years disaggregated by the place of delivery (home delivery versus institutional delivery). Overall, out of 1936 women, about 78.2% delivered in a health institution and 21.8% delivered at home. Most of the women in this study belonged to the age group 20-29 years, had one child, did four or more ANC visits, had secondary education, were exposed to mass media, resided in urban areas, had a male household head, were from the upper ethnic group, and were Hindu. Similarly, most of the women had a household head without formal education, belonged to Province 2, and were from the poorest wealth quintile. Table 1 Socio-economic and demographic characteristics of women by place of delivery (N = 1936) The proportion of home delivery and institutional delivery was similar across age groups. Women with one or two births, four or more ANC visits, formal education, exposure to mass media, and urban residence were more likely to deliver in a health institution. A higher proportion of women belonging to the upper caste group and the Hindu religion had institutional delivery compared to Dalit and non-Hindu women. The education status of the household head was negatively associated with home delivery. Institutional delivery was concentrated more in Bagmati Province, followed by Lumbini Province, Province 2, and Province 1. In contrast, home delivery was concentrated in Province 2, followed by Lumbini Province and Province 1. Likewise, institutional delivery was distributed more in richer quintiles in comparison to poorer counterparts. Except for age of women, there was strong evidence of an association between socio-economic and demographic characteristics of women and place of delivery (demonstrated by the p-value for the chi-square test, Table 1). The map of Nepal showing province-wise status of institutional delivery as a percentage of total deliveries in that province is shown in Fig. 2. Fig. 2 Map of Nepal showing province-wise percentage of institutional delivery. The map was created using QGIS 3.16; the shapefile was accessed from a publicly available source. Results from the regression model Table 2 presents the estimates and the corresponding 95% confidence intervals (CI) for the bivariate and multivariate regression models as unadjusted odds ratios (OR) and adjusted ORs, respectively.
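The contribution formula just described (marginal effect × mean of the determinant, scaled by the mean outcome, times the determinant's own CIX) can be sketched in a few lines. This is only an outline of the Wagstaff-type calculation, not the ADePT routine; the marginal effects in the commented example are hypothetical, and the function reuses concentration_index() from the previous sketch.

```python
import numpy as np

def decompose_cix(X, y, marginal_effects, ses_score):
    """Wagstaff-type decomposition sketch: contribution of determinant k is
    (beta_k * mean(x_k) / mean(y)) * C_k, where beta_k is an average marginal
    effect from the linearised logit and C_k the concentration index of x_k.
    Whatever is left over is reported as the residual."""
    y = np.asarray(y, dtype=float)
    total = concentration_index(y, ses_score)
    contributions = {}
    for k, beta in marginal_effects.items():
        xk = np.asarray(X[k], dtype=float)
        contributions[k] = beta * xk.mean() / y.mean() * concentration_index(xk, ses_score)
    residual = total - sum(contributions.values())
    shares = {k: 100 * c / total for k, c in contributions.items()}
    return total, contributions, shares, residual

# Example call with hypothetical marginal effects, reusing df from the earlier sketch:
# total, contrib, shares, resid = decompose_cix(
#     df, df["institutional_delivery"],
#     {"urban": 0.08, "four_plus_anc": 0.12, "wealth_score": 0.10},
#     df["wealth_score"])
```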
Both the bivariate and multivariate analyses showed that women who had completed four or more ANC visits, attained higher education, resided in urban areas, followed the Hindu religion, or belonged to wealthier quintiles were more likely to deliver in a health institution in comparison to their respective counterparts. Conversely, lower odds of institutional delivery were found for multiparous women, those living in male-headed households, and women belonging to Province 2, in comparison to the respective reference groups. Some variables (exposure to mass media, ethnicity, education status of the household head, belonging to Bagmati, Gandaki or Karnali Province) that showed a statistically significant association with institutional delivery in the bivariate analyses did not show such an association in the multivariate analysis. Likewise, a few associations were apparent only in the multivariate models (age of women, belonging to Sudurpaschim Province). Table 2 Determinants of institutional delivery in Nepal (MICS 2019), N = 1936 Women in the age group 30 to 39 years were about two times (adjusted OR = 2.39; 95% CI: 1.29-4.42) more likely to deliver in a health institution compared to those in the age group 15 to 19 years. Similarly, women in the age group 40 to 49 years were about two and a half times (adjusted OR = 2.66; 95% CI: 1.04-6.78) more likely to deliver in a health institution compared to those in the age group 15 to 19 years. Women who had already delivered two, three, and four or more children were significantly less likely to deliver in a health institution compared to those who had only one child ever born. Women who had received four or more antenatal care (ANC) visits were nearly two and a half times more likely to deliver in a health institution (adjusted OR = 2.54; 95% CI: 1.84-3.50) compared to those with fewer ANC visits. Women with higher secondary education or above were nearly five times more likely to deliver in a health institution (adjusted OR = 5.18; 95% CI: 2.39-11.23) compared to those without formal education. Similarly, women residing in urban areas had greater odds of delivering in a health institution (adjusted OR = 1.80; 95% CI: 1.29-2.52) compared to those living in rural settings. However, women from male-headed households were less likely to deliver in a health facility (adjusted OR = 0.68; 95% CI: 0.47-0.98) compared to female-headed households. Regarding religion, Hindu women were more likely to deliver in a health institution (adjusted OR = 1.69; 95% CI: 1.16-2.46) compared to non-Hindu women. Similarly, women from Sudurpaschim Province were more likely to deliver in a health institution (adjusted OR = 2.13; 95% CI: 1.06-4.29) compared to those from Province 1. However, women from Province 2 were less likely to deliver in a health institution. Women belonging to higher wealth index quintiles had greater odds of delivering in a health institution compared to the poorest quintile. Women from the second, middle, richer and richest wealth index quintiles were more than two times, four times, five times and seven times more likely, respectively, to deliver in a health institution compared to the poorest wealth quintile (reference). Results from the measures of inequality Figure 3 depicts the average institutional delivery (with 95% confidence interval) in each wealth index quintile as a share of total deliveries in that quintile. Just under 60% of women in the poorest wealth quintile delivered in health institutions compared to about 95% in the richest counterpart.
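The quintile-level rates plotted in Fig. 3 can be tabulated along these lines; the sketch continues the synthetic data frame from the regression example above, with hypothetical column names standing in for the MICS wealth quintile, outcome and weight variables.

```python
import numpy as np
import pandas as pd

# Reusing the synthetic frame `df` from the regression sketch; in the real data
# the ranking variable would be the MICS wealth index score / quintile.
df["wealth_quintile"] = pd.qcut(
    df["wealth_score"], 5,
    labels=["poorest", "second", "middle", "richer", "richest"])

def weighted_rate(g):
    return np.average(g["institutional_delivery"], weights=g["sample_weight"])

rates = df.groupby("wealth_quintile", observed=True).apply(weighted_rate)
print(rates.round(3))   # should rise from the poorest to the richest quintile
```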
The graph demonstrates that institutional delivery increases monotonically in moving from women in the poorest wealth quintile to the richest wealth quintile. Figure 4 shows the inequality in institutional delivery by wealth status. Since the concentration curve is below the line of equality, institutional delivery was disproportionately higher among women from wealthy groups. A positive estimated relative CIX of 0.097 (standard error: 0.008; p < 0.001) indicates that institutional delivery was concentrated among the wealthier women in comparison to their poorer counterparts. Fig. 3 Institutional delivery over wealth index quintiles. Fig. 4 Concentration curve for institutional delivery against wealth rank. Decomposition of the relative concentration index (CIX) Table 3 presents the results of the decomposition analysis of the relative CIX, used to ascertain the contributions of different determinants towards the wealth-related inequality in institutional delivery. The major contribution to the inequality was from the wealth status of the household the woman belongs to (53.20%), followed by education (17.02%), area of residence (8.64%), and ANC visit (6.84%). The residual contribution (13.70%) represents the amount of wealth-related inequality not explained by the determinants used in the analysis. Table 3 Decomposition of concentration index (CIX) Discussion This study analyzed the determinants of institutional delivery in Nepal using the most recent MICS 2019. The odds of Nepalese women giving birth in health institutions with respect to their socio-economic and demographic characteristics were measured. Further, the wealth-related inequality in institutional delivery was calculated, along with a decomposition analysis to find out the key determinants that explain the inequality [31]. The study found that age of women, parity, four or more ANC visits, education status of women, area of residence, sex of household head, religious belief, province, and wealth index quintile were significant determinants of institutional delivery. Institutional delivery was disproportionately higher among women belonging to wealthy households. The decomposition of the concentration index showed that the wealth-related inequality was explained mostly by household wealth, education status of women, urban residence, and ANC visits. The odds of institutional delivery increased with the age of women. Women aged 30 years and above were more than two times more likely to have an institutional delivery compared to those aged 15-19 years. This finding corroborates a previous study that analyzed first-order births in 34 countries of sub-Saharan Africa and found that older age at birth was associated with significantly higher odds of facility-based delivery [36]. The finding from this study also aligns well with a study from Bangladesh [37]. However, a non-significant association was found in a few studies from Nepal, Pakistan and Ethiopia [3, 38, 39], and even a negative association was found in a previous study from Nepal [40]. Further studies are needed to investigate the association between age of women and institutional delivery. The likelihood of institutional delivery decreased with an increase in parity. This result supports findings from similar studies conducted in developing countries that have shown that experienced mothers were less likely to opt for facility-based delivery [3, 7, 39, 41].
One possible explanation for the low uptake of institutional delivery among high-parity women is that women with a birth history may develop confidence from the knowledge and experience acquired from earlier pregnancies and are therefore less motivated to opt for services from health facilities [7]. The odds of institutional delivery were about two and a half times as high among women who had completed four or more ANC visits. A positive association between antenatal visits and facility-based delivery was found in previous studies conducted in Kenya, Ethiopia, Nepal, Bangladesh, and Pakistan [3, 4, 6, 37, 39,40,41,42]. The counselling on birth preparedness received from health workers during the antenatal visits could be one of the reasons. Women with higher education were significantly more likely to deliver in a health institution compared to women without formal education. Similar findings were obtained from studies conducted in other developing countries, establishing education as a significant determinant of facility-based delivery [3, 4, 14, 15, 38, 40, 41, 43]. This association could be attributed to the fact that educated women are more likely to have a better understanding of the risks associated with childbirth and the benefits of using skilled healthcare compared to uneducated women. This evidence implies that inequality in institutional delivery could be reduced with appropriate interventions targeted at educating women. To reduce the barriers to uptake of maternal health services, the GoN has been implementing the Birth Preparedness Package (BPP) through health workers and Female Community Health Volunteers (FCHVs) in the community since the early 2000s as part of the safe motherhood program [44]. The BPP educates pregnant women, their families, and communities to plan for normal pregnancy, delivery, and the postnatal period, and creates demand for healthcare through inter-personal communication using specially designed cards and flipcharts. So, more focused outreach education and awareness campaigns are key to further reducing the inequality in the use of maternal healthcare services between educated and uneducated mothers. A study conducted by Karkee et al. using the nationally representative dataset from 2011, however, found no significant association between education of women and institutional delivery [39]. Women from urban areas were nearly two times more likely to opt for institutional delivery compared to their rural counterparts. The findings from this study corroborate those of prior studies, which showed that urban residence of the mother was associated with an increase in the use of institutional delivery [3, 15, 17, 38, 39, 41, 45]. In general, women from urban areas have better access to the healthcare system due to better transport, well-equipped hospitals and a shorter distance between residence and health facility [15, 17]. Studies based in rural settings of Nepal have identified access to birthing facilities, perception regarding the quality of healthcare, lack of transportation, and poor infrastructure and equipment at birthing centres as key barriers to accessing facility-based delivery services [46,47,48]. Women from Province 2 were significantly less likely to deliver in a health institution compared to women from Province 1. A similar finding was obtained from a study using the nationally representative dataset of 2016 [40]. Province 2 lies in the terai (plain southeast belt) of Nepal and generally falls behind other provinces in terms of public health coverage indicators [20] and health infrastructure.
As per the evidence from nationally representative surveys, it had the lowest percentage of facilities providing normal vaginal delivery [49] and the lowest mean general health service readiness score [50]. It recorded the lowest annual change in the Human Development Index (HDI) since 1996 and had one of the lowest HDI values, 0.485 (national average = 0.522), in 2011 [51]. Focused measures that address this unique socio-economic positioning are urgently needed to bring the maternal health indicators of Province 2 on par with the national average. Institutional delivery increased monotonically in moving from women in the poorest wealth quintile to the richest wealth quintile. Analysis of the concentration curve and concentration index revealed a pro-rich inequity in institutional delivery. The result from this study is consistent with findings from similar studies showing that a better socio-economic condition of mothers was associated with an increase in the use of maternal healthcare services [7, 14,15,16,17,18, 39,40,41, 52, 53]. Women belonging to higher socio-economic groups have a better chance of visiting a health facility and, when required, paying for the expenses related to travel and medical care [15, 53]. Various studies indicate the potential reasons for the disproportionately lower coverage of maternal health services among women from lower socio-economic groups: direct and indirect costs related to healthcare, including travel expenses and opportunity costs; perceived quality of care in public facilities; and perceived importance of seeking formal healthcare during pregnancy and childbirth [7, 15, 16]. This indicates that the efforts of the government since the 1990s to address the barriers faced by Nepalese households, mainly the poor ones, through various supply- and demand-side financing are still insufficient. However, as observed in an earlier analysis conducted using four rounds of the Nepal Demographic and Health Survey (NDHS: 2001, 2006, 2011, 2016), the socio-economic inequality in institutional delivery between the socio-economic groups, measured by the relative CIX, has, on average, narrowed over this period [20]. The relative CIX values obtained from these four rounds of NDHS were 0.56, 0.48, 0.35 and 0.19, respectively [20]. The analysis presented in this paper using the data from MICS 2019 has shown that the relative CIX for institutional delivery has further narrowed to 0.097. So, the investment made by the GoN appears to be working, but it should be more focused on benefiting the lower socio-economic groups (in contrast to the current blanket approach with more emphasis on national targets) to further reduce the inequality gap. The decomposition analysis found that household wealth status contributes significantly towards the inequality in institutional delivery (53.2%), followed by women's education (17.02%), urban residence (8.64%) and ANC visit (6.84%). This implies that future policies and strategies need to be pro-poor and pro-rural and need to focus more on educating women and families for increased ANC uptake. The result of the decomposition analysis presented in this paper is consistent with similar studies from developing countries [15, 17, 18, 54]. A few variables that did not demonstrate a significant association with institutional delivery in this study showed a statistically significant association in studies conducted in other settings. Unlike Pulok et al. [7] and Ketemaw et al. [38], this study found no statistically significant association between media exposure and institutional delivery.
Similarly, unlike Atake [15], Pulok et al. [7] and Obiyan and Kumar [55], this study found no statistically significant association between the education status of the household head (usually the male/husband in the Nepalese context) and institutional delivery. More studies might be required to ascertain these associations. Strength of this study This study has used the most recent nationally representative household survey, conducted in 2019. Rigorous statistical methods were employed to calculate the odds of institutional delivery, controlling for relevant socio-economic and demographic variables. Further, this study adds to the current body of literature from Nepal by providing a composite measure of inequality using standard techniques and performing a decomposition analysis to identify the key determinants that explain the inequality in the use of institutional delivery in Nepal. Limitation of this study First, the list of independent variables included in this study may not be an exhaustive one. Variables such as the employment status of women, distance to the nearest institution with a birthing facility, costs (direct or indirect) associated with institutional delivery, and understanding of the importance of safe delivery could not be included in this analysis due to the unavailability of such data in this round of MICS. Second, since this study is cross-sectional, it could not establish any causal relationship between the variables under study and institutional delivery. Notwithstanding, this study has elicited empirical evidence on socio-economic inequality and its predictors regarding institutional delivery, which has policy relevance to countries with a socio-economic context similar to Nepal's. Conclusion Age of women, parity, four or more ANC visits, education status of women, area of residence, sex of household head, religious belief, province and household wealth were found to be important predictors of institutional delivery in Nepal. Institutional delivery was found to be disproportionately higher among women belonging to wealthy households. The decomposition of the concentration index showed that wealth-related inequality was explained mostly by the socio-economic status of the household, education status of women, residence, and ANC visit. Pro-poor strategies are urgently needed to further reduce the existing inequality between women belonging to different socio-economic groups. The strategies should focus on raising the education level of women, especially from the rural and relatively backward province (Province 2). Increasing antenatal care coverage through outreach campaigns is likely to increase facility-based delivery and reduce the gap between poor and wealthy women. Monitoring of healthcare indicators at different sub-population levels (for example wealth, residence, education, province) is key to ensure equitable improvement in health status and achieve universal health coverage (UHC) by 2030. Availability of data and materials Publicly available data were used that are accessible from the MICS website (https://mics.unicef.org/surveys) upon request.
ANC: Antenatal care Birthing centre BEOC: Basic emergency obstetric care BPP: Birth preparedness package CBS: Central Bureau of Statistics Concentration curve CEOC: Comprehensive emergency obstetric care CIX: Concentration index DHS: Demographic and health survey FCHV: Female community health volunteers GoN: HDI: LMIC: Low-and middle-income country MDG: Millennium development goal MICS: Multiple indicator cluster survey Maternal mortality rate SBA: Skilled birth attendant SDG: UHC: UNICEF: United Nations Children's Fund World Health Organization. Trends in maternal mortality 2000 to 2017: estimates by WHO, UNICEF, UNFPA, World Bank Group and the United Nations Population Division: executive summary [Internet]. 2019. Available from: https://apps.who.int/iris/handle/10665/327596 Nations U. THE 17 GOALS | Sustainable Development [Internet]. Department of Economic and Social Affairs. 2020 [cited 2021 Apr 25]. Available from: https://sdgs.un.org/goals Rahman MA, Rahman MA, Rawal LB, Paudel M, Howlader MH, Khan B, et al. Factors influencing place of delivery: Evidence from three south-Asian countries. PLoS one [Internet]. 2021;16(4):e0250012. Available from: https://doi.org/10.1371/journal.pone.0250012. Devkota B, Maskey J, Pandey AR, Karki D, Godwin P, Gartoulla P, et al. Determinants of home delivery in Nepal – a disaggregated analysis of marginalised and non-marginalised women from the 2016 Nepal demographic and health survey. Budhathoki SS, editor PLoS One [Internet] 2020 Jan 30 [cited 2021 Apr 25];15(1):e0228440. Available from: https://dx.plos.org/10.1371/journal.pone.0228440 Benova L, Macleod D, Radovich E, Lynch CA, Campbell OMR. Should i stay or should i go?: Consistency and switching of delivery locations among new mothers in 39 Sub-Saharan African and South/Southeast Asian countries. Health Policy Plan [Internet]. 2017 Nov 1 [cited 2021 Apr 25];32(9):1294–308. Available from: https://academic.oup.com/heapol/article/32/9/1294/4065273 Ketemaw A, Tareke M, Dellie E, Sitotaw G, Deressa Y, Tadesse G, et al. Factors associated with institutional delivery in Ethiopia: a cross sectional study. BMC Health Serv Res [Internet] 2020 Mar 31 [cited 2021 Mar 10];20(1):266. Available from: https://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-020-05096-7 Pulok MH, Sabah MNU, Uddin J, Enemark U. Progress in the utilization of antenatal and delivery care services in Bangladesh: where does the equity gap lie? BMC Pregnancy Childbirth [Internet]. 2016;16(1). Available from: https://doi.org/10.1186/s12884-016-0970-4. A new era for the WHO health system building blocks? | Health Systems Global [Internet]. [cited 2021 May 1]. Available from: https://healthsystemsglobal.org/news/a-new-era-for-the-who-health-system-building-blocks/ World Health Organization. Everybody business: strengthening health systems to improve health outcomes: WHO's framework for action. 2007. Universal Health Coverage [Internet]. [cited 2021 Mar 1]. Available from: https://www.who.int/health-topics/universal-health-coverage#tab=tab_1 Barros AJ, Ronsmans C, Axelson H, Loaiza E, Bertoldi AD, Frana GV, et al. Equity in maternal, newborn, and child health interventions in Countdown to 2015: A retrospective review of survey data from 54 countries. Lancet [Internet]. 2012 [cited 2021 Apr 26];379(9822):1225–33. Available from: www.thelancet.com Okoli C, Hajizadeh M, Rahman MM, Khanam R. Geographical and socioeconomic inequalities in the utilization of maternal healthcare services in Nigeria: 2003-2017. 
BMC Health Serv Res [Internet]. 2020 Sep 10 [cited 2021 Mar 10];20(1):849. Available from: https://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-020-05700-w Krishnamoorthy Y, Majella MG, Rajaa S. Equity in coverage of maternal and newborn care in India: Evidence from a nationally representative survey. Health Policy Plan [Internet] 2020 Jun 1 [cited 2021 Mar 10];35(5):616–23. Available from: https://academic.oup.com/heapol/article/35/5/616/5814880 Rahman M, Haque SE, Mostofa MG, Tarivonda L, Shuaib M. Wealth inequality and utilization of reproductive health services in the Republic of Vanuatu: insights from the multiple indicator cluster survey, 2007. Int J Equity Health 2011;10. Atake E. Socio-economic inequality in maternal health care utilization in sub-Saharan Africa: Evidence from Togo. Int J Health Plann Manage [Internet] 2021 [cited 2021 Mar 10];36(2):288–301. Available from: https://pubmed.ncbi.nlm.nih.gov/33000498/ Myint ANM, Liabsuetrakul T, Htay TT, Wai MM, Sundby J, Bjertness E. Inequity in the utilization of antenatal and delivery care in Yangon region, Myanmar: a cross-sectional study. Int J Equity Health [Internet] 2018 May 22 [cited 2021 Mar 9];17(1):63. Available from: https://equityhealthj.biomedcentral.com/articles/10.1186/s12939-018-0778-0 Kim C, Saeed KMA, Salehi AS, Zeng W. An equity analysis of utilization of health services in Afghanistan using a national household survey. BMC public health [Internet]. 2016;16(1):1–11. Available from: https://doi.org/10.1186/s12889-016-3894-z. Huda TM, Hayes A, Dibley MJ. Examining horizontal inequity and social determinants of inequality in facility delivery services in three south Asian countries. J Glob Health 2018;8(1). Ministry of Health, Nepal; New ERA; and ICF. 2017. 2016 Nepal demographic and health survey key findings. Kathmandu, Nepal: Ministry of Health Nepal. Sapkota VP, Bhusal UP, Acharya K. Trends in national and subnational wealth related inequalities in use of maternal health care services in Nepal: an analysis using demographic and health surveys (2001–2016). BMC Public Health. 2021;21(1):1–14. Ensor T, Bhatt H, Tiwari S. Incentivizing universal safe delivery in Nepal: 10 years of experience. Health Policy Plan. 2017;32(8):1185–92. Murray SF, Hunter BM, Bisht R, Ensor T, Bick D. Effects of demand-side financing on utilisation, experiences and outcomes of maternity care in low- and middle-income countries: a systematic review. BMC Pregnancy Childbirth 2014;14(1). Bhatt H, Tiwari S, Ensor T, Ghimire DR, Gavidia T. Contribution of Nepal's free delivery care policies in improving utilisation of maternal health services. Int J Heal Policy Manag. 2018;7(7):645–55. Mehata S, Paudel YR, Dariang M, Aryal KK, Lal BK, Khanal MN, et al. Trends and inequalities in use of maternal health Care Services in Nepal: strategy in the search for improvements. Biomed Res Int 2017;2017(2008). Ministry of Health, Nepal; New ERA; and ICF. 2017. Nepal demographic and health survey 2016. Kathmandu, Nepal: Ministry of Health, Nepal. MoHP. Inequalities in maternal health service utilization in Nepal: An analysis of routine and survey data. 2018. Central Bureau of Statistics (CBS), 2020. Nepal multiple Indicator cluster survey 2019, survey findings report. Kathmandu, Nepal: Central Bureau of Statistics and UNICEF Nepal [Internet]. Available from: http://www.unicef.org/statistics/index_24302.html Central Bureau of Statistics (CBS). Population Monograph of Nepal, Volume II (Social Demography). 2014. 
Lesson 3 Logistic Regression Diagnostics [Internet]. Stata Web Books Logistic Regression with Stata. 2014 [cited 2021 Oct 7]. p. 1–28. Available from: https://stats.idre.ucla.edu/stata/webbooks/logistic/chapter3/lesson-3-logistic-regression-diagnostics/ Archer KJ, Lemeshow S. Goodness-of-fit test for a logistic regression model fitted using survey sample data. Stata J Promot Commun Stat Stata [Internet] 2006 Feb 19;6(1):97–105. Available from: http://journals.sagepub.com/doi/10.1177/1536867X0600600106. O'Donnell O, van Doorslaer E, Wagstaff A, Lindelow M. Analyzing health equity using household survey data [Internet]. Analyzing health equity using household survey data. Washington, D.C.: World Bank Group; 2007. Available from: http://documents.worldbank.org/curated/en/633931468139502235/Analyzing-health-equity-using-household-survey-data-a-guide-to-techniques-and-their-implementation. Jann B. Estimating Lorenz and concentration curves. Stata J Promot Commun Stat Stata [Internet]. 2016 Dec 1;16(4):837–66. Available from: http://journals.sagepub.com/doi/10.1177/1536867X1601600403. O'Donnell O, O'Neill S, Van Ourti T, Walsh B. Conindex: estimation of concentration indices. Stata J Promot Commun Stat Stata [Internet]. 2016 Mar 19;16(1):112–38. Available from: http://journals.sagepub.com/doi/10.1177/1536867X1601600112. Wagstaff A, Bilger M, Sajaia Z, Lokshin M. Health equity and financial protection: streamlined analysis with ADePT software [Internet]. Washington, DC: The World Bank; 2011. Available from: https://openknowledge.worldbank.org/bitstream/handle/10986/2306/622580PUB0heal01476B0extop0id018459.pdf?sequence=1&isAllowed=y. Farewar F, Saeed KMA, Foshanji AI, Alawi SMK, Zawoli MY, Sayedi O, et al. Analysis of equity in utilization of health services in Afghanistan using a national household survey. J Hosp Manag Heal Policy 2020;4(May):34–34. Dunlop CL, Benova L, Campbell O. Effect of maternal age on facility-based delivery: analysis of first-order births in 34 countries of sub-Saharan Africa using demographic and health survey data. BMJ Open. 2018;8(4):1–9. Iftikhar ul Husnain M, Rashid M, Shakoor U. Decision-making for birth location among women in Pakistan: Evidence from national survey. BMC Pregnancy Childbirth [Internet]. 2018 Jun 14 [cited 2021 may 15];18(1):1–11. Available from: https://doi.org/10.1186/s12884-018-1844-8. Ketemaw A, Tareke M, Dellie E, Sitotaw G, Deressa Y, Tadesse G, et al. Factors associated with institutional delivery in Ethiopia: a cross sectional study. BMC Health Serv Res. 2020;20(1):1–6. Karkee R, Lee AH, Khanal V. Need factors for utilisation of institutional delivery services in Nepal: An analysis from Nepal Demographic and Health Survey, 2011. BMJ Open. 2014;4(3). Neupane B, Rijal S, GC S, Basnet TB. A multilevel analysis to determine the factors associated with institutional delivery in Nepal: further analysis of Nepal demographic and health survey 2016. Heal Serv Insights 2021;14. Shahabuddin ASM, De Brouwere V, Adhikari R, Delamou A, Bardaj A, Delvaux T. Determinants of institutional delivery among young married women in Nepal: Evidence from the Nepal Demographic and Health Survey, 2011. BMJ Open. 2017;7(4). Kitui J, Lewis S, Davey G. Factors influencing place of delivery for women in Kenya: An analysis of the Kenya demographic and health survey, 2008/2009. BMC Pregnancy Childbirth [Internet]. 2013 Feb 17 [cited 2021 May 11];13(1):1–10. Available from: http://www.biomedcentral.com/1471-2393/13/40 Say L, Raine R. 
A systematic review of inequalities in the use of maternal health care in developing countries: examining the scale of the problem and the importance of context. Bull World Health Organ [Internet] 2007 Oct [cited 2021 May 9];85(10):812–9. Available from: https://pubmed.ncbi.nlm.nih.gov/18038064/ McPherson RA, Khadka N, Moore JM, Sharma M. Are birth-preparedness programmes effective? Results from a field trial in Siraha District, Nepal. J Heal Popul Nutr [Internet]. 2006 Dec [cited 2021 May 12];24(4):479–88. Available from: /pmc/articles/PMC3001152/. Shabnam J, Gifford M, Dalal K. Socioeconomic inequalities in the use of delivery care services in Bangladesh : a comparative study between 2004 and 2007. Health (Irvine Calif). 2011;3(12):762–71. Shah R, Rehfuess EA, Paudel D, Maskey MK, Delius M. Barriers and facilitators to institutional delivery in rural areas of Chitwan district, Nepal: A qualitative study. Reprod Health [Internet]. 2018 Jun 20 [cited 2021 may 10];15(1):1–13. Available from: https://doi.org/10.1186/s12978-018-0553-0. Onta S, Choulagai B, Shrestha B, Subedi N, Bhandari GP, Krettek A. Perceptions of users and providers on barriers to utilizing skilled birth care in mid- and far-western Nepal: A qualitative study. Glob Health Action [Internet]. 2014 [cited 2021 may 10];7(1):24580. Available from: https://doi.org/10.3402/gha.v7.24580. Khatri RB, Dangi TP, Gautam R, Shrestha KN, Homer CSE. Barriers to utilization of childbirth services of a rural birthing center in Nepal: A qualitative study. PLoS One [Internet]. 2017 May 1 [cited 2021 may 10];12(5):e0177602. Available from: https://doi.org/10.1371/journal.pone.0177602. Aryal KK, Dangol R, Gartoulla P, Subedi GR. Health services availability and readiness in seven provinces of Nepal. DHS Further Analysis Reports [Internet] 2018 [cited 2021 May 11]. Available from: http://www.newera.com.np/. Acharya K, Paudel YR. General health service readiness and its association with the facility level indicators among primary health care centers and hospitals in Nepal. J Glob Heal Reports 2019;3. Dhungel S. Provincial comparison of development status in Nepal: an analysis of human development trend for 1996 to 2026. J Manag Dev Stud. 2018;28:53–68. Houweling TAJ, Ronsmans C, Campbell OMR, Kunst AE. Huge poor-rich inequalities in maternity care: an international comparative study of maternity and child care in developing countries. Bull World Health Organ [Internet]. 2007 Oct [cited 2021 May 9];85(10):745–54. Available from: https://pubmed.ncbi.nlm.nih.gov/18038055/ Zere E, Oluwole D, Kirigia JM, Mwikisa CN, Mbeeli T. Inequities in skilled attendance at birth in Namibia: A decomposition analysis. BMC Pregnancy Childbirth [Internet]. 2011 May 14 [cited 2021 May 9];11(1):1–10. Available from: http://www.biomedcentral.com/1471-2393/11/34 Goli S, Nawal D, Rammohan A, Sekher T V., Singh D. Decomposing the socioeconomic inequality in utilization of maternal health care services in selected countries of south Asia and sub-Saharan Africa. J Biosoc Sci [Internet]. 2018 [cited 2021 may 17];50(6):725–48. Available from: https://doi.org/10.1017/S0021932017000530. Obiyan MO, Kumar A. Socioeconomic Inequalities in the Use of Maternal Health Care Services in Nigeria: Trends Between 1990 and 2008. SAGE Open [Internet]. 2015 Nov 21 [cited 2021 May 9];5(4). 
Acknowledgements: The author would like to acknowledge the Multiple Indicator Cluster Surveys (MICS) programme for permission to access and use the dataset for this study. This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Author information: Umesh Prasad Bhusal, Public Health and Social Protection Professional, Kathmandu, Nepal; Melbourne School of Population and Global Health, The University of Melbourne, Melbourne, Victoria, Australia. UPB conceptualized and designed the study, extracted the data, conducted the literature review, performed the statistical analyses and interpretation of the results, prepared and revised the manuscript, and approved it for submission. Umesh Prasad Bhusal holds a Master of Public Health with a specialization in health economics and economic evaluation from The University of Melbourne, Australia. He has worked in the health and social protection sector in Nepal since 2009. His research interests are health and social protection systems in low- and middle-income countries, health equity analysis, health insurance and financing, and maternal and child health. Correspondence to Umesh Prasad Bhusal.
Ethics: This study is based on publicly available MICS datasets. Permission to access and use these datasets was obtained from the UNICEF/MICS website (http://mics.unicef.org/surveys), so no further ethical approval was necessary. The protocol for the survey was approved by the Central Bureau of Statistics (CBS) under the Statistical Act (1958) in September 2018. This Act enables CBS to implement surveys under the Government of Nepal's ethics protocol without involving an institutional review board (IRB); all procedures were therefore performed in accordance with relevant guidelines. During data collection, verbal consent was obtained from each respondent after a thorough introduction to the survey. All respondents were briefed about the voluntary nature of participation, and participants were assured that the information shared during the interview would be kept confidential and anonymous.
Competing interests: The author declares that there are no conflicts of interest regarding the publication of this paper. The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of the organization(s) with which the author is affiliated.
Additional file 1: Supplementary Table 1. Variance inflation factors (VIFs) of variables included in the multivariate logistic regression model. Supplementary Table 2. Results of the specification error test using the linktest command in Stata.
Citation: Bhusal, U.P. Predictors of wealth-related inequality in institutional delivery: a decomposition analysis using Nepal Multiple Indicator Cluster Survey (MICS) 2019. BMC Public Health 21, 2246 (2021). https://doi.org/10.1186/s12889-021-12287-2
Keywords: Institutional delivery
Poly(3-hexylthiophene-2,5-diyl) (P3HT) reactions with inorganic salts in aqueous solution

For my research I need, at least as a starting point, to know what kind of reactions occur at the interface of P3HT and an electrolyte. The electrolyte is a simple $\ce{KCl}$ solution, possibly with $\ce{HCl}$ or $\ce{KOH}$ added. Reactions of polythiophene with the same type of electrolyte will do, if there is no information on P3HT; the two molecules don't seem that different to me from the point of view of electrolytic reactions. What I really need is information about reaction kinetics, at least in the form of some reaction constants, but simple stoichiometric equations are still better than nothing at all, so I'm interested in those too. I have been searching for some time now, but I'm not a chemist and don't really know where to look to find something like this.

organic-chemistry electrolysis surface-chemistry pentavalentcarbon

Comment: Assuming the electrolyte is inert (e.g., won't oxidize or reduce the P3HT), then there are no likely reactions. I would look up electrochemical potentials of KCl and P3HT, but I don't expect much would happen. – Geoff Hutchison Feb 22 '16 at 21:11

Comment: @GeoffHutchison If you measure the characteristics of a transistor made by putting two electrodes under a P3HT thin film, dropping a drop of electrolyte on top, and inserting an electrode into the electrolyte, the characteristics of the transistor change drastically. These transistors are sensitive both to changes in pH and to the salt concentration. The sensitivity comes from capacitive modulation, but there is also a continuous drift in the characteristics that might be due to surface reactions. My first impression was also that it doesn't react, but the experiment seemingly shows otherwise. – user3237992 Feb 22 '16 at 21:55

Answer: I'll point to a very recent paper on oligothiophenes for solution-gated FET sensing that suggests there are no chemical reactions. (This article popped up in my feed - there are many others on solution-gating organic electronics.) $\alpha,\omega$-dihexyl-sexithiophene thin films for solution-gated organic field-effect transistors. Appl. Phys. Lett. 108, 073301 (2016). The authors look at many factors influencing the device performance:

"Finally, excellent transistor stability is confirmed by continuously operating the device over a period of several days, which is a consequence of the low threshold voltage of DH6T-based SGOFETs. Altogether, our results demonstrate the feasibility of high performance and highly stable organic semiconductor devices for chemical or biochemical applications."

The devices in this article use KCl and phosphate buffer, and the authors vary the pH and ionic strength. I haven't seen anyone suggest chemical reactions, and I'm not sure how they would occur. Certainly the oligo- or polythiophene could degrade over time, but most articles argue these are stable for long-term use. The authors do note a small drift on the time scale of days: "both changes may possibly be attributed to the passivation of trap states at the semiconductor-electrolyte interface by water molecules penetrating into the thin film".

I can't speak to the drift you see, but I'd make sure that:
- You aren't seeing evaporation of solvent over time (e.g., from device heating) that's changing your electrolyte concentration.
- You use a proper reference electrode.
- You try removing water via vacuum and repeating the experiment (i.e., you drive off any water penetrating the semiconductor film).

Certainly organic semiconductors are porous, and while mostly hydrophobic, it's not unreasonable to expect that after days of use some water may migrate into the film. – Geoff Hutchison

Comment: @GeoffHutchison Thank you very much. To be honest, I'm not doing the experiments but the simulations, and my experimentalist counterpart told me about this and said it would be nice to have a model of the drift. I'll definitely talk with her, and she promised to show me how she fabricates and measures these devices. However, she is no amateur, and I don't suppose she would report a drift that comes from experimental error. What might happen, however, is that the P3HT thin film itself degrades when operating the device (I read it in a Master's thesis, though, so it's not 100% certain). – user3237992 Feb 23 '16 at 19:58

Comment: @GeoffHutchison I didn't specify it in the question, that is true, but there is a fundamental difference between the transistor they made and ours: ours is solution processed, so the P3HT film is disordered, while in the article they used molecular beam deposition, which results in single crystals. I could also imagine that the disordered P3HT film is less stable than a single crystal (although right now I don't know why). Maybe not because of surface reactions then, but some internal change. Could this be possible? – user3237992 Feb 23 '16 at 20:02

Comment: @user3237992 - I'd be happy to consult more, but find my e-mail. Definitely, if you're doing simulations, see how they're making the devices. I would not be surprised if solution-deposited P3HT has differences - it's likely much more porous than the crystalline films. But I doubt it's from a chemical reaction - more likely some sort of water interpolation or rearrangement of the film. As I said, please send me an e-mail. – Geoff Hutchison Feb 23 '16 at 20:59
Dirichlet's and Thomson's principles for non-selfadjoint elliptic operators with application to non-reversible metastable diffusion processes (1701.00985) C. Landim, M. Mariani, I. Seo March 9, 2018 math.AP, cond-mat.stat-mech, math.PR We present two variational formulae for the capacity in the context of non-selfadjoint elliptic operators. The minimizers of these variational problems are expressed as solutions of boundary-value elliptic equations. We use these principles to provide a sharp estimate for the transition times between two different wells for non-reversible diffusion processes. This estimate permits to describe the metastable behavior of the system. Cryogenic Characterization of FBK RGB-HD SiPMs (1705.07028) C. E. Aalseth, F. Acerbi, P. Agnes, I. F. M. Albuquerque, T. Alexander, A. Alici, A. K. Alton, P. Ampudia, P. Antonioli, S. Arcelli, R. Ardito, I. J. Arnquist, D. M. Asner, H. O. Back, G. Batignani, E. Bertoldo, S. Bettarini, M. G. Bisogni, V. Bocci, A. Bondar, G. Bonfini, W. Bonivento, M. Bossa, B. Bottino, R. Bunker, S. Bussino, A. Buzulutskov, M. Cadeddu, M. Cadoni, A. Caminata, N. Canci, A. Candela, C. Cantini, M. Caravati, M. Cariello, M. Carlini, M. Carpinelli, A. Castellani, S. Catalanotti, V. Cataudella, P. Cavalcante, R. Cereseto, Y. Chen, A. Chepurnov, A. Chiavassa, C. Cicalò, L. Cifarelli, M. Citterio, A. G. Cocco, M. Colocci, S. Corgiolu, G. Covone, P. Crivelli, I. D'Antone, M. D'Incecco, M. D. Da Rocha Rolo, M. Daniel, S. Davini, A. De Candia, S. De Cecco, M. De Deo, G. De Filippis, G. De Guido, G. De Rosa, G. Dellacasa, P. Demontis, A. V. Derbin, A. Devoto, F. Di Eusanio, G. Di Pietro, C. Dionisi, A. Dolgov, I. Dormia, S. Dussoni, A. Empl, A. Ferri, C. Filip, G. Fiorillo, K. Fomenko, D. Franco, G. E. Froudakis, F. Gabriele, A. Gabrieli, C. Galbiati, P. Garcia Abia, A. Gendotti, A. Ghisi, S. Giagu, G. Gibertoni, C. Giganti, M. Giorgi, G. K. Giovanetti, M. L. Gligan, A. Gola, O. Gorchakov, A. M. Goretti, F. Granato, M. Grassi, J. W. Grate, G. Y. Grigoriev, M. Gromov, M. Guan, M. B. B. Guerra, M. Guerzoni, M. Gulino, R. K. Haaland, B. Harrop, E. W. Hoppe, S. Horikawa, B. Hosseini, D. Hughes, P. Humble, E. V. Hungerford, An. Ianni, S. Jimenez Cabre, T. N. Johnson, K. Keeter, C. L. Kendziora, S. Kim, G. Koh, D. Korablev, G. Korga, A. Kubankin, R. Kugathasan, M. Kuss, X. Li, M. Lissia, G. U. Lodi, B. Loer, G. Longo, R. Lussana, L. Luzzi, Y. Ma, A. A. Machado, I. N. Machulin, L. Mais, A. Mandarano, L. Mapelli, M. Marcante, A. Margotti, S. M. Mari, M. Mariani, J. Maricic, M. Marinelli, D. Marras, C. J. Martoff, M. Mascia, A. Messina, P. D. Meyers, R. Milincic, A. Moggi, S. Moioli, S. Monasterio, J. Monroe, A. Monte, M. Morrocchi, W. Mu, V. N. Muratova, S. Murphy, P. Musico, R. Nania, J. Napolitano, A. Navrer Agasson, I. Nikulin, V. Nosov, A. O. Nozdrina, N. N. Nurakhov, A. Oleinik, V. Oleynikov, M. Orsini, F. Ortica, L. Pagani, M. Pallavicini, S. Palmas, L. Pandola, E. Pantic, E. Paoloni, G. Paternoster, V. Pavletcov, F. Pazzona, K. Pelczar, L. A. Pellegrini, N. Pelliccia, F. Perotti, R. Perruzza, C. Piemonte, F. Pilo, A. Pocar, D. Portaluppi, S. S. Poudel, D. A. Pugachev, H. Qian, B. Radics, F. Raffaelli, F. Ragusa, K. Randle, M. Razeti, A. Razeto, V. Regazzoni, C. Regenfus, B. Reinhold, A. L. Renshaw, M. Rescigno, Q. Riffard, A. Rivetti, A. Romani, L. Romero, B. Rossi, N. Rossi, A. Rubbia, D. Sablone, P. Salatino, O. Samoylov, W. Sands, M. Sant, R. Santorelli, C. Savarese, E. Scapparone, B. Schlitzer, G. Scioli, E. Sechi, E. Segreto, A. Seifert, D. A. Semenov, S. 
Serci, A. Shchagin, L. Shekhtman, E. Shemyakina, A. Sheshukov, M. Simeone, P. N. Singh, M. D. Skorokhvatov, O. Smirnov, G. Sobrero, A. Sokolov, A. Sotnikov, C. Stanford, G. B. Suffritti, Y. Suvorov, R. Tartaglia, G. Testera, A. Tonazzo, A. Tosi, P. Trinchese, E. V. Unzhakov, A. Vacca, M. Verducci, T. Viant, F. Villa, A. Vishneva, B. Vogelaar, M. Wada, J. Wahl, S. Walker, H. Wang, Y. Wang, A. W. Watson, S. Westerdale, J. Wilhelmi, R. Williams, M. M. Wojcik, S. Wu, X. Xiang, X. Xiao, C. Yang, Z. Ye, F. Zappa, G. Zappalà, C. Zhu, A. Zichichi, G. Zuzel Sept. 12, 2017 physics.ins-det We report on the cryogenic characterization of Red Green Blue - High Density (RGB-HD) SiPMs developed at Fondazione Bruno Kessler (FBK) as part of the DarkSide program of dark matter searches with liquid argon time projection chambers. A dedicated setup was used to measure the primary dark noise, the correlated noise, and the gain of the SiPMs at varying temperatures. A custom-made data acquisition system and analysis software were used to precisely characterize these parameters. We demonstrate that FBK RGB-HD SiPMs with low quenching resistance (RGB-HD-LR$_q$) can be operated from 40 K to 300 K with gains in the range $10^5$ to $10^6$ and noise rates on the order of a few Hz/mm$^2$. DarkSide-20k: A 20 Tonne Two-Phase LAr TPC for Direct Dark Matter Detection at LNGS (1707.08145) C. E. Aalseth, F. Acerbi, P. Agnes, I. F. M. Albuquerque, T. Alexander, A. Alici, A. K. Alton, P. Antonioli, S. Arcelli, R. Ardito, I. J. Arnquist, D. M. Asner, M. Ave, H. O. Back, A. I. Barrado Olmedo, G. Batignani, E. Bertoldo, S. Bettarini, M. G. Bisogni, V. Bocci, A. Bondar, G. Bonfini, W. Bonivento, M. Bossa, B. Bottino, M. Boulay, R. Bunker, S. Bussino, A. Buzulutskov, M. Cadeddu, M. Cadoni, A. Caminata, N. Canci, A. Candela, C. Cantini, M. Caravati, M. Cariello, M. Carlini, M. Carpinelli, A. Castellani, S. Catalanotti, V. Cataudella, P. Cavalcante, S. Cavuoti, R. Cereseto, A. Chepurnov, C. Cicalò, L. Cifarelli, M. Citterio, A. G. Cocco, M. Colocci, S. Corgiolu, G. Covone, P. Crivelli, I. D'Antone, M. D'Incecco, D. D'Urso, M. D. Da Rocha Rolo, M. Daniel, S. Davini, A. de Candia, S. De Cecco, M. De Deo, G. De Filippis, G. De Guido, G. De Rosa, G. Dellacasa, M. Della Valle, P. Demontis, A. Derbin, A. Devoto, F. Di Eusanio, G. Di Pietro, C. Dionisi, A. Dolgov, I. Dormia, S. Dussoni, A. Empl, M. Fernandez Diaz, A. Ferri, C. Filip, G. Fiorillo, K. Fomenko, D. Franco, G. E. Froudakis, F. Gabriele, A. Gabrieli, C. Galbiati, P. Garcia Abia, A. Gendotti, A. Ghisi, S. Giagu, P. Giampa, G. Gibertoni, C. Giganti, M. A. Giorgi, G. K. Giovanetti, M. L. Gligan, A. Gola, O. Gorchakov, A. M. Goretti, F. Granato, M. Grassi, J. W. Grate, G. Y. Grigoriev, M. Gromov, M. Guan, M. B. B. Guerra, M. Guerzoni, M. Gulino, R. K. Haaland, A. Hallin, B. Harrop, E. W. Hoppe, S. Horikawa, B. Hosseini, D. Hughes, P. Humble, E. V. Hungerford, An. Ianni, C. Jillings, T. N. Johnson, K. Keeter, C. L. Kendziora, S. Kim, G. Koh, D. Korablev, G. Korga, A. Kubankin, M. Kuss, M. Kuźniak, B. Lehnert, X. Li, M. Lissia, G. U. Lodi, B. Loer, G. Longo, P. Loverre, R. Lussana, L. Luzzi, Y. Ma, A. A. Machado, I. N. Machulin, A. Mandarano, L. Mapelli, M. Marcante, A. Margotti, S. M. Mari, M. Mariani, J. Maricic, C. J. Martoff, M. Mascia, M. Mayer, A. B. McDonald, A. Messina, P. D. Meyers, R. Milincic, A. Moggi, S. Moioli, J. Monroe, A. Monte, M. Morrocchi, B. J. Mount, W. Mu, V. N. Muratova, S. Murphy, P. Musico, R. Nania, A. Navrer Agasson, I. Nikulin, V. Nosov, A. O. Nozdrina, N. 
N. Nurakhov, A. Oleinik, V. Oleynikov, M. Orsini, F. Ortica, L. Pagani, M. Pallavicini, S. Palmas, L. Pandola, E. Pantic, E. Paoloni, G. Paternoster, V. Pavletcov, F. Pazzona, S. Peeters, K. Pelczar, L. A. Pellegrini, N. Pelliccia, F. Perotti, R. Perruzza, V. Pesudo Fortes, C. Piemonte, F. Pilo, A. Pocar, T. Pollmann, D. Portaluppi, D. A. Pugachev, H. Qian, B. Radics, F. Raffaelli, F. Ragusa, M. Razeti, A. Razeto, V. Regazzoni, C. Regenfus, B. Reinhold, A. L. Renshaw, M. Rescigno, F. Retière, Q. Riffard, A. Rivetti, S. Rizzardini, A. Romani, L. Romero, B. Rossi, N. Rossi, A. Rubbia, D. Sablone, P. Salatino, O. Samoylov, E. Sánchez García, W. Sands, M. Sant, R. Santorelli, C. Savarese, E. Scapparone, B. Schlitzer, G. Scioli, E. Segreto, A. Seifert, D. A. Semenov, A. Shchagin, L. Shekhtman, E. Shemyakina, A. Sheshukov, M. Simeone, P. N. Singh, P. Skensved, M. D. Skorokhvatov, O. Smirnov, G. Sobrero, A. Sokolov, A. Sotnikov, F. Speziale, R. Stainforth, C. Stanford, G. B. Suffritti, Y. Suvorov, R. Tartaglia, G. Testera, A. Tonazzo, A. Tosi, P. Trinchese, E. V. Unzhakov, A. Vacca, E. Vázquez-Jáuregui, M. Verducci, T. Viant, F. Villa, A. Vishneva, B. Vogelaar, M. Wada, J. Wahl, J. Walding, S. Walker, H. Wang, Y. Wang, A. W. Watson, S. Westerdale, R. Williams, M. M. Wojcik, S. Wu, X. Xiang, X. Xiao, C. Yang, Z. Ye, A. Yllera de Llano, F. Zappa, G. Zappalà, C. Zhu, A. Zichichi, M. Zullo, A. Zullo July 25, 2017 physics.ins-det Building on the successful experience in operating the DarkSide-50 detector, the DarkSide Collaboration is going to construct DarkSide-20k, a direct WIMP search detector using a two-phase Liquid Argon Time Projection Chamber (LArTPC) with an active (fiducial) mass of 23 t (20 t). The DarkSide-20k LArTPC will be deployed within a shield/veto with a spherical Liquid Scintillator Veto (LSV) inside a cylindrical Water Cherenkov Veto (WCV). Operation of DarkSide-50 demonstrated a major reduction in the dominant $^{39}$Ar background when using argon extracted from an underground source, before applying pulse shape analysis. Data from DarkSide-50, in combination with MC simulation and analytical modeling, shows that a rejection factor for discrimination between electron and nuclear recoils of $\gt3\times10^9$ is achievable. This, along with the use of the veto system, is the key to unlocking the path to large LArTPC detector masses, while maintaining an "instrumental background-free" experiment, an experiment in which less than 0.1 events (other than $\nu$-induced nuclear recoils) is expected to occur within the WIMP search region during the planned exposure. DarkSide-20k will have ultra-low backgrounds than can be measured in situ. This will give sensitivity to WIMP-nucleon cross sections of $1.2\times10^{-47}$ cm$^2$ ($1.1\times10^{-46}$ cm$^2$) for WIMPs of $1$ TeV$/c^2$ ($10$ TeV$/c^2$) mass, to be achieved during a 5 yr run producing an exposure of 100 t yr free from any instrumental background. DarkSide-20k could then extend its operation to a decade, increasing the exposure to 200 t yr, reaching a sensitivity of $7.4\times10^{-48}$ cm$^2$ ($6.9\times10^{-47}$ cm$^2$) for WIMPs of $1$ TeV$/c^2$ ($10$ TeV$/c^2$) mass. Estrellas h\'ibridas: una aproximaci\'on semi-anal\'itica a T finita (1704.07733) M. Mariani, M. G. 
Orsaria April 26, 2017 nucl-th, astro-ph.HE Starting from the semi-analytic construction of a equation of state (EoS) which takes into account nuclear and quark matter at finite temperature, we study the possibility that proto-neutron stars, be proto-hybrid stars whose cores are composed of quark matter. We obtain the mass-radius relationship and discuss the latest constraints on masses and radii of neutron stars, considering the pulsars PSR J1614-2230 and PSR J0348 + 0432. Simplified Thermal Evolution of Proto-hybrid Stars (1704.07732) M. Mariani, M. Orsaria, H. Vucetich We study the possibility of a hadron-quark phase transition in the interior of neutron stars, taking into account different schematic evolutionary stages at finite temperature. Furthermore, we analyze the astrophysical properties of hot and cold hybrid stars, considering the constraint on maximum mass given by the pulsars J1614-2230 and J1614-2230. We obtain cold hybrid stars with maximum masses $\geq 2$ M$_{\odot}$. Our study also suggest that during the proto-hybrid star evolution a late phase transition between hadronic matter and quark matter could occur, in contrast with previous studies of proto-neutron stars. Constant entropy hybrid stars: a first approximation of cooling evolution (1607.05200) April 24, 2017 astro-ph.SR, astro-ph.HE We aim to study the possibility of a hadron-quark phase transition in the interior of neutron stars, taking into account different schematic evolutionary stages at finite temperature. We also discuss the strange quark matter stability in the quark matter phase. Furthermore, we aim to analyze the astrophysical properties of hot and cold hybrid stars, considering the constraint on maximum mass given by the pulsars J1614-2230 and J0348+0432. We have developed a computational code to construct semi-analytical hybrid equations of state at fixed entropy per baryon and to obtain different families of hybrid stars. An analytical approximation of the Field Correlator Method is developed for the quark matter equation of state. For the hadronic equation of state we use a table based on the relativistic mean field theory, without hyperons. We solved the relativistic structure equations of hydrostatic equilibrium and mass conservation for hybrid star configurations. For the different equations of state obtained, we calculated the stability window for the strange quark matter, lepton abundances, temperature profiles and contours profiles for the maximum mass star depending on the parameters of the Field Correlator Method. We also computed the mass-radius and gravitational mass-baryonic mass relationships for different hybrid star families. We have analyzed different stages of hot hybrid stars as a first approximation of the cooling evolution of neutron stars with quark matter cores. We obtain cold hybrid stars with maximum masses $\geq 2 M_\odot$ for different combinations of the Field Correlator Method parameters. In addition, our study based on the gravitational mass - baryonic mass plane shows a late phase transition between hadronic and quark matter during the proto-hybrid star evolution, in contrast with previous studies of proto-neutron stars. Magnetic properties and spin dynamics of 3d-4f molecular complexes (1111.7008) P. Khuntia, M. Mariani, A. V. Mahajan, A. Lascialfari, F. Borsa, T. D. Pasatoiu, M. Andruh Nov. 
29, 2011 cond-mat.mtrl-sci, cond-mat.mes-hall We present the magnetic properties of three recently synthesized binuclear molecular complexes [NiNd], [NiGd] and [ZnGd] investigated by dc magnetization and proton nuclear magnetic resonance (NMR) measurements. The high-temperature magnetic properties are related to the independent paramagnetic behavior of the two magnetic metal ions within the binuclear entities both in [NiNd] and [NiGd]. On lowering the temperature, the formation of a magnetic dimer, with a low-spin ground state due to antiferromagnetic interaction (J/kB = -25 K) between Ni2+ and Nd3+, is found in the case of [NiNd], while in [NiGd] a ferromagnetic interaction (J/kB = 3.31 K) between the magnetic ions leads to a high-spin (S = 9/2) ground state. The temperature dependence of the proton nuclear spin lattice relaxation rate 1/T1 in [NiNd] is driven by the fluctuation of the hyperfine field at the nuclear site due to relaxation of the magnetization. At high temperature the independent Ni2+ and Nd3+ spins fluctuate fast while at low temperature we observe a slowing down of the fluctuation of the total magnetization of the dimer because of the insurgence of antiferromagnetic spin correlations. The relaxation mechanism in [NiNd] at low temperature is interpreted by a single, temperature dependent, correlation frequency wc\simT^3.5, which reflects the life time broadening of the exchange coupled spins via spin-phonon interaction. The proton NMR signal in [NiGd] could be detected just at room temperature, due to the shortening of relaxation times when T is decreased. The magnetic properties of [ZnGd] are the ones expected from a weakly interacting assembly of isolated moments except for anomalies in the susceptibility and NMR results below 15 K which currently cannot be explained. Finite-size effects on the dynamic susceptibility of CoPhOMe single-chain molecular magnets in presence of a static magnetic field (1105.3009) M. G. Pini, A. Rettori, L. Bogani, A. Lascialfari, M. Mariani, A. Caneschi, R. Sessoli May 16, 2011 cond-mat.mtrl-sci, cond-mat.mes-hall The static and dynamic properties of the single-chain molecular magnet [Co(hfac)$_2$NITPhOMe] are investigated in the framework of the Ising model with Glauber dynamics, in order to take into account both the effect of an applied magnetic field and a finite size of the chains. For static fields of moderate intensity and short chain lengths, the approximation of a mono-exponential decay of the magnetization fluctuations is found to be valid at low temperatures; for strong fields and long chains, a multi-exponential decay should rather be assumed. The effect of an oscillating magnetic field, with intensity much smaller than that of the static one, is included in the theory in order to obtain the dynamic susceptibility $\chi(\omega)$. We find that, for an open chain with $N$ spins, $\chi(\omega)$ can be written as a weighted sum of $N$ frequency contributions, with a sum rule relating the frequency weights to the static susceptibility of the chain. Very good agreement is found between the theoretical dynamic susceptibility and the ac susceptibility measured in moderate static fields ($H_{\rm dc}\le 2$ kOe), where the approximation of a single dominating frequency turns out to be valid. For static fields in this range, new data for the relaxation time, $\tau$ versus $H_{\rm dc}$, of the magnetization of CoPhOMe at low temperature are also well reproduced by theory, provided that finite-size effects are included. 
Magnetic properties and spin dynamics in single molecule paramagnets Cu6Fe and Cu6Co (0909.5063) P. Khuntia, M. Mariani, M. C. Mozzati, L. Sorace, F. Orsini, A. Lascialfari, F. Borsa, M. Andruh, C. Maxim Sept. 28, 2009 cond-mat.mtrl-sci, cond-mat.mes-hall The magnetic properties and the spin dynamics of two molecular magnets have been investigated by magnetization and d.c. susceptibility measurements, Electron Paramagnetic Resonance (EPR) and proton Nuclear Magnetic Resonance (NMR) over a wide range of temperature (1.6-300K) at applied magnetic fields, H=0.5 and 1.5 Tesla. The two molecular magnets consist of CuII(saldmen)(H2O)}6{FeIII(CN)6}](ClO4)38H2O in short Cu6Fe and the analog compound with cobalt, Cu6Co. It is found that in Cu6Fe whose magnetic core is constituted by six Cu2+ ions and one Fe3+ ion all with s=1/2, a weak ferromagnetic interaction between Cu2+ moments through the central Fe3+ ion with J = 0.14 K is present, while in Cu6Co the Co3+ ion is diamagnetic and the weak interaction is antiferromagnetic with J = -1.12 K. The NMR spectra show the presence of non equivalent groups of protons with a measurable contact hyperfine interaction consistent with a small admixture of s-wave function with the d-function of the magnetic ion. The NMR relaxation results are explained in terms of a single ion (Cu2+, Fe3+, Co3+) uncorrelated spin dynamics with an almost temperature independent correlation time due to the weak magnetic exchange interaction. We conclude that the two molecular magnets studied here behave as single molecule paramagnets with a very weak intramolecular interaction, almost of the order of the dipolar intermolecular interaction. Thus they represent a new class of molecular magnets which differ from the single molecule magnets investigated up to now, where the intramolecular interaction is much larger than the intermolecular one. Mesoscopic phase separation in Na$_x$CoO$_2$ ($0.65\leq x\leq 0.75$) (cond-mat/0312284) P. Carretta, M. Mariani, C.B. Azzoni, M.C. Mozzati, I. Bradaric, I. Savic, A. Feher, J. Sebek Dec. 11, 2003 cond-mat.str-el NMR, EPR and magnetization measurements in Na$_x$CoO$_2$ for $0.65\leq x\leq 0.75$ are presented. While the EPR signal arises from Co$^{4+}$ magnetic moments ordering at $T_c\simeq 26$ K, $^{59}$Co NMR signal originates from cobalt nuclei in metallic regions with no long range magnetic order and characterized by a generalized susceptibility typical of strongly correlated metallic systems. This phase separation in metallic and magnetic insulating regions is argued to occur below $T^*(x)$ ($220 - 270$ K). Above $T^*$ an anomalous decrease in the intensity of the EPR signal is observed and associated with the delocalization of the electrons which for $T<T^*$ were localized on Co$^{4+}$ $d_{z^2}$ orbitals. It is pointed out that the in-plane antiferromagnetic coupling $J\ll T^*$ cannot be the driving force for the phase separation.
Is there a probability distribution with the following properties?

I'm looking for a univariate probability distribution defined for $x \in (-\infty, \infty)$ with the following properties:
- The PDF is symmetric around the origin ($p(x)=p(-x)$).
- The derivative of the PDF at $x=0$ (with the limit taken from above) can be specified to lie in some interval from $-\infty$ to $0$ (not necessarily inclusive). Note that this implies the derivative of the PDF may be discontinuous at that point.
- The distribution should interpolate between a heavy-tailed and a normal distribution. The normal distribution must be a special case.
- The CDF and inverse CDF can be efficiently evaluated (i.e., using well-known functions and without integrating numerically in a typical computer algebra system).
- The PDF should be as smooth as possible (everywhere except at the origin).

The generalized normal distribution (with $p(x)\sim\exp(-|x|^\beta)$) comes close to this, but $\beta$ specifies the tail behaviour as well as the "peakiness" at $x=0$. I would like to be able to specify these two things as independently as possible. One idea would be to convolve the PDF of a generalized normal with a normal PDF of a specified scale (i.e., adding a normal random variable to the generalized normal), thereby "smoothing" it at the origin. However, I am stuck on computing the resulting distribution in closed form.

probability-distributions Johannes

Comment: I don't know, but I would take a look at academia.edu/1686413/… – Stéphane Laurent May 15 '15 at 11:52
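A quick numerical sketch of the convolution idea (not the closed form asked for): the generalized normal density can be smoothed by a Gaussian on a grid, and the CDF and inverse CDF then tabulated by interpolation. This is illustrative only; the grid limits, parameter values and function names below are assumptions made for the example, written in R.

# Generalized normal density: p(x) = beta/(2*alpha*gamma(1/beta)) * exp(-(|x|/alpha)^beta)
dgennorm <- function(x, alpha = 1, beta = 1.2) {
  beta / (2 * alpha * gamma(1 / beta)) * exp(-(abs(x) / alpha)^beta)
}

dx <- 0.001
x  <- seq(-20, 20, by = dx)

f <- dgennorm(x, alpha = 1, beta = 1.2)   # sharp peak at 0, heavier tails
g <- dnorm(x, mean = 0, sd = 0.3)         # Gaussian smoothing kernel

# Discrete convolution approximating the density of the sum of the two variables
h  <- convolve(f, rev(g), type = "open") * dx
xh <- seq(2 * min(x), 2 * max(x), by = dx)[seq_along(h)]

# Numerical CDF and inverse CDF by interpolation
H    <- cumsum(h) * dx
H    <- H / max(H)
cdf  <- approxfun(xh, H)
keep <- c(TRUE, diff(H) > 0)              # strictly increasing part only
qf   <- approxfun(H[keep], xh[keep])

cdf(0)     # about 0.5 by symmetry
qf(0.975)  # upper 2.5% point of the smoothed distribution

This does not satisfy the "no numerical integration" requirement, but it makes it easy to see how the peak smoothing and the tails behave as the two scale parameters are varied independently.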
Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity
Nuno Sepúlveda (1,2; corresponding author) and Chris Drakeley (1)
© Sepúlveda and Drakeley; licensee BioMed Central. 2015
Published: 3 April 2015
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates rather than for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the first to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity.
In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
Keywords: Seroprevalence; Seroconversion rate
Background
Parasite prevalence (PR) and entomological inoculation rate (EIR) are the two most common disease risk indicators used in malaria epidemiology. PR is defined as the percentage of people who are currently infected with malaria parasites, and reflects the direct interplay between transmission intensity, age, and disease burden. EIR is in turn the frequency at which people are bitten by infectious mosquitoes over a period of time (typically a year), and provides information on the vector biology and its interaction with the human host. These measures, although useful in high and moderate transmission settings, show limitations in areas of lower transmission or in populations on the cusp of disease elimination. This is primarily due to the low number of infected individuals (humans or mosquitoes) in the population at the time of sampling. Accurate metrics are particularly important in assessing the effects of malaria interventions at these low transmission levels. Therefore, in recent years, alternative risk indicators based on anti-malarial antibody seroprevalence (SP) and seroconversion rate (SCR) have been evaluated [1-4]. The rationale for using antibody data stems from the observation that specific antibodies against parasite antigens persist in time and at reasonably stable concentrations, even when disease transmission is seasonal. Experimentally, the quantification of antibodies in sera is relatively easy to perform using simple laboratory techniques, such as ELISA assays. The resulting antibody measurements are usually optical densities or the respective titre values, upon which each individual is classified as seronegative or seropositive using appropriate cut-off points. These seropositivity thresholds are typically determined by two distinct approaches. The first uses antibody data from known seronegative individuals, for which the parameters of the underlying distribution are estimated, as illustrated by Arnold et al. [5]. In contrast, the second approach is based on fitting a Gaussian mixture model to the current antibody data directly, under the assumption that there are two latent subpopulations referring to seronegative and seropositive individuals, respectively [6]. In both approaches, the cut-off point for seropositivity is determined by the average plus 3 times the standard deviation of the seronegative population. Seroprevalence (SP) is then the percentage of seropositive individuals in the sample and embodies information on currently infected and recently exposed individuals. As expected, SP estimates are typically higher than those for PR measured in the same sample [1,7]. Although overcoming some of the shortcomings of PR and EIR, SP does not reflect the dynamics of malaria transmission directly. Seroconversion rate (SCR) extends SP analysis to a scenario one step closer to capturing the underlying disease dynamics of a given population. This serological parameter arises from the analysis of seroprevalence taken as a function of the age of the individuals using the so-called reverse catalytic models.
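Before turning to those models, the mixture-based cut-off step described above can be sketched in R. The snippet is a minimal illustration, not part of the paper: the simulated log-titres are placeholders, and normalmixEM from the mixtools package is used as one of several possible EM implementations.

# Fit a two-component Gaussian mixture to log-titres (seronegative vs seropositive)
library(mixtools)

set.seed(1)
titre <- c(rnorm(700, mean = 0, sd = 0.5),    # simulated seronegative component
           rnorm(300, mean = 2.5, sd = 0.6))  # simulated seropositive component

fit <- normalmixEM(titre, k = 2)

# The seronegative component is taken as the one with the lower mean
neg <- which.min(fit$mu)

# Cut-off: mean + 3 standard deviations of the seronegative component
cutoff <- fit$mu[neg] + 3 * fit$sigma[neg]

# Classify individuals and compute the sample seroprevalence (SP)
seropos <- as.numeric(titre > cutoff)
mean(seropos)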
The age of individuals is assumed to be a good surrogate of time in a stochastic process where individuals transit between seropositive and seronegative states upon malaria exposure and absence of re-infection. Theoretically, SCR is defined as the frequency by which seronegative individuals become seropositive upon malaria exposure. Conversely the frequency by which seropositive individuals return to a seronegative state is known as seroreversion rate (SRR). This last parameter is related to antibody decay in absence of disease exposure and reflects the effects of host factors on antibody dynamics. Several studies have shown the utility of SCR as a malaria epidemiological tool with some demonstrating good agreement between this measure and EIR [1] and others detecting historical changes in transmission that otherwise would not have been possible with other measures of transmission [4,7-9]. Whilst the evidence for using serology as an adjunct epidemiological marker for malaria transmission is growing, there has been no formal examination of samples size considerations for SP and SCR as primary endpoints. In fact, most malaria epidemiological studies are planned with PR being as the primary endpoint [7] and, therefore, it is unclear whether SP and SCR might have enough statistical precision to lead to clear conclusions. SP is in theory a proportion (or a percentage) and, as such, several methods exist for sample size determination in this situation [10]. In contrast, the precision of SCR estimates depends not only on the sample size, but also on the age distribution associated with a given population. Therefore, sample size determination is not as straightforward. A pragmatic approach is to use an empirical relationship between SCR and SP in order to determine the total sample size required for collecting a given number of seropositive individuals [8]. This approach is here improved by using the theoretical relationship between SP and SCR under a given age distribution and a fixed SRR. Sample size determination is then based on back-transforming the confidence interval for SP into the corresponding one for SCR. In the situation where SCR and SRR are both unknown, a second sample size calculator is developed by bringing simulation together with regression. The use of these two sample size calculators is instrumental to power future serological studies, notably, in the challenging research settings of populations on the cusp of elimination [11]. Reverse catalytic models for seropositivity data In malaria epidemiology, the reverse catalytic models were first described to estimate incidence and recovery rates from longitudinal data [12]. More recently, they were recast to the analysis of malaria seroprevalence data [13]. Mathematically, these models can be described as a Markov chain where individuals transit between two serological states: 0 - seronegative and 1 - seropositive. The time between transitions is assumed to be exponentially distributed. This assumption implies that every time an individual move from one state to another, the stochastic process restarts probabilistically due to lack of memory of the Markov Chains. This is in close agreement with the general notion that malaria parasites can only confer partial immunity to the host. This paper deals with the simplest reverse catalytic model where SCR and SRR are assumed to be fixed constants throughout time and for every individual. The use of this model has in practice three key implications. 
Firstly, a constant SCR implies that disease transmission remained unchanged throughout time in the population under study. Secondly, a constant SRR implies that the host factors affecting antibody decay were not altered by any genetic selection event, migration or admixture. Thirdly, all individuals have experienced the same disease transmission intensity and, thus, age can be used as a surrogate of the time of the disease dynamics. Mathematically, the probability of an individual with age t being in each serological state is given by the transition probability matrix $P(t)=\left[p_{i|j}(t)\right]=e^{Rt}$, $i,j=0,1$, where $p_{i|j}(t)$ is the conditional probability of an individual with age t being in state i given that the process started in state j, and R is the so-called rate matrix that, in turn, is defined as
$$ R=\left[\begin{array}{cc} -\lambda & \lambda \\ \rho & -\rho \end{array}\right], $$
where λ and ρ are the SCR and SRR, respectively. Assuming that all individuals are born seronegative (that is, seronegative at time t = 0; this is achieved in practice by only including individuals aged 1 year or older, to negate putative maternal effects on malaria antibodies), the probability of an individual aged t being seropositive is described by
$$ p_{1|0}(t)=\frac{\lambda}{\lambda+\rho}\left(1-e^{-(\lambda+\rho)t}\right). $$
A special case of the above model may arise in populations where only a few seronegative individuals would result from seroreversion events. As a consequence, the data might not contain enough information to estimate SRR (i.e., ρ ≈ 0). In this case, equation (2) can be rewritten as follows
$$ \log\left[-\log\left(1-p_{1|0}(t)\right)\right]=\log\lambda+\log t. $$
This model has been applied to malaria data from low transmission populations [14], to serology data on human leishmaniasis [15], and to limiting dilution data [16]. Theoretically, equation (3) can be seen as the popular complementary log-log model from statistics which, in turn, can be formulated as a generalized linear model (GLM) under a binomial sampling scheme [17]. As such, the respective parameter estimation can be performed in most statistical software as long as one specifies 'log age' as the explanatory variable with the corresponding slope fixed at 1. Alternative sample size calculators for this model could be used along the same lines as a GLM power analysis, as described elsewhere for logistic regression [18,19]. There are also other reverse catalytic models describing changes in disease transmission (see, for example, the review of Corran et al. [1]). Although interesting, sample size determination for these alternative models will be studied elsewhere (Sepúlveda and Drakeley, in preparation). In the malaria literature, one can also find an extension of the reverse catalytic modelling framework to the situation where seropositivity can be boosted by recurrent malaria exposure [20]. This model would appear to be better suited to very high transmission settings and is, thus, out of the scope of this paper.
Model parameterization
To illustrate the sample size determination for realistic values of SCR and SRR, Plasmodium falciparum data sets from two independent studies in northeast Tanzania were used [3,21]. This region extends from the high malaria transmission areas in the coastal plains of Tanga to the low transmission settings in the high-altitude mountains of Kilimanjaro, Usambara and Pare.
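Referring back to equations (2) and (3) above, the GLM formulation of the special case ρ ≈ 0 can be sketched in R as follows. This is a minimal illustration, not the authors' released scripts; the simulated ages, the true parameter values and all object names are assumptions made here for the example.

# Equation (2): probability of being seropositive at age t
p_seropos <- function(t, lambda, rho) {
  lambda / (lambda + rho) * (1 - exp(-(lambda + rho) * t))
}

# Simulate a cross-sectional survey under the special case rho = 0
set.seed(2)
n      <- 1000
age    <- sample(1:80, n, replace = TRUE)
status <- rbinom(n, size = 1, prob = p_seropos(age, lambda = 0.01, rho = 0))

# Equation (3): complementary log-log GLM with log(age) as an offset,
# so that the intercept equals log(lambda)
pos_t <- tapply(status, age, sum)
tot_t <- tapply(status, age, length)
ages  <- as.numeric(names(tot_t))

fit <- glm(cbind(pos_t, tot_t - pos_t) ~ 1, offset = log(ages),
           family = binomial(link = "cloglog"))
exp(coef(fit))   # estimate of lambda (SCR) under the assumption rho = 0

The general case with both λ and ρ unknown is sketched in the estimation section further below.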
Because of this natural variation in endemicity with altitude, northeast Tanzania is an ideal region to understand how different malaria risk indicators are related to each other. Available data on altitude (in metres) against EIR [21] were re-analysed, leading to the following linear regression model (Additional file 1: Figure A)
$$ \log_{10}\mathrm{EIR}=2.5204-0.0025\times\mathrm{altitude}. $$
In another epidemiological study, serological data from 21 villages of the same region were also available [3,13]. The SCR associated with MSP1 antibodies was found to be highly correlated with altitude [1]. This data set suggested the following relationship between SCR and altitude (Additional file 1: Figure B)
$$ \log_{10}\mathrm{SCR}=-0.2908-0.0012\times\mathrm{altitude}, $$
where the SRR estimate appeared to be constant across villages and was fixed at 0.017. In turn, data from the same study suggested the following relationship between the PR of children aged 0-4 years old (PR04) and altitude (Additional file 1: Figure C):
$$ \log\frac{\mathrm{PR}_{04}}{1-\mathrm{PR}_{04}}=8.9992-1.5934\times\log_{10}\mathrm{altitude}. $$
Solving one of the above equations as a function of altitude, the expected relationship between EIR, SCR, and PR04 can be obtained, as shown in Figure 1A.
Figure 1. Model parameterization under the assumption of constant malaria transmission intensity: A. Expected relationship between SCR, EIR, and PR in children aged from 0 to 4 years old. B. Age-adjusted SP curves given the expected SCRs associated with the EIRs shown in A. C. Age structure of African and non-African populations. D. Seroprevalence as a function of SCR based on the age distributions shown in C.
Sample size determination was conducted for the following transmission intensities as measured in EIR and PR04 (in brackets) units: 0.01 (0.050), 0.1 (0.073), 1 (0.119), 10 (0.231) and 100 (0.625). The corresponding SCRs are 0.0034, 0.0104, 0.0324, 0.0969 and 0.2900, respectively (Table 1). With respect to the above-mentioned large epidemiological study [1], an SCR between 0.0034 and 0.0104 describes the low transmission intensities of high-altitude villages, such as Kilomeni (1556 m - SCR = 0.0047) or Mokala (1702 m - SCR = 0.0104). SCRs between 0.01 and 0.10 are, in turn, associated with villages at intermediate altitude, like Tewe (1049 m - SCR = 0.0308) or Ngulu (831 m - SCR = 0.0906). Finally, SCRs greater than 0.10 are related to lowland villages, such as Mgila (375 m - SCR = 0.128) or Mgome (196 m - SCR = 0.302), where malaria transmission is considered to be high. The expected age-adjusted SP curves are shown in Figure 1B.
Table 1. Expected relationship between EIR, PR04, SCR and SP in African (AFR), Southeast Asian and South American (SEA + SA) populations, where the seroreversion rate was fixed at 0.017.
Model estimation
In terms of statistical analysis, age-adjusted seropositivity data can be summarized as a frequency vector $\{n_{ts}\}$, where $n_{ts}$ is the frequency of individuals with age $t = 1,\ldots,T$ and serological state $s = 0$ or $1$, and $T$ is the total number of distinct age values in the sample.
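A minimal sketch of this estimation step is given below: it tabulates the frequency vector {n_ts} from individual records and maximizes the binomial likelihood implied by equation (2), which is written out formally in the next paragraph. It reuses p_seropos() from the sketch above; the simulated data, starting values and function names are illustrative assumptions, not the Stata and R scripts distributed by the authors.

# Individual records: age in years and serological state (0/1), simulated here
set.seed(3)
n      <- 1000
age    <- sample(1:80, n, replace = TRUE)
status <- rbinom(n, 1, p_seropos(age, lambda = 0.03, rho = 0.017))

# Frequency vector {n_ts}: seropositives and totals by age
n_pos <- tapply(status, age, sum)
n_tot <- tapply(status, age, length)
t_obs <- as.numeric(names(n_tot))

# Negative binomial-product log-likelihood, parameterized on the log scale
negloglik <- function(par) {
  lambda <- exp(par[1]); rho <- exp(par[2])
  p <- p_seropos(t_obs, lambda, rho)
  -sum(dbinom(n_pos, size = n_tot, prob = p, log = TRUE))
}

fit <- optim(par = log(c(0.05, 0.05)), fn = negloglik)
exp(fit$par)   # maximum likelihood estimates of (SCR, SRR)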
If individuals were sampled independently of each other and the statistical inference is focused on age-adjusted seroprevalence only, the sampling distribution of the frequency vector $\{n_{ts}\}$ can be described by a binomial-product distribution, one binomial distribution per age value, that is,
$$ f\left(\{n_{ts}\}\mid\lambda,\rho\right)=\prod_{t=1}^{T}\frac{(n_{t0}+n_{t1})!}{n_{t0}!\,n_{t1}!}\left[p_{1|0}(t)\right]^{n_{t1}}\left[1-p_{1|0}(t)\right]^{n_{t0}}, $$
where $p_{1|0}(t)$ is given by equation (2). Parameter estimation can be performed via standard maximum likelihood methods, as described elsewhere [15]. Stata and R scripts for parameter estimation are available from the authors upon request.
Sample size calculations
The first sample size calculator assumes that SRR is a known constant (say $\rho_0 = 0.017$) and, thus, should not be estimated after sample collection. In that case, the expected relationship between the SP of the population (hereafter denoted by π) and SCR can be computed as follows
$$ \pi=\sum_{t=1}^{A_{\max}}\alpha_t\frac{\lambda}{\lambda+\rho_0}\left(1-e^{-(\lambda+\rho_0)t}\right), $$
where $\alpha_t$ is the proportion of individuals aged $t$ in the population and $A_{\max}$ is the maximum age considered relevant for the population, say $A_{\max}=80$. As expected, the above relationship depends on the age distribution of the population (or of the study design used). Official statistics on age distributions were explored in order to understand how these vary across the world [22]. These data sets suggest that African countries have approximately the same age distribution (a decreasing frequency from newborns to the elderly; Additional file 2). Thus, a typical age structure for these populations was generated by pooling data from different countries together (Figure 1C). Although slight differences can be observed across countries, the age distributions from Southeast Asia and South America show roughly the same pattern, but one distinct from that of African populations (Additional file 2). Therefore, a non-African age distribution prototype was constructed (Figure 1C). This age structure is much flatter than its African counterpart due to a higher frequency of adults. These two general age distributions were then used to derive the expected SP as a function of SCR according to equation (8) (see Figure 1D). In the statistical literature, there are several methods for constructing a confidence interval for a proportion that can be used for sample size determination, as reviewed elsewhere [23]. The most popular is the so-called Wald score which, despite its simplicity of calculation, may lead to poor coverage and problems of overshoot and degeneracy [10].
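Referring back to equation (8), the expected seroprevalence for a given SCR, a fixed SRR and an age distribution can be written as a small helper function in R; it is reused in the back-transformation sketch further below. The uniform age weights used here are a placeholder assumption, not the African or non-African distributions of Figure 1C.

# Equation (8): expected seroprevalence given SCR (lambda), known SRR (rho0)
# and age weights alpha[t], t = 1, ..., A_max
exp_sp <- function(lambda, rho0 = 0.017, alpha = rep(1/80, 80)) {
  t <- seq_along(alpha)
  sum(alpha * lambda / (lambda + rho0) * (1 - exp(-(lambda + rho0) * t)))
}

exp_sp(0.0324)   # expected SP at an intermediate transmission intensity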
An alternative method is to introduce a continuity correction in the Wald score which, when applied to SP estimation, leads to the following confidence interval at the 95% level
$$ \widehat{\pi}_l=\widehat{\pi}-1.96\sqrt{\frac{\widehat{\pi}\left(1-\widehat{\pi}\right)}{n}}-\frac{1}{2n} $$
$$ \widehat{\pi}_u=\widehat{\pi}+1.96\sqrt{\frac{\widehat{\pi}\left(1-\widehat{\pi}\right)}{n}}+\frac{1}{2n}, $$
where $\widehat{\pi}$ is an estimate of the true SP, n is the sample size and 1.96 is the 97.5%-quantile of a standard Gaussian distribution. For a given SCR, one can compute the expected π using equation (8) and replace it in the above equations in order to obtain the corresponding confidence bounds $\widehat{\pi}_l$ and $\widehat{\pi}_u$ for a given sample size n. These confidence bounds can then be back-transformed into the corresponding ones for SCR using equation (8) again. To perform the back-transformation, one needs to solve the following equations as functions of $\lambda_l$ and $\lambda_u$ (the corresponding lower and upper bounds of SCR)
$$ \widehat{\pi}_l=\sum_{t=1}^{A_{\max}}\alpha_t\frac{\lambda_l}{\lambda_l+\rho_0}\left(1-e^{-(\lambda_l+\rho_0)t}\right), $$
$$ \widehat{\pi}_u=\sum_{t=1}^{A_{\max}}\alpha_t\frac{\lambda_u}{\lambda_u+\rho_0}\left(1-e^{-(\lambda_u+\rho_0)t}\right). $$
Unfortunately, these equations cannot be solved analytically, but a binary search algorithm, although slow, is able to obtain an approximate solution using an appropriate searching interval. In theory, the coverage of a confidence interval is defined as the number of times that the confidence interval contains the true value of the parameter upon repeated sampling. Under this definition, a confidence interval at 95% should lead to a coverage of 95%. However, the expected coverage is not always achieved, owing to the use of (Gaussian) approximations for the random variables underpinning the construction of a given confidence interval. This putative incorrect coverage affects sample size determination by either undersampling, in situations of undercoverage, or oversampling, in situations of overcoverage, as reported for proportion estimation when data stem from populations with proportions less than 0.1 or higher than 0.9 [23,24]. Therefore, the back-transformation method was tested against these putative coverage problems. The expected coverage of the confidence interval for SCR was assessed via simulation. For every pairwise combination of SCR and n, the following two-step algorithm was employed for the generation of a given data set: (i) generate the age of each individual in the sample, and (ii) generate the corresponding serological state as a Bernoulli trial with seropositivity probability given by equation (2). The back-transformation of the confidence interval for SP was applied to each data set. Coverage was finally calculated by counting how many times the confidence intervals included the SCR that generated the data. The performance of this method was also assessed in terms of the midpoint of the corresponding confidence interval for SCR. In this scenario, a confidence interval was defined as central if the true SCR was located in the middle of the corresponding interval. A practical implication of using central confidence intervals is that they have the shortest length among all intervals one can construct with a given confidence level, provided a Gaussian distribution is a good approximation for the sampling distribution of the SCR estimates.
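The back-transformation just described can be sketched numerically. The snippet below reuses exp_sp() from the earlier sketch and replaces the binary search with uniroot, which serves the same purpose; the bound capping, the search interval and the uniform age weights are assumptions made for this illustration, not the authors' implementation.

# 95% CI for SCR by back-transforming the continuity-corrected Wald CI for SP
ci_scr <- function(pi_hat, n, rho0 = 0.017, alpha = rep(1/80, 80)) {
  half <- 1.96 * sqrt(pi_hat * (1 - pi_hat) / n) + 1 / (2 * n)
  pi_l <- max(pi_hat - half, 1e-6)     # bounds capped for numerical stability
  pi_u <- min(pi_hat + half, 0.999)
  # Invert equation (8) at each bound; exp_sp is increasing in lambda
  inv <- function(target) {
    uniroot(function(l) exp_sp(l, rho0, alpha) - target,
            interval = c(1e-8, 50))$root
  }
  c(lower = inv(pi_l), upper = inv(pi_u))
}

# Expected CI for SCR at an intermediate intensity, using the expected SP
ci_scr(exp_sp(0.0324), n = 250)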
When that Gaussian approximation holds, the use of central confidence intervals for sample size determination implies working with the best precision possible and, thus, the subsequent sample sizes are the minimum ones for a given confidence level. By contrast, if the constructed confidence intervals are not central, they might not be the ones providing the highest precision (i.e., with the shortest length). To assess whether a given confidence interval is central or not, one is required to know the sampling distribution of the SCR estimates upon repeated sampling. Unfortunately, that distribution is not known in general. Sample size determination was then conducted for a given length of the 95% confidence interval for SCR. With this goal in mind, the relative length of that confidence interval was fixed at a given constant (e.g., 1, 0.75, 0.5, and 0.25). The above back-transformation method was used together with an additional binary search method aiming to find the required sample size. The search algorithm was implemented in the R software and the corresponding code is available from the authors upon request. When there is little information on SRR to help in planning a study, there is no clear analytical method to calculate the required sample size. Instead, data simulation would appear to be the best approach to the problem. Specifically, data simulation was used to study the expected length of the confidence intervals for SCR given a set of sample sizes (e.g., n = 250, 500, 1,000, 2,500, 5,000 and 10,000). The generation of each data set followed the same algorithm as described for the performance of the first sample size calculator. For each generated data set, the estimates of SCR and SRR were obtained via maximum likelihood methods. To obtain the precision of the SCR estimate associated with a given sample size, the 2.5% and 97.5% quantiles were calculated for the set of SCR estimates generated from data of a given transmission intensity. The absolute precision was defined as the absolute difference between these two quantiles, whereas the relative precision is the absolute precision divided by the SCR that generated the data. It is worth noting that the absolute precision (pr) of the SP estimates associated with the first sample size calculator can be rewritten as a function of $1/n$ given a pair of SCR and SRR, that is,
$$ \mathrm{pr}_{n|\rho_0}\left(\widehat{\pi}\right)=3.92\sqrt{\frac{\widehat{\pi}\left(1-\widehat{\pi}\right)}{n}}+\frac{1}{n}, $$
where the above equation results from the absolute difference between equations (9) and (10). Since this sample size calculator is based on a back-transformation relating SP to SCR, the precision of the SCR estimates can also be expressed as a function of $1/n$ (say, a function $g$). This function is highly non-linear and not analytically derivable, but in theory it can be approximated by the following Maclaurin expansion from mathematical calculus:
$$ \mathrm{pr}_{n|\rho_0}=g(0)+\frac{g'(0)}{1!}\times\frac{1}{n}+\frac{g''(0)}{2!}\times\frac{1}{n^2}+\frac{g'''(0)}{3!}\times\frac{1}{n^3}+\cdots $$
where $g'(0)$, $g''(0)$ and $g'''(0)$ are unknown but fixed constants associated with the function $g$ and its first, second and third derivatives evaluated at zero, respectively.
Based on this expansion, the precision of SCR estimates (\( \widehat{\lambda} \)) can therefore be modelled by a linear regression on 1/n, that is, $$ {\mathrm{pr}}_{n\Big|{\rho}_0}\left(\widehat{\lambda}\right)={\beta}_0+\frac{\beta_1}{n}+\frac{\beta_2}{n^2}+\frac{\beta_3}{n^3}, $$ where β0, β1, β2 and β3 are coefficients to be estimated from the set of SCR estimates obtained from the simulated data. This rationale was assumed to be applicable directly to the second sample size calculator where SRR is unknown. The above model was then fitted to the simulated precision data via maximum likelihood. The resulting adjusted correlation coefficient between simulated and predicted data was found to be >0.99, thus suggesting that the above model is indeed a good approximation of the relationship between the sample size and the expected precision of SCR estimates. The last step was to find the sample size associated with a given precision. This was done numerically by using a binary search algorithm. Performance of the back-transformation method The performance of the back-transformation method was first assessed in terms of the expected coverage of the 95% confidence intervals for SCR (Table 2). In most cases, the confidence intervals showed slight overcoverage (≤1%) with a few exceptions. In very low transmission settings (SCR = 0.0036), the confidence intervals show undercoverage for sample sizes ≤250 in Africa and ≤500 elsewhere. The most severe case of incorrect coverage is for samples of 50 individuals from African populations, where a strong overcoverage (0.998) is observed. Interestingly, in a non-African context, the confidence intervals show instead undercoverage (0.909) for the same sample size and transmission intensity. These opposing results might reflect marked differences in the underlying age structures, notably in terms of the proportion of children in one population and the other (see Figure 1C). At high transmission intensities (SCR = 0.29), the confidence intervals also show undercoverage for samples of 100 individuals or less in African settings. In practice, the problem of under- or overcoverage most likely results in confidence intervals that are narrower or wider than they should be in relation to a situation where the correct coverage is obtained for the constructed intervals. This has an impact on sample size determination in the sense that controlling the length of confidence intervals showing these problems might lead to smaller or greater sample sizes than required in reality. Coverage of confidence intervals based on the back-transformation algorithm assuming SRR = 0.017 Confidence intervals for SCR estimates were then evaluated in terms of their midpoints. The results suggest that these midpoints and the true SCR tend to be closer to each other as the sample size increases (Additional file 3: Figure A). Mathematically speaking, this results from approximating the back-transformation by means of a linear relationship between SP and SCR. The precise sample size where that begins to happen increases with the underlying transmission intensity. More specifically, sample sizes of about 400 and 2,250 individuals tend to provide central confidence intervals when SCR = 0.0036 and 0.29, respectively. For moderate sample sizes, say n < 500, the back-transformation method implies non-central confidence intervals for intermediate values of SCR.
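Returning to the regression model for precision as a function of 1/n described above, a minimal R sketch of the unknown-SRR calculator is given below. The precision values stand in for the simulation output described in the text and are purely illustrative; only the model form pr = β0 + β1/n + β2/n² + β3/n³ follows the paper, and ordinary least squares is used here as the Gaussian maximum-likelihood fit.

```r
# Sketch of the regression-based sample-size calculator for unknown SRR.
# The simulated precisions below are invented for illustration only.
n_sim  <- c(250, 500, 1000, 2500, 5000, 10000)
pr_sim <- c(1.40, 0.95, 0.66, 0.41, 0.29, 0.20)   # assumed relative precisions

# pr = b0 + b1/n + b2/n^2 + b3/n^3
fit <- lm(pr_sim ~ I(1 / n_sim) + I(1 / n_sim^2) + I(1 / n_sim^3))

# Invert the fitted curve numerically to find the n giving a target precision
n_for_precision <- function(target) {
  f <- function(n) predict(fit, newdata = data.frame(n_sim = n)) - target
  uniroot(f, interval = c(min(n_sim), max(n_sim)))$root
}

ceiling(n_for_precision(0.5))   # smallest n with fitted relative precision <= 0.5
```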
Since the exact distribution of SCR estimates is not known in general, it is unclear whether the non-central confidence intervals noted above are the ones providing the highest precision. Sample size calculations for known SRR Sample size determination was then conducted under the assumption of a known SRR (SRR = 0.017; Table 3). For the same relative precision, the sample sizes vary with transmission intensity. In particular, sample sizes increase from very low to intermediate transmission intensities and then decline after reaching a sufficiently high transmission intensity (i.e., when the SP curve becomes flat). As the precision increases, the difference between sample sizes from different transmission intensities increases dramatically. At one extreme, for a relative length of 1, sample sizes vary from 73 (SCR = 0.0324) to 315 (SCR = 0.0036) and from 67 to 248 in African and non-African settings, respectively. At the other extreme, sample sizes range from 976 to 4,968 (Africa) and from 890 to 3,558 (elsewhere) for a relative length of 0.25. Exact sample sizes and corresponding ranges for absolute SCR, EIR and SP by controlling the relative length of the 95% confidence interval for SCR under the assumption of SRR = 0.017 Similar sample sizes were found for African and non-African populations experiencing SCR = 0.0324 and 0.0964 (intermediate transmission) irrespective of the relative precision used. When SCR = 0.0964, the sample sizes for African populations are 79, 127, 262 and 976 individuals to ensure a relative precision of 1, 0.75, 0.5, and 0.25, respectively, whereas the corresponding ones for non-African settings are 90, 142, 288 and 1,059. However, African studies require larger sample sizes than their non-African counterparts for SCR = 0.0036 and 0.0108 and the other way around for SCR = 0.29. For the same transmission intensity, the requirement of a smaller or larger sample size in African studies in relation to others conducted elsewhere reflects the steepness of the SCR-SP curve. In other words, the use of the back-transformation implies that, when specifying a given confidence interval for SP, the confidence interval for SCR is going to be narrower or wider depending on the steepness of the SP curve. Mathematically, the steepness of that curve is given by the respective derivative. That derivative was found to be smaller in African than in non-African populations for SCR < 0.058 and the other way around for SCR > 0.058 (Additional file 3: Figure B). Available PR data for P. falciparum suggests that non-African populations are most likely to be at lower endemicity [25]. Note that, for SCRs in the vicinity of 0.058, where the two derivative functions cross each other, similar sample sizes are expected for both populations, a result compatible with the sample sizes provided for intermediate transmission intensities. Finally, the relationship between SCR and SP was here found to be similar between African and non-African populations when SRR = 0 and 0.017, respectively (Figure 1D). Therefore, the comparison between sample sizes for African and non-African studies can also be used to ascertain the bias in sample size estimates when assuming SRR = 0 in an African setting. The calculated sample sizes can also be used to help design studies including different populations (or sites).
Firstly, there is no theoretical impediment to using distinct sample sizes for populations known to differ in malaria endemicity. For example, a sample size of approximately 125 individuals will provide a relative precision of 1 for African sites experiencing an SCR of 0.0108. The same sample size leads to a relative precision of 0.75 for African populations with SCR = 0.0324 or 0.0969. Secondly, the expected confidence intervals for SCR can also provide clear insights into the underlying statistical power to compare sites with different transmission intensities. In particular, the sample sizes associated with a relative precision of 1 are enough to distinguish sites differing by at least one order of magnitude in EIR with 95% confidence (or with a 5% significance level in hypothesis-testing terminology). However, this distinction cannot be made with these sample sizes if a 99% confidence level is specified instead for any two sites differing by exactly one order of magnitude (Additional file 4). Thirdly, the expected confidence intervals for SCR are also instrumental in determining which transmission intensity range cannot be discriminated by the data. For example, a sample size of 79 individuals associated with a relative length of 1 and SCR = 0.0969 cannot distinguish African populations with EIR ranging from 4.18 to 29.17. Sample size calculations for unknown SRR Sample size calculations were then performed for the most common situation of unknown SRR. For low transmission settings (SCR ≤ 0.0108) and reasonably low sample sizes, there is a non-negligible probability of generating data sets leading to null SRR estimates (Table 4). More precisely, for SCR = 0.0036, one would need to sample at least 1,000 individuals to ensure that this chance is smaller than 10%, whereas for SCR = 0.0108, the same is achieved for sample sizes of no less than 500 individuals. In practice, these problematic data sets imply that the corresponding SCR estimates underestimate the true SCR that generated the data (Table 4). This underestimation can be explained by the fact that a few seronegative individuals may result from seroreversion events but are wrongly assumed never to have been exposed to malaria parasites under a null SRR estimate. For higher transmission settings, the occurrence of these problematic data sets is minimal because the generated data have a good balance between the total number of seropositive and seronegative individuals. Percentage of simulated data sets where SRR was estimated as 0 (%ρ=0) and the bias of the corresponding SCR estimates taken as a percentage of the true SCR Bias was defined as the difference between the mean of the corresponding estimates and the true value of SCR. The true SRR that generated the data sets was fixed at 0.017. Approximate sample sizes were calculated using data simulation coupled with a regression model relating precision to sample size (Table 5); see Additional file 5 for the respective simulation results. Three key observations can be highlighted. Firstly, as found for known SRR, the same qualitative relationship between sample size and transmission intensity was found irrespective of the population under study. More precisely, the sample sizes increase from very low to moderate transmission and decrease from then on.
Secondly, the necessity of estimating an additional parameter from the data brings more uncertainty to SCR estimation, thus increasing the sample sizes relative to those obtained for a known SRR. In this case, the difference in sample sizes between assuming a known or an unknown SRR decreases with transmission intensity. At one extreme, for SCR = 0.0036, the sample sizes for relative precisions of 1, 0.75, 0.50 and 0.25 are now 2,193, 5,127 and >10,000, in comparison to 315, 549, 1,163 and 4,968 assuming a known SRR. At the other extreme, for SCR = 0.29, the sample sizes do not differ substantially between unknown and known SRR: 213, 267, 542, and 1,927 (unknown SRR) versus 151, 233, 461, and 1,670 (known SRR). Thirdly, for the same relative precision, African studies are most likely to require fewer individuals than their counterparts conducted elsewhere. This is in clear contrast to the above results for known SRR, where African studies would only have smaller sample sizes at high transmission intensities. The explanation for this result is unclear but it might be related again to the underlying age distribution. When SRR is unknown, the bulk of the information on SCR seems to come from young individuals and, if so, African populations have a higher proportion of individuals in that age range. Finally, it is worth noting that, since the sample sizes were calculated using the same relative precision, the above-mentioned results for known SRR on comparing African to non-African studies are still valid for unknown SRR. Approximate sample sizes for controlling the precision of SCR estimates under the assumption of unknown SRR, where the true SRR was fixed at 0.017 In this paper, two sample size calculators for estimating antibody SCR were proposed. The first calculator is based on the assumption of a known SRR and, because of that, it implies smaller sample sizes in relation to a situation where SRR is assumed to be unknown. Obtaining a smaller sample size is important for studies where ethical issues, limited human and economic resources, or time constraints might be in place. However, this calculator requires fixing SRR at a given constant. In this regard, the current knowledge of SRR is still limited. Firstly, this parameter has only been measured indirectly by means of fitting the reverse catalytic models to data. Secondly, there might be age differences in seroreversion, but seropositivity data appears not to have enough information to detect them [1]. Therefore, considering SRR as a fixed constant is a pragmatic choice not only for data analysis but also for sample size calculation. Notwithstanding this pragmatism, current estimates of SRR [1,7,13] are of the same order of magnitude as the one used here and, therefore, the calculated sample sizes would appear to be reliable in general. However, for the sake of precision, sample size determination should ideally be performed using a predefined SRR estimate from a reliable source. An obvious source of information can be data from another population with similar malaria transmission intensity and host factors. Another possible source of information is to use existing data from past surveys taken from the same population, as reported in a recent study from Kenya [26]. Statistically speaking, a more coherent and elegant way to incorporate prior information in sample size determination is via Bayesian methodology, as done elsewhere for estimating proportions (or prevalences) [27,28].
Although appealing, this approach does not appear to have attracted much attention from malaria epidemiologists, as suggested by the small number of studies applying such an approach to data analysis. The basic idea underlying the first sample size calculator is to apply a back-transformation to the confidence interval for SP. The reliability of this method is then critically dependent not only on the statistical performance of the chosen SP confidence interval (in this case, the Wald Score corrected for continuity), but also on the degree of similarity between the age distribution used in the sample size determination and the one to be obtained upon sample collection. The Wald confidence interval with a continuity correction is one among more than twenty methods proposed to construct confidence intervals for a proportion [23]. A recent study compared seven of these methods in terms of sample size determination for estimating a proportion [10]. General guidelines are not easy to put forward because they depend not only on the different criteria for dealing with possible problems of under- or overcoverage of the corresponding confidence intervals, but also on the underlying proportion of the population under study. Notwithstanding this problem, these authors showed that, for a given absolute precision and a proportion between 0.01 and 0.90, the sample sizes from different methods do not deviate by more than 40 sampling units. This result is expected to hold true for SCR estimation, but might require sample sizes large enough that a linear approximation between SCR and SP can be invoked. With respect to the age distributions used here, official statistics showed a clear distinction between African and non-African populations. However, these age distributions refer to the respective overall populations and, thus, slight differences are expected to be seen between these whole-population-based distributions and the corresponding ones for the rural areas where malaria is more prevalent. Although a case-by-case approach is recommended, these differences are most likely to be related to a higher number of older individuals living in urban populations that, in general, have better access to health care. Other factors related to sampling feasibility might also introduce some bias in the sampled age distribution, such as using school surveys or collecting household-consented data, which led to a slight overrepresentation of school-aged children (5–18 years old) in recent studies [9,29,30]. Notwithstanding these putative differences between official and sampled age distributions, there is good agreement between the age distributions used here and the ones found across a series of recent cross-sectional studies [31-33]. Thus, the calculated sample sizes would appear to be reliable for planning future surveys not using age stratification. A natural follow-up of this work is to perform sample size determination for alternative sampling strategies that may necessitate targeting or oversampling specific age groups. In theory, stratified sampling, if done intelligently, is known to improve the precision of the ensuing estimates of the population prevalence [34]. Since the first sample size calculator is based on the confidence interval for SP, the sample sizes of age-adjusted sampling strategies should be decreased in relation to the ones calculated here. The optimal age stratification in terms of minimum sample size is one among other questions to be explored in the near future.
The second sample size calculator relates to the most general situation of an unknown SRR. Although general, this method only provides approximate sample sizes because it uses simulation coupled with a regression model predicting the expected precision as a function of the sample size. As expected, the additional requirement of estimating SRR results in larger sample sizes in comparison to the ones derived from a known SRR. The simulation results highlighted the possibility of generating data sets from low transmission settings where one does not have enough information to estimate the SRR, thus introducing significant negative biases in the SCR estimates. To minimize the occurrence of such situations, sample sizes of no less than 1,000 and 500 are recommended for EIR = 0.01 and 0.1, respectively. It is worth noting that there are many combinations of transmission intensities and relative precisions leading to sample sizes of more than 1,000 individuals. This relatively intensive sampling is particularly important for studying populations close to malaria elimination (SCR ≤ 0.0108). As a statistical advantage, a large sample size diminishes the chance of underestimating SCR due to null SRR estimates. However, large community-based surveys are usually seen as financially and logistically demanding enterprises, and school or health centre surveys may be more pragmatic. As with a conventional metric like parasite rate, the relative advantages and disadvantages of a relatively small community-based survey and a large study using a more convenient sampling approach need to be properly balanced. Additionally, the simulation algorithm for calculating precision assumes a population of infinite size. This assumption is reasonable in highly dense populations living in small areas where malaria transmission is expected to be more homogeneous. However, such settings are uncommon, with heterogeneity in population density and malaria transmission more likely to be the norm, especially at low transmission. The corresponding sample size will need to be inflated if one is to unravel subpopulations with subtle differences in malaria exposure, as observed in different studies [1,7,13]. Finally, a large sample size might not be feasible in intrinsically small populations, such as those living on islands [4,9]. In that case, the precision is in fact increased in relation to the one calculated from an infinite-size population and, thus, the proposed sample size calculator would lead to oversampling. However, if there are no dramatic cost restrictions, oversampling might compensate for eventual losses of precision due to missing data. It is also important to highlight that the SCR and SRR used here are for the merozoite surface protein-1 (MSP1) antigen. Another well-characterized antigen is the P. falciparum apical membrane antigen-1 (AMA1). Current SCR and SRR estimates are different for these two antigens due to their inherent immunogenicity and the half-life of their exposure to the immune system [8], with a higher SCR for AMA1 compared to its MSP1 counterpart. As a direct consequence of this observation, smaller sample sizes will be required for AMA1-based studies. There is relatively little data for other antigens, though variation in seroconversion rates has been reported [35,36]. In practice, to overcome issues around antigenic variation and differential population reactivity (e.g., due to genetics), a combination of antigens is used and sample sizes would be derived from the most immunogenic component.
In conclusion, this paper described relatively straightforward approaches to calculating the sample size for estimating SCR. The methods assume data derived from areas with stable transmission, standard population age distributions and community-based surveys with no age stratification. Several caveats relating to survey design, antibody reversion rates and antigen choice were presented to allow an appreciation of the complexity of the issue. Pragmatically however, the results suggest that SCR estimation can be readily incorporated into the design of most malariometric studies and this will be of particular use in populations with low malaria endemicity. Further work is needed to assess the sample size requirements for estimating any change in transmission with serology. EIR: entomological inoculation rate PR: parasite rate SCR: seroconversion rate (λ) SRR: seroreversion rate (ρ) Nuno Sepúlveda is funded by the Wellcome Trust grant number 091924 and Fundação para a Ciência e Tecnologia through the project Pest-OE/MAT/UI0006/2011. Chris Drakeley is funded by the Wellcome Trust grant number 091924. Additional file 1: Relationship between altitude and different malariometrics in northeast Tanzania: altitude versus EIR (A), altitude versus SCR (B), altitude versus PR 04 (C). Additional file 2: Age distributions of different countries from West Africa, East Africa, South America and Southeast Asia. Additional file 3: Midpoints of confidence intervals for SCR as a function of the sample size (A) and the derivative function of SP in relation to SCR (B). Additional file 4: Absolute SCR, EIR and SP ranges using the sample sizes shown in Table 3 and 99% confidence level for the respective intervals. Additional file 5: Results of the simulation study when SRR is unknown. The true SRR of the population was set at 0.017. NS developed the proposed methodology and wrote the manuscript. CD designed the project and provided real-world implications of this work. Both authors read, revised and approved the manuscript. London School of Hygiene and Tropical Medicine, Keppel Street, WC1E 7HT London, UK Center of Statistics and Applications of University of Lisbon, Faculdade de Ciências da Universidade de Lisboa, Bloco C6 - Piso 4, 1749-1016 Lisboa, Portugal Corran P, Coleman P, Riley E, Drakeley C. Serology: a robust indicator of malaria transmission intensity? Trends Parasitol. 2007;23:575–82. Bousema T, Youssef RM, Cook J, Cox J, Alegana VA, Amran J, et al. Serologic markers for detecting malaria in areas of low endemicity, Somalia, 2008. Emerg Infect Dis. 2010;16:392–9. Drakeley CJ, Carneiro I, Reyburn H, Malima R, Lusingu JPA, Cox J, et al. Altitude-dependent and -independent variations in Plasmodium falciparum prevalence in northeastern Tanzania. J Infect Dis. 2005;191:1589–98. Cook J, Kleinschmidt I, Schwabe C, Nseng G, Bousema T, Corran PH, et al. Serological markers suggest heterogeneity of effectiveness of malaria control interventions on Bioko Island, Equatorial Guinea. PLoS One. 2011;6:e25137. Arnold BF, Priest JW, Hamlin KL, Moss DM, Colford JM, Lammie PJ. Serological measures of malaria transmission in Haiti: comparison of longitudinal and cross-sectional methods. PLoS One. 2014;9:e93684. Bretscher MT, Supargiyono S, Wijayanti MA, Nugraheni D, Widyastuti AN, Lobo NF, et al.
Measurement of Plasmodium falciparum transmission intensity using serological cohort data from Indonesian school-children. Malar J. 2013;12:21. Cunha MG, Silva ES, Sepúlveda N, Costa SPT, Saboia TC, Guerreiro JF, et al. Serologically defined variations in malaria endemicity in Pará state, Brazil. PLoS One. 2014;9:e113357. Stewart L, Gosling R, Griffin J, Gesase S, Campo J, Hashim R, et al. Rapid assessment of malaria transmission using age-specific sero-conversion rates. PLoS One. 2009;4:e6083. Cook J, Reid H, Iavro J, Kuwahata M, Taleo G, Clements A, et al. Using serological measures to monitor changes in malaria transmission in Vanuatu. Malar J. 2010;9:169. Gonçalves L, de Oliveira MR, Pascoal C, Pires A. Sample size for estimating a binomial proportion: comparison of different methods. J Appl Stat. 2012;39:2453–73. Stresman G, Kobayashi T, Kamanga A, Thuma PE, Mharakurwa S, Moss WJ, et al. Malaria research challenges in low prevalence settings. Malar J. 2012;11:353. Bekessy A, Molineaux L, Storey J. Estimation of incidence and recovery rates of Plasmodium falciparum parasitaemia from longitudinal data. Bull World Health Organ. 1976;54:685–93. Drakeley CJ, Corran PH, Coleman PG, Tongren JE, McDonald SLR, Carneiro I, et al. Estimating medium- and long-term trends in malaria transmission by using serological markers of malaria exposure. Proc Natl Acad Sci U S A. 2005;102:5108–13. von Fricken ME, Weppelmann TA, Lam B, Eaton WT, Schick L, Masse R, et al. Age-specific malaria seroprevalence rates: a cross-sectional analysis of malaria transmission in the Ouest and Sud-Est departments of Haiti. Malar J. 2014;13:361. Williams BG, Dye C. Maximum likelihood for parasitologists. Parasitol Today. 1994;10:489–93. Bonnefoix T, Bonnefoix P, Verdiel P, Sotto JJ. Fitting limiting dilution experiments with generalized linear models results in a test of the single-hit Poisson assumption. J Immunol Methods. 1996;194:113–9. McCullagh P, Nelder JA. Generalized Linear Models. 2nd ed. London: Chapman & Hall; 1989. Hsieh FY, Bloch DA, Larsen MD. A simple method of sample size calculation for linear and logistic regression. Stat Med. 1998;17:1623–34. Novikov I, Fund N, Freedman LS. A modified approach to estimating sample size for simple logistic regression with one continuous covariate. Stat Med. 2010;29:97–107. Bosomprah S. A mathematical model of seropositivity to malaria antigen, allowing seropositivity to be prolonged by exposure. Malar J. 2014;13:12. Bødker R, Akida J, Shayo D, Kisinza W, Msangeni HA, Pedersen EM, et al. Relationship between altitude and intensity of malaria transmission in the Usambara Mountains, Tanzania. J Med Entomol. 2003;40:706–17. UN: a world of information. United Nations, New York. 2014. http://data.un.org/. Accessed 5 May 2014. Pires A, Amado C. Interval estimators for a binomial proportion: comparison of twenty methods. Revstat. 2008;6:165–97. Newcombe RG.
Two-sided confidence intervals for the single proportion: comparison of seven methods. Stat Med. 1998;17:857–72. Gething PW, Patil AP, Smith DL, Guerra CA, Elyazar IRF, Johnston GL, et al. A new world malaria map: Plasmodium falciparum endemicity in 2010. Malar J. 2011;10:378. Wong J, Hamel MJ, Drakeley CJ, Kariuki S, Shi YP, Lal AA, et al. Serological markers for monitoring historical changes in malaria transmission intensity in a highly endemic region of Western Kenya, 1994–2009. Malar J. 2014;13:451. Dendukuri N, Rahme E, Bélisle P, Joseph L. Bayesian sample size determination for prevalence and diagnostic test studies in the absence of a gold standard test. Biometrics. 2004;60:388–97. De Santis F. Using historical data for Bayesian sample size determination. J R Statist Soc A. 2007;170:95–113. Zeukeng F, Tchinda VHM, Bigoga JD, Seumen CHT, Ndzi ES, Abonweh G, et al. Co-infections of malaria and geohelminthiasis in two rural communities of Nkassomo and Vian in the Mfou health district, Cameroon. PLoS Negl Trop Dis. 2014;8:e3236. Bosman P, Stassijns J, Nackers F, Canier L, Kim N, Khim S, et al. Plasmodium prevalence and artemisinin-resistant falciparum malaria in Preah Vihear Province, Cambodia: a cross-sectional population-based study. Malar J. 2014;13:394. Drakeley CJ, Akim NI, Sauerwein RW, Greenwood BM, Targett GA. Estimates of the infectious reservoir of Plasmodium falciparum malaria in the Gambia and in Tanzania. Trans R Soc Trop Med Hyg. 2000;94:472–6. Maiga B, Dolo A, Touré O, Dara V, Tapily A, Campino S, et al. Human candidate polymorphisms in sympatric ethnic groups differing in malaria susceptibility in Mali. PLoS One. 2013;8:e75675. Stevenson JC, Stresman GH, Gitonga CW, Gillig J, Owaga C, Marube E, et al. Reliability of school surveys in estimating geographic variation in malaria transmission in the Western Kenyan highlands. PLoS One. 2013;8:e77641. Cochran WG. Sampling Techniques. 3rd ed. New York: John Wiley & Sons; 1977. Baum E, Badu K, Molina DM, Liang X, Felgner PL, Yan G. Protein microarray analysis of antibody responses to Plasmodium falciparum in western Kenyan highland sites with differing transmission levels. PLoS One. 2013;8:e82246. Ondigo BN, Hodges JS, Ireland KF, Magak NG, Lanar DE, Dutta S, et al. Estimation of recent and long-term malaria transmission in a population by antibody testing to multiple Plasmodium falciparum antigens. J Infect Dis. 2014;210:1123–32. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Publications of the Astronomical Society of Australia (9) Low-frequency integrated radio spectra of diffuse, steep-spectrum sources in galaxy clusters: palaeontology with the MWA and ASKAP S. W. Duchesne, M. Johnston-Hollitt, I. Bartalucci Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021 Published online by Cambridge University Press: 12 October 2021, e053 Galaxy clusters have been found to host a range of diffuse, non-thermal emission components, generally with steep, power law spectra. In this work we report on the detection and follow-up of radio halos, relics, remnant radio galaxies, and other fossil radio plasmas in Southern Sky galaxy clusters using the Murchison Widefield Array and the Australian Square Kilometre Array Pathfinder. We make use of the frequency coverage between the two radio interferometers—from 88 to $\sim\!900$ MHz—to characterise the integrated spectra of these sources within this frequency range. Highlights from the sample include the detection of a double relic system in Abell 3186, a mini-halo in RXC J0137.2–0912, a candidate halo and relic in Abell 3399, and a complex multi-episodic head-tail radio galaxy in Abell 3164. We compare this selection of sources and candidates to the literature sample, finding sources consistent with established radio power–cluster mass scaling relations. Finally, we use the low-frequency integrated spectral index, $\alpha$ ( $S_\nu \propto \nu^\alpha$ ), of the detected sample of cluster remnants and fossil sources to compare with samples of known halos, relics, remnants and fossils to investigate a possible link between their electron populations. We find the distributions of $\alpha$ to be consistent with relic and halo emission generated by seed electrons that originated in fossil or remnant sources. However, the present sample sizes are insufficient to rule out other scenarios. A broadband radio view of transient jet ejecta in the black hole candidate X-ray binary MAXI J1535–571 Jaiverdhan Chauhan, J. C. A. Miller-Jones, G. E. Anderson, A. Paduano, M. Sokolowski, C. Flynn, P. J. Hancock, N. Hurley-Walker, D. L. Kaplan, T. D. Russell, A. Bahramian, S. W. Duchesne, D. Altamirano, S. Croft, H. A. Krimm, G. R. Sivakoff, R. Soria, C. M. Trott, R. B. Wayth, V. Gupta, M. Johnston-Hollitt, S. J. Tingay Published online by Cambridge University Press: 07 September 2021, e045 We present a broadband radio study of the transient jets ejected from the black hole candidate X-ray binary MAXI J1535–571, which underwent a prolonged outburst beginning on 2017 September 2. We monitored MAXI J1535–571 with the Murchison Widefield Array (MWA) at frequencies from 119 to 186 MHz over six epochs from 2017 September 20 to 2017 October 14. The source was quasi-simultaneously observed over the frequency range 0.84–19 GHz by UTMOST (the Upgraded Molonglo Observatory Synthesis Telescope), the Australian Square Kilometre Array Pathfinder (ASKAP), the Australia Telescope Compact Array (ATCA), and the Australian Long Baseline Array (LBA). Using the LBA observations from 2017 September 23, we measured the source size to be $34\pm1$ mas. During the brightest radio flare on 2017 September 21, the source was detected down to 119 MHz by the MWA, and the radio spectrum indicates a turnover between 250 and 500 MHz, which is most likely due to synchrotron self-absorption (SSA).
By fitting the radio spectrum with an SSA model and using the LBA size measurement, we determined various physical parameters of the jet knot (identified in ATCA data), including the jet opening angle ( $\phi_{\rm op} = 4.5\pm1.2^{\circ}$ ) and the magnetic field strength ( $B_{\rm s} = 104^{+80}_{-78}$ mG). Our fitted magnetic field strength agrees reasonably well with that inferred from the standard equipartition approach, suggesting the jet knot to be close to equipartition. Our study highlights the capabilities of the Australian suite of radio telescopes to jointly probe radio jets in black hole X-ray binaries via simultaneous observations over a broad frequency range, and with differing angular resolutions. This suite allows us to determine the physical properties of X-ray binary jets. Finally, our study emphasises the potential contributions that can be made by the low-frequency part of the Square Kilometre Array (SKA-Low) in the study of black hole X-ray binaries. MWA and ASKAP observations of atypical radio-halo-hosting galaxy clusters: Abell 141 and Abell 3404 S. W. Duchesne, M. Johnston-Hollitt, A. G. Wilber Published online by Cambridge University Press: 05 July 2021, e031 We report on the detection of a giant radio halo in the cluster Abell 3404 as well as confirmation of the radio halo observed in Abell 141 (with linear extents $\sim\!770$ and $\sim\!850$ kpc, respectively). We use the Murchison Widefield Array, the Australian Square Kilometre Array Pathfinder, and the Australia Telescope Compact Array to characterise the emission and intervening radio sources from $\sim100$ to 1 000 MHz; power law models are fit to the spectral energy distributions with spectral indices $\alpha_{88}^{1\,110} = -1.66 \pm 0.07$ and $\alpha_{88}^{943} = -1.06 \pm 0.09$ for the radio halos in Abell 3404 and Abell 141, respectively. We find strong correlation between radio and X-ray surface brightness for Abell 3404 but little correlation for Abell 141. We note that each cluster has an atypical morphology for a radio-halo-hosting cluster, with Abell 141 having been previously reported to be in a pre-merging state, and Abell 3404 being largely relaxed with only minor evidence for a disturbed morphology. We find that the radio halo powers are consistent with the current radio halo sample and $P_\nu$–M scaling relations, but note that the radio halo in Abell 3404 is an ultra-steep–spectrum radio halo (USSRH) and, as with other USSRHs, lies slightly below the best-fit $P_{1.4}$–M relation. We find that an updated scaling relation is consistent with previous results and shifting the frequency to 150 MHz does not significantly alter the best-fit relations with a sample of 86 radio halos. We suggest that the USSRH halo in Abell 3404 represents the faint class of radio halos that will be found in clusters undergoing weak mergers. Diffuse galaxy cluster emission at 168 MHz within the Murchison Widefield Array Epoch of Reionization 0-h field S. W. Duchesne, M. Johnston-Hollitt, A. R. Offringa, G. W. Pratt, Q. Zheng, S. Dehghan Published online by Cambridge University Press: 18 March 2021, e010 We detect and characterise extended, diffuse radio emission from galaxy clusters at 168 MHz within the Epoch of Reionization 0-h field: a $45^{\circ} \times 45^{\circ}$ region of the southern sky centred on R.A. $= 0^{\circ}$, decl. $= -27^{\circ}$.
We detect 29 sources of interest: a newly detected halo in Abell 0141; a newly detected relic in Abell 2751; 4 new halo candidates and a further 4 new relic candidates; and a new phoenix candidate in Abell 2556. Additionally, we find nine clusters with unclassifiable, diffuse steep-spectrum emission as well as a candidate double relic system associated with RXC J2351.0-1934. We present measured source properties such as their integrated flux densities, spectral indices ( $\alpha$, where $S_\nu \propto \nu^\alpha$), and sizes where possible. We find several of the diffuse sources to have ultra-steep spectra including the halo in Abell 0141, if confirmed, showing $\alpha \leq -2.1 \pm 0.1$ with the present data making it one of the steepest-spectrum haloes known. Finally, we compare our sample of haloes with previously detected haloes and revisit established scaling relations of the radio halo power ( $P_{1.4}$) with the cluster X-ray luminosity ( $L_{\textrm{X}}$) and mass ( $M_{500}$). We find that the newly detected haloes and candidate haloes are consistent with the $P_{1.4}$– $L_{\textrm{X}}$ and $P_{1.4}$– $M_{500}$ relations and see an increase in scatter in the previously found relations with increasing sample size likely caused by inhomogeneous determination of $P_{1.4}$ across the full halo sample. We show that the MWA is capable of detecting haloes and relics within most of the galaxy clusters within the Planck catalogue of Sunyaev–Zel'dovich sources depending on exact halo or relic properties. SPT-CL J2032–5627: A new Southern double relic cluster observed with ASKAP S. W. Duchesne, M. Johnston-Hollitt, I. Bartalucci, T. Hodgson, G. W. Pratt Published online by Cambridge University Press: 18 January 2021, e005 We present a radio and X-ray analysis of the galaxy cluster SPT-CL J2032–5627. Investigation of public data from the Australian Square Kilometre Array Pathfinder (ASKAP) at 943 MHz shows two previously undetected radio relics at either side of the cluster. For both relic sources, we utilise archival Australia Telescope Compact Array (ATCA) data at 5.5 GHz in conjunction with the new ASKAP data to determine that both have steep integrated radio spectra ( $\alpha_\mathrm{SE} = -1.52 \pm 0.10$ and $\alpha_\mathrm{NW,full} = -1.18 \pm 0.10$ for the southeast and northwest relic sources, respectively). No shock is seen in XMM-Newton observations; however, the southeast relic is preceded by a cold front in the X-ray–emitting intra-cluster medium. We suggest the lack of a detectable shock may be due to instrumental limitations, comparing the situation to the southeast relic in Abell 3667. We compare the relics to the population of double relic sources and find that they are located below the current power–mass scaling relation. We present an analysis of the low-surface-brightness sensitivity of ASKAP and the ATCA; the excellent sensitivity of both allows heretofore undetected diffuse sources to be found, suggesting these low-power radio relics will become more prevalent in upcoming large-area radio surveys such as the Evolutionary Map of the Universe. Murchison Widefield Array detection of steep-spectrum, diffuse, non-thermal radio emission within Abell 1127 S. W. Duchesne, M. Johnston-Hollitt, Z. Zhu, R. B. Wayth, J. L. B. Line Diffuse, non-thermal emission in galaxy clusters is increasingly being detected in low-frequency radio surveys and images.
We present a new diffuse, steep-spectrum, non-thermal radio source within the cluster Abell 1127 found in survey data from the Murchison Widefield Array (MWA). We perform follow-up observations with the 'extended' configuration MWA Phase II with improved resolution to better resolve the source and measure its low-frequency spectral properties. We use archival Very Large Array S-band data to remove the discrete source contribution from the MWA data, and from a power law model fit we find a spectral index of –1.83±0.29 broadly consistent with relic-type sources. The source is revealed by the Giant Metrewave Radio Telescope at 150 MHz to have an elongated morphology, with a projected linear size of 850 kpc as measured in the MWA data. Using Chandra observations, we derive morphological estimators and confirm quantitatively that the cluster is in a disturbed dynamical state, consistent with the majority of phoenices and relics being hosted by merging clusters. We discuss the implications of relying on morphology and low-resolution imaging alone for the classification of such sources and highlight the usefulness of the MHz to GHz radio spectrum in classifying these types of emission. Finally, we discuss the benefits and limitations of using the MWA Phase II in conjunction with other instruments for detailed studies of diffuse, steep-spectrum, non-thermal radio emission within galaxy clusters. Science with the Murchison Widefield Array: Phase I results and Phase II opportunities – Corrigendum A. P. Beardsley, M. Johnston-Hollitt, C. M. Trott, J. C. Pober, J. Morgan, D. Oberoi, D. L. Kaplan, C. R. Lynch, G. E. Anderson, P. I. McCauley, S. Croft, C. W. James, O. I. Wong, C. D. Tremblay, R. P. Norris, I. H. Cairns, C. J. Lonsdale, P. J. Hancock, B. M. Gaensler, N. D. R. Bhat, W. Li, N. Hurley-Walker, J. R. Callingham, N. Seymour, S. Yoshiura, R. C. Joseph, K. Takahashi, M. Sokolowski, J. C. A. Miller-Jones, J. V. Chauhan, I. Bojičić, M. D. Filipović, D. Leahy, H. Su, W. W. Tian, S. J. McSweeney, B. W. Meyers, S. Kitaeff, T. Vernstrom, G. Gürkan, G. Heald, M. Xue, C. J. Riseley, S. W. Duchesne, J. D. Bowman, D. C. Jacobs, B. Crosse, D. Emrich, T. M. O. Franzen, L. Horsley, D. Kenney, M. F. Morales, D. Pallot, K. Steele, S. J. Tingay, M. Walker, R. B. Wayth, A. Williams, C. Wu Science with the Murchison Widefield Array: Phase I results and Phase II opportunities Published online by Cambridge University Press: 13 December 2019, e050 The Murchison Widefield Array (MWA) is an open access telescope dedicated to studying the low-frequency (80–300 MHz) southern sky. Since beginning operations in mid-2013, the MWA has opened a new observational window in the southern hemisphere enabling many science areas. The driving science objectives of the original design were to observe 21 cm radiation from the Epoch of Reionisation (EoR), explore the radio time domain, perform Galactic and extragalactic surveys, and monitor solar, heliospheric, and ionospheric phenomena. All together $60+$ programs recorded 20 000 h producing 146 papers to date. In 2016, the telescope underwent a major upgrade resulting in alternating compact and extended configurations. Other upgrades, including digital back-ends and a rapid-response triggering system, have been developed since the original array was commissioned. In this paper, we review the major results from the prior operation of the MWA and then discuss the new science paths enabled by the improved capabilities. 
We group these science opportunities by the four original science themes but also include ideas for directions outside these categories. The remnant radio galaxy associated with NGC 1534 S. W. Duchesne, M. Johnston-Hollitt Published online by Cambridge University Press: 22 April 2019, e016 We present new observations of the large-scale radio emission surrounding the lenticular galaxy NGC 1534 with the Australia Telescope Compact Array and Murchison Widefield Array. We find no significant compact emission from the nucleus of NGC 1534 to suggest an active core, and instead find low-power radio emission tracing its star-formation history with a radio-derived star-formation rate of 0.38±0.03 M⊙ yr⁻¹. The spectral energy distribution of the extended emission is well-fit by a continuous injection model with an 'off' component, consistent with dead radio galaxies. We find the spectral age of the emission to be 203 Myr, having been active for 44 Myr. Polarimetric analysis points to both a large-scale magneto-ionic Galactic foreground at +33 rad m⁻² and a component associated with the northern lobe of the radio emission at −153 rad m⁻². The magnetic field of the northern lobe shows an unusual circular pattern of unknown origin. While such remnant sources are rare, combined low- and high-frequency radio surveys with high surface-brightness sensitivities are expected to greatly increase their numbers in the coming decade, and combined with new optical and infrared surveys should provide a wealth of information on the hosts of the emission.
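The spectral indices quoted throughout the abstracts above follow the convention S_ν ∝ ν^α. As a minimal illustration of how such an index is obtained from multi-frequency flux-density measurements, the following R sketch fits a power law in log-log space; the frequencies and flux densities are invented for illustration and do not correspond to any of the sources listed.

```r
# Minimal sketch of a power-law spectral-index fit, S_nu ~ nu^alpha.
# The flux densities below are assumed values for illustration only.
nu_mhz <- c(88, 118, 154, 200, 943)          # frequencies in MHz
s_jy   <- c(1.20, 0.85, 0.62, 0.45, 0.08)    # flux densities in Jy (assumed)

fit   <- lm(log10(s_jy) ~ log10(nu_mhz))     # straight line in log-log space
alpha <- unname(coef(fit)[2])                # slope = spectral index
alpha                                        # about -1.1 for these assumed data
```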
Article | Open | Published: 16 April 2019 Signal transmission through elements of the cytoskeleton form an optimized information network in eukaryotic cells B. R. Frieden1 & R. A. Gatenby2 Scientific Reports volume 9, Article number: 6110 (2019) Nanoscale biophysics Multiple prior empirical and theoretical studies have demonstrated wire-like flow of electrons and ions along elements of the cytoskeleton but this has never been linked to a biological function. Here we propose that eukaryotes use this mode of signal transmission to convey spatial and temporal environmental information from the cell membrane to the nucleus. The cell membrane, as the interface between intra- and extra-cellular environments, is the site at which much external information is received. Prior studies have demonstrated that transmembrane ion gradients permit information acquisition when an environmental signal interacts with specialized protein gates in membrane ion channels, causing specific ions to flow into or out of the cell along concentration gradients. The resulting localized change in cytoplasmic ion concentrations and charge density can alter location and enzymatic function of peripheral membrane proteins. This allows the cell to process the information and rapidly deploy a local response. Here we investigate how information received and processed in and around the cell membrane is transmitted by elements of the cytoskeleton to the nucleus, where it can alter gene expression. We demonstrate that signal transmission by ion flow along the cytoskeleton is highly optimized. In particular, microtubules, with diameters of about 30 nm, carry coarse-grained Shannon information to the centrosome adjacent to the nucleus with minimum loss of input source information. And, microfilaments, with diameters of about 4 nm, transmit maximum Fisher (fine-grained) information to protein complexes in the nuclear membrane. These previously unrecognized information dynamics allow continuous integration of spatial and temporal environmental signals with inherited information in the genome. Survival and proliferation of living systems require them to continuously acquire, process, and respond to information1 from the environment regarding threats, opportunities, or (in the case of multicellular organisms) instructions from local tissue2. The cell membrane, as the interface between a cell and its environment, is the site at which much of this environmental information is received. Some environmental changes, such as perturbations in osmolarity, temperature and pH, typically affect all regions of the cell membrane equally and simultaneously. Other information, such as (1) the presence of a potential predator or food source or (2), in the tissue of a highly ordered multicellular organism, the space-time location of a target cell, must be spatially and temporally resolved. A prior study3 demonstrated that the steep transmembrane ion gradients in eukaryotes are critical for receiving and processing environmental information. Information is received when some perturbation causes the protein gates in transmembrane ion channels to open. The subsequent flow of one or more ions into or out of the cell along these pre-existing electro-chemical gradients can induce local changes (Fig. 1) that promote an adaptive (both fast and targeted) cellular response.
For example, an outflow of K+ (the dominant mobile cation in the cytoplasm) may reduce the shielding of fixed negative charges on the inner leaf of the cell membrane, enhancing electrostatic forces for attracting or repelling charges on nearby macromolecules. Furthermore, the activity of many enzymes is dependent on cation concentrations, so that a local fluctuation may increase or decrease their activity4. Information dynamics in and around the cell membrane and transmission to central organelles. The resting state of the membrane, with large transmembrane concentration gradients of K+, Na+ and Cl− is shown in the upper left panel. In the lower left panel, an environmental signal causes the gates in transmembrane K+ channels to open. This allows rapid flow of K+ out of the cell briefly altering the ion concentrations and charge balance in the cytoplasm. Prior studies have shown these ion dynamics alter localization and function of peripheral membrane proteins permitting analysis of and response to the environmental perturbation. This ion flux in the cytoplasm adjacent to the cell membrane can also enter the channel of adjacent microtubules which allows transmission of coarse-grained information to the centrosome (see Fig. 3). The ion flux can also change the electrical potential at the distal end of a microfilament allowing electron flow along the wire-like structure transmitting (Fig. 3) fine-grained information to a protein complex in the nuclear membrane which can alter gene expression and chromosomal location. The role of transmembrane ion movement has been recognized for decades as the mechanism of nerve conduction5. As expressed by the Hodgkin-Huxley (H-H) equations6, propagating depolarization waves along an axon are generated by sequential transmembrane flows of ions through membrane channels. There is one H-H equation for each ion. Our model proposes3 that the ion dynamics that produce a traveling depolarization wave in neurons are, in fact, a specialized application of membrane information dynamics that are universally obeyed by eukaryotes. Here we address the question of how environmental information that is transmitted through the cell membrane through ion fluxes is communicated internally to other components of the cell. We expect that many environmental perturbations (e.g. a localized mechanical deformation by a small environmental object) may only elicit and require a local response. However, some signals received at the membrane, because of their content, amplitude, or spatiotemporal frequency, may require a global (or 'coordinated') cellular response including increased energy production in the mitochondria7,8 and changes in gene expression9 or translation within the nucleus and endoplasmic reticulum10. We propose that information encoded in local fluxes of ions in the cytoplasm adjacent to the cell membrane can be transmitted to other organelles by elements of the intracellular cytoskeleton, its microtubules and microfilaments (Figs 2 and 3). These form organized networks throughout eukaryotic cells that are often complex and dynamic11 (Fig. 2). Distribution of microfilaments and microtubules in eukaryotes. Immunohistochemistry stains showing the distribution of microtubules (green) and microfilaments (red) within normal fibroblasts. Cytoskeletal structures as information conduits. An environmental perturbation that causes transmembrane flow of K+ out of the adjacent cytoplasm (Fig. 
2) generates a transient ion gradient along the length of the hollow core of a microtubule or a potential gradient along the microfilament, which forms a wire-like conductor for ion flow. Although the elements of the cytoskeleton are primarily involved in cellular shape and movement8, they can also serve as both biomechanical and electrical conduits of information that can alter gene expression9 and chromosomal location9,12. The ability of microfilaments and microtubules to conduct electrons and ions has been extensively documented13. Microfilaments are actin polymers that maintain high levels of negative surface charge, permitting highly dynamic interactions both within the microfilament and with cytoplasmic counterions14,15. A number of studies have demonstrated charge centers with corresponding counterion clouds along the microfilament axis, permitting ionic waves to propagate along its long axis15,16,17,18. This conductance takes a specific form: Cantiello et al. demonstrated that actin filaments propagate electrical signals via soliton waves18, so that the signal is virtually lossless. Ionic conduction along the length of microtubules has also been observed. The precise mechanism is not clear19 but may include diffusion along the central channel20 and ion redistribution along the microtubule as a result of variations in cation (Na, K, Ca) flux through nanopores along the microtubule wall21. Furthermore, microtubules are capable of amplifying ionic signal waves22. There is also experimental evidence that microtubules can regulate VDAC ion channels in the mitochondria23 and that microtubules can influence24 and be influenced by the extracellular matrix25 (ECM), so that there is a dynamic and ongoing exchange of information between the cytoskeleton and the cell exterior. Finally, recent work by Santelices et al.26 has experimentally demonstrated that microtubule responses to AC electric signals are frequency and ion concentration dependent. The cytoskeleton fibers are often arrayed in organized patterns along the radius of the cell from the nuclear membrane to the cell membrane (Fig. 1). Furthermore, the proximal ends of microtubules typically join together in the centrosome27 adjacent to the nuclear membrane, and microfilaments are often bound to multiprotein structures (e.g. the KASH-SUN complex28) in the outer nuclear membrane. Molecular tethers are, thereby, formed that have diverse functions, e.g. gene transcription, as well as transmission of forces for chromosome movements and nuclear migration. Furthermore, actin filaments directly alter nuclear pore ion channel activity, thus altering the ionic milieu of the nucleoplasm29. Hence we investigate the potential of microtubules and microfilaments to act as information conductors that link the cell membrane with central cellular structures including the nucleus, mitochondria and endoplasmic reticulum. These thereby form a distributed information network of conductors. Summary of the Biological Model. Our biological information model proposes that environmental perturbations can be detected by specialized gates on membrane ion channels. When the gate opens, ions specific to the channel will flow along concentration gradients into or out of the cell. Depending on the number of open channels, and the duration of that open state, the cytoplasm adjacent to the channels will undergo a rapid change in ion concentrations, charge density, and osmolarity.
Once the gates are closed, rapid re-equilibration will occur through diffusion from adjacent regions of the cytoplasm and activation of transmembrane ion pumps. When the transmembrane ion flows within one or a few channels (measured to be about 10⁵ ions/second per channel) are very brief, such as depicted by the Hodgkin-Huxley (H-H) equations, we expect the consequences of this change will be entirely localized to the region of the membrane. As described by H-H (see reference 5 below), this ion flux will produce local changes in location and function of peripheral membrane proteins that constitute a rapid local response to the perturbation, so that information transmission to other cellular organelles is not necessary. This is similar to autonomic functions in multicellular organisms that deal with isolated, transient perturbations through local reflexes. In contrast, we expect some environmental information will cause gate openings of multiple ion channels, and/or will maintain the channels in an open state for a longer period of time. This will increase the amplitude, time, and spatial distribution of changes in cytoplasmic ion concentrations. As we will see, it will also maximize the received information about the environment which, in turn, maximizes speed of signal flow and speed of decoding. We hypothesize that optimal cell function and survival will additionally necessitate that this information be transmitted to other organelles so as to elicit a more global cellular response. Examples are increased ATP production by the mitochondria and alterations in gene expression and translation. In general, optimal communication in complex networks will integrate coarse- and fine-grained dynamics. Coarse-grained conduits transmit information from larger temporal and spatial scales to allow more rapid and efficient processing of large-scale perturbations30,31. Fine-grained information dynamics focus upon the transmission of finer spatial and temporal scales32. Here, we hypothesize that local ion changes in the cytoplasm adjacent to open transmembrane channels can alter the terminal ends of local cytoskeletal structures. As shown in Figs 2 and 3, microtubules are relatively large11 (~30 nm in diameter) tubes with a hollow center. Empirical studies have demonstrated that signals in the form of ionic waves can be transmitted along the course of a microtubule22. We view information transmitted by the microtubule as "coarse grained"30 for three reasons. First, the microtubule is sufficiently large that it will primarily detect ion changes that occur within a cross-sectional region similar to its diameter (~30 nm). Second, most microtubules connect to the centrosome, which is typically positioned adjacent to the nuclear membrane with which it communicates33. While most recognized for its role in microtubule organization and spindle assembly, the centrosome is also associated with molecules involved in diverse cellular functions including cell-cycle progression, checkpoint control, ubiquitin-mediated degradation33,34 and protein kinase A (PKA)35, which has diverse regulatory functions in cell metabolism36. Thus, the centrosome, in effect, will tend to "average" signals from multiple regions over time and communicate this summation to the nucleus. Third, the microtubule has a number of additional electro-magnetic37,38 properties that could additionally integrate the activities of multiple microtubules within the cell cytoplasm or even extend into the extracellular matrix or adjacent cells.
In contrast, microfilaments (diameter ~4 nm) can communicate in fine detail39. Prior studies have demonstrated that microfilaments, which are composed of actin with a highly negative surface charge, are highly conductive and, in fact, have been used as nanowires that respond to osmotic and electric potential differences18,22,38,40,41. Because of their small diameter, we propose that microfilaments, in contrast to microtubules, will transmit fine-grained information showing fluctuations on the order of microseconds. Hence we propose that changes in cytoplasmic ion concentrations near the membrane end of the microfilament (while the other end retains the usual ion concentration distribution) will generate a potential difference across the microfilament "wire," resulting in ion flow. Microfilaments typically attach to multi-protein complexes in the nuclear membrane which have been shown to control both gene expression and chromosomal locations. Thus, in this component of the system, signal transmission is very rapid and "tunable" (in the sense that the electron flow depends on the potential along the length of the microfilament), and it is well resolved both spatially and temporally at both signal transmission and reception. In summary, we propose that elements of the cytoskeleton mediate biomechanical activity and can likewise carry information. In addition, both microtubules and microfilaments connect with mitochondria and the endoplasmic reticulum, and can course along the cytoplasm adjacent to the cell membrane. This allows a broad distribution of information signals to enter many regions of the cell (and even flow to other cells). However, for simplicity, we here focus specifically on information transmission between the cell membrane and the nuclear membrane.
Principles of Information Transmission
The dynamics governing information transmission have been extensively investigated in the pioneering work of Fisher, Shannon and others (see below). Ideal information transmission occurs when the receiver obtains precisely the information that was sent from its source. This is because any channel carrying a signal from a sender to a receiver cannot (by definition of a channel) convey more information than is contained in the source signal. Instead, there is inevitable loss of source Shannon information en route during the process of encoding, transmission, reception and decoding of the message. Therefore, minimizing such loss is the realistic goal of such a system. Note that this ignores the evolutionary cost to the system of acquiring the message. Instead, it tacitly assumes every such possible message to be acquired with equal cost, and focuses upon the issue of how well the system can respond to that message, regardless of cost. The assumption is that, realistically, the external system activities producing the message are not controllable by the observer. Hence, from the point of view of survival, evolutionary dynamics will optimize an organism's response to any signal that can affect its fitness. This requires a maximum likelihood estimation42,43 approach rather than, e.g., one seeking a posterior mean, since the latter would require knowing the probability of each such possible message. Meeting this goal of maximum likelihood estimation of external state properties in living systems requires: (a) minimization of the occurrence of data errors and (b), as shown below, maximization of the rate of received information.
Such a scenario is very beneficial for purposes of biological survival in the random, and possibly hostile, environment we are assuming. As will be seen, ion transmission through microtubules or microfilaments achieves this dual aim (a), (b). As found below, coarse-grained and fine-grained information acquisition are subject to two different principles of minimum information loss: (i) I − J = minimum and (ii) S_I − S_J = minimum, where the subscripts I and J refer, respectively, to information received and information at the environmental source. In this paper we focus on the environmental information that enters the cell in the form of a transmembrane flow of one or more ions. How well can that information be received? Also, can that information be transmitted optimally (as above) to other regions of the cell via the cytoskeleton network? In principle (i), I and J are, respectively, levels of temporal Fisher information when just received and just sent, over the continuous, total time interval (0, T) of flow; likewise in principle (ii) for the Shannon informations S_I and S_J. Then, by either principle, the information loss is minimized. The particular choice of principle, (i) or (ii), is governed by the fineness of the spatial structure forming the information conduit, as follows. In systems with true fine structure (order of 1–5 nm), information I is the ion's level of Fisher information44, with J the equivalent physical information45. Depending on the case, J could be the sum of all ion concentrations, mean times within the system, etc., affecting I. Or it could be the information as represented in a space conjugate to t, such as energy-momentum in quantum-relativistic problems46,47,48. Thus, principle (i), I − J = minimum, operates on the finest level of cellular structure. It has been termed46 EPI ("extreme physical information") and applies to the finest ion signal flow, through microfilaments of actin. Operating on this finest scale allows principle (i) to even give rise to quantum effects45, such as the Schrödinger wave equation (where, in particular, J is the mean kinetic energy47), although this is not the case here. But in all cases, J in principle (i) is the largest possible value of I. It results that principle (i) produces a maximized value for the Fisher information. Principle (i) is shown later to give rise to the well-known Hodgkin-Huxley equations. By comparison, principle (ii) is S_I − S_J = minimum, where S_I and S_J are, respectively, the levels of Shannon information (in bits) as received and sent. This applies when the system signals are coarse grained. It was first applied to telephone communication (real flow of electron charges through real, metal wires, represented by microfilaments of actin here) by C. E. Shannon49. For such coarse-grained microtubules (order of 25–100 nm), principle (ii) becomes ΔS ≡ S_I − S_J = minimum, one of minimum loss of Shannon information. Principle (ii) is, thus, a non-quantum, coarse-grained theory. It results in the highest possible delivered information from an arbitrary source message in the environment. In fact, principle (ii) directly derives as a coarse-grained version of principle (i) (see below). As a verification, one solution to problem (ii) of coarser ion flow is found to obey the classical Hodgkin-Huxley equations6 (see Eq. 5, below).
These calculations based on minimum information-loss principles (i) and (ii) indicate that real biological systems, such as the squid giant axon, obey ion flow rates delivering optimum levels of acquired information3. However, this ignores the issue of how the cells use information in the decoded messages to optimize survival and fitness2. To answer it, the benefits of acquiring and communicating each component of available information of threat or benefit in the environment would have to be weighed against the resources needed to maintain the molecular machinery necessary to cope with it. To do such a calculation would require knowing unknowable probabilities of unknown possible threats. This is, again, why we can only calculate the response to an arbitrary message of threat or benefit as in the preceding section. Ideally, to maintain critically important cell functions, loss of information in transmission must be minimized using a fine-grained network. Analogously, a person translating a book from ancient hieroglyphics to English does not have to do it perfectly (i.e. with zero error) to produce a generally useful translation for consumption by the general public. However, translation of information regarding, for example, dates and places may be essential for historians or archaeologists. Here, the translation must be as close to perfect as possible (i.e. with minimum error). Since principle (ii) of minimum loss of Shannon information derives (in Sec. 5) as a coarse-grained approximation to the EPI principle (i), and since all eukaryotes contain tubulin, probably all have likewise evolved out of the principle of minimum loss of Shannon information. As will be seen, this is a necessary condition for natural selection. In summary, we present a variational principle of biophysics that governs intracellular information flow based upon the granularity of the conduit through which the signal flows. We propose that the principle of minimum loss of Fisher information applies when fine-grained information is transmitted by electron flow through microfilaments (sometimes called 'actin wires'16). In contrast, the principle of minimum loss of Shannon information during transmission governs coarse-grained information carried by ion flow through microtubules. Importantly, however, we note that "minimum loss" also means, in a positive sense, maximum gain and thus can have the effect of increasing fitness. As we noted, these metrics of information transmission are directly related: when Fisher information undergoes coarse graining, it becomes proportional to Shannon information. This is, then, an important bridge between the discrete and continuous aspects of living systems.
Fisher Information
All information forms used in this paper ultimately arise out of Fisher information. By definition, this obeys44,45,47,48,50:
$$I = 4\int dt\,\Big[\frac{da}{dt}\Big]^{2}, \qquad a = a(t) \equiv \sqrt{p(t)}, \qquad (1)$$
where p = p(t) is the probability density on the time coordinate t for the ion and a(t) is defined as its (real) amplitude. All integrals are over a fixed time interval 0 ≤ t ≤ T of observation. For now, we notice that the form of Eq. (1) is also that of a Lagrangian L in an integral \(\int dt\,L\), and this is conventionally varied as δ\(\int dt\,L=0\) to derive45 the quantum mechanics obeyed by the amplitude law a(t) in scenarios of fine structure 1–5 nm or, alternatively, the classical flow of p(t) in coarser structure of size 25–100 nm. The emphasis in this paper is on the latter (classical domain) behavior.
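As a purely illustrative numerical aside (not part of the original analysis), Eq. (1) can be evaluated for any discretized density p(t). The sketch below uses an assumed Gaussian arrival-time density, for which the analytic value of I is 1/σ²; the grid, interval and density are placeholders chosen only to make the check self-contained.

```python
import numpy as np

# Assumed, illustrative arrival-time density: a Gaussian bump on (0, T).
T = 10.0
t = np.linspace(0.0, T, 2001)
step = t[1] - t[0]
sigma, t0 = 1.0, 5.0
p = np.exp(-((t - t0) ** 2) / (2.0 * sigma**2))
p /= p.sum() * step              # normalize so that the integral of p dt = 1

a = np.sqrt(p)                   # amplitude a(t) = sqrt(p(t)) as in Eq. (1)
da_dt = np.gradient(a, t)        # numerical derivative da/dt
I = 4.0 * np.sum(da_dt**2) * step  # Eq. (1): I = 4 * integral of (da/dt)^2 dt

print(f"Fisher information I = {I:.4f} (analytic 1/sigma^2 = {1.0/sigma**2:.4f})")
```

For a density concentrated on a narrower time window (smaller σ), the computed I grows, reflecting the 'slope' or 'roughness' interpretation of Eq. (1) discussed below.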
The information I defined by Eq. (1) is also conventionally used to measure, by the relation44,45,48
$$e_{min}^{2} = 1/I, \qquad (2)$$
the minimum possible mean-squared error e² of any estimate of the true time t_0 based on its repeated measurements t_n = t_0 + error_n, n = 1, …, N. Thus, I has the significance of defining how well a quantity on the continuum (here of time values t) can be known. (Notice that in Eq. (2), the larger the information I, the smaller the rms error e_min, as one would expect of an information measure.) Relation (2) has been the basis for the usual past uses of Fisher information I. By comparison, over the recent two decades another, completely different use of the information I has arisen. Its aim is not merely to measure particular values t_n of an observable phenomenon, as above, but rather to estimate the actual probability law p(t) governing t in the unknown phenomenon (of physics47, econophysics, biology51, cancer growth52, chemistry, etc.45,53). The present paper extends these calculations to problems of ion or electron transmission using principles (i) or (ii).
Its Physical Manifestation J, EPI Principle
This is done by using a principle of extreme "physical information" I − J,
$$I - J = \mathrm{minimum}, \qquad (3)$$
obtained through variation of p(t). Although both I and J are metrics of information, they differ in meaning. From the factor \({(\frac{da}{dt})}^{2}\) in Eq. (1), information I governs the amount of 'slope' or 'roughness' in both the probability law p(t) and its amplitude law a(t). Also, by Eq. (2), I governs how accurately an unknown coordinate t can be known. The other quantity J in (3) defines the meaning of the information I as a physical quantity. Their difference I − J is called the 'physical information,' since it measures how much net information I − J contributes to the physical effect. Hence Eq. (3) expresses a principle of extreme net physical information (EPI). In our cellular scenario it, in fact, represents a scenario of minimum loss of temporal Fisher information. What does this mean? The quantity I − J is always convex, so it defines a minimum value when varied mathematically. Such minimization means I ≈ J, i.e. the theoretical information tends to equal its physical manifestation. In fact, in quantum scenarios45,48 I = J, meaning the entire physical manifestation J of the information I (here the energy) goes into forming the observable information I. Meanwhile, in our case of cellular information flow, J is input as the mean time of ion flow from the cytoplasm adjacent to the cell membrane to its other end at the nuclear membrane. This choice of J was that of Hodgkin and Huxley6. Hence, principle (3) provides the solution for the amplitudes a = a(t) in signal transmission on a fine-structured time scale. The principle (3) of Extreme Physical Information or EPI has been used to derive most of textbook physics46 and some laws of biology54, including a prediction of power-law growth \(m(t) = m(0)t^{\phi}\) for early-growth stages of breast cancer52,55, where the constant \(\phi = 1.618034\) has been confirmed in multiple studies using mammography data.
Transition to Principle of Minimum Kullback-Leibler (KL) Divergence
We demonstrated above that the Fisher information-based EPI principle (3) is directly applicable to problems of unknown ion rate functions p(t) or a(t) on the continuum of t, typified by spatial observations on the nanometer (fine) scale 1–10 nm (case of microfilaments). For this scale of problem the information I was found to be Fisher's, given by Eq. (1).
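Relation (2) above can also be checked numerically in the simplest textbook setting. The sketch below is illustrative only and assumes Gaussian measurement errors of known standard deviation σ, so that N repeated measurements carry Fisher information I = N/σ² and Eq. (2) predicts a minimum mean-squared error of σ²/N, attained here by the sample mean (the maximum-likelihood estimator in this case).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, t0, N, trials = 1.0, 5.0, 25, 20000

# Fisher information of N Gaussian measurements of a location parameter:
I = N / sigma**2
bound = 1.0 / I                      # Eq. (2): e_min^2 = 1/I = sigma^2/N

# Monte Carlo estimate of the mean-squared error of the sample mean.
samples = t0 + sigma * rng.standard_normal((trials, N))
mse = np.mean((samples.mean(axis=1) - t0) ** 2)

print(f"Bound 1/I = {bound:.4f}, observed MSE of the sample mean = {mse:.4f}")
```

The observed mean-squared error matches the bound 1/I closely, illustrating why a larger I corresponds to a smaller achievable rms error e_min.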
However, as noted above, information transmission via microtubules instead requires the ions to flow along a much larger structure (~30 nm in diameter). This is a coarse-grained problem, and our goal is to find the 'coarse-grained' rates p(t), q(t). To do this, we examine a transition from the fine-scaled principle (3) to the corresponding coarse-grained problem. We start by using the identity p(t) = a²(t) to express Eq. (1) in terms of the probability p(t),
$$I = \int dt\,\frac{(dp/dt)^{2}}{p}. \qquad (4)$$
However, for application of the EPI principle (3) to this discrete problem of the soma, we need the form of the information (4) in which the differentials dt are regarded as small but finite, "granular" changes Δt. The latter are defined as follows. In principle, the finest time interval dt in Eq. (4) is of size zero. However, in practice it is the finite time interval Δt during which the ion is located within the cell membrane. This is the time between the instant that the ion just enters the cell membrane, from the outside environment, and the time that it just emerges from the cell membrane and enters the cell cytoplasm. On this basis, how large an information value I is delivered by the ion i to the observer during this time interval? Let it obey the probability q_i(t) in the outside environment. Then the probability p_i(t) after entering the cytoplasm a short time Δt later is p_i(t + Δt). Then the information I in Eq. (4) is easily found to approximately obey45:
$$I = \frac{2}{\Delta t^{2}}\int_{0}^{\infty} dt\, p_{i}(t)\ln\big(p_{i}(t)/q_{i}(t)\big) \equiv \frac{2}{\Delta t^{2}}\, H_{KL}(p_{i}\,\|\,q_{i}) = \mathrm{minimum}, \quad i = 1,\ldots,N. \qquad (5)$$
This is the information delivered to the observer as limited by the granular nature of the ion and the medium (the cell membrane) it passes through. The notation H_KL(p_i||q_i) denotes the Kullback-Leibler (K-L) divergence53,56 (or 'distance') between the probabilities p_i(t) and q_i(t) (these are, equivalently, ion flow rates, since the random variable is the time t). The factor 2/Δt² in (5) shows that observing the time with a finer (smaller) "grain size" Δt gives greater information I in a cell structure such as a microtubule. This is intuitively correct. More importantly, Eq. (5) also generally represents the loss in Shannon information53 for an ion i passing through the membrane regarded as an information channel. By (5) the loss is, then, explicitly minimized in this coarse-grained scenario. In summary, the information I is identically the loss in Shannon information45 during the flow along the microtubule, and this loss is minimized. This is central to the information-based approach here. In summary, both principles (i) and (ii) define scenarios of minimum loss of information, although of different types: Fisher information in (i), and Shannon information in (ii). Finally, even if the grain size Δt is not very small, by the approximate nature of principle (5) it may be used simply as a first-order (in the change Δt) approximation to the general principle (3). That is, a Shannon information-based calculation (ii) is always at least an approximate solution to the Fisher-information-based one (i).
Insertion of Prior Knowledge
The minimum in Eq. (3) is to be obtained in the presence of prior knowledge J about the trajectories. These must, e.g., obey normalization. But the key prior knowledge was found by Hodgkin and Huxley6 to be the mean times τ_i, i = 1, …, N for ions in the system. The minimum value in Eq. (5) is also constrained by this knowledge.
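Before these constraints are added, Eq. (5) can be made concrete with a short numerical sketch. The discretized rate functions p_i(t) and q_i(t) below are assumed placeholders (truncated exponentials), used only to show how the K-L divergence is evaluated on a grid and how the factor 2/Δt² makes a finer grain size deliver more information.

```python
import numpy as np

def information_eq5(p, q, t, dt_grain):
    """Eq. (5): I ~ (2 / dt_grain^2) * KL(p || q) for discretized rates p, q on grid t."""
    step = t[1] - t[0]
    mask = (p > 0) & (q > 0)                      # avoid log(0) on the grid
    kl = np.sum(p[mask] * np.log(p[mask] / q[mask])) * step
    return 2.0 * kl / dt_grain**2

t = np.linspace(0.0, 10.0, 2001)
step = t[1] - t[0]
q = np.exp(-t / 2.0); q /= q.sum() * step         # assumed rate outside the membrane
p = np.exp(-t / 1.5); p /= p.sum() * step         # assumed rate after crossing into the cytoplasm

for dt_grain in (0.5, 0.1, 0.02):                 # coarser -> finer membrane transit "grain"
    print(f"dt = {dt_grain:5.2f}  ->  I = {information_eq5(p, q, t, dt_grain):.3f}")
```

The K-L term itself is fixed by the two rate functions; only the prefactor changes with Δt, which is the intuitive grain-size effect noted above.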
Using these mean times as additive Lagrange constraints J during the minimization of I, principle (5) becomes one of constrained KL (Kullback-Leibler) divergence53
$$I - J = \int_{0}^{\infty} dt\, p_{i}(t)\,\ln\big(p_{i}(t)/q_{i}(t)\big) + \lambda_{1}\Big[\int dt\, p_{i}(t) - 1\Big] + \lambda_{2}\Big[\int dt\, q_{i}(t) - 1\Big] + \lambda_{3}\Big[\int dt\, t\, p_{i}(t) - \tau_{i}\Big] = \min. \qquad (6)$$
The first right-hand term is the KL divergence in (5). The terms in λ1 and λ2 express normalization of the probability densities p_i(t) and q_i(t) or, more generally, the total presence of ion i over the time interval (0, T). Notice that Eq. (6) is of the same general form as the EPI principle (3) and, hence, is called the KLmin principle. It has been found that this KLmin principle derives the equations governing the actual ion trajectories p_i(t), i = 1, …, N, i.e. the Hodgkin-Huxley equations6. A further significance is that, because of the required minimum value in principle (5), the Shannon information is maximized for the transit of each ion i (i.e. its information loss is minimized). Although, as was noted, this theory holds for transmembrane ion flows carrying a propagating signal along neurons, it holds as well for ion flux within microtubules. Transmission of information from the cell membrane to the nucleus and other central organelles following ligand binding to membrane receptors has been extensively studied. Typically, the receptor binding triggers (often through intermediate proteins) phosphorylation of messenger proteins in the cytoplasm that then travel to other organelles. These information pathways are widely investigated and play important roles in cellular function as well as cancer development. However, we note that signal transmission via a 3-dimensional random walk (notably, not obeying principle (5) of maximum acquired information level) inevitably suffers significant degradation of information regarding the time and location (on the cell membrane) of the perturbation57. Under many circumstances, this lossy information is, nevertheless, sufficient to elicit a necessary response58,59 and has the benefit of low energy consumption60. However, acquiring even more information about the location and time of a perturbation may be essential under some circumstances, such as locating a predator or a potential food source by single-cell eukaryotes, or moving to a correct cellular location within the highly ordered 3-dimensional structure of tissue in a multicellular organism. In prior work, we have proposed that eukaryotes use the difference in ion concentrations in the extra-cellular and intra-cellular fluid as a mechanism to receive, process, and respond to a wide range of perturbations in the environment. Indeed, the value of maintaining this membrane information receiver is evident in studies that show about 40% of the energy budget in eukaryotic cells61 is consumed by the ATP-dependent membrane ion pumps that maintain the gradient. We have proposed that cellular information dynamics include ion-specific transmembrane channels which permit communication in the form of ion flows between the environment and the cell. This occurs when the specialized gate (there are well over a hundred different types of gates) is induced to open by the environmental perturbation. Thus, the ion flow in toto actually represents an optimized response (5) to the nature of the perturbation, as well as its time and place. The subsequent local processing and response are described above.
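As an illustrative check of the constrained minimization in Eq. (6) (not taken from the original derivation), taking the variation with respect to p_i(t) while holding q_i fixed and absorbing normalization constants gives the standard exponentially tilted form p_i(t) ∝ q_i(t) exp(−λ3 t); the multiplier λ3 is then fixed so that the mean transit time equals τ_i. The sketch below solves for λ3 numerically under assumed placeholder choices of q_i(t) and τ_i.

```python
import numpy as np
from scipy.optimize import brentq

t = np.linspace(0.0, 20.0, 4001)
step = t[1] - t[0]

q = np.exp(-t / 4.0); q /= q.sum() * step      # assumed prior rate q_i(t) on the grid
tau = 3.0                                       # assumed target mean transit time tau_i

def tilted(lam):
    """Exponentially tilted, renormalized density p(t) ~ q(t) * exp(-lam * t)."""
    w = q * np.exp(-lam * t)
    return w / (w.sum() * step)

def mean_gap(lam):
    p = tilted(lam)
    return (p * t).sum() * step - tau           # zero when the mean-time constraint holds

lam3 = brentq(mean_gap, -5.0, 5.0)              # solve for the Lagrange multiplier lambda_3
p_opt = tilted(lam3)
print(f"lambda_3 = {lam3:.4f}, resulting mean transit time = {(p_opt * t).sum() * step:.4f}")
```

The resulting p_opt is the rate function that loses the least K-L information relative to q while honoring the mean-time prior knowledge, which is the qualitative content of the KLmin principle above.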
In experimental studies of a highly specialized application of this principle (the ion flow carrying a traveling wave in neurons), it is clear that the change in local ion concentrations is rapidly dissipated through diffusion from adjacent regions of the cytoplasm and activation of the transmembrane ion pumps. However, in normal cell function, we anticipate that this information, although transient and spatially localized, will in many situations need to be communicated to other components of the cell. Here we focus on the potential role of microfilaments and microtubules in this communication network. These long linear polymers are often arrayed in organized patterns and frequently observed to be oriented along the radius of the cell from the nuclear membrane to the cell membrane. The potential for both microtubules and microfilaments to transmit signals via ion conduction has been extensively investigated both theoretically and experimentally. Here we propose that extracellular information that is received by specialized gates in membrane ion channels and transmitted through transmembrane ion fluxes can be propagated by microfilaments and microtubules to the nucleus and other internal organelles such as the mitochondria. The specific properties of microfilaments and microtubules allow them to carry fine-grained or coarse-grained information, respectively. In the case of microtubules, which typically converge on the centrosome, this coarse-grained information allows rapid assessment of the overall state of the environment over time. In the case of microfilaments, which typically link to protein complexes on the nuclear membrane, the fine-grained information can convey to the nucleus detailed information about the spatial and temporal variations of the environment. In prior work3,66,67 we found these information dynamics to be highly optimized. We note that such optimization, obeying information maximization principles (i) or (ii), gives rise to optimally fast and effective responses to environmental challenges and benefits. Hence, these are a necessary condition for natural selection to have taken place (and to continue). Finally, we note that our analysis ignores possible communication between individual microfilaments and microtubules. In reality, microfilaments and microtubules frequently interface through direct physical contact and cross-linking proteins. It is also likely that the elements of the cytoskeleton can interact in complex ways with molecular transduction pathways. This suggests a complex network for signal transmission and analysis that permits rich information dynamics, likely augmenting and modifying the better-known and more widely studied information flow found in molecular pathways (e.g. the MAPK pathway) that carry information from ligand binding at a membrane receptor to the nucleus. Farnsworth, K. D., Nelson, J. & Gershenson, C. Living is Information Processing: From Molecules to Global Systems. Acta Biotheor 61, 203–222, https://doi.org/10.1007/s10441-013-9179-3 (2013). Gatenby, R. A. & Frieden, B. R. Information theory in living systems, methods, applications, and challenges. Bull Math Biol 69, 635–657, https://doi.org/10.1007/s11538-006-9141-5 (2007). Gatenby, R. A. & Frieden, B. R. Cellular information dynamics through transmembrane flow of ions. Sci Rep 7, 15075, https://doi.org/10.1038/s41598-017-15182-2 (2017). Page, M. J. & Di Cera, E. Role of Na+ and K+ in enzyme function. Physiol Rev 86, 1049–1092, https://doi.org/10.1152/physrev.00008.2006 (2006). Hodgkin, A. L.
The relation between conduction velocity and the electrical resistance outside a nerve fibre. J Physiol 94, 560–570 (1939). Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117, 500–544 (1952). Boldogh, I. R. & Pon, L. A. Interactions of mitochondria with the actin cytoskeleton. Biochim Biophys Acta 1763, 450–462, https://doi.org/10.1016/j.bbamcr.2006.02.014 (2006). Foissner, I. Microfilaments and microtubules control the shape, motility, and subcellular distribution of cortical mitochondria in characean internodal cells. Protoplasma 224, 145–157, https://doi.org/10.1007/s00709-004-0075-1 (2004). Zheng, B., Han, M., Bernier, M. & Wen, J. K. Nuclear actin and actin-binding proteins in the regulation of transcription and gene expression. FEBS J 276, 2669–2685, https://doi.org/10.1111/j.1742-4658.2009.06986.x (2009). Gurel, P. S., Hatch, A. L. & Higgs, H. N. Connecting the cytoskeleton to the endoplasmic reticulum and Golgi. Curr Biol 24, R660–R672, https://doi.org/10.1016/j.cub.2014.05.033 (2014). Alberts, B. Molecular biology of the cell. (Garland Pub., 1983). Starr, D. A. & Fridolfsson, H. N. Interactions between nuclei and the cytoskeleton are mediated by SUN-KASH nuclear-envelope bridges. Annu Rev Cell Dev Biol 26, 421–444, https://doi.org/10.1146/annurev-cellbio-100109-104037 (2010). Woolf, N. J., Priel, A. & Tuszynski, J. A. Nanoneuroscience: structural and functional roles of the neuronal cytoskeleton in health and disease. (Springer, 2009). Tang, J. X. & Janmey, P. A. The polyelectrolyte nature of F-actin and the mechanism of actin bundle formation. J Biol Chem 271, 8556–8563 (1996). Tuszynski, J. A., Portet, S., Dixon, J. M., Luxford, C. & Cantiello, H. F. Ionic wave propagation along actin filaments. Biophys J 86, 1890–1903, https://doi.org/10.1016/S0006-3495(04)74255-1 (2004). Patolsky, F., Weizmann, Y. & Willner, I. Actin-based metallic nanowires as bio-nanotransporters. Nat Mater 3, 692–695, https://doi.org/10.1038/nmat1205 (2004). Hunley, C., Uribe, D. & Marucho, M. A multi-scale approach to describe electrical impulses propagating along actin filaments in both intracellular and in vitro conditions. RSC Advances 8, 12017–12028, doi:0.1039/C7RA12799E (2018). Lin, E. C. & Cantiello, H. F. A novel method to study the electrodynamic behavior of actin filaments. Evidence for cable-like properties of actin. Biophys J 65, 1371–1378, https://doi.org/10.1016/S0006-3495(93)81188-3 (1993). Sataric, M. V., Ilic, D. I., Ralevic, N. & Tuszynski, J. A. A nonlinear model of ionic wave propagation along microtubules. Eur Biophys J 38, 637–647, https://doi.org/10.1007/s00249-009-0421-5 (2009). Odde, D. Diffusion inside microtubules. Eur Biophys J 27, 514–520 (1998). Shen, C. & Guo, W. Ion Permeability of a Microtubule in Neuron Environment. J Phys Chem Lett 9, 2009–2014, https://doi.org/10.1021/acs.jpclett.8b00324 (2018). Priel, A. & Tuszynski, J. A. A nonlinear cable-like model of amplified ionic wave propagation along microtubules. Europhysics Letters 83, 68004, https://doi.org/10.1209/0295-5075/83/68004 (2008). Rostovtseva, T. K. & Bezrukov, S. M. VDAC inhibition by tubulin and its physiological implications. Biochim Biophys Acta 1818, 1526–1535, https://doi.org/10.1016/j.bbamem.2011.11.004 (2012). Baskin, T. I. & Gu, Y. Making parallel lines meet: transferring information from microtubules to extracellular matrix. Cell Adh Migr 6, 404–408, https://doi.org/10.4161/cam.21121 (2012). Putnam, A. 
J., Schultz, K. & Mooney, D. J. Control of microtubule assembly by extracellular matrix and externally applied strain. Am J Physiol Cell Physiol 280, C556–564, https://doi.org/10.1152/ajpcell.2001.280.3.C556 (2001). Santelices, I. B. et al. Response to Alternating Electric Fields of Tubulin Dimers and Microtubule Ensembles in Electrolytic Solutions. Sci Rep 7, 9594, https://doi.org/10.1038/s41598-017-09323-w (2017). Petry, S. & Vale, R. D. Microtubule nucleation at the centrosome and beyond. Nat Cell Biol 17, 1089–1093, https://doi.org/10.1038/ncb3220 (2015). Starr, D. A. & Fischer, J. A. KASH 'n Karry: the KASH domain family of cargo-specific cytoskeletal adaptor proteins. Bioessays 27, 1136–1146, https://doi.org/10.1002/bies.20312 (2005). Prat, A. G. & Cantiello, H. F. Nuclear ion channel activity is regulated by actin filaments. Am J Physiol 270, C1532–1543, https://doi.org/10.1152/ajpcell.1996.270.5.C1532 (1996). Rangan, A. V., Cai, D. & McLaughlin, D. W. Quantifying neuronal network dynamics through coarse-grained event trees. Proc Natl Acad Sci USA 105, 10990–10995, https://doi.org/10.1073/pnas.0804303105 (2008). Watanabe, H. Coarse-grained information in formal theory of measurement. Measurement 38, 295–302, https://doi.org/10.1016/j.measurement.2005.09.005 (2005). Lindgren, K. An Information-Theoretic Perspective on Coarse-Graining, Including the Transition from Micro to Macro. Entropy 17, 3332–3351, https://doi.org/10.3390/e17053332 (2015). Doxsey, S. Re-evaluating centrosome function. Nat Rev Mol Cell Biol 2, 688–698, https://doi.org/10.1038/35089575 (2001). Mamon, L. A. Centrosome as "a brain" of an animal cell. Tsitologiia 50, 5–17 (2008). Diviani, D. & Scott, J. D. AKAP signaling complexes at the cytoskeleton. J Cell Sci 114, 1431–1437 (2001). Turnham, R. E. & Scott, J. D. Protein kinase A catalytic subunit isoform PRKACA; History, function and physiology. Gene 577, 101–108, https://doi.org/10.1016/j.gene.2015.11.052 (2016). Pokorny, J., Hasek, J. & Jelinek, F. Electromagnetic field of microtubules: effects on transfer of mass particles and electrons. J Biol Phys 31, 501–514, https://doi.org/10.1007/s10867-005-1286-1 (2005). Priel, A., Ramos, A. J., Tuszynski, J. A. & Cantiello, H. F. A biopolymer transistor: electrical amplification by microtubules. Biophys J 90, 4639–4643, https://doi.org/10.1529/biophysj.105.078915 (2006). Schaub, S., Meister, J. & Verkhovsky, A. B. Computational approach to evaluate actin network structure and dynamics based on the optical microscopy. Molecular Biology of the Cell 13, 194a–194a (2002). Cantiello, H. F., Patenaude, C. & Zaner, K. Osmotically induced electrical signals from actin filaments. Biophys J 59, 1284–1289, https://doi.org/10.1016/S0006-3495(91)82343-8 (1991). Woolf N. J., P. A. & Tuszynski, J. A. In Nanoneuroscience. Biological and Medical Physics, Biomedical Engineering (Springer, 2009). Nguyen, H. D., Yoshihama, M. & Kenmochi, N. New maximum likelihood estimators for eukaryotic intron evolution. PLoS Comput Biol 1, e79, https://doi.org/10.1371/journal.pcbi.0010079 (2005). Kawashita, S. Y., Sanson, G. F., Fernandes, O., Zingales, B. & Briones, M. R. Maximum-likelihood divergence date estimates based on rRNA gene sequences suggest two scenarios of Trypanosoma cruzi intraspecific evolution. Mol Biol Evol 18, 2250–2259, https://doi.org/10.1093/oxfordjournals.molbev.a003771 (2001). Fisher, R. A. Statistical Methods and Scientific Inference, 2nd ed., (Oliver and Boyd, 1959). Frieden, B. R. & Frieden, B. R. 
Science from Fisher information: a unification. 2nd edn, (Cambridge University Press, 2004). Frieden, B. R. Physics from Fisher information: a unification. (Cambridge University Press, 1998). Frieden, B. R. Fisher information as the basis for the Schrödinger wave equation. American Journal of Physics 57, 1004–1009, https://doi.org/10.1119/1.15810 (1989). Frieden, B. R. & Gatenby, R. A. Exploratory data analysis using Fisher information. (Springer, 2007). Shannon, C. E. The mathematical theory of communication. 1963. MD Comput 14, 306–317 (1997). Frieden, B. R. Probability, statistical optics, and data testing: a problem solving approach. 3rd edn, (Springer, 2001). Frieden, B. R. & Gatenby, R. A. Power laws of complex systems from extreme physical information. Phys Rev E 72, https://doi.org/10.1103/PhysRevE.72.036101 (2005). Gatenby, R. A. & Frieden, B. R. Application of information theory and extreme physical information to carcinogenesis. Cancer Research 62, 3675–3684 (2002). Kullback, S. & Leibler, R. A. On information and sufficiency. Annals of Mathematical Statistics 22, 79–86, https://doi.org/10.1214/aoms/1177729694 (1951). Frieden, B. R. & Gatenby, R. A. Information Dynamics in Living Systems: Prokaryotes, Eukaryotes, and Cancer. Plos One 6, https://doi.org/10.1371/journal.pone.0022085 (2011). Gatenby, R. A. & Frieden, B. R. Information dynamics in carcinogenesis and tumor growth. Mutat Res 568, 259–273, https://doi.org/10.1016/j.mrfmmm.2004.04.018 (2004). Gatenby, R. & Frieden, B. R. Investigating Information Dynamics in Living Systems through the Structure and Function of Enzymes. Plos One 11, https://doi.org/10.1371/journal.pone.0154867 (2016). Cunningham, J. et al. Intracellular electric field and pH optimize protein localization and movement. PLoS One 7, e36894, https://doi.org/10.1371/journal.pone.0036894 (2012). Motlagh, M. S., Khuzani, M. B. & Mitran, P. On Lossy Joint Source-Channel Coding in Energy Harvesting Communication Systems. Ieee T Commun 63, 4433–4447, https://doi.org/10.1109/Tcomm.2015.2472012 (2015). He, L. D., Han, D. F. & Wang, X. F. Optimal control over a lossy communication network based on linear predictive compensation. Iet Control Theory A 8, 2297–2304, https://doi.org/10.1049/iet-cta.2014.0322 (2014). Barr, K. C. & Asanovic, K. Energy-aware lossless data compression. Acm T Comput Syst 24, 250–291, https://doi.org/10.1145/1151690.1151692 (2006). Guppy, M., Kong, S. E., Niu, X., Busfield, S. & Klinken, S. P. Method for measuring a comprehensive energy budget in a proliferating cell system over multiple cell cycles. J Cell Physiol 170, 1–7, doi:10.1002/(SICI)1097-4652(199701)170:1<1::AID-JCP1>3.0.CO;2-S (1997). This study was supported by the National Cancer Institute Physical Science Oncology Center Grants U54 CA143970 and by the NCI CCSG Support Grant P30 CA076292. College of Optical Science, University of Arizona, Tucson, AZ, USA B. R. Frieden Department of Integrated Mathematical Biology, Moffitt Cancer Center, Tampa, FL, USA R. A. Gatenby Search for B. R. Frieden in: Search for R. A. Gatenby in: B.R.F. developed the information-based models, R.A.G. provided the biological context and hypothesis, both authors wrote the main manuscript, R.A.G. prepared the figures. Correspondence to R. A. Gatenby.
Role of organizations in preparedness and emergency response to flood disaster in Bangladesh
Babul Hossain1
The present study examines the role of organizations and assesses their assistance regarding preparedness and emergency response for flood-affected people. This study has used a mixed-method approach. Flood-affected people were the respondents who evaluated the organizational role. The study reveals that before the flood in 2017, to minimize loss and damage, the GOs played a very effective role in arranging preparatory meetings and preparing shelter centers, and the NGOs played a very useful role in arranging awareness-building training. During the emergency period, the GOs played a comparatively better role in providing CI sheets, agricultural assistance and cash money as relief for establishing housing facilities and emergency support. The NGOs played a relatively better role in providing food, water, clothes, medicine, etc. This study also puts forward complications such as limited sanctions, disruption of communication, lack of awareness among sufferers, and overlapping. The findings of this study would be significant for disaster policymakers and civil societies. Bangladesh is one of the most flood-prone countries in the world. Every year thousands of people are affected by floods, and they become hopeless after losing family members, relatives, and entire properties. Therefore, immediately after such incidents, organizational support is very much needed. Almost 80% of the country consists of the flood plain of the GBM basins and some other minor rivers (Brouwer et al. 2007). Among the flood-prone areas of Bangladesh, char land (island) is the most susceptible to frequent flooding, and the dwellers of these places are the most vulnerable. It is estimated that approximately 4–5% of the population of Bangladesh lives on char land, which covers almost 7200 km² (Kelly and Chowdhury 2002; Mondal et al. 2015; Paul and Islam 2015). At least 58 major floods hit Bangladesh from 1954 to 2017; 20,039 people died and millions were affected by catastrophic floods over the last 47 years (1970–2017) (Relief 2013). Among these, the floods of 1966, 1987, 1988, 1998 and 2007 were the most devastating, affecting millions of people in Bangladesh. Likewise, the country faced one of its worst river flooding events on 12 August 2017, with record high water levels, and the Ministry of Disaster Management and Relief (MoDMR) stated that the floods were the worst in at least 40 years (Philip et al. 2019). They disrupted people's normal life immensely. Thirty-one (31) of the 64 districts of Bangladesh were severely affected (Management 2007), around 6.9 million people were affected (Philip et al. 2019), and 121 people died (Nirapad 2017). Apart from these, crop damage, disruption of communication and education, health issues, food problems, drinking water crises, and massive displacement made people in the flood-prone areas even more vulnerable. Thus, a flood disaster cannot be prevented from happening, but its impacts can be reduced if effective measures are taken in time to reduce its severity, frequency, and extent. In Bangladesh, in response to flood disasters at the national level, the Ministry of Disaster Management and Relief (MoDMR) and the Department of Disaster Management coordinate overall disaster management efforts.
At every district, sub-district, and union level, there are disaster management committees. In 1997, the Ministry of Food and Disaster Management (MoFDM) issued a Standing Order on Disaster (SOD), which describes in detail the duties and responsibilities of all the concerned government agencies for disaster management (Hasan et al. 2013). On the other hand, NGOs operate at the grass-roots level with communities and local organizations as partners and take a participatory approach to development planning. It is known that NGOs enjoy higher operational flexibility as they are relatively free from bureaucratic structures and red tape; they are able to respond and adapt quickly and easily and often work on behalf of the neediest, poorest and most vulnerable groups (ISDR 2006). Several NGOs work in the study villages, such as SKS, GUK, SHACO, BRAC, ASA, etc. (Portal 2018). Organizational activities during the 2017 flood were no exception: organizations immediately came forward to help the flood-affected people and extended their helping hand in different phases by taking numerous measures. Therefore, it is apparent that although government and non-government organizations took various initiatives to support the flood-affected people, these supports were not sufficient to cover the entire disaster-prone area or all the disaster-affected people. Moreover, is their role fully effective in restoring the affected people's normal course of activities and sustaining their livelihoods? Therefore, the present study evaluates the role of organizations regarding preparedness and emergency response and also tries to identify the loopholes and drawbacks of organizations' responses to the 2017 flood-affected people in terms of people's perceptions.
Flood research scenarios: a brief review
In the context of flood disaster research, most studies have focused on the concept of vulnerability (Burton et al. 1993; Cannon 1994; Cannon et al. 2003; Few 2003; Ibrahim et al. 2017; Nur and Shrestha 2017; Smit and Pilifosova 2003). Similarly, a few studies have emphasized the characteristics of the flood, the geographical location, the geomorphological setting and the cultural, political and socio-economic conditions of the people at risk of flooding (Alcántara-Ayala 2002; Choudhury 2005; Few 2003; Mutton and Haque 2004; Thompson and Penning-Rowsell 1994; Zaman 1989, 1993). Several studies have also focused on the effects of floods and population displacement (Del Ninno et al. 2003; Dun 2011; Gray and Mueller 2012; Islam et al. 2010; Paul and Rasid 1993; Zaman 1996) and the impact of climate change on future flooding (Kafi 2010; Mirza 2002). There are some studies on the evaluation of existing structural flood dam projects (Hoque and Siddique 1995; Hossain and Sakai 2008), flood mitigation activities and their effectiveness (Shajahanl 2001), and the evaluation of flood management strategies, including institutional measures (Adnan 1991; Brammer 1990). Besides, many studies have been conducted on people's indigenous/local knowledge and coping strategies in response to flooding (Hossain et al. 2019; Howell 2003; Islam et al. 2018; Paul and Hossain 2013; Paul and Routray 2010). Thus, most of the accessible literature is based on socioeconomic perspectives to determine the impact and magnitude of floods and adjustment strategies.
On the other hand, some articles were found regarding people's perceptions of flood management and mitigation measures taken by GOs and NGOs (S. K. Paul and Hossain 2013) and the role of NGOs in flood disaster management (Matin and Taher 2001). After reviewing the literature, it can be said that although much research has been done in the field of flood disaster management, few studies have addressed the organizational role in preparedness and emergency response to flood-affected people. Therefore, there is a clear research gap here. To fill this research gap, the study aims to analyze the preparedness and emergency response to the flood disaster carried out by the GOs and NGOs and to evaluate people's perceptions of and satisfaction with their performance.
Disaster management settings in Bangladesh: GO-NGO collaboration
The relationship between the GOs and the NGOs is a talking point in Bangladesh. After the devastating cyclone of 1970 and the liberation war in 1971, the social structure was changed, and the economy was destroyed. Several non-government organizations were set up at that time to undertake the massive task of rehabilitating the war-ravaged country. In independent Bangladesh, NGOs have emerged and grown very fast. It is often said that Bangladesh is a very fertile land for NGOs. Since the beginning of the 1970s, Bangladesh has virtually become a laboratory for the design and experimentation of different rural development models and approaches. Various agencies of the Government of Bangladesh, international donors, and non-government organizations have experimented with different models and approaches of institution-building for rural and local-level development (Aminuzzaman 2000). These organizations have also been actively involved in emergency response, recovery and rehabilitation activities to manage disasters over the years. Among international organizations, CARE, the Islamic Development Bank (IDB), the United Nations, UNICEF and WFP, and among non-governmental organizations, World Vision, Oxfam Australia, Muslim Aid, ASA, Proshika, BRAC, SKS and GUK were especially involved in relief and rehabilitation activities. Besides, Grameen Bank, Proshika, BRAC, ASA, etc. also operate microcredit programs that act as a social safety net during a disaster (Hossain 2012). The Comprehensive Disaster Management Program, with the technical assistance of UNDP, is presently in operation for the integration of disaster and development concepts as well as for improving coordination between GO and NGO efforts in response to disasters at all levels. A review of the collaboration indicates three major types of arrangements: (a) subcontract; (b) joint implementation; and (c) government as a financier of NGO projects (Bank 1996). The most common collaboration is the sub-contracting arrangement, where government agencies enter into contracts with NGOs. Joint implementation under a partnership arrangement, where NGOs are involved either as co-financiers or joint executing agencies with the Government, is least practiced. In the area of micro-credit, there is an emerging trend for the Government to finance NGOs' credit operations. The government, NGOs, people, and friends around the world worked together to minimize the impact of the calamity through preparedness and mitigation measures as well as coping with the aftermath.
The government and non-government organizations worked in a coordinated manner to bring relief to suffering people (Blair 2005). The task was great, and scope remained for improving the situation. Based on different studies and documents, it was found that the role of NGOs in disaster management in Bangladesh is significant. Presently, NGOs, supported by numerous private donors and different parts of CARE's international organization, emphasize preventive measures as a strategy of disaster risk reduction. Bangladesh has many flood-prone areas, of which Rangpur division is one of the most vulnerable. Accordingly, Rangpur division has been selected purposively, and a multistage area sampling method has been applied for selecting subsequent administrative units as well as the ultimate sampling unit, the village. Subsequent administrative units were selected purposively according to the severity of the flood in terms of people affected, death toll, losses, and damages. Within Rangpur division, Gaibandha district is adjacent to the river basin and is the district most susceptible to frequent flooding. From Gaibandha district, two Upazilas (sub-districts) have been selected. The study area selection procedure is shown in Fig. 1. The study areas are riverine islands (chars) in the Brahmaputra River and are geographically isolated from the mainland (see Fig. 2). The socio-economic condition of the study area is very fragile. Household income depends on precarious sources such as share-cropping, agricultural day labor, livestock rearing, small business, fishing, boating, etc. Floods and riverbank erosion continually destroy their crops, croplands and homesteads. About 80% of char dwellers live in ultra-poverty (Islam 2018).
Fig. 1: Selection process of the study area
Fig. 2: Map of the study area location (using ArcGIS 10.5)
Data collection, analysis and interpretation
This study has used a mixed-method approach. Data for the study have been collected from both primary and secondary sources. Along with the secondary literature review, a questionnaire survey, observation, focus group discussions (FGDs) and key informant interviews (KIIs) have been conducted in the two study villages that were most severely affected by the catastrophic flood in 2017. Flood-affected people were the respondents who evaluated the organizational role. The researcher prepared two sets of structured interview schedules with closed- and open-ended questions to collect data based on the objectives of the study. At first, respondents were selected among the flood-affected households (1843 households in total) of the two selected villages by using simple random sampling. According to the prevalent culture of Bangladesh, almost all households are headed by males. Therefore, respondents were the heads of the households. For determining a representative sample size, the researcher has used the following statistical formula (Kothari 2004), evaluated numerically in the short sketch below: $$ n=\frac{z^2\times p\times q\times N}{e^2\left(N-1\right)+{z}^2\times p\times q}=319 $$ Focus group discussion participants were recruited from the household questionnaire survey. Respondents to the survey who had been flooded were asked if they would be willing to participate in a focus group discussion to explore some of the issues in greater depth. Each FGD group had 6–12 participants alongside two or three members of the project team.
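For readers who wish to reproduce the sample-size calculation, the sketch below evaluates the formula above under commonly assumed survey parameters (z = 1.96 for 95% confidence, p = q = 0.5 for maximum variability, e = 0.05 margin of error, N = 1843 households); these parameter values are assumptions on our part, since the source reports only the resulting n.

```python
import math

# Assumed, conventional survey parameters (not stated explicitly in the study).
z, p, e, N = 1.96, 0.5, 0.05, 1843
q = 1 - p

# Sample-size formula (Kothari 2004) as given in the text.
n = (z**2 * p * q * N) / (e**2 * (N - 1) + z**2 * p * q)
print(math.ceil(n))   # rounds up to 319, matching the reported sample size
```

Under these assumptions the formula yields approximately 318, which rounds up to the reported sample of 319 respondents.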
Interviews and focus group discussions were held in the Bengali language, and each session was recorded on audiotape. After completing the interview sessions, the collected data were analyzed according to the objectives of the study. Quantitative data have been analyzed by using statistical tools, i.e. Statistical Package for Social Sciences (SPSS) software version 20 and Microsoft Excel. Qualitative data have been interpreted through textual and document analyses. A five-point Likert scale has been used to analyze data on the attitudes, experiences, and satisfaction of the disaster-affected people (Likert 1932). An organization can play a vital role in an organized way, overcoming its limitations, when disaster-affected people need essential help before a disaster and during the emergency period. There are two types of organizations in the country (GOs and NGOs) which play their role for the betterment of the affected people at the crucial moment. In Bangladesh, sustainable development is closely linked with disaster reduction, which needs an effective disaster management plan. Within the disaster management plan, preparedness and emergency measures are taken by the government and the non-government organizations to minimize the loss and damage caused by natural disasters like floods and to support restoration. Alongside government organizations, NGOs take part in flood reduction activities, rescue, and recovery operations. Besides this, food and non-food support is also provided at the time of a flood disaster. Though we cannot prevent flood disasters, early initiatives and preparation can lessen the loss and damage. They can help in saving lives and properties threatened by flood disasters; and if a flood occurs, an emergency response should be mounted for the severely affected people when they become helpless and hopeless. This study has also been designed to portray the role of various organizations during preparedness and emergency situations.
Role of organizations in preparedness
Disaster preparedness consists of measures taken before a disaster occurs to ensure an adequate response to its effects, including relief and recovery from its consequences, and to eliminate the need for last-minute activity. Various agencies and individuals conduct flood disaster preparedness activities. Everyone has a distinctive role to play and unique accountabilities to fulfill when a flood disaster strikes. The aims of flood disaster preparedness are realizing what to do in a disaster, knowing how to do it, and having the right tools to do it successfully. This challenging process may take years before a suitable level is achieved, and retaining such a level is a continuing effort (Coppola 2006). This section shows the present condition of the preparedness system in the study area under the following heads and subheads.
Preparedness regarding receipt of information
Along with government organizations, non-government organizations play a very effective role in managing the flood disaster before the flood happens. Information is an essential tool of the preparedness system in flood disaster management in Bangladesh. By providing accurate information in time, vulnerable and affected people can save their lives and properties. There is a huge number of staff working in GOs and NGOs on disaster-related activities in diverse territories in Bangladesh.
They undertake different kinds of measures to collect disaster-related information from numerous sources. Then, if there is a possibility of a disaster such as a flood, service providers actively work to disseminate the disaster information accurately to the vulnerable areas. Organizations spread several types of information before the flood disaster happens, such as the likely intensity of the upcoming flood, what types of measures should be taken, where to take shelter, and so on. Dissemination of information regarding the flood disaster, the medium of information, and the time span of receiving information have been analyzed as follows. Table 1 shows that 93.7% of respondents received information before the devastating flood in 2017 occurred in the study area, and only 6.3% of respondents did not obtain any information about the flood. The table data also show that the information receiving rate is almost the same (91.7% and 95.7%) in the two study villages. So, it can be said that the government and non-government organizations were very prompt in providing information. It is also worth mentioning that some of the respondents who did not receive information before the disaster were outside the study area.
Table 1 Dissemination, source and time span of receiving information
Table 1 also shows that 16.1% of respondents received information by radio, and only 8.7% of respondents received information by TV at first. The highest number of respondents (52.2%) received information from the announcements of the Union Disaster Management Committee (UDMC) made through loudspeakers, 15.7% of respondents were informed about the flood disaster by their neighbors, and only 7.4% of respondents received flood-related information from other sources. It is significant that announcing through loudspeakers is a popular and important way to provide flood-related information to the people of the flood-prone area, as most rural people have no electronic device for collecting information and remain engaged in their daily activities in agriculture and other fields. One of the key informant interviewees expressed the same opinion, but he added that the local mosque (a Muslim place of prayer) could serve as the announcement center as part of preparedness just before the disaster. By receiving accurate information in time, vulnerable and affected people can save their lives and properties. Table 1 shows that 85.3% of respondents (out of those who received information) received it in time and, as a result, could prepare to save their valuable assets and their family members. The remaining 14.7% of respondents received information late. These respondents claimed they could not get information in time owing to the lack of electronic appliances such as radio and television, and some reported that they were away from home when the information was announced by the organizations. For this reason, they could not take timely measures. Table 1 also shows that the higher share of respondents receiving information in time was from South Ullah (90.4% out of 156), compared with Kalur Para (79.7% out of 143). The findings show that most respondents (85.3%) from the study areas received information in time, which also indicates the promptness of the government and non-government organizations; however, because of the severity of the flood, the affected people still lost almost everything.
Readiness concerning shelter centers
Shelter centers are essential for the vulnerable people of the disaster-prone area to protect themselves and their valuable goods and documents during any devastating natural disaster such as a flood or cyclone, although the number of shelter centers is not sufficient for the affected people in the study area. There are only a few shelter centers in Rangpur division, which are not enough considering the vast population (Statistics 2016). Generally, the Government uses educational institutions as shelter centers in the disaster-prone areas owing to the lack of purpose-built shelter centers. Non-government organizations, on the other hand, set up temporary shelter centers on high ground and in local NGO offices. Organizations therefore tried to create awareness among the people about the severity of the flood disaster and tried to motivate them to take shelter in the shelter centers. Table 2 shows that 60.5% of respondents took shelter during the 2017 flood disaster, whereas 39.5% of respondents did not go to the shelter center. Table 2 also shows that the highest proportion of respondents going to the shelter center (60.8%) was from Kalur Para.
Table 2 Distribution of the respondents by whether or not they went to the shelter center
Among the respondents who did not go to the shelter center (39.5%, 118 respondents), 34.7% did not go because of the long distance. There were some other reasons for not going to the shelter center: 9.3% cited weak shelter centers, 13.6% a negative attitude, 22.9% lack of information, and 19.5% taking the initiative too late (see Fig. 3). Field observation showed that the char villages are approximately 4 km from the mainland, but most of the permanent shelters are even farther from the study area. Usually, most people do not take shelter because of their attachment to home; they wait until the floodwater hits their household. Local people often say, "Dead or alive, we won't go anywhere else from our father's and grandfather's homestead; rather, we will die here." Therefore, according to the people's perception, more shelter centers need to be built to minimize the distance.
Fig. 3: Reasons for not going to the shelter center. Source: Field Survey, June–September 2018 and December 2018–April 2019
For better preparedness, preparatory meetings, training programs, preparation of shelter centers and medical preparation are essential to save people and properties from an uncertain flood disaster. The following table shows the people's perceptions regarding the effectiveness of initiatives taken by the organizations. Preparatory meetings are very important for knowing the real situation of the general people of the disaster-prone area. Various government and non-government organizations arrange preparatory meetings from time to time. Table 3 shows that most of the respondents (53.0%) think that the government organizations can arrange the preparatory meeting successfully under the Union Disaster Management Committee (UDMC), in which NGOs also have representatives, whereas 33.9% of respondents think that NGOs can arrange the meeting successfully and the remaining 13.1% think that both types of organizations can arrange the meeting successfully. Respondents defined success here on the basis of the presence of committee members and the implementation of the decisions taken in the meeting.
Table 3 Initiatives taken by organizations for preparedness effectively

Training programs create awareness among the people of flood-prone areas about how to protect themselves and their valuable goods when a flood disaster hits. Organizations generally arrange training programs that include the general people of the study area who are at risk of flood disasters. According to Table 3, 21.6% of respondents think that the government organizations arrange the training program effectively, the largest share (54.9%) believe that the non-government organizations do so, and the remaining 23.5% feel that both types of organizations can arrange the training program effectively with the participation of the local vulnerable people. It is notable that most respondents hold a positive attitude towards the non-government organizations with respect to arranging training programs.

Before any flood disaster, the shelter centers should be kept ready for the poor disaster-affected people whose own shelter is weak and vulnerable. Generally, the government organizations are more responsible for preparing the shelter centers. Based on the affected people's perception, most respondents (43.9%) argued that the government organizations play an effective role in preparing the shelter centers, 33.9% argued that the non-government organizations do so, and the remaining 22.2% stated that both types of organizations play their role effectively. There are some exceptional cases in which the NGOs' role is particularly effective in preparing the shelter centers; South Ullah is an example, where 49.3% of respondents think that NGOs play an effective role in preparing the shelter center before the disaster.

After a flood disaster, the affected people suffer in many ways because of the unavailability of medical facilities. Several types of medical services, including first aid, are needed for the affected people, and the service-providing organizations can step in within a very short time if they remain prepared. Concerning medical preparation, 29.5% of respondents think that the government organizations are better prepared to provide services, 50.5% believe that the non-government organizations are better prepared, and 20.0% consider that both types of organizations remain prepared according to their ability.

Emergency response: a review of performance of GOs and NGOs

Recovery can be divided into two distinct phases, each with its own actions: short-term and long-term. The short-term recovery period immediately follows the disaster event, while emergency response activities are still underway. Short-term recovery activities try to stabilize victims' lives and prepare them for the long road toward rebuilding. These actions, often regarded as reactive and termed "relief," include temporary shelter arrangements, emergency food and water distribution, critical infrastructure restoration, and debris disposal. Short-term recovery activities tend to be temporary and often do not contribute directly to the long-term progress of the community. In the following sub-sections, the short-term recovery activities (which started with the flood and lasted for 6 months) carried out by GOs and NGOs are discussed on the basis of people's perception.
People's perception regarding rescue and relief

When an extreme flood hits a region, all houses are inundated by the intruding floodwater. The trapped people then leave their homes as soon as possible and need to take shelter to save their lives, for example near the highway, at schools and in flood shelter centers. Along with the organizations, local people formed groups with their next-door neighbors during the 2017 flood. Both kinds of rescue teams paid attention first to the safety of children and older people; because children and older people are more fragile during a flood, they were sent to safer flood-free places.

Rescue of family members (during the 2017 flood)

A rescue operation may not always be needed, but in some special cases it is very important. Table 4 shows that the family members of 19.7% of respondents were rescued during the fatal 2017 flood with the help of local people and government and non-government organizations. Among the rescued people, more than 50% were rescued by GOs.

Table 4 Status of rescue operation during 2017 flood

During the interview sessions, the Project Implementation Officer (PIO) of Saghatta Upazila explained that among the NGOs, 'SKS' played a vital role in the emergency rescue operation at South Ullah under Bhartkhali Union. The PIOs of both Upazilas stated that the local people cooperated closely with the GOs and NGOs in this respect. The findings show that the GOs and the respondents' neighbors played a notable role in rescuing the affected people who were trapped by the sudden floodwater.

Relief received from organizations during the emergency period

Figure 4 illustrates that 56.7% of respondents received aid from government relief programs, whereas 43.3% did not receive any aid; some of the respondents who received no assistance from the government organizations stated that they had needed it but did not get it. On the other hand, 72.4% of respondents received aid from non-government relief programs, while 27.6% did not. The findings show that the coverage of the NGOs was higher than that of the GOs, and the respondents also claimed that the quality and quantity of the NGOs' aid were better than those of the GOs' aid.

Fig. 4 Distribution of the respondents by receiving emergency aid. Source: Field Survey June–September, 2018, December, 2018-April, 2019

Main item of relief received from organizations

Just after a flood disaster occurs, the affected people need several kinds of help because they have lost almost everything. People from all walks of life come forward to help the distressed alongside the government, and the government tries to play its expected role through its various agencies and organizations. In Table 5, the researcher has attempted to represent the positions of the government and non-government organizations on the basis of the main items of relief they provided.

Table 5 Item of relief received by the respondents from organizations

Table 5 shows that, out of the total respondents, 53.6% received food items, 11.4% water, 22.9% clothing, 17.9% medicine, 36.7% cash, 17.9% CI sheet and 8.2% other facilities from GOs, while from NGOs 74.3% received food, 52.4% pure drinking water, 39.8% clothing, 57.9% medicine, 10.3% cash, 3.1% CI sheet and 24.8% other facilities. The organizations placed particular emphasis on food items, and relief delivery was mainly divided into food and non-food items.
In the 2017 flood, the GOs and NGOs provided rice, pulses, edible oil, iodized salt, sugar, baby cereal and so on as food items, and these items were delivered in various food packages. The main non-food relief items were CI sheets, blankets, sharis, lungis, mosquito nets, rope, family kits, kitchen sets, ORS packets, buckets and mugs, soap, sanitary pads and washing powder. It can clearly be said that the GOs played a comparatively better role in providing cash and CI (corrugated iron) sheets, whereas the NGOs played a comparatively better role in providing food, water, clothing and medicine.

Amount of financial help from organizations as relief

After the devastating 2017 flood, the affected people lost all of their belongings, including their professional and livelihood materials such as plows and cattle, boats and nets; they were completely helpless and jobless, and their misery knew no bounds. The organizations therefore also paid attention to delivering monetary help as relief for temporary housing, for the families of the deceased, for flood insurance and so on. The government and non-government organizations tried to provide monetary help together with all kinds of necessary goods. Table 6 shows the state of monetary help during the emergency period.

Table 6 Distribution of the respondents by financial help as relief

Analyzing the respondents by the amount of cash received, 15.7% of respondents received Tk. 2001 to 4000 as relief from GOs, which is the highest among all groups, followed by 15.0% who received Tk. 4001 to 6000. The GOs provided monetary help of more than Tk. 6000 to very few respondents. In the case of the NGOs, the picture is slightly different: the largest group of respondents (14, or 4.4% of the total) received monetary help of Tk. 1 to 2000 from NGOs, followed by 11 respondents (3.4%) who received Tk. 2001 to 4000. The NGOs did not provide monetary help of more than Tk. 6000 to any respondent during the emergency response. One important feature of the table is that the monetary coverage of the NGOs is comparatively lower than that of the GOs, both in terms of the amount of cash and the number of recipients. From the above discussion, it is observed that the proportion of respondents receiving monetary help was very low in the case of the NGOs but quite different in the case of the GOs: 36.7% of all respondents received monetary help from GOs, whereas only 10.3% did so from NGOs.

Provision of agricultural production materials

Bangladesh is mainly an agricultural country, where almost 80% of the people are directly or indirectly related to agriculture. Most people in this agro-based society are engaged in some form of cultivation, and the char land area is no exception. However, the number of people who lost crops is very high, and the amount of the losses is also very high. Observation showed that almost all households suffered complete crop losses due to the extreme flood in 2017. The local people usually select cropland for cultivation after the floodwater has receded, but sometimes they cannot cultivate because they lack seeds and agricultural materials. In this regard, the government and non-government organizations came forward to help the severely affected farmers after the flood by providing seeds and materials. For effective agricultural support, seed, fertilizer and equipment are very important.
No farmer can restart farming without these important items. Table 7 shows that 36.1% of respondents received seed from GOs, 263.5% from NGOs, and 8.4% from both GOs and NGOs. The beneficiaries were supported in home gardening in order to ensure nutrition and reduce malnutrition. Observation shows that the majority of small, marginal and sharecropping farmers had lost their seeds, so seeds of eggplant, bottle gourd, spinach, carrot, radish, tomato, beetroot, okra, gima kalmi, red pumpkin, ash gourd, amaranth, chilli, bitter gourd and papaya were provided to the affected people who were engaged in farming. Table 7 also shows that, regarding fertilizer, 60.5% of respondents received fertilizer from GOs only and the remaining 39.5% did not receive any fertilizer. Regarding equipment, the government and non-government organizations tried to provide important equipment to the farmers: 12.9% of respondents received equipment from GOs, 9.1% from NGOs and 4.0% from both GOs and NGOs. The findings of Table 7 show that the GOs played a more vital role than the NGOs in providing seed, fertilizer and agricultural equipment.

Table 7 Agricultural assistance as relief from GOs & NGOs (N = 319)

Restoration program provided by organizations after the 2017 flood

Just after the devastating 2017 flood, several types of initiatives were taken by the government and non-government organizations to limit overall environmental degradation and to restore the communication system. Since the whole local system had collapsed in the study areas, the government and non-government organizations started operations to restore it alongside the distribution of food and non-food relief items. Participant observation and interviews revealed that the GOs and NGOs placed special emphasis on household development and local infrastructure improvement: the GOs repaired 4 km of rural earthen road, one educational institution, five tube wells and two dams; raised 35 house plinths and one school field; removed large amounts of garbage from the villages; and set up 25 new latrines. The NGOs, for their part, repaired 5 km of rural road, reconstructed 12 tube wells, raised 94 house plinths, set up 40 new latrines, cleaned 5 ponds and provided extensive medical support.

Problem and experience of the affected people

In the flood-prone zones of Bangladesh, particularly in the char lands, most people face several kinds of problems caused by flood disasters almost every year and gather bitter experience. The poor and vulnerable (affected) people face several types of problems in obtaining services from the service providers. Along with these severe problems, the disaster-affected people in the study area gather varied experiences concerning getting information, taking shelter in the shelter centers during and after the disaster, and receiving emergency relief, medical facilities, monetary help and so on.

Problems of the affected people

The problems of the affected people are discussed under preparedness and emergency response, with several sub-heads. The types of problems that the affected people faced after the 2017 flood in obtaining services from the organizations are discussed below. At almost every step, the people affected by the 2017 flood faced several kinds of problems, which are examined under the following heads.
During preparedness

Information is very important for disaster-affected people in saving their lives and assets when a flood hits. By receiving disaster-related information properly, everybody can gain enough time to move to a shelter center or a safe place and to protect their valuable goods. Table 8 shows the problems related to receiving information. From the table, it can be seen that 40.8% of respondents faced problems in getting disaster-related information, while 59.2% did not face any problem.

Table 8 Problems regarding receiving information

Among the 130 respondents who faced problems in getting information, 13.1% think that a lack of coordination is the main reason for not getting information, whereas according to 21.5% of respondents the main problem is that initiatives are taken too late. In addition, 26.9% of respondents think that officials are not sincere in announcing the information, and 38.5% think that the equipment (sirens, radios, etc.) is not sufficient to announce the disaster-related information in time. The findings of Table 8 illustrate that most respondents received disaster-related information in time without facing any problem, which reflects the organizational capability for disseminating information; at the same time, the shortage of equipment and the insincerity of officials are seen as barriers preventing the affected people from receiving disaster-related information.

During the disaster period, taking refuge in a shelter center is essential for saving the lives of the affected people who are in vulnerable or risky positions in the flood-prone zones of the country. Evacuation and the provision of shelter are very important to both the affected people and the service providers, but both groups face some problems in this respect. Figure 5 illustrates the problems faced by the affected people regarding shelter in the shelter center during the disaster period. From Fig. 5, it can be seen that most respondents (53.9%) faced problems regarding shelter in the shelter center, whereas 47.1% did not face any problem. Among the respondents who faced problems, 23.8%, 18%, 16.3%, 32% and 9.9% faced problems due to long distance, weak shelter centers, lack of information, attachment to home and late initiatives, respectively. The findings of Fig. 5 show that most respondents (53.9%) faced problems in taking shelter in the shelter center during the disaster, and a significant share of them (23.8% and 32%) did not go to the shelter center because of the long distance and their attachment to home.

Fig. 5 Problems in taking shelter in Shelter Center. Source: Field Survey June–September, 2018, December, 2018-April, 2019

For the period of rescue and relief

In the flood-prone zones of Bangladesh, when a flood disaster hits, the rescue program receives the highest priority from the organizations, and the local people also help to rescue the victims. According to the respondents, however, there are some barriers to the rescue operation. Table 9 presents the main obstacles to the rescue program in the study area during the devastating 2017 flood. Among the beneficiary respondents, 16.3% pointed out that a lack of logistic support is the main barrier to the rescue program.
A further 44.2% considered the underdeveloped communication system to be the main problem, because the study villages are chars (islands) that are entirely disconnected from the mainland; there is no convenient direct route to the mainland. In normal periods such as the dry season, people usually reach the mainland on foot and carry heavy goods by horse cart. During a high flood like that of 2017, the means of transportation deteriorated automatically, and the infrastructure could not cope with the strong floodwater current and riverbank erosion. As a result, the service providers could not execute the rescue program properly. In addition, 26.6% of the beneficiaries argued that the lack of suitable vehicles is the main barrier, and the remaining 12.9% stated that the lack of trained manpower is the main barrier. According to Table 9, the largest share of respondents thus regards the underdeveloped communication system as the main problem for the rescue program. One of the key informant interviewees (KII) also argued along these lines, observing that most rural flood-prone/char areas are underdeveloped, that the condition of the roads is poor and that most of them are kutcha (muddy) and surrounded by water.

Table 9 Main barrier of rescue program

Providing adequate food, water and medication is very important for supporting the affected people, but there were some obstacles to obtaining relief in the emergency period after the 2017 flood. Table 10 shows the main obstacles to obtaining relief according to the respondents: 11.6% of respondents viewed the waste of time as the main obstacle, and 16.6% argued that communication is the main obstacle.

Table 10 Distribution of the respondents by the obstacle to get relief

The largest share of respondents (27.9%) stated that the insufficiency of relief is the main obstacle; after standing in line for a long time, the affected people are greatly shocked when they do not receive even the minimum desired amount. According to 21.0%, 14.4% and 8.5% of respondents, political influence, nepotism and corruption, respectively, are the main obstacles to obtaining relief. Although only 21.0% of respondents identified political influence as the main barrier to obtaining relief in the emergency period after the 2017 flood, observation shows that political influence was the main obstacle for most of the affected people in the study area at the time of the survey. There are some influential persons in society who have considerable influence and enough capacity to exercise power to steer society positively. If these influential persons work for the wellbeing of society, they can do so, but out of self-interest they sometimes become involved in unfair practices. Table 11 shows who was considered responsible for improper relief distribution in the study area. According to 10.0% and 21.6% of respondents, the rural elite and the local people's representatives, respectively, are responsible for improper relief distribution, whereas about one-third of the respondents (34.2%) viewed the local political leaders as responsible. About one-fifth of the respondents (20.1%) argued that the subordinates of the local people's representatives and local political leaders (LPR & LPL) are responsible, and according to 14.1% of respondents, the office staff are responsible for improper relief distribution.
The findings of Table 11 show that the local political leaders (34.2%) and the local people's representatives (21.6%) are considered responsible for improper relief distribution; taking the field observation and the opinions from the key informant interviews (KII) into account, it can be said that the local political leaders are mainly responsible for improper relief distribution.

Table 11 Responsible person for improper relief distribution

Experiences of the flood affected people

Organizational initiatives for preparedness

Regarding the overall performance of initiatives for preparedness, only 6.3% of respondents think that the government organizations played a very good role, while 14.1%, 34.8%, 34.1% and 10.7% rated the role of the government organizations as good, average, bad and very bad, respectively. It is significant that, according to most respondents, the government organizations could not play the expected role regarding preparedness. People's perception of the non-government organizations, by contrast, was more positive: 15.8% of respondents think that the NGOs played a very good role, and 31.0%, 32.0%, 13.7% and 7.5% believe that the non-government organizations played a good, average, bad and very bad role, respectively, in preparedness. Figure 6 shows that, according to the largest share of respondents, the non-government organizations were able to play the expected role regarding preparedness.

Fig. 6 Experiences regarding organizational initiatives for preparedness. Source: Field Survey June–September, 2018, December, 2018-April, 2019

Experience regarding relief

Sufficient relief materials help to restore the condition of the affected people after any natural disaster, and the government and non-government organizations try to provide sufficient relief materials in times of need despite their limitations. The following figure describes the opinions of the affected people regarding the sufficiency of the relief materials obtained from the GOs and NGOs. Among the total respondents, only 5.6% think that the relief material provided by the GOs was very sufficient, whereas 23.8% believe that the relief material provided by the NGOs was very sufficient. Similarly, 14.1% consider the GOs' relief material sufficient, compared with 26.4% for the NGOs. The largest group of respondents (32%, or 102 respondents) thinks that the relief material provided by the GOs was very insufficient, whereas only 5.6% of respondents believe that the relief material provided by the NGOs was very insufficient. Figure 7 thus reveals that the NGOs provided more adequate relief material to the people affected by the 2017 flood than the GOs did.

Fig. 7 Experience regarding sufficiency of obtained relief. Source: Field Survey June–September, 2018, December 2018-April, 2019

Formal and informal interviews with the flood-affected people in the study area revealed that satisfaction depends on whether relief is obtained, on its quality and quantity, on the time of receiving it, and so on. Most of the affected people received relief (cash, food and non-food items) from the GOs, the NGOs or both. Figure 8 shows the respondents' level of satisfaction with the relief materials received from the government and non-government organizations.

Fig. 8 People's satisfaction regarding relief from organizations. Source: Field Survey June–September, 2018, December, 2018-April, 2019
Figure 8 shows that, regarding the GOs' relief, only 6.6% of respondents were very satisfied and 16.3% were satisfied, whereas regarding the NGOs' relief, 29.5% were satisfied and 24.7% were very satisfied in terms of the quality, quantity and time of receiving emergency relief. Conversely, 28.8% of respondents were unsatisfied and 30.2% very unsatisfied with the GOs' relief, while for the NGOs only 12.2% were unsatisfied and just 4.7% very unsatisfied. Based on the respondents' perception, it can therefore be said that the level of satisfaction with the NGOs was higher than with the GOs.

Flood disasters are a frequent phenomenon in the study villages, and because the dwellers lose almost everything, they depend on the aid of government and non-government organizations to cope with the overwhelming situation. Accordingly, the GOs and NGOs provided their services as part of preparedness and emergency response during the 2017 flood. Both the GOs and NGOs performed outstandingly in disseminating information: most of the respondents (93.7%) received information before the disaster, and the main channel was loudspeaker announcements made by GOs and NGOs. They also played a very effective role regarding initiatives for preparedness: the GOs arranged preparatory meetings and prepared shelter centers very effectively, whereas the NGOs organized training programs very successfully. The organizations were very prompt in their rescue and relief programs. Just after the 2017 flood, the GOs played an efficient role in providing CI sheets and monetary help, whereas the NGOs played an effective role in providing food relief, medical services, IGA (income-generating activity) facilities and so on. With respect to the restoration program, selected NGOs performed an extraordinary job that attracted the people's attention. Nevertheless, while receiving rehabilitation services, the disaster-affected people faced several problems and had bitter experiences: because of corruption, nepotism, improper assessment and complicated procedures, the affected people did not receive rehabilitation facilities properly.

On the basis of the objectives and findings of the study and the overall role of the organizations in preparedness and emergency response for the flood-affected people, the researcher proposes the following recommendations. Organizations should work collectively, together with the public, during the flood crisis period, because the present study identified the lack of local people's participation as a main gap in flood management activities. NGOs should provide more financial help as relief while avoiding complicated procedures for the affected people. Organizations should also make financial help available as loans for regaining IGA facilities within the shortest possible time, as most char land people lose the IGA facilities on which their livelihoods depend. In addition, the existing flood shelter centers can serve only a small part of the population in the study area; setting up multipurpose flood shelters is therefore essential, and at least two-storied new schools should be built on flood-free highlands so that villagers are less vulnerable to impending floods. Moreover, training and awareness programs on flood preparedness should be more focused and more regular.
For easy and smooth relief distribution, all kinds of complicated processes from the center to the grassroots level should be avoided. Corruption, nepotism and political and local interventions should be eliminated to ensure the equal and fair distribution of relief goods among the flood victims, and the organizations may also increase monitoring and supervision to ensure equitable relief and rehabilitation. Along with short-term rehabilitation, the organizations should pay attention to long-term rehabilitation, because this path can give people the capacity to cope with adverse situations. For long-term rehabilitation or recovery, the government and non-government organizations should rebuild the infrastructure in the affected areas and rehabilitate the defenseless affected people for sustainable development. Recovery from a major flood disaster takes years, so the organizations should place more emphasis on financial assistance, housing reconstruction, IGA facilities and new work opportunities beyond the localities. Financial help, in particular, is very important for the overall development of the disaster-affected people, and the government and non-government organizations should provide it for various purposes under appropriate terms and conditions. Affected people who are sincere and conscientious can improve their social position by using the financial help they obtain efficiently, because all income-generating activities (IGA) essentially depend on financial support.

The data sets used and analyzed during the current study are available from the corresponding author on request.

The Standing Orders on Disaster were issued by the National Disaster Management Council (NDMC) under the direction of the Government of the People's Republic of Bangladesh.

References

Adnan S (1991) Floods, people and the environment: institutional aspects of flood protection programmes in Bangladesh, 1990
Alcántara-Ayala I (2002) Geomorphology, natural hazards, vulnerability and prevention of natural disasters in developing countries. Geomorphology 47(2–4):107–124
Aminuzzaman S (2000) Institutional framework of poverty alleviation: an overview of Bangladesh experiences. Paper presented at the Development Studies Network Conference on Poverty, Prosperity and Progress
Bank W (1996) Pursuing common goals: strengthening relations between government and development NGOs. World Bank Resident Mission, Dhaka
Blair H (2005) Civil society and propoor initiatives in rural Bangladesh: finding a workable strategy. World Dev 33(6):921–936
Brammer H (1990) Floods in Bangladesh: II. Flood mitigation and environmental aspects. Geogr J 156:158–165
Brouwer R et al (2007) Socioeconomic vulnerability and adaptation to environmental risk: a case study of climate change and flooding in Bangladesh. Risk Anal 27(2):313–326
Burton L, Kates R, White G (1993) The environment as hazard, 2nd edn. Guilford Press, New York
Cannon T (1994) Vulnerability analysis and the explanation of 'natural' disasters. Disasters Dev Environ 1:13–30
Cannon T, Twigg J, Rowell J (2003) Social vulnerability, sustainable livelihoods and disasters. Report to DFID conflict and humanitarian assistance department (CHAD) and sustainable livelihoods support office, 93
Choudhury AMEA (2005) Socio-economic and physical perspectives of water related vulnerability to climate change: results of field study in Bangladesh. Sci Cult 71(7/8):225
Coppola DP (2006) Introduction to international disaster management. Elsevier, Amsterdam
Del Ninno C, Dorosh PA, Smith LC (2003) Public policy, markets and household coping strategies in Bangladesh: avoiding a food security crisis following the 1998 floods. World Dev 31(7):1221–1238
Dun O (2011) Migration and displacement triggered by floods in the Mekong Delta. Int Migr 49:e200–e223
Few R (2003) Flooding, vulnerability and coping strategies: local responses to a global threat. Prog Dev Stud 3(1):43–58
Gray CL, Mueller V (2012) Natural disasters and population mobility in Bangladesh. Proc Natl Acad Sci 109(16):6000–6005
Hasan Z et al (2013) Challenges of integrating disaster risk management and climate change adaptation policies at the national level: Bangladesh as a case. Glob J Hum Soc Sci Geogr Geo-Sci Environ Disaster Manag 13(4):54–65
Hoque MM, Siddique MA (1995) Flood control projects in Bangladesh: reasons for failure and recommendations for improvement. In: Institute of flood control and drainage research. BUET, Dhaka
Hossain B, Ajiang C, Ryakitimbo CM (2019) Responses to flood disaster: use of indigenous knowledge and adaptation strategies in Char Village, Bangladesh. Environ Manage Sustainable Dev 8(4):46–74. https://doi.org/10.5296/emsd.v8i4.15233
Hossain MZ, Sakai T (2008) Severity of flood embankments in Bangladesh and its remedial approach. Agricultural Engineering International: the CIGR Ejournal. Manuscript LW 08 004, Vol X
Hossain MA (2012) Community participation in disaster management: role of social work to enhance participation. Sociology 159:171
Howell P (2003) Indigenous early warning indicators of cyclones: potential application in coastal Bangladesh. Benfield Greig Hazard Research Centre, London
Ibrahim NF et al (2017) Identification of vulnerable areas to floods in Kelantan River sub-basins by using flood vulnerability index. Int J GEOMATE 12(29):107–114
ISDR (2006) NGOs & disaster risk reduction: a preliminary review of initiatives and progress made. ISDR, Geneva
Islam MR (2018) Climate change, natural disasters and socioeconomic livelihood vulnerabilities: migration decision among the char land people in Bangladesh. Soc Indic Res 136(2):575–593
Islam MR et al (2018) From coping to adaptation: flooding and the role of local knowledge in Bangladesh. Int J Disaster Risk Reduction 28:531–538
Islam SN et al (2010) Settlement relocations in the char-lands of Padma River basin in Ganges delta, Bangladesh. Front Earth Sci China 4(4):393–402
Kafi MAH, Chowdhury ASMT (2010) Probable impact of climate change of flood in Bangladesh. IWFM – BUET, Dhaka, Bangladesh
Kelly C, Chowdhury MK (2002) Poverty, disasters and the environment in Bangladesh: a quantitative and qualitative assessment of causal linkages. Bangladesh Issues Paper. UK Department for International Development, Dhaka
Kothari CR (2004) Research methodology: methods and techniques. New Age International, New Delhi
Likert R (1932) A technique for the measurement of attitudes. Arch Psychol 140:55–60
Management DOD (2007) Flood in 2017 updated, Bangladesh (http://www.ddm.gov.bd)
Matin N, Taher M (2001) The changing emphasis of disasters in Bangladesh NGOs. Disasters 25(3):227–239
Mirza MMQ (2002) Global warming and changes in the probability of occurrence of floods in Bangladesh and implications. Glob Environ Chang 12(2):127–138
Mondal MS et al (2015) Hydro-climatic hazards for crops and cropping system in the chars of the Jamuna River and potential adaptation options. Nat Hazards 76(3):1431–1455
Mutton D, Haque CE (2004) Human vulnerability, dislocation and resettlement: adaptation processes of river-bank erosion-induced displacees in Bangladesh. Disasters 28(1):41–62
Nirapad (2017) Flood situation updated on August 22 (http://www.nirapad.org.bd/)
Nur I, Shrestha KK (2017) An integrative perspective on community vulnerability to flooding in cities of developing countries. Process Eng 198:958–967
Paul BK, Rasid H (1993) Flood damage to rice crop in Bangladesh. Geogr Rev 83:150–159
Paul S, Islam MR (2015) Ultra-poor char people's rights to development and accessibility to public services: a case of Bangladesh. Habitat Int 48:113–121
Paul SK, Hossain MN (2013) People's perception about flood disaster management in Bangladesh: a case study on the Chalan Beel area. Stamford J Environ Hum Habitat 2:72–86
Paul SK, Routray JK (2010) Flood proneness and coping strategies: the experiences of two villages in Bangladesh. Disasters 34(2):489–508
Philip S et al (2019) Attributing the 2017 Bangladesh floods from meteorological and hydrological perspectives. Hydrol Earth Syst Sci 23(3):1409–1429
Portal BN (2018) Kurigram District BNP (http://www.kurigram.gov.bd)
Relief MODMA (2013) Disaster report. Department of Disaster Management, Dhaka, 2014, p 22
Shajahanl AARMY (2001) Towards sustainable flood mitigation strategies: a case study of Bangladesh. Department of Architecture, BUET, Dhaka
Smit B, Pilifosova O (2003) From adaptation to adaptive capacity and vulnerability reduction. In: Climate change, adaptive capacity and development. World Scientific, Singapore, pp 9–28
Statistics BBO (2016) Statistical yearbook of Bangladesh 2014, vol 2016. BBS, Dhaka, p 18
Thompson PM, Penning-Rowsell E (1994) Socio-economic impacts of floods and flood protection: a Bangladesh case study. In: Disasters development and environment. Wiley, Chichester, pp 81–97
Zaman M (1989) The social and political context of adjustment to riverbank erosion hazard and population resettlement in Bangladesh. Hum Organ 48(3):196
Zaman M (1993) Rivers of life: living with floods in Bangladesh. Asian Surv 33(10):985–996
Zaman MQ-U (1996) Development and displacement in Bangladesh: toward a resettlement policy. Asian Surv 36(7):691–703

Acknowledgements
I would like to express my sincere appreciation to my study respondents, who were patient interviewees and provided the necessary information for this study. I shall never forget their sincere cooperation and support.

Author information
Department of Sociology, School of Public Administration, Hohai University, No. 8 Focheng West Road, Jiangning, Nanjing, 210000, China
Babul Hossain

BH carried out the study, performed the statistical analysis, wrote the protocol, and wrote the draft of the manuscript. The author read and approved the final manuscript. Correspondence to Babul Hossain.

Competing interests
The author declares that there are no competing interests.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article: Hossain, B. Role of organizations in preparedness and emergency response to flood disaster in Bangladesh. Geoenviron Disasters 7, 33 (2020). https://doi.org/10.1186/s40677-020-00167-7

Keywords: Organizations (GOs & NGOs); Flood disaster
Pedro's pawn game

Yesterday, my good friend Pedro told me about Pedro's pawn game, which is played with $7$ white and $7$ black pawns on a $7\times7$ checkerboard. In the starting situation, there is a white pawn on each of the seven squares in the bottom row, and a black pawn on each of the seven squares in the top row. In the goal situation, white and black pawns have switched positions: there is a black pawn on each square in the bottom row and a white pawn on each square in the top row. The only allowed pawn moves are from the current square to a horizontally or vertically adjacent empty square. Pedro asked me: What is the smallest number of allowed moves that turn the starting situation into the goal situation?

mathematics combinatorics checkerboard

asked by Gamow

Comment (Rand al'Thor, Mar 25 '15 at 11:09): xnor got the answer first on this question, but I've answered a more general problem. Similarly on your last puzzle, I got the answer first but Martin answered a more general problem. To settle the debate between me and Martin, are you going to accept my answer there and xnor's here, or Martin's there and mine here?
Comment (SaturnsEye, Mar 25 '15 at 12:31): Spin the board around

Answer (xnor):

I give a construction of 92 moves and prove it optimal. As Rand al'Thor notes, 84 forward moves are inevitable, so we will only count "excess moves" that do not move a pawn forward.

Each white pawn is in a column opposed by a black pawn that blocks its way. The two cannot pass each other, so each column requires an excess move to unblock it by moving one of its pawns sideways. This gives a minimum of 7 excess moves, and hence an overall minimum of 91 moves.

Note that we can resolve two adjacent columns with two excess moves by moving out a white piece, moving the black pawn from the other column into its place, and then making the natural moves.

(Diagram in the original answer: six snapshots of two adjacent columns illustrating this manoeuvre, in which a white pawn is moved out of its column, the black pawn from the other column takes its place, and all remaining moves go straight forward.)

A single column can also be resolved with two excess moves by moving a pawn sideways, letting the other pass, and moving it back. This lets us resolve the 7 columns with 8 excess moves, for a 92-move solution.

Can we cut the solution down to 91? No, by a parity argument. On a checkered board, each move changes the parity of the number of pieces on black squares. Since that parity is the same in the start and goal configurations, the total number of moves must be even. So the minimum is 92, which the solution achieves.

Comment (LeppyR64, Mar 25 '15 at 10:30): For every piece that vacates a column another piece must move to occupy it. I don't think an odd number of moves is possible. I just got an alert that an answer was edited so hopefully my comment is still relevant.
Comment (xnor, Mar 25 '15 at 10:32): @JasonLepack Indeed, I came to a similar realization about the parity.
Comment (Rand al'Thor, Mar 25 '15 at 10:42): Nice one, xnor! I proved 92 was optimal almost simultaneously, but your proof is better and more intuitive.

Answer (Rand al'Thor):

Solution to the question as stated

It can be done in 92 moves. Here's how (using standard chessboard notation, with columns labelled a to g and rows numbered 1 to 7):

move the white pawn on a1 to a5 (4 moves)
move the white pawn on b1 to b3 (2 moves)
move the black pawn on b7 to b4, then a4, then a1 (3+1+3=7 moves)
move the white pawn on b3 to b7 (4 moves)
move the black pawn on a7 to a6, then b6, then b1 (1+1+5=7 moves)
move the white pawn on a5 to a7 (2 moves)

In 4+2+7+4+7+2=26 moves we've switched the pawns on a1 and b1 with those on a7 and b7.
Do the same with the pawns on f1 and g1 and those on f7 and g7. For the final three columns, proceed as follows:

move the white pawns on c1, d1, e1 to c3, d5, e3 (2+4+2=8 moves)
move the black pawn on c7 to c4, then d4, then d1 (3+1+3=7 moves)
move the white pawn on c3 to c7 (4 moves)
move the black pawn on d7 to d6, then c6, then c1 (1+1+5=7 moves)
move the white pawn on d5 to d7 (2 moves)
move the black pawn on e7 to e4, then d4, then d2, then e2, then e1 (3+1+2+1+1=8 moves)
move the white pawn on e3 to e7 (4 moves)

All this gives a total of 2*26+(8+7+4+7+2+8+4)=52+40=92 moves.

As xnor states, it can't be done in fewer than 91 moves, because each pair of opposing pawns needs to 'pass' the other, necessitating an extra sideways move for one of them. In fact it can't be done in exactly 91 moves either, because not all of these sideways shifts can be made in the same direction (the board being finite), and where two shifts in opposite directions 'meet', two pawns would end up on the same square, so one of them needs to shift again (since 7 is odd). So the final answer is 92.

A generalisation

Let's replace 7 pawns of each colour on a 7x7 board with $n$ pawns of each colour on an $n\times n$ board! What's the answer then?

The method I've used above for swapping a1 and b1 with a7 and b7 can be used for all pairs of columns, giving $4(n-1)+2=4n-2$ moves for each pair of columns (e.g. $4*7-2=26$ when $n=7$). The method for swapping c1, d1, e1 with c7, d7, e7 works for any triple, giving $6(n-1)+3+1=6n-2$ moves for a triple (e.g. $6*7-2=40$ when $n=7$). So the whole operation can be done in $k*(4n-2)=n(2n-1)$ moves if $n=2k$ is even, and $(k-1)(4n-2)+(6n-2)=n(2n-1)+1$ moves if $n=2k+1$ is odd.

The argument used by xnor to prove 91 is a lower bound when $n=7$ also shows that $n(2n-1)$ is a lower bound for all $n$. When $n$ is odd, xnor's parity argument shows that since the total number of moves must be even, $n(2n-1)+1$ is a lower bound.

So the optimal number of moves required to swap $n$ white pawns with $n$ black pawns on an $n\times n$ board is:

$n(2n-1)$ if $n$ is even
$n(2n-1)+1$ if $n$ is odd

Comment (Rand al'Thor, Mar 25 '15 at 10:03): @Martin This is what's called a PARTIAL SOLUTION. I've established upper and lower bounds, and will now proceed to move them closer together until I reach the answer (unless someone else gets there first). This answer does not deserve a downvote.
Comment (Rand al'Thor, Mar 25 '15 at 10:04): @Martin What if my solution is optimal? Then by your own comment, it'd be Gamow's puzzle that deserves a downvote and not my answer. How can you be sure my solution isn't optimal?
Comment (Rand al'Thor, Mar 25 '15 at 10:39): @Martin Ah, xnor proved 92 is optimal while I was doing my final edit. I've now also proved that 92 is optimal, but xnor got there first, so credit to xnor. I won't bleat that my answer is better than his because it's more general or whatever. BTW why did you delete your earlier comment? Do you no longer think that 'if my solution is optimal, then this does not deserve to be called a "puzzle"'?
Comment (Rand al'Thor, Mar 25 '15 at 11:14): @Martin I've now followed your example and posted a generalised answer. Let's see whether Gamow prefers to accept the answer of the person who got there first (me and xnor respectively) or the most general answer (you and me respectively) on his two latest puzzles.
Comment (Rand al'Thor, Mar 25 '15 at 11:15): @xnor Because swapping 2 adjacent pawn-pairs takes 26 moves. I'll edit to clarify.
(Don't get me wrong - I think your answer should be accepted! Martin and I were debating this on another question where I answered first and he posted a generalised answer.)
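For very small boards, the claimed minima can also be checked by an exhaustive search. The sketch below is an illustration added here, not part of either answer: it runs a plain breadth-first search over board states and assumes n >= 3 so that at least one row starts empty. It is only practical for small n; the state space for n = 7 is astronomically large, which is why the counting arguments above are needed.

```python
from collections import deque

def min_moves(n):
    """BFS for the minimum number of moves on an n x n board with n white pawns
    ('W') on the bottom row and n black pawns ('B') on the top row; a move slides
    any pawn to a horizontally or vertically adjacent empty square ('.')."""
    start = tuple('W' * n + '.' * (n * (n - 2)) + 'B' * n)
    goal = tuple('B' * n + '.' * (n * (n - 2)) + 'W' * n)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return dist[state]
        d = dist[state]
        for idx, piece in enumerate(state):
            if piece == '.':
                continue
            r, c = divmod(idx, n)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < n and 0 <= nc < n and state[nr * n + nc] == '.':
                    nxt = list(state)
                    nxt[idx], nxt[nr * n + nc] = '.', piece
                    nxt = tuple(nxt)
                    if nxt not in dist:
                        dist[nxt] = d + 1
                        queue.append(nxt)
    return None  # goal unreachable (e.g. n = 2, where the board has no empty square)

if __name__ == '__main__':
    print(min_moves(3))  # small case only; n = 7 is far beyond a plain BFS
```

For n = 3 the search visits at most a few thousand positions, so it finishes instantly; the result can then be compared against the formula from the generalised answer.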
A proactive resource allocation method based on adaptive prediction of resource requests in cloud computing
Jing Chen, Yinglong Wang & Tao Liu

With the development of big data and artificial intelligence, cloud resource requests exhibit more complex features, such as being sudden, arriving in batches and being diverse, which cause the resource allocation to lag far behind the resource requests and lead to unbalanced resource utilization that wastes resources. To solve this issue, this paper proposes a proactive resource allocation method based on the adaptive prediction of resource requests in cloud computing. Specifically, the method first proposes an adaptive prediction method based on the runs test that improves the prediction accuracy of resource requests, and then it builds a multiobjective resource allocation optimization model, which alleviates the latency of the resource allocation and balances the utilizations of the different types of resources of a physical machine. Furthermore, a multiobjective evolutionary algorithm, the Nondominated Sorting Genetic Algorithm with the Elite Strategy (NSGA-II), is improved to further reduce the resource allocation time by accelerating the solution of the multiobjective optimization model. The experimental results show that this method achieves balanced utilization between the CPU and memory resources and reduces the resource allocation time by at least 43% (10 threads) compared with the Improved Strength Pareto Evolutionary Algorithm (SPEA2) and NSGA-II methods.

Cloud computing provides massive computing, storage and network resources to support various services. Users can not only obtain various resources and services anytime and anywhere but can also scale resources out or in to ensure their applications' performance or to lower their cost. With the development of big data and artificial intelligence, cloud resource requests have become diverse, bursty and sudden. Existing cloud resource allocation methods cannot guarantee the timeliness and optimality of the resource allocation for a large number of sudden resource requests. However, users pay close attention to the timeliness and optimality of their urgent resource requests, which guarantee their applications' performance, and cloud service providers are highly concerned with how to manage massive resources and improve the resource utilization. An efficient resource allocation method is crucial to meeting these goals.

The resource allocation process is the problem of finding suitable physical servers on which to place virtual machines (VMs). In previous studies, simple heuristic algorithms, such as round robin (RR) [1], best fit (BF) [2] and min-max [3], were applied to solve the cloud resource allocation problem for small-scale cloud platforms. These algorithms are simple and easy to implement, but they are prone to wasting resources, especially in large-scale cloud platforms. The bin packing problem is one classical formulation of cloud resource allocation: the VM placement problem on physical servers is transformed into the problem of packing n objects into m boxes such that all objects are placed in the minimum number of boxes. A dynamic bin packing method has been proposed to reduce the total cost of cloud resource allocation by permanently closing empty boxes [4].
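To make the bin-packing view concrete, the sketch below shows a generic first-fit heuristic; it is an illustration only, not the dynamic algorithm of [4] nor the method proposed in this paper. Each VM request is placed on the first physical machine (PM) with enough remaining CPU and memory, and a new PM (a new "bin") is opened only when no existing one fits.

```python
def first_fit_placement(vm_requests, pm_capacity):
    """First-fit packing of VM requests onto homogeneous physical machines.
    vm_requests: list of (cpu_cores, memory_gb) demands.
    pm_capacity: (cpu_cores, memory_gb) capacity of a single PM.
    Returns a list of PMs, each a dict with remaining capacity and hosted VMs."""
    pms = []
    for cpu, mem in vm_requests:
        placed = False
        for pm in pms:
            if pm['cpu_free'] >= cpu and pm['mem_free'] >= mem:
                pm['cpu_free'] -= cpu
                pm['mem_free'] -= mem
                pm['vms'].append((cpu, mem))
                placed = True
                break
        if not placed:  # open a new "bin", i.e. switch on another physical machine
            pms.append({'cpu_free': pm_capacity[0] - cpu,
                        'mem_free': pm_capacity[1] - mem,
                        'vms': [(cpu, mem)]})
    return pms

# Example: pack 2CPU4G and 4CPU8G flavors onto hypothetical 16-core / 32 GB hosts.
hosts = first_fit_placement([(2, 4), (4, 8), (2, 4), (4, 8)], (16, 32))
print(len(hosts), "physical machine(s) used")
```

Minimizing the number of opened PMs is exactly the bin-packing objective; the multiobjective formulations discussed next extend this single objective with resource-balance and performance goals.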
Another classical approach is to model VM placement as a mathematical multiobjective optimization problem. The main idea is to express the cloud resource allocation problem as a multiobjective mathematical function and then to use a multiobjective evolutionary algorithm to solve it. Although these methods are effective, most of them neither allocate resources rapidly for a large number of sudden resource requests nor reduce the resource waste of the servers. An effective cloud resource allocation method should meet the following conditions for such requests. First, a large number of sudden resource requests should be processed in a timely manner. Second, the optimal resources should be provided to satisfy the resource requests. Third, as few physical servers as possible should be used, and the proportion between the different types of resources (number of CPU cores, memory capacity and disk size) should be kept as uniform as possible to reduce resource waste.

The contributions of this paper are summarized as follows.

We propose a runs test (RT)-based adaptive prediction algorithm for resource requests. This algorithm is built on our previously studied ensemble empirical mode decomposition (EEMD)-Autoregressive Integrated Moving Average (ARIMA) and EEMD-RT-ARIMA algorithms [5, 6], and it selects the more accurate algorithm for the short-term prediction of resource requests via an adaptive prediction strategy.

We propose a proactive resource allocation strategy that combines the active prediction of resource requests with passive responses to them, which allocates resources in advance for future sudden resource requests to guarantee the timeliness of the resource allocation.

We further propose a resource proportion matching model to ensure the uniform usage of the different types of server resources, which reduces resource waste. A mathematical multiobjective optimization problem of the resource allocation is then formulated.

We improve the Nondominated Sorting Genetic Algorithm with the Elite Strategy (NSGA-II) to accelerate the solution of the multiobjective optimization problem, which further ensures the timeliness of the resource allocation.

The rest of this paper is organized as follows. Section II introduces related works. Section III describes the proactive resource allocation approach. Section IV presents the experiments and analysis. Section V concludes this paper. A list of the mathematical notations used in this paper is given in Table 1.

Table 1 List of mathematical notations

Related works

Cloud resource allocation methods have been proposed for big data applications [7], cloud-based software services [8], scientific applications [9], cloud manufacturing [10], workflows [11] and cloud healthcare [12]. Various algorithms and mechanisms have been applied to resource allocation, such as the grasshopper optimization algorithm (GOA) [13], ant-colony optimization and deep reinforcement learning [14], a data-driven probabilistic model [15] and auction mechanisms [16]. Several surveys of existing resource and task scheduling methods have also been published. A systematic review classifies task scheduling approaches into single cloud, multicloud and mobile cloud environments according to their aims [17]. A comprehensive survey divides scheduling techniques into three categories: heuristic, meta-heuristic and hybrid schemes [18]. State-of-the-art multiobjective VM placement mechanisms have recently been reviewed [19], and a comprehensive review of auction-based resource allocation mechanisms has been conducted [20].
Resource allocation methods aim at reducing cost, minimizing the energy consumption, improving the resource utilization and guaranteeing the quality of service (QoS). A performance-cost grey wolf optimization (PCGWO) algorithm has been proposed to reduce the processing time and cost of tasks [21]. A JAYA algorithm has been used to optimize VM placement and minimize the energy consumption [22]. A fair resource allocation method has been proposed to allocate resources rapidly and fairly and to maximize the resource utilization via a flow control policy [23]. These methods, however, do not provide an effective mechanism to ensure the timeliness of the resource allocation for a large number of sudden resource requests. A multidimensional resource allocation model, MDCRA, uses a single weight algorithm (SWA) and a double weight algorithm (DWA) to minimize the number of physical servers, save energy and maximize the resource utilization in cloud computing [24]. It models the multidimensional resource allocation problem as a vector bin packing problem; the bin packing problem is NP-hard, and at present there is no polynomial-time optimization algorithm to solve it. Moreover, this model only handles the resource capacity constraint and does not consider incompatibility constraints. An energy-efficient resource allocation scheme considers the energy consumption of the CPU and RAM to reduce the overall energy costs and maintain the service level agreement (SLA) [25]. An empirical adaptive cloud resource provisioning model has been proposed to reduce the latency of the resource allocation and SLA violations via speculative analysis [26]. Both methods focus on workload consolidation and prediction with the single target of reducing SLA violations, whereas our method considers the trade-off among the number of physical machines, resource performance and proportional matching. A levy-based particle swarm optimization algorithm has been proposed to minimize the number of running physical servers and balance the load of physical servers by reducing the particle dispersion loss [27]. A dynamic resource allocation algorithm has been proposed to solve resource scheduling and resource matching problems [28]. Here, the Tabu search algorithm is used to solve the resource scheduling problem, a weighted bipartite graph is used to solve the resource matching problem for the tasks on the edge servers, and an optimal solution is further proposed to schedule resources between the edge servers and a cloud data center. This algorithm concentrates on the resource scheduling between the edge servers and the cloud, whereas our method focuses on VM placement in a cloud data center. In addition, a cloud workflow scheduling algorithm has been proposed based on an attack-defense game model, in which a task-VM mapping algorithm improves the workflow efficiency and different VMs are provided for workflow executions [29]. A fog computing trust management approach that assesses and manages the trust levels of nodes has been proposed to reduce malicious attacks and the service response time in the fog computing environment [30]. Everything-as-a-Resource has been proposed as a paradigm for designing collaborative applications for the web [31].

The proactive resource allocation method based on prediction is an effective way to ensure the timeliness of the resource allocation. One type of prediction method is based on machine learning.
A prediction-based dynamic multiobjective evolutionary algorithm, called NN-DNSGA-II [32], has been proposed by combining an artificial neural network with the NSGA-II [33]. This algorithm first uses the neural network to predict the Pareto-optimal solutions as the initial population of the NSGA-II and then solves the multiobjective optimization problem. The empirical results demonstrate that this algorithm outperforms nonprediction-based algorithms in most cases for the Pegasus workflow management system. However, this algorithm does not predict future VM requests; it only predicts a better initial solution for the workflow scheduling problem, so it cannot alleviate the resource allocation delay caused by future increases in VM requests. A hybrid wavelet neural network method has been proposed to improve the prediction accuracy through training the wavelet neural network with two heuristic algorithms [34]. Machine learning-based prediction needs to conduct training using a large amount of data, which increases the time consumption and thus cannot guarantee a timely resource allocation. A genetic algorithm (GA)-based prediction method has been proposed, and its prediction accuracy is better than that of the grey model at improving the resource utilization of VMs and physical machines (PMs) [35]. An anti-correlated VM placement algorithm, in which the VMs and the overloaded hosts are predicted to provide the suitable VM placement, has been proposed to reduce the energy consumption [36]. Another type of prediction method is based on statistics. ARIMA is a classical prediction model for time series, and it is often combined with other methods to predict nonstationary time series. A model combining ARIMA and fuzzy regression, in which the prediction accuracy is improved by setting sliding windows, has been proposed to predict network traffic [37]. An adaptive workload forecasting method dynamically selects the best method from the simple exponential smoothing (SES), ARIMA and linear regression (LR) methods to improve the workload forecasting accuracy [38]. However, this method uses the previous predictions of a set of models and different amounts of training data to execute the next prediction, which increases the prediction cost. A framework combining the ARIMA and LR methods has been used to predict VM and PM workloads, PM power consumption and their total costs [39]. The combination of the ARIMA and Back Propagation Neural Network (BPNN) methods improves the workload prediction accuracy and promotes the minimization of the cost of an edge cloud cluster [40]. An adaptive prediction model has been used to select the best one from the LR, ARIMA and support vector regression (SVR) methods to obtain better prediction results according to workload features [41]. An ensemble model, ESNemble, combines five different prediction algorithms and extracts their features to forecast the workload time series based on an echo state network, and it outperforms each single algorithm in terms of the prediction accuracy [42]. The above methods combine the classic ARIMA model with other prediction methods, which improves the prediction accuracy to a certain degree. However, these methods may achieve low prediction accuracy for current resource request sequences with complex characteristics and strong fluctuations. Data preprocessing should be performed to smooth the extremely nonstationary sequences to enhance the prediction accuracy.
Our previously proposed EEMD-ARIMA and EEMD-RT-ARIMA algorithms improve the prediction accuracy by decomposing a nonstationary sequence into a few relatively stationary component sequences via the EEMD method [5, 6]. The main difference between the EEMD-RT-ARIMA and EEMD-ARIMA methods is that the EEMD-RT-ARIMA method reduces the cumulative error and the prediction time by selecting and reconstructing the component sequences with similar characteristics into fewer component sequences based on RT values when the original sequence fluctuates weakly. The runs test (RT) [43] is a method to check the randomness of a sequence. A run is defined as a maximal block of successive identical symbols (0 or 1). For instance, the sequence '111001110011' includes three blocks of successive "1" and two blocks of successive "0." Each block of successive "1" or "0" is counted as one run. The total number of runs (the RT value) reflects the random fluctuation of the sequence. Any time series can be converted into a sequence of 0/1 symbols [44]. The larger the total number of runs, the more strongly the sequence fluctuates. When the RT value indicates that the original sequence fluctuates strongly, the EEMD-ARIMA method can achieve higher prediction accuracy than EEMD-RT-ARIMA because it works with more stationary component sequences. To address the problem of resource allocation lagging behind resource requests, we propose a proactive resource allocation method based on the prediction of resource requests. Figure 1 shows the implementation process of this method. First, an RT-based adaptive prediction method is used to forecast the future resource requests based on the past data of resource requests. Then, a proactive resource allocation strategy is proposed based on the prediction of resource requests. Finally, a multiobjective resource allocation method is proposed and solved by an improved NSGA-II algorithm. Fig. 1 Implementation process of the proactive resource allocation. RT-based adaptive prediction method The prediction method is designed with two goals: reduce the prediction time and improve the prediction accuracy. The prediction procedure is shown in Fig. 2. The m component sequences are extracted from a VM request sequence using the principal component analysis method. Next, these component sequences are examined to find and preprocess the outliers. Then, the RT values of these preprocessed sequences are calculated. Finally, these sequences are predicted by adaptively selecting the EEMD-ARIMA or EEMD-RT-ARIMA method according to the comparison between their RT values and the thresholds. Fig. 2 RT-based adaptive prediction method. A cloud platform provides many VM flavors, such as 2CPU4G (2 CPU cores, 4G memory) and 4CPU8G (4 CPU cores, 8G memory).
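Before turning to how the per-flavor request sequences are reduced, the runs count described above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; in particular, binarizing a real-valued series by comparing each point with the series mean is only an assumed convention, since the text only states that any time series can be converted to a 0/1 sequence [44].

```python
import numpy as np

def runs_test_value(series):
    """Count runs in a series after binarizing it around its mean (assumed mapping)."""
    x = np.asarray(series, dtype=float)
    symbols = (x > x.mean()).astype(int)        # 1 above the mean, 0 otherwise
    # A new run starts at position 0 and wherever the symbol changes.
    return 1 + int(np.count_nonzero(np.diff(symbols)))

# The example from the text: '111001110011' contains 3 runs of 1s and 2 runs of 0s.
example = [1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1]
assert runs_test_value(example) == 5
```

A larger value indicates a more strongly fluctuating sequence, which is exactly the criterion the adaptive selection below relies on.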
We cannot predict every type of VM request separately because the prediction time would be too high. Therefore, principal component analysis is first used in our prediction method, which can extract the major component sequences to reduce the prediction time. For example, a VM request sequence with \(n\) types of VMs is denoted as \(S = < s_{1} ,...,s_{i} ,...,s_{k} >\), where \(s_{i}\) represents the VM number of the ith request. A component sequence \(S_{l} = < s_{l1} ,...,s_{li} ,...,s_{lk} >\) can be extracted from this sequence \(S\) for the VM type \(l\), where \(s_{li}\) denotes the VM number of the ith request. Thus, an original VM sequence can be divided into many component sequences. We can select the fewest component sequences to implement the prediction of VM requests, such that the ratio of the sum of their VM requests to the total number of VM requests (called the proportion of VM requests) is beyond the predefined threshold \(T_{th}\) at each sampling point. These component sequences are regarded as the major component sequences. For example, there are two component sequences \(S_{h} = < s_{h1} ,...,s_{hi} ,...,s_{hk} >\) and \(S_{g} = < s_{g1} ,...,s_{gi} ,...,s_{gk} >\), where \(s_{hi}\) and \(s_{gi}\) are the quantities of the different types of VM requests. For \(\forall s_{hi} \in S_{h}\) and \(s_{gi} \in S_{g}\), $$\text{if}\;(s_{hi} + s_{gi} )/s_{i} \ge T_{th} ,\;\;\;select(S_{h} ,S_{g} ) \to S_{main}$$ These two component sequences are selected into a set \(S_{main}\) of major component sequences to implement the prediction. Intuitively, the higher the threshold \(T_{th}\), the more component sequences are selected and the more accurate the prediction of VM requests, which makes the resource allocation more reliable. However, the more component sequences there are, the higher the prediction cost. For instance, if the threshold is set to \(T_{th}^{\prime }\) higher than \(T_{th}\), that is, \(T_{th}^{\prime } > T_{th}\), three major component sequences \(S_{h}\), \(S_{g}\) and \(S_{l}\) may be selected into the set \(S_{main}\). For \(\forall s_{hi} \in S_{h}\), \(s_{gi} \in S_{g}\) and \(s_{li} \in S_{l}\), $$\text{if}\;(s_{hi} + s_{gi} + s_{li} )/s_{i} \ge T_{th}^{\prime } ,\;\;\;select(S_{h} ,S_{g} ,S_{l} ) \to S_{main}$$ Then, each component sequence is decomposed into several subsequences, and the prediction is performed on them using the EEMD-ARIMA or EEMD-RT-ARIMA method. Suppose a component sequence is decomposed into m subsequences and each subsequence takes about n seconds to predict; the running time of each prediction is almost identical, so we assume n seconds for every subsequence. If two component sequences \(S_{h}\) and \(S_{g}\) are selected under the threshold \(T_{th}\), the prediction cost can be calculated as follows. $$C\left( {S_{h} ,S_{g} ,T_{th} } \right) = 2m \cdot n$$ However, if three component sequences \(S_{h}\), \(S_{g}\) and \(S_{l}\) are selected under the threshold \(T_{th}^{\prime }\), the prediction cost becomes $$C^{\prime}\left( {S_{h} ,S_{g} ,S_{l} ,T_{th}^{\prime } } \right) = 3m \cdot n$$ It can be seen that although selecting more component sequences by setting a higher threshold can improve the prediction accuracy, the prediction cost increases considerably, which delays the resource allocation and may endanger the normal running of applications. (A minimal sketch of this selection step is given in the code below.)
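To illustrate the selection of major component sequences, the following is a minimal Python sketch under stated assumptions; it is not the authors' code, the greedy ordering by overall request volume and the container shapes are illustrative choices, and the toy numbers are invented. It keeps adding per-type sequences until their summed share of the total requests reaches the threshold \(T_{th}\) at every sampling point.

```python
import numpy as np

def select_major_components(components, totals, t_th=0.85):
    """Greedily pick per-type sequences until their summed share of the total
    requests is at least t_th at every sampling point.

    components: dict mapping VM type -> array of per-point request counts
    totals:     array of total request counts s_i at each sampling point
    """
    totals = np.asarray(totals, dtype=float)
    order = sorted(components, key=lambda l: -np.sum(components[l]))
    selected, running = [], np.zeros_like(totals)
    for vm_type in order:
        selected.append(vm_type)
        running = running + np.asarray(components[vm_type], dtype=float)
        if np.all(running / totals >= t_th):
            break
    return selected

# Toy example: two dominant types already cover >= 85% at every sampling point.
totals = np.array([100, 120, 90])
components = {
    "4CPU_1.56mem": np.array([60, 70, 50]),
    "8CPU_3.13mem": np.array([30, 40, 30]),
    "other":        np.array([10, 10, 10]),
}
print(select_major_components(components, totals, t_th=0.85))
# -> ['4CPU_1.56mem', '8CPU_3.13mem']
```

In the experiments of Section IV, two component sequences (S2 and S3) are selected with a threshold of 85% in exactly this spirit.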
Therefore, setting the threshold \(T_{th}\) is important: it should not only ensure that the selected sequences reflect the major VM requests but also keep the prediction time cost low. Suppose p major component sequences have been selected to predict the future VM requests, and \(S_{l}\) is the unselected component sequence with more VM requests than the other remaining component sequences. The threshold \(T_{th}\), which affects both the prediction accuracy and the prediction time cost, can be set as an approximation of the minimum value of the proportion of VM requests according to the following formula when one of the two conditions is satisfied. $$T_{th} \leftarrow \min \left\{ {\left( {s_{11} + ... + s_{p1} } \right)/s_{1} ,...,\left( {s_{1i} + ... + s_{pi} } \right)/s_{i} ,...,\left( {s_{1k} + ... + s_{pk} } \right)/s_{k} } \right\}$$ $${\text{s}}.{\text{t}}.\;\;\;s_{li} /s_{i} < \varepsilon_{l}$$ $${\text{or}}\;\;\;1/(p + 1) > \varepsilon_{t}$$ \(\varepsilon_{l}\) and \(\varepsilon_{t}\) indicate a threshold on the proportion of VM requests and a threshold on the ratio of prediction time cost, respectively. When the added proportion of VM requests \(s_{li} /s_{i}\) is above the threshold \(\varepsilon_{l}\) and the added ratio of prediction time cost \(1/(p + 1)\) is below the threshold \(\varepsilon_{t}\), the component sequence \(S_{l}\) is also selected to predict the future VM requests; otherwise, the selection stops and \(T_{th}\) is fixed as above. The quartile method is adopted to detect the outlier points of these major component sequences. First, we calculate the first quartile \(Q1\), the third quartile \(Q3\) and the interquartile range (IQR) by the formula \(IQR = Q3 - Q1\), then flag as outliers the points lying more than \(1.5 \cdot IQR\) above \(Q3\) or more than \(1.5 \cdot IQR\) below \(Q1\), and finally replace these outliers via a cubic spline interpolation method. The runs test is then applied to the preprocessed component sequences, and an adaptive prediction method based on RT (APMRT) is set up. If the RT value of a component sequence is higher than a predefined threshold \(R_{th}\), the EEMD-ARIMA method is selected to predict the future resource requests. Otherwise, EEMD-RT-ARIMA is selected to make the prediction. Thus, the prediction accuracy can be improved by preprocessing the outliers of the major component sequences and selecting a more accurate prediction method. We can determine the future number of each type of VM requests and proactively allocate resources to guarantee the timeliness of the resource allocation. In this method, the time complexity of extracting a component sequence is \(O(k)\). Thus, the time complexity of extracting m component sequences and data preprocessing becomes \(O(m \cdot k)\). Then, the RT values of the extracted sequences are calculated and predicted by using the adaptive prediction algorithm APMRT. The time complexity becomes \(O(2m \cdot k + q \cdot m \cdot k)\) or \(O(2m \cdot k + p \cdot m \cdot k)\), where \(q\) and \(p\) are, respectively, the number of decomposed component sequences and the number of new component sequences after reconstruction (\(p < q\)). Therefore, the time complexity of the APMRT algorithm is \(O(Q \cdot m \cdot k)\) or \(O(P \cdot m \cdot k)\), which is less than the time complexity \(O(Q \cdot n \cdot k)\) or \(O(P \cdot n \cdot k)\) of predicting all \(n\) component sequences. The prediction time is largely reduced by extracting the main component sequences.
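The outlier preprocessing and the adaptive choice between the two predictors can be sketched as follows. This is an illustrative Python sketch under assumptions, not the authors' implementation: the mean-based 0/1 mapping repeats the assumption made in the earlier sketch, and the returned string merely names which of the two models of [5, 6] would be invoked.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def preprocess_outliers(series):
    """Replace 1.5*IQR outliers via cubic spline interpolation over the inliers."""
    x = np.asarray(series, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    keep = (x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr)
    idx = np.arange(len(x))
    x[~keep] = CubicSpline(idx[keep], x[keep])(idx[~keep])
    return x

def choose_predictor(series, r_th=20):
    """Select the predictor as APMRT does: EEMD-ARIMA when the sequence
    fluctuates strongly (runs count above r_th), EEMD-RT-ARIMA otherwise."""
    x = preprocess_outliers(series)
    symbols = (x > x.mean()).astype(int)          # 0/1 symbols (assumed mapping)
    rt_value = 1 + int(np.count_nonzero(np.diff(symbols)))
    return "EEMD-ARIMA" if rt_value > r_th else "EEMD-RT-ARIMA"
```

With the threshold \(R_{th} = 20\) used in Section IV, this selection would route the sequences S1 and S3 (RT value 19) to EEMD-RT-ARIMA and S2 (RT value 21) to EEMD-ARIMA, which matches the accuracy pattern reported there.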
Proactive resource allocation strategy A cloud resource allocation algorithm should actively predict the future resource requests and allocate resources in advance to cope with a sudden increase of resource requests in the future. The proactive resource allocation framework is shown in Fig. 3. The RT-based adaptive prediction method is used to predict the future number of VM requests based on past data. A hybrid VM request queue is formed by combining the future VM requests predicted actively with the current VM requests. Fig. 3 Proactive resource allocation strategy. Suppose that the current VM request sequence is denoted as \(V(t) = < v_{1} (t),...,v_{i} (t),...v_{n} (t) >\), where \(v_{i} (t)\) indicates the VM number of the \(i\)th request at time \(t\). The number \(D(t + h)\) of the future \(l\) major types of VM requests at time \(t + h\) predicted via the adaptive prediction method APMRT is denoted as follows. $$D(t + h) = D^{1} \left( {t + h} \right) + ... + D^{i} (t + h) + ... + D^{l} (t + h)$$ \(D^{i} (t + h)\) is the predicted number of VM requests of the \(i\)th major type at time \(t + h\). The total number of VM requests \(N(t)\) at time \(t\) is the sum of the current number of VM requests \(M(t)\) and the proactively allocated share of the predicted number of VM requests \(D(t + h)\), as follows. $$N(t) = M(t) + D(t + h) \cdot C(t) \cdot P(t)$$ \(M(t) = v_{1} (t) + ... + v_{i} (t) + ... + v_{n} (t)\) is the current number of VM requests at time \(t\). If the predicted number of VM requests \(D(t + h)\) is not less than the threshold \(N_{th}\), some VMs should be allocated resources in advance: the parameter \(C(t)\) equals 1, and \(P(t)\) is the percentage (e.g., 30%) of the predicted number of VM requests \(D(t + h)\) that is allocated resources in advance. Otherwise, no VMs need to be provided in advance, that is, \(C(t) = 0\). After the predicted number of VM requests \(D(t + h)\) is determined, the VM request sequence should be established. Assuming that the predicted numbers of VM requests are ordered in descending order from VM type 1 to \(l\), the largest VM requests (i.e., the type 1 VM requests) are placed at the front of the VM request sequence, and the smallest VM requests (i.e., the type \(l\) VM requests) are placed at the end of the VM request sequence. The predicted VM request sequence can be expressed as follows. $$V(t + h) = < v_{1}^{1} (t + h),v_{2}^{1} (t + h),v_{3}^{1} (t + h),...,v_{j}^{i} (t + h),v_{j + 1}^{i} (t + h)...v_{m}^{l} (t + h) >$$ \(v_{j}^{i} (t + h)\) and \(v_{j + 1}^{i} (t + h)\) are the \(j\)th and \((j+1)\)th VM requests of the same VM type \(i\). Thus, the VM request sequence at time \(t\) can be expressed as follows. $$V^{\prime}(t) = < v_{1} (t),...v_{n} (t),v_{1}^{1} (t + h),v_{2}^{1} (t + h),v_{3}^{1} (t + h),...,v_{j}^{i} (t + h),v_{j + 1}^{i} (t + h),...,v_{m}^{l} (t + h) >$$ (A minimal sketch of this hybrid queue construction is given in the code block below.) Multiobjective resource allocation method Our previous work has presented a multiobjective resource allocation method [45].
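Before describing the multiobjective model, here is the hybrid-queue sketch referred to above. It is an illustrative Python sketch under assumptions: the container shapes, the per-type rounding and the function name are not from the paper, while the example counts 408, 457 and 62 and the 30% share are the ones used in the simulation of Section IV.

```python
def build_hybrid_queue(current, predicted, n_th=100, p_ratio=0.3):
    """Combine current VM requests with a share of the predicted requests.

    current:   list of (vm_type, count) pairs for requests already queued
    predicted: dict vm_type -> predicted count D^i(t+h) for the major types
    n_th:      threshold N_th above which proactive allocation is triggered
    p_ratio:   share P(t) of predicted requests allocated in advance
    """
    d_total = sum(predicted.values())
    c = 1 if d_total >= n_th else 0            # switch C(t)
    queue = list(current)                      # current requests stay in front
    if c:
        # Predicted types are appended in descending order of request count.
        for vm_type in sorted(predicted, key=predicted.get, reverse=True):
            queue.append((vm_type, int(round(predicted[vm_type] * p_ratio))))
    n_t = sum(cnt for _, cnt in queue)         # N(t) = M(t) + D(t+h)*C(t)*P(t)
    return queue, n_t

# Example with the numbers of Section IV: proactive share of 30%.
queue, n_t = build_hybrid_queue(
    current=[("4CPU_1.56mem", 408)],
    predicted={"4CPU_1.56mem": 457, "8CPU_3.13mem": 62},
)
print(n_t)   # 408 + round(457*0.3) + round(62*0.3) = 564
```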
This method builds a multiobjective function with the minimum number of the used PMs \(\min \{ \sum\limits_{S} {x_{ij} } \}\) and the minimum total resource performance matching distance between VMs and PMs \(\min \{ \sum\limits_{S} {MD_{ij} } \}\), where \(x_{ij}\) denotes the mapping element between the VM \(v_{i}\) and the PM \(p_{j}\). If the VM \(v_{i}\) is placed on the PM \(p_{j}\), \(x_{ij}\) equals 1. Otherwise, \(x_{ij}\) equals 0. Thus, the formula \(\sum\limits_{S} {x_{ij} }\) represents the total number of the used PMs under a solution S. In the formula of the resource performance matching distance \(MD_{ij} = \sqrt {\sum\limits_{k = 1}^{3} {(npv_{ik} - npp_{jk} )^{2} } }\), \(npv_{ik}\) represents the normalized resource performance variable of VM \(v_{i}\), \(npp_{jk}\) represents the corresponding normalized resource performance variable of the PM \(p_{j}\) and \(k{ = }1,2,3\) denote the CPU, memory and disk resources, respectively. This paper proposes a new resource allocation method based on the prediction of VM requests (RAMPVR), which further considers two issues to improve the previous resource allocation method. One is to reduce the waste of physical resources. If the proportion of different types of resources from a VM request is closer to those free resources of a PM, it is less likely to cause resource waste for this PM. That is, the closer the resource proportion \(v_{i1} :v_{i2} :v_{i3}\) of a VM is to that \(p_{j1} :p_{j2} :p_{j3}\) of a PM, the lower the resource waste, where \(v_{i1}\), \(v_{i2}\) and \(v_{i3}\) represent the requested number of CPU cores, memory capacity and disk size of the VM \(v_{i}\), respectively; and \(p_{j1}\), \(p_{j2}\) and \(p_{j3}\) denote the free number of CPU cores, memory capacity and disk size of the PM \(p_{j}\), respectively. Therefore, we build the resource proportion matching distance model shown in formula (12), where \(p_{jk}\) and \(v_{ik}\) represent the free capacity of resource type k of the PM \(p_{j}\) and the requested resource capacity of the VM \(v_{i}\), respectively, and \(R_{k}\) denotes the coefficient that adjusts the imbalanced values of parameter \(H = p_{jk} \cdot v_{i1} /p_{j1} - v_{ik}\) for different resource types. For instance, if the values of the parameter \(H\) for CPU and disk resources are 2 and 200, the disk will become the dominant resource. Therefore, the adjustment coefficient \(R_{k}\) for the disk resource should be adjusted to a lower value than that for the CPU resource, such as using \(R_{k} = 1\) for the CPU resource and \(R_{k} = 0.1\) for the disk resource. $$MPM_{ij} = \sqrt {\sum\limits_{k = 1}^{3} {\left( {\left( {\frac{{p_{jk} \cdot v_{i1} }}{{p_{j1} }} - v_{ik} } \right) \cdot R_{k} } \right)^{2} } }$$ Thus, we set up a multiobjective optimization problem of resource allocation according to the number of the used PMs \(\sum\limits_{S} {x_{ij} }\), the total resource performance matching distance \(\sum\limits_{S} {MD_{ij} }\) and the total resource proportion matching distance \(\sum\limits_{S} {MPM_{ij} }\) as follows. $$M:\min \left\{ {\sum\limits_{S} {x_{ij} } } \right\}$$ $$\min \left\{ {\sum\limits_{S} {MD_{ij} } } \right\}$$ $$\min \left\{ {\sum\limits_{S} {MPM_{ij} } } \right\}$$ The first goal of the multiobjective optimization problem M of resource allocation is to minimize the total number of the used PMs, as shown in formula (13), which depends on the value of each mapping element \(x_{ij}\) between the VM \(v_{i}\) and the PM \(p_{j}\) under a solution \(S\). 
The second goal of the problem M is to minimize the total resource performance matching distance under a solution \(S\), as shown in formula (14), which depends on the resource performance matching distance \(MD_{ij}\) between the VM \(v_{i}\) and the PM \(p_{j}\). The third goal of the problem M is to minimize the total resource proportion matching distance under a solution \(S\), as shown in formula (15), which depends on the resource proportion matching distance \(MPM_{ij}\) between the VM \(v_{i}\) and the PM \(p_{j}\). In addition, the total CPU, memory and disk capacities requested by the VMs placed on PM \(p_{j}\) must be less than its free CPU, memory and disk capacities, respectively. Thus, the constraint conditions are shown in formulas (16), (17) and (18), respectively. $${\text{S}}.{\text{T}}.\;\;\sum\limits_{S} {v_{i1} } \cdot x_{ij} \le p_{j1}$$ $$\sum\limits_{S} {v_{i2} } \cdot x_{ij} \le p_{j2}$$ $$\sum\limits_{S} {v_{i3} } \cdot x_{ij} \le p_{j3}$$ The other improvement is to optimize the solution algorithm so as to accelerate the solution of the multiobjective optimization problem. The NSGA-II is a classical algorithm for solving a multiobjective optimization problem [46,47,48]. As a Nondominated Sorting Genetic Algorithm, it has been widely applied to multiobjective problems and achieves good effectiveness [39,40,41]. However, the NSGA-II algorithm has the problem that the computation time of the fitness values (i.e., the objective functions) is too long to ensure the timeliness of the resource allocation. Furthermore, the fitness values of a large number of individuals need to be calculated during the population evolution. Hence, we improve the NSGA-II algorithm to accelerate the solution speed using parallel computation of the fitness functions. We adopt multicore processors to calculate the fitness values of the individuals in parallel, which shortens the solution time of the proposed algorithm. The fitness values of each individual are calculated as follows. $$f_{1} (I_{k} ) = \sum\limits_{S} {x_{ij} }$$ $$f_{2} (I_{k} ) = \sum\limits_{S} {MD_{ij} }$$ $$f_{3} \left( {I_{k} } \right) = \sum\limits_{S} {MPM_{ij} }$$ Experiments and analysis Prediction of VM requests We select two time series \(S1\) and \(L1\) of continuous container requests, which are taken from the Alibaba cluster data [49], as the experimental dataset of VM requests. These time series only include the data on CPU and memory resources. We use the sequence \(S1\) as an example to illustrate the adaptive prediction process. This sequence \(S1\) includes 95 sampling points (475 min) and 28 types of VMs, where each sampling point counts the total number of VMs in a 5-min period. We use the principal component analysis method to extract its component sequences \(S2\) and \(S3\) and calculate the threshold \(T_{th} = 85\%\) according to the predefined thresholds \(\varepsilon_{l} = 5\%\) and \(\varepsilon_{t} = 20\%\) and formulas (5)–(7). These sequences are all shown in Fig. 4, where the sequences \(S2\) and \(S3\) represent the VM numbers for the types of 4-core CPU and 1.56 memory (CPU = 400 means 4-core CPU and mem = 1.56 means 1.56 memory) and 8-core CPU and 3.13 memory (CPU = 800 and mem = 3.13), respectively. It is noted that the number of CPU cores and the amounts of memory are normalized. Fig. 4 Number of new VMs created for online services. The S1 curve depicts the total number of 28 types of new VMs.
The S2 and S3 curves depict the quantities of new VMs for the type of CPU = 400 and mem = 1.56 and the type of CPU = 800 and mem = 3.13, respectively. It can be observed that the number of VM requests changes dynamically and exhibits sudden bursts, which makes future resource requests difficult to predict. It can also be seen that the sequence \(S2\) with the 4-core CPU and 1.56 memory is consistent with the trend of the sequence \(S1\). The sequence \(S3\) with the 8-core CPU and 3.13 memory roughly follows the trend of the sequence \(S1\), but they have some differences in the detailed fluctuations. Next, we use the quartile method to detect the outliers, marked with red "+" in Fig. 5. Subsequently, they are replaced by new data generated by a cubic spline interpolation method. Thus, we obtain the preprocessed sequences shown in Fig. 6. Fig. 5 Box-plot of VM request sequences. The first, second and third subgraphs show the outliers (red "+") of the S1, S2 and S3 sequences, respectively. Fig. 6 The preprocessed sequences of VM requests. The S1, S2 and S3 curves represent the preprocessed sequences after a cubic spline interpolation method has replaced the outliers of the original S1, S2 and S3 sequences, respectively. Then, we use the adaptive prediction method APMRT to implement the prediction for these preprocessed sequences. The RT values of these sequences are first calculated, and the threshold \(R_{th}\) is set to 20, which was roughly determined from experimental testing on the Alibaba cluster data [6]. It is noted that this threshold \(R_{th}\) differs for different traces or scenarios and needs to be obtained from experimental testing or expert experience. We select the first 80 sampling points as the training data and the next 5 points, 10 points, and 15 points as the testing data, respectively. When the RT value of a sequence is lower than the predefined threshold \(R_{th}\), the EEMD-RT-ARIMA method is selected to execute the prediction; otherwise, the EEMD-ARIMA method is selected. Figure 7 shows the mean absolute percentage error (MAPE) of the prediction results. It can be seen that the MAPEs of the 10-point and 15-point predictions increase greatly compared with those of the 5-point prediction. For instance, the EEMD-RT-ARIMA method achieves a MAPE of 9.87% for the 5-point prediction of the sequence \(S1\), but it achieves MAPEs of 29.62% and 54.99% for the 10-point prediction and 15-point prediction, respectively. Similarly, the EEMD-ARIMA method achieves a MAPE of 11.28% for the 5-point prediction of the sequence \(S2\), but its MAPEs are 38.31% and 64.51% for the 10-point prediction and 15-point prediction, respectively. This implies that both methods are suitable for short-term rather than long-term prediction, mainly because of the strong fluctuation of the sampling data over short periods. The EEMD-RT-ARIMA method achieves lower MAPEs than the EEMD-ARIMA method for the sequences \(S1\) and \(S3\), while it is the opposite for the sequence \(S2\). We find that the RT values of S1–S3 are 19, 21 and 19, respectively. This indicates that the proposed prediction method is effective. When a sequence fluctuates strongly, the cumulative prediction error of the component sequences obtained by EEMD decomposition can be less than the error caused by predicting the non-stationary sequence directly. Thus, the EEMD-ARIMA method can achieve higher prediction accuracy.
Otherwise, the EEMD-RT-ARIMA method reduces the accumulation of prediction error more than EEMD-ARIMA and can thus achieve higher prediction accuracy. Figure 8 depicts the future 5-point values predicted via the proposed APMRT method. In the same way, we predict the future 5-point values for the sequence \(L1\), as shown in Fig. 9. It is also possible that other factors impact the prediction accuracy; we will study this issue further in the future. This paper pays more attention to virtual resource allocation based on an adaptive prediction of resource requests. Fig. 7 The MAPEs of adaptive prediction for different types of VM request sequences. The first three columns represent the MAPEs of the 5-point, 10-point and 15-point predictions obtained by the EEMD-ARIMA method. The last three columns represent those obtained by the EEMD-RT-ARIMA method. The blue, red and green parts represent the MAPEs of the S1, S2 and S3 sequences in each column, respectively. Fig. 8 Number of the predicted VMs for the S1, S2 and S3 sequences. The first, second and third subgraphs depict the comparison between the actual values and the predicted 5-point values obtained by our proposed method APMRT for the preprocessed S1, S2 and S3 sequences, respectively. Fig. 9 Number of the predicted VMs for the L1, L2 and L3 sequences. The first, second and third subgraphs depict the comparison between the actual values and the predicted 5-point values obtained by our proposed method APMRT for the preprocessed L1, L2 and L3 sequences, respectively. Simulation of the resource allocation As shown in Fig. 8, 519 VMs (4-core CPU and 1.56 memory) and 62 VMs (8-core CPU and 3.13 memory) are predicted at sampling point 425. This is a sudden growth of more than 300 VM requests. Therefore, we should allocate resources for some VMs in advance at sampling point 424 to alleviate the latency of the resource allocation. We set the proactive resource allocation ratio to \(P(t) = 0.3\). Thus, the number of VMs that need to be created at sampling point 424 can be calculated as 408 + (457 + 62) × 0.3 ≈ 564. The number of available PMs is 2972. The resource allocation problem therefore becomes the problem of creating 564 VMs on 2972 available PMs. Similarly, we can observe from Fig. 9 that the method predicts 281 VMs (4-core CPU and 1.56 memory) and 31 VMs (8-core CPU and 3.13 memory) at sampling point 1065 for the sequence \(L1\). If we set the proactive resource allocation ratio to \(P(t) = 1/3\), we should create 423 + (281 + 31)/3 = 527 VMs on 2972 PMs at sampling point 1064. Even if the prediction fails, the proactive resource allocation is not greatly affected. For example, if the predicted number of VM requests at sampling point 425 were 893 or 297, the MAPE would exceed 50%, that is, the prediction would fail. According to the proactive resource strategy, we would then create 268 or 89 VMs beyond the original number of 408 at sampling point 424, which is still no more than the actual number of VM requests at sampling point 425. However, the more VMs that are created proactively, the longer the resource allocation at sampling point 424 takes. Therefore, the prediction error should be kept within a certain range. To verify the effectiveness of the proposed RAMPVR method, we adopt the following metrics to compare our method with others. Number of the used PMs: if fewer PMs are used, some idle PMs can be shut down to reduce the energy consumption and cost.
Resource performance matching distance: the smaller the resource performance matching distance is, the better the VMs match the PMs in terms of resource performance. Resource proportion matching distance: the smaller the resource proportion matching distance is, the less the resource waste. Resource utilization: a good resource allocation method should maximize and homogenize each type of resource utilization. Time cost of resource allocation: our prediction-based resource allocation method reduces the VM creation time by allocating resources for the future VM requests in advance. This paper mainly focuses on reducing the solving time of this method; the lower the solving time is, the more the resource allocation time is reduced. We set the population size, the crossover probability, the crossover distribution index and the mutation distribution index to 200, 0.85, 20 and 20, respectively, and set the reciprocal of the number of variables as the mutation probability in the simulation. The maximum number of evaluations of the fitness values and the maximum number of iterations of the populations are set to 20,000 and 100, respectively. We compare the proposed RAMPVR method with the round robin (RR), SPEA2 and NSGA-II methods in terms of the number of the used PMs, the resource performance matching distance, the resource proportion matching distance, the resource utilization and the solving time. SPEA2 is another representative elitist multi-objective evolutionary algorithm [50], which can obtain multiple Pareto-optimal solutions in a single run. It has been widely used in different domains [41, 52] and has become a standard baseline for performance comparison of multi-objective evolutionary algorithms [53, 54]. Each method is executed 10 times and the respective average results are computed. The experimental results are shown in Tables 2 and 3. Table 2 Experimental results of resource allocation for the S1 sequence Table 3 Experimental results of resource allocation for the L1 sequence It can be seen from Table 2 that the SPEA2, NSGA-II and RAMPVR methods use different numbers of PMs. Even the RAMPVR method uses different numbers of PMs in different experiments, such as 460 and 462 PMs. The fewer PMs that are used, the more resource cost is saved. The CPU and memory utilization of the used PMs are more balanced via resource proportion matching, which reduces the resource waste. The SPEA2 method achieves CPU utilization of 58.62% and memory utilization of 60.28%, the NSGA-II method obtains 64.01% and 65.45%, and the RAMPVR method achieves 62.80% and 64.29% under the parallel computing of 8 threads, respectively. In addition, they keep basically similar numbers of used PMs, similar resource performance matching and similar resource proportion matching because they achieve a trade-off among them. The RR method demonstrates big differences in these aspects. It uses the most PMs because it adopts a polling mechanism. Furthermore, it yields the largest resource performance and resource proportion matching distances and the most unbalanced resource utilization, with CPU utilization of 74.90% and memory utilization of 28.76%, due to the polling mechanism, which causes high resource waste. However, it has a lower solution time of only 0.3 s because it uses a simple heuristic algorithm to solve the problem. Compared with the SPEA2 and NSGA-II methods, the RAMPVR method uses less time to solve the multiobjective functions.
For instance, the SPEA2 and NSGA-II methods use 1593 and 1551 s, respectively, to solve the multiobjective problem, but the RAMPVR method only costs 886 s to solve it with the parallel computing of 10 threads. Table 3 also demonstrates this situation. Thus, the VM creation time can be greatly reduced owing to the time saved by predicting the VMs in advance, and the timeliness and rapidness of resource allocation can be guaranteed. Cloud resource requests are diverse, arrive in bursts and are uncertain, which causes the resource allocation to lag behind the resource requests and the quality of service not to be ensured on a cloud platform. This paper proposes a multiobjective resource allocation method based on an adaptive prediction method for resource requests. This method can allocate virtual resources in advance to alleviate the delay problem of resource provision by using an adaptive method to predict the future resource requests. The timeliness of the resource allocation is further guaranteed by improving the NSGA-II algorithm to reduce the solving time of the multiobjective optimization problem. In addition, the various types of resources in a PM are evenly utilized, which reduces resource waste. Two experiments are conducted to verify the effectiveness of our proposed method. The experimental results show that this method realizes the balance between CPU and memory resources and reduces the resource allocation time by at least 43% (10 threads) compared with the SPEA2 and NSGA-II methods. QoS: Quality of service GOA: Grasshopper optimization algorithm PCGWO: Performance-cost grey wolf optimization SWA: Single weight algorithm DWA: Double weight algorithm NSGA-II: Nondominated Sorting Genetic Algorithm with the Elite Strategy SLA: Service level agreement GA: Genetic algorithm ARIMA: Autoregressive Integrated Moving Average Model SES: Simple exponential smoothing PM: Physical machine BPNN: Back Propagation Neural Network SVR: Support vector regression EEMD: Ensemble empirical mode decomposition RT: Runs test IQR: Interquartile range APMRT: Adaptive prediction method based on RT RAMPVR: Resource allocation method based on the prediction of VM requests MAPE: Mean absolute percentage error P. Pradhan, P.K. Behera, N.N.B. Ray, Modified round robin algorithm for resource allocation in cloud computing. Proc. Comput. Sci. 85, 878–890 (2016) S. Shirvastava, R. Dubey, M. Shrivastava, Best fit based VM allocation for cloud resource allocation. Int. J. Comput. Appl. 158(9), 25–27 (2017) M. Katyal, A. Mishra, Application of selective algorithm for effective resource provisioning in cloud computing environment. Int. J. Cloud Comput. Serv. Archit., 4(1), 1–10 (2014). X. Chen, J.X. Lin, Y. Ma et al., Self-adaptive resource allocation for cloud-based software services based on progressive QoS prediction model. Sci. China Inf. Sci. 62(11), 1–3 (2019) J. Chen, Y. Wang, A resource request prediction method based on EEMD in cloud computing. Proc. Comput. Sci. 131, 116–123 (2018) J. Chen, Y. Wang, A hybrid method for short-term host utilization prediction in cloud computing. J. Electr. Comput. Eng. 2782349, 1–14 (2019) D. Shen, Research on application-aware resource management for heterogeneous big data workloads in cloud environment. Dongnan University, 2018. X. Chen, J. X. Lin, B. Lin, T. Xiang, Y. Zhang and G. Huang, Self-learning and self-adaptive resource allocation for cloud-based software services. Concurrency Comput. Pract. Exp., 31(23), e4463 (2019). K. Gurleen, B.
Anju, A survey of prediction-based resource scheduling techniques for physics-based scientific applications, Mod. Phys. Lett. B, 32(25), 1850295(2018). Y.J. Laili, S.S. Lin, D.Y. Tang, Multi-phase integrated scheduling of hybrid tasks in cloud manufacturing environment. Robot. Comput. Integr. Manuf. 61, 101850 (2020) K. Reihaneh, S.E. Faramarz, N. Naser, M. Mehran, ATSDS: adaptive two-stage deadline-constrained workflow scheduling considering run-time circumstances in cloud computing environments. J. Supercomput. 73(6), 2430–2455 (2017) K. Kavitha, S. C. Sharma, Performance analysis of ACO-based improved virtual machine allocation in cloud for IoT-enabled healthcare. Concurr. Comput. Pract. Exp., e5613 (2019). J. Vahidi, M. Rahmati, in IEEE 5th Conference on Knowledge Based Engineering and Innovation (KBEI). Optimization of resource allocation in cloud computing by grasshopper optimization algorithm, pp. 839–844 (2019). U. Rugwiro, C.H. Gu, W.C. Ding, Task scheduling and resource allocation based on ant-colony optimization and deep reinforcement learning. J. Internet Technol. 20(5), 1463–1475 (2019) S. Shenoy, D. Gorinevsky, N. Laptev, Probabilistic Modelling of Computing Request for Service Level Agreement. IEEE Trans. Serv. Comput. 12(6), 987–993 (2019) Z.H. Liu, Z.J. Wang, C. Yang, Multi-objective resource optimization scheduling based on iterative double auction in cloud manufacturing. Adv. Manuf. 7(4), 374–388 (2019) A. A. Motlagh, A. Movaghar, A. M. Rahmani, Task scheduling mechanism in cloud computing: a systematic review. Int. J. Commun. Syst. e4302 (2019). M. Kumar, S.C. Sharma, A. Goel, S.P. Singh, A comprehensive survey for scheduling techniques in cloud computing. J. Netw. Comput. Appl. 143, 1–33 (2019) N. D. Vahed, M. Ghobaei-Arani, A. Souri, Multiobjective virtual machine placement mechanisms using nature-inspired metaheuristic algorithms in cloud environments: a comprehensive review. Int. J. Commun. Syst. 32(14), e4068 (2019). F. Sheikholeslami, N. J. Navimipour, Auction-based resource allocation mechanisms in the cloud environments: a review of the literature and reflection on future challenges. Concurr. Computat. Pract. Exp., 30(16), e4456 (2018). G. Natesan, A. Chokkalingam, An improved grey wolf optimization algorithm based task scheduling in cloud computing environment. Int. Arab J. Inf. Technol. 17(1), 73–81 (2020) M. A. Reddy, K. Ravindranath, Virtual machine placement using JAYA optimization algorithm. Appl. Artif. Intell. https://doi.org/10.1080/08839514.2019.1689714. S. Souravlas, S. Katsavounis, Scheduling fair resource allocation policies for cloud computing through flow control. Electronics 8(11), 1348 (2019). L. Guo, P. Du, A. Razaque, et al. IEEE 2018 Fifth international conference on software defined systems (SDS). Energy saving and maximize utilization cloud resources allocation via online multi-dimensional vector bin packing (2018), pp. 160–165. N. Gul, I. A. Khan, S. Mustafa, o. Khalid, A. U. R. Khan, CPU-RAM-based energy-efficient resource allocation in clouds. J. Supercomput. 75(11), 7606–7624 (2019). R.L. Sri, N. Balaji, An empirical model of adaptive cloud resource provisioning with speculation. Soft. Comput. 23(21), 10983–10999 (2019) J. J. Prevost, K. M. Nagothu, B. Kelley, et al., in 6th International Conference on System of Systems Engineering (SoSE). Prediction of cloud data center networks loads using stochastic and neural models. (2011), pp. 276–281. H.L. Tang, C.L. Li, J.P. Bai, J.H. Tang, Y.L. 
Luo, Dynamic resource allocation strategy for latency-critical and computation-intensive applications in cloud-edge environment. Comput. Commun. 134, 70–82 (2018) Y. Wang, Y. Guo, Z. Guo, T. Baker, W. Liu, CLOSURE: A cloud scientific workflow scheduling algorithm based on attack-defense game model. Future Gener. Comput. Syst. 111, 460–474 (2020) M. Al-khafajiy, T. Baker, M. Asim et al., COMITMENT: a fog computing trust management approach. J. Parallel Distrib. Comput. 137, 1–16 (2020) T. Bakera, E. Ugljaninb, N. Facic et al., Everything as a resource: Foundations and illustration through Internet-of-things. Comput. Ind. 94, 62–74 (2018) G. Ismayilov, H.R. Topcuoglu, Neural network based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing. Future Gener. Comput. Syst. 102, 307–322 (2020) K. Deb, S. Agrawal, A. Pratap, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002) S. Jeddi, S. Sharifian, A water cycle optimized wavelet neural network algorithm for request prediction in cloud computing. Cluster Comput. 22(4), 1397–1412 (2019) F.-H. Tseng, X. Wang, L.-D. Chou, H.-C. Chao, V.C.M. Leung, Dynamic resource prediction and allocation for cloud data center using the multiobjective genetic algorithm. IEEE Syst. J. 12(2), 1688–1699 (2018) R. Shaw, E. Howley, E. Barrett, An energy efficient anti-correlated virtual machine placement algorithm using resource usage predictions. Simul. Model. Pract. Theory 93, 322–342 (2019) H. Mehdi, Z. Pooranian, P. G. V. Naranjo. Cloud traffic prediction based on fuzzy ARIMA model with low dependence on historical data. Trans. Emerg. Telecommun. Technol. e3731 (2018). Zharikov, S. Telenyk, P. Bidyuk, Adaptive workload forecasting in cloud data centers. J. Grid Comput. https://doi.org/10.1007/s10723-019-09501-2. M. Aldossary, K. Djemame, I. Alzamil, A. Kostopoulos, A. Dimakis, E. Agiatzidou, Energy-aware cost prediction and pricing of virtual machines in cloud computing environments. Future Gener. Comput. Syst. 93, 442–459 (2019) C. Li, H. Sun. Y. Chen, Y. Luo, Edge cloud resource expansion and shrinkage based on workload for minimizing the cost. Future Gener. Comput. Syst. 101, 327–340 (2019). P. Singh, P. Gupta, K. Jyoti, TASM: technocrat ARIMA and SVR model for workload prediction of web applications in cloud. Cluster Comput. 22(4), 619–633 (2019) H.M. Nguyen, G. Kalra, T.J. Jun, S. Woo, D. Kim, ESNemble: an Echo State Network-based ensemble for workload prediction and resource allocation of Web applications in the cloud. J. Supercomput. 75(10), 6303–6323 (2019) P. Nakaram, T. Leauhatong, A new content-based medical image retrieval system based on wavelet transform and multidimensional wald-wolfowitz runs test. The 5th Biomedical Engineering International Conference (2012). H. Zang, L. Fan, M. Guo, Z. Wei, G. Sun, and L. Zhang, Short-term wind power interval forecasting based on an EEMD-RT-RVM model. Advances in Meteorology, 8760780(2016). J. Chen, Y. Wang, 2018 Sixth International Conference on Advanced Cloud and Big Data. A cloud resource allocation method supporting sudden and urgent requests, pp. 66–70 (2018). B. Tan, H. Ma, Y. Mei, IEEE Congress on Evolutionary Computation (CEC). A NSGA-II-based approach for service resource allocation in cloud 2017, 2574–2581 (2017) A.S. Sofia, P. GaneshKumar, Multi-objective task scheduling to minimize energy consumption and makespan of cloud computing using NSGA-II. J. Netw. Syst. Manage. 26(2), 463–485 (2018) X. 
Xu, S. Fu, Y. Yuan et al., Multiobjective computation offloading for workflow management in cloudlet-based mobile cloud using NSGA-II. Comput. Intell. 35(3), 476–495 (2019) Alibaba. cluster-trace-v2018. https://github.com/alibaba/clusterdata/-tree/master/cluster-trace-v2018. E. Zitzler, M. Laumanns, L. Thiele, SPEA2: Improving the strength Pareto evolutionary algorithm, TIK-report 103, Swiss Federal Institute of Technology (ETH) Zurich (2001). J. Jiang, X. Zhang, S. Li, A task offloading method with edge for 5G-envisioned cyber-physical-social systems. Secur. Commun. Netw., 8867094 (2020). X. Xu, X. Liu, X. Yin, Privacy-aware offloading for training tasks of generative adversarial network in edge computing. Inf. Sci. 532, 1–15 (2020) G. Rachana, N. S. Jagannath, S. Urvashi Prakash, Cloud detection in satellite images using multi-objective social spider optimization. Appl. Soft Comput. 79, 203–226 (2019). J. Yang, H. Zhu, T. Liu, Secure and economical multi-cloud storage policy with NSGA-II-C. Appl. Soft Comput. 83, 105649 (2019) This work was supported in part by National Key Research and Development Project of China, Project No. 2018YFB1404501, in part by Shandong Provincial Natural Science Foundation under Grant ZR2020MF034, ZR2019QF014 and ZR2019PF015, in part by the Youth Science Funds of Shandong Academy of Sciences under Grant 2019QN0025, in part by the "Colleges and Universities 20 Terms" Foundation of Jinan City, China, under Grant 2018GXRC015. Shandong Provincial Key Laboratory of Computer Networks, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, 250101, China Jing Chen, Yinglong Wang & Tao Liu Jing Chen Yinglong Wang Tao Liu Jing Chen is a major contributor in proposing the method, implementing simulation and writing a manuscript. Yinglong Wang gave some ideas and suggestions and revised this manuscript. Tao Liu carried out the partial experimental work and data analysis. All authors read and approved the final manuscript. Correspondence to Jing Chen. Chen, J., Wang, Y. & Liu, T. A proactive resource allocation method based on adaptive prediction of resource requests in cloud computing. J Wireless Com Network 2021, 24 (2021). https://doi.org/10.1186/s13638-021-01912-8 Adaptive short-term prediction Proactive resource allocation Balanced resource utilization Multiobjective optimization
Gauge Group and 4-dim Topology (Math Notes): [Reposted] What's a Gauge? From: Terence Tao's blog: What's a Gauge. "Gauge theory" is a term which has connotations of being a fearsomely complicated part of mathematics – for instance, playing an important role in quantum field theory, general relativity, geometric PDE, and so forth. But the underlying concept is really quite simple: a gauge is nothing more than a "coordinate system" that varies depending on one's "location" with respect to some "base space" or "parameter space", a gauge transform is a change of coordinates applied to each such location, and a gauge theory is a model for some physical or mathematical system to which gauge transforms can be applied (and is typically gauge invariant, in that all physically meaningful quantities are left unchanged (or transform naturally) under gauge transformations). By fixing a gauge (thus breaking or spending the gauge symmetry), the model becomes something easier to analyse mathematically, such as a system of partial differential equations (in classical gauge theories) or a perturbative quantum field theory (in quantum gauge theories), though the tractability of the resulting problem can be heavily dependent on the choice of gauge that one fixed. Deciding exactly how to fix a gauge (or whether one should spend the gauge symmetry at all) is a key question in the analysis of gauge theories, and one that often requires the input of geometric ideas and intuition into that analysis. I was asked recently to explain what a gauge theory was, and so I will try to do so in this post. For simplicity, I will focus exclusively on classical gauge theories; quantum gauge theories are the quantization of classical gauge theories and have their own set of conceptual difficulties (coming from quantum field theory) that I will not discuss here. While gauge theories originated from physics, I will not discuss the physical significance of these theories much here, instead focusing just on their mathematical aspects. My discussion will be informal, as I want to try to convey the geometric intuition rather than the rigorous formalism (which can, of course, be found in any graduate text on differential geometry). 0.1. Coordinate systems Before I discuss gauges, I first review the more familiar concept of a coordinate system, which is basically the special case of a gauge when the base space (or parameter space) is trivial. Classical mathematics, such as practised by the ancient Greeks, could be loosely divided into two disciplines, geometry and number theory, where I use the latter term very broadly, to encompass all sorts of mathematics dealing with any sort of number. The two disciplines are unified by the concept of a coordinate system, which allows one to convert geometric objects to numeric ones or vice versa. The most well known example of a coordinate system is the Cartesian coordinate system for the plane (or more generally for a Euclidean space), but this is just one example of many such systems. For instance: 1. One can convert a length (of, say, an interval) into an (unsigned) real number, or vice versa, once one fixes a unit of length (e.g. the metre or the foot). In this case, the coordinate system is specified by the choice of length unit. 2. One can convert a displacement along a line into a (signed) real number, or vice versa, once one fixes a unit of length and an orientation along that line.
In this case, the coordinate system is specified by the length unit together with the choice of orientation. Alternatively, one can replace the unit of length and the orientation by a unit displacement vector $e$ along the line. 3. One can convert a position (i.e. a point) on a line into a real number, or vice versa, once one fixes a unit of length, an orientation along the line, and an origin on that line. Equivalently, one can pick an origin $O$ and a unit displacement vector $e$. This coordinate system essentially identifies the original line with the standard real line ${\Bbb R}$. 4. One can generalise these systems to higher dimensions. For instance, one can convert a displacement along a plane into a vector in ${\Bbb R}^2$, or vice versa, once one fixes two linearly independent displacement vectors $e_1, e_2$ (i.e. a basis) to span that plane; the Cartesian coordinate system is just one special case of this general scheme. Similarly, one can convert a position on a plane to a vector in ${\Bbb R}^2$ once one picks a basis $e_1, e_2$ for that plane as well as an origin $O$, thus identifying that plane with the standard Euclidean plane ${\Bbb R}^2$. (To put it another way, units of measurement are nothing more than one-dimensional (i.e. scalar) coordinate systems.) 5. To convert an angle in a plane to a signed number (modulo multiples of $2\pi$), or vice versa, one needs to pick an orientation on the plane (e.g. to decide that anti-clockwise angles are positive). 6. To convert a direction in a plane to a signed number (again modulo multiples of $2\pi$), or vice versa, one needs to pick an orientation on the plane, as well as a reference direction (e.g. true or magnetic north is often used in the case of ocean navigation). 7. Similarly, to convert a position on a circle to a number (modulo multiples of $2\pi$), or vice versa, one needs to pick an orientation on that circle, together with an origin on that circle. Such a coordinate system then equates the original circle to the standard unit circle $S^1 := \{ z \in {\Bbb C}: |z| = 1 \}$ (with the standard origin $+1$ and the standard anticlockwise orientation $\circlearrowleft$). 8. To convert a position on a two-dimensional sphere (e.g. the surface of the Earth, as a first approximation) to a point on the standard unit sphere $S^2 := \{ (x,y,z) \in {\Bbb R}^3: x^2+y^2+z^2 = 1 \}$, one can pick an orientation on that sphere, an "origin" (or "north pole") for that sphere, and a "prime meridian" connecting the north pole to its antipode. Alternatively, one can view this coordinate system as determining a pair of Euler angles $\phi, \lambda$ (or a latitude and longitude) to be assigned to every point on one's original sphere. 9. The above examples were all geometric in nature, but one can also consider "combinatorial" coordinate systems, which allow one to identify combinatorial objects with numerical ones. An extremely familiar example of this is enumeration: one can identify a set A of (say) five elements with the numbers 1,2,3,4,5 simply by choosing an enumeration $a_1, a_2, \ldots, a_5$ of the set A. One can similarly enumerate other combinatorial objects (e.g. graphs, relations, trees, partial orders, etc.), and indeed this is done all the time in combinatorics. Similarly for algebraic objects, such as cosets of a subgroup H (or more generally, torsors of a group G); one can identify such a coset with H itself by designating an element of that coset to be the "identity" or "origin".
More generally, a coordinate system $\Phi$ can be viewed as an isomorphism $\Phi: A \to G$ between a given geometric (or combinatorial) object A in some class (e.g. a circle), and a standard object G in that class (e.g. the standard unit circle). (To be pedantic, this is what a global coordinate system is; a local coordinate system, such as the coordinate charts on a manifold, is an isomorphism between a local piece of a geometric or combinatorial object in a class, and a local piece of a standard object in that class. I will restrict attention to global coordinate systems for this discussion.) Coordinate systems identify geometric or combinatorial objects with numerical (or standard) ones, but in many cases, there is no natural (or canonical) choice of this identification; instead, one may be faced with a variety of coordinate systems, all equally valid. One can of course just fix one such system once and for all, in which case there is no real harm in thinking of the geometric and numeric objects as being equivalent. If however one plans to change from one system to the next (or to avoid using such systems altogether), then it becomes important to carefully distinguish these two types of objects, to avoid confusion. For instance, if an interval AB is measured to have a length of 3 yards, then it is OK to write $|AB|=3$ (identifying the geometric concept of length with the numeric concept of a positive real number) so long as you plan to stick to having the yard as the unit of length for the rest of one's analysis. But if one was also planning to use, say, feet, as a unit of length also, then to avoid confusing statements such as "$|AB|=3$and $|AB|=9$", one should specify the coordinate systems explicitly, e.g. "$|AB| = 3 \hbox{ yards}$and $|AB| = 9 \hbox{ feet}$". Similarly, identifying a point P in a plane with its coordinates (e.g. $P = (4,3)$) is safe as long as one intends to only use a single coordinate system throughout; but if one intends to change coordinates at some point (or to switch to a coordinate-free perspective) then one should be more careful, e.g. writing $P = 4 e_1 + 3 e_2$, or even $P = O + 4 e_1 + 3 e_2$, if the origin O and basis vectors $e_1, e_2$ of one's coordinate systems might be subject to future change. As mentioned above, it is possible to in many cases to dispense with coordinates altogether. For instance, one can view the length $|AB|$ of a line segment AB not as a number (which requires one to select a unit of length), but more abstractly as the equivalence class of all line segments CD that are congruent to AB. With this perspective, $|AB|$ no longer lies in the standard semigroup${\Bbb R}^+$, but in a more abstract semigroup ${\mathcal L}$ (the space of line segments quotiented by congruence), with addition now defined geometrically (by concatenation of intervals) rather than numerically. A unit of length can now be viewed as just one of many different isomorphisms $\Phi: {\mathcal L} \to {\Bbb R}^+$ between ${\mathcal L}$ and ${\Bbb R}^+$, but one can abandon the use of such units and just work with ${\mathcal L}$ directly. Many statements in Euclidean geometry involving length can be phrased in this manner. For instance, if B lies in AC, then the statement $|AC|=|AB|+|BC|$ can be stated in ${\mathcal L}$, and does not require any units to convert ${\mathcal L}$ to ${\mathcal R}^+$; with a bit more work, one can also make sense of such statements as $|AC|^2 = |AB|^2 + |BC|^2$ for a right-angled triangle ABC (i.e. 
Pythagoras' theorem) while avoiding units, by defining a symmetric bilinear product operation $\times: {\mathcal L} \times {\mathcal L} \to {\mathcal A}$ from the abstract semigroup ${\mathcal L}$ of lengths to the abstract semigroup ${\mathcal A}$ of areas. (Indeed, this is basically how the ancient Greeks, who did not quite possess the modern real number system ${\Bbb R}$, viewed geometry, though of course without the assistance of such modern terminology as "semigroup" or "bilinear".) The above abstract coordinate-free perspective is equivalent to a more concrete coordinate-invariant perspective, in which we do allow the use of coordinates to convert all geometric quantities to numeric ones, but insist that every statement that we write down is invariant under changes of coordinates. For instance, if we shrink our chosen unit of length by a factor $\lambda > 0$, then the numerical length of every interval increases by a factor of $\lambda$, e.g. $|AB| \mapsto \lambda |AB|$. The coordinate-invariant approach to length measurement then treats lengths such as $|AB|$ as numbers, but requires all statements involving such lengths to be invariant under the above scaling symmetry. For instance, a statement such as $|AC|^2 = |AB|^2 + |BC|^2$ is legitimate under this perspective, but a statement such as $|AB| = |BC|^2$ or $|AB| = 3$ is not. [In other words, co-ordinate invariance here is the same thing as being dimensionally consistent. Indeed, dimensional analysis is nothing more than the analysis of the scaling symmetries in one's coordinate systems.] One can retain this coordinate-invariance symmetry throughout one's arguments; or one can, at some point, choose to spend (or break) this coordinate invariance by selecting (or fixing) the coordinate system (which, in this case, means selecting a unit length). The advantage in spending such a symmetry is that one can often normalise one or more quantities to equal a particularly nice value; for instance, if a length $|AB|$ is appearing everywhere in one's arguments, and one has carefully retained coordinate-invariance up until some key point, then it can be convenient to spend this invariance to normalise $|AB|$ to equal 1. (In this case, one only has a one-dimensional family of symmetries, and so can only normalise one quantity at a time; but when one's symmetry group is larger, one can often normalise many more quantities at once; as a rule of thumb, one can normalise one quantity for each degree of freedom in the symmetry group.) Conversely, if one has already spent the coordinate invariance, one can often buy it back by converting all the facts, hypotheses, and desired conclusions one currently possesses in the situation back to a coordinate-invariant formulation. Thus one could imagine performing one normalisation to do one set of calculations, then undoing that normalisation to return to a coordinate-free perspective, doing some coordinate-free manipulations, and then performing a different normalisation to work on another part of the problem, and so forth. (For instance, in Euclidean geometry problems, it is often convenient to temporarily assign one key point to be the origin (thus spending translation invariance symmetry), then another, then switch back to a translation-invariant perspective, and so forth. As long as one is correctly accounting for what symmetries are being spent and bought at any given time, this can be a very powerful way of simplifying one's calculations.)
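The remark that coordinate invariance is the same thing as dimensional consistency can be illustrated mechanically. Here is a minimal sketch (my own illustration, not from the original text; the numerical lengths, the tolerance, and the range of scaling factors are made up) that tests whether a statement about lengths survives a rescaling of the unit of length, in the spirit of the scaling symmetry $|AB| \mapsto \lambda |AB|$ above.

import random

def invariant_under_rescaling(statement, lengths, trials=100):
    # Does the statement still hold after every length is multiplied by a common factor lam > 0?
    if not statement(lengths):
        return False
    for _ in range(trials):
        lam = random.uniform(0.1, 10.0)
        if not statement([lam * x for x in lengths]):
            return False
    return True

L = [3.0, 4.0, 5.0]                                               # |AB|, |BC|, |AC| for a 3-4-5 right triangle

pythagoras = lambda l: abs(l[0]**2 + l[1]**2 - l[2]**2) < 1e-9    # |AC|^2 = |AB|^2 + |BC|^2
unit_bound = lambda l: l[0] == 3.0                                # |AB| = 3

print(invariant_under_rescaling(pythagoras, L))   # True: dimensionally consistent
print(invariant_under_rescaling(unit_bound, L))   # False: depends on the chosen unit of length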
Given a coordinate system $\Phi: A \to G$ that identifies some geometric object A with a standard object G, and some isomorphism $\Psi: G \to G$ of that standard object, we can obtain a new coordinate system $\Psi \circ \Phi: A \to G$ of A by composing the two isomorphisms. [I will be vague on what "isomorphism" means; one can formalise the concept using the language of category theory.] Conversely, every other coordinate system $\Phi': A \to G$ of $A$ arises in this manner. Thus, the space of coordinate systems on A is (non-canonically) identifiable with the isomorphism group $\hbox{Isom}(G)$ of G. This isomorphism group is called the structure group (or gauge group) of the class of geometric objects. For example, the structure group for lengths is ${\Bbb R}^+$; the structure group for angles is ${\Bbb Z}/2{\Bbb Z}$; the structure group for lines is the affine group $\hbox{Aff}({\Bbb R})$; the structure group for $n$-dimensional Euclidean geometry is the Euclidean group $E(n)$; the structure group for (oriented) 2-spheres is the (special) orthogonal group $SO(3)$; and so forth. (Indeed, one can basically describe each of the classical geometries (Euclidean, affine, projective, spherical, hyperbolic, Minkowski, etc.) as a homogeneous space for its structure group, as per the Erlangen program.) 0.2. Gauges In our discussion of coordinate systems, we focused on a single geometric (or combinatorial) object $A$: a single line, a single circle, a single set, etc. We then used a single coordinate system to identify that object with a standard representative of such an object. Now let us consider the more general situation in which one has a family (or fibre bundle) $(A_x)_{x \in X}$ of geometric (or combinatorial) objects (or fibres) $A_x$: a family of lines (i.e. a line bundle), a family of circles (i.e. a circle bundle), a family of sets, etc. This family is parameterised by some parameter or base point x, which ranges in some parameter space or base space X. In many cases one also requires some topological or differentiable compatibility between the various fibres; for instance, continuous (or smooth) variations of the base point should lead to continuous (or smooth) variations in the fibre. For sake of discussion, however, let us gloss over these compatibility conditions. In many cases, each individual fibre $A_x$ in a bundle $(A_x)_{x \in X}$, being a geometric object of a certain class, can be identified with a standard object $G$ in that class, by means of a separate coordinate system $\Phi_x: A_x \to G$ for each base point x. The entire collection $\Phi = (\Phi_x)_{x \in X}$ is then referred to as a (global) gauge or trivialisation for this bundle (provided that it is compatible with whatever topological or differentiable structures one has placed on the bundle, but never mind that for now). Equivalently, a gauge is a bundle isomorphism $\Phi$ from the original bundle $(A_x)_{x \in X}$ to the trivial bundle $(G)_{x \in X}$, in which every fibre is the standard geometric object G. (There are also local gauges, which only trivialise a portion of the bundle, but let's ignore this distinction for now.) Let's give three concrete examples of bundles and gauges; one from differential geometry, one from dynamical systems, and one from combinatorics. Example 1: the circle bundle of the sphere.
Recall from the previous section that the space of directions in a plane (which can be viewed as the circle of unit vectors) can be identified with the standard circle $S^1$ after picking an orientation and a reference direction. Now let us work not on the plane, but on a sphere, and specifically, on the surface X of the earth. At each point x on this surface, there is a circle $S_x$ of directions that one can travel along the sphere from x; the collection $SX := (S_x)_{x \in X}$ of all such circles is then a circle bundle with base space X (known as the circle bundle of X; it could also be viewed as the sphere bundle, cosphere bundle, or orthonormal frame bundle of X). The structure group of this bundle is the circle group $U(1) \equiv S^1$ if one preserves orientation, or the semi-direct product $S^1 \rtimes {\Bbb Z}/2{\Bbb Z}$ otherwise. Now suppose, at every point x on the earth X, the wind is blowing in some direction $w_x \in S_x$. (This is not actually possible globally, thanks to the hairy ball theorem, but let's ignore this technicality for now.) Thus wind direction can be thought of as a collection $w = (w_x)_{x \in X}$ of representatives from the fibres of the fibre bundle $(S_x)_{x \in X}$; such a collection is known as a section of the fibre bundle (it is to bundles as the concept of a graph $\{ (x, f(x)): x \in X \} \subset X \times G$ of a function $f: X \to G$ is to the trivial bundle $(G)_{x \in X}$). At present, this section has not been represented in terms of numbers; instead, the wind direction $w = (w_x)_{x \in X}$ is a collection of points on various different circles in the circle bundle SX. But one can convert this section w into a collection of numbers (and more specifically, a function $u: X \to S^1$ from X to $S^1$) by choosing a gauge for this circle bundle; in other words, by selecting an orientation $\epsilon_x$ and a reference direction $N_x$ for each point x on the surface of the Earth X. For instance, one can pick the anticlockwise orientation $\circlearrowleft$ and true north for every point x (ignore for now the problem that this is not defined at the north and south poles, and so is merely a local gauge rather than a global one), and then each wind direction $w_x$ can now be identified with a unit complex number $u(x) \in S^1$ (e.g. $e^{i\pi/4}$ if the wind is blowing in the northwest direction at x). Now that one has a numerical function u to play with, rather than a geometric object w, one can now use analytical tools (e.g. differentiation, integration, Fourier transforms, etc.) to analyse the wind direction if one desires. But one should be aware that this function reflects the choice of gauge as well as the original object of study. If one changes the gauge (e.g. by using magnetic north instead of true north), then the function u changes, even though the wind direction w is still the same. If one does not want to spend the U(1) gauge symmetry, one would have to take care that all operations one performs on these functions are gauge-invariant; unfortunately, this restrictive requirement eliminates wide swathes of analytic tools (in particular, integration and the Fourier transform) and so one is often forced to break the gauge symmetry in order to use analysis. The challenge is then to select the gauge that maximises the effectiveness of analytic methods. $\diamond$ Example 2: circle extensions of a dynamical system. Recall (see e.g. my lecture notes) that a dynamical system is a pair X = (X,T), where X is a space and $T: X \to X$ is an invertible map.
(One can also place additional topological or measure-theoretic structures on this system, as is done in those notes, but we will ignore these structures for this discussion.) Given such a system, and given a cocycle $\rho: X \to S^1$ (which, in this context, is simply a function from X to the unit circle), we can define the skew product $X \times_\rho S^1$ of X and the unit circle $S^1$, twisted by the cocycle $\rho$, to be the Cartesian product $X \times S^1 := \{ (x,u): x \in X, u \in S^1 \}$ with the shift $\tilde T: (x,u) \mapsto (Tx, \rho(x) u)$; this is easily seen to be another dynamical system. (If one wishes to have a topological or measure-theoretic dynamical system, then $\rho$ will have to be continuous or measurable here, but let us ignore such issues for this discussion.) Observe that there is a free action $(S_v: (x,u) \mapsto (x,vu))_{v \in S^1}$ of the circle group $S^1$ on the skew product $X \times_\rho S^1$ that commutes with the shift $\tilde T$; the quotient space $(X \times_\rho S^1)/S^1$ of this action is isomorphic to X, thus leading to a factor map $\pi: X \times_\rho S^1 \to X$, which is of course just the projection map $\pi: (x,u) \mapsto x$. (An example is provided by the skew shift system, described in my lecture notes.) Conversely, suppose that one had a dynamical system $\tilde X = (\tilde X, \tilde T)$ which had a free $S^1$ action $(S_v: \tilde X \to \tilde X)_{v \in S^1}$ commuting with the shift $\tilde T$. If we set $X := \tilde X/S^1$ to be the quotient space, we thus have a factor map $\pi: \tilde X \to X$, whose level sets $\pi^{-1}(\{x\})$ are all isomorphic to the circle $S^1$; we call $\tilde X$ a circle extension of the dynamical system X. We can thus view $\tilde X$ as a circle bundle $(\pi^{-1}(\{x\}))_{x \in X}$ with base space X, so that the level sets $\pi^{-1}(\{x\})$ are now the fibres of the bundle, and the structure group is $S^1$. If one picks a gauge for this bundle, by choosing a reference point $p_x \in \pi^{-1}(\{x\})$ in the fibre for each base point x (thus in this context a gauge is the same thing as a section $p = (p_x)_{x \in X}$; this is basically because this bundle is a principal bundle), then one can identify $\tilde X$ with a skew product $X \times_\rho S^1$ by identifying the point $S_v p_x \in \tilde X$ with the point $(x,v) \in X \times_\rho S^1$ for all $x \in X, v \in S^1$, and letting $\rho$ be the cocycle defined by the formula $$S_{\rho(x)} p_{Tx} = \tilde T p_x.$$ One can check that this is indeed an isomorphism of dynamical systems; if all the various objects here are continuous (resp. measurable), then one also has an isomorphism of topological dynamical systems (resp. measure-preserving systems). Thus we see that gauges allow us to write circle extensions as skew products. However, more than one gauge is available for any given circle extension; two gauges $(p_x)_{x \in X}$, $(p'_x)_{x \in X}$ will give rise to two skew products $X \times_\rho S^1$, $X \times_{\rho'} S^1$ which are isomorphic but not identical. Indeed, if we let $v: X \to S^1$ be a rotation map that sends $p_x$ to $p'_{x}$, thus $p'_{x} = S_{v(x)} p_x$, then we see that the two cocycles $\rho'$ and $\rho$ are related by the formula $$\rho'(x) = v(Tx)^{-1} \rho(x) v(x).\tag{1}$$ Two cocycles that obey the above relation are called cohomologous; their skew products are isomorphic to each other.
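This equivalence can be checked mechanically on a toy example. The following is a minimal sketch (my own illustration, not from the original text; the base system, the cocycles and the gauge change are all made up): the base is the rotation $x \mapsto x+1$ on ${\Bbb Z}/N{\Bbb Z}$, the circle is represented by unit complex numbers, and one verifies that the gauge change $(x,u) \mapsto (x, u/v(x))$ intertwines the skew product built from $\rho$ with the one built from the cohomologous cocycle $\rho'$ of formula (1).

import cmath, math, random

N = 12
T = lambda x: (x + 1) % N                                              # the base shift

rho = [cmath.exp(2j * math.pi * random.random()) for _ in range(N)]    # a cocycle rho: X -> S^1
v = [cmath.exp(2j * math.pi * random.random()) for _ in range(N)]      # a gauge change v: X -> S^1
rho_prime = [v[T(x)].conjugate() * rho[x] * v[x] for x in range(N)]    # the cohomologous cocycle, formula (1)

def shift(cocycle, point):                  # the skew-product shift (x, u) -> (Tx, cocycle(x) u)
    x, u = point
    return (T(x), cocycle[x] * u)

def change_gauge(point):                    # pass from rho-coordinates to rho'-coordinates
    x, u = point
    return (x, u / v[x])

p = (3, cmath.exp(0.7j))
lhs = change_gauge(shift(rho, p))
rhs = shift(rho_prime, change_gauge(p))
print(lhs[0] == rhs[0] and abs(lhs[1] - rhs[1]) < 1e-12)   # True: the two skew products are isomorphic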
An important general question in dynamical systems is to understand when two given cocycles are in fact cohomologous, for instance by introducing non-trivial cohomological invariants for such cocycles. As an example of a circle extension, consider the sphere $X = S^2$ from Example 1, with a rotation shift T given by, say, rotating anti-clockwise by some given angle $\alpha$ around the axis connecting the north and south poles. This rotation also induces a rotation on the circle bundle $\tilde X := SX$, thus giving a circle extension of the original system $(X,T)$. One can then use a gauge to write this system as a skew product. For instance, if one selects the gauge that chooses $p_x$ to be the true north direction at each point x (ignoring for now the fact that this is not defined at the two poles), then this system becomes the ordinary product $X \times_0 S^1$ of the original system X with the circle $S^1$, with the cocycle being the trivial cocycle 0. If we were however to use a different gauge, e.g. magnetic north instead of true north, one would obtain a different skew-product $X \times_{\rho'} S^1$, where $\rho'$ is some cocycle which is cohomologous to the trivial cocycle (except at the poles). (A cocycle which is globally cohomologous to the trivial cocycle is known as a coboundary. Not every cocycle is a coboundary, especially once one imposes topological or measure-theoretic structure, thanks to the presence of various topological or measure-theoretic invariants, such as degree.) There was nothing terribly special about circles in this example; one can also define group extensions, or more generally homogeneous space extensions, of dynamical systems, and have a similar theory, although one has to take a little care with the order of operations when the structure group is non-abelian; see e.g. my lecture notes on isometric extensions. $\diamond$ Example 3: Orienting an undirected graph. The language of gauge theory is not often used in combinatorics, but nevertheless combinatorics does provide some simple discrete examples of bundles and gauges which can be useful in getting an intuitive grasp of the concept. Consider for instance an undirected graph G = (V,E) of vertices and edges. I will let X=E denote the space of edges (not the space of vertices!). Every edge $e \in X$ can be oriented (or directed) in two different ways; let $A_e$ be the pair of directed edges of e arising in this manner. Then $(A_e)_{e \in X}$ is a fibre bundle with base space X and with each fibre isomorphic (in the category of sets) to the standard two-element set $\{-1,+1\}$, with structure group ${\Bbb Z}/2{\Bbb Z}$. A priori, there is no reason to prefer one orientation of an edge e over another, and so there is no canonical way to identify each fibre $A_e$ with the standard set $\{-1,+1\}$. Nevertheless, we can go ahead and arbitrarily select a gauge for X by orienting the graph G. This orientation assigns an oriented edge $\vec e \in A_e$ to each edge $e \in X$, thus creating a gauge (or section) $(\vec e)_{e \in X}$ of the bundle $(A_e)_{e \in X}$. Once one selects such a gauge, we can now identify the fibre bundle $(A_e)_{e \in X}$ with the trivial bundle $X \times \{-1,+1\}$ by identifying the preferred oriented edge $\vec e$ of each unoriented edge $e \in X$ with $(e,+1)$, and the other oriented edge with $(e,-1)$.
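Here is a minimal sketch of this trivialisation (my own illustration, not from the original text; the three-vertex graph and the chosen reference orientation are made up), identifying each directed edge with an element of the trivial bundle $X \times \{-1,+1\}$ relative to a chosen gauge.

edges = [frozenset(e) for e in [("u", "v"), ("v", "w"), ("w", "u")]]   # the base space X = E of a triangle graph

gauge = {frozenset(("u", "v")): ("u", "v"),    # a reference orientation: one preferred directed edge per edge
         frozenset(("v", "w")): ("w", "v"),
         frozenset(("w", "u")): ("w", "u")}

def to_trivial_bundle(directed_edge):
    # Identify a directed edge with an element (e, +1) or (e, -1) of X x {-1,+1}, relative to the gauge.
    e = frozenset(directed_edge)
    return (e, +1 if gauge[e] == tuple(directed_edge) else -1)

print(to_trivial_bundle(("u", "v")))   # (frozenset({'u', 'v'}), 1)
print(to_trivial_bundle(("v", "u")))   # (frozenset({'u', 'v'}), -1)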
In particular, any other orientation of the graph G can be expressed relative to this reference orientation as a function $f: X \to \{-1,+1\}$, which measures when the two orientations agree or disagree with each other. $\diamond$ Recall that every isomorphism $\Psi \in \hbox{Isom}(G)$ of a standard geometric object G allowed one to transform a coordinate system $\Phi: A \to G$ on a geometric object A to another coordinate system $\Psi \circ \Phi: A \to G$. We can generalise this observation to gauges: every family $\Psi = (\Psi_x)_{x \in X}$ of isomorphisms on G allows one to transform a gauge $(\Phi_x)_{x \in X}$ to another gauge $(\Psi_x \circ \Phi_x)_{x \in X}$ (again assuming that $\Psi$ respects whatever topological or differentiable structure is present). Such a collection $\Psi$ is known as a gauge transformation. For instance, in Example 1, one could rotate the reference direction $N_x$ at each point $x \in X$ anti-clockwise by some angle $\theta(x)$; this would cause the function $u(x)$ to rotate to $u(x) e^{-i\theta(x)}$. In Example 2, a gauge transformation is just a map $v: X \to S^1$ (which may need to be continuous or measurable, depending on the structures one places on X); it rotates a point $(x,u) \in X \times_\rho S^1$ to $(x, v(x)^{-1} u)$, and it also transforms the cocycle $\rho$ by the formula (1). In Example 3, a gauge transformation would be a map $v: X \to \{-1,+1\}$; it rotates a point $(x, \epsilon) \in X \times \{-1,+1\}$ to $(x, v(x) \epsilon)$. Gauge transformations transform functions on the base X in many ways, but some things remain gauge-invariant. For instance, in Example 1, the winding number of a function $u: X \to S^1$ along a closed loop $\gamma \subset X$ would not change under a gauge transformation (as long as no singularities in the gauge are created, moved, or destroyed, and the orientation is not reversed). But such topological gauge-invariants are not the only gauge invariants of interest; there are important differential gauge-invariants which make gauge theory a crucial component of modern differential geometry and geometric PDE. But to describe these, one needs an additional gauge-theoretic concept, namely that of a connection on a fibre bundle. 0.3. Connections There are many essentially equivalent ways to introduce the concept of a connection; I will use the formulation based primarily on parallel transport, and on differentiation of sections. To avoid some technical details I will work (somewhat non-rigorously) with infinitesimals such as dx. (There are ways to make the use of infinitesimals rigorous, such as non-standard analysis, but this is not the focus of my post today.) In single variable calculus, we learn that if we want to differentiate a function $f: [a,b] \to {\Bbb R}$ at some point x, then we need to compare the value f(x) of f at x with its value f(x+dx) at some infinitesimally close point x+dx, take the difference $f(x+dx)-f(x)$, and then divide by dx, taking limits as $dx \to 0$, if one does not like to use infinitesimals: $$\displaystyle \nabla f(x) := \lim_{dx \to 0} \frac{f(x+dx) - f(x)}{dx}.$$ In several variable calculus, we learn several generalisations of this concept in which the domain and range of f are allowed to be multi-dimensional. For instance, if $f: X \to {\Bbb R}^d$ is now a vector-valued function on some multi-dimensional domain (e.g.
a manifold) X, and v is a tangent vector to X at some point x, we can define the directional derivative $\nabla_v f(x)$ of f at x by comparing $f(x+v dt)$ with $f(x)$ for some infinitesimal dt, taking the difference $f(x+vdt) - f(x)$, dividing by dt, and then taking limits as $dt \to 0$: $$\displaystyle \nabla_v f(x) := \lim_{dt \to 0} \frac{f(x+vdt) - f(x)}{dt}.$$ [Strictly speaking, if X is not flat, then x+vdt is only defined up to an ambiguity of o(dt), but let us ignore this minor issue here, as it is not important in the limit.] If f is sufficiently smooth (being continuously differentiable will do), the directional derivative is linear in v, thus for instance $\nabla_{v+v'} f(x) = \nabla_v f(x) + \nabla_{v'} f(x)$. One can also generalise the range of f to other multi-dimensional domains than ${\Bbb R}^d$; the directional derivative then lives in a tangent space of that domain. In all of the above examples, though, we were differentiating functions $f:X \to Y$, thus each element $x \in X$ in the base (or domain) gets mapped to an element $f(x)$ in the same range Y. However, in many geometrical situations we would like to differentiate sections $f = (f_x)_{x \in X}$ instead of functions, thus f now maps each point $x \in X$ in the base to an element $f_x \in A_x$ of some fibre in a fibre bundle $(A_x)_{x \in X}$. For instance, one might want to know how the wind direction $w = (w_x)_{x \in X}$ changes as one moves x in some direction v; thus computing a directional derivative $\nabla_v w(x)$ of w at x in direction v. One can try to mimic the previous definitions in order to define this directional derivative. For instance, one can move x along v by some infinitesimal amount dt, creating a nearby point $x+v dt$, and then evaluate w at this point to obtain $w(x+vdt)$. But here we hit a snag: we cannot directly compare $w(x+vdt)$ with $w(x)$, because the former lives in the fibre $A_{x+vdt}$ while the latter lives in the fibre $A_x$. With a gauge, of course, we can identify all the fibres (and in particular, $A_{x+vdt}$ and $A_x$) with a common object G, in which case there is no difficulty comparing $w(x+vdt)$ with $w(x)$. But this would lead to a notion of derivative which is not gauge-invariant, known as the non-covariant or ordinary derivative in physics. But there is another way to take a derivative, which does not require the full strength of a gauge (which identifies all fibres simultaneously together). Indeed, in order to compute a derivative $\nabla_v w(x)$, one only needs to identify (or connect) two infinitesimally close fibres together: $A_x$ and $A_{x+vdt}$. In practice, these two fibres are already "within O(dt) of each other" in some sense, but suppose in fact that we have some means $\Gamma(x \to x+vdt): A_x \to A_{x+vdt}$ of identifying these two fibres together. Then, we can pull back $w(x+vdt)$ from $A_{x+vdt}$ to $A_x$ through $\Gamma(x \to x+vdt)$ to define the covariant derivative: $$\displaystyle \nabla_v w(x) := \lim_{dt \to 0} \frac{\Gamma(x \to x+vdt)^{-1}( w(x+vdt) ) - w(x) }{dt}.$$ In order to retain the basic property that $\nabla_v w$ is linear in v, and to allow one to extend the infinitesimal identifications $\Gamma(x \to x+dx)$ to non-infinitesimal identifications, we require the maps $\Gamma(x \to x+dx)$ to be approximately transitive, in that $$\Gamma(x+dx \to x+dx+dx') \circ \Gamma(x \to x + dx ) \approx \Gamma(x \to x+dx+dx')\tag{2}$$ for all x, dx, dx', where the $\approx$ symbol indicates that the error between the two sides is o(|dx| + |dx'|).
[The precise nature of this error is actually rather important, being essentially the curvature of the connection $\Gamma$ at x in the directions $dx, dx'$, but let us ignore this for now.] To oversimplify a little bit, any collection $\Gamma$ of infinitesimal maps $\Gamma(x \to x+dx)$ obeying this property (and some technical regularity properties) is a connection. [There are many other important ways to view connections, for instance the Christoffel symbol perspective that we will discuss a bit later. Another approach is to focus on the differentiation operation $\nabla_v$ rather than the identifications $\Gamma(x \to x+dx)$ or $\Gamma(\gamma)$, and in particular on the algebraic properties of this operation, such as linearity in v or derivation-type properties (in particular, obeying various variants of the Leibniz rule). This approach is particularly important in algebraic geometry, in which the notion of an infinitesimal or of a path may not always be obviously available, but we will not discuss it here.] The way we have defined it, a connection is a means of identifying two infinitesimally close fibres $A_x, A_{x+dx}$ of a fibre bundle $(A_x)_{x \in X}$. But, thanks to (2), we can also identify two distant fibres $A_x, A_y$, provided that we have a path $\gamma: [a,b] \to X$ from $x = \gamma(a)$ to $y = \gamma(b)$, by concatenating the infinitesimal identifications by a non-commutative variant of a Riemann sum: $$\Gamma(\gamma) := \lim_{\sup |t_{i+1}-t_i| \to 0} \Gamma(\gamma(t_{n-1}) \to \gamma(t_n)) \circ \ldots \circ \Gamma(\gamma(t_0) \to \gamma(t_1)),\tag{3}$$ where $a = t_0 < t_1 < \ldots < t_n = b$ ranges over partitions. This gives us a parallel transport map $\Gamma(\gamma): A_x \to A_y$ identifying $A_x$ with $A_y$, which in view of its Riemann sum definition, can be viewed as the "integral" of the connection $\Gamma$ along the curve $\gamma$. This map does not depend on how one parametrises the path $\gamma$, but it can depend on the choice of path used to travel from x to y. We illustrate these concepts using several examples, including the three examples introduced earlier. Example 1 continued. (Circle bundle of the sphere) The geometry of the sphere X in Example 1 provides a natural connection on the circle bundle SX, the Levi-Civita connection $\Gamma$, that lets one transport directions around the sphere in as "parallel" a manner as possible; the precise definition is a little technical (see e.g. my lecture notes for a brief description). Suppose for instance one starts at some location x on the equator of the earth, and moves to the antipodal point y by a great semi-circle $\gamma$ going through the north pole. The parallel transport $\Gamma(\gamma): S_x \to S_y$ along this path will map the north direction at x to the south direction at y. On the other hand, if we went from x to y by a great semi-circle $\gamma'$ going along the equator, then the north direction at x would be transported to the north direction at y. Given a section u of this circle bundle, the quantity $\nabla_v u(x)$ can be interpreted as the rate at which u rotates as one travels from x with velocity v. $\diamond$ Example 2 continued. (Circle extensions) In Example 2, we change the notion of "infinitesimally close" by declaring x and Tx to be infinitesimally close for any x in the base space X (and more generally, x and $T^n x$ are connected, at non-infinitesimal distance, by the path $x \to Tx \to \ldots \to T^n x$ for any positive integer n, and similarly for negative n).
A cocycle $\rho: X \to S^1$ can then be viewed as defining a connection on the skew product $X \times_\rho S^1$, by setting $\Gamma( x \to Tx ) = \rho(x)$ (and also $\Gamma(x \to x) = 1$ and $\Gamma(Tx \to x ) = \rho(x)^{-1}$ to ensure compatibility with (2); to avoid notational ambiguities let us assume for sake of discussion that $x, Tx, T^{-1} x$ are always distinct from each other). The non-infinitesimal parallel transports $\rho_n(x) := \Gamma(x \to Tx \to \ldots \to T^n x)$ are then given by the formula $\rho_n(x) = \rho(x) \rho(Tx) \ldots \rho(T^{n-1} x)$ for positive n (with a similar formula for negative n). Note that these iterated cocycles $\rho_n$ also describe the iterations of the shift $\tilde T: (x,u) \mapsto (Tx,\rho(x)u)$, indeed $\tilde T^n (x,u) = (T^n x, \rho_n(x) u)$. $\diamond$ Example 3 continued. (Oriented graphs) In Example 3, we declare two edges e, e' in X to be "infinitesimally close" if they are adjacent. Then there is a natural notion of parallel transport on the bundle $(A_e)_{e \in X}$; given two adjacent edges $e = \{u,v\}$, $e'=\{v,w\}$, we let $\Gamma(e \to e')$ be the isomorphism from $A_e = \{ \vec{uv}, \vec{vu} \}$ to $A_{e'} = \{ \vec{vw}, \vec{wv} \}$ that maps $\vec{uv}$ to $\vec{vw}$ and $\vec{vu}$ to $\vec{wv}$. Any path $\gamma = (\{v_1,v_2\}, \{v_2,v_3\}, \ldots, \{v_{n-1},v_n\})$ of edges then gives rise to a parallel transport map $\Gamma(\gamma)$ identifying $A_{\{v_1,v_2\}}$ with $A_{\{v_{n-1},v_n\}}$. For instance, the triangular path $(\{u,v\}, \{v,w\}, \{w,u\}, \{u,v\})$ induces the identity map on $A_{\{u,v\}}$, whereas the U-turn path $(\{u,v\}, \{v,w\}, \{w,x\}, \{x,v\}, \{v,u\})$ induces the anti-identity map on $A_{\{u,v\}}$. Given an orientation $\vec G = (\vec e)_{e \in X}$ of the graph G, one can "differentiate" $\vec G$ at an edge $\{u,v\}$ in the direction $\{u,v\} \to \{v,w\}$ to obtain a number $\nabla_{\{u,v\} \to \{v,w\}} \vec G(\{u,v\}) \in \{-1,+1\}$, defined as +1 if the parallel transport from $\{u,v\}$ to $\{v,w\}$ preserves the orientations given by $\vec G$, and -1 otherwise. This number of course depends on the choice of orientation. But certain combinations of these numbers are independent of such a choice; for instance, given any closed path $\gamma = \{e_1,e_2,\ldots,e_n,e_{n+1}=e_1\}$ of edges in X, the "integral" $\prod_{i=1}^n \nabla_{e_i \to e_{i+1}} \vec G(e_i) \in \{-1,+1\}$ is independent of the choice of orientation $\vec G$ (indeed, it is equal to +1 if $\Gamma(\gamma)$ is the identity, and -1 if $\Gamma(\gamma)$ is the anti-identity). $\diamond$ Example 4. (Monodromy) One can interpret the monodromy maps of a covering space in the language of connections. Suppose for instance that we have a covering space $\pi: \tilde X \to X$ of a topological space X whose fibres $\pi^{-1}(\{x\})$ are discrete; thus $\tilde X$ is a discrete fibre bundle over X. The discreteness of the fibres induces a natural connection $\Gamma$ on this bundle, given by the path-lifting property; in particular, if one integrates this connection along a closed loop based at some point x, one obtains the monodromy map of that loop at x. $\diamond$ Example 5. (Definite integrals) In view of the definition (3), it should not be surprising that the definite integral $\int_a^b f(x)\ dx$ of a scalar function $f: [a,b] \to {\Bbb R}$ can be interpreted as an integral of a connection. Indeed, set $X := [a,b]$, and let $({\Bbb R})_{x \in X}$ be the trivial line bundle over X.
The function f induces a connection $\Gamma_f$ on this bundle by setting $$\Gamma_f(x \to x+dx): y \mapsto y + f(x) dx.$$ The integral $\Gamma_f([a,b])$ of this connection along $[a,b]$ is then just the operation of translation by $\int_a^b f(x)\ dx$ in the real line. $\diamond$ Example 6. (Line integrals) One can generalise Example 5 to encompass line integrals in several variable calculus. Indeed, if $X$ is an n-dimensional domain, then a vector field $f = (f_1,\ldots,f_n): X \to {\Bbb R}^n$ induces a connection $\Gamma_f$ on the trivial line bundle $({\Bbb R})_{x \in X}$ by setting $$\Gamma_f( x \to x+dx ): y \mapsto y + f_1(x) dx_1 + \ldots + f_n(x) dx_n.$$ The integral $\Gamma_f(\gamma)$ of this connection along a curve $\gamma$ is then just the operation of translation by the line integral $\int_\gamma f \cdot dx$ in the real line. Note that a gauge transformation in this context is just a vertical translation $(x,y) \mapsto (x,y+V(x))$ of the bundle $({\Bbb R})_{x \in X} \equiv X \times {\Bbb R}$ by some potential function $V: X \to {\Bbb R}$, which we will assume to be smooth for sake of discussion. This transformation conjugates the connection $\Gamma_f$ to the connection $\Gamma_{f - \nabla V}$. Note that this only changes $f$ by a conservative (gradient) field: in particular, the integral of the connection along a closed loop is unchanged by the gauge transformation. $\diamond$ Example 7. (ODE) A different way to generalise Example 5 can be obtained by using the fundamental theorem of calculus to interpret $\int_{[a,b]} f(x)\ dx$ as the final value $u(b)$ of the solution to the initial value problem $$u'(t) = f(t); \quad u(a) = 0$$ for the ordinary differential equation $u'=f$. More generally, the solution u(b) to the initial value problem $$u'(t) = F( t, u(t) ); \quad u(a) = u_0$$ for some $u: [a,b] \to {\Bbb R}^n$, where $F: [a,b] \times {\Bbb R}^n \to {\Bbb R}^n$ is a function (let us take it to be Lipschitz, to avoid technical issues), can also be interpreted as the image of $u_0$ under the integral $\Gamma([a,b])$ of a connection $\Gamma$ on the trivial vector space bundle $({\Bbb R}^n)_{t \in [a,b]}$, defined by the formula $$\Gamma(t \to t+dt): y \mapsto y + F(t,y) dt.$$ When the limit in the definition (3) of $\Gamma([a,b])$ is replaced by a discrete partition, this is nothing more than the Euler method for solving ODE. Note that the method of integrating factors in solving ODE can be interpreted as an attempt to simplify the connection $\Gamma$ via a gauge transformation. Indeed, it can be profitable to view the entire theory of connections as a multidimensional "variable-coefficient" generalisation of the theory of ODE. $\diamond$ Once one selects a gauge, one can express a connection in terms of that gauge. In the case of vector bundles (in which every fibre is a d-dimensional vector space for some fixed d), the covariant derivative $\nabla_v w(x)$ of a section w of that bundle along some vector v emanating from x can be expressed in any given gauge by the formula $$\nabla_v w(x)^i = v^\alpha \partial_\alpha w(x)^i + v^\alpha \Gamma_{\alpha j}^i w(x)^j$$ where we use the gauge to express w(x) as a vector $(w(x)^1,\ldots,w(x)^d)$, the fibre indices $i, j = 1,\ldots,d$ and the base indices $\alpha$ are summed when repeated as per the usual conventions, and the $\Gamma_{\alpha j}^i := (\nabla_{e_\alpha} e_j)^i$ are the Christoffel symbols of this connection relative to this gauge.
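Returning to Example 7, the interpretation of the Euler method as a discretised parallel transport can be made concrete in a few lines. The following is a minimal sketch (my own illustration, not from the original text; the function F, the interval, the initial value and the step count are made up) that approximates $\Gamma([a,b])$ by composing the infinitesimal identifications $y \mapsto y + F(t,y) dt$ over a partition of $[a,b]$.

def transport(F, a, b, y0, steps=100000):
    # Approximate Gamma([a, b]) applied to y0, for the connection Gamma(t -> t+dt): y -> y + F(t, y) dt,
    # by composing the infinitesimal identifications over a partition of [a, b] (the Euler method).
    dt = (b - a) / steps
    t, y = a, y0
    for _ in range(steps):
        y = y + F(t, y) * dt
        t += dt
    return y

# F(t, y) = y corresponds to u' = u, so Gamma([0, 1]) applied to 1 should be e = 2.71828...
print(transport(lambda t, y: y, 0.0, 1.0, 1.0))   # ~2.71827 (Euler approximation of e)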
One example of this formalism, which models electromagnetism, is a connection on a complex line bundle $V = (V_{t,x})_{(t,x) \in {\Bbb R}^{1+3}}$ in spacetime ${\Bbb R}^{1+3} = \{ (t,x): t \in {\Bbb R}, x \in {\Bbb R}^3 \}$. Such a bundle assigns a complex line $V_{t,x}$ (i.e. a one-dimensional complex vector space, and thus isomorphic to ${\Bbb C}$) to every point $(t,x)$ in spacetime. The structure group here is U(1) (strictly speaking, this means that we view the fibres as normed one-dimensional complex vector spaces, otherwise the structure group would be ${\Bbb C}^\times$). A gauge identifies V with the trivial complex line bundle $({\Bbb C})_{(t,x) \in {\Bbb R}^{1+3}}$, thus converting sections $(w_{t,x})_{(t,x) \in {\Bbb R}^{1+3}}$ of this bundle into complex-valued functions $\phi: {\Bbb R}^{1+3} \to {\Bbb C}$. A connection on V, when described in this gauge, can be given in terms of fields $A_\alpha: {\Bbb R}^{1+3} \to {\Bbb R}$ for $\alpha = 0,1,2,3$; the covariant derivative of a section in this gauge is then given by the formula $$\nabla_\alpha \phi := \partial_\alpha \phi + i A_\alpha \phi.$$ In the theory of electromagnetism, $A_0$ and $(A_1,A_2,A_3)$ are known (up to some normalising constants) as the electric potential and magnetic potential respectively. Sections of V do not show up directly in Maxwell's equations of electromagnetism, but appear in more complicated variants of these equations, such as the Maxwell-Klein-Gordon equation. A gauge transformation of V is given by a map $U: {\Bbb R}^{1+3} \to S^1$; it transforms sections by the formula $\phi \mapsto U^{-1} \phi$, and connections by the formula $\nabla_\alpha \mapsto U^{-1} \nabla_\alpha U$, or equivalently $$A_\alpha \mapsto A_\alpha + \frac{1}{i} U^{-1} \partial_\alpha U = A_\alpha + \partial_\alpha \frac{1}{i} \log U.\tag{4}$$ In particular, the electromagnetic potential $A_\alpha$ is not gauge invariant (which broadly corresponds to the concept of being nonphysical or nonmeasurable in physics), as gauge symmetry allows one to add an arbitrary gradient function to this potential. However, the curvature tensor $$F_{\alpha \beta} := \frac{1}{i}[\nabla_\alpha, \nabla_\beta] = \partial_\alpha A_\beta - \partial_\beta A_\alpha\tag{5}$$ of the connection is gauge-invariant, and physically measurable in electromagnetism; the components $F_{0i} = -F_{i0}$ for $i=1,2,3$ of this field have a physical interpretation as the electric field, and the components $F_{ij} = -F_{ji}$ for $1 \leq i < j \leq 3$ have a physical interpretation as the magnetic field. (The curvature tensor $F$ can be interpreted as describing the parallel transport of infinitesimal rectangles; it measures how far off the connection is from being flat, i.e. from being (locally) "straightened" via some choice of gauge into the trivial connection.) In nonabelian gauge theories, in which the structure group is more complicated than just the abelian group U(1), the curvature tensor is non-scalar, but remains gauge-invariant in a tensor sense (gauge transformations will transform the curvature as they would transform a tensor of the same rank). Gauge theories can often be expressed succinctly in terms of a connection and its curvatures. For instance, Maxwell's equations in free space, which describe how electromagnetic radiation propagates in the presence of charges and currents (but no media other than vacuum), can be written (after normalising away some physical constants) as $$\partial^\alpha F_{\alpha \beta} = J_\beta$$ where $J_\beta$ is the 4-current.
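The gauge-invariance of the curvature (5) under the transformation (4) amounts to the symmetry of mixed partial derivatives, and can be checked symbolically. Here is a minimal sketch (my own illustration, not from the original text; it uses the sympy library, works in 1+1 dimensions for brevity, and writes $U = e^{i\chi}$ so that the gauge transformation adds the gradient of $\chi$ to $A_\alpha$; all symbol names are made up).

import sympy as sp

x0, x1 = sp.symbols("x0 x1")
A0 = sp.Function("A0")(x0, x1)        # the two components of the connection in this gauge
A1 = sp.Function("A1")(x0, x1)
chi = sp.Function("chi")(x0, x1)      # U = exp(i*chi), so the gauge transformation adds d(chi)/dx_alpha to A_alpha

def curvature(a0, a1):                # F_{01} = d a1 / d x0 - d a0 / d x1
    return sp.diff(a1, x0) - sp.diff(a0, x1)

F_before = curvature(A0, A1)
F_after = curvature(A0 + sp.diff(chi, x0), A1 + sp.diff(chi, x1))

print(sp.simplify(F_after - F_before))   # 0: the curvature is unchanged by the gauge transformation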
(Actually, the equation $\partial^\alpha F_{\alpha \beta} = J_\beta$ is only half of Maxwell's equations, but the other half are a consequence of the interpretation (5) of the electromagnetic field as a curvature of a U(1) connection. Thus this purely geometric interpretation of electromagnetism has some non-trivial physical implications, for instance ruling out the possibility of (classical) magnetic monopoles.) If one generalises from complex line bundles to higher-dimensional vector bundles (with a larger structure group), one can then write down the (classical) Yang-Mills equation $$\nabla^\alpha F_{\alpha \beta} = 0$$ which is the classical model for three of the four fundamental forces in physics: the electromagnetic, weak, and strong nuclear forces (with structure groups U(1), SU(2), and SU(3) respectively). (The classical model for the fourth force, gravitation, is given by a somewhat different geometric equation, namely the Einstein equations $G_{\alpha \beta} = 8 \pi T_{\alpha \beta}$, though this equation is also "gauge-invariant" in some sense.) The gauge invariance (or gauge freedom) inherent in these equations complicates their analysis. For instance, due to the gauge freedom (4), Maxwell's equations, when viewed in terms of the electromagnetic potential $A_\alpha$, are ill-posed: specifying the initial value of this potential at time zero does not uniquely specify the future value of this potential (even if one also specifies any number of additional time derivatives of this potential at time zero), since one can use (4) with a gauge function U that is trivial at time zero but non-trivial at some future time to demonstrate the non-uniqueness. Thus, in order to use standard PDE methods to solve these equations, it is necessary to first fix the gauge to a sufficient extent that it eliminates this sort of ambiguity. If one were in a one-dimensional situation (as opposed to the four-dimensional situation of spacetime), with a trivial topology (i.e. the domain is a line rather than a circle), then it is possible to gauge transform the connection to be completely trivial, for reasons generalising both the fundamental theorem of calculus and the fundamental theorem of ODEs. (Indeed, to trivialise a connection $\Gamma$ on a line ${\Bbb R}$, one can pick an arbitrary origin $t_0 \in {\Bbb R}$ and gauge transform each point $t \in {\Bbb R}$ by $\Gamma([t_0,t])$.) However, in higher dimensions, one cannot hope to completely trivialise a connection by gauge transforms (mainly because of the possibility of a non-zero curvature form); in general, one cannot hope to do much better than setting a single component of the connection to equal zero. For instance, for Maxwell's equations (or the Yang-Mills equations), one can trivialise the connection $A_\alpha$ in the time direction, leading to the temporal gauge condition $$A_0 = 0.$$ This gauge is indeed useful for providing an easy proof of local existence for these equations, at least for smooth initial data. But there are many other useful gauges that one can fix; for instance one has the Lorenz gauge $$\partial^\alpha A_\alpha = 0$$ which has the nice property of being Lorentz-invariant, and transforms the Maxwell or Yang-Mills equations into linear or nonlinear wave equations respectively. Another important gauge is the Coulomb gauge $$\partial_i A_i = 0$$ where i only ranges over spatial indices 1,2,3 rather than over spacetime indices 0,1,2,3.
This gauge has an elliptic variational formulation (Coulomb gauges are critical points of the functional $\int_{{\Bbb R}^3} \sum_{i=1}^3 |A_i|^2$), and Coulomb gauges are thus expected to be "smaller" and "smoother" than many other gauges; this intuition can be borne out by standard elliptic theory (or Hodge theory, in the case of Maxwell's equations). In some cases, the correct selection of a gauge is crucial in order to establish basic properties of the underlying equation, such as local existence. For instance, the simplest proof of local existence of the Einstein equations uses a harmonic gauge, which is analogous to the Lorenz gauge mentioned earlier; the simplest proof of local existence of Ricci flow uses a gauge of DeTurck that is also related to harmonic maps (see e.g. my lecture notes); and in my own work on wave maps, a certain "caloric gauge" based on harmonic map heat flow is crucial (see e.g. this post of mine). But in many situations, it is not yet fully understood whether the use of the correct choice of gauge is a mere technical convenience, or is more innate to the equation. It is definitely conceivable, for instance, that a given gauge field equation is well-posed with one choice of gauge but ill-posed with another. It would also be desirable to have a more gauge-invariant theory of PDEs that did not rely so heavily on gauge fixing at all, but this seems to be rather difficult; many of our most powerful tools in PDE (for instance, the Fourier transform) are highly non-gauge-invariant, which makes it very inconvenient to try to analyse these equations in a purely gauge-invariant setting.
Crypto 2016: Breaking the Circuit Size Barrier for Secure Computation Under DDH The CRYPTO 2016 Best Paper Award went to Boyle et al [1]. The paper provides several new protocols based on a DDH assumption with applications to 2PC (two-party computation), private information retrieval as well as function secret sharing. Even more interestingly, the authors present a protocol where 2PC for branching programs is realized in a way that communication complexity depends only on the input size and the computation is linear in circuit size. The central idea revolves around building efficient evaluation of RMS (restricted multiplication straight-line) programs. The special feature of RMS programs is that they allow multiplications only between memory and input values; additions between memory values come for free. Although this class seems quite restrictive, it covers the class of branching programs (logarithmic-depth boolean circuits with polynomial size and bounded input). In the 2PC evaluation of RMS programs, suppose there is a linearly shared memory value $[y]$ between the parties $P_1$ and $P_2$. When $P_1$ wants to share an input value $x$ with $P_2$, it sends an ElGamal encryption of $x$, $g^{xc}$, where $c$ is a symmetric ElGamal key. Clearly, the encryption is homomorphic with respect to multiplication, but how can we perform any operations between a linear SS (secret-shared) value and an ElGamal encryption? This is solved by introducing a distributed DLog procedure which converts the ElGamal ciphertexts into linear SS values. The method uses a truncated PRF which counts the number of steps until the PRF evaluated on the ElGamal encryption equals $0$. Unfortunately this algorithm has some probability of outputting an incorrect result, but this can be fixed by evaluating multiple instances of the same protocol in parallel and then using an MPC protocol to select the majority result. Of course, there are some caveats at the beginning of the scheme, such as converting the key generation procedure to a public-key one and removing key circularity assumptions. These are gradually presented by the authors so as to ease the reader's understanding of the ideas. What I find neat is that at the end of the paper we can see easily how to reduce the communication for general 'dense' arithmetic circuits by splitting them into multiple reduced-depth chunks and then applying the RMS programs for each gate (because an addition or multiplication gate can be represented as a branching program). Of course we can spot some open problems left as future work, such as: extending the protocols to larger classes than branching programs; going beyond $2$ parties (can we find something with constant communication for multiple parties without using FHE?); and handling malicious parties in some other way than a generic compiler as in [2]. [1]: Boyle, Elette, Niv Gilboa, and Yuval Ishai. "Breaking the Circuit Size Barrier for Secure Computation Under DDH." [2]: Ishai, Yuval, et al. "Cryptography with constant computational overhead." Proceedings of the fortieth annual ACM symposium on Theory of computing. ACM, 2008. CHES 2016: On the Multiplicative Complexity of Boolean Functions and Bitsliced Higher-Order Masking During the morning session on the final day of CHES 2016, Dahmun Goudarzi presented his paper, co-authored with Matthieu Rivain, on bit-sliced higher-order masking.
Bit-sliced higher-order masking of S-boxes is an alternative to higher-order masking schemes where an S-box is represented by a polynomial over a binary finite field. The basic idea is to bit-slice Boolean circuits of all the S-boxes used in a cipher round. Securing a Boolean AND operation, needed in the case of the bit-sliced approach, is significantly faster than securing a multiplication over a binary finite field, needed in the case of polynomial-based masking schemes. But the number of such AND operations required is significantly higher in the former case than the number of field multiplications required in the latter case. However, the use of bit-slicing with relatively large registers (for instance, 32-bit registers) previously led the same authors to demonstrate significant improvements over polynomial-based masking schemes for specific block ciphers such as AES and PRESENT [GR16]. However, no generic method to apply bit-sliced higher-order masking to arbitrary S-boxes was previously known, and proposing such a method is one of the main contributions of the current work. The running time and the randomness requirement of the bit-sliced masking technique mainly depend on the multiplicative complexity, i.e., the number of AND gates in the masked circuit. Indeed, a more precise measure is the parallel multiplicative complexity. While it is already known from previous works how to obtain optimal circuits (w.r.t. multiplicative complexity) for small S-boxes by using SAT solvers, solving the same problem for 6-bit or larger S-boxes had remained an open problem. In the current work, the authors propose a new heuristic method to obtain Boolean circuits of low multiplicative complexity for arbitrary S-boxes. The proposed method follows the same approach as a previous work [CRV14] that computes efficient polynomial representations of S-boxes over binary finite fields. The authors provide a heuristic analysis of the multiplicative complexity of their proposed method that is quite close to the experimental results for S-box sizes of practical relevance. Finally, an implementation of the bit-sliced masking technique evaluating sixteen 4-bit S-boxes in parallel and another implementation evaluating sixteen 8-bit S-boxes in parallel on a 32-bit ARM architecture is performed. The timing results seem to indicate that the bit-sliced masking method performs significantly better than the polynomial-based masking methods when the number of shares is greater than a certain bound. [CRV14] Jean-Sébastien Coron, Arnab Roy, Srinivas Vivek: Fast Evaluation of Polynomials over Binary Finite Fields and Application to Side-Channel Countermeasures. CHES 2014 & JCEN 2015. [GR16] Dahmun Goudarzi and Matthieu Rivain. How Fast Can Higher-Order Masking Be in Software? Cryptology ePrint Archive, 2016. Posted by Srinivas Vivek at 2:30 PM CRYPTO 2016 – Backdoors, big keys and reverse firewalls on compromised systems The morning of the second day at CRYPTO 2016 started with a track on "Compromised Systems", consisting of three talks covering different scenarios and attacks usually disregarded in the vast majority of the cryptographic literature. They also shared a concern about mass surveillance. Suppose Alice wishes to send a message to Bob privately over an untrusted channel.
Cryptographers have worked hard on this scenario, developing a whole range of tools with different notions of security and setup assumptions, among which one of the most common assumptions is that Alice has access to a trusted computer with a proper implementation of the cryptographic protocol she wants to run. The harsh truth is that this is a naïve assumption. The Snowden revelations show us that powerful adversaries can and will corrupt users' machines via extraordinary means, such as subverting cryptographic standards, intercepting and tampering with hardware on its way to users, or using Tailored Access Operation units. Nevertheless, the relevance of these talks was not just a matter of a "trending topic", or of distrust of the authoritarian and unaccountable practices of intelligence agencies. More frequently than we would like, presumably accidental vulnerabilities (such as Poodle, Heartbleed, etc.) are found in popular cryptographic software, leaving the end user unprotected even when using honest implementations. In the meantime, as Paul Kocher recalled in his invited talk the day after, for most of our community it passes without notice that, when we design our primitives and protocols, we blindly rely on a mathematical model of reality that sometimes has little to do with it. In the same way as people from the CHES community have become more aware (mainly the hard way) that relying on the wrong assumptions leads to a false confidence in the security of the deployed systems and devices, I think those of us not that close to hardware should also try to step back and look at how realistic our assumptions are. This includes, as these talks addressed in different ways, starting to assume that some standards might be compromised at some point (and most systems will be), and understanding what can still be done in those cases. What would a cryptography that worries not only about prevention, but also about the whole security cycle, look like? How can the cryptography and information security communities come closer? Message Transmission with Reverse Firewalls: Secure Communication on Corrupted Machines The reverse firewalls framework was recently introduced by Mironov and Stephens-Davidowitz, in a paper that has already been discussed in our group's seminars and on this same blog. A secure reverse firewall is a third party that "sits between Alice and the outside world" and modifies her sent and received messages so that even if her machine has been corrupted, Alice's security is still guaranteed. Their elegant construction does not require the users to place any additional trust in the firewall, and relies on the underlying cryptographic schemes being rerandomizable. With this threat model and rerandomization capabilities, they describe impossibility results as well as concrete constructions. For example, in the context of semantically secure public-key encryption, in order to provide reverse firewalls for Bob, the scheme must allow a third party to rerandomize a public key and map ciphertexts under the rerandomized public key to ciphertexts under the original public key. In the same context, Alice's reverse firewall must be able to rerandomize the ciphertext she sends to Bob, in such a way that Dec(Rerand(Enc(m)))=m. Big-Key Symmetric Encryption: Resisting Key Exfiltration The threat addressed in Bellare's talk is that of malware that aims to exfiltrate a user's key, likely using her system's network connection.
In their work, they design schemes that aim to protect against this kind of Advanced Persistent Threat by making secret keys so big that their undetected exfiltration by the adversary is difficult, while keeping the user's overhead almost exclusively in terms of storage rather than speed. Their main result is a subkey prediction lemma, which gives a nice bound on an adversary's ability to guess a modest-length subkey, derived by randomly selecting bits of a big-key from which partial information has already been leaked. This approach, known as the Bounded Retrieval Model, has been, in the words of the authors, largely a theoretical area of research; here, in contrast, they give a fully concrete security analysis with good numerical bounds, constants included. Other highlighted aspects of their paper were the concrete improvements over [ADW09] and the key encapsulation technique carefully based on different security assumptions (random oracle, standard model). Backdoors in Pseudorandom Number Generators: Possibility and Impossibility Results The last talk of the session focused on the concrete problem of backdoored Pseudorandom Number Generators (PRGs) and PRNGs with input, which are fundamental building blocks in cryptographic protocols that have already been successfully compromised, as we learnt when the DUAL_EC_DRBG scandal came to light. In their paper, the authors revisit a previous abstraction of backdoored PRGs [DGA+15] which modeled the adversary (Big Brother) with weaker powers than it could actually have. By giving concrete "backdoored PRG" constructions, they show how that model fails. Moreover, they also study robust PRNGs with input, for which they show that Big Brother is still able to predict the values of the PRNG state backwards, as well as giving bounds on the number of these previous phases that it can compromise, depending on the state-size of the generator. [ADW09] J. Alwen, Y. Dodis, and D. Wichs. Leakage-resilient public-key cryptography in the bounded-retrieval model. In S. Halevi, editor, CRYPTO 2009, volume 5677 of LNCS, pages 36-54. Springer, Heidelberg, Aug. 2009. [DGA+15] Yevgeniy Dodis, Chaya Ganesh, Alexander Golovnev, Ari Juels, and Thomas Ristenpart. A formal treatment of backdoored pseudorandom generators. In Elisabeth Oswald and Marc Fischlin, editors, EUROCRYPT 2015, Part I, volume 9056 of LNCS, pages 101-126, Sofia, Bulgaria, April 26-30, 2015. Springer, Heidelberg, Germany. Posted by Eduardo Soria-Vázquez at 8:55 PM CHES 2016: Flush, Gauss, and Reload – A Cache Attack on the BLISS Lattice-Based Signature Scheme Leon Groot Bruinderink presented at CHES a cache attack against the signature scheme BLISS, a joint work with Andreas Hulsing, Tanja Lange and Yuval Yarom. The speaker first gave a brief introduction to BLISS (Bimodal Lattice Signature Scheme), a signature scheme whose security is based on lattice problems over NTRU lattices. Since such problems are believed to be hard even in the presence of quantum computers, BLISS is a candidate for being a cryptographic primitive for the post-quantum world. In addition, its original authors proposed implementations, making BLISS a notable example of a post-quantum algorithm deployable in real use-cases.
Informally speaking, a message $\mu$ is encoded in a challenge polynomial $\mathbf{c}$, which is then multiplied by the secret key $\mathbf{s}$ according to the following formula: $$ \mathbf{z} = \mathbf{y} + (-1)^b ( \mathbf{s} \cdot \mathbf{c} ) $$ where the bit $b$ and the noise polynomial $\mathbf{y}$ are unknown to the attacker. It is easy to see that if the attacker gains information about the noise polynomial, some linear algebra operations would lead her to the secret key. The coordinates of $\mathbf{y}$ are independently sampled from a discrete Gaussian distribution, which can be implemented in several ways. The ones targeted in the paper are CDT and rejection sampling. In particular, the first method was also covered during the talk, so I focus only on that one in this blog post. The idea behind CDT sampling is to precompute a table according to the cumulative distribution function of the discrete Gaussian, draw a random element and use it as an index into the table. The element in the cell indexed by the random number is returned. In the end, elements returned by such a procedure will be distributed statistically close to a discrete Gaussian. Although fast, this approach has the drawback of needing to store a large table, and such table look-ups are known to be vulnerable to cache attacks. The peculiarity of the attack carried out by Bruinderink et al. is that, since the side channel does not reveal the exact entries of the sampling table that are accessed (only the cache lines containing them), the equations learned are correct only up to a small error, say $\pm 1$. The authors managed to translate this issue into a shortest vector problem over lattices. Then, they run the LLL algorithm to solve the problem and retrieve correct equations. Crypto & CHES 2016: 20 years of leakage, leakage, leakage Paul Kocher was invited to give a presentation at Crypto and CHES, which was certainly deserved on the 20th anniversary of his paper, which has more than 3500 citations on Google Scholar. The content of his talk ranged from tales of his work, through philosophical considerations on security, to an outlook on the future. It was interesting to see how Kocher, a biologist by training, got into cryptography and managed to break implementations via side channels with rather cheap equipment, including parts from a toy electronics kit. He claimed that they could break every smart card at the time, which was of course disputed by the vendors.
In terms of constructive ideas, he suggested moving security into chips, because there it cannot be ruined by the lower layers. There has already been a significant move in that direction with Intel's SGX, but there are of course other approaches.
Posted by Marcel Keller at 1:01 AM
Crypto 2016: Network Oblivious Transfer
On the first day of CRYPTO 2016, Adam Sealfon presented his work with Ranjit Kumaresan and Srinivasan Raghurama on Network Oblivious Transfer. Oblivious transfer (OT) is a two-party protocol in which party $A$ inputs two strings and party $B$ a bit $b$: $B$ receives exactly one of the strings according to his bit and finds out nothing about the other string, while $A$ does not find out which of the two strings $B$ chose. If two parties are able to engage in an OT protocol, we say that there is an OT channel between them. OT channels are a good thing to study because they are: Useful: OT has been called MPC (multi-party computation) complete, and the Atom of MPC, since many MPC protocols can be realised using OT; Achievable: e.g. trapdoor permutations can be used to realise them. Suppose we have a network in which all parties have secure connections to all other parties, and some of the parties also have OT channels between them. What can we say about the ability of the network to allow computation of OT-based MPC? In 2007, Harnik et al. asked How Many Oblivious Transfers are Needed for Secure Multiparty Computation? and gave a lower bound on the number of OT channels a network must have. The paper presented gave an upper bound which matches the lower bound of the aforementioned paper, and hence allows a complete characterisation of the networks in which OT channels can be established to enable secure MPC. For some intuition as to what this all means, consider the following three graphs. Nodes represent parties in the network, and edges represent OT channels. All parties are assumed to have secure connections to all other parties, and we want to have an OT channel between $A$ and $B$. In Figure 1, $A$ and $B$ have an OT channel between them, so we're done. In Figure 2, it turns out that the connections in place already suffice to provide $A$ and $B$ with an OT channel. However, in Figure 3, we cannot form an OT channel between $A$ and $B$. The reason some graphs admit OT channels between certain parties and some do not concerns a property known as splittability. A graph $G$ is called $k$-unsplittable (for $k<n/2$) if for any two disjoint sets of $k$ vertices, there is an edge from a vertex in one set to a vertex in the other; $G$ is called $k$-splittable if this does not hold. The main theorem of the paper states that, assuming a semi-honest adaptive adversary controlling no more than $t$ parties, two parties, $A$ and $B$, in the network can establish an OT channel if and only if $t<n/2$, or $t \ge n/2$ and the graph is $(n-t)$-splittable. Adding the edge $(A,B)$ to Figures 2 and 3 shows this at least looks like it says the right thing, since doing so in Figure 3 shows every 2-split of the graph has an edge between the two partitions. In proving this theorem, the paper provides a good description of the limits of OT-based MPC.
Posted by Tim Wood at 10:28 PM
Crypto 2016: A subfield lattice attack on overstretched NTRU assumptions
This year's Crypto kicked off this morning in sunny Santa Barbara. The early afternoon session in track A covered asymmetric cryptography and cryptanalysis.
Shi Bai presented A subfield lattice attack on overstretched NTRU assumptions: Cryptanalysis of some FHE and Graded Encoding Schemes, which is joint work with Martin Albrecht and Leo Ducas. The talk consisted of three main parts: an introduction, a presentation of the subfield attack and a discussion of its implications. The set-up of the problem is the usual one. Let $\Phi_m$ be a power-of-two cyclotomic polynomial and let $R$ be the ring $R = \mathbb{Z}[x]/\Phi_m$. We let $\lambda$ be the security parameter, $n=\phi(m)=poly(\lambda)$, $q=q(\lambda)$ and $\sigma = poly(\lambda)$. The NTRU problem is the following. NTRU Problem: We are given a ring $R$ of rank $n$, a modulus $q$, a distribution $D$ and a target norm $\tau$. Given an element $h = [gf^{-1}]_q$ (subject to $f$'s invertibility modulo $q$) for $f, g \leftarrow D$, the NTRU$(R,q,D,\tau)$ problem is to find a vector $(x,y)\neq (0,0) \in R^2 \mod q$ of Euclidean norm smaller than $\tau\sqrt{2n}$ in the lattice $\Lambda_h^q = \{ (x,y)\in R^2 : hx-y = 0 \mod q \}$. We call the above the NTRU lattice. What the authors mean by an overstretched NTRU assumption is the use of a super-polynomial modulus $q$, as utilised in the context of NTRUEncrypt, signature schemes, Fully Homomorphic Encryption schemes and some candidate multilinear maps. The starting point of the attack is that whenever $|f| \approx |g| \approx \sqrt{n}\sigma \ll \sqrt{nq}$, the NTRU lattice has an unusually short vector. We also note that, for some target norm, recovering a short enough vector is sufficient to carry out the attack. In particular, finding a vector of length $o(q)$ would break applications such as encryption. We note however that in practice, parameters can indeed be set so as to avoid this attack. Let $K$ be the cyclotomic field $\mathbb{Q}(x)/\Phi_m$ and $L = \mathbb{Q}(x)/\Phi_{m'}$ a subfield, where $m'|m$, and let $\zeta_m$ and $\zeta_{m'}$ be the $m^{th}$ and $m'^{th}$ roots of unity respectively. The authors here work with power-of-two cyclotomics, but we note that such a subfield can always be found; indeed we can take the maximal real subfield. The strategy is as follows. We use the fact that $L$ is a subfield of $K$ to apply the norm map $N_{K/L}: K \rightarrow L$ and map NTRU instances down to the subfield, assuming we are working with an overstretched (large) modulus $q$. We then apply lattice reduction (e.g. BKZ) in the subfield, solving a potentially easier problem. For an NTRU instance $(h,f,g)$ in the full field, we norm it down to an instance $(h',f',g')$ in the subfield. Now the vector $(f',g')$ is in the subfield NTRU lattice $\Lambda_{h'}^q$ and, depending on the parameters, it may be unusually short. The attack then proceeds by running a lattice reduction algorithm in the subfield, which produces a vector $(x',y')$. If that vector is short enough, it is in fact an $\mathcal{O}_K$-multiple of $(f',g')$ and we have $(x',y')=v(f',g')$. This allows us to lift $(x',y')$ to the full NTRU lattice $\Lambda_{h}^q$ and thus potentially recover non-trivial information on $f$ and $g$. This produces a sub-exponential attack on bootstrappable YASHE. The work also implies an attack on the latest GGH construction without an encoding of zero. Depending on the multilinear degree, this can even go down to a polynomial attack. Compared to the prior state of the art, this is the best attack known. In terms of limitations, if the normed-down vector $(f',g')$ is not unusually short, then this attack fails.
Equally, NTRU-743, NTRU-401 and BLISS are essentially immune. The conclusion of this talk was that in an NTRU assumption set-up, the combination of a subfield, a large modulus and a small $\sigma$ should be considered insecure.
Crypto 2016: Provable Security for Symmetric Cryptography
On the morning that the CAESAR competition entered its third round, track A of CRYPTO 2016 began with a session on provable security for symmetric cryptography. It contained 5 talks, all of which were very well presented. In each case the results were given in context, along with a sketch of the key techniques behind their proofs, and very clear diagrams. First up was Viet Tung Hoang, presenting joint work with Stefano Tessaro on the multi-user security of key-alternating ciphers. Key-alternating ciphers can be seen as a generalisation of the Even-Mansour construction, and are a natural idealisation of the AES design. Often work is done in the single-user setting, leaving multi-user security to be reached via a hybrid argument. However, this leads to a security loss linear in the number of users. The speaker explained two ways in which their work improves upon the previous techniques for applying the H-coefficient technique to bound adversarial advantages using the statistical distance between possible sets of transcripts, allowing them to achieve tighter bounds than would have been possible previously. They termed the first of these the "Expectation Method", where they replace an upper bound with an expected-value bound to significantly improve the tightness of one of the internal bounds (specifically, when one is measuring the total probability of an adversary being able to distinguish the systems from a good transcript), while the second is a tightening of the hybrid (by pushing the hybridisation step back to the transcript stage rather than waiting until the final bound has been collected). These are both very neat observations, and it will be interesting to see how easily they can be applied to other related problems. Next, Yannick Seurin gave the first of his two talks, on the Counter-in-Tweak (CTRT) mode for bootstrapping AE from a TBC, based on joint work with Thomas Peyrin. In this work, the authors set out to construct an AE scheme that was: beyond-birthday-bound secure in the nonce-respecting case, and birthday-bound secure in the nonce-abusing case. They do so using a generic-composition style approach, demonstrating that a slight variant of SIV mode can be used to combine an encryption and an authentication mechanism that each meet these security requirements such that their composition inherits this security. For their result, an encryption routine is required that takes both a random IV and a nonce. To get this, Yannick explained how one can use a tweakable block cipher to improve upon the classic counter mode, by instead putting the counter into the tweak. Thus their scheme uses a counter (in the tweak) that is initialised with a random IV to encrypt the nonce, the security of which is proven using a neat little balls-and-bins game. After a short break, Bart Mennink introduced the XPX construction.
His construction generalises most single-round tweakable Even-Mansour constructions by considering them all as instances of the TBC \[ \begin{array}{cccccccc} & t_{11}K \oplus t_{12}P(K) & & t_{21}K \oplus t_{22}P(K) \\ & \downarrow & & \downarrow \\ m & \to \oplus \to & P & \to \oplus \to & c \\ \end{array} \] under certain sets of tweaks $(t_{11},t_{12},t_{21},t_{22}) \in \mathcal{T}$ (apologies for the terrible diagram!). After describing conditions for such tweak sets to be weak (i.e., totally insecure), he explained that all other sets are in fact reasonably secure. Developing this further, the work then investigates certain forms of related-key security, and the conditions one must impose on the tweak set to achieve these. Bart then explained how these results apply to some preexisting schemes, recovering the security of the CAESAR candidates Minalpher and Prøst-COPA (for which the work also demonstrates a certain degree of related-key security). Finally, he showed how these results can be applied to the Chaskey MAC algorithm, and suggested a possible modification that would (at the cost of slightly more expensive key rotation) provide some related-key security, a method that might also be applicable to sponge-based schemes. The penultimate talk was on "Indifferentiability of 8-Round Feistel Networks" by Yuanxi Dai, describing his work with John Steinberger. It is the next in a long line of papers seeking to describe the extent to which one can substitute a Feistel network for a random permutation, even when the adversary has access to the internal functions. The presentation was well delivered and described the overall intuition behind the proof and the design of their simulator, but the details of such results are generally very complicated indeed. Finally, Yannick Seurin returned to describe "EWCDM", a block-cipher based MAC construction that one could use to more efficiently instantiate the CTRT mode described previously, based on joint research with Benoît Cogliati, which looks something like: \[ \begin{array}{cccccccc} & & & & N & \to & \downarrow \\ & & & & \downarrow & & \downarrow \\ & & & & E_{k_1} & & \downarrow \\ & & & & \downarrow & & \downarrow \\ M&\to&\text{Hash} & \to & \oplus & \leftarrow & \leftarrow \\ & & & & \downarrow & & \\ & & & & E_{k_2} & & \\ & & & & \downarrow & & \\ & & & & T & & \\ \end{array} \] It is secure up to ~$2^{n/2}$ queries under nonce-reuse, and achieves security for $2^{2n/3}$ queries in the nonce-respecting setting. Moreover, for the nonce-respecting case the actual security level might be even better, since the best known attack currently sits at around $2^{n}$ queries, leaving scope for further research.
USENIX 2016: How to Scrutinize "Password1"
On the first day of USENIX, there was one talk that particularly caught my attention. Daniel Lowe Wheeler from Dropbox talked about password strength estimation, and he started with the USENIX online account registration, which rates "password" as a fair password, "Password" as good, and "Password1" as strong, while "zjwca" is rated as weak. He argued that, while password guessing has improved over the last 35 years, password policy has not evolved much since 1979. Moreover, password policies are inconsistent and not always helpful. Two previous studies have found 142 distinct policies on 150 sites and 50 distinct policies on 50 sites, respectively.
To put an end to this, the authors propose a client-side piece of JavaScript that takes 3 ms to run and gives accurate estimates for online attacks by the best available algorithms. The core estimator takes a minimum rank of the input over lists such as top passwords ("password", "123456", etc.), top surnames (Li, Khan, etc.), and specific information (user name, etc.). It also considers word transformations such as 1337, caps, and reversing, as well as keyboard patterns and sequence patterns. All this information is combined into an estimate of how many guesses a sophisticated algorithm would need to find the password. To evaluate the estimates, the authors used a large data set consisting of leaked passwords as well as other sources. On this data set, other password strength estimators perform quite badly, overestimating the number of attempts for a lot of passwords that would be found in fewer than 10^5 tries. A particular offender is NIST entropy, which is completely oblivious to real-world choices such as "password". In comparison, overestimation happens for very few passwords with zxcvbn. The software is available at https://github.com/dropbox/zxcvbn, and it is already used by a number of companies, most notably WordPress.
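To make the "minimum rank over lists" idea concrete, here is a toy sketch of such an estimator. The word lists, the transformation rules and the fallback scoring below are invented for illustration and are far cruder than what zxcvbn actually implements, so treat this as a sketch of the principle rather than a description of the tool.

```python
# Toy min-rank guess estimator in the spirit described above (illustrative only;
# the lists and rules are made up and much simpler than zxcvbn's real matching).

TOP_PASSWORDS = ["password", "123456", "qwerty", "letmein"]
TOP_NAMES = ["li", "khan", "smith", "jones"]
LEET = str.maketrans("431!05$", "aeiioss")  # undo a few common substitutions

def candidate_forms(pw: str):
    """Yield simple normalisations: lowercase, de-1337, reversed, digits stripped."""
    lowered = pw.lower()
    yield lowered
    yield lowered.translate(LEET)
    yield lowered[::-1]
    yield lowered.rstrip("0123456789")

def guess_estimate(pw: str) -> int:
    """Rank in the cheapest matching list, or a crude brute-force fallback."""
    best = None
    for form in candidate_forms(pw):
        for wordlist in (TOP_PASSWORDS, TOP_NAMES):
            if form in wordlist:
                rank = wordlist.index(form) + 1
                best = rank if best is None else min(best, rank)
    # A real estimator would also charge extra for the transformation used.
    return best if best is not None else 26 ** len(pw)

for pw in ["Password1", "P4$$word", "zjwca"]:
    print(pw, guess_estimate(pw))
```

Even this crude version reproduces the talk's point: "Password1" and "P4$$word" collapse to rank-1 dictionary words, while the short but unpatterned "zjwca" is only reachable by brute force.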
A baseball with your name on it Written by Colin+ in mechanics 1. At the 1939 World's Fair, San Francisco Seals catcher Joe Sprinz tried to catch a baseball dropped from the Goodyear blimp 1,200 feet overhead. Sprinz knew baseball but he hadn't studied physics — he lost five teeth and spent three months in the hospital with a fractured jaw. - from Futility Closet
Futility Closet is one of my new obsessions: hundreds upon hundreds of logic puzzles, oddities, surprises and more (the podcast is one of the highlights of my week). But they've done something I frown on, here, and begged a very important question: how fast was the ball going when Sprinz caught it? In basic mechanics, we ignore air resistance, even though it's going to be significant in this case. We've got a ball dropped from 1200 feet (around 370 metres, rounding sensibly) under gravity. You've got your suvat equations: we know $u=0$, $s=-370$, $a = -9.8$ and want to know $v$, so we'll simply say $v^2 = u^2 + 2as$, or $v = -\sqrt{2\times 9.8 \times 370} \simeq - \sqrt{7250}$ or somewhere about 85 metres per second straight down. What's that in sensible speeds? In kilometres per hour… it's a little more than 300. In terms of kinetic energy, a baseball has a mass of about 0.15kg, so we'd need to do something around half a kilojoule of work to stop it - about the same as an 80kg runner going at 3.6 m/s (or 13 km/h, enough for a very respectable marathon time). If I ran into a wall, even at my slightly more ponderous pace, I'd definitely know about it - and losing a few teeth is certainly plausible! Now, I've not taken into account the effects of air resistance, which will certainly slow the ball down: that's because the equation gets complicated! The resistance force is $\frac 12 \rho v^2 C_D A$, where $\rho$ is the density of the air (which varies with height and temperature), $v$ is the velocity, $C_D$ is the drag coefficient (about 0.3 for a baseball) and $A$ is the area (about 0.02 m²). At a normal temperature at sea level, $\rho$ is about 1.3 kg/m³, so we'd need to work out something like: $F = (0.15)a = -(0.15)g + \frac 12 (1.3)(0.3)(0.02) v^2 \simeq -(0.15)g + 0.004 v^2$ … and there's no obvious way (to me) to integrate that. (Oddly, it's the constant $g$ that's the problem; otherwise, it's a pretty standard FP2 integrating factor problem. If you know how to do it, let it be known in the comments!)
Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove.
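Since the post leaves the drag equation unintegrated, here is a rough numerical sketch. It simply steps the equation of motion forward in time, taking the post's own figures for mass, drag coefficient, frontal area and air density at face value (they are the post's assumptions, not independently checked values):

```python
# Numerical sketch of the drop with quadratic drag, using the figures quoted in
# the post (m = 0.15 kg, C_D = 0.3, A = 0.02 m^2, rho = 1.3 kg/m^3) as-is.
import math

m, g = 0.15, 9.8              # mass (kg), gravity (m/s^2)
k = 0.5 * 1.3 * 0.3 * 0.02    # (1/2) * rho * C_D * A, roughly 0.004 kg/m
drop = 370.0                  # drop height (m)

v, s, dt = 0.0, 0.0, 1e-3     # downward speed, distance fallen, time step
while s < drop:
    a = g - (k / m) * v * v   # net downward acceleration with drag
    v += a * dt
    s += v * dt

v_terminal = math.sqrt(m * g / k)
print(f"impact speed with drag: {v:.1f} m/s ({v*3.6:.0f} km/h)")
print(f"terminal speed: {v_terminal:.1f} m/s")
print(f"no-drag impact speed: {math.sqrt(2*g*drop):.1f} m/s")
```

Under those figures the ball is already close to its terminal speed of roughly $\sqrt{mg/k}\approx 19$ m/s (about 70 km/h) when it arrives, far below the 300 km/h no-drag estimate; the same separable ODE also has the closed form $v(t)=v_T\tanh(gt/v_T)$ with $v_T=\sqrt{mg/k}$, so the loop above is only a convenience.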
40 years of progress in female cancer death risk: a Bayesian spatio-temporal mapping analysis in Switzerland Christian Herrmann1,2,3, Silvia Ess1, Beat Thürlimann4,5, Nicole Probst-Hensch2,3 & Penelope Vounatsou2,3 In the past decades, mortality of female gender related cancers declined in Switzerland and other developed countries. Differences in the decrease and in spatial patterns within Switzerland have been reported according to urbanisation and language region, and remain controversial. We aimed to investigate geographical and temporal trends of breast, ovarian, cervical and uterine cancer mortality, assess whether differential trends exist and to provide updated results until 2011. Breast, ovarian, cervical and uterine cancer mortality and population data for Switzerland in the period 1969–2011 was retrieved from the Swiss Federal Statistical office (FSO). Cases were grouped into <55 year olds, 55–74 year olds and 75+ year olds. The geographical unit of analysis was the municipality. To explore age- specific spatio-temporal patterns we fitted Bayesian hierarchical spatio-temporal models on subgroup-specific death rates indirectly standardized by national references. We used linguistic region and degree of urbanisation as covariates. Female cancer mortality continuously decreased in terms of rates in all age groups and cancer sites except for ovarian cancer in 75+ year olds, especially since 1990 onwards. Contrary to other reports, we found no systematic difference between language regions. Urbanisation as a proxy for access to and quality of medical services, education and health consciousness seemed to have no influence on cancer mortality with the exception of uterine and ovarian cancer in specific age groups. We observed no obvious spatial pattern of mortality common for all cancer sites. Rate reduction in cervical cancer was even stronger than for other cancer sites. Female gender related cancer mortality is continuously decreasing in Switzerland since 1990. Geographical differences are small, present on a regional or canton-overspanning level, and different for each cancer site and age group. No general significant association with cantonal or language region borders could be observed. Female gender related cancers, in particular cancer of the breast, corpus uteri, ovary and cervix uteri account for more than 40 % of newly diagnosed cancers and for about 30 % of cancer related deaths in Swiss women [1]. In the past decades, female cancer mortality declined in Switzerland and the more developed countries [1] mainly due to advances in the understanding of tumour biology and in early detection, as well as the introduction of targeted therapies. However, differences in the decrease within Switzerland have been reported, such as for breast cancer in four selected cantons [2]. Switzerland is a small, affluent and culturally diverse confederation of 26 relatively autonomous states called cantons. Health care policies are developed at cantonal level resulting in a large geographical variation in health expenditures, control programs and care planning. I.e. population based mammography screening programs were and are implemented at very different time points over a period of more than 20 years in the various cantons. 
Most studies, including the above, investigated differences on the same regional level –cantons–, but it remained unknown whether these are consistent geographical disparities related to cantonal decisions or artefacts due to the choice of geographical and time units; driven by sub regions or complete region. The only more detailed maps of female cancer mortality rates are those of Schüler and Bopp [3] depicting geographical variation in mortality during 1970–1990 on the basis of so called MS-regions, 106 'unofficial' regions smaller than cantons defined by mobility considerations. Since they have not applied temporal and geographical smoothing, the results may be distorted especially in areas where the population is small. This makes it difficult to distinguish chance variability from real differences. To our knowledge, covariate-adjusted and smooth, nationwide maps of female cancer mortality depicting the changes over time and space are not available. Therefore, we studied geographical and temporal trends of breast, ovarian, cervical and uterine cancer mortality in Switzerland, adding 20 years of data to previous work, using state-of-the-art methodology for results with more detail and fewer artefacts, and without prejudice of geographical unit or shape of time trends. Hence, we used the most detailed available data (municipality level) and accounted for non-linear time trends. We hypothesized similar patterns for the different cancer sites and/or age group. Bayesian spatial models are the state-of-the-art modelling approach for assessing spatio-temporal patterns and trends. They "smooth" or improve estimation of an unstable rate by "borrowing" strength from its neighbours [4]. They can also assess the significance of risk factors taking into account the geographical correlation, and are able to show spatial patterns after adjustment for geographical differences in certain risk factors. Female cancer mortality data was obtained for the period 1969–2011 from death certificates coded centrally by the Swiss Federal Statistical office (FSO). The data include age at death, year of birth and death for each individual, nationality, municipality of residence, the cause of death and co-morbidities. Cause of death and co-morbidities are coded using the 8th revision of the International Classification of Diseases (ICD) until 1994/1995 and afterwards using the 10th revision. The transition to the 10th revision of the ICD-10 was accompanied by changes in death certificate coding practices (priority rules). We used age- and cancer site-specific correction factors as proposed by Lutz et al. [5] for the death counts. We included all cases coded with main causes of death being cancer of the female breast (ICD-10 C50.0-C50.9), cervix (ICD-10 C53.0- C53.9), corpus uterine (ICD-10 C54.0-C55.9) and ovary (ICD-10 C56.9). According to federal regulations, mortality data excluding any identifiable information can be used in epidemiological studies without additional ethics committee approval. Detailed population data on municipality level is only available from census that takes place in Switzerland every 10 years with the last one taking place in 2010. We aggregated the mortality data in five 4-year periods around the census years, i.e. 1969–1972, 1979–1982, 1989–1992, 1999–2002 and 2008–2011, in which population was assumed to be constant. There are around 2,500 municipalities in the country. 
Over the study period, the number of municipalities has changed due to fusion, separation, deletion or new occurrences. We aligned all data on the 2011 municipality structure using spatial data for 2011 and municipality transition protocols for each year obtained from the FSO. From the same source, we retrieved data on language region (German, French and Italian/Romansh) and urbanisation (Fig. 1). We grouped municipalities classified as central agglomeration city, greater agglomeration and isolated city into "urban" leaving the classification "rural" unchanged. Urbanization classification and language regions in Switzerland Age was grouped into three groups (<55, 55–74, 75+ year olds). The geographical unit of analysis was the municipality. In a preliminary analysis, we investigated SMR ratio values in a non-spatial model. Spatio-temporal Poisson and negative binomial regression models were fitted separately for each age group on the number of deaths aggregated by municipality and year with the mean being equal to the product of the expected death count and age standardised mortality rate. Indirect standardisation used 5 years age intervals. Expected mortality counts for each municipality, year and age group were obtained from the study population using nationwide age-specific mortality rates for all periods. Space and temporal random effects as well as possible non-linear temporal trends were modelled on the log of the mean standardised mortality rate following model formulations of Jürgens et al. [6] (cf. Appendix 1). In particular, municipality-specific random effects were modelled via conditional autoregressive (CAR) models to filter out the noise and highlight the observed patterns. The models were formulated as hierarchical Bayesian models with parameter estimation via Markov chain Monte Carlo simulation (MCMC). We used the Deviance Information Criterion (DIC) to select the regression models from Poisson/Negative binomial regression with or without an additional set of unstructured random effects for each municipality. Data on language and urbanisation were included as covariates in the model. These analyses will indicate whether there are statistically significant differences in the cancer mortality for each one of the above covariates, assessed by 95 % Bayesian Credible Intervals (CI). From the estimates of the model, we produced smoothed maps displaying geographical patterns of female gender cancer mortality for each age group, cancer site and year since 1969 till recent almost to date. Table 1 shows the number of female cancer deaths and crude rates per 100,000 person years in Switzerland by age group within the 4-year periods under investigation. Among the cancer sites studied, breast cancer was the most common cause of death, followed by ovarian, uterine and cervical cancer. Table 1 Female cancer mortality in Switzerland by age group and time period corrected for coding changes Mortality rates continuously decreased for cervical and uterine cancer, and for ovarian cancer in the <55 year olds. For breast cancer and the other age groups of ovarian cancer, mortality rates decreased only as from 1979–1982 and from 1989–1992 for 75+ year olds respectively. Table 2 shows the results of the spatio-temporal regression analysis by cancer site and age group. With the spatial analysis, we could confirm the time trends observed in the crude rates in Table 1, while only in few cases the covariates had a significant effect on the standardized mortality ratio (SMR). 
Language region had in none of the models a significant effect on mortality, urbanisation only in 3 models: An urban environment was associated with a significantly lower mortality of 55–74 year olds in uterine cancer and <55 year olds in ovarian cancer, and associated with higher ovarian cancer mortality in 75+ year olds. Table 2 Spatio-temporal model estimates of age specific female cancer mortality in Switzerland from 1969–1972 to 2007-2010 In the elderly (75+ year olds), a significant increase in breast and ovarian cancer mortality until 1989–1992 was observed and decreasing only since then (Tables 1 and 2). The spatial patterns of mortality based on smoothed small area estimates (Figs. 2, 3, 4 and 5, Additional file 1) are different for the female cancers and age groups and not homogenous among the country. No general, significant coincidence with cantonal or language region borders could be observed, with the latter additionally being confirmed by spatial regression for all cancer sites and age groups (Table 2). The spatial patterns form either sub-cantonal areas or canton-overspanning areas. Trends and geographical distribution of age standardized breast cancer mortality (SMR) by age group and among selected time periods. Values are calculated and smoothed in relation to the cancer site and age specific all period combined mortality. Darker colours represent a higher mortality for the specific age structure and population in that area and time period, a detailed color key is provided in additional file 2. Trends and geographical distribution of age standardized cervical cancer mortality (SMR) by age group and among selected time periods. Values are calculated and smoothed in relation to the cancer site and age specific all period combined mortality. Darker colours represent a higher mortality for the specific age structure and population in that area and time period, a detailed color key is provided in additional file 2. Trends and geographical distribution of age standardized uterine cancer mortality (SMR) by age group and among selected time periods. Values are calculated and smoothed in relation to the cancer site and age specific all period combined mortality. Darker colours represent a higher mortality for the specific age structure and population in that area and time period, a detailed color key is provided in additional file 2. Trends and geographical distribution of age standardized ovarian cancer mortality (SMR) by age group and among selected time periods. Values are calculated and smoothed in relation to the cancer site and age specific all period combined mortality. Darker colours represent a higher mortality for the specific age structure and population in that area and time period, a detailed color key is provided in additional file 2. For all cancer sites and age group combinations the model 1 with Poisson distributed data and only one, spatially structured, random effect was identified as the best model, with lowest DIC (see Table 3). SMR ratios in the non-spatial models were close to the results presented in Table 2, and significance was the same for all but 4 out of 84 coefficients, with their CIs being close to zero in both models. Table 3 Model selection based on Deviance Information Criterion (DIC) Using modern Bayesian small area modelling and mapping techniques we have been able to show that all investigated groups of women in Switzerland have benefited from progress in cancer control regardless of place of residence in the past 40 years. 
We observed only small differences in the geographical variation of mortality. A factor, which may have contributed to breast and uterine cancer mortality reductions, is the change in the use of hormone replacement therapy (HRT) [7]. After an association of HRT use with breast cancer occurrence was reported [8], its use declined sharply. We were also not able to show similar spatial patterns in breast and ovarian cancer mortality although they share several life style related, environmental and genetic risk factors. It should be noted however, that hereditary cancer accounts only for about 5-10 % of the cases in breast cancer [9] and about 15 % in ovarian cancer [10]. They are shown to occur at younger age and more advanced stage; still, a visible effect on the mortality map may only be seen in areas with ethnic groups or very large families with a highly elevated risk for hereditary cancer. Such a risk has been described for Ashkenazi Jewish women. The BRCA Ashkenazi founder gene mutations are prevalent in approximately 2 % of these women [11] with communities of Ashkenazi mainly found in urban areas; largest communities are in the cities of Zürich, Geneva and Basel contributing to 1-2 % of the population [12, 13]. However, the breast and ovarian cancer risk in BRCA carriers is affected by genetic modifiers and non-genetic factors, for example, reproductive behaviour, hormonal exposure, lifestyle and risk reduction surgeries [14]. We could not observe an elevated mortality for the three cities in contrast to the surrounding area and it remains unclear to which extent the mortality rates are driven by these hereditary forms of cancer. Considerable differences in health and health related behaviour have been reported for the Swiss language regions including alcohol intake, smoking and a healthy diet [15, 16] but lacked significance as regression factors in our analysis. Only for three cancer site-age group combinations was the urbanisation level identified as a significant factor. Urbanisation is serving as a proxy for access to and quality of medical services, education and health consciousness [3]. By our regression with 20 years of new data, we could not formally confirm an urban–rural gradient for breast cancer as described by Schüler & Bopp [3] as significant. Overall, no general pattern across age groups or cancer sites was present. The reduction of mortality was stronger in the younger age groups, which is probably the result of better survival and therefore a shift in the age of death. This would also explain the temporary increase in breast and ovarian cancer death risk around the year 1990 in the 75+ year olds. In addition, in this age group multi-morbid conditions and fewer treatments are common [17]. Sant et al. [18] noted that poor survival for gynaecological cancers in the elderly could be due to advanced stage at diagnosis, or failure to give adequate treatment, perhaps because of comorbidity. In general, the interpretability of results in this age group is limited due to its small size, more multi-morbid conditions together with possible inconsistencies in death certification over time, because of only allowing one single cause of death. As cancer deaths are rare events and in order to increase the power, different geographical units have been used when analysing cancer mortality data in the past. 
Some authors have used selected cantons [2] and Schüler & Bopp [3] used for their cancer atlas somewhat smaller mobility regions based on the accessibility to goods and services but which do not take into account population size. As a result, this choice was too aggregated for some urban areas and not aggregated enough for some sparsely populated areas in order to reveal robust, underlying trends. In view that the choice of the geographical unit of analysis may greatly influence results [19], the combination of small geographical units with a state-of-the art smoothing technique enabled a more detailed analysis. With this analysis, we could additionally show the driving age groups or subareas of elevated or reduced mortality in certain regions, while reducing uncertainties due to small numbers and adding an investigation of non-linear time trends. In general, smoothing allows an estimation of the underlying risk, in a sort of a long-year average, rather than the actual situation. However, for single municipalities, without fully eliminating it, the use of Bayesian smoothing reduces the probability to detect narrow areas with specifically high or low risk. Municipalities at the country border may not benefit from smoothing to the same extent as municipalities in the interior of the country due to unknown data on the other side of the border. Therefore, in the interpretation of the results emphasis should be given to the broader spatial patterns rather than to single municipalities. Comparing with the previous work of Schüler & Bopp [3] our study not only extended their work by 20 more years and corrected for non-linear time effects, more importantly, we were able to correct the foreseen overestimation in mortality numbers until 1994, which could not be adequately addressed earlier. Priority rules in the coding of causes of death led to an overestimation in cancer deaths due to their prioritization over other comorbidities. The applied methodology of age standardisation takes advantage of the actual age structure rather than a standard population. There are important limitations to our study. Risk factors affect incidence but are not necessarily linked to mortality [20]. The progression stage of the tumours and their histological type could not be taken into account, as the ICD-classification does not include histological type for the sites studied. The regional case mix and its changes over time therefore may have distorted the results. Further distortions may arise from the uncertainty as to what level the reported main cause of death and comorbidities are comparable in time and between regions, although the central coding speaks in favour of a certain homogeneity in the coding procedure. In the elderly with frequent multi-morbid conditions, the probability of misclassification is higher. Furthermore, after prior analysis the covariates language region and urbanisation level were fixed in time for the municipalities, so that varying developments therein may have resulted in inaccuracies. Female gender related cancer mortality continuously decreased in Switzerland. In most age groups, this decline was significant and quite strong in the past decades, resulting in values more than 6 times lower within 40 years. The strongest reduction of mortality was observed for cervical cancer, followed by uterine, ovarian and breast cancer. Geographical differences are small and do not follow cantonal borders. Spatial patterns were different for each cancer site and age group. 
The reasons for these differences are manifold, rising awareness, major advances in cancer therapy and ongoing developments in the field had a major impact on the cancer mortality. Information on the geographical patterns and temporal trends of the disease burden at different regional scales are important for the design, implementation and evaluation of programs for cancer control. Access to specialized medical facilities should be increased especially in high priority areas in order to further reduce disparities. However, existing disparities are small. GLOBOCAN 2012 v1.0, Cancer Incidence and Mortality Worldwide [http://globocan.iarc.fr] Bulliard JL, La Vecchia C, Levi F. Diverging trends in breast cancer mortality within Switzerland. Ann Oncol. 2006;17:57–9. Schüler G, Bopp M. Atlas der Krebsmortalität in der Schweiz 1970–1990. Basel: Birkhäuser Verlag; 1997. Bernardinelli L, Montomoli C. Empirical Bayes versus fully Bayesian analysis of geographical variation in disease risk. Stat Med. 1992;11:983–1007. Lutz JM, Pury P, Fioretta G, Raymond L. The impact of coding process on observed cancer mortality trends in Switzerland. Eur J Cancer Prev. 2004;13:77–81. Jurgens V, Ess S, Phuleria HC, Fruh M, Schwenkglenks M, Frick H, et al. Bayesian spatio-temporal modelling of tobacco-related cancer mortality in Switzerland. Geospat Health. 2013;7:219–36. Bouchardy C, Usel M, Verkooijen HM, Fioretta G, Benhamou S, Neyroud-Caspar I, et al. Changing pattern of age-specific breast cancer incidence in the Swiss canton of Geneva. Breast Cancer Res Treat. 2010;120:519–23. Beral V, Million Women Study C. Breast cancer and hormone-replacement therapy in the Million Women Study. Lancet. 2003;362:419–27. Campeau P, Foulkes W, Tischkowitz M. Hereditary breast cancer: new genetic developments, new therapeutic avenues. Hum Genet. 2008;124:31–42. Pal T, Permuth-Wey J, Betts JA, Krischer JP, Fiorica J, Arango H, et al. BRCA1 and BRCA2 mutations account for a large proportion of ovarian carcinoma cases. Cancer. 2005;104:2807–16. Struewing JP, Hartge P, Wacholder S, Baker SM, Berlin M, McAdams M, et al. The Risk of Cancer Associated with Specific Mutations of BRCA1 and BRCA2 among Ashkenazi Jews. N Engl J Med. 1997;336:1401–8. STAT-TAB: Die interaktive Statistikdatenbank [http://www.pxweb.bfs.admin.ch/]. Access date 20.06.2014 Juden in der Schweiz [http://www.swissjews.ch/de/kultur/juden_in_der_schweiz/index.php]. Access date 20.06.2014 Levy-Lahad E, Friedman E. Cancer risks among BRCA1 and BRCA2 mutation carriers. Br J Cancer. 2007;96:11–5. Calmonte R, Galati-Petrecca M, Lieberherr R, Neuhaus M, Kahlmeier S: Gesundheit und Gesundheitsverhalten in der Schweiz 1992–2002. Neuchâtel: Schweizerische Gesundheitsbefragung. Bundesamt für Statistik; Neuchâtel 2005. Lieberherr R, Marquis J-F, Storni M, Wiedenmayer G. Gesundheit und Gesundheitsverhalten in der Schweiz 2007 - Schweizerische Gesundheitsbefragung. Neuchâtel: Bundesamt für Statistik (BFS); 2010. Joerger M, Thurlimann B, Savidan A, Frick H, Rageth C, Lutolf U, et al. Treatment of breast cancer in the elderly: a prospective, population-based Swiss study. J Geriatr Oncol. 2013;4:39–47. Sant M, Aareleid T, Berrino F, Bielska Lasota M, Carli PM, Faivre J, et al. EUROCARE-3: survival of cancer patients diagnosed 1990-94--results and commentary. Ann Oncol. 2003;14 Suppl 5:v61–118. Woods LM, Rachet B, Coleman MP. Choice of geographic unit influences socioeconomic inequalities in breast cancer survival. Br J Cancer. 2005;92:1279–82. 
Barnett GC, Shah M, Redman K, Easton DF, Ponder BAJ, Pharoah PDP. Risk Factors for the Incidence of Breast Cancer: Do They Affect Survival From the Disease? J Clin Oncol. 2008;26:3310–6.
This research was co-funded by the Cancer League Eastern Switzerland and an SNF grant, project no. 32003B_135769.
Cancer Registry St. Gallen-Appenzell, St Gallen, Switzerland: Christian Herrmann & Silvia Ess. Department Epidemiology and Public Health, Swiss Tropical and Public Health Institute, Basel, Switzerland: Christian Herrmann, Nicole Probst-Hensch & Penelope Vounatsou. University of Basel, Basel, Switzerland. Department of Medical Oncology-Haematology, Kantonsspital St. Gallen, St. Gallen, Switzerland: Beat Thürlimann. Breast Centre, Kantonsspital St. Gallen, St. Gallen, Switzerland. Christian Herrmann, Silvia Ess, Nicole Probst-Hensch, Penelope Vounatsou. Correspondence to Christian Herrmann. PV, SE conceived of the study. CH carried out the analysis and data acquisition. CH, SE, PV contributed to the analysis of the data and the writing of the manuscript. SE, BT, NP and PV contributed to interpretation of the findings and critically revised the manuscript. All authors read and approved the final manuscript.
Appendix: model formulations
Observed age- and cancer site-specific counts of deaths $Y_{it}$ in municipality $i$ $(i = 1, \ldots, N)$ in period $t$ are assumed to follow a Poisson distribution, $Y_{it} \sim \mathrm{Pois}(\mu_{it})$. Age- and cancer-specific random effects as well as possible non-linear trends were modelled on the log of the mean age-standardized mortality ratio (SMR):
$$ \log(\mu_{it}) = \log(E_{it}) + \alpha + X_{ij}^{T}\beta_s + \Phi_i $$
where $E_{it}$ is the age- and cancer-specific expected number of deaths, $X$ is the vector of covariates $s$ related to municipality $i$ and $\beta_s$ are the coefficients of the associated covariates. Time periods are included as covariates. Spatial correlation is captured by age- and cancer-specific random effects $\Phi_i$ at municipality level $i$, modelled via a conditional autoregressive (CAR) process. Spatial dependency among the municipalities was introduced by the conditional prior distribution of $\Phi_i$ with
$$ \Phi_i \sim N\!\left(\frac{\gamma \sum_{q=1,\, q\neq i}^{N} c_{iq}\Phi_q}{w_i},\ \frac{\sigma^2}{w_i}\right) $$
where $c_{iq}$ characterizes the degree of spatial influence of municipality $i$ on the remaining municipalities, $\gamma$ quantifies the overall spatial dependence and $w_i$ is the number of neighbours of municipality $i$. We used the intrinsic version of this CAR model as proposed by Besag, York and Mollié (1991), where $c_{iq}$ takes the value 1 if the municipalities are adjacent and 0 otherwise, and $\gamma$ is equal to one. As further prior distributions we used:
$$ \frac{1}{\sigma^2}\sim \Gamma(2.01,\ 1.01), \qquad \alpha \sim U(-\infty, +\infty), \qquad \beta_s\sim N(0,\ 0.01) $$
Detailed Figures of SMR development by cancer sites and age groups. Development of age standardized breast (Figures S2a-c), cervical (Figures S3a-c), uterine (Figures S4a-c) and ovarian (Figures S5a-c) cancer mortality (SMR) and spatial differences therein among all time periods by age group. (PDF 5957 kb) Color key for figures 2-5. (PDF 164 kb)
Herrmann, C., Ess, S., Thürlimann, B. et al. 40 years of progress in female cancer death risk: a Bayesian spatio-temporal mapping analysis in Switzerland. BMC Cancer 15, 666 (2015). https://doi.org/10.1186/s12885-015-1660-8
Disease mapping Time trends
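The appendix model is driven by the expected counts $E_{it}$ obtained by indirect standardisation. As a toy illustration of that step (all numbers below are invented for the example; the actual analysis used 5-year age bands and national rates per period), the raw SMR that the CAR prior subsequently smooths is simply observed over expected deaths:

```python
# Toy illustration of indirect standardisation: expected deaths are obtained by
# applying nationwide age-specific rates to each municipality's population, and
# the raw SMR is observed/expected. All figures here are invented for the example.

national_rates = {"<55": 0.0002, "55-74": 0.0015, "75+": 0.0060}   # deaths per person-year

municipalities = {
    "A": {"population": {"<55": 3000, "55-74": 1200, "75+": 400}, "observed": 5},
    "B": {"population": {"<55": 800,  "55-74": 500,  "75+": 300}, "observed": 4},
}

for name, data in municipalities.items():
    expected = sum(national_rates[age] * n for age, n in data["population"].items())
    smr = data["observed"] / expected
    print(f"{name}: expected deaths = {expected:.2f}, raw SMR = {smr:.2f}")
```

The Bayesian CAR model then replaces these noisy raw SMRs with estimates shrunk toward the values of neighbouring municipalities, which is what produces the smoothed maps in Figs. 2, 3, 4 and 5.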
What is the motivation for defining both homogeneous and inhomogeneous cochains? In my few months of studying group cohomology, I've seen two "standard" complexes that are introduced: We let $X_r$ be the free $\mathbb{Z}[G]$-module on $G^r$ (so, it has as a $\mathbb{Z}[G]$-basis the $r$-tuples $(g_1,\ldots,g_r)$ of elements of $G$). The $G$-module structure of $X_r$ comes by virtue of being a $\mathbb{Z}[G]$-module. The boundary maps $\partial_r:X_r\to X_{r-1}$ are $$\partial_{r}(g_1,\ldots,g_r)=g_1(g_2,\ldots,g_r)+\sum_{j=1}^{r-1}(-1)^j(g_1,\ldots,g_jg_{j+1},\ldots,g_r)+(-1)^r(g_1,\ldots,g_{r-1})$$ We let $E_r$ be the free $\mathbb{Z}$-module on $G^{r+1}$ (so, it has as a $\mathbb{Z}$-basis the $(r+1)$-tuples $(g_0,\ldots,g_r)$ of elements of $G$). The $G$-module structure of $E_r$ is defined by $g(g_0,\ldots,g_r)=(gg_0,\ldots,gg_r)$. The boundary maps $d_r:E_r\to E_{r-1}$ are $$d_{r}(g_0,\ldots,g_r)=\sum_{j=0}^{r}(-1)^j(g_0,\ldots,\widehat{g_j},\ldots,g_{r})$$ We may then proceed to compute the cohomology of $G$ with coefficients in a $G$-module $A$ using $$0\to \text{Hom}_G(X_0,A)\to\text{Hom}_G(X_1,A)\to\cdots$$ or by using $$0\to \text{Hom}_G(E_0,A)\to\text{Hom}_G(E_1,A)\to\cdots$$ Elements of $\text{Hom}_G(X_r,A)$ are "inhomogeneous cochains" and elements of $\text{Hom}_G(E_r,A)$ are "homogeneous cochains". In either case, all that matters is what happens to the basis elements, so really we can say that an "inhomogeneous cochain" is a function $f:G^r\to A$, and that a "homogeneous cochain" is a function $f:G^{r+1}\to A$ that satisfies $f(gg_0,\ldots,gg_r)=g\cdot f(g_0,\ldots,g_r)$. Lang defines them both in his Topics in cohomology of groups, and says ... we have a $\mathbb{Z}[G]$-isomorphism $X\xrightarrow{\approx}E$ between the non-homogeneous and the homogeneous complex uniquely determined by the value on basis elements such that $$(\sigma_1,\ldots,\sigma_r)\mapsto (e,\sigma_1,\sigma_1\sigma_2,\ldots,\sigma_1\sigma_2\ldots \sigma_r)$$ but Serre defines only the homogeneous cochains in Local Fields and then says that a cochain ... is uniquely determined by its restriction to systems of the form $(1,g_1,g_1g_2,\ldots,g_1\cdots g_i)$. That leads us to interpret the elements of $\text{Hom}_G(E_r,A)$ as "inhomogeneous cochains", i.e. as functions $f(g_1,\ldots,g_i)$ of $i$ arguments, with values in $A$, whose coboundary is given by ... To put it bluntly, my question is: Why are we doing this? I can think of some possible reasons: Historical - perhaps one way was defined first, now the other is more popular, but the older definition is still included out of tradition. Practical - perhaps there are important computations that are significantly easier to see or do using one or the other approach, or where it is useful to switch between them for some reason. Big picture - perhaps there is a high-level interpretation of one or both approaches that ties in with some other field where (co)homology plays a role. So, what's the real motivation for defining both "homogeneous" and "inhomogeneous" cochains? terminology homology-cohomology group-cohomology Zev Chonoles
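Since the two complexes are isomorphic (as one of the comments below also stresses), the dictionary between them can even be checked mechanically in a tiny case. The following sketch does this in degree 1 for $G=\mathbb{Z}/2$ acting on $A=\mathbb{Z}/5$ by negation; the choice of group, module and encoding is an arbitrary one made purely for illustration.

```python
# Check, on a tiny example, that the translation between inhomogeneous and
# homogeneous cochains intertwines the two coboundaries in degree 1.
# G = Z/2 acts on A = Z/5 by negation (the nontrivial element sends a to -a).
from itertools import product

G = [0, 1]                      # Z/2, written additively
A_MOD = 5

def g_mul(g, h):                # group law in G
    return (g + h) % 2

def act(g, a):                  # action of G on A = Z/5
    return a % A_MOD if g == 0 else (-a) % A_MOD

def d_inhom(phi):
    """Inhomogeneous coboundary of a 1-cochain phi: G -> A."""
    return {(g, h): (act(g, phi[h]) - phi[g_mul(g, h)] + phi[g]) % A_MOD
            for g, h in product(G, G)}

def to_homog(phi):
    """Equivariant cochain psi(g0, g1) = g0 . phi(g0^{-1} g1); in Z/2, g0^{-1} = g0."""
    return {(g0, g1): act(g0, phi[g_mul(g0, g1)])
            for g0, g1 in product(G, G)}

def d_homog(psi):
    """Homogeneous coboundary: (d psi)(g0,g1,g2) = psi(g1,g2) - psi(g0,g2) + psi(g0,g1)."""
    return {(g0, g1, g2): (psi[(g1, g2)] - psi[(g0, g2)] + psi[(g0, g1)]) % A_MOD
            for g0, g1, g2 in product(G, G, G)}

# Compare the two coboundaries via the standard dictionary (g, h) <-> (e, g, gh),
# for every possible 1-cochain phi: G -> A.
for phi_vals in product(range(A_MOD), repeat=2):
    phi = dict(zip(G, phi_vals))
    lhs = d_inhom(phi)
    rhs = d_homog(to_homog(phi))
    assert all(lhs[(g, h)] == rhs[(0, g, g_mul(g, h))] for g, h in product(G, G))
print("inhomogeneous and homogeneous coboundaries agree on all 1-cochains")
```

The assertion is exactly the identity $(d\varphi)(g,h)=(d\psi)(e,g,gh)$ obtained by restricting a homogeneous cochain to tuples of the form $(e,g,gh)$, as in the Lang and Serre passages quoted above.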
I don't know what the real answer is, but I'll just point out that the inhomogeneous definition has the advantage that it's defined on a tuple of smaller size ($r$ instead of $r+1$), while the homogeneous definition has the advantage that the boundary map is "simpler" (omitting a term rather than combining via a product). – Ted Sep 18 '11 at 17:22
The second complex makes it obvious that we are dealing with the homology of a simplicial set. Cycles and cocycles on the first one, on the other hand, are what we usually find in nature (derivations, factor sets in extensions, etc). – Mariano Suárez-Álvarez Sep 18 '11 at 18:29
@Mariano: I'm afraid I don't really understand your explanation of the first complex; Serre, for example, talks about factor sets using homogeneous cochains. Could you expand a bit more on the places we see inhomogeneous cochains "in nature" (in an answer, if you want)? – Zev Chonoles Sep 18 '11 at 22:55
By the way, one should keep in mind the fact that the two complexes are in fact isomorphic! – Mariano Suárez-Álvarez Sep 23 '11 at 3:27
@Mariano: Indeed - that was one of the reasons I found the situation a bit perplexing. – Zev Chonoles Sep 23 '11 at 3:38
Let me give you three examples where nature picks your first complex: First: Let $G$ be a group, let $A$ be a $G$-module, and let $A\rtimes G$ be the semidirect product (as a set, this is $A\times G$, and it becomes a group with multiplication such that $(a,g)\cdot(b,h)=(a+g\cdot b,gh)$ for all $a$, $b\in A$ and all $g$, $h\in G$). Consider the projection map $p:A\rtimes G\to G$, which is a group homomorphism. A section of $p$ is a group homomorphism $s:G\to A\rtimes G$ such that $p\circ s=\mathrm{id}_G$. It is immediate to check that a section determines uniquely and is determined uniquely by a function $\sigma:G\to A$ such that $$g\cdot\sigma(h)+\sigma(g)=\sigma(gh)$$ for all $g$, $h\in G$; indeed, the relation between $s$ and $\sigma$ is that $s(g)=(\sigma(g),g)$ for all $g\in G$. The function $\sigma$ is then a $1$-cocycle defined on your first complex. Second: Consider an extension $$0 \to A \overset{\iota}{\longrightarrow} E \overset{f}{\longrightarrow} G \to 1$$ of a group $G$ by an abelian group $A$ (whose operation I'll write $+$). Let $\sigma:G\to E$ be a set-theoretic section of $f$. For all $g$, $h\in G$ we have $f(\sigma(g)\sigma(h))=f(\sigma(g))f(\sigma(h))=gh=f(\sigma(gh))$, so that there exists a unique element $\alpha(g,h)\in A$ such that $$\sigma(g)\sigma(h)=\iota(\alpha(g,h))\sigma(gh).$$ There is an action of $G$ on $A$ such that $$\iota(g\cdot a)=\sigma(g)\iota(a)\sigma(g)^{-1}$$ for all $g\in G$ and all $a\in A$. It is easy to check that this is indeed an action of $G$ on $A$ by group automorphisms (this is where we need $A$ to be abelian). In other words, $A$ is a $G$-module. Now, whenever $g$, $h$, $k$ are in $G$ we have $$(\sigma(g)\sigma(h))\sigma(k)=\iota(\alpha(g,h))\sigma(gh)\sigma(k)=\iota(\alpha(g,h)+\alpha(gh,k))\sigma(ghk)$$ and $$\sigma(g)(\sigma(h)\sigma(k))=\sigma(g)\iota(\alpha(h,k))\sigma(hk)=\iota(g\cdot\alpha(h,k))\sigma(g)\sigma(hk)=\iota(g\cdot\alpha(h,k)+\alpha(g,hk))\sigma(ghk).$$ Since multiplication in $G$ is associative, the left-hand sides in these last two equations are equal, so so are their right-hand sides, and since $\iota$ is injective, we see that $$g\cdot\alpha(h,k)+\alpha(g,hk)=\alpha(g,h)+\alpha(gh,k)$$ or, equivalently, that $$g\cdot\alpha(h,k)-\alpha(gh,k)+\alpha(g,hk)-\alpha(g,h)=0.$$ This means that $\alpha$ determines a $2$-cocycle on your first complex. Third: If $G$ is a group, the category of $G$-modules over a field $k$ is a monoidal category $\mathscr M_G$ with respect to the tensor product of representations.
If $\alpha:G\times G\times G\to k^\times$ is a $3$-cocycle defined on your first complex and with values in the multiplicative group of $k$, then one can "twist" the associativity isomorphisms of $\mathscr M_G$ using $\alpha$ to obtain a different, slightly more fun monoidal category $\mathscr M_G(\alpha)$, and if you work this out in detail, you will see that again the cocycle condition with respect to your first complex is precisely the pentagon condition for a monoidal structure. These are just three instances where nature picks inhomogeneous cochains. – Mariano Suárez-Álvarez
The inhomogeneous cochain construction is a standard free resolution of $\mathbb Z$ (the trivial $G$-module) as a $\mathbb Z[G]$-module, and it is explicitly constructed to be such. Since taking $G$-invariants is the same as forming $Hom_G(\mathbb Z, A)$, this is what is needed to compute derived functors of this operation (which is what group cohomology is, from a derived functor point of view). On the other hand, homogeneous cochains are what you get if you compute the cohomology of local systems (twisted coefficients) on the classifying space for $G$, which is how group cohomology first arose (explicitly; there were implicit examples of group cohomology classes much earlier) in the literature. The "homogeneity" reflects the fact that we are computing with a certain $G$-equivariant simplicial complex. Loosely, and roughly, speaking, the inhomogeneous picture is more algebraic, and the homogeneous picture is more topological. – Matt E
The homogeneous cocycles are very similar to the forming of a boundary of a topological CW-complex. It therefore gives intuition that the homogeneous approach is more useful in algebraic topology. In fact, the homogeneous cocycles formula is the same as the Čech cohomology formula when considering cohomology of sheaves. – LinAlgMan Aug 5 '13 at 13:08
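As a small computational sanity check of the second example above (the factor set of an extension), one can compute $\alpha$ explicitly for $0\to\mathbb{Z}/2\to\mathbb{Z}/4\to\mathbb{Z}/2\to 1$ and verify the $2$-cocycle identity. The section chosen below is an arbitrary set-theoretic choice made for the illustration, and the $G$-action on $A$ is trivial here because $\mathbb{Z}/4$ is abelian.

```python
# Factor set of the extension 0 -> Z/2 -> Z/4 -> Z/2 -> 1.
# iota embeds Z/2 as {0, 2} in Z/4, f reduces mod 2, and sigma is the
# set-theoretic section sending 0 -> 0 and 1 -> 1 (an arbitrary choice).
from itertools import product

def iota(a):            # Z/2 -> Z/4
    return (2 * a) % 4

def f(e):               # Z/4 -> Z/2
    return e % 2

def sigma(g):           # section of f (not a homomorphism)
    return g

def alpha(g, h):
    """alpha(g,h) in Z/2 defined by sigma(g) + sigma(h) = iota(alpha(g,h)) + sigma(g+h)."""
    diff = (sigma(g) + sigma(h) - sigma((g + h) % 2)) % 4
    assert diff in (0, 2), "difference must lie in the image of iota"
    return diff // 2

# 2-cocycle identity (trivial action, additive notation):
# alpha(h,k) - alpha(g+h,k) + alpha(g,h+k) - alpha(g,h) = 0 in Z/2.
for g, h, k in product(range(2), repeat=3):
    lhs = (alpha(h, k) - alpha((g + h) % 2, k) + alpha(g, (h + k) % 2) - alpha(g, h)) % 2
    assert lhs == 0
print("alpha(1,1) =", alpha(1, 1), "and the 2-cocycle identity holds")
```

The output $\alpha(1,1)=1$ reflects the fact that this extension does not split: since $\mathbb{Z}/4\neq\mathbb{Z}/2\times\mathbb{Z}/2$, no choice of section can make the factor set vanish identically.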
CommonCrawl
Applications of Measure, Integration and Banach Spaces to Combinatorics I'm going to be teaching a Master's level analysis course (measure theory, Lebesgue integration, Banach and Hilbert spaces, and if there's time, some spectral or PDE stuff) in the fall. My problem is that while roughly half of the students will actually be using analysis in their further work, the rest of them are going to specialize in combinatorics, and while I want to convince them that they should know this stuff as part of their general mathematical culture, I'd also like to try to connect it to what they'll be working on. So far, I've found survey articles on applications of Ramsey theory to Banach spaces, and applications of harmonic analysis to additive number theory, but I was wondering whether anyone had some suggestions of references for applications of classical analysis to old-fashioned, classical combinatorics. (I realise that this is a pretty tall order, as on many levels, these two fields are at antipodes.) PS: I'm planning on talking about probability measures on discrete spaces, but I don't think that will convince the combinatorics people that hacking through the construction of the Lebesgue integral could have a practical payoff someday for them. ca.classical-analysis-and-odes co.combinatorics reference-request measure-theory Gordon Craig
Well, I don't know what you mean by "combinatorics", but there are many connections between functional analysis and discrete mathematics. Try, for example, wapedia.mobi/en/Johnson–Lindenstrauss_lemma and look at Jiri Matoušek's books and expository articles. – Bill Johnson Aug 27 '10 at 1:02
While the goal of motivating the students by presenting diverse applications is commendable, at this basic level you can't meaningfully say much of depth (as Steve Huntsman commented, probability, a la Feller, is a bridge, but that's already a course of its own). There is also a real danger of losing analysts by emphasizing combinatorics too much. If they view it as an annoying distraction, in a course that isn't even nominally related to combinatorics, they'd be right. Maybe the solution is not to try to be something for everyone, but to assume some maturity on the students' part. – Victor Protsak Aug 27 '10 at 1:27
I agree. However, this is where the sensitivity of the teacher should come in. Showing at least connections with different fields is a good thing, and in case the audience reacts positively, one may decide to expand. Also, in the presence of a class of students that have already chosen their own specialization, a very nice thing is to schedule a short seminar part with expositions by the students, each one on his/her favourite field. – Pietro Majer Aug 27 '10 at 8:44
The existence of the internet means that the days of - Me Teacher. Me Teach. You Pupil. You Learn. - are over. Instead tell the students how interconnected mathematics is, and ask each of them to search the internet to find papers that link functional analysis to their own favourite topics, and perhaps ask the students to present their findings to the class. Make it a competition to see who can find the most interesting links.
$\endgroup$ – user8232 Aug 27 '10 at 17:06 $\begingroup$ Pietro's suggestion about presentations is eminently sensible, I just don't see how to do it in a measure theory - functional analysis class. "Make it a competition to see who can find the most interesting links": I've actually done this once as a motivational exercise in a linear algebra class I taught, and the feel-good effect aside, its instructional value was surprisingly low - while I expected some attempt at understanding, all it did was establish that students knew what a search engine was. Even if that's an improvement over unknown's pidgin education standards, it ain't much. $\endgroup$ – Victor Protsak Aug 28 '10 at 17:01 Here's a nice application of measure theory, precisely of the theory of orthogonal polynomials, to a classic problem of counting derangements. Problem: How many anagrams with no fixed letters of a given word are there? For instance, for a word made of only two different letters, say $n$ letters $A$ and $m$ letters $B$, the answer is, of course, 1 or 0 according to whether $n = m$ or not, for the only way to form an anagram without fixed letters is exchanging all the $A$'s with $B$'s, and this is possible if and only if $n=m$. In the general case, for a word with $n_1$ letters $X_1$, $n_2$ letters $X_2$, ..., $n_r$ letters $X_r$, you will find (after the proper use of the inclusion-exclusion formula) that the answer has the form of a sum of products that looks very much like the expansion of a product of sums, yet it is not. It is not, exactly because of the presence of terms $k!$, which would formally make a true expansion of a product of sums, if only they were replaced by corresponding terms $x^k$. This suggests expressing them with the Eulerian integral $k!=\int_0^\infty x^ke^{-x}dx$, with the effect that the said expression becomes an integral (with the weight $e^{-x}$) of a true product of sums: precisely, $$\int_0^\infty P_{n_1} (x) P_{n_2}(x)\cdots P_{n_r}(x)\, e^{-x}\, dx,$$ with a certain sequence of polynomials $P_n$, where $P_n$ has degree $n$. But the above answer for the case $r=2$ gives an orthogonality relation, whence the $P_n$ are the Laguerre polynomials (up to a sign that is easily decided). Note that in the case with no repeated letters, all $n_i=1$, one finds again the more popular enumeration of permutations without fixed points. Disclaimer: I partially copied this from Wikipedia; it's me who wrote it there. The above is my personal amateur's solution, and possibly differs slightly from the vulgata. An on-line reference, with generalizations of the problem, is e.g. Weighted derangements and Laguerre polynomials, D. Foata and D. Zeilberger, SIAM J. Discrete Math. 1 (1988) 425-433. Pietro Majer $\begingroup$ Thanks! That sounds perfect for one of the student talks at the end of term; they'll get to do a bit more on orthogonal polynomials than I'm counting on doing, and then the application will be a nice payoff. $\endgroup$ – Gordon Craig Sep 13 '10 at 19:15 $\begingroup$ Is the intended reference epubs.siam.org/doi/abs/10.1137/0401043?journalCode=sjdmec or emis.ams.org/journals/SLC/opapers/s08foazeil.pdf ? $\endgroup$ – François G. Dorais♦ Dec 1 '13 at 18:01
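To make the identity in the answer above concrete: with the Laguerre polynomials normalized so that $L_n(0)=1$, the number of derangements of a word with letter multiplicities $n_1,\dots,n_r$ is $(-1)^{n_1+\cdots+n_r}\int_0^\infty L_{n_1}(x)\cdots L_{n_r}(x)\,e^{-x}\,dx$ (this fixes the sign left open above). The short script below is only a sketch in plain Python, with helper names made up for illustration; it evaluates the integral exactly via the moments $\int_0^\infty x^k e^{-x}\,dx = k!$ and compares the result with a brute-force count.

```python
from fractions import Fraction
from itertools import permutations
from math import comb, factorial

def laguerre_coeffs(n):
    # Coefficients of L_n(x) = sum_k C(n,k) (-1)^k x^k / k!, constant term first.
    return [Fraction((-1) ** k * comb(n, k), factorial(k)) for k in range(n + 1)]

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def derangements_via_laguerre(multiplicities):
    # (-1)^(n_1+...+n_r) times the integral of prod_i L_{n_i}(x) e^{-x} over [0, oo),
    # computed exactly by replacing each monomial x^k with its moment k!.
    prod = [Fraction(1)]
    for n in multiplicities:
        prod = poly_mul(prod, laguerre_coeffs(n))
    integral = sum(c * factorial(k) for k, c in enumerate(prod))
    return (-1) ** sum(multiplicities) * integral

def derangements_brute_force(word):
    # Count distinct rearrangements of the letters with no letter left in place.
    return sum(all(a != b for a, b in zip(word, p)) for p in set(permutations(word)))

for word in ["ab", "abc", "aab", "aabb", "aabbc", "aabbcc"]:
    mult = [word.count(c) for c in sorted(set(word))]
    print(word, derangements_brute_force(word), derangements_via_laguerre(mult))
```

The two columns agree (for instance 0 for "aab" and 1 for "aabb"), which makes a reasonable classroom-sized check of the orthogonality argument.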
Fourier analysis is a major tool in arithmetic combinatorics (see Tao and Vu's book; they have a chapter on $L^p$ theory, i.e. Bourgain's theorem about "long APs in sumsets"). Moreover, one can show applications in number theory and summability theory (those have combinatorial uses), for example Hardy-Littlewood's proof of the Prime Number Theorem (based on their Tauberian theorem (or Landau's; they are essentially the same for this purpose)). If you want to delve into ergodic theory, there are a lot of uses (as one suggested here, Hillel Furstenberg's proof of Szemerédi's theorem, a generalization concerning forms in the plane (by Furstenberg-Katznelson-Weiss); there are even previous works, in topological dynamics, proving for example the van der Waerden theorem. And if one wants to go in another direction, Weyl's equidistribution theorem, works of Dani and Margulis about the Oppenheim conjecture, and general work in the geometry of numbers by recent Fields Medalist Lindenstrauss), but covering all those subjects would require about 2 courses (ergodic theory and homogeneous dynamics, perhaps even a course in topological dynamics). If you want to go purely combinatorial, recently there has been a lot of interest in expanders (see the works of Bourgain and Varju (Princeton)). You can show for example Margulis' construction (although it will require some Lie groups and representation theory). In a totally different direction, you can speak of embeddings into metric spaces (that stuff has some applications in CS), for example the Johnson-Lindenstrauss Lemma (this Lindenstrauss is the father of the Fields Medalist). Asaf $\begingroup$ These applications are beautiful, but they are completely unsuitable for presenting to students learning about the Lebesgue theory for the first time. Almost every one of them is a good topic to spend a month on in an advanced graduate course. $\endgroup$ – Victor Protsak Aug 29 '10 at 4:19 $\begingroup$ Thanks, I'll look into these. They do sound over my students' head at this point, but I'll drop them into conversation, and give references. $\endgroup$ – Gordon Craig Sep 13 '10 at 18:59 Give Flajolet's book "Analytic Combinatorics" a shot. Although I'm still not sure how you would insert a generating function and a saddle-point analysis into your course ... anonnn $\begingroup$ I'm not too sure either, but I'll check out the book. Thanks. $\endgroup$ – Gordon Craig Sep 13 '10 at 19:15 Fourier analytic methods can sometimes be useful to prove combinatorial results. A celebrated result which uses Fourier analysis on the Boolean cube is the KKL (Kahn-Kalai-Linial) theorem about the influence of a variable in a Boolean function. Here are lecture notes which cover that subject. As someone else already mentioned, metric embeddings are an interesting topic where analysis and combinatorics intersect. There is a good survey by Matousek on the subject. Finally, I would like to remark that a good method to gain the interest of combinatorialists is to give them interesting problems to think about. So a great analysis book for combinatorialists (although a bit too advanced for your course, I guess) is Halmos' "A Hilbert Space Problem Book". Paolo Ketter-Umbanza $\begingroup$ Thanks for the references. I'll try to pick a topic out of there for one of their talks, although Halmos's book will be too advanced, as we won't be on to Hilbert spaces until at least the middle of term. $\endgroup$ – Gordon Craig Sep 13 '10 at 19:01
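For concreteness, the quantity the KKL theorem mentioned above is about is the influence $\mathrm{Inf}_i(f)=\Pr_x[f(x)\neq f(x\oplus e_i)]$ of the $i$-th coordinate of a Boolean function $f:\{0,1\}^n\to\{0,1\}$. The toy script below only illustrates that definition, not the theorem; everything in it is made up for illustration. It computes the influences of majority and of a dictator by exhaustive enumeration.

```python
from itertools import product

def influences(f, n):
    # Inf_i(f) = Pr_x[ f(x) != f(x with bit i flipped) ], x uniform on {0,1}^n.
    counts = [0] * n
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            counts[i] += fx != f(y)
    return [c / 2 ** n for c in counts]

n = 5
majority = lambda x: int(sum(x) > n // 2)
dictator = lambda x: x[0]

print("majority:", influences(majority, n))  # every coordinate gets C(4,2)/2^4 = 0.375
print("dictator:", influences(dictator, n))  # 1.0 for the first coordinate, 0 for the rest
```

KKL says that for a balanced Boolean function some coordinate always has influence at least of order $\log n/n$; the lecture notes mentioned above make this precise.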
Probability is a thread that can tie analysis and combinatorics together. In particular, Markov processes on various spaces enjoying some nice combinatorial properties have a good many applications (and can also introduce harmonic analysis and representation theory of finite groups). A couple of books that look interesting in regard to these sorts of overlaps are here and here. It is likely that with a bit of browsing you can find better examples. Another related area that may provide fertile ground is statistical physics, though this may require more background than you can afford to provide. On the other hand, the thermodynamic limit is a subtle analytic beast that requires a great deal of care and all the theory they could handle in such a course. Steve Huntsman $\begingroup$ Thanks. That sounds out of our league, but it's something I can mention in passing to pique their interest. $\endgroup$ – Gordon Craig Sep 13 '10 at 19:16 If you count additive number theory as combinatorics, there is Fürstenberg's measure theoretic proof of Szemerédi's theorem ("Any 'positive fraction' of the natural numbers contains arithmetic progressions of any length."). Presenting the proof is certainly not possible – that's the subject of an entirely different course – but you could do some small talk and prove Poincaré's recurrence theorem instead, which is a trivial special case of Szemerédi's theorem. In that light, another, if elementary, measure theoretic fact is Minkowski's theorem about lattice points (vectors with integer coordinates) in convex subsets of $\mathbb{R}^n$. I vaguely remember that it can be used in the proof that every natural number is a sum of four squares (can it?), which could be labeled "combinatorics" with heavy squinting. Since this is about convex sets, there is some connection to norms and functional analysis, too ("Geometry of numbers"). Greg Graviton This isn't exactly what you are looking for, but I highly recommend Using the Borsuk-Ulam theorem by Matousek. The Borsuk-Ulam theorem states that any continuous map $f:S^n \to \mathbb{R}^n$ must map some pair of antipodal points to the same point. Surprisingly, this theorem has many applications in combinatorics, including for example graph colouring. Tony Huynh $\begingroup$ That sounds nice. I'm not sure whether I'll be able to integrate it in my course, but if I can that would certainly interest the combinatorists. $\endgroup$ – Gordon Craig Sep 13 '10 at 19:12 There is a direct connection between Hall's marriage theorem (combinatorics) and linear programming (linear inequalities). Of course, the latter is about finite dimensions, but prominently features duality and convexity, two important tools in functional analysis. To elaborate on Hall's marriage theorem, consider the following picture: The dots on the left represent men, the dots on the right represent women, and the connecting lines indicate whether this man and woman like each other. The question is whether it is possible to arrange simultaneous, monogamous marriages such that everyone marries someone he or she likes. Hall's theorem gives a necessary and sufficient condition for that: for every subset $M$ of men, the set $$W = \lbrace w \text{ woman}\ |\ w \text{ likes } m, m\in M\rbrace$$ of women liked by these men must fulfill $|M| \le |W|$. This problem is also known as perfect matching in a bipartite graph. It turns out that it is equivalent to a maximum flow problem, for which we have the min-cut max-flow theorem, which is equivalent to the duality theorem for linear programming.
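To make the matching side of this concrete before the max-flow/LP details, here is a minimal sketch of the standard augmenting-path (Kuhn) algorithm for bipartite matching, in plain Python with a made-up "likes" relation; it finds a perfect matching exactly when Hall's condition holds, and a brute-force check of that condition is included for comparison.

```python
from itertools import combinations

def max_bipartite_matching(likes):
    # likes[m] = set of women that man m likes; returns (matching size, woman -> man map).
    match = {}

    def try_assign(m, seen):
        # Try to give m a partner, possibly re-routing already matched men (augmenting path).
        for w in likes[m]:
            if w in seen:
                continue
            seen.add(w)
            if w not in match or try_assign(match[w], seen):
                match[w] = m
                return True
        return False

    size = sum(try_assign(m, set()) for m in likes)
    return size, match

def hall_condition_holds(likes):
    # Brute force over all subsets M of men: check |M| <= |N(M)|.
    men = list(likes)
    for k in range(1, len(men) + 1):
        for subset in combinations(men, k):
            neighbours = set().union(*(likes[m] for m in subset))
            if len(neighbours) < len(subset):
                return False
    return True

likes = {"m1": {"w1", "w2"}, "m2": {"w1"}, "m3": {"w2", "w3"}}  # a toy instance
size, matching = max_bipartite_matching(likes)
print(hall_condition_holds(likes), size == len(likes), matching)
```

Both printed booleans come out True, consistent with Hall's theorem: the condition holds and a perfect matching is found.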
The details of this equivalence are not very difficult and can be found here: http://web.mit.edu/k_lai/www/6.046/r11-handout.pdf . Unfortunately, I haven't found a ready-made proof of Hall's theorem from the duality theorem; you'd have to work that out yourself for your lecture. The intermediate reformulations are a bit long, and I don't think it's worth spending more than a cursory remark on them; I'd jump right to the reformulation as a linear program. jeq $\begingroup$ There's a book by Reichmeider about Hall, max flow, Dilworth, and a few other equivalent or closely related topics. $\endgroup$ – Gerry Myerson Aug 28 '10 at 13:57 $\begingroup$ Thanks, that sounds promising. I'll investigate Gerry's reference to see if I can make a connection with what we'll be doing. $\endgroup$ – Gordon Craig Sep 13 '10 at 19:05 $\begingroup$ These notes present Menger's theorem in a functional-analytic sort of way: terrytao.wordpress.com/2007/11/30/… Menger's theorem is one of the closely related topics/theorems mentioned by Gerry. $\endgroup$ – Brendan Murphy Dec 8 '13 at 1:04 Here's a theorem of Weyl: Let $a_1,a_2,\dots$ be a sequence of distinct integers. Then the sequence $a_1x,a_2x,\dots$ is uniformly distributed modulo 1 for almost all real numbers $x$. The theory of uniform distribution of sequences, which you may or may not consider to be combinatorics, relies heavily on estimation of exponential sums, and thus on classical analysis. Gerry Myerson As warm-up exercises using Cauchy-Schwarz and Hölder's inequality, you could mention restricted subgraph bounds and incidence bounds. When you rearrange these bounds to bound the number of "rich/popular" points or vertices you get (weak) $L^p$ bounds for the degree of each vertex (viewed as a function on the vertex set). In this setting it's easy to see how interpolation works, and it gives you a feel for how $L^p$ spaces work in a probability setting (i.e. on a compact domain). Here's a reference for the restricted subgraph bounds: http://murphmath.wordpress.com/2012/06/19/restricted-subgraph-bounds/. Instead of the convexity of ${x\choose s}$ in $x$, you can use Hölder's inequality. The bounds can be rearranged using Chebyshev's inequality to state that (in the notation of that blog post): $$|\{b\in B\colon \mathrm{deg}(b)\geq t\}|\leq C \frac{|A|^s}{t^s}$$ where $C$ is a constant depending on $s$ and $t$. Then using the "layer-cake" theorem (which is a nice exercise using Fubini's theorem!) you get that $$||\mathrm{deg}(b)||_{L^s(B)}^s \leq C'|A|^s\log|A|.$$ I know this is pretty trivial, but this is actually what's at stake in the Kakeya maximal conjecture. Here one has a collection of tubes $T_1,\ldots,T_n$ and the goal is to prove $L^p$ bounds for the function $$ f(x)=\sum_{i=1}^n \chi_{T_i}(x),$$ where $\chi_{T_i}$ is the indicator function on the tube $T_i$. Thus we are seeking $L^p$ bounds for the function that counts how many tubes are incident to a point $x$! The proof of the two dimensional Kakeya conjecture is essentially the same as the proof of the restricted subgraph bound for $s=2$ in that blog post---the incidence graph for points and lines in a plane contains no $K_{2,2}$'s, and modulo some technicalities in the Kakeya case, the main tool is Cauchy-Schwarz. In fact, Wolff's paper on the Kakeya problem contains a finite field analog where the proof is exactly by Cauchy-Schwarz. Brendan Murphy
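The "layer-cake" step in the answer above is, for an integer-valued degree function, just the telescoping identity $$\sum_{b\in B}\deg(b)^s=\sum_{t\ge 1}\bigl(t^s-(t-1)^s\bigr)\,\bigl|\{b\in B:\deg(b)\geq t\}\bigr|,$$ the discrete counterpart of $\|f\|_{L^s}^s=s\int_0^\infty t^{s-1}\,|\{f\geq t\}|\,dt$. Here is a throwaway check on random data (plain Python; the data and names are made up for illustration):

```python
import random

def lhs(degrees, s):
    return sum(d ** s for d in degrees)

def rhs(degrees, s):
    # Sum over "layers" t = 1, 2, ..., each weighted by the superlevel-set size |{d >= t}|.
    top = max(degrees, default=0)
    return sum((t ** s - (t - 1) ** s) * sum(1 for d in degrees if d >= t)
               for t in range(1, top + 1))

random.seed(0)
degrees = [random.randint(0, 20) for _ in range(1000)]
for s in (1, 2, 3):
    print(s, lhs(degrees, s), rhs(degrees, s))  # the two numbers agree exactly
```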
Markov chains on symmetric groups converging to various distributions other than uniform provide fertile ground for the marriage between modern combinatorics, such as Macdonald polynomials, and hard analysis using the ratio test etc. For starters one could look at Diaconis and Shahshahani's proof of the cutoff convergence rate of the random transposition walk. It involves looking at Schur polynomials, which encode the eigenfunctions of the Markov chain. If one looks at walks converging to the so-called Ewens sampling measure, or the Jack measure, then Jack polynomials replace Schur polynomials as the relevant objects. Recently, Diaconis and Ram have studied an auxiliary variable algorithm Markov chain converging to a two-parameter family of distributions on the symmetric group that uses heavily the theory of Macdonald polynomials. This last one will probably be in print in a few months' time. John Jiang $\begingroup$ Thanks. That sounds too advanced for my students, but I'll look into it. $\endgroup$ – Gordon Craig Sep 13 '10 at 19:15
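A crude way to show a class what the random transposition walk even does (this only illustrates convergence to the uniform distribution, not the cutoff phenomenon itself, which concerns total variation distance): after enough random transpositions the permutation looks uniform, and a uniform permutation has on average exactly one fixed point. A small Monte Carlo sketch, with made-up parameter choices:

```python
import math
import random

random.seed(1)

def avg_fixed_points(n, steps, trials=2000):
    # Average number of fixed points after `steps` uniformly random transpositions.
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        for _ in range(steps):
            i, j = random.randrange(n), random.randrange(n)
            perm[i], perm[j] = perm[j], perm[i]   # i == j leaves perm unchanged
        total += sum(perm[k] == k for k in range(n))
    return total / trials

n = 30
cutoff = round(0.5 * n * math.log(n))  # Diaconis-Shahshahani mixing time is about (1/2) n log n
for steps in (1, cutoff // 2, cutoff, 4 * cutoff):
    print(steps, avg_fixed_points(n, steps))
# The averages drop from nearly n towards 1, the value for a uniform permutation.
```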
CommonCrawl
From Topological to Smooth and Holomorphic Vector Bundles In the last few weeks I have been thinking about the transition from topological vector bundles to smooth and holomorphic vector bundles. This has resulted in a few questions (with a common thread) as follows. Throughout, $\pi:E \to B$ is a topological (complex) vector bundle over a compact base. (A) For any given smooth manifold structure on $B$, can there exist more than one differential structure on $E$ giving $\pi:E \to B$ the structure of a smooth vector bundle? If so, what is an example? (B) Same question as above but replacing smooth by holomorphic. (C) For a choice of smooth vector bundle structure on $\pi:E \to B$, does the de Rham complex of $E$ have an easy relationship with the de Rham complex of $B$? A (very) naive guess would be that $$ \Omega^{\bullet}(E) = \Gamma^{\infty}(E) \otimes_{C^\infty(B)}\Omega^{\bullet}(B), $$ but I can't see that there is a well-defined way to define the differential. (D) Same question as above but for holomorphic structures and the Dolbeault complex. dg.differential-geometry complex-geometry vector-bundles Janos Erdmann $\begingroup$ I think what you want to hear in relation to (C) is the word "Thom isomorphism" $\endgroup$ – Matthias Ludewig Feb 7 '13 at 11:53 $\begingroup$ For question (A), choose two exotic R^4's that are not diffeomorphic; you can view R^4 as an R^3 bundle over R^1 $\endgroup$ – Siqi He Feb 7 '13 at 12:18 $\begingroup$ Siqi He: In your example there is no reason to expect they are vector bundles. $\endgroup$ – Michael Murray Feb 7 '13 at 12:30 $\begingroup$ For (A), what about taking $B$ to be a point and $E$ to be $\mathbb{R}^4$ with two different smooth structures as in Siqi He's comment? I can't see how these examples can fail to be vector bundles. $\endgroup$ – Mark Grant Feb 7 '13 at 13:13 $\begingroup$ Ah, I see what's wrong with my previous comment. The map $\pi\colon E\to B$ is not a smooth vector bundle when $E$ is exotic $\mathbb{R}^4$, since the trivialization $\pi^{-1}(B)\to B\times \mathbb{R}^4$ is not a diffeomorphism. $\endgroup$ – Mark Grant Feb 7 '13 at 14:53 The answer to A is no. The topological bundle is determined by a continuous homotopy class of maps into the classifying space. Choosing a compatible smooth structure means picking a smooth map in that class. Any two such choices are smoothly homotopic. B however is true. Look up the Jacobian of a Riemann surface. How are you going to grade in C? Where are the forms on $E$ of degree higher than the dimension of $B$? Michael Murray $\begingroup$ Thank you a lot for your answer. Then for (C), does that mean that in general there is no relationship between the two de Rham complexes? $\endgroup$ – Janos Erdmann Feb 7 '13 at 12:54 $\begingroup$ ..... or at least that there is no well-known relationship. $\endgroup$ – Janos Erdmann Feb 7 '13 at 13:03 $\begingroup$ I realise on reading the other responses that I may have misunderstood C. By de Rham cohomology of $E$ did you mean the de Rham cohomology of $E$ as a manifold or some sort of $E$-valued forms on $B$ cohomology? $\endgroup$ – Michael Murray Feb 8 '13 at 2:14 $\begingroup$ I mean the de Rham cohomology of $E$ as a Riemannian manifold. I've never seen anyone discuss this, and I was wondering why it's not interesting.
$\endgroup$ – Janos Erdmann Feb 23 '13 at 16:08 (A) As Michael Murray remarks, the answer is no (though some care is needed in interpreting the remark that "any two such choices [of maps into the classifying space] are smoothly homotopic") since in this case the classifying space is an infinite-dimensional manifold (the Grassmannian of $n$-planes in $\mathbb{R}^\infty$). (B) An elliptic curve $X$ over $\mathbb{C}$ will admit infinitely many distinct holomorphic structures on the (topologically) trivial line bundle! Choose two (distinct) points $x_0, x_1$; then $\mathcal{O}(x_0-x_1)$ is an example (this is the ideal sheaf of $x_1$ tensored with the dual of the ideal sheaf of $x_0$). One can see that this is distinct from e.g. $\mathcal{O}_X$ because it has no non-zero global sections, whereas $\mathcal{O}_X$ does (constant functions). (C) You need a flat connection to define this complex, as Liviu Nicolaescu remarks. In this case, let $\mathcal{E}$ be the sheaf of flat sections to $E$. Then the de Rham complex is $$\mathcal{E}\otimes_{\mathbb{C}} \Omega^\bullet(B).$$ The differential is defined to be a derivation, which kills $e\otimes 1$ for any section $e$ to $\mathcal{E}$ (this uniquely determines a differential). (D) Perhaps I'm confused, but I think Alex's answer is misleading--one can define a Dolbeault complex for any holomorphic vector bundle; this doesn't require a Hermitian metric. If $\mathcal{E}$ is the sheaf of holomorphic sections to your vector bundle, this complex is defined as $$\mathcal{E}\otimes_{\mathcal{O}_X} A^{0, \bullet}(B).$$ Again, the differential is a derivation which kills sections to $\mathcal{E}$. Here $A^{0, \bullet}$ is the complex of smooth $(0, \bullet)$ forms; the complex above should compute the sheaf cohomology of $\mathcal{E}$. Daniel Litt $\begingroup$ Thanks Daniel. I was a little nervous about that point. I was hoping that because $B$ was fixed I could get away with a finite-dimensional Grassmannian but wasn't sure if it might change as I varied the maps homotopically? I think though the choice of finite-dimensional Grassmannian can be fixed by the dimension of $B$? $\endgroup$ – Michael Murray Feb 8 '13 at 1:30 $\begingroup$ You can indeed bound the dimension of the Grassmannian if $B$ is compact, in terms of e.g. the minimal number of contractible open sets required to cover $B$ (think about how you would embed $E$ in a trivial vector bundle). A bit of work should be able to handle the non-compact case, probably; I'll think about it and write something if I come up with a slick argument. $\endgroup$ – Daniel Litt Feb 8 '13 at 1:41 For (C) you need to fix a flat connection (covariant derivative) $\nabla$ on $E$ so that you have the coboundary condition $(d^\nabla)^2=0$. (This may not always be possible; all the Chern classes would have to be torsion classes.) In any case, if that were possible you would obtain a holonomy morphism $$ h:\pi_1(B) \to \mathrm{GL}_r(\mathbb{C}), $$ where $r$ is the rank of $E$. You can think of $h$ as defining a local coefficient system or, equivalently, a locally constant sheaf $\mathscr{S}$ on $B$. This sheaf has a simple geometric description: it is the sheaf determined by the presheaf of covariant constant sections of $E$ with respect to the chosen connection $\nabla$. The cohomology of the de Rham complex determined by $\nabla$ is then the cohomology of the sheaf $\mathscr{S}$. Liviu Nicolaescu
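To see in one line why flatness is exactly what is needed here (a small supplementary computation, using one common sign convention): extend $\nabla$ to $E$-valued forms as a derivation, $$ d^\nabla(s\otimes\omega)=\nabla s\wedge\omega+s\otimes d\omega . $$ Using $d^2=0$ and the definition of the curvature $F^\nabla=d^\nabla\circ\nabla$, a short computation gives $$ (d^\nabla)^2(s\otimes\omega)=F^\nabla(s)\wedge\omega , $$ so the coboundary condition $(d^\nabla)^2=0$ holds precisely when $F^\nabla=0$, i.e. when the connection is flat, which is the requirement in both answers above.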
I am under the impression that some of the answers given may be interpreting question (A) in a manner which does not seem --- at least to me --- entirely consistent with how it is stated. Interpreting question (A) quite strictly, it seems to me that there can be in fact two distinct differentiable structures on the total space of a topological vector bundle which both make it into a smooth vector bundle. In fact, for any non-empty manifold $B$ of dimension greater than zero, and any topological vector bundle $E$ over $B$ of dimension greater than zero, there exist uncountably many distinct smooth structures on $E$ for which $E$ becomes a smooth vector bundle over $B$. I will give a very simple (and detailed) example below. Also, I will work --- out of habit --- with real vector bundles, but the exact same construction with ${\mathbb R}$ replaced by ${\mathbb C}$ works equally well for complex line bundles. Let $B$ be a manifold, and $E$ a topological vector bundle over $B$ with projection $\textrm{proj}:E\to B$ (as usual, we confuse the vector bundle with its total space). Assume the total space $E$ admits a differentiable structure which makes it into a smooth vector bundle over $B$. Denote this smooth vector bundle (and its total space seen as a smooth manifold) by $E^{(1)}$, to distinguish it from the topological vector bundle $E$. For any continuous map $f:B\to{\mathbb R}\setminus\{0\}$ we can construct the map $H_f:E\to E$ given by $$ H_f(x)= f(\textrm{proj}(x))\cdot x $$ In other words, the map $H_f$ is the vector bundle map $E\to E$ which sits over the identity $\textrm{id}_B$ on $B$, and which is multiplication by $f(b)$ on the fibre over $b\in B$. It is obvious that $H_f:E\to E$ is an isomorphism of topological vector bundles whose inverse is $H_{\frac 1 f}$ (since $f$ is never zero). But it does not give a map of smooth vector bundles $E^{(1)}\to E^{(1)}$ unless $f$ is itself smooth. Now transfer the smooth structure on $E^{(1)}$ via the homeomorphism $H_f$, and denote the new smooth manifold by $E^{(f)}$. More precisely, $E^{(f)}$ is the topological space $E$ equipped with the unique differentiable structure which makes $H_f:E^{(1)}\to E^{(f)}$ a diffeomorphism. Recall that $\textrm{proj}:E^{(1)}\to B$ is a smooth vector bundle. Note also that $H_f$ is both a diffeomorphism $H_f:E^{(1)}\to E^{(f)}$ and an isomorphism of topological vector bundles $H_f:E\to E$. As a consequence, the smooth vector bundle structure on $\textrm{proj}:E^{(1)}\to B$ transfers across $H_f$ to a smooth vector bundle structure on $\textrm{proj}:E^{(f)}\to B$ whose underlying topological vector bundle is $E$. In other words, the topological vector bundle $E$ underlying $\textrm{proj}:E^{(f)}\to B$ is actually a smooth vector bundle when we consider the smooth structure $E^{(f)}$ on the total space. To summarize, we have two smooth vector bundle structures, $E^{(1)}$ and $E^{(f)}$, on the topological vector bundle $E$ over $B$. [By the way, the notation is self-consistent: observe that for $f=1$, $E^{(f)}$ is just the original $E^{(1)}$.] To answer the question (A) affirmatively, and give an example at the same time, assume: the vector bundle $E$ has positive dimension; $B$ is non-empty and has positive dimension. Then I claim: Lemma: The identity function on $E$ is a diffeomorphism (or even just a smooth function) $E^{(f)}\to E^{(1)}$ if and only if $f$ is itself smooth.
$\square$ On the one hand, it is easy to check that if $f$ is smooth then $H_f$ is a diffeomorphism $E^{(1)}\to E^{(1)}$ (both it and its inverse $H_{\frac 1 f}$ are smooth). Therefore, the smooth structure on $E^{(f)}$ is by definition the same as the smooth structure on $E^{(1)}$, i.e. the identity is a diffeomorphism $E^{(f)}\to E^{(1)}$. On the other hand, assume that the identity $\textrm{id}_E$ is a smooth function $E^{(f)} \to E^{(1)}$. I will prove that $f$ is itself smooth. Consider the composition of diffeomorphisms $$ G : E^{(1)} \overset{H_f}{\longrightarrow} E^{(f)} \overset{\textrm{id}_E}{\longrightarrow} E^{(1)} $$ By definition of $H_f$: $$ G(x) = f(\textrm{proj}(x))\cdot x $$ and it is fairly easy to use local trivializations for $E^{(1)}$ to conclude that $f$ is smooth. In fact, if $\varphi:U\times {\mathbb R}^n \to E$ is a (smooth) trivialization of $E^{(1)}$ over the open $U\subset B$, it follows that $$ \varphi^{-1}\circ G\circ\varphi(u,v) = (u,f(u)\cdot v) $$ and since $E$ has positive dimension, the restriction of $f$ to $U$ is the last component of the smooth function $$ u\longmapsto\varphi^{-1}\circ G\circ\varphi(u,(0,\ldots,0,1)) $$ Hence $f$ is smooth. In conclusion, under condition 1 above, the smooth structure on $E^{(1)}$ is the same as the smooth structure on $E^{(f)}$ if and only if $f$ is smooth. Furthermore, under condition 2 above, we can then find a continuous map $f:B\to{\mathbb R}\setminus\{0\}$ which is not smooth. For such a choice of $f$, $E^{(1)}$ and $E^{(f)}$ are two distinct smooth vector bundle structures on $E$. In fact, more can be said. If we start with $E^{(f)}$ in place of $E^{(1)}$ and apply the above construction, we can easily describe the result: for any continuous functions $f,g:B\to{\mathbb R}\setminus\{0\}$ it is easy to check that $H_g \circ H_f = H_{f\cdot g}$, which implies $$ (E^{(f)})^{(g)} = E^{(f\cdot g)} $$ Applying the preceding lemma, we conclude that the smooth structure on $E^{(f)}$ coincides with the smooth structure on $E^{(g)}$ if and only if $\frac f g$ is smooth. Consequently, the construction $f\mapsto E^{(f)}$ gives a set of smooth vector bundle structures on $E$ which is in bijection with the quotient of abelian groups $$ C^0(B,{\mathbb R}\setminus\{0\})/C^\infty(B,{\mathbb R}\setminus\{0\}) $$ where the multiplication in each of the groups is given by multiplying functions. It is fairly easy to see that this quotient is uncountable under condition 2 above: by giving for each $b\in B$ a continuous function $f_b:B\to{\mathbb R}\setminus\{0\}$ which is smooth everywhere except at the point $b$, we determine an injection of $B$ into the above quotient of abelian groups. Essential uniqueness of the smooth vector bundle structure on $E$ In light of the above, what can be said regarding uniqueness of the smooth vector bundle structure on $E$? Well, as some of the other answers have indicated, one can use approximation of continuous functions by smooth functions to prove the following result from its topological counterpart (i.e. the usual topological classification of vector bundles). Theorem: Let $B$ be a smooth manifold of dimension $n$. 
Consider the function $$ \theta:[B,\textrm{Gr}(k,{\mathbb R}^{k+l})]^{\textrm{smooth}}\longrightarrow \textrm{Vec}^{\textrm{smooth}}_B $$ (where the domain is the set of smooth homotopy classes of smooth functions from $B$ into the Grassmannian of $k$-dimensional linear subspaces of ${\mathbb R}^{k+l}$, and the target is the set of isomorphism classes of smooth $k$-dimensional vector bundles over $B$) defined by $$ \theta([f])=f^\ast(\gamma_{k,k+l}) $$ where $\gamma_{k,k+l}$ is the tautological smooth vector bundle over the Grassmannian. Then $\theta$ is a bijection if $l\geq n+2$. $\square$ By using this theorem, its topological counterpart (replace smooth by continuous/topological), and the approximation of continuous functions by smooth functions, one can see that the forgetful map from the set of isomorphism classes of smooth vector bundles over $B$ to the set of isomorphism classes of topological vector bundles over $B$ is a bijection. We do not really need the above theorem to see that isomorphism classes of smooth vector bundles inject into the isomorphism classes of topological vector bundles (although it is a convenient way to prove surjectivity). Using only the approximation of continuous functions by smooth functions and smooth partitions of unity, one can approximate any topological isomorphism between smooth vector bundles by a smooth isomorphism. Moreover, given isomorphisms $\varphi,\psi:E\to E'$ of smooth vector bundles over $B$, any homotopy through topological isomorphisms between $\varphi$ and $\psi$ can be approximated by a homotopy through smooth isomorphisms between $\varphi$ and $\psi$. In particular, given two smooth vector bundle structures $E_1$ and $E_2$ on a topological vector bundle $E$ over $B$, there exists a unique homotopy class of smooth vector bundle isomorphisms $E_1\to E_2$ which, as an isomorphism of topological vector bundles, is homotopic to the identity $\textrm{id}_E:E\to E$. Continuing in the same manner, it is not too hard to conclude that the homotopy fibres of the map $$ \textrm{Iso}^{\textrm{smooth}}_B(E_1,E_2) \longrightarrow \textrm{Iso}^{\textrm{top}}_B(E,E) $$ (between the spaces of isomorphisms of smooth/topological vector bundles over $B$) are weakly contractible. Consequently, the map is a weak equivalence. That appears to be the strongest result we can state, to the best of my current knowledge, and especially in light of my examples above. Ricardo Andrade $\begingroup$ Ricardo: Just to be clear, you do in fact agree that any two ways of endowing a topological vector bundle with a smooth structure give rise to isomorphic smooth bundles, right? I honestly think this is a counterexample to a straw-man constructed by misreading "more than one differential structure." Clearly it is reasonable to mean "isomorphism class of smooth structure" by "differential structure"; it seems a bit strong to say my and Michael Murray's answers are wrong. I take your point that it's worth drawing a distinction between objects and isomorphism classes of objects, but really... $\endgroup$ – Daniel Litt Feb 8 '13 at 5:05 $\begingroup$ (cont.) that point could have been made with a bit more generosity. $\endgroup$ – Daniel Litt Feb 8 '13 at 5:07 $\begingroup$ @Daniel: I understand your objection, and I apologize if my wording was too strong and unpleasant. I have changed it accordingly, and I hope it reads better now.
Nevertheless, I will persevere with my statement, as it seems to me that there exist distinct, non-equivalent differentiable structures on the total space of a vector bundle which make it into a smooth vector bundle. I want to be quite clear that I consider a differentiable structure to be either a maximal smooth atlas, or an equivalence class of smooth atlases. (to be continued) $\endgroup$ – Ricardo Andrade Feb 8 '13 at 6:04 $\begingroup$ What's your definition of equivalent, Ricardo? $\endgroup$ – Michael Murray Feb 8 '13 at 6:08 $\begingroup$ (continuation) I absolutely agree that any two ways of endowing a topological vector bundle with a smooth structure which makes it into a smooth vector bundle give rise to isomorphic smooth vector bundles. However, as I attempt to argue in my answer above, that isomorphism cannot in general be the identity function on the total space. Perhaps I am misunderstanding what you mean. If so, please let me know. $\endgroup$ – Ricardo Andrade Feb 8 '13 at 6:09 For (D): For a holomorphic vector bundle equipped with a Hermitian metric $h$ and Chern connection $\nabla$ there exists the so-called twisted Dolbeault complex: you look only at antiholomorphic forms and extend the antiholomorphic part of the exterior derivative by the antiholomorphic part $\nabla^{0,1}=\bar \partial$ of the Chern connection. Then you can easily prove that this gives you a complex. Good references are Huybrechts, "Complex Geometry", or Griffiths and Harris, "Principles of Algebraic Geometry". Alex_K $\begingroup$ Don't you need flatness too? $\endgroup$ – Mariano Suárez-Álvarez Feb 7 '13 at 16:28 $\begingroup$ No, because the vector bundle $E$ is holomorphic: denote by $\bar \partial$ the antiholomorphic part of the exterior derivative and let $\Omega^{p,q}(E)$ be the $(p,q)$-forms with values in $E$. Set $\bar \partial_E (\sum_i \alpha_i \otimes s_i):=\sum_i \bar \partial(\alpha_i) \otimes s_i $ on a decomposed section. Then, because of holomorphicity, $\bar \partial f_{ij}=0$ for the transition functions $f_{ij}$, so the operator is well-defined if you change trivialization. (taken from Huybrechts, Complex Geometry, Lemma 2.6.23) $\endgroup$ – Alex_K Feb 8 '13 at 8:40
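Spelling that last comment out (a short supplementary check, written for local frames $s_i$ and a holomorphic change of frame $s'_j=\sum_i f_{ij}\,s_i$, so $\bar\partial f_{ij}=0$): a section written as $\sum_j \alpha_j\otimes s'_j$ equals $\sum_i\bigl(\sum_j f_{ij}\alpha_j\bigr)\otimes s_i$, and applying the defining formula in the $s_i$-frame gives $$ \sum_i \bar\partial\Bigl(\sum_j f_{ij}\alpha_j\Bigr)\otimes s_i=\sum_{i,j} f_{ij}\,\bar\partial\alpha_j\otimes s_i=\sum_j \bar\partial\alpha_j\otimes s'_j , $$ which is the defining formula in the $s'_j$-frame. So $\bar\partial_E$ is independent of the trivialization; neither a metric nor flatness enters, only the holomorphy of the transition functions.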
CommonCrawl